Article

A New Methodology for the Development of Efficient Multistep Methods for First–Order IVPs with Oscillating Solutions IV: The Case of the Backward Differentiation Formulae

by
Theodore E. Simos
1,2,3
1
School of Mechanical Engineering, Hangzhou Dianzi University, Er Hao Da Jie 1158, Xiasha, Hangzhou 310018, China
2
Center for Applied Mathematics and Bioinformatics, Gulf University for Science and Technology, West Mishref 32093, Kuwait
3
Section of Mathematics, Department of Civil Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
Axioms 2024, 13(9), 649; https://doi.org/10.3390/axioms13090649
Submission received: 25 August 2024 / Revised: 18 September 2024 / Accepted: 20 September 2024 / Published: 22 September 2024
(This article belongs to the Special Issue The Numerical Analysis and Its Application)

Abstract:
A theory for the computation of the phase–lag and amplification–factor of explicit and implicit multistep techniques for first–order differential equations was recently established by the author. That work also examined how the elimination of the derivatives of the phase–lag and amplification–factor affects the efficiency of the methods. This paper applies the theory for computing the phase–lag and amplification–factor, originally developed for implicit multistep methods, to a subset of implicit methods, called backward differentiation formulae (BDF), and examines the impact of the phase–lag and amplification–factor derivatives on the efficiency of these schemes. We then present the stability regions of the new methods. Finally, we discuss the results of numerical experiments and draw conclusions about the developed approaches.

1. Introduction

We consider an initial value problem (IVP), i.e., an equation or system of equations of the form:
$\Psi'(t) = Q(t, \Psi), \qquad \Psi(t_0) = \Psi_0.$ (1)
Such problems arise in astrophysics, electronics, materials science, chemistry, physics, and nanotechnology, among other fields. See references [1,2] for further information on the class of equations with oscillatory or periodic solutions.
Over the last two decades, several efforts have been made to determine the numerical solution of the specified problem or system of equations (for instance, refer to [3,4,5,6,7] and the works cited therein). Refer to [3,4] and the works cited therein for a more in-depth analysis of the techniques used to solve (1) when the solutions display oscillating behavior; see, for example, Quinlan and Tremaine [5]. Certain characteristics define the numerical strategies found in the literature for solving (1), the most prominent of which is that they are either multistep or hybrid approaches. Additionally, the majority of these techniques were developed for the numerical solution of second-order differential equations. Along with their corresponding bibliographies, the following theoretical frameworks are discussed:
  • Runge–Kutta and Runge–Kutta-Nyström methods with minimal phase–lag, as well as exponentially–fitted, trigonometrically–fitted, phase–fitted, and amplification–fitted variations of these methods (see [8,9,10,11,12,13,14,15])
  • Amplification–Fitted Multistep Methods, Multistep Methods with Minimal Phase–Lag, Phase–Fitted Methods and Methods that are Trigonometrically or Exponentially Fitted (see [16,17,18,19,20,21,22,23,24,25,26])
Recently, Simos developed the theory for building multistep methods with minimal phase–lag, or phase–fitted methods, for first–order IVPs in his publications [27,28,29]. More specifically, he developed the theory for estimating the phase–lag and amplification error of multistep techniques for first–order IVPs (explicit [27], implicit [28]) and studied the effect of removing the derivatives of the phase–lag and amplification–factor on the efficiency of the method [29]. On the topic of backward differentiation formulae (BDF) for second–order IVPs with oscillating solutions, Saadat et al. [30] have also recently put forward a theory. In particular, BDF techniques are implicit and built for stiff problems. It is possible to apply the phase–fitted and amplification–fitted techniques to very long integration intervals. None of the examples in the study [30] had integration intervals larger than $[0, 10\pi]$. The behavior of the specified approaches on intervals such as $[0, 50000\pi]$ or $[0, 100000\pi]$ would be interesting to see.
In this study, we will utilize the theory that has been established for implicit multistep methods for first-order IVPs [28] to analyze BDF approaches and create efficient algorithms for this specific type of implicit multistep method. Additionally, we will examine the impact of phase–lag and amplification–factor derivatives on the efficacy of BDF methods for first-order IVPs.
Here is how the paper is structured:
  • The theory for determining the phase–lag and amplification–factor of backward differentiation formulae (BDF) is established in Section 2. The theory presented in [28] serves as the foundation for the development of the general theory, since these techniques belong to a specific class of implicit multistep methods. We derive the direct equations for the phase–lag and amplification–factor calculations there.
  • Section 3 presents the methodologies which will be used for the development of the new backward differentiation formulae (BDF).
  • Section 4 introduces the Backward Differentiation Formula that will be examined.
  • Section 5 studies the amplification–fitted backward differentiation formula of fourth algebraic order with phase–lag of order four.
  • Section 6 studies the phase–fitted and amplification–fitted backward differentiation formula of fourth algebraic order with elimination of the first derivative of the amplification–factor.
  • Section 7 studies the phase–fitted and amplification–fitted backward differentiation formula of fourth algebraic order with the elimination of the first derivative of the phase–lag.
  • Section 8 studies the amplification–fitted backward differentiation formula of second algebraic order with phase–lag of order six.
  • Section 9 studies the phase–fitted and amplification–fitted backward differentiation formula of fourth algebraic order.
  • Section 10 studies the phase–fitted and amplification–fitted backward differentiation formula of fourth algebraic order with the elimination of the first derivative of the phase–lag and the first derivative of the amplification–factor.
  • In Section 11, we examine the stability of the newly acquired algorithms.
  • In Section 12, we present several numerical examples and we comment on the numerical results.
  • We provide the numerical results and conclusions in Section 13.

2. The Theory

2.1. Direct Formulae for Computation of the Phase–Lag and Amplification–Factor of Backward Differentiation Formulae

The theory for the development of direct formulae for the calculation of the phase–lag and amplification–factor of Backward Differentiation Formulae is based on the theory developed in the papers [27,28].
We consider the general form of the Backward Differentiation Formulae (BDF):
$\sum_{j=0}^{m} a_j\,\Psi_{n+j} = h\,A_m\,\Psi'_{n+m}.$ (2)
We may analyze the phase–lag of the implicit multistep techniques (2) for problems of the form (1) using the following scalar test equation:
$\Psi'(t) = I\omega\,\Psi(t),$ (3)
where $I = \sqrt{-1}$, $\omega$ is a real constant, and $h$ denotes the step size of the integration.
The solution of this problem is given by:
$\Psi(t) = \exp(I\omega t).$ (4)
Applying Formula (2) to Equation (3), we obtain:
$\sum_{j=0}^{m} a_j\,\Psi_{n+j} = h\,I\omega\,A_m(\omega h)\,\Psi_{n+m}.$ (5)
Setting:
$\upsilon = \omega h,$ (6)
Equation (5) gives:
$\left(a_m - I\upsilon\,A_m(\upsilon)\right)\Psi_{n+m} + \sum_{j=0}^{m-1} a_j\,\Psi_{n+j} = 0.$ (7)
The difference Equation (7) has the following characteristic equation:
$\left(a_m - I\upsilon\,A_m(\upsilon)\right)\lambda^{m} + \sum_{j=0}^{m-1} a_j\,\lambda^{j} = 0.$ (8)
Definition 1. 
Given that the theoretical solution of the scalar test Equation (3) at $t = h$ is $\exp(I\omega h)$, which can also be expressed as $\exp(I\upsilon)$ (see (6)), and that the numerical solution of the scalar test Equation (3) at $t = h$ is $\exp(I\theta(\upsilon))$, we define the phase–lag as:
$\Phi(\upsilon) = \upsilon - \theta(\upsilon).$ (9)
As $\upsilon \rightarrow 0$, the phase–lag order is $q$ if and only if $\Phi = O(\upsilon^{q+1})$, where $\theta(\upsilon)$ is the approximation of $\upsilon$ produced by the numerical solution.
Taking into account that:
$\lambda^{n} = \exp(nI\theta(\upsilon)) = \cos(n\theta(\upsilon)) + I\sin(n\theta(\upsilon)), \quad n = 1, 2, \ldots,$ (10)
we obtain:
$\left(a_m - I\upsilon\,A_m(\upsilon)\right)\left[\cos(m\theta(\upsilon)) + I\sin(m\theta(\upsilon))\right] + \sum_{j=0}^{m-1} a_j\left[\cos(j\theta(\upsilon)) + I\sin(j\theta(\upsilon))\right] = 0.$ (11)
The following lemmas are needed in order to analyze the relation given in Formula (11).
Lemma 1. 
The following relations are true:
$\cos\theta(\upsilon) = \cos\upsilon + c\,\upsilon^{q+2} + O(\upsilon^{q+4}),$ (12)
$\sin\theta(\upsilon) = \sin\upsilon - c\,\upsilon^{q+1} + O(\upsilon^{q+3}).$ (13)
For the proof, see [27].
Lemma 2. 
The following relations hold:
$\cos(j\theta(\upsilon)) = \cos(j\upsilon) + c\,j^{2}\,\upsilon^{q+2} + O(\upsilon^{q+4}),$ (14)
$\sin(j\theta(\upsilon)) = \sin(j\upsilon) - c\,j\,\upsilon^{q+1} + O(\upsilon^{q+3}).$ (15)
For the proof, see [27].
When the relations (14) and (15) are taken into account, the relation (11) becomes:
$\left[\cos(m\upsilon) + c\,m^{2}\upsilon^{q+2} + I\left(\sin(m\upsilon) - c\,m\,\upsilon^{q+1}\right)\right]\left(a_m - I\upsilon\,A_m(\upsilon)\right) + \sum_{j=0}^{m-1} a_j\left[\cos(j\upsilon) + c\,j^{2}\upsilon^{q+2} + I\left(\sin(j\upsilon) - c\,j\,\upsilon^{q+1}\right)\right] = 0.$ (16)
The relation (16) can now be separated into its real and imaginary parts.

2.1.1. The Real Part

The real part gives:
$a_m\cos(m\upsilon) + a_m c\,m^{2}\upsilon^{q+2} + \upsilon A_m(\upsilon)\sin(m\upsilon) - \upsilon A_m(\upsilon)\,c\,m\,\upsilon^{q+1} + \sum_{j=0}^{m-1} a_j\cos(j\upsilon) + \sum_{j=0}^{m-1} a_j\,c\,j^{2}\upsilon^{q+2} = 0.$ (17)
The relation (17) gives:
$a_m\cos(m\upsilon) + \upsilon A_m(\upsilon)\sin(m\upsilon) + \sum_{j=0}^{m-1} a_j\cos(j\upsilon) = -c\,\upsilon^{q+2}\left[a_m m^{2} - m\,A_m(\upsilon) + \sum_{j=0}^{m-1} a_j\,j^{2}\right] \;\Rightarrow\; c\,\upsilon^{q+2} = -\dfrac{a_m\cos(m\upsilon) + \upsilon A_m(\upsilon)\sin(m\upsilon) + \sum_{j=0}^{m-1} a_j\cos(j\upsilon)}{a_m m^{2} - m\,A_m(\upsilon) + \sum_{j=0}^{m-1} a_j\,j^{2}}.$ (18)
This is the direct formula for determining the phase–lag of the Backward Differentiation Formulae (2). We will describe how the phase–lag of the method (2) is determined in the next sections.

2.1.2. The Imaginary Part

The imaginary part gives:
$a_m\sin(m\upsilon) - a_m c\,m\,\upsilon^{q+1} - \upsilon A_m(\upsilon)\cos(m\upsilon) - \upsilon A_m(\upsilon)\,c\,m^{2}\upsilon^{q+2} + \sum_{j=0}^{m-1} a_j\left[\sin(j\upsilon) - c\,j\,\upsilon^{q+1}\right] = 0.$ (19)
The relation (19) gives:
$a_m\sin(m\upsilon) + \sum_{j=0}^{m-1} a_j\sin(j\upsilon) - \upsilon A_m(\upsilon)\cos(m\upsilon) = c\,\upsilon^{q+1}\left[a_m m + \sum_{j=0}^{m-1} a_j\,j + A_m(\upsilon)\,m^{2}\upsilon^{2}\right] \;\Rightarrow\; c\,\upsilon^{q+1} = \dfrac{a_m\sin(m\upsilon) + \sum_{j=0}^{m-1} a_j\sin(j\upsilon) - \upsilon A_m(\upsilon)\cos(m\upsilon)}{a_m m + \sum_{j=0}^{m-1} a_j\,j + A_m(\upsilon)\,m^{2}\upsilon^{2}}.$ (20)
This is the direct formula for determining the amplification–factor of the Backward Differentiation Formulae (2). We will describe how the amplification–factor of the method (2) is determined in the next sections.
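As a numerical illustration, the direct formulae (18) and (20) can be evaluated for any method of the form (2). The sketch below is a reconstruction following the sign conventions adopted above (not code from the paper); it evaluates both quantities for the classical fourth-order BDF coefficients of Section 4:

```python
import math

def phase_lag_and_amplification(a, A_m, v):
    """Evaluate the direct formulae (18) and (20) for an m-step method
    sum_{j=0}^{m} a_j Psi_{n+j} = h A_m(v) Psi'_{n+m} at v = omega*h.
    `a` lists a_0, ..., a_m; `A_m` is a callable A_m(v)."""
    m = len(a) - 1
    Am = A_m(v)
    # real part -> phase-lag, Formula (18)
    num_re = (a[m] * math.cos(m * v) + v * Am * math.sin(m * v)
              + sum(a[j] * math.cos(j * v) for j in range(m)))
    den_re = a[m] * m**2 - m * Am + sum(a[j] * j**2 for j in range(m))
    # imaginary part -> amplification-factor, Formula (20)
    num_im = (a[m] * math.sin(m * v) + sum(a[j] * math.sin(j * v) for j in range(m))
              - v * Am * math.cos(m * v))
    den_im = a[m] * m + sum(a[j] * j for j in range(m)) + Am * m**2 * v**2
    return -num_re / den_re, num_im / den_im

# classical fourth-order BDF values: a_0..a_4 and constant A_4 = 12/25
coeffs = [3/25, -16/25, 36/25, -48/25, 1.0]
phl, amp = phase_lag_and_amplification(coeffs, lambda v: 12/25, 0.1)
```

Both quantities tend to zero as $\upsilon \rightarrow 0$, consistent with the consistency of the classical coefficients.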
Definition 2. 
We refer to an algorithm whose phase–lag has been eliminated as a phase–fitted method.
Definition 3. 
A technique whose amplification–factor has been eliminated is referred to as an amplification–fitted method.

2.2. The Role of the Derivatives of the Phase–Lag and Derivatives of the Amplification–Factor on the Efficiency of BDF Algorithms

For the role of the derivatives of the phase–lag and amplification–factor in the efficiency of general multistep methods (and, consequently, of Backward Differentiation Formulae), see [29].
We use the following approach to eliminate the derivatives of the phase–lag and the amplification–factor:
  • We determine the method’s phase–lag and amplification–factor using Formulae (18) and (20), respectively.
  • The formulae generated in the prior step are differentiated with respect to υ.
  • The derivatives of the phase–lag and the amplification–factor generated in the preceding steps are set equal to zero.
We provide straightforward equations in Appendix A for calculating the derivatives of the phase–lag and the amplification–factor. These formulae were developed by differentiating the Formulae (18) and (20) with respect to υ.
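The symbolic derivatives of Appendix A can also be cross-checked numerically. The helper below is an illustrative stand-in (not part of the paper's derivation): it approximates the derivative of any phase–lag or amplification–factor formula with a second-order central difference.

```python
import math

def central_diff(f, v, h=1.0e-6):
    """Approximate d f / d v at v with a second-order central difference.
    Useful for spot-checking the closed-form derivatives of Appendix A."""
    return (f(v + h) - f(v - h)) / (2.0 * h)
```

For example, `central_diff(math.sin, 0.5)` reproduces $\cos(0.5)$ to roughly ten digits, which is sufficient for verifying that a symbolic derivative was formed correctly.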

3. Minimizing or Eliminating the Phase–Lag and the Amplification–Factor and the Derivatives of the Phase–Lag and the Amplification–Factor

Methodologies for eliminating the phase–lag and the amplification–factor as well as methodologies for eliminating phase–lag and amplification–factor derivatives will be presented in this paper.
To be more precise, we will be presenting methodologies for:
  • Minimization of the phase–lag
  • Eliminating the phase–lag
  • Eliminating the amplification–factor
  • Eliminating the derivative of the phase–lag
  • Eliminating the derivative of the amplification–factor
  • Eliminating the derivatives of the phase–lag and the amplification–factor
We note here that all the above methodologies lead to an amplification–fitted method.

4. Backward Differentiation Formulae (BDF)

In this section, we present the Backward Differentiation Formula on which we will present the methodologies for eliminating the phase–lag, amplification–factor, derivative of the phase–lag, and derivative of the amplification–factor.
We will analyze the backward differentiation formula of the form:
$\Psi_{n+2} + \kappa_3\,\Psi_{n+1} + \kappa_2\,\Psi_{n} + \kappa_1\,\Psi_{n-1} + \kappa_0\,\Psi_{n-2} = h\,\varrho\,\Psi'_{n+2}.$ (21)
In the case where:
$\kappa_3 = -\frac{48}{25}, \quad \kappa_2 = \frac{36}{25}, \quad \kappa_1 = -\frac{16}{25}, \quad \kappa_0 = \frac{3}{25}, \quad \varrho = \frac{12}{25},$ (22)
we have the well-known backward differentiation formula (which we will refer to as the Classical Case from now on).
The Classical Case has a local truncation error (LTE) given by:
$LTE = -\frac{12}{125}\,h^{5}\,\Psi^{(5)}(t) + O(h^{6}).$ (23)
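To illustrate the Classical Case, the sketch below integrates the scalar test problem $\Psi' = \lambda\Psi$ with the scheme (21), using the well-known classical fourth-order BDF values for the coefficients. Since the problem is linear, the implicit relation is solved exactly; the starting values are taken from the exact solution, a common device when testing a multistep formula in isolation.

```python
import math

# classical fourth-order BDF coefficients of (21)
K3, K2, K1, K0, RHO = -48/25, 36/25, -16/25, 3/25, 12/25

def bdf_linear(lam, y0, t_end, n):
    """Integrate y' = lam*y on [0, t_end] with n steps of the 4-step BDF.
    The first three steps are started with the exact solution."""
    h = t_end / n
    y = [y0 * math.exp(lam * j * h) for j in range(4)]   # starting values
    for j in range(4, n + 1):
        rhs = -(K3 * y[-1] + K2 * y[-2] + K1 * y[-3] + K0 * y[-4])
        y.append(rhs / (1.0 - h * RHO * lam))   # solve the implicit relation
    return y[-1]

# halving h should reduce the error by roughly 2^4 (fourth algebraic order)
e1 = abs(bdf_linear(-1.0, 1.0, 1.0, 100) - math.exp(-1.0))
e2 = abs(bdf_linear(-1.0, 1.0, 1.0, 200) - math.exp(-1.0))
```

The observed error ratio close to 16 reflects the fourth algebraic order implied by the $O(h^5)$ local truncation error above.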

5. Amplification–Fitted Backward Differentiation Formula of Fourth Algebraic Order with Phase–Lag of Order Four

Let us consider the method (21) with:
$\kappa_3 = -\frac{48}{25}, \quad \kappa_2 = \frac{36}{25}, \quad \kappa_0 = \frac{3}{25}.$ (24)

5.1. An Approach to Reducing Phase–Lag

This is the procedure that minimizes the phase–lag:
  • Elimination of the Amplification–Factor.
  • Using the coefficient obtained in the previous step, perform the phase–lag calculation.
  • Expansion of the previously determined phase–lag via the use of Taylor series.
  • Determining the system of equations that reduces the phase–lag.
  • Determine the revised coefficients.
Applying the above procedure, we obtain the following:

5.2. Eliminating the Amplification Factor

Applying the direct formula for the computation of the amplification–factor (20) to the method (21) with coefficients given by (24), we obtain the following result:
$AmplF = \dfrac{25\,\upsilon\varrho\cos(4\upsilon) - 36\sin(2\upsilon) - 25\sin(4\upsilon) + 48\sin(3\upsilon) - 25\kappa_1\sin(\upsilon)}{400\,\upsilon^{2}\varrho + 25\kappa_1 - 188},$ (25)
where the Amplification–Factor is represented by A m p l F .
Requesting the elimination of the amplification–factor, i.e., requesting $AmplF = 0$, we obtain:
$\varrho = \frac{1}{25}\,\dfrac{25\kappa_1\sin(\upsilon) + 25\sin(4\upsilon) + 36\sin(2\upsilon) - 48\sin(3\upsilon)}{\upsilon\cos(4\upsilon)}.$ (26)
Applying the coefficients $\kappa_i$, $i = 0, 2, 3$ given by (24) and the coefficient $\varrho$ obtained in (26) to the direct formula for the computation of the phase–lag given by (18), we obtain the following:
$PhErr = \dfrac{Par_3}{Par_4},$ (27)
where:
$Par_3 = \left[24\cos^{4}(\upsilon) + 100\,\kappa_1\cos^{3}(\upsilon) + 48\cos^{2}(\upsilon) - 75\kappa_1\cos(\upsilon) - 48\cos(\upsilon) - 8\right]\upsilon,$ (28)
$Par_4 = 200\,\kappa_1\upsilon\cos^{4}(\upsilon) + 896\,\upsilon\cos^{4}(\upsilon) - 200\,\kappa_1\upsilon\cos^{2}(\upsilon) + 768\sin(\upsilon)\cos^{2}(\upsilon) - 896\,\upsilon\cos^{2}(\upsilon) + 112\sin(\upsilon)\cos(\upsilon) - 100\kappa_1\sin(\upsilon) + 25\,\kappa_1\upsilon + 112\,\upsilon - 192\sin(\upsilon) - 800\sin(\upsilon)\cos^{3}(\upsilon),$ (29)
and P h E r r represents the phase–lag.
Expanding the Formula (27) in a Taylor series about $\upsilon = 0$, we obtain:
$PhErr = \dfrac{25\kappa_1 + 16}{75\kappa_1} + O(\upsilon^{2}),$
where the coefficients of $\upsilon^{2}$, $\upsilon^{4}$ and $\upsilon^{6}$ in the expansion are lengthy rational functions of $\kappa_1$.
The following equation is obtained by requesting that the phase–lag be minimized:
$\dfrac{25\kappa_1 + 16}{75\kappa_1} = 0 \;\Rightarrow\; \kappa_1 = -\frac{16}{25}.$
This innovative algorithm has the following characteristics:
$\kappa_3 = -\frac{48}{25}, \quad \kappa_2 = \frac{36}{25}, \quad \kappa_1 = -\frac{16}{25}, \quad \kappa_0 = \frac{3}{25},$
$\varrho = \frac{1}{25}\,\dfrac{-16\sin(\upsilon) + 25\sin(4\upsilon) + 36\sin(2\upsilon) - 48\sin(3\upsilon)}{\upsilon\cos(4\upsilon)},$
$LTE = -\frac{12}{125}\,h^{5}\left(\Psi^{(5)}(t) - \omega^{4}\,\Psi'(t)\right) + O(h^{6}),$
$PhErr = \frac{1}{12}\,\upsilon^{6} - \frac{59}{96}\,\upsilon^{8} + \cdots, \qquad AmplF = 0.$ (33)
We mention that $\varrho$ may be expressed as a Taylor series expansion:
$\varrho = \frac{12}{25} - \frac{12\,\upsilon^{4}}{125} - \frac{86\,\upsilon^{6}}{175} - \frac{4601\,\upsilon^{8}}{1500} + \cdots.$ (34)
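In a practical implementation, the frequency-dependent coefficient $\varrho$ of this method is best evaluated from its Taylor expansion near $\upsilon = 0$, where the closed form is an indeterminate 0/0 ratio. The sketch below follows that strategy; the switch-over threshold is an illustrative choice, and the signs follow the reconstruction above.

```python
import math

def rho_section5(v):
    """Coefficient rho of the amplification-fitted method of Section 5.
    The Taylor expansion is used for small v, where the closed form is a
    0/0 ratio; otherwise the closed form is evaluated directly."""
    if abs(v) < 0.1:   # illustrative threshold
        return 12/25 - 12*v**4/125 - 86*v**6/175 - 4601*v**8/1500
    return (-16*math.sin(v) + 25*math.sin(4*v) + 36*math.sin(2*v)
            - 48*math.sin(3*v)) / (25*v*math.cos(4*v))
```

Near the threshold the two branches agree to many digits, so the switch-over does not introduce a visible discontinuity in the coefficient.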
Remark 1. 
The outcome will be identical to the one stated above if we choose three free parameters, such as $\kappa_j$, $j = 0, 1, 2$.

6. Phase–Fitted and Amplification–Fitted Backward Differentiation Formula of Algebraic Order Four with the Elimination of the First Derivative of the Amplification–Factor

Let us consider the method (21) with coefficients:
$\kappa_3 = -\frac{48}{25}, \quad \kappa_1 = -\frac{16}{25}.$ (35)

Methodology for the Vanishing of the First Derivative of the Amplification–Factor

  • We apply the Formula (20) to the specific method (35). Thus, we obtain the formula of the amplification–factor, say $AmplF$.
  • We calculate the first derivative of the above formula with respect to $\upsilon$, i.e., $AmplF_{\upsilon}$.
  • We request that the formula of the previous step be equal to zero, i.e., $AmplF_{\upsilon} = 0$.
Based on the above methodology, and using the Formulae (18) and (20) to determine the phase–lag and the amplification–factor, we obtain the Formulae (44) and (45).
The derivative of the amplification–factor gives the following result:
$AmplF_{\upsilon} = \frac{1}{4}\,\dfrac{Par_{22}}{\left(100\,\upsilon^{2}\varrho + 25\kappa_2 - 87\right)^{2}},$ (36)
where the formula $Par_{22}$ is given in Appendix D.
Requesting the elimination of the phase–lag, the amplification–factor, and the derivative of the amplification–factor, i.e., requesting $PhErr = 0$, $AmplF = 0$, and $AmplF_{\upsilon} = 0$, we obtain:
$\kappa_2 = \dfrac{Par_{23}}{Par_{24}},$ (37)
$\kappa_0 = \dfrac{Par_{25}}{Par_{26}},$ (38)
$\varrho = \dfrac{-48\sin(5\upsilon) + 192\sin(\upsilon) + 50\sin(6\upsilon) - 150\sin(2\upsilon) + 16\sin(3\upsilon)}{50\,\upsilon\cos(6\upsilon) - 150\,\upsilon\cos(2\upsilon) + 25\sin(6\upsilon) - 25\sin(2\upsilon)},$ (39)
where the formulae $Par_j$, $j = 23, \ldots, 26$ are given in Appendix E.
The following is the result of applying the Taylor series to the previously stated formulae:
$\kappa_0 = \frac{3}{25} - \frac{18\,\upsilon^{2}}{1375} - \frac{7641\,\upsilon^{4}}{1058750} - \frac{1014751\,\upsilon^{6}}{116462500} - \frac{5154660383\,\upsilon^{8}}{358704500000} - \frac{10843023989719\,\upsilon^{10}}{512947435000000} - \frac{615072100312758827\,\upsilon^{12}}{21328354347300000000} + \cdots,$ (40)
$\kappa_2 = \frac{36}{25} + \frac{18\,\upsilon^{2}}{1375} - \frac{75519\,\upsilon^{4}}{1058750} + \frac{11556491\,\upsilon^{6}}{116462500} + \frac{33843209503\,\upsilon^{8}}{358704500000} + \frac{61617963722379\,\upsilon^{10}}{512947435000000} + \frac{3295535984888285107\,\upsilon^{12}}{21328354347300000000} + \cdots,$ (41)
$\varrho = \frac{12}{25} + \frac{36\,\upsilon^{2}}{1375} - \frac{24699\,\upsilon^{4}}{529375} - \frac{4594589\,\upsilon^{6}}{58231250} - \frac{19274549037\,\upsilon^{8}}{179352250000} - \frac{326113634777069\,\upsilon^{10}}{2308263457500000} - \frac{1967731067173147153\,\upsilon^{12}}{10664177173650000000} + \cdots.$ (42)
This novel algorithm has the following features:
$\kappa_2$ is given by (37), $\kappa_0$ is given by (38), $\varrho$ is given by (39),
$LTE = -\frac{108}{1375}\,h^{4}\,\omega^{4}\left(\Psi^{(4)}(t) + \omega^{2}\,\Psi''(t)\right) + O(h^{5}),$
$PhErr = 0, \qquad AmplF = 0, \qquad AmplF_{\upsilon} = 0.$ (43)

7. Phase–Fitted and Amplification–Fitted Backward Differentiation Formula of Algebraic Order Four with the Elimination of the First Derivative of the Phase–Lag

Let us consider the method (21) with coefficients given by (35).

Methodology for the Vanishing of the First Derivative of the Phase–Lag

  • We apply the Formula (18) to the specific method (35). Thus, we obtain the formula of the phase–lag, say $PhErr$.
  • We calculate the first derivative of the above formula with respect to $\upsilon$, i.e., $PhErr_{\upsilon}$.
  • We request that the formula of the previous step be equal to zero, i.e., $PhErr_{\upsilon} = 0$.
Based on the above methodology, using the Formulae (18) and (20) to determine the phase–lag, the amplification–factor, and the derivative of the phase–lag yields the following result:
$PhErr = \dfrac{Par_{15}}{100\kappa_2 - 100\varrho - 48},$ (44)
$AmplF = \dfrac{Par_{16}}{400\,\upsilon^{2}\varrho + 100\kappa_2 - 348},$ (45)
$PhErr_{\upsilon} = \dfrac{Par_{17}}{100\kappa_2 - 100\varrho - 48},$ (46)
where
$Par_{15} = 25\,\upsilon\varrho\sin(4\upsilon) + 25\cos(4\upsilon) - 48\cos(3\upsilon) - 16\cos(\upsilon) + 25\kappa_2\cos(2\upsilon) + 25\kappa_0,$ (47)
$Par_{16} = 25\,\upsilon\varrho\cos(4\upsilon) - 25\kappa_2\sin(2\upsilon) + 16\sin(\upsilon) - 25\sin(4\upsilon) + 48\sin(3\upsilon),$ (48)
$Par_{17} = 100\,\upsilon\varrho\cos(4\upsilon) - 50\kappa_2\sin(2\upsilon) + 25\,\varrho\sin(4\upsilon) + 16\sin(\upsilon) - 100\sin(4\upsilon) + 144\sin(3\upsilon).$ (49)
Requesting the elimination of the phase–lag, the amplification–factor, and the derivative of the phase–lag, i.e., requesting $PhErr = 0$, $AmplF = 0$, and $PhErr_{\upsilon} = 0$, we obtain:
$\kappa_2 = \dfrac{Par_{18}}{Par_{19}},$ (50)
$\kappa_0 = \dfrac{Par_{20}}{Par_{21}},$ (51)
$\varrho = \dfrac{50\sin(4\upsilon) - 48\sin(3\upsilon) + 16\sin(\upsilon)}{50\,\upsilon\cos(4\upsilon) + 25\sin(4\upsilon)},$ (52)
where the formulae $Par_k$, $k = 18, \ldots, 21$ are given in Appendix C.
The following is the result of applying the Taylor series to the previously stated formulae:
$\kappa_0 = \frac{3}{25} + \frac{6}{125}\upsilon^{4} + \frac{431}{2625}\upsilon^{6} + \frac{475163}{945000}\upsilon^{8} + \frac{286895963}{187110000}\upsilon^{10} + \frac{14344436937211}{3064861800000}\upsilon^{12} + \cdots,$ (53)
$\kappa_2 = \frac{36}{25} - \frac{6}{125}\upsilon^{4} + \frac{149}{525}\upsilon^{6} + \frac{754417}{945000}\upsilon^{8} + \frac{456321589}{187110000}\upsilon^{10} + \frac{22821441228569}{3064861800000}\upsilon^{12} + \cdots,$ (54)
$\varrho = \frac{12}{25} - \frac{24}{125}\upsilon^{4} - \frac{1648}{2625}\upsilon^{6} - \frac{455713}{236250}\upsilon^{8} - \frac{3939379}{668250}\upsilon^{10} - \frac{13795647774791}{766215450000}\upsilon^{12} + \cdots.$ (55)
This novel algorithm has the following features:
$\kappa_2$ is given by (50), $\kappa_0$ is given by (51), $\varrho$ is given by (52),
$LTE = -\frac{12}{125}\,h^{5}\left(\Psi^{(5)}(t) - \omega^{4}\,\Psi'(t)\right) + O(h^{6}),$
$PhErr = 0, \qquad AmplF = 0, \qquad PhErr_{\upsilon} = 0,$ (56)
where $PhErr$ is the phase–lag, $AmplF$ is the amplification–factor, and $PhErr_{\upsilon}$ is the first derivative of the phase–lag.

8. Amplification–Fitted Backward Differentiation Formula of Second Algebraic Order with Phase–Lag of Order Six

Let us consider the method (21) with all the coefficients free.
Following the algorithm given in Section 5.1, we have:

Eliminating the Amplification Factor

Applying the direct formula for the computation of the amplification–factor (20) to the method (21), we obtain the following result:
$AmplF = \dfrac{\sin(4\upsilon) + \kappa_1\sin(\upsilon) + \kappa_2\sin(2\upsilon) + \kappa_3\sin(3\upsilon) - \upsilon\varrho\cos(4\upsilon)}{-16\,\upsilon^{2}\varrho - \kappa_1 - 4\kappa_2 - 9\kappa_3 - 4}.$ (57)
Requesting the elimination of the amplification–factor, i.e., requesting $AmplF = 0$, we obtain:
$\varrho = \dfrac{\kappa_1\sin(\upsilon) + \kappa_2\sin(2\upsilon) + \kappa_3\sin(3\upsilon) + \sin(4\upsilon)}{\upsilon\cos(4\upsilon)}.$ (58)
Applying the coefficient $\varrho$ obtained in (58) to the direct formula for the computation of the phase–lag given by (18), we obtain the following:
$PhErr = \dfrac{Par_6}{Par_7},$ (59)
where:
$Par_6 = \left[8\kappa_0\cos^{4}(\upsilon) + 4\kappa_1\cos^{3}(\upsilon) - 8\kappa_0\cos^{2}(\upsilon) + 2\kappa_2\cos^{2}(\upsilon) - 3\kappa_1\cos(\upsilon) + \kappa_3\cos(\upsilon) + \kappa_0 - \kappa_2 + 1\right]\upsilon,$ (60)
$Par_7 = 8\kappa_1\upsilon\cos^{4}(\upsilon) + 32\kappa_2\upsilon\cos^{4}(\upsilon) + 72\kappa_3\upsilon\cos^{4}(\upsilon) + 128\,\upsilon\cos^{4}(\upsilon) - 32\sin(\upsilon)\cos^{3}(\upsilon) - 16\kappa_3\sin(\upsilon)\cos^{2}(\upsilon) - 8\kappa_1\upsilon\cos^{2}(\upsilon) - 32\kappa_2\upsilon\cos^{2}(\upsilon) - 72\kappa_3\upsilon\cos^{2}(\upsilon) - 8\kappa_2\sin(\upsilon)\cos(\upsilon) + 16\sin(\upsilon)\cos(\upsilon) - 4\kappa_1\sin(\upsilon) + 4\kappa_3\sin(\upsilon) + \kappa_1\upsilon + 4\kappa_2\upsilon + 9\kappa_3\upsilon + 16\,\upsilon - 128\,\upsilon\cos^{2}(\upsilon).$ (61)
We may obtain the following by expanding the Formula (59) using the Taylor series:
$PhErr = Par_8 + Par_9\,\upsilon^{2} + Par_{10}\,\upsilon^{4} + Par_{11}\,\upsilon^{6} + Par_{12}\,\upsilon^{8} + \cdots,$ (62)
where $Par_i$, $i = 8, \ldots, 12$ are given in Appendix B.
The following system of equations is obtained by requesting that the phase–lag be minimized:
$Par_8 = 0, \quad Par_9 = 0, \quad Par_{10} = 0, \quad Par_{11} = 0, \quad Par_{12} = 0.$ (63)
The solution of the above system of equations is given by:
$\kappa_0 = \frac{1}{35}, \quad \kappa_1 = -\frac{8}{35}, \quad \kappa_2 = \frac{4}{5}, \quad \kappa_3 = -\frac{8}{5}.$ (64)
This innovative algorithm has the following characteristics:
$\kappa_3 = -\frac{8}{5}, \quad \kappa_2 = \frac{4}{5}, \quad \kappa_1 = -\frac{8}{35}, \quad \kappa_0 = \frac{1}{35},$
$\varrho = \frac{1}{35}\,\dfrac{-8\sin(\upsilon) + 28\sin(2\upsilon) - 56\sin(3\upsilon) + 35\sin(4\upsilon)}{\upsilon\cos(4\upsilon)},$
$LTE = -\frac{8}{105}\,h^{3}\left(\Psi^{(3)}(t) + \omega^{2}\,\Psi'(t)\right) + O(h^{4}),$
$PhErr = \frac{1}{160}\,\upsilon^{8} + \frac{39}{800}\,\upsilon^{10} + \cdots, \qquad AmplF = 0.$ (65)
We mention that $\varrho$ may be expressed as a Taylor series expansion:
$\varrho = \frac{4}{7} + \frac{8}{105}\upsilon^{2} + \frac{2}{105}\upsilon^{4} + \frac{151}{11025}\upsilon^{6} + \frac{1517}{31752}\upsilon^{8} + \cdots.$ (66)

9. Phase–Fitted and Amplification–Fitted Backward Differentiation Formula of Algebraic Order Four

Let us consider the method (21) with:
$\kappa_3 = -\frac{48}{25}, \quad \kappa_1 = -\frac{16}{25}, \quad \kappa_0 = \frac{3}{25}.$ (67)
Using the Formulae (18) and (20) to determine the phase–lag and amplification–factor, respectively, yields the following result:
$PhErr = \dfrac{Par_{13}}{100\kappa_2 - 100\varrho - 48},$ (68)
$AmplF = \dfrac{Par_{14}}{400\,\upsilon^{2}\varrho + 100\kappa_2 - 348},$ (69)
where
$Par_{13} = 25\,\upsilon\varrho\sin(4\upsilon) + 25\kappa_2\cos(2\upsilon) + 25\cos(4\upsilon) - 16\cos(\upsilon) - 48\cos(3\upsilon) + 3,$ (70)
$Par_{14} = 25\,\upsilon\varrho\cos(4\upsilon) - 25\kappa_2\sin(2\upsilon) - 25\sin(4\upsilon) + 16\sin(\upsilon) + 48\sin(3\upsilon),$ (71)
and $AmplF$ stands for the amplification–factor, while $PhErr$ is the phase–lag.
Requesting the elimination of the phase–lag and the amplification–factor, i.e., requesting $PhErr = 0$ and $AmplF = 0$, we obtain:
$\kappa_2 = \frac{1}{25}\,\dfrac{-25 - 3\cos(4\upsilon) + 48\cos(\upsilon) + 16\cos(3\upsilon)}{\cos(2\upsilon)},$ (72)
$\varrho = \frac{1}{25}\,\dfrac{22\sin(2\upsilon) - 32\sin(\upsilon)}{\upsilon\cos(2\upsilon)}.$ (73)
The following is the result of applying the Taylor series to the previously stated formulae:
$\kappa_2 = \frac{36}{25} + \frac{4}{25}\upsilon^{6} + \frac{11}{50}\upsilon^{8} + \frac{179}{500}\upsilon^{10} + \frac{219259}{378000}\upsilon^{12} + \cdots,$ (74)
$\varrho = \frac{12}{25} - \frac{12}{125}\upsilon^{4} - \frac{6}{35}\upsilon^{6} - \frac{421}{1500}\upsilon^{8} - \frac{63131}{138600}\upsilon^{10} - \frac{1344073}{1820000}\upsilon^{12} + \cdots.$ (75)
This innovative algorithm has the following characteristics:
$\kappa_3 = -\frac{48}{25}$, $\kappa_2$ see (72), $\kappa_1 = -\frac{16}{25}$, $\kappa_0 = \frac{3}{25}$, $\varrho$ see (73),
$LTE = -\frac{12}{125}\,h^{5}\left(\Psi^{(5)}(t) - \omega^{4}\,\Psi'(t)\right) + O(h^{6}),$
$PhErr = 0, \qquad AmplF = 0.$ (76)
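The closed-form pair for $\kappa_2$ and $\varrho$ can be checked directly against the defining conditions: substituting them into $Par_{13}$ and $Par_{14}$ must give zero for every $\upsilon$. A small verification sketch, using the sign reconstruction adopted in this section:

```python
import math

def kappa2(v):
    """kappa_2 of the phase-fitted and amplification-fitted method (72)."""
    return (-25 - 3*math.cos(4*v) + 48*math.cos(v) + 16*math.cos(3*v)) / (25*math.cos(2*v))

def rho(v):
    """rho of the phase-fitted and amplification-fitted method (73)."""
    return (22*math.sin(2*v) - 32*math.sin(v)) / (25*v*math.cos(2*v))

def par13(v, k2, r):
    """Numerator of the phase-lag, which must vanish identically."""
    return (25*v*r*math.sin(4*v) + 25*k2*math.cos(2*v) + 25*math.cos(4*v)
            - 16*math.cos(v) - 48*math.cos(3*v) + 3)

def par14(v, k2, r):
    """Numerator of the amplification-factor, which must vanish identically."""
    return (25*v*r*math.cos(4*v) - 25*k2*math.sin(2*v) - 25*math.sin(4*v)
            + 16*math.sin(v) + 48*math.sin(3*v))
```

Sampling several values of $\upsilon$, both residuals vanish to machine precision, confirming that the pair simultaneously eliminates the phase–lag and the amplification–factor.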

10. Phase–Fitted and Amplification–Fitted Backward Differentiation Formula of Algebraic Order Four with the Elimination of the First Derivative of the Phase–Lag and the First Derivative of the Amplification–Factor

Let us consider the method (21) with:
$\kappa_3 = -\frac{48}{25}.$ (77)

Methodology for the Vanishing of the First Derivative of the Phase–Lag and the First Derivative of the Amplification–Factor

  • We apply the Formula (18) to the specific method (77). Thus, we obtain the formula of the phase–lag, say $PhErr$.
  • We calculate the first derivative of the above formula with respect to $\upsilon$, i.e., $PhErr_{\upsilon}$.
  • We apply the Formula (20) to the specific method (77). Thus, we obtain the formula of the amplification–factor, say $AmplF$.
  • We calculate the first derivative of the above formula with respect to $\upsilon$, i.e., $AmplF_{\upsilon}$.
  • We request that the formulae of the previous steps be equal to zero, i.e., $PhErr_{\upsilon} = 0$ and $AmplF_{\upsilon} = 0$.
Based on the above methodology, and using the Formulae (18) and (20) to determine the phase–lag and the amplification–factor, we obtain:
$PhErr = \dfrac{Par_{27}}{25\kappa_1 + 100\kappa_2 - 100\varrho - 32},$ (78)
$AmplF = \dfrac{Par_{28}}{400\,\upsilon^{2}\varrho + 25\kappa_1 + 100\kappa_2 - 332},$ (79)
$PhErr_{\upsilon} = \dfrac{Par_{29}}{25\kappa_1 + 100\kappa_2 - 100\varrho - 32},$ (80)
$AmplF_{\upsilon} = \dfrac{Par_{30}}{Par_{31}},$ (81)
where the formulae $Par_q$, $q = 27, \ldots, 31$ are given in Appendix F.
Requesting the elimination of the phase–lag, the amplification–factor, the derivative of the phase–lag, and the derivative of the amplification–factor, i.e., requesting $PhErr = 0$, $AmplF = 0$, $PhErr_{\upsilon} = 0$, and $AmplF_{\upsilon} = 0$, we obtain:
$\kappa_2 = \dfrac{Par_{32}}{100\,\upsilon\cos(2\upsilon) - 50\,\upsilon\cos(4\upsilon) + 25\sin(2\upsilon) - 25\sin(4\upsilon)},$ (82)
$\kappa_1 = \dfrac{Par_{33}}{100\,\upsilon\cos(2\upsilon) - 50\,\upsilon\cos(4\upsilon) + 25\sin(2\upsilon) - 25\sin(4\upsilon)},$ (83)
$\kappa_0 = \dfrac{Par_{34}}{100\,\upsilon\cos(2\upsilon) - 50\,\upsilon\cos(4\upsilon) + 25\sin(2\upsilon) - 25\sin(4\upsilon)},$ (84)
$\varrho = \dfrac{-50\sin(4\upsilon) + 100\sin(2\upsilon) + 48\sin(3\upsilon) - 144\sin(\upsilon)}{100\,\upsilon\cos(2\upsilon) - 50\,\upsilon\cos(4\upsilon) + 25\sin(2\upsilon) - 25\sin(4\upsilon)},$ (85)
where the formulae $Par_i$, $i = 32, \ldots, 34$ are given in Appendix G.
This novel algorithm has the following features:
$\kappa_2$ is given by (82), $\kappa_1$ is given by (83), $\kappa_0$ is given by (84), $\varrho$ is given by (85),
$LTE = -\frac{12}{125}\,h^{5}\left(\Psi^{(5)}(t) + 2\,\omega^{2}\,\Psi^{(3)}(t) + \omega^{4}\,\Psi'(t)\right) + O(h^{6}),$
$PhErr = 0, \qquad AmplF = 0, \qquad PhErr_{\upsilon} = 0, \qquad AmplF_{\upsilon} = 0.$ (86)

11. Stability Analysis

The Backward Differentiation Formulae studied in this paper have the general form:
$\Psi_{n+2} + \Lambda_3\,\Psi_{n+1} + \Lambda_2\,\Psi_{n} + \Lambda_1\,\Psi_{n-1} + \Lambda_0\,\Psi_{n-2} = h\,\Omega\,\Xi_{n+2},$ (87)
where $\Xi_{n+2} = \Psi'_{n+2}$.
The method (87) encompasses the methods that were created in Section 4, Section 5, Section 6, Section 7, Section 8, Section 9 and Section 10, namely the methods (22), (33), (43), (56), (65), (76), and (86).
Based on the scalar test equation:
$\Psi' = \lambda\,\Psi, \quad \text{where } \lambda \in \mathbb{C},$ (88)
we can obtain the following difference equation by implementing the scheme (87):
$\Gamma_4\,\Psi_{n+2} + \Gamma_3\,\Psi_{n+1} + \Gamma_2\,\Psi_{n} + \Gamma_1\,\Psi_{n-1} + \Gamma_0\,\Psi_{n-2} = 0,$ (89)
where
$\Gamma_4(Z) = 1 - \Omega Z, \quad \Gamma_3(Z) = \Lambda_3, \quad \Gamma_2(Z) = \Lambda_2, \quad \Gamma_1(Z) = \Lambda_1, \quad \Gamma_0(Z) = \Lambda_0,$ (90)
and $Z = \lambda h$.
The characteristic equation of (89) is given by:
$\Gamma_4(Z)\,q^{4} + \Gamma_3(Z)\,q^{3} + \Gamma_2(Z)\,q^{2} + \Gamma_1(Z)\,q + \Gamma_0(Z) = 0.$ (91)
It is possible to determine the stability regions for $\zeta \in [0, 2\pi]$ by solving the above Equation (91) for $Z$ and replacing $q$ with $q = \exp(i\zeta)$, where $i = \sqrt{-1}$. Figure 1 illustrates the stability region of the Backward Differentiation Formula stated in Section 4 (the Classical Case, with constant coefficients). The stability regions of the algorithms derived in Section 5, Section 6, Section 8, Section 9 and Section 10 are shown in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9. For these cases we present the stability regions for $v = 1$, $v = 5$, $v = 10$, and $v = 100$.

12. Numerical Results

For the problems mentioned below, in the case where we have second-order differential equations or systems of second-order differential equations, we reduce them to systems of first-order differential equations using the techniques presented in the literature (for example, see the book [33], chapter 7). For all the examples presented below, we use a step length of $\frac{1}{2^{q}}$ for several values of q.

12.1. Problem of Stiefel and Bettis

The following nearly periodic orbit problem was studied by Stiefel and Bettis [31] and is taken into account here:
$\Psi_1''(t) = -\Psi_1(t) + 0.001\cos(t), \quad \Psi_1(0) = 1, \quad \Psi_1'(0) = 0,$
$\Psi_2''(t) = -\Psi_2(t) + 0.001\sin(t), \quad \Psi_2(0) = 0, \quad \Psi_2'(0) = 0.9995.$ (92)
The exact solution is:
$\Psi_1(t) = \cos(t) + 0.0005\,t\sin(t), \qquad \Psi_2(t) = \sin(t) - 0.0005\,t\cos(t).$ (93)
For this problem, we use $\omega = 1$.
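As described above, the second-order system (92) is first reduced to a first-order system in the variables $(\Psi_1, \Psi_1', \Psi_2, \Psi_2')$. The sketch below shows that reduction, integrated here with the classical fourth-order Runge–Kutta method as a generic stand-in for the Runge–Kutta codes compared below (the step size is illustrative):

```python
import math

def stiefel_bettis_rhs(t, u):
    """First-order form of (92): u = (Psi1, Psi1', Psi2, Psi2')."""
    p1, dp1, p2, dp2 = u
    return (dp1, -p1 + 0.001 * math.cos(t), dp2, -p2 + 0.001 * math.sin(t))

def rk4_step(f, t, u, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, u)
    k2 = f(t + h/2, tuple(x + h/2 * k for x, k in zip(u, k1)))
    k3 = f(t + h/2, tuple(x + h/2 * k for x, k in zip(u, k2)))
    k4 = f(t + h, tuple(x + h * k for x, k in zip(u, k3)))
    return tuple(x + h/6 * (a + 2*b + 2*c + d)
                 for x, a, b, c, d in zip(u, k1, k2, k3, k4))

# integrate to t = 10 and compare with the exact solution (93)
t, h, u = 0.0, 0.01, (1.0, 0.0, 0.0, 0.9995)
for _ in range(1000):
    u = rk4_step(stiefel_bettis_rhs, t, u, h)
    t += h
exact = math.cos(t) + 0.0005 * t * math.sin(t)   # Psi_1 at t = 10
```

The computed $\Psi_1$ agrees with the exact value to many digits over this short interval; the interest of the fitted BDF methods lies in the far longer intervals used in the experiments below.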
We computed numerical solutions of the problem (92) for $0 \le t \le 100{,}000$ using the following methods:
  • The Backward Differentiation Formula presented in Section 4 (Classical Case), which is mentioned as Comput. Algorith. I
  • The Runge–Kutta Dormand and Prince fourth order method [14], which is mentioned as Comput. Algorith. II
  • The Runge–Kutta Dormand and Prince fifth order method [14], which is mentioned as Comput. Algorith. III
  • The Runge–Kutta Fehlberg fourth order method [34], which is mentioned as Comput. Algorith. IV
  • The Runge–Kutta Fehlberg fifth order method [34], which is mentioned as Comput. Algorith. V
  • The Runge–Kutta Cash and Karp fifth order method [35], which is mentioned as Comput. Algorith. VI
  • The Amplification–Fitted Backward Differentiation Formula with Phase–Lag of order 4, which is developed in Section 5, and which is mentioned as Comput. Algorith. VII
  • The Phase–Fitted and Amplification–Fitted Backward Differentiation Formula with the first derivative of the amplification–factor eliminated, which is developed in Section 6, and which is mentioned as Comput. Algorith. VIII
  • The Phase–Fitted and Amplification–Fitted Backward Differentiation Formula with the first derivative of the phase–lag eliminated, which is developed in Section 7, and which is mentioned as Comput. Algorith. IX
  • The Amplification–Fitted Backward Differentiation Formula with Phase–Lag of order 6, which is developed in Section 8, and which is mentioned as Comput. Algorith. X
  • The Phase–Fitted and Amplification–Fitted Backward Differentiation Formula, which is developed in Section 9, and which is mentioned as Comput. Algorith. XI
  • The Phase–Fitted and Amplification–Fitted Backward Differentiation Formula with the first derivatives of the phase–lag and the amplification–factor eliminated, which is developed in Section 10, and which is mentioned as Comput. Algorith. XII
Figure 8 displays the maximum absolute error of the solutions derived from the Stiefel and Bettis [31] problem using each of the numerical techniques previously described.
Figure 8 provides the data that enable us to see the following:
  • Comput. Algorith. X is more efficient than Comput. Algorith. III
  • Comput. Algorith. I is more efficient than Comput. Algorith. V
  • Comput. Algorith. VI has, generally, the same behavior as Comput. Algorith. I
  • Comput. Algorith. V is more efficient than Comput. Algorith. I
  • Comput. Algorith. IV has, generally, the same behavior as Comput. Algorith. V
  • Comput. Algorith. II is more efficient than Comput. Algorith. V
  • Comput. Algorith. VII is more efficient than Comput. Algorith. II
  • Comput. Algorith. VIII gives mixed results. For big step sizes it is more efficient than Comput. Algorith. VII; for small step sizes it is less efficient than Comput. Algorith. VII.
  • Comput. Algorith. XI has, generally, the same behavior as Comput. Algorith. IX
  • Comput. Algorith. XI is more efficient than Comput. Algorith. VIII
  • Finally, Comput. Algorith. XII, is the most efficient one.
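For reference, the error measure plotted in Figures 8–13, the maximum absolute error against the known exact solution over the integration interval, can be sketched as follows. The sketch uses a classical fixed-step RK4 integrator (not one of the fitted BDFs developed in this paper) on the usual real-valued form of the Stiefel–Bettis problem; the initial conditions and exact solution written in the comments are the standard ones from the literature and are an assumption here, to be checked against Section 12.1.

```python
import math

# Stiefel-Bettis problem (usual real-valued statement of [31]):
#   y1'' = -y1 + 0.001 cos(t),  y1(0) = 1, y1'(0) = 0
#   y2'' = -y2 + 0.001 sin(t),  y2(0) = 0, y2'(0) = 0.9995
# exact: y1 = cos t + 0.0005 t sin t,  y2 = sin t - 0.0005 t cos t

def f(t, u):
    y1, d1, y2, d2 = u
    return (d1, -y1 + 0.001*math.cos(t), d2, -y2 + 0.001*math.sin(t))

def exact(t):
    return (math.cos(t) + 0.0005*t*math.sin(t),
            math.sin(t) - 0.0005*t*math.cos(t))

def rk4_step(t, u, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(t, u)
    k2 = f(t + h/2, tuple(ui + h/2*ki for ui, ki in zip(u, k1)))
    k3 = f(t + h/2, tuple(ui + h/2*ki for ui, ki in zip(u, k2)))
    k4 = f(t + h, tuple(ui + h*ki for ui, ki in zip(u, k3)))
    return tuple(ui + h/6*(a + 2*b + 2*c + d)
                 for ui, a, b, c, d in zip(u, k1, k2, k3, k4))

h, steps = 0.05, 2000                      # integrate over [0, 100]
t, u = 0.0, (1.0, 0.0, 0.0, 0.9995)
max_err = 0.0                              # the quantity plotted in the figures
for _ in range(steps):
    u = rk4_step(t, u, h)
    t += h
    e1, e2 = exact(t)
    max_err = max(max_err, abs(u[0] - e1), abs(u[2] - e2))
print(max_err)
```

The figures of the paper repeat this measurement for each method over a range of step sizes; here a single (h, method) pair is shown.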

12.2. Problem of Franco and Palacios [32]

The following problem was examined by Franco and Palacios [32] and is considered here:
Ψ1″(t) = −Ψ1(t) + ε cos(ϑt),  Ψ1(0) = 1,  Ψ1′(0) = 0,
Ψ2″(t) = −Ψ2(t) + ε sin(ϑt),  Ψ2(0) = 0,  Ψ2′(0) = 1.
The exact solution is
Ψ1(t) = ((1 − ε − ϑ²)/(1 − ϑ²)) cos(t) + (ε/(1 − ϑ²)) cos(ϑt),
Ψ2(t) = ((1 − εϑ − ϑ²)/(1 − ϑ²)) sin(t) + (ε/(1 − ϑ²)) sin(ϑt),
where ε = 0.9 and ϑ = 0.9. For this problem, we use ω = max{1, |ϑ|}.
We computed the numerical solution of the system of Equation (94) for 0 ≤ t ≤ 1,000,000 using the procedures described in Section 12.1.
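As a quick consistency check of the problem statement, one can verify by central differences that the stated exact solution satisfies Ψ1″(t) = −Ψ1(t) + ε cos(ϑt) and Ψ2″(t) = −Ψ2(t) + ε sin(ϑt) together with the initial conditions (this is how the reconstructed signs above were validated):

```python
import math

eps, theta = 0.9, 0.9

def psi1(t):
    return ((1 - eps - theta**2)/(1 - theta**2))*math.cos(t) \
           + (eps/(1 - theta**2))*math.cos(theta*t)

def psi2(t):
    return ((1 - eps*theta - theta**2)/(1 - theta**2))*math.sin(t) \
           + (eps/(1 - theta**2))*math.sin(theta*t)

def second_diff(g, t, h):
    # central-difference approximation of g''(t)
    return (g(t + h) - 2*g(t) + g(t - h))/(h*h)

h = 1e-4
res = 0.0   # worst residual of the differential equations
for t in (0.5, 1.0, 3.0, 10.0):
    r1 = second_diff(psi1, t, h) - (-psi1(t) + eps*math.cos(theta*t))
    r2 = second_diff(psi2, t, h) - (-psi2(t) + eps*math.sin(theta*t))
    res = max(res, abs(r1), abs(r2))
print(res)   # limited only by finite-difference truncation and rounding
```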
Figure 9 provides the data that enable us to see the following:
  • Comput. Algorith. X is more efficient than Comput. Algorith. I.
  • Comput. Algorith. VII gives mixed results: for small step sizes it is more efficient than Comput. Algorith. X, while for large step sizes it is less efficient than Comput. Algorith. X.
  • Comput. Algorith. II is more efficient than Comput. Algorith. VII.
  • Comput. Algorith. IV generally exhibits the same behavior as Comput. Algorith. II.
  • Comput. Algorith. VIII is more efficient than Comput. Algorith. IV.
  • Comput. Algorith. VI gives mixed results: for small step sizes it is more efficient than Comput. Algorith. VIII, while for large step sizes it is less efficient than Comput. Algorith. VIII.
  • Comput. Algorith. V is more efficient than Comput. Algorith. VI and Comput. Algorith. VIII.
  • Comput. Algorith. XI is more efficient than Comput. Algorith. V, except for very small step sizes, where it is less efficient than Comput. Algorith. V.
  • Comput. Algorith. IX generally exhibits the same behavior as Comput. Algorith. XI.
  • Comput. Algorith. III generally exhibits the same behavior as Comput. Algorith. XI; for small step sizes Comput. Algorith. XI is less efficient than Comput. Algorith. III.
  • Finally, Comput. Algorith. XII is the most efficient one.

12.3. Nonlinear Problem of Petzold [36]

The following problem was examined by Petzold [36] and is considered here:
Ψ1′(t) = λ Ψ2(t),  Ψ1(0) = 1,
Ψ2′(t) = −λ Ψ1(t) + (α/λ) sin(λt),  Ψ2(0) = −α/(2λ²).
The exact solution is
Ψ1(t) = (1 − (α/(2λ)) t) cos(λt),
Ψ2(t) = −(1 − (α/(2λ)) t) sin(λt) − (α/(2λ²)) cos(λt),
where λ = 1000 and α = 100. For this problem, we use ω = 1000.
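Since the paper's methods are built on the classical 4-step backward differentiation formula of algebraic order four, it may help to sketch a plain constant-step BDF4 solve of this linear system: the classical formula y_{n+4} = (48 y_{n+3} − 36 y_{n+2} + 16 y_{n+1} − 3 y_n)/25 + (12h/25) f(t_{n+4}, y_{n+4}) has an implicit stage that here reduces to a 2×2 linear solve. This is a generic illustration with exact starting values and no fitting, not one of the Comput. Algorith. variants, and it uses the sign reading of Equation (96) given in the comments:

```python
import math

lam, alpha = 1000.0, 100.0

def exact(t):
    # Psi1 = (1 - a t/(2 lam)) cos(lam t);  Psi2 = Psi1'/lam
    f = 1.0 - alpha/(2*lam)*t
    return (f*math.cos(lam*t),
            -f*math.sin(lam*t) - alpha/(2*lam**2)*math.cos(lam*t))

def forcing(t):
    # f(t, y) = A y + g(t), A = [[0, lam], [-lam, 0]], g = (0, (alpha/lam) sin(lam t))
    return (0.0, alpha/lam*math.sin(lam*t))

h, N = 1.0e-5, 10000          # integrate over [0, 0.1]
c = 12.0*h/25.0               # BDF4 implicit-stage coefficient
det = 1.0 + (c*lam)**2        # det(I - c A) for the 2x2 solve

ys = [exact(i*h) for i in range(4)]   # exact starting values
max_err = 0.0
for n in range(4, N + 1):
    t = n*h
    y3, y2, y1, y0 = ys[-1], ys[-2], ys[-3], ys[-4]
    g = forcing(t)
    r1 = (48*y3[0] - 36*y2[0] + 16*y1[0] - 3*y0[0])/25.0 + c*g[0]
    r2 = (48*y3[1] - 36*y2[1] + 16*y1[1] - 3*y0[1])/25.0 + c*g[1]
    # (I - c A)^(-1) = (1/det) [[1, c lam], [-c lam, 1]]
    x1 = (r1 + c*lam*r2)/det
    x2 = (-c*lam*r1 + r2)/det
    ys.append((x1, x2))
    ex = exact(t)
    max_err = max(max_err, abs(x1 - ex[0]), abs(x2 - ex[1]))
print(max_err)
```

With λh = 0.01, the classical method already resolves the oscillation on this short interval; the paper's fitted variants target much larger λh and much longer intervals.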
We computed the numerical solution of the system of Equation (96) for 0 ≤ t ≤ 1000 using the procedures described in Section 12.1.
Figure 10 provides the data that enable us to see the following:
  • Comput. Algorith. I and Comput. Algorith. IV are more efficient than Comput. Algorith. III.
  • Comput. Algorith. IV generally exhibits the same behavior as Comput. Algorith. I.
  • Comput. Algorith. VI is more efficient than Comput. Algorith. IV.
  • Comput. Algorith. X gives mixed results: for large step sizes it is more efficient than Comput. Algorith. V, Comput. Algorith. VI, and Comput. Algorith. VII; for medium step sizes it is more efficient than Comput. Algorith. VI; and for very small step sizes it is less efficient than Comput. Algorith. VI.
  • Comput. Algorith. V gives mixed results: for very large step sizes it is less efficient than Comput. Algorith. X, while for small step sizes it is more efficient than Comput. Algorith. X.
  • Comput. Algorith. II generally exhibits the same behavior as Comput. Algorith. VII.
  • Comput. Algorith. VII is more efficient than Comput. Algorith. V.
  • Comput. Algorith. VIII is more efficient than Comput. Algorith. VII.
  • Comput. Algorith. IX is more efficient than Comput. Algorith. VIII.
  • Comput. Algorith. XI generally exhibits the same behavior as Comput. Algorith. IX.
  • Finally, Comput. Algorith. XII is the most efficient one.

12.4. A Nonlinear Orbital Problem [37]

The following nonlinear orbital problem was examined by Simos in [37] and is considered here:
Ψ1″(t) = −φ² Ψ1(t) + (2 Ψ1(t) Ψ2(t) − sin(2φt)) / (Ψ1(t)² + Ψ2(t)²)^(3/2),  Ψ1(0) = 1,  Ψ1′(0) = 0,
Ψ2″(t) = −φ² Ψ2(t) + (Ψ1(t)² − Ψ2(t)² − cos(2φt)) / (Ψ1(t)² + Ψ2(t)²)^(3/2),  Ψ2(0) = 0,  Ψ2′(0) = φ.
The exact solution is
Ψ1(t) = cos(φt),  Ψ2(t) = sin(φt),
where φ = 1000. For this problem, we use ω = 1000.
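A quick way to confirm that Ψ1 = cos(φt), Ψ2 = sin(φt) solves the system is to note that the numerators of the perturbation terms, read from the flattened equations as 2Ψ1Ψ2 − sin(2φt) and Ψ1² − Ψ2² − cos(2φt), vanish identically on this solution by the double-angle identities, leaving Ψ″ = −φ²Ψ:

```python
import math

phi = 1000.0   # the value used in the paper

# On the exact solution the perturbation numerators are
#   2 cos(a) sin(a) - sin(2a)  and  cos(a)^2 - sin(a)^2 - cos(2a),  a = phi t,
# both identically zero; numerically they stay at rounding level.
worst = 0.0
for k in range(1000):
    a = phi*(0.01*k)
    p1, p2 = math.cos(a), math.sin(a)
    n1 = 2*p1*p2 - math.sin(2*a)
    n2 = p1*p1 - p2*p2 - math.cos(2*a)
    worst = max(worst, abs(n1), abs(n2))
print(worst)
```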
We computed the numerical solution of the system of Equation (99) for 0 ≤ t ≤ 1,000,000 using the procedures described in Section 12.1.
Figure 11 provides the data that enable us to see the following:
  • Comput. Algorith. I, Comput. Algorith. II, Comput. Algorith. III, Comput. Algorith. IV, Comput. Algorith. V, Comput. Algorith. VI, Comput. Algorith. VII, and Comput. Algorith. X are not convergent.
  • Comput. Algorith. VIII, Comput. Algorith. IX, Comput. Algorith. XI, and Comput. Algorith. XII have approximately the same behavior and are very efficient.

12.5. Perturbed Two–Body Gravitational Problem

We consider the perturbed two-body Kepler problem:
Ψ1″(x) = −Ψ1(x)/(Ψ1(x)² + Ψ2(x)²)^(3/2) − μ(μ + 2) Ψ1(x)/(Ψ1(x)² + Ψ2(x)²)^(5/2),  Ψ1(0) = 1,  Ψ1′(0) = 0,
Ψ2″(x) = −Ψ2(x)/(Ψ1(x)² + Ψ2(x)²)^(3/2) − μ(μ + 2) Ψ2(x)/(Ψ1(x)² + Ψ2(x)²)^(5/2),  Ψ2(0) = 0,  Ψ2′(0) = 1 + μ.
The exact solution is
Ψ 1 ( x ) = cos ( x + μ x ) , Ψ 2 ( x ) = sin ( x + μ x ) .
To solve this problem, we set ω = √(1 + μ(μ + 2)) / (Ψ1(x)² + Ψ2(x)²)^(3/4).
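Assuming the reading ω = √(1 + μ(μ + 2)) / (Ψ1(x)² + Ψ2(x)²)^(3/4), which is an interpretation recovered from the flattened text, ω reduces to the frequency 1 + μ of the exact solution, since 1 + μ(μ + 2) = (1 + μ)² and Ψ1² + Ψ2² = 1 on that solution:

```python
import math

mu = 0.1
# assumed reading of the frequency formula:
#   omega = sqrt(1 + mu*(mu + 2)) / (Psi1(x)^2 + Psi2(x)^2)**(3/4)
dev = 0.0
for x in (0.0, 1.0, 2.5, 10.0):
    p1 = math.cos((1 + mu)*x)          # exact solution components
    p2 = math.sin((1 + mu)*x)
    omega = math.sqrt(1 + mu*(mu + 2))/(p1*p1 + p2*p2)**0.75
    dev = max(dev, abs(omega - (1 + mu)))
print(dev)   # omega stays at 1 + mu up to rounding
```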
Figure 11. Numerical results for the nonlinear orbital problem of [37].

12.5.1. The Case of μ = 0.1

We computed the numerical solution of the system of Equation (100) for 0 ≤ t ≤ 100,000 using the procedures described in Section 12.1.
Figure 12 provides the data that enable us to see the following:
  • Comput. Algorith. I generally exhibits the same behavior as Comput. Algorith. VII. We note that Comput. Algorith. I converges to the solution faster than Comput. Algorith. VII.
  • Comput. Algorith. X gives mixed results. For large step sizes it is less efficient than Comput. Algorith. II, Comput. Algorith. III, Comput. Algorith. IV, Comput. Algorith. V, Comput. Algorith. VI, Comput. Algorith. VIII, Comput. Algorith. IX, Comput. Algorith. XI, and Comput. Algorith. XII, and more efficient than Comput. Algorith. I and Comput. Algorith. VII. For medium step sizes it has approximately the same accuracy as Comput. Algorith. II, Comput. Algorith. III, Comput. Algorith. IV, Comput. Algorith. V, and Comput. Algorith. VI. For small step sizes it is more efficient than all the algorithms except Comput. Algorith. VIII, Comput. Algorith. IX, Comput. Algorith. XI, and Comput. Algorith. XII.
  • Comput. Algorith. V is more efficient than Comput. Algorith. I and Comput. Algorith. VII.
  • Comput. Algorith. IV is more efficient than Comput. Algorith. V.
  • Comput. Algorith. III gives mixed results: for large step sizes it is more efficient than Comput. Algorith. V, while for small step sizes it generally exhibits the same behavior as Comput. Algorith. V.
  • Comput. Algorith. II is more efficient than Comput. Algorith. III.
  • Comput. Algorith. IX gives mixed results: for very large step sizes it generally exhibits the same behavior as Comput. Algorith. XI, for medium step sizes it is less efficient than Comput. Algorith. XI, and for small step sizes it again generally exhibits the same behavior as Comput. Algorith. XI.
  • Comput. Algorith. VIII generally exhibits the same behavior as Comput. Algorith. IX.
  • Comput. Algorith. XII is the most efficient one for large step sizes. For medium step sizes it is less efficient than Comput. Algorith. XI, and for small step sizes it generally exhibits the same behavior as Comput. Algorith. VIII, Comput. Algorith. IX, and Comput. Algorith. XI.

12.5.2. The Case of μ = 0.4

We computed the numerical solution of the system of Equation (100) for 0 ≤ t ≤ 100,000 using the procedures described in Section 12.1.
Figure 13 provides the data that enable us to see the following:
  • Comput. Algorith. I, Comput. Algorith. II, Comput. Algorith. III, Comput. Algorith. IV, Comput. Algorith. V, Comput. Algorith. VI, Comput. Algorith. VII, and Comput. Algorith. X are not convergent.
  • Comput. Algorith. IX is more efficient than Comput. Algorith. VIII.
  • Comput. Algorith. XI is more efficient than Comput. Algorith. IX.
  • Comput. Algorith. XII is the most efficient one for large step sizes. For medium step sizes it is less efficient than Comput. Algorith. IX and Comput. Algorith. XI, and for small step sizes it generally exhibits the same behavior as Comput. Algorith. VIII, Comput. Algorith. IX, and Comput. Algorith. XI.
Based on a careful assessment of all factors, the best algorithms presented here are the following:
  • The algorithm described in Section 10 has as its primary goals the elimination of the phase–lag and the amplification–factor (phase–fitted and amplification–fitted method) and the vanishing of their first derivatives.
  • The algorithm described in Section 9 has as its primary goals the elimination of the phase–lag and the amplification–factor (phase–fitted and amplification–fitted method).
  • The algorithm described in Section 7 is centered on the elimination of the phase–lag and amplification–factor (the phase–fitted and amplification–fitted method) and the first derivative of the phase–lag simultaneously.
  • The algorithm described in Section 6 is centered on the elimination of the phase–lag and amplification–factor (the phase–fitted and amplification–fitted method) and the first derivative of the amplification–factor simultaneously.
Recently developed frequency-dependent algorithms rely on finding the optimal value of the parameter ω to assess their effectiveness. For many problems, this value is defined explicitly in the problem model. When this is not easy to accomplish, the parameter ω may be found using techniques suggested in the literature (refer to [38,39]).
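When ω is not available from the model, one simple, generic way to estimate it from samples of an oscillatory solution component (this is a basic sketch, not the specific techniques of [38,39]) is to pick the dominant bin of a discrete Fourier transform. Here the signal and its angular frequency 3.7 are synthetic assumptions for illustration:

```python
import cmath, math

# sample a signal whose (here pretend-unknown) angular frequency is 3.7
N, T = 512, 50.0
dt = T/N
x = [math.cos(3.7*n*dt) for n in range(N)]

# naive DFT; pick the dominant positive-frequency bin
best_k, best_mag = 0, 0.0
for k in range(1, N//2):
    s = sum(x[n]*cmath.exp(-2j*math.pi*k*n/N) for n in range(N))
    if abs(s) > best_mag:
        best_k, best_mag = k, abs(s)

omega_est = 2*math.pi*best_k/T   # accurate to one frequency bin (2*pi/T)
print(omega_est)
```

The estimate is limited by the bin width 2π/T; refinement (interpolating between bins, or the dedicated procedures of [38,39]) is needed when ω must be known accurately.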
Figure 12. Numerical results for perturbed two–body gravitational problem (perturbed Kepler’s problem) with μ = 0.1 .
Figure 13. Numerical results for perturbed two–body gravitational problem (perturbed Kepler’s problem) with μ = 0.4 .

12.6. High–Order Ordinary Differential Equations and Partial Differential Equations

Keep in mind that there are well-established methods for reducing systems of high-order ordinary differential equations to first-order systems before attempting to solve them with the newly devised approaches. According to Boyce et al. [33], several techniques may be used in this context, such as changing variables or introducing a new variable for each derivative.
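The standard reduction can be sketched as follows for y″ = −ω²y: introduce u1 = y and u2 = y′, giving the first-order system u1′ = u2, u2′ = −ω²u1, which is then in the form required by the multistep methods of this paper. The value ω = 2 and the check below are illustrative assumptions:

```python
import math

omega = 2.0

# y'' = -omega^2 y rewritten as a first-order system via u1 = y, u2 = y':
#   u1' = u2
#   u2' = -omega^2 * u1
def f(t, u):
    u1, u2 = u
    return (u2, -omega**2 * u1)

def u_exact(t):
    # y = cos(omega t) together with its derivative
    return (math.cos(omega*t), -omega*math.sin(omega*t))

# central-difference check that u_exact satisfies u' = f(t, u)
h, t = 1e-5, 0.7
up = [(a - b)/(2*h) for a, b in zip(u_exact(t + h), u_exact(t - h))]
res = max(abs(a - b) for a, b in zip(up, f(t, u_exact(t))))
print(res)   # residual at finite-difference accuracy
```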
It is also worth mentioning that there are known ways to reduce a system of partial differential equations to a system of first-order ordinary differential equations, such as the method of characteristics (see [40]).
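For a first-order PDE, the idea of characteristics is that the PDE collapses to ODEs along special curves. A minimal sketch for the linear advection equation u_t + c u_x = 0 (the equation, speed c = 1.5, and Gaussian initial data are illustrative assumptions):

```python
import math

# For u_t + c u_x = 0, along the characteristic ODE dx/dt = c the solution
# is constant, so u(x, t) = u0(x - c t) for initial data u0.
c = 1.5
u0 = lambda x: math.exp(-x*x)

def u(x, t):
    x0 = x - c*t   # foot of the characteristic through (x, t)
    return u0(x0)

err = abs(u(2.0, 1.0) - u0(0.5))   # x - c t = 2.0 - 1.5 = 0.5 here
print(err)
```

For variable coefficients, the characteristic ODE dx/dt = c(x, t) must itself be integrated numerically, which is where the first-order ODE solvers of this paper enter.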

13. Conclusions

Based on the theory and the methodologies for the proofs of the relevant formulae, which we developed in our previous papers [27,28,29], we developed the direct formulae for the computation of the phase–lag and amplification–factor of the Backward Differentiation Formulae (BDF). Using the approach of minimizing or eliminating the phase–lag and amplification–factor to find Backward Differentiation Formulae (BDF) was our main objective in this study. We also looked at how the efficiency of the aforementioned approaches changed when the phase–lag and amplification–factor derivatives were removed. We then detailed several ways to efficiently create methods using the phase–lag and/or amplification–factor and its derivatives, taking into consideration the above. In particular, we developed the following methodologies:
  • Methodology for the elimination of the amplification–factor.
  • Methodology for the elimination of the amplification–factor and minimization of the phase–lag.
  • Methodology for the elimination of the phase–lag and elimination of the amplification–factor.
  • Methodology for the elimination of the phase–lag and elimination of the amplification–factor together with elimination of the derivatives of the phase–lag.
  • Methodology for the elimination of the phase–lag and elimination of the amplification–factor together with elimination of the derivatives of the amplification–factor.
  • Methodology for the elimination of the phase–lag and elimination of the amplification–factor together with the elimination of the derivatives of the phase–lag and the derivatives of the amplification–factor.
Based on our theoretical and numerical results, we conclude that the most efficient methods are those whose coefficients are defined by requiring the elimination of the phase–lag and of the amplification–factor, and those whose coefficients are defined by requiring, in addition, the elimination of the derivatives of the phase–lag and of the amplification–factor.
Following the aforementioned procedures, a number of Backward Differentiation Formulae (BDF) were developed. We built our methodology on the backward differentiation formula of algebraic order four.
The aforementioned approaches were tested on various problems with oscillating solutions to assess their performance.
All calculations adhered to the IEEE 754 Standard and were executed on a personal computer featuring an x86_64-compatible architecture, using quadruple-precision (128-bit) floating-point arithmetic.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Direct Formulae for the Calculation of the Derivatives of the Phase–Lag and the Amplification–Factor

Appendix A.1. Direct Formula for the Derivative of the Phase–Lag

PhErr_υ = Par1 / (a_m m² + m A(mυ) − ∑_{j=0}^{m−1} a_j j²)²
where
P a r 1 = A m υ cos m υ m 3 υ a m + sin m υ d d υ A m υ m 2 υ a m sin m υ m 3 a m 2 A m υ 2 cos m υ m 2 υ + 2 sin m υ A m υ m 2 a m + A m υ cos m υ j = 0 m 1 a j j 2 m υ sin m υ A m υ 2 m + sin m υ d d υ A m υ j = 0 m 1 a j j 2 υ sin m υ j = 0 m 1 a j j 2 m a m + d d υ A m υ cos m υ m a m j = 0 m 1 a j j sin j υ m 2 a m + sin m υ A m υ j = 0 m 1 a j j 2 + A m υ j = 0 m 1 a j j sin j υ m j = 0 m 1 a j j sin j υ j = 0 m 1 a j j 2 + d d υ A m υ j = 0 m 1 a j cos j υ m ,
and the phase–lag is denoted by PhErr.

Appendix A.2. Direct Formula for the Derivative of the Amplification–Factor

AF_υ = Par2 / (m² υ² A(mυ) + a_m m + ∑_{j=0}^{m−1} a_j j²)
where
P a r 2 = sin m υ A m υ 2 m 3 υ 3 A m υ cos m υ m 3 υ 2 a m + sin m υ d d υ A m υ m 2 υ 2 a m A m υ 2 cos m υ m 2 υ 2 j = 0 m 1 j cos j υ a j A m υ m 2 υ 2 + j = 0 m 1 a j j d d υ A m υ cos m υ υ + j = 0 m 1 sin j υ a j d d υ A m υ m 2 υ 2 + sin m υ A m υ m 2 υ a m j = 0 m 1 a j j sin m υ A m υ m υ + 2 j = 0 m 1 sin j υ a j A m υ m 2 υ + d d υ A m υ cos m υ m υ a m cos m υ m 2 a m 2 j = 0 m 1 a j j cos m υ m a m + A m υ cos m υ m a m + j = 0 m 1 a j j A m υ cos m υ j = 0 m 1 j cos j υ a j j = 0 m 1 a j j j = 0 m 1 j cos j υ a j m a m ,
and the amplification–factor is denoted by AF.

Appendix B. Formulae Pari, i = 8–12

Par8 = (κ0 + κ1 + κ2 + κ3 + 1)/(3κ1 + 4κ2 + 3κ3)
Par9 = Express1/(6 (3κ1 + 4κ2 + 3κ3)²)
Par10 = Express2/(360 (3κ1 + 4κ2 + 3κ3)³)
Par11 = Express3/(15120 (3κ1 + 4κ2 + 3κ3)⁴)
Par12 = Express4/(1814400 (3κ1 + 4κ2 + 3κ3)⁵),
where
E x p r e s s 1 = 188 κ 0 κ 1 + 352 κ 0 κ 2 + 468 κ 0 κ 3 + 125 κ 1 2 + 348 κ 1 κ 2 + 458 κ 1 κ 3 + 208 κ 2 2 + 532 κ 2 κ 3 + 333 κ 3 2 + 512 κ 0 + 556 κ 1 + 672 κ 2 + 836 κ 3 + 512 E x p r e s s 2 = 128764 κ 0 κ 1 2 + 608080 κ 0 κ 1 κ 2 + 990576 κ 0 κ 1 κ 3 + 684544 κ 0 κ 2 2 + 2152944 κ 0 κ 2 κ 3 + 1645812 κ 0 κ 3 2 + 77419 κ 1 3 + 456164 κ 1 2 κ 2 + 746425 κ 1 2 κ 3 + 793664 κ 1 κ 2 2 + 2466400 κ 1 κ 2 κ 3 + 1842993 κ 1 κ 3 2 + 396544 κ 2 3 + 1762528 κ 2 2 κ 3 + 2525436 κ 2 κ 3 2 + 1173987 κ 3 3 + 1335296 κ 0 κ 1 + 2818048 κ 0 κ 2 + 4202496 κ 0 κ 3 + 1043580 κ 1 2 + 3371344 κ 1 κ 2 + 4915568 κ 1 κ 3 + 2396672 κ 2 2 + 6754288 κ 2 κ 3 + 4655988 κ 3 2 + 2621440 κ 0 + 3219456 κ 1 + 4456448 κ 2 + 6086656 κ 3 + 2621440 E x p r e s s 3 = 57868724 κ 0 κ 1 3 + 453513312 κ 0 κ 1 2 κ 2 + 790045668 κ 0 κ 1 2 κ 3 + 1141137216 κ 0 κ 1 κ 2 2 + 3881690496 κ 0 κ 1 κ 2 κ 3 + 3248289324 κ 0 κ 1 κ 3 2 + 925278208 κ 0 κ 2 3 + 4615247808 κ 0 κ 2 2 κ 3 + 7552268064 κ 0 κ 2 κ 3 2 + 4052752380 κ 0 κ 3 3 + 35080931 κ 1 4 + 315503360 κ 1 3 κ 2 + 545710700 κ 1 3 κ 3 + 988326768 κ 1 2 κ 2 2 + 3318441480 κ 1 2 κ 2 κ 3 + 2735295282 κ 1 2 κ 3 2 + 1260925312 κ 1 κ 2 3 + 6152828832 κ 1 κ 2 2 κ 3 + 9816805008 κ 1 κ 2 κ 3 2 + 5115286188 κ 1 κ 3 3 + 536915968 κ 2 4 + 3372047872 κ 2 3 κ 3 + 7783807536 κ 2 2 κ 3 2 + 7823658888 κ 2 κ 3 3 + 2890620675 κ 3 4 + 1118588928 κ 0 κ 1 2 + 5409128448 κ 0 κ 1 κ 2 + 8952791040 κ 0 κ 1 κ 3 + 6327631872 κ 0 κ 2 2 + 20464680960 κ 0 κ 2 κ 3 + 16263770112 κ 0 κ 3 2 + 784002932 κ 1 3 + 4705860192 κ 1 2 κ 2 + 7679294628 κ 1 2 κ 3 + 8625503040 κ 1 κ 2 2 + 27234831744 κ 1 κ 2 κ 3 + 21042611820 κ 1 κ 3 2 + 4697890816 κ 2 3 + 21488905152 κ 2 2 κ 3 + 32102176032 κ 2 κ 3 2 + 15683960124 κ 3 3 + 6121586688 κ 0 κ 1 + 13740539904 κ 0 κ 2 + 21535653888 κ 0 κ 3 + 5357457408 κ 1 2 + 18764316672 κ 1 κ 2 + 28633251840 κ 1 κ 3 + 14739308544 κ 2 2 + 43618811904 κ 2 κ 3 + 31705362432 κ 3 2 + 9395240960 κ 0 + 12874416128 κ 1 + 19612565504 κ 2 + 28288483328 κ 3 + 9395240960
E x p r e s s 4 = 74855517764 κ 0 κ 1 4 + 817342821520 κ 0 κ 1 3 κ 2 + 1461601126872 κ 0 κ 1 3 κ 3 + 3262883805888 κ 0 κ 1 2 κ 2 2 + 11492587128528 κ 0 κ 1 2 κ 2 κ 3 + 10024326507888 κ 0 κ 1 2 κ 3 2 + 5645489273600 κ 0 κ 1 κ 2 3 + 29365602746112 κ 0 κ 1 κ 2 2 κ 3 + 50411077116912 κ 0 κ 1 κ 2 κ 3 2 + 28545543450216 κ 0 κ 1 κ 3 3 + 3575078846464 κ 0 κ 2 4 + 24412853276928 κ 0 κ 2 3 κ 3 + 61878069730368 κ 0 κ 2 2 κ 3 2 + 68969223401904 κ 0 κ 2 κ 3 3 + 28513146551436 κ 0 κ 3 4 + 45356966969 κ 1 5 + 547633493604 κ 1 4 κ 2 + 971806031861 κ 1 4 κ 3 + 2522207086768 κ 1 3 κ 2 2 + 8787334492600 κ 1 3 κ 2 κ 3 + 7574537845530 κ 1 3 κ 3 2 + 5493649373888 κ 1 2 κ 2 3 + 28128142819248 κ 1 2 κ 2 2 κ 3 + 47456461278288 κ 1 2 κ 2 κ 3 2 + 26359705248714 κ 1 2 κ 3 3 + 5588278523904 κ 1 κ 2 4 + 37292526567680 κ 1 κ 2 3 κ 3 + 92163932860752 κ 1 κ 2 2 κ 3 2 + 99899998614792 κ 1 κ 2 κ 3 3 + 40048658653437 κ 1 κ 3 4 + 2074251034624 κ 2 5 + 16846471777792 κ 2 4 κ 3 + 53983184625216 κ 2 3 κ 3 2 + 85259967150672 κ 2 2 κ 3 3 + 66338073535500 κ 2 κ 3 4 + 20337042185361 κ 3 5 + 2106413760512 κ 0 κ 1 3 + 16410866221056 κ 0 κ 1 2 κ 2 + 28460132352000 κ 0 κ 1 2 κ 3 + 41520755638272 κ 0 κ 1 κ 2 2 + 141630699995136 κ 0 κ 1 κ 2 κ 3 + 119454698225664 κ 0 κ 1 κ 3 2 + 34162538971136 κ 0 κ 2 3 + 171968809402368 κ 0 κ 2 2 κ 3 + 285360431431680 κ 0 κ 2 κ 3 2 + 156041754034176 κ 0 κ 3 3 + 1413117924676 κ 1 4 + 12676267045008 κ 1 3 κ 2 + 21732797451224 κ 1 3 κ 3 + 40244031097536 κ 1 2 κ 2 2 + 134963676549840 κ 1 2 κ 2 κ 3 + 111690811500912 κ 1 2 κ 3 2 + 52949147269888 κ 1 κ 2 3 + 260034020491008 κ 1 κ 2 2 κ 3 + 419877146134512 κ 1 κ 2 κ 3 2 + 222794647370856 κ 1 κ 3 3 + 23788358860800 κ 2 4 + 151519307124992 κ 2 3 κ 3 + 356685488294976 κ 2 2 κ 3 2 + 367678433461680 κ 2 κ 3 3 + 140028699396492 κ 3 4 + 20128091602944 κ 0 κ 1 2 + 99063907221504 κ 0 κ 1 κ 2 + 165850883555328 κ 0 κ 1 κ 3 + 118913148911616 κ 0 κ 2 2 + 391480212455424 κ 0 κ 2 κ 3 + 318360344592384 κ 0 κ 3 2 + 15541831417856 κ 1 3 + 95623084376064 κ 
1 2 κ 2 + 157128351301632 κ 1 2 κ 3 + 182721978826752 κ 1 κ 2 2 + 585775130345472 κ 1 κ 2 κ 3 + 462649023184896 κ 1 κ 3 2 + 105918960238592 κ 2 3 + 495389850206208 κ 2 2 κ 3 + 760965646123008 κ 2 κ 3 2 + 384003277701120 κ 3 3 + 76364518522880 κ 0 κ 1 + 177983444746240 κ 0 κ 2 + 286817916026880 κ 0 κ 3 + 73366224502784 κ 1 2 + 271165931126784 κ 1 κ 2 + 424858886340608 κ 1 κ 3 + 228348915613696 κ 2 2 + 696806753173504 κ 2 κ 3 + 524130214477824 κ 3 2 + 96207267430400 κ 0 + 145513491988480 κ 1 + 238112986890240 κ 2 + 355966889492480 κ 3 + 96207267430400

Appendix C. Formulae Par18, Par19, Par20, and Par21

Par18 = 48υ cos(υ) + 144υ cos(3υ) + 144υ cos(5υ) + 48υ cos(7υ) + 48 sin(υ) − 25 sin(8υ) − 50 sin(2υ) − 50 sin(6υ) − 50 sin(4υ) + 48 sin(7υ) + 112 sin(5υ) + 112 sin(3υ)
Par19 = 50υ cos(2υ) + 50υ cos(6υ) + 100υ cos(4υ) + 25 sin(2υ) + 25 sin(6υ) + 50 sin(4υ)
Par20 = 11568υ cos(υ) − 2320υ cos(3υ) + 4608υ cos(5υ) − 2304υ cos(7υ) + 4800υ cos(2υ) + 1600υ cos(6υ) − 9600υ cos(4υ) + 256υ cos(9υ) + 15904 sin(υ) + 2400 sin(8υ) − 800 sin(2υ) − 1200 sin(6υ) + 7625 sin(4υ) − 3584 sin(7υ) + 5120 sin(5υ) − 14880 sin(3υ) + 512 sin(9υ) − 400 sin(10υ) − 15250υ
Par21 = 3200υ cos(2υ) − 2400υ cos(6υ) + 15250υ cos(4υ) + 4800υ cos(8υ) − 800υ cos(10υ) + 2400 sin(8υ) − 800 sin(2υ) − 1200 sin(6υ) + 7625 sin(4υ) − 400 sin(10υ) + 4800υ.

Appendix D. Formula Par22

Par22 = 8700 + 6850κ2 + 2175ϱ − 109600υϱ cos³(υ) sin(υ) − 28700κ2 cos²(υ) + 54800υϱ cos(υ) sin(υ) − 36192 cos(υ) + 69600 cos²(υ) + 50112 cos³(υ) − 69600 cos⁴(υ) + 5000κ2ϱ cos²(υ) + 20000υ²ϱ² cos⁴(υ) + 41600υ²ϱ cos(υ) − 5000υ²κ2ϱ − 20000υ²ϱ² cos²(υ) − 57600υ²ϱ cos³(υ) + 80000υ²ϱ cos⁴(υ) − 6400υϱ sin(υ) + 20000υκ2ϱ cos³(υ) sin(υ) − 5000κ2ϱ cos⁴(υ) − 20000υκ2ϱ sin(υ) cos(υ) + 10000υ²ϱ + 10400κ2 cos(υ) + 2500υ²ϱ² − 625κ2ϱ + 20000κ2 cos⁴(υ) + 2500κ2² cos²(υ) + 17400ϱ cos⁴(υ) − 14400κ2 cos³(υ) − 17400ϱ cos²(υ) − 80000υ²ϱ cos²(υ) + 10000υ²κ2ϱ cos²(υ) + 38400υϱ cos²(υ) sin(υ) + 80000υ³ϱ² cos³(υ) sin(υ) − 40000υ³ϱ² sin(υ) cos(υ) − 1250κ2².

Appendix E. Formulae Par23, Par24, Par25, and Par26

Par23 = 48υ cos(8υ) + 96υ cos(6υ) − 416υ cos(2υ) − 32υ cos(4υ) + 400υ cos(υ) + 48 sin(8υ) + 64 sin(6υ) − 64 sin(2υ) − 336υ − 25 sin(9υ) − 25 sin(7υ)
Par24 = 50υ cos(5υ) + 50υ cos(7υ) − 150υ cos(3υ) + 25 sin(5υ) − 25 sin(3υ) − 25 sin(υ) − 150υ cos(υ) + 25 sin(7υ)
Par25 = 64 + 96 cos(4υ) + 100 cos(υ) − 75 cos(3υ) + 50 cos(7υ) − 80υ sin(12υ) + 32υ sin(6υ) + 32υ sin(10υ) + 256υ² cos(8υ) − 64υ² cos(6υ) + 400υ² cos(5υ) + 400υ² cos(3υ) + 1200υ² cos(υ) + 352υ sin(8υ) − 64 cos(8υ) + 50υ sin(13υ) − 300υ sin(9υ) + 50υ sin(11υ) − 25 cos(13υ) − 512υ² cos(2υ) − 200υ² cos(7υ) − 200υ² cos(9υ) + 50 cos(9υ) + 32 cos(12υ) − 25 cos(11υ) − 75 cos(5υ) + 50υ sin(3υ) + 50υ sin(5υ) + 64υ² cos(10υ) − 80υ sin(4υ) − 224υ² cos(4υ) − 32υ² cos(12υ) − 300υ sin(7υ) − 512υ²
Par26 = 100 cos(υ) − 75 cos(3υ) − 600υ² cos(7υ) + 100υ² cos(11υ) − 600υ² cos(9υ) + 300υ² cos(5υ) + 300υ² cos(3υ) + 2000υ² cos(υ) + 100υ² cos(13υ) + 100υ sin(13υ) − 400υ sin(9υ) + 100υ sin(11υ) − 400υ sin(7υ) + 100υ sin(5υ) + 100υ sin(3υ) − 25 cos(11υ) − 25 cos(13υ) + 50 cos(9υ) − 75 cos(5υ) + 50 cos(7υ).

Appendix F. Formulae Parq, q = 27–31

Par27 = 25υϱ sin(4υ) + 25κ1 cos(υ) + 25κ2 cos(2υ) + 25 cos(4υ) − 48 cos(3υ) + 25κ0
Par28 = 25υϱ cos(4υ) − 25κ1 sin(υ) − 25κ2 sin(2υ) − 25 sin(4υ) + 48 sin(3υ)
Par29 = 25ϱ sin(4υ) + 100υϱ cos(4υ) − 25κ1 sin(υ) − 50κ2 sin(2υ) − 100 sin(4υ) + 144 sin(3υ)
Par30 = 8300ϱ cos(4υ) − 40000υ³ϱ² sin(4υ) − 10000υ²ϱ² cos(4υ) − 40000υ²ϱ cos(4υ) + 57600υ²ϱ cos(3υ) − 38400υϱ sin(3υ) + 2500κ2ϱ cos(4υ) − 2500κ1κ2 cos(υ) − 1250κ1κ2 cos(2υ) + 8300κ1 cos(υ) + 16600κ2 cos(2υ) − 625κ1² cos(υ) − 2500κ1 cos(4υ) − 10000κ2 cos(4υ) + 3600κ1 cos(3υ) + 33200 cos(4υ) + 53200υϱ sin(4υ) − 10000υ²κ1ϱ cos(υ) − 2500υκ1ϱ sin(4υ) − 10000υκ2ϱ sin(4υ) + 20000υκ1ϱ sin(υ) + 20000υκ2ϱ sin(2υ) − 47808 cos(3υ) − 20000υ²κ2ϱ cos(2υ) + 625κ1ϱ cos(4υ) − 5000κ2² cos(2υ) + 14400κ2 cos(3υ)
Par31 = 160000υ⁴ϱ² + 20000υ²κ1ϱ + 80000υ²κ2ϱ − 265600υ²ϱ + 625κ1² + 10000κ2² − 16600κ1 − 66400κ2 + 5000κ1κ2 + 110224

Appendix G. Formulae Pari, i = 32–34

Par32 = 48υ cos(5υ) − 48υ cos(3υ) + 384υ cos(υ) − 48 sin(5υ) − 48 sin(3υ) + 25 sin(6υ) + 25 sin(2υ) + 25 sin(4υ) + 96 sin(υ) − 300υ
Par33 = 192υ cos(2υ) + 96υ cos(4υ) + 400υ cos(υ) − 50 sin(5υ) − 50 sin(3υ) − 48 sin(2υ) + 96 sin(4υ) − 192υ
Par34 = 48υ cos(3υ) + 144υ cos(υ) − 48 sin(3υ) + 25 sin(2υ) + 25 sin(4υ) + 48 sin(υ) − 150υ

References

  1. Landau, L.D.; Lifshitz, F.M. Quantum Mechanics; Pergamon: New York, NY, USA, 1965. [Google Scholar]
  2. Prigogine, I.; Rice, S. (Eds.) Advances in Chemical Physics; New Methods in Computational Quantum Mechanics; John Wiley & Sons: Berlin/Heidelberg, Germany, 1997; Volume 93. [Google Scholar]
  3. Simos, T.E. Numerical Solution of Ordinary Differential Equations with Periodical Solution. Ph.D. Thesis, National Technical University of Athens, Athens, Greece, 1990. (In Greek). [Google Scholar]
  4. Ixaru, L.G. Numerical Methods for Differential Equations and Applications; Reidel: Dordrecht, The Netherlands; Boston, MA, USA; Lancaster, UK, 1984. [Google Scholar]
  5. Quinlan, G.D.; Tremaine, S. Symmetric multistep methods for the numerical integration of planetary orbits. Astron. J. 1990, 100, 1694–1700. [Google Scholar] [CrossRef]
  6. Lyche, T. Chebyshevian multistep methods for ordinary differential equations. Numer. Math. 1972, 10, 65–75. [Google Scholar] [CrossRef]
  7. Konguetsof, A.; Simos, T.E. On the construction of Exponentially-Fitted Methods for the Numerical Solution of the Schrödinger Equation. J. Comput. Meth. Sci. Eng. 2001, 1, 143–165. [Google Scholar] [CrossRef]
  8. Dormand, J.R.; El-Mikkawy, M.E.A.; Prince, P.J. Families of Runge-Kutta-Nyström formulae. IMA J. Numer. Anal. 1987, 7, 235–250. [Google Scholar] [CrossRef]
  9. Franco, J.M.; Gomez, I. Some procedures for the construction of high-order exponentially fitted Runge-Kutta-Nyström Methods of explicit type. Comput. Phys. Commun. 2013, 184, 1310–1321. [Google Scholar] [CrossRef]
  10. Franco, J.M.; Gomez, I. Accuracy and linear Stability of RKN Methods for solving second-order stiff problems. Appl. Numer. Math. 2009, 59, 959–975. [Google Scholar] [CrossRef]
  11. Chien, L.K.; Senu, N.; Ahmadian, A.; Ibrahim, S.N.I. Efficient Frequency-Dependent Coefficients of Explicit Improved Two-Derivative Runge-Kutta Type Methods for Solving Third- Order IVPs. Pertanika J. Sci. Technol. 2023, 31, 843–873. [Google Scholar] [CrossRef]
  12. Zhai, W.J.; Fu, S.H.; Zhou, T.C.; Xiu, C. Exponentially-fitted and trigonometrically-fitted implicit RKN methods for solving y′′ = f (t, y). J. Appl. Math. Comput. 2022, 68, 1449–1466. [Google Scholar] [CrossRef]
  13. Fang, Y.L.; Yang, Y.P.; You, X. An explicit trigonometrically fitted Runge-Kutta method for stiff and oscillatory problems with two frequencies. Int. J. Comput. Math. 2020, 97, 85–94. [Google Scholar] [CrossRef]
  14. Dormand, J.R.; Prince, P.J. A family of embedded Runge-Kutta formulae. J. Comput. Appl. Math. 1980, 6, 19–26. [Google Scholar] [CrossRef]
  15. Kalogiratou, Z.; Monovasilis, T.; Psihoyios, G.; Simos, T.E. Runge–Kutta type methods with special properties for the numerical integration of ordinary differential equations. Phys. Rep. 2014, 536, 75–146. [Google Scholar] [CrossRef]
  16. Anastassi, Z.A.; Simos, T.E. Numerical multistep methods for the efficient solution of quantum mechanics and related problems. Phys. Rep. 2009, 482–483, 1–240. [Google Scholar] [CrossRef]
  17. Chawla, M.M.; Rao, P.S. A Noumerov-Type Method with Minimal Phase-Lag for the Integration of 2nd Order Periodic Initial-Value Problems. J. Comput. Appl. Math. 1984, 11, 277–281. [Google Scholar] [CrossRef]
  18. Ixaru, L.G.; Rizea, M. A Numerov-like scheme for the numerical solution of the Schrödinger equation in the deep continuum spectrum of energies. Comput. Phys. Commun. 1980, 19, 23–27. [Google Scholar] [CrossRef]
  19. Raptis, A.D.; Allison, A.C. Exponential-fitting Methods for the numerical solution of the Schrödinger equation. Comput. Phys. Commun. 1978, 14, 1–5. [Google Scholar] [CrossRef]
  20. Wang, Z.; Zhao, D.; Dai, Y.; Wu, D. An improved trigonometrically fitted P-stable Obrechkoff Method for periodic initial-value problems. Proc. R. Soc. A-Math. Phys. Eng. Sci. 2005, 461, 1639–1658. [Google Scholar] [CrossRef]
  21. Wang, C.; Wang, Z. A P-stable eighteenth-order six-Step Method for periodic initial value problems. Int. J. Mod. Phys. C 2007, 18, 419–431. [Google Scholar] [CrossRef]
  22. Shokri, A.; Khalsaraei, M.M. A new family of explicit linear two-step singularly P-stable Obrechkoff methods for the numerical solution of second-order IVPs. Appl. Math. Comput. 2020, 376, 125116. [Google Scholar] [CrossRef]
  23. Abdulganiy, R.I.; Ramos, H.; Okunuga, S.A.; Majid, Z.A. A trigonometrically fitted intra-step block Falkner method for the direct integration of second-order delay differential equations with oscillatory solutions. Afr. Mat. 2023, 34, 36. [Google Scholar] [CrossRef]
  24. Lee, K.C.; Senu, N.; Ahmadian, A.; Ibrahim, S.N.I. High-order exponentially fitted and trigonometrically fitted explicit two-derivative Runge-Kutta-type methods for solving third-order oscillatory problems. Math. Sci. 2022, 16, 281–297. [Google Scholar] [CrossRef]
  25. Fang, Y.L.; Huang, T.; You, X.; Zheng, J.; Wang, B. Two-frequency trigonometrically-fitted and symmetric linear multi-step methods for second-order oscillators. J. Comput. Appl. Math. 2021, 392, 113312. [Google Scholar] [CrossRef]
  26. Chun, C.; Neta, B. Trigonometrically-Fitted Methods: A Review. Mathematics 2019, 7, 1197. [Google Scholar] [CrossRef]
  27. Simos, T.E. A New Methodology for the Development of Efficient Multistep Methods for First–Order IVPs with Oscillating Solutions. Mathematics 2024, 12, 504. [Google Scholar] [CrossRef]
  28. Simos, T.E. Efficient Multistep Algorithms for First–Order IVPs with Oscillating Solutions: II Implicit and Predictor—Corrector Algorithms. Symmetry 2024, 16, 508. [Google Scholar] [CrossRef]
  29. Simos, T.E. A new methodology for the development of efficient multistep methods for first–order IVPs with oscillating solutions: III The Role of the Derivative of the Phase–Lag and the Derivative of the Amplification–Factor. Axioms 2024, 13, 514. [Google Scholar] [CrossRef]
  30. Saadat, H.; Kiyadeh, S.H.H.; Karim, R.G.; Safaie, A. Family of phase fitted 3-step second-order BDF methods for solving periodic and orbital quantum chemistry problems. J. Math. Chem. 2024, 62, 1223–1250. [Google Scholar] [CrossRef]
  31. Stiefel, E.; Bettis, D.G. Stabilization of Cowell’s method. Numer. Math. 1969, 13, 154–175. [Google Scholar] [CrossRef]
  32. Franco, J.M.; Palacios, M. High-order P-stable multistep methods. J. Comput. Appl. Math. 1990, 30, 1–10. [Google Scholar] [CrossRef]
  33. Boyce, W.E.; DiPrima, R.C.; Meade, D.B. Elementary Differential Equations and Boundary Value Problems, 11th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  34. Fehlberg, E. Classical Fifth-, Sixth-, Seventh-, and Eighth-order Runge-Kutta Formulas with Stepsize Control. NASA Technical Report 287. 1968. Available online: https://ntrs.nasa.gov/api/citations/19680027281/downloads/19680027281.pdf (accessed on 15 June 2024).
  35. Cash, J.R.; Karp, A.H. A variable order Runge–Kutta method for initial value problems with rapidly varying right-hand sides. ACM Trans. Math. Softw. 1990, 16, 201–222. [Google Scholar] [CrossRef]
  36. Petzold, L.R. An efficient numerical method for highly oscillatory ordinary differential equations. SIAM J. Numer. Anal. 1981, 18, 455–479. [Google Scholar] [CrossRef]
  37. Simos, T.E. New Open Modified Newton Cotes Type Formulae as Multilayer Symplectic Integrators. Appl. Math. Model. 2013, 37, 1983–1991. [Google Scholar] [CrossRef]
  38. Ramos, H.; Vigo-Aguiar, J. On the frequency choice in trigonometrically fitted methods. Appl. Math. Lett. 2010, 23, 1378–1381. [Google Scholar] [CrossRef]
  39. Ixaru, L.G.; Vanden Berghe, G.; De Meyer, H. Frequency evaluation in exponential fitting multistep algorithms for ODEs. J. Comput. Appl. Math. 2002, 140, 423–434. [Google Scholar] [CrossRef]
  40. Evans, L.C. Partial Differential Equations, 2nd ed.; American Mathematical Society: Providence, RI, USA, 2010; Chapter 3; pp. 91–135. [Google Scholar]
Figure 1. Stability region for the Backward Differentiation Formula of Algebraic Order Four (Classical Case) presented in Section 4.
Figure 2. Stability region for the Backward Differentiation Formula developed in Section 5.
Figure 3. Stability region for the Backward Differentiation Formula developed in Section 6.
Figure 4. Stability region for the Backward Differentiation Formula developed in Section 7.
Figure 5. Stability region for the Backward Differentiation Formula developed in Section 8.
Figure 6. Stability region for the Backward Differentiation Formula developed in Section 9.
Figure 7. Stability region for the Backward Differentiation Formula developed in Section 10.
Figure 8. Numerical results for the problem of Stiefel and Bettis [31].
Figure 9. Numerical results for the problem of Franco and Palacios [32].
Figure 10. Numerical results for the nonlinear problem of Petzold [36].