Article

Comparing the Performance of Two Butcher-Based Block Hybrid Algorithms for the Solution of Initial Value Problems

by Richard Olatokunbo Akinola 1,†, Ali Shokri 2,*,†, Joshua Sunday 1,†, Daniela Marian 3,*,† and Oyindamola D. Akinlabi 1,†

1 Department of Mathematics, University of Jos, Jos 930105, Nigeria
2 Department of Mathematics, University of Maragheh, Maragheh 83111-55181, Iran
3 Department of Mathematics, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(3), 165; https://doi.org/10.3390/axioms13030165
Submission received: 18 January 2024 / Revised: 21 February 2024 / Accepted: 23 February 2024 / Published: 1 March 2024
(This article belongs to the Special Issue Differential Equations and Related Topics)

Abstract:
In this paper, we compare the performance of two Butcher-based block hybrid methods for the numerical integration of initial value problems. We compare the condition numbers of the linear systems of equations arising from both methods, as well as the absolute errors of the solutions obtained. The results of the numerical experiments illustrate that the better-conditioned method outperformed its less well-conditioned counterpart in terms of absolute errors. In addition, after applying our method to some examples, we found that the absolute errors in this work were better than those of a recent study in the literature. Hence, we recommend this method for the numerical solution of stiff and non-stiff initial value problems.
MSC:
65D30; 65L04; 65L05; 65L06

1. Introduction

Differential equations play crucial roles in ecology, modeling, chemical kinetics, engineering, population dynamics and medicine. Some of them do not have exact or closed-form solutions, so it is imperative to find new numerical methods on the one hand, or to improve upon already existing methods on the other. More precisely, this paper considers finding numerical approximations to first-order initial value systems of ordinary differential equations of the form
\[ y'(x) = f(x, y), \quad x \in \mathbb{R}, \tag{1} \]
on the interval \([a, b]\), subject to the initial condition \(y(a) = y_0\), with \(f : \mathbb{R} \times \mathbb{R}^r \to \mathbb{R}^r\), \(r \in \mathbb{N} \setminus \{0\}\). One of the numerical methods for solving systems of differential equations is the class of linear multi-step methods which, though powerful, often suffers the disadvantage that the matrices arising from its use are ill-conditioned, according to Shampine [1]. As a result of this, and according to Trefethen and Bau [2] (p. 95), if a system is ill-conditioned, then one may lose roughly \(\log_{10}\) of the condition number of the matrix in significant digits. However, there is a paucity of literature on overcoming ill-conditioning as it relates to linear multi-step methods, though the subject of ill-conditioning, as it pertains to linear systems of equations, is well studied.
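This digit-loss rule of thumb is easy to demonstrate numerically. The following sketch (ours, using the classic Hilbert matrix rather than a matrix from this paper) shows the relative error of a direct solve tracking \(\kappa(A)\cdot\varepsilon_{\text{mach}}\):

```python
import numpy as np

# Illustrative sketch (not from the paper): solving A x = b in double precision
# loses about log10(cond(A)) significant digits.  The Hilbert matrix is a
# standard ill-conditioned test matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

kappa = np.linalg.cond(A)
x = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(f"cond(A) = {kappa:.2e}")            # ~1.5e10 for the 8x8 Hilbert matrix
print(f"digits lost ~ {np.log10(kappa):.1f}")
print(f"relative error = {rel_err:.2e}")   # roughly cond(A) * machine epsilon
```

Out of sixteen double-precision digits, about ten are lost here, which is why conditioning of the block system matters in what follows.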
In addition to this, block hybrid methods, which interpolate and collocate at both grid and off-grid points, multiply the size of the linear system to be solved, thereby increasing the propensity of the linear multi-step-based method to ill-conditioning. Runge–Kutta methods, on the other hand, suffer the inherent problem of requiring much more CPU time than their linear multi-step counterparts. Hence, a linear multi-step method that is computationally cheap, stable and well-conditioned is often sought. That is why, in this study, we improve upon the two-step Butcher's block hybrid method, which is not only numerically better conditioned but also outperforms those in the literature for non-linear systems of differential equations.
The two-step Butcher's hybrid scheme has been derived by several authors [3,4,5,6,7,8] using different off-grid points. The two-step Butcher's block hybrid method is a linear multi-step method of order five for the numerical solution of one- and higher-dimensional systems of ordinary differential equations, which is the crux of the matter in this article. In an earlier work, we showed that, for one-dimensional stiff and non-stiff, linear and nonlinear differential equations, the performance of the Butcher-based block hybrid method obtained at the points \(\{1, \frac{3}{2}, \frac{7}{4}, 2\}\) was at par with that obtained at \(\{1, \frac{3}{2}, 2, \frac{5}{2}\}\), though the latter was better conditioned than the former. To avoid abuse of notation, for the purpose of this study, we will refer to the block hybrid method obtained at the combination of grid and off-grid points \(\{1, \frac{3}{2}, \frac{7}{4}, 2\}\) as \(y_{n+7/4}\), and to the block hybrid method obtained at the latter points \(\{1, \frac{3}{2}, 2, \frac{5}{2}\}\) as \(y_{n+5/2}\), in line with [7].
In fact, this work is an extension of the work of Akinola et al. [7], where both methods were shown to be of order five and convergent. Nevertheless, we did not compare their regions of absolute stability or their error constants there, because the goal of that study was to provide a brief answer to Shampine's [1] claim about the ill-conditioning of matrices obtained from linear multi-step methods (see also [9,10,11,12,13,14,15,16]) when solving first-order differential equations. No Jacobian was presented in that paper, nor any proof of the non-singularity of the corresponding Jacobians, nor did we provide any algorithms.
In [6] we gave a proof of the non-singularity of the D matrix used when deriving the two-step Butcher's hybrid scheme for the solution of initial value problems. However, the test problems were one-dimensional. In addition to our discussion in the second-to-last paragraph above, the focus in [7] was to discuss, with numerical examples, how the ill-conditioning arising from using linear multi-step methods to approximate the solution of initial value problems could be reduced, albeit for one-dimensional ODEs, which is also an improvement on the work in [8]. Similarly, Adee and Atabo [17] presented a two-point linear multi-step method for solving problems similar to the ones considered in [7] and the present work, except that their method is not self-starting: they used a fourth-order Runge–Kutta method to obtain the starting values. In this work, our method is self-starting and does not rely on other methods to start (see also [18,19,20,21,22,23,24,25]).
In this work, we extend the idea in [7] to numerical examples of two-, three-, four- and six-dimensional systems of differential equations to confirm the earlier claim that \(y_{n+5/2}\) is better conditioned than \(y_{n+7/4}\). We compare their respective absolute errors and CPU times to see which of the two performs better. The plan of this study is as follows: in Section 1, we give an overview of the content of this article; in Section 2, we present the continuous form of the block hybrid method and state important results; in Section 3, we examine the convergence and give the appropriate algorithms for the methods. Finally, we conclude by presenting the results of the numerical experiments, which validate and confirm the theory, in Section 4.

2. Methodology

The emphasis in this section is twofold: first, we re-present the continuous formulation from which the two block hybrid methods \(y_{n+7/4}\) and \(y_{n+5/2}\) are derived at various grid and off-grid points; secondly, we show that the respective Jacobians (to be defined shortly) are non-singular at the root.

Derivation of Multi-Step Collocation Methods

Following in the footsteps of Onumanyi et al. [26], we present the re-derivation of our method, where a two-step multi-step method having m collocation points is
\[ y(x) = \sum_{j=0}^{t-1} \alpha_j(x)\, y(x_{n+j}) + h \sum_{j=0}^{m-1} \beta_j(x)\, f(x_j, y(x_j)), \tag{2} \]
where \(\alpha_j(x)\) and \(\beta_j(x)\),
\[ \alpha_j(x) = \sum_{i=0}^{t+m-1} \alpha_{j,i+1}\, x^i, \qquad h\,\beta_j(x) = h \sum_{i=0}^{t+m-1} \beta_{j,i+1}\, x^i, \tag{3} \]
are the continuous coefficients for \(j = 0, 1, \ldots, t-1\). The \(t\) \((0 < t \le k)\) in (2) are interpolation points taken from \(\{x_0, x_1, \ldots, x_{n+k}\}\), and the \(x_j\), for \(j = 0, 1, 2, \ldots, m-1\), are the m collocation points belonging to \(\{x_0, x_1, \ldots, x_{n+k}\}\). To obtain \(\alpha_j(x)\) and \(\beta_j(x)\), Onumanyi et al. [26] used a matrix equation of the form \(DC = I\), where I is the identity matrix of size \(t+m\), and D and C are matrices defined by
\[ D = \begin{pmatrix} 1 & x_n & x_n^2 & x_n^3 & \cdots & x_n^{t+m-1} \\ 1 & x_{n+1} & x_{n+1}^2 & x_{n+1}^3 & \cdots & x_{n+1}^{t+m-1} \\ \vdots & & & & & \vdots \\ 1 & x_{n+t-1} & x_{n+t-1}^2 & x_{n+t-1}^3 & \cdots & x_{n+t-1}^{t+m-1} \\ 0 & 1 & 2x_n & 3x_n^2 & \cdots & (t+m-1)x_n^{t+m-2} \\ \vdots & & & & & \vdots \\ 0 & 1 & 2x_{n+\delta} & 3x_{n+\delta}^2 & \cdots & (t+m-1)x_{n+\delta}^{t+m-2} \end{pmatrix}. \tag{4} \]
The matrix D in (4) is the collocation matrix, of size \((t+m) \times (t+m)\). The matrix C, whose first row gives the continuous coefficients, is defined as
\[ C = \begin{pmatrix} \alpha_{0,1} & \alpha_{1,1} & \cdots & \alpha_{t-1,1} & h\beta_{0,1} & \cdots & h\beta_{m-1,1} \\ \alpha_{0,2} & \alpha_{1,2} & \cdots & \alpha_{t-1,2} & h\beta_{0,2} & \cdots & h\beta_{m-1,2} \\ \vdots & & & & & & \vdots \\ \alpha_{0,t+m} & \alpha_{1,t+m} & \cdots & \alpha_{t-1,t+m} & h\beta_{0,t+m} & \cdots & h\beta_{m-1,t+m} \end{pmatrix}. \tag{5} \]
Let \(y_n\) be an approximation to the theoretical solution at \(x_n\), that is, \(y_n \approx y(x_n)\), and let \(f_n = f(x_n, y_n)\), \(y_{n+j} \approx y(x_{n+j}) = y(x_n + jh)\) and \(f_{n+j} = f(x_{n+j}, y_{n+j}) = f(x_n + jh, y(x_n + jh))\) for \(j \in \{1, \tfrac{3}{2}, 2\}\), with \(x_i = x_0 + ih\), \(i = 0, 1, 2, \ldots, N-1\), and \(h = \frac{b-a}{N}\).
We substituted \(t = 2\), \(m = 4\) and \(\delta \in \{1, \tfrac{3}{2}, 2\}\) into (2) to obtain the continuous formulation of the two-step Butcher's block hybrid method:
\[ y(x) = \alpha_0(x)\, y_n + \alpha_1(x)\, y_{n+1} + h\left[\beta_0(x) f_n + \beta_1(x) f_{n+1} + \beta_{3/2}(x) f_{n+3/2} + \beta_2(x) f_{n+2}\right], \tag{6} \]
and D from (4) reduces to
\[ D = \begin{pmatrix} 1 & x_n & x_n^2 & x_n^3 & x_n^4 & x_n^5 \\ 1 & x_{n+1} & x_{n+1}^2 & x_{n+1}^3 & x_{n+1}^4 & x_{n+1}^5 \\ 0 & 1 & 2x_n & 3x_n^2 & 4x_n^3 & 5x_n^4 \\ 0 & 1 & 2x_{n+1} & 3x_{n+1}^2 & 4x_{n+1}^3 & 5x_{n+1}^4 \\ 0 & 1 & 2x_{n+2} & 3x_{n+2}^2 & 4x_{n+2}^3 & 5x_{n+2}^4 \\ 0 & 1 & 2x_{n+3/2} & 3x_{n+3/2}^2 & 4x_{n+3/2}^3 & 5x_{n+3/2}^4 \end{pmatrix}. \tag{7} \]
Replacing \(x_n = x_{n+1} - h\), \(x_{n+3/2} = x_{n+1} + \tfrac{h}{2}\) and \(x_{n+2} = x_{n+1} + h\), the determinant of D is
\[ \det D = \frac{93\, h^{11}}{4}. \tag{8} \]
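This determinant is easy to check symbolically. The following SymPy sketch (ours, not part of the paper) builds the matrix D above and reproduces \(\det D = \tfrac{93 h^{11}}{4}\); the result is independent of \(x_{n+1}\) because translating all nodes multiplies the monomial basis by a unit-triangular change-of-basis matrix:

```python
import sympy as sp

h, s = sp.symbols('h s')          # s plays the role of x_{n+1}
xn, xn1, xn32, xn2 = s - h, s, s + h/2, s + h

val = lambda a: [a**i for i in range(6)]             # interpolation row (1, x, ..., x^5)
der = lambda a: [i * a**(i - 1) for i in range(6)]   # collocation row (0, 1, 2x, ..., 5x^4)

# Rows in the order displayed in (7): values at x_n, x_{n+1}; derivatives at
# x_n, x_{n+1}, x_{n+2}, x_{n+3/2}.
D = sp.Matrix([val(xn), val(xn1), der(xn), der(xn1), der(xn2), der(xn32)])
detD = sp.factor(D.det())
print(detD)   # -> 93*h**11/4, with the symbol s absent
```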
The following continuous scheme was discussed in [6], with \(\mu = x_{n+1} - x\):
\[ \begin{aligned} y(x) ={}& \frac{-24\mu^5 - 15h\mu^4 + 40h^2\mu^3 + 30h^3\mu^2}{31h^5}\, y_n + \frac{24\mu^5 + 15h\mu^4 - 40h^2\mu^3 - 30h^3\mu^2 + 31h^5}{31h^5}\, y_{n+1} \\ &+ \frac{-96\mu^5 - 91h\mu^4 + 98h^2\mu^3 + 89h^3\mu^2}{372h^4}\, f_n + \frac{-28\mu^5 - 2h\mu^4 + 57h^2\mu^3 + 4h^3\mu^2 - 31h^4\mu}{31h^4}\, f_{n+1} \\ &+ \frac{48\mu^5 - 32h\mu^4 - 80h^2\mu^3 + 64h^3\mu^2}{93h^4}\, f_{n+3/2} + \frac{-16\mu^5 + 21h\mu^4 + 6h^2\mu^3 - 11h^3\mu^2}{124h^4}\, f_{n+2}. \end{aligned} \tag{9} \]
Differentiating the continuous formulation with respect to μ, evaluating the derivative at \(\mu = -\tfrac{3h}{4}\) and making \(y_{n+1}\) the subject results in (10). The differentiation is important in this context because we want to obtain a square system of four equations in four unknowns. Without the differentiation and subsequent substitution, we would be left with an under-determined system of three equations in four unknowns, which has been shown in [7] not to give accurate approximations to the exact solution. Moreover, the differentiation allows us to add an extra function evaluation point, \(f_{n+7/4}\), thereby improving the accuracy. In the same vein, we evaluate (9) at the points \(\mu \in \{-\tfrac{3h}{4}, -\tfrac{h}{2}, -h\}\). That is, when \(\mu = -\tfrac{3h}{4}\), using \(\mu = x_{n+1} - x\), we have \(-\tfrac{3h}{4} = x_{n+1} - x\), so \(x = x_{n+1} + \tfrac{3h}{4} = x_{n+7/4}\). Hence,
\[ y(x) = y(x_{n+7/4}) = y_{n+7/4}. \]
When \(\mu = -\tfrac{h}{2}\): \(-\tfrac{h}{2} = x_{n+1} - x\), so \(x = x_{n+1} + \tfrac{h}{2} = x_{n+3/2}\). Thus,
\[ y(x) = y(x_{n+3/2}) = y_{n+3/2}. \]
Finally, when \(\mu = -h\): \(-h = x_{n+1} - x\), so \(x = x_{n+1} + h = x_{n+2}\). Thus,
\[ y(x) = y(x_{n+2}) = y_{n+2}. \]
Therefore, we have explained the link between μ and the left-hand sides. Hence, after the same substitutions have been made on the right-hand sides, we obtain the remaining schemes (11)–(13):
\[ y_{n+1} = y_n + \frac{h}{630}\left(179 f_n + 1169 f_{n+1} - 2156 f_{n+3/2} + 1984 f_{n+7/4} - 546 f_{n+2}\right), \tag{10} \]
\[ y_{n+3/2} = \frac{37}{496}\, y_n + \frac{459}{496}\, y_{n+1} + \frac{h}{1984}\left(39 f_n + 648 f_{n+1} + 480 f_{n+3/2} - 27 f_{n+2}\right), \tag{11} \]
\[ y_{n+7/4} = \frac{243}{7936}\, y_n + \frac{7693}{7936}\, y_{n+1} + \frac{h}{31744}\left(231 f_n + 7644 f_{n+1} + 16464 f_{n+3/2} + 441 f_{n+2}\right), \tag{12} \]
\[ y_{n+2} = -\frac{1}{31}\, y_n + \frac{32}{31}\, y_{n+1} + \frac{h}{93}\left(-f_n + 12 f_{n+1} + 64 f_{n+3/2} + 15 f_{n+2}\right). \tag{13} \]
The above schemes (10)–(13) are what make up the block hybrid method, which we denote by \(y_{n+7/4}\) for ease of reference. They can be expressed as the following nonlinear system:
\[ \begin{aligned} & y_{n+1} - y_n - \frac{h}{630}\left(179 f_n + 1169 f_{n+1} - 2156 f_{n+3/2} + 1984 f_{n+7/4} - 546 f_{n+2}\right) = 0, \\ & y_{n+3/2} - \frac{37}{496}\, y_n - \frac{459}{496}\, y_{n+1} - \frac{h}{1984}\left(39 f_n + 648 f_{n+1} + 480 f_{n+3/2} - 27 f_{n+2}\right) = 0, \\ & y_{n+7/4} - \frac{243}{7936}\, y_n - \frac{7693}{7936}\, y_{n+1} - \frac{h}{31744}\left(231 f_n + 7644 f_{n+1} + 16464 f_{n+3/2} + 441 f_{n+2}\right) = 0, \\ & y_{n+2} + \frac{1}{31}\, y_n - \frac{32}{31}\, y_{n+1} - \frac{h}{93}\left(-f_n + 12 f_{n+1} + 64 f_{n+3/2} + 15 f_{n+2}\right) = 0. \end{aligned} \]
For \(r = 1\), the Jacobian of the above nonlinear system is
\[ J_7 = \begin{pmatrix} 1 - \frac{167h}{90}\frac{\partial f_{n+1}}{\partial y_{n+1}} & \frac{154h}{45}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & -\frac{992h}{315}\frac{\partial f_{n+7/4}}{\partial y_{n+7/4}} & \frac{13h}{15}\frac{\partial f_{n+2}}{\partial y_{n+2}} \\ -\frac{459}{496} - \frac{81h}{248}\frac{\partial f_{n+1}}{\partial y_{n+1}} & 1 - \frac{15h}{62}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & 0 & \frac{27h}{1984}\frac{\partial f_{n+2}}{\partial y_{n+2}} \\ -\frac{7693}{7936} - \frac{1911h}{7936}\frac{\partial f_{n+1}}{\partial y_{n+1}} & -\frac{1029h}{1984}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & 1 & -\frac{441h}{31744}\frac{\partial f_{n+2}}{\partial y_{n+2}} \\ -\frac{32}{31} - \frac{4h}{31}\frac{\partial f_{n+1}}{\partial y_{n+1}} & -\frac{64h}{93}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & 0 & 1 - \frac{5h}{31}\frac{\partial f_{n+2}}{\partial y_{n+2}} \end{pmatrix}. \tag{14} \]
Lemma 1. 
The Jacobian J 7 in (14) is nonsingular at the root.
Proof. 
After carrying out elementary row operations (writing \(f'_{n+j} := \partial f_{n+j}/\partial y_{n+j}\) for brevity), we obtained
\[ \begin{pmatrix} 1 & \dfrac{120 f'_{n+3/2} h - 496}{162 f'_{n+1} h + 459} & 0 & -\dfrac{f'_{n+2} h}{24 f'_{n+1} h + 68} \\ 0 & 1 & 0 & \dfrac{27 f'_{n+1} f'_{n+2} h^2 + (81 f'_{n+2} - 162 f'_{n+1})h - 459}{96 f'_{n+1} f'_{n+3/2} h^2 + (192 f'_{n+3/2} + 64 f'_{n+1})h + 512} \\ 0 & 0 & 1 & \nu \\ 0 & 0 & 0 & 1 \end{pmatrix}, \]
where
\[ \nu = \frac{441 f'_{n+1} f'_{n+3/2} f'_{n+2} h^3 + \left[(1029 f'_{n+3/2} + 1176 f'_{n+1}) f'_{n+2} - 3528 f'_{n+1} f'_{n+3/2}\right] h^2 + (4508 f'_{n+2} - 7791 f'_{n+3/2} - 7644 f'_{n+1})h - 30772}{6144 f'_{n+1} f'_{n+3/2} h^2 + (12288 f'_{n+3/2} + 4096 f'_{n+1})h + 32768}. \]
The determinant of this upper triangular matrix, being the product of its diagonal elements, is one, and the elementary row operations used preserve non-singularity. Hence, \(J_7\) is nonsingular.    □
An alternative proof of the above result using Keller's ABCD Lemma [27] is given below; before that, we give a brief synopsis of the ABCD Lemma. The implicit determinant method of Spence and Poulton [28] is an application of Newton's method to finding the zeros of certain bordered matrices arising in the computation of photonic band structures in periodic materials such as photonic crystals. At the heart of the implicit determinant method is Keller's ABCD Lemma [27], whereby a partitioned matrix is shown to be nonsingular under certain conditions. The ABCD Lemma has been used in [29,30,31] in different ways to show that certain parameter-dependent matrices are nonsingular, so that Newton's method can be applied accordingly. The Lemma, as the name implies, relies on splitting a matrix into its partitioned A, B, C, D components; once certain conditions are satisfied, the matrix under consideration is shown to be non-singular (or, as the case may be, singular). These references and others form our motivation for using the ABCD Lemma to show that the Jacobians in this work are non-singular, as shown below.
We state without proof the one-dimensional version of Keller's [27] ABCD Lemma. The aim is to use it in proving the non-singularity of the one-dimensional version of \(J_7\).
Lemma 2. 
“ABCD” Lemma.
Let \(A \in \mathbb{R}^{n \times n}\), \(b, c \in \mathbb{R}^n\), \(d \in \mathbb{R}\) and
\[ J_7 = \begin{pmatrix} A & b \\ c^T & d \end{pmatrix} \in \mathbb{R}^{(n+1) \times (n+1)}. \]
Assume that A is non-singular; then, \(J_7\) has the following decomposition:
\[ \begin{pmatrix} A & b \\ c^T & d \end{pmatrix} = \begin{pmatrix} I & 0 \\ c^T A^{-1} & 1 \end{pmatrix} \begin{pmatrix} A & b \\ 0^T & d - c^T A^{-1} b \end{pmatrix}. \]
The matrix \(J_7\) is non-singular if \(d - c^T A^{-1} b \neq 0\).
Proof. 
Following [27], we split \(J_7\) thus (with \(f'_{n+j} := \partial f_{n+j}/\partial y_{n+j}\)):
\[ A = \begin{pmatrix} 1 - \frac{167h}{90} f'_{n+1} & \frac{154h}{45} f'_{n+3/2} & -\frac{992h}{315} f'_{n+7/4} \\ -\frac{459}{496} - \frac{81h}{248} f'_{n+1} & 1 - \frac{15h}{62} f'_{n+3/2} & 0 \\ -\frac{7693}{7936} - \frac{1911h}{7936} f'_{n+1} & -\frac{1029h}{1984} f'_{n+3/2} & 1 \end{pmatrix} \in \mathbb{R}^{3 \times 3}, \]
\[ b = \begin{pmatrix} \frac{13h}{15} f'_{n+2} \\ \frac{27h}{1984} f'_{n+2} \\ -\frac{441h}{31744} f'_{n+2} \end{pmatrix} \in \mathbb{R}^{3 \times 1}, \]
\[ c^T = \begin{pmatrix} -\frac{32}{31} - \frac{4h}{31} f'_{n+1} & -\frac{64h}{93} f'_{n+3/2} & 0 \end{pmatrix} \in \mathbb{R}^{1 \times 3}, \quad \text{and} \quad d = 1 - \frac{5h}{31} f'_{n+2} \in \mathbb{R}. \]
Before we apply the above Lemma, we need to show that A is non-singular. Using elementary row operations, A reduces to
\[ \begin{pmatrix} 1 & \dfrac{120 f'_{n+3/2} h - 496}{162 f'_{n+1} h + 459} & 0 \\ 0 & 1 & \dfrac{10368 f'_{n+1} h + 29376}{3528 f'_{n+1} f'_{n+3/2} h^2 + (7791 f'_{n+3/2} + 7644 f'_{n+1})h + 30772} \\ 0 & 0 & 1 \end{pmatrix}, \]
where \(f'_{n+j} := \partial f_{n+j}/\partial y_{n+j}\). Since there are three pivots, A is indeed nonsingular.
Now, we give a condition under which \(d - c^T A^{-1} b \neq 0\). Abbreviating \(a = f'_{n+1}\), \(b = f'_{n+3/2}\), \(c = f'_{n+7/4}\) and \(e = f'_{n+2}\), where \(f'_{n+j} := \partial f_{n+j}/\partial y_{n+j}\), we have
\[ d - c^T A^{-1} b = \frac{63abce\,h^4 + \left[(147b + 168a)ce - 156abe - 504abc\right]h^3 + \left[(644c - 252b + 592a)e + (1113b - 1092a)c + 2256ab\right]h^2 + (1056e - 4396c + 4212b - 2672a)h + 1440}{504abc\,h^3 - \left[(1113b + 1092a)c - 2256ab\right]h^2 - (4396c - 4212b + 2672a)h + 1440}, \]
so the scalar \(d - c^T A^{-1} b\) is non-zero if and only if the terms in h in the numerator do not add up to \(-1440\). Alternatively, as h tends to zero, all the terms in h tend to zero and the above expression becomes \(d - c^T A^{-1} b = 1\). Hence, the one-dimensional Jacobian matrix \(J_7\) is non-singular by the ABCD Lemma.    □
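The determinant identity underlying the Lemma, \(\det\begin{pmatrix} A & b \\ c^T & d \end{pmatrix} = \det(A)\,(d - c^T A^{-1} b)\), is easy to illustrate numerically. The following sketch (ours, with random data rather than the \(J_7\) entries) shows that the bordered matrix is nonsingular exactly when the Schur complement is nonzero:

```python
import numpy as np

# Numerical sketch of the "ABCD" decomposition for a bordered matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # comfortably nonsingular
b = rng.standard_normal(3)
c = rng.standard_normal(3)
d = 2.0

# Assemble J = [[A, b], [c^T, d]] and compare det(J) with det(A)*(d - c^T A^{-1} b).
J = np.block([[A, b[:, None]], [c[None, :], np.array([[d]])]])
schur = d - c @ np.linalg.solve(A, b)
print(np.linalg.det(J), np.linalg.det(A) * schur)  # the two values agree
```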
For higher-dimensional systems of differential equations, i.e., \(r \geq 2\), the analogous Jacobian is
\[ J_7 = \begin{pmatrix} I - \frac{167h}{90}\frac{\partial f_{n+1}}{\partial y_{n+1}} & \frac{154h}{45}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & -\frac{992h}{315}\frac{\partial f_{n+7/4}}{\partial y_{n+7/4}} & \frac{13h}{15}\frac{\partial f_{n+2}}{\partial y_{n+2}} \\ -\frac{459}{496}I - \frac{81h}{248}\frac{\partial f_{n+1}}{\partial y_{n+1}} & I - \frac{15h}{62}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & O & \frac{27h}{1984}\frac{\partial f_{n+2}}{\partial y_{n+2}} \\ -\frac{7693}{7936}I - \frac{1911h}{7936}\frac{\partial f_{n+1}}{\partial y_{n+1}} & -\frac{1029h}{1984}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & I & -\frac{441h}{31744}\frac{\partial f_{n+2}}{\partial y_{n+2}} \\ -\frac{32}{31}I - \frac{4h}{31}\frac{\partial f_{n+1}}{\partial y_{n+1}} & -\frac{64h}{93}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & O & I - \frac{5h}{31}\frac{\partial f_{n+2}}{\partial y_{n+2}} \end{pmatrix}, \]
where I is the identity matrix of size r and \(O \in \mathbb{R}^{r \times r}\) is the zero matrix. For \(r = 2\), substituting
\[ \frac{\partial f_{n+1}}{\partial y_{n+1}} = \begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix}, \qquad \frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} = \begin{pmatrix} b_1 & b_2 \\ b_3 & b_4 \end{pmatrix}, \]
and
\[ \frac{\partial f_{n+7/4}}{\partial y_{n+7/4}} = \begin{pmatrix} c_1 & c_2 \\ c_3 & c_4 \end{pmatrix}, \qquad \frac{\partial f_{n+2}}{\partial y_{n+2}} = \begin{pmatrix} d_1 & d_2 \\ d_3 & d_4 \end{pmatrix}, \]
into the above Jacobian yields the partitioned form of \(J_7\),
\[ J_7 = \begin{pmatrix} A & B \\ C^T & D \end{pmatrix} \in \mathbb{R}^{8 \times 8}, \tag{18} \]
where
\[ A = \begin{pmatrix} 1 - \frac{167 a_1 h}{90} & -\frac{167 a_2 h}{90} & \frac{154 b_1 h}{45} & \frac{154 b_2 h}{45} & -\frac{992 c_1 h}{315} & -\frac{992 c_2 h}{315} \\ -\frac{167 a_3 h}{90} & 1 - \frac{167 a_4 h}{90} & \frac{154 b_3 h}{45} & \frac{154 b_4 h}{45} & -\frac{992 c_3 h}{315} & -\frac{992 c_4 h}{315} \\ -\frac{81 a_1 h}{248} - \frac{459}{496} & -\frac{81 a_2 h}{248} & 1 - \frac{15 b_1 h}{62} & -\frac{15 b_2 h}{62} & 0 & 0 \\ -\frac{81 a_3 h}{248} & -\frac{81 a_4 h}{248} - \frac{459}{496} & -\frac{15 b_3 h}{62} & 1 - \frac{15 b_4 h}{62} & 0 & 0 \\ -\frac{1911 a_1 h}{7936} - \frac{7693}{7936} & -\frac{1911 a_2 h}{7936} & -\frac{1029 b_1 h}{1984} & -\frac{1029 b_2 h}{1984} & 1 & 0 \\ -\frac{1911 a_3 h}{7936} & -\frac{1911 a_4 h}{7936} - \frac{7693}{7936} & -\frac{1029 b_3 h}{1984} & -\frac{1029 b_4 h}{1984} & 0 & 1 \end{pmatrix} \in \mathbb{R}^{6 \times 6}, \]
\[ B = \begin{pmatrix} \frac{13 d_1 h}{15} & \frac{13 d_2 h}{15} \\ \frac{13 d_3 h}{15} & \frac{13 d_4 h}{15} \\ \frac{27 d_1 h}{1984} & \frac{27 d_2 h}{1984} \\ \frac{27 d_3 h}{1984} & \frac{27 d_4 h}{1984} \\ -\frac{441 d_1 h}{31744} & -\frac{441 d_2 h}{31744} \\ -\frac{441 d_3 h}{31744} & -\frac{441 d_4 h}{31744} \end{pmatrix} \in \mathbb{R}^{6 \times 2}, \]
\[ C^T = \begin{pmatrix} -\frac{4 a_1 h}{31} - \frac{32}{31} & -\frac{4 a_2 h}{31} & -\frac{64 b_1 h}{93} & -\frac{64 b_2 h}{93} & 0 & 0 \\ -\frac{4 a_3 h}{31} & -\frac{4 a_4 h}{31} - \frac{32}{31} & -\frac{64 b_3 h}{93} & -\frac{64 b_4 h}{93} & 0 & 0 \end{pmatrix} \in \mathbb{R}^{2 \times 6}, \]
and
\[ D = \begin{pmatrix} 1 - \frac{5 d_1 h}{31} & -\frac{5 d_2 h}{31} \\ -\frac{5 d_3 h}{31} & 1 - \frac{5 d_4 h}{31} \end{pmatrix} \in \mathbb{R}^{2 \times 2}. \]
Lemma 3. 
Two-dimensional version of the “ABCD” Lemma.
Let \(J_7\) be as defined in (18). Assume that A is nonsingular; then, \(J_7\) has the following decomposition:
\[ J_7 = \begin{pmatrix} A & B \\ C^T & D \end{pmatrix} = \begin{pmatrix} I & O \\ C^T A^{-1} & I \end{pmatrix} \begin{pmatrix} A & B \\ O^T & D - C^T A^{-1} B \end{pmatrix}. \tag{19} \]
Proof. 
We first need to show that A is non-singular, as a requirement for using the ABCD Lemma. After some elementary row operations, we obtained six pivots from the echelon form of A; thus, A is non-singular.
Since A has been shown to be non-singular, it remains to check that
\[ D - C^T A^{-1} B \neq \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. \]
Since \(D - C^T A^{-1} B\) is indeed not the zero matrix, by the two-dimensional version of the ABCD Lemma, \(J_7\) is non-singular, whether at the root or not.    □
The next corollary further reinforces the above result.
Corollary 1. 
In the worst-case scenario, where the Jacobian of the function to be integrated is singular, that is, when \(a_i = b_i = c_i = d_i = 0\) for \(i = 1(1)4\), the Jacobian \(J_7\) is still non-singular.
Proof. 
Since \(a_i = b_i = c_i = d_i = 0\) for \(i = 1(1)4\), then
\[ A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ -\frac{459}{496} & 0 & 1 & 0 & 0 & 0 \\ 0 & -\frac{459}{496} & 0 & 1 & 0 & 0 \\ -\frac{7693}{7936} & 0 & 0 & 0 & 1 & 0 \\ 0 & -\frac{7693}{7936} & 0 & 0 & 0 & 1 \end{pmatrix} \]
is nonsingular; hence, the first condition of the ABCD Lemma is satisfied. Moreover,
\[ B = O \in \mathbb{R}^{6 \times 2}, \qquad C^T = \begin{pmatrix} -\frac{32}{31} & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{32}{31} & 0 & 0 & 0 & 0 \end{pmatrix}, \qquad \text{and} \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
Hence,
\[ D - C^T A^{-1} B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. \]
Therefore, the partitioned Jacobian \(J_7\) is also nonsingular by the two-dimensional version of the ABCD Lemma.    □
Differentiating the continuous formulation (9) with respect to μ and evaluating the derivative at \(\mu = -\tfrac{3h}{2}\) results in (20). As explained earlier, the differentiation is important in this context because we want to obtain a square system of four equations in four unknowns. Moreover, it allows us to add an extra function evaluation point, \(f_{n+5/2}\), thereby improving the accuracy and leading to a reduction in the condition number.
In the same vein, evaluating (9) at the points \(\mu \in \{-\tfrac{3h}{2}, -\tfrac{h}{2}, -h\}\) gives the respective remaining schemes (21)–(23):
\[ y_{n+1} = y_n + \frac{h}{900}\left(269 f_n + 1360 f_{n+1} - 1220 f_{n+3/2} - 124 f_{n+5/2} + 615 f_{n+2}\right), \tag{20} \]
\[ y_{n+3/2} = \frac{37}{496}\, y_n + \frac{459}{496}\, y_{n+1} + \frac{h}{1984}\left(39 f_n + 648 f_{n+1} + 480 f_{n+3/2} - 27 f_{n+2}\right), \tag{21} \]
\[ y_{n+2} = -\frac{1}{31}\, y_n + \frac{32}{31}\, y_{n+1} + \frac{h}{93}\left(-f_n + 12 f_{n+1} + 64 f_{n+3/2} + 15 f_{n+2}\right), \tag{22} \]
\[ y_{n+5/2} = \frac{2484}{1984}\, y_n - \frac{500}{1984}\, y_{n+1} + \frac{h}{1984}\left(735 f_n + 4200 f_{n+1} - 2400 f_{n+3/2} + 2925 f_{n+2}\right). \tag{23} \]
The above schemes (20)–(23) are what make up the block hybrid method which we denote by \(y_{n+5/2}\) for ease of reference. The Jacobian of the corresponding non-linear system of equations is
\[ J_5 = \begin{pmatrix} 1 - \frac{68h}{45} f'_{n+1} & \frac{61h}{45} f'_{n+3/2} & -\frac{41h}{60} f'_{n+2} & \frac{31h}{225} f'_{n+5/2} \\ -\frac{459}{496} - \frac{81h}{248} f'_{n+1} & 1 - \frac{15h}{62} f'_{n+3/2} & \frac{27h}{1984} f'_{n+2} & 0 \\ -\frac{32}{31} - \frac{4h}{31} f'_{n+1} & -\frac{64h}{93} f'_{n+3/2} & 1 - \frac{5h}{31} f'_{n+2} & 0 \\ \frac{125}{496} - \frac{525h}{248} f'_{n+1} & \frac{75h}{62} f'_{n+3/2} & -\frac{2925h}{1984} f'_{n+2} & 1 \end{pmatrix}, \]
where \(f'_{n+j} := \partial f_{n+j}/\partial y_{n+j}\).
After performing a series of row operations on \(J_5\), we obtained the following echelon form:
\[ \begin{pmatrix} 1 & \dfrac{120 f'_{n+3/2} h - 496}{162 f'_{n+1} h + 459} & -\dfrac{f'_{n+2} h}{24 f'_{n+1} h + 68} & 0 \\ 0 & 1 & \dfrac{27 f'_{n+1} f'_{n+2} h^2 + (81 f'_{n+2} - 162 f'_{n+1})h - 459}{96 f'_{n+1} f'_{n+3/2} h^2 + (192 f'_{n+3/2} + 64 f'_{n+1})h + 512} & 0 \\ 0 & 0 & 1 & \nu_1 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \]
where
\[ \nu_1 = \frac{96 f'_{n+1} f'_{n+3/2} h^2 + (192 f'_{n+3/2} + 64 f'_{n+1})h + 512}{225 f'_{n+1} f'_{n+3/2} f'_{n+2} h^3 + \left[(375 f'_{n+3/2} - 75 f'_{n+1}) f'_{n+2} - 450 f'_{n+1} f'_{n+3/2}\right]h^2 + v - 125}, \]
and
\[ v = \left(775 f'_{n+2} - 525 f'_{n+3/2} + 1050 f'_{n+1}\right)h, \]
with \(f'_{n+j} := \partial f_{n+j}/\partial y_{n+j}\).
This shows that the Jacobian has four pivots; hence, it is non-singular at the root. For \(r = 2\), the Jacobian \(J_5\) becomes
\[ J_5 = \begin{pmatrix} I - \frac{68h}{45}\frac{\partial f_{n+1}}{\partial y_{n+1}} & \frac{61h}{45}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & -\frac{41h}{60}\frac{\partial f_{n+2}}{\partial y_{n+2}} & \frac{31h}{225}\frac{\partial f_{n+5/2}}{\partial y_{n+5/2}} \\ -\frac{459}{496}I - \frac{81h}{248}\frac{\partial f_{n+1}}{\partial y_{n+1}} & I - \frac{15h}{62}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & \frac{27h}{1984}\frac{\partial f_{n+2}}{\partial y_{n+2}} & O \\ -\frac{32}{31}I - \frac{4h}{31}\frac{\partial f_{n+1}}{\partial y_{n+1}} & -\frac{64h}{93}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & I - \frac{5h}{31}\frac{\partial f_{n+2}}{\partial y_{n+2}} & O \\ \frac{125}{496}I - \frac{525h}{248}\frac{\partial f_{n+1}}{\partial y_{n+1}} & \frac{75h}{62}\frac{\partial f_{n+3/2}}{\partial y_{n+3/2}} & -\frac{2925h}{1984}\frac{\partial f_{n+2}}{\partial y_{n+2}} & I \end{pmatrix}, \]
and the proof of its non-singularity is analogous to the proof of Lemma 3.
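To make the conditioning comparison concrete, the two scalar Jacobians above can be evaluated for the test equation \(y' = \lambda y\), where every partial derivative \(\partial f/\partial y\) equals λ. The following sketch (ours; the function names and the choice of λ, h probes are illustrative, not the paper's experiments) prints the 2-norm condition numbers side by side:

```python
import numpy as np

# Scalar 4x4 Jacobians J_7 and J_5 for y' = lam*y (every f_y equals lam),
# with the sign conventions of the residual systems above.
def jac7(lam, h):
    a = lam
    return np.array([
        [1 - 167*h*a/90,              154*h*a/45,     -992*h*a/315,   13*h*a/15],
        [-459/496 - 81*h*a/248,       1 - 15*h*a/62,   0.0,           27*h*a/1984],
        [-7693/7936 - 1911*h*a/7936, -1029*h*a/1984,   1.0,          -441*h*a/31744],
        [-32/31 - 4*h*a/31,          -64*h*a/93,       0.0,           1 - 5*h*a/31],
    ])

def jac5(lam, h):
    a = lam
    return np.array([
        [1 - 68*h*a/45,          61*h*a/45,     -41*h*a/60,     31*h*a/225],
        [-459/496 - 81*h*a/248,  1 - 15*h*a/62,  27*h*a/1984,   0.0],
        [-32/31 - 4*h*a/31,     -64*h*a/93,      1 - 5*h*a/31,  0.0],
        [125/496 - 525*h*a/248,  75*h*a/62,     -2925*h*a/1984, 1.0],
    ])

for lam, h in [(-1.0, 0.1), (-10.0, 0.1), (-100.0, 0.01)]:
    k7 = np.linalg.cond(jac7(lam, h))
    k5 = np.linalg.cond(jac5(lam, h))
    print(f"lam={lam:7.1f} h={h}: cond(J7)={k7:10.4f}  cond(J5)={k5:10.4f}")
```

This scalar probe merely shows how the comparison can be reproduced for given λ and h; the paper's conditioning claims concern the systems solved in the numerical experiments of Section 4.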

3. Convergence Analysis

In this section, we summarize the order, error constant, zero stability, consistency and convergence of the two block hybrid methods under discussion. We also plot the regions of absolute stability of the two methods, which were hitherto missing in the earlier papers. In fact, as will be seen in the numerical experiments in the next section, the clear distinction between their regions of absolute stability accounts for the disparity in their numerical accuracy. In addition, we present their corresponding Newton-based algorithms, premised on the non-singularity of the Jacobians, which was proved in the last section.
We begin by defining the order and error constant of a linear multi-step method. Consider the linear multi-step method [32]
\[ \sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f_{n+j}, \]
in which \(\alpha_k \neq 0\) and \(\alpha_0\) and \(\beta_0\) are not both zero. The associated linear difference operator \(\mathcal{L}\) is defined by
\[ \mathcal{L}[y(x); h] = \sum_{j=0}^{k} \left[ \alpha_j y(x + jh) - h \beta_j y'(x + jh) \right] = C_0 y(x) + C_1 h y^{(1)}(x) + \cdots + C_q h^q y^{(q)}(x) + \cdots, \tag{24} \]
where the \(C_q\) are real constants and \(y(x)\) is an arbitrary function that is continuously differentiable on the interval \([a, b]\).
Definition 1. 
A linear multi-step method and the associated difference operator (24) are said to be of order p if \(C_0 = C_1 = \cdots = C_p = 0\) and \(C_{p+1} \neq 0\), where \(C_{p+1}\) is the error constant of the method.
Furthermore,
\[ C_0 = \sum_{j=0}^{k} \alpha_j, \qquad C_1 = \sum_{j=1}^{k} \left( j\alpha_j - \beta_j \right), \qquad C_2 = \sum_{j=1}^{k} \left( \frac{j^2}{2}\alpha_j - j\beta_j \right), \qquad \ldots, \qquad C_q = \sum_{j=1}^{k} \left( \frac{j^q}{q!}\alpha_j - \frac{j^{q-1}}{(q-1)!}\beta_j \right), \tag{25} \]
for \(q \geq 2\).
In line with [6,7] and from [33], we present a summary of the basic convergence properties of the two block hybrid methods in Table 1 and Table 2 below.
The order (five) and the error constants shown in Table 1 and Table 2 were established in [6] (Lemma 2.1, pp. 3184–3185) and [7] (Theorem 2, pp. 4–5).
Definition 2 
([32]). The local truncation error (LTE) \(T_{n+k}\) at \(x_{n+k}\) of a linear multi-step method is defined to be the expression \(\mathcal{L}[y(x); h]\) given by (24) when \(y(x)\) is the theoretical solution of the initial value problem (1). The local truncation error of a method of order p can be expressed in the form
\[ T_{n+k} = C_{p+1} h^{p+1} y^{(p+1)}(x_n) + O(h^{p+2}), \tag{26} \]
where \(C_{p+1} h^{p+1} y^{(p+1)}(x_n)\) is the principal local truncation error.
With the above definition and the error constants from Table 1 and Table 2, respectively, the LTEs for the methods are given by
\[ T_{n+2} = \begin{pmatrix} 9.0962 \times 10^{-4} & 1.3230 \times 10^{-4} & 1.4471 \times 10^{-5} & 1.7921 \times 10^{-4} \end{pmatrix}^T h^6 y^{(6)}(x_n) + O(h^7), \]
and
\[ T_{n+2} = \begin{pmatrix} 3.2510 \times 10^{-2} & 1.3230 \times 10^{-4} & 5.1663 \times 10^{-3} & 1.7921 \times 10^{-4} \end{pmatrix}^T h^6 y^{(6)}(x_n) + O(h^7). \]
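These constants can be checked independently. The following SymPy sketch (ours) applies scheme (13), which both methods share, to the monomials \(y = x^q\) with \(x_n = 0\): the residual vanishes for \(q \le 5\) (order five), and the \(q = 6\) residual divided by \(6!\) recovers the \(1.7921 \times 10^{-4}\) entry for \(y_{n+2}\) up to the sign convention used for the LTE:

```python
import sympy as sp

h, x = sp.symbols('h x')

def residual(q):
    """Apply scheme (13) to y = x^q with x_n = 0 and return LHS - RHS."""
    y = x**q
    f = sp.diff(y, x)
    e = lambda c: f.subs(x, c*h)               # f evaluated at x_n + c h
    rhs = (-sp.Rational(1, 31)*y.subs(x, 0)
           + sp.Rational(32, 31)*y.subs(x, h)
           + h/93*(-e(0) + 12*e(1) + 64*e(sp.Rational(3, 2)) + 15*e(2)))
    return sp.expand(y.subs(x, 2*h) - rhs)

exact = [q for q in range(8) if sp.simplify(residual(q)) == 0]
print(exact)                                    # [0, 1, 2, 3, 4, 5]: order five
C6 = residual(6) / (sp.factorial(6) * h**6)
print(abs(C6), float(abs(C6)))                  # 1/5580 ~ 1.7921e-4
```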
The following definition of A ( α ) -stability is necessary to describe the nature of the region of absolute stability of the methods.
Definition 3. 
A numerical method is \(A(\alpha)\)-stable [34], where \(\alpha \in \left(0, \frac{\pi}{2}\right)\), if its region of absolute stability contains the wedge
\[ W_\alpha = \left\{ z = \lambda h \in \mathbb{C} : -\alpha < \arg z - \pi < \alpha \right\}. \]
In plotting the region of absolute stability of the first block hybrid method, we start by rewriting (10)–(13) in the following linear multi-step form:
\[ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} y_{n+1} \\ y_{n+3/2} \\ y_{n+7/4} \\ y_{n+2} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{37}{496} & \frac{459}{496} \\ 0 & 0 & \frac{243}{7936} & \frac{7693}{7936} \\ 0 & 0 & -\frac{1}{31} & \frac{32}{31} \end{pmatrix} \begin{pmatrix} y_{n-2} \\ y_{n-1} \\ y_n \\ y_{n+1} \end{pmatrix} + h \begin{pmatrix} 0 & 0 & 0 & \frac{179}{630} \\ 0 & 0 & 0 & \frac{39}{1984} \\ 0 & 0 & 0 & \frac{231}{31744} \\ 0 & 0 & 0 & -\frac{1}{93} \end{pmatrix} \begin{pmatrix} f_{n-3} \\ f_{n-2} \\ f_{n-1} \\ f_n \end{pmatrix} + h \begin{pmatrix} \frac{167}{90} & -\frac{154}{45} & \frac{992}{315} & -\frac{13}{15} \\ \frac{81}{248} & \frac{15}{62} & 0 & -\frac{27}{1984} \\ \frac{1911}{7936} & \frac{1029}{1984} & 0 & \frac{441}{31744} \\ \frac{4}{31} & \frac{64}{93} & 0 & \frac{5}{31} \end{pmatrix} \begin{pmatrix} f_{n+1} \\ f_{n+3/2} \\ f_{n+7/4} \\ f_{n+2} \end{pmatrix}, \]
where
\[ P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad Q = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{37}{496} & \frac{459}{496} \\ 0 & 0 & \frac{243}{7936} & \frac{7693}{7936} \\ 0 & 0 & -\frac{1}{31} & \frac{32}{31} \end{pmatrix}, \]
\[ R = \begin{pmatrix} 0 & 0 & 0 & \frac{179}{630} \\ 0 & 0 & 0 & \frac{39}{1984} \\ 0 & 0 & 0 & \frac{231}{31744} \\ 0 & 0 & 0 & -\frac{1}{93} \end{pmatrix}, \qquad \text{and} \qquad S = \begin{pmatrix} \frac{167}{90} & -\frac{154}{45} & \frac{992}{315} & -\frac{13}{15} \\ \frac{81}{248} & \frac{15}{62} & 0 & -\frac{27}{1984} \\ \frac{1911}{7936} & \frac{1029}{1984} & 0 & \frac{441}{31744} \\ \frac{4}{31} & \frac{64}{93} & 0 & \frac{5}{31} \end{pmatrix}. \]
We used the matrices above to find the determinant
\[ |Pw - Q - Rz - Szw| = \begin{vmatrix} w - \frac{167wz}{90} & \frac{154wz}{45} & -\frac{992wz}{315} - 1 & \frac{13wz}{15} - \frac{179z}{630} \\ -\frac{81wz}{248} & w - \frac{15wz}{62} & -\frac{37}{496} & \frac{27wz}{1984} - \frac{39z}{1984} - \frac{459}{496} \\ -\frac{1911wz}{7936} & -\frac{1029wz}{1984} & w - \frac{243}{7936} & -\frac{441wz}{31744} - \frac{231z}{31744} - \frac{7693}{7936} \\ -\frac{4wz}{31} & -\frac{64wz}{93} & \frac{1}{31} & w - \frac{5wz}{31} + \frac{z}{93} - \frac{32}{31} \end{vmatrix}, \]
where \(y' = \lambda y\) is the usual test equation, \(z = \lambda h\), \(w = \exp(i\theta)\), \(i = \sqrt{-1}\) and \(\theta \in [0, 2\pi]\). This results in the following stability polynomial:
\[ |Pw - Q - Rz - Szw| = \frac{\left(5249664z^4 + 40997376z^3 - 140517888z^2 + 174268416z + 119992320\right)w^4}{119992320} - \frac{\left(249984z^4 + 2080035z^3 + 54842099z^2 + 313812240z + 127537200\right)w^3}{119992320} + \frac{\left(-158481z^3 + 3392687z^2 + 29999424z + 7544880\right)w^2}{119992320}. \]
By collecting like terms in z, the stability polynomial reduces to
\[ \begin{aligned} |Pw - Q - Rz - Szw| ={}& \frac{\left(5249664w^4 - 249984w^3\right)z^4}{119992320} + \frac{\left(40997376w^4 - 2080035w^3 - 158481w^2\right)z^3}{119992320} + \frac{\left(-140517888w^4 - 54842099w^3 + 3392687w^2\right)z^2}{119992320} \\ &+ \frac{\left(174268416w^4 - 313812240w^3 + 29999424w^2\right)z}{119992320} + \frac{119992320w^4 - 127537200w^3 + 7544880w^2}{119992320} \\ ={}& \frac{\left(21w^4 - w^3\right)z^4}{480} + \frac{\left(4555264w^4 - 231115w^3 - 17609w^2\right)z^3}{13332480} + \frac{\left(-140517888w^4 - 54842099w^3 + 3392687w^2\right)z^2}{119992320} \\ &+ \frac{\left(518656w^4 - 933965w^3 + 89284w^2\right)z}{357120} + \frac{7936w^4 - 8435w^3 + 499w^2}{7936}. \end{aligned} \]
We used the above stability polynomial to plot the region of absolute stability of \(y_{n+7/4}\) in Octave; this results in Figure 1, which shows that the method is \(A(\alpha)\)-stable.
Next, we describe how to plot the region of absolute stability of the second block hybrid method, \(y_{n+5/2}\). We begin by rewriting (20)–(23) in the following linear multi-step form:
\[ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} y_{n+1} \\ y_{n+3/2} \\ y_{n+2} \\ y_{n+5/2} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{37}{496} & \frac{459}{496} \\ 0 & 0 & -\frac{1}{31} & \frac{32}{31} \\ 0 & 0 & \frac{621}{496} & -\frac{125}{496} \end{pmatrix} \begin{pmatrix} y_{n-3} \\ y_{n-2} \\ y_n \\ y_{n+1} \end{pmatrix} + h \begin{pmatrix} 0 & 0 & 0 & \frac{269}{900} \\ 0 & 0 & 0 & \frac{39}{1984} \\ 0 & 0 & 0 & -\frac{1}{93} \\ 0 & 0 & 0 & \frac{735}{1984} \end{pmatrix} \begin{pmatrix} f_{n-3} \\ f_{n-2} \\ f_{n-1} \\ f_n \end{pmatrix} + h \begin{pmatrix} \frac{68}{45} & -\frac{61}{45} & \frac{41}{60} & -\frac{31}{225} \\ \frac{81}{248} & \frac{15}{62} & -\frac{27}{1984} & 0 \\ \frac{4}{31} & \frac{64}{93} & \frac{5}{31} & 0 \\ \frac{525}{248} & -\frac{75}{62} & \frac{2925}{1984} & 0 \end{pmatrix} \begin{pmatrix} f_{n+1} \\ f_{n+3/2} \\ f_{n+2} \\ f_{n+5/2} \end{pmatrix}, \]
where
\[ P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad Q_1 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{37}{496} & \frac{459}{496} \\ 0 & 0 & -\frac{1}{31} & \frac{32}{31} \\ 0 & 0 & \frac{621}{496} & -\frac{125}{496} \end{pmatrix}, \]
\[ R_1 = \begin{pmatrix} 0 & 0 & 0 & \frac{269}{900} \\ 0 & 0 & 0 & \frac{39}{1984} \\ 0 & 0 & 0 & -\frac{1}{93} \\ 0 & 0 & 0 & \frac{735}{1984} \end{pmatrix}, \qquad \text{and} \qquad S_1 = \begin{pmatrix} \frac{68}{45} & -\frac{61}{45} & \frac{41}{60} & -\frac{31}{225} \\ \frac{81}{248} & \frac{15}{62} & -\frac{27}{1984} & 0 \\ \frac{4}{31} & \frac{64}{93} & \frac{5}{31} & 0 \\ \frac{525}{248} & -\frac{75}{62} & \frac{2925}{1984} & 0 \end{pmatrix}. \]
Using the above matrices, one obtains the stability polynomial \(|P_1 w - Q_1 - R_1 z - S_1 zw|\). Applying Newton's method to the corresponding stability polynomial equation and plotting the result gives Figure 2, which shows that this method is also \(A(\alpha)\)-stable.
Figure 1 and Figure 2 show the regions of absolute stability of the two hybrid methods; both are \(A(\alpha)\)-stable, albeit one has a larger region of absolute stability than the other. In both figures, the regions outside the contours represent the stable region, while the regions inside the contours correspond to the unstable region.
In what follows, we present Newton-based algorithms for the solution of systems of initial value problems, since we have shown that the Jacobians are non-singular. We state clearly that both algorithms are self-starting and do not rely on predictors or correctors to start. The starting values are the initial values of the differential equations.

3.1. Algorithm for \(y_{n+7/4}\)

Input: h, tol, system of differential equations to be solved, their initial values and corresponding Jacobians.
For k = 1 , 2 , 3 , , until convergence
Form $v^{(k)} = \left( y_{n+1}^{(k)},\; y_{n+3/2}^{(k)},\; y_{n+7/4}^{(k)},\; y_{n+2}^{(k)} \right)^{T}$,
and
\[
F(v^{(k)}) =
\begin{pmatrix}
y_{n+1} - y_n - \dfrac{h}{630}\left(179 f_n + 1169 f_{n+1} - 2156 f_{n+3/2} + 1984 f_{n+7/4} - 546 f_{n+2}\right) \\[4pt]
y_{n+3/2} - \dfrac{37}{496} y_n - \dfrac{459}{496} y_{n+1} - \dfrac{h}{1984}\left(39 f_n + 648 f_{n+1} + 480 f_{n+3/2} - 27 f_{n+2}\right) \\[4pt]
y_{n+7/4} - \dfrac{243}{7936} y_n - \dfrac{7693}{7936} y_{n+1} - \dfrac{h}{31744}\left(231 f_n + 7644 f_{n+1} + 16464 f_{n+3/2} + 441 f_{n+2}\right) \\[4pt]
y_{n+2} + \dfrac{1}{31} y_n - \dfrac{32}{31} y_{n+1} - \dfrac{h}{93}\left(-f_n + 12 f_{n+1} + 64 f_{n+3/2} + 15 f_{n+2}\right)
\end{pmatrix}.
\]
Compute J 7 ( v ( k ) ) .
Find the PLU factorization of $J_7(v^{(k)})$, i.e., $P J_7(v^{(k)}) = LU$, where $P$ is a permutation matrix.
Solve $L w^{(k)} = -P F(v^{(k)})$ for $w^{(k)}$.
Solve for Δ v ( k ) in U Δ v ( k ) = w ( k ) .
Increment v ( k + 1 ) = v ( k ) + Δ v ( k ) .
Output: y n + 1 ( k + 1 ) .
We now present the second algorithm, for $y_{n+5/2}$, below.

3.2. Algorithm for $y_{n+5/2}$

Input: h, tol, system of differential equations to be solved, their initial values and corresponding Jacobians.
For k = 1 , 2 , 3 , , until convergence
Form $v^{(k)} = \left( y_{n+1}^{(k)},\; y_{n+3/2}^{(k)},\; y_{n+2}^{(k)},\; y_{n+5/2}^{(k)} \right)^{T}$,
and
\[
F(v^{(k)}) =
\begin{pmatrix}
y_{n+1} - y_n - \dfrac{h}{900}\left(269 f_n + 1360 f_{n+1} - 1220 f_{n+3/2} - 124 f_{n+5/2} + 615 f_{n+2}\right) \\[4pt]
y_{n+3/2} - \dfrac{37}{496} y_n - \dfrac{459}{496} y_{n+1} - \dfrac{h}{1984}\left(39 f_n + 648 f_{n+1} + 480 f_{n+3/2} - 27 f_{n+2}\right) \\[4pt]
y_{n+2} + \dfrac{1}{31} y_n - \dfrac{32}{31} y_{n+1} - \dfrac{h}{93}\left(-f_n + 12 f_{n+1} + 64 f_{n+3/2} + 15 f_{n+2}\right) \\[4pt]
y_{n+5/2} - \dfrac{2484}{1984} y_n + \dfrac{500}{1984} y_{n+1} - \dfrac{h}{1984}\left(735 f_n + 4200 f_{n+1} - 2400 f_{n+3/2} + 2925 f_{n+2}\right)
\end{pmatrix}.
\]
Compute J 5 ( v ( k ) ) .
Find the PLU factorization of $J_5(v^{(k)})$, i.e., $P J_5(v^{(k)}) = LU$, where $P$ is a permutation matrix.
Solve $L w^{(k)} = -P F(v^{(k)})$ for $w^{(k)}$.
Solve U Δ v ( k ) = w ( k ) for Δ v ( k ) .
Increment v ( k + 1 ) = v ( k ) + Δ v ( k ) .
Output: y n + 1 ( k + 1 ) .
The stopping criterion for both algorithms is
\[
\left\| \Delta v^{(k)} \right\| \le \text{tol} \left( 1 + \left\| v^{(k)} \right\| \right).
\]
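The Newton loop above can be sketched compactly. The code below (an illustration, not the authors' implementation) applies the same pattern, Jacobian solve, update, and the relative stopping test just stated, to a single backward-Euler step of the Kaps problem from Example 1; backward Euler is a stand-in for the block equations $F(v) = 0$, and `np.linalg.solve` performs the $PA = LU$ factorization and triangular solves internally.

```python
import numpy as np

def kaps_f(x, y):
    """Right-hand side of the Kaps problem (Example 1)."""
    return np.array([-1002.0 * y[0] + 1000.0 * y[1] ** 2,
                     y[0] - y[1] - y[1] ** 2])

def kaps_jac(x, y):
    """Jacobian of kaps_f with respect to y."""
    return np.array([[-1002.0, 2000.0 * y[1]],
                     [1.0, -1.0 - 2.0 * y[1]]])

def newton_step(f, jac, x, y, h, tol=1e-12, maxit=25):
    """Solve the implicit equation v = y + h f(x + h, v) (backward Euler)
    by Newton's method, with the stopping rule of Section 3."""
    v = y.copy()                                  # initial guess: previous value
    for _ in range(maxit):
        F = v - y - h * f(x + h, v)               # residual F(v)
        J = np.eye(y.size) - h * jac(x + h, v)    # Jacobian of F
        dv = np.linalg.solve(J, -F)               # PA = LU solve of J dv = -F
        v = v + dv
        if np.linalg.norm(dv) <= tol * (1.0 + np.linalg.norm(v)):
            return v
    raise RuntimeError("Newton iteration failed to converge")

# One step of size h = 0.1 from y(0) = (1, 1); the exact values are
# (exp(-0.2), exp(-0.1)) and backward Euler is first-order accurate.
y_be = newton_step(kaps_f, kaps_jac, 0.0, np.array([1.0, 1.0]), 0.1)
```

For a stiff Jacobian the direct solve is what keeps the iteration stable; only the residual $F$ and Jacobian change when the block equations replace backward Euler.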
When the size of the matrix is rather large, iterative solvers such as GMRES are highly recommended in place of the direct PLU factorization. We support our theoretical results with numerical examples in the next section.

4. Numerical Experiments

In this section, we compare the performance of the two methods. We compare the absolute errors of the numerical approximations against the exact solutions, as well as the respective CPU times. All numerical computations were carried out on a 64-bit DELL Latitude laptop (manufactured in the USA) with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz. The first and last test problems were drawn from [35] with the sole aim of comparing the results of our method with theirs; the second test problem comes from chemistry. Throughout this section, we used a constant step size of h = 0.1 on the same integration interval of [0, 50], as shown in Table 3, Table 4, Table 5, Table 6 and Table 7 and their respective figures. However, for the purpose of comparison with the work in [35], we extended the interval to [0, 500] in Table 4 and Table 8.
Example 1. 
The non-linear stiff Kaps problem [36]
\[
\begin{pmatrix} y_1'(x) \\ y_2'(x) \end{pmatrix}
=
\begin{pmatrix} -1002\, y_1(x) + 1000\, y_2^2(x) \\ y_1(x) - y_2(x) - y_2^2(x) \end{pmatrix},
\qquad y_1(0) = 1, \quad y_2(0) = 1.
\]
The analytic solution is $y_1(x) = \exp(-2x)$ and $y_2(x) = \exp(-x)$.
The results of the numerical experiments for this example are presented in Table 3, Table 4 and Figure 3. Table 3 shows that the block hybrid method $y_{n+5/2}$ performed better than $y_{n+7/4}$ with the same initial conditions, especially at x = 50. Nevertheless, both methods, though of order five, outperformed Yakubu et al.'s [35] fourteenth-order method, as summarized in Table 4. Figure 3, a log–log plot of the errors against the step sizes, shows that, while both methods performed on par for $y_1(x)$, the same was not the case for $y_2(x)$, where $y_{n+7/4}$ performed better than $y_{n+5/2}$.
Example 2. 
The Wu problem [37], which arises from chemistry:
\[
\begin{pmatrix} y_1'(x) \\ y_2'(x) \end{pmatrix}
=
\begin{pmatrix} -500000\, y_1(x) + 499999.5\, y_2(x) \\ 499999.5\, y_1(x) - 500000\, y_2(x) \end{pmatrix},
\qquad y_1(0) = 0, \quad y_2(0) = 2.
\]
The analytic solution is $y_1(x) = \exp(-x/2) - \exp(-999999.5\,x)$ and $y_2(x) = \exp(-x/2) + \exp(-999999.5\,x)$.
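The stiffness here can be made concrete: the two exponentials in the solution correspond to the eigenvalues of the (symmetric) coefficient matrix, which are separated by about six orders of magnitude. A quick check (a sketch, assuming NumPy):

```python
import numpy as np

# Coefficient matrix of the Wu problem (Example 2)
A = np.array([[-500000.0,  499999.5],
              [ 499999.5, -500000.0]])

# A is symmetric, so eigvalsh returns its (real) eigenvalues;
# lam[0] = -999999.5 drives the fast transient exp(-999999.5 x),
# lam[1] = -0.5 drives the slow component exp(-x/2).
lam = np.sort(np.linalg.eigvalsh(A))
```

The ratio $|{-999999.5}|/|{-0.5}| \approx 2 \times 10^{6}$ is why explicit methods need absurdly small steps here while the A($\alpha$)-stable block methods do not.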
The results of the numerical experiments for this example are presented in Table 5 and Figure 4. Table 5 shows that the block hybrid method $y_{n+5/2}$ performed on par with $y_{n+7/4}$ under the same initial conditions. However, the log–log plot of the error against the step size shows that $y_{n+5/2}$ outperformed $y_{n+7/4}$.
Example 3. 
We consider the following three-dimensional linear problem:
\[
\begin{pmatrix} y_1'(x) \\ y_2'(x) \\ y_3'(x) \end{pmatrix}
=
\begin{pmatrix} -10 & 21 & 0 \\ -21 & -10 & 0 \\ 0 & 0 & -10 \end{pmatrix}
\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \end{pmatrix},
\]
with initial condition $[y_1(0), y_2(0), y_3(0)] = [1, 1, 1]$. The analytic solution is
\[
\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \end{pmatrix}
=
\begin{pmatrix} \exp(-10x)\left(\cos(21x) + \sin(21x)\right) \\ \exp(-10x)\left(\cos(21x) - \sin(21x)\right) \\ \exp(-10x) \end{pmatrix}.
\]
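As a sanity check on the stated solution, one can verify $y' = Ay$ numerically with a central difference (a sketch; the sample point $x = 0.5$ and tolerance are arbitrary choices):

```python
import numpy as np

# Coefficient matrix of Example 3
A = np.array([[-10.0,  21.0,   0.0],
              [-21.0, -10.0,   0.0],
              [  0.0,   0.0, -10.0]])

def y_exact(x):
    """Stated analytic solution of Example 3."""
    e = np.exp(-10.0 * x)
    c, s = np.cos(21.0 * x), np.sin(21.0 * x)
    return np.array([e * (c + s), e * (c - s), e])

# Central-difference approximation of y'(x) should match A @ y(x)
x, eps = 0.5, 1e-6
dy = (y_exact(x + eps) - y_exact(x - eps)) / (2.0 * eps)
err = np.linalg.norm(dy - A @ y_exact(x))
```

The residual `err` is at round-off level, confirming that the tabulated absolute errors measure the methods and not the reference solution.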
The results of the numerical experiments for this example are presented in Table 6 and Figure 5. Table 6 shows that the block hybrid method $y_{n+7/4}$ performed on par with $y_{n+5/2}$ using the same initial values, and this is confirmed by the log–log plots of the error against the step size in Figure 5.
Example 4. 
The linear stiff IVP considered by Fatunla [38]:
\[
\begin{pmatrix} y_1'(x) \\ y_2'(x) \\ y_3'(x) \\ y_4'(x) \\ y_5'(x) \\ y_6'(x) \end{pmatrix}
=
\begin{pmatrix}
-10 & 100 & 0 & 0 & 0 & 0 \\
-100 & -10 & 0 & 0 & 0 & 0 \\
0 & 0 & -4 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & -0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & -0.1
\end{pmatrix}
\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \\ y_4(x) \\ y_5(x) \\ y_6(x) \end{pmatrix},
\qquad
y_1(0) = y_2(0) = \cdots = y_6(0) = 1.
\]
The results of the numerical experiments for this example are presented in Table 7 and Figure 6. Table 7 and Figure 6 show that the block hybrid method $y_{n+5/2}$ performed on par with $y_{n+7/4}$ under the same initial conditions. Nevertheless, both methods, though of order five, outperformed Yakubu et al.'s [35] fourteenth-order method, as shown in Table 8.
From Table 9, we observe that the Wu problem of Example 2 was ill-conditioned, with condition numbers of 1,072,275.37 for $y_{n+7/4}$ versus 652,920 for $y_{n+5/2}$. Upon inspection, the increase in the condition number of the system was traced to the ill-conditioning of the Jacobian matrix of the problem under consideration, which has a condition number of about $2 \times 10^{6}$. The same table shows that, in Examples 1, 3 and 4, the condition numbers for $y_{n+7/4}$ are, respectively, about 1.72, 3 and 2 times those for $y_{n+5/2}$.
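The $2 \times 10^{6}$ figure quoted for the Jacobian of the Wu problem is simply the 2-norm condition number of its coefficient matrix, the ratio of its extreme eigenvalue magnitudes, and it can be reproduced directly (a sketch, assuming NumPy; `np.linalg.cond` uses the 2-norm by default):

```python
import numpy as np

# Coefficient matrix of the Wu problem (Example 2)
A = np.array([[-500000.0,  499999.5],
              [ 499999.5, -500000.0]])

# For this symmetric matrix, cond_2(A) = |lambda_max| / |lambda_min|
#                                      = 999999.5 / 0.5 = 1999999 ~ 2e6
kappa = np.linalg.cond(A)
```

Any linear system whose matrix inherits this conditioning, as the Newton Jacobians $J_7$ and $J_5$ do through the term $-h\,\partial f/\partial y$, will amplify rounding errors accordingly, which is why the better-conditioned method is preferred.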
We remark that, in an attempt to further reduce the condition number and improve accuracy, we also derived a new block hybrid method at the off-grid point $\frac{5}{4}$. However, this resulted in a higher condition number than both $y_{n+7/4}$ and $y_{n+5/2}$. On a final note, the results of both methods presented in this study outperform the fourteenth-order second-derivative method of [35] with fewer function evaluations and less computational time.

5. Conclusions

As shown in the numerical examples in this study, we have confirmed that, for higher-dimensional systems of initial value problems, $y_{n+5/2}$ still yields a lower condition number, in agreement with the one-dimensional case. It was also observed that, for higher-dimensional systems, the numerical experiments showed that $y_{n+5/2}$ compared favorably with both the exact solution and $y_{n+7/4}$. Therefore, we recommend the block hybrid method $y_{n+5/2}$ for the numerical integration of systems of linear, nonlinear, stiff and non-stiff differential equations due to its better conditioning and accuracy.

Author Contributions

Conceptualization, R.O.A., A.S., J.S., D.M. and O.D.A.; methodology, R.O.A., A.S., J.S., D.M. and O.D.A.; software, R.O.A., A.S., J.S., O.D.A. and D.M.; formal analysis, R.O.A., A.S., J.S., D.M. and O.D.A.; investigation, R.O.A., A.S., J.S. and O.D.A.; resources, R.O.A., A.S. and D.M.; writing—original draft preparation, R.O.A., J.S. and A.S.; writing—review and editing, R.O.A., A.S. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This article owes a great debt of thanks to three excellent referees whose comments helped to improve the final version of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shampine, L.F. Ill-conditioned matrices and the integration of stiff ODEs. J. Comput. Appl. Math. 1993, 48, 279–292. [Google Scholar] [CrossRef]
  2. Trefethen, L.N.; Bau, D., III. Numerical Linear Algebra; SIAM: Philadelphia, PA, USA, 1997. [Google Scholar]
  3. Butcher, J.C. A multistep generalization of Runge-Kutta methods with four or five stages. J. Assoc. Comput. Mach. 1967, 14, 84–99. [Google Scholar] [CrossRef]
  4. Butcher, J.C. Numerical methods for ordinary differential equations in the 20th century. J. Comput. Appl. Math. 2000, 125, 1–29. [Google Scholar] [CrossRef]
  5. Butcher, J.C. Numerical Methods for Ordinary Differential Equations; John Wiley and Sons Ltd.: Oxford, UK, 2016. [Google Scholar]
  6. Akinola, R.O.; Ajibade, K.J. A proof of the non singularity of the D matrix used in deriving the two-step Butcher’s hybrid scheme for the solution of initial value problems. J. Appl. Math. Phys. 2021, 9, 3177–3201. [Google Scholar] [CrossRef]
  7. Akinola, R.O.; Shokri, A.; Yao, S.-W.; Kutchin, S.Y. Circumventing ill-conditioning arising from using linear multistep methods in approximating the solution of initial value problems. Mathematics 2022, 10, 2910. [Google Scholar] [CrossRef]
  8. Sirisena, U.W.; Kumleng, G.M.; Yahaya, Y.A. A new Butcher type two-step block hybrid multistep method for accurate and efficient parallel solution of ordinary differential equations. Abacus Math. Ser. 2004, 31, 1–7. [Google Scholar]
  9. Golub, G.H.; Wilkinson, J.H. Ill-conditioned eigensystems and the computation of the Jordan canonical form. SIAM Rev. 1976, 18, 578–619. [Google Scholar] [CrossRef]
  10. Peters, G.; Wilkinson, J.H. Inverse iteration, ill conditioned equations and Newton’s method. SIAM Rev. 1979, 21, 339–360. [Google Scholar] [CrossRef]
  11. Farooq, M.; Salhi, A. Improving the solvability of Ill-conditioned systems of linear equations by reducing the condition number of their matrices. J. Korean Math. Soc. 2011, 48, 939–952. [Google Scholar] [CrossRef]
  12. Douglas, C.C.; Lee, L.; Yeung, M. On solving ill conditioned linear systems. Procedia Comput. Sci. 2016, 80, 941–950. [Google Scholar] [CrossRef]
  13. Demmel, J.W. Applied Numerical Linear Algebra; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1997. [Google Scholar]
  14. Akram, G.; Elahi, Z.; Siddiqi, S.S. Use of Laguerre Polynomials for Solving System of Linear Differential Equations. Appl. Comput. Math. 2022, 21, 137–146. [Google Scholar]
  15. Khankishiyev, Z.F. Solution of one problem for a loaded differential equation by the method of finite differences. Appl. Comput. Math. 2022, 21, 147–157. [Google Scholar]
  16. Juraev, D.A.; Shokri, A.; Marian, D. Solution of the ill-posed Cauchy problem for systems of elliptic type of the first order. Fractal Fract. 2022, 6, 358. [Google Scholar] [CrossRef]
  17. Adee, S.O.; Atabo, V.O. Improved two-point block backward differentiation formulae for solving first order stiff initial value problems of ordinary differential equations. Niger. Ann. Pure Appl. Sci. 2020, 3, 200–209. [Google Scholar] [CrossRef]
  18. Antczak, T.; Arana-Jimenez, M. Optimality and duality results for new classes of nonconvex quasidifferentiable vector optimization problems. Appl. Comput. Math. 2022, 21, 21–34. [Google Scholar]
  19. Qi, F. Necessary and sufficient conditions for a difference defined by four derivatives of a function containing trigamma function to be completely monotonic. Appl. Comput. Math. 2022, 21, 61–70. [Google Scholar]
  20. Iskandarov, S.; Komartsova, E. On the influence of integral perturbations on the boundedness of solutions of a fourth-order linear differential equation. TWMS J. Pure Appl. Math. 2022, 13, 3–9. [Google Scholar]
  21. Mansour, M.A.; Lahrache, J.; El Ayoubi, A.; El Bekkali, A. The stability of parametric Cauchy problem of initial-value ordinary differential equation revisited. J. Appl. Numer. Optim. 2023, 5, 111–124. [Google Scholar]
  22. Ren, J.; Bai, L.; Zhai, C. A decreasing operator method for a fractional initial value problem on finite interval. J. Nonlinear Funct. Anal. 2023, 2023, 35. [Google Scholar]
  23. Hamidov, S.I. Optimal trajectories in reproduction models of economic dynamics. TWMS J. Pure Appl. Math. 2022, 13, 16–24. [Google Scholar]
  24. Akbay, A.; Turgay, N.; Ergüt, M. On Space-like Generalized Constant Ratio Hypersufaces in Minkowski Spaces. TWMS J. Pure Appl. Math. 2022, 13, 25–37. [Google Scholar]
  25. Juraev, D.A.; Shokri, A.; Marian, D. Regularized solution of the Cauchy problem in an unbounded domain. Symmetry 2022, 14, 1682. [Google Scholar] [CrossRef]
  26. Onumanyi, P.; Awoyemi, D.O.; Jator, S.N.; Sirisena, U.W. New Linear Multistep Methods with Continuous Coefficients for First Order Initial Value Problems. J. Niger. Math. Soc. 1994, 13, 37–51. [Google Scholar]
  27. Keller, H.B. Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems. In Applications of Bifurcation Theory; Rabinowitz, P., Ed.; Academic Press: New York, NY, USA, 1997; pp. 359–384. [Google Scholar]
  28. Spence, A.; Poulton, C. Photonic band structure calculations using nonlinear eigenvalue techniques. J. Comput. Phys. 2005, 204, 65–81. [Google Scholar] [CrossRef]
  29. Akinola, R.O.; Freitag, M.A.; Spence, A. A method for the computation of Jordan blocks in parameter-dependent matrices. IMA J. Numer. Anal. 2014, 34, 955–976. [Google Scholar] [CrossRef]
  30. Freitag, M.A.; Spence, A. A Newton-based method for the calculation of the distance to instability. Linear Algebra Appl. 2011, 435, 3189–3205. [Google Scholar] [CrossRef]
  31. Akinola, R.O.; Spence, A. A comparison of the Implicit Determinant Method and Inverse Iteration. J. Niger. Math. Soc. 2014, 33, 205–230. [Google Scholar]
  32. Lambert, J.D. Computational Methods in Ordinary Differential Equations; John Wiley: New York, NY, USA, 1973. [Google Scholar]
  33. Demmel, J.W. Discrete Variable Methods in Ordinary Differential Equations; John Wiley: New York, NY, USA, 1962. [Google Scholar]
  34. Widlund, O.B. A note on unconditionally stable linear multistep methods. BIT Numer. Math. 1967, 7, 65–70. [Google Scholar] [CrossRef]
  35. Yakubu, D.G.; Shokri, A.; Kumleng, G.M.; Marian, D. Second derivative block hybrid methods for the numerical integration of differential systems. Fractal Fract. 2022, 6, 386. [Google Scholar] [CrossRef]
  36. Kaps, P.; Poon, S.W.H.; Bui, T.D. Rosenbrock methods for stiff ODEs: A comparison of Richardson extrapolation and embedding technique. Computing 1985, 34, 17–40. [Google Scholar] [CrossRef]
  37. Wu, X.Y.; Xiu, J.L. Two low accuracy methods for stiff systems. Appl. Math. Comput. 2001, 123, 141–153. [Google Scholar] [CrossRef]
  38. Fatunla, S.O. Numerical integrators for stiff and highly oscillatory differential equations. Math. Comput. 1980, 34, 373–390. [Google Scholar] [CrossRef]
Figure 1. Region of absolute stability of the block hybrid method $y_{n+7/4}$.
Figure 2. Region of absolute stability of the block hybrid method $y_{n+5/2}$.
Figure 3. The log–log plots of the errors and the step size on Example 1.
Figure 4. The log–log plots of the errors and the step size on Example 2.
Figure 5. The log–log plots of the errors and the step size on Example 3.
Figure 6. The log–log plots of the errors and the step size on Example 4. The red and blue lines represent $y_{n+5/2}$ and $y_{n+7/4}$, respectively.
Table 1. Properties of the block method $y_{n+7/4}$.
$y_{n+i}$ | Order $p$ | Error Constant $C_{p+1} \neq 0$ | Consistency? | Zero Stability? | Convergence?
$y_{n+1}$ | 5 | $-9.0962 \times 10^{-4}$ | Yes | Yes | Yes
$y_{n+3/2}$ | 5 | $1.3230 \times 10^{-4}$ | Yes | Yes | Yes
$y_{n+7/4}$ | 5 | $1.4471 \times 10^{-5}$ | Yes | Yes | Yes
$y_{n+2}$ | 5 | $-1.7921 \times 10^{-4}$ | Yes | Yes | Yes
The symbol is to avoid confusion with either of the methods.
Table 2. Properties of the block method $y_{n+5/2}$.
$y_{n+i}$ | Order $p$ | Error Constant $C_{p+1} \neq 0$ | Consistency? | Zero Stability? | Convergence?
$y_{n+1}$ | 5 | $3.2510 \times 10^{-2}$ | Yes | Yes | Yes
$y_{n+3/2}$ | 5 | $1.3230 \times 10^{-4}$ | Yes | Yes | Yes
$y_{n+5/2}$ | 5 | $5.1663 \times 10^{-3}$ | Yes | Yes | Yes
$y_{n+2}$ | 5 | $-1.7921 \times 10^{-4}$ | Yes | Yes | Yes
The symbol is to avoid confusion with either of the methods.
Table 3. Absolute error of the block hybrid methods $y_{n+7/4}$ and $y_{n+5/2}$ on Example 1.
x | y | Absolute Error for $y_{n+7/4}$ | Absolute Error for $y_{n+5/2}$
5 | $y_1$ | 4.5935115213239299 × 10^{-7} | 4.4495405902951008 × 10^{-7}
5 | $y_2$ | 4.8050326706232382 × 10^{-8} | 4.6460347875344754 × 10^{-8}
10 | $y_1$ | 2.0855112094424000 × 10^{-11} | 2.0201772875313122 × 10^{-11}
10 | $y_2$ | 3.1704212170890252 × 10^{-10} | 3.0313075502139391 × 10^{-10}
20 | $y_1$ | 4.2987802361462157 × 10^{-20} | 4.1642371192651194 × 10^{-20}
20 | $y_2$ | 1.3851474630459919 × 10^{-14} | 1.2925765285153073 × 10^{-14}
30 | $y_1$ | 8.8609023013571046 × 10^{-29} | 8.5838358912098099 × 10^{-29}
30 | $y_2$ | 6.0424991742526876 × 10^{-19} | 5.4886853366277116 × 10^{-19}
40 | $y_1$ | 1.8264620443729859 × 10^{-37} | 1.7694054396306910 × 10^{-37}
40 | $y_2$ | 2.6315790821435655 × 10^{-23} | 2.3195198529150539 × 10^{-23}
50 | $y_1$ | 3.7648125181410106 × 10^{-46} | 3.6473152891560397 × 10^{-46}
50 | $y_2$ | 1.1440182168782429 × 10^{-27} | 9.7481843383636344 × 10^{-28}
Table 4. Absolute error of the block hybrid methods, Yakubu et al. [35], $y_{n+7/4}$ and $y_{n+5/2}$ on Example 1.
x | y | Absolute Error in [35] | Absolute Error for $y_{n+7/4}$ | Absolute Error for $y_{n+5/2}$
5 | $y_1$ | 1.228938367083 × 10^{-3} | 4.5935115213239 × 10^{-7} | 4.4495405902951 × 10^{-7}
5 | $y_2$ | 1.800318343625 × 10^{-6} | 4.8050326706232 × 10^{-8} | 4.6460347875344 × 10^{-8}
50 | $y_1$ | 3.325679258575 × 10^{-5} | 3.7648125181410 × 10^{-46} | 3.6473152891560 × 10^{-46}
50 | $y_2$ | 5.804723043345 × 10^{-7} | 1.1440182168782 × 10^{-27} | 9.7481843383636 × 10^{-28}
250 | $y_1$ | 3.622719245691 × 10^{-12} | 0 | 0
250 | $y_2$ | 2.101212666995 × 10^{-10} | 0 | 0
500 | $y_1$ | 7.173620185942 × 10^{-21} | 0 | 0
500 | $y_2$ | 9.350493168888 × 10^{-15} | 0 | 0
Table 5. Absolute error of the block hybrid methods $y_{n+7/4}$ and $y_{n+5/2}$ on Example 2.
x | y | Absolute Error for $y_{n+7/4}$ | Absolute Error for $y_{n+5/2}$
5 | $y_1$ | 1.8429201220637736 × 10^{-10} | 2.7234449417878892 × 10^{-10}
5 | $y_2$ | 1.8429326120728007 × 10^{-10} | 2.7233922061942195 × 10^{-10}
10 | $y_1$ | 2.8917541972095506 × 10^{-11} | 4.4907285355610949 × 10^{-11}
10 | $y_2$ | 2.8917645188142327 × 10^{-11} | 4.4906853409465430 × 10^{-11}
20 | $y_1$ | 3.8537197115081148 × 10^{-13} | 6.0765400957535354 × 10^{-13}
20 | $y_2$ | 3.8537266910596002 × 10^{-13} | 6.0765109578201498 × 10^{-13}
30 | $y_1$ | 3.9031860015233805 × 10^{-15} | 6.1072016762397691 × 10^{-15}
30 | $y_2$ | 3.9031907131441496 × 10^{-15} | 6.1071820886028638 × 10^{-15}
40 | $y_1$ | 3.5794145759740676 × 10^{-17} | 5.4886823015453322 × 10^{-17}
40 | $y_2$ | 3.5794177192603953 × 10^{-17} | 5.4886691080145620 × 10^{-17}
50 | $y_1$ | 3.0204844833668178 × 10^{-19} | 4.6178900419926924 × 10^{-19}
50 | $y_2$ | 3.0204866159418346 × 10^{-19} | 4.6178811401075847 × 10^{-19}
Table 6. Absolute error of the block hybrid methods $y_{n+7/4}$ and $y_{n+5/2}$ on Example 3.
x | y | Absolute Error for $y_{n+7/4}$ | Absolute Error for $y_{n+5/2}$
5 | $y_1$ | 2.3285830553148310 × 10^{-22} | 2.2493291039341911 × 10^{-22}
5 | $y_2$ | 1.3218783622033109 × 10^{-22} | 1.4477085626385313 × 10^{-22}
5 | $y_3$ | 1.2354721309575536 × 10^{-23} | 1.7115476217234377 × 10^{-23}
10 | $y_1$ | 1.5438485820040401 × 10^{-44} | 1.5546760679948769 × 10^{-44}
10 | $y_3$ | 5.0309547106817356 × 10^{-44} | 5.0285879396184281 × 10^{-44}
10 | $y_2$ | 4.6131942308588293 × 10^{-45} | 6.3093549042213966 × 10^{-45}
20 | $y_1$ | 3.6580869562867370 × 10^{-88} | 3.6581192736544930 × 10^{-88}
20 | $y_2$ | 1.9226360458970499 × 10^{-87} | 1.9226360578422238 × 10^{-87}
20 | $y_3$ | 3.2194709960708681 × 10^{-88} | 4.2961763276024321 × 10^{-88}
30 | $y_1$ | 0 | 0
30 | $y_2$ | 0 | 0
30 | $y_3$ | 0 | 0
40 | $y_1$ | 0 | 0
40 | $y_2$ | 0 | 0
40 | $y_3$ | 0 | 0
50 | $y_1$ | 0 | 0
50 | $y_2$ | 0 | 0
50 | $y_3$ | 0 | 0
Table 7. Absolute error of the block hybrid methods $y_{n+7/4}$ and $y_{n+5/2}$ on Example 4.
x | y | Absolute Error for $y_{n+7/4}$ | Absolute Error for $y_{n+5/2}$
5 | $y_1$ | 2.6069389501515157 × 10^{-22} | 2.6069389501515157 × 10^{-22}
5 | $y_2$ | 8.0250935335644924 × 10^{-23} | 8.0250935335644924 × 10^{-23}
5 | $y_3$ | 8.6744998698744801 × 10^{-13} | 1.2898041482202381 × 10^{-12}
5 | $y_4$ | 8.8587245265087100 × 10^{-10} | 1.3666554329189173 × 10^{-9}
5 | $y_5$ | 1.7596617218895716 × 10^{-10} | 2.7328000973270150 × 10^{-10}
5 | $y_6$ | 8.6153306710912148 × 10^{-14} | 1.3411494137471891 × 10^{-13}
10 | $y_1$ | 5.1681476049220830 × 10^{-44} | 5.1681476049220830 × 10^{-44}
10 | $y_2$ | 9.8396182267041682 × 10^{-45} | 9.8396182267041682 × 10^{-45}
10 | $y_3$ | 3.5751428964356635 × 10^{-21} | 5.3153053899417913 × 10^{-21}
10 | $y_4$ | 1.1937922479439249 × 10^{-11} | 1.8416901846878692 × 10^{-11}
10 | $y_5$ | 2.8888368260038266 × 10^{-11} | 4.4864376970432662 × 10^{-11}
10 | $y_6$ | 1.0447198661722723 × 10^{-13} | 1.6314727346866675 × 10^{-13}
20 | $y_1$ | 7.7855244617256059 × 10^{-88} | 7.7855244617256059 × 10^{-88}
20 | $y_2$ | 1.7956044336063368 × 10^{-87} | 1.7956044336063368 × 10^{-87}
20 | $y_3$ | 3.0364165427948283 × 10^{-38} | 4.5134348071673213 × 10^{-38}
20 | $y_4$ | 1.0839615410958194 × 10^{-15} | 1.6722517595367738 × 10^{-15}
20 | $y_5$ | 3.8929657295103809 × 10^{-13} | 6.0458761478102141 × 10^{-13}
20 | $y_6$ | 7.6882944455292090 × 10^{-14} | 1.2007062011321068 × 10^{-13}
30 | $y_1$ | 0 | 0
30 | $y_2$ | 0 | 0
30 | $y_3$ | 1.9341519135769713 × 10^{-55} | 2.8744015994507605 × 10^{-55}
30 | $y_4$ | 7.3817656946758547 × 10^{-20} | 1.1388014559059823 × 10^{-19}
30 | $y_5$ | 3.9345892715868008 × 10^{-15} | 6.1105188160798945 × 10^{-15}
30 | $y_6$ | 4.2445214010200516 × 10^{-14} | 6.6252558994506217 × 10^{-14}
40 | $y_1$ | 0 | 0
40 | $y_2$ | 0 | 0
40 | $y_3$ | 1.0951341397262779 × 10^{-72} | 1.6271787057567443 × 10^{-72}
40 | $y_4$ | 4.4684213318548600 × 10^{-24} | 6.8935327550012643 × 10^{-24}
40 | $y_5$ | 3.5348071243628592 × 10^{-17} | 5.4896470009347222 × 10^{-17}
40 | $y_6$ | 2.0799334476961917 × 10^{-14} | 3.2505248492853411 × 10^{-14}
50 | $y_1$ | 0 | 0
50 | $y_2$ | 0 | 0
50 | $y_3$ | 5.8132012201935056 × 10^{-90} | 8.6356372622664460 × 10^{-90}
50 | $y_4$ | 2.5358248498385642 × 10^{-28} | 3.9120729933952014 × 10^{-28}
50 | $y_5$ | 2.9771677973414147 × 10^{-19} | 4.6236187847225050 × 10^{-19}
50 | $y_6$ | 9.5591937143701955 × 10^{-15} | 1.4946377469016170 × 10^{-14}
Table 8. Absolute error of the block hybrid methods, Yakubu et al. [35], $y_{n+7/4}$ and $y_{n+5/2}$ on Example 4.
x | y | Absolute Error in [35] | Absolute Error for $y_{n+7/4}$ | Absolute Error for $y_{n+5/2}$
5 | $y_1$ | 2.220446049250 × 10^{-16} | 2.6069389501515 × 10^{-22} | 2.6069389501515 × 10^{-22}
5 | $y_2$ | 1.318389841742 × 10^{-16} | 8.0250935335644 × 10^{-23} | 8.0250935335644 × 10^{-23}
5 | $y_3$ | 0 | 8.6744998698744 × 10^{-13} | 1.2898041482202 × 10^{-12}
5 | $y_4$ | 0 | 8.8587245265087 × 10^{-10} | 1.3666554329189 × 10^{-9}
50 | $y_1$ | 3.330669073875 × 10^{-16} | 0 | 0
50 | $y_2$ | 7.771561172376 × 10^{-16} | 0 | 0
50 | $y_3$ | 4.440892098500 × 10^{-16} | 5.8132012201935 × 10^{-90} | 8.6356372622664 × 10^{-90}
50 | $y_4$ | 1.110223024625 × 10^{-16} | 2.5358248498385 × 10^{-28} | 3.9120729933952 × 10^{-28}
250 | $y_1$ | 6.591949208711 × 10^{-17} | 0 | 0
250 | $y_2$ | 1.734723475976 × 10^{-18} | 0 | 0
250 | $y_3$ | 8.326672684688 × 10^{-17} | 0 | 0
250 | $y_4$ | 6.661338147750 × 10^{-16} | 0 | 0
500 | $y_1$ | 6.810144895924 × 10^{-19} | 0 | 0
500 | $y_2$ | 2.710505431213 × 10^{-20} | 0 | 0
500 | $y_3$ | 4.857225732735 × 10^{-17} | 0 | 0
500 | $y_4$ | 3.330669073875 × 10^{-16} | 0 | 0
Table 9. Comparing the condition number and cputime for both block hybrid methods under discussion. Here, $\kappa(J_7)$ and $\kappa(J_5)$ are the condition numbers of the Jacobians $J_7$ and $J_5$ at the root, respectively.
Example | Size of System | $\kappa(J_7)$ | $\kappa(J_5)$ | cputime ($y_{n+7/4}$) | cputime ($y_{n+5/2}$)
Example 1 | 8 × 8 | 1091.10 | 633.14 | 3.6000 × 10^{-5} | 3.6000 × 10^{-5}
Example 2 | 8 × 8 | 1,072,275.37 | 652,920.00 | 3.6000 × 10^{-5} | 3.6666 × 10^{-5}
Example 3 | 12 × 12 | 67.65 | 22.11 | 5.7000 × 10^{-5} | 5.8000 × 10^{-5}
Example 4 | 24 × 24 | 137.34 | 68.07 | 3.9333 × 10^{-5} | 3.9333 × 10^{-5}

Akinola, R.O.; Shokri, A.; Sunday, J.; Marian, D.; Akinlabi, O.D. Comparing the Performance of Two Butcher-Based Block Hybrid Algorithms for the Solution of Initial Value Problems. Axioms 2024, 13, 165. https://doi.org/10.3390/axioms13030165
