1. Introduction
In several important scientific fields such as quantum mechanics, elasticity, and electronics, many problems can be represented by mathematical models of symmetric systems, see, e.g., [1,2,3,4], which usually lead to ordinary differential equations characterized by oscillation and periodicity [5,6], i.e.,

$$y'(t) = f(t, y(t)), \quad y(t_0) = y_0. \tag{1}$$
In view of the oscillation and periodicity of the equations in symmetric systems, exponentially fitted methods, whose theoretical basis was first provided by Gautschi [7] and Lyche [8], have been used to solve these equations in many studies. For instance, the authors of [9,10] investigated exponentially fitted two-step BDF methods and linear multistep methods. The idea of exponential fitting was first applied to Runge–Kutta methods by Simos [11] in 1998. Since then, building on Simos's research, several exponentially fitted Runge–Kutta methods have been constructed [12,13,14]. Some scholars have also estimated the frequency of the methods by analyzing the local truncation error and controlled the step size to improve accuracy and efficiency [15,16].
However, the exponentially fitted technique has mostly been applied to Runge–Kutta methods, whose explicit forms have difficulty with stiff problems and whose implicit forms require a large amount of computation. For this reason, Rosenbrock methods, based on Rosenbrock's observation [17] that a single Newton iteration suffices to preserve the stability properties of diagonally implicit Runge–Kutta methods, have been considered for solving ordinary differential equations in symmetric systems. The general form of Rosenbrock methods for first-order ordinary differential equations was given by Hairer and Wanner [18]; many scholars have since developed this form and analyzed its implementation, see, e.g., [19,20] and the references therein. Rosenbrock methods not only retain the stability of the corresponding diagonally implicit Runge–Kutta methods, but also reduce the computational cost compared with implicit methods, because only a linear system of equations needs to be solved per step. Some studies on exponentially fitted Rosenbrock methods exist [21,22,23], but these methods, which use a constant frequency and step size, have difficulty solving the equations efficiently and adaptively.
In this paper, we combine the exponentially fitted Rosenbrock methods with embedded Rosenbrock methods, estimating the frequency before each step and controlling the step size by Richardson extrapolation. Through this frequency estimation and step size control, our variable-coefficient methods can solve equations with oscillation and periodicity efficiently, and their order of convergence is one higher than that of the corresponding constant-coefficient methods.
The outline of this paper is as follows. In Section 2, a class of exponentially fitted Rosenbrock methods is constructed, and we give the local truncation error, frequency estimation, and stability analysis of the methods. In Section 3, we combine the exponentially fitted Rosenbrock methods with embedded Rosenbrock methods to construct a class of embedded variable-coefficient exponentially fitted Rosenbrock (3,2) methods, and describe the frequency estimation and step size control strategy. In Section 4, three numerical tests are presented to verify the validity of our methods by comparing the number of calculation steps, the error, and the computing time with those of other numerical methods. Section 5 gives some discussion and remarks.
2. A Class of Exponentially Fitted Rosenbrock Methods
In this section, a class of exponentially fitted Rosenbrock methods for ordinary differential equation models is constructed, and we give the local truncation error, frequency estimation, and stability analysis of the methods.
Applying the s-stage Rosenbrock method to solve system (1) yields

$$k_i = hf\Big(t_0 + \alpha_i h,\; y_0 + \sum_{j=1}^{i-1} \alpha_{ij} k_j\Big) + \gamma_i h^2 \frac{\partial f}{\partial t}(t_0, y_0) + h\,\frac{\partial f}{\partial y}(t_0, y_0) \sum_{j=1}^{i} \gamma_{ij} k_j, \quad i = 1, \dots, s,$$
$$y_1 = y_0 + \sum_{i=1}^{s} b_i k_i,$$

where $h$ is the step size, and $\alpha_{ij}$, $\gamma_{ij}$, $b_i$, $\alpha_i$, $\gamma_i$ are real coefficients which satisfy $\alpha_i = \sum_{j=1}^{i-1} \alpha_{ij}$ and $\gamma_i = \sum_{j=1}^{i} \gamma_{ij}$ for $i = 1, \dots, s$.
We can also change the nonautonomous system (1) to an autonomous system by appending the independent variable to the state, i.e., by the transformation $Y = (y^{\mathsf T}, t)^{\mathsf T}$, $F(Y) = (f(t, y)^{\mathsf T}, 1)^{\mathsf T}$. Thus, we will focus on autonomous problems for simplicity of presentation, i.e.,

$$y' = f(y), \quad y(t_0) = y_0, \tag{2}$$

where $f$ is assumed to be thrice continuously differentiable and the exact solution is assumed to be twice continuously differentiable for the subsequent theoretical analysis. We consider the following s-stage Rosenbrock methods for the autonomous system (2),

$$k_i = hf\Big(y_0 + \sum_{j=1}^{i-1} a_{ij} k_j\Big) + hf'(y_0)\sum_{j=1}^{i} \gamma_{ij} k_j, \quad i = 1, \dots, s,$$
$$y_1 = y_0 + \sum_{i=1}^{s} b_i k_i, \tag{3}$$
or, in tableau form, (4), where $A = (a_{ij})$ and $\Gamma = (\gamma_{ij})$ collect the stage coefficients, $b = (b_1, \dots, b_s)^{\mathsf T}$, $a_{ij} = 0$ for $j \ge i$, and $\gamma_{ij} = 0$ for $j > i$. Compared with the classic Rosenbrock methods, the methods (4), in which $\gamma_{ii} = \gamma$ for $i = 1, \dots, s$, need only one LU-decomposition per step, and their order conditions can be simplified [18]. Moreover, this kind of method has a higher degree of freedom in its coefficient selection due to the extra coefficients in each step.
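To make the single-LU-decomposition property concrete, the following MATLAB sketch advances a generic 2-stage method of the form (3) with $\gamma_{11} = \gamma_{22} = \gamma$ by one step. The coefficient names a21, gamma21, b1, b2 are generic placeholders, not the fitted values derived below; the point is only that both stages reuse one factorization of the same matrix.

    % One step of a generic 2-stage Rosenbrock method (3) with equal
    % diagonal entries gamma; a single LU decomposition serves both stages.
    function y1 = rosenbrock2_step(f, J, y0, h, a21, gamma21, b1, b2, gamma)
        n  = numel(y0);
        Jy = J(y0);                          % Jacobian at y0
        W  = eye(n) - h*gamma*Jy;            % same matrix for every stage
        [L, U, P] = lu(W);                   % one LU decomposition per step
        k1 = U \ (L \ (P * (h*f(y0))));
        r2 = h*f(y0 + a21*k1) + h*gamma21*(Jy*k1);
        k2 = U \ (L \ (P * r2));             % reuse the factorization
        y1 = y0 + b1*k1 + b2*k2;
    end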
Let this method exactly integrate the exponential functions $e^{\pm\omega t}$; then we obtain a set of conditions on the coefficients. Letting $z = \omega h$, it follows that (5).
We now try to construct 1- and 2-stage Rosenbrock methods by (5). If $s = 1$, (5) reduces to a system of equations for the coefficients. Solving this system yields a class of 1-stage exponentially fitted Rosenbrock methods.
If $s = 2$, together with (5), we have (6). In order to obtain second-order methods, Hairer and Wanner provided the following order conditions in [18], i.e.,

$$\sum_{i=1}^{s} b_i = 1, \qquad \sum_{i=1}^{s} b_i\,(a_i + \gamma_i) = \frac{1}{2}, \tag{7}$$

where $a_i = \sum_{j=1}^{i-1} a_{ij}$ and $\gamma_i = \sum_{j=1}^{i} \gamma_{ij}$. Combining (6) and (7), and letting the coefficients of the methods satisfy the order conditions as $z \to 0$, we obtain the coefficients of the 2-stage exponentially fitted Rosenbrock methods of order 2 given in (8), where $\gamma$ is a free coefficient (equivalently, in tableau form).
We now consider the following 2-stage exponentially fitted Rosenbrock methods of order 2 and analyze their local truncation error, i.e., (9), where the coefficients are given by (8). Based on Bui's idea in [24], we expand $(I - h\gamma f'(y_0))^{-1}$ in a geometric series, i.e.,

$$(I - h\gamma f'(y_0))^{-1} = \sum_{k=0}^{\infty} \big(h\gamma f'(y_0)\big)^{k}. \tag{10}$$

If we assume that $y_0$ in (9) is the exact solution $y(t_0)$ at $t_0$ and expand the hyperbolic functions in the coefficients in Taylor series, we can obtain a one-step approximation $y_1$ of the solution at $t_0 + h$ by (9) and (10), i.e., (11).
Meanwhile, for the exact solution at $t_0 + h$, we have the Taylor expansion (12). Based on (11) and (12), the local truncation error of the exponentially fitted Rosenbrock methods (9) can be expressed as (13), where all functions in (13) are evaluated at $t_0$ and $y_0 = y(t_0)$.
If we set the principal local truncation error in (13) to zero, we can approximate and renew the frequency $\omega$ in each step, which makes the coefficients variable, by Equation (14), whose quantities are evaluated componentwise; the order of the methods can then be improved by one.
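One practical remark on implementing these variable coefficients: the fitted coefficients contain hyperbolic functions of $\nu = \omega h$ whose closed forms suffer catastrophic cancellation as $\nu \to 0$, and the Taylor expansions used above for the error analysis are also the standard implementation remedy. The MATLAB sketch below illustrates this for a representative weight $\varphi(\nu) = (\cosh\nu - 1)/\nu^2$; this is an illustrative function, not one of the actual coefficients in (8).

    % Illustrative only: evaluate a fitted weight safely near nu = 0,
    % switching to its Taylor series to avoid cancellation.
    function p = phi(nu)
        if abs(nu) > 1e-4
            p = (cosh(nu) - 1) ./ nu.^2;      % closed form away from zero
        else
            p = 1/2 + nu.^2/24 + nu.^4/720;   % Taylor series near zero
        end
    end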
On the other hand, we consider the stability of (9). If the frequency and the step size are held constant, we get a class of constant-coefficient exponentially fitted Rosenbrock methods, i.e., (15). By analogy with the definition of the stability function of Runge–Kutta methods in [25], the stability function of Rosenbrock methods is defined as follows.
Definition 1. Applying the Rosenbrock methods (3) to the linear test equation $y' = \lambda y$, $\lambda \in \mathbb{C}$, yields $y_1 = R(z)\,y_0$ with $z = h\lambda$; then $R(z)$ is called the stability function of the Rosenbrock methods (3).

It is obvious that $R(z)$ is a rational function whose numerator and denominator are polynomials of degree no more than $s$. This means that $R(z)$ can be expressed as

$$R(z) = \frac{P(z)}{Q(z)}, \tag{16}$$

where $P(z)$ and $Q(z)$ are two polynomials with real coefficients, $\deg P \le s$, $\deg Q \le s$, $P(0) = Q(0) = 1$, and $P$ and $Q$ contain no common factors. Li pointed out in [25] that the A-stability of single-step methods is equivalent to the A-acceptability of the rational approximations to the function $e^z$. The following lemma gives the necessary and sufficient condition for the A-acceptability of $R(z)$.
Lemma 1 ([25]). Assume that $R(z) = P(z)/Q(z)$ is given by (16); then $R(z)$ is A-acceptable if and only if the following three conditions hold:
- (i) $E(y) := |Q(\mathrm{i}y)|^2 - |P(\mathrm{i}y)|^2 \ge 0$ for all $y \in \mathbb{R}$;
- (ii) $Q(z) \ne 0$ when $\operatorname{Re} z < 0$;
- (iii) $|R(\infty)| \le 1$,

where $R(\infty) = p_s/q_s$ if $\deg P = \deg Q = s$, and we add the definition that $R(\infty) = 0$ if $\deg P < \deg Q$.
Based on Lemma 1, we can give the following theorem for the A-stability of method (15).

Theorem 1. If $\gamma \ge 1/4$, the 2-stage Rosenbrock method (15) with single coefficient $\gamma$ is A-stable.

Proof. According to (16), the stability function of the method (15) is

$$R(z) = \frac{P(z)}{Q(z)} = \frac{1 + (1 - 2\gamma)z + \big(\gamma^2 - 2\gamma + \tfrac{1}{2}\big)z^2}{(1 - \gamma z)^2}. \tag{17}$$

Let $E(y) = |Q(\mathrm{i}y)|^2 - |P(\mathrm{i}y)|^2$; then we have $E(y) = \big(2\gamma - \tfrac{1}{2}\big)\big(2\gamma^2 - 2\gamma + \tfrac{1}{2}\big)y^4$. Since $2\gamma^2 - 2\gamma + \tfrac{1}{2} = 2\big(\gamma - \tfrac{1}{2}\big)^2 \ge 0$, we have $E(y) \ge 0$ for all $y \in \mathbb{R}$ when $\gamma \ge 1/4$, which means that condition (i) in Lemma 1 holds if $\gamma \ge 1/4$.

According to (17), the only pole of $R(z)$ is $z = 1/\gamma$. For $\gamma > 0$, we have $\operatorname{Re}(1/\gamma) > 0$, which means that condition (ii) in Lemma 1 holds.

Consider condition (iii) in Lemma 1. It is easy to find in (17) that $p_1 = 1 - 2\gamma$, $p_2 = \gamma^2 - 2\gamma + \tfrac{1}{2}$, $q_1 = -2\gamma$, and $q_2 = \gamma^2$. If $\gamma \ge 1/4$, we have $|p_2| \le q_2$, which means that condition (iii) in Lemma 1 holds.

To sum up, if $\gamma \ge 1/4$, $R(z)$ is A-acceptable as the rational approximation to the function $e^z$, which means that the 2-stage Rosenbrock method (15) with single coefficient $\gamma$ is A-stable. □
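Theorem 1 can also be checked numerically. The MATLAB sketch below samples the stability function (17) along the imaginary axis for a value $\gamma \ge 1/4$; by Lemma 1, $|R(\mathrm{i}y)| \le 1$ there is the essential requirement, since the pole lies in the right half-plane.

    % Numerical check of Theorem 1: sample |R(iy)| on the imaginary axis.
    gamma = 0.3;                              % any value with gamma >= 1/4
    R = @(z) (1 + (1 - 2*gamma)*z + (gamma^2 - 2*gamma + 1/2)*z.^2) ...
             ./ (1 - gamma*z).^2;             % stability function (17)
    y = linspace(-1e3, 1e3, 200001);
    max(abs(R(1i*y)))                         % should not exceed 1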
3. Frequency Estimation and Step Size Control
This section combines the exponentially fitted Rosenbrock methods with embedded Rosenbrock methods to construct a class of embedded variable-coefficient exponentially fitted Rosenbrock (3,2) methods, and presents the frequency estimation and step size control strategy.
We consider the embedded Rosenbrock methods for system (2). This kind of method combines two Rosenbrock methods of different orders which share the coefficients of the lower-order stages. The tableau form of the methods is as given below, with the lower- and higher-order approximations denoted accordingly. To estimate the local truncation error of the embedded methods, we give the following lemma from [26].
Lemma 2 ([26]). Whenever a step size $h$ has been chosen, Rosenbrock methods of orders $p$ and $q$, respectively, compute two approximations $y_1$ and $\hat{y}_1$ to the solution, where $p < q$; then the error of $y_1$ is estimated by $\hat{y}_1 - y_1$, i.e., $\mathrm{err} = \hat{y}_1 - y_1$.

Now, we construct a class of embedded Rosenbrock methods by using the coefficients of the 2-stage Rosenbrock methods (8). It can be expressed in the tableau form (18), where $\gamma$ is a free coefficient. Together with the order conditions for order 3 in [18], i.e., (19), and fixing the two remaining free parameters, the coefficients of method (18) are determined by (19), i.e., (20), which can also be written in tableau form.
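As a concrete illustration of how Lemma 2 is used with the embedded pair, a minimal MATLAB sketch follows; efrb32_step is a hypothetical helper returning the order-2 and order-3 approximations of (18) from a single step.

    % efrb32_step is hypothetical: one step of the embedded pair (18)
    % returning the order-2 and order-3 approximations.
    [y2, y3] = efrb32_step(f, J, y0, h, omega);
    est = norm(y3 - y2, inf);    % estimated error of y2 (Lemma 2)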
We now choose one of the methods above to introduce the frequency estimation and step size control strategy. We record the exponentially fitted Rosenbrock (3,2) method (20), for the chosen value of $\gamma$, as EFRB(3,2), i.e., (21). We also record the constant-coefficient Rosenbrock (3,2) method obtained from (21) in the limit $\omega \to 0$ as RB(3,2), i.e., (22).
Suppose that we have the numerical solution and the local truncation error for the second-order component of RB(3,2), together with those for the second-order component of EFRB(3,2); then we have (23) and (24), where all functions in (23) and (24) are evaluated at $t_0$ and $y_0$. Combining (23) and (24) yields an expression for the difference between the two numerical solutions.
For each integration step, we can estimate the required quantity by Equation (25), whose terms are evaluated componentwise. For the first integration step, the frequency is set to a suitable starting value $\omega_0$.
On the other hand, given the numerical solution for the third-order component of RB(3,2), we can estimate the remaining quantity based on Lemma 2 by (26). After obtaining the approximations by (25) and (26), we substitute them into (14) to estimate and renew the frequency $\omega$. In addition, if one of the estimated quantities in (14) is close to zero, the estimate of $\omega$ is zero, which means that the coefficients of the EFRB(3,2) method (21) are equal to the coefficients of the RB(3,2) method (22). If the other estimated quantity is close to zero, the principal local truncation error is not related to $\omega$; in this case, we do not renew the frequency. By estimating the frequency before each step, we obtain a variable-coefficient method whose order is increased by one.
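The renewal logic can be summarized in the following MATLAB sketch. It assumes, for illustration only, that (14) reduces to a componentwise ratio num(i)/den(i) of the quantities estimated by (25) and (26); num, den, and the helper name are placeholders, and the guards mirror the two near-zero cases just described.

    % Schematic per-step frequency renewal; num and den are placeholder
    % componentwise estimates entering (14).
    function omega = renew_frequency(num, den, omega, tolz)
        [dmax, i] = max(abs(den));          % pick the dominant component
        if abs(num(i)) < tolz
            omega = 0;                      % EFRB(3,2) reduces to RB(3,2)
        elseif dmax < tolz
            % principal error unrelated to omega: keep the previous value
        else
            omega = sqrt(abs(num(i)/den(i)));   % renewed frequency
        end
    end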
Now, we try to control the step size by Richardson extrapolation. We first give the following lemma from [26].

Lemma 3 ([26]). Assume that $y_1$ is the numerical solution after one step with step size $h$ of a Rosenbrock method of order $p$ starting from $y_0$, and $w$ is the numerical solution after two steps with step size $h/2$; then the error of $w$ can be expressed as

$$y(t_0 + h) - w = \frac{w - y_1}{2^p - 1} + \mathcal{O}(h^{p+2}).$$

Based on Lemma 3, we give the following step size control strategy. Let err denote the resulting error estimate; we compare err with the tolerance tol given by the user. If err ≤ tol, then we accept the step and advance with the more accurate value; if err > tol, then we reject the step and repeat the whole procedure with a new step size. In both cases, following [18], the new step size is given by

$$h_{\mathrm{new}} = h \cdot \min\Big(\mathrm{facmax},\; \max\big(\mathrm{facmin},\; \mathrm{fac}\cdot(\mathrm{tol}/\mathrm{err})^{1/(p+1)}\big)\Big), \tag{27}$$

where facmax and facmin are the maximum and minimum acceptable factors, respectively, and fac is the safety factor. In this paper, these three factors are taken as fixed constants.
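The complete accept/reject loop then reads as follows in MATLAB; onestep is a hypothetical function advancing the order-p method by a single step, and the factor values shown are illustrative, not the paper's specific constants.

    % Step size control by Richardson extrapolation, following (27).
    function [y, h] = controlled_step(onestep, y0, h, p, tol)
        facmax = 5; facmin = 0.2; fac = 0.9;        % illustrative values
        while true
            y1  = onestep(y0, h);                    % one step of size h
            w   = onestep(onestep(y0, h/2), h/2);    % two steps of size h/2
            err = norm(w - y1, inf) / (2^p - 1);     % Richardson estimate
            hnew = h*min(facmax, max(facmin, ...
                         fac*(tol/max(err, eps))^(1/(p+1))));   % (27)
            if err <= tol
                y = w; h = hnew; return              % accept the step
            end
            h = hnew;                                % reject and retry
        end
    end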
4. Numerical Experiments
In this section, we present three numerical experiments to test the performance of our methods and compare their error and computational efficiency with those of other numerical methods. All the numerical experiments were executed using MATLAB® on a Windows 11 PC with an Intel® Core i5-10210U CPU.
Example 1. Consider the following ODE system in [27], with a known exact solution. Introducing the change of variables stated there, we get a new ODE system, (28). Problem (28) has been solved over the stated interval and tolerance for each component of the solution.

Example 2. Consider the following ODE system in [28], with the stated solution. Problem (29) has been solved over the stated interval and tolerance for each component of the solution.

Example 3. Consider the following PDE system in [29], with a known exact solution. The PDE system (30) can be transformed into an ODE system by spatial discretization with a second-order central finite difference, which results in (31), where the unknowns approximate the solution of (30) at the grid points. Then, problem (31) has been solved over the stated interval and tolerance for each component of the solution.

We first solve Example 1 by the EFRB(3,2) method with constant step size to test the order of our method. We use the following formula to estimate the order,

$$\mathrm{order} \approx \log_2\frac{E(h)}{E(h/2)},$$

where $E(h)$ represents the error when $h$ is the step size. Halving the step size successively, the error and the order of convergence of our method are shown in Table 1. The results in Table 1 imply that the EFRB(3,2) method can achieve third-order convergence.
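The order test amounts to the following MATLAB sketch; solve_fixed is a hypothetical driver integrating Example 1 with the EFRB(3,2) method at a constant step size h and returning the numerical endpoint value, and yexact is the exact endpoint value.

    % Estimate the observed order from errors at successively halved steps.
    hs  = 0.1 ./ 2.^(0:4);                       % h, h/2, h/4, ...
    err = arrayfun(@(h) max(abs(solve_fixed(h) - yexact)), hs);
    p   = log2(err(1:end-1) ./ err(2:end))       % observed orders, approx 3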
We now compare the EFRB(3,2) method with the stiff ODE solvers in MATLAB® such as ode23s, ode23t, and ode23tb, which have the same number of stages as our method. For each stiff ODE solver in MATLAB®, the relative tolerance and the absolute tolerance are set to the prescribed values. The error for each component of the solution was calculated as the maximum of the absolute value of the difference between the numerical and exact solutions, and we use the largest error across all components as the error of the problem.
Figure 1, Figure 2 and Figure 3 show the relationship between the error and the average computing time for each method. Table 2, Table 3 and Table 4 show the number of calculation steps, the error, and the computing time for each method at the two chosen tolerances. From the figures and tables, we conclude that the EFRB(3,2) method achieves better performance than all the stiff ODE solvers for the ODE Examples 1 and 2. For the PDE Example 3, the EFRB(3,2) method achieves performance similar to that of ode23tb and better than that of the other stiff ODE solvers; furthermore, our method performs better than ode23tb in the small-tolerance range. The performance of the EFRB(3,2) method in these three examples verifies the effectiveness and efficiency of our method, making it applicable to stiff ODE systems and PDE systems.
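For reference, the comparison protocol for each built-in solver is essentially the following MATLAB fragment; odefun, tspan, y0, and yexact stand for the data of the example at hand, and the tolerances shown are illustrative.

    % Solve with a built-in stiff solver, record wall time and max error.
    opts = odeset('RelTol', 1e-6, 'AbsTol', 1e-8);  % illustrative tolerances
    tic;
    [t, y] = ode23s(odefun, tspan, y0, opts);       % likewise ode23t, ode23tb
    elapsed = toc;                                  % computing time
    err = max(abs(y - yexact(t)), [], 'all');       % largest error, all components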