Article

Numerical Scheme Based on the Implicit Runge-Kutta Method and Spectral Method for Calculating Nonlinear Hyperbolic Evolution Equations

by Yasuhiro Takei 1,† and Yoritaka Iwata 2,*,†
1 Mizuho Research & Technologies, 2-3 Kanda-Nishiki-Cho, Chiyoda, Tokyo 101-8443, Japan
2 Department of Chemistry and Materials Engineering, Kansai University, 3-3-35 Yamate-Cho, Suita, Osaka 564-8680, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2022, 11(1), 28; https://doi.org/10.3390/axioms11010028
Submission received: 8 November 2021 / Revised: 21 December 2021 / Accepted: 30 December 2021 / Published: 10 January 2022

Abstract: A numerical scheme for nonlinear hyperbolic evolution equations is constructed based on the implicit Runge-Kutta method and the Fourier spectral method. The detailed discretization processes are discussed in the case of one-dimensional Klein-Gordon equations. In conclusion, a numerical scheme with third-order accuracy is presented. The order of the total calculation cost is $O(N \log_2 N)$. As a benchmark, the relation between numerical accuracy and discretization unit size, and that between the stability of the calculation and discretization unit size, are demonstrated for both linear and nonlinear cases.

1. Introduction

The dynamics of nonlinear hyperbolic equations are fascinating, being applicable to wave propagation on any scale, from elementary particles to waves on a cosmic scale. Even the fundamental properties of nonlinear hyperbolic problems have not been fully understood, and it is difficult to elucidate these properties by pure mathematical analysis alone. To understand the fundamental properties of nonlinear waves, we employ numerical calculations. While it is important to specialize numerical schemes individually when treating nonlinear problems (e.g., various types of boundary, discontinuities such as shock propagation), we are going to establish a basic framework for calculating nonlinear hyperbolic evolution equations. In this context, much attention is paid to "universal applicability" and "reliability". In this paper, as benchmarks for linear and nonlinear hyperbolic problems, we start with reproducing solutions within a simple and general framework (i.e., spatially-continuous solutions under periodic boundary conditions).
We consider nonlinear hyperbolic evolution equations. As concrete examples of hyperbolic evolution equations, we take here one-dimensional linear and nonlinear Klein-Gordon equations (for a textbook, see [1]). The initial and boundary value problem of the one-dimensional Klein-Gordon equation is written as
$$\frac{\partial^2 u}{\partial t^2} + \alpha \frac{\partial^2 u}{\partial x^2} + \beta F(u) = 0, \qquad u(x,0) = f(x), \quad u(0,t) = u(L,t), \quad u_t(x,0) = g(x), \quad u_t(0,t) = u_t(L,t),$$
for $(x,t) \in [0,L] \times [0,T]$, where $\alpha$, $\beta$ and $T$ are real numbers, and $f(x)$ and $g(x)$ are initial functions. The periodic boundary condition is imposed. The inhomogeneous term $F(u)$ is either a linear or a nonlinear function of $u$ (e.g., polynomials and trigonometric functions of $u$). For constructing the numerical scheme, the equations are rewritten as a system of first-order evolution equations in time:
$$\frac{\partial u}{\partial t} = v, \quad \frac{\partial v}{\partial t} + \alpha \frac{\partial^2 u}{\partial x^2} + \beta F(u) = 0, \qquad u(x,0) = f(x), \quad u(0,t) = u(L,t), \quad v(x,0) = g(x), \quad v(0,t) = v(L,t).$$
Various numerical schemes have been investigated to reproduce the nonlinear solutions of partial differential equations, such as the nonlinear Klein-Gordon equations mentioned above, accurately and efficiently. For nonlinear hyperbolic equations, highly accurate numerical schemes that reproduce the conservation laws well are required, since the smoothing effect particularly associated with parabolic partial differential equations cannot be expected.
For the spatial discretization, a conventional finite difference method can be used to discretize the spatial variables in calculating Klein-Gordon equations, but it requires a very small spatial unit $\Delta x$ to keep sufficient accuracy. Furthermore, the problem of numerical dispersion is inevitable in typical finite difference methods, in which the numerical solutions are known to have difficulty satisfying the conservation laws with sufficient accuracy. Meanwhile, in the spectral method, solving in the wavenumber space is known to be efficient for avoiding the problem arising from numerical dispersion, and it is rather easy to satisfy the conservation laws to a satisfactory degree. Since the calculation cost of the spectral method is generally higher than that of the finite difference method, some approximation methods have been used to improve the feasibility of the calculations (cf. the pseudo-spectral method [2,3,4,5] and the collocation method [6,7]). Note that the spectral method is a discretization method in which the solution is represented by a linear combination of a finite number of Fourier modes, and therefore the boundary condition is a periodic boundary condition. On the other hand, numerical schemes for other boundary conditions, such as the Dirichlet and Neumann boundary conditions, have also been developed (e.g., the spectral element method, the spectral penalty method, etc. [8,9,10,11]).
For the time discretization, explicit methods have been used in calculating Klein-Gordon equations, but they require additional treatments to obtain sufficiently accurate solutions. While linear and nonlinear solvers based on explicit methods are relatively simple with low computation cost, it is also known that there is a restriction, called the Courant-Friedrichs-Lewy condition (CFL condition, for short), on the time unit $\Delta t$ in order to obtain numerical results stably. Therefore, numerical schemes combining explicit and implicit methods have been studied: e.g., methods for weakening the restriction on $\Delta t$ while keeping the stability of the calculations [12,13,14]. In this context, unconditionally-stable fully implicit methods have been studied recently [15,16,17]. When it comes to a fully implicit method, the problem is generally reduced to a nonlinear self-consistent equation, and it is necessary to apply numerical iteration. Numerical iterative methods generally require extraordinarily high computational costs compared to explicit (non-iterative) methods. The convergence and efficient implementation of numerical iterative methods have also come into the spotlight recently.
In this paper, we discuss how to construct a high-precision scheme for hyperbolic evolution equations. Based on the two-stage, third-order implicit Runge-Kutta method [18] and the spectral method [2,4,19,20,21,22], the order of computation is confirmed to be $O(N \log_2 N)$ in the benchmark calculations. This order estimate simply shows the feasibility and the applicability of the proposed scheme. Consequently, the precision and stability of the numerical scheme are examined for linear and nonlinear cases. As a remark on the limitation of the proposed method, we will not go into a detailed discussion of boundary conditions other than the periodic boundary condition. However, by incorporating spectral element methods, spectral penalty methods, and so on, it is definitely possible to construct numerical schemes for other boundary conditions.

2. Discretization of Space Using Spectral Method

The spectral method [2,4,19,20,21] is employed to discretize the spatial variables. Let the solution of Equation (2) be expanded in a Fourier series:
$$u(x,t) = a_0(t) + \sum_{k=1}^{N} a_k(t)\cos\frac{2\pi}{L}kx + \sum_{k=1}^{N} b_k(t)\sin\frac{2\pi}{L}kx, \qquad v(x,t) = c_0(t) + \sum_{k=1}^{N} c_k(t)\cos\frac{2\pi}{L}kx + \sum_{k=1}^{N} d_k(t)\sin\frac{2\pi}{L}kx.$$
Then substitute them into the first equation of (2). After multiplying by $\cos\frac{2\pi}{L}lx$ and $\sin\frac{2\pi}{L}lx$, respectively, the equations are integrated over $\Omega = [0, L]$ with respect to $x$. It follows that
$$\frac{da_0}{dt} = c_0, \qquad \frac{da_l}{dt} = c_l \quad (l = 1, \dots, N), \qquad \frac{db_l}{dt} = d_l \quad (l = 1, \dots, N).$$
Similarly, substitute Equation (3) into the second equation of (2). After multiplying by $\cos\frac{2\pi}{L}lx$ and $\sin\frac{2\pi}{L}lx$, respectively, the equations are integrated over $\Omega = [0, L]$ with respect to $x$. It follows that
$$L\frac{dc_0}{dt} + \beta\int_0^L F(u)\,dx = 0,$$
$$\frac{L}{2}\frac{dc_l}{dt} - \left(\alpha\frac{2\pi^2}{L}\right)l^2 a_l + \beta\int_0^L F(u)\cos\frac{2\pi}{L}lx\,dx = 0 \quad (l = 1,\dots,N),$$
$$\frac{L}{2}\frac{dd_l}{dt} - \left(\alpha\frac{2\pi^2}{L}\right)l^2 b_l + \beta\int_0^L F(u)\sin\frac{2\pi}{L}lx\,dx = 0 \quad (l = 1,\dots,N).$$
The solution to the original Equation (2) is obtained by solving Equations (4) and (5), in which $a_0$, $c_0$ and $a_l$, $b_l$, $c_l$, $d_l$ ($l = 1, \dots, N$) are found.
In terms of dealing with the nonlinearity, the following integral values appearing in Equation (5) are the bottleneck of the computational cost:
  • $\int_0^L F(u)\,dx$
  • $\int_0^L F(u)\cos\frac{2\pi}{L}lx\,dx$
  • $\int_0^L F(u)\sin\frac{2\pi}{L}lx\,dx$
To deal with these terms, we use the operator transformation method [2,4,19,20]. Its main idea is not to evaluate the nonlinear term in the Fourier-transformed momentum space, but in the original coordinate space by carrying out the inverse Fourier transform. This procedure remarkably reduces the computational cost; indeed, if we introduce the approximation based on the trapezoidal formula, the nonlinear terms are written as
$$\int_0^L F(u)\cos\frac{2\pi}{L}lx\,dx \approx \frac{L}{J}\sum_{j=0}^{J-1}F(u_j)\cos\frac{2\pi}{L}lx_j, \qquad \int_0^L F(u)\sin\frac{2\pi}{L}lx\,dx \approx \frac{L}{J}\sum_{j=0}^{J-1}F(u_j)\sin\frac{2\pi}{L}lx_j, \qquad \int_0^L F(u)\,dx \approx \frac{L}{J}\sum_{j=0}^{J-1}F(u_j),$$
where, under the periodic boundary condition, the points of the equidistant $J$-division of $\Omega$ are denoted by $x_j$ for $j = 0, \dots, J$, and the value of $u$ at time $t$ at each point is denoted by $u_j = u(x_j, t)$. The right-hand sides of the above equations are characterized by the fact that they are expressed in the same form as the discrete Fourier transform. Therefore, when calculating the integrals (6) numerically, $u_j$ is obtained by the discrete inverse Fourier transform from $a_0(t)$, $a_l(t)$, and $b_l(t)$ ($l = 1, \dots, N$), and the right-hand side of Equation (6) is obtained by the discrete Fourier transform from $F(u_j)$ ($j = 0, \dots, J-1$). Note that if $F(u)$ is an $M$-degree polynomial of $u$, the values of the left- and right-hand sides coincide for $J \geq (M+1)N + 1$ [2,4,19,20]. Furthermore, by using the Fast Fourier Transform (FFT) for the above discrete inverse Fourier transform and discrete Fourier transform, the computational cost of (6) becomes $O(N \log_2 N)$. Consequently, the spatially-discretized equation becomes
$$\frac{da_0}{dt} = c_0, \quad \frac{da_l}{dt} = c_l, \quad \frac{db_l}{dt} = d_l, \qquad \frac{dc_0}{dt} = -\frac{\beta}{J}\sum_{j=0}^{J-1}F(u_j),$$
$$\frac{dc_l}{dt} = \alpha\left(\frac{2\pi l}{L}\right)^2 a_l - \frac{2\beta}{J}\sum_{j=0}^{J-1}F(u_j)\cos\frac{2\pi}{L}lx_j, \qquad \frac{dd_l}{dt} = \alpha\left(\frac{2\pi l}{L}\right)^2 b_l - \frac{2\beta}{J}\sum_{j=0}^{J-1}F(u_j)\sin\frac{2\pi}{L}lx_j,$$
based on the spectral method with the operator transformation treatment. For the discretization of spatial variables by the spectral method, the pseudo-spectral method and collocation methods are sometimes applied as effective numerical solution methods [2,3,4,5,6,7]. In particular, the integral values are obtained approximately by weakening the condition that the number of discretization points be $J \geq (M+1)N + 1$. In such situations, there is a risk that the conservation laws may not be well reproduced due to aliasing errors arising from the overlap between different wavenumber components; this error is caused by the superposition of high-wavenumber components. If the high-wavenumber components of the solution are sufficiently small, which is achieved by keeping the cut-off wavenumber $N$ sufficiently large, this is not a significant problem. Therefore, the pseudo-spectral method and collocation methods are sometimes preferred.
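As an illustration of the FFT-based evaluation described in this section, the following is a minimal NumPy sketch (our own, not the authors' code; the function name `nonlinear_fourier_terms` and the use of `numpy.fft` are assumptions about one possible implementation). It computes the trapezoidal sums appearing in Equations (6) and (7) from the Fourier coefficients $a_l$, $b_l$.

```python
import numpy as np

def nonlinear_fourier_terms(a, b, F, J):
    """Evaluate the trapezoidal sums of Equation (6) for all l at once.

    a, b : Fourier coefficients a_0..a_N and b_0..b_N (with b_0 = 0)
    F    : the nonlinearity, e.g. np.sin or (lambda u: u)
    J    : number of equidistant grid points (J >= (M+1)N + 1 for an
           M-degree polynomial nonlinearity)
    Returns (g, h) with g[0] = (1/J) sum_j F(u_j),
    g[l] = (2/J) sum_j F(u_j) cos(2*pi*l*x_j/L) and
    h[l] = (2/J) sum_j F(u_j) sin(2*pi*l*x_j/L) for l >= 1.
    """
    N = len(a) - 1
    # grid values u_j via the inverse FFT of the half-spectrum
    C = np.zeros(J // 2 + 1, dtype=complex)
    C[0] = J * a[0]
    C[1:N + 1] = (J / 2.0) * (a[1:] - 1j * b[1:])
    u = np.fft.irfft(C, n=J)
    # forward FFT of F(u_j): real/imag parts give the cosine/sine sums
    Fk = np.fft.rfft(F(u))
    g = np.empty(N + 1)
    h = np.empty(N + 1)
    g[0], h[0] = Fk[0].real / J, 0.0
    g[1:] = 2.0 * Fk[1:N + 1].real / J
    h[1:] = -2.0 * Fk[1:N + 1].imag / J
    return g, h
```

As a sanity check, for the linear case $F(u) = u$ with $J \geq 2N + 1$ this routine simply returns the input coefficients, $g = a$ and $h = b$ (with $h_0 = 0$), reflecting the discrete orthogonality of the Fourier modes.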

3. Discretization of Time Using Implicit Runge-Kutta Method

3.1. Matrix Form

As a preparation for the discretization of the time variable, Equation (7) is represented in matrix form. The vectors $a$, $b$, $c$, $d$ are defined as follows:
$$a = (a_0, a_1, \dots, a_N)^t, \quad b = (0, b_1, \dots, b_N)^t, \quad c = (c_0, c_1, \dots, c_N)^t, \quad d = (0, d_1, \dots, d_N)^t.$$
Also, let us define $g_0$, $h_0$, $g_l$ and $h_l$ ($l = 1, \dots, N$) by $g_0 = \frac{1}{J}\sum_{j=0}^{J-1}F(u_j)$, $h_0 = 0$, $g_l = \frac{2}{J}\sum_{j=0}^{J-1}F(u_j)\cos\left(\frac{2\pi l}{L}x_j\right)$ and $h_l = \frac{2}{J}\sum_{j=0}^{J-1}F(u_j)\sin\left(\frac{2\pi l}{L}x_j\right)$. We define $g$ and $h$ by
$$g = -\beta\,(g_0, g_1, \dots, g_l, \dots, g_N)^t, \qquad h = -\beta\,(h_0, h_1, \dots, h_l, \dots, h_N)^t.$$
Furthermore, let us denote $\tilde{\alpha}_l = \alpha\left(\frac{2\pi l}{L}\right)^2$ and define the $(N+1)$-th order square matrices $A$, $E$, $E'$ by the following equation:
$$A = \mathrm{diag}(\tilde{\alpha}_0, \tilde{\alpha}_1, \dots, \tilde{\alpha}_N), \qquad E = \mathrm{diag}(1, 1, \dots, 1), \qquad E' = \mathrm{diag}(0, 1, \dots, 1).$$
Here, using Equations (8) to (10), the matrix representation of Equation (7) is as shown below.
$$\frac{d}{dt}\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} 0 & 0 & E & 0 \\ 0 & 0 & 0 & E' \\ A & 0 & 0 & 0 \\ 0 & A & 0 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ g \\ h \end{pmatrix},$$
where $0 = (0, 0, \dots, 0)^t$. Using the notations
$$W = \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}, \qquad F = \begin{pmatrix} 0 \\ 0 \\ g \\ h \end{pmatrix}, \qquad M = \begin{pmatrix} 0 & 0 & E & 0 \\ 0 & 0 & 0 & E' \\ A & 0 & 0 & 0 \\ 0 & A & 0 & 0 \end{pmatrix},$$
Equation (11) is represented by
$$\frac{d}{dt}W = MW + F(W),$$
where $g$, $h$ depend on $\{u_i\}_{i=0}^{J-1}$, which is obtained by the inverse Fourier transform of $W$. Therefore, after the spatial discretization, the inhomogeneous term generally retains the nonlinearity: $F = F(W)$.
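To make the block structure concrete, here is a minimal sketch (our own illustration, reusing `nonlinear_fourier_terms` from the sketch in Section 2; the names `build_M` and `F_of_W` are hypothetical). Dense arrays are used only for readability; in practice only the diagonals of $A$, $E$, $E'$ need to be stored.

```python
import numpy as np

def build_M(alpha, L, N):
    """Block matrix M of Equation (12), with tilde_alpha_l = alpha*(2*pi*l/L)**2
    as in Equation (10)."""
    l = np.arange(N + 1)
    A = np.diag(alpha * (2.0 * np.pi * l / L) ** 2)
    E = np.eye(N + 1)
    Ep = np.diag(np.r_[0.0, np.ones(N)])      # E' = diag(0, 1, ..., 1)
    Z = np.zeros((N + 1, N + 1))
    return np.block([[Z, Z, E, Z],
                     [Z, Z, Z, Ep],
                     [A, Z, Z, Z],
                     [Z, A, Z, Z]])

def F_of_W(W, beta, F, J, N):
    """Inhomogeneous term F(W) = (0, 0, g, h)^t of Equation (12),
    with g = -beta*(g_0,...,g_N)^t and h = -beta*(h_0,...,h_N)^t."""
    n1 = N + 1
    a, b = W[:n1], W[n1:2 * n1]
    g, h = nonlinear_fourier_terms(a, b, F, J)
    return np.concatenate([np.zeros(2 * n1), -beta * g, -beta * h])
```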

3.2. Implicit Runge-Kutta Method

Following the literature [18], we introduce the two-stage, third-order implicit Runge-Kutta method for discretizing the time. Let $u(t)$ be the solution of the initial value problem of the abstract evolution equation
$$\frac{du}{dt} = f(t, u), \quad \alpha < t < \beta, \qquad u(\alpha) = u_0,$$
in a Hilbert space (corresponding to Equation (13)). The time interval $(\alpha, \beta)$ is divided into $M$ equally-spaced segments with increment $\Delta t = (\beta - \alpha)/M$. The discrete time sequence $\{t_m\}$ is represented by
$$t_m = \alpha + m\Delta t \quad (m = 0, 1, \dots, M),$$
and the approximate value of the unknown function $u(t_m)$ is denoted by $U_m$. The method for obtaining the approximate value $U_{m+1}$ from $U_m$ is called the Runge-Kutta method. More precisely, the Runge-Kutta method is represented by
$$U_{m+1} = U_m + \Delta t\sum_{i=1}^{s}b_i l_i, \qquad l_i = f\left(t_m + c_i\Delta t,\ U_m + \Delta t\sum_{j=1}^{s}a_{ij}l_j\right) \quad (i = 1, 2, \dots, s).$$
Here, the natural number $s$ is called the number of stages, and $a_{ij}$, $b_i$, $c_i$ are the parameters that define the formula. The method is called an implicit Runge-Kutta method when $a_{ij} \neq 0$ for some $j \geq i$. The conditions
$$c_i = \sum_{j=1}^{s}a_{ij} \quad (i = 1, 2, \dots, s)$$
are imposed on the parameters $a_{ij}$, $c_i$. The table of parameters $a_{ij}$, $b_i$, $c_i$
$$\begin{array}{c|ccc} c_1 & a_{11} & \cdots & a_{1s} \\ \vdots & \vdots & & \vdots \\ c_s & a_{s1} & \cdots & a_{ss} \\ \hline & b_1 & \cdots & b_s \end{array}$$
is known as the Butcher tableau [18]. Under the assumption that $U_m = u(t_m)$, the local discretization error is defined by
$$T_{m+1} = \frac{1}{\Delta t}\left(u(t_{m+1}) - u(t_m) - \Delta t\sum_{i=1}^{s}b_i l_i\right).$$
Note that when the local discretization error is evaluated as $T_{m+1} = O((\Delta t)^p)$, the Runge-Kutta formula is said to be of order $p$. Although the number of stages and the order can be chosen arbitrarily, in this paper we adopt the two-stage, third-order Runge-Kutta method, taking into account the balance between accuracy and calculation cost. The Butcher tableau for the two-stage, third-order implicit Runge-Kutta formula is given by
$$\begin{array}{c|cc} 1/3 & 5/12 & -1/12 \\ 1 & 3/4 & 1/4 \\ \hline & 3/4 & 1/4 \end{array}$$
In general, explicit schemes have a restriction on the time increment $\Delta t$, called the CFL condition. Therefore, it is important for numerical schemes to be A-stable. The implicit Runge-Kutta method is known to be A-stable; in this sense, it is preferably employed as a stable and high-order scheme [22,23,24]. However, the implicit Runge-Kutta method is not widely used. This is firstly due to the calculation cost; indeed, the application of implicit schemes to nonlinear partial differential equations requires numerical iteration for each single time step. Another issue is that the convergence of the iterative method is not guaranteed when the degree of nonlinearity and the time increment of the target problem are large. The numerical scheme proposed in this paper achieves a good balance between convergence (stability) and simplicity by using simple devices without applying complicated methods (cf. Section 3.4).
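For reference, the tableau above (which coincides with the two-stage Radau IIA method) can be applied to an abstract system $du/dt = f(t,u)$ by plain fixed-point iteration, i.e., without the acceleration of Section 3.4. The following Python sketch is our own illustration; the names `irk_step`, `A_rk`, `b_rk`, `c_rk` are hypothetical.

```python
import numpy as np

# Butcher tableau of the two-stage, third-order implicit Runge-Kutta method
A_rk = np.array([[5/12, -1/12],
                 [3/4,   1/4]])
b_rk = np.array([3/4, 1/4])
c_rk = np.array([1/3, 1.0])

def irk_step(f, t, u, dt, tol=1e-12, max_iter=100):
    """One step U_{m+1} = U_m + dt*sum_i b_i*l_i, solving the stage
    equations for l_i by fixed-point iteration (a sketch)."""
    s = len(b_rk)
    l = np.tile(f(t, u), (s, 1))              # initial guess l_i = f(t_m, U_m)
    for _ in range(max_iter):
        l_new = np.array([f(t + c_rk[i] * dt, u + dt * (A_rk[i] @ l))
                          for i in range(s)])
        if np.max(np.abs(l_new - l)) < tol:
            l = l_new
            break
        l = l_new
    return u + dt * (b_rk @ l)
```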

3.3. Application of Implicit Runge-Kutta Method and Iteration Formula

Applying the two-stage, third-order implicit Runge-Kutta method to discretize the time variable in (13), the resulting equations are
$$W^{n+1} = W^n + \frac{3}{4}\Delta t\,k_1 + \frac{1}{4}\Delta t\,k_2,$$
$$k_1 = M\left(W^n + \frac{5}{12}\Delta t\,k_1 - \frac{1}{12}\Delta t\,k_2\right) + F\left(W^n + \frac{5}{12}\Delta t\,k_1 - \frac{1}{12}\Delta t\,k_2\right),$$
$$k_2 = M\left(W^n + \frac{3}{4}\Delta t\,k_1 + \frac{1}{4}\Delta t\,k_2\right) + F\left(W^n + \frac{3}{4}\Delta t\,k_1 + \frac{1}{4}\Delta t\,k_2\right),$$
where, using the $(N+1)$-dimensional vectors $k_i^a$, $k_i^b$, $k_i^c$, $k_i^d$ ($i = 1, 2$), the vectors $k_1$ and $k_2$ are represented by
$$k_1 = \left((k_1^a)^t, (k_1^b)^t, (k_1^c)^t, (k_1^d)^t\right)^t, \qquad k_2 = \left((k_2^a)^t, (k_2^b)^t, (k_2^c)^t, (k_2^d)^t\right)^t.$$
The time evolution of the solution can be found by calculating $k_1$ and $k_2$ and substituting them into the first equation of (14). The second equation of (14) can be decomposed into four parts:
$$k_1^a = c + \frac{5}{12}\Delta t\,k_1^c - \frac{1}{12}\Delta t\,k_2^c, \qquad k_1^b = E'd + \frac{5}{12}\Delta t\,E'k_1^d - \frac{1}{12}\Delta t\,E'k_2^d,$$
$$k_1^c = Aa + \frac{5}{12}\Delta t\,Ak_1^a - \frac{1}{12}\Delta t\,Ak_2^a + g\left(W^n + \frac{5}{12}\Delta t\,k_1 - \frac{1}{12}\Delta t\,k_2\right), \qquad k_1^d = Ab + \frac{5}{12}\Delta t\,Ak_1^b - \frac{1}{12}\Delta t\,Ak_2^b + h\left(W^n + \frac{5}{12}\Delta t\,k_1 - \frac{1}{12}\Delta t\,k_2\right).$$
Similarly, the third equation of (14) is decomposed into four parts.
$$k_2^a = c + \frac{3}{4}\Delta t\,k_1^c + \frac{1}{4}\Delta t\,k_2^c, \qquad k_2^b = E'd + \frac{3}{4}\Delta t\,E'k_1^d + \frac{1}{4}\Delta t\,E'k_2^d,$$
$$k_2^c = Aa + \frac{3}{4}\Delta t\,Ak_1^a + \frac{1}{4}\Delta t\,Ak_2^a + g\left(W^n + \frac{3}{4}\Delta t\,k_1 + \frac{1}{4}\Delta t\,k_2\right), \qquad k_2^d = Ab + \frac{3}{4}\Delta t\,Ak_1^b + \frac{1}{4}\Delta t\,Ak_2^b + h\left(W^n + \frac{3}{4}\Delta t\,k_1 + \frac{1}{4}\Delta t\,k_2\right).$$
$k_1$, $k_2$ satisfying Equations (15) and (16) are found by means of an iterative method. Let the value at the $\nu$-th iteration be represented by
$$k_1^\nu = \left((k_1^{a,\nu})^t, (k_1^{b,\nu})^t, (k_1^{c,\nu})^t, (k_1^{d,\nu})^t\right)^t, \qquad k_2^\nu = \left((k_2^{a,\nu})^t, (k_2^{b,\nu})^t, (k_2^{c,\nu})^t, (k_2^{d,\nu})^t\right)^t.$$
Then the formula for finding the $(\nu+1)$-th value from the $\nu$-th one, corresponding to the transforms (15) and (16), is summarized as
$$k_1^{a,\nu+1} = c + \frac{5}{12}\Delta t\,k_1^{c,\nu} - \frac{1}{12}\Delta t\,k_2^{c,\nu}, \qquad k_2^{a,\nu+1} = c + \frac{3}{4}\Delta t\,k_1^{c,\nu} + \frac{1}{4}\Delta t\,k_2^{c,\nu},$$
$$k_1^{b,\nu+1} = E'd + \frac{5}{12}\Delta t\,E'k_1^{d,\nu} - \frac{1}{12}\Delta t\,E'k_2^{d,\nu}, \qquad k_2^{b,\nu+1} = E'd + \frac{3}{4}\Delta t\,E'k_1^{d,\nu} + \frac{1}{4}\Delta t\,E'k_2^{d,\nu},$$
$$k_1^{c,\nu+1} = Aa + \frac{5}{12}\Delta t\,Ak_1^{a,\nu} - \frac{1}{12}\Delta t\,Ak_2^{a,\nu} + g\left(W^n + \frac{5}{12}\Delta t\,k_1^\nu - \frac{1}{12}\Delta t\,k_2^\nu\right), \qquad k_2^{c,\nu+1} = Aa + \frac{3}{4}\Delta t\,Ak_1^{a,\nu} + \frac{1}{4}\Delta t\,Ak_2^{a,\nu} + g\left(W^n + \frac{3}{4}\Delta t\,k_1^\nu + \frac{1}{4}\Delta t\,k_2^\nu\right),$$
$$k_1^{d,\nu+1} = Ab + \frac{5}{12}\Delta t\,Ak_1^{b,\nu} - \frac{1}{12}\Delta t\,Ak_2^{b,\nu} + h\left(W^n + \frac{5}{12}\Delta t\,k_1^\nu - \frac{1}{12}\Delta t\,k_2^\nu\right), \qquad k_2^{d,\nu+1} = Ab + \frac{3}{4}\Delta t\,Ak_1^{b,\nu} + \frac{1}{4}\Delta t\,Ak_2^{b,\nu} + h\left(W^n + \frac{3}{4}\Delta t\,k_1^\nu + \frac{1}{4}\Delta t\,k_2^\nu\right).$$
This is a fully discretized equation for both time and space.

3.4. Implementation of Iterative Method

3.4.1. Implementation of Half Step

To solve Equation (17), take $k_1^1 = W^n$, $k_2^1 = W^n$ as the initial values. The iteration is carried out until both $k_1^\nu$ and $k_2^\nu$ converge. In order to make the iteration process as short as possible (i.e., so that $k_1^\nu$ and $k_2^\nu$ converge within a smaller number of iterations), we introduce $k_1^{\nu+1/2}$ and $k_2^{\nu+1/2}$:
$$k_1^{\nu+1/2} = \left((k_1^{a,\nu+1})^t, (k_1^{b,\nu+1})^t, (k_1^{c,\nu})^t, (k_1^{d,\nu})^t\right)^t, \qquad k_2^{\nu+1/2} = \left((k_2^{a,\nu+1})^t, (k_2^{b,\nu+1})^t, (k_2^{c,\nu})^t, (k_2^{d,\nu})^t\right)^t.$$
Accordingly Equation (17) is modified to
$$k_1^{a,\nu+1} = c + \frac{5}{12}\Delta t\,k_1^{c,\nu} - \frac{1}{12}\Delta t\,k_2^{c,\nu}, \qquad k_2^{a,\nu+1} = c + \frac{3}{4}\Delta t\,k_1^{c,\nu} + \frac{1}{4}\Delta t\,k_2^{c,\nu},$$
$$k_1^{b,\nu+1} = E'd + \frac{5}{12}\Delta t\,E'k_1^{d,\nu} - \frac{1}{12}\Delta t\,E'k_2^{d,\nu}, \qquad k_2^{b,\nu+1} = E'd + \frac{3}{4}\Delta t\,E'k_1^{d,\nu} + \frac{1}{4}\Delta t\,E'k_2^{d,\nu},$$
$$k_1^{c,\nu+1} = Aa + \frac{5}{12}\Delta t\,Ak_1^{a,\nu+1} - \frac{1}{12}\Delta t\,Ak_2^{a,\nu+1} + g\left(W^n + \frac{5}{12}\Delta t\,k_1^{\nu+1/2} - \frac{1}{12}\Delta t\,k_2^{\nu+1/2}\right), \qquad k_2^{c,\nu+1} = Aa + \frac{3}{4}\Delta t\,Ak_1^{a,\nu+1} + \frac{1}{4}\Delta t\,Ak_2^{a,\nu+1} + g\left(W^n + \frac{3}{4}\Delta t\,k_1^{\nu+1/2} + \frac{1}{4}\Delta t\,k_2^{\nu+1/2}\right),$$
$$k_1^{d,\nu+1} = Ab + \frac{5}{12}\Delta t\,Ak_1^{b,\nu+1} - \frac{1}{12}\Delta t\,Ak_2^{b,\nu+1} + h\left(W^n + \frac{5}{12}\Delta t\,k_1^{\nu+1/2} - \frac{1}{12}\Delta t\,k_2^{\nu+1/2}\right), \qquad k_2^{d,\nu+1} = Ab + \frac{3}{4}\Delta t\,Ak_1^{b,\nu+1} + \frac{1}{4}\Delta t\,Ak_2^{b,\nu+1} + h\left(W^n + \frac{3}{4}\Delta t\,k_1^{\nu+1/2} + \frac{1}{4}\Delta t\,k_2^{\nu+1/2}\right).$$
In the following, we describe the procedure for calculating Equation (18) in detail. The calculation consists of two stages.

3.4.2. The First Stage

Let the $l$-th element of the vector $k_x^{X,\nu}$ ($X = a, b, c, d$; $x = 1, 2$) be denoted by $k_{x,l}^{X,\nu}$ ($l = 0, 1, \dots, N$), and let the $l$-th elements of the vectors $a$, $b$, $c$, $d$ be denoted by $a_l$, $b_l$, $c_l$, $d_l$ ($l = 0, 1, \dots, N$), respectively. Calculations are performed for each component:
$$k_{1,l}^{a,\nu+1} = c_l + \frac{5}{12}\Delta t\,k_{1,l}^{c,\nu} - \frac{1}{12}\Delta t\,k_{2,l}^{c,\nu}, \qquad k_{1,l}^{b,\nu+1} = d_l + \frac{5}{12}\Delta t\,k_{1,l}^{d,\nu} - \frac{1}{12}\Delta t\,k_{2,l}^{d,\nu}.$$
$$k_{2,l}^{a,\nu+1} = c_l + \frac{3}{4}\Delta t\,k_{1,l}^{c,\nu} + \frac{1}{4}\Delta t\,k_{2,l}^{c,\nu}, \qquad k_{2,l}^{b,\nu+1} = d_l + \frac{3}{4}\Delta t\,k_{1,l}^{d,\nu} + \frac{1}{4}\Delta t\,k_{2,l}^{d,\nu}.$$
Using the results, we obtain $k_1^{\nu+1/2}$, $k_2^{\nu+1/2}$.

3.4.3. The Second Stage

Let the $l$-th elements of the vectors $g$, $h$ be represented by $\tilde{g}_l$, $\tilde{h}_l$ ($l = 0, 1, \dots, N$), and calculate each component by the inverse Fourier transform using $k_1^{\nu+1/2}$, $k_2^{\nu+1/2}$, and $W^n$. Then, calculate the following equations using $k_1^{\nu+1/2}$, $k_2^{\nu+1/2}$, $W^n$, $\tilde{g}_l$, $\tilde{h}_l$ ($l = 0, 1, \dots, N$):
$$k_{1,l}^{c,\nu+1} = \tilde{\alpha}_l a_l + \frac{5}{12}\Delta t\,\tilde{\alpha}_l k_{1,l}^{a,\nu+1} - \frac{1}{12}\Delta t\,\tilde{\alpha}_l k_{2,l}^{a,\nu+1} + \tilde{g}_l\left(W^n + \frac{5}{12}\Delta t\,k_1^{\nu+1/2} - \frac{1}{12}\Delta t\,k_2^{\nu+1/2}\right),$$
$$k_{1,l}^{d,\nu+1} = \tilde{\alpha}_l b_l + \frac{5}{12}\Delta t\,\tilde{\alpha}_l k_{1,l}^{b,\nu+1} - \frac{1}{12}\Delta t\,\tilde{\alpha}_l k_{2,l}^{b,\nu+1} + \tilde{h}_l\left(W^n + \frac{5}{12}\Delta t\,k_1^{\nu+1/2} - \frac{1}{12}\Delta t\,k_2^{\nu+1/2}\right).$$
$$k_{2,l}^{c,\nu+1} = \tilde{\alpha}_l a_l + \frac{3}{4}\Delta t\,\tilde{\alpha}_l k_{1,l}^{a,\nu+1} + \frac{1}{4}\Delta t\,\tilde{\alpha}_l k_{2,l}^{a,\nu+1} + \tilde{g}_l\left(W^n + \frac{3}{4}\Delta t\,k_1^{\nu+1/2} + \frac{1}{4}\Delta t\,k_2^{\nu+1/2}\right),$$
$$k_{2,l}^{d,\nu+1} = \tilde{\alpha}_l b_l + \frac{3}{4}\Delta t\,\tilde{\alpha}_l k_{1,l}^{b,\nu+1} + \frac{1}{4}\Delta t\,\tilde{\alpha}_l k_{2,l}^{b,\nu+1} + \tilde{h}_l\left(W^n + \frac{3}{4}\Delta t\,k_1^{\nu+1/2} + \frac{1}{4}\Delta t\,k_2^{\nu+1/2}\right).$$
Using these results and $k_1^{\nu+1/2}$, $k_2^{\nu+1/2}$, we obtain $k_1^{\nu+1}$, $k_2^{\nu+1}$.

3.4.4. Decision of Convergence

Let $\epsilon$ be a sufficiently small positive value (e.g., $10^{-12}$). If the smaller of the relative and absolute errors between the $\nu$-th and $(\nu-1)$-th iterations is smaller than $\epsilon$, then the $\nu$-th iteration is regarded as converged. If $k_1^{\nu+1}$ and $k_2^{\nu+1}$ have not converged, return to the first stage of the calculation. If $k_1^{\nu+1}$, $k_2^{\nu+1}$ have converged, then $W^{n+1}$ is obtained using the formula
$$W^{n+1} = W^n + \frac{3}{4}\Delta t\,k_1^{\nu+1} + \frac{1}{4}\Delta t\,k_2^{\nu+1}.$$
The calculation cost of this numerical scheme, which applies the implicit Runge-Kutta method and the spectral method, is estimated to be $O(N \log_2 N)$, and the local discretization error in the time variable is of third order.
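Putting Sections 3.3 and 3.4 together, a single time step can be sketched as follows. This is our own minimal NumPy illustration (the name `modified_irk_step` is hypothetical), reusing `nonlinear_fourier_terms` from the sketch in Section 2 and assuming $b_0 = d_0 = 0$ as in Equation (8); for brevity the convergence test uses the absolute difference only (rather than the smaller of the relative and absolute errors), and the multiplication by $E'$ is omitted since the $l = 0$ sine components remain zero.

```python
import numpy as np

def modified_irk_step(W, dt, alpha, beta, L, F, J, N, eps=1e-12, max_iter=200):
    """One step W^n -> W^{n+1}: iterate Equation (18) until convergence
    (Section 3.4.4), then apply the update (23). W = (a, b, c, d) stacked."""
    n1 = N + 1
    a, b, c, d = (W[i * n1:(i + 1) * n1] for i in range(4))
    ta = alpha * (2.0 * np.pi * np.arange(n1) / L) ** 2    # tilde_alpha_l

    def gh(Wv):   # the vectors g, h of Equation (9)
        g, h = nonlinear_fourier_terms(Wv[:n1], Wv[n1:2 * n1], F, J)
        return -beta * g, -beta * h

    k1, k2 = W.copy(), W.copy()         # initial guesses k_1^1 = k_2^1 = W^n
    for _ in range(max_iter):
        k1c, k1d = k1[2 * n1:3 * n1], k1[3 * n1:]
        k2c, k2d = k2[2 * n1:3 * n1], k2[3 * n1:]
        # first stage: Equations (19)-(20)
        k1a = c + dt * (5 / 12 * k1c - 1 / 12 * k2c)
        k2a = c + dt * (3 / 4 * k1c + 1 / 4 * k2c)
        k1b = d + dt * (5 / 12 * k1d - 1 / 12 * k2d)
        k2b = d + dt * (3 / 4 * k1d + 1 / 4 * k2d)
        k1_half = np.concatenate([k1a, k1b, k1c, k1d])     # half step of (18)
        k2_half = np.concatenate([k2a, k2b, k2c, k2d])
        # second stage: Equations (21)-(22)
        g1, h1 = gh(W + dt * (5 / 12 * k1_half - 1 / 12 * k2_half))
        g2, h2 = gh(W + dt * (3 / 4 * k1_half + 1 / 4 * k2_half))
        k1c_n = ta * (a + dt * (5 / 12 * k1a - 1 / 12 * k2a)) + g1
        k2c_n = ta * (a + dt * (3 / 4 * k1a + 1 / 4 * k2a)) + g2
        k1d_n = ta * (b + dt * (5 / 12 * k1b - 1 / 12 * k2b)) + h1
        k2d_n = ta * (b + dt * (3 / 4 * k1b + 1 / 4 * k2b)) + h2
        k1_new = np.concatenate([k1a, k1b, k1c_n, k1d_n])
        k2_new = np.concatenate([k2a, k2b, k2c_n, k2d_n])
        converged = max(np.max(np.abs(k1_new - k1)),
                        np.max(np.abs(k2_new - k2))) < eps
        k1, k2 = k1_new, k2_new
        if converged:
            break
    return W + dt * (3 / 4 * k1 + 1 / 4 * k2)              # Equation (23)
```

Repeated calls to this routine advance the solution in time; under our reading of Section 4, the linear benchmark would correspond to `F = (lambda u: u)`, `alpha = -1`, `beta = 1`.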

4. Benchmark Calculations

4.1. Linear Case

4.1.1. Comparison to Exact Solution

Let us take the initial and boundary value problem (2) with $F(u) = u$, $\alpha = -1$, $\beta = 1$, $\Omega = [0, L]$:
$$\frac{\partial v}{\partial t} - \frac{\partial^2 u}{\partial x^2} + u = 0, \quad \frac{\partial u}{\partial t} = v, \qquad u(x,0) = 0, \quad u(0,t) = u(L,t), \quad v(x,0) = \cos\left(\frac{2\pi}{L}x\right), \quad v(0,t) = v(L,t).$$
The exact solution to this problem is given by
$$u(x,t) = \frac{1}{\sqrt{1 + \left(\frac{2\pi}{L}\right)^2}}\sin\left(\sqrt{1 + \left(\frac{2\pi}{L}\right)^2}\,t\right)\cos\frac{2\pi}{L}x, \qquad v(x,t) = \cos\left(\sqrt{1 + \left(\frac{2\pi}{L}\right)^2}\,t\right)\cos\frac{2\pi}{L}x.$$
For Equation (24), the time evolutions of the numerical solutions $u$, $v$ ($J = 2N+1$, $L = 8$) up to $t = 1$ are shown in Figure 1, and those at $t = 1$ are shown in Figure 2. Since the exact solution is given by Equation (25), it is possible to evaluate the accuracy depending on the spatio-temporal discretization. Furthermore, in terms of clarifying the merit of the proposed scheme, the errors of the numerical results obtained by the $\theta$ method with $\theta = 1/2$ (for a preceding calculation using the $\theta$ method in time, see [21]) are also calculated.
The comparison between the numerical and exact solutions is shown in Figure 3 and Figure 4. In the present paper, the error is defined as the smaller of the relative and absolute errors. In Figure 3, numerical solutions with different time increments $\Delta t$ are examined. The maximum error over the grid points $x_j$ ($j = 0, \dots, J$) between the numerical and exact solutions is calculated. An exponential dependence of the error on the time discretization is observed. The figure also shows the high accuracy of the implicit Runge-Kutta method; indeed, at $\Delta t < 10^{-4}$, results almost $10^4$ times more accurate than those of the $\theta$-method with $\theta = 1/2$ (the Crank-Nicolson method) are obtained. In Figure 4, numerical solutions with different spatial spacing parameters $N$ are examined. The maximum error over the grid points $x_j$ ($j = 0, \dots, J$) between the numerical and exact solutions is calculated. We see that, on the logarithmic scale, there is no significant $N$ dependence in any of the calculations.
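The error measure used in Figures 3 and 4 can be made explicit with a short helper (our own sketch; `exact_linear` and `benchmark_error` are hypothetical names): the exact solution (25) is sampled on the grid, and at each $x_j$ the smaller of the absolute and relative errors is taken before maximizing over $j$.

```python
import numpy as np

def exact_linear(x, t, L=8.0):
    """Exact solution (25) of the linear benchmark problem (24)."""
    w = np.sqrt(1.0 + (2.0 * np.pi / L) ** 2)
    u = np.sin(w * t) * np.cos(2.0 * np.pi * x / L) / w
    v = np.cos(w * t) * np.cos(2.0 * np.pi * x / L)
    return u, v

def benchmark_error(numerical, exact):
    """Maximum over the grid of the smaller of the absolute and relative errors."""
    abs_err = np.abs(numerical - exact)
    rel_err = abs_err / np.maximum(np.abs(exact), np.finfo(float).tiny)
    return np.max(np.minimum(abs_err, rel_err))
```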

4.1.2. Accuracy Depending on Discretization of Time Variables

Figure 3 shows the $\Delta t$-dependence of the errors. As for the two-stage, third-order implicit Runge-Kutta method (+ and ∘ in the figure), we see that the error becomes 1/8 times smaller if $\Delta t$ is halved. In other words, the numerical results confirm that the scheme achieves third-order accuracy with respect to time. On the other hand, as for the $\theta$ method (× and □ in the figure), the error becomes 1/4 times smaller if $\Delta t$ is halved.
For each of the above two numerical schemes, comparing the errors in the cases $N = 2^5$ and $N = 2^{10}$, the values of the errors are almost unchanged if the size of $\Delta t$ is the same (note that in the case $N = 2^{10}$, the numerical calculation does not converge and no numerical solution is obtained for large $\Delta t$). That is, if $N$ is taken to be sufficiently large, the error depends only on the size of $\Delta t$ and not on the size of $N$. If we limit ourselves to the linear case of our benchmark calculations, this advantage arises essentially from the introduction of the Fourier spectral method for the spatial direction.
In any case, the error depends only on $\Delta t$ for sufficiently large $N$, and a smaller $\Delta t$ results in a smaller error. Comparing the two schemes, it is concluded in the linear benchmark case that the implicit Runge-Kutta method can yield errors roughly $10^4$ times smaller than the $\theta$ method.
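The third-order behavior read off from Figure 3 (the error reduced by 1/8 when $\Delta t$ is halved) corresponds to fitting $\mathrm{err} \approx C(\Delta t)^p$; an estimate of the observed order $p$ from a sequence of runs might look as follows (our own sketch, with the hypothetical name `observed_order`).

```python
import numpy as np

def observed_order(errors, dts):
    """Observed temporal order p between successive runs, assuming err ~ C*dt^p.
    errors, dts: sequences of errors and the corresponding time increments."""
    errors, dts = np.asarray(errors, float), np.asarray(dts, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(dts[:-1] / dts[1:])
```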

4.1.3. Accuracy Depending on Discretization of Spatial Variables

Figure 4 shows that the error does not depend on $N = 2^2, 2^3, \dots, 2^{15}$ (refer to + and ∘ in Figure 4 for the errors by the implicit Runge-Kutta method, and to × and □ for those by the $\theta$ method). Regardless of the choice of $N$, it is confirmed in Figure 4 that the errors with $\Delta t = 2^{-13}$ are smaller than those with $\Delta t = 2^{-6}$ (note that in the case $\Delta t = 2^{-6}$, the numerical calculation does not converge and no numerical solution is obtained for large $N$). This means that the error depends only on $\Delta t$ and not on $N$.
In conclusion, most of the error arises from the discretization of time. Therefore, if we limit ourselves to the benchmark (the linear case), by taking, for example, $\Delta t \leq 10^{-4}$ and $N \geq 2^5$, we can obtain high-precision numerical solutions.

4.1.4. Convergence/Stability of Iteration

The numerical scheme proposed in this paper is A-stable, because it employs the implicit Runge-Kutta method. However, the problem to be solved by the implicit scheme results in numerical iterations. Since the convergence of the iterative method is not guaranteed in general, the convergence of the iterative method and the speed of convergence, which indicate a kind of stability in the case of applying implicit Runge-Kutta methods, must be discussed. Here, for improving convergence and accelerating the iterative process, the intermediate step shown in Equation (18) is introduced. This iterative method is referred to as the modified iterative method in this paper. In order to examine the efficiency of the modified iterative method, we compare the average numbers of iterations in calculating the time evolution of Equation (24) up to $t = 1$ with several $N$ and $\Delta t$. In addition, the results of applying the iterative method with Equation (17) (the normal iterative method) are also shown for comparison.
First, Table 1 shows the average number of iterations when $N = 2^{10}$ is fixed and $\Delta t = 2^{-2}, 2^{-3}, \dots, 2^{-10}$. All the calculations by the modified iterative method converge in 4 iterations when $\Delta t = 2^{-8}, 2^{-9}, 2^{-10}$. On the contrary, the corresponding results by the normal iterative method take 6 or 5 iterations. When $\Delta t = 2^{-2}, 2^{-3}, \dots, 2^{-7}$, neither the modified nor the normal iterative method converges. Next, Table 2 shows the average number of iterations when $N = 2^5$ is fixed and $\Delta t = 2^{-2}, 2^{-3}, \dots, 2^{-10}$. For $\Delta t = 2^{-3}$, the modified iterative method converges in 7 iterations, and a smaller $\Delta t$ leads to fewer iterations until convergence. The same trend is seen for the normal iterative method. However, when $\Delta t = 2^{-2}$, neither the modified nor the normal iterative method converges. Accordingly, under the same convergence condition
$$\frac{\Delta t}{N^{-1}} \leq 2^2,$$
it is confirmed that the modified iterative method reduces the number of iterations by 20–30%, and the calculation cost is reduced significantly. Here, note that the condition (26) plays the role of the CFL condition in the implicit Runge-Kutta case.

4.2. Nonlinear Case

4.2.1. Comparison to Exact Solution

Let us take the initial and boundary value problem (2) with $F(u) = \sin u$, $\alpha = -1$, $\beta = 1$, $\Omega = [0, L]$. The nonlinear wave equation in this case is known as the Sine-Gordon equation.
$$\frac{\partial v}{\partial t} - \frac{\partial^2 u}{\partial x^2} + \sin u = 0, \quad \frac{\partial u}{\partial t} = v, \qquad v(x,0) = \frac{-\sqrt{2}\,\mathrm{cn}\left(x, \frac{1}{2}\right)\mathrm{dn}\left(x, \frac{1}{2}\right)}{\sqrt{1 - \frac{1}{4}\mathrm{sn}^2\left(x, \frac{1}{2}\right)}}, \quad v(0,t) = v(L,t), \qquad u(x,0) = 2\sin^{-1}\left(\frac{1}{2}\,\mathrm{sn}\left(x, \frac{1}{2}\right)\right), \quad u(0,t) = u(L,t).$$
The exact solution to this problem is given by
$$u(x,t) = 2\sin^{-1}\left(\frac{1}{2}\,\mathrm{sn}\left(x - \sqrt{2}t, \frac{1}{2}\right)\right), \qquad v(x,t) = \frac{-\sqrt{2}\,\mathrm{cn}\left(x - \sqrt{2}t, \frac{1}{2}\right)\mathrm{dn}\left(x - \sqrt{2}t, \frac{1}{2}\right)}{\sqrt{1 - \frac{1}{4}\mathrm{sn}^2\left(x - \sqrt{2}t, \frac{1}{2}\right)}},$$
where $\mathrm{sn}$, $\mathrm{cn}$, $\mathrm{dn}$ are Jacobi's elliptic functions, and $L$ can be expressed using the complete elliptic integral of the first kind (see Appendix A and the textbook [25]):
$$L = 4F\left(\frac{\pi}{2}, \frac{1}{2}\right) = 4\int_0^{\pi/2}\frac{d\theta}{\sqrt{1 - \left(\frac{1}{2}\right)^2\sin^2\theta}} = 6.743001419250385098\ldots$$
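The domain length (29) and the exact solution (28) are easy to evaluate with standard special-function libraries. The following sketch (our own; the names `L_period` and `exact_sine_gordon` are hypothetical) uses SciPy's Jacobi elliptic functions, noting that SciPy parameterizes them by $m = k^2$, here $m = 1/4$ for the modulus $k = 1/2$.

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.25                          # SciPy parameter m = k^2 with modulus k = 1/2
L_period = 4.0 * ellipk(m)        # = 6.743001419250385..., Equation (29)

def exact_sine_gordon(x, t):
    """Exact traveling-wave solution (28) of the Sine-Gordon benchmark (27)."""
    sn, cn, dn, _ = ellipj(x - np.sqrt(2.0) * t, m)
    u = 2.0 * np.arcsin(0.5 * sn)
    v = -np.sqrt(2.0) * cn * dn / np.sqrt(1.0 - 0.25 * sn ** 2)
    return u, v
```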
For Equation (27), the time evolutions of the numerical solutions $u$, $v$ ($J = 2N+1$, $L = 6.7430014192503$) up to $t = 1$ are shown in Figure 5, and those at $t = 1$ are shown in Figure 6. Since the exact solution is given by Equation (28), it is possible to evaluate the accuracy depending on the spatio-temporal discretization. Furthermore, similar to the linear case, the errors of the numerical results obtained by the $\theta$ method with $\theta = 1/2$ are also calculated.
The comparison between the numerical and exact solutions is shown in Figure 7 and Figure 8. In Figure 7, numerical solutions with different time increments $\Delta t$ are examined. The maximum error over the grid points $x_j$ ($j = 0, \dots, J$) between the numerical and exact solutions is calculated. The exponential dependence of the error on the time discretization is also observed in the nonlinear case. The figure also shows the high accuracy of the implicit Runge-Kutta method; indeed, at $\Delta t < 10^{-4}$, results almost $10^4$ times more accurate than those of the $\theta$-method with $\theta = 1/2$ are obtained. In Figure 8, numerical solutions with different spatial spacing parameters $N$ are examined. The maximum error over the grid points $x_j$ ($j = 0, \dots, J$) between the numerical and exact solutions is calculated. We see that, on the logarithmic scale, there is no significant $N$ dependence when $N$ is sufficiently large.

4.2.2. Accuracy Depending on Discretization of Time Variables

Figure 7 shows the $\Delta t$-dependence of the errors. As for the two-stage, third-order implicit Runge-Kutta method (+ and ∘ in the figure), we see that the error becomes 1/8 times smaller if $\Delta t$ is halved. In other words, even in the nonlinear case, the numerical results confirm that the scheme achieves third-order accuracy with respect to time. Also in the nonlinear case, as for the $\theta$ method (× and □ in the figure), the error becomes 1/4 times smaller if $\Delta t$ is halved.
For each of the above two numerical schemes, comparing the errors in the cases $N = 2^5$ and $N = 2^{10}$, the values of the errors are almost unchanged if the size of $\Delta t$ is the same (note that in the case $N = 2^{10}$, the numerical calculation does not converge and no numerical solution is obtained for large $\Delta t$). That is, if $N$ is taken to be sufficiently large, the error depends only on the size of $\Delta t$ and not on the size of $N$. If we limit ourselves to our benchmark calculations (also in the nonlinear case), this advantage arises essentially from the introduction of the Fourier spectral method for the spatial direction.
In any case, the error depends only on $\Delta t$ for sufficiently large $N$, and a smaller $\Delta t$ results in a smaller error. Comparing the two schemes, it is concluded also in the nonlinear benchmark case that the implicit Runge-Kutta method can yield errors almost $10^4$ times smaller than the $\theta$ method.

4.2.3. Accuracy Depending on Discretization of Spatial Variables

Figure 8 shows that if $N$ is larger than $2^5$, the error does not depend on the size of $N$ for either $\Delta t = 2^{-6}$ or $2^{-13}$ (refer to + and ∘ in Figure 8 for the errors by the implicit Runge-Kutta method, and to × and □ for those by the $\theta$ method). In addition, a smaller $\Delta t$ leads to a smaller error. On the other hand, if $N$ is smaller than $2^3$, a smaller $N$ leads to a larger error, in which the order of the error is almost the same regardless of the choice of scheme and the size of $\Delta t$. That is, most of the error arises from the discretization of the space variables if $N$ is smaller than $2^3$, while most of the error arises from the discretization of the time variables if $N$ is larger than $2^5$.
In conclusion, if $N$ is sufficiently large, the error associated with the discretization of the spatial variables becomes sufficiently small, and most of the error arises from the discretization of time. Therefore, similar to the linear case, the benchmark calculations suggest that high-precision numerical solutions can be obtained by taking $\Delta t \leq 10^{-4}$ and $N \geq 2^5$.

4.2.4. Convergence/Stability of Iteration

As in the linear case, in order to confirm the properties of the iterative method from the viewpoint of stability in a broad sense, we present the average numbers of iterations in calculating the time evolution of Equation (27) up to $t = 1$ with several $N$ and $\Delta t$. In addition, the results of applying the iterative method with Equation (17) (the normal iterative method) are also shown for comparison.
First, Table 3 shows the average number of iterations when $N = 2^{10}$ is fixed and $\Delta t = 2^{-2}, 2^{-3}, \dots, 2^{-10}$. For $\Delta t = 2^{-8}, 2^{-9}, 2^{-10}$, the calculations by the modified iterative method converge in 11, 4, and 4 iterations, respectively. On the contrary, the corresponding results by the normal iterative method take 16, 6, and 5 iterations, respectively. When $\Delta t = 2^{-2}, 2^{-3}, \dots, 2^{-7}$, neither the modified nor the normal iterative method converges. Next, Table 4 shows the average number of iterations when $N = 2^5$ is fixed and $\Delta t = 2^{-2}, 2^{-3}, \dots, 2^{-10}$. For $\Delta t = 2^{-3}$, the modified iterative method converges in 13 iterations, and a smaller $\Delta t$ leads to fewer iterations until convergence. The same trend is seen for the normal iterative method. However, when $\Delta t = 2^{-2}$, neither the modified nor the normal iterative method converges.
Accordingly, by applying the same convergence condition
$$\frac{\Delta t}{N^{-1}} \leq 2^2,$$
it is confirmed that the modified iterative method reduces the number of iterations by 20–50%, and the calculation cost is reduced significantly.

5. Summary

In order to understand the dynamics of nonlinear hyperbolic equations, the numerical approach is one of the most efficient tools, since the smoothing effect cannot be expected for generic hyperbolic equations. In this sense, high accuracy is an indispensable factor (for the conservation of physical quantities of the present scheme, see [26,27]). A high-precision numerical scheme for nonlinear hyperbolic evolution equations is proposed, and its performance is examined by linear and nonlinear benchmark calculations. The numerical scheme consists of the implicit Runge-Kutta method and the Fourier spectral method. As a concrete example, the scheme is demonstrated for one-dimensional Klein-Gordon equations. The precision due to the implicit Runge-Kutta method and the spectral method is quantitatively shown by the errors of the benchmark calculations, in comparison to the $\theta$ method and for different time increments.
As demonstrated in the benchmark cases, it is confirmed that the scheme constructed in this paper has third-order accuracy with respect to time. We have also confirmed that the truncation error associated with the spatial discretization is much smaller than the error associated with the time discretization when $N$ is set sufficiently large. That is, for sufficiently large $N$, most of the error arises from the error associated with $\Delta t$.
In time discretization, due to the limitation on the time increment by the CFL condition, it is generally difficult to achieve both small error and low computational cost simply by the conventional explicit method. Furthermore, in spatial discretization, due to the effects of numerical dispersion, it is generally difficult for the typical finite difference method to preserve the conserved quantities to a sufficient degree. The proposed method settles these two points by simultaneously introducing the implicit Runge-Kutta method and the spectral method. The applicability of the scheme is supported by its computational order $O(N \log_2 N)$. In conclusion, the proposed scheme provides a generic framework for high-precision computation with relatively low computational cost.
From a future perspective, toward more complex problems with various boundary conditions and the behavior of solutions with discontinuities, the construction of numerical schemes employing the spectral element method and/or the spectral penalty method is a promising direction. The proposed scheme should be a steadfast basis for such future works.

Author Contributions

The design of the study was prepared by both authors. The first author (Y.T.) constructed the scheme and performed the numerical experiments. The second author (Y.I.) drafted the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The authors have received no funding for this work.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there is no conflict of interest.

Appendix A. Jacobian Elliptic Functions

A set of basic elliptic functions was introduced by Carl Gustav Jacob Jacobi [28] in 1829; these functions are named the Jacobian elliptic functions after him. For any $k \in [0, 1)$, we define $K(k)$ by the complete elliptic integral of the first kind:
$$K(k) = \int_0^1 \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}.$$
Then, for any $k \in [0, 1)$ and any $x \in [-K(k), K(k)]$, we define $\mathrm{sn}(x, k)$ as an inverse of the incomplete elliptic integral of the first kind:
$$x = \int_0^{\mathrm{sn}(x,k)} \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}.$$
Clearly, $\mathrm{sn}(x, k)$ is an increasing odd function in $x$ from $[-K(k), K(k)]$ to $[-1, 1]$. We extend the domain of $\mathrm{sn}(x, k)$ to $\mathbb{R}$ by $\mathrm{sn}(x + 2K(k), k) = -\mathrm{sn}(x, k)$, which implies that $\mathrm{sn}(x, k)$ has $4K(k)$-periodicity. We can see that $K(0) = \pi/2$, $\mathrm{sn}(x, 0) = \sin x$, $K(k) \to \infty$ and $\mathrm{sn}(x, k) \to \tanh x$ as $k \to 1$, and $\mathrm{sn}(\cdot, k) \in C^\infty(\mathbb{R})$. Using $\mathrm{sn}(x, k)$, for $x \in [-K(k), K(k)]$, we also define
$$\mathrm{cn}(x, k) = \sqrt{1 - \mathrm{sn}^2(x, k)},$$
$$\mathrm{dn}(x, k) = \sqrt{1 - k^2\,\mathrm{sn}^2(x, k)}.$$
Clearly, $\mathrm{cn}(x, k)$ and $\mathrm{dn}(x, k)$ are even functions in $x$ from $[-K(k), K(k)]$ to $[0, 1]$. We extend the domains of $\mathrm{cn}(x, k)$ and $\mathrm{dn}(x, k)$ to the whole of $\mathbb{R}$ by $\mathrm{cn}(x + 2K(k), k) = -\mathrm{cn}(x, k)$ and $\mathrm{dn}(x + 2K(k), k) = \mathrm{dn}(x, k)$. This implies that $\mathrm{cn}(x, k)$ and $\mathrm{dn}(x, k)$ have $4K(k)$- and $2K(k)$-periodicity, respectively. It is shown that $\mathrm{cn}(x, 0) = \cos x$, $\mathrm{dn}(x, 0) = 1$, $\mathrm{cn}(x, k), \mathrm{dn}(x, k) \to 1/\cosh x$ as $k \to 1$, and $\mathrm{cn}(\cdot, k), \mathrm{dn}(\cdot, k) \in C^\infty(\mathbb{R})$.
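These definitions and periodicity relations can be checked numerically; the snippet below (our own, using SciPy's `ellipj`/`ellipk`, which take the parameter $m = k^2$) verifies them for $k = 1/2$, the modulus used in Section 4.2.

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.5
m = k ** 2                     # SciPy parameter convention
K = ellipk(m)
x = np.linspace(-K, K, 101)
sn, cn, dn, _ = ellipj(x, m)
assert np.allclose(cn, np.sqrt(1.0 - sn ** 2))            # definition of cn
assert np.allclose(dn, np.sqrt(1.0 - k ** 2 * sn ** 2))   # definition of dn
sn_s, cn_s, dn_s, _ = ellipj(x + 2.0 * K, m)
assert np.allclose(sn_s, -sn)                             # sn(x + 2K) = -sn(x)
assert np.allclose(cn_s, -cn)                             # cn(x + 2K) = -cn(x)
assert np.allclose(dn_s, dn)                              # dn has 2K-periodicity
```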

References

  1. Bjorken, J.D.; Drell, S.D. Relativistic Quantum Fields; McGraw-Hill: New York, NY, USA, 1965.
  2. Boyd, J.P. Chebyshev and Fourier Spectral Methods, 2nd ed. (Revised); Dover: New York, NY, USA, 2001.
  3. Abbasbandy, S.; Shivanian, E. Multiple solutions of mixed convection in a porous medium on semi-infinite interval using pseudo-spectral collocation method. Commun. Nonlinear Sci. Numer. Simulat. 2011, 16, 2745–2752.
  4. Fornberg, B. A Practical Guide to Pseudospectral Methods; Cambridge University Press: Cambridge, UK, 1995.
  5. Pasciak, J.E. Spectral and Pseudo Spectral Methods for Advection Equations. Math. Comput. 1980, 35, 1081–1092.
  6. Chantawansri, T.L.; Hur, S.M.; García-Cervera, C.J.; Ceniceros, H.D.; Fredrickson, G.H. Spectral collocation methods for polymer brushes. J. Chem. Phys. 2011, 134, 244905.
  7. Gomez, H.; Reali, A.; Sangalli, G. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models. J. Comput. Phys. 2014, 262, 153–171.
  8. Ai, Q.; Li, H.Y.; Wang, Z.Q. Diagonalized Legendre spectral methods using Sobolev orthogonal polynomials for elliptic boundary value problems. Appl. Numer. Math. 2018, 127, 196–210.
  9. Patera, A.T. A spectral element method for fluid dynamics: Laminar flow in a channel expansion. J. Comput. Phys. 1984, 54, 468–488.
  10. Jafarzadeh, S.; Larios, A.; Bobaru, F. Efficient Solutions for Nonlocal Diffusion Problems via Boundary-Adapted Spectral Methods. J. Peridyn. Nonlocal Model. 2020, 2, 85–110.
  11. Hesthaven, J.S. Spectral penalty methods. Appl. Numer. Math. 2000, 33, 23–41.
  12. Cordero-Carrión, I.; Cerdá-Durán, P. Partially implicit Runge-Kutta methods for wave-like equations. In Advances in Differential Equations and Applications; Casas, F., Martínez, V., Eds.; SEMA SIMAI Springer Series 4; Springer: Cham, Switzerland, 2012.
  13. Wang, H.; Zhang, Q.; Shu, C.W. Third order implicit–explicit Runge-Kutta local discontinuous Galerkin methods with suitable boundary treatment for convection–diffusion problems with Dirichlet boundary conditions. J. Comput. Appl. Math. 2018, 342, 164–179.
  14. Zhao, W.; Huang, J. Boundary treatment of implicit-explicit Runge-Kutta method for hyperbolic systems with source terms. J. Comput. Phys. 2020, 423, 109828.
  15. Masud Rana, M.; Howle, V.E.; Long, K.; Meek, A.; Milestone, W. A New Block Preconditioner for Implicit Runge-Kutta Methods for Parabolic PDE Problems. SIAM J. Sci. Comput. 2020, 43, S475–S495.
  16. Mardal, K.A.; Nilssen, T.K.; Staff, G.A. Order-optimal preconditioners for implicit Runge-Kutta schemes applied to parabolic PDEs. SIAM J. Sci. Comput. 2007, 29, 361–375.
  17. Staff, G.A.; Mardal, K.A.; Nilssen, T.K. Preconditioning of fully implicit Runge-Kutta schemes for parabolic PDEs. Model. Identif. Control 2006, 27, 109–123.
  18. Butcher, J.C. The Numerical Analysis of Ordinary Differential Equations: Runge-Kutta and General Linear Methods; John Wiley and Sons: New York, NY, USA, 1987.
  19. Gottlieb, D.; Orszag, S.A. Numerical Analysis of Spectral Methods: Theory and Applications; SIAM-CBMS: Philadelphia, PA, USA, 1977.
  20. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods in Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 1986.
  21. Iwata, Y.; Takei, Y. Numerical scheme based on the spectral method for calculating nonlinear hyperbolic evolution equations. In Proceedings of the ICCMS '20; ACM Digital Library: New York, NY, USA, 2020; pp. 25–30. ISBN 978-1-4503-7703-4.
  22. Hairer, E.; Wanner, G. Stiff differential equations solved by Radau methods. J. Comput. Appl. Math. 1999, 111, 93–111.
  23. Hairer, E.; Nørsett, S.P.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems; Springer: Berlin/Heidelberg, Germany, 1993.
  24. Wanner, G.; Hairer, E. Solving Ordinary Differential Equations II: Stiff Problems; Springer: Berlin/Heidelberg, Germany, 1996.
  25. Jackson, E.A. Perspectives of Nonlinear Dynamics 1 and 2; Cambridge University Press: Cambridge, UK, 1991.
  26. Takei, Y.; Iwata, Y. Space-time breather solution for nonlinear Klein-Gordon equations. J. Phys. Conf. Ser. 2021, 1730, 012058.
  27. Takei, Y.; Iwata, Y. Stationary analysis for coupled nonlinear Klein-Gordon equations. arXiv 2021, arXiv:2109.11038.
  28. Jacobi, C.G.J. Fundamenta Nova Theoriae Functionum Ellipticarum; Borntraeger: Kaliningrad, Russia, 1829; Reprinted by Cambridge University Press: Cambridge, UK, 2012; ISBN 978-1-108-05200-9. (In Latin)
Figure 1. Linear Klein-Gordon dynamics: time evolution of $u$ and $v$ ($N = 2^{10}$, $\Delta t = 2^{-13}$, $L = 8$).
Figure 2. Solution of the linear Klein-Gordon equation at $t = 1$ ($N = 2^{10}$, $\Delta t = 2^{-13}$, $L = 8$).
Figure 3. The relation between the error and the time increment $\Delta t$ for $u$ and $v$ at $t = 1$. Theta $u_{err,N5}$, Theta $u_{err,N10}$, Theta $v_{err,N5}$ and Theta $v_{err,N10}$ in the figure show the errors of $u$ and $v$ when the $\theta$ method with $\theta = 1/2$ is applied with spatial spacing parameter $N = 2^5$ and $2^{10}$, respectively. Similarly, RK $u_{err,N5}$, RK $u_{err,N10}$, RK $v_{err,N5}$ and RK $v_{err,N10}$ denote the errors of $u$ and $v$ when the implicit Runge-Kutta method is applied with $N = 2^5$ and $2^{10}$, respectively.
Figure 4. The relation between the error and the spatial spacing parameter $N$ for $u$ and $v$ at $t = 1$. Theta $u_{err,t6}$, Theta $u_{err,t13}$, Theta $v_{err,t6}$, and Theta $v_{err,t13}$ in the figure show the errors in $u$ and $v$ when the $\theta$ method is applied with the time increments $\Delta t = 2^{-6}$ and $2^{-13}$, respectively. Similarly, RK $u_{err,t6}$, RK $u_{err,t13}$, RK $v_{err,t6}$ and RK $v_{err,t13}$ denote the errors in $u$ and $v$ when the implicit Runge-Kutta method is applied with the time increments $\Delta t = 2^{-6}$ and $2^{-13}$, respectively.
Figure 5. Nonlinear Klein-Gordon dynamics: time evolution of $u$ and $v$ ($N = 2^{10}$, $\Delta t = 2^{-13}$, $L = 6.7430014192503$).
Figure 6. Solution of the nonlinear Klein-Gordon equation: $u$ and $v$ at $t = 1$ ($N = 2^{10}$, $\Delta t = 2^{-13}$, $L = 6.7430014192503$).
Figure 7. The relation between the error and the time increment $\Delta t$ for $u$ and $v$ at $t = 1$. Theta $u_{err,N5}$, Theta $u_{err,N10}$, Theta $v_{err,N5}$ and Theta $v_{err,N10}$ in the figure show the errors of $u$ and $v$ when the $\theta$ method with $\theta = 1/2$ is applied with spatial spacing parameter $N = 2^5$ and $2^{10}$, respectively. Similarly, RK $u_{err,N5}$, RK $u_{err,N10}$, RK $v_{err,N5}$ and RK $v_{err,N10}$ denote the errors of $u$ and $v$ when the implicit Runge-Kutta method is applied with $N = 2^5$ and $2^{10}$, respectively.
Figure 8. The relation between the error and the spatial spacing parameter $N$ for $u$ and $v$ at $t = 1$. Theta $u_{err,t6}$, Theta $u_{err,t13}$, Theta $v_{err,t6}$, and Theta $v_{err,t13}$ in the figure show the errors in $u$ and $v$ when the $\theta$ method is applied with the time increments $\Delta t = 2^{-6}$ and $2^{-13}$, respectively. Similarly, RK $u_{err,t6}$, RK $u_{err,t13}$, RK $v_{err,t6}$ and RK $v_{err,t13}$ denote the errors in $u$ and $v$ when the implicit Runge-Kutta method is applied with the time increments $\Delta t = 2^{-6}$ and $2^{-13}$, respectively.
Table 1. Average number of iterations by the iterative method with fixed $N = 2^{10}$. Modified ite. count and Normal ite. count denote the average number of iterations by the modified iterative method and the normal iterative method, respectively. N/A denotes the case where the iterations did not converge.

| $N$ | $\Delta t$ | $\Delta t / N^{-1}$ | (a) Modified Ite. Count | (b) Normal Ite. Count | (a)/(b) |
|---|---|---|---|---|---|
| $2^{10}$ | $2^{-2}$ | $2^{8}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-3}$ | $2^{7}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-4}$ | $2^{6}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-5}$ | $2^{5}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-6}$ | $2^{4}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-7}$ | $2^{3}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-8}$ | $2^{2}$ | 4 | 6 | 0.67 |
| $2^{10}$ | $2^{-9}$ | $2^{1}$ | 4 | 6 | 0.67 |
| $2^{10}$ | $2^{-10}$ | $2^{0}$ | 4 | 5 | 0.80 |
Table 2. Average number of iterations by the iterative method with fixed $N = 2^{5}$. Modified ite. count and Normal ite. count denote the average number of iterations by the modified iterative method and the normal iterative method, respectively. N/A denotes the case where the iterations did not converge.

| $N$ | $\Delta t$ | $\Delta t / N^{-1}$ | (a) Modified Ite. Count | (b) Normal Ite. Count | (a)/(b) |
|---|---|---|---|---|---|
| $2^{5}$ | $2^{-2}$ | $2^{3}$ | N/A | N/A | N/A |
| $2^{5}$ | $2^{-3}$ | $2^{2}$ | 7 | 11 | 0.64 |
| $2^{5}$ | $2^{-4}$ | $2^{1}$ | 6 | 9 | 0.67 |
| $2^{5}$ | $2^{-5}$ | $2^{0}$ | 5 | 8 | 0.63 |
| $2^{5}$ | $2^{-6}$ | $2^{-1}$ | 5 | 7 | 0.71 |
| $2^{5}$ | $2^{-7}$ | $2^{-2}$ | 4 | 7 | 0.57 |
| $2^{5}$ | $2^{-8}$ | $2^{-3}$ | 4 | 6 | 0.67 |
| $2^{5}$ | $2^{-9}$ | $2^{-4}$ | 4 | 6 | 0.67 |
| $2^{5}$ | $2^{-10}$ | $2^{-5}$ | 4 | 5 | 0.80 |
Table 3. Average number of iterations by the iterative method with fixed $N = 2^{10}$. Modified ite. count and Normal ite. count denote the average number of iterations by the modified iterative method and the normal iterative method, respectively. N/A denotes the case where the iterations did not converge.

| $N$ | $\Delta t$ | $\Delta t / N^{-1}$ | (a) Modified Ite. Count | (b) Normal Ite. Count | (a)/(b) |
|---|---|---|---|---|---|
| $2^{10}$ | $2^{-2}$ | $2^{8}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-3}$ | $2^{7}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-4}$ | $2^{6}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-5}$ | $2^{5}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-6}$ | $2^{4}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-7}$ | $2^{3}$ | N/A | N/A | N/A |
| $2^{10}$ | $2^{-8}$ | $2^{2}$ | 11 | 16 | 0.69 |
| $2^{10}$ | $2^{-9}$ | $2^{1}$ | 4 | 6 | 0.67 |
| $2^{10}$ | $2^{-10}$ | $2^{0}$ | 4 | 5 | 0.80 |
Table 4. Average number of iterations by the iterative method with fixed $N = 2^{5}$. Modified ite. count and Normal ite. count denote the average number of iterations by the modified iterative method and the normal iterative method, respectively. N/A denotes the case where the iterations did not converge.

| $N$ | $\Delta t$ | $\Delta t / N^{-1}$ | (a) Modified Ite. Count | (b) Normal Ite. Count | (a)/(b) |
|---|---|---|---|---|---|
| $2^{5}$ | $2^{-2}$ | $2^{3}$ | N/A | N/A | N/A |
| $2^{5}$ | $2^{-3}$ | $2^{2}$ | 13 | 24 | 0.54 |
| $2^{5}$ | $2^{-4}$ | $2^{1}$ | 8 | 14 | 0.57 |
| $2^{5}$ | $2^{-5}$ | $2^{0}$ | 6 | 10 | 0.60 |
| $2^{5}$ | $2^{-6}$ | $2^{-1}$ | 6 | 9 | 0.67 |
| $2^{5}$ | $2^{-7}$ | $2^{-2}$ | 5 | 7 | 0.71 |
| $2^{5}$ | $2^{-8}$ | $2^{-3}$ | 4 | 7 | 0.57 |
| $2^{5}$ | $2^{-9}$ | $2^{-4}$ | 4 | 6 | 0.67 |
| $2^{5}$ | $2^{-10}$ | $2^{-5}$ | 4 | 5 | 0.80 |