In this paper, we construct novel numerical algorithms to solve the heat or diffusion equation. We start with 10^5 different leapfrog-hopscotch algorithm combinations and narrow this selection down to five during subsequent tests. We demonstrate the performance of these top five methods in the case of large systems with random parameters and discontinuous initial conditions, by comparing them with other methods. We verify the methods by reproducing an analytical solution using a non-equidistant mesh. Then, we construct a new nontrivial analytical solution containing the Kummer functions for the heat equation with time-dependent coefficients, and also reproduce this solution. The new methods are then applied to the nonlinear Fisher equation. Finally, we analytically prove that the order of accuracy of the methods is two, and present evidence that they are unconditionally stable.
1. Introduction
This paper can be considered to be a continuation of our previous work [1,2,3,4], in which we developed novel numerical algorithms to solve the heat or diffusion equation. The phenomenon of diffusion is described by the heat or diffusion equation, which, in its simplest form, is the following linear parabolic partial differential equation (PDE):
$$\frac{\partial u}{\partial t} = \alpha \nabla^2 u, \qquad (1)$$
where u is the concentration or temperature, and α is the diffusion coefficient or thermal diffusivity in the case of particle or heat transport, respectively. If the medium in which the diffusion takes place is not homogeneous, one can use a more general form
$$c\rho\,\frac{\partial u}{\partial t} = \nabla\cdot\left(k\nabla u\right), \qquad (2)$$
where, in the case of conductive heat transfer, $k$, $c$, and $\rho$ are the heat conductivity, specific heat, and mass density, respectively. The relation $\alpha = k/(c\rho)$ connects the four nonnegative quantities.
The heat equation, either the simplest version (1) or similar but more difficult equations, has an increasing number of analytical solutions. Unfortunately, most consider only systems with relatively simple geometries, such as a cylindrical inclusion in an infinite medium [5]. Moreover, parameters such as the diffusion coefficient or the heat conductivity are typically taken as constants, or a fixed and relatively simple function of the space variables [6] at best. This means that, for intricate geometries, and in the case of space- and time-dependent coefficients in general, numerical calculations cannot be avoided. However, approaches such as the finite difference (FDM) or finite element (FEM) methods require the full spatial discretization of the system; thus, they are computationally demanding. We explained in our previous papers [3,4,7], and also demonstrate in this paper, that the widely used conventional solvers, either explicit or implicit, have serious difficulties. The explicit methods are usually conditionally stable, so when the stiffness of the problem is high, very small time step sizes have to be used. By comparison, when the number of cells is large, which is almost always the case in two or three dimensions, the implicit solvers become very slow and use a significant amount of memory. This information highlights the fact that finding effective numerical methods is still important.
One of the very few easily parallelizable explicit and unconditionally stable methods is the two-stage odd-even hopscotch (OEH) algorithm [3,8,9,10]. In our previous papers [3,4], we showed that this method is robust and powerful for spatially homogeneous grids but, in the case of stiff systems, it can be disastrously inaccurate for large time step sizes. We constructed and tested new hopscotch combinations and found [1,2,3,4] that some of them behave much better, not only for large, but also for medium and small time step sizes. In this paper, we extend our research by further modifying the underlying space and time structure as explained below.
Our current work was inspired by the well-known leapfrog method [11] used by the molecular dynamics community to solve Newton’s equations of motion. In their books, Hockney and Eastwood [12] (p. 28) and, later, Frenkel and Smit [13] (p. 75) introduced this method in the following form:
$$v\!\left(t + \tfrac{\Delta t}{2}\right) = v\!\left(t - \tfrac{\Delta t}{2}\right) + a(t)\,\Delta t, \qquad x(t + \Delta t) = x(t) + v\!\left(t + \tfrac{\Delta t}{2}\right)\Delta t.$$
In Figure 1, a diagram of this method is presented, and one can immediately understand why the method is called leapfrog.
The leapfrog scheme was successfully adapted and also generalized to hyperbolic PDEs [14], but for parabolic equations it is unstable. This can be addressed by a modification to obtain the Dufort–Frankel method [15] (Subsection 8.3). In all of these publications, the nodes of the spatial mesh are treated equivalently, unlike in the OEH method. Verwer and Sommeijer [16] presented a composite method for the advection-diffusion problem, where the leapfrog scheme was used for the horizontal advection part, and the Dufort–Frankel and Crank–Nicolson methods were used for the other parts. Their method is thus significantly different from ours. In addition, we have been unable to find any combination of the leapfrog and hopscotch structures in the literature that is even remotely similar to ours.
The remainder of this paper is organized as follows. In Section 2 we introduce the new algorithms, both for the simplest, one-dimensional, equidistant mesh and for general, arbitrary meshes. In Section 3, we first very briefly present the results of the numerical tests used for the first assessment of the 10^5 combinations to obtain a manageable number of methods. Then, in Section 3.3 and Section 3.4, two numerical experiments are presented for two-dimensional stiff systems consisting of 10,000 cells. We choose the top five combinations with the highest accuracy and compare them to selected other methods. In Section 3.5, we reproduce an analytical solution with the top five methods using a non-equidistant mesh for verification. In Section 3.6, we construct a new analytical solution for time-dependent diffusivity and then, in Section 3.7, the top five methods are verified again by comparing their numerical results to this solution. In Section 3.8, the methods are applied to the nonlinear Fisher equation and, in Section 3.9, their convergence and stability properties are analytically investigated. In Section 4 we summarize the conclusions and our research plans for the future.
2. The New Methods
To implement the OEH algorithm, we need to construct a bipartite grid, in which the set of nodes (or cells) is divided into two similar subsets with odd and even space indices, such that all nearest neighbors of odd nodes are even, and vice versa. In one space dimension, where the space variable is discretized as usual, $x_i = x_0 + i\Delta x$, this is straightforward. Let us fix the time discretization for the remainder of the paper to uniform steps $t^n = t_0 + nh$, where h is the time step size. In the original OEH method, odd and even time steps can also be distinguished: in odd time steps the odd nodes are treated in the first stage, and only the u values belonging to the beginning of the time step are used. Then, in the second stage, when the even nodes are calculated, the new values of their odd neighbors, which have just been calculated, are used. In even time steps, the roles of the nodes are interchanged, as shown in Figure 2a. The original OEH method always applies the standard explicit Euler formula in the first stage and the implicit Euler formula in the second stage. However, in each stage of this structure, the latest available u values of the neighbors are used; thus, when the second-stage calculations begin, the new values of the neighbors $u_{i-1}$ and $u_{i+1}$ are already known, which makes the implicit formula effectively explicit. Therefore, it is obvious that the OEH method is fully explicit, and the previous u values do not need to be stored at all, which means that one array is sufficient to store the variable u in the memory. We emphasize that all of our algorithms described in this paper have this property, irrespective of the modifications. In one of our previous papers [1], we applied the UPFD method in stage one and the explicit Euler in stage two, which we then called the UPFD + Explicit Euler odd-even Hopscotch method. However, this can simply be called the reversed hopscotch method, as one only needs to change the order of the two formulas in the code of the original OEH to obtain the code of this method. Then, we modified the space and time structures and constructed the so-called shifted-hopscotch method [4].
In the case of the new leapfrog-hopscotch method, the calculation starts by taking a half-sized time step for the odd nodes using the initial values, which is symbolized by a green box containing the number 0 in Figure 2b. Then, full time steps are taken for the even and odd nodes strictly alternately until the end of the last time step (orange box in Figure 2b), which must also be halved for the odd nodes to reach exactly the same time point as the even nodes. In Figure 2b, the number “0” means that this stage is performed only once and is not repeated. By comparison, stages 1–4 are repeated T/2 times (including the last, halved stage 4), where T is the total number of time steps, so T must be an even integer. Thus, finally, we have five different stages 0...4 in which five different formulas can be applied to maximize the efficiency. Let us turn our attention to these formulas.
It is relatively common [17] (p. 112) to spatially discretize Equation (1) in one dimension by applying the central difference formula to obtain a system of ordinary differential equations (ODEs) for the nodes:
$$\frac{du_i}{dt} = \alpha\,\frac{u_{i-1} - 2u_i + u_{i+1}}{\Delta x^2}. \qquad (3)$$
Suppose now that one needs to solve Equation (2) when the material properties are not constants but functions of the space variables, and the mesh is possibly unstructured. We have shown in our previous papers [3,7] (based on, e.g., Chapter 5 of the book [18]) that Equation (3) can be generalized to arbitrary grids consisting of cells of various shapes and properties:
$$\frac{du_i}{dt} = \sum_{j\ne i} \frac{u_j - u_i}{R_{ij}\,C_i}, \qquad (4)$$
where the heat capacity $C_i$ and the inverse thermal resistance $1/R_{ij}$ can be calculated approximately as
$$C_i = c_i \rho_i V_i, \qquad \frac{1}{R_{ij}} = \frac{k_{ij} A_{ij}}{d_{ij}}, \qquad (5)$$
respectively, where $V_i$ is the volume of the cell, and $A_{ij}$ and $d_{ij}$ are the surface between the cells and the distance between the cell centers, respectively. Figure 3 can help the reader to understand these quantities. Although the grid should be topologically rectangular to maintain the explicit property of the OEH-type methods, we emphasize that the geometrical shape of the cells is not necessarily rectangular.
Equation systems (3) and (4) can be written in a condensed matrix form:
$$\frac{d\vec{u}}{dt} = M\vec{u}. \qquad (6)$$
The matrix M is tridiagonal in the one-dimensional case of Equation (3), with the following elements:
$$M_{i,i\pm1} = \frac{\alpha}{\Delta x^2}, \qquad M_{i,i} = -\frac{2\alpha}{\Delta x^2}. \qquad (7)$$
In the general case of Equation (4), the nonzero elements of the matrix can be given as:
$$M_{ij} = \frac{1}{R_{ij} C_i}\;\;(j \ne i), \qquad M_{ii} = -\sum_{j\ne i} \frac{1}{R_{ij} C_i}.$$
Let us introduce the characteristic time or time constant $\tau_i$ of cell i, which is always a nonnegative quantity:
$$\tau_i = \frac{1}{\left|M_{ii}\right|} = \frac{C_i}{\sum_{j\ne i} 1/R_{ij}}.$$
Using this, we introduce the following notations:
$$r_i = \frac{h}{2\tau_i}, \qquad A_i = \sum_{j\ne i} \frac{u_j}{R_{ij} C_i}.$$
Here, $r_i$ is the generalization of the usual mesh ratio $r = \alpha h/\Delta x^2$. We recall the so-called theta method:
$$u_i^{n+1} = u_i^{n} + r\left[(1-\theta)\left(u_{i-1}^{n} - 2u_i^{n} + u_{i+1}^{n}\right) + \theta\left(u_{i-1}^{n+1} - 2u_i^{n+1} + u_{i+1}^{n+1}\right)\right],$$
where $\theta \in [0, 1]$. If θ = 0, this scheme is the explicit (Euler) or forward-time central-space (FTCS) scheme. Otherwise the theta method is implicit, and for θ = 1 and θ = ½ one has the implicit (Euler) and the Crank–Nicolson methods, respectively [17]. However, we remind the reader that in our leapfrog-hopscotch scheme, the neighbors are always taken into account at the same, most recent time level; thus, we can express the new value of the u variable in the following brief form in the 1D case:
$$u_i^{n+1} = \frac{\left(1 - 2(1-\theta)r\right)u_i^{n} + r\left(u_{i-1} + u_{i+1}\right)}{1 + 2\theta r}, \qquad (11)$$
and in the general case:
$$u_i^{n+1} = \frac{\left(1 - 2(1-\theta)r_i\right)u_i^{n} + h A_i}{1 + 2\theta r_i}. \qquad (12)$$
Here, the u values of the neighbors are the latest available ones, which belong to different time levels for the even and for the odd cells, depending on where we are in the leapfrog-hopscotch structure. For easier understanding, see Figure 2b, in addition to Example 1 below.
The other formula we use is derived from the constant-neighbor (CNe) method, which was introduced in our previous papers [7,19]. For the leapfrog-hopscotch method in the one-dimensional case, this can be written as follows:
$$u_i^{n+1} = u_i^{n}\, e^{-2r} + \frac{u_{i-1} + u_{i+1}}{2}\left(1 - e^{-2r}\right), \qquad (13)$$
whereas for general grids it is:
$$u_i^{n+1} = u_i^{n}\, e^{-h/\tau_i} + A_i\,\tau_i\left(1 - e^{-h/\tau_i}\right), \qquad (14)$$
and for halved time steps, h and r must be divided by 2, obviously.
Remark 1.
In the case of the theta method for θ = 1 (Formulas (11) and (12)) and the CNe method (Formulas (13) and (14)), the new u values are convex combinations of the old u values. Indeed, in (11) with θ = 1 the coefficients $\frac{1}{1+2r}$, $\frac{r}{1+2r}$, and $\frac{r}{1+2r}$ are nonnegative and their sum is one. This property can be similarly shown for (12)–(14).
Here, we briefly outline the notation of the individual combinations. The five symbols in brackets are the values of the parameter θ for each of the stages 0, ..., 4, whereas the letter “C” means the CNe constant-neighbor method. For example, L4 (¼, ½, C, ½, ½) denotes the following algorithm, which will be selected among the top five algorithms in Section 3.3.
Example 1.
Algorithm L4 (¼, ½, C, ½, ½), general form.
Stage 0. Take a half time step with the (12) formula with θ = ¼ for the odd cells.
Repeat the following four stages T/2 times:
Stage 1. Take a full time step with the (12) formula with θ = ½ for the even cells.
Stage 2. Take a full time step with the (14) formula for the odd cells.
Stage 3. Take a full time step with the (12) formula with θ = ½ for the even cells.
Stage 4. If this is not the last repetition, take a full time step with the (12) formula with θ = ½ for the odd cells; otherwise, take a half time step with the (12) formula with θ = ½ for the odd cells.
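The following minimal MATLAB sketch illustrates this algorithm in the simplest one-dimensional, equidistant case, using the update formulas (11) and (13) as written above and a zero-flux (ghost-node) treatment of the isolated ends; all variable and function names as well as the concrete parameter values are our own placeholders, not the settings of the paper.

```matlab
% Sketch of Example 1 (theta = 1/4, 1/2, CNe, 1/2, 1/2) in 1D, equidistant form.
N = 100; dx = 0.1; alpha = 1;               % number of nodes, spacing, diffusivity
h = 0.01; t_fin = 1;                        % time step size and final time (t_fin/h even)
r = alpha*h/dx^2;                           % usual mesh ratio
u = rand(N,1);                              % random initial condition
odd = 1:2:N;  even = 2:2:N;                 % bipartite splitting of the nodes

u = theta_update(u, odd, r/2, 1/4);         % Stage 0: half step, theta = 1/4, odd nodes
nrep = round(t_fin/h)/2;                    % stages 1-4 are repeated T/2 times
for k = 1:nrep
    u = theta_update(u, even, r, 1/2);      % Stage 1: full step, theta = 1/2, even nodes
    u = cne_update(u, odd, r);              % Stage 2: full step, CNe formula, odd nodes
    u = theta_update(u, even, r, 1/2);      % Stage 3: full step, theta = 1/2, even nodes
    if k < nrep
        u = theta_update(u, odd, r, 1/2);   % Stage 4: full step, theta = 1/2, odd nodes
    else
        u = theta_update(u, odd, r/2, 1/2); % last Stage 4: halved step for the odd nodes
    end
end
disp(u(1:5))

function s = neighbour_sum(u)               % u_{i-1} + u_{i+1} with zero-flux ghost nodes
    s = [u(1)+u(2); u(1:end-2)+u(3:end); u(end-1)+u(end)];
end
function u = theta_update(u, idx, r, th)    % formula (11) applied to the index set idx
    s = neighbour_sum(u);
    u(idx) = ((1 - 2*(1-th)*r)*u(idx) + r*s(idx)) / (1 + 2*th*r);
end
function u = cne_update(u, idx, r)          % formula (13) applied to the index set idx
    s = neighbour_sum(u);
    u(idx) = u(idx)*exp(-2*r) + s(idx)/2*(1 - exp(-2*r));
end
```

The other combinations are obtained from the same loop structure simply by changing the θ values, or by replacing a theta stage with a CNe stage.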
Based on this example, all other combinations can be easily constructed. In our previous paper [4], the best shifted-hopscotch combination was S4 (0, ½, ½, ½, 1), and, in the category of the positivity-preserving algorithms, S1 (C, C, C, C, C). These will be tested against the new leapfrog-hopscotch algorithms in the following section.
3. Results
3.1. On the Methods and Circumstances of the Investigation
In Section 3.2, Section 3.3 and Section 3.4, two-dimensional, topologically rectangle-structured lattices are investigated (see Figure 3 for visualization). We consider the system as thermally isolated (zero Neumann boundary conditions), which is implemented by the omission of those terms of the sum in Equation (4) which would have an infinite resistivity in the denominator due to the isolated boundary. Let us denote by $\lambda_{\min}$ and $\lambda_{\max}$ the eigenvalues of the system matrix M with the smallest and the largest (nonzero) absolute values, respectively. Now, $\left|\lambda_{\max}/\lambda_{\min}\right|$ gives the stiffness ratio of the system, and the maximum possible time step size for the FTCS (explicit Euler) scheme is exactly given by $h_{\max}^{\mathrm{FTCS}} = 2/\left|\lambda_{\max}\right|$, above which the solutions are expected to diverge due to instability. This threshold is frequently called the CFL limit and also holds for the second-order explicit Runge–Kutta (RK) method [20].
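As an illustration of these quantities, the following sketch computes the stiffness ratio and the FTCS step size limit from the eigenvalues of a small example matrix (here simply the 1D matrix of Equation (7); the names and sizes are our own choices):

```matlab
N = 50; dx = 0.1; alpha = 1;
M = alpha/dx^2 * (diag(-2*ones(N,1)) + diag(ones(N-1,1),1) + diag(ones(N-1,1),-1));
lam = eig(M);                              % eigenvalues (real and negative for this M)
lam = lam(abs(lam) > 1e-12);               % discard numerically zero eigenvalues, if any
stiffness = max(abs(lam))/min(abs(lam));   % stiffness ratio
h_CFL = 2/max(abs(lam));                   % maximum stable step for FTCS (and for Heun)
fprintf('stiffness ratio = %.3g, CFL limit = %.3g\n', stiffness, h_CFL)
```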
In Section 3.2, Section 3.3 and Section 3.4, the reference solution is calculated by the ode15s built-in solver of MATLAB, with very strict error tolerance and, therefore, high precision. In Section 3.5, Section 3.6, and Section 3.8 the reference solution is an analytical solution. The (global) numerical error is the absolute difference of the numerical solution produced by the examined method and the reference solution at the final time $t_{\mathrm{fin}}$. We use these individual errors of the nodes or cells to calculate the maximum error
$$\mathrm{Error}(L_\infty)=\max_i\left|u_i^{\mathrm{num}}-u_i^{\mathrm{ref}}\right|,$$
the average error
$$\mathrm{Error}(L_1)=\frac{1}{N}\sum_{i=1}^{N}\left|u_i^{\mathrm{num}}-u_i^{\mathrm{ref}}\right|,$$
and the so-called energy error
$$\mathrm{Error}(\mathrm{energy})=\sum_{i=1}^{N}C_i\left|u_i^{\mathrm{num}}-u_i^{\mathrm{ref}}\right|,$$
which, in the case of heat conduction, yields the error in terms of energy. However, for different algorithms, these errors depend on the time step size in different ways. If we want to evaluate the overall performance of the new methods compared to the original OEH method, we consider a fixed value of the final time $t_{\mathrm{fin}}$, and first calculate the solution with a large time step size $h_0$, then repeat the whole calculation for subsequently halved time step sizes S times, until h reaches a very small value. Then, the aggregated relative error (ARE) quantities are calculated for each type of error defined above. For instance, in the case of the $L_\infty$ error, it has the following form:
$$\mathrm{ARE}(L_\infty)=\frac{1}{S+1}\sum_{s=0}^{S}\log_{10}\frac{\mathrm{Error}^{\mathrm{OEH}}(L_\infty,h_s)}{\mathrm{Error}(L_\infty,h_s)},\qquad h_s=\frac{h_0}{2^{\,s}}. \qquad (19)$$
We can also take the simple average of these errors,
$$\mathrm{ARE}=\frac{\mathrm{ARE}(L_\infty)+\mathrm{ARE}(L_1)+\mathrm{ARE}(\mathrm{energy})}{3},$$
to obtain one single quantity for a method applied on a concrete problem, which will help us to select the best combinations. For example, if ARE = 2, then the examined method produces smaller errors by two orders of magnitude than the OEH method.
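A compact sketch of these error measures and of one ARE quantity, under the log10-ratio reading implied by the last sentence above; all arrays below are placeholder data, not results from the paper:

```matlab
u_num = rand(100,1);  u_ref = rand(100,1);  C = ones(100,1);   % placeholder data
err_max    = max(abs(u_num - u_ref));            % maximum (L_inf) error
err_avg    = mean(abs(u_num - u_ref));           % average (L_1) error
err_energy = sum(C.*abs(u_num - u_ref));         % energy error (heat-capacity weighted)

% ARE for one error type: mean of log10(Error_OEH / Error_examined) over the step sizes
err_OEH = [1e0 2e-1 3e-2 5e-3];                  % hypothetical OEH errors for h0, h0/2, ...
err_new = [1e-2 3e-3 8e-4 2e-4];                 % hypothetical errors of the examined method
ARE_Linf = mean(log10(err_OEH./err_new));
% the overall ARE is then the simple average of ARE(L_inf), ARE(L_1), and ARE(energy)
```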
For the simulations where running times were measured, we used a desktop computer with an Intel Core i5-9400 CPU and 16.0 GB RAM, running the MATLAB R2020b software, whose built-in tic-toc function was used to measure the total running time of the tested algorithms.
3.2. Preliminary Tests
Nine different values of the parameter θ are substituted into the formula given in (12); thus, together with the CNe Formula (14), we have 10 different formulas. There are five stages in the leapfrog-hopscotch structure and, by inserting these 10 formulas in all possible combinations, we obtain 10^5 = 100,000 different algorithm combinations in total. We wrote MATLAB code which constructs and tests all of these combinations in a highly automated manner under the following circumstances.
The generalized Equation (4) is solved in the case of four different small systems, to ensure the total running times are manageable despite the large number of combinations. Here, and also in Section 3.3 and Section 3.4, randomly generated cell capacities and thermal resistances following a log-uniform distribution
have been used. The parameters of the distribution of the mesh-cell data were chosen to construct test problems with various stiffness ratios and CFL limits. The (pseudo-)random number, rand, is generated by MATLAB for each quantity with a uniform distribution in the unit interval (0, 1). We also generate different random values for the initial conditions. The final times were set to moderate values and, once, to a much larger value; the purpose of this latter, larger value is to let instabilities manifest themselves in order to exclude the unstable combinations. The computer program calculated the aggregated relative error (ARE) quantities and then sorted the algorithms in descending order according to this quantity to obtain a ranking list of the 10^5 algorithm combinations. Finally, we manually checked the top of these lists for all four small systems to select the best 20 combinations, which have the following short form:
In the next two subsections, we start only with these combinations. We note that there is an “odd one out” in the list above, i.e., the (C, C, C, C, C) combination, which was typically not at the top of the ranking lists. We include it in the top 20 and, later, in the top five, because it is the best among those combinations which preserve the positivity of the solution; more precisely, it always follows the maximum and minimum principles, as the true solution does [17] (p. 87), which is proven in Section 3.9.
We stress that, until this point, we have not stated anything precisely. The only purpose of these preliminary tests on the small systems is the reduction of the large number of algorithm combinations by eliminating those which are probably not worthy of more careful investigation.
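To make the test-problem construction of this and the next two subsections more concrete, the following sketch generates log-uniformly distributed capacities and resistances on a small topologically rectangular grid with an isolated boundary and assembles the matrix of Equation (4); the grid size and the exponent half-widths aC and aR are placeholders, not the values used for the actual test problems:

```matlab
Nx = 20; Nz = 20; Ncell = Nx*Nz;
aC = 2; aR = 2;                              % assumed half-widths of the exponent ranges
C  = 10.^(aC*(2*rand(Ncell,1) - 1));         % capacities, log-uniform in [10^-aC, 10^aC]
Rx = 10.^(aR*(2*rand(Ncell,1) - 1));         % resistances towards the right neighbor
Rz = 10.^(aR*(2*rand(Ncell,1) - 1));         % resistances towards the lower neighbor
u0 = rand(Ncell,1);                          % random initial condition

idx = @(i,k) (k-1)*Nx + i;                   % linear index of cell (i,k)
M = zeros(Ncell);                            % system matrix of Eq. (4): M_pq = 1/(R_pq*C_p)
for k = 1:Nz
    for i = 1:Nx
        p = idx(i,k);
        if i < Nx                            % connection in the x direction
            q = idx(i+1,k);
            M(p,q) = 1/(Rx(p)*C(p));  M(p,p) = M(p,p) - 1/(Rx(p)*C(p));
            M(q,p) = 1/(Rx(p)*C(q));  M(q,q) = M(q,q) - 1/(Rx(p)*C(q));
        end
        if k < Nz                            % connection in the z direction
            q = idx(i,k+1);
            M(p,q) = 1/(Rz(p)*C(p));  M(p,p) = M(p,p) - 1/(Rz(p)*C(p));
            M(q,p) = 1/(Rz(p)*C(q));  M(q,q) = M(q,q) - 1/(Rz(p)*C(q));
        end
    end
end
% connections across the boundary are simply omitted, i.e., the boundary is isolated
```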
3.3. Comparison with Other Methods for a Large, Moderately Stiff System
We examine a grid similar to the one in Figure 3 with an isolated boundary, but where the sizes are fixed to $N_x = N_z = 100$; thus, the total number of cells is 10,000, and the final time is fixed. The exponents introduced above were set to the following values:
which means, e.g., that log-uniformly distributed values between 0.01 and 100 were given to the capacities. The generated system can be characterized by its stiffness ratio and its CFL limit $h_{\max}^{\mathrm{FTCS}}$. The performance of the new algorithms is compared with that of the following, widely used MATLAB solvers:
ode15s, a first- to fifth-order (implicit) numerical differentiation formula solver with variable step size and variable order (VSVO), developed for solving stiff problems;
ode23s, a second-order modified (implicit) Rosenbrock formula;
ode23t, which applies the implicit trapezoidal rule using a free interpolant;
ode23tb, a combination of backward differentiation formula and trapezoidal rule;
ode45, a fourth-/fifth-order explicit Runge–Kutta–Dormand–Prince formula;
ode23, a second-/third-order explicit Runge–Kutta–Bogacki–Shampine method;
ode113, a first- to 13th-order VSVO Adams–Bashforth–Moulton solver.
In the case of the MATLAB solvers, we could not directly set the time step size, but changed the tolerances instead, starting from a large value and decreasing it towards a very small minimum value. In addition to these solvers, we also used the following methods for comparison purposes: the original UPFD, the CNe, the two- and three-stage linear-neighbor (LNe and LNe3) methods [7], and the Dufort–Frankel (DF) algorithm [17] (p. 120), which are explicit unconditionally stable schemes. The DF scheme is a two-step method which is not self-starting; thus, we calculated the first time step by two subsequent UPFD steps with halved time step sizes. The Heun method, also called the explicit trapezoidal rule, is one of the most common second-order RK schemes [21]. Finally, the widely used Crank–Nicolson scheme (abbreviated here as CrN) is an implicit scheme:
$$\left(I - \tfrac{h}{2}M\right)\vec{u}^{\,n+1} = \left(I + \tfrac{h}{2}M\right)\vec{u}^{\,n}, \qquad \text{i.e.,} \qquad A\vec{u}^{\,n+1} = B\vec{u}^{\,n},$$
where I is the unit matrix, $A = I - \frac{h}{2}M$, and $B = I + \frac{h}{2}M$. The A and B matrices here are time-independent; thus, it is sufficient to calculate them only once, before the first time step. We implemented this scheme in two different ways: one is to calculate $A^{-1}B$ before the first time step, after which each time step is just a simple matrix multiplication $\vec{u}^{\,n+1} = (A^{-1}B)\,\vec{u}^{\,n}$; this is denoted by “CrN invert”. The other is to solve the linear system $A\vec{u}^{\,n+1} = B\vec{u}^{\,n}$ in every time step with the built-in linear solver of MATLAB, which we denote by “CrN lins”.
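A short sketch of the two Crank–Nicolson variants for du/dt = Mu; the example matrix and the sizes are our own choices:

```matlab
N = 100; h = 1e-3; nsteps = 200;
M = gallery('tridiag', N, 1, -2, 1);      % example system matrix (1D Laplacian, alpha/dx^2 = 1)
u0 = rand(N,1);
A = speye(N) - h/2*M;   B = speye(N) + h/2*M;

P = A\B;                                  % "CrN invert": the propagation matrix is formed once
u_inv = u0;  for n = 1:nsteps, u_inv = P*u_inv; end

u_lin = u0;  for n = 1:nsteps, u_lin = A\(B*u_lin); end   % "CrN lins": linear solve in every step
```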
We plotted the $L_\infty$, $L_1$, and energy errors as a function of the effective time step size and, based on this (and on similar data presented in the next subsection), we selected the following top five combinations from those listed in (21) and discarded the remainder:
L1 (C, C, C, C, C),
L2 (0, ½, ½, ½, ½),
L3 (⅕, ½, ½, ½, ½),
L4 (¼, ½, C, ½, ½),
L5 (⅕, ½, C, ½, ½).
One can see that L2–3 and L4–5 are highly similar; only the zeroth stage is different. We tried to optimize the value of theta for this zeroth stage by calculating the errors for a few fixed time step sizes as a function of $\theta \in [0, 1]$. In the case of the method L2–3 (θ, ½, ½, ½, ½), we found that the errors are larger for large values of theta and are the smallest when θ = 0. In the case of the L4–5 (θ, ½, C, ½, ½) combination, we found that there is a nontrivial minimum of the error function at around θ = ¼, but its position slightly depends on the time step size, as shown in Figure 4. We can conclude that the L4 combination is usually slightly more accurate than L5.
In Figure 5, we present the energy error as a function of the time step size for these top five combinations and other methods, and Figure 6 shows the errors vs. the total running times. Furthermore, Table 1 presents some results that were obtained by our numerical schemes and the “ode” routines of MATLAB. One can see that, as expected, the implicit Crank–Nicolson scheme is the most accurate, but the accuracy of the L2 method, in addition to the L5 and S4 methods, approaches it, and is similar to that of the Heun method below the CFL limit.
3.4. Comparison with Other Methods for a Large, Very Stiff System
In the second case study, new values were set for the α and β exponents:
This means that the widths of the distributions of the capacities and thermal resistances were increased, and a non-negligible anisotropy appeared because, on average, the resistances in the x direction are two orders of magnitude larger than in the z direction. Note that the geometric mean of the C and R quantities is still equal to one. With this modification, we obtained a system with a much higher stiffness ratio, and the CFL limit for the standard FTCS scheme (which, we stress again, also holds for the Heun method) changed accordingly. All other parameters and circumstances are the same as in the previous subsection. In Figure 7 and Figure 8, the energy errors are presented as a function of the time step size and the errors as a function of the total running time, respectively. As expected, the implicit methods gained a slight advantage compared to the previous, less stiff case, but the new L2 method outperforms all other examined methods provided that not only the accuracy but also the speed is taken into account. In Table 2, we report the data related to this numerical experiment, and in Table 3 we summarize the ARE error quantities, defined in Equation (19), of the explicit stable methods for both case studies.
3.5. Verification 1. Comparison to Analytical Results Using a Non-Uniform Mesh
We consider a very recent [22] nontrivial analytical solution of Equation (1), given on the whole real number line for positive values of t as follows:
where, for the sake of simplicity, a fixed value of the free constant is used. In our last paper [4], we reproduced this solution by prescribing the Dirichlet boundary conditions calculated using the analytical solution at the two ends of the interval. Now, we do not use this kind of information, but construct a large-scale non-equidistant spatial grid according to the following procedure. First, we define the coordinates of the cell borders by the following formula:
Thus, we have a relatively dense system of nodes close to the origin, which becomes increasingly less dense as one moves further from the origin towards the right boundary of the mesh. Then the cell centers are easily calculated as the midpoints between neighboring borders:
$$x_i^{\mathrm{c}} = \frac{x_{i-1}^{\mathrm{b}} + x_i^{\mathrm{b}}}{2}.$$
It is straightforward to reflect this structure to the origin to create the mirror image of the mesh on the negative side of the x axis, thus obtaining 2000 cells in total. In the vicinity of the origin we have small cells with diameter 0.01, which grow as we move further from the origin, first very slowly, then increasingly rapidly towards the right boundary. The resistances and the cell capacities can then be calculated using Formula (5) in Section 2.
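The following sketch illustrates the bookkeeping of this construction. The concrete border formula of the paper is not reproduced here; a simple geometric stretching with 0.01-wide cells near the origin stands in for it, and the capacities and resistances follow Formula (5) with unit material properties and unit cross-section:

```matlab
Nhalf = 1000;                               % cells on the positive half-axis
dxs = 0.01*1.005.^(0:Nhalf-1);              % cell widths: 0.01 near the origin, growing outwards (assumed)
xb  = [0, cumsum(dxs)];                     % coordinates of the cell borders
xc  = (xb(1:end-1) + xb(2:end))/2;          % cell centers: midpoints between the borders

xc  = [-fliplr(xc), xc];                    % mirror the mesh to the negative half-axis: 2000 cells
dxs = [fliplr(dxs), dxs];
C = dxs(:);                                 % cell capacities ~ cell widths (c = rho = 1)
d = diff(xc(:));                            % distances between neighboring cell centers
R = d;                                      % thermal resistances, Formula (5) with k = 1
```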
We took into account the zero Neumann boundary conditions, as in the previous two subsections, which is a good approximation because the values of the initial function are extremely close to zero far from the origin. The stiffness ratio and the CFL limit were also calculated for this mesh. As in our last paper [4], and also in the next subsection, we reproduce the analytical solution in a finite time interval $[t_0, t_{\mathrm{fin}}]$. Obviously, the generalized Formulas (12) and (14) must be used. In Figure 9, the errors as a function of the time step size h are presented for the u solution for the top five leapfrog-hopscotch algorithms, for the original CNe method as a first-order “reference curve”, and for the Heun method. These results verify not only the second-order convergence of the numerical methods, but also the procedure of generalizing the calculations to non-uniform grids (see Equations (4) and (5) above). One can also see that the L2 and L3 algorithms reach the minimum error (determined by the space discretization) at larger values of h than the CFL limit of the Heun method.
3.6. New Analytical Solution with Time-Dependent Diffusion Coefficient
Now we investigate the analytical solutions of the diffusion equation where the diffusion coefficient has a power-law time dependence:
$$\frac{\partial u}{\partial t} = \hat{\alpha}\, t^{\,n}\, \frac{\partial^2 u}{\partial x^2}. \qquad (25)$$
To avoid confusion, we use the “hat” notation for the “generalized diffusion coefficient” $\hat{\alpha}$, which is considered constant and has the dimension of $\mathrm{m}^2/\mathrm{s}^{\,n+1}$. We apply the self-similar Ansatz, first introduced by Sedov [23], with the form
$$u(x,t) = t^{-a} f\!\left(\frac{x}{t^{\,b}}\right) = t^{-a} f(\eta),$$
where a and b are the self-similar exponents describing the decay and the spreading of the solution in time and space, respectively. These properties make this Ansatz physically extraordinarily relevant. In the previous decade, we applied this Ansatz to numerous physical systems described by partial differential equations, most of them with hydrodynamical origins [24,25].
The constraint among the exponents is $b = (n+1)/2$. Therefore, a can have arbitrary real values, but $b = (n+1)/2$ must be fulfilled. These relations dictate the reduced ODE
$$\hat{\alpha} f'' + \frac{n+1}{2}\,\eta f' + a f = 0.$$
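For completeness, here is a short check (our own derivation, in the notation above) of how the constraint between the exponents and the reduced ODE arise when the Ansatz is substituted into Equation (25):

```latex
u(x,t)=t^{-a}f(\eta),\qquad \eta=\frac{x}{t^{b}}
\;\Longrightarrow\;
\frac{\partial u}{\partial t}=-t^{-a-1}\bigl(af+b\,\eta f'\bigr),\qquad
\frac{\partial^{2}u}{\partial x^{2}}=t^{-a-2b}f''.
% Substituting into u_t = \hat{\alpha}\, t^{n} u_{xx} and matching the powers of t:
-a-1=n-a-2b \;\Longrightarrow\; b=\frac{n+1}{2},
\qquad \hat{\alpha}f''+\frac{n+1}{2}\,\eta f'+af=0 .
```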
The solution can be evaluated with the Maple 12 software and reads:
where M and U are the Kummer functions [26], whereas $c_1$ and $c_2$ are arbitrary real constants. Note that the larger the exponent n, the quicker the spreading of the solutions. Figure 10 shows numerous shape functions for various values of n.
In the next subsection we address the first term on the right-hand side of (26); thus, we give this part of the solution as a function of the x and t variables:
3.7. Verification 2. Comparison to Analytical Results with Time-Dependent Material Properties
We examine Equation (25) and take the solution given in (27) as a reference solution, whose 3D graph can be seen in Figure 10b. We reproduce this solution in finite space and time intervals $[x_0, x_{\mathrm{fin}}]$ and $[t_0, t_{\mathrm{fin}}]$. The space and time intervals are discretized as usual with uniform steps $\Delta x$ and h; thus, we can return to the equidistant theta and CNe Formulas (11) and (13). For this grid, the stiffness ratio is only small. We use the analytical solution to prescribe the appropriate Dirichlet boundary conditions at the two ends of the interval. We take into consideration the continuous change of the thermal diffusivity by recalculating the mesh ratio at each time step, multiplying its original value by the physical time taken at the middle of the actual time step, i.e., by $t_0 + (n - 1/2)h$ in the nth time step in Formulas (11) and (13). The values of the Kummer M function were calculated by the hypergeom function of MATLAB. The errors as a function of the time step size h, which are presented in Figure 11, show that our methods, particularly L2 and L3, can also easily solve this problem. Figure 11 is highly similar to Figure 9, and we also obtained similar error functions for several other values of the parameters for both problems.
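A sketch of this bookkeeping, assuming a linearly growing diffusivity (i.e., the exponent n = 1 in Equation (25)); only the recalculation of the mesh ratio is shown, the stage updates themselves being the same as in the sketch of Section 2:

```matlab
alpha_hat = 1; dx = 0.02; h = 1e-4; t0 = 1; t_fin = 2;   % placeholder values
r0 = alpha_hat*h/dx^2;                  % mesh ratio for unit diffusivity
nsteps = round((t_fin - t0)/h);
for n = 1:nsteps
    t_mid = t0 + (n - 1/2)*h;           % physical time at the middle of the nth time step
    r = r0*t_mid;                       % time-dependent mesh ratio used in Formulas (11) and (13)
    % ... perform the leapfrog-hopscotch stages of this time step with this r ...
end
```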
We also present the graphs of the initial function u0, the reference analytical solution uref, and the corresponding numerical solution in Figure 12. We intentionally chose a large time step size (more than two orders of magnitude larger than the CFL limit), for which the error is 0.096, to demonstrate the robustness of the algorithms, because one can see that there is nothing unphysical in the numerical solution. We observed that, for even larger time step sizes, the numerical solution approaches the initial function, which means that increasing the time step size is roughly equivalent to the slowing of the numerical diffusion process. The second conclusion we can draw is that the CNe formula is less effective than the theta formula if we have small, nearly equally sized cells directly adjacent to one another.
3.8. Solution for the Nonlinear Fisher Equation
Here, we examine a nonlinear reaction-diffusion equation, the so-called Fisher equation [27], in the following form:
$$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2} + \beta\, u\,(1 - u). \qquad (29)$$
This equation is also known as the Kolmogorov–Petrovsky–Piskunov or Fisher-KPP equation [28,29], and was originally developed to describe the spread of advantageous gene variants in space. We found the following analytical solution for α = 1 in the literature [30,31]:
$$u(x,t) = \left[1 + \exp\!\left(\sqrt{\tfrac{\beta}{6}}\,x - \tfrac{5\beta}{6}\,t\right)\right]^{-2}.$$
As in Section 3.7, we use this solution to obtain the initial condition:
and the Dirichlet boundary conditions at the ends of the interval:
In principle, the nonlinear term can be incorporated into the schemes in many different ways. We chose the following semi-implicit treatment:
$$\frac{u_i^{\mathrm{new}} - u_i}{h} = \beta\, u_i\left(1 - u_i^{\mathrm{new}}\right),$$
which can be rearranged into a fully explicit form:
$$u_i^{\mathrm{new}} = \frac{u_i\,(1 + \beta h)}{1 + \beta h\, u_i},$$
where $u_i$ is the most recent value of the variable after performing the previous stage calculations which describe the diffusion term. This operation is performed in two extra, separate stages, where a loop goes through all of the nodes (both odd and even). The first extra stage is after Stage 1, whereas the second is after (the original) Stage 4, at integer time levels, at the same moments in which the boundary conditions are recalculated.
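A sketch of such an extra nonlinear stage, written with the rearranged update above; it acts on all nodes (odd and even) at once, elementwise:

```matlab
beta = 2; h = 1e-3;                    % placeholder reaction coefficient and time step
u = rand(100,1);                       % values after the preceding diffusion stages
u = u*(1 + beta*h) ./ (1 + beta*h*u);  % semi-implicit Fisher term, rearranged to explicit form
```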
With this procedure, we observed that the new methods are very stable for Fisher’s equation and behave very similarly to the linear case. We solved Equation (29) for several different values of the parameters α and β. We present the errors as a function of the time step size for one representative parameter set in Figure 13, but we must note that very similar curves were produced for other values of the parameters.
We underline that the goal of this subsection is not the systematic investigation of the new methods in the case of nonlinear equations, which is planned to be performed in our future works. Here, we only show that these leapfrog-hopscotch methods can be successfully applied to nonlinear equations, even if the coefficient of the nonlinear term is large.
3.9. Analytical Investigations
We start with the analysis of the convergence properties of the five most advantageous methods in the one-dimensional case for constant values of mesh ratio r, that is, when the equidistant spatial discretization is fixed.
Theorem 1.
The order of convergence of the L1...L5 leapfrog-hopscotch algorithms is two for the system (6) of linear ODEs
$$\frac{d\vec{u}}{dt} = M\vec{u}, \qquad \vec{u}(t_0) = \vec{u}_0,$$
where M is introduced in (7) and $\vec{u}_0$ is an arbitrary initial vector.
The proof of Theorem 1 is presented in Appendix A.
When analyzing the stability of the methods, we distinguish two types of algorithms. In the first type, only the CNe and the θ = 1 formulas are present, in which the new values are convex combinations of the old values; see Remark 1. We also recall a lemma [32] (p. 28) stating that a convex combination of convex combinations is again a convex combination. These facts imply the following:
Corollary 2.
The final u values are convex combinations of the initial u values in the case of all leapfrog-hopscotch algorithms containing only the CNe and the θ = 1 formulas.
This means that all of these algorithms, and particularly L1 (C, C, C, C, C), are positivity preserving (more precisely, they satisfy the minimum and maximum principle), which is a significantly stronger property than mere unconditional stability.
By comparison, the L2–L5 algorithms contain the theta formula for θ < 1; thus, they are not positivity preserving. We examine their stability through the eigenvalues of their amplification matrices. Because Stage 0 is not repeated, it cannot affect stability. The smallest repeating unit in the L2 (0, ½, ½, ½, ½) and L3 (⅕, ½, ½, ½, ½) combinations consists of only two stages containing the θ = ½ formula; therefore, it is sufficient to construct these two stages in matrix form. If we denote the matrices of the θ = ½ formula acting on the odd and on the even nodes by two separate stage matrices, then these matrices have the following elements: in the rows of the nodes treated in the given stage, the diagonal element is $(1-r)/(1+r)$ and the two elements belonging to the neighbors are $r/(1+r)$, whereas the rows of the remaining nodes are identical to those of the unit matrix,
and periodic boundary conditions are taken into account while calculating the first and the last rows. The amplification matrix of the repeating unit is then the product of these two stage matrices.
Using the Maple software we analytically calculated the absolute values of the eigenvalues of this matrix for several values of N and found that none of them exceeds 1, which implies stability for an arbitrary time step size. As an illustration, we present these in Figure 14 as a function of the mesh ratio r, for N = 20.
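The following sketch repeats this check numerically with the θ = ½ update (11) and periodic boundaries; the construction of the two stage matrices and of their product is our own reading of the description above:

```matlab
N = 20;
for r = logspace(-2, 3, 50)                       % sweep the mesh ratio over five decades
    a = (1 - r)/(1 + r);  b = r/(1 + r);          % coefficients of formula (11) for theta = 1/2
    S_odd = eye(N);  S_even = eye(N);
    for i = 1:N
        im = mod(i-2, N) + 1;  ip = mod(i, N) + 1;        % periodic neighbors
        if mod(i, 2) == 1                                 % row of a node updated in the odd stage
            S_odd(i,:) = 0;  S_odd(i,i) = a;  S_odd(i,im) = b;  S_odd(i,ip) = b;
        else                                              % row of a node updated in the even stage
            S_even(i,:) = 0; S_even(i,i) = a; S_even(i,im) = b; S_even(i,ip) = b;
        end
    end
    H = S_odd*S_even;                             % repeating unit: even stage first, then odd stage
    assert(max(abs(eig(H))) <= 1 + 1e-9)          % the spectral radius never exceeds 1
end
```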
We also constructed the amplification matrix of the L4 (¼, ½, C, ½, ½) and L5 (⅕, ½, C, ½, ½) combinations by the multiplication of the appropriate four matrices of Stages 1–4 in a similar manner, and obtained essentially the same results. These calculations provide strong evidence that these algorithms are unconditionally stable.
4. Discussion and Summary
The present paper is the natural continuation of our previous work on explicit algorithms for solving the time-dependent diffusion equation. In the current work, we combined the hopscotch space structure with leapfrog time integration. By applying the well-known theta method with nine different values of θ and the recently invented CNe method, we constructed 10^5 combinations. Via subsequent numerical experiments, this huge number was decreased by excluding the combinations that underperformed and, finally, only the top five of these were retained. We used two two-dimensional stiff systems containing 10,000 cells with completely discontinuous random parameters and initial conditions, and presented the results of these five algorithms, in addition to those of several other methods, for these systems. For the purpose of verification, two analytical solutions were used. One was reproduced using a non-equidistant grid. The second was a completely new solution containing the Kummer M function for time-dependent thermal diffusivity, presented here for the first time. We also solved the nonlinear Fisher equation and demonstrated that our algorithms can handle the nonlinearity of this equation without losing stability. Then, we provided a rigorous proof that the top five new methods have second-order convergence for fixed spatial discretization, and also showed by analytical calculations that they have very good stability properties.
We found that, in general, the L2 (0, ½, ½, ½, ½) combination is the most competitive. This combination yields accurate results orders of magnitude faster than the well-optimized MATLAB routines, and it is more accurate than all of the other examined explicit and stable methods. Although, unlike the L1 (C, C, C, C, C) algorithm, L2 is not positivity preserving, it is surprisingly robust for relatively large time step sizes, which is not the case for the original OEH algorithm. Moreover, this new L2 algorithm is easy to implement and requires even fewer function evaluations and less computer memory than the conventional explicit second-order Runge–Kutta methods, such as the Heun method. We can conclude that it combines the most important advantages of the standard explicit and the implicit methods.
Our next aim is to search for more accurate algorithms for nonlinear parabolic equations. We plan to construct new analytical solutions for cases in which the heat conduction coefficients depend on space, and use these to test both the new and the old methods. These results will be of significant interest in the simulation of real engineering problems, such as heat transfer by conduction and radiation in elements of machine tools.
Author Contributions
Conceptualization and methodology, E.K.; software, Á.N. and I.O.; validation, E.K., H.K. and I.O.; formal analysis, I.F.B. (analytical solutions), E.K. (analytical proofs); investigation, Á.N., I.O. and H.K.; resources, E.K. and G.B.; data curation, Á.N.; writing—original draft preparation, E.K. and I.F.B.; writing—review and editing, G.B., Á.N.; visualization, Á.N. and H.K.; supervision, E.K. and G.B.; project administration, E.K. and Á.N.; funding acquisition, G.B. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by Project No. 129257 implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the K_18 funding scheme.
Appendix A
where all u terms on the right-hand side are taken at time t. Let us substitute the spatially discretized ODE:
to obtain:
After simplification one obtains:
Thus, for a double time step:
We show that the numerical solution is identical to (A5). We use series expansions up to the second order and, for the sake of brevity, we omit the higher-order terms; thus, for example, we write $e^{-2r} \approx 1 - 2r + 2r^2$, etc. Starting with the L1 combination, (13) can be written as:
Let us calculate the value of u at the next time step:
We also repeat the same calculation for the left neighbors:
The right neighbors can be treated similarly:
Substituting (A8) and (A9) back to (A7) we obtain:
The first and the last terms on the right-hand side of (A10) are already equivalent to the appropriate terms in (A5). The second and third terms, which contain the values of the first neighbors, need some more manipulation, as follows:
Now, we can cope with the second and third term on the right-hand side of (A10):
Substituting these back into (A10), one obtains (A5), which completes the proof for the L1 algorithm.
For the theta method, we use the power series expansion to find, for the denominator of Equation (11), that $\frac{1}{1+2\theta r} \approx 1 - 2\theta r + 4\theta^2 r^2$; thus, rather than (A6), we now have:
For θ = ½, this is the same as (A6); thus, the proof presented above for the L1 method is also valid for the L2...L5 combinations. One can also see that for arbitrary combinations of values of theta, the second-order property does not hold. □
References
Saleh, M.; Nagy, Á.; Kovács, E. Construction and investigation of new numerical algorithms for the heat equation: Part 1. Multidiszcip. Tudományok 2020, 10, 323–338.
Saleh, M.; Nagy, Á.; Kovács, E. Construction and investigation of new numerical algorithms for the heat equation: Part 2. Multidiszcip. Tudományok 2020, 10, 339–348.
Saleh, M.; Nagy, Á.; Kovács, E. Construction and investigation of new numerical algorithms for the heat equation: Part 3. Multidiszcip. Tudományok 2020, 10, 349–360.
Nagy, Á.; Saleh, M.; Omle, I.; Kareem, H.; Kovács, E. New stable, explicit, shifted-hopscotch algorithms for the heat equation. Math. Comput. Appl. 2021, 26, 61.
António, J.; Tadeu, A.; Godinho, L.; Simões, N. Benchmark solutions for three-dimensional transient heat transfer in two-dimensional environments via the time Fourier transform. Comput. Mater. Contin. 2005, 2, 1–11.
Zoppou, C.; Knight, J. Analytical solution of a spatially variable coefficient advection–diffusion equation in up to three dimensions. Appl. Math. Model. 1999, 23, 667–685.
Kovács, E. A class of new stable, explicit methods to solve the non-stationary heat equation. Numer. Methods Partial Differ. Equ. 2021, 37, 2469–2489.
Gordon, P. Nonsymmetric Difference Equations. J. Soc. Ind. Appl. Math. 1965, 13, 667–673.
Gourlay, A.R. Hopscotch: A Fast Second-order Partial Differential Equation Solver. IMA J. Appl. Math. 1970, 6, 375–390.
Gourlay, A.R.; McGuire, G.R. General Hopscotch Algorithm for the Numerical Solution of Partial Differential Equations. IMA J. Appl. Math. 1971, 7, 216–227.
Hockney, R.W.; Eastwood, J.W. Computer Simulation Using Particles; Taylor & Francis: Boca Raton, FL, USA, 1989.
Frenkel, D.; Smit, B.; Ratner, M.A. Understanding Molecular Simulation: From Algorithms to Applications. Phys. Today 1997, 50, 66.
Iserles, A. Generalized Leapfrog Methods. IMA J. Numer. Anal. 1986, 6, 381–392.
Hirsch, C. Numerical Computation of Internal and External Flows, Volume 1: Fundamentals of Numerical Discretization; Wiley: Hoboken, NJ, USA, 1988.
Verwer, J.G.; Sommeijer, B.P. Stability Analysis of an Odd–Even-Line Hopscotch Method for Three-Dimensional Advection–Diffusion Problems. SIAM J. Numer. Anal. 1997, 34, 376–388.
Holmes, M.H. Introduction to Numerical Methods in Differential Equations; Springer: New York, NY, USA, 2007.
Munka, M.; Pápay, J. 4D Numerical Modeling of Petroleum Reservoir Recovery; Akadémiai Kiadó: Budapest, Hungary, 2001.
Kovács, E. New stable, explicit, first order method to solve the heat conduction equation. J. Comput. Appl. Mech. 2020, 15, 3–13.
Mátyás, L.; Barna, I.F. General self-similar solutions of diffusion equation and related constructions. arXiv 2021, arXiv:2104.09128.
Sedov, L.I. Similarity and Dimensional Methods in Mechanics; CRC Press: Boca Raton, FL, USA, 2018.
Barna, I.F.; Kersner, R. Heat conduction: A telegraph-type model with self-similar behavior of solutions. J. Phys. A Math. Theor. 2010, 43, 375210.
Barna, I.F.; Bognár, G.; Guedda, M.; Mátyás, L.; Hriczó, K. Analytic self-similar solutions of the Kardar–Parisi–Zhang interface growing equation with various noise terms. Math. Model. Anal. 2020, 25, 241–256.
Olver, F.W.J.; Lozier, D.W.; Boisvert, R.F.; Clark, C.W. NIST Handbook of Mathematical Functions; Cambridge University Press: New York, NY, USA, 2011.
Kolmogorov, A.; Petrovskii, I.; Piscunov, N. A study of the equation of diffusion with increase in the quantity of matter, and its application to a biological problem. Bull. Mosc. Univ. Math. Mech. 1937, 1, 1–25.
Li, Y.; Van Heijster, P.; Marangell, R.; Simpson, M.J. Travelling wave solutions in a negative nonlinear diffusion–reaction model. J. Math. Biol. 2020, 81, 1495–1522.
Bastani, M.; Salkuyeh, D.K. A highly accurate method to solve Fisher’s equation. Pramana J. Phys. 2012, 78, 335–346.
Agbavon, K.M.; Appadu, A.R.; Khumalo, M. On the numerical solution of Fisher’s equation with coefficient of diffusion term much smaller than coefficient of reaction term. Adv. Differ. Equ. 2019, 2019, 146.
Hiriart-Urruty, J.-B.; Lemaréchal, C. Fundamentals of Convex Analysis; Springer: Berlin, Germany, 2001.
Figure 1.
Diagram of the original leapfrog algorithm.
Figure 2.
The diagram of the odd-even hopscotch structures for two nodes. The numbers in the boxes represent the different formulas, whereas the areas surrounded by thick red lines are the repeating units. (a) The original OEH algorithm. (b) The new leapfrog-hopscotch structure. The green and orange boxes are non-repeating stages.
Figure 3.
Visualization of the generalized variables. The red arrows symbolize conductive (heat) transport between cells with heat capacities Ci and Cj through the resistances Rij.
Figure 4.
Errors as a function of the parameter θ for the first (moderately stiff) system, in the case of the method L4–5 (θ, ½, C, ½, ½).
Figure 5.
Energy errors as a function of the time step size for the first (moderately stiff) system, in the case of the original OEH method (OEH REF), the new algorithms L1–L5, and several other methods.
Figure 6.
Errors as a function of the running time for the first (moderately stiff) system, in the case of the original OEH method (OEH REF), the new algorithms L1–L5, and several other methods.
Figure 7.
Energy errors as a function of the time step size for the second (very stiff) system, in the case of the original OEH method (OEH REF), the new algorithms L1–L5, and some other methods.
Figure 8.
Errors as a function of the running time for the second (very stiff) system, in the case of the original OEH method (OEH REF), one stage CNe method, the new algorithms L1–L5, and several other methods.
Figure 9.
The errors as a function of time step size h for the space-dependent mesh to reproduce the exact solution given in (24).
Figure 10.
The M part of the solutions in the case of the time-dependent diffusion coefficient of Equation (25). (a) Shape functions for different exponents n of the diffusion parameter; the black, red, blue dotted, green dashed-dotted, and brown dashed lines belong to different values of n. (b) Illustration of the solution given in Equation (28).
Figure 11.
The errors as a function of time step size h for the problem with a time-dependent coefficient.
Figure 12.
The graphs of the u functions for the time-dependent coefficient. Here, u0 is the initial function, uref is the analytical solution at the final time, and L2 is the corresponding numerical solution produced by the L2 (0, ½, ½, ½, ½) algorithm.
Figure 13.
The errors as a function of time step size h for the numerical solutions of Fisher’s equation.
Figure 14.
The graphs of the eigenvalues of the amplification matrix as a function of the mesh ratio r for N = 20.
Table 1.
Comparison of different leapfrog-hopscotch algorithms with other methods for the moderately stiff system of ten thousand cells.
Rows of Table 1 (Numerical Method): ode45, ode23, ode113, ode15s, ode23s, ode23t, ode23tb, CNe, LNe, LNe3, DF, Rev. hop., L1, L2, L3, L4, L5, S1, S4, Heun, CrN (inv), CrN (lins).
Table 2.
Comparison of different algorithms for the very stiff system of ten thousand cells.
Columns of Table 2: Numerical Method, Error L∞, Error L1, Energy Error, Running Time (s). Rows: ode45, ode23, ode113, ode15s, ode23s, ode23t, ode23tb, CNe, LNe, LNe3, DF, Rev. hop., L1, L2, L3, L4, L5, S1, S4, Heun, CrN (inv), CrN (lins).
Table 3.
ARE (aggregated relative error) quantities of different explicit stable algorithms.
Columns of Table 3: Numerical Method, ARE (Mod. Stiff), ARE (Very Stiff). Rows: CNe, LNe, LNe3, Dufort–Frankel, Reversed hopscotch, L1 (C, C, C, C, C), L2 (0, ½, ½, ½, ½), L3 (⅕, ½, ½, ½, ½), L4 (¼, ½, C, ½, ½), L5 (⅕, ½, C, ½, ½), S1 (C, C, C, C, C), S4 (0, ½, ½, ½, 1).