Article

Efficient Numerical Schemes for a Heterogeneous Reaction–Diffusion System with Applications

by
Samima Akhter
1,
Md. Ariful Islam Arif
1,
Rubayyi T. Alqahtani
2,* and
Samir Kumar Bhowmik
1
1
Department of Mathematics, University of Dhaka, Dhaka 1000, Bangladesh
2
Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11564, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 355; https://doi.org/10.3390/math13030355
Submission received: 31 December 2024 / Revised: 15 January 2025 / Accepted: 17 January 2025 / Published: 23 January 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract
In this study, a class of nonlinear heterogeneous reaction–diffusion systems (RDSs) is considered that arises in modeling epidemiological interactions, environmental sciences, and chemical and ecological systems. Numerical and analytic solutions for this kind of variable-medium nonlinear RDS are challenging. This article develops a few highly accurate numerical schemes for such problems. For the spatial integration of the heterogeneous RDS, a few finite difference schemes, a Bernstein collocation scheme, and a Fourier transform scheme were employed. The stability and accuracy of the spectral schemes were analyzed to confirm the order of convergence of the approximation. A few methods of lines were then used for the temporal integration of the resulting semidiscrete model. It was confirmed theoretically that the spectral/pseudo-spectral method is very efficient and highly accurate for such a model. A fast and efficient solver for the resulting fully discrete system is highly desirable; here, a Newton–GMRES–Multigrid solver was applied. It is demonstrated in tabular form that the multigrid accelerated Newton–GMRES solver outperforms most linear solvers for such a model.

1. Introduction

The spatial characteristics of the environment can be represented by various kinds of PDEs in, for example, ecology, which combines chemical, biological, geological, and physical processes with various types and levels of heterogeneity. Heterogeneous PDEs of reaction–diffusion type are employed in biological and chemical systems, in population dynamics, and in epidemics. Heterogeneity is associated with uneven changes in a system governed by general rules and also with variations in external conditions. It may enter through the initial values of the modeled problem (the dependent variables), through space, or through time-varying property values (see, e.g., [1]). Furthermore, heterogeneity in reaction–diffusion systems increases the difficulty of obtaining exact solutions. In 1952, Alan Turing first proposed a seminal mechanism for self-organizing emergence, such as the development of different patterns in various biological systems. He considered a reaction–diffusion system in which two concentrations diffuse and interact through reaction terms and showed that, for suitable reaction terms, a small variation in the concentrations near the dynamic equilibrium can be amplified heterogeneously. The resulting behavior of the Turing model with fixed constant coefficients is generally highly periodic and usually fails to reproduce real scenarios in many real-life applications [2,3].
The reaction–diffusion system (RDS) model is a time-dependent partial differential equation that describes physical realities. A detailed mathematical treatment, especially numerical schemes, has numerous applications in understanding and classifying structures and patterns in engineering and biological systems [3,4]. The numerical range of parameter values and the physical meaning of the boundary conditions are explained in detail in [3,5] and some references therein. A class of highly accurate numerical schemes is available for mathematical models that contain diffusion operators in a homogeneous medium [6,7]. Several works explain the applications of RDSs in epidemics [8,9]. In [8,9], the authors analyzed a few RDS models used for describing epidemic processes, with the main attention on RDSs arising in COVID-19 pandemic modeling; they studied traveling wave solutions, exact solutions, and parameter estimation. The authors of [8] considered a class of RDSs capable of modeling the development and continuation of the COVID-19 pandemic in both space and time. That study confirmed that some phenomena cannot be captured using ODEs, and thus a reaction–diffusion system with a network structure may help to better understand the progression of an epidemic. In [10], the authors treated stochastic reaction–diffusion processes using graph neural networks trained on cheap Monte Carlo simulations of reaction–diffusion processes, and they analyzed the stability and accuracy of the schemes. Thus, a rigorous numerical study of RDSs is highly needed; the ideas developed by these authors can be implemented for the problem considered here, which points to a good future research direction.
The dynamics of the progression of epidemics and of chemical and ecological systems may depend on heterogeneous ordering [3,5,8,9]. Heterogeneities are an underlying feature of environmental and biological reaction–diffusion systems. There is a growing interest in mathematical models where heterogeneity plays a significant role, including heterogeneous reaction–diffusion systems, where the diffusion parameters depend on space and the reaction terms depend explicitly on space and/or time. These types of systems can also be found in the Belousov–Zhabotinsky reaction, cardiac cell models, and neuron cell dynamics.
Recently, it has become clear that spatial heterogeneity has a significant impact on ecological dynamics [3]. This work focuses on the study of the numerical approximations of two-component RDSs [4,8,11]:
u_t = \delta_1(x, y)\, \Delta u + f(u, v), \qquad v_t = \delta_2(x, y)\, \Delta v + g(u, v),   (1)
where \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}, f and g are nonlinear reaction terms, and (x, y) \in \Omega \subset \mathbb{R}^2. One of the most widely used such nonlinearities is the following:
f(u, v) = r\, u \left( 1 - \frac{u}{w} \right) - p\, v\, h(k u), \qquad g(u, v) = q\, v\, h(k u) - s\, v,
where u(x, t) and v(x, t) are the concentrations of the prey and predator populations at time t and spatial position x ∈ Ω. The model parameters r, w, p, k, q, s and δ_i, i = 1, 2, lie in (0, ∞); the diffusion coefficients δ_i, i = 1, 2, may depend on the spatial variables. The first reaction term, r u(1 − u/w), determines how quickly the quantity u grows, while δ_1 Δu is the diffusion term. Here, the functional response h ∈ C²[a, b] satisfies [4]
  • h(x) = 0 at x = 0;
  • h(x) → 1 as x → ∞;
  • h′(x) > 0.
One such choice of h is h(x) = x/(α + x) (forming kinetics 1), which leads, for example, to the nonlinearity (f, g) = ( u(1 − u) − u v/(α + u),  v(β u/(α + u) − γ) ), whereas another option is the function h(x) = 1 − e^{−x} (forming kinetics 2) [4].
In addition, the following reaction kinetics can be found in the Gray–Scott model:
f(u, v) = F(1 - u) - u v^2, \qquad g(u, v) = u v^2 - (F + k)\, v,   (2)
where u and v are the concentrations of the two chemical components, F > 0 denotes the dimensionless flow rate of the reaction, and k > 0 indicates the rate constant of the activator. Thus, a few different types of kinetics are considered in this study. Stability issues and the long-time dynamics of the RDS with these two nonlinearities have been discussed thoroughly in [3,4].
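As a concrete illustration of the reaction terms used below, the following Python sketch collects kinetics 1, kinetics 2, and the Gray–Scott kinetics as plain functions. This is our own illustration rather than the authors' MATLAB code, and the argument scaling inside h for kinetics 2 is an assumption.

```python
import numpy as np

def kinetics1(u, v, alpha=0.4, beta=2.0, gamma=0.6):
    """Kinetics 1: functional response h(x) = x/(alpha + x)."""
    f = u * (1.0 - u) - u * v / (alpha + u)
    g = beta * u * v / (alpha + u) - gamma * v
    return f, g

def kinetics2(u, v, alpha=0.4, beta=2.0, gamma=0.6):
    """Kinetics 2: functional response h(x) = 1 - exp(-x)."""
    h = 1.0 - np.exp(-u)          # argument scaling (e.g., h(k*u)) is an assumption
    f = u * (1.0 - u) - v * h
    g = beta * v * h - gamma * v
    return f, g

def gray_scott(u, v, F=0.0024, k=0.06):
    """Gray-Scott reaction terms of Equation (2)."""
    f = F * (1.0 - u) - u * v**2
    g = u * v**2 - (F + k) * v
    return f, g
```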
This kind of heterogeneous RDS is an extensive field of study: it replicates diverse predator–prey interactions and represents a broad spectrum of biologically, chemically, and environmentally relevant behavior arising from intrinsic variables alone. A reaction–diffusion system was considered in [4], where the authors detailed a finite difference discretization and illustrated numerical solutions; they also studied the parameter-sensitive dynamical behavior of the solutions of the RDS. The study [11] treated the same problem as [4] and focused on developing an efficient linear system solver for the RDS. There, the author used a finite difference scheme similar to that of [4] and employed a few linear system solvers for the resulting system of equations. That study showed that a multigrid accelerated GMRES solver was a time-efficient technique for the homogeneous RDS considered.
Detailed mathematical and physical studies of space-varying diffusion parameters in an RDS can be found in [3], where the author rigorously studied the effect of heterogeneous diffusion coefficients on pattern formation in biological and chemical systems. It was confirmed there that space-varying diffusion coefficients can be used to modify classical Turing patterns. The author used the Galerkin method for the space discretization, discussed the analytic solution properties in detail, and studied the generic instability of the solutions and the physical mechanisms of real-life problems that lead to heterogeneity. The heterogeneous nonlinear problem of type (1) also arises in the progression of epidemics [8,10].
Studying numerical methods for heterogeneous RDSs is important, as well as very challenging, due to the variability of the parameter values. A few studies have addressed the numerical solution of this problem [3]. To the best of our knowledge, no existing solver is very fast and efficient for a heterogeneous RDS of type (1). The implementation of a spectral method for such a system of nonlinear PDEs is also challenging, and a multigrid preconditioned linear solver for such a model has not been studied yet. A fast algorithm would therefore undoubtedly assist applied scientists in employing these kinds of models in their research. Motivated by the studies described above, in this work the Bernstein polynomial scheme, the Fourier transform scheme, and a few finite difference schemes were employed for the spatial integration of (1), followed by a few methods of lines for the time integration. The stability and accuracy of the spectral schemes were analyzed to confirm the order of convergence of the approximation. Theoretically, it is established that the spectral and pseudo-spectral methods are highly accurate and efficient for this type of model. The schemes are illustrated through several graphical representations of solutions. The combination of these schemes yields a fully discrete system of nonlinear equations, which is challenging to solve efficiently. A preconditioned solver was employed for this system; to be specific, a Newton–GMRES–Multigrid solver was applied to the resulting fully discrete nonlinear system of equations. Our illustrations confirm that the multigrid accelerated Newton–GMRES solver outperforms most linear solvers for the fully discrete heterogeneous RDS.
The rest of the article is organized in the following manner. Section 2 deals with the Bernstein polynomial and Fourier transform schemes for the spatial domain; the stability and accuracy of these approximations are also analyzed there. Section 3 contains a few fully discrete finite difference schemes together with the multigrid accelerated Newton–GMRES method for the resulting nonlinear system. A detailed numerical illustration of the schemes discussed in Section 2 and Section 3 is presented and analyzed in Section 4. A few concluding remarks are included in Section 5.

2. Spectral Schemes for the RDS

In contrast to finite difference methods, spectral methods are global: the calculation at any given location is based on information from the entire domain, not just the surrounding points. Spectral approaches converge very rapidly, often exponentially for smooth solutions, making them more accurate than local methods. When the solution changes significantly in time or space, when extremely high spatial resolution is required, or when long-term integration is required, global methods are preferred over local methods.

2.1. Bernstein Collocation Scheme

In numerical analysis, a Bernstein polynomial is typically defined as a superposition of Bernstein basis polynomials (BBPs) [12]. Here, our goal is to use the Bernstein collocation method for the spatial integration of the model. The accuracy of such polynomial approximations has been examined in detail in [13,14,15]. Before we move on to the main discussion, a brief description of Bernstein polynomials over the domains [0, 1] and [a, b] is included here.
  • Bernstein Basis Polynomials (BBPs) in [ 0 , 1 ]
BBPs of degree n over [ 0 , 1 ] can be defined as [12,16]
b_{n,k}(x) := \binom{n}{k}\, x^{k} (1 - x)^{n-k}, \qquad k = 0, 1, 2, \ldots, n,
where \binom{n}{k} are the binomial coefficients. The corresponding Bernstein polynomial of degree n can be defined by [12,17]
B_n(x) = \sum_{k=0}^{n} c_k\, b_{n,k}(x),
where c_k are known as the Bernstein coefficients. These polynomials satisfy [12,13,17]
  • b_{n,k}(x) = 0, if k < 0 or k > n,
  • b_{n,k}(x) ≥ 0, for x ∈ [0, 1],
  • b_{n,k}(1 − x) = b_{n,n−k}(x),
and the derivatives of such polynomials satisfy
b'_{n,k}(x) = n \left( b_{n-1,k-1}(x) - b_{n-1,k}(x) \right),
b^{(l)}_{n,k}(0) = \frac{n!}{(n-l)!} \binom{l}{k} (-1)^{k+l},
b^{(l)}_{n,k}(1) = (-1)^{l}\, b^{(l)}_{n,n-k}(0).
Let us consider F ∈ C[0, 1]. One of the best uses of such polynomials is to approximate F efficiently. The efficiency is guaranteed by the famous Weierstrass approximation theorem, which provides the following results [12,14,18]:
B_n(F)(x) = \sum_{k=0}^{n} F\!\left( \frac{k}{n} \right) b_{n,k}(x),
\left\| B_n(F)^{(l)} \right\|_{\infty} \le \frac{(n)_l}{n^{l}} \left\| F^{(l)} \right\|_{\infty}, \qquad (n)_l = n(n-1)\cdots(n-l+1),
and
\left\| F^{(l)} - B_n(F)^{(l)} \right\|_{\infty} \to 0 \quad \text{as } n \to \infty.
  • Bernstein Basis Polynomials (BBPs) in [ a , b ]
Now, we aim to define such polynomials over an arbitrary domain [a, b]. They can be defined over [a, b] as [12]
b_{n,j}(x) = \binom{n}{j} \frac{(x - a)^{j} (b - x)^{n-j}}{(b - a)^{n}}, \qquad j \in \mathbb{N},\ 0 \le j \le n,
and they satisfy the properties stated above for the same polynomials over [0, 1]. Now, letting h = \frac{b - a}{n} and x_i = x_0 + i h, i = 0, 1, 2, \ldots, n, a general form of the Bernstein polynomials of degree n over [a, b] can be written as
\beta_{i,n}(x) = \binom{n}{i} \frac{(x - x_0)^{i} (x_n - x)^{n-i}}{(x_n - x_0)^{n}},
where i = 0 , 1 , , n . These polynomials satisfy the properties stated above. These polynomials also satisfy the following recurrence relations [12]:
\beta_{i,n}(x) = \frac{x_n - x}{x_n - x_0}\, \beta_{i,n-1}(x) + \frac{x - x_0}{x_n - x_0}\, \beta_{i-1,n-1}(x),
\beta'_{i,n}(x) = \frac{n}{x_n - x_0} \left( \beta_{i-1,n-1}(x) - \beta_{i,n-1}(x) \right),
\beta''_{i,n}(x) = \frac{n(n-1)}{(x_n - x_0)^{2}} \left( \beta_{i-2,n-2}(x) - 2\beta_{i-1,n-2}(x) + \beta_{i,n-2}(x) \right).
Thus, it is evident that
\beta^{(m)}(x) = D^{m} \beta_{n-m}(x), \quad \text{where } \beta_{n-m} = \left[ \beta_{0,n-m} \;\; \beta_{1,n-m} \;\; \cdots \;\; \beta_{n,n-m} \right]^{T},
and D^{m} is a derivative operator of order m [12,19]:
D^{m} b_{N,j}(x) = \frac{1}{(b-a)^{m}} \frac{N!}{(N-m)!} \sum_{l=\max(0,\, j+m-N)}^{\min(j,\, m)} p_l\, b_{j-l,\, N-m}(x).
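To make the preceding definitions concrete, the short Python sketch below (an illustration under the notation above, not the authors' implementation) evaluates the Bernstein basis β_{i,n} on an interval [a, b] and the operator B_n(F), so that the Weierstrass-type convergence ‖F − B_n(F)‖_∞ → 0 can be observed numerically.

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(i, n, x, a=0.0, b=1.0):
    """Bernstein basis polynomial beta_{i,n} on [a, b]."""
    return comb(n, i) * (x - a)**i * (b - x)**(n - i) / (b - a)**n

def bernstein_approx(F, n, x, a=0.0, b=1.0):
    """Bernstein operator B_n(F)(x) = sum_i F(a + i*(b-a)/n) * beta_{i,n}(x)."""
    nodes = a + (b - a) * np.arange(n + 1) / n
    return sum(F(nodes[i]) * bernstein_basis(i, n, x, a, b) for i in range(n + 1))

# Observe the (slow but guaranteed) uniform convergence for a smooth F
F = np.sin
x = np.linspace(0.0, 1.0, 201)
for n in (8, 32, 128):
    print(n, np.max(np.abs(F(x) - bernstein_approx(F, n, x))))
```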
Here, we recall the one dimensional version of the RDS (1) by
u_t = \delta_1(x)\, u_{xx} + f(u, v), \qquad v_t = \delta_2(x)\, v_{xx} + g(u, v).   (3)
Let the approximate solutions for (3) be
u_h(x, t) = \sum_{j=0}^{M} C^{u}_{j}(t)\, \beta_{j,M}(x), \quad \text{and} \quad v_h(x, t) = \sum_{j=0}^{M} C^{v}_{j}(t)\, \beta_{j,M}(x),
where \beta_{j,M}(x) are the Bernstein basis polynomials defined above; at the collocation points x_k, k = 0, 1, \ldots, M,
u_h(x_k, t) = \sum_{j=0}^{M} C^{u}_{j}(t)\, \beta_{j,M}(x_k), \qquad v_h(x_k, t) = \sum_{j=0}^{M} C^{v}_{j}(t)\, \beta_{j,M}(x_k),
and this yields
\underline{u}_h(t) = B\, C^{u}(t), \quad \text{where } B = \big( \beta_{j,M}(x_k) \big) \text{ and } C^{u} = (C^{u}_{j}),
and
\underline{v}_h(t) = B\, C^{v}(t), \quad \text{where } C^{v} = (C^{v}_{j}).
Then, system (3) can be approximated by the following system of ordinary differential equations through the use of product approximation of the nonlinear part  [20]
B \frac{d}{dt} C^{u}(t) = M_{\alpha} D^{2} C^{u}(t) + B\, f(C^{u}, C^{v}), \qquad B \frac{d}{dt} C^{v}(t) = M_{\beta} D^{2} C^{v}(t) + B\, g(C^{u}, C^{v}),   (4)
where M_{\alpha} = \mathrm{diag}(\delta_1(x_j)) and M_{\beta} = \mathrm{diag}(\delta_2(x_j)). System (4) can be written in matrix–vector form as
B^{*} \frac{d C(t)}{dt} = M\, C(t) + B^{*} F,   (5)
where B^{*} = \begin{pmatrix} B & 0 \\ 0 & B \end{pmatrix}, \quad M = \begin{pmatrix} M_{\alpha} D^{2} & 0 \\ 0 & M_{\beta} D^{2} \end{pmatrix}, \quad C(t) = \begin{pmatrix} C^{u}(t) \\ C^{v}(t) \end{pmatrix}, \quad \text{and} \quad F = \begin{pmatrix} f(C^{u}, C^{v}) \\ g(C^{u}, C^{v}) \end{pmatrix}.
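A simplified, direct-collocation realization of the semidiscrete system (4) is sketched below in Python. It is an illustration under stated assumptions, not the authors' MATLAB code: the matrix B collects basis values at the collocation nodes, a matrix B2 collects second-derivative values obtained from the recurrence for β''_{i,n}, and the nonlinear terms are evaluated pointwise at the nodes rather than through the product approximation of [20].

```python
import numpy as np
from scipy.special import comb

def beta(i, n, x, x0, xn):
    # Bernstein basis beta_{i,n} on [x0, xn]; zero when i is outside 0..n
    if i < 0 or i > n:
        return np.zeros_like(np.asarray(x, dtype=float))
    return comb(n, i) * (x - x0)**i * (xn - x)**(n - i) / (xn - x0)**n

def beta_dd(i, n, x, x0, xn):
    # second derivative via the recurrence quoted above (requires n >= 2)
    c = n * (n - 1) / (xn - x0)**2
    return c * (beta(i - 2, n - 2, x, x0, xn)
                - 2.0 * beta(i - 1, n - 2, x, x0, xn)
                + beta(i, n - 2, x, x0, xn))

def collocation_matrices(M, x0, xn):
    xk = np.linspace(x0, xn, M + 1)          # collocation points
    B  = np.array([[float(beta(j, M, xk[k], x0, xn))    for j in range(M + 1)] for k in range(M + 1)])
    B2 = np.array([[float(beta_dd(j, M, xk[k], x0, xn)) for j in range(M + 1)] for k in range(M + 1)])
    return xk, B, B2

def rhs(t, C, B, B2, delta1, delta2, f, g, xk):
    # semidiscrete right-hand side: B dC/dt = diag(delta) B2 C + (nonlinearity at the nodes)
    m = len(xk)
    Cu, Cv = C[:m], C[m:]
    u, v = B @ Cu, B @ Cv                    # nodal values of u_h, v_h
    Fu, Gv = f(u, v), g(u, v)
    dCu = np.linalg.solve(B, delta1(xk) * (B2 @ Cu) + Fu)
    dCv = np.linalg.solve(B, delta2(xk) * (B2 @ Cv) + Gv)
    return np.concatenate([dCu, dCv])
```

The vector field rhs can then be handed to any stiff ODE integrator (for instance, scipy.integrate.solve_ivp with method="BDF") as the method-of-lines step.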
  • Two-Space Dimensional Implementation of the Scheme
Now, a two-space dimensional version of Bernstein interpolating polynomials can be defined through tensor product approximation by
u_h(x, y) = \sum_{j=0}^{N} \sum_{l=0}^{M} \gamma_{j,l}\, b_{N,j}(x)\, b_{M,l}(y),   (6)
where the coefficients \gamma_{j,l} can be computed from a sufficient number of collocation points. This idea consists of substituting the partial differential operators by the derivatives of the interpolating polynomial (6). Through vectorization, the differentiation matrix D^{r} can be implemented similarly to the Chebyshev pseudospectral method discussed in [14,21]. Now, the 2D version of the discrete diffusion operator on a square domain can be defined by
L_h^{1} = I \otimes (M_{\alpha} D^{2}) + (M_{\alpha} D^{2}) \otimes I, \qquad L_h^{2} = I \otimes (M_{\beta} D^{2}) + (M_{\beta} D^{2}) \otimes I,
where I is an identity matrix of the same order as D and ⊗ denotes the Kronecker (tensor) product.
Thus, the resulting semi-discrete RDS can be written as
\frac{d u_h}{dt} = L_h^{1} u_h + f_h(u_h, v_h), \qquad \frac{d v_h}{dt} = L_h^{2} v_h + g_h(u_h, v_h),
with a set of suitable initial conditions, and thus, the resulting coupled IVP can be solved using any standard implicit/explicit solvers. For more implementation details, we refer the reader to [14] (Chap. 13).
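The Kronecker-sum structure of L_h^1 and L_h^2 can be assembled directly once a one-dimensional matrix A = M_α D² is available. The sketch below is an illustrative NumPy version (A is assumed to come from any of the one-dimensional constructions discussed earlier; this is not the authors' implementation):

```python
import numpy as np

def kron_sum(A):
    """L_h = I (x) A + A (x) I (Kronecker sum), acting on vectorized 2-D grid data."""
    n = A.shape[0]
    I = np.eye(n)
    return np.kron(I, A) + np.kron(A, I)

def semidiscrete_rhs(U, V, Lh1, Lh2, f, g):
    # U, V are (n, n) nodal arrays; vectorize, apply the discrete operators, reshape back
    n = U.shape[0]
    dU = (Lh1 @ U.reshape(-1)).reshape(n, n) + f(U, V)
    dV = (Lh2 @ V.reshape(-1)).reshape(n, n) + g(U, V)
    return dU, dV
```

For large n, the dense Kronecker products would naturally be replaced by scipy.sparse.kron or by applying A along each array axis.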

2.2. Stability and Convergence of Bernstein Collocation Scheme

The analysis of the error resulting from the previously outlined numerical method is put forward in this section; we follow [22] for the theoretical analysis. The vector form of the system (3) can be expressed as follows:
u_t = \delta(x)\, \nabla^{2} u + f(u), \qquad x \in \Omega \subset \mathbb{R}^{2},   (7)
where \nabla^{2} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}. The vectors u and f are associated with a relevant coefficient matrix D. The following approximation results for this problem have a standard demonstration for Dirichlet boundary conditions.
Theorem 1.
Assume that u is the exact solution to problem (7), given the necessary initial and boundary conditions. Let f be an n times continuously differentiable function, and assume that u^N ∈ X_N is the approximate solution obtained by the Bernstein collocation method. Then the following relationship holds:
\| u^{N} \|^{2} + \int_{0}^{t} \delta\, \| \nabla^{2} u^{N} \|_{E}^{2}\, ds \le \| u_{0}^{N} \|^{2} + C \int_{0}^{t} \| f \|_{n}\, ds,
with C functioning as a constant.
Proof. 
Let u = (u_1, u_2), δ = (δ_1(x), δ_2(x)), and let u_0 be the given initial condition. Let k = (k_1, k_2) denote any pair of (positive or negative) integers, so that the Euclidean inner product in ℝ² is k · x = k_1 x_1 + k_2 x_2, and denote the set of multi-indices k = (k_1, k_2) by J. Assume that X is a Hilbert space and that, for a positive integer N, X_N and Y_N are finite-dimensional subspaces of X with the same dimension, and
X_N = \mathrm{span}\{\, b_{k,N} \;|\; u|_{\partial\Omega} = 0,\ k \in J \,\}.
Let P_N be the orthogonal projection onto X_N with respect to the ω-weighted L² inner product. The Bernstein collocation approximation is the function
u^{N}(x, t) = \sum_{k \in J} u^{N}(x_k, t)\, b_{k,N}(x),
where b_{k,N} denotes the characteristic Bernstein polynomial. The approximation satisfies the following equation at the collocation points:
\left( u^{N}_{t} - L^{N} u^{N} - f \right)(x_k, t) = 0, \qquad \text{for all } t > 0 \text{ and } k \in J.
Here, L^{N} u = δ ∇² u for all u ∈ X_N; this is the interpolation approximation of L u. The coercivity and continuity conditions are satisfied with E = H^{1}_{ω,0}(Ω), which is a Hilbert subspace of X, with the norm [22]
\| u \|_{E} = \left( \int_{\Omega} | \nabla u |^{2}\, d\Omega \right)^{1/2}.
The operator L^{N} u = δ ∇² u = δ \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) u in this scheme satisfies the following energy identity:
(L u, u) = -a(u, u) = -\int_{\Omega} | \nabla u |^{2}\, d\Omega.
Simply put, the square root of the integral on the right-hand side is the seminorm of the Hilbert space. Once more, we suppose that, for any N > 0, X_N is contained in E. Additionally, it is assumed that, for any u ∈ X_N, \|u\| ≤ C \|u\|_E, where C > 0 does not depend on N [22]. The following is the coercivity requirement for the approximation of (7): if there is a constant α > 0 (independent of N) such that
\alpha\, \| u \|_{E}^{2} \le -\left( L^{N} u, u \right)_{N} \qquad \forall\, u \in X_N,
then the approximation is stable. This means that the estimate
\| u^{N} \|_{E} \le \frac{C}{\alpha}\, \| f \|_{N}
holds. In fact, we have
\alpha\, \| u^{N} \|_{E}^{2} \le -\left( Q_N L^{N} u^{N}, u^{N} \right)_{N} = \left( Q_N f, u^{N} \right)_{N} \le \| Q_N f \|_{N}\, \| u^{N} \|_{N} \le C\, \| f \|_{N}.
Here, we exploit the fact that Q_N is the orthogonal projection operator onto Y_N with respect to the discrete inner product (u, v)_N. From this point forward, we assume that the continuity and coercivity conditions, which are the hypotheses presented in [22], are satisfied by the problem. We first examine an inner product formulation of the given equation:
\left( \partial_t u^{N}, v \right) = \left( \delta\, \nabla^{2} u^{N}, v \right) + \left( f(u^{N}), v \right).
Taking v = u^{N}(t), for each t > 0 we obtain [20]
\frac{1}{2} \frac{d}{dt}\, \| u^{N} \|^{2} \le -\delta\, \| \nabla^{2} u^{N} \|_{E}^{2} + \| f \|_{n}\, \| u^{N} \|^{2}.
After integration in time, we have
\| u^{N} \|^{2} + \int_{0}^{t} \delta\, \| \nabla^{2} u^{N} \|_{E}^{2}\, ds \le \| u_{0}^{N} \|^{2} + C \int_{0}^{t} \| f \|_{n}\, ds.
This establishes the Bernstein approximation’s stability.    □
For the investigation of spectral approximations of boundary-value problems, approximation results in Sobolev norms are significant. In this instance, it is more appropriate to project onto the space of polynomials satisfying the boundary data rather than merely onto the space of polynomials. Consider Dirichlet conditions at both ends of the interval [−1, 1]. The functions of H¹(−1, 1) that meet these requirements constitute a subspace, which is typically denoted by H¹₀(−1, 1).
Theorem 2.
Let u represent the exact solution to (7) with the Dirichlet boundary condition. Suppose that f is an n times continuously differentiable function, and assume that u^N ∈ X_N is the approximate solution of Equation (7). Then the inequality presented below is valid:
\| u - u^{N} \|_{E} \le \| e(0) \|_{N}^{2} + \left( C N^{1-n} \int_{0}^{t} | u_t(s) |_{H^{n-2}}^{2}\, ds + N^{1-n} | u(s) |_{H^{n-2}} \right)^{1/2} + C N^{1-n} \int_{0}^{t} | u_t(s) |_{H^{n-2}}^{2}\, ds + C N^{2-n} | u |_{H^{n-2}} + C N^{2-n} | f^{(n)} |_{H^{n-2}}.
Proof. 
To establish convergence, let us define e_r(t) = R_N u(t) − u^N(t), where R_N denotes the projection operator. The error function e_r(t) then satisfies the inequality [22]
\frac{1}{2} \frac{d}{dt} \| e_r \|^{2} + \delta\, \| e_r \|_{E}^{2} \le \left| \left( u_t - R_N u_t, e_r \right) + \left( u - R_N u, e_r \right) \right|.
For any function p ∈ X, we can define the norm
\| p \|_{E^{*}} = \sup_{v \in E,\, v \neq 0} \frac{(p, v)}{\| v \|_{E}}.
According to [22], \| p \|_{E^{*}} is the norm of p in the dual space E^{*} of E. Keep in mind that \| p \|_{E^{*}} \le C \| p \|_{X} for any p, and \| v \|_{X} \le C \| v \|_{E} for all v ∈ E. From the formulation above and the continuity of the operator L, it follows that
\left| \left( u_t - R_N u_t, e_r \right) + \left( u - R_N u, e_r \right) \right| \le C \left( \| u_t - R_N u_t \|_{E^{*}} + \| u - R_N u \|_{E} \right) \| e_r \|_{E}.
Thus, for each t > 0, the following error bound can be inferred:
\| e_r(t) \|_{N}^{2} + \delta \int_{0}^{t} \| e_r(s) \|_{E}^{2}\, ds \le \| e_r(0) \|_{N}^{2}
+ C \left( \int_{0}^{t} \| u_t - R_N u_t \|_{E^{*}}^{2}\, ds + \int_{0}^{t} \| u - R_N u \|_{E}^{2}\, ds \right) + C \int_{0}^{t} \frac{ \left| (R_N u_t, e_r) - (R_N u_t, e_r)_{N} \right| }{ \| e_r \|_{E}^{2} }\, ds
+ \int_{0}^{t} \frac{ \left| (L^{N} u, e_r) - (L^{N} R_N u, e_r)_{N} \right| }{ \| e_r \|_{E}^{2} }\, ds + \int_{0}^{t} \frac{ \left| (f, e_r) - (f, e_r)_{N} \right| }{ \| e_r \|_{E}^{2} }\, ds.
Thus,
\| u - u^{N} \|_{E} \le \| e_r(0) \|_{N}^{2} + \left( C N^{1-n} \int_{0}^{t} | u_t(s) |_{H^{n-2}}^{2}\, ds + N^{1-n} | u(s) |_{H^{n-2}} \right)^{1/2} + C N^{1-n} \int_{0}^{t} | u_t(s) |_{H^{n-2}}^{2}\, ds + C N^{2-n} | u |_{H^{n-2}} + C N^{2-n} | f^{(n)} |_{H^{n-2}}.
This completes the proof of convergence.    □

2.3. Fourier Spectral Method

In this section, motivated by [14,23,24], the Fourier spectral method is employed for the spatial integration of the PDEs (1) in the two-space-dimensional case in a homogeneous medium (constant δ_i). Ref. [23] was followed to generate the mathematical formulas and computational algorithms used here; the one-dimensional formulation is similar to the 2D case and is essentially straightforward. To that end, we consider the 2D system of PDEs in Equation (1). Since periodic boundary conditions are imposed, the Fast Fourier Transform (FFT) is employed to approximate the spatial derivatives (u_xx, u_yy, v_xx, and v_yy), which are called spectral derivatives [14]. The problem is considered in a periodic domain to facilitate the FFT and inverse FFT.
Now, from the properties of the discrete Fourier transformation formulae [14], it is evident that
\mathcal{F}\left( u_{xx}(x, t) \right) = (i \xi)^{2}\, \hat{u}(\xi, t) = -\xi^{2}\, \hat{u}(\xi, t), \qquad \mathcal{F}\left( v_{xx}(x, t) \right) = (i \xi)^{2}\, \hat{v}(\xi, t) = -\xi^{2}\, \hat{v}(\xi, t),
and that
\hat{u}_{\xi\xi}(0) = -\Lambda\, \hat{u}(0), \qquad \hat{v}_{\xi\xi}(0) = -\Lambda\, \hat{v}(0),
where \Lambda = \mathrm{diag}(\xi_1^{2}, \xi_2^{2}, \ldots, \xi_m^{2}), \xi_k = \frac{2\pi}{L} k for k = -\frac{N}{2}, \ldots, \frac{N}{2} - 1, L is the length of the space domain, N is the number of discretization points in space, and \hat{u}_{\xi\xi} denotes the transform of u_{xx}. For further details, we refer the reader to [14]. Then, we obtain the approximate vectors \tilde{u}_{xx}(0) and \tilde{v}_{xx}(0) by applying the inverse FFT [24], i.e.,
\tilde{u}_{xx}(0) = \mathrm{ifft}\left( \hat{u}_{\xi\xi}(0) \right), \qquad \tilde{v}_{xx}(0) = \mathrm{ifft}\left( \hat{v}_{\xi\xi}(0) \right).
Let us recall the RDS (1) in 2D, where δ i , i = 1 , 2 (considering a homogeneous medium) are constants
u_t = \delta_1 (u_{xx} + u_{yy}) + f(u, v), \qquad v_t = \delta_2 (v_{xx} + v_{yy}) + g(u, v), \qquad x \in \Omega,\ t > 0.   (10)
Applying Fourier transformation on Equation (10) yields
\hat{u}_t = -\delta_1 (\xi_x^{2} + \xi_y^{2})\, \hat{u} + \widehat{f(u, v)}, \qquad \hat{v}_t = -\delta_2 (\xi_x^{2} + \xi_y^{2})\, \hat{v} + \widehat{g(u, v)},   (11)
which is a first-order, time-dependent, semi-linear system of coupled ODEs that can be solved using integrating factors, where \mathcal{F}(u(x, y, t)) = \hat{u}(\xi_x, \xi_y, t). This idea was used to integrate the problem in time, and the inverse Fourier transform (ifft) was then employed to generate solutions in the spatio-temporal domain. Employing a semi-implicit scheme in time, (11) can be approximated by
\hat{u}^{\,n+1} = \frac{ \hat{u}^{\,n} + \Delta t\, \widehat{f(u, v)}^{\,n} }{ 1 + \Delta t\, \delta_1 (\xi_x^{2} + \xi_y^{2}) }, \qquad \hat{v}^{\,n+1} = \frac{ \hat{v}^{\,n} + \Delta t\, \widehat{g(u, v)}^{\,n} }{ 1 + \Delta t\, \delta_2 (\xi_x^{2} + \xi_y^{2}) },   (12)
After approximating u ^ and v ^ , the inverse Fourier transform (ifft) was employed to generate solutions in the spatio-temporal domain, where
u(x, y, t) \approx \mathcal{F}^{-1}\left( \hat{u}(\xi_x, \xi_y, t) \right), \qquad v(x, y, t) \approx \mathcal{F}^{-1}\left( \hat{v}(\xi_x, \xi_y, t) \right).
It is well known that the convergence rate of the spectral method is O(N^{-n}) for a smooth function and O(c^{N}) for an analytic function, where c ∈ (0, 1) [14]. To see why, we use the Fourier transform in two steps. First, a smooth function has a transform that decays swiftly: a smooth function changes slowly and, because high wave numbers correspond to rapidly oscillating waves, it has very little energy at such frequencies. Second, if a function's Fourier transform decays quickly, the errors caused by discretization are small; these errors are generated by aliasing from high to low wave numbers [23].
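The semi-implicit Fourier update above is particularly compact in code. The following NumPy sketch (our own illustration for a constant-coefficient, periodic square domain, not the authors' MATLAB implementation) performs one time step with fft2/ifft2:

```python
import numpy as np

def fourier_semi_implicit_step(u, v, dt, d1, d2, f, g, L=1.0):
    """One step: diffusion treated implicitly in Fourier space, reaction explicitly."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers xi_k = 2*pi*k/L
    KX, KY = np.meshgrid(k, k, indexing="ij")
    lap = KX**2 + KY**2                            # xi_x^2 + xi_y^2

    fu, gv = f(u, v), g(u, v)
    u_hat = (np.fft.fft2(u) + dt * np.fft.fft2(fu)) / (1.0 + dt * d1 * lap)
    v_hat = (np.fft.fft2(v) + dt * np.fft.fft2(gv)) / (1.0 + dt * d2 * lap)
    return np.real(np.fft.ifft2(u_hat)), np.real(np.fft.ifft2(v_hat))
```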

2.4. Stability and Convergence of Fourier Collocation Method

We need to address a few preliminary issues, which stem from [22], before we begin this discussion; we follow [22] and some references therein for the theoretical analysis in this section. The space of trigonometric polynomials of degree N used by the Fourier collocation method is denoted by S_N:
S_N = \mathrm{span}\left\{ e^{i k x} \;\middle|\; -N \le k < N \right\}.
It is important to remember that the L p -norm of a function u over ( 0 , 2 π ) has the following definition:
\| u \|_{L^{p}(0, 2\pi)} = \left( \int_{0}^{2\pi} | u(x) |^{p}\, dx \right)^{1/p}, \qquad 1 \le p < \infty,
and
\| u \|_{L^{\infty}(0, 2\pi)} = \sup_{0 \le x \le 2\pi} | u(x) |, \qquad p = \infty.
Again, we take the Sobolev norm of order n ≥ 0 (n ∈ ℤ), given by
\| u \|_{H^{n}(0, 2\pi)} = \left( \sum_{k=0}^{n} \int_{0}^{2\pi} | u^{(k)}(x) |^{2}\, dx \right)^{1/2},
with the equivalent norm
\| u \|_{n} = \left( \sum_{k=-\infty}^{\infty} (1 + |k|^{2n})\, | \hat{u}_k |^{2} \right)^{1/2}.
Here, the interpolant of the function u at the nodes x_j = \frac{\pi j}{N}, j = 0, \ldots, 2N-1, is denoted by I_N u ∈ S_N. We need approximation results for the interpolation error u − I_N u. We have
\| u - I_N u \|_{H^{l}(0, 2\pi)} \le C N^{l-n} \| u^{(n)} \|_{L^{2}(0, 2\pi)} \quad \text{for } 0 \le l \le n \text{ and } u \in H^{n}_{P}(0, 2\pi) \text{ with } n \ge 1.   (13)
Theorem 3.
Let u represent the exact solution to the proposed problem with periodic boundary conditions. Assume that the approximate solution of the Fourier collocation method is u^N ∈ S_N and that the function f is n times continuously differentiable. Then the stability of the scheme is described by
\| u^{N} \|^{2} + \int_{0}^{t} \delta\, \| \nabla^{2} u^{N} \|_{E}^{2}\, ds \le \| u_{0}^{N} \|^{2} + C \int_{0}^{t} \| f \|_{n}\, ds,
where C is considered as a constant.
Proof. 
The collocation points are x_{rk} = (x_r, x_k), 0 ≤ r, k ≤ 2N−1, with x_r = \frac{\pi r}{N}; the interpolant of u at these points is denoted by I_N u ∈ S_N. The function u^N(x, t) = \sum_{r,k} u^N(x_{rk}, t)\, \phi_{rk}(x) represents the Fourier collocation approximation to u, where \phi_{rk} are the standard Lagrange trigonometric polynomials at the collocation points. The approximation fulfills the equations
\left[ (u^{N})_t - L u^{N} - f \right](x_{rk}, t) = 0, \qquad t > 0 \text{ and } 0 \le r, k \le 2N-1,
u^{N}(x_{rk}, 0) = u_0(x_{rk}), \qquad 0 \le r, k \le 2N-1,
where L^N u = δ ∇² u for all u ∈ S_N; this is the interpolation approximation of L u. The general form of this approach can be written by choosing
(u, v)_N = \left( \frac{\pi}{N} \right)^{2} \sum_{0 \le r, k \le 2N-1} u(x_{rk})\, \overline{v(x_{rk})}.
The stability of collocation is established by following the relations stated in [22]:
\left( Q_N L^{N} u, u \right)_{N} = \left( L^{N} u, u \right)_{N} \qquad \forall\, u \in X_N,
where Q_N represents the orthogonal projection onto S_N with respect to the discrete inner product (u, v)_N. Additionally, for all u ∈ S_N,
\left( L^{N} u, u \right)_{N} = \int_{\Omega} I_N\!\left( \delta(x)\, \nabla^{2} u \right) u\, dx\, dy = -\int_{\Omega} \delta(x)\, | \nabla u |^{2}\, dx\, dy \le -\delta_0 \int_{\Omega} | \nabla u |^{2}\, dx\, dy.
In the Hilbert space H^{1}_{P}(0, 2\pi), as given in [22], the integral on the right-hand side is precisely the square of the norm \| u \|_{E}.
Thus, using E = H^{n}_{P}(0, 2\pi), the stability requirement is confirmed, and the approximation is stable in accordance with
\| u^{N} \|_{E} \le \frac{C}{\delta_0}\, \| f \|_{N},
\alpha\, \| u^{N} \|_{E}^{2} \le -\left( Q_N L^{N} u^{N}, u^{N} \right)_{N} = \left( Q_N f, u^{N} \right)_{N} \le \| Q_N f \|_{N}\, \| u^{N} \|_{N} \le C\, \| f \|_{N}.
Thus,
\| u^{N} \|^{2} + \int_{0}^{t} \delta\, \| \nabla^{2} u^{N} \|_{E}^{2}\, ds \le \| u_{0}^{N} \|^{2} + C \int_{0}^{t} \| f \|_{n}\, ds.
   □
Theorem 4.
Let u be the exact solution of the given problem with periodic boundary conditions. Let f be an n times continuously differentiable function. Assume that the approximate solution of Equation (10) is u^N ∈ S_N. Then the Fourier collocation solution converges according to
\| u - u^{N} \|_{H^{1}(0, 2\pi)} \le \| e(0) \|_{L^{2}(0, 2\pi)} + \left( C N^{1-n} \int_{0}^{t} | u^{(n)} |_{L^{2}(0, 2\pi)}\, ds + N^{1-n} | u^{(n)} |_{L^{2}(0, 2\pi)} \right)^{1/2} + C N^{1-n} \int_{0}^{t} | u^{(n)} |_{L^{2}(0, 2\pi)}\, ds + C N^{2-n} | u^{(n)} |_{L^{2}(0, 2\pi)} + C N^{2-n} | f^{(n)} |_{L^{2}(0, 2\pi)},
which holds.
Proof. 
For the Fourier collocation solution, the convergence estimate can be demonstrated by using Equation (9) and assuming R N u = I N u as well. Then, applying the following approximation features of this operator, one can estimate the aliasing error using result (13).
\| u - I_N u \|_{H^{1}(0, 2\pi)} \le \| u - P_N u \|_{H^{1}(0, 2\pi)} + \| R_N u \|_{H^{1}(0, 2\pi)} \le C N^{l-m} \| u^{(m)} \|_{L^{2}(0, 2\pi)} + C N^{l} \| R_N u \|_{L^{2}(0, 2\pi)} \le C N^{l-m} \| u^{(m)} \|_{L^{2}(0, 2\pi)}.
Consequently, the following error bound can be deduced for every t > 0 as
\| e_r(t) \|_{H^{1}(0, 2\pi)} + \delta \int_{0}^{t} \| e_r(s) \|_{H^{1}(0, 2\pi)}\, ds \le \| e_r(0) \|_{H^{1}(0, 2\pi)} + C \left( \int_{0}^{t} \| u_t - R_N u_t \|_{H^{1}(0, 2\pi)}\, ds + \int_{0}^{t} \| u - R_N u \|_{H^{1}(0, 2\pi)}\, ds \right) + C \int_{0}^{t} \frac{ \left| (R_N u_t, e_r) - (R_N u_t, e_r)_{N} \right| }{ \| e_r \|_{H^{1}(0, 2\pi)}^{2} }\, ds + \int_{0}^{t} \frac{ \left| (L^{N} u, e_r) - (L^{N} R_N u, e_r)_{N} \right| }{ \| e_r \|_{H^{1}(0, 2\pi)}^{2} }\, ds + \int_{0}^{t} \frac{ \left| (f, e_r) - (f, e_r)_{N} \right| }{ \| e_r \|_{H^{1}(0, 2\pi)}^{2} }\, ds.
Thus,
\| u - u^{N} \|_{H^{1}(0, 2\pi)} \le \| e(0) \|_{L^{2}(0, 2\pi)} + \left( C N^{1-n} \int_{0}^{t} | u^{(n)} |_{L^{2}(0, 2\pi)}\, ds + N^{1-n} | u^{(n)} |_{L^{2}(0, 2\pi)} \right)^{1/2} + C N^{1-n} \int_{0}^{t} | u^{(n)} |_{L^{2}(0, 2\pi)}\, ds + C N^{2-n} | u^{(n)} |_{L^{2}(0, 2\pi)} + C N^{2-n} | f^{(n)} |_{L^{2}(0, 2\pi)}.
This completes the proof of convergence.    □

3. A Multigrid Preconditioned Finite Difference Scheme

In this section, a few numerical schemes are employed to solve the class of system of heterogeneous PDEs in Equation  (1) with Dirichlet/Neumann boundary conditions. We start by employing finite difference schemes for spatial integration followed by a method of lines for the temporal approximation. Then, we employ an efficient solver for the resulting discrete model. A preconditioned Newton solver is studied here for the nonlinear system of equations.
We study a two-space-dimensional version of (1). Let us consider a spatial domain Ω = [a, b] × [c, d]. Let h_x = \frac{b-a}{n_x}, n_x ∈ ℕ, h_y = \frac{d-c}{n_y}, n_y ∈ ℕ, x_i = a + i h_x, i = 0, 1, 2, …, n_x, and y_j = c + j h_y, j = 0, 1, 2, …, n_y. Let u(x, y, t) ∈ C^{2,1}[Ω; [0, T]], T > 0. Then, a simple Taylor expansion yields [1]
u_{xx}(x_i, y_j, t_n) \approx \frac{ u^{n}_{i+1,j} - 2 u^{n}_{i,j} + u^{n}_{i-1,j} }{ h_x^{2} },
u_{yy}(x_i, y_j, t_n) \approx \frac{ u^{n}_{i,j+1} - 2 u^{n}_{i,j} + u^{n}_{i,j-1} }{ h_y^{2} },
and
u_t(x_i, y_j, t_n) \approx \frac{ u^{n+1}_{i,j} - u^{n}_{i,j} }{ \Delta t }, \quad \text{or} \quad u_t(x_i, y_j, t_n) \approx \frac{ u^{n}_{i,j} - u^{n-1}_{i,j} }{ \Delta t },
where Δt = T/m, m ∈ ℕ. Let us recall the RDS (1) with heterogeneous diffusion coefficients δ_i(x, y), i = 1, 2, of the form
u_t = \delta_1(x, y)\, \Delta u + f(u, v), \qquad v_t = \delta_2(x, y)\, \Delta v + g(u, v),   (14)
where \Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}, along with the Dirichlet/Neumann boundary conditions required by the problem modeled. Then, at (x_i, y_j) and with h_x = h_y = h, the system (14) can be approximated by the following implicit Euler scheme:
\frac{ u^{n+1}_{i,j} - u^{n}_{i,j} }{ \Delta t } = \frac{ \delta_1(x_i, y_j) }{ h^{2} } \left[ u^{n+1}_{i+1,j} + u^{n+1}_{i-1,j} + u^{n+1}_{i,j+1} + u^{n+1}_{i,j-1} - 4 u^{n+1}_{i,j} \right] + f\!\left( u^{n+1}_{i,j}, v^{n+1}_{i,j} \right), \qquad \frac{ v^{n+1}_{i,j} - v^{n}_{i,j} }{ \Delta t } = \frac{ \delta_2(x_i, y_j) }{ h^{2} } \left[ v^{n+1}_{i+1,j} + v^{n+1}_{i-1,j} + v^{n+1}_{i,j+1} + v^{n+1}_{i,j-1} - 4 v^{n+1}_{i,j} \right] + g\!\left( u^{n+1}_{i,j}, v^{n+1}_{i,j} \right),   (15)
which is a full discrete system of nonlinear equations. Now, to facilitate the computations, two different semi-implicit schemes were employed, so that the equations can be solved separately. For example, two semi-implicit schemes are as follows:
  • Semi-implicit scheme 1
    \frac{ u^{n+1}_{i,j} - u^{n}_{i,j} }{ \Delta t } = \frac{ \delta_1(x_i, y_j) }{ h^{2} } \left[ u^{n+1}_{i+1,j} + u^{n+1}_{i-1,j} + u^{n+1}_{i,j+1} + u^{n+1}_{i,j-1} - 4 u^{n+1}_{i,j} \right] + f\!\left( u^{n}_{i,j}, v^{n}_{i,j} \right), \qquad \frac{ v^{n+1}_{i,j} - v^{n}_{i,j} }{ \Delta t } = \frac{ \delta_2(x_i, y_j) }{ h^{2} } \left[ v^{n+1}_{i+1,j} + v^{n+1}_{i-1,j} + v^{n+1}_{i,j+1} + v^{n+1}_{i,j-1} - 4 v^{n+1}_{i,j} \right] + g\!\left( u^{n+1}_{i,j}, v^{n}_{i,j} \right),   (16)
    and
  • Semi-implicit scheme 2
    \frac{ u^{n+1}_{i,j} - u^{n}_{i,j} }{ \Delta t } = \frac{ \delta_1(x_i, y_j) }{ h^{2} } \left[ u^{n+1}_{i+1,j} + u^{n+1}_{i-1,j} + u^{n+1}_{i,j+1} + u^{n+1}_{i,j-1} - 4 u^{n+1}_{i,j} \right] + f\!\left( u^{n+1}_{i,j}, v^{n}_{i,j} \right), \qquad \frac{ v^{n+1}_{i,j} - v^{n}_{i,j} }{ \Delta t } = \frac{ \delta_2(x_i, y_j) }{ h^{2} } \left[ v^{n+1}_{i+1,j} + v^{n+1}_{i-1,j} + v^{n+1}_{i,j+1} + v^{n+1}_{i,j-1} - 4 v^{n+1}_{i,j} \right] + g\!\left( u^{n+1}_{i,j}, v^{n+1}_{i,j} \right).   (17)
The Euler formula in time and the five-point formula in space for the one- and two-dimensional models give a computational accuracy of O(h² + Δt); using the Crank–Nicolson scheme instead gives an accuracy of O(h² + Δt²). For the explicit Euler approximation, the stability of the scheme requires the Courant–Friedrichs–Lewy (CFL) condition when choosing h and Δt, whereas the implicit Euler scheme is unconditionally stable [1,25,26].
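To fix ideas, the sketch below (illustrative Python with SciPy sparse matrices, not the authors' MATLAB code, with zero-Dirichlet values simply implied outside the assembled grid) assembles the variable-coefficient five-point operator δ(x_i, y_j)Δ_h and performs the u-update of semi-implicit scheme 1, which requires one sparse linear solve per time step.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def heterogeneous_laplacian(delta, x, y, h):
    """Sparse matrix of delta(x_i, y_j) times the five-point Laplacian (row-major grid)."""
    nx, ny = len(x), len(y)
    N = nx * ny
    X, Y = np.meshgrid(x, y, indexing="ij")
    d = delta(X, Y).reshape(-1)
    main = -4.0 * np.ones(N)
    ex = np.ones(N - 1)
    ex[np.arange(1, N) % ny == 0] = 0.0        # no coupling across grid rows
    ey = np.ones(N - ny)
    A = sp.diags([main, ex, ex, ey, ey], [0, 1, -1, ny, -ny], format="csr") / h**2
    return sp.diags(d) @ A

def semi_implicit_u_step(u, v, A1, dt, f):
    """u-update of scheme 1: (I - dt*A1) u^{n+1} = u^n + dt*f(u^n, v^n)."""
    N = u.size
    rhs = u.reshape(-1) + dt * f(u, v).reshape(-1)
    unew = spla.spsolve(sp.identity(N, format="csr") - dt * A1, rhs)
    return unew.reshape(u.shape)
```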
We are interested in approximating the solutions of the fully discrete model efficiently by using a fast Newton-based algorithm. In this case, u^{n+1}_{i,j} is approximated first, followed by v^{n+1}_{i,j}, which is easier to execute than applying Newton's technique to the coupled system (15). The primary challenge in implementing Newton's method for (15), (16), or (17) is efficiently solving the resulting system of linear equations at each step; more explicitly, the action of the inverse of the Jacobian matrix must be approximated effectively in each step of Newton's algorithm, and iterative solvers are commonly used for this purpose. We compare two iterative strategies for solving the system of linear equations that arises during each iteration of the Newton algorithm. In short, each step of Newton's algorithm for a nonlinear system F(x) = 0 works in the following way (a minimal sketch is given after the two steps below):
Step 1:
Solve D F(x_i)\, \Delta x_i = -F(x_i), where D F(x_i), i = 0, 1, 2, \ldots, stands for the Jacobian of F at x_i \in \mathbb{R}^{n}, n \ge 1.
Step 2:
Update x_{i+1} = x_i + \Delta x_i.
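A minimal sketch of this two-step Newton iteration, with the linear solve in Step 1 handed to GMRES through a matrix-free Jacobian-vector product (a finite-difference directional derivative; this is our own illustration rather than the exact solver used in the paper), is given below. A preconditioner, such as the multigrid cycle of the next subsection, can be passed through the M argument of scipy.sparse.linalg.gmres.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, tol=1e-10, max_newton=30, M=None):
    """Solve F(x) = 0 by Newton's method; each step solves DF(x) dx = -F(x) with GMRES."""
    x = x0.copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(x))

        def jv(p):  # finite-difference approximation of DF(x) @ p
            return (F(x + eps * p) - Fx) / eps

        J = LinearOperator((x.size, x.size), matvec=jv)
        dx, info = gmres(J, -Fx, M=M)   # Step 1
        x = x + dx                       # Step 2
    return x
```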
In this case, plain iterative solvers inside the Newton algorithm oscillate or converge very slowly. This is because, while the minimal eigenvalue of the discrete operator acting on each equation of (16) or (17) is bounded, the maximal eigenvalue grows rapidly under grid refinement; the condition number cond(A) = O(N²) = O(2^{2l}) grows exponentially with the refinement level l > 1, where A represents the Jacobian of the discrete equations of the nonlinear system (16) or (17). Thus, preconditioning is necessary to tackle such problems [27], and a multigrid method is a very good candidate for this situation [28]. So here, we study and employ a multigrid preconditioned GMRES solver in each iteration of the Newton algorithm [1] for the solution of the nonlinear system of equations.
The multigrid scheme is a numerical method used for solving partial differential equations (PDEs), particularly in computational fluid dynamics and other areas requiring efficient and accurate solutions of complex problems. It is designed to address challenges related to large-scale computations, including issues of stability, accuracy, and computational efficiency. The scheme can be used within the framework of the generalized minimal residual method (GMRES), an iterative method, as a preconditioner to speed up the computation [27,28]. Here, we approximate the action of the inverse of the Jacobian of each nonlinear system we encounter using a multigrid preconditioned GMRES algorithm; we refer to this as the Newton multigrid accelerated GMRES solver [27,28].
In this instance, the plan is to carry out a few smoothing iterations in a fine grid, then move to a coarser level and carry out a few more iterations, and so on. We refer to these as coarse grid adjustments. Following adjustments, one returns to the fine grid and carries out a few post-smoothing operations. Using the Newton technique for the nonlinear system given above, we use the two-grid iterative linear system solution technique as a preconditioner inside the GMRES solver within the MATLAB R2024b environment using the following algorithm:

Multigrid Algorithm

In short, a multigrid scheme works in the following way:
  • Grid Hierarchy: In the multigrid approach, the problem is solved on a hierarchy of grids. The coarse grids capture the large-scale features of the solution, while the finer grids handle the small-scale details. By transferring information between these grids, the method can efficiently reduce errors and improve the convergence rates.
  • Smoothing and Correction: The multigrid scheme involves two main components:
    • Smoothing: This step reduces the high-frequency errors on a given grid. It usually involves iterative solvers like the Gauss–Seidel or Jacobi relaxation methods.
    • Correction: After smoothing, the residual (error) is corrected using information from coarser grids. This process helps address low-frequency errors that are not effectively reduced by smoothing alone.
It can be implemented using the following Algorithm 1; a minimal two-grid Python sketch is given after the algorithm.
Algorithm 1 Multigrid V-cycle method for A\,\underline{d} = \underline{f}
Here is a more formal outline of the multigrid algorithm for solving A\,\underline{d} = \underline{f}.
Initialization 
Start with an initial guess d ( 0 ) for the solution.
OUTPUT 
The approximate solutions d ̲ h .
Step 1 
Pre-Smoothing: Apply a smoothing or relaxation technique on the fine grid to reduce high-frequency errors (apply k iterations of a smoothing method to d ( i ) ).
Step 2 
Residual Computation: Compute the residual vector r^{(i)} = \underline{f} - A\, d^{(i)}. This represents the error in the current approximation.
Step 3 
Restriction: Restrict r ( i ) to a coarser grid to obtain r c o a r s e .
Step 4 
Coarse Grid Correction: Solve the linear system A c o a r s e d c o a r s e = r c o a r s e approximately on the coarse grid to obtain a correction vector. This solution addresses the low-frequency errors that are not well-captured by the fine grid.
Step 5 
Prolongation: Transfer (or prolong) the correction vector d c o a r s e from the coarse grid back to the fine grid and add the result to d ( i ) . This step interpolates the coarse grid solution to the finer grid and adds it to the current approximation.
Step 6 
Post-Smoothing: Apply k iterations of the smoothing technique again on the fine grid to the updated d ( i ) to further reduce any remaining high-frequency errors after the correction. Then, check whether the residual norm r ( i ) is below a predetermined tolerance. If not, iterate.
STOP
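A minimal two-grid version of Algorithm 1, written in Python for a sparse or dense system A d = f (an illustrative sketch with weighted-Jacobi smoothing, a Galerkin coarse operator, and a simple 1-D linear-interpolation prolongation; it is not the authors' MATLAB implementation), is given below.

```python
import numpy as np

def weighted_jacobi(A, f, d, sweeps=3, omega=2.0 / 3.0):
    """Damped Jacobi smoother: reduces high-frequency error components."""
    Dinv = 1.0 / A.diagonal()
    for _ in range(sweeps):
        d = d + omega * Dinv * (f - A @ d)
    return d

def two_grid_cycle(A, f, d, P):
    """One V-cycle of Algorithm 1 with two levels; P prolongates coarse -> fine."""
    R = 0.5 * P.T                       # restriction (scaled transpose of prolongation)
    Ac = R @ (A @ P)                    # coarse-grid operator (Galerkin product)
    d = weighted_jacobi(A, f, d)        # Step 1: pre-smoothing
    r = f - A @ d                       # Step 2: residual computation
    rc = R @ r                          # Step 3: restriction
    ec = np.linalg.solve(Ac, rc)        # Step 4: coarse-grid correction (direct solve)
    d = d + P @ ec                      # Step 5: prolongation and update
    d = weighted_jacobi(A, f, d)        # Step 6: post-smoothing
    return d

def linear_prolongation(n_coarse):
    """1-D linear interpolation from n_coarse interior points to 2*n_coarse + 1 fine points."""
    nf = 2 * n_coarse + 1
    P = np.zeros((nf, n_coarse))
    for j in range(n_coarse):
        P[2 * j + 1, j] = 1.0
        P[2 * j, j] += 0.5
        P[2 * j + 2, j] += 0.5
    return P
```

In practice the coarse solve in Step 4 is itself replaced by a recursive call on a yet coarser grid, which turns the two-grid cycle into the full V-cycle of Algorithm 1; wrapping the cycle in a scipy.sparse.linalg.LinearOperator makes it usable as the preconditioner M in the Newton–GMRES sketch above.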

4. Numerical Results and Discussions

Here, we present a few experimental results for the schemes studied earlier in this article. Graphical and tabular illustrations give a clear view of the problem and of the acceptability of the generated solutions, and they help us analyze the physical reality neatly. In the previous sections, a few classes of schemes were studied; here, we demonstrate approximate solutions of the RDS obtained by those schemes. Results in one and two space dimensions are presented in the following parts of this section. To obtain the solutions, solvers such as the multigrid preconditioned Newton–GMRES (generalized minimal residual) and the Newton–ILU factorization algorithm are employed.
From the graphical representations in Figure 1 and Figure 2 for the BBP schemes, and overall, it is observed that the outputs are numerically convergent. The same set of parameter values was used for all the computations, and all the schemes can be good alternatives for further investigations of problems in population dynamics, following [4]. Note that, for the experiments presented in Figure 1 and Figure 2, the parameter values α = 0.3, β = 2.0, γ = 0.8, δ = 1.0, a = 0, b = 100, h = 1/2, u_0(x) = exp(−(x − 0.5b)²), and v_0(x) = 2/5 were considered. As shown in Figure 1, the computation was performed using the Bernstein polynomials, and both schemes are compatible with the model problem over a long time interval. In Figure 2, the numerical results obtained by the fully implicit finite difference method, the Bernstein spectral method, and the Fourier spectral method are compared. All the schemes are compatible, and the Bernstein spectral method outperforms the other two in terms of storage, as it requires fewer space grid points. For all the one-space-dimensional cases, kinetics 1 was considered. In Figure 3 and Figure 4, the results for heterogeneity are presented, and the figures show the effect of inhomogeneity in the RDS. We then move to the two-space-dimensional cases in Figure 5. Figure 5 and Figure 6 confirm that, in two space dimensions, the schemes show apparent convergence to each other, which supports their use for further investigations in population dynamics [4]. In Figure 5, the solution dynamics illustrate that these schemes in two-dimensional space still need significant analysis to make the approach user-friendly. In Figure 6 and Figure 7, we show the same experiment with different choices of the diffusion coefficient, α, and initial data to further demonstrate the solution behavior. In Figure 8, we use a system of size [0, 2] × [0, 2]. Self-repeated replication of the initial spots is observed in the solution of the Gray–Scott model with the stated parameters. This type of self-replicating pattern forms due to the complex interplay between the activator and inhibitor chemicals aided by the reaction and diffusion components; this type of development is found in proto-organisms, where the entire space is occupied by replicas of cell-like localized structures.
We then implement the multigrid preconditioned Newton–GMRES scheme for the full nonlinear systems (semi-implicit finite difference schemes 1 and 2); the same experiments can be performed for the other nonlinear systems developed in this study. The schemes were implemented within the framework of MATLAB. It is well observed from the earlier study [28] that the multigrid accelerated scheme performs better for the homogeneous RDS, so we are interested in implementing and computing the results for GMRES and the multigrid accelerated GMRES solver only for the heterogeneous counterpart, to compare our results with those of the unaccelerated GMRES. From the tables presented below (Table 1, Table 2 and Table 3), it is observed that the multigrid preconditioned Newton–GMRES outperforms the Newton–GMRES scheme for both choices of nonlinearity and both semi-implicit schemes; in fact, the multigrid preconditioned Newton–GMRES works better than the solver without preconditioning. Note that, for Table 1 and Table 2, we consider Ω = [0, 1]², T = 1, h = 0.01, and tol = 1 × 10⁻¹⁰. In the tables,
  • MGGMRES means the Newton multigrid preconditioned GMRES scheme was used to solve the resulting full discrete system,
  • GMRES means the unaccelerated Newton GMRES scheme was used to solve the resulting full discrete system,
  • “NC” means the Newton GMRES did not converge as desired [11].
In Table 3, we consider Ω = [0, 50] × [0, 50], h = 0.1, and tol = 10⁻¹⁰. Here, N means the number of iterations.

5. Conclusions

In this study, a system of heterogeneous PDEs/RDS (1) was considered and studied numerically to observe the solution dynamics. Three different classes of schemes, namely the Bernstein polynomial schemes, the Fourier pseudo-spectral scheme, and a few finite difference-based schemes, were employed for the spatial approximation. The resulting semi-discrete model was investigated using a few methods of lines for the time integration. By studying the numerical solutions of (1) with three different classes of schemes, it has been demonstrated that the Bernstein polynomial scheme works very well for the heterogeneous-medium RDS. The Fourier transform scheme (FTS) is one of the smartest schemes for such models when it can be applied, as it is very simple, easy to use, and fast; its main drawback is the need for a convolution in the Fourier domain when working with spatially variable-medium models. It was confirmed theoretically that the spectral/pseudo-spectral methods are very efficient and highly accurate for such a model, and the schemes were illustrated through several graphical representations of solutions. In one dimension, our experiments showed that all the schemes considered in this study are compatible; the semi-explicit schemes require smaller Δt compared to the fully implicit scheme, as expected, and the spectral methods converge well for a reasonably small number of spatial points compared to the finite difference schemes. It is noted that the multigrid preconditioned Newton–GMRES solver for the fully discrete nonlinear system outperformed the Newton–GMRES solver in terms of iteration numbers; therefore, to study such kinds of heterogeneous problems, one may use this idea to generate solutions quickly for analyzing real-life problems. The Fourier scheme for the heterogeneous problem remains a future research direction; the three-space-dimensional problems and the effects of nonlocal behavior were not considered in this study and thus also remain future research directions.

Author Contributions

S.A. contributed to the finite difference schemes, spectral methods, and drafting of the manuscript; M.A.I.A. worked to develop and implement the finite difference schemes, multigrid preconditioning, and the drafting of the manuscript; R.T.A. was responsible for the research planning, scheme implementation, and the drafting of the manuscript; S.K.B. was responsible for conceptualization, scheme development, research planning, and the drafting of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The research was partially supported by a University Grants Commission 2021–22, University of Dhaka, Dhaka, Bangladesh, (ref: reg/admin-3/63062) funding and by a University Grants Commission 2022–23, Dhaka, Bangladesh, funding (Physical sciences-43-2022).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Atkinson, K.; Han, W. Theoretical Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  2. Lotka, A.J. Contribution to the theory of periodic reaction. J. Phys. Chem. 1910, 14, 271–274. [Google Scholar] [CrossRef]
  3. Van Gorder, R.A. Pattern formation from spatially heterogeneous reaction–diffusion systems. Phil. Trans. R. Soc. A 2021, 379, 20210001. [Google Scholar] [CrossRef] [PubMed]
  4. Garvie, M.R. Finite-Difference Schemes for Reaction–Diffusion Equations Modeling Predator–Prey Interactions in MATLAB. Bull. Math. Biol. 2007, 69, 931–956. [Google Scholar] [CrossRef] [PubMed]
  5. Pal, S.; Banerjee, M.; Ghorai, S. Effects of boundary conditions on pattern formation in a nonlocal prey–predator model. Appl. Math. Model. 2020, 79, 809–823. [Google Scholar] [CrossRef]
  6. Majeed, A.H.; Liu, D.; Ali, A.R.; Alotaibi, H.; Yin, Z.J.; Yi, R.H. Numerical simulations of energy storage performance in a close configuration: A Galerkin finite element-based computation. Alex. Eng. J. 2024, 104, 56–65. [Google Scholar] [CrossRef]
  7. Majeed, A.H.; Mahmood, R.; Shahzad, H.; Pasha, A.A.; Raizah, Z.A.; Hosham, H.A.; Reddy, D.S.K.; Hafeez, M.B. Heat and mass transfer characteristics in MHD Casson fluid flow over a cylinder in a wavy channel: Higher-order FEM computations. Case Stud. Therm. Eng. 2023, 42, 102730. [Google Scholar] [CrossRef]
  8. Grave, M.; Viguerie, A.; Barros, G.F.; Reali, A.; Andrade, R.F.S.; Coutinho, A.L.G.A. Modeling nonlocal behavior in epidemics via a reaction–diffusion system incorporating population movement along a network. Comput. Methods Appl. Mech. Eng. 2022, 401 Pt A, 115541. [Google Scholar] [CrossRef]
  9. Davydovych, V.; Dutka, V.; Cherniha, R. Reaction–Diffusion Equations in Mathematical Models Arising in Epidemiology. Symmetry 2023, 15, 2025. [Google Scholar] [CrossRef]
  10. Cao, Z.; Chen, R.; Xu, L.; Zhou, X.; Fu, X.; Zhong, W.; Grima, R. Efficient and scalable prediction of stochastic reaction–diffusion processes using graph neural networks. Math. Biosci. 2024, 375, 109248. [Google Scholar] [CrossRef] [PubMed]
  11. Bhowmik, S.K. A multigrid preconditioned numerical scheme for a reaction–diffusion system. Appl. Math. Comput. 2015, 254, 266–276. [Google Scholar] [CrossRef]
  12. Hammad, D.A. Application of Bernstein collocation method for solving the generalized regularized long wave equations. Ain Shams Eng. J. 2021, 12, 4081–4089. [Google Scholar] [CrossRef]
  13. Phillips, G.M. Bernstein Polynomials. In Interpolation and Approximation by Polynomials; CMS Books in Mathematics; Springer: New York, NY, USA, 2003; pp. 247–290. [Google Scholar]
  14. Trefethen, L.N. Spectral Methods in MATLAB; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  15. Wang, H. The rate of convergence of q-Bernstein Polynomials for 0 < q < 1. J. Approx. Theory 2005, 136, 151–158. [Google Scholar]
  16. Farzana, H.; Bhowmik, S.K.; Islam, M.S. Orthonormal Bernstein Galerkin technique for computations of higher order eigenvalue problems. MethodsX 2023, 10, 102006. [Google Scholar] [CrossRef]
  17. Farzana, H.; Bhowmik, S.K.; Alim, M.A. Bernstein collocation technique for a class of Sturm-Liouville problems. Heliyon 2024, 10, e28888. [Google Scholar] [CrossRef] [PubMed]
  18. Natanson. Constructive Function Theory. Volume I: Uniform Approximation; Obolensky, A.N., Translator; Frederick Ungar: New York, NY, USA, 1964; p. 3, MR 0196340. [Google Scholar]
  19. Mirkov, N.; Rasuo, B. Bernstein Polynomial Collocation Method for Elliptic Boundary Value Problems. PAMM · Proc. Appl. Math. Mech. 2013, 13, 421–422. [Google Scholar] [CrossRef]
  20. Christie, I.; Griffiths, D.F.; Mitchell, A.R.; Sanz-Serna, J.M. Product Approximation for Non-linear Problems in the Finite Element Method. IMA J. Numer. Anal. 1981, 1, 253–266. [Google Scholar] [CrossRef]
  21. Weideman, J.A.C.; Reddy, S.C. A MATLAB differentiation matrix suite. ACM TOMS 2000, 26, 465–519. [Google Scholar] [CrossRef]
  22. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods: Fundamentals in Single Domains; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  23. Trefethen, L.N. Finite Difference and Spectral Methods for Ordinary and Partial Differential Equations. 1996, unpublished text. Available online: http://people.maths.ox.ac.uk/trefethen/pdetext.html (accessed on 1 September 2024).
  24. Bueno-Orovio, A.; Kay, D.; Burrage, K. Fourier spectral methods for fractional-in-space reaction-diffusion equations. BIT Numer. Math. 2014, 54, 937–954. [Google Scholar] [CrossRef]
  25. Suli, E.; Mayers, D.F. An Introduction to Numerical Analysis, 1st ed.; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  26. Thomee, V. Galerkin Finite Element Methods for Parabolic Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  27. Chen, K. Matrix Preconditioning Techniques and Applications; Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  28. Bhowmik, S.K. Piecewise polynomial approximation of a nonlocal phase transitions model. J. Math. Anal. Appl. 2014, 420, 1069–1094. [Google Scholar] [CrossRef]
Figure 1. One-dimensional graph for the Bernstein spectral method considering the parameter set stated above.
Figure 2. One-dimensional graph for the fully implicit finite difference method (FDM) with N = 128 space points, the Bernstein spectral method (BSM) with N = 32 space points, and the Fourier spectral method (FSM) with N = 128 space points, considering Δt = 0.3, u_0(x) = exp(−0.5(x − 0.5b)²), and v_0(x) = 2/5.
Figure 3. One-dimensional graph for the implicit Euler method, considering δ = 1.0, δ(x) = 1/2 + cos(x/50), and Δt = 0.05.
Figure 4. One-dimensional graph for the implicit Euler method, considering δ = 1.0, δ(x) = 2 + cos((π/2)(x/50)), and Δt = 0.05.
Figure 5. FDS (15): two-dimensional graph with kinetics 1 (upper figures) and kinetics 2 (lower figures): α = 0.4, β = 2.0, γ = 0.6, δ = 10.0, a = 0, b = 500, h = 1, T = 150, dt = 1/24, u_0(x, y) = 6/35 − 2 × 10⁻⁷ (x − y/10 − 225)(x − y/10 − 675), and v_0(x, y) = 116/245 − 3 × 10⁻⁵ (x − 450) − 1.2 × 10⁻⁴ (y − 150), as considered in [4]. The figures agree with the results in [4].
Figure 6. FDM (17): two-dimensional graph with kinetics 2 and α = 0.45, δ = 1, γ = 0.6, a = 0, b = 1, β = 2.0, h = 0.001, and dt = 0.1.
Figure 7. Two-dimensional graph using the Fourier transform scheme with kinetics 1 and α = 0.4, β = 2.0, γ = 0.6, δ = 10, a = 0, b = 1, h = 0.01, and dt = 0.5; initial functions u_0(x, y) = exp(−50((x − b/2)² + (y − b/2)²)) and v_0(x, y) = 2 u_0.
Figure 8. The self-replicating pattern formation of the Gray–Scott model (2) with F = 0.0024, k = 0.06, δ_1 = 8 × 10⁻⁵, δ_2 = 4 × 10⁻⁵, h = 0.01, and dt = 0.5. Here, we use semi-implicit scheme 1. In addition, the multigrid preconditioned Newton–GMRES solver was used for the system of nonlinear equations.
Table 1. Total iteration numbers needed to reach T. We consider semi-implicit scheme 1 with nonlinearity type 1 and δ_1(x, y) = δ_2(x, y) = 0.0001(1 + exp(−(x − 0.5)² − (y − 0.5)²)).

Δt:                        0.01    0.1     0.2     0.5
u_h(x, y, T), GMRES        6375    1156    860     630
u_h(x, y, T), MGGMRES      3800    610     480     262
v_h(x, y, T), GMRES        4220    640     295     262
v_h(x, y, T), MGGMRES      2250    422     176     242
Table 2. Total iteration numbers needed to reach T. We consider semi-implicit scheme 2, nonlinearity type 1, and δ_1(x, y) = δ_2(x, y) = 0.0001(1 + exp(−(x − 0.5)² − (y − 0.5)²)).

Δt:                        0.001     0.01    0.05    0.1    0.2    0.5    1
u_h(x, y, T), GMRES        132,672   NC      NC      NC     NC     NC     NC
u_h(x, y, T), MGGMRES      12,320    4476    2015    780    450    115    67
v_h(x, y, T), GMRES        121,482   NC      NC      NC     NC     NC     NC
v_h(x, y, T), MGGMRES      12,320    4432    1286    760    540    115    68
Table 3. Number of iterations needed to reach T. We consider semi-implicit scheme 2, nonlinearity type 2, and δ_1(x, y) = δ_2(x, y) = 0.0001(1 + exp(−(x − 25)² − (y − 25)²)).

Δt:                        0.01    0.1     0.2     0.5     1
u_h(x, y, T), GMRES        4575    2286    475     NCon    NCon
u_h(x, y, T), MGGMRES      750     430     242     162     112
v_h(x, y, T), GMRES        4424    2254    1054    NCon    NCon
v_h(x, y, T), MGGMRES      650     424     255     168     108