1. Introduction
Modeling physical systems for which the future state depends on history due to hereditary characteristics, such as aftereffects or time lags, usually requires the use of delay differential models. The delay may be discrete or continuous, depending on whether specific or complete past information is used. The inclusion of a delay requires specific techniques for the theoretical study of the differential model [1,2,3,4]. In practice, delay differential models play a key role in different scientific and technical fields [5,6,7,8,9,10].
In the context of delay differential equations, the construction of non-standard finite difference (NSFD) numerical schemes has not been much explored. Historically, NSFD schemes were developed by Mickens in 1994 and 2000 [11,12], together with a later edited book in 2005 [13]. Mickens observed that traditional standard finite difference schemes may be modified, on the basis of exact numerical schemes for basic ordinary differential equations, so that the essential properties of the governing continuous model are mimicked [14]. Since then, NSFD schemes have been successfully designed and applied to ordinary, partial and fractional differential equations [15]. However, delay differential equations have not been addressed in detail.
Recently, [16] proposed a NSFD scheme for the general linear delay problem

x′(t) = α x(t) + β x(t − τ), t ≥ 0,
x(t) = f(t), −τ ≤ t ≤ 0,        (1)

where τ > 0 is the delay, from an exact scheme on the whole domain, providing high order of accuracy and consistent dynamical behavior with simple computational properties. Such an approach was extended to the non-scalar case in [17].
In modeling, the variability of data, due to limited knowledge and fluctuation of the process under study, lack of information, badly calibrated machines, etc., gives rise to variability in the model coefficients. Therefore, for a more realistic description of the process, coefficients should be regarded as random quantities on an abstract probability space. When the coefficients are random variables and regular stochastic processes, the solution to the model becomes a differentiable stochastic process, whose realizations solve the deterministic version of the model. A common treatment of random differential models uses mean square (m.s.) calculus [18,19,20,21,22,23,24]. Of special importance is the computation of the mean and the variance of the solution stochastic process, or even its probability density function.
We are interested in delay random differential equations. Specifically, we consider the randomization of (1) as

x′(t, ω) = α(ω) x(t, ω) + β(ω) x(t − τ, ω), t ≥ 0,
x(t, ω) = f(t, ω), −τ ≤ t ≤ 0,        (2)

for ω ∈ Ω. Here α and β are random variables and f is a stochastic process on a complete probability space (Ω, F, P), where Ω is the sample space formed by the outcomes ω, F is the σ-algebra of events, and P is the probability measure. The solution x is a differentiable stochastic process.
Only recently, a theoretical study on delay random differential equations was started. General delay random differential equations were analyzed in the m.s. sense in [25], with the goal of extending some of the existing results on random differential equations with no delay from the book [18]. Problem (2) was solved in the m.s. sense in [26], and later generalized to equations with a random forcing term in [27]. On the other hand, in [28] the authors studied (2), but considered the solution in the sample-path sense and computed its probability density function via the random variable transformation technique, for certain forms of the initial condition process.
In this paper, we are concerned with computational aspects of delay random differential equations. Standard finite difference methods have already been applied to random ordinary, partial and fractional differential equations, by establishing the m.s. convergence, and even the convergence of densities, of the numerical discretizations towards the stochastic process solution [29,30,31,32,33,34]. Here we aim at extending the NSFD method from [16] to (2), by assessing the m.s. convergence of the discretizations. This permits approximating the expectation and the variance of the solution with high accuracy, whenever computationally feasible.
The organization of this paper is the following. In Section 2, the main results on m.s. calculus are reviewed; the material for this section is essentially taken from [18]. In Section 3, the NSFD numerical scheme from [16] is presented. The randomization of the scheme, its m.s. convergence and its usefulness for approximating moments are discussed in Section 4. Illustration of the theory with numerical examples is conducted in Section 5. Finally, Section 6 draws the main conclusions.
2. M.s. Calculus
We are interested in second order real random variables x, satisfying E[x²] < ∞. We refer the reader to ([18], Ch. 4), [35]. The set of these random variables is a Hilbert space, denoted as L²(Ω) and endowed with the inner product ⟨x, y⟩ = E[xy]. This inner product gives rise to the norm ‖x‖₂ = (E[x²])^{1/2}. By the Cauchy–Schwarz inequality ([18], p. 19), |E[xy]| ≤ ‖x‖₂ ‖y‖₂. Random variables in L²(Ω) are characterized by having finite variance:

Var[x] = E[x²] − (E[x])² = ‖x‖₂² − (E[x])² < ∞.

This is one of the principal reasons for working with second order random variables, since the main statistical information for uncertainty quantification, namely the average value and the dispersion, are well-defined.
Given a stochastic process x = {x(t) : t ∈ T}, it is of second order if the random variable x(t) is of second order, for all t ∈ T. By the Cauchy–Schwarz inequality, it is straightforward to check that a second order stochastic process possesses a correlation function, Γ(t, s) = E[x(t) x(s)].
Convergence in L²(Ω) is defined through its norm ‖·‖₂: a sequence of random variables {y_n} converges to y in L²(Ω) if ‖y_n − y‖₂ → 0 as n → ∞. This is referred to as m.s. convergence.
M.s. convergence preserves the convergence of the expectation and the variance. This is a key fact. In general, if {x_n} and {y_n} are two sequences of second order random variables such that x_n → x and y_n → y as n → ∞ in the m.s. sense, then E[x_n y_n] → E[x y] ([18], p. 88).
In the particular case that {x_n} is a sequence of second order random variables such that its mean and its variance tend to zero, i.e., E[x_n] → 0 and Var[x_n] → 0 as n → ∞, then {x_n} is m.s. convergent to zero, since ‖x_n‖₂² = Var[x_n] + (E[x_n])² → 0 as n → ∞. The converse is also true.
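A minimal numerical illustration of this equivalence (not taken from the paper; Python with NumPy is assumed): for x_n = Z/n, with Z a standard normal random variable, the sample estimate of ‖x_n‖₂² coincides with Var[x_n] + (E[x_n])² and tends to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal(200_000)                     # samples of a standard normal variable

for n in (1, 10, 100):
    x_n = Z / n                                      # x_n -> 0 in the m.s. sense as n grows
    ms_norm_sq = np.mean(x_n ** 2)                   # sample estimate of ||x_n||_2^2 = E[x_n^2]
    decomposition = np.var(x_n) + np.mean(x_n) ** 2  # Var[x_n] + E[x_n]^2
    print(n, ms_norm_sq, decomposition)              # the two columns agree and tend to 0
```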
M.s. convergence gives rise to m.s. calculus, where continuity, differentiability and Riemann integrability of a stochastic process are naturally defined by taking m.s. limits in the classical definitions. A stochastic process x is m.s. continuous at t if x(t + δ) → x(t) as δ → 0 in the m.s. sense. It is m.s. differentiable at t if the limit of (x(t + δ) − x(t))/δ as δ → 0 exists in the m.s. sense, which is denoted as x′(t). Finally, x is m.s. Riemann integrable on an interval [a, b] if there exists a sequence of partitions of [a, b] with mesh tending to 0, a = t₀ < t₁ < ⋯ < t_r = b, such that for any choice of points s_i ∈ [t_i, t_{i+1}], i = 0, …, r − 1, the limit of the Riemann sums Σ_i x(s_i)(t_{i+1} − t_i) exists in the m.s. sense, and it is denoted as ∫_a^b x(t) dt.
The following important properties will be used: m.s. continuity on an interval implies m.s. Riemann integrability ([18], Section 4.5.1 (1)); the bound ‖∫_a^b x(t) dt‖₂ ≤ ∫_a^b ‖x(t)‖₂ dt holds for any m.s. Riemann integrable process x ([18], Section 4.5.1 (3)); and the fundamental theorem of m.s. calculus ([18], Section 4.5.1 (5), (6)).
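As a small numerical illustration of the m.s. Riemann integral (again not from the paper; Python with NumPy is assumed), the left Riemann sums of the process x(t) = A t on [0, 1], with A a uniform random variable, converge in the m.s. sense to the integral A/2:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, 200_000)          # random coefficient of the process x(t) = A*t

ms_integral = A / 2.0                       # m.s. Riemann integral of A*t over [0, 1]
for n in (10, 100, 1000):
    grid = np.arange(n) / n                 # left endpoints of a uniform partition of [0, 1]
    riemann_sum = A * grid.sum() / n        # Riemann sum, evaluated realization by realization
    error = riemann_sum - ms_integral
    print(n, np.sqrt(np.mean(error ** 2)))  # m.s. error of the Riemann sums, tending to 0
```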
Finally, we mention that the essential supremum norm is defined as ‖x‖∞ = inf{C ≥ 0 : |x| ≤ C almost surely}. The set of random variables satisfying ‖x‖∞ < ∞ gives rise to the Banach space L^∞(Ω). Obviously, for two random variables x and y, it holds that ‖xy‖₂ ≤ ‖x‖₂ ‖y‖∞, since E[x²y²] ≤ ‖y‖∞² E[x²].
3. NSFD Methods for Linear Deterministic Differential Equations with Delay
Based on the explicit solution to (1), which is available in closed form on each interval [(n − 1)τ, nτ], n ≥ 1, an exact numerical difference scheme for (1) is obtained in [16,17]. It is detailed in the following theorem.
Theorem 1. ([16], Theorem 2), ([17], Theorem 2) Consider a mesh size h such that Nh = τ, for some integer N ≥ 1. Write t_n = nh and let x_n denote the numerical approximation of x(t_n), for n ≥ −N. Then the numerical solution given by x_n = f(t_n), for n = −N, …, 0, and by the recurrence (7), whose coefficients are expressed in terms of α, β and h, defines an exact numerical scheme for (1).

Having an exact numerical scheme is ideal, since it reproduces the exact values of the solution at the points of the mesh. However, a drawback of (7) is that definite integrals need to be numerically computed, and the number of definite integrals increases with increasing times. Thus, a NSFD method is proposed to maintain sufficient accuracy and adequate dynamical properties, but reduce the complexity by avoiding definite integrals. In the first M intervals [(i − 1)τ, iτ], i = 1, …, M, the exact scheme (7) is used (or any other numerical method with sufficiently high accuracy), but afterward the integral part from (7) is discarded. The precision of the method increases with M.
Theorem 2. ([16], Theorem 3), ([17], Theorem 3) Fix M ≥ 1, and compute the numerical solution to (1) in the intervals [(i − 1)τ, iτ], for i = 1, …, M, with the exact method (7) or with any other numerical method of global error at most O(h^M). Then, for the subsequent mesh points, the expression (8) defines a NSFD scheme of global error O(h^M).

As detailed in ([16], Remark 1), the method (8) has the defining characteristics of a NSFD method: the discrete derivative employs a nonstandard denominator function, and the delayed term is discretized nonlocally on the mesh. Furthermore, in the rest of [16], it is proved and illustrated that the method from Theorem 2 is dynamically consistent with (1), regarding asymptotic stability, positivity preserving properties, and oscillation behavior.
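The explicit update formula of (8) is not reproduced in the text above. As a rough illustration of how such a scheme is organized, the following sketch (Python with NumPy) advances the solution with a Mickens-type nonlocal update, using the exponential weight e^{αh} and the nonstandard denominator function (e^{αh} − 1)/α as a stand-in for (8); it also omits the exact computation on the first M intervals required by Theorem 2, so it should be read as a minimal sketch under these assumptions rather than as the scheme of [16].

```python
import numpy as np

def phi(a, h):
    """Nonstandard denominator function (e^{a h} - 1) / a, with the limit h as a -> 0."""
    return h if a == 0.0 else np.expm1(a * h) / a

def nsfd_delay(alpha, beta, f, tau, N, steps):
    """Mickens-type nonlocal update for x'(t) = alpha*x(t) + beta*x(t - tau),
    x(t) = f(t) on [-tau, 0]; h = tau/N and x[k] approximates x(-tau + k*h).
    The update below is an illustrative stand-in for scheme (8)."""
    h = tau / N
    x = [f(-tau + k * h) for k in range(N + 1)]   # history values on [-tau, 0]
    for m in range(N, N + steps):
        x.append(np.exp(alpha * h) * x[m] + beta * phi(alpha, h) * x[m + 1 - N])
    return np.array(x)

# illustrative run with hypothetical parameter values
sol = nsfd_delay(alpha=-1.0, beta=0.5, f=lambda t: 1.0, tau=1.0, N=20, steps=200)
print(sol[-1])
```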
4. NSFD Methods for Linear Random Differential Equations with Delay: Approximations of Moments
When α and β are random variables and f is a stochastic process, problem (1) is randomized. These inputs depend on each outcome ω ∈ Ω, and (2) is obtained. The numerical schemes from Section 3 are also randomized. The exact scheme (7) becomes its random counterpart (10), where the integral is considered in the m.s. sense (it is assumed that f is a m.s. integrable stochastic process), while (8) becomes the random NSFD scheme (11). We translate Theorem 2 into m.s. convergence.
Theorem 3. Suppose that α and β are bounded random variables, and that f is a m.s. continuous stochastic process on [−τ, 0]. Fix M ≥ 1, and compute the numerical stochastic solution to (2) in the intervals [(i − 1)τ, iτ], for i = 1, …, M, with the exact method (10). Then, for the subsequent mesh points, the expression (11) defines a random NSFD scheme of m.s. global error O(h^M).

Proof. For n = −N, …, 0, we have x_n = f(t_n), so the m.s. error is zero on [−τ, 0]. For the mesh points of [0, Mτ], the exact scheme (10) is used, so that the m.s. error is zero there as well. By (10) and (11), one gets a first order recursive inequality (12) for the m.s. error ‖x(t_n) − x_n‖₂ at the mesh points beyond Mτ, whose coefficients are controlled by ‖α‖∞, ‖β‖∞ and h.
Let C denote a generic constant, independent of m and h, that only depends on ‖α‖∞, ‖β‖∞ and f. We first consider the mesh points of the interval [Mτ, (M + 1)τ]. By (12), the m.s. error at these mesh points is bounded by C h^M. Thus, the scheme has m.s. error of order h^M on the first interval beyond Mτ.
We continue by evaluating the m.s. error at the mesh points of the next interval, [(M + 1)τ, (M + 2)τ], by starting from (12) and by employing the bounds already obtained. By solving this first order recursive inequality, we derive again a bound of the form C h^M, where C is a constant that only depends on ‖α‖∞, ‖β‖∞ and f. Therefore, the m.s. error is of order h^M on this interval as well.
In general, we proceed by induction. Suppose that the m.s. error is bounded by C h^M at all mesh points up to a certain interval of length τ, where C is constant and independent of h. We prove that a bound of the same form holds on the next interval of length τ. Fix a mesh point of this interval. From (12) and by the induction hypothesis, the m.s. error satisfies a first order recursive inequality. By solving this first order recursive inequality, we derive (for the mesh points of the new interval) a bound of the form C h^M, where C is constant. This concludes the proof by induction. □
Remark 1. As shall be seen in the numerical computations from Section 5, the boundedness of α and β from Theorem 3 is sufficient, but not necessary. Nonetheless, for unbounded α and/or β, if one wants to ensure the m.s. convergence of the NSFD scheme a priori, it is possible to properly truncate the supports of α and β. Indeed, since α belongs to L²(Ω), one may take a sufficiently big interval capturing as much probability mass of α as desired, and truncate the support of α to that interval (analogously for β). In fact, by applying the generalized Markov's inequality, it may be demonstrated that any second order random variable can be truncated to the interval [mean ± 10 × deviation], so that this interval contains at least 99% of the probability mass irrespective of the probability distribution (indeed, by Chebyshev's inequality, the probability of deviating from the mean by more than ten standard deviations is at most 1/100). In the theory of m.s. calculus, the boundedness of the random input coefficient must usually be imposed: as proved in ([36], Example p. 541), in order for an autonomous and homogeneous first order linear random differential equation (i.e., x′(t) = a x(t), where a is a random variable) to possess a m.s. solution for every m.s. integrable initial condition, the coefficient a must be bounded.

M.s. convergence guarantees convergence of the expectation and the variance. In the computer, the discretization x_n
is explicitly and symbolically expressed in terms of α, β and f, by employing either the exact scheme (10) along the whole integration region or, at cheaper cost, the NSFD scheme from Theorem 3. By using the linearity of the expectation operator E[·], one can explicitly compute E[x_n]. If the NSFD scheme is being used, one is approximating the true expectation of the solution to (2). Complexity is severely increased for large times t, large N (small h), and moderate or high dimension of the random space. The variance may also be approximated by symbolically expressing x_n² and explicitly computing Var[x_n] = E[x_n²] − (E[x_n])², although the complexity becomes significantly affected because the symbolic expression handled is larger.
Notice that working with these symbolic expressions for x_n seems to be necessary. Indeed, if one applies the expectation operator directly in (11), for instance, then E[x_{n+1}] involves expectations of products of functions of α and β with x_n and x_{n+1−N}. Since each x_n depends on α and β, these expectations cannot be split as products of expectations, unless both α and β are nonrandom. So there does not seem to exist a recursive relation formula for E[x_n].
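To make the symbolic strategy concrete, the following sketch (Python with SymPy; the computations reported in this paper are carried out in Mathematica) propagates the same stand-in NSFD update used in the sketch of Section 3, with a deterministic α and a symbol b standing for the random coefficient β. Each x_n then becomes a polynomial in b, and E[x_n] follows from the moments of β by linearity. All parameter values, the initial process f ≡ 1 and the Uniform(−0.5, 0.5) law for β are illustrative assumptions.

```python
import math
import sympy as sp

b = sp.Symbol('b')                           # symbol standing for the random coefficient beta
alpha, tau, N, extra_steps = -1.0, 1.0, 10, 60
h = tau / N
w = math.exp(alpha * h)                      # e^{alpha h} (alpha is deterministic here)
phi = math.expm1(alpha * h) / alpha          # nonstandard denominator (e^{alpha h} - 1)/alpha

x = [sp.Integer(1)] * (N + 1)                # initial process f(t) = 1 on [-tau, 0]
for m in range(N, N + extra_steps):
    # stand-in NSFD update; every x_n stays a polynomial in the symbol b
    x.append(sp.expand(w * x[m] + b * phi * x[m + 1 - N]))

def uniform_moment(k, lo, up):
    """E[beta^k] for beta ~ Uniform(lo, up)."""
    return (up ** (k + 1) - lo ** (k + 1)) / ((k + 1) * (up - lo))

poly = sp.Poly(x[-1], b)
mean = sum(float(c) * uniform_moment(k, -0.5, 0.5) for (k,), c in poly.terms())
print(mean)                                  # approximation of E[x(t_n)] at the last mesh point
```

The variance can be approximated in the same fashion, by expanding x_n² and replacing each power of b by the corresponding moment of β; as discussed above, the symbolic expression handled is then considerably larger.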
Notice that, by Jensen's and Cauchy–Schwarz inequalities,

|E[x(t_n)] − E[x_n]| ≤ E[|x(t_n) − x_n|] ≤ ‖x(t_n) − x_n‖₂.

By the triangular, Jensen's and Cauchy–Schwarz inequalities,

|Var[x(t_n)] − Var[x_n]| ≤ ‖x(t_n) − x_n‖₂ (‖x(t_n)‖₂ + ‖x_n‖₂) + |E[x(t_n)] − E[x_n]| (|E[x(t_n)]| + |E[x_n]|).

So the approximations of the expectation and the variance inherit the rate of convergence corresponding to the m.s. norm, which is O(h^M) when the exact numerical scheme (10) is used for the first M intervals of length τ and (11) is used for the subsequent intervals (Theorem 3).
In the following section, the m.s. convergence of the NSFD scheme is illustrated with some numerical computations. We point out that the boundedness of α and β from Theorem 3 is sufficient, but not necessary. It may happen that a random coefficient is unbounded and the NSFD scheme still converges in the m.s. sense.
5. Numerical Examples
The theoretical discussion is illustrated with some numerical computations. We consider specific probability distributions for α, β and/or f, and a fixed delay τ. We denote by x_n the discretization of the random NSFD method from Theorem 3. The exact solution x(t_n) is computed with the exact scheme (10). Both random variables are explicitly and symbolically expressed in terms of α, β and f. These expansions are employed to compute the expectation and the variance, by using the linearity of the expectation. To check the accuracy, the absolute errors in the approximations of the mean value, |E[x(t_n)] − E[x_n]|, and the variance, |Var[x(t_n)] − Var[x_n]|, are calculated for different values of N and M. According to the theoretical discussion, the errors should decay as h^M when h → 0, which entails accuracy up to a significant number of digits. We remark that such a level of accuracy cannot be achieved by Monte Carlo simulation, since its error decreases as the reciprocal of the square root of the number of realizations.
The implementations and computations are performed with Mathematica® (Wolfram Research, Inc, Mathematica, Version 12.0, Champaign, IL, USA, 2019), owing to its capability to handle both symbolic and numeric computations.
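As a complementary cross-check of the type mentioned above, a crude Monte Carlo estimate of the mean and the variance of the NSFD discretization can be obtained by sampling the random inputs and running the recursion realization by realization. The sketch below (Python with NumPy) uses the same stand-in update and the same illustrative parameter values as the previous sketches; it is not the Mathematica implementation used for the results reported in this section.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, tau, N, steps, samples = -1.0, 1.0, 10, 60, 100_000
h = tau / N
betas = rng.uniform(-0.5, 0.5, samples)            # illustrative law for beta

w, phi = np.exp(alpha * h), np.expm1(alpha * h) / alpha
x = np.ones((samples, N + 1 + steps))              # f(t) = 1 on [-tau, 0] for every realization
for m in range(N, N + steps):
    x[:, m + 1] = w * x[:, m] + betas * phi * x[:, m + 1 - N]

print(x[:, -1].mean(), x[:, -1].var())             # Monte Carlo estimates of the mean and variance
```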
Example 1. Let . Consider and , while β is a random variable, uniformly distributed on the interval .
Figure 1 plots absolute errors of the approximation of the expectation: first, with one of N and M fixed while the other varies; second, with the roles exchanged; and third, these errors divided by the theoretical rate to show, because of the overlapping of the curves, the predicted decrease. Observe that the error is exactly 0 on the first M intervals of length τ. In the fourth panel, the expectation of the output of the NSFD scheme of Theorem 3 is plotted; the estimated expected values are validated by Monte Carlo simulation. Figure 2 is analogous for the variance and the absolute error of its approximation. According to ([16], Lemma 4, Theorem 7), since the coefficients satisfy the corresponding stability conditions almost surely, the NSFD scheme converges to 0 almost surely as t → ∞ (it is asymptotically stable almost surely). In both figures, observe that the expectation and the variance tend to 0 as t → ∞, which means that the NSFD scheme is asymptotically stable in the m.s. sense. Finally, the numerical solution is always positive because the hypotheses of ([16], Theorem 8) hold almost surely.

Example 2. Let the delay τ, the coefficient α and the initial process f be fixed deterministic data, and let β be random with Gaussian distribution, of zero mean and 0.3 standard deviation. Notice that the support of β is unbounded; however, we will see that m.s. convergence of the NSFD scheme described in Theorem 3 holds.
Figure 3 reports absolute errors of the approximation of the expectation, where N and M take on the same values as in Example 1. The decay at the theoretical rate is captured again. The last panel of the figure plots the expectation of the numerical solution from the NSFD scheme of Theorem 3, together with Monte Carlo simulation. Figure 4 is analogous for the variance and the absolute error of its approximation. This example is interesting from the dynamics viewpoint. By ([16], Lemma 4), the probability that the zero solution to the realizations of (2) is asymptotically stable is the probability that the coefficients satisfy the corresponding stability conditions. Taking into account the Gaussian distribution of β, this probability is, up to 12 decimals, approximately one half; i.e., approximately half of the time a realizable NSFD scheme tends to 0 as t → ∞, and half of the time it does not. The m.s. treatment mixes these two behaviors. In the figures, both the expectation and the variance seem to increase as t advances, which means that the NSFD scheme is unstable in the m.s. sense. Finally, notice that β has probability one half of being negative, while the mean of the solution is positive ([16], Theorem 8).

Example 3. Let the delay τ be fixed, and consider an initial condition process defined through a random variable γ. It is assumed that α, β and γ are independent random quantities, uniformly distributed on a common interval.
As in Example 1, Figure 5 reports absolute errors of the approximation of the expectation: first, with one of N and M fixed while the other varies; second, with the roles exchanged; and third, these errors divided by the theoretical rate to highlight, because of the overlapping of the curves, the predicted decrease. The fourth panel plots the expectation of the discretization computed via the NSFD scheme of Theorem 3, which is validated by Monte Carlo simulation. For the variance, computations become more expensive, due to the dimension three of the random space. In particular, the symbolic expression of the exact scheme (10) becomes unfeasible, so the exact error of the variance approximation cannot be reported. In Figure 6, we plot the variance of the discretization from the NSFD scheme of Theorem 3. Comparison is performed with Monte Carlo simulation, showing agreement of the estimates. Based on ([16], Lemma 4, Theorem 7), the condition satisfied by the coefficients almost surely entails that the NSFD scheme does not approach 0 as t → ∞ (almost sure instability). This fact agrees with the plots of the expectation and the variance, which seem to increase as t grows; this behavior entails that the NSFD scheme is unstable in the m.s. sense. Finally, the fact that the hypotheses of ([16], Theorem 8) hold almost surely implies the positivity of the numerical solution.

We would like to remark that, even when M = 1 and the global error of the NSFD scheme is O(h), its error is lower than that of Euler's method, which is also O(h). Euler's method has already been employed for random differential equations (ordinary and fractional) in the m.s. sense [29,30,34]. In this Example 3, Figure 7 plots errors for approximations of the mean, comparing Euler's method and the NSFD scheme with M fixed. Observe that, in log scale, both error curves run in parallel, but the error corresponding to the NSFD scheme is lower; this may be due to the non-standard nature of the method and to its being error-free on the first M intervals of length τ. Although the proposed NSFD method is restricted to the linear random differential equation with delay, it may provide the foundation for designing new non-standard numerical methods for delay nonlinear equations.

Remark 2. In the recent literature, the m.s. convergence of Euler's method has not been formally proved for delay random differential equations. If randomness is not incorporated into the system by the coefficients, but by a Wiener noise instead (Itô stochastic delay differential equation, which gives rise to non-differentiable solutions), then Euler's method (the Euler–Maruyama method, which considers discrete increments of the driving Wiener process) was rigorously studied and its m.s. convergence was proved in [37]. In the context of delay random differential equations (those with randomness manifested in the coefficients and no Wiener noise), we focus on the linear case (2) studied in this paper assuming, as in Theorem 3, that α and β are bounded random variables, so that ‖α‖∞ and ‖β‖∞ are finite. The exact m.s. solution to (2) satisfies an integral equation between consecutive mesh points, where the integrals are m.s. Riemann. If e_n denotes the difference between the exact solution and the Euler discretization at the mesh point t_n, then, by a simple subtraction, e_{n+1} is expressed in terms of e_n, e_{n−N} and the local integration errors. Given any interval between consecutive mesh points, the m.s. Lipschitz condition for the right-hand side of (2) holds, with Lipschitz constant controlled by ‖α‖∞ and ‖β‖∞. Consequently, a first order recursive inequality for ‖e_n‖₂ is obtained. For the mesh points of [0, τ] (notice that the delayed term is then evaluated on [−τ, 0], where the error is zero), by solving the first order recursive inequality for ‖e_n‖₂, we derive a bound of the form C h, where C is constant. For the mesh points of [τ, 2τ], based on similar calculations, a bound of the same form is obtained, where the constant is again independent of h. For the subsequent intervals, one proceeds similarly. This proves that the random Euler's method has m.s. global error O(h). This proof corresponds to the linear case, although it may be extendable to delay random differential equations satisfying a m.s. Lipschitz condition.
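For reference, the sample-path Euler recursion discussed in Remark 2 can be sketched as follows (Python with NumPy; parameter values are illustrative and the per-realization usage mirrors the earlier Monte Carlo sketch). The delayed term is evaluated at the lagged mesh point t_m − τ.

```python
import numpy as np

def euler_delay(alpha, beta, f, tau, N, steps):
    """Explicit Euler recursion for x'(t) = alpha*x(t) + beta*x(t - tau),
    x(t) = f(t) on [-tau, 0]; h = tau/N and x[k] approximates x(-tau + k*h)."""
    h = tau / N
    x = [f(-tau + k * h) for k in range(N + 1)]   # history values on [-tau, 0]
    for m in range(N, N + steps):
        x.append(x[m] + h * (alpha * x[m] + beta * x[m - N]))
    return np.array(x)

# one realization with hypothetical parameter values; sampling alpha, beta, f and
# averaging the runs yields Monte Carlo estimates, as in the earlier sketch
sol = euler_delay(alpha=-1.0, beta=0.5, f=lambda t: 1.0, tau=1.0, N=20, steps=200)
print(sol[-1])
```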
Example 4. In this example, the linear model (2) is considered for fitting the time evolution of a photosynthetic bacterial population, Rhodobacter capsulatus (R. capsulatus) [38], under infrared lighting conditions. Direct cell counts were made for the first 7 days, every two to three days, during which the population grew with no effect of competition for resources (light and/or CO2) that would yield logistic nonlinearities. For days 0, 2, 4 and 7, the population sizes, measured in cells/mL scaled by one million, were 0.583, 0.635, 1.08 and 3.20, respectively. For the chosen delay and initial function, least-squares estimates for α and β are obtained. The effect of small random displacements on the coefficients is studied here. Let us suppose displacements of α and β with respect to their least-squares estimates, with zero mean values. According to the maximum entropy principle [39], α and β follow truncated exponential distributions, with suitably chosen rates. The expectation and the variance of the output are approximated with the random NSFD scheme. Figure 8 plots the results for fixed M and distinct N (mean values in solid line, and mean ± 2 × standard deviation in dashed lines), together with the least-squares fitting. Observe that a small uncertainty in the parameters may cause significant changes in the final solution, of up to 30% variation for the seventh day compared to an idealized situation containing no uncertainty. Observe also that as N increases, the approximations from the NSFD scheme tend to overlap, thus indicating convergence.

6. Conclusions
In this paper, we have extended a NSFD numerical scheme recently proposed for deterministic linear differential equations with delay to the random framework. Incorporating randomness into models is important to account for measurement errors in data. M.s. convergence of the numerical discretizations has been established when the two equation coefficients are bounded random variables and the initial condition is a regular stochastic process, with rate of convergence O(h^M), where h is the step size and M is the number of intervals of length τ where the exact scheme is applied. M.s. convergence allows for approximating the expectation and the variance of the solution at the inherited rate O(h^M), by symbolically expanding the discretizations in terms of the random inputs. The numerical examples have illustrated and assessed the proposed approach. The convergence rate has been supported numerically, even when the two equation coefficients are unbounded random variables. The asymptotic behavior of the expectation and the variance as the time t grows has been evaluated, graphically and taking into account theoretical results on deterministic stability and instability of the zero solution. A comparison with Euler's method has been performed when M = 1; although both methods have global errors O(h), the error of the NSFD scheme is lower, possibly due to the non-standard nature of the scheme and to its being error-free on the first intervals of length τ. Also, we have considered an example dealing with actual experimental data for a bacterial species growing under infrared lighting conditions, and have calculated numerical solutions after randomizing the input parameters according to the maximum entropy principle.
The advantage of the random NSFD scheme is its high accuracy in approximating some statistics, which cannot be achieved with Monte Carlo methods. In addition, the procedure is simple: one only symbolically expands the discretizations in terms of the random inputs and afterward applies the corresponding statistical operator. However, this strategy possesses some limitations. Obviously, the necessity of symbolically expressing the discretizations restricts the applicability of the NSFD scheme to moderate step size h and time variable t, as well as small dimension of the random space. Although the calculation of the expectation of the discretization seems to be quite feasible in the computer, the calculation of the variance may become a big issue with this approach, let alone other statistics of order greater than two. We ask ourselves about the possibility of accurately calculating statistics with the random NSFD scheme without relying on symbolic expansions. We admit that, for the moment, Monte Carlo simulation seems the best option for large time variable t, small step size h or large dimension of the random space, where each realization of the governing delay model is numerically solved by employing a NSFD scheme. For estimating densities, the symbolic expression is too complex and kernel methods are preferable.
Mickens' methodology on NSFD schemes has shown fruitful applications over the years for ordinary, partial and fractional deterministic differential equations. However, only very recently has the application of NSFD numerical schemes to delay deterministic differential equations been explored, in the context of linear models. Thus, this is the first contribution that proposes the use of NSFD schemes for quantifying uncertainty in delay random differential equations. Further study of NSFD methods for delay deterministic and random differential equations needs to be conducted, especially for nonlinear equations, for applications to the modeling of real-life systems with aftereffects or time lags.
We propose specific lines of research for possible future developments:
In the deterministic setting, the extension of the NSFD method to nonlinear delay differential equations. We believe that this extension may be done by linearization or by applying the empirical rules proposed by Mickens.
Randomization of the NSFD method for delay random differential equations and applications without relying on symbolic expansions. Symbolic computations are the main drawback of the method proposed in the present paper.
A theoretical analysis of m.s. dynamic consistency.