Article

Numerical Integral Transform Methods for Random Hyperbolic Models with a Finite Degree of Randomness

Instituto Universitario de Matemática Multidisciplinar, Building 8G, access C, 2nd floor, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 853; https://doi.org/10.3390/math7090853
Submission received: 28 August 2019 / Revised: 10 September 2019 / Accepted: 11 September 2019 / Published: 16 September 2019
(This article belongs to the Special Issue Stochastic Differential Equations and Their Applications)

Abstract

This paper deals with the construction of numerical solutions of random hyperbolic models with a finite degree of randomness that make the computation of their expectation and variance manageable. The approach is based on the combination of random Fourier transforms, random Gaussian quadratures and the Monte Carlo method. The recovery of the solution of the original random partial differential problem through the inverse integral transform allows its numerical approximation using Gaussian quadratures, which require evaluating the solution of the random ordinary differential problem only at certain concrete values; these evaluations are approximated using the Monte Carlo method. Numerical experiments illustrating the numerical convergence of the method are included.

1. Introduction

Analytic-numerical solutions of random mean square partial differential models have been treated recently using random integral transforms [1,2,3]. It is well-known [4] that the appropriate type of integral transform depends closely on the type of equation and initial/boundary conditions, due to the properties of the operational calculus of the underlying integral transform. Important hyperbolic models of telegraph type are relevant in wave propagation [5,6], signal analysis [7] and random walk theory [8]. In real problems, parameters, coefficients and initial/boundary conditions are subject to uncertainties, not only from measurement errors but also due to the heterogeneity of the media or the lack of access to measurements. The evaluation of microwave heating processes in ferrite materials [9] using the classical deterministic model gives inaccurate results because of the complicated distribution within the oven and the fluctuation of the dielectric properties of the material with respect to density, temperature, moisture content and other elements. Since the seminal paper by Kac [10], several authors have treated the telegraph equation with uncertainties, with other objectives [11,12].
Efficient methods for solving deterministic problems numerically, such as finite-difference methods, become unsuitable in the random case because of the computation of the expectation and the variance of the approximating stochastic process. This computational complexity arises from the operational random calculus involving large random matrices across the iterative levels of the discretization steps, and from the necessity of storing the information of all the previous levels of the iteration process.
These drawbacks for solving random partial differential models, essentially of computational complexity, motivate the search for non-iterative alternatives. The random integral transform approach quoted above has two main steps: the first is the transformation of the original random partial differential problem into a random ordinary differential system; the second is the recovery of the solution of the original problem through the random inverse integral transform. At this point, the random Gaussian quadrature technique provides an easy expression involving evaluations at the zeros of the family of orthogonal polynomials linked to the Gaussian quadrature. This approach allows the treatment of both cases: when the explicit solution of the random transformed ordinary differential problem is available, and when it must be solved numerically, because the Gaussian quadrature rules require evaluations only at concrete points.
In this paper, we address the numerical solution of random hyperbolic models of telegraph type. Section 3 deals with the random linear telegraph type problem
$$u_{tt}(x,t) + 2b\,u_t(x,t) + a\,u(x,t) = c\,u_{xx}(x,t) + \phi(x,t), \quad x \in \mathbb{R},\ t > 0, \qquad (1)$$
$$u(x,0) = f_1(x), \qquad (2)$$
$$u_t(x,0) = f_2(x), \qquad (3)$$
where the damping coefficient b, the reaction coefficient a and the diffusion coefficient c are all random variables (r.v.'s). We also assume that the source term $\phi(x,t)$ and the initial conditions $f_1(x)$ and $f_2(x)$ are mean square (m.s.) continuous stochastic processes (s.p.'s) with a finite degree of randomness [13,14], absolutely integrable with respect to the spatial variable on the real line. In this problem, the random numerical approximation of the random inverse Fourier transform is performed using the Gauss–Hermite quadrature rule.
Section 4 studies the random heterogeneous telegraph type problem
$$u_{tt}(x,t) = \left( k(x)\,u_x(x,t) \right)_x + a(x)\,u(x,t) + \psi(x,t), \quad x > 0,\ t > 0, \qquad (4)$$
$$u(0,t) = g_1(t), \qquad (5)$$
$$u_x(0,t) = g_2(t), \qquad (6)$$
$$u(x,0) = g(x), \qquad (7)$$
where $a(x)$, $k(x)$, $\psi(x,t)$, $g_1(t)$, $g_2(t)$ and $g(x)$ are m.s. continuous s.p.'s with a finite degree of randomness, those depending on t being absolutely integrable with respect to the time variable. Here, the diffusivity coefficient $k(x)$ is also assumed to be positive and m.s. differentiable. In this section, we use random Gauss–Laguerre quadrature rules. Section 2 includes some preliminaries about the solution of random linear differential systems that are used in further sections [14]. The paper ends with conclusions in Section 5.

2. Numerical Solution of Random Linear Differential Problems via Simulations

For the sake of clarity in the presentation, we begin this section by recalling some definitions and results of [15].
Given a complete probability space, ( Ω , F , P ) , L p m × n ( Ω ) denotes the set of all random matrices X = ( x i , j ) m × n (denoted by capital case letters) whose entries x i , j (denoted by lower case letters) are r.v.’s satisfying
$$\| x_{i,j} \|_p = \left( \mathbb{E}\left[ |x_{i,j}|^p \right] \right)^{1/p} < +\infty, \quad p \ge 1, \qquad (8)$$
that is, x i , j L p ( Ω ) , where E · denotes the expectation operator. The space of all random matrices together with the matrix p-norm, that is, L p m × n ( Ω ) , · p defined as follows
$$\| X \|_p = \sum_{i=1}^{m} \sum_{j=1}^{n} \| x_{i,j} \|_p, \quad \mathbb{E}\left[ |x_{i,j}|^p \right] < +\infty, \qquad (9)$$
is a Banach space. Note that, in the case m = n = 1, both norms coincide and $(L_p^{1\times 1}(\Omega) \equiv L_p(\Omega), \|\cdot\|_p)$ represents the Banach space of real r.v.'s verifying Equation (8). The definition of the matrix p-norm in Equation (9) can be extended to matrix s.p.'s $X(t) = (x_{i,j}(t))_{m\times n}$, where each entry $x_{i,j}(t)$ is a s.p., that is, a r.v. for each t. We say that a matrix s.p. $X(t)$ lies in $L_p^{m\times n}(\Omega)$ if $x_{i,j}(t) \in L_p(\Omega)$ for every $1 \le i \le m$ and $1 \le j \le n$. The definitions of continuity, differentiability and integrability of matrix s.p.'s lying in $L_p^{m\times n}(\Omega)$ follow in a straightforward manner using the matrix p-norm in Equation (9). The cases p = 2 and p = 4 correspond to the so-called mean square (m.s.) and mean fourth (m.f.) convergence, respectively. Specifically, when dealing with random differential equations, the reference space is $L_2^{m\times n}(\Omega)$ (p = 2) because in practice most r.v.'s have finite variance. However, the space $L_4^{m\times n}(\Omega)$ (p = 4), $L_4^{m\times n}(\Omega) \subset L_2^{m\times n}(\Omega)$, is also used in order to legitimize some mean square operational rules. Let us consider the random vector initial value problem (IVP)
$$Y'(s) = L(s)\, Y(s), \quad Y(0) = Y_0, \quad s > 0, \qquad (10)$$
where $L(s) \in L_{2p}^{n\times n}(\Omega)$ is a matrix s.p. and $Y_0 \in L_{2p}^{n\times 1}(\Omega)$, with $L(s)$ being 2p-locally absolutely integrable, that is, $\int_0^T \| L(s) \|_{2p}\, ds < +\infty$. Assume that the random system in Equation (10) is random 2p-regular, $p \ge 1$, in the sense of Definition 3, page 943, of [15], and let $\Phi_L(s;0)$ be the random fundamental matrix solution of Equation (10) satisfying
$$\Phi_L'(s;0) = L(s)\, \Phi_L(s;0); \quad \Phi_L(0;0) = I_n, \qquad (11)$$
where $I_n$ is the identity matrix of size n.
If $B(s)$ lies in $L_{2p}^{n\times 1}(\Omega)$ and is 2p-integrable, then, under the previous hypotheses on the random problem in Equation (10) and assuming that the entries of the matrix s.p. $L(s) = (\ell_{i,j}(s))_{n\times n}$ satisfy the moment condition of [15] for every s > 0, that is,
$$\mathbb{E}\left[ |\ell_{i,j}(s)|^r \right] \le m_{i,j}\, (h_{i,j})^r < +\infty, \quad r \ge 0, \quad \forall\, i,j:\ 1 \le i,j \le n, \qquad (12)$$
then, by Theorem 1, page 944, of [15], the solution of the non-homogeneous problem
$$X'(s) = L(s)\, X(s) + B(s), \quad X(0) = Y_0, \qquad (13)$$
is given by
$$X(s) = \Phi_L(s;0)\, Y_0 + \Phi_L(s;0) \int_0^s \Phi_L^{-1}(v;0)\, B(v)\, dv. \qquad (14)$$
In particular, if $L(s) = L$ is a constant random matrix (see Section 3 of [16]), the corresponding solution of Equation (13), given by Equation (14), can be written as
$$X(s) = e^{L s}\, Y_0 + \int_0^s e^{L(s-v)}\, B(v)\, dv. \qquad (15)$$
Note that the m.s. solution of Equation (13), given by Equation (14), is not available, apart from very limited cases, see Example 5 of [15], because the random fundamental matrix Φ L ( s ; 0 ) is not known.
This motivates the search for alternative approximations via simulations, the so-called Monte Carlo approach [17], which provides the expectation $\mathbb{E}[X(s)]$ through the average over an appropriate number of realizations $\omega \in \Omega$ of the deterministic problem
$$X'(s,\omega) = L(s,\omega)\, X(s,\omega) + B(s,\omega), \quad X(0,\omega) = Y_0(\omega). \qquad (16)$$
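As an illustration of this simulation pattern (the paper's own computations use Mathematica [18]), the following minimal Python/SciPy sketch draws realizations of the random inputs, solves the resulting deterministic IVPs (16) and averages; `L_fn`, `B_fn`, `Y0_fn` and `sample_inputs` are hypothetical placeholders to be supplied for a concrete model. The sample mean converges to $\mathbb{E}[X(s)]$ at the usual $O(K^{-1/2})$ Monte Carlo rate.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monte_carlo_expectation(L_fn, B_fn, Y0_fn, sample_inputs, s_end, K, rng):
    """Estimate E[X(s_end)] and std[X(s_end)] by averaging K deterministic solves of (16)."""
    sols = []
    for _ in range(K):
        omega = sample_inputs(rng)                       # one realization of the r.v.'s
        rhs = lambda s, X: L_fn(s, omega) @ X + B_fn(s, omega)
        sol = solve_ivp(rhs, (0.0, s_end), Y0_fn(omega), rtol=1e-8, atol=1e-10)
        sols.append(sol.y[:, -1])                        # X(s_end, omega)
    sols = np.array(sols)
    return sols.mean(axis=0), sols.std(axis=0, ddof=1)   # mean and std estimates
```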
For the sake of convenience, we introduce an interesting example with known exact solution that is used below for solving a random hyperbolic problem.
Example 1.
Let $m(s)$, $n(s)$ and $q(s)$ be 2p-continuous s.p.'s and let $[y_1, y_2]^T$ be a r.v. in $L_{2p}^{2\times 1}(\Omega)$. Assume that $n(s)$ and $q(s)$ satisfy the moment condition in Equation (12) for each s. Consider the random initial value problem in $L_{2p}^{2\times 1}(\Omega)$:
$$X'(s) = \begin{pmatrix} 0 & 1 \\ -n(s) & q(s) \end{pmatrix} X(s) + \begin{pmatrix} 0 \\ m(s) \end{pmatrix}, \quad X(0) = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \qquad (17)$$
with
$$n(s) = n(s;a,b) = e^{(b-a)s}, \quad q(s) = q(s;b) = b, \quad m(s) = m(s;a,b) = 1 + (a-b)(a-2b)\, e^{(a-b)s}, \quad y_1 = 1; \quad y_2 = y_2(a,b) = a - b, \qquad (18)$$
where a is the truncated Gaussian r.v., $a \sim N_{[0.9, 1.1]}(1, 0.05)$, and b is the truncated beta r.v., $b \sim \mathrm{Beta}_{[0.1, 0.8]}(2, 0.25)$. Both r.v.'s, a and b, are independent.
It is easy to check that the exact m.s. solution of the test random problem in Equations (17) and (18) is given by
$$X(s) = \begin{pmatrix} e^{(a-b)s} \\ (a-b)\, e^{(a-b)s} \end{pmatrix} = e^{(a-b)s} \begin{pmatrix} 1 \\ a-b \end{pmatrix}. \qquad (19)$$
The expectation and the standard deviation of the random vector exact solution s.p. in Equation (19) at s = 1 , denoted by X ( 1 ) = [ x 1 ( 1 ) , x 2 ( 1 ) ] T , take the following values
$$\mathbb{E}[X(1)] = \left( \mathbb{E}[x_1(1)],\ \mathbb{E}[x_2(1)] \right)^T = (1.52728,\ 0.671897)^T, \qquad (20)$$
$$\sqrt{\mathrm{Var}[X(1)]} = \left( \sqrt{\mathrm{Var}[x_1(1)]},\ \sqrt{\mathrm{Var}[x_2(1)]} \right)^T = (0.284533,\ 0.43069)^T. \qquad (21)$$
Now, we illustrate and compare the numerical approximations with the exact solutions in the unfavorable and usual case when the solution s.p. of a random system of the type in Equation (17) is not available because the random fundamental matrix $\Phi_L(s;0)$ is not known. The search for alternative approximations is made via the Monte Carlo approach, as just commented in this section. Firstly, we consider in the system in Equations (17) and (18), for both r.v.'s a and b, different numbers of realizations, $K_0 = 10^4$, $K_1 = 2\times 10^4$, $K_2 = 4\times 10^4$, $K_3 = 8\times 10^4$ and $K_4 = 1.6\times 10^5$, for a fixed value of s, obtaining $K_i$, $i = 0,\dots,4$, deterministic numerical solutions, denoted by $\bar X(s) = [\bar x_1(s), \bar x_2(s)]^T$. Then, for a fixed $K_i$, taking the average of the $K_i$ numerical solutions obtained, we compute the expectation $\mathbb{E}_{MC}^{K_i}[\bar X(1)]$. In a similar way, we compute the standard deviation $\sqrt{\mathrm{Var}_{MC}^{K_i}[\bar X(1)]}$ for a fixed $K_i$. Table 1, Table 2 and Table 3 include all these values together with the absolute errors and the numerical convergence ratios for a number $K_i$ of realizations, computed by
$$\mathrm{ErrAbs}\, \mathbb{E}_{MC}^{K_i}(s) = \left| \mathbb{E}_{MC}^{K_i}[\bar x_j(s)] - \mathbb{E}[x_j(s)] \right|, \quad j = 1,2, \qquad (22)$$
$$\mathrm{ErrAbs}\, \sqrt{\mathrm{Var}}_{MC}^{K_i}(s) = \left| \sqrt{\mathrm{Var}_{MC}^{K_i}[\bar x_j(s)]} - \sqrt{\mathrm{Var}[x_j(s)]} \right|, \quad j = 1,2, \qquad (23)$$
$$\mathrm{ratio}\, \mathbb{E}_{MC}^{K_{i-1} \to K_i}(s) = \frac{\left| \mathbb{E}_{MC}^{K_{i-1}}[\bar x_j(s)] - \mathbb{E}[x_j(s)] \right|}{\left| \mathbb{E}_{MC}^{K_i}[\bar x_j(s)] - \mathbb{E}[x_j(s)] \right|}, \quad j = 1,2, \qquad (24)$$
$$\mathrm{ratio}\, \sqrt{\mathrm{Var}}_{MC}^{K_{i-1} \to K_i}(s) = \frac{\left| \sqrt{\mathrm{Var}_{MC}^{K_{i-1}}[\bar x_j(s)]} - \sqrt{\mathrm{Var}[x_j(s)]} \right|}{\left| \sqrt{\mathrm{Var}_{MC}^{K_i}[\bar x_j(s)]} - \sqrt{\mathrm{Var}[x_j(s)]} \right|}, \quad j = 1,2, \qquad (25)$$
respectively. Table 1 and Table 2 show the values of both the expectation and the standard deviation of the numerical approximations at s = 1 obtained when simulations via Monte Carlo are used, considering different numbers of realizations $K_i$ of both r.v.'s a and b in the system in Equations (17) and (18). Using the definition of the numerical convergence ratio for a number $K_i$ of realizations, given by Equations (24) and (25), the expected numerical convergence of both the expectation and the standard deviation of the numerical solutions of the Monte Carlo simulated problems to those of the exact random solution is observed. Although the convergence provided by the Monte Carlo method is slow, it is a useful tool for managing the high computational complexity. Table 3 shows the good approximations to the exact values, $\mathbb{E}[x_j(s)]$ and $\sqrt{\mathrm{Var}[x_j(s)]}$, $j = 1, 2$, obtained by the Monte Carlo method for $K = 1.6\times 10^5$ realizations, even as the value of the parameter s increases.
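To make the procedure concrete, here is a hedged Python/SciPy sketch (an assumed reimplementation; the paper used Mathematica) of the Monte Carlo estimate for Example 1. Since the closed-form solution (19) is available here, each realization evaluates it directly; in the general case, the closed form would be replaced by a numerical ODE solve of Equations (17) and (18).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_truncated(dist, lo, hi, size, rng):
    # inverse-CDF sampling of `dist` truncated to [lo, hi]
    u = rng.uniform(dist.cdf(lo), dist.cdf(hi), size)
    return dist.ppf(u)

K = 160_000                                                     # K_4 realizations
a = sample_truncated(stats.norm(1.0, 0.05), 0.9, 1.1, K, rng)   # a ~ N_[0.9,1.1](1, 0.05)
b = sample_truncated(stats.beta(2.0, 0.25), 0.1, 0.8, K, rng)   # b ~ Beta_[0.1,0.8](2, 0.25)

s = 1.0
x1 = np.exp((a - b) * s)                      # first component of (19)
x2 = (a - b) * x1                             # second component of (19)

print("E_MC[X(1)]   =", x1.mean(), x2.mean())             # compare with (20)
print("std_MC[X(1)] =", x1.std(ddof=1), x2.std(ddof=1))   # compare with (21)
```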
Computations were carried out with Mathematica© software version 11.3.0.0 [18] on Windows 10 Pro (64-bit), Intel(R) Core(TM) i7-7820X CPU, 3.60 GHz, 8 cores. The timings (CPU time spent in the Wolfram Language kernel) for $K = 1.6\times 10^5$ and s = 1.5 in Table 3 correspond to the most expensive scenario: 86.25 s for the generation of the K realizations of both r.v.'s a and b, and 174.047 s to obtain the approximations of both the expectation and the standard deviation in Table 3.

3. Gauss–Hermite Solution of Random Telegraph Model

In this section, we construct the numerical solution of the random telegraph model in Equations (1)–(3) in two stages. Firstly, using the Fourier exponential transform, an infinite integral form of the theoretical solution is obtained. Then, using random Gauss–Hermite quadrature formulae, a random numerical solution is represented, which is further computed by means of Monte Carlo simulations.
Let u ( x , t ) be the theoretical solution s.p. of the random problem in Equations (1)–(3), and let
$$U(t)(\xi) = \mathcal{F}[u(\cdot,t)](\xi) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} u(x,t)\, e^{-i x \xi}\, dx \qquad (26)$$
be the Fourier exponential transform of the one-variable s.p. $u(\cdot,t)$, for a fixed time t. Using the properties of the random Fourier exponential transform (see [1]), we have
$$\mathcal{F}[u_{xx}(\cdot,t)](\xi) = -\xi^2\, \mathcal{F}[u(\cdot,t)](\xi) = -\xi^2\, U(t)(\xi), \qquad (27)$$
$$\mathcal{F}[u_{t}(\cdot,t)](\xi) = \frac{d}{dt}\, \mathcal{F}[u(\cdot,t)](\xi) = \frac{d}{dt}(U(t))(\xi), \qquad (28)$$
$$\mathcal{F}[u_{tt}(\cdot,t)](\xi) = \frac{d^2}{dt^2}(U(t))(\xi). \qquad (29)$$
We assume that the r.v.'s a, b and c of Equation (1) are mutually independent and lie in $L_4(\Omega)$, and that the s.p.'s $\phi(\cdot,t)$, $f_1(x)$ and $f_2(x)$ are m.f. continuous, with $\phi(\cdot,t)$ having at most a finite number of jump discontinuities in the variable x. Let $f_1(x)$, $f_2(x)$ and $\phi(\cdot,t)$ be m.f. absolutely integrable in $x \in \mathbb{R}$, that is,
$$\int_{-\infty}^{+\infty} \| \phi(x,t) \|_4\, dx < +\infty \ (t > 0 \text{ fixed}), \quad \int_{-\infty}^{+\infty} \| f_1(x) \|_4\, dx < +\infty, \quad \int_{-\infty}^{+\infty} \| f_2(x) \|_4\, dx < +\infty;$$
then, by formal application of the random Fourier exponential transform to the problem in Equations (1)–(3), one achieves the random initial value transformed problem
$$\frac{d^2}{dt^2}(U(t))(\xi) + 2b\, \frac{d}{dt}(U(t))(\xi) + (a + \xi^2 c)\, U(t)(\xi) = \Phi(t)(\xi), \quad t > 0, \qquad (30)$$
$$U(0)(\xi) = \mathcal{F}[f_1(x)](\xi) = F_1(\xi), \qquad (31)$$
$$\frac{d}{dt}(U(0))(\xi) = \mathcal{F}[f_2(x)](\xi) = F_2(\xi), \qquad (32)$$
where
$$\Phi(t)(\xi) = \mathcal{F}[\phi(\cdot,t)](\xi) \qquad (33)$$
is the Fourier transform of the source term $\phi(\cdot,t)$. Note that the random linear inhomogeneous problem in Equations (30)–(33) can be written as the extended random linear system
$$X'(t)(\xi) = L(\xi)\, X(t)(\xi) + B(t)(\xi), \quad X(0)(\xi) = Y_0(\xi), \quad t > 0, \qquad (34)$$
where
$$L(\xi) = \begin{pmatrix} 0 & 1 \\ -\alpha & -2b \end{pmatrix}, \quad \alpha = a + \xi^2 c, \qquad (35)$$
$$B(t)(\xi) = \begin{pmatrix} 0 \\ \Phi(t)(\xi) \end{pmatrix}, \quad Y_0 = Y_0(\xi) = \left( F_1(\xi),\ F_2(\xi) \right)^T. \qquad (36)$$
According to the theory shown in Section 2, the entries of the random matrix $L(\xi) = (\ell_{i,j})_{2\times 2} \in L_4^{2\times 2}(\Omega)$, for $\xi \in \mathbb{R}$ fixed, must have absolute moments with respect to the origin that increase at most exponentially, that is,
$$\mathbb{E}\left[ |\ell_{i,j}|^r \right] \le m_{i,j}\, (h_{i,j})^r < +\infty, \quad r \ge 0, \quad \forall\, i,j:\ 1 \le i,j \le 2; \qquad (37)$$
thus, we assume that the r.v.'s a, b and c satisfy the condition in Equation (37). Furthermore, the condition in Equation (37) guarantees that $L(\xi)$ is 4-locally absolutely integrable. Because the random matrix $L(\xi)$ in Equation (34) is constant in time, we may use Equation (15) to obtain $X(t)(\xi)$ explicitly from Equations (35) and (36), where the exponential matrix in Equation (15) takes a particular form depending on the following cases:
• Case 1. $b^2 > \alpha$:
$$e^{L(\xi)t} = e^{-bt} \begin{pmatrix} \cosh\!\left(t\sqrt{b^2-\alpha}\right) + \dfrac{b\,\sinh\!\left(t\sqrt{b^2-\alpha}\right)}{\sqrt{b^2-\alpha}} & \dfrac{\sinh\!\left(t\sqrt{b^2-\alpha}\right)}{\sqrt{b^2-\alpha}} \\ -\dfrac{\alpha\,\sinh\!\left(t\sqrt{b^2-\alpha}\right)}{\sqrt{b^2-\alpha}} & \cosh\!\left(t\sqrt{b^2-\alpha}\right) - \dfrac{b\,\sinh\!\left(t\sqrt{b^2-\alpha}\right)}{\sqrt{b^2-\alpha}} \end{pmatrix}, \qquad (38)$$
• Case 2. $b^2 < \alpha$:
$$e^{L(\xi)t} = e^{-bt} \begin{pmatrix} \cos\!\left(t\sqrt{\alpha-b^2}\right) + \dfrac{b\,\sin\!\left(t\sqrt{\alpha-b^2}\right)}{\sqrt{\alpha-b^2}} & \dfrac{\sin\!\left(t\sqrt{\alpha-b^2}\right)}{\sqrt{\alpha-b^2}} \\ -\dfrac{\alpha\,\sin\!\left(t\sqrt{\alpha-b^2}\right)}{\sqrt{\alpha-b^2}} & \cos\!\left(t\sqrt{\alpha-b^2}\right) - \dfrac{b\,\sin\!\left(t\sqrt{\alpha-b^2}\right)}{\sqrt{\alpha-b^2}} \end{pmatrix}, \qquad (39)$$
• Case 3. $b^2 = \alpha$:
$$e^{L(\xi)t} = e^{-bt} \begin{pmatrix} 1 + bt & t \\ -b^2 t & 1 - bt \end{pmatrix}. \qquad (40)$$
The solution s.p. of the random initial value transformed problem in Equations (30)–(33) takes the form
$$U(t)(\xi) = [1, 0]\, X(t)(\xi) = [1,0] \left( e^{L(\xi)t}\, Y_0 + \int_0^t e^{L(\xi)(t-v)}\, B(v)(\xi)\, dv \right), \quad t > 0, \qquad (41)$$
with $L(\xi) \in L_4^{2\times 2}(\Omega)$ locally absolutely integrable, whose entries satisfy the condition in Equation (37), $Y_0 \in L_4^{2\times 1}(\Omega)$ and $B(v) \in L_4^{2\times 1}(\Omega)$ absolutely integrable in $x \in \mathbb{R}$, and $e^{L(\xi)t}$ defined in Equations (35), (36) and (38)–(40), respectively. By using the inverse Fourier exponential transform, one gets
$$u(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} [1,0]\, X(t)(\xi)\, e^{i\xi x}\, d\xi, \quad -\infty < x < +\infty,\ t > 0. \qquad (42)$$
Apart from the fact that, in this case, $X(t)(\xi)$ can be written as in Equation (15), we are interested in the approximation of the random infinite integral using random Gauss–Hermite quadratures (see Section 2.1 of [15]). Note that the random integral in Equation (42) can be written in the form
$$u(x,t) = [1,0]\, \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{-\xi^2}\, J(t,\xi)\, d\xi, \quad -\infty < x < +\infty,\ t > 0, \qquad (43)$$
$$J(t,\xi) = X(t)(\xi)\, e^{\xi(i x + \xi)}, \quad t > 0. \qquad (44)$$
Let ρ j be the weights of the Gauss–Hermite quadrature formula,
$$\rho_j = \frac{2^{N-1}\, N!\, \sqrt{\pi}}{N^2\, \left[ H_{N-1}(\theta_j) \right]^2}, \quad 1 \le j \le N, \qquad (45)$$
where θ j are the roots of the deterministic Hermite polynomial, H N , of degree N, see page 890 of [19]. Then, the random Gauss–Hermite quadrature formula of degree N approximating the integral of Equations (43) and (44) takes the form
$$I_N^{GH}[J] = \sum_{j=1}^{N} \rho_j\, J(t, \theta_j). \qquad (46)$$
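In practice, the nodes and weights can be generated with standard library routines; the following sketch (assuming NumPy) checks `numpy.polynomial.hermite.hermgauss` against the weight formula in Equation (45):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, Hermite

N = 10
theta, rho = hermgauss(N)                     # roots of H_N and Gauss-Hermite weights

H_Nm1 = Hermite.basis(N - 1)                  # physicists' Hermite polynomial H_{N-1}
rho_check = (2**(N - 1) * math.factorial(N) * np.sqrt(np.pi)
             / (N**2 * H_Nm1(theta)**2))
print(np.allclose(rho, rho_check))            # -> True
```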
From Equations (43)–(46), the resulting approximation u N G H ( x , t ) of u ( x , t ) becomes the s.p.
$$u_N^{GH}(x,t) = \frac{1}{\sqrt{2\pi}} \sum_{j=1}^{N} \rho_j\, e^{\theta_j(i x + \theta_j)}\, X_1(t)(\theta_j), \qquad (47)$$
where $X_1(t)(\theta_j) = [1,0]\, X(t)(\theta_j)$. We can obtain the following explicit expression for the expectation of the approximate solution s.p. in Equation (47) of the random telegraph problem in Equations (1)–(3):
$$\mathbb{E}[u_N^{GH}(x,t)] = \frac{1}{\sqrt{2\pi}} \sum_{j=1}^{N} \rho_j\, e^{\theta_j(i x + \theta_j)}\, \mathbb{E}[X_1(t)(\theta_j)], \quad t > 0. \qquad (48)$$
With respect to the computation of the variance of the approximate solution s.p. u N G H ( x , t ) , given by Equation (47), one gets
$$\mathrm{Var}\left[ u_N^{GH}(x,t) \right] = \mathbb{E}\left[ \left( u_N^{GH}(x,t) \right)^2 \right] - \left( \mathbb{E}\left[ u_N^{GH}(x,t) \right] \right)^2, \qquad (49)$$
or the equivalent explicit expression by using Equations (47) and (48)
$$\mathrm{Var}\left[ u_N^{GH}(x,t) \right] = \frac{1}{2\pi} \sum_{j,\ell=1}^{N} \rho_j\, \rho_\ell\, e^{\theta_j(i x + \theta_j) + \theta_\ell(i x + \theta_\ell)} \left( \mathbb{E}[X_1(t)(\theta_j)\, X_1(t)(\theta_\ell)] - \mathbb{E}[X_1(t)(\theta_j)]\, \mathbb{E}[X_1(t)(\theta_\ell)] \right) \qquad (50)$$
$$= \frac{1}{2\pi} \sum_{j=1}^{N} \sum_{\ell=1}^{N} \rho_j\, \rho_\ell\, e^{\theta_j(i x + \theta_j) + \theta_\ell(i x + \theta_\ell)}\, \mathrm{Cov}\left[ X_1(t)(\theta_j),\, X_1(t)(\theta_\ell) \right]. \qquad (51)$$

A Numerical Example

In this example, we illustrate the theoretical results developed in Section 3. We consider the following random telegraph equation
$$u_{tt}(x,t) + 2b\,u_t(x,t) + a\,u(x,t) = u_{xx}(x,t) + \phi(x,t), \quad x \in \mathbb{R},\ t > 0, \qquad (52)$$
$$u(x,0) = f_1(x) = 0, \qquad (53)$$
$$u_t(x,0) = f_2(x) = e^{-x^2/2}, \qquad (54)$$
where the source term ϕ ( x , t ) is the rectangular pulse function
$$\phi(x,t) = \begin{cases} 0, & \text{if } |x| > 1, \\ 1, & \text{if } |x| \le 1, \end{cases} \qquad (55)$$
the r.v. a > 0 has a Gaussian distribution with parameters (1; 0.05) truncated on the interval [0.9, 1.1], that is, $a \sim N_{[0.9,1.1]}(1; 0.05)$, and the r.v. b has a beta distribution with parameters (2; 2) truncated on the interval [0.4, 0.6], that is, $b \sim \mathrm{Beta}_{[0.4,0.6]}(2; 2)$. Both a and b are considered independent r.v.'s.
It is known that the exact solution of the problem in Equations (52)–(55), when both a and b are deterministic and $\gamma = a - b^2 > 0$, is given by (see Section 4.4.1 of [20])
$$u(x,t) = \frac{e^{-bt}}{2} \int_{x-t}^{x+t} J_0\!\left( \sqrt{\gamma \left( t^2 - (x-s)^2 \right)} \right) f_2(s)\, ds + \frac{1}{2} \int_0^t \int_{x-(t-\tau)}^{x+(t-\tau)} e^{-b(t-\tau)}\, J_0\!\left( \sqrt{\gamma \left( (t-\tau)^2 - (x-s)^2 \right)} \right) \phi(s)\, ds\, d\tau, \qquad (56)$$
where $J_0(r)$ denotes the Bessel function of the first kind, that is,
$$J_0(r) = \frac{1}{\pi} \int_0^{\pi} \cos\left( r \sin(s) \right) ds. \qquad (57)$$
The exact computation of the expectation and the standard deviation of the solution s.p. in Equations (56) and (57), considering both a and b r.v.'s, is not available in closed form. The use of numerical techniques to compute the expectation and the standard deviation of the integrals appearing in Equation (56) is required. Therefore, it is necessary to transform these random integrals into deterministic ones before applying numerical techniques. To carry out this task, firstly, for a fixed point (x,t), we took $K = 2\times 10^5$ realizations of the independent r.v.'s a and b; then we computed these K deterministic integrals numerically, and finally we obtained the mean and the standard deviation of the K values. Figure 1a,b shows the numerical values for the expectation and the standard deviation of the exact solution s.p. in Equations (56) and (57).
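For reference, a hedged Python/SciPy sketch of this computation (an assumed reimplementation with a reduced number of realizations; the paper used $K = 2\times 10^5$) evaluates the deterministic integrals in Equation (56) with `scipy.integrate.quad` and `scipy.special.j0` for each realization of (a, b):

```python
import numpy as np
from scipy import integrate, special, stats

rng = np.random.default_rng(3)

def u_exact(x, t, a, b):
    """Deterministic solution (56) at (x, t); max(..., 0) guards sqrt round-off at endpoints."""
    g = a - b**2                                   # gamma = a - b^2 > 0
    first = 0.5 * np.exp(-b * t) * integrate.quad(
        lambda s: special.j0(np.sqrt(max(g * (t**2 - (x - s)**2), 0.0)))
        * np.exp(-s**2 / 2), x - t, x + t)[0]
    phi = lambda s: 1.0 if abs(s) <= 1 else 0.0    # rectangular pulse (55)
    inner = lambda tau: integrate.quad(
        lambda s: np.exp(-b * (t - tau))
        * special.j0(np.sqrt(max(g * ((t - tau)**2 - (x - s)**2), 0.0))) * phi(s),
        x - (t - tau), x + (t - tau))[0]
    second = 0.5 * integrate.quad(inner, 0.0, t)[0]
    return first + second

def sample_truncated(dist, lo, hi, size):
    u = rng.uniform(dist.cdf(lo), dist.cdf(hi), size)
    return dist.ppf(u)

K = 2000                                           # reduced from the paper's 2e5
a = sample_truncated(stats.norm(1.0, 0.05), 0.9, 1.1, K)
b = sample_truncated(stats.beta(2.0, 2.0), 0.4, 0.6, K)
vals = np.array([u_exact(0.5, 1.0, ai, bi) for ai, bi in zip(a, b)])
print(vals.mean(), vals.std(ddof=1))               # reference moments at (x, t) = (0.5, 1)
```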
Now, we obtain our approximate solution s.p. for the problem in Equations (52)–(55), as well as its expectation and standard deviation. Finally, we establish the corresponding comparisons between the approximate and the exact statistical moment functions. By applying the random Fourier exponential transform to the problem in Equations (52)–(55), the random initial value transformed problem in Equations (30)–(33) is obtained with c = 1, which can be expressed as the random linear system in Equation (34) with fixed $\xi \in \mathbb{R}$ and
$$L(\xi) = \begin{pmatrix} 0 & 1 \\ -(a + \xi^2) & -2b \end{pmatrix}, \quad B(t)(\xi) = B(\xi) = \begin{pmatrix} 0 \\ \Phi(\xi) \end{pmatrix} = \begin{pmatrix} 0 \\ \sqrt{\dfrac{2}{\pi}}\, \dfrac{\sin(\xi)}{\xi} \end{pmatrix}, \quad Y_0 = Y_0(\xi) = \begin{pmatrix} 0 \\ e^{-\xi^2/2} \end{pmatrix}. \qquad (58)$$
The random matrix $L(\xi)$ lies in $L_4^{2\times 2}(\Omega)$ and the condition in Equation (37) on the matrix $L(\xi)$, defined in Equation (58), is satisfied because the r.v.'s a and b are truncated; furthermore, $B(\xi)$ and $Y_0(\xi)$ are 4-absolutely integrable in $\mathbb{R}$. Note that $L(\xi)$ is a constant matrix with respect to t, so we can use Equation (15). Furthermore, as $B(\xi)$ is also constant, an explicit solution s.p. for the random linear system in Equations (34) and (58) is given by
$$X(t)(\xi) = e^{L(\xi)t}\, Y_0 + \int_0^t e^{L(\xi)(t-v)}\, B(\xi)\, dv = e^{L(\xi)t}\, Y_0 + L(\xi)^{-1} \left[ e^{L(\xi)s} \right]_{s=0}^{s=t} B(\xi) = e^{L(\xi)t}\, Y_0 + L(\xi)^{-1} \left( e^{L(\xi)t} - I \right) B(\xi) = e^{L(\xi)t} \left( Y_0 + L(\xi)^{-1} B(\xi) \right) - L(\xi)^{-1} B(\xi), \qquad (59)$$
where
$$L(\xi)^{-1} = \begin{pmatrix} -\dfrac{2b}{\alpha} & -\dfrac{1}{\alpha} \\ 1 & 0 \end{pmatrix}, \quad L(\xi)^{-1} B(\xi) = \begin{pmatrix} -\dfrac{1}{\alpha} \sqrt{\dfrac{2}{\pi}}\, \dfrac{\sin(\xi)}{\xi} \\ 0 \end{pmatrix}, \quad \alpha = a + \xi^2, \qquad (60)$$
and
$$e^{L(\xi)t}\ \text{is the exponential matrix defined in Case 2, Equation (39), with}\ c = 1, \qquad (61)$$
because the parameter $\gamma = a - b^2 > 0$ and hence $b^2 < a + \xi^2$, that is, $b^2 < \alpha$.
Using Equations (59) and (60) and the introduced notation [ 1 , 0 ] X ( t ) ( ξ ) = U ( t ) ( ξ ) , the solution s.p. U ( t ) ( ξ ) of the random initial value transformed problem in Equations (30)–(33) with c = 1 is given by Equation (41) and takes the form
$$U(t)(\xi) = e^{-bt}\, \frac{\sin\left( t\sqrt{\alpha - b^2} \right)}{\sqrt{\alpha - b^2}}\, e^{-\xi^2/2} + \frac{1}{\alpha} \sqrt{\frac{2}{\pi}}\, \frac{\sin(\xi)}{\xi} \left[ 1 - e^{-bt} \left( \cos\left( t\sqrt{\alpha - b^2} \right) + \frac{b\, \sin\left( t\sqrt{\alpha - b^2} \right)}{\sqrt{\alpha - b^2}} \right) \right], \qquad (62)$$
for t > 0 and fixed $\xi \in \mathbb{R}$. Now, taking into account that $U(t)(-\xi) = U(t)(\xi)$ and that $U(t)(\xi)$ is real for each $\xi \in \mathbb{R}$, the recovered solution s.p. u(x,t) of the original random partial differential problem in Equations (52)–(55), given by Equation (42), takes the form
$$u(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} U(t)(\xi) \cos(\xi x)\, d\xi, \quad -\infty < x < +\infty,\ t > 0. \qquad (63)$$
Using that $U(t)(\xi)$ is an even function in $\xi \in \mathbb{R}$ and denoting $U(t)(\xi) = X_1(t)(\xi)$, the numerical values of the expectation in Equation (48) of the approximate solution s.p. of the problem in Equations (52)–(55) can be written as follows:
$$\mathbb{E}[u_N^{GH}(x,t)] = \frac{1}{\sqrt{2\pi}} \sum_{j=1}^{N} \rho_j \cos(\xi_j x)\, e^{\xi_j^2}\, \mathbb{E}[X_1(t)(\xi_j)], \quad t > 0, \qquad (64)$$
where ρ j , j = 1 , , N are the weights of the Gauss–Hermite quadrature formula (see Equation (45)) and ξ j are the roots of the deterministic Hermite polynomial, H N , of degree N. The approximate values of the standard deviation can be computed by
$$\mathrm{Var}\left[ u_N^{GH}(x,t) \right] = \mathbb{E}\left[ \left( u_N^{GH}(x,t) \right)^2 \right] - \left( \mathbb{E}\left[ u_N^{GH}(x,t) \right] \right)^2, \qquad (65)$$
where
$$\mathbb{E}\left[ \left( u_N^{GH}(x,t) \right)^2 \right] = \frac{1}{2\pi} \sum_{j=1}^{N} \sum_{\ell=1}^{N} \rho_j\, \rho_\ell \cos(\xi_j x) \cos(\xi_\ell x)\, e^{\xi_j^2 + \xi_\ell^2}\, \mathbb{E}[X_1(t)(\xi_j)\, X_1(t)(\xi_\ell)], \quad t > 0. \qquad (66)$$
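The following hedged Python sketch (an assumed reimplementation, not the authors' Mathematica code) evaluates the approximations in Equations (64)–(66) for this example: it samples a and b, evaluates the exact transformed solution (62) at the Gauss–Hermite nodes, and forms per-realization values of $u_N^{GH}(x,t)$; their sample mean and standard deviation are Monte Carlo estimates of the moments given by the sums in Equations (64)–(66).

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy import stats

rng = np.random.default_rng(1)

def sample_truncated(dist, lo, hi, size):
    u = rng.uniform(dist.cdf(lo), dist.cdf(hi), size)
    return dist.ppf(u)

K = 20_000
a = sample_truncated(stats.norm(1.0, 0.05), 0.9, 1.1, K)   # a ~ N_[0.9,1.1](1; 0.05)
b = sample_truncated(stats.beta(2.0, 2.0), 0.4, 0.6, K)    # b ~ Beta_[0.4,0.6](2; 2)

def U(t, xi, a, b):
    """Exact transformed solution (62), vectorized over the realizations of (a, b)."""
    alpha = a + xi**2
    w = np.sqrt(alpha - b**2)                              # b^2 < alpha holds here
    phi = np.sqrt(2 / np.pi) * (np.sin(xi) / xi if xi != 0.0 else 1.0)
    damped = np.exp(-b * t)
    return (damped * np.sin(w * t) / w * np.exp(-xi**2 / 2)
            + phi / alpha * (1 - damped * (np.cos(w * t) + b * np.sin(w * t) / w)))

def gh_moments(x, t, N):
    xi_nodes, rho = hermgauss(N)
    samples = np.zeros(K)
    for xj, rj in zip(xi_nodes, rho):                      # the sum in (64), per realization
        samples += rj * np.cos(xj * x) * np.exp(xj**2) * U(t, xj, a, b)
    samples /= np.sqrt(2 * np.pi)
    return samples.mean(), samples.std(ddof=1)             # estimates of E and std

print(gh_moments(0.5, 1.0, N=10))
```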
Figure 2 and Figure 3 show a comparative study of both the expectation and the standard deviation at t = 1 on the spatial domain $x_i \in [0, 2.5]$, $x_i = ih$, $0 \le i \le 10$, h = 0.25, for both the theoretical and the approximate solution s.p. of the random problem in Equations (52)–(55). The computation time of our method is competitive for the degrees N of the Hermite polynomial considered. For example, 2.34625 s (CPU time spent in the Wolfram Language kernel) in total for computing both the expectation $\mathbb{E}[u_N^{GH}(x_i,1)]$ (Equation (64)) and the standard deviation $\sqrt{\mathrm{Var}[u_N^{GH}(x_i,1)]}$ (Equations (64)–(66)), versus 20.2656 s in total for the calculation of the theoretical ones. The computation times of our method are further reduced when the degree N of the Hermite polynomial decreases; for example, taking N = 6, the time spent is 1.6567 s.
In Figure 2a and Figure 3a, we plot at t = 1 the expectation, $\mathbb{E}[u(x_i,1)]$, and the standard deviation, $\sqrt{\mathrm{Var}[u(x_i,1)]}$, respectively, of the exact solution s.p. in Equations (56) and (57) vs. the respective approximate ones, $\mathbb{E}[u_N^{GH}(x_i,1)]$ (Equation (64)) and $\sqrt{\mathrm{Var}[u_N^{GH}(x_i,1)]}$ (Equations (64)–(66)), for different degrees N of the Hermite polynomials: $N \in \{2, 4, 10\}$. In Figure 2b and Figure 3b, it is observed that the approximations improve as the degree N increases, because the relative errors, computed using the following expressions, decrease:
$$\mathrm{RelErr}\, \mathbb{E}\left[ u_N^{GH}(x,t) \right] = \frac{\left| \mathbb{E}[u(x,t)] - \mathbb{E}[u_N^{GH}(x,t)] \right|}{\left| \mathbb{E}[u(x,t)] \right|}, \qquad (67)$$
$$\mathrm{RelErr}\, \sqrt{\mathrm{Var}}\left[ u_N^{GH}(x,t) \right] = \frac{\left| \sqrt{\mathrm{Var}[u(x,t)]} - \sqrt{\mathrm{Var}[u_N^{GH}(x,t)]} \right|}{\sqrt{\mathrm{Var}[u(x,t)]}}. \qquad (68)$$
An interesting quantitative study of global errors in the spatial domain [0, 2.5], for a fixed time t and a number n of spatial points $x_i$, $0 \le i \le n$, was also carried out considering the root mean squared errors (RMSEs):
$$\mathrm{RMSE}_{\mathbb{E}}(x_i, t) = \sqrt{ \frac{1}{n+1} \sum_{i=0}^{n} \left( \mathbb{E}[u(x_i,t)] - \mathbb{E}[u_N^{GH}(x_i,t)] \right)^2 }, \qquad (69)$$
$$\mathrm{RMSE}_{\sqrt{\mathrm{Var}}}(x_i, t) = \sqrt{ \frac{1}{n+1} \sum_{i=0}^{n} \left( \sqrt{\mathrm{Var}[u(x_i,t)]} - \sqrt{\mathrm{Var}[u_N^{GH}(x_i,t)]} \right)^2 }. \qquad (70)$$
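For instance, a small NumPy helper suffices for Equations (69) and (70), assuming arrays `exact` and `approx` holding the corresponding moment values at the n + 1 spatial points:

```python
import numpy as np

def rmse(exact, approx):
    # root mean squared error over the n+1 spatial points, as in (69)-(70)
    exact, approx = np.asarray(exact), np.asarray(approx)
    return np.sqrt(np.mean((exact - approx)**2))
```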
Table 4 collects the results obtained and shows that the proposed method provides good approximations to the numerical values of the expectation and the standard deviation of the exact solution s.p. in Equations (56) and (57). We observe that it is sufficient to consider a Hermite polynomial of degree N = 6 in order to obtain reasonable approximations to the exact values.
Note that, in this example, the errors due to computing an approximate solution s.p. of the auxiliary system in Equations (34) and (58) do not contribute to the errors reported, because this solution, $U(t)(\xi)$, was calculated exactly (see Equation (62)). Hence, the relative errors plotted in Figure 2b and Figure 3b and the RMSEs collected in Table 4 mainly include the errors due to the random Gauss–Hermite quadrature formula of degree N.
Algorithm 1 summarizes the steps to compute the approximations of the expectation and the standard deviation of the solution s.p. in Equation (47).
Algorithm 1 Calculation procedure for the expectation and the standard deviation of the approximated solution s.p. u N G H ( x , t ) (Equation (47)) of the problem in Equations (1)–(3).
Require: Guarantee that the random input data of the problem in Equations (1)–(3) satisfy the hypotheses: a, b and c are r.v.'s in $L_4(\Omega)$, and $\phi(x,t)$, $f_1(x)$ and $f_2(x)$ are m.f. continuous s.p.'s with a finite degree of randomness, m.f. absolutely integrable with respect to the spatial variable on the real line. Additionally, the s.p. $\phi(x,t)$ can be chosen with at most a finite number of jump discontinuities in the variable x.
1: Fix a point (x,t), $x \in \mathbb{R}$, t > 0.
2: Choose the degree N of the Hermite polynomial, $H_N(\cdot)$, and compute $H_{N-1}(\cdot)$ and $H_N(\cdot)$.
3: for j = 1 to N do
4:  Compute the roots, $\theta_j$, of $H_N(\cdot)$.
5: end for
6: for j = 1 to N do
7:  Compute the weights, $\rho_j$, of $H_N(\cdot)$ using Equation (45).
8: end for
9: Construct the random matrix $L(\theta_j)$ using Equation (35), where $\xi$ represents a particular $\theta_j$.
10: if the entries of the random matrix $L(\theta_j)$ verify the condition in Equation (37) then
11:  continue to the following step
12: else
13:  change the choice of the r.v.'s a, b and c and check the condition in Equation (37) again.
14: end if
15: Compute the random Fourier exponential transforms of the input data $\phi(x,t)$, $f_1(x)$ and $f_2(x)$.
16: Construct the random vector s.p. $B(t)(\theta_j)$ and the random vector $Y_0(\theta_j)$ using Equation (36).
17: for j = 1 to N do
18:  Compute numerically the expectation of the solution s.p. $X_1(t)(\theta_j) = [1,0]\, X(t)(\theta_j)$ of the random linear system in Equations (34)–(36), using Equation (41) and the adequate case in Equations (38)–(40) for the exponential matrix $e^{L(\theta_j)t}$. These expectations are denoted by $\mathbb{E}[X_1(t)(\theta_j)]$.
19: end for
20: Compute the expectation, $\mathbb{E}[u_N^{GH}(x,t)]$, and the standard deviation, $\sqrt{\mathrm{Var}[u_N^{GH}(x,t)]}$, of the approximated solution s.p. $u_N^{GH}(x,t)$ (Equation (47)) using the explicit expressions in Equations (48), (49) and (50).

4. Gauss–Laguerre Solution of a Random Heterogeneous Telegraph Model

This section is devoted to constructing random Gauss–Laguerre quadrature formulae for the numerical solution of the model in Equations (4)–(7). Although the approach is similar to the one developed in Section 3, here we use the random Fourier sine transform acting on the temporal variable. The transformed problem, with random variable coefficients, requires a numerical approach that is constructed in two stages: first, the Gauss–Laguerre quadrature of the random inverse Fourier sine transform, and then Monte Carlo simulations at the appropriate root points of the Laguerre polynomials.
Let V ( x ) ( ξ ) = F s [ u ( x , · ) ] ( ξ ) be the Fourier sine transform of the unknown u ( x , · ) :
$$V(x)(\xi) = \mathcal{F}_s[u(x,\cdot)](\xi) = \int_0^{+\infty} u(x,t)\, \sin(\xi t)\, dt, \quad \xi > 0,\ x > 0. \qquad (71)$$
From Theorem 1 of [16], we have
$$\mathcal{F}_s[h''(t)](\xi) = \xi\, h(0) - \xi^2\, \mathcal{F}_s[h(t)](\xi), \quad \xi > 0. \qquad (72)$$
Let us denote
$$G_1(\xi) = \mathcal{F}_s[u(0,\cdot)](\xi) = \mathcal{F}_s[g_1(t)](\xi), \quad \xi > 0, \qquad (73)$$
$$G_2(\xi) = \mathcal{F}_s[u_x(0,\cdot)](\xi) = \mathcal{F}_s[g_2(t)](\xi), \quad \xi > 0, \qquad (74)$$
$$\Psi(x)(\xi) = \mathcal{F}_s[\psi(x,\cdot)](\xi), \quad x > 0,\ \xi > 0. \qquad (75)$$
Let us assume that the s.p.'s $k(x)$, $a(x)$, $\psi(x,t)$, $g_1(t)$, $g_2(t)$ and $g(x)$ of the problem in Equations (4)–(7) are m.f. continuous with a finite degree of randomness. Let $k(x)$ be a positive, 4-differentiable s.p. and let $\psi(x,t)$, $g_1(t)$ and $g_2(t)$ be m.f. absolutely integrable s.p.'s in t > 0, that is,
$$\int_0^{+\infty} \| \psi(x,t) \|_4\, dt < +\infty \ (x > 0 \text{ fixed}), \quad \int_0^{+\infty} \| g_1(t) \|_4\, dt < +\infty, \quad \int_0^{+\infty} \| g_2(t) \|_4\, dt < +\infty. \qquad (76)$$
By applying the random Fourier sine transform to the problem in Equations (4)–(7) and using Equations (72)–(74), one gets
$$\xi\, g(x) - \xi^2\, V(x)(\xi) = \frac{d}{dx}\left( k(x)\, \frac{d}{dx}(V(x))(\xi) \right) + a(x)\, V(x)(\xi) + \Psi(x)(\xi),$$
or
$$\frac{d^2}{dx^2}(V(x))(\xi) + \frac{k'(x)}{k(x)}\, \frac{d}{dx}(V(x))(\xi) + \frac{a(x) + \xi^2}{k(x)}\, V(x)(\xi) = \frac{\xi\, g(x) - \Psi(x)(\xi)}{k(x)}, \quad \xi > 0 \text{ fixed}, \qquad (77)$$
together with
$$V(0)(\xi) = G_1(\xi), \quad \frac{d}{dx}(V(0))(\xi) = G_2(\xi). \qquad (78)$$
The solution to the problem in Equations (77) and (78) is the first component, $V(x)(\xi) = [1,0]\, X(x)(\xi)$, of the solution of the extended random linear differential system
$$X'(x)(\xi) = L(x)(\xi)\, X(x)(\xi) + B(x)(\xi), \quad x > 0, \quad X(0)(\xi) = Y_0(\xi), \qquad (79)$$
where
$$L(x)(\xi) = \begin{pmatrix} 0 & 1 \\ -\dfrac{\xi^2 + a(x)}{k(x)} & -\dfrac{k'(x)}{k(x)} \end{pmatrix}, \quad B(x)(\xi) = \begin{pmatrix} 0 \\ \dfrac{\xi\, g(x) - \Psi(x)(\xi)}{k(x)} \end{pmatrix}, \quad Y_0(\xi) = \begin{pmatrix} G_1(\xi) \\ G_2(\xi) \end{pmatrix}. \qquad (80)$$
By Section 2, assuming that the 4-s.p.'s $a(x)$, $k(x)$ and $k'(x)$ satisfy the moment condition in Equation (12) for every x > 0, it is guaranteed that the entries of the matrix s.p. $L(x)(\xi) = (\ell_{i,j}(x))_{2\times 2} \in L_4^{2\times 2}(\Omega)$, for $\xi > 0$ fixed, satisfy the condition in Equation (12). Furthermore, the condition in Equation (12) guarantees that $L(x)(\xi)$ is 4-locally absolutely integrable in $x \in [0, x_1]$:
$$\int_0^{x_1} \| \ell_{i,j}(x) \|_4\, dx = \int_0^{x_1} \left( \mathbb{E}\left[ |\ell_{i,j}(x)|^4 \right] \right)^{1/4} dx \le \int_0^{x_1} \left( m_{i,j}\, (h_{i,j})^4 \right)^{1/4} dx = (m_{i,j})^{1/4}\, h_{i,j}\, x_1 < +\infty.$$
Furthermore, the vector s.p.'s $B(x)(\xi)$ and $Y_0(\xi)$ lie in $L_4^{2\times 1}(\Omega)$ and are absolutely integrable in $x \in [0, +\infty)$.
Note that, unlike the case of Section 3, here the system in Equations (79) and (80) does not have an explicit solution s.p., because, in Equation (14), the random fundamental matrix $\Phi_L(x;0)$ is unknown, and thus one needs a numerical approach. Applying the random inverse Fourier sine transform to $V(x)(\xi)$, one gets
$$u(x,t) = \frac{2}{\pi} \int_0^{\infty} V(x)(\xi)\, \sin(\xi t)\, d\xi = \frac{2}{\pi} \int_0^{\infty} [1,0]\, X(x)(\xi)\, \sin(\xi t)\, d\xi = \frac{2}{\pi} \int_0^{\infty} X_1(x)(\xi)\, \sin(\xi t)\, d\xi, \qquad (81)$$
where $X_1(x)(\xi) = [1,0]\, X(x)(\xi)$. Now, we apply random Gauss–Laguerre quadrature to approximate the integral of Equation (81). For a s.p. $J(\xi) \in L_2(\Omega)$ that is m.s. absolutely integrable with respect to $\xi > 0$, let us consider the following integral,
$$I = I[J] = \int_0^{\infty} J(\xi)\, e^{-\xi}\, d\xi, \qquad (82)$$
which is a r.v. Since $0 < e^{-\xi} \le 1$ for all $\xi > 0$ and the s.p. $J(\xi) \in L_2(\Omega)$ is m.s. absolutely integrable with respect to $\xi > 0$, one gets
$$\| I \|_2 = \left\| \int_0^{+\infty} J(\xi)\, e^{-\xi}\, d\xi \right\|_2 \le \int_0^{+\infty} \left\| J(\xi)\, e^{-\xi} \right\|_2 d\xi \le \int_0^{+\infty} \| J(\xi) \|_2\, d\xi < +\infty.$$
Then, $I = I[J]$ is well-defined. Assuming that $J(\xi) \in L_2(\Omega)$ has continuous sample trajectories, i.e., $J(\xi)(\omega)$ is continuous with respect to $\xi > 0$ for all $\omega \in \Omega$, the r.v. in Equation (82) coincides, with probability 1, with the (deterministic) sample integrals
$$I(\omega) = I[J](\omega) = \int_0^{+\infty} J(\xi;\omega)\, e^{-\xi}\, d\xi, \quad \omega \in \Omega,$$
which are well-defined and thus convergent for all $\omega \in \Omega$ (see Appendix I of [13]). Then, taking advantage of the Gauss–Laguerre quadrature formula of degree N (see page 890 of [19]), we can consider the following numerical approximation for each event $\omega \in \Omega$:
$$I_N^{GL}[J](\omega) = \sum_{j=1}^{N} \nu_j\, J(\vartheta_j; \omega), \quad \nu_j = \frac{\vartheta_j}{(N+1)^2\, \left[ L_{N+1}(\vartheta_j) \right]^2}, \qquad (83)$$
where $\vartheta_j$ is the jth root of the deterministic Laguerre polynomial, $L_N(\vartheta)$, of degree N and $\nu_j$ is the corresponding weight. This quadrature formula to approximate a random integral of the type in Equation (82) is applied to the r.v. $u(x,t)$ given by Equation (81), taking
$$J(\xi) = J(x,t,\xi) = \frac{2}{\pi}\, X_1(x)(\xi)\, \sin(\xi t)\, e^{\xi}.$$
Given the degree N, let us denote by u N G L ( x , t ) the Gauss–Laguerre s.p. approximation of degree N of the exact solution s.p. u ( x , t ) of the random problem in Equations (4)–(7), evaluated at ( x , t ) and expressed as the r.v.
$$u_N^{GL}(x,t) = \frac{2}{\pi} \sum_{j=1}^{N} \nu_j\, \sin(\vartheta_j t)\, e^{\vartheta_j}\, X_1(x)(\vartheta_j). \qquad (84)$$
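As with the Hermite case, the Laguerre nodes and weights are available from standard libraries; this sketch (assuming NumPy) checks `numpy.polynomial.laguerre.laggauss` against the weight formula in Equation (83):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, Laguerre

N = 8
nodes, nu = laggauss(N)                       # roots of L_N and Gauss-Laguerre weights

L_Np1 = Laguerre.basis(N + 1)                 # Laguerre polynomial L_{N+1}
nu_check = nodes / ((N + 1)**2 * L_Np1(nodes)**2)
print(np.allclose(nu, nu_check))              # -> True
```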
Note that, unlike in the problem of Section 3, the solution $X_1(x)(\vartheta_j)$ is not available in explicit form, so it is obtained using Monte Carlo simulation. That is, since the evaluation of the solution s.p. of the random linear system in Equation (79) at $\vartheta_j$ is not available explicitly, its expectation is approximated using the Monte Carlo approach, as treated in Section 2, and is denoted by $\mathbb{E}_{MC}^{K}[\bar X_1(x)(\vartheta_j)]$, where K represents the number of realizations used in the Monte Carlo simulation and $\bar X_1(x)(\vartheta_j)$ the deterministic numerical solutions obtained after taking K realizations. Thus, the final expression for the approximation of $\mathbb{E}[u(x,t)]$ takes the form
$$\mathbb{E}[u_N^{GL}(x,t)] \approx \mathbb{E}[u_{N,K}^{GL}(x,t)] = \frac{2}{\pi} \sum_{j=1}^{N} \nu_j\, \sin(\vartheta_j t)\, e^{\vartheta_j}\, \mathbb{E}_{MC}^{K}[\bar X_1(x)(\vartheta_j)], \quad x > 0,\ t > 0. \qquad (85)$$
The standard deviation of the approximate solution s.p. $u_N^{GL}(x,t)$ (Equation (84)) can be computed by taking the square root of the following expression:
$$\mathrm{Var}\left[ u_N^{GL}(x,t) \right] \approx \mathrm{Var}\left[ u_{N,K}^{GL}(x,t) \right] = \left( \frac{2}{\pi} \right)^2 \sum_{j=1}^{N} \sum_{\ell=1}^{N} \nu_j\, \nu_\ell\, \sin(\vartheta_j t)\, \sin(\vartheta_\ell t)\, e^{\vartheta_j + \vartheta_\ell} \left( \mathbb{E}_{MC}^{K}[\bar X_1(x)(\vartheta_j)\, \bar X_1(x)(\vartheta_\ell)] - \mathbb{E}_{MC}^{K}[\bar X_1(x)(\vartheta_j)]\, \mathbb{E}_{MC}^{K}[\bar X_1(x)(\vartheta_\ell)] \right) = \left( \frac{2}{\pi} \right)^2 \sum_{j=1}^{N} \sum_{\ell=1}^{N} \nu_j\, \nu_\ell\, \sin(\vartheta_j t)\, \sin(\vartheta_\ell t)\, e^{\vartheta_j + \vartheta_\ell}\, \mathrm{Cov}_{MC}^{K}\left[ \bar X_1(x)(\vartheta_j),\, \bar X_1(x)(\vartheta_\ell) \right]. \qquad (86)$$

A Numerical Example

Consider the random heterogeneous telegraph type problem in Equations (4)–(7) with the following input data:
$$k(x) = 1 + b \cos(\pi x), \quad a(x) = e^{-a x}, \quad \psi(x,t) = e^{-(x+t)}, \quad g_1(t) = 0, \quad g_2(t) = 0, \quad g(x) = 0, \quad x > 0,\ t > 0, \qquad (87)$$
where the parameters a and b are assumed to be independent r.v.'s; specifically, a has a uniform distribution taking values in [0,1], that is, $a \sim \mathrm{Un}(0,1)$, and b > 0 has an exponential distribution of parameter 2 truncated on the interval [0.1, 0.2], that is, $b \sim \mathrm{Exp}_{[0.1,0.2]}(2)$. Then, it is verified that the s.p.'s $k(x)$ and $a(x)$ and the functions $\psi(x,t)$, $g_1(t)$, $g_2(t)$ and $g(x)$ are 4-continuous, those depending on t being 4-absolutely integrable with respect to the time variable. Furthermore, $k(x)$ is positive and 4-differentiable. Note that the s.p.'s $k(x)$ and $a(x)$ depend on a single r.v., that is, they have a finite degree of randomness.
In this example, the elements of the auxiliary random linear differential system in Equations (79) and (80) take the form
$$L(x)(\xi) = \begin{pmatrix} 0 & 1 \\ -\dfrac{\xi^2 + e^{-a x}}{1 + b \cos(\pi x)} & \dfrac{b \pi \sin(\pi x)}{1 + b \cos(\pi x)} \end{pmatrix}, \quad B(x)(\xi) = \begin{pmatrix} 0 \\ -\dfrac{e^{-x}\, \xi}{(1 + \xi^2)(1 + b \cos(\pi x))} \end{pmatrix}, \quad Y_0(\xi) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad (88)$$
where, in $B(x)(\xi)$, it is obtained that $\Psi(x)(\xi) = \mathcal{F}_s[\psi(x,t)](\xi) = e^{-x}\, \xi/(1+\xi^2)$, x > 0, for a fixed $\xi > 0$. Observe that the entries of the matrix s.p. $L(x)(\xi)$ given in Equation (88) satisfy the moment condition in Equation (12) for every x because the r.v.'s a and b are bounded. The random linear differential system in Equations (79) and (88) does not have an explicit solution s.p., so we proceed as shown in Example 1, searching for alternative approximations of Equations (79) and (88) via Monte Carlo simulations. Taking a particular number of realizations, K, of the r.v.'s a and b, we solve the K deterministic systems corresponding to Equations (79) and (88) for each $\xi_j$, $j = 1,\dots,N$, where the $\xi_j$ are the roots of the Laguerre polynomial of degree N; the degree N must be fixed before the computation of the K deterministic systems. We computed, for each $\xi_j$, the mean of the K solutions obtained, that is, $\mathbb{E}_{MC}^{K}[\bar X_1(x)(\xi_j)]$, $j = 1,\dots,N$. Finally, we can provide an approximation of the first and second moments of the solution s.p. of the original problem in Equations (4)–(7) and (87) using the explicit expressions in Equations (85) and (86), where $\vartheta_j$ represents the jth root $\xi_j$, $j = 1,\dots,N$. Algorithm 2 summarizes the procedure described above to compute the approximations of the expectation and the standard deviation of the solution s.p. in Equation (84). Figure 4 shows simulations of the expectation of the solution s.p. in Equation (84) at time instants $t \in \{0.5, 1, 1.5, 2\}$ on the spatial domain $0 \le x \le 1$, considering the set of points $x_i = ih$, $0 \le i \le n = 10$, with stepsize h = 0.1. To carry out these simulations, K = 1000 realizations for the Monte Carlo method and N = 10 for the Gauss–Laguerre quadrature were considered.
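A hedged Python/SciPy sketch of this procedure (an assumed reimplementation of the core loop of Algorithm 2, not the authors' Mathematica code) is given below; for each Laguerre node it performs K deterministic solves of the system in Equations (79) and (88) and combines the node-wise sample means with the weights as in Equation (85). A smaller K than the paper's keeps the run time moderate.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)

K, N = 200, 10                                     # fewer realizations than the paper's 1000
nodes, nu = laggauss(N)

a = rng.uniform(0.0, 1.0, K)                       # a ~ Un(0,1)
u01 = rng.uniform(1 - np.exp(-2 * 0.1), 1 - np.exp(-2 * 0.2), K)
b = -np.log(1 - u01) / 2                           # b ~ Exp_[0.1,0.2](2), inverse-CDF truncation

def rhs(x, X, xi, a, b):
    # system (79) with the data (88): X = [V, V']
    k = 1 + b * np.cos(np.pi * x)
    dk = -b * np.pi * np.sin(np.pi * x)            # k'(x)
    psi_s = np.exp(-x) * xi / (1 + xi**2)          # Fourier sine transform of psi
    return [X[1], -(xi**2 + np.exp(-a * x)) / k * X[0] - dk / k * X[1] - psi_s / k]

def expectation_u(x, t):
    total = 0.0
    for xi_j, nu_j in zip(nodes, nu):
        X1 = np.empty(K)
        for r in range(K):                         # K deterministic solves per node
            sol = solve_ivp(rhs, (0.0, x), [0.0, 0.0], args=(xi_j, a[r], b[r]),
                            rtol=1e-8, atol=1e-10)
            X1[r] = sol.y[0, -1]                   # first component at x
        total += nu_j * np.sin(xi_j * t) * np.exp(xi_j) * X1.mean()
    return 2 / np.pi * total                       # E[u_{N,K}^{GL}(x, t)], Equation (85)

print(expectation_u(0.5, 1.0))
```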
Now, to study the numerical convergence of the approximations of both the expectation and the standard deviation, we studied the behavior of their root mean square deviations (RMSDs) in two stages: firstly, varying the number K of realizations in the Monte Carlo method while keeping N fixed in the Gauss–Laguerre quadrature; and, secondly, varying N while keeping the number of realizations K fixed. For the first stage, Table 5 collects the RMSDs computed using the following notation:
$$\mathrm{RMSD}\, \mathbb{E}\left[ u_{N, K_\ell \to K_{\ell+1}}^{GL}(x_i,t) \right] = \sqrt{ \frac{1}{n+1} \sum_{i=0}^{n} \left( \mathbb{E}[u_{N,K_{\ell+1}}^{GL}(x_i,t)] - \mathbb{E}[u_{N,K_\ell}^{GL}(x_i,t)] \right)^2 }, \qquad (89)$$
$$\mathrm{RMSD}\, \sqrt{\mathrm{Var}}\left[ u_{N, K_\ell \to K_{\ell+1}}^{GL}(x_i,t) \right] = \sqrt{ \frac{1}{n+1} \sum_{i=0}^{n} \left( \sqrt{\mathrm{Var}[u_{N,K_{\ell+1}}^{GL}(x_i,t)]} - \sqrt{\mathrm{Var}[u_{N,K_\ell}^{GL}(x_i,t)]} \right)^2 }, \qquad (90)$$
at the time instant t = 1 along the spatial domain [0,1], considering the set of points $x_i = ih$, $0 \le i \le n = 10$, with stepsize h = 0.1. The integers $K_\ell$ and $K_{\ell+1}$ denote the numbers of realizations taken in the Monte Carlo method to solve numerically the random linear differential system in Equations (79) and (88). The simulations $K_\ell$, $\ell = 0,\dots,5$, correspond to $K_0 = 2500$, $K_1 = 5000$, $K_2 = 10^4$, $K_3 = 2\times 10^4$, $K_4 = 4\times 10^4$ and $K_5 = 5\times 10^4$, and N = 6 is the fixed degree of the Laguerre polynomials. A decreasing trend of the RMSDs with respect to the previous number of realizations is observed. A similar behavior can be observed when other degrees N are considered.
Algorithm 2 Calculation procedure for the expectation and the standard deviation of the approximated solution s.p. u N G L ( x , t ) (Equation (84)) of the problem in Equations (4)–(7).
Require: Guarantee that the random input data $k(x)$, $a(x)$, $\psi(x,t)$, $g_1(t)$, $g_2(t)$ and $g(x)$ of the problem in Equations (4)–(7) are m.f. continuous s.p.'s with a finite degree of randomness, those depending on t being m.f. absolutely integrable with respect to the time variable. Furthermore, $k(x)$ must be positive and 4-differentiable.
1: Fix a point (x,t), x > 0, t > 0.
2: Choose the degree N of the Laguerre polynomial, $L_N(\cdot)$, and compute $L_N(\cdot)$ and $L_{N+1}(\cdot)$.
3: for j = 1 to N do
4:  Compute the roots, $\vartheta_j$, of $L_N(\cdot)$.
5: end for
6: for j = 1 to N do
7:  Compute the weights, $\nu_j$, of $L_N(\cdot)$ using Equation (83).
8: end for
9: Choose and carry out a number K of realizations of the r.v.'s involved in the s.p.'s of the input data: $k(x)$, $a(x)$, $\psi(x,t)$, $g_1(t)$, $g_2(t)$ and $g(x)$.
10: Construct the matrix s.p. $L(x)(\vartheta_j)$ given by Equation (80), where $\xi$ represents a particular $\vartheta_j$.
11: if the entries of the matrix s.p. $L(x)(\vartheta_j)$ verify the condition in Equation (12) then
12:  continue to the following step
13: else
14:  change the choice of the s.p.'s $a(x)$ and $k(x)$ and check the condition in Equation (12) again.
15: end if
16: Compute the random Fourier sine transforms of the input data: $\psi(x,t)$, $g_1(t)$ and $g_2(t)$.
17: Construct the random vector s.p. $B(x)(\vartheta_j)$ and the random vector $Y_0(\vartheta_j)$ using Equation (80).
18: for j = 1 to N do
19:  Obtain numerically the K deterministic solutions, $\bar X_1(x)(\vartheta_j)$, of the K linear differential systems in Equations (79) and (80) for each root $\vartheta_j$.
20:  Compute the mean of the K solutions obtained and denote it by $\mathbb{E}_{MC}^{K}[\bar X_1(x)(\vartheta_j)]$.
21: end for
22: Compute an approximation of the expectation, $\mathbb{E}[u_N^{GL}(x,t)]$, and the standard deviation, $\sqrt{\mathrm{Var}[u_N^{GL}(x,t)]}$, of the approximated solution s.p. $u_N^{GL}(x,t)$ (Equation (84)) using the explicit expressions in Equations (85) and (86).
For the second stage, Table 6 collects the RMSDs computed fixing K, using the following expressions:
$$\mathrm{RMSD}\, \mathbb{E}\left[ u_{N \to N+1, K}^{GL}(x_i,t) \right] = \sqrt{ \frac{1}{n+1} \sum_{i=0}^{n} \left( \mathbb{E}[u_{N+1,K}^{GL}(x_i,t)] - \mathbb{E}[u_{N,K}^{GL}(x_i,t)] \right)^2 }, \qquad (91)$$
$$\mathrm{RMSD}\, \sqrt{\mathrm{Var}}\left[ u_{N \to N+1, K}^{GL}(x_i,t) \right] = \sqrt{ \frac{1}{n+1} \sum_{i=0}^{n} \left( \sqrt{\mathrm{Var}[u_{N+1,K}^{GL}(x_i,t)]} - \sqrt{\mathrm{Var}[u_{N,K}^{GL}(x_i,t)]} \right)^2 }. \qquad (92)$$
Computations were carried out at time t = 1 along the spatial domain [0,1], considering the set of points $x_i = ih$, $0 \le i \le n = 10$, h = 0.1, fixing the number of realizations K = 1000 and increasing the degree N of the Laguerre polynomials from N = 4 to N = 12. The decrease of the RMSDs in Equations (91) and (92) is in full agreement with the results shown in Figure 5, which illustrates how the successive absolute deviations for both the expectation and the standard deviation, defined as follows,
$$\mathrm{AbsDev}\, \mathbb{E}\left[ u_{N \to N+1, K}^{GL}(x,t) \right] = \left| \mathbb{E}[u_{N+1,K}^{GL}(x,t)] - \mathbb{E}[u_{N,K}^{GL}(x,t)] \right|, \quad \mathrm{AbsDev}\, \sqrt{\mathrm{Var}}\left[ u_{N \to N+1, K}^{GL}(x,t) \right] = \left| \sqrt{\mathrm{Var}[u_{N+1,K}^{GL}(x,t)]} - \sqrt{\mathrm{Var}[u_{N,K}^{GL}(x,t)]} \right|, \qquad (93)$$
have a decreasing trend as N increases.

5. Conclusions

This paper proposes an efficient numerical method to approximate the stochastic process solution of random hyperbolic models of telegraph type by a low order finite sum. This expression makes the computation of its statistical moments manageable. The method combines the random Fourier transform approach and the random Gaussian quadrature technique together with the Monte Carlo method. The role of the Gaussian quadrature is the approximation of the inverse Fourier transform, while the Monte Carlo method provides the approximation of the transformed random ordinary differential problem. Both the random constant coefficient case and the heterogeneous case with random variable coefficients are treated and illustrated with numerical examples. Numerical experiments varying the degree of the Gaussian quadrature and the number of Monte Carlo simulations are discussed. The fact that the solution of the intermediate ODE problem is obtained using Monte Carlo simulation, and that the random inverse Fourier transform is approximated using quadratures, allows an easy computation that can be checked against real experiments and is applicable to real problems, even outside of our telegraph type hyperbolic models.

Author Contributions

These authors contributed equally to this work.

Funding

This work was partially supported by the Ministerio de Ciencia, Innovación y Universidades Spanish grant MTM2017-89664-P.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Casabán, M.C.; Company, R.; Cortés, J.C.; Jódar, L. Solving the random diffusion model in an infinite medium: A mean square approach. Appl. Math. Model. 2014, 38, 5922–5933. [Google Scholar] [CrossRef]
  2. Casabán, M.C.; Cortés, J.C.; Jódar, L. Solving random mixed heat problems: A random integral transform approach. J. Comput. Appl. Math. 2016, 291, 5–19. [Google Scholar] [CrossRef]
  3. Casabán, M.C.; Cortés, J.C.; Jódar, L. Analytic-Numerical solution of random parabolic models: A mean square Fourier transform approach. Math. Model. Anal. 2018, 23, 79–100. [Google Scholar] [CrossRef]
  4. Farlow, S.J. Partial Differential Equations for Scientists and Engineers; John Wiley & Sons: New York, NY, USA, 1993. [Google Scholar]
  5. Saadatmandi, A.; Dehghan, M. Numerical solution of hyperbolic telegraph equation using the Chebyshev tau method. Numer. Meth. Part. Differ. Equ. 2010, 26, 239–252. [Google Scholar] [CrossRef]
  6. Weston, V.H.; He, S. Wave splitting of the telegraph equation in ℝ³ and its application to inverse scattering. Inverse Probl. 1993, 9, 789–812. [Google Scholar] [CrossRef]
  7. Jordan, P.M.; Puri, A. Digital signal propagation in dispersive media. J. Appl. Phys. 1999, 85, 1273–1282. [Google Scholar] [CrossRef] [Green Version]
  8. Banasiak, J.; Mika, J.R. Singularly perturbed telegraph equations with applications in the random walk theory. J. Appl. Math. Stoch. Anal. 1998, 11, 9–28. [Google Scholar] [CrossRef]
  9. Pozar, D.M. Microwave Engineering, 2nd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 1998. [Google Scholar]
  10. Kac, M. A stochastic model related to the telegrapher’s equation. Rocky Mt. J. Math. 1974, 4, 497–509. [Google Scholar] [CrossRef]
  11. Iacus, S.M. Statistical analysis of the inhomogeneous telegrapher’s process. Stat. Probab. Lett. 2001, 55, 83–88. [Google Scholar] [CrossRef]
  12. Kolesnik, A.D. Moment analysis of the telegraph random process. Bull. Acad. Sci. Mold. Ser. Math. 2012, 68, 90–107. [Google Scholar]
  13. Soong, T.T. Random Differential Equations in Science and Engineering; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  14. Casabán, M.C.; Cortés, J.C.; Jódar, L. A random Laplace transform method for solving random mixed parabolic differential problems. Appl. Math. Comput. 2015, 259, 654–667. [Google Scholar] [CrossRef]
  15. Casabán, M.C.; Cortés, J.C.; Jódar, L. Solving linear and quadratic random matrix differential equations using a mean square approach: The non-autonomous case. J. Comput. Appl. Math. 2018, 330, 937–954. [Google Scholar] [CrossRef]
  16. Casabán, M.C.; Cortés, J.C.; Jódar, L. Solving linear and quadratic random matrix differential equations: A mean square approach. Appl. Math. Model. 2016, 40, 9362–9377. [Google Scholar] [CrossRef] [Green Version]
  17. Kroese, D.P.; Taimre, T.; Botev, Z.I. Handbook of Monte Carlo Methods; Wiley Series in Probability and Statistics; John Wiley & Sons: New York, NY, USA, 2011. [Google Scholar]
  18. Wolfram Research, Inc. Mathematica, version 11.3; Wolfram Research, Inc.: Champaign, IL, USA, 2018. [Google Scholar]
  19. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Dover Publications, Inc.: New York, NY, USA, 1972. [Google Scholar]
  20. Polyanin, A.D.; Zaitsev, V.F. Handbook of Nonlinear Partial Differential Equations; Chapman & Hall: New York, NY, USA, 2004. [Google Scholar]
Figure 1. (a) Surface of the expectation $\mathbb{E}[u(x,t)]$; (b) surface of the standard deviation $\sqrt{\mathrm{Var}[u(x,t)]}$. Both statistical moment functions correspond to the exact solution s.p. in Equations (56) and (57) of the random IVP in Equations (52)–(55) on the domain $(x,t) \in [0,2.5]\times[0,2]$, considering $a \sim N_{[0.9,1.1]}(1; 0.05)$ and $b \sim \mathrm{Beta}_{[0.4,0.6]}(2; 2)$.
Figure 2. (a) Expectation, $\mathbb{E}[u(x_i,1)]$, of the exact solution s.p. in Equations (56) and (57) vs. the corresponding numerical approximations, $\mathbb{E}[u_N^{GH}(x_i,1)]$ (Equation (64)), by random Gauss–Hermite quadrature using Hermite polynomials of degree $N \in \{2,4,10\}$, at the time instant t = 1 and on the spatial domain $0 \le x \le 2.5$. (b) Relative errors of the expectation (Equation (67)) for Hermite polynomials of degree $N \in \{2,4,10\}$ on the spatial domain $0 \le x \le 2.5$.
Figure 3. (a) Standard deviation, $\sqrt{\mathrm{Var}[u(x_i,1)]}$, of the exact solution s.p. in Equations (56) and (57) vs. the corresponding numerical approximations, $\sqrt{\mathrm{Var}[u_N^{GH}(x_i,1)]}$ (Equations (64)–(66)), by random Gauss–Hermite quadrature using Hermite polynomials of degree $N \in \{2,4,10\}$, at the time instant t = 1 and on the spatial domain $0 \le x \le 2.5$. (b) Relative errors of the standard deviation (Equation (68)) for Hermite polynomials of degree $N \in \{2,4,10\}$ on the spatial domain $0 \le x \le 2.5$.
Figure 4. Simulations of the evolution along the time instants t = 0.5, t = 1, t = 1.5 and t = 2 of the approximated expectation, $\mathbb{E}[u_{N,K}^{GL}(x_i,t)]$ (Equation (85)), of the solution s.p. $u_N^{GL}(x,t)$ (Equation (84)) on the spatial domain $0 \le x \le 1$, for K = 1000 realizations via Monte Carlo and degree N = 10 of the Laguerre polynomial.
Figure 5. (a) Comparative graphics of the absolute deviations for successive approximations to the expectation, $\mathrm{AbsDev}\,\mathbb{E}[u_{N\to N+1,K}^{GL}(x_i,1)]$ (Equation (93)). (b) Comparative graphics of the absolute deviations for successive approximations to the standard deviation, $\mathrm{AbsDev}\,\sqrt{\mathrm{Var}}[u_{N\to N+1,K}^{GL}(x_i,1)]$ (Equation (93)). Both graphics correspond to time t = 1 on the spatial interval $0 \le x \le 1$, K = 1000 realizations and degrees $N \in \{4,6,8,10\}$ of the Laguerre polynomials.
Table 1. Approximate values of the expectations, $\mathbb{E}_{MC}^{K_i}[\bar x_j(1)]$, their absolute errors, $\mathrm{ErrAbs}\,\mathbb{E}_{MC}^{K_i}(1)$ (Equation (22)), and the numerical convergence ratios, $\mathrm{ratio}\,\mathbb{E}_{MC}^{K_{i-1}\to K_i}(1)$ (Equation (24)), at s = 1 for each component j = 1, 2 of the approximate vector solution of Equations (17) and (18) obtained by the Monte Carlo method, $\bar X(1) = [\bar x_1(1), \bar x_2(1)]$, considering consecutive numbers of simulations $K_i$, $i = 0,\dots,4$.
K_i                j    E_MC^{K_i}[x̄_j(1)]    ErrAbs E_MC^{K_i}(1)    ratio E_MC^{K_{i-1}→K_i}(1)
K_0 = 10^4         1    1.52545                1.82301e-03             —
                   2    0.669022               2.87531e-03             —
K_1 = 2×10^4       1    1.52640                8.72148e-04             2.09025
                   2    0.67048                1.41528e-03             2.03162
K_2 = 4×10^4       1    1.52826                9.86689e-04             0.88391
                   2    0.67354                1.63892e-03             0.86355
K_3 = 8×10^4       1    1.52568                1.59686e-03             0.61789
                   2    0.66951                2.38523e-03             0.68711
K_4 = 1.6×10^5     1    1.52708                2.00882e-04             7.94925
                   2    0.67160                2.96101e-04             8.05546
Table 2. Approximate values of the standard deviations, √Var_MC^{K_i}[x̄_j(1)], their absolute errors, ErrAbs_√Var^{MC,K_i}(1) (Equation (23)), and the numerical convergence ratios, ratio_√Var^{MC,K_{i-1}→K_i}(1) (Equation (25)), at s = 1 for each component j = 1, 2 of the approximate vector solution of Equations (17) and (18) obtained by the Monte Carlo method, X̄(1) = [x̄_1(1), x̄_2(1)], for consecutive numbers of simulations K_i, i = 0, …, 4.

|  | j | √Var_MC^{K_i}[x̄_j(1)] | ErrAbs_√Var^{MC,K_i}(1) | ratio_√Var^{MC,K_{i-1}→K_i}(1) |
|---|---|---|---|---|
| K_0 = 10^4 | 1 | 0.28309 | 1.44459e-03 | – |
|  | 2 | 0.42914 | 1.54728e-03 | – |
| K_1 = 2 × 10^4 | 1 | 0.28352 | 1.00834e-03 | 1.43264 |
|  | 2 | 0.42916 | 1.52672e-03 | 1.01346 |
| K_2 = 4 × 10^4 | 1 | 0.28609 | 1.55768e-03 | 0.64733 |
|  | 2 | 0.43368 | 2.98663e-03 | 0.51119 |
| K_3 = 8 × 10^4 | 1 | 0.28377 | 7.61522e-04 | 2.04548 |
|  | 2 | 0.42931 | 1.38278e-03 | 2.15987 |
| K_4 = 1.6 × 10^5 | 1 | 0.28453 | 2.50041e-06 | 304.559 |
|  | 2 | 0.43082 | 1.32738e-04 | 10.4173 |
Table 3. Exact values of the expectation, E[x_j(s)], and the standard deviation, √Var[x_j(s)], for each component j = 1, 2, of the random vector solution in Equation (19), together with the Monte Carlo approximations, E_MC^K[x̄_j(s)] and √Var_MC^K[x̄_j(s)], and their absolute errors, ErrAbs_E^{MC,K}(s) (Equation (22)) and ErrAbs_√Var^{MC,K}(s) (Equation (23)), computed with K = 1.6 × 10^5 simulations at the parameter values s = 0.5, 1, 1.5, 2.

| s | j | E[x_j(s)] | E_MC^K[x̄_j(s)] | ErrAbs_E^{MC,K}(s) | √Var[x_j(s)] | √Var_MC^K[x̄_j(s)] | ErrAbs_√Var^{MC,K}(s) |
|---|---|---|---|---|---|---|---|
| 0.5 | 1 | 1.23086 | 1.23078 | 7.83976e-05 | 0.11073 | 0.11070 | 3.23642e-05 |
|  | 2 | 0.52105 | 0.52085 | 1.97325e-04 | 0.27619 | 0.27618 | 1.52591e-05 |
| 1 | 1 | 1.52728 | 1.52708 | 2.00882e-04 | 0.28453 | 0.28453 | 2.50041e-06 |
|  | 2 | 0.67190 | 0.671601 | 2.96101e-04 | 0.43069 | 0.43082 | 1.32738e-04 |
| 1.5 | 1 | 1.91133 | 1.91095 | 3.78171e-04 | 0.55336 | 0.55348 | 1.2166e-04 |
|  | 2 | 0.874369 | 0.873953 | 4.16073e-04 | 0.66385 | 0.66419 | 3.3551e-04 |
| 2 | 1 | 2.41353 | 2.41291 | 6.20018e-04 | 0.96546 | 0.96583 | 3.70421e-04 |
|  | 2 | 1.14844 | 1.14788 | 5.55237e-04 | 1.01592 | 1.01649 | 5.69849e-04 |
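The diagnostics of Tables 1–3 follow a generic Monte Carlo pattern: estimate the moments from K realizations, compute absolute errors against the exact values, and form ratios of errors for consecutive sample sizes. Below is a minimal sketch with a toy scalar quantity x(1; a) = e^a, a ~ U(0, 1), whose exact moments are known; it is an illustration of Equations (22)–(25) in spirit, not the vector problem of Equations (17) and (18).

```python
import numpy as np

rng = np.random.default_rng(853)

# Exact moments of exp(a) for a ~ U(0, 1), used as the reference values.
exact_mean = np.e - 1.0
exact_std = np.sqrt((np.e**2 - 1.0) / 2.0 - exact_mean**2)

prev_err_mean = prev_err_std = None
for K in (10_000, 20_000, 40_000, 80_000, 160_000):
    samples = np.exp(rng.uniform(0.0, 1.0, size=K))
    err_mean = abs(samples.mean() - exact_mean)       # absolute error of E, cf. Eq. (22)
    err_std = abs(samples.std(ddof=1) - exact_std)    # absolute error of sqrt(Var), cf. Eq. (23)
    if prev_err_mean is not None:
        # Convergence ratios between consecutive sample sizes, cf. Eqs. (24) and (25)
        print(K, err_mean, prev_err_mean / err_mean,
              err_std, prev_err_std / err_std)
    prev_err_mean, prev_err_std = err_mean, err_std
```

Since the Monte Carlo error behaves like O(K^{-1/2}), doubling K should improve the error by a factor of about √2 on average; the fluctuating ratios in Tables 1 and 2 reflect the statistical nature of this estimate rather than a failure of convergence.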
Table 4. Values of the RMSEs for the expectation, RMSE_E(x_i, t) (Equation (69)), and the standard deviation, RMSE_√Var(x_i, t) (Equation (70)), at the time instant t = 1 on the spatial domain [0, 2.5], using the grid points x_i = i h, 0 ≤ i ≤ n = 10, with stepsize h = 0.25.

| N | RMSE_E(x_i, 1) | RMSE_√Var(x_i, 1) |
|---|---|---|
| 2 | 8.8178e-02 | 2.2916e-03 |
| 4 | 1.1892e-02 | 1.7732e-04 |
| 6 | 9.0351e-03 | 1.4160e-04 |
| 8 | 8.9287e-03 | 1.1546e-04 |
| 10 | 6.0334e-03 | 6.4509e-05 |
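The RMSE columns follow the usual root-mean-square recipe over the spatial nodes. A minimal sketch, with placeholder arrays standing in for the exact and approximate values in Equations (69) and (70):

```python
import numpy as np

def rmse(exact, approx):
    """Root mean square error over the grid points x_i (cf. Eqs. (69) and (70))."""
    exact = np.asarray(exact, dtype=float)
    approx = np.asarray(approx, dtype=float)
    return np.sqrt(np.mean((exact - approx) ** 2))

# Hypothetical usage on the grid x_i = i*h, h = 0.25, 0 <= i <= 10.
x = 0.25 * np.arange(11)
exact_vals = np.sin(x)                   # placeholder for the exact E[u(x_i, 1)]
approx_vals = np.sin(x) + 1e-3           # placeholder for the approximation
print(rmse(exact_vals, approx_vals))     # -> 1e-3 for this toy perturbation
```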
Table 5. Values of the RMSDs for the approximations of the expectation, RMSD_E^{K→K+1,GL}(x_i, t) (Equation (89)), and the standard deviation, RMSD_√Var^{K→K+1,GL}(x_i, t) (Equation (90)), at t = 1 on the spatial domain 0 ≤ x ≤ 1, with Laguerre polynomial degree N = 6 and numbers of realizations K_0 = 2500, K_1 = 5000, K_2 = 10^4, K_3 = 2 × 10^4, K_4 = 4 × 10^4 and K_5 = 5 × 10^4.

| K → K+1 | RMSD_E[u_{6,K→K+1}^GL(x_i, 1)] | RMSD_√Var[u_{6,K→K+1}^GL(x_i, 1)] |
|---|---|---|
| K_0 → K_1 | 2.81922e-05 | 2.91484e-05 |
| K_1 → K_2 | 1.96565e-05 | 1.18180e-05 |
| K_2 → K_3 | 1.11618e-05 | 2.49163e-06 |
| K_3 → K_4 | 1.07918e-05 | 4.96128e-06 |
| K_4 → K_5 | 3.59937e-06 | 2.83452e-06 |
Table 6. Values of the RMSDs for the approximations of the expectation, RMSD_E^{N→N+1,K,GL}(x_i, t) (Equation (91)), and the standard deviation, RMSD_√Var^{N→N+1,K,GL}(x_i, t) (Equation (92)), at the time instant t = 1 on the spatial domain 0 ≤ x ≤ 1, for consecutive Laguerre polynomial degrees N ∈ {4, 6, 8, 10, 12}.

| {N, N+1} | RMSD_E[u_{N→N+1,K}^GL(x_i, 1)] | RMSD_√Var[u_{N→N+1,K}^GL(x_i, 1)] |
|---|---|---|
| {4, 6} | 8.68910e-03 | 1.51443e-04 |
| {6, 8} | 4.00353e-03 | 7.17175e-05 |
| {8, 10} | 2.69052e-03 | 5.10703e-05 |
| {10, 12} | 2.01171e-03 | 3.80140e-05 |
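When the exact solution is unavailable, the same recipe is applied to two successive approximations (consecutive numbers of realizations K or consecutive degrees N) instead of exact versus approximate values. A minimal sketch in the spirit of Equations (89)–(92):

```python
import numpy as np

def rmsd(u_prev, u_next):
    """Root mean square deviation between two successive approximations
    evaluated on the same grid (cf. Eqs. (89)-(92))."""
    u_prev = np.asarray(u_prev, dtype=float)
    u_next = np.asarray(u_next, dtype=float)
    return np.sqrt(np.mean((u_next - u_prev) ** 2))
```

A decreasing RMSD as K or N grows, as observed in Tables 5 and 6, then serves as a practical indicator of numerical convergence.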
