Article

Improving Kernel Methods for Density Estimation in Random Differential Equations Problems

by Juan Carlos Cortés López * and Marc Jornet Sanz
Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2020, 25(2), 33; https://doi.org/10.3390/mca25020033
Submission received: 30 May 2020 / Revised: 16 June 2020 / Accepted: 18 June 2020 / Published: 18 June 2020
(This article belongs to the Special Issue Mathematical Modelling in Engineering & Human Behaviour 2019)

Abstract
Kernel density estimation is a non-parametric method to estimate the probability density function of a random quantity from a finite data sample. The estimator consists of a kernel function and a smoothing parameter called the bandwidth. Despite its undeniable usefulness, the convergence rate with the number of realizations may be slow, and discontinuities and peaks of the target density may not be correctly captured. In this work, we analyze the applicability of a parametric method based on Monte Carlo simulation for the density estimation of certain random variable transformations. This approach has important applications in the setting of differential equations with input random parameters.

1. Introduction

Given an abstract probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a random variable $X : \Omega \to \mathbb{R}$, its probability density function $f_X$ is defined as the Borel measurable and non-negative function that satisfies $\mathbb{P}[X \in B] = \int_B f_X(x)\,dx$ for any Borel set $B \subseteq \mathbb{R}$ ([1], Ch. 2). In other words, $f_X(x) = \frac{d\mathbb{P}_X}{dx}(x)$ is the Radon–Nikodym derivative of the probability law $\mathbb{P}_X = \mathbb{P} \circ X^{-1}$ with respect to the Lebesgue measure. When the density function exists, the random variable (its probability law) is said to be absolutely continuous. Examples of absolutely continuous distributions are Uniform, Triangular, Normal, Gamma, etc.
When the probability law of $X$ is unknown in explicit form but realizations of $X$ can be drawn, a kernel estimation may be used to reconstruct the probability density function of $X$. It is a non-parametric method that uses a finite data sample from $X$. If $x_1, \dots, x_M$ are independent realizations of $X$, then the kernel density estimator takes the form $\hat{f}_X^M(x) = \frac{1}{Mb} \sum_{i=1}^M K\left(\frac{x - x_i}{b}\right)$, where $K$ is the kernel function and $b$ is the bandwidth, see [2].
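To make the estimator concrete, here is a minimal sketch in Python (the language is our choice; the experiments reported below use Mathematica) with a Gaussian kernel and Silverman's rule of thumb for the bandwidth, the same choices used in Section 3. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def kde(x, sample, bandwidth=None):
    """Gaussian kernel density estimate at the points x from a 1-D sample.

    If no bandwidth is supplied, Silverman's rule of thumb
    b = 0.9 * min(sigma, IQR / 1.34) * M**(-1/5) is used."""
    sample = np.asarray(sample, dtype=float)
    M = sample.size
    if bandwidth is None:
        sigma = np.std(sample, ddof=1)
        iqr = np.subtract(*np.percentile(sample, [75, 25]))
        bandwidth = 0.9 * min(sigma, iqr / 1.34) * M ** (-1 / 5)
    # K(u) = exp(-u^2 / 2) / sqrt(2 pi), averaged over all sample points
    u = (np.asarray(x, dtype=float)[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (M * bandwidth * np.sqrt(2 * np.pi))
```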
Kernel density estimation may present a slow convergence rate with the number of realizations and may smooth out discontinuity and non-differentiability points of the target density. For certain random variable transformations $X = g(U, V)$, where $U$ and $V$ are independent random quantities and $g$ is a deterministic real function, we will see that there is an alternative estimation method of a parametric nature that is based on Monte Carlo simulation. Such an approach has been used for certain random differential equations problems, see [3,4,5,6,7]. However, a detailed comparison with kernel density estimation, both theoretical and numerical, has not been performed yet. This detailed comparison is the novelty of the present paper. We will demonstrate that the parametric method is more efficient and correctly captures density features. Numerical experiments will illustrate these improvements.
As shall be seen later (see expression (2)), our approach relies upon representing the probability density as the expectation of a transformation, derived through the so-called Random Variable Transformation (RVT) method [1] (also termed the "change of variable formula", see [8]). This method requires computing invertible transformations and their Jacobian determinants. We point out that, when the number of random model parameters is large, it is convenient to apply competitive techniques to carry out the computations. In this regard, we highlight recent contributions that address this important issue in the machine learning setting. In [8], the authors applied real-valued non-volume preserving (real NVP) transformations to estimate the density when reconstructing natural images on datasets through sampling, log-likelihood evaluation, and latent variable manipulations. Recently, in [9], real NVP was improved by combining so-called flow-based generative models. In the machine learning context, these are methods based upon the application of the "change of variable formula", together with the Knothe–Rosenblatt rearrangement.

2. Method

Let $X = g(U, V)$ be a random variable, where $U$ is a random variable, $V$ is a random variable/vector, and $g$ is a deterministic real function. The aim is to estimate the probability density function of $X$.
A kernel density estimator takes the form
$$\hat{f}_X^M(x) = \frac{1}{Mb} \sum_{i=1}^M K\left(\frac{x - g(u_i, v_i)}{b}\right), \tag{1}$$
where $u_1, \dots, u_M$ and $v_1, \dots, v_M$ are independent realizations of $U$ and $V$, respectively.
Let us see an alternative method when $U$ and $V$ are independent. Suppose that $U$ has an explicit density function $f_U$. Suppose also that $g(\cdot, V)$ is invertible for all $V$, with inverse $h(\cdot, V)$. Then,
$$f_X(x) = \mathbb{E}\left[ f_U(h(x, V))\, \left| \partial_1 h(x, V) \right| \right], \tag{2}$$
where $\partial_1$ denotes the partial derivative with respect to the first variable and $\mathbb{E}$ is the expectation operator. Notice that we are not requiring $V$ to have a probability density.
Indeed, if $\mathbb{P}_V = \mathbb{P} \circ V^{-1}$ denotes the probability law of $V$, then the following chain of equalities holds:
$$\begin{aligned}
\mathbb{P}[g(U,V) \le x] &= \int_{\mathbb{R}} \mathbb{P}[g(U,V) \le x \mid V = v]\, \mathbb{P}_V(dv) = \int_{\mathbb{R}} \mathbb{P}[g(U,v) \le x \mid V = v]\, \mathbb{P}_V(dv) \\
&= \int_{\mathbb{R}} \mathbb{P}[g(U,v) \le x]\, \mathbb{P}_V(dv) = \int_{\mathbb{R}} \int_{-\infty}^x f_{g(U,v)}(u)\, du\, \mathbb{P}_V(dv) \\
&= \int_{\mathbb{R}} \int_{-\infty}^x f_U(h(u,v))\, \left|\partial_1 h(u,v)\right| du\, \mathbb{P}_V(dv) = \int_{-\infty}^x \int_{\mathbb{R}} f_U(h(u,v))\, \left|\partial_1 h(u,v)\right| \mathbb{P}_V(dv)\, du \\
&= \int_{-\infty}^x \mathbb{E}\left[ f_U(h(u,V))\, \left|\partial_1 h(u,V)\right| \right] du.
\end{aligned}$$
We have used the independence of $U$ and $V$, the formula for the density of a transformed random variable ([1], Section 6.2.1), and Fubini's theorem.
To estimate the expectation from (2) when the probability law of $V$ is unknown in closed form but realizations of it can be drawn, Monte Carlo simulation may be conducted. For example, the crude Monte Carlo method estimates $f_X(x)$ by using
$$\hat{f}_X^M(x) = \frac{1}{M} \sum_{i=1}^M f_U(h(x, v_i))\, \left| \partial_1 h(x, v_i) \right|, \tag{3}$$
where $v_1, \dots, v_M$ are independent realizations of $V$ ([10], pp. 53–54).
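A generic sketch of estimator (3) in Python may look as follows; the function name and interface are ours, and all inputs are assumed to be vectorized NumPy callables.

```python
import numpy as np
from scipy.stats import norm

def parametric_density(x, f_U, h, dh, v_sample):
    """Crude Monte Carlo estimate (3) of f_X at the points x, for X = g(U, V).

    f_U: density of U; h: inverse of g in its first argument;
    dh: partial derivative of h with respect to x."""
    xs = np.asarray(x, dtype=float)[:, None]         # evaluation points (column)
    vs = np.asarray(v_sample, dtype=float)[None, :]  # realizations of V (row)
    return np.mean(f_U(h(xs, vs)) * np.abs(dh(xs, vs)), axis=1)

# Example: X = U + V with U ~ Normal(0, 1), so h(x, v) = x - v and |d_1 h| = 1
rng = np.random.default_rng(0)
v_draws = rng.uniform(-1.0, 1.0, size=10_000)
fx = parametric_density(np.linspace(-4.0, 4.0, 201), norm.pdf,
                        lambda x, v: x - v,
                        lambda x, v: np.ones_like(x - v), v_draws)
```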
Notice that the structure and the complexity of this estimator are very similar to those of a kernel estimator. For (1), one needs realizations $u_1, \dots, u_M, v_1, \dots, v_M$, while (3) only uses $v_1, \dots, v_M$. Hence, the complexity of drawing realizations of $V$ affects (1) and (3) in exactly the same way. On the other hand, (1) evaluates $K$ and $g$, while (3) evaluates $f_U$ and $h$.
By the central limit theorem, the root mean square error of the Monte Carlo estimate (3) of the mean is $\sigma / \sqrt{M} = O(M^{-1/2})$, where $\sigma$ is the standard deviation of the random quantity within the expectation (2) (which is assumed to be finite). Variance reduction methods may be applied, such as antithetic or control variates, see [11]. By contrast, the root mean square error of the kernel estimate (1) is $O(M^{-r})$, $r < 1/2$ [2].
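As an illustration of the first of these variance reduction techniques, the following sketch pairs each realization $v$ with $-v$; this is valid whenever the law of $V$ is symmetric about zero, which we assume here for concreteness by taking the setting of Example 1 below.

```python
import numpy as np
from scipy.stats import norm

def antithetic_estimate(x, M, seed=0):
    """Antithetic-variates estimate of f_X(x) = E[f_U(x - V)]
    with U ~ Normal(0, 1) and V ~ Uniform(-1, 1), at a single point x."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-1.0, 1.0, size=M // 2)   # M/2 draws of V
    # average each draw with its sign-flipped partner: -V has the same law as V
    return np.mean(0.5 * (norm.pdf(x - v) + norm.pdf(x + v)))
```

The gain comes from the negative correlation that the sign flip typically induces between the paired evaluations, which lowers the variance of the pairwise average.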
The density estimation of $f_X(x)$ thus becomes a parametric problem, since we are estimating an expectation, which is a parameter of a population distribution. This is in contrast to kernel density estimation, which is non-parametric because it reconstructs a whole distribution. Moreover, the method presented here acts pointwise, so discontinuities and non-differentiability points are correctly captured, without smoothing them out.

3. Some Numerical Experiments

In this section, we compare the new parametric method with kernel density estimation numerically. We use the software Mathematica® (Wolfram Research, Inc., Mathematica, Version 12.0, Champaign, IL, USA, 2019).
Example 1.
Let $X = U + V$, where $U \sim \text{Normal}(0,1)$ and $V \sim \text{Uniform}(-1,1)$ are independent random variables. Obviously, the exact density function of $X$ is known via convolution: $f_X(x) = \int_{-\infty}^{\infty} f_U(x - v) f_V(v)\, dv$. However, this example allows for comparing the new parametric method with kernel density estimation by calculating exact errors. With $g(u,v) = u + v$ and $h(x,v) = x - v$, expression (2) becomes $f_X(x) = \mathbb{E}[f_U(x - V)]$. From $M$ realizations, we conduct a kernel density estimation (Gaussian kernel with Silverman's rule to determine the bandwidth, $b = 0.9 \times \min\{\hat{\sigma}, \widehat{IQR}/1.34\} \times M^{-1/5}$, where $\hat{\sigma}$ is the standard deviation of the sample and $\widehat{IQR}$ is the interquartile range of the sample), crude Monte Carlo simulation on $\mathbb{E}[f_U(x - V)]$, the antithetic variates method ($M/2$ realizations of $V$, with the sample completed by changing signs), and the control variates method (the control variable here is $V$). Let $\hat{f}_X^M(x)$ be the density estimate of $f_X(x)$. We consider the error measure $\delta_M = \int_{-\infty}^{\infty} \left( f_X(x) - \hat{f}_X^M(x) \right)^2 dx$. Numerically, this integral is computed by fixing a large interval $[-R, R]$ and performing Gauss–Legendre quadrature on it. As $\delta_M$ is random, we instead consider $\epsilon_M = (\mathbb{E}[\delta_M])^{1/2}$. This expectation is estimated by repeating the density estimation several times (we did so 20 times). Figure 1 reports the estimated error $\hat{\epsilon}_M$ in log-log scale, for different orders $M$ growing geometrically. Observe that the parametric approach is more efficient than the kernel method. Variance reduction methods also allow for lowering the error. The lines corresponding to the three parametric methods have slope $-1/2$ approximately, due to the well-known decrease of the Monte Carlo error.
For example, to achieve an error $\epsilon_M < 0.01$, the kernel density estimation requires about 25,000 realizations, while the parametric crude Monte Carlo estimation needs 400 realizations. The antithetic variates method decreases the number of required realizations to 50–100, and the control variates approach lowers it even further, to less than 50.
In Table 1, we present the timings in seconds for achieving root mean square errors less than 0.01 and 0.001. We work at the density location $x = 1$. Observe that the kernel density estimation is the least efficient method, especially as we require smaller errors.
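A compact sketch of this experiment (in Python rather than the authors' Mathematica, with a plain Riemann sum standing in for Gauss–Legendre quadrature and SciPy's Silverman factor standing in for the exact bandwidth rule above) could read:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(1)
x = np.linspace(-6.0, 6.0, 601)                       # grid on [-R, R], R = 6
f_exact = 0.5 * (norm.cdf(x + 1) - norm.cdf(x - 1))   # exact f_X via convolution

M = 400
u, v = rng.standard_normal(M), rng.uniform(-1.0, 1.0, M)
f_kde = gaussian_kde(u + v, bw_method="silverman")(x)   # kernel estimate (1)
f_mc = norm.pdf(x[:, None] - v[None, :]).mean(axis=1)   # crude MC estimate (3)

# square root of delta_M for each method; repeating over seeds estimates eps_M
dx = x[1] - x[0]
for name, f in [("kernel", f_kde), ("parametric MC", f_mc)]:
    print(name, np.sqrt(np.sum((f_exact - f) ** 2) * dx))
```

With $M = 400$, one should observe the parametric error already near the 0.01 level, while the kernel error is several times larger, in line with Figure 1.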
Example 2.
Let $X = UV$, where $U \sim \text{Normal}(0,1)$ and $V = P + \sum_{i=1}^{12} Q_i$, where $P \sim \text{Poisson}(1)$ and $Q_i \sim \text{Triangular}(0.1, 0.105)$. All of the random variables are assumed to be independent. With $g(u,v) = uv$ and $h(x,v) = x/v$, expression (2) is $f_X(x) = \mathbb{E}\left[ f_U\!\left( \frac{x}{V} \right) \frac{1}{|V|} \right]$. Notice that, despite $V$ involving the discrete random variable $P$, $X$ has a density. This expectation cannot be computed via quadratures, due to the large dimension of the random space. We employ our parametric method using crude Monte Carlo simulation. In Figure 2, first panel, we depict the estimated density function $\hat{f}_X^M(x)$ using $M$ realizations. For $M = 10{,}000$ and $M = 30{,}000$, we perceive good agreement between the approximations. This is in contrast to the second panel of Figure 2, where the kernel density estimation does not show convergence yet.
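A sketch of the crude Monte Carlo computation for this example follows; the mode of the triangular law is assumed to sit at the midpoint of its support (a symmetric triangular distribution), and Python is again our choice of language.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
M = 10_000
# V = P + sum_{i=1}^{12} Q_i >= 1.2 > 0, so dividing by V is safe
P = rng.poisson(1.0, size=M)
Q = rng.triangular(0.1, 0.1025, 0.105, size=(M, 12)).sum(axis=1)
V = P + Q

# parametric estimate of f_X(x) = E[f_U(x / V) / |V|] with U ~ Normal(0, 1)
x = np.linspace(-6.0, 6.0, 601)
f_mc = (norm.pdf(x[:, None] / V[None, :]) / np.abs(V)[None, :]).mean(axis=1)
```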
Example 3.
Consider the same setting as in Example 2, but with $U \sim \text{Uniform}(-1,1)$ (now $f_U$ is not continuous on $\mathbb{R}$). In Figure 3, first panel, we show the estimated density function $\hat{f}_X^M(x)$ obtained by the parametric crude Monte Carlo method using $M$ realizations. The discontinuity points of the target density $f_X$ are correctly captured. Indeed, the method acts pointwise in $x$, so any feature of $f_X$ will be correctly identified, no matter how rare it is. By contrast, the kernel density estimation smooths out the discontinuities, see the second panel of the figure.
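Only the density $f_U$ changes with respect to the sketch of Example 2; reusing the arrays x and V from there, a minimal continuation is:

```python
# U ~ Uniform(-1, 1): f_U(u) = 1/2 on (-1, 1) and 0 outside; the indicator
# is evaluated exactly at each x, so jumps of f_X are not smoothed out
def f_U_uniform(u):
    return 0.5 * (np.abs(u) < 1.0)

f_mc3 = (f_U_uniform(x[:, None] / V[None, :]) / np.abs(V)[None, :]).mean(axis=1)
```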

4. Application to Random Differential Equations

A random differential equation problem considers some of the terms in the system (input coefficients, forcing term, initial conditions, etc.) as random variables or stochastic processes [1]. This randomness accounts for errors in data measurements when modeling a physical phenomenon, which introduce variability into the parameter estimates. The solution to the system is then a stochastic process. Its individual deterministic trajectories are not the main concern; instead, the main goal is the study of the statistical content of the solution. An important aim is to compute its probability density function. The key idea is that the solution is a transformation of the input random parameters; therefore, the probability density may be estimated as described in this paper whenever the solution is given in closed form. In the notation used in this paper, $U$ and $V$ would denote the input random parameters of the system, and $g$ would be the transformation mapping that relates the output to the inputs. A specific input parameter is selected as $U$, with respect to which the mapping $g$ is easily invertible to obtain $h$.
We consider the Riccati random differential equation
$$\dot{X}(t) = -A\, X(t)^2, \quad t \in [0, \infty), \qquad X(0) = X_0, \tag{4}$$
where $A > 0$ and $X_0$ are independent random variables. The solution to (4) is given by
$$X(t) = \frac{X_0}{1 + A X_0 t}.$$
By taking $U = X_0$, the density function of $X(t)$ is given by
$$f_{X(t)}(x) = \mathbb{E}\left[ f_{X_0}\!\left( \frac{x}{1 - A x t} \right) \frac{1}{(1 - A x t)^2} \right].$$
From $M$ realizations of $A$, say $A^{(1)}, \dots, A^{(M)}$, the expectation is estimated via crude Monte Carlo simulation:
$$f_{X(t)}(x) \approx \frac{1}{M} \sum_{i=1}^M f_{X_0}\!\left( \frac{x}{1 - A^{(i)} x t} \right) \frac{1}{(1 - A^{(i)} x t)^2}. \tag{5}$$
As seen in the previous sections, this procedure is more efficient and reliable than kernel density estimation, expressed as
$$f_{X(t)}(x) \approx \frac{1}{Mb} \sum_{i=1}^M K\left( \frac{x - X_0^{(i)} / (1 + A^{(i)} X_0^{(i)} t)}{b} \right), \tag{6}$$
where the superscript $i$ denotes the $i$-th realization, $i = 1, \dots, M$.
For a specific numerical example, let us fix $A \sim \text{Uniform}(-1, 1)$ and $X_0 \sim \text{Uniform}(0.1, 0.13)$. Figure 4, first panel, plots (5) for different numbers of realizations $M$ of $A$, at time $t = 0.3$. A comparison is conducted with kernel density estimation with a Gaussian kernel (second panel). It can be appreciated that the parametric Monte Carlo method (5) correctly captures the non-differentiability points of the target density.
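A sketch of estimator (5) for this example is given below, assuming the input laws as reconstructed above ($A$ uniform on $(-1,1)$, $X_0$ uniform on $(0.1, 0.13)$); the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
t, M = 0.3, 10_000
A = rng.uniform(-1.0, 1.0, size=M)            # realizations A^(i)

def f_X0(u):
    """Density of X0 ~ Uniform(0.1, 0.13)."""
    return (1.0 / 0.03) * ((u >= 0.1) & (u <= 0.13))

def density_Xt(x):
    """Estimate (5) of f_{X(t)}(x) at a single point x."""
    w = 1.0 - A * x * t                       # 1 - A^(i) x t, never 0 here
    return np.mean(f_X0(x / w) / w ** 2)

xs = np.linspace(0.09, 0.14, 200)
fx = np.array([density_Xt(xx) for xx in xs])
```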
For another example, let us consider the damped pendulum differential equation with uncertainties:
$$\ddot{X}(t) + 2\omega_0 \xi \dot{X}(t) + \omega_0^2 X(t) = Y(t), \quad t \in [0, T], \qquad X(0) = X_0, \quad \dot{X}(0) = X_1, \tag{7}$$
where $T > 0$, $\omega_0 \neq 0$, $\xi \neq 0$ and $\xi^2 < 1$ (underdamped case) are constants, the initial position $X_0$ and the initial velocity $X_1$ are random variables, and the source/forcing term $Y(t)$ is a stochastic process ([1], Example 7.2). The solution is given by
$$X(t) = \left( \frac{\xi\, e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\sqrt{1 - \xi^2}} + e^{-\omega_0 \xi t} \cos(\omega_1 t) \right) X_0 + \frac{e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\omega_1}\, X_1 + \int_0^t p(t - s)\, Y(s)\, ds,$$
where $\omega_1 = \omega_0 \sqrt{1 - \xi^2} \neq 0$ and $p(t) = \frac{1}{\omega_1} e^{-\xi \omega_0 t} \sin(\omega_1 t)$. In [4], the density function of the response $X(t)$ is expressed in terms of an expectation (2), which is estimated by means of parametric crude Monte Carlo simulation due to the large dimension of the random space. By taking $U = X_0$, one of the formulas derived is
$$f_{X(t)}(x) = \mathbb{E}\left[ f_{X_0}\!\left( \frac{x - \frac{e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\omega_1} X_1 - \int_0^t p(t-s) Y(s)\, ds}{\frac{\xi e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\sqrt{1-\xi^2}} + e^{-\omega_0 \xi t} \cos(\omega_1 t)} \right) \left| \frac{\xi e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\sqrt{1-\xi^2}} + e^{-\omega_0 \xi t} \cos(\omega_1 t) \right|^{-1} \right]$$
$$\approx \frac{1}{M} \sum_{i=1}^M f_{X_0}\!\left( \frac{x - \frac{e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\omega_1} X_1^{(i)} - \int_0^t p(t-s) Y^{(i)}(s)\, ds}{\frac{\xi e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\sqrt{1-\xi^2}} + e^{-\omega_0 \xi t} \cos(\omega_1 t)} \right) \left| \frac{\xi e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\sqrt{1-\xi^2}} + e^{-\omega_0 \xi t} \cos(\omega_1 t) \right|^{-1}, \tag{8}$$
where the superscript $i$ denotes the $i$-th realization, $i = 1, \dots, M$. Compare (8) with a kernel density estimation
$$f_{X(t)}(x) \approx \frac{1}{Mb} \sum_{i=1}^M K\left( \frac{x - \left( \frac{\xi e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\sqrt{1-\xi^2}} + e^{-\omega_0 \xi t} \cos(\omega_1 t) \right) X_0^{(i)} - \frac{e^{-\omega_0 \xi t} \sin(\omega_1 t)}{\omega_1} X_1^{(i)} - \int_0^t p(t-s) Y^{(i)}(s)\, ds}{b} \right). \tag{9}$$
The expressions and their complexities are very similar, but the convergence of (8) is faster with M, as justified in the previous sections.
Let us see a numerical example. We take the final time $T = 1$, the damping ratio $\xi = 1/2$ and the natural frequency $\omega_0 = \pi/2$. Consider $X_0 \sim \text{Exponential}(3)$, $X_1 \sim \text{Binomial}(7, 0.31)$ and $Y(t) = \sum_{j=1}^{\infty} \frac{\sqrt{2}}{j\pi} \sin(j\pi t)\, \xi_j$. The series is understood in $L^2([0,1] \times \Omega)$ and $\{\xi_j\}_{j=1}^{\infty}$ is a sequence of independent random variables with $\text{Uniform}(-\sqrt{3}, \sqrt{3})$ distribution. This is a Karhunen–Loève expansion ([10], p. 47). It is assumed that $X_0$, $X_1$, and $Y$ are independent. By applying (8) with the series for $Y(t)$ truncated to a finite number of terms, we estimate the density function of $X(t)$. For example, in Figure 5, first panel, $f_{X(0.5)}(x)$ is plotted for orders of truncation $N = 4$, 5, and 6. The graphs clearly overlap. The strong oscillations of the density are perfectly captured because the method acts pointwise. In the second panel of the figure, the kernel density estimate with a Gaussian kernel is plotted, with $10^6$ realizations. Although there is apparent agreement between both panels, our parametric Monte Carlo method captures all the peaks, while the kernel method smooths them out.
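The following sketch outlines this computation in Python, with several reconstructions hedged explicitly: the Exponential(3) law is taken with rate 3 (Mathematica's convention, so the SciPy scale is 1/3), the $\xi_j$ are taken uniform on $(-\sqrt{3}, \sqrt{3})$ so that they have unit variance as a Karhunen–Loève expansion requires, and the deterministic time integrals of the truncated series are precomputed by quadrature.

```python
import numpy as np
from scipy.stats import expon
from scipy.integrate import quad

rng = np.random.default_rng(4)
xi, w0 = 0.5, np.pi / 2                       # damping ratio and frequency
w1 = w0 * np.sqrt(1.0 - xi ** 2)
p = lambda tau: np.exp(-xi * w0 * tau) * np.sin(w1 * tau) / w1

t, N, M = 0.5, 6, 100_000                     # time, truncation order, draws
c0 = (xi * np.exp(-w0 * xi * t) * np.sin(w1 * t) / np.sqrt(1.0 - xi ** 2)
      + np.exp(-w0 * xi * t) * np.cos(w1 * t))        # coefficient of X0
c1 = np.exp(-w0 * xi * t) * np.sin(w1 * t) / w1       # coefficient of X1

# deterministic integrals int_0^t p(t - s) sqrt(2)/(j pi) sin(j pi s) ds
a = np.array([np.sqrt(2.0) / (j * np.pi)
              * quad(lambda s: p(t - s) * np.sin(j * np.pi * s), 0.0, t)[0]
              for j in range(1, N + 1)])

X1 = rng.binomial(7, 0.31, size=M)                    # initial velocity draws
Z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(M, N))
I = Z @ a                                             # forcing integral per draw

def density_Xt(x):
    """Estimate (8) of f_{X(t)}(x); X0 ~ Exponential with rate 3 assumed."""
    return np.mean(expon.pdf((x - c1 * X1 - I) / c0, scale=1.0 / 3.0) / abs(c0))
```

Evaluating density_Xt over a grid of points reproduces, under these assumptions, the kind of oscillatory density shown in the first panel of Figure 5.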
Further applications of the methodology for random differential equations may be consulted in [3,4,5,6,7]. The present paper forms the theoretical and computational foundations of the methodology used in those recent contributions.

5. Conclusions

In this paper, we have been concerned with the density estimation of random variables. We have proposed an alternative to kernel density estimation for certain random variable transformations. The alternative is based on the estimation of an expectation parameter via Monte Carlo methods; therefore, it is of a parametric nature and improves on kernel methods in terms of efficiency. Furthermore, the method captures density features and does not smooth out discontinuity and non-differentiability points of the target density.
As shown here, the solution to some random differential equation problems is an explicit transformation of the input random parameters. The methodology proposed in this paper may be employed to estimate the density function of the closed-form stochastic solution parametrically, instead of relying on kernel estimation.

Author Contributions

Investigation, M.J.S.; Methodology, M.J.S.; Software, M.J.S.; Supervision, J.C.C.L.; Validation, J.C.C.L.; Visualization, J.C.C.L. and M.J.S.; Writing—original draft, M.J.S.; Writing—review and editing, J.C.C.L. and M.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the Spanish Ministerio de Economía, Industria y Competitividad (MINECO), the Agencia Estatal de Investigación (AEI) and the Fondo Europeo de Desarrollo Regional (FEDER UE) through grant MTM2017-89664-P.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Soong, T.T. Random Differential Equations in Science and Engineering; Academic Press: New York, NY, USA, 1973.
2. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman and Hall: London, UK, 1986.
3. Calatayud, J.; Cortés, J.-C.; Díaz, J.A.; Jornet, M. Constructing reliable approximations of the probability density function to the random heat PDE via a finite difference scheme. Appl. Numer. Math. 2020, 151, 413–424.
4. Calatayud, J.; Cortés, J.-C.; Jornet, M. The damped pendulum random differential equation: A comprehensive stochastic analysis via the computation of the probability density function. Physica A 2018, 512, 261–279.
5. Calatayud, J.; Cortés, J.-C.; Díaz, J.A.; Jornet, M. Density function of random differential equations via finite difference schemes: A theoretical analysis of a random diffusion-reaction Poisson-type problem. Stochastics 2019.
6. Calatayud, J.; Cortés, J.-C.; Dorini, F.A.; Jornet, M. Extending the study on the linear advection equation subject to stochastic velocity field and initial condition. Math. Comput. Simul. 2020, 172, 159–174.
7. Jornet, M.; Calatayud, J.; Le Maître, O.P.; Cortés, J.-C. Second order linear differential equations with analytic uncertainties: Stochastic analysis via the computation of the probability density function. J. Comput. Appl. Math. 2020, 374, 112770.
8. Dinh, L.; Sohl-Dickstein, J.; Bengio, S. Density estimation using Real NVP. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
9. Tang, K.; Wan, X.; Liao, Q. Deep density estimation via invertible block-triangular mapping. Theor. Appl. Mech. Lett. 2020, 10, 143–148.
10. Xiu, D. Numerical Methods for Stochastic Computations: A Spectral Method Approach; Princeton University Press: Princeton, NJ, USA, 2010.
11. Botev, Z.; Ridder, A. Variance Reduction. Wiley StatsRef: Statistics Reference Online 2017, 1–6.
Figure 1. Density estimation using a kernel method, parametric crude Monte Carlo (MC) simulation, antithetic variates and control variates methods. Estimated error $\hat{\epsilon}_M$ in log-log scale. This figure corresponds to Example 1.
Figure 2. Density estimation $\hat{f}_X^M(x)$ using parametric crude Monte Carlo simulation (first panel) and kernel density estimation (second panel) with $M$ realizations. This figure corresponds to Example 2.
Figure 3. Density estimation $\hat{f}_X^M(x)$ using parametric crude Monte Carlo simulation (first panel) and kernel density estimation (second panel) with $M$ realizations. This figure corresponds to Example 3.
Figure 4. First panel: estimated density function $f_{X(0.3)}(x)$ with parametric Monte Carlo for $M = 10{,}000$ and $M = 30{,}000$. Second panel: kernel density estimate with Gaussian kernel for $M = 10^5$ and $M = 10^6$.
Figure 5. First panel: estimated density function $f_{X(0.5)}(x)$ with parametric Monte Carlo for orders of truncation $N = 4$, 5 and 6 (see [4]). Second panel: kernel density estimate with Gaussian kernel for $N = 6$.
Table 1. Time in seconds to achieve root mean square error less than 0.01 and 0.001. We employ a kernel method, parametric crude Monte Carlo (MC) simulation, antithetic variates and control variates methods. This table corresponds to Example 1.

Time             Kernel     Crude MC    Antithetic Variates    Control Variates
error < 0.01     0.063      0.00089     0.00023                0.00015
error < 0.001    4.2        0.13        0.012                  0.0057
