Article

On the Entropy of Fractionally Integrated Gauss–Markov Processes

1
Dipartimento di Matematica, Università “Tor Vergata”, via della Ricerca Scientifica, I-00133 Roma, Italy
2
Dipartimento di Matematica e Applicazioni, Università “Federico II”, via Cintia, Complesso Monte S. Angelo, I-80126 Napoli, Italy
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2020, 8(11), 2031; https://doi.org/10.3390/math8112031
Submission received: 30 September 2020 / Revised: 29 October 2020 / Accepted: 13 November 2020 / Published: 14 November 2020
(This article belongs to the Special Issue Stochastic Modeling in Biology)

Abstract:
This paper is devoted to the estimation of the entropy of the dynamical system {X^α(t), t ≥ 0}, where the stochastic process X^α(t) is the fractional Riemann–Liouville integral of order α ∈ (0, 1) of a Gauss–Markov process. The study is based on an algorithm specifically devised to simulate sample paths of such processes and to evaluate a numerical approximation of the entropy. We focus on fractionally integrated Brownian motion and the Ornstein–Uhlenbeck process, due to their central role in theory and applications. Their entropy is estimated by computing its approximation, the approximate entropy (ApEn). We investigate the relation between the value of α and the complexity degree; we show that the entropy of X^α(t) is a decreasing function of α ∈ (0, 1).
MSC:
60G15; 26A33; 65C20

1. Introduction

In the study of a biological system whose time evolution is modeled by a stochastic process depending on a certain parameter α, one often needs to determine how a change in the value of α affects the qualitative behavior of the system, as well as its complexity degree, or entropy. Another useful piece of information is a stochastic ordering, with respect to expectations of functionals of the process (e.g., its mean and variance), as α varies.
As a case study, we are interested in the qualitative behavior of the fractional integral of a Gauss–Markov (GM) process, when varying the order α of the fractional integration.
Actually, GM processes and their fractional integrals over time are very relevant in various application fields, especially in Biology—e.g., in stochastic models for neuronal activity (see [1]). In particular, the fractional integral of order α ( 0 , 1 ) of a GM process, say X α ( t ) , is suitable to describe certain stochastic phenomena with long range memory dynamics, involving correlated input processes (see [2]).
As an example of application, one can consider the model for the neuronal activity, based on the coupled differential equations:
$$
\begin{cases}
D^{\alpha} V(t) = -\dfrac{g_L}{C_m}\,V_L + \dfrac{\eta(t)}{C_m}, & V(0) = V_0,\\[6pt]
d\eta(t) = -\dfrac{\eta(t) - I(t)}{\tau}\,dt + \dfrac{\varsigma}{\tau}\,dB(t), & \eta(0) = \eta_0.
\end{cases}
\qquad (1)
$$
Here, D^α stands for the Caputo fractional derivative (see [3]); η(t) replaces the white noise usually utilized in the stochastic differential equation describing a Leaky Integrate-and-Fire (LIF) neuronal model (see, for example, [4]). The colored-noise process η(t) is the correlated process obeying the second of Equations (1), and it is the input for the first one; it is indeed a time-non-homogeneous GM process of Ornstein–Uhlenbeck (OU) type (see Section 2). The stochastic process V(t) represents the voltage of the neuronal membrane, whereas C_m is the membrane capacitance, g_L the leak conductance, V_L the resting (equilibrium) potential, I(t) the synaptic current (a deterministic function), τ the correlation time of η(t), and B(t) the noise (standard BM). As we can see, the process V(t), which is the solution of (1), belongs to the class of fractional integrals of GM processes. Indeed, it is a specific example of an X^α(t) process, V(t) being the Caputo fractional integral of the η(t) process [5]. The biophysical motivation of the above model is to describe the neuronal activity as a perfect integrator (without leakage), from an initial time up to the current time, of the process η(t), representing the time-dependent input. The use of fractional operators allows us to regulate the time scale by choosing the fractional order of integration so that it adheres to the neuro-physiological evidence. Indeed, such a model can be useful, for instance, in the investigation and simulation of synchronous/asynchronous communications in networks of neurons [6].
To introduce the terms of our investigation, we recall some definitions.
A continuous GM process Y ( t ) is a stochastic process of the form:
$$
Y(t) = m(t) + h_2(t)\,B(r(t)), \qquad t \ge 0,
$$
where B(t) = B_t denotes standard Brownian motion (BM); m(t), h_1(t), h_2(t) are C^1 functions on (0, +∞), with h_2(t) ≠ 0, and r(t) = h_1(t)/h_2(t) is a monotone increasing, differentiable and non-negative function.
For a continuous function f ( t ) , its Riemann–Liouville (RL) fractional integral of order α ( 0 , 1 ) is defined as (see [7]):
$$
I^{\alpha}(f)(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds,
$$
where Γ is the Euler Gamma function, i.e., Γ(z) = ∫_0^{+∞} t^{z−1} e^{−t} dt, z > 0.
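As an illustration, the RL integral can be approximated directly from this definition by a midpoint rule. The following is a Python sketch (the function name `rl_integral` and the discretization are our own choices, not the paper's R code), checked against the exact value I^α(f)(t) = t^{1+α}/Γ(2+α) for f(s) = s.

```python
import math

def rl_integral(f, t, alpha, n=100_000):
    """Midpoint-rule approximation of the Riemann-Liouville integral
    I^alpha(f)(t) = (1/Gamma(alpha)) * integral_0^t (t - s)^(alpha - 1) f(s) ds."""
    ds = t / n
    acc = sum((t - (i + 0.5) * ds) ** (alpha - 1) * f((i + 0.5) * ds)
              for i in range(n))
    return acc * ds / math.gamma(alpha)

# exact value for f(s) = s: I^alpha(f)(t) = t^(1 + alpha) / Gamma(2 + alpha)
t, alpha = 1.0, 0.5
exact = t ** (1 + alpha) / math.gamma(2 + alpha)
approx = rl_integral(lambda s: s, t, alpha)
```

For α close to 1, the same routine reproduces the ordinary Riemann integral ∫_0^t f(s) ds, consistent with the limiting behavior of I^α.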
We recall also that the Caputo fractional derivative of order α of a function f ( t ) is defined by (see [3]):
$$
D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\,ds,
$$
where f′ denotes the ordinary derivative of f.
Notice that, taking the limit as α → 1, one gets D^1 f(t) = f′(t), while I^1(f)(t) = ∫_0^t f(s) ds, i.e., the ordinary Riemann integral of f. Moreover, D^0 f(t) = f(t) − f(0) and I^0(f)(t) = f(t).
Referring to the neuronal model (1), assuming that V ( 0 ) = 0 (and, in some cases, also η ( 0 ) = 0 ), the RL fractional integral I α is used as the left-inverse of the Caputo derivative D α (see [8,9]). In this way, we find that the solution V ( t ) of (1) involves the RL fractional integral process of the GM process η ( t ) , specifically:
$$
V(t) = I^{\alpha}\left(D^{\alpha} V(t)\right) = I^{\alpha}\!\left(-\frac{g_L}{C_m}\,V_L\right) + I^{\alpha}\!\left(\frac{\eta(t)}{C_m}\right), \qquad V(0) = 0.
$$
Thus, V ( t ) turns out to be written in terms of the fractional integral of η ( t ) .
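The left-inverse property can be checked numerically: for f(s) = s², the Caputo derivative is D^α f(t) = 2t^{2−α}/Γ(3−α) in closed form, and applying the RL integral to it should return f(t) − f(0) = t². A Python sketch (the function name and the midpoint discretization are ours, not from the paper):

```python
import math

def rl_integral(f, t, alpha, n=100_000):
    # midpoint-rule approximation of the RL integral I^alpha(f)(t)
    ds = t / n
    return sum((t - (i + 0.5) * ds) ** (alpha - 1) * f((i + 0.5) * ds)
               for i in range(n)) * ds / math.gamma(alpha)

alpha, t = 0.6, 1.5
# closed-form Caputo derivative of f(s) = s^2: D^alpha s^2 = 2 s^(2-alpha) / Gamma(3-alpha)
caputo_f = lambda s: 2.0 * s ** (2 - alpha) / math.gamma(3 - alpha)
# left-inverse property: I^alpha(D^alpha f)(t) = f(t) - f(0) = t^2
recovered = rl_integral(caputo_f, t, alpha)
```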
From this consideration, within the framework of general stochastic models involving correlated processes, it appears useful to investigate the properties of X^α(t) = I^α(Y)(t), i.e., the fractional integral of a GM process Y(t), as α ∈ (0, 1) varies. Although X^α(t) is not Markov, we showed in [2] that it is still a Gaussian process with mean μ_α(t) and variance σ_α²(t); for instance, the fractional integral of BM has mean μ_α(t) = 0 and variance
$$
\sigma_{\alpha}^2(t) = \frac{t^{2\alpha+1}}{(2\alpha+1)\,\Gamma^2(\alpha+1)}
$$
(for closed formulae of the mean μ_α(t) and variance σ_α²(t) of the fractional integral of a general GM process, see [2]). For fixed α, σ_α²(t) turns out to be an increasing function of t. Moreover, in [2] we found that, for small values of the time t, the variances of the considered fractionally integrated GM processes become ever lower as α increases (i.e., the variance decreases as a function of α); for large values of t this behavior is overturned, and the variance increases with α (see [2]).
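The closed formula above admits a direct cross-check: using cov(B(u), B(v)) = min(u, v) and symmetry, the double integral defining the variance of I^α(B)(t) reduces to (2/(α Γ(α)²)) ∫_0^t u (t − u)^{2α−1} du, which should match t^{2α+1}/((2α+1) Γ²(α+1)). A Python sketch of this check (function names are our own):

```python
import math

def fibm_var_closed(t, alpha):
    # sigma_alpha^2(t) = t^(2 alpha + 1) / ((2 alpha + 1) * Gamma(alpha + 1)^2)
    return t ** (2 * alpha + 1) / ((2 * alpha + 1) * math.gamma(alpha + 1) ** 2)

def fibm_var_numeric(t, alpha, n=200_000):
    # Var X^alpha(t) = 2/(alpha * Gamma(alpha)^2) * integral_0^t u (t - u)^(2 alpha - 1) du,
    # evaluated by the midpoint rule
    du = t / n
    s = sum((i + 0.5) * du * (t - (i + 0.5) * du) ** (2 * alpha - 1)
            for i in range(n)) * du
    return 2.0 * s / (alpha * math.gamma(alpha) ** 2)
```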
In this paper, we aim to characterize the qualitative behavior of the dynamical system X α ( t ) , α ( 0 , 1 ) by means of its entropy. Indeed, the entropy is widely used for this purpose in many fields (see [10,11,12,13,14]). In Biology, entropy is useful to characterize the behavior of, for example, Leaky Integrate-and-Fire (LIF) neuronal models (see [4]). In finance, Kelly in [15] introduced entropy for gambling on horse races, and Breiman in [16] for investments in general markets. Finally, the admissible self-financing strategy achieving the maximum entropy results in a growth optimal strategy (see [17]).
In order to specify the entropy for the processes considered in this paper, we first note that, for a fixed time s, the r.v. X^α(s) is normally distributed with mean μ_α(s) and variance σ_α²(s). Recalling that the entropy of a r.v. X with density f(x) is given by
$$
H(X) = -\int_{-\infty}^{+\infty} \log_2(f(x))\, f(x)\,dx,
$$
by calculation it easily follows that the entropy of the normal r.v. X = X^α(s), for fixed s, depends only on σ_α²(s) and is given by (see [18], p. 181):
$$
H(X) = \log_2\!\left(\sigma_{\alpha}(s)\,\sqrt{2\pi e}\,\right).
$$
Thus, the larger the variance σ α 2 ( s ) , the larger the entropy of X α ( s ) for a fixed time s .
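This closed form can be verified against the integral definition by direct numerical quadrature; a small Python sketch (function names are our own):

```python
import numpy as np

def entropy_closed(sigma):
    # H(X) = log2(sigma * sqrt(2 pi e)) for X ~ N(mu, sigma^2)
    return np.log2(sigma * np.sqrt(2 * np.pi * np.e))

def entropy_numeric(sigma, mu=0.0, n=200_001):
    # H(X) = -integral f(x) log2 f(x) dx, evaluated by a Riemann sum on a wide grid
    x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, n)
    dx = x[1] - x[0]
    f = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return -np.sum(f * np.log2(f)) * dx
```

The quadrature agrees with the closed form, and the monotonicity in σ is immediate from the formula.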
In this paper we are interested in studying a different quantity: for a given value of α ∈ (0, 1) and T > 0, our aim is to find the entropy of the trajectories of X^α(t), t ∈ [0, T], which involves all the points of the trajectories up to time T, and to show that this entropy is a decreasing function of α.
We do not actually compute the entropy of X^α(t), but its approximate entropy ApEn (see [19]), obtained by using several long-enough simulated trajectories (previously obtained in [2] for the fractional integral of some noteworthy GM processes Y(t), namely BM and the Ornstein–Uhlenbeck (OU) process). In fact, Pincus [19] showed that ApEn is suitable to quantify the concept of changing complexity, being able to distinguish a wide variety of system behaviors. Indeed, for general time series, it can potentially separate those coming from deterministic systems from those coming from stochastic ones, and those coming from periodic systems from chaotic ones; moreover, for a homogeneous, ergodic Markov chain, ApEn coincides with the Kolmogorov–Sinai entropy. Thus, though X^α(t) is not a Markov process, its approximate entropy ApEn is able to characterize the complexity degree of the system, when varying α.
As we said, we previously found that, in all the considered cases of GM processes, for large t the variance σ α 2 ( t ) of their fractional integral X α ( t ) is an increasing function of α , while for small t it decreases with α ; instead, the covariance function has more diversified behaviors (see [2]).
In the present article, we show that, for small values of α ( 0 , 1 ) , X α ( t ) exhibits a large value of the complexity degree; a possible explanation is that, for small α the trajectories of the process X α ( t ) become more jagged, giving rise to a greater value of the complexity degree. In fact, our estimates of ApEn show that it is a decreasing function of α ( 0 , 1 ) . This behavior appears for the fractional integral of BM (FIBM), as well as for the fractional integral of the OU process (FIOU).

2. The Entropy of the Trajectories of X α ( t )

In this section, we study the complexity degree of the trajectories of the process X α ( t ) , in two noteworthy cases of GM processes Y ( t ) , precisely:
(i)
Y(t) = B_t, so that X^α(t) = I^α(B)(t) is fractionally integrated Brownian motion (FIBM);
(ii)
Y(t) is the Ornstein–Uhlenbeck (OU) process, driven by the SDE:
$$
dY(t) = -\mu\,(Y(t) - \beta)\,dt + \sigma\,dB_t, \qquad Y(0) = y \quad (\mu, \sigma > 0,\ \beta \in \mathbb{R}),
$$
which can be expressed as (see [20]):
$$
Y(t) = \beta + e^{-\mu t}\left[\,y - \beta + B(r(t))\,\right],
$$
where the equality is meant in distribution, and
$$
r(t) = \frac{\sigma^2}{2\mu}\left(e^{2\mu t} - 1\right).
$$
The OU process Y(t) is a GM process of the form (2), with:
$$
m(t) = \beta + e^{-\mu t}(y - \beta), \qquad h_1(t) = \frac{\sigma^2}{2\mu}\left(e^{\mu t} - e^{-\mu t}\right), \qquad h_2(t) = e^{-\mu t},
$$
and covariance
$$
c(s,t) = h_1(s)\,h_2(t) = \frac{\sigma^2}{2\mu}\left(e^{-\mu(t-s)} - e^{-\mu(s+t)}\right), \qquad 0 \le s \le t.
$$
Then, X α ( t ) = I α ( Y ) ( t ) is called the fractionally integrated OU (FIOU) process.
Both FIBM and FIOU are Gaussian processes whose variance and covariance functions were explicitly obtained in [2] and studied, as functions of α ( 0 , 1 ) .
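As a quick consistency check (function names are our own), the covariance formula for the OU process agrees with what the representation Y(t) = β + e^{−μt}[y − β + B(r(t))] gives directly, namely cov(Y(s), Y(t)) = h₂(s) h₂(t) r(s) for s ≤ t:

```python
import math

def ou_cov_formula(s, t, mu, sigma):
    # c(s, t) = sigma^2/(2 mu) * (e^{-mu (t - s)} - e^{-mu (s + t)}), for 0 <= s <= t
    return sigma ** 2 / (2 * mu) * (math.exp(-mu * (t - s)) - math.exp(-mu * (s + t)))

def ou_cov_representation(s, t, mu, sigma):
    # cov(Y(s), Y(t)) = h2(s) h2(t) cov(B(r(s)), B(r(t))) = e^{-mu s} e^{-mu t} r(s)
    r_s = sigma ** 2 / (2 * mu) * (math.exp(2 * mu * s) - 1)
    return math.exp(-mu * s) * math.exp(-mu * t) * r_s
```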
To study the complexity degree of the trajectories of the process X^α(t), in cases (i) and (ii), we make use of several simulated trajectories of length N, previously obtained in [2], for N large. The sample paths were obtained using the R software, with time discretization step h = 0.01 and the same sequence of pseudo-random Gaussian numbers. The simulation algorithm was implemented as an R script. More specifically, we specialize the algorithm to simulate an array (x_1, x_2, …, x_N) of Gaussian numbers with a specified covariance matrix. We first set the time instants t_1 < t_2 < … < t_N (with t_0 = 0 and t_i = t_{i−1} + h, i = 1, …, N) and evaluate the elements of the covariance matrix C_{i,j} = cov(X^α(t_i), X^α(t_j)). Note that, for each fractionally integrated Gauss–Markov process considered here, we implemented a specific algorithm to evaluate numerically the mathematical expression of the covariance, according to Equation (3.5) of [2]. Then, we apply the Cholesky decomposition to the matrix C in order to determine the lower triangular matrix G such that C = GGᵀ, where Gᵀ is the transpose of G. Finally, we generate N standard pseudo-Gaussian numbers (z_1, z_2, …, z_N) ≡ zᵀ and set x_i = G_i z (for i = 1, …, N, with G_i the i-th row of the matrix G), so that the obtained array (x_1, x_2, …, x_N) is a simulation of a centered Gaussian N-dimensional r.v. with covariances cov(x_i, x_j) = C_{i,j} for i, j = 1, …, N.
In particular, referring to algorithms for the generation of pseudo-random numbers (see [21]), the main steps of implementation were the following (for more, see [2]):
STEP 1
The elements of N × N covariance matrix C ( t i , t j ) are calculated at times t i , i = 1 , , N , of an equi-spaced temporal grid.
STEP 2
The Cholesky decomposition algorithm is applied to the covariance matrix C in order to obtain a lower triangular matrix G ( i , j ) , such that C = G G T .
STEP 3
The N-dimensional array z of standard pseudo-Gaussian numbers is generated.
STEP 4
The sequence of simulated values of the correlated fractionally integrated process is constructed as the array x = G z .
Finally, the array x provides the simulated path—i.e., a realization ( x 1 , x 2 , , x N ) , of ( X α ( t 1 ) , , X α ( t N ) ) , whose components have the assigned covariance.
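STEPS 1–4 can be sketched as follows. The paper's implementation is an R script; the sketch below is a Python transcription (function names are our own). Since the FIBM/FIOU covariance of Equation (3.5) of [2] is not reproduced here, the standard BM covariance cov(s, t) = min(s, t) stands in for whichever covariance function one passes:

```python
import numpy as np

def simulate_paths(cov, times, n_paths, rng):
    """Simulate centered Gaussian paths with covariance C[i, j] = cov(t_i, t_j)."""
    n = len(times)
    # STEP 1: covariance matrix on the equi-spaced time grid
    C = np.array([[cov(s, t) for t in times] for s in times])
    # STEP 2: Cholesky decomposition C = G G^T (lower triangular G);
    # a tiny diagonal jitter guards against round-off loss of positive definiteness
    G = np.linalg.cholesky(C + 1e-12 * np.eye(n))
    # STEP 3: standard pseudo-Gaussian numbers, one column z per path
    Z = rng.standard_normal((n, n_paths))
    # STEP 4: correlated paths x = G z (one row per path)
    return (G @ Z).T

rng = np.random.default_rng(42)
times = 0.01 * np.arange(1, 101)               # h = 0.01, N = 100
paths = simulate_paths(min, times, 4000, rng)  # BM covariance min(s, t) as stand-in
```

As a sanity check with the BM covariance, the sample variance of the simulated values at t = 1 should be close to Var B(1) = 1; passing the covariance of X^α instead reproduces the paper's scheme.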

2.1. The Approximate Entropy

In [19], Pincus defined the concept of approximate entropy (ApEn) to measure the complexity of a system, proving also that, for a Markov chain, ApEn equals the entropy rate of the chain. In fact, to measure chaos in a given set of data, we have at our disposal the Hausdorff and correlation dimensions, the K–S entropy, and the Lyapunov spectrum (see [19]); however, calculating any of these parameters requires an impractically large amount of data. Instead, the calculation of ApEn(m, r) (see below for the definition) only requires relatively few points. Actually, as shown in [19], if one uses only 1000 points and m is taken equal to 2, ApEn(m, r) can characterize a large variety of system behaviors, since it is able to distinguish deterministic systems from stochastic ones, and periodic systems from chaotic ones.
For instance, Abundo et al. [10] used ApEn to obtain numerical approximations of the entropy rate, with the final purpose of investigating the degree of cooperativity of proteins in a Markov model with binomial transition distributions. They showed that the corresponding ApEn is a decreasing function of the degree of cooperativity (for more about the approximation of entropy by numerical algorithms, see [12] and the references therein).
Now, we recall from [19] the definition of ApEn. Let {x_1, x_2, …, x_N} be a given time series of data, equally spaced in time, and fix an integer m > 0 and a positive number r. Next, consider the sequence of vectors {v_1, v_2, …, v_{N−m+1}} in ℝ^m defined by v_i = (x_i, x_{i+1}, …, x_{i+m−1}). Then, define, for each i, 1 ≤ i ≤ N − m + 1,
$$
C_i(m, r) = \frac{\#\{\,j : d(v_i, v_j) \le r\,\}}{N - m + 1},
$$
in which the distance d ( · , · ) between two vectors is defined by
$$
d(v_i, v_j) = \max_{k = 1, \dots, m} \left| x_{i+k-1} - x_{j+k-1} \right|.
$$
We observe that the quantities C_i(m, r) measure, up to a tolerance r, the frequency of patterns similar to a given pattern of window length m. Now, define
$$
\Phi_N(m, r) = \frac{1}{N - m + 1} \sum_{i=1}^{N-m+1} \log C_i(m, r)
$$
and
$$
\mathrm{ApEn}(m, r) = \lim_{N \to \infty} \left( \Phi_N(m, r) - \Phi_N(m+1, r) \right).
$$
Given N data points, formula (16) can be implemented by defining the statistic
$$
\widehat{\mathrm{ApEn}}(m, r, N) = \Phi_N(m, r) - \Phi_N(m+1, r).
$$
Heuristically, we can say that ApEn is a measure of the logarithmic likelihood that runs of patterns that are close for m observations remain close on the next incremental comparison. A greater likelihood of remaining close (i.e., regularity) produces smaller ApEn values, and vice versa. On the basis of simulated data, Pincus showed that, for N = 1000 and m = 2, values of r between 0.1 and 0.2 times the standard deviation of the data x_i produce reasonable statistical validity of $\widehat{\mathrm{ApEn}}(m, r, N)$. Moreover, he showed that, for a homogeneous, ergodic Markov chain, ApEn coincides with the Kolmogorov–Sinai entropy (see [14]), that is
$$
\mathrm{ApEn}(m, r) = -\sum_{i} \sum_{j} \pi_i\, p_{ij} \log p_{ij},
$$
where p_{ij} denotes the transition probability of the Markov chain from state i to state j, and π_j = lim_{n→∞} p_{ij}^{(n)} is the j-th component of the vector π = (π_1, π_2, …) of stationary probabilities, p_{ij}^{(n)} being the n-step transition probability of the Markov chain from state i to state j.
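The definitions above translate directly into a short routine; the following is a Python sketch (the name `apen` is our own; self-matches are included in the counts C_i, as in the original definition):

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy estimate Phi_N(m, r) - Phi_N(m + 1, r)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.1 * x.std()   # lower end of Pincus's suggested tolerance range

    def phi(m):
        # vectors v_i = (x_i, ..., x_{i+m-1}), i = 1, ..., N - m + 1
        v = np.array([x[i:i + m] for i in range(n - m + 1)])
        # d(v_i, v_j) = max_k |x_{i+k-1} - x_{j+k-1}|
        d = np.abs(v[:, None, :] - v[None, :, :]).max(axis=2)
        # C_i(m, r): fraction of j with d(v_i, v_j) <= r (self-matches included)
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

On simulated data, a periodic series yields a much smaller estimate than white noise of the same length, in line with the heuristic reading of ApEn given above.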

2.2. Calculation of the Entropy of Simulated Trajectories of the Process X α ( t )

In the cases of FIBM and FIOU, for a set of values of α ∈ (0, 1), we have generated L (discretized) trajectories (x_1, x_2, …, x_N) of length N of the process X^α(t), by means of the simulation algorithm previously described in STEPS 1–4. Then, for each simulated path, we follow the remaining steps:
STEP 5
Construction of the array {v_{1,α}, v_{2,α}, …, v_{N−m+1,α}} in ℝ^m (for a fixed m) by extracting, from a given sample path (x_1, x_2, …, x_N) ≡ (X^α(t_1), …, X^α(t_N)) obtained in STEPS 1–4, the vectors v_{i,α} = (X^α(t_i), X^α(t_{i+1}), …, X^α(t_{i+m−1})).
STEP 6
Construction of the distance matrix D^α, whose elements d_{i,j}^α are defined as the distance between the vectors v_{i,α} and v_{j,α}, i.e.,
$$
d_{i,j}^{\alpha} = d(v_{i,\alpha}, v_{j,\alpha}) = \max_{k = 1, \dots, m} \left| X^{\alpha}(t_{i+k-1}) - X^{\alpha}(t_{j+k-1}) \right|.
$$
STEP 7
After setting r = 0.1 S, with S the sample standard deviation of the simulated paths, evaluation of the array C^α, whose components are given by
$$
C_i^{\alpha} = \frac{\#\{\,j : d(v_{i,\alpha}, v_{j,\alpha}) \le r\,\}}{N - m + 1},
$$
for 1 i N m + 1 .
STEP 8
Evaluation of the quantities
$$
\Phi_{N,m}^{\alpha} = \frac{1}{N - m + 1} \sum_{i=1}^{N-m+1} \log C_i^{\alpha}, \qquad \Phi_{N,m+1}^{\alpha} = \frac{1}{N - m} \sum_{i=1}^{N-m} \log C_i^{\alpha}
$$
and
$$
\mathrm{ApEn}_{\alpha} = \Phi_{N,m}^{\alpha} - \Phi_{N,m+1}^{\alpha}.
$$
We have taken the number of paths L large enough and N from 100 to 300; for each of these L trajectories of length N, corresponding to a value of α, we have estimated ApEn_α(i), i = 1, …, L, by means of the approximation $\widehat{\mathrm{ApEn}}(2, r, N)$, where r = 0.1 × (the standard deviation of the trajectory points); then, the approximate entropy of X^α(t) has been obtained as ⟨ApEn_α⟩_L = (1/L) Σ_{i=1}^{L} ApEn_α(i). This allowed us to study the dependence of the entropy of the sample paths of X^α(t) = I^α(Y)(t) on the parameter α, showing that the entropy, namely a measure of the complexity of the dynamical system X^α(t), is a decreasing function of α ∈ (0, 1).
Since the fractional integral of order zero of Y ( t ) is nothing but the process Y ( t ) itself, and the fractional integral of order 1 is the ordinary Riemann integral of Y ( t ) , our result means that fractional integration introduces a greater degree of complexity than that corresponding to ordinary integration; moreover, the maximum degree of complexity is obtained for the original process Y ( t ) (that is, without integration).
In Figure 1 and Figure 2 we plot the numerical results for ApEn, as a function of α, for FIBM and FIOU, respectively. From the estimates of ApEn obtained for N = 100, it appears clear that ApEn is a decreasing function of α.
Moreover, our calculation highlights that, for small values of α , the trajectories of FIBM and FIOU become more jagged, giving rise to a greater value of the complexity degree (see Figure 3).
We also show the results for ApEn as N increases in Figure 4 and Figure 5. Our investigations show that the estimated values of ApEn for FIOU, for a given α and a given trajectory length, are considerably larger than those for FIBM (compare Figure 4 and Figure 5). This possibly depends on the fact that the trajectories of FIOU are more complicated than those of FIBM, giving rise to a greater complexity degree. Moreover, contrary to the case of FIBM, where for all α the estimated value of ApEn is a decreasing function of the length N of the simulated trajectories, in the case of FIOU, for α ≥ 0.5, the estimated value of ApEn appears to be an increasing function of N. Perhaps, if one used far longer trajectories (N ≥ 1000) to estimate ApEn, the values obtained in the two cases would be comparable and would exhibit the same behavior as a function of N. Notice, however, that simulating very long trajectories is impractical from the computational point of view (even for N = 300, the CPU time to evaluate ApEn in the case of FIOU was of the order of one hour).

3. Conclusions and Final Remarks

In this paper, we have further investigated the qualitative behavior of the fractional integral of order α ∈ (0, 1) of a Gauss–Markov process, which we already studied in [2].
Actually, Gauss–Markov processes and their fractional integrals over time are very relevant in various application fields, especially in Biology—e.g., in stochastic models for neuronal activity (see [1]). In fact, the fractional integral of order α ( 0 , 1 ) of a Gauss–Markov process Y ( t ) , say X α ( t ) , is suitable to describe stochastic phenomena with long range memory dynamics, involving correlated input processes, which are very relevant in Biology (see [2]).
While in [2] we showed that X^α(t) is itself a Gaussian process, and found its variance and covariance, obtaining that the variance σ_α²(t) of X^α(t) is, for large t, an increasing function of α, in this paper we have characterized the qualitative behavior of the dynamical system X^α(t), α ∈ (0, 1), by means of its complexity degree, or entropy. Actually, for several values of α we have estimated its approximate entropy ApEn, obtained from long enough trajectories of the process X^α(t). Specifically, we investigated the problem by implementing an algorithm based on STEPS 1–8, described in detail in the paper. We have found that ApEn is a decreasing function of α; this behavior appeared for the fractional integral of Brownian motion, as well as for the fractional integral of the Ornstein–Uhlenbeck process. Since the fractional integral of Y(t) of order zero is nothing but the process Y(t) itself, and the fractional integral of order 1 is the Riemann integral of Y(t), our result means that fractional integration introduces a greater degree of complexity than ordinary integration; moreover, the maximum degree of complexity is obtained for the original Gauss–Markov process Y(t) (that is, without integration).
Furthermore, we remark that the algorithm for computing ApEn uses numerical data, which can be employed independently of any knowledge of the process they come from. However, in our case we study the process X^α(t) as the parameter α varies, so we need to simulate its trajectories and use the obtained numerical values to estimate ApEn. As regards the possibility of determining, by using ApEn, whether certain data come from a particular class of systems, we have not investigated this. Our aim was only to characterize the behavior of the fractionally integrated Gauss–Markov process X^α(t), as the parameter α varies, by means of the corresponding value of ApEn.
As future work, we aim to estimate the entropy of other fractionally integrated Gauss–Markov processes X^α(t), such as the fractional integral of the stationary Ornstein–Uhlenbeck process. Moreover, in order to further characterize the qualitative behavior of X^α(t) in terms of α, our investigation will address the estimation of the fractal dimension of its trajectories, as a function of α.

Author Contributions

Conceptualization, M.A.; data curation, E.P.; investigation, M.A. and E.P.; methodology, M.A. and E.P.; software, E.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by MIUR-PRIN 2017, project Stochastic Models for Complex Systems, no. 2017JFFHSH, by Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM) and by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.

Acknowledgments

We thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pirozzi, E. Colored noise and a stochastic fractional model for correlated inputs and adaptation in neuronal firing. Biol. Cybern. 2018, 112, 25–39. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Abundo, M.; Pirozzi, E. On the Fractional Riemann-Liouville Integral of Gauss-Markov processes and applications. arXiv 2019, arXiv:1905.08167. [Google Scholar]
  3. Caputo, M. Linear models of dissipation whose Q is almost frequency independent–II. Geophys. J. R. Astron. Soc. 1967, 13, 529–539. [Google Scholar] [CrossRef]
  4. Burkitt, A.N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 2006, 95, 1–19. [Google Scholar] [CrossRef] [PubMed]
  5. Pirozzi, E. On the Integration of Fractional Neuronal Dynamics Driven by Correlated Processes; Lecture Notes in Computer Science, 12013 LNCS; Springer: Cham, Switzerland, 2019; pp. 211–219. [Google Scholar]
  6. Tamura, S.; Nishitani, Y.; Hosokawa, C.; Mizuno-Matsumoto, Y. Asynchronous Multiplex Communication Channels in 2-D Neural Network with Fluctuating Characteristics. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 2336–2345. [Google Scholar] [CrossRef] [PubMed]
  7. Debnath, L. Fractional integral and fractional differential equations in fluid mechanics. Fract. Calc. Appl. Anal. 2003, 6, 119–155. [Google Scholar]
  8. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and applications of fractional differential equations, volume 204. In North-Holland Mathematics Studies; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  9. Malinowska, A.B. Advanced Methods in the Fractional Calculus of Variations; Springer Briefs in Applied Sciences and Technology; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar] [CrossRef]
  10. Abundo, M.; Accardi, L.; Rosato, N.; Stella, L. Analyzing protein energy data by a stochastic model for cooperative interactions: Comparison and characterization of cooperativity. J. Math. Biol. 2002, 44, 341–359. [Google Scholar] [CrossRef] [PubMed]
  11. Bollt, E.M.; Skufca, J.D. Control entropy: A complexity measure for nonstationary signals. Math. Biosci. Eng. 2009, 6. [Google Scholar] [CrossRef]
  12. Ciuperca, G.; Girardin, V. On the estimation of the entropy rate of finite Markov chains. In Proceedings of the International Symposium on Applied Stochastic Models and Data Analysis, Brest, France, 17–20 January 2005. [Google Scholar]
  13. Delgado-Bonal, A.; Marshak, A. Approximate Entropy and Sample Entropy: A Comprehensive Tutorial. Entropy 2019, 21, 541. [Google Scholar] [CrossRef] [Green Version]
  14. Walters, P. An Introduction to Ergodic Theory; Springer: New York, NY, USA, 1982. [Google Scholar]
  15. Kelly, J.L. A new interpretation for the information rate. Bell Syst. Tech. J. 1956, 35, 917–926. [Google Scholar] [CrossRef]
  16. Breiman, L. Optimal gambling system for favorable games. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Cambridge, UK, 1960; Volume 1, pp. 65–78. [Google Scholar]
  17. Li, P.; Yan, J. The growth optimal portfolio in discrete-time financial markets. Adv. Math. 2002, 31, 537–542. [Google Scholar]
  18. Applebaum, D. Probability and Information; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  19. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Abundo, M. An inverse first-passage problem for one-dimensional diffusion with random starting point. Stat. Probab. Lett. 2012, 82, 7–14. [Google Scholar] [CrossRef]
  21. Haugh, M. Generating Random Variables and Stochastic Processes. In IEOR E4703: Monte Carlo Simulation; Columbia University: New York, NY, USA, 2016. [Google Scholar]
Figure 1. Approximate entropy (ApEn) of FIBM, as a function of α for N = 100 . (Values of α are on the horizontal axes).
Figure 2. Approximate entropy (ApEn) of FIOU with μ = σ = 1 , β = 0 , as a function of α for N = 100 . (Values of α are on the horizontal axes).
Figure 3. Some simulated sample paths of FIBM (left) and of FIOU (right) for some values of α (specified by labels inside the figure). In both examples we set N = 100 , but the time discretization step h is 0.01 on the left and 0.1 on the right. The seed of the random generator is the same for all simulated paths. (Values of time t are on the horizontal axes).
Figure 4. Approximate entropy (ApEn) of FIBM, for various values of α (specified by labels inside the figure) and N (on the horizontal axes).
Figure 5. Approximate entropy (ApEn) of FIOU with μ = σ = 1, β = 0, for various values of α (specified by labels inside the figure) and N (on the horizontal axes).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
