1. Introduction
This paper is about some (conjectured) properties of the projection of an element of fractional Gaussian noise onto the neighbouring elements. Unfortunately, not all of our conjectures are amenable to analytical proofs, although numerical experiments confirm their validity. This is rather surprising, since the properties of fractional Brownian motion and its increments have been studied thoroughly, attracting considerable research effort that has resulted in countless papers and several books, e.g., [1,2,3,4]. These books are mostly devoted to the stochastic analysis of fractional processes, the properties of their trajectories, distributional properties of certain functionals of the paths, and related issues. Such interest and the large number of theoretical studies of fractional Gaussian noises are due to the wide range of applications of these processes and to their characteristic properties: the presence of memory combined with self-similarity and stationarity. In particular, fractional Gaussian noises appear in the investigation of anomalous diffusion and of solutions of fractional diffusion equations, including numerical schemes [5,6,7], the information capacity of a non-linear neuron model [8], statistical inference [9,10], entropy calculation [11,12], the extraction of quantitative information from recurrence plots [13], and many other areas. There is, however, an area where much less is known: problems related to the covariance matrix of fractional Brownian motion and fractional Gaussian noise in high dimensions, and to its determinant. Computational features of the covariance matrices are widely used for simulations and in various applications; see, for example, [14,15,16,17].

The problem considered in the present paper arose in the following way. In [18], the authors construct a discrete process that converges weakly to a fractional Brownian motion (fBm) with Hurst parameter $H$. The construction of this process is based on the Cholesky decomposition of the covariance matrix of the fractional Gaussian noise (fGn). Several interesting properties of this decomposition are proved in [18], such as the positivity of all elements of the corresponding triangular matrix and the monotonicity along its main diagonal. Numerical examples also suggest the conjecture that one has monotonicity along all diagonals of this matrix; however, an analytic proof of this fact remains an open problem. Studying this problem, the authors of [18] establish a connection between the Cholesky decomposition of the covariance matrix of the process and the predictor's coefficients, that is, the coefficients of the projection of any value of a stationary Gaussian process onto finitely many subsequent elements. It turns out that the positivity of the coefficients of the predictor implies the monotonicity along the diagonals of the triangular matrix of the Cholesky decomposition of fGn, which is sufficient for the monotonicity along the columns of the triangular matrix in the Cholesky decomposition of fBm itself; this property, in turn, ensures the convergence of a wide class of discrete-time schemes to a fractional Brownian motion. We will see in Section 2.1 below that the coefficients of the predictor can be found as the solution to a system of linear equations whose coefficient matrix coincides with the covariance matrix of fGn. This enables us to reduce the monotonicity problem for the Cholesky decomposition to proving the positivity of the solution to a linear system of equations. However, as Section 2 shows, even in the particular case of a matrix of small dimension, an analytic proof of the positivity of all coefficients is a non-trivial problem. For the moment, we have only a partial solution. Therefore, we formulate the following conjecture:
Conjecture 1. If $H\in(1/2,1)$, then the coefficients of the projection of any element of fractional Gaussian noise onto any finite number of its subsequent elements are strictly positive.
We shall discuss this conjecture in more detail in Section 2. Due to stationarity, it is sufficient to establish Conjecture 1 for the projection of a single element of the noise onto its subsequent elements, i.e., for the corresponding conditional expectation with respect to the following increments of the fBm. Having computational evidence but lacking an analytical proof of Conjecture 1, we provide in this paper a wide range of associated properties of the coefficients, some with an analytic proof and some obtained using various computational tools. It is, in particular, interesting to study the asymptotic behaviour of the coefficients as $H\to1$. This is particularly interesting since fractional Brownian motion with $H=1$ is degenerate, i.e., $B^1_t=t\xi$, where $\xi\sim\mathcal N(0,1)$, and $\mathcal N(0,1)$ denotes the standard normal distribution. Consequently, all unit increments of $B^1$ coincide with $\xi\sim\mathcal N(0,1)$, and the conditional expectation of any element of the noise given the others can be written as an arbitrary convex combination of those elements. This shows that in the case $H=1$, the values of the coefficients are indefinite, and therefore they cannot determine the asymptotic behaviour of the prelimit coefficients as $H\to1$. It would be very “elegant” if all coefficients tended to the same value; however, in reality their asymptotic behaviour is different; see Section 2.3. Another interesting question concerns the relations between the coefficients. It is natural to assume that they decrease as $k$ increases, but the situation here is more involved, essentially depending on the value of $H$. In Section 2.4, we prove some recurrence relations between the coefficients. These relations lead to a computational algorithm which is more efficient than solving the system of equations described in Section 2.1. Finally, it turns out that the positivity of the first coefficient can be proven analytically for all values of $n$; this result is established in Section 2.5.
We close the paper with a few numerical examples supporting our theoretical results and conjectures. In particular, we compute the coefficients for various values of $n$ and $H$ and discuss their behaviour. Additionally, we compare different methods for calculating the coefficients in terms of computing time, and we demonstrate that in most cases the approach based on the recurrence formulae has an advantage.
The paper is organized as follows: Section 2 contains almost all properties of the predictor's coefficients that can be established analytically; it introduces the system of linear equations for these coefficients and some properties of the coefficients of this system. We consider in detail two particular cases, $n=2$ and $n=3$. In these cases, we prove the positivity of all coefficients, establish some relations between them, and study the asymptotic behaviour as $H\to1$. We also obtain recurrence relations for the coefficients, and prove that for all values of $n$, the first coefficient is positive. Section 3 contains some numerical illustrations of the properties and conjectures from Section 1 and Section 2. In Section 3.3, we briefly discuss some observations concerning the case $H<1/2$.
2. Analytical Properties of the Coefficients
Let $B^H=\{B^H_t,\,t\ge0\}$ be a fractional Brownian motion (fBm) with Hurst index $H\in(0,1)$, that is, a centered Gaussian process with covariance function of the form
$$\mathbb{E}\bigl(B^H_tB^H_s\bigr)=\tfrac12\bigl(t^{2H}+s^{2H}-|t-s|^{2H}\bigr),\qquad s,t\ge0.\tag{1}$$
We use $G_n=B^H_n-B^H_{n-1}$, $n\ge1$, for the $n$th increment of fBm. It is well known that the process $B^H$ has stationary increments, which implies that $(G_n)_{n\ge1}$ is a stationary Gaussian sequence (known as fractional Gaussian noise, fGn for short). It follows from (1) that its autocovariance function is given by
$$\rho_k=\mathbb{E}(G_1G_{1+k})=\tfrac12\bigl((k+1)^{2H}-2k^{2H}+(k-1)^{2H}\bigr),\qquad k\ge1.\tag{2}$$
Obviously, $\rho_0=1$.
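As a quick reference for the numerical illustrations later in the paper, here is a minimal Python sketch of the autocovariance (2); the function name and the vectorized form are ours and are not part of the original exposition.

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance rho_k of fGn with Hurst index H, cf. (2); rho_0 = 1."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

# Example: for H > 1/2 the autocovariances are positive and decreasing.
print(np.round(fgn_autocov(np.arange(5), 0.7), 4))
```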
Now, let us consider the projection of $G_1$ onto $G_2,\dots,G_{n+1}$, i.e., the conditional expectation $\mathbb{E}(G_1\mid G_2,\dots,G_{n+1})$. Since the joint distribution of $(G_1,G_2,\dots,G_{n+1})$ is centered and Gaussian, we obtain the following relation from the theorem on normal correlation (see, for example, Theorem 3.1 in [19]):
$$\mathbb{E}(G_1\mid G_2,\dots,G_{n+1})=\sum_{k=1}^{n}c_kG_{k+1},\tag{3}$$
where $c_1,\dots,c_n$ are deterministic coefficients. Our Conjecture 1 means that all the coefficients $c_k$ for $1\le k\le n$, $n\ge1$, are strictly positive. (We have formulated it in a more general form, i.e., for any element $G_j$, because, by stationarity, the projection of $G_j$ onto its $n$ subsequent elements has, for any $j$, the same distribution as the projection (3).)
Let us consider two approaches to the calculation of the coefficients $c_k$. The first method is straightforward: it involves solving a system of linear equations. The second one is based on recurrence relations for the $c_k$.
2.1. System of Linear Equations for Coefficients
Multiplying both sides of (3) by $G_{j+1}$, $j=1,\dots,n$, and taking expectations yields
$$\mathbb{E}(G_1G_{j+1})=\sum_{k=1}^{n}c_k\,\mathbb{E}(G_{k+1}G_{j+1}).$$
This leads to the following system of linear equations for the coefficients $c_k$, $k=1,\dots,n$:
$$\sum_{k=1}^{n}\rho_{|k-j|}\,c_k=\rho_j,\qquad j=1,\dots,n.\tag{4}$$
We can solve this using Cramer's rule,
$$c_k=\frac{\det D_k(n)}{\det A},\qquad k=1,\dots,n,$$
where
$$A=\bigl(\rho_{|k-j|}\bigr)_{j,k=1}^{n}\tag{5}$$
and $D_k(n)$ is the matrix $A$ with its $k$th column vector replaced by the right-hand side $(\rho_1,\dots,\rho_n)^{\top}$.
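In practice, it is preferable to pass the Toeplitz matrix $A$ to a standard linear solver rather than to evaluate determinants. The following sketch is our own Python illustration (not code from the paper) of the direct method just described.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def fgn_autocov(k, H):
    """Autocovariance rho_k of fGn, cf. (2)."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def predictor_coefficients(n, H):
    """Solve the system (4): A c = (rho_1, ..., rho_n) with A = (rho_{|k-j|})."""
    rho = fgn_autocov(np.arange(n + 1), H)   # rho_0, ..., rho_n
    A = toeplitz(rho[:n])                    # covariance matrix (5), symmetric positive definite
    return solve(A, rho[1:], assume_a='pos')

print(predictor_coefficients(5, H=0.8))      # all entries positive, in line with Conjecture 1
```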
Remark 1. It is known that the finite-dimensional distributions of fBm have a nonsingular covariance matrix; in particular, for any $n$, the values $B^H_1,\dots,B^H_n$ are linearly independent; see Theorem 1.1 in [20] and its proof. Obviously, a similar statement holds for fractional Gaussian noise, since the vector $(G_1,\dots,G_n)$ is a nonsingular linear transform of $(B^H_1,\dots,B^H_n)$. In other words, $\det A>0$; moreover, if $\sum_{k=1}^{n}a_kG_k=0$ a.s., then $a_k=0$ for all $k$.

2.2. Relations between the Values $\rho_k$
In order to establish analytic properties of the coefficients $c_k$, we need several auxiliary results on the properties of the sequence $(\rho_k)_{k\ge0}$. We start with a useful relation between its first elements.
Lemma 1. The following equality holds:

Proof. Using the self-similarity of fBm and the stationarity of its increments, we obtain

Note that by (2), $\rho_1=2^{2H-1}-1$, whence $2^{2H}=2(\rho_1+1)$. Thus, we arrive at

which is equivalent to (7). □
Remark 2. The inequality was proved in [18] (p. 28) by analytic methods. In this paper, we improve this result in two directions: we obtain an explicit expression and we prove a sharper bound; see Lemma 3 below.

Many important properties of the covariance function of fractional Gaussian noise (such as monotonicity, convexity and log-convexity) follow from the more general property of complete monotonicity, which is stated in the next lemma. To formulate it, let us introduce the function
$$\rho(x)=\tfrac12\bigl((x+1)^{2H}-2x^{2H}+(x-1)^{2H}\bigr),\qquad x\ge1,$$
so that $\rho(k)=\rho_k$ for integers $k\ge1$.
Lemma 2.
1. The function $\rho$ is convex if $H\in(1/2,1)$ and concave if $H\in(0,1/2)$.
2. If $H\in(1/2,1)$, then the function $\rho$ is completely monotone (CM) on $(1,\infty)$, that is, $\rho\in C^{\infty}\bigl((1,\infty)\bigr)$ and $(-1)^n\rho^{(n)}(x)\ge0$ for all $n\in\mathbb{N}_0$ and $x>1$.
3. If $H\in(0,1/2)$, then the function $-\rho$ is completely monotone on $(1,\infty)$.
Proof. 1. Using the elementary relation
$$f(x+1)-2f(x)+f(x-1)=\int_0^1\!\!\int_0^1 f''(x-1+u+v)\,du\,dv,\qquad f(x)=x^{2H},$$
it is not hard to see that
$$\rho(x)=H(2H-1)\int_0^1\!\!\int_0^1(x-1+u+v)^{2H-2}\,du\,dv.\tag{9}$$
Since $x\mapsto(x-1+u+v)^{2H-2}$ is convex, and since convex functions form a convex cone which is closed under pointwise convergence, the double integral appearing in the representation of $\rho$ is again convex. Thus, $\rho$ is convex or concave according to $H>1/2$ or $H<1/2$, respectively.
2. Let $H\in(1/2,1)$ and $x>1$. Then Formula (9) remains valid, and the kernel $(x-1+u+v)^{2H-2}$ is CM in $x$, so $\rho$ is an integral mixture of CM-functions. Since CM is a convex cone which is closed under pointwise convergence, cf. Corollary 1.6 in [21], we see that $\rho$ is CM on $(1,\infty)$.
3. The above argument holds true in the case $H\in(0,1/2)$; the only difference is that in this case, the factor $H(2H-1)$ is negative. □
Remark 3.
1. Since $\rho$ is a CM function on $(1,\infty)$, it admits the representation $\rho(x)=\int_{[0,\infty)}e^{-xt}\,\mu(dt)$ for some positive measure $\mu$ on $[0,\infty)$ and all $x>1$; see, for example, Theorem 1.4 in [21]. Taking into account that $\rho(x)\to0$ as $x\to\infty$, it is not hard to see that $\mu(\{0\})=0$, i.e.,
$$\rho(x)=\int_{(0,\infty)}e^{-xt}\,\mu(dt),\qquad x>1.\tag{10}$$
2. The function $\rho$ can be represented in the form $\rho(x)=\tfrac12\,\Delta^2f(x-1)$, where we write $\Delta g(x)=g(x+1)-g(x)$ for the step-1 difference operator, and $f(x)=x^{2H}$. Then the second statement of Lemma 2 follows from the more general result: if $f$ is CM on $(0,\infty)$, then $\Delta^2f$ is CM. Indeed, since CM is a closed convex cone, it is enough to verify the claim for the “basic” CM function $e^{-\lambda x}$, where $\lambda>0$ is a parameter. Now we have $\Delta^2e^{-\lambda x}=e^{-\lambda x}\bigl(1-e^{-\lambda}\bigr)^2$, and this is clearly a completely monotone function.
3. The argument which we used in the proof of Lemma 2 proves a bit more: the function $\rho$ is, for $H\in(1/2,1)$, even a Stieltjes function, i.e., a double Laplace transform. To see this, we note that the kernel $(x-1+u+v)^{2H-2}$ in (9) is a Stieltjes function. Further details on Stieltjes functions can be found in [21].

As for the following properties, fractional Brownian motion with Hurst index $H=1$ is degenerate, i.e., $B^1_t=t\xi$, where $\xi\sim\mathcal N(0,1)$; consequently all $\rho_k=1$ and the next set of inequalities become equalities. Therefore, we consider only $H\in(1/2,1)$.
Corollary 1. Let $H\in(1/2,1)$. The sequence $(\rho_k)_{k\ge0}$ has the following properties:
1. Monotonicity and positivity: for any $k\ge0$,
$$\rho_k>\rho_{k+1}>0.\tag{11}$$
2. Convexity: for any $k\ge0$,
$$\rho_k-2\rho_{k+1}+\rho_{k+2}\ge0.\tag{12}$$
3. Log-convexity: for any $k\ge0$,
$$\rho_{k+1}^2\le\rho_k\,\rho_{k+2}.\tag{13}$$
Proof. By Lemma 2, the function $\rho$ is convex and completely monotone on the respective intervals; by continuity, we can include the endpoints of each interval.

We begin with the observation that a completely monotone function is automatically log-convex. We show this for $\rho$ using the representation (10): for any $x,y>1$,
$$\rho\Bigl(\frac{x+y}{2}\Bigr)=\int_{(0,\infty)}e^{-\frac{x+y}{2}t}\,\mu(dt)=\int_{(0,\infty)}e^{-\frac{x}{2}t}\,e^{-\frac{y}{2}t}\,\mu(dt).$$
Thus, the Cauchy–Schwarz inequality yields
$$\rho\Bigl(\frac{x+y}{2}\Bigr)^2\le\rho(x)\,\rho(y),$$
which guarantees that $\log\rho$ is convex.

Therefore, all properties claimed in the statement hold for $k\ge1$, convexity even for $k=0$, and we only have to deal with the case $k=0$.

Monotonicity for $k=0$: We have to show $\rho_0>\rho_1$. This follows by direct verification since by (2), $\rho_1=2^{2H-1}-1<1=\rho_0$ (recall that $H<1$).

Log-convexity for $k=0$: In this case, the inequality (13) has the form $\rho_1^2\le\rho_2$. It immediately follows from the representation (7) combined with the monotonicity property (11). □
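The discrete properties in Corollary 1 are easy to check numerically. The following short Python sketch (our own illustration, using the indexing conventions adopted above) verifies monotonicity, convexity and log-convexity of the sequence $\rho_k$ for a few values of $H$.

```python
import numpy as np

def fgn_autocov(k, H):
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

for H in (0.55, 0.7, 0.9):
    rho = fgn_autocov(np.arange(60), H)
    monotone  = np.all(rho[:-1] > rho[1:]) and np.all(rho > 0)    # monotonicity and positivity
    convex    = np.all(rho[:-2] - 2 * rho[1:-1] + rho[2:] >= 0)   # convexity
    logconvex = np.all(rho[1:-1] ** 2 <= rho[:-2] * rho[2:])      # log-convexity
    print(H, monotone, convex, logconvex)                         # True True True
```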
The previous lemma already implies a first bound. The following result gives a sharper one.
Lemma 3. If $H\in(1/2,1)$, then

Proof. Applying (7), we may write

because of Statement 2 of Corollary 1. □
2.3. Particular Cases
We will now consider in detail two particular cases, $n=2$ and $n=3$. In these cases, we prove the positivity of all coefficients $c_k$, establish some relations between them, and study the asymptotic behavior as $H\to1$. In the case $n=2$, everything is established analytically, while in the case $n=3$, the sign of the second coefficient and the relation between the second and the third coefficients, $c_2$ and $c_3$, are verified numerically.
2.3.1. Case $n=2$

In the case $n=2$, the system (4) becomes
$$c_1+\rho_1c_2=\rho_1,\qquad\rho_1c_1+c_2=\rho_2,\tag{17}$$
whence
$$c_1=\frac{\rho_1(1-\rho_2)}{1-\rho_1^2},\qquad c_2=\frac{\rho_2-\rho_1^2}{1-\rho_1^2}.$$

Proposition 1. For any $H\in(1/2,1)$, $c_1>c_2>0$.

Proof. Recall that, by Corollary 1 (Statement 1), $1>\rho_1>\rho_2>0$. Hence, the first inequality, $c_1>c_2$, is equivalent to $\rho_1(1-\rho_2)>\rho_2-\rho_1^2$, i.e., to $\rho_1>\rho_2$, which is true due to Corollary 1.

To prove the second inequality, $c_2>0$, we need to show that $\rho_2>\rho_1^2$, which was established in Corollary 1. □
Remark 4. It is worth pointing out that the positivity (and positive definiteness) of the coefficient matrix together with the positivity of the right-hand side of the system does not imply the positivity of the solution. Indeed, consider the following system with the same coefficient matrix as in (17), but another positive right-hand side, say $(b_1,b_2)$:
$$x+\rho_1y=b_1,\qquad\rho_1x+y=b_2.$$
The solution has the form
$$x=\frac{b_1-\rho_1b_2}{1-\rho_1^2},\qquad y=\frac{b_2-\rho_1b_1}{1-\rho_1^2}.$$
If, for example, $b_2<\rho_1b_1$, then $x>0$ and $y<0$. For the system (17), this condition is written as $\rho_2<\rho_1^2$, contradicting Corollary 1.

Proposition 2. As $H\to1$, the coefficients for $n=2$ converge to the limits
$$c_1\to\frac{9\ln3-8\ln2}{8\ln2},\qquad c_2\to\frac{16\ln2-9\ln3}{8\ln2}.$$

Proof. If we take the limit $H\to1$ in the relations for $c_1$ and $c_2$ above, we obtain the stated values. □
Figure 1 shows the dependence of the coefficients $c_1$ and $c_2$ on $H$. It illustrates the theoretical results stated in Propositions 1 and 2, in particular, the positivity and monotonicity of the coefficients and the convergence to the theoretical limit values as $H\to1$.
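The behaviour shown in Figure 1 can be reproduced with the following sketch (our own Python illustration; the closed-form expressions are the ones derived above for $n=2$, and the printed limits are obtained by expanding $\rho_1$ and $\rho_2$ near $H=1$).

```python
import numpy as np

def rho(k, H):
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

def coefficients_n2(H):
    """Closed-form solution of the 2x2 system: c1 + rho1*c2 = rho1, rho1*c1 + c2 = rho2."""
    r1, r2 = rho(1, H), rho(2, H)
    den = 1.0 - r1 ** 2
    return r1 * (1.0 - r2) / den, (r2 - r1 ** 2) / den

for H in (0.55, 0.7, 0.9, 0.99, 0.999):
    c1, c2 = coefficients_n2(H)
    print(f"H={H:<6} c1={c1:.4f}  c2={c2:.4f}  c1 > c2 > 0: {c1 > c2 > 0}")

# Limiting values as H -> 1 (first-order expansion of rho_1 and rho_2 in 1 - H):
print(9 * np.log(3) / (8 * np.log(2)) - 1, 2 - 9 * np.log(3) / (8 * np.log(2)))
```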
2.3.2. Case $n=3$

For $n=3$, the system (4) has the following form:
$$c_1+\rho_1c_2+\rho_2c_3=\rho_1,\qquad\rho_1c_1+c_2+\rho_1c_3=\rho_2,\qquad\rho_2c_1+\rho_1c_2+c_3=\rho_3.$$
Proposition 3. For any $H\in(1/2,1)$, $c_1>c_3>0$.

Proof. The positivity of the denominator follows from the representation

together with Corollary 1. Therefore, it suffices to prove the claimed relations for the numerators of the coefficients.

1. Let us prove that $c_1>c_3$. The difference between the numerators of $c_1$ and $c_3$ is equal to

since

and

by Statements 1 and 2 of Corollary 1.

2. Finally, the positivity of $c_3$ follows from the following representation of its numerator:

because

and

by (16) and (13), respectively. □
Figure 2 confirms the above proposition. We see that $c_1$ is the largest coefficient. However, $c_2>c_3$ holds only for smaller values of $H$; for larger $H$, the order changes.
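The ordering effects visible in Figure 2 can be reproduced directly; the sketch below (our own Python illustration, with our variable names) solves the $3\times3$ system (4) numerically for a few values of $H$.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def fgn_autocov(k, H):
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def coefficients(n, H):
    rho = fgn_autocov(np.arange(n + 1), H)
    return solve(toeplitz(rho[:n]), rho[1:], assume_a='pos')

# c1 stays the largest for all H, while the order of c2 and c3 changes with H.
for H in (0.6, 0.75, 0.9, 0.99):
    c1, c2, c3 = coefficients(3, H)
    print(f"H={H}: c1={c1:.4f}  c2={c2:.4f}  c3={c3:.4f}")
```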
Remark 5. Consider numerically the relation between $c_2$ and $c_3$ and the sign of $c_2$. One may represent the numerator of $c_2$ as follows:

Thus, we need to establish that

We established this fact numerically, since we could not come up with an analytical proof. Figure 3 shows the plot of the left-hand side of (24), which confirms the positivity of $c_2$. However, we can look at (24) from another point of view. Rewrite (24) in the following form:

The left- and the right-hand sides of this inequality are the values, at two different points, of the following function:

The graph of the surface is shown in Figure 4. It was natural to assume that the function decreases in $x$ for any $H$, being bigger at the first point than at the second. However, the function is not monotone for all $H$. Figure 5 contains two-dimensional plots for four different values of $H$. We observe that the required inequality holds for each value of $H$; however, the function changes its behavior from increasing to decreasing.

Remark 6. The unexpected behavior of the function (first increasing, then decreasing) is a consequence of a non-standard term. For larger arguments, this function decreases in $x$ for any $H$. Indeed, in that case it has the form

It is sufficient to prove that a suitably transformed function increases in $y$ for any admissible value of the parameters. However,

and

where

(here the standard notation for the Pochhammer symbol is used). The required monotonicity can be proved by differentiation. Then

and hence, the partial derivative equals

By rearranging the double sum in the numerator, we obtain an expression which is clearly positive. Thus, the function is increasing. Let us try to establish a bit more. We can represent the function in the following form:

where the coefficients can be found successively from the following equations:

Let us find the first few coefficients:

It is easy to see that the first few coefficients are positive for $H\in(1/2,1)$. We believe that this holds for all $k$. However, the proof of this fact remains an open problem.
Remark 7. Obviously, the sum of the limits of the coefficients is 1, as expected.
Proof (Sketch). The proof is straightforward. Substituting the expressions for $\rho_1$, $\rho_2$, and $\rho_3$ into (19)–(21) and simplifying the resulting expressions, we obtain the prelimit form of the coefficients (for one of them, we first cancel out a common factor; see (22) and (23)). Then, applying l'Hôpital's rule (twice), we arrive at the claimed limits by simple algebra. □
Remark 8. For $n=4$, we present the graphical results only; see Figure 6. The situation here is more complicated compared to the case $n=3$. The first coefficient is still the largest; however, the order of the three other coefficients changes several times depending on $H$. In particular, for $H$ close to 1/2, these coefficients are decreasing, but for $H$ close to 1, they are increasing.

2.4. Recurrence Relations for the Coefficients
In general, there are several ways to obtain (4). For example, we can consider the coefficients $c_1,\dots,c_n$ as the result of minimizing the value of the quadratic form
$$Q(c_1,\dots,c_n)=\mathbb{E}\Bigl(G_1-\sum_{k=1}^{n}c_kG_{k+1}\Bigr)^{2}.$$
Evidently, differentiation leads again to the system (4). Alternatively, we can look for the coefficients with the help of the inverse matrix $A^{-1}$, where $A$ is from (5). However, calculating the entries of the inverse matrix is as difficult as calculating the determinants. It is possible to avoid determinants by using the properties of fGn. More precisely, we propose a recurrence method which calculates the coefficients successively, starting with $n=1$.
Proposition 5. The following relations hold true:

Proof. In order to prove (26) and (27), we use the theorem on normal correlation as well as the independence of the corresponding random variables. We get

where the remaining coefficients are some constants. Now we take the conditional expectation on both sides of (28) to obtain

Comparing this equality with (3), and taking into account that the increments involved are linearly independent, we conclude that

Now we insert this equality into (28) and see

After multiplying both sides of the last equality by a fixed increment and taking expectations, we arrive at

It follows from the stationarity of the increments that the indices $n+1$ and 1 in the last equality play symmetric roles, i.e., they are equivalent to

From this, we conclude that

Thus, the relation (26) is proved.

Using again the symmetry of the stationary increments, it is not hard to see that

Therefore, we obtain from (29) that

and (27) follows. □
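Since the explicit recurrence Formulas (26)–(29) are not reproduced here, the following sketch (our own Python illustration, not the paper's algorithm) uses the classical Levinson–Durbin recursion for Toeplitz systems instead; it computes the solution of (4) in $O(n^2)$ operations and thus plays the same practical role as the recurrence method, avoiding determinants entirely.

```python
import numpy as np
from scipy.linalg import toeplitz

def fgn_autocov(k, H):
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def levinson_durbin(rho, n):
    """Solve the Toeplitz system (4) recursively, increasing the order step by step."""
    c = np.array([rho[1] / rho[0]])
    err = rho[0] - rho[1] ** 2 / rho[0]           # prediction error variance
    for m in range(1, n):
        k = (rho[m + 1] - c @ rho[m:0:-1]) / err  # reflection coefficient
        c = np.concatenate((c - k * c[::-1], [k]))
        err *= 1.0 - k ** 2
    return c

H, n = 0.8, 6
rho = fgn_autocov(np.arange(n + 1), H)
c_recursive = levinson_durbin(rho, n)
c_direct = np.linalg.solve(toeplitz(rho[:n]), rho[1:])
print(np.allclose(c_recursive, c_direct))         # True: both methods agree
print(c_recursive)                                # all coefficients are positive
```

For large $n$, an $O(n^2)$ recursion of this type is generally cheaper than repeatedly solving the full linear system with a generic $O(n^3)$ solver.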
2.5. Positivity of $c_1$

We conjecture that all coefficients $c_k$, $1\le k\le n$, are positive. However, analytically, we can prove only the positivity of the leading coefficient, $c_1$.

Proposition 6. For all $n\ge1$, $c_1>0$.

Proof. From the stationarity of the increments, it follows that

It remains to prove the positivity of the numerator

However, we know from (4) that

Therefore,

since the sequence $(\rho_k)_{k\ge0}$ is decreasing; see Corollary 1.
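To close this section, here is a quick numerical check (our own Python sketch, solving the system (4) directly) of Proposition 6 and, more generally, of Conjecture 1 over a grid of values of $n$ and $H$.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def fgn_autocov(k, H):
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def coefficients(n, H):
    rho = fgn_autocov(np.arange(n + 1), H)
    return solve(toeplitz(rho[:n]), rho[1:], assume_a='pos')

smallest = min(coefficients(n, H).min()
               for n in range(1, 101)
               for H in np.linspace(0.51, 0.99, 25))
print("smallest coefficient over the grid:", smallest)   # strictly positive on this grid
```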