Article

Maximum Likelihood Estimation for Mixed Fractional Vasicek Processes

by Chun-Hao Cai, Yin-Zhong Huang, Lin Sun and Wei-Lin Xiao
1 School of Mathematics (Zhuhai), Sun Yat-sen University, Guangzhou 510275, China
2 School of Mathematics, Shanghai University of Finance and Economics, Shanghai 200433, China
3 School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510006, China
4 School of Management, Zhejiang University, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(1), 44; https://doi.org/10.3390/fractalfract6010044
Submission received: 18 December 2021 / Revised: 10 January 2022 / Accepted: 11 January 2022 / Published: 14 January 2022
(This article belongs to the Special Issue Stochastic Calculus for Fractional Brownian Motion)

Abstract

In this paper, we consider the problem of estimating the drift parameters in the mixed fractional Vasicek model, which extends the traditional Vasicek model. Using the fundamental martingale and the Laplace transform, both the strong consistency and the asymptotic normality of the maximum likelihood estimators are established for all $H \in (0,1)$, $H \neq 1/2$. In addition, we show that the MLE can be simulated when the Hurst parameter $H > 1/2$.

1. Introduction

The standard Vasicek models, including the diffusion models based on Brownian motion and the jump-diffusion models driven by Lévy processes, serve well when the data exhibit the Markov property and lack of memory. However, over the past few decades, numerous empirical studies have found that long-range dependence may be observed in data from hydrology, geophysics, climatology, telecommunication, economics, and finance. Consequently, several time series models and stochastic processes have been proposed to capture long-range dependence, both in discrete and in continuous time. In the continuous-time case, the best-known and most widely used stochastic process exhibiting long-range (or short-range) dependence is of course the fractional Brownian motion (fBm), whose degree of dependence is governed by the Hurst parameter. This naturally explains the appearance of fBm in the modeling of some properties of "real-world" data. Beyond the diffusion model driven by fBm, the mean-reverting property is very attractive for volatility modeling in finance. Hence, the fractional Vasicek model (fVm) has become the usual candidate to capture some features of the volatility of financial assets (see, for example, [1,2,3]). More precisely, the fVm can be described by the following Langevin equation:
$dX_t = (\alpha - \beta X_t)\,dt + \gamma\,dB_t^H, \quad t \in [0,T],$
where $\beta, \gamma \in \mathbb{R}_+$, $\alpha \in \mathbb{R}$, the initial condition is set at $X_0 = 0$, and $B_t^H$, an fBm with Hurst parameter $H \in (0,1)$, is a zero-mean Gaussian process with the covariance:
$E\big(B_t^H B_s^H\big) = R_H(s,t) = \tfrac{1}{2}\big(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\big).$
The process $B_t^H$ is self-similar in the sense that, for every $a \in \mathbb{R}_+$, $B_{at}^H \stackrel{d}{=} a^H B_t^H$. It reduces to the standard Brownian motion $W_t$ when $H = 1/2$ and can be represented as a stochastic integral with respect to the standard Brownian motion. When $1/2 < H < 1$, it has long-range dependence in the sense that $\sum_{n=1}^{\infty} E\big[B_1^H (B_{n+1}^H - B_n^H)\big] = \infty$. In this case, positive (negative) increments are likely to be followed by positive (negative) increments. The parameter $H$, which is also called the self-similarity parameter, measures the intensity of the long-range dependence. Recently, borrowing the idea of [4], the papers [5,6] used the mixed fractional Vasicek model (mfVm) to describe some features of the volatility of financial assets; it can be expressed as:
$dX_t = (\alpha - \beta X_t)\,dt + \gamma\,d\xi_t, \quad t \in [0,T], \quad X_0 = 0,$
where $\beta, \gamma \in \mathbb{R}_+$ and $\alpha \in \mathbb{R}$. Here, the so-called mixed fractional Brownian motion $\xi = (\xi_t,\ t \in [0,T])$ is defined by $\xi_t = W_t + B_t^H$, $H \in (0,1)$, where $W$ and $B^H$ are independent standard and fractional Brownian motions.
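To make the driving noise concrete, the following minimal sketch (ours, not from the paper) simulates a discretized path of the mfBm $\xi_t = W_t + B_t^H$ by Cholesky factorization of the fBm covariance $R_H$ plus an independent Brownian path; the horizon, grid size, and Hurst index are illustrative choices.

```python
import numpy as np

def mfbm_path(T=1.0, n=256, H=0.7, seed=0):
    """Simulate xi_t = W_t + B_t^H on a uniform grid via Cholesky of the fBm covariance."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)[1:]                       # grid without t = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))   # R_H(s, u)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))          # tiny jitter for numerical stability
    bH = L @ rng.standard_normal(n)                          # fractional Brownian motion B^H
    w = np.cumsum(rng.standard_normal(n)) * np.sqrt(T / n)   # independent Brownian motion W
    return np.concatenate(([0.0], t)), np.concatenate(([0.0], w + bH))   # xi_0 = 0

times, xi = mfbm_path()
```

A path of $X_t$ can then be obtained, for instance, by applying an Euler scheme to the increments of $\xi$.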
When the long-term mean $\alpha$ in (3) is known (without loss of generality, it is assumed to be zero), (3) becomes the mixed fractional Ornstein–Uhlenbeck process (mfOUp). Using the canonical representation and spectral structure of the mfBm, the authors of [7] originally proposed the maximum likelihood estimator (MLE) of $\beta$ in (3) and developed the asymptotic theory for this estimator using the Laplace transform and the limiting behavior of the eigenvalues of the covariance operator of the fBm (see [8]). Using an asymptotic approximation for the eigenvalues of its covariance operator, [9] studied the mfBm from the viewpoint of spectral theory. Surveys and a fairly complete literature on parametric and other inference procedures for stochastic models driven by the mfBm can be found in the recent works [10,11].
However, in many situations, the long-term mean $\alpha$ in (3) is unknown. Thus, it is important to estimate both drift parameters, $\alpha$ and $\beta$, in the mfVm. To the best of our knowledge, the asymptotic theory of the MLE of $\alpha$ and $\beta$ has not been developed yet, even though some methods for fractional diffusions can be applied in this situation (e.g., see [12]). This paper fills this gap. Using the Girsanov formula for the mfBm, we introduce the MLE of both $\alpha$ and $\beta$. When a continuous record of observations of $X_t$ is available, both the strong consistency and the asymptotic laws of the MLE are established in the stationary case for the Hurst parameter $H \in (0,1)$.
Regarding simulation, as far as we know, few works have dealt with the exact numerical computation of the MLE, even for the fractional O–U process. The difficulty comes from the process $Q_t$ defined in (9): it is not easy to simulate and is computationally expensive. Here, we illustrate that the MLE of the drift parameter in the mixed fractional O–U process (and likewise in the Vasicek process) can be computed when $H > 1/2$, even though it is not practical.
The rest of the paper is organized as follows. Section 2 introduces some preliminaries on the mfBm. Section 3 proposes the MLE of the drift parameters in the mfVm and studies its asymptotic properties for $H \in (0,1)$ in the stationary case. Section 4 provides the proofs of the main results of this paper. We conclude with the simulation of the drift parameter in Section 5. Some technical lemmas are gathered in Section 6. We use the following notation throughout the paper: $\xrightarrow{a.s.}$, $\xrightarrow{P}$, $\xrightarrow{d}$, and $\sim$ denote convergence almost surely, convergence in probability, convergence in distribution, and asymptotic equivalence, respectively, as $T \to \infty$.

2. Preliminaries

This section is dedicated to the notions used in our paper, related mainly to the integro-differential equation and the Radon–Nikodym derivative of the mfBm. In fact, mixtures of stochastic processes can have properties quite different from those of the individual components. The mfBm has drawn considerable attention since some of its properties were discovered in [4,11,13]. Moreover, the mfBm has proven useful in mathematical finance (see, for example, [14]). We start by recalling the definition of the main process of our work, the mfBm. For more details about this process and its properties, the interested reader can refer to [4,11,13].
Definition 1.
An mfBm with Hurst parameter $H \in (0,1)$ is a process $\xi = (\xi_t,\ t \in [0,T])$ defined on a probability space $(\Omega, \mathcal{F}, P)$ by:
$\xi_t = W_t + B_t^H,$
where $W = (W_t,\ t \in [0,T])$ is a standard Brownian motion and $B^H = (B_t^H,\ t \in [0,T])$ is an independent fBm with Hurst exponent $H \in (0,1)$ and covariance function:
$K(s,t) = E\big(B_t^H B_s^H\big) = \tfrac{1}{2}\big(t^{2H} + s^{2H} - |t-s|^{2H}\big).$
Let us observe that the increments of the mfBm are stationary and $\xi_t$ is a centered Gaussian process with covariance function:
$E\big(\xi_t \xi_s\big) = \min\{t,s\} + \tfrac{1}{2}\big(t^{2H} + s^{2H} - |t-s|^{2H}\big), \quad s, t \ge 0.$
In particular, for $H > 1/2$, the increments of the mfBm exhibit long-range dependence, which makes it important for modeling volatility in finance. Let $\mathcal{F}^\xi = (\mathcal{F}_t^\xi,\ t \in [0,T])$ denote the natural filtration of $\xi$. We use the canonical representation suggested in [13], based on the martingale:
$M_t = E\big(W_t \mid \mathcal{F}_t^\xi\big), \quad t \in [0,T].$
To this end, let us consider the integro-differential equation:
$g(s,t) + H\,\frac{d}{ds}\int_0^t g(r,t)\,|r-s|^{2H-1}\,\mathrm{sign}(s-r)\,dr = 1, \quad 0 < s \le t \le T.$
By Theorem 5.1 in [13], this equation has a unique solution for any $H \in (0,1)$. It is continuous on $[0,T]$, and the $\mathcal{F}^\xi$-martingale defined in (4) and its quadratic variation $\langle M\rangle_t$, $t \in [0,T]$, satisfy:
$M_t = \int_0^t g(s,t)\,d\xi_s, \qquad \langle M\rangle_t = \int_0^t g(s,t)\,ds, \quad t \in [0,T],$
where the stochastic integral is defined for deterministic integrands in $L^2(0,T)$ in the usual way. By Corollary 2.9 in [13], the process $\xi$ admits the canonical representation:
$\xi_t = \int_0^t G(s,t)\,dM_s, \quad t \in [0,T],$
with:
$G(s,t) = 1 - \frac{d}{d\langle M\rangle_s}\int_0^t g(r,s)\,dr.$
Remark 1.
For $H > 1/2$, the function $g(s,t)$ solves a Wiener–Hopf type equation:
$g(s,t) + H(2H-1)\int_0^t g(r,t)\,|r-s|^{2H-2}\,dr = 1, \quad 0 \le s \le t \le T,$
and the quadratic variation $\langle M\rangle$ is:
$\langle M\rangle_t = \int_0^t g^2(s,s)\,ds.$
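For $H > 1/2$, the equation above can be discretized directly, in the same spirit as the treatment of $\dot g(s,t)$ in Section 5.1. The sketch below is only illustrative (a naive Nyström-type scheme on a midpoint grid, not the probabilistic method of [20]), but it shows how an approximation of $g(\cdot,T)$ can be obtained.

```python
import numpy as np

def solve_g(T=1.0, H=0.7, n=400):
    """Approximate g(s_i, T) from g(s,T) + H(2H-1) * int_0^T g(r,T)|r-s|^(2H-2) dr = 1."""
    ds = T / n
    s = (np.arange(n) + 0.5) * ds                  # midpoint grid on (0, T)
    D = np.abs(s[:, None] - s[None, :])
    K = np.where(D > 0, D, 1.0)**(2 * H - 2)       # kernel |r - s|^(2H-2)
    np.fill_diagonal(K, 0.0)                       # drop the integrable diagonal singularity
    A = np.eye(n) + H * (2 * H - 1) * ds * K       # identity plus discretized integral operator
    return s, np.linalg.solve(A, np.ones(n))

s, g = solve_g()
print(g.min(), g.max())                            # values should stay within (0, 1]
```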
Let us mention that the canonical representations (6) and (7) can also be used to derive an analogue of Girsanov's theorem, which will be the key tool for constructing the MLE.
Corollary 1.
Consider a process $Y = (Y_t,\ t \in [0,T])$ defined by:
$Y_t = \int_0^t f(s)\,ds + \xi_t, \quad t \in [0,T],$
where $f = (f(t),\ t \in [0,T])$ is a process with continuous paths and $E\int_0^T |f(t)|\,dt < \infty$, adapted to a filtration $\mathbb{G} = (\mathcal{G}_t)$ with respect to which $M$ is a martingale. Then, $Y$ admits the following representation:
$Y_t = \int_0^t G(s,t)\,dZ_s$
with $G(s,t)$ defined in (8), and the process $Z = (Z_t,\ t \in [0,T])$ can be written as:
$Z_t = \int_0^t g(s,t)\,dY_s, \quad t \in [0,T].$
Let us mention that $Z_t$ is a $\mathbb{G}$-martingale with the Doob–Meyer decomposition:
$Z_t = M_t + \int_0^t \Phi(s)\,d\langle M\rangle_s,$
where:
$\Phi(t) = \frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,f(s)\,ds.$
In particular, $\mathcal{F}_t^Y = \mathcal{F}_t^Z$, $P$-a.s. for all $t \in [0,T]$. Moreover, if:
$E\exp\left(\int_0^T \Phi(t)\,dM_t - \frac{1}{2}\int_0^T \Phi^2(t)\,d\langle M\rangle_t\right) = 1,$
then the measures $\mu^\xi$ and $\mu^Y$ are equivalent and the corresponding Radon–Nikodym derivative is given by:
$\frac{d\mu^Y}{d\mu^\xi}(Y) = \exp\left(\int_0^T \hat\Phi(t)\,dZ_t - \frac{1}{2}\int_0^T \hat\Phi^2(t)\,d\langle M\rangle_t\right),$
where $\hat\Phi(t) = E\big(\Phi(t) \mid \mathcal{F}_t^Y\big)$.

3. Estimators and Asymptotic Behaviors

Now, we return to the model (3); similarly to the previous corollary, we define:
$Z_t = \int_0^t g(s,t)\,dX_s, \qquad Q_t = \frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,X_s\,ds, \quad t \in [0,T].$
From Lemma 10 below and Equation (44), we know:
$Q_t = \frac{\alpha}{\beta} - \frac{\alpha}{\beta}\,\frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,e^{-\beta s}\,ds + Q_t^U,$
where $Q_t^U$ denotes $Q_t$ when $\alpha = 0$. By Theorem 2.4 of [13] and Lemma 2.1 of [7], the derivative of the martingale bracket $d\langle M\rangle_t/dt$ exists and is continuous, and the process $Q_t^U$ admits a representation as a stochastic integral with respect to the auxiliary observation process $Z_t$. That is to say, the process $Q_t$ is well defined.
Then, using the quadratic variation of $Z$ on $[0,T]$, we can estimate $\gamma$ almost surely from any arbitrarily small interval as long as a continuous observation of the process is available. Moreover, the estimation of $H$ in the mfBm was carried out in [15]. As a consequence, for the further statistical analysis we assume that $H$ and $\gamma$ are known, and without loss of generality, from now on, we suppose that $\gamma$ is equal to one. For $\gamma = 1$, our observation will be $Z = (Z_t,\ t \in [0,T])$, where $Z_t$ satisfies the following equation:
$dZ_t = (\alpha - \beta Q_t)\,d\langle M\rangle_t + dM_t, \quad t \in [0,T].$
Applying the analogue of the Girsanov formula for the mfBm, we obtain the following likelihood ratio and the explicit expression of the likelihood function:
$L_T(\alpha, \beta, Z^T) = \exp\left(\int_0^T (\alpha - \beta Q_t)\,dZ_t - \frac{1}{2}\int_0^T (\alpha - \beta Q_t)^2\,d\langle M\rangle_t\right).$
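On a discrete grid, the stochastic and Lebesgue–Stieltjes integrals in this likelihood reduce to finite sums. The sketch below (ours, with illustrative placeholder inputs) evaluates the log-likelihood $\Lambda(Z^T)$ from precomputed arrays of $Z_t$, $Q_t$, and $\langle M\rangle_t$ on a common grid; Section 5 discusses how such arrays can be computed from the data.

```python
import numpy as np

def log_likelihood(alpha, beta, Z, Q, bracket):
    """Lambda(Z^T) = int (alpha - beta*Q) dZ - 0.5 * int (alpha - beta*Q)^2 d<M>,
    approximated by forward (Ito-type) sums on a common grid."""
    dZ, dB = np.diff(Z), np.diff(bracket)
    integrand = alpha - beta * Q[:-1]              # left-endpoint evaluation
    return np.sum(integrand * dZ) - 0.5 * np.sum(integrand**2 * dB)

# toy call with placeholder arrays, only to show the shapes involved
grid = np.linspace(0.0, 10.0, 1001)
Z = 0.1 * np.cumsum(np.random.default_rng(1).standard_normal(1001))
Q = np.zeros(1001)
bracket = grid.copy()                              # placeholder for <M>_t
print(log_likelihood(0.5, 1.0, Z, Q, bracket))
```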

3.1. Only One Parameter Is Unknown

Denote the log-likelihood function by $\Lambda(Z^T) = \log L_T(\alpha, \beta, Z^T)$. First of all, if we suppose that $\alpha$ is known and $\beta > 0$ is the unknown parameter, then the MLE $\tilde\beta_T$ is defined by:
$\tilde\beta_T = \frac{\alpha\int_0^T Q_t\,d\langle M\rangle_t - \int_0^T Q_t\,dZ_t}{\int_0^T Q_t^2\,d\langle M\rangle_t};$
then, using (10), for all $H \in (0,1)$, $H \neq 1/2$, the estimation error can be written as:
$\tilde\beta_T - \beta = -\frac{\int_0^T Q_t\,dM_t}{\int_0^T Q_t^2\,d\langle M\rangle_t}.$
We have the following results:
Theorem 2.
For $H > 1/2$,
$\sqrt{T}\,(\tilde\beta_T - \beta) \xrightarrow{d} N(0,\ 2\beta),$
and for $H < 1/2$,
$\sqrt{T}\,(\tilde\beta_T - \beta) \xrightarrow{d} N\!\left(0,\ \frac{2\beta^2}{2\alpha^2 + \beta}\right).$
Now, we suppose that $\beta$ is known and $\alpha$ is the parameter to be estimated. Then, the MLE $\tilde\alpha_T$ is:
$\tilde\alpha_T = \frac{Z_T + \beta\int_0^T Q_t\,d\langle M\rangle_t}{\langle M\rangle_T}.$
Still using (10), the estimation error is:
$\tilde\alpha_T - \alpha = \frac{M_T}{\langle M\rangle_T}.$
The asymptotic behavior is the same as in the linear case, which was demonstrated in [7]. That is, for $H > 1/2$, $T^{1-H}(\tilde\alpha_T - \alpha) \xrightarrow{d} N(0, v_H)$, where $v_H$ is the constant defined in Theorem 3, and for $H < 1/2$, $\sqrt{T}\,(\tilde\alpha_T - \alpha) \xrightarrow{d} N(0,1)$.

3.2. Two Parameters Unknown

Then, taking the derivatives of the log-likelihood function $\Lambda(Z^T)$ with respect to $\alpha$ and $\beta$ and setting them to zero, we obtain the following system:
$\frac{\partial \Lambda(Z^T)}{\partial\alpha} = Z_T - \alpha\,\langle M\rangle_T + \beta\int_0^T Q_t\,d\langle M\rangle_t = 0, \qquad \frac{\partial \Lambda(Z^T)}{\partial\beta} = -\int_0^T Q_t\,dZ_t + \alpha\int_0^T Q_t\,d\langle M\rangle_t - \beta\int_0^T Q_t^2\,d\langle M\rangle_t = 0.$
The MLEs $\hat\alpha_T$ and $\hat\beta_T$ are the solution of the system (16), and the maximization can be confirmed by checking the second partial derivatives of $\Lambda(Z^T)$ with the Cauchy–Schwarz inequality. Now, the solution of (16) gives us:
$\hat\alpha_T = \frac{\int_0^T Q_t\,dZ_t\int_0^T Q_t\,d\langle M\rangle_t - Z_T\int_0^T Q_t^2\,d\langle M\rangle_t}{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \langle M\rangle_T\int_0^T Q_t^2\,d\langle M\rangle_t}$
and:
$\hat\beta_T = \frac{\langle M\rangle_T\int_0^T Q_t\,dZ_t - Z_T\int_0^T Q_t\,d\langle M\rangle_t}{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \langle M\rangle_T\int_0^T Q_t^2\,d\langle M\rangle_t}.$
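In practice, these two closed-form expressions can be evaluated by replacing every integral with a Riemann–Itô sum on the observation grid. The sketch below assumes that arrays for $Z_t$, $Q_t$, and $\langle M\rangle_t$ are already available (see Section 5); it merely transcribes the formulas and is not an implementation from the paper.

```python
import numpy as np

def mle_alpha_beta(Z, Q, bracket):
    """Evaluate hat-alpha_T and hat-beta_T with integrals replaced by forward sums."""
    dZ, dB = np.diff(Z), np.diff(bracket)
    Ql = Q[:-1]                                    # left-endpoint values of Q_t
    I_QdZ = np.sum(Ql * dZ)                        # int_0^T Q_t dZ_t
    I_QdB = np.sum(Ql * dB)                        # int_0^T Q_t d<M>_t
    I_Q2dB = np.sum(Ql**2 * dB)                    # int_0^T Q_t^2 d<M>_t
    MT, ZT = bracket[-1], Z[-1]                    # <M>_T and Z_T
    denom = I_QdB**2 - MT * I_Q2dB
    alpha_hat = (I_QdZ * I_QdB - ZT * I_Q2dB) / denom
    beta_hat = (MT * I_QdZ - ZT * I_QdB) / denom
    return alpha_hat, beta_hat
```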
From the expression of $Z = (Z_t,\ t \in [0,T])$, we obtain that the estimation errors can be written as:
$\hat\alpha_T - \alpha = \frac{\int_0^T Q_t\,dM_t\int_0^T Q_t\,d\langle M\rangle_t - M_T\int_0^T Q_t^2\,d\langle M\rangle_t}{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \langle M\rangle_T\int_0^T Q_t^2\,d\langle M\rangle_t}$
and:
$\hat\beta_T - \beta = \frac{\langle M\rangle_T\int_0^T Q_t\,dM_t - M_T\int_0^T Q_t\,d\langle M\rangle_t}{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \langle M\rangle_T\int_0^T Q_t^2\,d\langle M\rangle_t}.$
We can now describe the asymptotic laws of $\hat\alpha_T$ and $\hat\beta_T$ for $H \in (0,1)$, $H \neq 1/2$.
Theorem 3.
For H > 1 / 2 and as T , we have:
$\sqrt{T}\left(\hat\beta_T - \beta\right) \xrightarrow{d} N(0,\ 2\beta),$
and:
$T^{1-H}\left(\hat\alpha_T - \alpha\right) \xrightarrow{d} N(0,\ v_H),$
where $v_H = \dfrac{2H\,\Gamma(H + 1/2)\,\Gamma(3 - 2H)}{\Gamma(3/2 - H)}$.
Theorem 4.
In the case of $H < 1/2$, the maximum likelihood estimator $\hat\beta_T$ has the same asymptotic normality as presented in (19), and for $\hat\alpha_T$, we have:
$\sqrt{T}\,(\hat\alpha_T - \alpha) \xrightarrow{d} N\!\left(0,\ 1 + \frac{2\alpha^2}{\beta}\right).$
Remark 2.
From the previous theorems, we can see that when $H > 1/2$, whether one parameter is unknown or both parameters are unknown, the asymptotic normality of the estimation error is the same, and it also coincides with the linear case and with the Ornstein–Uhlenbeck process driven by the pure fBm with Hurst parameter $H > 1/2$. However, for $H < 1/2$, the situation changes, and these differences come from the limiting behavior of the quadratic variation of the martingale $M = (M_t,\ 0 \le t \le T)$.
Now, we consider the joint distribution of the estimation error. For $H < 1/2$, if we consider $\vartheta = (\alpha, \beta)^\top$ as the two-dimensional unknown parameter, then the following theorem gives the joint distribution of the estimation error of $\hat\vartheta_T$:
Theorem 5.
The maximum likelihood estimator $\hat\vartheta_T = (\hat\alpha_T, \hat\beta_T)^\top$ is asymptotically normal:
$\sqrt{T}\left(\hat\vartheta_T - \vartheta\right) \xrightarrow{d} N\big(\mathbf{0},\ I^{-1}(\vartheta)\big),$
where $\mathbf{0} = (0,0)^\top$ and $I(\vartheta) = \begin{pmatrix} 1 & -\frac{\alpha}{\beta} \\ -\frac{\alpha}{\beta} & \frac{1}{2\beta} + \frac{\alpha^2}{\beta^2} \end{pmatrix}$ is the Fisher information matrix.
Remark 3.
From Theorem 4, we can see that the convergence rates of $\hat\alpha_T$ and $\hat\beta_T$ are the same, so we can use the martingale central limit theorem in the proof. On the contrary, for $H > 1/2$, since the function $g(s,t)$ defined in (5) has no explicit formula, we cannot use the method in [16] to obtain the joint distribution of $\hat\vartheta_T$. In fact, the convergence rates of $\hat\alpha_T$ and $\hat\beta_T$ are different, which causes many difficulties, and we leave this for further study.
In the above discussion, we were concerned with the asymptotic laws of the estimators; however, even in [7] with $\alpha = 0$, the authors did not consider the strong consistency of $\hat\beta_T$. In what follows, we show that $\hat\beta_T$ converges to $\beta$ almost surely.
Theorem 6.
For $H \in (0,1)$, $H \neq 1/2$, the estimator $\hat\beta_T$ is strongly consistent; that is, as $T \to \infty$,
$\hat\beta_T \xrightarrow{a.s.} \beta.$
Remark 4.
For the estimator $\hat\alpha_T$, the strong consistency is clear when $\beta = 0$, and the proof is the same when $\beta$ is unknown; that is why we do not state this conclusion.

4. Proofs of the Main Results

4.1. Proof of Theorem 2

From (13), we have:
$\sqrt{T}\left(\tilde\beta_T - \beta\right) = -\frac{\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t}{\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t}.$
In fact, the process $\left(\int_0^t Q_s\,dM_s,\ 0 \le t \le T\right)$ is a martingale. From Lemma 13, when $H > 1/2$,
$\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t \xrightarrow{P} \frac{1}{2\beta},$
and from Equation (38) in the proof of Theorem 5, when $H < 1/2$:
$\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t \xrightarrow{P} \left(\frac{\alpha}{\beta}\right)^2 + \frac{1}{2\beta}.$
The martingale central limit theorem (see [17]) completes the proof.

4.2. Proof of Theorem 3

First, we consider the asymptotic normality of $\hat\beta_T$. Using (18), we have:
$\sqrt{T}\left(\hat\beta_T - \beta\right) = \frac{\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t - \frac{M_T}{\langle M\rangle_T}\,\frac{1}{\sqrt{T}}\int_0^T Q_t\,d\langle M\rangle_t}{\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t},$
where $M_T$ is a centered Gaussian random variable with variance $\langle M\rangle_T$. Using Lemmas 11 and 12, we can obtain:
$\frac{M_T}{\langle M\rangle_T}\,\frac{1}{\sqrt{T}}\int_0^T Q_t\,d\langle M\rangle_t \xrightarrow{P} 0,$
and:
$\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 \xrightarrow{P} 0.$
Combining (24)–(26) with Lemma 13, we can obtain (19).
Now, we deal with the convergence of $\hat\alpha_T$. From (17), we easily have:
$T^{1-H}\left(\hat\alpha_T - \alpha\right) = \frac{\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t\cdot\frac{T^{1-H}}{\langle M\rangle_T}\cdot\frac{1}{\sqrt{T}}\int_0^T Q_t\,d\langle M\rangle_t - T^{1-H}\,\frac{M_T}{\langle M\rangle_T}\cdot\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t}{\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t}.$
It is worth noting that:
$\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t\cdot\frac{T^{1-H}}{\langle M\rangle_T}\cdot\frac{1}{\sqrt{T}}\int_0^T Q_t\,d\langle M\rangle_t \xrightarrow{P} 0,$
and:
$\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 \xrightarrow{P} 0.$
Moreover, from [7], we can see that:
$T^{1-H}\,\frac{M_T}{\langle M\rangle_T} \xrightarrow{d} N(0,\ v_H),$
where $v_H = \dfrac{2H\,\Gamma(H + 1/2)\,\Gamma(3 - 2H)}{\Gamma(3/2 - H)}$. Finally, combining (27)–(30), we can obtain (20).

4.3. Proof of Theorem 4

For $\hat\beta_T$, let us revisit Equation (24) with $H < 1/2$. First of all, let us expand the denominator,
$\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 = \left(\frac{\alpha}{\beta}\right)^2\langle M\rangle_T^2 + \left(\frac{\alpha}{\beta}\right)^2\left(\int_0^T V(t)\,d\langle M\rangle_t\right)^2 + \left(\int_0^T Q_t^U\,d\langle M\rangle_t\right)^2 - 2\left(\frac{\alpha}{\beta}\right)^2\langle M\rangle_T\int_0^T V(t)\,d\langle M\rangle_t + 2\,\frac{\alpha}{\beta}\,\langle M\rangle_T\int_0^T Q_t^U\,d\langle M\rangle_t - 2\,\frac{\alpha}{\beta}\int_0^T V(t)\,d\langle M\rangle_t\int_0^T Q_t^U\,d\langle M\rangle_t,$
where $V(t)$ and $Q_t^U$ are defined in Lemma 10. On the other hand:
$\int_0^T Q_t^2\,d\langle M\rangle_t = \left(\frac{\alpha}{\beta}\right)^2\langle M\rangle_T + \left(\frac{\alpha}{\beta}\right)^2\int_0^T V^2(t)\,d\langle M\rangle_t + \int_0^T (Q_t^U)^2\,d\langle M\rangle_t - 2\left(\frac{\alpha}{\beta}\right)^2\int_0^T V(t)\,d\langle M\rangle_t + 2\,\frac{\alpha}{\beta}\int_0^T Q_t^U\,d\langle M\rangle_t - 2\,\frac{\alpha}{\beta}\int_0^T V(t)\,Q_t^U\,d\langle M\rangle_t.$
Consequently, we have:
$\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t = \frac{1}{T\,\langle M\rangle_T}\left(\frac{\alpha}{\beta}\right)^2\left(\int_0^T V(t)\,d\langle M\rangle_t\right)^2 + \frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t^U\,d\langle M\rangle_t\right)^2 - \frac{2}{T\,\langle M\rangle_T}\,\frac{\alpha}{\beta}\int_0^T V(t)\,d\langle M\rangle_t\int_0^T Q_t^U\,d\langle M\rangle_t - \frac{1}{T}\int_0^T (Q_t^U)^2\,d\langle M\rangle_t - \frac{1}{T}\left(\frac{\alpha}{\beta}\right)^2\int_0^T V^2(t)\,d\langle M\rangle_t + \frac{2}{T}\,\frac{\alpha}{\beta}\int_0^T V(t)\,Q_t^U\,d\langle M\rangle_t.$
We study these terms one by one. From Lemmas 9, 11, and 14, we have:
$\frac{1}{T\,\langle M\rangle_T}\left(\frac{\alpha}{\beta}\right)^2\left(\int_0^T V(t)\,d\langle M\rangle_t\right)^2 \to 0, \qquad \frac{1}{T}\left(\frac{\alpha}{\beta}\right)^2\int_0^T V^2(t)\,d\langle M\rangle_t \to 0, \qquad T \to \infty.$
From [7], we can easily obtain:
$\frac{1}{T}\int_0^T (Q_t^U)^2\,d\langle M\rangle_t \xrightarrow{P} \frac{1}{2\beta}.$
Using Lemma 15, we have:
$\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t \xrightarrow{P} -\frac{1}{2\beta}.$
Now, we consider the numerator,
$\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t - \frac{M_T}{\langle M\rangle_T\sqrt{T}}\int_0^T Q_t\,d\langle M\rangle_t = -\frac{\alpha}{\beta}\,\frac{1}{\sqrt{T}}\int_0^T V(t)\,dM_t + \frac{1}{\sqrt{T}}\int_0^T Q_t^U\,dM_t + \frac{M_T}{\langle M\rangle_T\sqrt{T}}\int_0^T\left(\frac{\alpha}{\beta}\,V(t) - Q_t^U\right)d\langle M\rangle_t.$
From the previous proof, it is not difficult to show that:
$\frac{\alpha}{\beta}\,\frac{1}{\sqrt{T}}\int_0^T V(t)\,dM_t \xrightarrow{P} 0, \qquad \frac{M_T}{\langle M\rangle_T\sqrt{T}}\int_0^T\left(\frac{\alpha}{\beta}\,V(t) - Q_t^U\right)d\langle M\rangle_t \xrightarrow{P} 0, \qquad T \to \infty.$
With the fact from [7] that:
$\frac{1}{\sqrt{T}}\int_0^T Q_t^U\,dM_t \xrightarrow{d} N\!\left(0,\ \frac{1}{2\beta}\right),$
we have:
$\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t - \frac{M_T}{\langle M\rangle_T\sqrt{T}}\int_0^T Q_t\,d\langle M\rangle_t \xrightarrow{d} N\!\left(0,\ \frac{1}{2\beta}\right).$
Then, combining Equation (31) with (32), it is easy to obtain:
$\sqrt{T}\left(\hat\beta_T - \beta\right) \xrightarrow{d} N(0,\ 2\beta).$
Now, we look at $\hat\alpha_T$. In fact:
$\sqrt{T}\,(\hat\alpha_T - \alpha) = \frac{\frac{1}{\langle M\rangle_T\sqrt{T}}\int_0^T Q_t\,dM_t\int_0^T Q_t\,d\langle M\rangle_t - \frac{1}{\sqrt{T}}\,\frac{M_T}{\langle M\rangle_T}\int_0^T Q_t^2\,d\langle M\rangle_t}{\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t}.$
We observe that the denominator is the same expression as in the case of $\hat\beta_T$, so it obeys Equation (31), and we only need to consider the numerator. From Lemmas 11 and 15, it is easy to see that:
$\frac{1}{\langle M\rangle_T}\int_0^T Q_t\,d\langle M\rangle_t \xrightarrow{a.s.} \frac{\alpha}{\beta}.$
For the numerator,
$\frac{1}{\langle M\rangle_T\sqrt{T}}\int_0^T Q_t\,dM_t\int_0^T Q_t\,d\langle M\rangle_t = \frac{T}{\langle M\rangle_T}\cdot\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t\cdot\frac{1}{T}\int_0^T Q_t\,d\langle M\rangle_t.$
With Lemmas 9, 11, and 14:
$\frac{1}{\langle M\rangle_T\sqrt{T}}\int_0^T Q_t\,dM_t\int_0^T Q_t\,d\langle M\rangle_t - \left(\frac{\alpha}{\beta}\right)^2\frac{M_T}{\sqrt{T}} \xrightarrow{d} \frac{\alpha}{\beta}\,N\!\left(0,\ \frac{1}{2\beta}\right).$
On the other hand:
$\frac{1}{\sqrt{T}}\,\frac{M_T}{\langle M\rangle_T}\int_0^T Q_t^2\,d\langle M\rangle_t - \left(\frac{\alpha}{\beta}\right)^2\frac{M_T}{\sqrt{T}} \xrightarrow{d} \frac{1}{2\beta}\,N(0,1).$
A further inspection shows that the two limits in distribution in (34) and (35) come from the terms $\frac{1}{\sqrt{T}}\int_0^T Q_t^U\,dM_t$ and $\frac{M_T}{\sqrt{T}}$; since $M = (M_t,\ 0 \le t \le T)$ is a martingale, these two terms are asymptotically independent. Then, from Equations (33)–(35), we can easily obtain:
$\sqrt{T}\,(\hat\alpha_T - \alpha) \xrightarrow{d} N\!\left(0,\ \frac{2\alpha^2}{\beta} + 1\right).$

4.4. Proof of Theorem 5

From Equations (17) and (18), we have:
$\hat\vartheta_T - \vartheta = \begin{pmatrix}\hat\alpha_T\\ \hat\beta_T\end{pmatrix} - \begin{pmatrix}\alpha\\ \beta\end{pmatrix} = Q_T^{-1} R_T,$
where:
$R_T = \begin{pmatrix} M_T \\ -\int_0^T Q_t\,dM_t \end{pmatrix}, \qquad Q_T = \begin{pmatrix} \langle M\rangle_T & -\int_0^T Q_t\,d\langle M\rangle_t \\ -\int_0^T Q_t\,d\langle M\rangle_t & \int_0^T Q_t^2\,d\langle M\rangle_t \end{pmatrix}.$
We can see that $(R_t,\ 0 \le t \le T)$ is a (vector) martingale and $Q_t$ is its quadratic variation. Strictly speaking, in order to use the central limit theorem for martingales (see [17]), it would be preferable to compute the Laplace transform of $Q_T$ to complete the proof; but since the quadratic functional $\int_0^T (Q_t^U)^2\,d\langle M\rangle_t$ was already treated in [7], here we just study the asymptotic behavior of each component of $Q_T$.
First of all, from [7], we know:
$\lim_{T\to\infty}\frac{1}{T}\,\langle M\rangle_T = 1.$
On the other hand, from Lemmas 10, 11, and 15, we have:
$\frac{1}{T}\int_0^T Q_t\,d\langle M\rangle_t \xrightarrow{P} \frac{\alpha}{\beta}.$
Finally, from Lemmas 10, 11, and 15 and [7], we have:
$\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t \xrightarrow{P} \frac{1}{2\beta} + \frac{\alpha^2}{\beta^2}.$
The limits (36)–(38) complete the proof.
Remark 5.
In fact, it is easy to calculate:
$I^{-1}(\vartheta) = \begin{pmatrix} 1 + \frac{2\alpha^2}{\beta} & 2\alpha \\ 2\alpha & 2\beta \end{pmatrix},$
which recovers Theorem 4.
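As a quick symbolic sanity check (ours, using sympy), inverting the Fisher information matrix $I(\vartheta)$ from Theorem 5 reproduces the matrix above, whose diagonal entries are exactly the limiting variances in Theorem 4.

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta', positive=True)
I = sp.Matrix([[1, -alpha / beta],
               [-alpha / beta, 1 / (2 * beta) + alpha**2 / beta**2]])
print(sp.simplify(I.inv()))
# equals [[1 + 2*alpha**2/beta, 2*alpha], [2*alpha, 2*beta]] up to simplification
```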

4.5. Proof of Theorem 6

First of all, from Lemma 2.2 of [7], for every fixed $\mu \in \mathbb{R}$ and fixed $T$, the Laplace transform is finite:
$E\exp\left(\mu\int_0^T (Q_t^U)^2\,d\langle M\rangle_t\right) < \infty.$
We prove the strong consistency of $\hat\beta_T$; for $\hat\alpha_T$, the proof is similar. To simplify the notation, we first assume $\alpha = 0$. Then, using the fact that $\alpha = 0$, we can write:
$\hat\beta_T - \beta = -\frac{\int_0^T Q_t^U\,dM_t}{\int_0^T (Q_t^U)^2\,d\langle M\rangle_t}.$
With (39), and similarly to Proposition 2.5 in [18], by the strong law of large numbers, to obtain the almost sure convergence it suffices to prove:
$\int_0^T (Q_t^U)^2\,d\langle M\rangle_t \xrightarrow{a.s.} \infty.$
From the Appendix of [19], if we define:
$K_T(\mu) = \frac{1}{T}\log E\exp\left(-\mu\int_0^T (Q_t^U)^2\,d\langle M\rangle_t\right),$
then:
$\lim_{T\to\infty} K_T(\mu) = \frac{\beta}{2} - \sqrt{\frac{\beta^2}{4} + \frac{\mu}{2}},$
for all $\mu > -\frac{\beta^2}{2}$. When $\mu > 0$, the limit of the Laplace transform can be written as:
$\lim_{T\to\infty} E\exp\left(-\mu\int_0^T (Q_t^U)^2\,d\langle M\rangle_t\right) = 0,$
which yields (40).
Now, we turn to the case $\alpha \neq 0$: in this situation, using (18), we have:
$\hat\beta_T - \beta = \frac{\langle M\rangle_T\int_0^T Q_t\,dM_t}{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \langle M\rangle_T\int_0^T Q_t^2\,d\langle M\rangle_t} - \frac{\frac{M_T}{T}\,\frac{1}{\langle M\rangle_T}\int_0^T Q_t\,d\langle M\rangle_t}{\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t}.$
For the first term of the above equation, we can write:
$\frac{\langle M\rangle_T\int_0^T Q_t\,dM_t}{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2 - \langle M\rangle_T\int_0^T Q_t^2\,d\langle M\rangle_t} = \frac{1}{\dfrac{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2}{\langle M\rangle_T\int_0^T Q_t\,dM_t} - \dfrac{\int_0^T Q_t^2\,d\langle M\rangle_t}{\int_0^T Q_t\,dM_t}}.$
From the proof of Lemma 13 and Equation (40), we see immediately that:
$\frac{\int_0^T Q_t^2\,d\langle M\rangle_t}{\int_0^T Q_t\,dM_t} \xrightarrow{a.s.} \infty.$
With the previous proofs, we obtain that:
$\frac{\left(\int_0^T Q_t\,d\langle M\rangle_t\right)^2}{\langle M\rangle_T\int_0^T Q_t\,dM_t}$
is bounded. This shows that the first term tends to zero almost surely, and the same holds for the second term. Hence, for $H > 1/2$, we have the strong consistency. The result for $H < 1/2$ can be proven with the same method, so we do not present it again.

5. Simulation Study

5.1. Numerical Solution of $\dot g(s,t)$

From the construction of the MLE of the two parameters $\alpha$ and $\beta$, we see that the simulation procedure is the same for both. In order to reduce the simulation time, in this part we only consider the mixed fractional O–U case, that is, $\alpha = 0$ and $U$ defined in (43):
$dU_t = -\beta U_t\,dt + d\xi_t, \quad t \in [0,T], \quad U_0 = 0.$
Now:
$\hat\beta_T - \beta = -\frac{\int_0^T Q_t^U\,dM_t}{\int_0^T (Q_t^U)^2\,d\langle M\rangle_t},$
where $Q_t^U$ is defined in (45):
$Q_t^U = \frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,U_s\,ds.$
A direct computation leads to:
$Q_t^U = \frac{dt}{d\langle M\rangle_t}\,\frac{d}{dt}\int_0^t g(s,t)\,U_s\,ds = \frac{1}{g^2(t,t)}\left(g(t,t)\,U_t + \int_0^t \dot g(s,t)\,U_s\,ds\right),$
and now the only new ingredient is the numerical solution of $\dot g(s,t) = \frac{d}{dt}g(s,t)$. In fact, we have two methods to obtain $\dot g(s,t)$ numerically. First, we can compute $g(s,t)$ and $g(s, t - \Delta t)$, where $\Delta t$ is a small enough positive constant, and then calculate:
$\frac{g(s,t) - g(s, t - \Delta t)}{\Delta t}.$
However, with this method, we need two different discretizations, and how to choose $\Delta t$ is also a problem. Therefore, we chose the second method: an explicit equation for $\dot g(s,t)$ obtained by differentiating Equation (5). However, when $H < 1/2$, the integral and the differentiation cannot be interchanged, so we only consider $H > 1/2$, for which:
$\dot g(s,t) + H(2H-1)\int_0^t \dot g(r,t)\,|r-s|^{2H-2}\,dr = -H(2H-1)\,g(t,t)\,|s-t|^{2H-2}, \quad 0 \le s < t \le T.$
The following is the procedure for the numerical solution of $\dot g(s,t)$. For every fixed $t$, we divide the interval $[0,t]$ into $n$ equal parts, denoting the grid points by $0 = s_1 < s_2 < \cdots < s_n = \frac{n-1}{n}\,t$, so that the spacing is $t/n$. With Equation (41), for every $s_i$, we have:
$\lim_{n\to\infty}\left[\dot g(s_i,t) + \frac{H(2H-1)\,t}{n}\sum_{j=1}^{n}\dot g(s_j,t)\,|s_j - s_i|^{2H-2}\right] = -H(2H-1)\,g(t,t)\,|s_i - t|^{2H-2}.$
From the definition of the Riemann integral, the diagonal term $j = i$ can be neglected as $n \to \infty$. Therefore, we have the following relationship for the vector $\dot{\mathbf g} = \big(\dot g(s_1,t), \ldots, \dot g(s_n,t)\big)^{*}$.
Lemma 7.
With the previous definitions, as $n \to \infty$,
$\lim_{n\to\infty}\left(\mathrm{Id} + \frac{H(2H-1)\,t}{n}\,A\right)\dot{\mathbf g} = -H(2H-1)\,g(t,t)\,\mathbf b,$
where $A$ is the $n \times n$ matrix with $A_{i,i} = 0$ and $A_{i,j} = |s_j - s_i|^{2H-2}$, and the vector $\mathbf b$ is defined by $\mathbf b = \big(|s_1 - t|^{2H-2}, \ldots, |s_n - t|^{2H-2}\big)^{*}$.
From this lemma, for fixed $t$, we obtain the numerical solution of $\dot g(s,t)$ through the vector $\dot{\mathbf g}$:
$\dot{\mathbf g} \approx -H(2H-1)\,g(t,t)\left(\mathrm{Id} + \frac{H(2H-1)\,t}{n}\,A\right)^{-1}\mathbf b.$
Even if we do not have an explicit expression for $g(t,t)$, we can use its numerical value obtained with the probabilistic method from [20]. In Figure 1, we plot the numerical results of $\dot g(s,t)$ for $t = 1, 2, \ldots, 10$ and $H = 0.8$.
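In code, Lemma 7 amounts to one dense $n \times n$ linear solve per fixed $t$. The sketch below (ours) assumes the boundary value $g(t,t)$ has already been obtained by some other means, for example, the probabilistic method of [20]; the placeholder value passed in the example is purely illustrative.

```python
import numpy as np

def solve_g_dot(t, H, g_tt, n=1000):
    """Solve (Id + H(2H-1)(t/n) A) gdot = -H(2H-1) g(t,t) b for gdot(s_i, t), H > 1/2."""
    s = np.arange(n) * t / n                                  # s_1 = 0, ..., s_n = (n-1)t/n
    D = np.abs(s[:, None] - s[None, :])
    A = np.where(D > 0, D, 1.0)**(2 * H - 2)
    np.fill_diagonal(A, 0.0)                                  # A_{i,i} = 0 as in Lemma 7
    b = np.abs(s - t)**(2 * H - 2)
    lhs = np.eye(n) + H * (2 * H - 1) * (t / n) * A
    return s, np.linalg.solve(lhs, -H * (2 * H - 1) * g_tt * b)

s_grid, gdot = solve_g_dot(t=1.0, H=0.8, g_tt=0.8)            # g_tt = 0.8 is a placeholder value
```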
Then, one may ask: Is our simulation reasonable or not? Of course, we could verify this by computing the derivative directly, but as we mentioned before, this is very complicated. Notice that when $H > 1/2$:
$g^2(t,t) = \frac{d}{dt}\int_0^t g(s,t)\,ds = g(t,t) + \int_0^t \dot g(s,t)\,ds.$
From (42), we can compare the two numerical results $g^2(t,t) - g(t,t)$ and the integral $\int_0^t \dot g(s,t)\,ds$; if they are close, we can say that our simulation is reasonable. We divide every interval with $n = 5000$ for every fixed $t$, and the following are the numerical results (a short quadrature sketch follows the list):
  • $H = 2/3$, $t = 1$: $g^2(t,t) - g(t,t) = 0.233542$, $\int_0^t \dot g(s,t)\,ds = 0.217080$;
  • $H = 2/3$, $t = 5$: $g^2(t,t) - g(t,t) = 0.249744$, $\int_0^t \dot g(s,t)\,ds = 0.232879$;
  • $H = 2/3$, $t = 10$: $g^2(t,t) - g(t,t) = 0.248964$, $\int_0^t \dot g(s,t)\,ds = 0.232367$;
  • $H = 0.8$, $t = 1$: $g^2(t,t) - g(t,t) = 0.241762$, $\int_0^t \dot g(s,t)\,ds = 0.240436$;
  • $H = 0.8$, $t = 5$: $g^2(t,t) - g(t,t) = 0.238866$, $\int_0^t \dot g(s,t)\,ds = 0.237451$;
  • $H = 0.8$, $t = 10$: $g^2(t,t) - g(t,t) = 0.218105$, $\int_0^t \dot g(s,t)\,ds = 0.216969$.
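Once $g(t,t)$ and the solved vector $\dot g(\cdot,t)$ are available, the comparison above is a one-line quadrature. The snippet below (ours) reuses the grid and vector from the previous sketch together with the same placeholder value of $g(t,t)$, so its output only illustrates the check; it is not meant to reproduce the numbers listed above.

```python
import numpy as np

def check_identity(s_grid, gdot, g_tt):
    """Compare g(t,t)^2 - g(t,t) with a trapezoidal approximation of int_0^t gdot(s,t) ds."""
    integral = np.sum(0.5 * (gdot[1:] + gdot[:-1]) * np.diff(s_grid))
    return g_tt**2 - g_tt, integral

print(check_identity(s_grid, gdot, 0.8))
```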
We can see that the left-hand side and the right-hand side of (42) are almost the same, so we can say that our simulation is reasonable.
Remark 6.
Since we cannot find the convergence rate of the numerical solution of $g(s,t)$ obtained with the probabilistic method presented in [20], the same problem exists for $\dot g(s,t)$.

5.2. Procedure of the Simulation of $\hat\beta_T$

In this part, we present the simulation of the MLE step by step (a code sketch of the whole procedure follows the list):
  • To obtain our estimator, we first need to simulate a path of the mixed fractional Ornstein–Uhlenbeck process. In contrast to a general stochastic differential equation, we have the explicit formula for $U_t$ defined in (43):
    $U_t = e^{-\beta t}\int_0^t e^{\beta s}\,d\xi_s, \quad 0 \le t \le T.$
    Then, with the numerical result of $g(s,t)$, we can easily obtain the path of the process $Z_t^U = \int_0^t g(s,t)\,dU_s$;
  • In the second step, we need the fundamental martingale $M_t = \int_0^t g(s,t)\,d\xi_s$ and the important process $Q_t^U = \frac{1}{g^2(t,t)}\left(g(t,t)\,U_t + \int_0^t \dot g(s,t)\,U_s\,ds\right)$;
  • With all these prepared, we use the exact formula:
    $\hat\beta_T = -\frac{\int_0^T Q_t^U\,dZ_t^U}{\int_0^T (Q_t^U)^2\,d\langle M\rangle_t}.$
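The following sketch strings the three steps together, assuming the grids g_mat[i, j] ≈ g(s_j, t_i) and gdot_mat[i, j] ≈ $\dot g(s_j, t_i)$ have been precomputed as in Section 5.1; the function name, the crude placeholder for the increments of $\xi$, and the Euler discretization are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def simulate_beta_hat(beta, T, n, g_mat, gdot_mat, seed=0):
    """One Monte Carlo replication of hat-beta_T following the three steps above."""
    rng = np.random.default_rng(seed)
    dt = T / n
    # Step 1: increments of xi (placeholder: replace with an exact mfBm simulation) and a path of U.
    dxi = rng.standard_normal(n) * np.sqrt(dt)
    U = np.zeros(n + 1)
    for i in range(n):                                        # Euler scheme for dU = -beta*U dt + dxi
        U[i + 1] = U[i] - beta * U[i] * dt + dxi[i]
    dU = np.diff(U)
    # Step 2: Z^U, the bracket <M>_t = int_0^t g(s,s)^2 ds, and Q^U on the grid.
    ZU = np.array([np.sum(g_mat[i, :i] * dU[:i]) for i in range(n + 1)])
    g_diag = np.array([g_mat[i, max(i - 1, 0)] for i in range(n + 1)])   # g(t_i, t_i), approximately
    bracket = np.concatenate(([0.0], np.cumsum(g_diag[1:]**2 * dt)))
    QU = np.zeros(n + 1)
    for i in range(1, n + 1):
        QU[i] = (g_diag[i] * U[i] + np.sum(gdot_mat[i, :i] * U[:i]) * dt) / g_diag[i]**2
    # Step 3: plug everything into the exact formula for the MLE.
    return -np.sum(QU[:-1] * np.diff(ZU)) / np.sum(QU[:-1]**2 * np.diff(bracket))
```

Repeating such a call over many seeds and plotting the histogram of $\sqrt{T}(\hat\beta_T - \beta)$ gives a figure analogous to Figure 2.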
The asymptotic normality of the estimator $\hat\beta_T$ is illustrated in Figure 2 for $\beta = 0.2$, $H = 2/3$, and $T = 100$. Even though $\hat\beta_T$ still shows a bias in the figure, it approximately satisfies the property:
$\sqrt{T}\left(\hat\beta_T - \beta\right) \sim N(0,\ 2\beta).$
Remark 7.
Compared with the previously proposed estimator, the MLE is of course a good estimator, but why do we not recommend it? First of all, when we take the observation horizon T = 10, the result is far from the theoretical one. To obtain a reasonable result, we need at least T = 100; however, this is very time consuming. On the contrary, in [21], we presented a practical estimator that requires only a small T and little computation time. In general, if one wants to estimate the drift, we do not recommend the MLE, but rather the previous practical estimator.

6. Auxiliary Results

This section contains some technical results needed in the proofs of the main theorems of the paper. First, we introduce two important results from [7]:
Lemma 8.
For $H > 1/2$, we have:
$\frac{d}{dT}\langle M\rangle_T \sim T^{1-2H}, \qquad \left(\frac{d}{dT}\log\frac{d}{dT}\langle M\rangle_T\right)^2 = O\!\left(\frac{1}{T^2}\right), \qquad T \to \infty.$
This is Lemma 2.5 from [7].
Lemma 9.
For $H < 1/2$, we have:
$\frac{d}{dT}\langle M\rangle_T \sim \mathrm{const.}, \qquad \left(\frac{d}{dT}\log\frac{d}{dT}\langle M\rangle_T\right)^2 = O\!\left(\frac{1}{T^2}\right), \qquad T \to \infty.$
This is Lemma 2.6 from [7].
The following lemma shows the relationship between the mfOUp and mfVm.
Lemma 10.
Let $U = (U_t,\ 0 \le t \le T)$ be an mfOUp with the drift parameter $\beta$:
$dU_t = -\beta U_t\,dt + d\xi_t, \quad t \in [0,T], \quad U_0 = 0.$
Then, we have:
$X_t = \frac{\alpha}{\beta} - \frac{\alpha}{\beta}\,e^{-\beta t} + U_t, \quad t \in [0,T].$
Moreover, we have the decomposition of $Q_t$:
$Q_t = \frac{\alpha}{\beta} - \frac{\alpha}{\beta}\,V(t) + Q_t^U,$
where:
$V(t) = \frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,e^{-\beta s}\,ds, \qquad Q_t^U = \frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,U_s\,ds.$
Proof. 
In fact, the mixed fractional Vasicek process has a unique solution with the initial value $X_0 = 0$:
$X_t = \frac{\alpha}{\beta}\left(1 - e^{-\beta t}\right) + \int_0^t e^{-\beta(t-s)}\,d\xi_s.$
On the other hand, the mixed O–U process $U_t$ with $U_0 = 0$ is given by:
$U_t = e^{-\beta t}\int_0^t e^{\beta s}\,d\xi_s.$
The equality (44) is immediate. For the decomposition of $Q_t$, we only need to take the integral and the derivative on both sides of (44). □
Next, we present some limit results.
Lemma 11.
For $H \in (0,1)$ and $H \neq 1/2$, as $T \to \infty$, we have:
$\int_0^T V(t)\,d\langle M\rangle_t = O(1),$
where $V(t)$ is defined in (45).
Proof. 
From the definition of $V(t)$, we have:
$\int_0^T V(t)\,d\langle M\rangle_t = \int_0^T g(s,T)\,e^{-\beta s}\,ds.$
The condition $0 \le g(s,t) \le 1$ completes the proof. □
Lemma 12.
For $H > 1/2$, as $T \to \infty$, we have:
$\frac{1}{\sqrt{T\,\langle M\rangle_T}}\int_0^T Q_t^U\,d\langle M\rangle_t \xrightarrow{P} 0.$
Proof. 
A standard calculation yields:
$E\left(\int_0^T Q_t^U\,d\langle M\rangle_t\right)^2 = E\left(\int_0^T g(t,T)\,U_t\,dt\right)^2 = \int_0^T\!\!\int_0^T g(s,T)\,g(t,T)\,E(U_sU_t)\,ds\,dt \le \int_0^T e^{-2\beta(T-t)}\,dt + C_{H,\beta}\,H(2H-1)\int_0^T\!\!\int_0^T g(t,T)\,g(s,T)\,|t-s|^{2H-2}\,ds\,dt = \int_0^T e^{-2\beta(T-t)}\,dt + C_{H,\beta}\int_0^T \big(1 - g(s,T)\big)\,g(s,T)\,ds \le \frac{1}{2\beta}\left(1 - e^{-2\beta T}\right) + 2\,C_{H,\beta}\,\langle M\rangle_T.$
From Lemma 8,
$\lim_{T\to\infty}\frac{e^{-2\beta T}\,T}{\langle M\rangle_T} = 0.$
Now, with the Chebyshev inequality, for any $\varepsilon > 0$:
$P\left(\frac{1}{\sqrt{T\,\langle M\rangle_T}}\left|\int_0^T Q_t^U\,d\langle M\rangle_t\right| \ge \varepsilon\right) \le \frac{E\left(\int_0^T Q_t^U\,d\langle M\rangle_t\right)^2}{T\,\langle M\rangle_T\,\varepsilon^2} \xrightarrow{T\to\infty} 0,$
which implies the desired result. □
Lemma 13.
Let $H > 1/2$; as $T \to \infty$, we have:
$\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t \xrightarrow{P} \frac{1}{2\beta}.$
Moreover, from the martingale convergence theorem, we have:
$\frac{1}{\sqrt{T}}\int_0^T Q_t\,dM_t \xrightarrow{d} N\!\left(0,\ \frac{1}{2\beta}\right).$
Proof. 
From the definition of $Q_t$, we can write $Q_t^2$ as:
$Q_t^2 = \left(\frac{\alpha}{\beta} - \frac{\alpha}{\beta}\,V(t) + Q_t^U\right)^2 = \left(\frac{\alpha}{\beta}\right)^2 + \left(\frac{\alpha}{\beta}\right)^2 V^2(t) + (Q_t^U)^2 - 2\left(\frac{\alpha}{\beta}\right)^2 V(t) + 2\,\frac{\alpha}{\beta}\,Q_t^U - 2\,\frac{\alpha}{\beta}\,V(t)\,Q_t^U.$
Using (48), we can write our target quantity as:
$\frac{1}{T}\int_0^T Q_t^2\,d\langle M\rangle_t = \frac{1}{T}\int_0^T \left(\frac{\alpha}{\beta}\right)^2 d\langle M\rangle_t + \frac{1}{T}\int_0^T \left(\frac{\alpha}{\beta}\right)^2 V^2(t)\,d\langle M\rangle_t + \frac{1}{T}\int_0^T (Q_t^U)^2\,d\langle M\rangle_t - \frac{2}{T}\int_0^T \left(\frac{\alpha}{\beta}\right)^2 V(t)\,d\langle M\rangle_t + \frac{2}{T}\int_0^T \frac{\alpha}{\beta}\,Q_t^U\,d\langle M\rangle_t - \frac{2}{T}\int_0^T \frac{\alpha}{\beta}\,V(t)\,Q_t^U\,d\langle M\rangle_t.$
We consider the above six integrals separately. First, as $T \to \infty$, from Lemma 8,
$\frac{1}{T}\int_0^T \left(\frac{\alpha}{\beta}\right)^2 d\langle M\rangle_t = \left(\frac{\alpha}{\beta}\right)^2\frac{\langle M\rangle_T}{T} \xrightarrow{a.s.} 0.$
Now, we deal with the second term in (49). From Lemmas 8 and 11, we know that $V(t) \sim o(t^{2H-2})$ as $t \to \infty$. Now, we have:
$\lim_{T\to\infty}\frac{1}{T}\int_0^T V^2(t)\,dt = 0$
and:
$\lim_{T\to\infty}\frac{1}{T}\int_0^T V(t)\,d\langle M\rangle_t = 0.$
From [7], as $T \to \infty$, we have:
$\frac{1}{T}\int_0^T (Q_t^U)^2\,d\langle M\rangle_t \xrightarrow{P} \frac{1}{2\beta}.$
Next, from the proof of Lemma 12 and the Borel–Cantelli theorem, as $T \to \infty$, we obtain:
$\frac{1}{T}\int_0^T Q_t^U\,d\langle M\rangle_t \xrightarrow{a.s.} 0.$
With the Cauchy–Schwarz inequality, (51), and (53), we obtain:
$\left|\frac{1}{T}\int_0^T V(t)\,Q_t^U\,d\langle M\rangle_t\right| \le \sqrt{\frac{1}{T}\int_0^T V^2(t)\,d\langle M\rangle_t}\,\sqrt{\frac{1}{T}\int_0^T (Q_t^U)^2\,d\langle M\rangle_t} \xrightarrow{P} 0.$
Finally, the convergence in probability in (46) follows from (50)–(55).
For the convergence in (47), since the process $\left(\int_0^t Q_s\,dM_s,\ t \in [0,T]\right)$ is a martingale and its quadratic variation is $\left(\int_0^t Q_s^2\,d\langle M\rangle_s,\ t \in [0,T]\right)$, we obtain (47) by the martingale convergence theorem. □
The following are the results for $H < 1/2$. Since Lemma 11 remains valid for all $H \in (0,1)$, we then have:
Lemma 14.
For $H < 1/2$, we have:
$V(t) = O(1/t), \quad t \to \infty.$
Proof. 
$\int_0^T V(t)\,d\langle M\rangle_t = \int_0^T V(t)\,\frac{d\langle M\rangle_t}{dt}\,dt \le \mathrm{Const.}$
The result is clear with Lemma 9. □
Now, we deal with the difficulty of the integral of Q t U :
Lemma 15.
For $H < 1/2$, we have:
$\frac{1}{T\,\langle M\rangle_T}\left(\int_0^T Q_t^U\,d\langle M\rangle_t\right)^2 = \frac{1}{T\,\langle M\rangle_T}\left(\int_0^T g(s,T)\,U_s\,ds\right)^2 \xrightarrow{a.s.} 0$
and:
$\frac{2}{T}\,\frac{\alpha}{\beta}\int_0^T V(t)\,Q_t^U\,d\langle M\rangle_t \xrightarrow{a.s.} 0.$
Proof. 
$\int_0^T Q_t^U\,d\langle M\rangle_t = \int_0^T \frac{d}{d\langle M\rangle_t}\left(\int_0^t g(s,t)\,U_s\,ds\right)d\langle M\rangle_t = \int_0^T g(t,T)\,U_t\,dt.$
We still consider the integral:
$\int_0^T\!\!\int_0^T g(s,T)\,g(t,T)\,E(U_sU_t)\,ds\,dt,$
as in Lemma 12. When $H < 1/2$,
$E(U_sU_t) = \int_0^{\min(s,t)} e^{-\beta(t-r)}\,e^{-\beta(s-r)}\,dr + \int_0^t e^{-\beta(t-v)}\,\frac{d}{dv}\int_0^s H\,|v-u|^{2H-1}\,\mathrm{sgn}(v-u)\,e^{-\beta(s-u)}\,du\,dv.$
The first part of this expectation, which of course comes from the Brownian motion, admits the required bound. We develop the second part:
$\int_0^t e^{-\beta(t-v)}\,\frac{d}{dv}\int_0^s H\,|v-u|^{2H-1}\,\mathrm{sgn}(v-u)\,e^{-\beta(s-u)}\,du\,dv = \int_0^t e^{-\beta(t-v)}\,\frac{d}{dv}\int_0^v H\,(v-u)^{2H-1}\,e^{-\beta(s-u)}\,du\,dv - \int_0^t e^{-\beta(t-v)}\,\frac{d}{dv}\int_v^s H\,(u-v)^{2H-1}\,e^{-\beta(s-u)}\,du\,dv.$
With this development and the same calculation as in [22], we have:
$\int_0^T\!\!\int_0^T g(t,T)\,g(s,T)\,E(U_sU_t)\,ds\,dt \le O(T) + C_{\beta,H}\int_0^T g(t,T)\,\frac{d}{dt}\int_0^T g(s,T)\,|t-s|^{2H-1}\,\mathrm{sgn}(t-s)\,ds\,dt,$
and the second term is controlled by the Cauchy–Schwarz inequality. □

7. Conclusions

In this paper, we considered the maximum likelihood estimator of the drift parameters $\alpha$ and $\beta$ in the mixed fractional Vasicek model:
$dX_t = (\alpha - \beta X_t)\,dt + dW_t + dB_t^H, \quad t \in [0,T], \quad X_0 = 0,$
based on a continuous observation of the path $X = (X_t,\ t \in [0,T])$. We presented the strong consistency and the asymptotic normality of the MLEs $\hat\alpha_T$ and $\hat\beta_T$ of the two unknown parameters for $H \neq 1/2$, as well as the joint distribution when $H < 1/2$. On the other hand, we also simulated the MLE when $H > 1/2$ using the numerical solution of the derivative of the Wiener–Hopf equation, even though it is time consuming. Two problems remain to be solved, the joint distribution of the MLE when $H > 1/2$ and the simulation of the MLE when $H < 1/2$, and they will be the subject of our future study.

Author Contributions

Conceptualization, C.-H.C. and W.-L.X.; methodology, all the authors; software, Y.-Z.H. and L.S.; validation, C.-H.C. and W.-L.X.; writing—original draft preparation, Y.-Z.H. and L.S.; writing—review and editing, C.-H.C.; visualization, W.-L.X.; funding acquisition, W.-L.X. and L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Humanities and Social Sciences Research and Planning fund of the Ministry of Education of China Grant Number 20YJA630053 (L.S.) and the National Natural Science Foundation of China Grant Number 71871202 (W.-L.X.). The APC was funded by the Humanities and Social Sciences Research and Planning fund of the Ministry of Education of China No. 20YJA630053.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Our deepest gratitude goes to the anonymous reviewers for their careful work and thoughtful suggestions that have helped improve this paper substantially.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aït-Sahalia, Y.; Mancini, T.S. Out of sample forecasts of quadratic variation. J. Econom. 2008, 147, 17–33.
  2. Comte, F.; Renault, E. Long memory continuous-time stochastic volatility models. Math. Financ. 1998, 8, 291–323.
  3. Gatheral, J.; Jaisson, T.; Rosenbaum, M. Volatility is rough. Quant. Financ. 2018, 18, 933–949.
  4. Cheridito, P. Mixed fractional Brownian motion. Bernoulli 2001, 7, 913–934.
  5. Jacod, J.; Todorov, V. Limit theorems for integrated local empirical characteristic exponents from noisy high-frequency data with application to volatility and jump activity estimation. Ann. Appl. Probab. 2018, 28, 511–576.
  6. Li, J.; Liu, Y.X. Efficient estimation of integrated volatility functionals via multiscale jackknife. Ann. Stat. 2019, 47, 156–176.
  7. Chigansky, P.; Kleptsyna, M. Statistical analysis of the mixed fractional Ornstein–Uhlenbeck process. Theory Probab. Its Appl. 2019, 63, 408–425.
  8. Chigansky, P.; Kleptsyna, M. Exact asymptotics in eigenproblems for fractional Brownian covariance operators. Stoch. Process. Appl. 2018, 128, 2007–2059.
  9. Chigansky, P.; Kleptsyna, M.; Marushkevych, D. Mixed fractional Brownian motion: A spectral take. J. Math. Anal. Appl. 2020, 482, 123558.
  10. Kukush, A.; Lohvinenko, S.; Mishura, Y.; Ralchenko, K. Two approaches to consistent estimation of parameters of mixed fractional Brownian motion with trend. Stat. Inference Stoch. Process. 2021.
  11. Mishura, Y.; Zili, M. Stochastic Analysis of Mixed Fractional Gaussian Processes; Elsevier: Amsterdam, The Netherlands, 2018.
  12. Kubilius, K.; Mishura, Y.; Ralchenko, K. Parameter Estimation in Fractional Diffusion Models; BS Book Series; Springer: Berlin/Heidelberg, Germany, 2017; Volume 8.
  13. Cai, C.; Chigansky, P.; Kleptsyna, M. Mixed Gaussian processes: A filtering approach. Ann. Probab. 2016, 44, 3032–3075.
  14. Cheridito, P. Arbitrage in fractional Brownian motion models. Financ. Stoch. 2003, 7, 533–553.
  15. Dozzi, M.; Mishura, Y.; Shevchenko, G. Asymptotic behavior of mixed power variations and statistical estimation in mixed models. Stat. Inference Stoch. Process. 2015, 18, 151–175.
  16. Lohvinenko, S.; Ralchenko, K. Maximum likelihood estimation in the non-ergodic fractional Vasicek model. Mod. Stoch. Theory Appl. 2019, 6, 377–395.
  17. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press: Cambridge, MA, USA, 1980.
  18. Kozachenko, Y.; Melnikov, A.; Mishura, Y. On drift parameter estimation in models with fractional Brownian motion. Statistics 2015, 49, 35–62.
  19. Marushkevych, D. Large deviations for drift parameter estimator of mixed fractional Ornstein–Uhlenbeck process. Mod. Stoch. Theory Appl. 2016, 3, 107–117.
  20. Cai, C.; Xiao, W. Simulation of integro-differential equation and application in estimation ruin probability with mixed fractional Brownian motion. J. Integral Equ. Appl. 2021, 33, 1–17.
  21. Cai, C.; Wang, Q.; Xiao, W. Mixed Sub-fractional Brownian Motion and Drift Estimation of Related Ornstein–Uhlenbeck Process. arXiv 2018, arXiv:1809.02038.
  22. Hu, Y.; Nualart, D. Parameter estimation for fractional Ornstein–Uhlenbeck processes. Stat. Probab. Lett. 2010, 80, 1030–1038.
Figure 1. The solutions of $\dot g(s,t)$ for $t = 1, 2, 3, \ldots, 10$ when $H = 0.8$.
Figure 2. Asymptotic normality of $\hat\beta_T - \beta$ when $\beta = 0.2$, $H = 2/3$, and $T = 100$.