1. Introduction
The diffusion process has been studied thoroughly, both in seeking solutions of stochastic differential equations (SDEs) and in studying their properties, such as the conditional moments and mixed moments, which play significant roles in various applications and are especially beneficial for the estimation of rate parameters. Usually, these moments can be evaluated directly by utilizing the transition probability density function (PDF), which is sometimes unknown, complicated, or unavailable in closed form. Hence, an analytical formula for the moments of the SDE may be unavailable. An important application of these moments is parameter estimation. There are many tools for estimating parameters, such as the maximum likelihood estimator (MLE), which is one of the most efficient. Sometimes, however, it cannot be applied directly to data from processes whose transition PDFs are unknown or complicated. In such cases, the moments are required for estimating parameters; this can be performed via several methods, e.g., martingale estimating functions, quasi-likelihood methods, nonlinear weighted least squares estimation, and the method of moments (MM).
The aim of this paper is mainly to propose a simple analytical formula for the conditional mixed moments of a generalized stochastic correlation process without requiring the transition PDF. More specifically, we let $\left(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P\right)$ be a filtered probability space generated by an adapted stochastic process $X = \{X_t\}_{t \ge 0}$. This paper focuses on the conditional expectation of a product of polynomial functions of $X_T$ and $X_{T'}$ of the form
$$E\left[X_T^{n_1} X_{T'}^{n_2} \,\middle|\, X_t = x\right], \qquad (1)$$
called a conditional mixed moment up to order $n_1 + n_2$ for $t \le T \le T'$, the analytical formula of which has not been provided, where $X_T$ and $X_{T'}$ evolve according to a generalized stochastic correlation process (time-dependent parameters) governed by the following SDE:
$$dX_t = \kappa(t)\left(\theta(t) - X_t\right)dt + \sigma(t)\sqrt{1 - X_t^2}\;dW_t, \qquad (2)$$
where $W = \{W_t\}_{t \ge 0}$ is a standard Wiener process, $\kappa(t) > 0$, $\theta(t) \in (-1, 1)$, and $\sigma(t) > 0$ for all $t \ge 0$. The parameter $\kappa(t)$ corresponds to the mean-reverting parameter, $\theta(t)$ represents the mean of the process, and $\sigma(t)$ is the volatility coefficient, which determines the state space of the diffusion. Emmerich [1] showed that the stochastic correlation process, which is (2) when the parameters $\kappa$, $\theta$ and $\sigma$ are constant, $\theta \in (-1, 1)$, and the boundaries $\pm 1$ are inaccessible, fulfills the natural features which correlation is expected to possess. In fact, this process is a transformed version of the Jacobi process [2]. In other words, the generalized stochastic correlation process (2) arises from the Jacobi process when the parameters $\kappa$, $\theta$ and $\sigma$ are constant and a suitable affine transformation is applied. Moreover, the Jacobi process is commonly used to describe the dynamics of discretely sampled data with range $(0, 1)$, such as the regime probability or default probability, the discount coefficient, and the arbitrage-free pure discount bond price; see, e.g., [2,3].
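To make the model concrete, the following minimal MATLAB sketch simulates one path of the generalized stochastic correlation process (2) by the Euler–Maruyama scheme. All numeric parameter values (kappa, theta, sigma, the horizon and step count) are illustrative assumptions, not values taken from the paper.

```matlab
% Minimal Euler-Maruyama sketch for the generalized stochastic correlation
% process dX = kappa(t)(theta(t) - X)dt + sigma(t)sqrt(1 - X^2)dW.
% All numeric values below are illustrative assumptions.
rng(1);                        % reproducibility
kappa = @(t) 2.0;              % mean-reversion speed (assumed constant here)
theta = @(t) 0.3;              % long-run level in (-1, 1)
sigma = @(t) 0.5;              % volatility coefficient
T = 1; N = 1000; dt = T / N;
X = zeros(N + 1, 1); X(1) = 0.1;   % initial correlation value
for i = 1:N
    t = (i - 1) * dt;
    X(i + 1) = X(i) + kappa(t) * (theta(t) - X(i)) * dt ...
             + sigma(t) * sqrt(max(1 - X(i)^2, 0)) * sqrt(dt) * randn;
    X(i + 1) = min(max(X(i + 1), -1), 1);  % clamp discretization overshoot
end
plot(0:dt:T, X); xlabel('t'); ylabel('X_t');
```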
The conditional mixed moment (1) becomes the well-known conditional moment when $n_2 = 0$. It is worth noting that the conditional moment, which is widely used in many branches of mathematical science (especially in describing the dynamics of observed data), has been studied extensively from a probabilistic viewpoint. In 2002, to study the moment evaluation of interest rates, Delbaen and Shirakawa [2] provided an analytical formula for the transition PDF of the Jacobi process by solving the Fokker–Planck equation with orthogonal polynomials, called Jacobi polynomials. In addition, an analytical formula for the conditional moments of the Jacobi process was solved algebraically by applying the transition PDF; see Figure 1. The transition PDF of the Jacobi process is very complicated and involves the Jacobi polynomials; their formula is difficult to work with, especially when extending it to a formula for the conditional mixed moments (1). The authors also showed that the Jacobi process, which is bounded on $(0, 1)$, becomes a more general bounded process on an arbitrary interval by using Itô's lemma; see more details in [2]. In this case, an analytical formula for the conditional moments of the new bounded process is provided in [2] as well. In 2004, Gouriéroux and Valéry [3] proposed a method to find the conditional mixed moments in order to calibrate the parameter values against well-chosen conditional moments. Their idea used the conditional power moments, via what is sometimes called the tower property, on the conditional moments. However, their formula for the conditional moments is based on recursively solving a system of conditional moments.
In this work, by utilizing the Feynman–Kac formula, which is obtained from the Kolmogorov equation by using Itô's lemma, we provide a simple analytical formula for the conditional moments of the Jacobi process. The key element of our work is that we successfully solve the partial differential equation (PDE) given in the Feynman–Kac formula, as shown in Figure 1. The obtained formula does not require solving any recursive system, as has been the case in the literature to date. In addition, by applying the obtained formula with the binomial theorem, we immediately obtain a simple analytical formula for the conditional moments of the generalized stochastic correlation process (2). Moreover, we extend the obtained formulas to the conditional mixed moments (1) using the tower property. We also derive analytical formulas for several statistical properties, such as the conditional variance, covariance, central moment and correlation, as consequences of our results.
The overall idea of our results relies on a PDE solution provided by the Feynman–Kac formula, which corresponds to the solution of (1). Roughly speaking, by assuming that the solution of the PDE is a polynomial expression, we can solve for the coefficients to obtain a closed-form formula directly. The key motivation for the form of the conditional moments, that is, a solution to the PDE, is based on [4,5,6,7]. Because the SDE of the Jacobi process has a linear drift coefficient and a polynomial squared diffusion coefficient, the closed-form solutions of the conditional moments can be assumed to take the form of a polynomial expansion; see more details in [4,8,9,10,11].
The rest of this paper is organized as follows. Section 2 provides a brief overview of the extended Jacobi process and the generalized stochastic correlation process. The key methodology and main theorems are proposed in Section 3. Experimental validations of the proposed formulas are shown in Section 4 via Monte Carlo (MC) simulations. To illustrate applications in practice, parameter estimation methods based on conditional moments are discussed in Section 5.
2. Jacobi and Generalized Stochastic Correlation Processes
The Jacobi process is a class of solvable diffusion processes whose solutions satisfy the Pearson equation [12]. It arises in a wide variety of problems in many branches of science, such as chemistry, physics and engineering; see more details in [13]. Over the past decade, the Jacobi process has been considered as one class of the Pearson diffusion process [4], sometimes called a generalized Jacobi process. The Pearson diffusion process is presented via an Itô process having a linear drift coefficient and a quadratic squared diffusion coefficient, and its dynamics follow
$$dR_t = \kappa\left(\theta - R_t\right)dt + \sqrt{a R_t^2 + b R_t + c}\;dW_t, \qquad (3)$$
where $W = \{W_t\}_{t \ge 0}$ is a standard Wiener process, $R_t$ is in the state space, $\kappa > 0$, and $a$, $b$ and $c$ are constants which ensure that the quadratic squared diffusion coefficient in (3) is well defined for all $t$ in the time space. By considering the transition PDF of the Pearson diffusion process through the Fokker–Planck equation, Forman and Sørensen [4] classified it based on the stationary solution into six classes, including the Jacobi process.
Under the classification of Forman and Sørensen [4], the Pearson diffusion process becomes the Jacobi process under the conditions $a < 0$ and $b^2 - 4ac > 0$. The simplest form of the Jacobi process follows the SDE (3) when $a = -\sigma^2$, $b = \sigma^2$ and $c = 0$, and its dynamics follow
$$dR_t = \kappa\left(\theta - R_t\right)dt + \sigma\sqrt{R_t\left(1 - R_t\right)}\;dW_t. \qquad (4)$$
Unlike the Cox–Ingersoll–Ross process [14], which is only bounded below, all values produced by the Jacobi process (4) are bounded both below and above. To keep the boundary points 0 and 1 inaccessible, almost surely with respect to the probability measure $P$, we need the sufficient condition $2\kappa \min\{\theta, 1-\theta\} \ge \sigma^2$; see, e.g., [2,15]. Under this condition, a generalized case of the Jacobi process (4) can be obtained by applying Itô's lemma with $X_t = 2R_t - 1$. In this work, we call this the generalized stochastic correlation process (constant parameters), governed by the SDE
$$dX_t = \kappa\left(\left(2\theta - 1\right) - X_t\right)dt + \sigma\sqrt{1 - X_t^2}\;dW_t. \qquad (5)$$
Comparing (2) with (5) yields $\kappa(t) = \kappa$, $\theta(t) = 2\theta - 1$ and $\sigma(t) = \sigma$. Figure 2 summarizes the relations among processes (2)–(5) and (8); we return to the extended Jacobi process (8) in Section 3.
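The affine transformation behind (5) is easy to verify numerically. The sketch below (our own illustration; the constant parameters are assumed values chosen to satisfy the boundary condition above) drives the EM schemes of (4) and (5) with the same Gaussian increments and checks that $2R_t - 1$ reproduces $X_t$ pathwise up to rounding error.

```matlab
% Sketch: the map X = 2R - 1 sends Jacobi paths (4) on (0,1) to correlation
% paths (5) on (-1,1). Parameters are illustrative and satisfy
% 2*kappa*min(theta, 1-theta) >= sigma^2, so the boundaries are inaccessible.
rng(2);
kappa = 3; theta = 0.6; sigma = 1;     % 2*3*0.4 = 2.4 >= 1
T = 1; N = 2000; dt = T / N; Z = randn(N, 1);
R = zeros(N + 1, 1); X = zeros(N + 1, 1);
R(1) = 0.5; X(1) = 2 * R(1) - 1;
for i = 1:N
    R(i+1) = R(i) + kappa*(theta - R(i))*dt ...
           + sigma*sqrt(max(R(i)*(1 - R(i)), 0))*sqrt(dt)*Z(i);
    X(i+1) = X(i) + kappa*((2*theta - 1) - X(i))*dt ...
           + sigma*sqrt(max(1 - X(i)^2, 0))*sqrt(dt)*Z(i);
end
fprintf('max |(2R-1) - X| = %.2e\n', max(abs(2*R - 1 - X)));
```

The printed discrepancy is at the level of machine precision because $2\sqrt{R(1-R)} = \sqrt{1-(2R-1)^2}$, so the two recursions are algebraically identical.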
In the context of conditional expectation, a natural question arising is whether the conditional expectation can be calculated directly by using the transition PDF. We begin with the transition PDF of the Jacobi process, which is associated with the Jacobi polynomials through the eigenfunctions of the Jacobi generator; see more details in [16,17]. Here, we discuss only the simplest case provided in (4). We use the transition PDF following Leonenko's version [17], which can be rewritten as
$$p\left(x, t \mid x_0\right) = \mathfrak{m}(x)\sum_{n=0}^{\infty} e^{-\lambda_n t}\,\frac{P_n(x)\,P_n(x_0)}{\left\|P_n\right\|^2}, \qquad (6)$$
where $\mathfrak{m}$ is the invariant distribution,
$$\mathfrak{m}(x) = \frac{x^{\,2\kappa\theta/\sigma^2 - 1}\left(1 - x\right)^{\,2\kappa(1-\theta)/\sigma^2 - 1}}{B\!\left(2\kappa\theta/\sigma^2,\, 2\kappa(1-\theta)/\sigma^2\right)}, \qquad (7)$$
$B(\cdot,\cdot)$ is the beta function, and $\Gamma(\cdot)$ is the gamma function, for $x \in (0,1)$. The well-known parameter appearing in (6) is
$$\lambda_n = n\kappa + \tfrac{1}{2}\,n(n-1)\,\sigma^2,$$
which is the discrete spectrum of the generator corresponding to the Jacobi polynomial $P_n$.
As shown in (6) and (7), conditional expectations such as the moments are difficult to calculate by using the transition PDF, and this becomes even more complicated for the conditional mixed moments (1). To overcome this issue, the Feynman–Kac formula is applied here.
3. Main Results
As strong empirical evidence indicates that movements in finance-based practices tend to involve time (see more details in [18,19,20]), we extend the dynamics of the Jacobi process (4) to time-varying parameters, called the extended Jacobi process,
$$dR_t = \kappa(t)\left(\theta(t) - R_t\right)dt + \sigma(t)\sqrt{R_t\left(1 - R_t\right)}\;dW_t, \qquad (8)$$
where $W = \{W_t\}_{t \ge 0}$ is a standard Wiener process, $\kappa(t) > 0$, $\theta(t) \in (0,1)$, and $\sigma(t) > 0$ for all $t$. Well-known SDE processes governed by time-dependent parameters are the extended Ornstein–Uhlenbeck [19] and the extended Cox–Ingersoll–Ross [21] processes. However, to ensure the existence and uniqueness of the solution of (8), the coefficient functions are required to be Borel-measurable and to satisfy the local Lipschitz and linear growth conditions; see more details in [22]. This section is partitioned into three subsections consisting of ten theorems and two lemmas.
This section presents the key methodology used in this paper as well as the main results. To achieve our aim (1), we first study the extended Jacobi process (8). The generalized stochastic correlation process is then obtained by transforming the extended Jacobi process, and its properties follow accordingly. Several consequences of the obtained theorems are investigated in the later part of this section.
3.1. Extended Jacobi Process
By solving the PDE in the Feynman–Kac formula, Theorem 1 provides an analytical formula for the $\gamma$th conditional moments based on the extended Jacobi process (8), where $\gamma \in \mathbb{R}$. Unlike the previous works in the literature, the obtained formula is given as an infinite sum, the limit of which is first assumed to converge uniformly.
Theorem 1. Suppose that $R = \{R_t\}_{t \ge 0}$ follows the extended Jacobi process (8). The conditional moment for $\gamma \in \mathbb{R}$ is
$$U_\gamma(x, \tau) := E\left[R_T^{\gamma} \,\middle|\, R_t = x\right] = \sum_{k=0}^{\infty} A_k(\tau)\,x^{\gamma - k} \qquad (9)$$
for $\tau = T - t \ge 0$ and $x \in (0,1)$, given that the infinite series in (9) converges uniformly on $(0,1)$, where the coefficients in (9) are expressed by
$$A_0(\tau) = e^{\int_0^{\tau} P_{\gamma}(u)\,du} \quad\text{and}\quad A_k(\tau) = \int_0^{\tau} e^{\int_s^{\tau} P_{\gamma-k}(u)\,du}\,Q_{\gamma-k+1}(s)\,A_{k-1}(s)\,ds \qquad (10)$$
for $k \in \mathbb{N}$, where
$$P_{\eta}(s) = -\eta\,\kappa(T-s) - \tfrac{1}{2}\,\eta(\eta-1)\,\sigma^2(T-s) \quad\text{and}\quad Q_{\eta}(s) = \eta\,\kappa(T-s)\,\theta(T-s) + \tfrac{1}{2}\,\eta(\eta-1)\,\sigma^2(T-s). \qquad (11)$$

Proof. By the Feynman–Kac formula [23], $U_\gamma(x, \tau)$ in (9) satisfies the PDE
$$\frac{\partial U_\gamma}{\partial \tau} = \kappa(T-\tau)\left(\theta(T-\tau) - x\right)\frac{\partial U_\gamma}{\partial x} + \frac{1}{2}\,\sigma^2(T-\tau)\,x(1-x)\,\frac{\partial^2 U_\gamma}{\partial x^2} \qquad (12)$$
for all $(x, \tau) \in (0,1) \times (0, T]$, subject to the initial condition
$$U_\gamma(x, 0) = x^{\gamma}. \qquad (13)$$
By comparing the coefficients of (9) and (13), we obtain the conditions $A_0(0) = 1$ and $A_k(0) = 0$ for $k \in \mathbb{N}$. To solve (12), we use (9) to find the partial derivatives $\partial_\tau U_\gamma$, $\partial_x U_\gamma$ and $\partial_{xx} U_\gamma$, which are
$$\frac{\partial U_\gamma}{\partial \tau} = \sum_{k=0}^{\infty} A_k'(\tau)\,x^{\gamma-k}, \quad \frac{\partial U_\gamma}{\partial x} = \sum_{k=0}^{\infty} (\gamma-k)\,A_k(\tau)\,x^{\gamma-k-1}, \quad \frac{\partial^2 U_\gamma}{\partial x^2} = \sum_{k=0}^{\infty} (\gamma-k)(\gamma-k-1)\,A_k(\tau)\,x^{\gamma-k-2}.$$
After substituting the above partial derivatives into (12), it can be simplified to obtain
$$\sum_{k=0}^{\infty}\left[A_k'(\tau) - P_{\gamma-k}(\tau)\,A_k(\tau) - Q_{\gamma-k+1}(\tau)\,A_{k-1}(\tau)\right]x^{\gamma-k} = 0,$$
with the convention $A_{-1} \equiv 0$. Under the assumption of the uniform convergence of the infinite series in (9) over $(0,1)$, the above equation can be solved through the following system of recurrence differential equations:
$$A_k'(\tau) = P_{\gamma-k}(\tau)\,A_k(\tau) + Q_{\gamma-k+1}(\tau)\,A_{k-1}(\tau), \quad k = 0, 1, 2, \ldots, \qquad (14)$$
with initial conditions $A_0(0) = 1$ and $A_k(0) = 0$ for $k \in \mathbb{N}$. As the system (14) consists only of general linear first-order differential equations, the coefficients in (9) are therefore obtained by solving the system (14) as a recursive relation, which provides the results (10). □
According to the infinite sum (9), a convergent case needs to be mentioned. Theorem 2 is a special case of Theorem 1 in which $\gamma$ is a non-negative integer. In such a case, the infinite sum, which can cause a truncation error in practice, reduces to a finite sum. It should be noted that our proposed formulas for the extended Jacobi process are more general, covering the formulas provided in [2,3].
Theorem 2. Suppose that $R$ follows the extended Jacobi process (8). Then, the conditional moment for $\gamma = n \in \mathbb{N}_0$ is
$$E\left[R_T^{n} \,\middle|\, R_t = x\right] = \sum_{k=0}^{n} A_k(\tau)\,x^{\,n-k} \qquad (15)$$
for $\tau = T - t \ge 0$, where the coefficients in (15) are defined by (10) and (11).

Proof. By considering $Q_{\gamma-k+1}$ in (11) when $\gamma = n$, we obtain $Q_{n-k+1} = Q_0 = 0$ at $k = n + 1$. This then implies the coefficients $A_k \equiv 0$ for all integers $k > n$. Thus, the infinite sum (9) can be reduced to the finite sum (15). □
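For readers who want to evaluate the finite sum (15) without computing the nested integrals in (10) by hand, the following sketch solves the coefficient system (14) numerically with ode45 and assembles the moment. It follows the structure stated in Theorem 1; all parameter values are illustrative assumptions.

```matlab
% Sketch: solve the coefficient ODEs (14) numerically for gamma = n = 2 and
% evaluate the finite sum (15). Parameter values are illustrative.
kappa = @(t) 2 + 0*t; theta = @(t) 0.6 + 0*t; sigma = @(t) 1 + 0*t;
T = 1; t0 = 0; x = 0.5; n = 2; tau = T - t0;
P = @(eta, s) -eta*kappa(T - s) - 0.5*eta*(eta - 1)*sigma(T - s)^2;
Q = @(eta, s)  eta*kappa(T - s)*theta(T - s) + 0.5*eta*(eta - 1)*sigma(T - s)^2;
% State A(k+1) holds A_k; A_k' = P_{n-k} A_k + Q_{n-k+1} A_{k-1}, A_{-1} = 0.
rhs = @(s, A) arrayfun(@(k) P(n - k, s)*A(k + 1) ...
      + (k > 0)*Q(n - k + 1, s)*A(max(k, 1)), 0:n).';
[~, A] = ode45(rhs, [0 tau], [1; zeros(n, 1)]);
moment = sum(A(end, :) .* x.^(n - (0:n)));   % finite sum (15)
fprintf('E[R_T^2 | R_t = %.2f] approx %.6f\n', x, moment);
```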
The other formula in the form of a finite sum is presented in Corollary 1, when $2\kappa(t)\theta(t)/\sigma^2(t) = c$ is a constant for all $t$ and $\gamma + c \in \mathbb{N}$.

Corollary 1. According to Theorem 1, with $2\kappa(t)\theta(t)/\sigma^2(t) = c$ a constant for all $t$, we have
$$E\left[R_T^{\gamma} \,\middle|\, R_t = x\right] = \sum_{k=0}^{\gamma+c-1} A_k(\tau)\,x^{\gamma-k} \qquad (16)$$
for $\gamma \in \mathbb{R}$, $\gamma + c \in \mathbb{N}$, where the coefficients are defined by (10) and (11).

Proof. The result is directly obtained by inserting $\eta = \gamma - k + 1$ in $Q_\eta$ of (11). Then, $Q_{\gamma-k+1}(s) = \tfrac{1}{2}\sigma^2(s)\left(\gamma - k + 1\right)\left(\gamma - k + c\right)$ for all $s$. This makes $Q_{\gamma-k+1} \equiv 0$ at $k = \gamma + c$, which implies that $A_k \equiv 0$ for all $k \ge \gamma + c$. □
To establish the results for the system of linear recurrence differential equations shown in (14) when all parameters are constants, we provide an efficient tool in Lemma 1, which is then used to obtain the conditional moments of the Jacobi process (4) as well as their consequences.
Lemma 1. Let $n \in \mathbb{N}_0$ and $q_1, \ldots, q_n \in \mathbb{R}$. For distinct $\lambda_0, \lambda_1, \ldots, \lambda_n \in \mathbb{R}$, the recurrence differential equations provided by
$$m_0'(t) = -\lambda_0\,m_0(t) \quad\text{and}\quad m_j'(t) = -\lambda_j\,m_j(t) + q_j\,m_{j-1}(t), \quad j = 1, 2, \ldots, n, \qquad (17)$$
where the initial conditions $m_j(0)$ are given for $j = 0, 1, \ldots, n$, have the solutions
$$m_j(t) = \sum_{k=0}^{j} m_k(0)\left(\prod_{l=k+1}^{j} q_l\right)\sum_{i=k}^{j} \frac{e^{-\lambda_i t}}{\prod_{l=k,\;l \ne i}^{j}\left(\lambda_l - \lambda_i\right)}.$$

Proof. For $j = 0, 1, \ldots, n$, we can rewrite (17) in the matrix form
$$\frac{d}{dt}\begin{bmatrix} m_0(t)\\ m_1(t)\\ \vdots\\ m_n(t) \end{bmatrix} = \begin{bmatrix} -\lambda_0 & & & \\ q_1 & -\lambda_1 & & \\ & \ddots & \ddots & \\ & & q_n & -\lambda_n \end{bmatrix} \begin{bmatrix} m_0(t)\\ m_1(t)\\ \vdots\\ m_n(t) \end{bmatrix},$$
which is denoted by $m'(t) = L\,m(t)$, subject to the initial condition $m(0)$. Even though $L$ has an asymmetric structure, it is easy to see that its solution is $m(t) = e^{Lt}\,m(0)$. Note that the coefficient matrix $L$ is lower triangular. It is well known that the eigenvalues of $L$ are its diagonal entries, i.e., $-\lambda_j$ for $j = 0, 1, \ldots, n$. As these eigenvalues are all distinct, the matrix $L$ is diagonalizable; in other words, $L = S\,\Lambda\,S^{-1}$. Thus, the solution can be expressed in the following form:
$$m(t) = S\,e^{\Lambda t}\,S^{-1}\,m(0), \qquad (18)$$
where $\Lambda$ is the eigenvalue matrix of $L$ and $S$ is the eigenvector matrix of $L$. Let the $j$th column of $S$, which is the eigenvector corresponding to $-\lambda_j$, be denoted by $S_j$. Then, $L\,S_j = -\lambda_j\,S_j$, that is,
$$\left(\lambda_k - \lambda_j\right)\left(S_j\right)_k = q_k\left(S_j\right)_{k-1}, \quad k = 0, 1, \ldots, n. \qquad (19)$$
Because the matrix $L$ has all distinct eigenvalues, it is simple and has a complete set of $n+1$ eigenvectors. Hence, for each eigenvalue $-\lambda_j$, the system (19) has only one free variable. In solving, we let $\left(S_j\right)_j$ be the free variable, which we set equal to one. Thus, we can directly solve (19) to obtain $\left(S_j\right)_k = 0$ for $k < j$ and
$$\left(S_j\right)_k = \prod_{l=j+1}^{k} \frac{q_l}{\lambda_l - \lambda_j} \quad\text{for } k > j.$$
After varying all column indices $j$ from 0 to $n$, we have the eigenvector matrix $S$ as a lower triangular matrix with these elements. Next, the inverse of the eigenvector matrix $S$, denoted by $S^{-1}$, can be calculated directly. Accordingly, it is also a lower triangular matrix, with entries
$$\left(S^{-1}\right)_{kj} = \prod_{l=j+1}^{k} \frac{q_l}{\lambda_{l-1} - \lambda_k} \quad\text{for } k \ge j.$$
Now, we substitute the obtained matrices into (18). Evidently, we have $m_0(t) = m_0(0)\,e^{-\lambda_0 t}$, and for $j = 1, 2, \ldots, n$,
$$m_j(t) = \sum_{k=0}^{j} m_k(0)\left(\prod_{l=k+1}^{j} q_l\right)\sum_{i=k}^{j} \frac{e^{-\lambda_i t}}{\prod_{l=k,\;l \ne i}^{j}\left(\lambda_l - \lambda_i\right)},$$
as required. □
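The eigendecomposition argument in Lemma 1 is easy to check numerically. The sketch below builds a lower-bidiagonal coefficient matrix with distinct diagonal entries and confirms that $S e^{\Lambda t} S^{-1} m(0)$ matches MATLAB's matrix exponential; all matrix values are arbitrary test data.

```matlab
% Sketch: verify the spectral solution of Lemma 1 on a bidiagonal system
% m'(t) = L m(t) with distinct diagonal entries. Test data are arbitrary.
n = 4; t = 0.7;
lambda = [0 1.5 3.2 5.1 7.4];             % distinct decay rates
q = [0.8 1.1 0.4 0.9];                    % sub-diagonal couplings
L = diag(-lambda) + diag(q, -1);          % lower-bidiagonal coefficient matrix
m0 = [1; zeros(n, 1)];                    % initial condition, as in system (14)
[S, D] = eig(L);                          % L = S D S^{-1}, diagonalizable
m_spec = S * diag(exp(diag(D) * t)) / S * m0;  % S e^{Dt} S^{-1} m(0)
m_expm = expm(L * t) * m0;                % reference solution
fprintf('max discrepancy = %.2e\n', max(abs(m_spec - m_expm)));
```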
Under the condition $2\kappa\min\{\theta, 1-\theta\} \ge \sigma^2$, as mentioned in Section 2, Theorem 3 shows that the formulas provided in (9), (15) and (16) can be expressed in closed form for the Jacobi process (4), that is, when the parameters $\kappa$, $\theta$, and $\sigma$ are constants.
Theorem 3. Suppose that $R$ follows the Jacobi process (4). Then, the conditional moment for $\gamma \in \mathbb{R}$ is
$$E\left[R_T^{\gamma} \,\middle|\, R_t = x\right] = \sum_{k=0}^{\infty} A_k(\tau)\,x^{\gamma-k} \qquad (20)$$
for $\tau = T - t \ge 0$ and $x \in (0,1)$, which uniformly converges on $(0,1)$, where
$$A_k(\tau) = \left(\prod_{l=1}^{k} Q_{\gamma-l+1}\right)\sum_{i=0}^{k} \frac{e^{-\lambda_{\gamma-i}\,\tau}}{\prod_{l=0,\;l \ne i}^{k}\left(\lambda_{\gamma-l} - \lambda_{\gamma-i}\right)} \qquad (21)$$
for $k \in \mathbb{N}_0$, where
$$\lambda_{\eta} = \eta\,\kappa + \tfrac{1}{2}\,\eta(\eta-1)\,\sigma^2 \quad\text{and}\quad Q_{\eta} = \eta\,\kappa\,\theta + \tfrac{1}{2}\,\eta(\eta-1)\,\sigma^2. \qquad (22)$$

Proof. For the Jacobi process (4), the parameters in (8) become constant and we set $\kappa(t) = \kappa$, $\theta(t) = \theta$, and $\sigma(t) = \sigma$. Thus, $P_\eta$ and $Q_\eta$ provided in (11) are represented, respectively, by $-\lambda_\eta$ and $Q_\eta$ as provided in (22). The key idea of the proof is to solve for the coefficients $A_k$ in (14), which can be accomplished straightforwardly using Lemma 1. We consider a partial sum of (20) from $k = 0$ to $k = n$. Recalling the system (14), we now have
$$A_k'(\tau) = -\lambda_{\gamma-k}\,A_k(\tau) + Q_{\gamma-k+1}\,A_{k-1}(\tau), \quad k = 0, 1, \ldots, n, \qquad (23)$$
with distinct $\lambda_{\gamma-k}$ for all $k$ and initial vector
$$\left(A_0(0), A_1(0), \ldots, A_n(0)\right) = \left(1, 0, \ldots, 0\right).$$
By applying Lemma 1, the solution of the coefficients in (23) is (21) for all $k = 0, 1, \ldots, n$. Hence, under the assumption that the infinite series in (20) uniformly converges on $(0,1)$, (21) holds for all $k \in \mathbb{N}_0$, as required. □
In the case that $\gamma = n \in \mathbb{N}_0$, $E\left[R_T^{n} \mid R_t = x\right]$ can be expressed as a power series in $x$ which terminates at finite order. This means that Theorem 4 expresses the result (15) of Theorem 2 in closed form as a finite sum of order $n$.
Theorem 4. Suppose that $R$ follows the Jacobi process (4). Then, the conditional moment for $\gamma = n \in \mathbb{N}_0$ is
$$E\left[R_T^{n} \,\middle|\, R_t = x\right] = \sum_{k=0}^{n} A_k(\tau)\,x^{\,n-k}, \qquad (24)$$
where the coefficients $A_k(\tau)$ are given by (21) with $\lambda_\eta$ and $Q_\eta$ as provided in (22). Proof. The proof is rather trivial by combining Theorems 2 and 3. □
The following corollary can be deduced from Theorem 3 using the same idea as in Corollary 1.
Corollary 2. According to Theorem 3, with $c = 2\kappa\theta/\sigma^2$ such that $\gamma + c \in \mathbb{N}$, we have
$$E\left[R_T^{\gamma} \,\middle|\, R_t = x\right] = \sum_{k=0}^{\gamma+c-1} A_k(\tau)\,x^{\gamma-k}, \qquad (25)$$
where $\lambda_\eta$ and $Q_\eta$ are as provided in (22). Proof. The proof is rather trivial by combining the idea of the proofs in Theorem 3 and Corollary 1. □
Remark 1. In the case that $\gamma = n \in \mathbb{N}_0$, as $\eta = n - k + 1$ and $Q_0 = 0$, we have $A_k \equiv 0$ for all $k > n$. The suitable theorem for this case is Theorem 4. In fact, we can use Corollary 2 with the coefficients $A_k \equiv 0$ for all $k > n$.
In addition, Theorem 5 is obtained from (24) in Theorem 4 by passing to the unconditional moment as $\tau \to \infty$; the obtained result no longer depends on $x$.
Theorem 5. Suppose that $R$ follows the Jacobi process (4). Then, the unconditional moment at equilibrium for $n \in \mathbb{N}$, $x \in (0,1)$ and $\tau \to \infty$ is provided by
$$\lim_{\tau \to \infty} E\left[R_T^{n} \,\middle|\, R_t = x\right] = \prod_{\eta=1}^{n} \frac{Q_\eta}{\lambda_\eta}. \qquad (26)$$

Proof. According to (24) in Theorem 4, because $\lambda_\eta > 0$ for all $\eta \in \mathbb{N}$, the coefficient terms of $e^{-\lambda_{n-i}\tau}$ provided in (21) approach 0 as $\tau \to \infty$ for $i = 0, 1, \ldots, n-1$, except in the case that $\lambda_{n-i} = 0$, i.e., $i = n$. We have $\lambda_0 = 0$; thus
$$\lim_{\tau \to \infty} E\left[R_T^{n} \,\middle|\, R_t = x\right] = \lim_{\tau \to \infty} A_n(\tau) = \left(\prod_{l=1}^{n} Q_{n-l+1}\right)\frac{1}{\prod_{l=0}^{n-1}\left(\lambda_{n-l} - \lambda_0\right)} = \prod_{\eta=1}^{n} \frac{Q_\eta}{\lambda_\eta},$$
as required. □
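Since the stationary law of the Jacobi process (4) is a Beta distribution with parameters $\alpha = 2\kappa\theta/\sigma^2$ and $\beta = 2\kappa(1-\theta)/\sigma^2$ (the invariant distribution (7)), the limit in Theorem 5 can be cross-checked against Beta raw moments. The sketch below does this for illustrative parameters; this cross-check is our own addition.

```matlab
% Sketch: the long-run moment prod(Q_eta / lambda_eta) from Theorem 5 should
% agree with the n-th raw moment of the stationary Beta(alpha, beta) law.
kappa = 3; theta = 0.6; sigma = 1; n = 3;
eta = 1:n;
lam = eta*kappa + 0.5*eta.*(eta - 1)*sigma^2;        % spectrum lambda_eta
Q   = eta*kappa*theta + 0.5*eta.*(eta - 1)*sigma^2;  % Q_eta from (22)
m_thm5 = prod(Q ./ lam);                             % limit (26)
a = 2*kappa*theta/sigma^2; b = 2*kappa*(1 - theta)/sigma^2;
m_beta = prod((a + (0:n-1)) ./ (a + b + (0:n-1)));   % Beta(a,b) raw moment
fprintf('Theorem 5: %.6f, Beta moment: %.6f\n', m_thm5, m_beta);
```

The two printed values agree because $Q_\eta/\lambda_\eta = (\alpha + \eta - 1)/(\alpha + \beta + \eta - 1)$ term by term.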
3.2. Generalized Stochastic Correlation Process
Theorem 6 provides a relation between the extended Jacobi process (8) and the generalized stochastic correlation process through Itô's lemma, and provides a formula for the conditional moments of the generalized stochastic correlation process in closed form.
Theorem 6. Let $R$ follow the extended Jacobi process (8), where $2\kappa(t)\min\{\theta(t), 1-\theta(t)\} \ge \sigma^2(t)$ for all $t$. Suppose that $X_t = 2R_t - 1$ for all $t$. Then, (8) becomes a generalized stochastic correlation process
$$dX_t = \kappa(t)\left(\left(2\theta(t) - 1\right) - X_t\right)dt + \sigma(t)\sqrt{1 - X_t^2}\;dW_t, \qquad (27)$$
and $X_t \in (-1, 1)$ for all $t$. In addition, its conditional moment is
$$E\left[X_T^{n} \,\middle|\, X_t = x\right] = \begin{cases} 1, & n = 0,\\[4pt] \displaystyle\sum_{i=0}^{n} \binom{n}{i}\left(-1\right)^{n-i} 2^{\,i}\, E\left[R_T^{\,i} \,\middle|\, R_t = \tfrac{x+1}{2}\right], & n \in \mathbb{N}, \end{cases} \qquad (28)$$
where $\tau = T - t$ and $E\left[R_T^{\,i} \mid R_t\right]$ is defined in (15).

Proof. Applying $X_t = 2R_t - 1$ with Itô's lemma provides
$$dX_t = 2\,dR_t = \kappa(t)\left(\left(2\theta(t) - 1\right) - X_t\right)dt + \sigma(t)\sqrt{1 - X_t^2}\;dW_t,$$
as shown in (27). As $R_t \in (0,1)$ for all $t$, we have $X_t = 2R_t - 1 \in (-1, 1)$ for all $t$. The analytical formula for the conditional moments is determined in two cases. For the case where $n = 0$, we have $E\left[X_T^{0} \mid X_t = x\right] = 1$. For the other case, $n \in \mathbb{N}$, the binomial theorem results in
$$E\left[X_T^{n} \,\middle|\, X_t = x\right] = E\left[\left(2R_T - 1\right)^{n} \,\middle|\, R_t = \tfrac{x+1}{2}\right] = \sum_{i=0}^{n} \binom{n}{i}\left(-1\right)^{n-i} 2^{\,i}\, E\left[R_T^{\,i} \,\middle|\, R_t = \tfrac{x+1}{2}\right].$$
As $R$ follows the extended Jacobi process (8), applying Theorem 2 yields the two cases in (28). □
Remark 2. It should be noted that the generalized stochastic correlation process (27) is more general than processes (4) and (5). Comparing the generalized stochastic correlation processes (2) and (27) shows that the mean level of (2) corresponds to $2\theta(t) - 1$, while the mean-reversion and volatility parameters coincide. In addition, Theorem 6 becomes Theorem 7 under constant parameters; the stationary property as $\tau \to \infty$ is studied in Theorem 7.
Theorem 7. According to Theorem 6 with the real constant parameters $\kappa$, $\theta$ and $\sigma$, the conditional moment is
$$E\left[X_T^{n} \,\middle|\, X_t = x\right] = \sum_{i=0}^{n} \binom{n}{i}\left(-1\right)^{n-i} 2^{\,i}\, E\left[R_T^{\,i} \,\middle|\, R_t = \tfrac{x+1}{2}\right], \qquad (29)$$
where $\tau = T - t$ and $E\left[R_T^{\,i} \mid R_t\right]$ is defined in (24). Moreover,
$$\lim_{\tau \to \infty} E\left[X_T^{n} \,\middle|\, X_t = x\right] = \sum_{i=0}^{n} \binom{n}{i}\left(-1\right)^{n-i} 2^{\,i} \prod_{\eta=1}^{i} \frac{Q_\eta}{\lambda_\eta}. \qquad (30)$$

Proof. Let $\kappa(t) = \kappa$, $\theta(t) = \theta$ and $\sigma(t) = \sigma$ be constant. The extended Jacobi process (8) is reduced to the original Jacobi process (4). In addition, (27) is reduced to (5) directly. Hence, the conditional moment (28) is transformed to (29). Thus, by applying (29) with Theorem 5, we obtain (30). □
By applying the tower property, we derive an interesting result for the conditional mixed moments (1) of process (2). To the best of our knowledge, no other authors have found a formula as simple as that shown in Theorem 8. However, the following lemma is needed first.
Lemma 2. Suppose that $R$ follows the extended Jacobi process (8) and $t \le T \le T'$. The conditional mixed moment up to order $n_1 + n_2$ for $n_1, n_2 \in \mathbb{N}_0$ is
$$E\left[R_T^{\,n_1} R_{T'}^{\,n_2} \,\middle|\, R_t = x\right] = \sum_{k=0}^{n_2} A_k(T'-T) \sum_{j=0}^{n_1+n_2-k} A_j(T-t)\,x^{\,n_1+n_2-k-j}, \qquad (31)$$
where the parameters dependent on time are provided in (10), with the coefficients $A_k(T'-T)$ and $A_j(T-t)$ computed with terminal times $T'$ and $T$, respectively. In the special case of the Jacobi process (4), the parameters are defined in (21).

Proof. Using the tower property for $t \le T \le T'$, the conditional mixed moment of the extended Jacobi process (8) can be expressed as
$$E\left[R_T^{\,n_1} R_{T'}^{\,n_2} \,\middle|\, R_t = x\right] = E\left[R_T^{\,n_1}\, E\left[R_{T'}^{\,n_2} \,\middle|\, R_T\right] \,\middle|\, R_t = x\right].$$
After applying Theorem 2 twice, first to the inner expectation and then to the resulting moments of $R_T$, we have (31) as required. □
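A quick Monte Carlo illustration of this tower-property reduction (our own sketch, with illustrative parameters): for $n_2 = 1$ the inner expectation has the elementary form $E[R_{T'} \mid R_T] = \theta + (R_T - \theta)e^{-\kappa(T'-T)}$, which is exact for any diffusion with linear drift, so the mixed moment collapses to moments of $R_T$ alone.

```matlab
% Sketch: MC check of the tower reduction for E[R_T^2 R_T' | R_t = x].
% Uses E[R_T'|R_T] = theta + (R_T - theta)exp(-kappa(T'-T)) (linear drift).
rng(3); kappa = 3; theta = 0.6; sigma = 1;
x = 0.5; t = 0; T = 0.5; Tp = 1; M = 1e5; N = 500;
R = x * ones(M, 1); dt = (Tp - t)/N; RT = [];
for i = 1:N
    s = t + (i - 1)*dt;
    R = R + kappa*(theta - R)*dt ...
          + sigma*sqrt(max(R.*(1 - R), 0)).*sqrt(dt).*randn(M, 1);
    R = min(max(R, 0), 1);
    if abs(s + dt - T) < dt/2, RT = R; end   % store the time-T slice
end
direct = mean(RT.^2 .* R);                   % MC estimate of E[R_T^2 R_T']
tower  = theta*mean(RT.^2) + exp(-kappa*(Tp - T))*(mean(RT.^3) - theta*mean(RT.^2));
fprintf('direct: %.5f  tower: %.5f\n', direct, tower);
```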
Theorem 8. According to Theorem 6 with $X_t = 2R_t - 1$, the conditional mixed moment of the generalized stochastic correlation process (27) up to order $n_1 + n_2$ for $n_1, n_2 \in \mathbb{N}_0$ is
$$E\left[X_T^{\,n_1} X_{T'}^{\,n_2} \,\middle|\, X_t = x\right] = \sum_{i=0}^{n_1} \sum_{j=0}^{n_2} \binom{n_1}{i}\binom{n_2}{j}\left(-1\right)^{n_1+n_2-i-j} 2^{\,i+j}\, E\left[R_T^{\,i} R_{T'}^{\,j} \,\middle|\, R_t = \tfrac{x+1}{2}\right], \qquad (32)$$
where $t \le T \le T'$ and the conditional mixed moment of the extended Jacobi process (8) is provided in Lemma 2.

Proof. For the case where $n_1 = 0$ or $n_2 = 0$, the argument is similar to the proof of Theorem 6; it is not difficult to check and is thus omitted here. For the latter case, applying the binomial theorem twice yields
$$E\left[X_T^{\,n_1} X_{T'}^{\,n_2} \,\middle|\, X_t = x\right] = E\left[\left(2R_T - 1\right)^{n_1}\left(2R_{T'} - 1\right)^{n_2} \,\middle|\, R_t = \tfrac{x+1}{2}\right] = \sum_{i=0}^{n_1} \sum_{j=0}^{n_2} \binom{n_1}{i}\binom{n_2}{j}\left(-1\right)^{n_1+n_2-i-j} 2^{\,i+j}\, E\left[R_T^{\,i} R_{T'}^{\,j} \,\middle|\, R_t = \tfrac{x+1}{2}\right],$$
where the analytical formula of the conditional mixed moments $E\left[R_T^{\,i} R_{T'}^{\,j} \mid R_t\right]$, for $0 \le i \le n_1$ and $0 \le j \le n_2$, is provided in Lemma 2. This completes the proof. □
Remark 3. Applying the idea in the proofs of Lemma 2 and Theorem 8, the general formula for conditional mixed moments $E\left[\prod_{i=1}^{m} X_{T_i}^{\,n_i} \mid X_t = x\right]$, where $t \le T_1 \le \cdots \le T_m$ and $n_1, \ldots, n_m \in \mathbb{N}_0$, can be directly obtained. The advantage of our formula for the conditional mixed moments (32) is its simple closed form, which can be used in many applications, especially to estimate functions of the powers of observed processes, as appeared in Sørensen [24], Leonenko and Šuvak [25,26], and Avram et al. [27]. Moreover, in order to study the integrated Jacobi process, the conditional mixed moments need to be evaluated; however, the previously proposed formulas are very complicated, see Forman and Sørensen [4], whereas our results can be applied easily. Before finishing this section, we summarize the relationship of the presented formulas in the diagram displayed in Figure 3, which shows the development of the formulas, consisting of ten theorems and two lemmas, categorized according to processes (2), (4), (5) and (8).
3.3. Statistical Properties
The conditional variance of the generalized stochastic correlation process (27) can be expressed as
$$\operatorname{Var}\left[X_T \,\middle|\, X_t = x\right] = E\left[X_T^{2} \,\middle|\, X_t = x\right] - \left(E\left[X_T \,\middle|\, X_t = x\right]\right)^{2},$$
where $E\left[X_T^{n} \mid X_t = x\right]$ is defined in Theorem 6. Furthermore, the $n$th moment about the mean, that is, the $n$th central moment, can be expressed as
$$E\left[\left(X_T - E\left[X_T \,\middle|\, X_t = x\right]\right)^{n} \,\middle|\, X_t = x\right] = \sum_{k=0}^{n} \binom{n}{k}\left(-1\right)^{n-k} E\left[X_T^{k} \,\middle|\, X_t = x\right]\left(E\left[X_T \,\middle|\, X_t = x\right]\right)^{n-k}.$$
Well-known instances of the central moment are the zeroth central moment, which equals 1, the first central moment, which equals 0, the second central moment, called the conditional variance, and the third and fourth central moments, which are related to the skewness and kurtosis, respectively.
We now move our focus to the conditional covariance and correlation. By applying Theorem 8, for $t \le T \le T'$, where $\mu_T := E\left[X_T \mid X_t = x\right]$ and $\mu_{T'} := E\left[X_{T'} \mid X_t = x\right]$, we have
$$\operatorname{Cov}\left[X_T, X_{T'} \,\middle|\, X_t = x\right] = E\left[X_T X_{T'} \,\middle|\, X_t = x\right] - \mu_T\,\mu_{T'},$$
and the conditional correlation of the generalized stochastic correlation process (27) is
$$\operatorname{Corr}\left[X_T, X_{T'} \,\middle|\, X_t = x\right] = \frac{\operatorname{Cov}\left[X_T, X_{T'} \,\middle|\, X_t = x\right]}{\sqrt{\operatorname{Var}\left[X_T \,\middle|\, X_t = x\right]\operatorname{Var}\left[X_{T'} \,\middle|\, X_t = x\right]}}.$$
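In practice, these properties reduce to arithmetic on the raw conditional moments. The following small helper sketch assembles them; the numeric inputs are placeholders, and in actual use they would come from the formulas of Theorems 6 and 8.

```matlab
% Sketch: conditional variance, covariance and correlation assembled from
% raw conditional moments. The inputs below are placeholders; in use they
% are supplied by the closed-form formulas of Theorems 6 and 8.
m1T  = 0.25;  m2T  = 0.15;        % E[X_T | x],  E[X_T^2 | x]    (placeholders)
m1Tp = 0.28;  m2Tp = 0.17;        % E[X_T'| x],  E[X_T'^2 | x]   (placeholders)
mTTp = 0.09;                      % mixed moment E[X_T X_T' | x] (placeholder)
varT  = m2T  - m1T^2;             % conditional variance at T
varTp = m2Tp - m1Tp^2;            % conditional variance at T'
covar = mTTp - m1T*m1Tp;          % conditional covariance
rho   = covar / sqrt(varT*varTp); % conditional correlation
fprintf('Var = %.4f, Cov = %.4f, Corr = %.4f\n', varT, covar, rho);
```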
It should be noted that the analytical formulas for the conditional covariance and correlation can be extended to those of $\operatorname{Cov}\left[X_T^{\,n_1}, X_{T'}^{\,n_2} \mid X_t = x\right]$ and $\operatorname{Corr}\left[X_T^{\,n_1}, X_{T'}^{\,n_2} \mid X_t = x\right]$, where $n_1, n_2 \in \mathbb{N}$. Several related applications as estimation tools are mentioned in [24,25,26,27,28].
4. Experimental Validation
As the results proposed in Section 3 are mainly based on the extended Jacobi process (8), this experimental validation section discusses this process first. In this experiment, we applied the Euler–Maruyama (EM) discretization method with MC simulations to process (8). Let $\widehat{R}_{t_i}$ be a time-discretized approximation of $R_{t_i}$ generated on the time interval $[0, T]$ partitioned into $N$ steps, i.e., $0 = t_0 < t_1 < \cdots < t_N = T$. Then, the EM approximation is defined by
$$\widehat{R}_{t_{i+1}} = \widehat{R}_{t_i} + \kappa(t_i)\left(\theta(t_i) - \widehat{R}_{t_i}\right)\Delta t + \sigma(t_i)\sqrt{\widehat{R}_{t_i}\left(1 - \widehat{R}_{t_i}\right)}\,\sqrt{\Delta t}\;Z_i, \qquad (33)$$
where the initial value is $\widehat{R}_{t_0} = R_0$, $\Delta t = T/N$ is the size of the time step, and $Z_i$ is a standard normal random variable. We illustrate the validation of the 1st moment ($n = 1$) of the formula (15) via the parameters studied by Ardian and Kumral [29] for the evolution of gold prices and interest rates, estimated for the generalized stochastic correlation process (2). For the extended Jacobi process (8), those estimated parameters correspond, through the transformation of Theorem 6, to constant values of $\kappa(t)$, $\theta(t)$ and $\sigma(t)$ for all $t \in [0, T]$. This then corresponds to the Jacobi process (4) as well, for which the 1st conditional moment can be computed using formula (24) directly. This work was implemented in MATLAB; the libraries are available in the GitHub repository: https://github.com/TyMathAD/Conditional_Mixed_Moments (accessed on 21 April 2022).
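The validation loop itself is short. Below is a stripped-down sketch of what such a script might look like; the parameter values are placeholders rather than the estimates of [29], and the exact first moment used for comparison, $E[R_T \mid R_0 = x] = \theta + (x - \theta)e^{-\kappa T}$, is the standard closed form for a linear drift.

```matlab
% Sketch of the Section-4 validation: EM scheme (33) vs the analytical first
% moment for the Jacobi process (exact for linear drift). Placeholder values.
rng(4); kappa = 2; theta = 0.5; sigma = 0.8;
x = 0.3; T = 1; N = 500; dt = T/N; paths = [100 1000 10000];
exact = theta + (x - theta)*exp(-kappa*T);
for M = paths
    R = x*ones(M, 1); tic;
    for i = 1:N
        R = R + kappa*(theta - R)*dt ...
              + sigma*sqrt(max(R.*(1 - R), 0)).*sqrt(dt).*randn(M, 1);
        R = min(max(R, 0), 1);                 % keep paths inside (0,1)
    end
    fprintf('M = %5d: |MC - exact| = %.2e (%.2f s)\n', M, abs(mean(R) - exact), toc);
end
```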
To test the accuracy of the 1st moment formula, we compared the obtained results with MC simulations at various points $(x, \tau)$. These simulations were examined with a fixed time step $\Delta t$ and with the number of sample paths varied over 100, 1000, and 10,000, as depicted in Figure 4, which shows contour plots of the absolute errors between our formula and the MC simulations. From Figure 4, we can clearly see that the contour colors trend toward the dark blue shade for larger path numbers. This means that the absolute errors approach zero. Figure 4a–c produce successively smaller average absolute errors as the number of sample paths increases. Hence, the MC simulations are most likely to converge to our formula.
In this validation, the results of our formula and of the MC simulations based on the EM method (33) were computed in MATLAB R2021a on a laptop computer with the following configuration: Intel(R) Core(TM) i7-5700HQ CPU @2.70 GHz, 16.0 GB RAM, Windows 10, 64-bit operating system. As a result, the computational run time of our analytical formula is around 0.0145 s, while the MC simulations consume run times of 1.43, 4.32, and 40.21 s for 100, 1000, and 10,000 sample paths, respectively. Thus, the MC simulations are tremendously more expensive than our formula, especially with large path numbers. Notably, even the MC simulation with just 100 paths spent almost 100 times the computing time of our formula. Hence, for a more accurate result, the use of MC simulations may not be a good choice in terms of computing time. In contrast, the proposed formula is independent of any discretization and has a very low computational cost. Therefore, the formulas presented here are efficient and suitable for practical use.
Moreover, we used the above parameters to compute the 1st and 2nd conditional moments, $E\left[X_T \mid X_t = x\right]$ and $E\left[X_T^{2} \mid X_t = x\right]$, in order to model the correlation between gold prices and interest rates. We computed these moments utilizing the presented formula (28) at different values of $(x, \tau)$. The obtained results are demonstrated by the surface plots in Figure 5. In addition, we plotted the graphical contours of the 1st and 2nd conditional moments for increasing $\tau$. It can be seen that as $\tau$ increases, the obtained results converge to a certain value for both moments. This can be seen from Figure 6, in that the contour colors trend toward a light blue shade of approximately constant value. Using Theorem 7, it is confirmed that as $\tau \to \infty$ these 1st and 2nd conditional moments converge to the stationary values given by (30), corresponding to Figure 6.
Note that one primary concern for our proposed formula in Theorem 1 is that the coefficients $A_k(\tau)$ for $k \in \mathbb{N}_0$ in (10) may not be exactly integrable. Thus, numerical integration methods are needed to handle the integral terms, such as the trapezoidal rule, Simpson's rule, etc. One efficient method that we suggest for handling these integral terms is the Chebyshev integration method provided by Boonklurb et al. [30,31,32,33], which provides higher accuracy than other integration methods under the same discretization.
5. Method of Moments Estimator
In certain cases, the MM is superseded by Fisher's method when estimating the parameters of a known family of probability distributions, as the MLEs have a higher probability of being close to the quantities to be estimated. In certain other cases, however, such as those of the gamma and beta distributions, the MLEs may be intractable without computer assistance. In such cases, estimation using the MM can serve as a first approximation to the solutions of the MLEs; the MM and the method of MLEs are symbiotic in this respect.
The key idea of the MM is to calibrate a set of parameter values against suitably chosen conditional moments. In this section, suppose that we need to calibrate an unknown parameter vector $\beta = (\kappa, \theta, \sigma)$ of the generalized stochastic correlation process (2), where the true parameter value is the vector $\beta_0$, on discretely observed data $x_{t_0}, x_{t_1}, \ldots, x_{t_N}$, where $t_i = i\,\Delta t$ for all $i = 0, 1, \ldots, N$. Normally, the basic conditional moments selected for calibration may be the first three conditional moments of the form provided in Theorem 6. This is sufficient to solve for the unknown vector $\beta$; however, in 2004, Gouriéroux and Valéry [3] suggested choosing those conditional moments that satisfy the identities of the observed data of interest. They further determined sufficient moments to be adequately informative, such as the 1st, 2nd, 3rd (skewness), and 4th (kurtosis) conditional moments and the mixed moments $E\left[X_T X_{T'} \mid X_t = x\right]$ and $E\left[X_T^{2} X_{T'}^{2} \mid X_t = x\right]$, which capture the dynamics of the risk premium and the possible volatility persistence, respectively. Their set of conditional moments selected for implementing the MM is thus
$$\left\{\, E\left[X_T^{n} \,\middle|\, X_t = x\right],\; n = 1, 2, 3, 4;\quad E\left[X_T X_{T'} \,\middle|\, X_t = x\right];\quad E\left[X_T^{2} X_{T'}^{2} \,\middle|\, X_t = x\right] \right\},$$
with the conditional moments and mixed moments appearing above having been proposed in Theorems 6 and 8, respectively. In order to estimate the parameters, we suppose that the moment functions $g_j(\cdot\,; \beta)$, $j = 1, 2, \ldots, p$, have conditional expectations that exist as real numbers for all $\beta$, satisfying $E\left[g_j\left(X_t; \beta_0\right)\right] = 0$, and let
$$G_N(\beta) = \frac{1}{N}\sum_{i=1}^{N} g\left(x_{t_i}; \beta\right).$$
The MM estimator of $\beta$ based on these conditional expectations is the solution to the system of equations $G_N(\beta) = 0$. If we cannot solve for the exact value of $\beta$, a good estimate of the true value $\beta_0$, called $\widehat{\beta}$, is needed. In other words, we need a $\widehat{\beta}$ that makes $G_N(\widehat{\beta})$ close to 0; see more details in [34]. In any event, the algorithm that we suggest would use either Newton's method or iterative methods to solve the system of nonlinear equations $G_N(\beta) = 0$.
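A sketch of how the resulting system of moment conditions might be solved with fsolve (Optimization Toolbox) follows. Here momentConditions is a hypothetical user-supplied function returning the stacked sample moment conditions $G_N(\beta)$ built from the formulas of Theorems 6 and 8 and the observed data; the toy stand-in below merely makes the sketch runnable.

```matlab
% Sketch: method-of-moments estimation by root-finding. momentConditions is
% a hypothetical function returning G_N(beta); the toy stand-in below has
% the known root (2, 0.5, 0.8) so the sketch runs end to end.
momentConditions = @(beta) [beta(1) - 2; beta(2) - 0.5; beta(3) - 0.8];
beta0 = [1; 0.3; 0.5];                    % initial guess (kappa, theta, sigma)
opts = optimoptions('fsolve', 'Display', 'off');
betaHat = fsolve(momentConditions, beta0, opts);
disp(betaHat.');                          % estimated parameter vector
```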
It should be noted that in certain cases, infrequent with large sample sizes and less infrequent with small ones, the estimates provided by the MM are not suitable: they may fall outside of the parameter space, in which case it does not make sense to rely on them. Regarding the properties of the MM and its generalized version, under sufficient conditions the estimators are consistent and asymptotically normally distributed; see [34,35] for more details.