
Analytical Formulas for Conditional Mixed Moments of Generalized Stochastic Correlation Process

by Ampol Duangpan 1, Ratinan Boonklurb 1,*,†, Kittisak Chumpong 2,3,*,† and Phiraphat Sutthimat 1

1 Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok 10330, Thailand
2 Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla 90110, Thailand
3 Statistics and Applications Research Unit, Faculty of Science, Prince of Songkla University, Songkhla 90110, Thailand
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2022, 14(5), 897; https://doi.org/10.3390/sym14050897
Submission received: 30 March 2022 / Revised: 21 April 2022 / Accepted: 25 April 2022 / Published: 27 April 2022
(This article belongs to the Special Issue Symmetries in Differential Equation and Application)

Abstract:
This paper proposes a simple and novel approach based on solving a partial differential equation (PDE) to establish the concise analytical formulas for a conditional moment and mixed moment of the Jacobi process with constant parameters, accomplished by including random fluctuations with an asymmetric Wiener process and without any knowledge of the transition probability density function. Our idea involves a system with a recurrence differential equation which leads to the PDE by involving an asymmetric matrix. Then, by using Itô’s lemma, all formulas for the Jacobi process with constant parameters as well as time-dependent parameters are extended to the generalized stochastic correlation processes. In addition, their statistical properties are provided in closed forms. Finally, to illustrate applications of the proposed formulas in practice, estimations of parametric methods based on the moments are mentioned, particularly in the method of moments estimators.

1. Introduction

Diffusion processes have been studied thoroughly, both in solving the corresponding stochastic differential equations (SDEs) and in deriving their properties, such as the conditional moments and mixed moments, which play significant roles in various applications and are especially beneficial for the estimation of rate parameters. Usually, these moments can be evaluated directly from the transition probability density function (PDF); however, the transition PDF is sometimes unknown, complicated, or unavailable in closed form, in which case an analytical formula for the moments of the SDE may be unavailable as well. An important application of these moments is parameter estimation. Among the many estimation tools, the maximum likelihood estimator (MLE) is one of the most efficient; however, it cannot always be applied directly to data from processes whose transition PDFs are unknown or complicated. The moments are then required for estimating parameters, which can be performed via several methods, e.g., martingale estimating functions, quasi-likelihood methods, nonlinear weighted least squares estimation, and the method of moments (MM).
The main aim of this paper is to propose a simple analytical formula for the conditional mixed moments of a generalized stochastic correlation process without requiring the transition PDF. More specifically, we let $(\Omega, \mathcal{F}, \{\mathcal{F}_s\}_{0\le s\le T_1\le T_2}, \mathbb{P})$ be a filtered probability space generated by an adapted stochastic process $\{\rho_t\}_{s\le t\le T_2}$. This paper focuses on the conditional expectation of a product of the polynomial functions $\rho_{T_1}^{n_1}$ and $\rho_{T_2}^{n_2}$ of the form
$$ \mathbb{E}\left[\rho_{T_1}^{n_1}\rho_{T_2}^{n_2} \mid \mathcal{F}_s\right] = \mathbb{E}\left[\rho_{T_1}^{n_1}\rho_{T_2}^{n_2} \mid \rho_s=\rho\right], \tag{1} $$
called a conditional mixed moment up to order $n_1+n_2$ for $n_1,n_2\in\mathbb{Z}_0^+$, the analytical formula of which has not previously been provided, where $\rho\in[\rho_{\min},\rho_{\max}]\subset\mathbb{R}$ and $\rho_t$ evolves according to a generalized stochastic correlation process (with time-dependent parameters) governed by the following SDE:
$$ d\rho_t = \theta^*(t)\left(\mu^*(t)-\rho_t\right)dt + \sigma^*(t)\sqrt{\left(\rho_{\max}-\rho_t\right)\left(\rho_t-\rho_{\min}\right)}\,dW_t, \tag{2} $$
where $W_t$ is an asymmetric Wiener process, $\theta^*(t)>0$, $\sigma^*(t)>0$, and $\rho_{\min}<\mu^*(t)<\rho_{\max}$ for all $t\in[s,T_2]$. The parameter $\theta^*(t)$ corresponds to the mean-reverting parameter, $\mu^*(t)$ represents the mean of the process, and $\sigma^*(t)$ is the volatility coefficient, which determines the state space of the diffusion. Emmerich [1] showed that the stochastic correlation process, namely (2) when the parameters $\theta^*(t)$, $\mu^*(t)$, and $\sigma^*(t)$ are constant, $\rho_{\min}=-1$, and $\rho_{\max}=1$, fulfills the natural features that correlation is expected to possess. In fact, this process is a transformed version of the Jacobi process [2]. In other words, the Jacobi process is the generalized stochastic correlation process (2) when the parameters $\theta^*(t)$, $\mu^*(t)$, and $\sigma^*(t)$ are constant, $\rho_{\min}=0$, and $\rho_{\max}=1$. Moreover, the Jacobi process is commonly used to describe the dynamics of discretely sampled data with range $[0,1]$, such as the regime probability or default probability, the discount coefficient, and the arbitrage-free pure discount bond price; see e.g., [2,3].
The conditional mixed moment (1) becomes the well-known conditional moment when $n_1=0$. It is worth noting that the conditional moment, which is widely used in many branches of mathematical science (especially in describing the dynamics of observed data), has been studied extensively from a probabilistic viewpoint. In 2002, to study the moment evaluation of interest rates, Delbaen and Shirakawa [2] provided an analytical formula for the transition PDF of the Jacobi process by solving the Fokker–Planck equation with orthogonal polynomials, namely, the Jacobi polynomials. In addition, an analytical formula for the conditional moments of the Jacobi process was solved algebraically by applying the transition PDF; see Figure 1. The transition PDF of the Jacobi process is very complicated and involves the Jacobi polynomials; the resulting formula is difficult to work with, especially when extending it to a formula for the conditional mixed moments (1). The authors showed that the Jacobi process, which is bounded on $[0,1]$, becomes a more general bounded process on $[\rho_{\min},\rho_{\max}]$ by using Itô's lemma; see more details in [2]. In this case, an analytical formula for the conditional moments of the new bounded process is provided in [2] as well. In 2004, Gouriéroux and Valéry [3] proposed a method for finding the conditional mixed moments in order to calibrate parameter values from well-chosen conditional moments. Their idea applied the conditional power moments, via the so-called tower property, to the conditional moments. However, their formula for the conditional moments is based on solving a system of conditional moments recursively.
In this work, by utilizing the Feynman–Kac formula, which is transformed from the Kolmogorov equation by using Itô’s lemma, we provide a simple analytical formula for conditional moments of the Jacobi process. The key interesting element of our work is that we successfully solve the partial differential equation (PDE) given in the Feynman–Kac formula, as shown in Figure 1. The obtained formula does not require solving any recursive system, as is the case in the literature to date. In addition, by applying the obtained formula with the binomial theorem, we immediately obtain a simple analytical formula for conditional moments of the generalized stochastic correlation process (2). Moreover, we extend the obtained formulas to the conditional mixed moments (1) using the tower property. We propose an analytical formula for several mathematical properties, such as the conditional variance, covariance, central moment and correlation, as consequences of our results.
The overall idea of our results relies on the PDE solution provided by the Feynman–Kac formula, which corresponds to the solution of (1). Roughly speaking, by assuming that the solution of the PDE is a polynomial expression, we can solve for the coefficients to obtain a closed-form formula directly. The key motivation for this form of the conditional moments, that is, as a solution of the PDE, is based on [4,5,6,7]. Because the SDE of the Jacobi process has a linear drift coefficient and a polynomial squared diffusion coefficient, the closed-form solutions of the conditional moments can be sought as a polynomial expansion; see more details in [4,8,9,10,11].
The rest of this paper is organized as follows. Section 2 provides a brief overview of the extended Jacobi process and the generalized stochastic correlation process. The key methodology and main theorems are proposed in Section 3. Experimental validations of proposed formulas are shown in Section 4 via Monte Carlo (MC) simulations. To illustrate applications in practice, parameter estimation methods based on conditional moments are mentioned in Section 5.

2. Jacobi and Generalized Stochastic Correlation Processes

The Jacobi process belongs to a class of solvable diffusion processes whose solution satisfies the Pearson equation [12]. It arises in a wide variety of problems in many fields, such as chemistry, physics and engineering; see more details in [13]. Over the past decade, the Jacobi process has been considered as one class of the Pearson diffusion processes [4], sometimes called a generalized Jacobi process. A Pearson diffusion process is an Itô process with a linear drift coefficient and a quadratic squared diffusion coefficient, whose dynamics follow
$$ dX_t = \theta\left(\mu - X_t\right)dt + \sqrt{2\theta\left(aX_t^2 + bX_t + c\right)}\,dW_t, \tag{3} $$
where $W_t$ is an asymmetric Wiener process, $X_t$ takes values in the state space, $\theta>0$, and $a$, $b$ and $c$ are constants which ensure that the quadratic squared diffusion coefficient in (3) is well defined for all $t$ in the time space. By considering the transition PDF of the Pearson diffusion process through the Fokker–Planck equation, Forman and Sørensen [4] classified it based on the stationary solution into six classes, including the Jacobi process.
Under the classification of Forman and Sørensen [4], the Pearson diffusion process becomes the Jacobi process under the conditions $a<0$ and $b^2-4ac>0$. The simplest form of the Jacobi process follows the SDE (3) when $a=-b<0$ and $c=0$, and its dynamics follow
$$ dX_t = \theta\left(\mu - X_t\right)dt + \sqrt{2a\theta X_t\left(X_t-1\right)}\,dW_t. \tag{4} $$
Unlike the Cox–Ingersoll–Ross process [14], which is only bounded below, all values produced by the Jacobi process (4) are bounded both below and above. To make the boundary points 0 and 1 inaccessible, almost surely with respect to the probability measure $\mathbb{P}$, we need the sufficient condition $-a\le\mu\le1+a$; see e.g., [2,15]. Under this condition, a generalized case of the Jacobi process (4) can be obtained by applying Itô's lemma with $\rho_t = \rho_{\min} + \left(\rho_{\max}-\rho_{\min}\right)X_t$. In this work, we call the result the generalized stochastic correlation process (with constant parameters) governed by the SDE
$$ d\rho_t = \theta\left(\left(\rho_{\max}-\rho_{\min}\right)\mu + \rho_{\min} - \rho_t\right)dt + \sqrt{2a\theta\left(\rho_t-\rho_{\max}\right)\left(\rho_t-\rho_{\min}\right)}\,dW_t. \tag{5} $$
Comparing (2) with (5) yields $\theta^*(t)=\theta$, $\mu^*(t)=\left(\rho_{\max}-\rho_{\min}\right)\mu+\rho_{\min}$ and $\sigma^*(t)=\sqrt{-2a\theta}$. Figure 2 summarizes the relations among processes (2)–(5) and (8); we return to the extended Jacobi process (8) in Section 3.
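This parameter mapping is easy to check numerically. The following Python fragment is our own illustrative sketch (the paper's published implementation, linked in Section 4, is in MATLAB), and the function names are ours:

```python
import math

def to_jacobi(theta_star, mu_star, sigma_star, rho_min, rho_max):
    """Map constant parameters of SDE (2) to (theta, mu, a) of the Jacobi process (4)."""
    theta = theta_star                                   # theta* = theta
    mu = (mu_star - rho_min) / (rho_max - rho_min)       # invert mu* = (rho_max - rho_min) mu + rho_min
    a = -sigma_star ** 2 / (2.0 * theta_star)            # invert sigma* = sqrt(-2 a theta)
    return theta, mu, a

def to_correlation(theta, mu, a, rho_min, rho_max):
    """Inverse map, back to the constant parameters of SDE (2)/(5)."""
    return theta, (rho_max - rho_min) * mu + rho_min, math.sqrt(-2.0 * a * theta)

# The parameters of [29] used in Section 4: theta* = 1.15, mu* = 0.17, sigma* = 0.56 on [-1, 1]
print(to_jacobi(1.15, 0.17, 0.56, -1.0, 1.0))  # -> (1.15, 0.585, -0.13634...)
```

Under this mapping, the rounded values $\theta(t)=1.15$, $\mu(t)=0.59$ and $a(t)=-0.14$ quoted in Section 4 are recovered.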
In the context of conditional expectations, a natural question is whether they can be calculated directly using the transition PDF. We begin with the transition PDF of the Jacobi process, which is associated with the Jacobi polynomials through the eigenfunctions of the Jacobi generator; see more details in [16,17]. Here, we discuss only the simplest case, provided in (4). We use the transition PDF in Leonenko's version [17], which can be rewritten as
$$ p\left(x,T \mid x_t,t\right) = \mathrm{beta}(x)\sum_{j=0}^{\infty} e^{-\lambda_j\left(T-t\right)}\,\omega_j\,P_j^{(\alpha,\beta)}\!\left(2x_t-1\right)P_j^{(\alpha,\beta)}\!\left(2x-1\right), \tag{6} $$
where $\mathrm{beta}(x)=\frac{x^{\beta}\left(1-x\right)^{\alpha}}{B\left(\alpha+1,\beta+1\right)}$ is the invariant distribution, $B\left(\alpha,\beta\right)=\frac{\Gamma\left(\alpha\right)\Gamma\left(\beta\right)}{\Gamma\left(\alpha+\beta\right)}$ is the beta function, and $\Gamma(\cdot)$ is the gamma function, with
$$ \alpha = -\frac{1-\mu}{a}-1, \quad \beta = -\frac{\mu}{a}-1, \quad \lambda_j = j\theta\left(1-\left(j-1\right)a\right), \quad \omega_0 = 1, $$
$$ \omega_j = \left(2j+\alpha+\beta+1\right)\frac{j!\,\Gamma\left(\alpha+\beta+j+1\right)\Gamma\left(\alpha+1\right)\Gamma\left(\beta+1\right)}{\Gamma\left(\alpha+j+1\right)\Gamma\left(\beta+j+1\right)\Gamma\left(\alpha+\beta+2\right)} \quad \text{and} \quad P_j^{(\alpha,\beta)}(z) = \frac{\Gamma\left(\alpha+j+1\right)}{j!\,\Gamma\left(\alpha+\beta+j+1\right)}\sum_{k=0}^{j}\binom{j}{k}\frac{\Gamma\left(\alpha+\beta+j+k+1\right)}{\Gamma\left(\alpha+k+1\right)}\left(\frac{z-1}{2}\right)^{k} \tag{7} $$
for $\alpha,\beta\in(-1,\infty)$; here $\omega_j$ is the reciprocal of the squared norm of $P_j^{(\alpha,\beta)}\left(2x-1\right)$ under the invariant distribution. The well-known parameter in (7) is $\lambda_j$, which is the discrete spectrum of the generator corresponding to the Jacobi polynomial $P_j^{(\alpha,\beta)}(z)$.
As shown in (6) and (7), conditional expectations such as the moments are difficult to calculate using the transition PDF, and this becomes even more complicated for the conditional mixed moments (1). To overcome this issue, the Feynman–Kac formula is applied here.

3. Main Results

As strong empirical evidence indicates that the dynamics of financial quantities tend to vary with time (see more details in [18,19,20]), we extend the dynamics of the Jacobi process (4) to time-varying parameters, obtaining the extended Jacobi process
$$ dX_t = \theta(t)\left(\mu(t)-X_t\right)dt + \sqrt{2a(t)\theta(t)X_t\left(X_t-1\right)}\,dW_t, \tag{8} $$
where $W_t$ is an asymmetric Wiener process, $\theta(t)>0$, $a(t)<0$, and $0<\mu(t)<1$ for all $t$. Well-known instances of SDE processes governed by time-dependent parameters are the extended Ornstein–Uhlenbeck [19] and the extended Cox–Ingersoll–Ross [21] processes. However, to ensure the existence and uniqueness of a solution of (8), it is required that $\theta(t)\left(\mu(t)-X_t\right)$ and $\sqrt{2a(t)\theta(t)X_t\left(X_t-1\right)}$ be Borel-measurable and satisfy the local Lipschitz and linear growth conditions; see more details in [22]. This section is partitioned into three subsections consisting of ten theorems and two lemmas.
This section presents the key methodology used in this paper as well as the main results. To achieve our aim (1), we first study the extended Jacobi process (8). The generalized stochastic correlation process and its properties are then obtained by transforming the extended Jacobi process. Several consequences of the obtained theorems are investigated in the later part of this section.

3.1. Extended Jacobi Process

By solving the PDE in the Feynman–Kac formula, Theorem 1 provides an analytical formula for the $\gamma$th conditional moment of the extended Jacobi process (8), where $\gamma\in\mathbb{R}$. Unlike previous works in the literature, the obtained formula is given as an infinite series, which is assumed at the outset to converge uniformly.
Theorem 1.
Suppose that $X_t$ follows the extended Jacobi process (8). The $\gamma$th conditional moment for $\gamma\in\mathbb{R}$ is
$$ U_E^{\gamma}(x,\tau) := \mathbb{E}\left[X_T^{\gamma} \mid X_s=x\right] = \sum_{k=0}^{\infty} P_k^{\gamma}(\tau)\,x^{\gamma-k}, \tag{9} $$
for $(x,\tau)\in\mathcal{D}_E^{\gamma}\subseteq\mathbb{R}\times[0,\infty)$ and $\tau=T-s$, given that the infinite series in (9) converges uniformly on $\mathcal{D}_E^{\gamma}$, where the coefficients in (9) are expressed by
$$ P_0^{\gamma}(\tau) = e^{\int_0^{\tau}A_0^{\gamma}(\xi)\,d\xi} \quad \text{and} \quad P_k^{\gamma}(\tau) = \int_0^{\tau} e^{\int_{\eta}^{\tau}A_k^{\gamma}(\xi)\,d\xi}\,B_{k-1}^{\gamma}(\eta)\,P_{k-1}^{\gamma}(\eta)\,d\eta \tag{10} $$
for $k\in\mathbb{Z}^+$, where
$$ A_j^{\gamma}(\tau) = \theta\left(T-\tau\right)\left(\gamma-j\right)\left[\left(\gamma-j-1\right)a\left(T-\tau\right)-1\right] \quad \text{and} \quad B_j^{\gamma}(\tau) = \theta\left(T-\tau\right)\left(\gamma-j\right)\left[\mu\left(T-\tau\right)-\left(\gamma-j-1\right)a\left(T-\tau\right)\right]. \tag{11} $$
Proof. 
By the Feynman–Kac formula [23], $U_E^{\gamma}(x,\tau) := U$ in (9) satisfies the PDE
$$ U_\tau - \theta\left(T-\tau\right)\left(\mu\left(T-\tau\right)-x\right)U_x - \theta\left(T-\tau\right)\left(a\left(T-\tau\right)x^2-a\left(T-\tau\right)x\right)U_{xx} = 0 \tag{12} $$
for all $(x,\tau)\in\mathcal{D}_E^{\gamma}$, subject to the initial condition
$$ U_E^{\gamma}(x,0) = \mathbb{E}\left[X_T^{\gamma} \mid X_T=x\right] = x^{\gamma}. \tag{13} $$
By comparing the coefficients of (9) and (13), we obtain the conditions $P_0^{\gamma}(0)=1$ and $P_k^{\gamma}(0)=0$ for $k\in\mathbb{Z}^+$. To solve (12), we use (9) to find the partial derivatives $U_\tau$, $U_x$ and $U_{xx}$, which are
$$ U_\tau = \sum_{k=0}^{\infty}\frac{d}{d\tau}P_k^{\gamma}(\tau)\,x^{\gamma-k}, \quad U_x = \sum_{k=0}^{\infty}\left(\gamma-k\right)P_k^{\gamma}(\tau)\,x^{\gamma-k-1} \quad \text{and} \quad U_{xx} = \sum_{k=0}^{\infty}\left(\gamma-k\right)\left(\gamma-k-1\right)P_k^{\gamma}(\tau)\,x^{\gamma-k-2}. $$
After substituting the above partial derivatives into (12), it can be simplified to obtain
$$ \left(\frac{d}{d\tau}P_0^{\gamma}(\tau) - A_0^{\gamma}(\tau)P_0^{\gamma}(\tau)\right)x^{\gamma} + \sum_{k=1}^{\infty}\left(\frac{d}{d\tau}P_k^{\gamma}(\tau) - A_k^{\gamma}(\tau)P_k^{\gamma}(\tau) - B_{k-1}^{\gamma}(\tau)P_{k-1}^{\gamma}(\tau)\right)x^{\gamma-k} = 0. $$
Under the assumption of the uniform convergence of the infinite series in (9) over $\mathcal{D}_E^{\gamma}$, the above equation can be solved through the following system of recurrence differential equations:
$$ \frac{d}{d\tau}P_0^{\gamma}(\tau) - A_0^{\gamma}(\tau)P_0^{\gamma}(\tau) = 0 \quad \text{and} \quad \frac{d}{d\tau}P_k^{\gamma}(\tau) - A_k^{\gamma}(\tau)P_k^{\gamma}(\tau) - B_{k-1}^{\gamma}(\tau)P_{k-1}^{\gamma}(\tau) = 0 \tag{14} $$
with initial conditions $P_0^{\gamma}(0)=1$ and $P_k^{\gamma}(0)=0$ for $k\in\mathbb{Z}^+$. As the system (14) consists only of general linear first-order differential equations, the coefficients in (9) are obtained by solving the system (14) as a recursive relation, which provides the results (10). □
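When the integrals in (10) have no elementary antiderivative, the triangular system (14) can also be integrated numerically. The following Python sketch is our own illustration (not the authors' MATLAB code); it integrates (14) with SciPy for caller-supplied time-dependent parameters, and the parameter functions in the example at the bottom are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

def P_coeffs(gamma, n_terms, tau, T, theta, mu, a):
    """Integrate the recurrence ODE system (14) for P_0^gamma, ..., P_{n_terms}^gamma.

    theta, mu and a are callables t -> parameter value; A and B follow (11),
    in which the original parameters are evaluated at T - tau."""
    def A(j, s):
        return theta(T - s) * (gamma - j) * ((gamma - j - 1) * a(T - s) - 1.0)

    def B(j, s):
        return theta(T - s) * (gamma - j) * (mu(T - s) - (gamma - j - 1) * a(T - s))

    def rhs(s, P):
        dP = np.empty_like(P)
        dP[0] = A(0, s) * P[0]
        for k in range(1, P.size):
            dP[k] = A(k, s) * P[k] + B(k - 1, s) * P[k - 1]
        return dP

    P0 = np.zeros(n_terms + 1)
    P0[0] = 1.0                      # P_0(0) = 1 and P_k(0) = 0
    sol = solve_ivp(rhs, (0.0, tau), P0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Hypothetical mildly time-dependent parameters; U_E^2(x, 1) = P[0] x^2 + P[1] x + P[2]
P = P_coeffs(gamma=2, n_terms=2, tau=1.0, T=1.0,
             theta=lambda t: 1.15 + 0.1 * t, mu=lambda t: 0.59, a=lambda t: -0.14)
print(P)
```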
Since (9) is an infinite sum, a convergent case needs to be mentioned. Theorem 2 is a special case of Theorem 1 in which $\gamma$ is a non-negative integer. In this case, the infinite sum, which can cause a truncation error in practice, reduces to a finite sum. It should be noted that our proposed formulas for the extended Jacobi process are more general, covering the formulas provided in [2,3].
Theorem 2.
Suppose that $X_t$ follows the extended Jacobi process (8). Then, the $n$th conditional moment for $n\in\mathbb{Z}_0^+$ is
$$ U_E^{n}(x,\tau) = \mathbb{E}\left[X_T^{n} \mid X_s=x\right] = \sum_{k=0}^{n} P_k^{n}(\tau)\,x^{n-k}, \tag{15} $$
for $(x,\tau)\in\mathcal{D}_E^{n}\subseteq\mathbb{R}\times[0,\infty)$, $\tau=T-s$, where the coefficients $P_k^{n}(\tau)$ in (15) are defined by (10) and (11).
Proof. 
By considering $B_j^{\gamma}(\tau)$ in (11), when $k=n=\gamma$ we obtain $B_k^{n}(\tau)=0$. This then implies that the coefficients $P_k^{n}(\tau)=0$ for all integers $k\ge n+1$. Thus, the infinite sum (9) reduces to the finite sum (15). □
The other formula in the form of a finite sum is presented in Corollary 1, for the case where $\gamma = m+\frac{\mu(\tau)}{a(\tau)}+1$ is constant for all $\tau\ge0$ and $m\in\mathbb{Z}^+$.
Corollary 1.
According to Theorem 1, with a constant $\gamma = m+\frac{\mu(\tau)}{a(\tau)}+1$ for all $\tau\ge0$, $m\in\mathbb{Z}^+$, we have
$$ U_E^{\gamma}(x,\tau) = \mathbb{E}\left[X_T^{\gamma} \mid X_s=x\right] = \sum_{k=0}^{m} P_k^{\gamma}(\tau)\,x^{\gamma-k} \tag{16} $$
for $(x,\tau)\in\mathcal{D}_E\subseteq(0,\infty)\times[0,\infty)$, $\tau=T-s\ge0$, where the coefficients $P_k^{\gamma}(\tau)$ are defined by (10) and (11).
Proof. 
The result is obtained directly by inserting $\gamma = m+\frac{\mu(\tau)}{a(\tau)}+1$ into $B_k^{\gamma}(\tau)$ of (11). Then, $B_m^{\gamma}(\tau)=0$ for all $\tau\ge0$. This makes $P_{m+1}^{\gamma}(\tau)=0$, which implies that $P_k^{\gamma}(\tau)=0$ for all $k\ge m+1$. □
To establish the results for the system of linear recurrence differential equations shown in (14) when all parameters are constants, we provide an efficient tool in Lemma 1, which is used to obtain the conditional moments of the Jacobi process (4) as well as their consequences.
Lemma 1.
Let $\alpha_k,\beta_k\in\mathbb{R}$ and $n\in\mathbb{Z}^+$. For distinct $\alpha_0,\alpha_1,\alpha_2,\ldots,\alpha_n$, the recurrence differential equations given by
$$ \frac{d}{dt}y_0(t) = \alpha_0 y_0(t) \quad \text{and} \quad \frac{d}{dt}y_k(t) = \alpha_k y_k(t) + \beta_{k-1}y_{k-1}(t), \tag{17} $$
where the initial conditions are $y_k(0)=\phi_k\in\mathbb{R}$ for $k\in\{0,1,2,\ldots,n\}$, have the solutions
$$ y_0(t) = \phi_0 e^{\alpha_0 t} \quad \text{and} \quad y_k(t) = \sum_{j=0}^{k}\sum_{i=0}^{j}\left(\prod_{\substack{l=i\\ l\neq j}}^{k}\frac{1}{\alpha_j-\alpha_l}\right)\left(\prod_{l=i}^{k-1}\beta_l\right)\phi_i\,e^{\alpha_j t}. $$
Proof. 
For $k\in\{0,1,2,\ldots,n\}$, we can rewrite (17) in the matrix form
$$ \frac{d}{dt}\begin{bmatrix} y_0(t)\\ y_1(t)\\ y_2(t)\\ \vdots\\ y_n(t) \end{bmatrix} = \begin{bmatrix} \alpha_0 & & & & \\ \beta_0 & \alpha_1 & & & \\ & \beta_1 & \alpha_2 & & \\ & & \ddots & \ddots & \\ & & & \beta_{n-1} & \alpha_n \end{bmatrix}\begin{bmatrix} y_0(t)\\ y_1(t)\\ y_2(t)\\ \vdots\\ y_n(t) \end{bmatrix}, \quad \text{where} \quad \begin{bmatrix} y_0(0)\\ y_1(0)\\ y_2(0)\\ \vdots\\ y_n(0) \end{bmatrix} = \begin{bmatrix} \phi_0\\ \phi_1\\ \phi_2\\ \vdots\\ \phi_n \end{bmatrix}, $$
which is denoted by $\frac{d}{dt}\mathbf{y}(t) = L\,\mathbf{y}(t)$, subject to the initial condition $\mathbf{y}(0)=\Phi$. Even though $L$ has an asymmetric structure, it is easy to see that the solution is $\mathbf{y}(t) = e^{tL}\,\Phi$. Note that the coefficient matrix $L$ is lower triangular. It is well known that the eigenvalues of $L$ are its diagonal entries, i.e., $\alpha_j$ for $j\in\{0,1,2,\ldots,n\}$. As these eigenvalues are all distinct, the matrix $L$ is diagonalizable; in other words, $L = S\Lambda S^{-1}$. Thus, the solution can be expressed in the following form:
$$ \mathbf{y}(t) = S\,e^{t\Lambda}\,S^{-1}\,\Phi, \tag{18} $$
where $\Lambda = \mathrm{diag}\{\alpha_0,\alpha_1,\alpha_2,\ldots,\alpha_n\}$ is the eigenvalue matrix of $L$ and $S := [s_{k,j}]$ is the eigenvector matrix of $L$ for all $k,j\in\{0,1,2,\ldots,n\}$. Let the $j$th column of $S$, which is the eigenvector corresponding to $\alpha_j$, be denoted by $\mathbf{s}_j = \left[s_{0,j}, s_{1,j}, s_{2,j}, \ldots, s_{n,j}\right]^{\top}$. Then, $\left(L-\alpha_j I\right)\mathbf{s}_j = \mathbf{0}$, that is,
$$ \begin{bmatrix} \alpha_0-\alpha_j & & & & \\ \beta_0 & \alpha_1-\alpha_j & & & \\ & \ddots & \ddots & & \\ & & \beta_{j-1} & \ddots & \\ & & & \beta_{n-1} & \alpha_n-\alpha_j \end{bmatrix}\begin{bmatrix} s_{0,j}\\ s_{1,j}\\ \vdots\\ s_{j,j}\\ \vdots\\ s_{n,j} \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ \vdots\\ 0\\ \vdots\\ 0 \end{bmatrix}. \tag{19} $$
Because the matrix $L$ has all distinct eigenvalues, it is simple and has a complete set of $n+1$ eigenvectors. Hence, for each eigenvalue $\alpha_j$, the system (19) has only one free variable. In solving it, we let $s_{j,j}$ be the free variable and set its value equal to one. Thus, we can directly solve (19) with $s_{j,j}=1$ to obtain $s_{k,j}=0$ for $k\in\{0,1,2,\ldots,j-1\}$ and
$$ s_{k,j} = \frac{\beta_{k-1}\,s_{k-1,j}}{\alpha_j-\alpha_k} = \prod_{i=j}^{k-1}\frac{\beta_i}{\alpha_j-\alpha_{i+1}}\cdot s_{j,j} = \prod_{i=j}^{k-1}\frac{\beta_i}{\alpha_j-\alpha_{i+1}} $$
for $k\in\{j+1,j+2,j+3,\ldots,n\}$. After varying all column indices $j$ from $0$ to $n$, we have the eigenvector matrix $S$ as a lower triangular matrix with elements $s_{k,j}$. Next, the inverse of the eigenvector matrix, denoted by $S^{-1} := [r_{k,j}]$, can be calculated directly. Accordingly, it is also lower triangular, with entries $r_{k,j}$. We can then explicitly express the elements $s_{k,j}$ and $r_{k,j}$, respectively, as follows:
$$ s_{k,j} = \begin{cases} 0 & \text{if } k<j,\\ 1 & \text{if } k=j,\\ \displaystyle\prod_{i=j}^{k-1}\frac{\beta_i}{\alpha_j-\alpha_{i+1}} & \text{if } k>j, \end{cases} \qquad r_{k,j} = \begin{cases} 0 & \text{if } k<j,\\ 1 & \text{if } k=j,\\ \displaystyle\prod_{i=j}^{k-1}\frac{\beta_i}{\alpha_k-\alpha_i} & \text{if } k>j. \end{cases} $$
Now, we substitute the obtained matrices into (18), namely,
$$ \begin{bmatrix} y_0(t)\\ y_1(t)\\ \vdots\\ y_n(t) \end{bmatrix} = \begin{bmatrix} 1 & & & \\ s_{1,0} & 1 & & \\ \vdots & \ddots & \ddots & \\ s_{n,0} & \cdots & s_{n,n-1} & 1 \end{bmatrix}\begin{bmatrix} e^{\alpha_0 t} & & & \\ & e^{\alpha_1 t} & & \\ & & \ddots & \\ & & & e^{\alpha_n t} \end{bmatrix}\begin{bmatrix} 1 & & & \\ r_{1,0} & 1 & & \\ \vdots & \ddots & \ddots & \\ r_{n,0} & \cdots & r_{n,n-1} & 1 \end{bmatrix}\begin{bmatrix} \phi_0\\ \phi_1\\ \vdots\\ \phi_n \end{bmatrix}. $$
Evidently, we have $y_0(t) = \phi_0 e^{\alpha_0 t}$ and, for $k\in\{1,2,3,\ldots,n\}$,
$$ y_k(t) = \sum_{j=0}^{k}s_{k,j}\sum_{i=0}^{j}r_{j,i}\,\phi_i\,e^{\alpha_j t} = \sum_{j=0}^{k}\sum_{i=0}^{j}\left(\prod_{l=j}^{k-1}\frac{\beta_l}{\alpha_j-\alpha_{l+1}}\right)\left(\prod_{l=i}^{j-1}\frac{\beta_l}{\alpha_j-\alpha_l}\right)\phi_i\,e^{\alpha_j t} = \sum_{j=0}^{k}\sum_{i=0}^{j}\left(\prod_{\substack{l=i\\ l\neq j}}^{k}\frac{1}{\alpha_j-\alpha_l}\right)\left(\prod_{l=i}^{k-1}\beta_l\right)\phi_i\,e^{\alpha_j t} $$
as required. □
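Lemma 1 is easy to stress-test numerically. The following Python sketch is ours (the randomly drawn coefficients are arbitrary) and compares the closed-form $y_k(t)$ against a direct numerical integration of the bidiagonal system (17):

```python
import numpy as np
from math import prod
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 4
alpha = rng.uniform(-3.0, -0.5, n + 1)   # distinct alpha_0, ..., alpha_n
beta = rng.uniform(0.5, 2.0, n)          # beta_0, ..., beta_{n-1}
phi = rng.uniform(-1.0, 1.0, n + 1)      # initial conditions y_k(0) = phi_k

def y_closed(k, t):
    """Closed-form solution of Lemma 1; empty products are taken as one."""
    total = 0.0
    for j in range(k + 1):
        for i in range(j + 1):
            denom = prod(alpha[j] - alpha[l] for l in range(i, k + 1) if l != j)
            total += prod(beta[l] for l in range(i, k)) * phi[i] * np.exp(alpha[j] * t) / denom
    return total

def rhs(t, y):                            # the bidiagonal system (17)
    dy = alpha * y
    dy[1:] += beta * y[:-1]
    return dy

t_end = 1.3
numeric = solve_ivp(rhs, (0.0, t_end), phi, rtol=1e-11, atol=1e-13).y[:, -1]
closed = np.array([y_closed(k, t_end) for k in range(n + 1)])
print(np.max(np.abs(numeric - closed)))   # agreement up to the integrator tolerance
```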
Under the condition $-a\le\mu\le1+a$ mentioned in Section 2, Theorem 3 shows that the formulas provided in (9), (15) and (16) can be expressed in closed form for the Jacobi process (4), i.e., when the parameters $\theta(t)=\theta$, $\mu(t)=\mu$ and $a(t)=a$ are constants.
Theorem 3.
Suppose that $X_t$ follows the Jacobi process (4). Then, the $\gamma$th conditional moment for $\gamma\in\mathbb{R}$ is
$$ U_J^{\gamma}(x,\tau) := \mathbb{E}\left[X_T^{\gamma} \mid X_s=x\right] = \sum_{k=0}^{\infty} P_k^{\gamma}(\tau)\,x^{\gamma-k}, \tag{20} $$
for $(x,\tau)\in\mathcal{D}_J^{\gamma}\subseteq\mathbb{R}\times[0,\infty)$, $\tau=T-s$, which converges uniformly on $\mathcal{D}_J^{\gamma}$, where
$$ P_0^{\gamma}(\tau) = e^{\tau\tilde{A}_0^{\gamma}} \quad \text{and} \quad P_k^{\gamma}(\tau) = \sum_{j=0}^{k}\left(\prod_{\substack{l=0\\ l\neq j}}^{k}\frac{1}{\tilde{A}_j^{\gamma}-\tilde{A}_l^{\gamma}}\right)\left(\prod_{l=0}^{k-1}\tilde{B}_l^{\gamma}\right)e^{\tau\tilde{A}_j^{\gamma}} \tag{21} $$
for $k\in\mathbb{Z}^+$, where
$$ \tilde{A}_j^{\gamma} = \theta\left(\gamma-j\right)\left[\left(\gamma-j-1\right)a-1\right] \quad \text{and} \quad \tilde{B}_j^{\gamma} = \theta\left(\gamma-j\right)\left[\mu-\left(\gamma-j-1\right)a\right]. \tag{22} $$
Proof. 
For the Jacobi process (4), the parameters in (8) become constant, and we set $\theta(t)=\theta$, $\mu(t)=\mu$, and $a(t)=a$. Thus, $A_j^{\gamma}(\tau)$ and $B_j^{\gamma}(\tau)$ provided in (11) are represented, respectively, by $\tilde{A}_j^{\gamma}$ and $\tilde{B}_j^{\gamma}$ as provided in (22). The key idea of the proof is to solve for the coefficients $P_k^{\gamma}(\tau)$ in (14), which can be accomplished straightforwardly using Lemma 1. We consider a partial sum of (20) from $k=0$ to $k=n$. Recalling the system (14), we now have
$$ \frac{d}{d\tau}P_0^{\gamma}(\tau) = \tilde{A}_0^{\gamma}P_0^{\gamma}(\tau) \quad \text{and} \quad \frac{d}{d\tau}P_k^{\gamma}(\tau) = \tilde{A}_k^{\gamma}P_k^{\gamma}(\tau) + \tilde{B}_{k-1}^{\gamma}P_{k-1}^{\gamma}(\tau) \tag{23} $$
with distinct $\tilde{A}_k^{\gamma}$ for all $k\in\{0,1,2,\ldots,n\}$ and initial vector
$$ \left[P_0^{\gamma}(0), P_1^{\gamma}(0), P_2^{\gamma}(0), \ldots, P_n^{\gamma}(0)\right] = \left[1, 0, 0, \ldots, 0\right]. $$
By applying Lemma 1, the solution of the coefficients $P_k^{\gamma}(\tau)$ in (23) is (21) for all $k\in\{0,1,2,\ldots,n\}$. Hence, under the assumption that the infinite series in (20) converges uniformly on $\mathcal{D}_J^{\gamma}$, (21) holds for all $k\in\mathbb{Z}^+$, as required. □
In the case that $\gamma=n\in\mathbb{Z}_0^+$, $U_J^{\gamma}(x,\tau)$ can be expressed as a power series in $x$ that terminates at finite order. This means that Theorem 4 reduces the result (15) in Theorem 2 to a finite sum of order $n$.
Theorem 4.
Suppose that $X_t$ follows the Jacobi process (4). Then, the $n$th conditional moment for $n\in\mathbb{Z}_0^+$ is
$$ U_J^{n}(x,\tau) = \mathbb{E}\left[X_T^{n} \mid X_s=x\right] = e^{\tau\tilde{A}_0^{n}}x^{n} + \sum_{k=1}^{n}\sum_{j=0}^{k}\left(\prod_{\substack{l=0\\ l\neq j}}^{k}\frac{1}{\tilde{A}_j^{n}-\tilde{A}_l^{n}}\right)\left(\prod_{l=0}^{k-1}\tilde{B}_l^{n}\right)e^{\tau\tilde{A}_j^{n}}\,x^{n-k}, \tag{24} $$
where $\tilde{A}_j^{n}$ and $\tilde{B}_j^{n}$ are as provided in (22).
Proof. 
The proof follows directly by combining Theorems 2 and 3. □
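Formula (24) is straightforward to implement. The sketch below is our Python illustration (the paper's published code is MATLAB; the function names are ours), evaluating $U_J^{n}(x,\tau)$ from (21) and (22). As a sanity check, one can verify that for $n=1$ the formula collapses to the familiar mean-reverting expression $xe^{-\theta\tau}+\mu\left(1-e^{-\theta\tau}\right)$, which the last two lines confirm numerically:

```python
from math import prod, exp

def A_tilde(j, n, theta, a):              # Eq. (22)
    return theta * (n - j) * ((n - j - 1) * a - 1.0)

def B_tilde(j, n, theta, mu, a):          # Eq. (22)
    return theta * (n - j) * (mu - (n - j - 1) * a)

def P_coef(k, n, tau, theta, mu, a):
    """Coefficient P_k^n(tau) of Eq. (21); empty products are taken as one."""
    front = prod(B_tilde(l, n, theta, mu, a) for l in range(k))
    return front * sum(
        exp(tau * A_tilde(j, n, theta, a))
        / prod(A_tilde(j, n, theta, a) - A_tilde(l, n, theta, a)
               for l in range(k + 1) if l != j)
        for j in range(k + 1))

def U_J(n, x, tau, theta, mu, a):
    """n-th conditional moment of the Jacobi process (4), Theorem 4 / Eq. (24)."""
    return sum(P_coef(k, n, tau, theta, mu, a) * x ** (n - k) for k in range(n + 1))

theta, mu, a = 1.15, 0.585, -0.1363       # the constants used in Section 4
x, tau = 0.3, 0.8
print(U_J(1, x, tau, theta, mu, a))
print(x * exp(-theta * tau) + mu * (1.0 - exp(-theta * tau)))   # same value
```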
The following corollary follows from Theorem 3 using the same idea as in Corollary 1.
Corollary 2.
According to Theorem 3, with a constant $\gamma = m+\frac{\mu}{a}+1$, $m\in\mathbb{Z}^+$, we have
$$ U_J^{\gamma}(x,\tau) = \mathbb{E}\left[X_T^{\gamma} \mid X_s=x\right] = e^{\tau\tilde{A}_0^{\gamma}}x^{\gamma} + \sum_{k=1}^{m}\sum_{j=0}^{k}\left(\prod_{\substack{l=0\\ l\neq j}}^{k}\frac{1}{\tilde{A}_j^{\gamma}-\tilde{A}_l^{\gamma}}\right)\left(\prod_{l=0}^{k-1}\tilde{B}_l^{\gamma}\right)e^{\tau\tilde{A}_j^{\gamma}}\,x^{\gamma-k}, \tag{25} $$
where $\tilde{A}_j^{\gamma}$ and $\tilde{B}_j^{\gamma}$ are as provided in (22).
Proof. 
The proof follows directly by combining the ideas of the proofs of Theorem 3 and Corollary 1. □
Remark 1.
In the case that $\gamma = m+\frac{\mu}{a}+1\in\mathbb{N}$, since $0<\mu<1$ and $a<0$, we have $\gamma\le m$. The suitable theorem for this case is Theorem 4. In fact, we can also use Corollary 2 with the coefficients of $x^{\gamma-k}$ equal to $0$ for all $k\in\{\gamma+1,\gamma+2,\gamma+3,\ldots,m\}$.
In addition, Theorem 5 transforms (24) in Theorem 4 into the unconditional moment by letting $\tau\to\infty$; the obtained result no longer depends on $x$.
Theorem 5.
Suppose that $X_t$ follows the Jacobi process (4). Then, the $n$th unconditional moment at equilibrium for $n\in\mathbb{Z}_0^+$, $0<x<1$ and $\tau=T-s\ge0$ is provided by
$$ \lim_{\tau\to\infty}U_J^{n}(x,\tau) = \lim_{T\to\infty}\mathbb{E}\left[X_T^{n} \mid X_s=x\right] = \prod_{l=0}^{n-1}\frac{\mu-al}{1-al}. \tag{26} $$
Proof. 
According to (24) in Theorem 4, because $\tilde{A}_j^{n}<0$ for all $j<n$, the coefficient terms of $x^{n-k}$ provided in (21) approach $0$ as $\tau\to\infty$ for $j,k\in\{0,1,2,\ldots,n-1\}$, except in the case that $k=j=n$. We have $\tilde{A}_n^{n}=0$; thus
$$ \lim_{\tau\to\infty}U_J^{n}(x,\tau) = \lim_{\tau\to\infty}\left(\prod_{\substack{l=0\\ l\neq n}}^{n}\frac{1}{\tilde{A}_n^{n}-\tilde{A}_l^{n}}\right)\left(\prod_{l=0}^{n-1}\tilde{B}_l^{n}\right)e^{\tau\tilde{A}_n^{n}} = \left(-1\right)^{n}\prod_{l=0}^{n-1}\frac{\tilde{B}_l^{n}}{\tilde{A}_l^{n}} = \prod_{l=0}^{n-1}\frac{\mu-al}{1-al} $$
as required. □
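A quick numerical confirmation of (26): for large $\tau$, (24) should flatten onto the stationary product, independently of $x$. The following sketch is ours; a compact re-statement of the Theorem 4 helper is included so that the block is self-contained:

```python
from math import prod, exp

def A(j, n, th, a): return th * (n - j) * ((n - j - 1) * a - 1.0)
def B(j, n, th, mu, a): return th * (n - j) * (mu - (n - j - 1) * a)

def U_J(n, x, tau, th, mu, a):            # Theorem 4 / Eq. (24)
    return sum(
        prod(B(l, n, th, mu, a) for l in range(k))
        * sum(exp(tau * A(j, n, th, a))
              / prod(A(j, n, th, a) - A(l, n, th, a) for l in range(k + 1) if l != j)
              for j in range(k + 1))
        * x ** (n - k)
        for k in range(n + 1))

th, mu, a, n = 1.15, 0.585, -0.1363, 3
for x in (0.1, 0.5, 0.9):
    print(U_J(n, x, 50.0, th, mu, a))                       # same value for every x
print(prod((mu - a * l) / (1 - a * l) for l in range(n)))   # stationary product, Eq. (26)
```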

3.2. Generalized Stochastic Correlation Process

Theorem 6 provides a relation between the extended Jacobi process (8) and the generalized stochastic correlation process through Itô's lemma, along with a closed-form formula for the conditional moments of the latter.
Theorem 6.
Let $X_t$ follow the extended Jacobi process (8), where $X_t\in(0,1)$ for all $t\in[s,T]$. Suppose that $\rho_t = \rho_{\min} + \left(\rho_{\max}-\rho_{\min}\right)X_t$ for all $t\in[s,T]$. Then, (8) becomes a generalized stochastic correlation process
$$ d\rho_t = \theta(t)\left(\left(\rho_{\max}-\rho_{\min}\right)\mu(t) + \rho_{\min} - \rho_t\right)dt + \sqrt{2a(t)\theta(t)\left(\rho_t-\rho_{\max}\right)\left(\rho_t-\rho_{\min}\right)}\,dW_t, \tag{27} $$
and $\rho_t\in(\rho_{\min},\rho_{\max})$ for all $t\in[s,T]$. In addition, its $n$th conditional moment is
$$ U_G^{n}(\rho,\tau) := \mathbb{E}\left[\rho_T^{n} \mid \rho_s=\rho\right] = \begin{cases} \rho_{\max}^{n}\,U_E^{n}(x,T-s), & \text{for } \rho_{\min}=0,\\[6pt] \rho_{\min}^{n}\displaystyle\sum_{k=0}^{n}\binom{n}{k}\left(\frac{\rho_{\max}-\rho_{\min}}{\rho_{\min}}\right)^{k}U_E^{k}(x,T-s), & \text{for } \rho_{\min}\neq0, \end{cases} \tag{28} $$
where $x = \frac{\rho-\rho_{\min}}{\rho_{\max}-\rho_{\min}}$, $\tau=T-s$ and $U_E^{k}$ is defined in (15).
Proof. 
Applying $\rho_t = \rho_{\min} + \left(\rho_{\max}-\rho_{\min}\right)X_t$ with Itô's lemma provides
$$ \begin{aligned} d\rho_t &= \left(\rho_{\max}-\rho_{\min}\right)dX_t \\ &= \left(\rho_{\max}-\rho_{\min}\right)\left[\theta(t)\left(\mu(t)-X_t\right)dt + \sqrt{2a(t)\theta(t)X_t\left(X_t-1\right)}\,dW_t\right] \\ &= \left(\rho_{\max}-\rho_{\min}\right)\left[\theta(t)\left(\mu(t)-\tfrac{\rho_t-\rho_{\min}}{\rho_{\max}-\rho_{\min}}\right)dt + \sqrt{2a(t)\theta(t)\,\tfrac{\rho_t-\rho_{\min}}{\rho_{\max}-\rho_{\min}}\left(\tfrac{\rho_t-\rho_{\min}}{\rho_{\max}-\rho_{\min}}-1\right)}\,dW_t\right] \\ &= \theta(t)\left(\left(\rho_{\max}-\rho_{\min}\right)\mu(t) + \rho_{\min} - \rho_t\right)dt + \sqrt{2a(t)\theta(t)\left(\rho_t-\rho_{\max}\right)\left(\rho_t-\rho_{\min}\right)}\,dW_t, \end{aligned} $$
as shown in (27). As $\frac{\rho_t-\rho_{\min}}{\rho_{\max}-\rho_{\min}} = X_t\in(0,1)$ for all $t\in[s,T]$, we have $\rho_t\in(\rho_{\min},\rho_{\max})$ for all $t\in[s,T]$. The analytical formula for the conditional moments is determined in two cases. For the case where $\rho_{\min}=0$, we have
$$ \mathbb{E}\left[\rho_T^{n} \mid \rho_s=\rho\right] = \mathbb{E}\left[\left(\rho_{\max}X_T\right)^{n} \mid X_s=x\right] = \rho_{\max}^{n}\,\mathbb{E}\left[X_T^{n} \mid X_s=x\right]. $$
For the other case, $\rho_{\min}\neq0$, the binomial theorem results in
$$ \begin{aligned} \mathbb{E}\left[\rho_T^{n} \mid \rho_s=\rho\right] &= \mathbb{E}\left[\left(\rho_{\min} + \left(\rho_{\max}-\rho_{\min}\right)X_T\right)^{n} \mid X_s=x\right] \\ &= \mathbb{E}\left[\sum_{k=0}^{n}\binom{n}{k}\rho_{\min}^{n-k}\left(\left(\rho_{\max}-\rho_{\min}\right)X_T\right)^{k} \,\middle|\, X_s=x\right] \\ &= \rho_{\min}^{n}\sum_{k=0}^{n}\binom{n}{k}\left(\frac{\rho_{\max}-\rho_{\min}}{\rho_{\min}}\right)^{k}\mathbb{E}\left[X_T^{k} \mid X_s=x\right]. \end{aligned} $$
As $X_t$ follows the extended Jacobi process (8), applying Theorem 2 yields the two cases in (28). □
Remark 2.
It should be noted that the generalized stochastic correlation process (27) is more general than processes (4) and (5). Comparing the generalized stochastic correlation processes (2) and (27) provides $\theta^*(t)=\theta(t)$, $\mu^*(t)=\left(\rho_{\max}-\rho_{\min}\right)\mu(t)+\rho_{\min}$, and $\sigma^*(t)=\sqrt{-2a(t)\theta(t)}$.
In addition, Theorem 6 becomes Theorem 7 under constant parameters; the stationary property as $T\to\infty$ is also studied in Theorem 7.
Theorem 7.
According to Theorem 6 with real constant parameters $\theta(t)=\theta$, $\mu(t)=\mu$ and $a(t)=a$, the $n$th conditional moment is
$$ \mathbb{E}\left[\rho_T^{n} \mid \rho_s=\rho\right] = \begin{cases} \rho_{\max}^{n}\,U_J^{n}(x,T-s), & \text{for } \rho_{\min}=0,\\[6pt] \rho_{\min}^{n}\displaystyle\sum_{k=0}^{n}\binom{n}{k}\left(\frac{\rho_{\max}-\rho_{\min}}{\rho_{\min}}\right)^{k}U_J^{k}(x,T-s), & \text{for } \rho_{\min}\neq0, \end{cases} \tag{29} $$
where $x = \frac{\rho-\rho_{\min}}{\rho_{\max}-\rho_{\min}}$ and $U_J^{k}$ is defined in (24). Moreover,
$$ \lim_{T\to\infty}\mathbb{E}\left[\rho_T^{n} \mid \rho_s=\rho\right] = \begin{cases} \rho_{\max}^{n}\displaystyle\prod_{l=0}^{n-1}\frac{\mu-al}{1-al}, & \text{for } \rho_{\min}=0,\\[6pt] \rho_{\min}^{n}\displaystyle\sum_{k=0}^{n}\binom{n}{k}\left(\frac{\rho_{\max}-\rho_{\min}}{\rho_{\min}}\right)^{k}\prod_{l=0}^{k-1}\frac{\mu-al}{1-al}, & \text{for } \rho_{\min}\neq0. \end{cases} \tag{30} $$
Proof. 
Let $\theta(t)=\theta$, $\mu(t)=\mu$ and $a(t)=a$ be constants. The extended Jacobi process (8) then reduces to the original Jacobi process (4), and (27) immediately reduces to (5). Hence, the conditional moment (28) is transformed into (29). Thus, by applying (29) with Theorem 5, we obtain (30). □
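Formulas (29) and (30) transform the Jacobi moments onto an arbitrary interval. The following Python sketch is ours (the helper U_J re-states (24) so that the block is self-contained) and reproduces the stationary values quoted in Section 4 for the parameters of [29]:

```python
from math import prod, exp, comb

def A(j, n, th, a): return th * (n - j) * ((n - j - 1) * a - 1.0)
def B(j, n, th, mu, a): return th * (n - j) * (mu - (n - j - 1) * a)

def U_J(n, x, tau, th, mu, a):            # Eq. (24)
    return sum(
        prod(B(l, n, th, mu, a) for l in range(k))
        * sum(exp(tau * A(j, n, th, a))
              / prod(A(j, n, th, a) - A(l, n, th, a) for l in range(k + 1) if l != j)
              for j in range(k + 1))
        * x ** (n - k)
        for k in range(n + 1))

def U_G(n, rho, tau, th, mu, a, rmin, rmax):
    """n-th conditional moment of the correlation process, Theorem 7 / Eq. (29)."""
    x = (rho - rmin) / (rmax - rmin)
    if rmin == 0.0:
        return rmax ** n * U_J(n, x, tau, th, mu, a)
    return rmin ** n * sum(comb(n, k) * ((rmax - rmin) / rmin) ** k
                           * U_J(k, x, tau, th, mu, a) for k in range(n + 1))

def U_G_limit(n, th, mu, a, rmin, rmax):  # Eq. (30)
    stat = lambda m: prod((mu - a * l) / (1 - a * l) for l in range(m))
    if rmin == 0.0:
        return rmax ** n * stat(n)
    return rmin ** n * sum(comb(n, k) * ((rmax - rmin) / rmin) ** k * stat(k)
                           for k in range(n + 1))

th, mu, a = 1.15, 0.585, -0.1363          # parameters of [29] mapped to (4), on [-1, 1]
print(U_G(1, 0.0, 50.0, th, mu, a, -1.0, 1.0), U_G_limit(1, th, mu, a, -1.0, 1.0))  # ~0.17
print(U_G(2, 0.0, 50.0, th, mu, a, -1.0, 1.0), U_G_limit(2, th, mu, a, -1.0, 1.0))  # ~0.15
```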
By applying the tower property, we derive an interesting result for the conditional mixed moments (1) of process (2). To the best of our knowledge, no other authors have found a formula as simple as that shown in Theorem 8. First, however, the following lemma is needed.
Lemma 2.
Suppose that $X_t$ follows the extended Jacobi process (8) and $0\le s\le T_1\le T_2$. The conditional mixed moment up to order $n_1+n_2$ for $n_1,n_2\in\mathbb{Z}^+$ is
$$ \mathbb{E}\left[X_{T_1}^{n_1}X_{T_2}^{n_2} \mid X_s=x\right] = \sum_{k=0}^{n_2}\sum_{j=0}^{n_1+n_2-k}P_k^{n_2}\!\left(T_2-T_1\right)P_j^{n_1+n_2-k}\!\left(T_1-s\right)x^{n_1+n_2-k-j}, \tag{31} $$
where the time-dependent coefficients are provided in (10). In the special case of the Jacobi process (4), the coefficients are defined in (21).
Proof. 
Using the tower property for $0\le s<T_1\le T_2$, the conditional mixed moment of the extended Jacobi process (8) can be expressed as
$$ \mathbb{E}\left[X_{T_1}^{n_1}X_{T_2}^{n_2} \mid X_s=x\right] = \mathbb{E}\left[X_{T_1}^{n_1}\,\mathbb{E}\left[X_{T_2}^{n_2} \mid X_{T_1}\right] \,\middle|\, X_s=x\right]. $$
After applying Theorem 2 twice, we have
$$ \begin{aligned} \mathbb{E}\left[X_{T_1}^{n_1}X_{T_2}^{n_2} \mid X_s=x\right] &= \mathbb{E}\left[X_{T_1}^{n_1}\sum_{k=0}^{n_2}P_k^{n_2}\!\left(T_2-T_1\right)X_{T_1}^{n_2-k} \,\middle|\, X_s=x\right] \\ &= \sum_{k=0}^{n_2}P_k^{n_2}\!\left(T_2-T_1\right)\mathbb{E}\left[X_{T_1}^{n_1+n_2-k} \mid X_s=x\right] \\ &= \sum_{k=0}^{n_2}\sum_{j=0}^{n_1+n_2-k}P_k^{n_2}\!\left(T_2-T_1\right)P_j^{n_1+n_2-k}\!\left(T_1-s\right)x^{n_1+n_2-k-j} \end{aligned} $$
as required. □
Theorem 8.
According to Theorem 6 with $0\le s\le T_1\le T_2$, the conditional mixed moment of the generalized stochastic correlation process (27) up to order $n_1+n_2$ for $n_1,n_2\in\mathbb{Z}^+$ is
$$ \mathbb{E}\left[\rho_{T_1}^{n_1}\rho_{T_2}^{n_2} \mid \rho_s=\rho\right] = \begin{cases} \rho_{\max}^{n_1+n_2}\,\mathbb{E}\left[X_{T_1}^{n_1}X_{T_2}^{n_2} \mid X_s=x\right], & \text{for } \rho_{\min}=0,\\[6pt] \rho_{\min}^{n_1+n_2}\displaystyle\sum_{k=0}^{n_1}\sum_{j=0}^{n_2}\binom{n_1}{k}\binom{n_2}{j}\left(\frac{\rho_{\max}-\rho_{\min}}{\rho_{\min}}\right)^{k+j}\mathbb{E}\left[X_{T_1}^{k}X_{T_2}^{j} \mid X_s=x\right], & \text{for } \rho_{\min}\neq0, \end{cases} \tag{32} $$
where $x = \frac{\rho-\rho_{\min}}{\rho_{\max}-\rho_{\min}}$ and the conditional mixed moment $\mathbb{E}\left[X_{T_1}^{n_1}X_{T_2}^{n_2} \mid X_s=x\right]$ of the extended Jacobi process (8) is provided in Lemma 2.
Proof. 
For the case where $\rho_{\min}=0$, the argument is similar to the proof of Theorem 6; it is not difficult to check and is thus omitted here. For the latter case, applying the binomial theorem twice yields
$$ \begin{aligned} \mathbb{E}\left[\rho_{T_1}^{n_1}\rho_{T_2}^{n_2} \mid \rho_s=\rho\right] &= \mathbb{E}\left[\left(\rho_{\min}+\left(\rho_{\max}-\rho_{\min}\right)X_{T_1}\right)^{n_1}\left(\rho_{\min}+\left(\rho_{\max}-\rho_{\min}\right)X_{T_2}\right)^{n_2} \,\middle|\, X_s=x\right] \\ &= \mathbb{E}\left[\sum_{k=0}^{n_1}\binom{n_1}{k}\rho_{\min}^{n_1-k}\left(\left(\rho_{\max}-\rho_{\min}\right)X_{T_1}\right)^{k}\sum_{j=0}^{n_2}\binom{n_2}{j}\rho_{\min}^{n_2-j}\left(\left(\rho_{\max}-\rho_{\min}\right)X_{T_2}\right)^{j} \,\middle|\, X_s=x\right] \\ &= \rho_{\min}^{n_1+n_2}\sum_{k=0}^{n_1}\sum_{j=0}^{n_2}\binom{n_1}{k}\binom{n_2}{j}\left(\frac{\rho_{\max}-\rho_{\min}}{\rho_{\min}}\right)^{k+j}\mathbb{E}\left[X_{T_1}^{k}X_{T_2}^{j} \mid X_s=x\right], \end{aligned} $$
where the analytical formula of the conditional mixed moments $\mathbb{E}\left[X_{T_1}^{k}X_{T_2}^{j} \mid X_s=x\right]$, for $0\le k\le n_1$ and $0\le j\le n_2$, is provided in Lemma 2. This completes the proof. □
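Lemma 2 and Theorem 8 are equally direct to code. Below is our Python sketch of the mixed moment (31) for the X-process (constant parameters); the last two lines check the degenerate case $n_2=0$, which must reduce to the plain $n_1$th moment. The same binomial wrapper as in (29) then lifts the result to $\rho$ via (32):

```python
from math import prod, exp

def A(j, n, th, a): return th * (n - j) * ((n - j - 1) * a - 1.0)
def B(j, n, th, mu, a): return th * (n - j) * (mu - (n - j - 1) * a)

def P_coef(k, n, tau, th, mu, a):         # Eq. (21)
    return prod(B(l, n, th, mu, a) for l in range(k)) * sum(
        exp(tau * A(j, n, th, a))
        / prod(A(j, n, th, a) - A(l, n, th, a) for l in range(k + 1) if l != j)
        for j in range(k + 1))

def mixed_X(n1, n2, x, s, T1, T2, th, mu, a):
    """E[X_{T1}^{n1} X_{T2}^{n2} | X_s = x] via Lemma 2 / Eq. (31)."""
    return sum(
        P_coef(k, n2, T2 - T1, th, mu, a)
        * sum(P_coef(j, n1 + n2 - k, T1 - s, th, mu, a) * x ** (n1 + n2 - k - j)
              for j in range(n1 + n2 - k + 1))
        for k in range(n2 + 1))

th, mu, a = 1.15, 0.585, -0.1363
print(mixed_X(2, 0, 0.3, 0.0, 0.5, 1.0, th, mu, a))                          # n2 = 0 ...
print(sum(P_coef(j, 2, 0.5, th, mu, a) * 0.3 ** (2 - j) for j in range(3)))  # ... equals U_J^2(0.3, 0.5)
```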
Remark 3.
Applying the idea in the proofs of Lemma 2 and Theorem 8, the general formula for the conditional mixed moments $\mathbb{E}\left[\rho_{T_1}^{n_1}\rho_{T_2}^{n_2}\rho_{T_3}^{n_3}\cdots\rho_{T_k}^{n_k} \mid \rho_s=\rho\right]$, where $n_1,n_2,n_3,\ldots,n_k\in\mathbb{Z}_0^+$ and $0\le s<T_1\le T_2\le T_3\le\cdots\le T_k$, can be obtained directly. The advantage of our formula for the conditional mixed moments in Theorem 8 is its simple closed form, which can be used in many applications, especially for estimating functions of the powers of observed processes, as appeared in Sørensen [24], Leonenko and Šuvak [25,26], and Avram et al. [27]. Moreover, in order to study the integrated Jacobi process, the conditional mixed moments need to be evaluated; however, the previously proposed formulas are very complicated (see Forman and Sørensen [4]), whereas our results can be applied easily.
Before finishing this section, we summarize the relationships among the presented formulas in the diagram displayed in Figure 3, which shows the development of the formulas, consisting of ten theorems and two lemmas, categorized according to the processes (2), (4), (5) and (8) in which they are performed.

3.3. Statistical Properties

The conditional variance of the generalized stochastic correlation process (27) can be expressed as
$$ \mathrm{Var}\left[\rho_T \mid \rho_s=\rho\right] = \mathbb{E}\left[\left(\rho_T-\mathbb{E}\left[\rho_T \mid \rho_s\right]\right)^2 \,\middle|\, \rho_s=\rho\right] = U_G^{2}(\rho,T-s) - \left(U_G^{1}(\rho,T-s)\right)^2, $$
where $U_G^{\gamma}(\rho,T-s)$ is defined in Theorem 6. Furthermore, the $n$th moment about the mean, that is, the $n$th central moment, can be expressed as
$$ \mu_n := \mathbb{E}\left[\left(\rho_T-\mathbb{E}\left[\rho_T \mid \rho_s\right]\right)^{n} \,\middle|\, \rho_s=\rho\right] = \sum_{k=0}^{n}\binom{n}{k}\,U_G^{k}(\rho,T-s)\left(-U_G^{1}(\rho,T-s)\right)^{n-k}. $$
Well-known instances of the central moments are the zeroth moment $\mu_0=1$; the 1st central moment $\mu_1=0$; the 2nd central moment $\mu_2=\mathrm{Var}\left[\rho_T \mid \rho_s=\rho\right]$, called the conditional variance; and the 3rd and 4th central moments $\mu_3$ and $\mu_4$, which are associated with the skewness and kurtosis, respectively.
We now move our focus to the conditional covariance and correlation. By applying Theorem 8, for $0\le s<T_1\le T_2$, where $\tau=T_2-s$, $\tau_1=T_1-s$ and $\tau_2=T_2-T_1$, we have
$$ \begin{aligned} \mathrm{Cov}\left[\rho_{T_1},\rho_{T_2} \mid \rho_s=\rho\right] &= \mathbb{E}\left[\left(\rho_{T_1}-\mathbb{E}\left[\rho_{T_1} \mid \rho_s\right]\right)\left(\rho_{T_2}-\mathbb{E}\left[\rho_{T_2} \mid \rho_s\right]\right) \,\middle|\, \rho_s=\rho\right] \\ &= \mathbb{E}\left[\rho_{T_1}\rho_{T_2} \mid \rho_s=\rho\right] - \mathbb{E}\left[\rho_{T_1} \mid \rho_s=\rho\right]\mathbb{E}\left[\rho_{T_2} \mid \rho_s=\rho\right] \\ &= \sum_{k=0}^{1}\sum_{j=0}^{2-k}P_k^{1}(\tau_2)P_j^{2-k}(\tau_1)\,\rho^{2-k-j} - U_G^{1}(\rho,\tau_1)\,U_G^{1}(\rho,\tau_2) \end{aligned} $$
and the conditional correlation of the generalized stochastic correlation process (27) is
$$ \mathrm{Corr}\left[\rho_{T_1},\rho_{T_2} \mid \rho_s=\rho\right] = \frac{\mathrm{Cov}\left[\rho_{T_1},\rho_{T_2} \mid \rho_s=\rho\right]}{\sqrt{\mathrm{Var}\left[\rho_{T_1} \mid \rho_s=\rho\right]\mathrm{Var}\left[\rho_{T_2} \mid \rho_s=\rho\right]}}. $$
It should be noted that the analytical formulas for the conditional covariance and correlation can be extended to analytical formulas for $\mathrm{Cov}\left[\rho_{T_1}^{n_1},\rho_{T_2}^{n_2} \mid \rho_s=\rho\right]$ and $\mathrm{Corr}\left[\rho_{T_1}^{n_1},\rho_{T_2}^{n_2} \mid \rho_s=\rho\right]$, where $n_1,n_2\in\mathbb{Z}^+$. Several of the related applications as estimator tools are mentioned in [24,25,26,27,28].
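These properties follow mechanically from the moment formulas. The sketch below is ours (Python, constant parameters assumed); it assembles the conditional variance, covariance and correlation of $\rho$ from (24), (31) and the binomial lift of Theorem 8, written without dividing by $\rho_{\min}$ so that $\rho_{\min}=0$ is also covered:

```python
from math import prod, exp, comb

def A(j, n, th, a): return th * (n - j) * ((n - j - 1) * a - 1.0)
def B(j, n, th, mu, a): return th * (n - j) * (mu - (n - j - 1) * a)

def P_coef(k, n, tau, th, mu, a):         # Eq. (21)
    return prod(B(l, n, th, mu, a) for l in range(k)) * sum(
        exp(tau * A(j, n, th, a))
        / prod(A(j, n, th, a) - A(l, n, th, a) for l in range(k + 1) if l != j)
        for j in range(k + 1))

def U_J(n, x, tau, th, mu, a):            # Eq. (24)
    return sum(P_coef(k, n, tau, th, mu, a) * x ** (n - k) for k in range(n + 1))

def mixed_X(n1, n2, x, t1, t2, th, mu, a):  # Eq. (31) with s = 0
    return sum(P_coef(k, n2, t2 - t1, th, mu, a)
               * U_J(n1 + n2 - k, x, t1, th, mu, a) for k in range(n2 + 1))

def var_cov_corr(rho, t1, t2, th, mu, a, rmin, rmax):
    d = rmax - rmin
    x = (rho - rmin) / d
    m = lambda n, t: sum(comb(n, k) * rmin ** (n - k) * d ** k
                         * U_J(k, x, t, th, mu, a) for k in range(n + 1))
    mix = sum(comb(1, k) * comb(1, j) * rmin ** (2 - k - j) * d ** (k + j)
              * mixed_X(k, j, x, t1, t2, th, mu, a)
              for k in range(2) for j in range(2))
    var = lambda t: m(2, t) - m(1, t) ** 2
    cov = mix - m(1, t1) * m(1, t2)
    return var(t1), cov, cov / (var(t1) * var(t2)) ** 0.5

print(var_cov_corr(0.17, 0.5, 1.0, 1.15, 0.585, -0.1363, -1.0, 1.0))
```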

4. Experimental Validation

As the results proposed in Section 3 are mainly based on the extended Jacobi process (8), this experimental validation section discusses this process first. In this experiment, we applied the Euler–Maruyama (EM) discretization method with MC simulations to process (8). Let $\hat{X}_t$ be a time-discretized approximation of $X_t$ generated on the time interval $[0,T]$ divided into $N$ steps, i.e., $0=t_0<t_1<t_2<\cdots<t_N=T$. Then, the EM approximation is defined by
$$ \hat{X}_{t_{i+1}} = \hat{X}_{t_i} + \theta(t_i)\left(\mu(t_i)-\hat{X}_{t_i}\right)\Delta t + \sqrt{2a(t_i)\theta(t_i)\hat{X}_{t_i}\left(\hat{X}_{t_i}-1\right)}\sqrt{\Delta t}\,Z_{i+1}, \tag{33} $$
where the initial value is $\hat{X}_{t_0}=X_{t_0}$, $\Delta t = t_{i+1}-t_i$ is the size of the time step, and the $Z_i$ are independent standard normal random variables. We illustrate the validation of the 1st moment ($n=1$) of formula (15) via the parameters studied by Ardian and Kumral [29] for the evolution of gold prices and interest rates. For the generalized stochastic correlation process (2), their estimated parameters are $\theta^*(t)=1.15$, $\mu^*(t)=0.17$ and $\sigma^*(t)=0.56$ with $\rho_{\min}=-1$ and $\rho_{\max}=1$. Thus, for the extended Jacobi process (8), these parameters correspond to $\theta(t)=1.15$, $\mu(t)=0.59$ and $a(t)=-0.14$ for $t\in[0,\tau]$; note that these parameters are all constants. This then corresponds to the Jacobi process (4) as well, for which the 1st conditional moment can be computed directly using formula (24). This work was implemented in MATLAB; the code is available in the GitHub repository https://github.com/TyMathAD/Conditional_Mixed_Moments (accessed on 21 April 2022).
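For readers without MATLAB, the following Python sketch (ours) reproduces the spirit of this comparison: it simulates (4) with the EM scheme (33) under the constant parameters above and measures the absolute error of the MC mean against the first conditional moment from (24), here written in its reduced $n=1$ form. Clipping the iterates to $[0,1]$ is our own practical guard against discretization overshoot and is not part of the scheme in the text:

```python
import numpy as np

def em_jacobi_mean(x0, tau, theta, mu, a, n_steps=10_000, n_paths=10_000, seed=1):
    """MC estimate of E[X_tau | X_0 = x0] for the Jacobi process (4) via the EM scheme (33)."""
    rng = np.random.default_rng(seed)
    dt = tau / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        diffusion = np.sqrt(np.maximum(2.0 * a * theta * x * (x - 1.0), 0.0))
        x = x + theta * (mu - x) * dt + diffusion * np.sqrt(dt) * rng.standard_normal(n_paths)
        x = np.clip(x, 0.0, 1.0)          # our guard; see the note above
    return x.mean()

theta, mu, a = 1.15, 0.585, -0.1363
x0, tau = 0.3, 1.0
mc = em_jacobi_mean(x0, tau, theta, mu, a)
exact = x0 * np.exp(-theta * tau) + mu * (1.0 - np.exp(-theta * tau))  # U_J^1 from (24)
print(abs(mc - exact))   # shrinks as n_paths grows, mirroring Figure 4
```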
To test the efficiency of the 1st moment formula $U_J^{1}(x,\tau)$, we compared the obtained results with MC simulations at various points $(x,\tau)$, where $x,\tau\in\{0.1,0.2,0.3,\ldots,1\}$. These simulations were run with time step $\Delta t=0.0001$ and sample path counts of 100, 1000, and 10,000, as depicted in Figure 4, which shows contour plots of the absolute errors between our formula and the MC simulations. From Figure 4, we can clearly see that the contour colors tend toward the dark blue shade for larger path numbers, which means that the absolute errors approach zero. Figure 4a–c produces average absolute errors equal to $1.77\times10^{-2}$, $5.67\times10^{-3}$ and $6.42\times10^{-4}$, respectively. Hence, the MC simulations converge to our formula.
In this validation, the results of our formula and of the MC simulations based on the EM method (33) were computed in MATLAB R2021a on a laptop with the following specifications: Intel(R) Core(TM) i7-5700HQ CPU @2.70 GHz, 16.0 GB RAM, Windows 10, 64-bit operating system. The computational run time of our analytical formula is around 0.0145 s, while the MC simulations consume run times of 1.43, 4.32, and 40.21 s for 100, 1000, and 10,000 sample paths, respectively. Thus, the MC simulations are tremendously more expensive than our formula, especially for large path numbers. Notably, the MC simulation with just 100 paths spent almost 100 times more computing time than our formula. Hence, for more accurate results, the use of MC simulations may not be a good choice in terms of computing time. In contrast, the proposed formula is independent of any discretization and has a very low computational cost. Therefore, the formulas presented here are efficient and suitable for practical use.
Moreover, we used the above parameters to compute the 1st and 2nd conditional moments, $U_G^{1}(\rho,\tau)$ and $U_G^{2}(\rho,\tau)$, in order to model the correlation between gold prices and interest rates. We computed these moments utilizing the presented formula (28) at different values $(\rho,\tau)\in[-1,1]\times[0,1.5]$. The obtained results are demonstrated by surface plots in Figure 5. In addition, we plotted the graphical contours of the 1st and 2nd conditional moments for $(\rho,\tau)\in[-1,1]\times[0,5]$. It can be seen that as $\tau$ increases, the obtained results converge to a certain value for both moments; in Figure 6, the contour colors tend toward a light blue shade with an approximate value of $0.1$. Using Theorem 7, it is confirmed that as $\tau\to\infty$ these 1st and 2nd conditional moments converge to $0.17$ and $0.15$, respectively, corresponding to Figure 6.
Note that one primary concern regarding our proposed formula in Theorem 1 is that the coefficients $P_k^{\gamma}(\tau)$ for $k\in\mathbb{Z}_0^+$ in (10) may not be exactly integrable. Thus, numerical integration methods, such as the trapezoidal rule or Simpson's rule, are needed to handle the integral terms. One efficient method that we suggest for these integrals is the Chebyshev integration method provided by Boonklurb et al. [30,31,32,33], which provides higher accuracy than other integration methods under the same discretization.

5. Method of Moments Estimator

In certain cases, the MM is superseded by Fisher's method when estimating the parameters of a known family of probability distributions, as MLEs have a higher probability of being close to the quantities to be estimated. However, in certain cases, such as the gamma and beta distributions, MLEs may be intractable without computer assistance. In such cases, estimation using the MM can be used as a first approximation to the solutions of the MLE; the MM and the method of MLEs are symbiotic in this respect.
The key idea of the MM is to calibrate a set of parameter values based on suitable conditional moments. In this section, suppose that we need to calibrate an unknown parameter vector $\boldsymbol{\theta} = \left(\theta^*,\mu^*,\sigma^*\right)\in\mathbb{R}^3$ of the generalized stochastic correlation process (2), where the true parameter value is the vector $\boldsymbol{\theta}_0$, based on discretely observed data $\{\rho_{t_i}\}_{1\le i\le n}$, where $t_{i-1}<t_i$ for all $i\in\{1,2,3,\ldots,n\}$. Normally, the basic conditional moments selected for calibration may be the first three conditional moments of the form provided in Theorem 6, which suffice to solve for the unknown vector $\boldsymbol{\theta}$; however, in 2004, Gouriéroux and Valéry [3] suggested choosing conditional moments that match the identities of the observed data of interest.
They further determined sufficient moments to be adequately informative, namely the 1st, 2nd, 3rd (skewness) and 4th (kurtosis) conditional moments together with the mixed moments $\mathbb{E}\left[\rho_{t_i}\rho_{t_{i-1}}^2 \mid \rho_{t_0}=\rho\right]$ and $\mathbb{E}\left[\rho_{t_i}^2\rho_{t_{i-1}}^2 \mid \rho_{t_0}=\rho\right]$, to capture the dynamics of the risk premium and the possible volatility persistence, respectively. Their set of conditional moments selected for implementing the MM is
$$ f\left(\rho_{t_i},\boldsymbol{\theta}\right) = \begin{bmatrix} \rho_{t_i}-\mathbb{E}\left[\rho_{t_i} \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}\rho_{t_{i-1}}-\mathbb{E}\left[\rho_{t_i}\rho_{t_{i-1}} \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}^2-\mathbb{E}\left[\rho_{t_i}^2 \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}\rho_{t_{i-1}}^2-\mathbb{E}\left[\rho_{t_i}\rho_{t_{i-1}}^2 \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}^2\rho_{t_{i-1}}-\mathbb{E}\left[\rho_{t_i}^2\rho_{t_{i-1}} \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}^2\rho_{t_{i-1}}^2-\mathbb{E}\left[\rho_{t_i}^2\rho_{t_{i-1}}^2 \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}^3-\mathbb{E}\left[\rho_{t_i}^3 \mid \rho_{t_0}=\rho\right]\\ \rho_{t_i}^4-\mathbb{E}\left[\rho_{t_i}^4 \mid \rho_{t_0}=\rho\right] \end{bmatrix}, $$
with the conditional moments and mixed moments appearing above having been proposed in Theorems 6 and 8, respectively. In order to estimate the parameters, we suppose that the conditional expectations of $f\left(\rho_{t_i},\boldsymbol{\theta}\right)$, i.e., $\mathbb{E}\left[f\left(\rho_{t_i},\boldsymbol{\theta}\right) \mid \mathcal{F}_t\right]$, exist as real numbers for all $i\in\{1,2,3,\ldots,n\}$, satisfying $\mathbb{E}\left[f\left(\rho_{t_i},\boldsymbol{\theta}_0\right) \mid \mathcal{F}_t\right]=0$, and let
$$ f_n\left(\boldsymbol{\theta}\right) = \frac{1}{n}\sum_{i=1}^{n}f\left(\rho_{t_i},\boldsymbol{\theta}\right). $$
The MM estimator of $\boldsymbol{\theta}_0$ based on the conditional expectation $\mathbb{E}\left[f\left(\rho_{t_i},\boldsymbol{\theta}\right) \mid \mathcal{F}_t\right]$ is the solution to the system of equations $f_n\left(\boldsymbol{\theta}\right)=0$. If we cannot solve for the exact value of $\boldsymbol{\theta}$, a good estimate of the true value $\boldsymbol{\theta}_0$, called $\hat{\boldsymbol{\theta}}$, is needed; in other words, we need a $\hat{\boldsymbol{\theta}}$ that makes $f_n\bigl(\hat{\boldsymbol{\theta}}\bigr)$ close to $0$; see more details in [34]. In any event, we suggest using either Newton's method or iterative methods to solve the system of nonlinear equations $f_n\bigl(\hat{\boldsymbol{\theta}}\bigr)=0$.
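To make this last step concrete, the following Python sketch (entirely ours) solves sample moment conditions with SciPy's least-squares routine. The toy `moment_conditions` below matches only the two stationary moments from (30) on $[-1,1]$, so the mean-reversion rate $\theta^*$, which is identified only through conditions linking two time points such as the mixed moments above, is held fixed; a full implementation would instead stack all eight conditions of $f$ using Theorems 6 and 8:

```python
import numpy as np
from scipy.optimize import least_squares

def moment_conditions(params, rho, theta_star):
    """Toy f_n(theta): stationary mean and second moment on [-1, 1], from Eq. (30)."""
    mu_star, sigma_star = params
    a = -sigma_star ** 2 / (2.0 * theta_star)   # map back to the Jacobi parameters
    mu = (mu_star + 1.0) / 2.0
    m1 = 2.0 * mu - 1.0                                      # stationary mean = mu*
    m2 = 1.0 - 4.0 * mu + 4.0 * mu * (mu - a) / (1.0 - a)    # stationary 2nd moment
    return np.array([rho.mean() - m1, (rho ** 2).mean() - m2])

rng = np.random.default_rng(7)
rho_obs = np.clip(0.17 + 0.3 * rng.standard_normal(5000), -0.95, 0.95)  # synthetic data

fit = least_squares(moment_conditions, x0=np.array([0.0, 0.5]),
                    args=(rho_obs, 1.15), bounds=([-0.9, 1e-3], [0.9, 2.0]))
print(fit.x)   # estimated (mu*, sigma*)
```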
It should be noted that in certain cases, infrequently with large sample sizes but not so infrequently with small ones, the estimates provided by the MM may fall outside of the parameter space; in this case, it does not make sense to rely on them. Regarding the properties of the MM and its generalized version, under sufficient conditions the estimators are consistent and asymptotically normally distributed; see more details in [34,35].

6. Conclusions

Without knowledge of the transition PDF, this paper presents a simple and novel approach for obtaining analytical formulas for the conditional moments and mixed moments of the extended Jacobi process. These analytical formulas take concise forms for the Jacobi process. In addition, analytical formulas for the unconditional moments are provided. By applying Itô's lemma to the extended Jacobi process, we obtain the generalized stochastic correlation process, and analytical formulas for its conditional moments are proposed. Statistical properties, namely the conditional variance, central moments, covariance, and correlation, are formulated. The validity of our formulas is shown by comparing the results with MC simulations. Our results can be used to find the parameters of correlation processes between financial product prices, and they provide additional support for those who require statistical tools to study data governed by generalized stochastic correlation processes. Finally, a tool for estimating parameters via the calculation of moments is provided as well.

Author Contributions

Conceptualization, A.D., R.B., K.C. and P.S.; methodology, A.D. and P.S.; software, A.D. and P.S.; validation, A.D., R.B., K.C. and P.S.; formal analysis, A.D., R.B., K.C. and P.S.; investigation, A.D., R.B., K.C. and P.S.; writing—original draft preparation, A.D. and P.S.; writing—review and editing, R.B. and K.C.; visualization, A.D. and P.S.; supervision, R.B. and K.C.; project administration, K.C.; funding acquisition, R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research project is supported by Second Century Fund (C2F), Chulalongkorn University. We are grateful for a variety of valuable suggestions from the anonymous referees which have substantially improved the quality and presentation of the results.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EM    Euler–Maruyama
MC    Monte Carlo
MLE   Maximum likelihood estimator
MM    Method of moments
PDE   Partial differential equation
PDF   Probability density function
SDE   Stochastic differential equation

References

  1. Emmerich, C.V. Modelling Correlation as a Stochastic Process; University of Wuppertal: Wuppertal, Germany, 2006; Volume 6.
  2. Delbaen, F.; Shirakawa, H. An interest rate model with upper and lower bounds. Asia-Pac. Financ. Mark. 2002, 9, 191–209.
  3. Gouriéroux, C.; Valéry, P. Estimation of a Jacobi Process; Technical Report; Université de Montréal: Montreal, QC, Canada, 2004.
  4. Forman, J.L.; Sørensen, M. The Pearson diffusions: A class of statistically tractable diffusion processes. Scand. J. Stat. 2008, 35, 438–465.
  5. Sutthimat, P.; Mekchay, K.; Rujivan, S. Explicit formula for conditional expectations of product of polynomial and exponential function of affine transform of extended Cox–Ingersoll–Ross process. J. Phys. Conf. Ser. 2018, 1132, 012083.
  6. Sutthimat, P.; Rujivan, S.; Mekchay, K.; Rakwongwan, U. Analytical formula for conditional expectations of path-dependent product of polynomial and exponential functions of extended Cox–Ingersoll–Ross process. Res. Math. Sci. 2022, 9, 10.
  7. Sutthimat, P.; Mekchay, K. Closed-form formulas for conditional moments of inhomogeneous Pearson diffusion processes. Commun. Nonlinear Sci. Numer. Simul. 2022, 106, 106095.
  8. Chumpong, K.; Sumritnorrapong, P. Closed-Form Formula for the Conditional Moments of Log Prices under the Inhomogeneous Heston Model. Computation 2022, 10, 46.
  9. Boonklurb, R.; Duangpan, A.; Rakwongwan, U.; Sutthimat, P. A Novel Analytical Formula for the Discounted Moments of the ECIR Process and Interest Rate Swaps Pricing. Fractal Fract. 2022, 6, 58.
  10. Chumpong, K.; Mekchay, K.; Rujivan, S. A simple closed-form formula for the conditional moments of the Ornstein–Uhlenbeck process. Songklanakarin J. Sci. Technol. 2020, 42, 836–845.
  11. Chumpong, K.; Mekchay, K.; Thamrongrat, N. Analytical formulas for pricing discretely-sampled skewness and kurtosis swaps based on Schwartz's one-factor model. Songklanakarin J. Sci. Technol. 2021, 43, 465–470.
  12. Pearson, K. Tables for Statisticians and Biometricians; University Press: Cambridge, England, 1914.
  13. Ditlevsen, S.; Rubio, A.C.; Lansky, P. Transient dynamics of Pearson diffusions facilitates estimation of rate parameters. Commun. Nonlinear Sci. Numer. Simul. 2020, 82, 105034.
  14. Cox, J.C.; Ingersoll, J.E., Jr.; Ross, S.A. A theory of the term structure of interest rates. In Theory of Valuation; World Scientific: Singapore, 2005; pp. 129–164.
  15. Veraart, A.E.; Veraart, L.A. Stochastic volatility and stochastic leverage. Ann. Financ. 2012, 8, 205–233.
  16. Chihara, T.S. An Introduction to Orthogonal Polynomials; Courier Corporation: New York, NY, USA, 2011.
  17. Leonenko, G.M.; Phillips, T.N. High-order approximation of Pearson diffusion processes. J. Comput. Appl. Math. 2012, 236, 2853–2868.
  18. Hansen, L.P.; Scheinkman, J.A. Back to the future: Generating moment implications for continuous-time Markov processes. Econometrica 1995, 63, 767–804.
  19. Hull, J.; White, A. Pricing interest-rate-derivative securities. Rev. Financ. Stud. 1990, 3, 573–592.
  20. Maghsoodi, Y. Solution of the extended CIR term structure and bond option valuation. Math. Financ. 1996, 6, 89–109.
  21. Egorov, A.V.; Li, H.; Xu, Y. Maximum likelihood estimation of time-inhomogeneous diffusions. J. Econom. 2003, 114, 107–139.
  22. Ngoc, P.H.A. Contraction of stochastic differential equations. Commun. Nonlinear Sci. Numer. Simul. 2021, 95, 105613.
  23. Kijima, M. Stochastic Processes with Applications to Finance; CRC Press: Boca Raton, FL, USA, 2016.
  24. Sørensen, M. Prediction-based estimating functions. Econom. J. 2000, 3, 123–147.
  25. Leonenko, N.N.; Šuvak, N. Statistical inference for reciprocal gamma diffusion process. J. Stat. Plan. Inference 2010, 140, 30–51.
  26. Leonenko, N.N.; Šuvak, N. Statistical inference for Student diffusion process. Stoch. Anal. Appl. 2010, 28, 972–1002.
  27. Avram, F.; Leonenko, N.N.; Šuvak, N. Parameter estimation for Fisher–Snedecor diffusion. Statistics 2011, 45, 27–42.
  28. Forman, J.L. Least Squares Estimation for Autocorrelation Parameters with Applications to Sums of Ornstein–Uhlenbeck Type of Processes; Department of Applied Mathematics and Statistics, University of Copenhagen: Copenhagen, Denmark, 2005.
  29. Ardian, A.; Kumral, M. Incorporating stochastic correlations into mining project evaluation using the Jacobi process. Resour. Policy 2020, 65, 101558.
  30. Boonklurb, R.; Duangpan, A.; Treeyaprasert, T. Modified finite integration method using Chebyshev polynomial for solving linear differential equations. J. Numer. Anal. Ind. Appl. Math. 2018, 12, 1–19.
  31. Boonklurb, R.; Duangpan, A.; Gugaew, P. Numerical solution of direct and inverse problems for time-dependent Volterra integro-differential equation using finite integration method with shifted Chebyshev polynomials. Symmetry 2020, 12, 497.
  32. Duangpan, A.; Boonklurb, R.; Treeyaprasert, T. Finite integration method with shifted Chebyshev polynomials for solving time-fractional Burgers' equations. Mathematics 2019, 7, 1201.
  33. Duangpan, A.; Boonklurb, R.; Juytai, M. Numerical solutions for systems of fractional and classical integro-differential equations via finite integration method based on shifted Chebyshev polynomials. Fractal Fract. 2021, 5, 103.
  34. Zsohar, P. Short introduction to the generalized method of moments. Hung. Stat. Rev. 2012, 16, 150–170.
  35. Hazelton, M.L. Methods of Moments Estimation; Springer: Berlin/Heidelberg, Germany, 2011.
Figure 1. Comparative diagram between the traditional methods and the methods proposed in this paper.
Figure 2. Relationship diagram of the Jacobi and generalized stochastic correlation processes.
Figure 3. Relationship diagram of all presented theorems and lemmas in processes (2), (4), (5) and (8).
Figure 4. Contour plotting of absolute errors between our formula and MC simulations with different paths: (a) 100 paths; (b) 1000 paths; (c) 10,000 paths.
Figure 5. Graphical behaviors of the 1st (a) and 2nd (b) conditional moments obtained by our formula.
Figure 6. Contour plotting of the 1st (a) and 2nd (b) conditional moments obtained by our formula.
