Article

Analytically Computing the Moments of a Conic Combination of Independent Noncentral Chi-Square Random Variables and Its Application for the Extended Cox–Ingersoll–Ross Process with Time-Varying Dimension

by
Sanae Rujivan
1,
Athinan Sutchada
1,
Kittisak Chumpong
2,3,* and
Napat Rujeerapaiboon
4
1
Center of Excellence in Data Science for Health Study, Division of Mathematics and Statistics, School of Science, Walailak University, Nakhon Si Thammarat 80161, Thailand
2
Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla 90110, Thailand
3
Statistics and Applications Research Unit, Faculty of Science, Prince of Songkla University, Songkhla 90110, Thailand
4
Department of Industrial Systems Engineering and Management, National University of Singapore, Singapore 117576, Singapore
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1276; https://doi.org/10.3390/math11051276
Submission received: 29 January 2023 / Revised: 28 February 2023 / Accepted: 5 March 2023 / Published: 6 March 2023
(This article belongs to the Special Issue Probability, Statistics and Their Applications 2021)

Abstract

This paper focuses mainly on the problem of computing the $\gamma$th, $\gamma > 0$, moment of a random variable $Y_n := \sum_{i=1}^{n} \alpha_i X_i$ in which the $\alpha_i$'s are positive real numbers and the $X_i$'s are independent and distributed according to noncentral chi-square distributions. Finding an analytical approach for solving such a problem has remained a challenge due to the lack of understanding of the probability distribution of $Y_n$, especially when not all $\alpha_i$'s are equal. We analytically solve this problem by showing that the $\gamma$th moment of $Y_n$ can be expressed in terms of generalized hypergeometric functions. Additionally, we extend our result to computing the $\gamma$th moment of $Y_n$ when $X_i$ is a combination of statistically independent $Z_i^2$ and $G_i$ in which the $Z_i$'s are distributed according to normal or Maxwell–Boltzmann distributions and the $G_i$'s are distributed according to gamma, Erlang, or exponential distributions. Our paper has an immediate application in interest rate modeling, where we can explicitly provide the exact transition probability density function of the extended Cox–Ingersoll–Ross (ECIR) process with time-varying dimension as well as the corresponding $\gamma$th conditional moment. Finally, we conduct Monte Carlo simulations to demonstrate the accuracy and efficiency of our explicit formulas through several numerical tests.

1. Introduction

Consider a random variable $Y_n$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ by
$$Y_n := \sum_{i=1}^{n} \alpha_i X_i \qquad (1)$$
for an integer $n \ge 2$, where $\alpha_i > 0$, and each $X_i$ is distributed according to a noncentral chi-square distribution with $\nu_i > 0$ degrees of freedom and a noncentrality parameter $\delta_i \ge 0$, i.e., $X_i \sim \chi^2_{\nu_i}(\delta_i)$, for all $i = 1, \ldots, n$. We assume that the $X_i$'s are independent with respect to the $\sigma$-field $\mathcal{F}$ and probability measure $\mathbb{P}$.
This paper focuses mainly on the problem of computing the $\gamma$th moment, $\gamma \in \mathbb{R}^+$, of $Y_n$ given by
$$E[Y_n^{\gamma}] = \int_0^{\infty} y^{\gamma} f_{Y_n}(y)\, dy, \qquad (2)$$
where $f_{Y_n}(y)$ is the probability density function (PDF) of $Y_n$, and $E[X]$ denotes the expected value of a random variable $X$ with respect to the probability measure $\mathbb{P}$. Utilizing the properties of noncentral chi-square random variables [1], we have that $E[X_i^m]$ is finite for every non-negative integer $m$ and $i = 1, \ldots, n$. This result and the independence of the $X_i$'s ensure that the integral on the RHS of (2) is finite for all $\gamma \in \mathbb{R}^+$.
The random variable $Y_n$ is found in many statistical applications. In hypothesis testing, several test statistics converge in distribution toward a conic combination of independent noncentral chi-square random variables (see, e.g., [2,3,4,5]). Moreover, $f_{Y_n}(y)$ and $E[Y_n^{\gamma}]$ play an interesting role in financial applications; see, e.g., [6,7,8,9,10,11]. Very recently, Rujivan and Rakwongwan [11], Chumpong et al. [6], and Rujivan [10] showed that the log-return realized variance, when the underlying asset follows the extended Black–Scholes model, can be expressed in terms of a conic combination of independent noncentral chi-square random variables. As a result, they derived the exact PDF of the log-return realized variance as well as an explicit formula for its $\gamma$th moment for $\gamma = \frac{1}{2}, 1$, yielding the first explicit pricing formulas for volatility swaps, volatility options, variance swaps, and variance options, respectively. Furthermore, Rujivan and Rakwongwan [11] utilized the approach proposed in Rujivan [12] for constructing an approximate formula for pricing volatility swaps under the Heston stochastic volatility model. On the other hand, Rujivan [10] proposed an approximate formula for pricing volatility swaps when the underlying asset evolves according to the constant elasticity of variance model.
We now return to the problem of computing the desired moments. Computing (2) is trivial when the $\alpha_i$'s are all equal and $\gamma = m$ is a non-negative integer: in that case, $Y_n$ reduces to a scaled noncentral chi-square random variable whose PDF is known, which in turn implies that $E[Y_n^m]$ can be obtained in explicit form, since the integral on the RHS of (2) can be worked out when $\gamma$ is a non-negative integer (see, e.g., [1,13]). On the other hand, it has been repeatedly shown in the literature over several decades (see, for example, [13,14,15,16,17,18,19,20,21,22,23,24,25,26]) that finding an analytical approach for solving the nonlinear problem (2) is significantly more intricate when some of the $\alpha_i$'s are unequal, since the PDF of $Y_n$ is then not well known and it is not clear which existing representations of the PDF lend themselves to the calculation of the moments. This underlines the importance of our study.
Based on the above discussion, our paper has three aims which we now describe. The principal aim is to provide practitioners with an accurate and efficient formula for computing $E[Y_n^{\gamma}]$ for any integer $n \ge 2$ and $\gamma \in \mathbb{R}^+$, including both integer and noninteger values. The next aim is to illustrate further applications of $Y_n$ in interest rate modeling by adopting a Laguerre expansion for the PDF of $Y_n$, proposed in this paper, to explicitly derive the transition probability density function (TPDF) of the extended Cox–Ingersoll–Ross (ECIR) process with time-varying dimension, which, to our knowledge, has never been found in explicit form until now. The ECIR process was intensively studied by Maghsoodi [27], where its TPDF was given in explicit form and used for pricing bond options only for the case in which the dimension is constant. The final aim is to utilize the explicit formula for $E[Y_n^{\gamma}]$ obtained in this paper to derive a novel formula for computing the $\gamma$th conditional moment of the ECIR process, one that is accurate and more efficient in terms of computational complexity than existing methods, such as those in [12,28].
The rest of the paper is structured as follows. In Section 2, we derive a Laguerre expansion for the PDF of $Y_n$. In Section 3, by utilizing the Laguerre expansion, we write $E[Y_n^{\gamma}]$, $n \in \mathbb{N}$, $\gamma \in \mathbb{R}^+$, in terms of generalized hypergeometric functions and analytically estimate its truncation errors. In that section, we also extend our result to computing $E[Y_n^{\gamma}]$ when $X_i$ is a conic combination of $Z_i^2$ and $G_i$ in which the $Z_i$'s are distributed according to normal or Maxwell–Boltzmann distributions while the $G_i$'s are distributed according to gamma, Erlang, or exponential distributions, assuming that the $Z_i$'s and $G_i$'s are independent. Section 4 illustrates a usage of our results in analyzing ECIR processes. This includes the first explicit formula for the TPDF of the ECIR process with time-varying dimension and a novel explicit formula for the $\gamma$th conditional moment of the ECIR process. In Section 5, all the explicit formulas proposed in this paper are validated against either Monte Carlo (MC) simulations or other formulas proposed in the literature through several numerical tests. The paper is concluded in Section 6. All proofs are provided in the appendices.

2. The PDF of $Y_n$

The PDF of $Y_n$ defined in (1) has been studied by many authors for several decades with various representations (see, for instance, [16,17,21,24,25,26]). In this paper, we use the approach proposed in [16] to obtain a Laguerre expansion for the PDF of $Y_n$.
 Theorem 1.
The PDF of $Y_n$ given in (1) can be expressed as
$$f_{Y_n}(y) = f_{Y_n}^{(\beta)}(y) := \frac{e^{-\frac{y}{2\beta}}\, y^{\frac{\nu}{2}-1}}{(2\beta)^{\frac{\nu}{2}}} \sum_{k=0}^{\infty} \frac{k!}{\Gamma\!\left(\frac{\nu}{2}+k\right)}\, c_k\, L_k^{\left(\frac{\nu}{2}-1\right)}\!\left(\frac{y}{2\beta}\right), \qquad y > 0, \qquad (3)$$
where $\nu := \sum_{i=1}^{n} \nu_i$, $\beta > 0$ can be arbitrarily chosen, $\Gamma(x)$ is the gamma function, and $L_k^{(\eta)}(x)$ is the generalized Laguerre function (see [29]). In addition, the $c_k$, $k = 0, 1, 2, \ldots$, satisfy the recurrence relations
$$c_0 = 1 \qquad (4)$$
and
$$c_k = \frac{1}{k} \sum_{j=0}^{k-1} c_j\, d_{k-j}, \qquad k \ge 1, \qquad (5)$$
where
$$d_1 = -\frac{1}{2\beta} \sum_{i=1}^{n} \delta_i \alpha_i + \frac{1}{2} \sum_{i=1}^{n} \nu_i \left(1 - \frac{\alpha_i}{\beta}\right) \qquad (6)$$
and
$$d_j = -\frac{j}{2}\, \frac{1}{\beta^{j}} \sum_{i=1}^{n} \delta_i \alpha_i \left(\beta - \alpha_i\right)^{j-1} + \frac{1}{2} \sum_{i=1}^{n} \nu_i \left(1 - \frac{\alpha_i}{\beta}\right)^{j}, \qquad j \ge 2. \qquad (7)$$
Proof. 
The proof is provided in Appendix A. □
A couple of remarks should be made about the free parameter $\beta$. First, we note that the impact of $\beta$ goes beyond Equation (3), as it also influences the $c_k$ coefficients through the recurrence relations (4)–(7). Second, the value of $\beta$ can also influence the convergence rate of (3). Indeed, if the $c_k$ coefficients diverge, then it would be more challenging to reliably approximate the infinite sum on the right-hand side of (3) by its truncated version. As a result of this, we follow the procedures of [16] to study and promote appropriate choices of $\beta$ in Section 3.
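To make the recurrence concrete, the following is a minimal numerical sketch (written in Python, whereas the paper's own experiments were coded in MATHEMATICA) of the truncated expansion (3) with the coefficients (4)–(7). The function names `laguerre_coeffs` and `pdf_Yn` and the truncation level `K` are our own choices, not part of the paper.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def laguerre_coeffs(alpha, nu, delta, beta, K):
    """c_0, ..., c_K from the recurrence (4)-(7) for Y_n = sum_i alpha_i * X_i."""
    alpha, nu, delta = map(np.asarray, (alpha, nu, delta))
    d = np.zeros(K + 1)                                   # d[0] is unused
    for j in range(1, K + 1):
        d[j] = (-0.5 * j * np.sum(delta * alpha * (beta - alpha) ** (j - 1)) / beta ** j
                + 0.5 * np.sum(nu * (1.0 - alpha / beta) ** j))
    c = np.zeros(K + 1)
    c[0] = 1.0
    for k in range(1, K + 1):
        c[k] = np.dot(c[:k], d[k:0:-1]) / k               # c_k = (1/k) sum_{j<k} c_j d_{k-j}
    return c

def pdf_Yn(y, alpha, nu, delta, beta, K=100):
    """Truncated Laguerre-series approximation (3) of the PDF of Y_n at a point y > 0."""
    nu_tot = float(np.sum(nu))
    c = laguerre_coeffs(alpha, nu, delta, beta, K)
    x = y / (2.0 * beta)
    series = sum(np.exp(gammaln(k + 1) - gammaln(nu_tot / 2.0 + k))   # k! / Gamma(nu/2 + k)
                 * c[k] * eval_genlaguerre(k, nu_tot / 2.0 - 1.0, x)
                 for k in range(K + 1))
    return np.exp(-x) * y ** (nu_tot / 2.0 - 1.0) / (2.0 * beta) ** (nu_tot / 2.0) * series
```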

3. The $\gamma$th Moment of $Y_n$

Computing $E[Y_n^{\gamma}]$ as given in (2) can be achieved with any desired level of accuracy when the PDF of $Y_n$ is explicitly known. In this section, we use the Laguerre expansion (3) to obtain an explicit formula for $E[Y_n^{\gamma}]$ as well as to showcase some interesting applications of the expansion in the following subsections.

3.1. Our Explicit Formula for $E[Y_n^{\gamma}]$

From the Laguerre expansion (3), together with some properties of the generalized hypergeometric function
$${}_2F_1\!\left(a_1, a_2; b_1; z\right) = \sum_{k=0}^{\infty} \frac{(a_1)_k (a_2)_k}{(b_1)_k}\, \frac{z^k}{k!},$$
where $(\cdot)_k$ denotes the usual Pochhammer symbol, known from [30], we derive a simple explicit formula for the $\gamma$th moment of $Y_n$ for any $\gamma \in \mathbb{R}^+$.
 Theorem 2.
For any $\gamma \in \mathbb{R}^+$, we have
$$E[Y_n^{\gamma}] = \left(2\beta\right)^{\gamma} \sum_{k=0}^{\infty} (-1)^k\, \frac{\Gamma\!\left(\gamma + k + \frac{\nu}{2}\right)}{\Gamma\!\left(k + \frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1 - k - \frac{\nu}{2};\, 1 - k - \frac{\nu}{2} - \gamma;\, 1\right) c_k, \qquad (8)$$
where the coefficients $c_k$, $k = 0, 1, \ldots$, are chosen according to (4)–(7), the parameter $\nu = \sum_{i=1}^{n} \nu_i$, and $\beta > 0$ can be arbitrarily chosen.
Proof. 
The proof is provided in Appendix B. □
Theorem 2 essentially expresses $E[Y_n^{\gamma}]$ in terms of generalized hypergeometric functions. We remark that computing $E[Y_n^{\gamma}]$ relies on the $c_k$ coefficients, which can be obtained from the recursive Formulas (4)–(7), and we demonstrate later in our numerical study in Section 5 that implementing our Formula (8) for computing $E[Y_n^{\gamma}]$ consumes significantly less time and effort than employing MC simulations.
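As a hedged illustration of how (8) can be evaluated in practice, the sketch below truncates the series after $K+1$ terms and evaluates each terminating ${}_2F_1(-k,\cdot;\cdot;1)$ directly as a finite Pochhammer sum. It reuses `laguerre_coeffs` from the sketch in Section 2, and all helper names are ours.

```python
import numpy as np
from scipy.special import gammaln

def hyp2f1_terminating(k, b, c):
    """2F1(-k, b; c; 1) evaluated as the finite sum of its k+1 Pochhammer terms."""
    total, term = 1.0, 1.0
    for j in range(k):
        term *= (j - k) * (b + j) / ((c + j) * (j + 1.0))
        total += term
    return total

def moment_Yn(gamma, alpha, nu, delta, beta, K=100):
    """E[Y_n^gamma] from (8), truncated after K+1 terms; reuses laguerre_coeffs above."""
    nu_tot = float(np.sum(nu))
    c = laguerre_coeffs(alpha, nu, delta, beta, K)
    total = 0.0
    for k in range(K + 1):
        ratio = np.exp(gammaln(gamma + k + nu_tot / 2.0) - gammaln(k + nu_tot / 2.0))
        f = hyp2f1_terminating(k, 1.0 - k - nu_tot / 2.0, 1.0 - k - nu_tot / 2.0 - gamma)
        total += (-1.0) ** k * ratio * f * c[k]
    return (2.0 * beta) ** gamma * total
```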
In terms of applications, the result presented in the following corollary can be, for instance, applied to obtain an analytical formula for pricing volatility swaps in the discrete observation case based on the Black–Scholes model with time-varying risk-free interest rate and time-varying volatility as proposed in Theorem 3.1 of Rujivan [10].
 Corollary 1.
We have
$$E\!\left[\sqrt{Y_n}\right] = \sqrt{2\beta}\; \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)} \sum_{k=0}^{\infty} {}_2F_1\!\left(-k,\, \frac{\nu+1}{2};\, \frac{\nu}{2};\, 1\right) c_k, \qquad (9)$$
where the coefficients $c_k$, $k = 0, 1, \ldots$, are chosen according to (4)–(7), the parameter $\nu = \sum_{i=1}^{n} \nu_i$, and $\beta > 0$ can be arbitrarily chosen.
 Proof. 
The proof is provided in Appendix B. □
Another interesting special case is when $n = 1$, that is, there is only one summand, say $X$. In this case, we can leverage Theorem 2 to compute a noninteger moment of any noncentral chi-square random variable as follows.
 Corollary 2.
For any $X \sim \chi^2_{\nu}(\delta)$ and $\gamma \in \mathbb{R}^+$, we have
$$E[X^{\gamma}] = 2^{\gamma} \sum_{k=0}^{\infty} \frac{\Gamma\!\left(\gamma + k + \frac{\nu}{2}\right)}{\Gamma\!\left(k + \frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1 - k - \frac{\nu}{2};\, 1 - k - \frac{\nu}{2} - \gamma;\, 1\right) \frac{1}{k!}\left(\frac{\delta}{2}\right)^{k}. \qquad (10)$$
Proof. 
The proof is provided in Appendix B. □
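For a quick, hedged sanity check of Corollary 2 (our own, not taken from the paper's experiments), the snippet below evaluates the series (10) with a manual truncation and compares it against a Monte Carlo estimate of a fractional moment; it reuses `hyp2f1_terminating` from the previous sketch, and the parameter values are ours.

```python
import numpy as np
from scipy.special import gammaln

def moment_ncx2(gamma, nu, delta, K=200):
    """E[X^gamma] for X ~ noncentral chi-square(nu, delta) via the series (10)."""
    total, p = 0.0, 1.0                                   # p = (delta/2)^k / k!
    for k in range(K + 1):
        ratio = np.exp(gammaln(gamma + k + nu / 2.0) - gammaln(k + nu / 2.0))
        f = hyp2f1_terminating(k, 1.0 - k - nu / 2.0, 1.0 - k - nu / 2.0 - gamma)
        total += ratio * f * p
        p *= (delta / 2.0) / (k + 1.0)
    return 2.0 ** gamma * total

rng = np.random.default_rng(0)
x = rng.noncentral_chisquare(df=3.0, nonc=1.5, size=10**6)
print(moment_ncx2(0.5, 3.0, 1.5))      # series value of E[sqrt(X)]
print(np.mean(np.sqrt(x)))             # Monte Carlo estimate, should be close
```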

3.2. Estimates for the Truncation Errors of $E[Y_n^{\gamma}]$

To implement $E[Y_n^{\gamma}]$ on a computer, it is necessary to investigate truncation errors, that is, to quantify the loss incurred by replacing an infinite sum with a finite sum. This subsection derives an estimate for the truncation errors of $E[Y_n^{\gamma}]$ by applying the results proposed in [16] as follows.
To begin with, based on the formula for $E[Y_n^{\gamma}]$ in (8), we define
$$\mathcal{E}_{k_1,k_2}(\gamma) := \left(2\beta\right)^{\gamma} \sum_{k=k_1+1}^{k_2} (-1)^k\, \frac{\Gamma\!\left(\gamma + k + \frac{\nu}{2}\right)}{\Gamma\!\left(k + \frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1 - k - \frac{\nu}{2};\, 1 - k - \frac{\nu}{2} - \gamma;\, 1\right) c_k \qquad (11)$$
for any $k_1, k_2 \in \mathbb{N} \cup \{0, \infty\}$ such that $k_1 + 1 < k_2$. Therefore, $\mathcal{E}_{K,\infty}(\gamma)$ represents the truncation error of order $K$ of $E[Y_n^{\gamma}]$. To estimate this truncation error, we first derive valid bounds on the $c_k$, $k = 1, 2, \ldots$, defined in (5)–(7).
 Lemma 1.
The coefficients $c_k$, $k \in \mathbb{N}$, satisfy
$$|c_k| \le e^{\frac{\delta}{2}\zeta} \left(\frac{2k+\nu}{2k}\right)^{k} \left(\frac{2k+\nu}{\nu}\right)^{\frac{\nu}{2}} \zeta^{k}, \qquad (12)$$
where $\delta = \sum_{i=1}^{n} \delta_i$ and $\zeta = \frac{1}{\beta} \max_{i\in\{1,\ldots,n\}} |\beta - \alpha_i|$. Moreover, if $\beta > \frac{1}{2} \max_{i\in\{1,\ldots,n\}} \alpha_i$, then $0 < \zeta < 1$.
Proof. 
The proof is provided in Appendix B. □
From Lemma 1 above, we further define
$$\mathcal{B}_{k_1,k_2}^{(\gamma)}(\zeta) := \left(2\beta\right)^{\gamma} e^{\frac{\delta}{2}\zeta} \sum_{k=k_1+1}^{k_2} b_k(\gamma, \nu, \zeta) \qquad (13)$$
for all $k_1, k_2 \in \mathbb{N} \cup \{0, \infty\}$ such that $k_1 + 1 < k_2$ and $\zeta > 0$, where
$$b_k(\gamma, \nu, \zeta) = \frac{\Gamma\!\left(\gamma + k + \frac{\nu}{2}\right)}{\Gamma\!\left(k + \frac{\nu}{2}\right)} \left|\,{}_2F_1\!\left(-k,\, 1 - k - \frac{\nu}{2};\, 1 - k - \frac{\nu}{2} - \gamma;\, 1\right)\right| \left(\frac{2k+\nu}{2k}\right)^{k} \left(\frac{2k+\nu}{\nu}\right)^{\frac{\nu}{2}} \zeta^{k}. \qquad (14)$$
Utilizing Lemma 1, we obtain the following upper bound of a truncation error.
 Theorem 3.
Supposing that $\beta > \frac{1}{2} \max_{i\in\{1,\ldots,n\}} \alpha_i$, we have
$$\left|\mathcal{E}_{K,\infty}(\gamma)\right| \le \mathcal{B}_{K,\infty}^{(\gamma)}(\zeta) \qquad \forall\, \gamma \in \mathbb{R}^+,\; K \in \mathbb{N}, \qquad (15)$$
where $\zeta = \frac{1}{\beta} \max_{i\in\{1,\ldots,n\}} |\beta - \alpha_i|$. Furthermore,
$$\lim_{K \to \infty} \mathcal{E}_{K,\infty}(\gamma) = 0. \qquad (16)$$
Proof. 
The proof is provided in Appendix B. □
It should be emphasized, in view of Theorem 3, that the inequality $\beta > \frac{1}{2} \max_{i\in\{1,\ldots,n\}} \alpha_i$ ought to hold when we implement Formula (8) for computing $E[Y_n^{\gamma}]$, so as to ensure that the truncation error tends to zero as $K$ approaches infinity.
Finally, we use Euler’s transformation [31] in order to show that (8) terminates when γ = m is a non-negative integer.
 Theorem 4.
For any $m \in \mathbb{N}$, we have
$$E[Y_n^{m}] = \left(2\beta\right)^{m} \sum_{k=0}^{m} (-1)^k\, \frac{\Gamma\!\left(m + k + \frac{\nu}{2}\right)}{\Gamma\!\left(k + \frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1 - k - \frac{\nu}{2};\, 1 - k - \frac{\nu}{2} - m;\, 1\right) c_k, \qquad (17)$$
where the coefficients $c_k$, $k = 0, 1, \ldots$, are chosen according to (4)–(7), the parameter $\nu = \sum_{i=1}^{n} \nu_i$, and $\beta > 0$ can be arbitrarily chosen.
Proof. 
The proof is provided in Appendix B. □
Applying Corollary 2 and Theorem 4, an explicit formula for the $m$th moment of a noncentral chi-square random variable can be obtained as follows.
 Corollary 3.
For any $X \sim \chi^2_{\nu}(\delta)$ and $m \in \mathbb{N}$, we have
$$E[X^{m}] = m!\, 2^{m}\, \Gamma\!\left(m + \frac{\nu}{2}\right) \sum_{k=0}^{m} \frac{\left(\frac{\delta}{2}\right)^{k}}{k!\, (m-k)!\, \Gamma\!\left(k + \frac{\nu}{2}\right)}. \qquad (18)$$
Proof. 
The proof is provided in Appendix B. □
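As an illustrative check (our own, not from the paper), the closed form (18) can be compared against the familiar first two noncentral chi-square moments, $E[X] = \nu + \delta$ and $E[X^2] = (\nu+\delta)^2 + 2(\nu + 2\delta)$:

```python
import math

def int_moment_ncx2(m, nu, delta):
    """E[X^m] for X ~ chi^2_nu(delta) via the finite sum (18)."""
    s = sum((delta / 2.0) ** k
            / (math.factorial(k) * math.factorial(m - k) * math.gamma(k + nu / 2.0))
            for k in range(m + 1))
    return math.factorial(m) * 2.0 ** m * math.gamma(m + nu / 2.0) * s

nu, delta = 3.0, 1.5
print(int_moment_ncx2(1, nu, delta), nu + delta)                                    # both 4.5
print(int_moment_ncx2(2, nu, delta), (nu + delta) ** 2 + 2.0 * (nu + 2.0 * delta))  # both 32.25
```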

3.3. Analytical Formulas for Other Conic Combinations of Independent Random Variables

In this subsection, we extend our previous results to various other types of random variables. First, instead of assuming that each component $X_i$ is a noncentral chi-square random variable, we assume that it is the square of a normally distributed random variable with varying mean and variance. Second, motivated by applications in queuing theory, we consider the case where each component follows a gamma distribution with varying shape parameters. Note that gamma distributions are often used in queuing theory for modeling the distribution of certain types of waiting times, e.g., the excess water flow of a dam as explained in Mathai [32], and for other problems in communication theory concerning the performance of certain wireless transmission systems as described in Alouini et al. [33]. Third, we consider the sum of independent Erlang-distributed random variables, which lies at the core of many fields such as telecommunications, statistics, reliability theory, and risk analysis [34]. Last but not least, we consider the sum of exponential random variables, which are often used in stochastic modeling thanks to their memoryless property, and the sum of squared Maxwell–Boltzmann random variables, which can be used to describe the molecular speed distribution of ideal gases [35].
 Theorem 5.
Consider a random variable $Y_{(1,n)} := \sum_{i=1}^{n} a_{(1,i)} Z_i^2$, where each $a_{(1,i)} > 0$ and each $Z_i$ is a normal random variable with mean $\mu_{(1,i)} \in \mathbb{R}$ and variance $\sigma_{(1,i)}^2$, $\sigma_{(1,i)} > 0$. Assuming that all summands are independent, the PDF of $Y_{(1,n)}$, $E[Y_{(1,n)}^{\gamma}]$, and $E[Y_{(1,n)}^{m}]$ can be computed using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, by setting $\alpha_i = a_{(1,i)} \sigma_{(1,i)}^2$, $\nu_i = 1$, and $\delta_i = \frac{\mu_{(1,i)}^2}{\sigma_{(1,i)}^2}$ for all $i = 1, \ldots, n$.
Proof. 
The proof is provided in Appendix B. □
 Theorem 6.
Consider a random variable $Y_{(2,n)} := \sum_{i=1}^{n} a_{(2,i)} G_i$, where each $a_{(2,i)} > 0$ and each $G_i$ is distributed according to a gamma distribution with shape parameter $\kappa_{(2,i)} > 0$ and scale parameter $\theta_{(2,i)} > 0$. Assuming that all summands are independent, the PDF of $Y_{(2,n)}$, $E[Y_{(2,n)}^{\gamma}]$, and $E[Y_{(2,n)}^{m}]$ can be computed using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, by setting $\alpha_i = \frac{1}{2} a_{(2,i)} \theta_{(2,i)}$, $\nu_i = 2\kappa_{(2,i)}$, and $\delta_i = 0$ for all $i = 1, \ldots, n$.
Proof. 
The proof is provided in Appendix B. □
 Theorem 7.
Consider a random variable $Y_{(3,n)} := \sum_{i=1}^{n} a_{(3,i)} L_i$, where each $a_{(3,i)} > 0$ and each $L_i$ is distributed according to an Erlang distribution with shape parameter $\kappa_{(3,i)} \in \{1, 2, \ldots\}$ and rate parameter $\lambda_{(3,i)} > 0$. Assuming that all summands $L_i$ are independent, the PDF of $Y_{(3,n)}$, $E[Y_{(3,n)}^{\gamma}]$, and $E[Y_{(3,n)}^{m}]$ can be computed using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, by setting $\alpha_i = \frac{a_{(3,i)}}{2\lambda_{(3,i)}}$, $\nu_i = 2\kappa_{(3,i)}$, and $\delta_i = 0$ for all $i = 1, \ldots, n$.
Proof. 
The proof is provided in Appendix B. □
 Theorem 8.
Consider a random variable $Y_{(4,n)} := \sum_{i=1}^{n} a_{(4,i)} P_i$, where each $a_{(4,i)} > 0$ and each $P_i$ is distributed according to an exponential distribution with rate parameter $\lambda_{(4,i)} > 0$. Assuming that all summands are independent, the PDF of $Y_{(4,n)}$, $E[Y_{(4,n)}^{\gamma}]$, and $E[Y_{(4,n)}^{m}]$ can be computed using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, by setting $\alpha_i = \frac{a_{(4,i)}}{2\lambda_{(4,i)}}$, $\nu_i = 2$, and $\delta_i = 0$ for all $i = 1, \ldots, n$.
Proof. 
The proof is provided in Appendix B. □
 Theorem 9.
Consider a random variable $Y_{(5,n)} := \sum_{i=1}^{n} a_{(5,i)} W_i^2$, where each $a_{(5,i)} > 0$ and each $W_i$ is distributed according to a Maxwell–Boltzmann distribution with parameter $\phi_{(5,i)} > 0$. Assuming that all $W_i$'s are independent, the PDF of $Y_{(5,n)}$, $E[Y_{(5,n)}^{\gamma}]$, and $E[Y_{(5,n)}^{m}]$ can be computed using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, by setting $\alpha_i = a_{(5,i)} \phi_{(5,i)}^2$, $\nu_i = 3$, and $\delta_i = 0$ for all $i = 1, \ldots, n$.
Proof. 
The proof is provided in Appendix B. □
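The parameter mappings of Theorems 5–9 can be collected in a single helper, sketched below under our own naming; each summand is reduced to the triple $(\alpha_i, \nu_i, \delta_i)$ that is then fed into (3), (8), and (17).

```python
def to_chisq_params(kind, a, **p):
    """Map one summand a * (component) of Theorems 5-9 to its (alpha_i, nu_i, delta_i)."""
    if kind == "normal":        # Theorem 5: a * Z^2 with Z ~ N(mu, sigma^2)
        return a * p["sigma"] ** 2, 1.0, (p["mu"] / p["sigma"]) ** 2
    if kind == "gamma":         # Theorem 6: a * G with G ~ Gamma(shape kappa, scale theta)
        return 0.5 * a * p["theta"], 2.0 * p["kappa"], 0.0
    if kind == "erlang":        # Theorem 7: a * L with L ~ Erlang(shape kappa, rate lam)
        return 0.5 * a / p["lam"], 2.0 * p["kappa"], 0.0
    if kind == "exponential":   # Theorem 8: a * P with P ~ Exp(rate lam)
        return 0.5 * a / p["lam"], 2.0, 0.0
    if kind == "maxwell":       # Theorem 9: a * W^2 with W ~ Maxwell-Boltzmann(phi)
        return a * p["phi"] ** 2, 3.0, 0.0
    raise ValueError(f"unknown component type: {kind}")

# e.g. a mixed sum 2*G + 3*P with G ~ Gamma(1.4, 0.8) and P ~ Exp(2.0):
params = [to_chisq_params("gamma", 2.0, kappa=1.4, theta=0.8),
          to_chisq_params("exponential", 3.0, lam=2.0)]
alpha, nu, delta = map(list, zip(*params))    # ready to pass to pdf_Yn / moment_Yn
```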

4. Extensions to the ECIR Process with Time-Varying Dimension

The ECIR process is one of the most widely used processes to model interest rates and to price financial products such as zero-coupon bonds, ex-coupon bonds, moment swaps, options, and interest rate swaps. With time-dependent parameters, the ECIR process is capable of accounting for side information from potential political or economic events. Formally, according to Maghsoodi [27], the ECIR process, denoted by $V_t$, satisfies
$$dV_t = \kappa(t)\left(\theta(t) - V_t\right) dt + \sigma(t)\sqrt{V_t}\, dW_t \qquad (19)$$
for $t \in (0, T]$ and $T > 0$ with an initial value $V_0 = v_0 > 0$, where the parameter functions $\theta(t) > 0$, $\kappa(t) > 0$, and $\sigma(t) > 0$ are continuous on $[0, T]$, and $W_t$ is a standard Brownian motion on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $(\mathcal{F}_t)_{0 \le t \le T}$. Note that $V_t$ reduces to a plain-vanilla Cox–Ingersoll–Ross (CIR) process [36] provided that the relevant parameter functions are constants.
Focusing on the ECIR process (19), we define the dimension of $V_t$ as
$$d(t) := \frac{4\kappa(t)\theta(t)}{\sigma^2(t)} \qquad (20)$$
for $t \in [0, T]$; this quantity plays an important role in deriving an expression for the distribution of $V_t$. Maghsoodi [27] discovered that when $d(t) = d \ge 2$ for all $t \in [0, T]$, a case that includes the CIR process, $V_t$ almost surely never hits zero and is in fact a scaled time-changed squared Bessel process; as a result, the TPDF of $V_t$ was explicitly given.
Moreover, Maghsoodi [27] showed that $V_t$ can be represented as a lognormal process through a stochastic time change when $d(t) \ge 2$ for all $t \in [0, T]$, but the TPDF of $V_t$ was not analytically derived. Consequently, it has remained an open question until now how the TPDF of $V_t$ can be obtained in explicit form when $d(t)$ is time-varying, based on the stochastic time-varying lognormal process representation.
To demonstrate our contribution in the current paper for solving this problem, we apply our previous results in Section 2 and Section 3 to explicitly derive the TPDF of V t as well as its γ th conditional moment when d ( t ) is time-varying, provided that the following two assumptions hold:
 Assumption 1.
$d(t) \ge 2$ for all $t \in [0, T]$.
 Assumption 2.
The derivative $d^{(1)}(t)$ of $d(t)$ with respect to $t$ satisfies $0 \le d^{(1)}(t) < \infty$ for all $t \in [0, T]$.

4.1. The Exact TPDF of the ECIR Process with Time-Varying Dimension

To realize our objective, we firstly define a parameter function as follows:
$$\tau(t, s) := \frac{1}{4} \int_s^t \sigma^2(\zeta)\, e^{-\int_{\zeta}^{t} \kappa(u)\, du}\, d\zeta \qquad (21)$$
for $0 \le s \le t \le T$.
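For later use, $\tau(t, s)$ can be evaluated numerically by nested quadrature; the sketch below does this under our own naming, with `kappa_fn` and `sigma_fn` standing for user-supplied parameter functions $\kappa(\cdot)$ and $\sigma(\cdot)$.

```python
import numpy as np
from scipy.integrate import quad

def tau(t, s, kappa_fn, sigma_fn):
    """tau(t, s) = (1/4) * int_s^t sigma^2(z) * exp(-int_z^t kappa(u) du) dz, cf. (21)."""
    def integrand(z):
        kappa_int, _ = quad(kappa_fn, z, t)          # inner integral of kappa over [z, t]
        return sigma_fn(z) ** 2 * np.exp(-kappa_int)
    val, _ = quad(integrand, s, t)
    return 0.25 * val
```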
Peng and Schellhorn [37] showed that the ECIR process $V_t$ described by (19) can be represented as the limit in law of a series of weighted independent noncentral chi-square and chi-square random variables. For the sake of completeness, we summarize their result in the following theorem.
 Theorem 10.
Supposing that Assumptions 1 and 2 hold, the ECIR process $V_t$ described by (19) with an initial value $v_0 > 0$ can be expressed as
$$V_t \stackrel{law}{=} \lim_{n \to \infty} \sum_{i=1}^{n} \hat{\alpha}_i \hat{X}_i \qquad (22)$$
for any $t \in (0, T]$, where the random variables $\hat{X}_i$ are independent and distributed according to noncentral chi-square and chi-square distributions as
$$\hat{X}_i \sim \chi^2_{\hat{\nu}_i}\!\left(\hat{\delta}_i\right), \qquad (23)$$
with the coefficients and parameters in (22) and (23) given by
$$\hat{\alpha}_i = \tau\!\left(t, \frac{(i-1)t}{n}\right) \quad \forall\, i \in \{1, \ldots, n\}, \qquad (24)$$
$$\hat{\nu}_1 = d(0), \qquad (25)$$
$$\hat{\delta}_1 = \frac{v_0}{\tau(t, 0)}\, e^{-\int_0^t \kappa(u)\, du}, \qquad (26)$$
and
$$\hat{\nu}_i = d^{(1)}\!\left(\frac{(i-1)t}{n}\right) \frac{t}{n} \quad \forall\, i \in \{2, \ldots, n\}, \qquad (27)$$
$$\hat{\delta}_i = 0 \quad \forall\, i \in \{2, \ldots, n\}. \qquad (28)$$
In particular, if $d(s) = d \ge 2$ for all $s \in [0, t]$, then
$$V_t \sim \tau(t, 0) \cdot \chi^2_{d}\!\left(\frac{v_0}{\tau(t, 0)}\, e^{-\int_0^t \kappa(u)\, du}\right). \qquad (29)$$
Proof. 
See Theorem 3.1 in Peng and Schellhorn [37]. □
Peng and Schellhorn [37] also represented the TPDF of $V_t$ in terms of a limit of a sequence of convolutions of the PDFs of scaled noncentral chi-square and chi-square random variables.
Instead of utilizing the convolution property for independent random variables as shown by Peng and Schellhorn [37], we apply Theorem 1 to obtain the first explicit formula for the TPDF of $V_t$ with time-varying dimension $d(t)$.
 Theorem 11.
The TPDF of $V_t$, defined by
$$f_{V_t}(v, t \,|\, v_0) := \mathbb{P}\!\left[V_t = v \,|\, V_0 = v_0\right] \qquad (30)$$
for $v, v_0 > 0$ and $t \in (0, T]$, can be expressed as
$$f_{V_t}(v, t \,|\, v_0) = \frac{e^{-\frac{v}{2\tau(t,0)}}\, v^{\frac{d(t)}{2}-1}}{\left(2\tau(t,0)\right)^{\frac{d(t)}{2}}} \sum_{k=0}^{\infty} \frac{k!}{\Gamma\!\left(\frac{d(t)}{2}+k\right)}\, \hat{c}_k(t, v_0)\, L_k^{\left(\frac{d(t)}{2}-1\right)}\!\left(\frac{v}{2\tau(t,0)}\right), \qquad (31)$$
where
$$\hat{c}_0(t, v_0) = 1, \qquad (32)$$
$$\hat{c}_k(t, v_0) = \frac{1}{k} \sum_{j=0}^{k-1} \hat{c}_j(t, v_0)\, \hat{d}_{k-j}(t, v_0) \quad \forall\, k \in \mathbb{N}, \qquad (33)$$
and
$$\hat{d}_1(t, v_0) = -\frac{1}{2\tau(t,0)}\, v_0\, e^{-\int_0^t \kappa(u)\, du} + \frac{1}{2} \int_0^t d^{(1)}(s) \left(1 - \frac{\tau(t,s)}{\tau(t,0)}\right) ds, \qquad (34)$$
$$\hat{d}_j(t, v_0) = \frac{1}{2} \int_0^t d^{(1)}(s) \left(1 - \frac{\tau(t,s)}{\tau(t,0)}\right)^{j} ds \quad \forall\, j \in \mathbb{N} \setminus \{1\}. \qquad (35)$$
In particular, if $d(s) = d \ge 2$ for all $s \in [0, t]$, then
$$\hat{c}_k(t, v_0) = \left(-\frac{e^{-\int_0^t \kappa(u)\, du}}{2\tau(t,0)}\right)^{k} \frac{v_0^{k}}{k!} \quad \forall\, k \in \mathbb{N} \cup \{0\}. \qquad (36)$$
Proof. 
The proof is provided in Appendix C. □
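A minimal numerical sketch of (31)–(35) is given below (our own Python translation, with the integrals in $\hat d_j$ evaluated by quadrature). It reuses the `tau` helper from Section 4.1; `d_fn` and `d1_fn` stand for user-supplied implementations of $d(\cdot)$ and $d^{(1)}(\cdot)$, and all names and the truncation level are assumptions of ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gammaln

def ecir_coeffs(t, v0, kappa_fn, sigma_fn, d1_fn, K):
    """tau(t,0) and c_hat_0..c_hat_K from (32)-(35), with the integrals done by quadrature."""
    tau0 = tau(t, 0.0, kappa_fn, sigma_fn)                        # reuses tau() from Section 4.1
    kappa_int, _ = quad(kappa_fn, 0.0, t)
    dh = np.zeros(K + 1)                                          # dh[0] is unused
    for j in range(1, K + 1):
        integ, _ = quad(lambda s: d1_fn(s) *
                        (1.0 - tau(t, s, kappa_fn, sigma_fn) / tau0) ** j, 0.0, t)
        dh[j] = 0.5 * integ
    if K >= 1:
        dh[1] += -0.5 * v0 * np.exp(-kappa_int) / tau0            # extra term in d_hat_1, cf. (34)
    c = np.zeros(K + 1); c[0] = 1.0
    for k in range(1, K + 1):
        c[k] = np.dot(c[:k], dh[k:0:-1]) / k
    return tau0, c

def ecir_tpdf(v, t, v0, kappa_fn, sigma_fn, d_fn, d1_fn, K=20):
    """Truncated version of the TPDF (31) of the ECIR process evaluated at V_t = v."""
    tau0, c = ecir_coeffs(t, v0, kappa_fn, sigma_fn, d1_fn, K)
    p = d_fn(t) / 2.0
    x = v / (2.0 * tau0)
    series = sum(np.exp(gammaln(k + 1) - gammaln(p + k)) * c[k] * eval_genlaguerre(k, p - 1.0, x)
                 for k in range(K + 1))
    return np.exp(-x) * v ** (p - 1.0) / (2.0 * tau0) ** p * series
```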

4.2. The $\gamma$th Conditional Moment of the ECIR Process with Time-Varying Dimension

For $\gamma \in \mathbb{R}^+$ and a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $(\mathcal{F}_t)_{0 \le t \le T}$, we define the $\gamma$th conditional moment of the ECIR process $V_t$ as
$$U_E^{(\gamma)}(t \,|\, v_0) := E^{\mathbb{P}}\!\left[V_t^{\gamma} \,|\, \mathcal{F}_0\right] = E^{\mathbb{P}}\!\left[V_t^{\gamma} \,|\, V_0 = v_0\right] = \int_0^{\infty} v^{\gamma} f_{V_t}(v, t \,|\, v_0)\, dv \qquad (37)$$
for $t \in (0, T]$ and $v_0 > 0$, where $f_{V_t}(v, t \,|\, v_0)$ is the TPDF of $V_t$ given in (31).
Rujivan [12] first presented a recursive formula for computing U E ( γ ) ( t | v 0 ) using a partial differential equation (PDE) approach. Alternatively, we apply Theorems 2, 4, and 11 to obtain a novel explicit formula for U E ( γ ) ( t | v 0 ) in the following theorem.
 Theorem 12.
Supposing that Assumptions 1 and 2 hold, then for any γ R + we have
U E ( γ ) ( t | v 0 ) = 2 τ ( t , 0 ) γ k = 0 ( 1 ) k Γ γ + k + d ( t ) 2 Γ k + d ( t ) 2 2 F 1 k , 1 k d ( t ) 2 ; 1 k d ( t ) 2 γ ; 1 c ^ k ( t , v 0 )
for t ( 0 , T ] and v 0 > 0 , where c ^ k ( t , v 0 ) , k = 0 , , are given in (32) and (33).
In particular, for any integer m N ,
U E ( m ) ( t | v 0 ) = 2 τ ( t , 0 ) m k = 0 m ( 1 ) k Γ m + k + d ( t ) 2 Γ k + d ( t ) 2 2 F 1 k , 1 k d ( t ) 2 ; 1 k d ( t ) 2 m ; 1 c ^ k ( t , v 0 )
for t ( 0 , T ] and v 0 > 0 .
Proof. 
The proof is provided in Appendix C. □
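Again as a hedged sketch rather than the paper's implementation, (38) can be evaluated numerically by reusing `ecir_coeffs` and `hyp2f1_terminating` from the earlier snippets:

```python
import numpy as np
from scipy.special import gammaln

def ecir_cond_moment(gamma, t, v0, kappa_fn, sigma_fn, d_fn, d1_fn, K=20):
    """Truncated evaluation of (38): the gamma-th conditional moment of the ECIR process."""
    tau0, c = ecir_coeffs(t, v0, kappa_fn, sigma_fn, d1_fn, K)   # coefficients from the Section 4.1 sketch
    p = d_fn(t) / 2.0
    total = 0.0
    for k in range(K + 1):
        ratio = np.exp(gammaln(gamma + k + p) - gammaln(k + p))
        f = hyp2f1_terminating(k, 1.0 - k - p, 1.0 - k - p - gamma)
        total += (-1.0) ** k * ratio * f * c[k]
    return (2.0 * tau0) ** gamma * total
```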

4.2.1. Comparison with Other Formulas

To date, the computation of the conditional moment has only been partially solved due to the unavailability of the transition PDF. Indeed, the problem of computing the integral on the RHS of (37) for a general stochastic differential equation (SDE) is typically addressed via the Feynman–Kac theorem, where the associated partial differential equation (PDE) is solved analytically and combinatorial techniques are used to simplify the system of recursive ordinary differential equations (ODEs) associated with the conditional moment; see, for instance, [38,39,40,41] for more details.
For a more concrete comparison, Rujivan [12] presented the first explicit formula for the $\gamma$th conditional moment of the ECIR process (19) with time-varying dimension as a power series, and demonstrated the effectiveness of this analytical approach over other state-of-the-art techniques, including the method of Dufresne [28] and MC simulations. This result has some similarities to and differences from our work, which we now explain. Our Formula (38) expresses the $\gamma$th conditional moment of the ECIR process as an infinite series whose coefficients $\hat{c}_k(t, v_0)$ can be computed recursively in closed form. This offers a more efficient route than, say, Equation (2.2) of Rujivan [12], which characterizes the same conditional moment: there, the parameters must also be computed recursively, but the result of each iteration does not have a closed form. The method of Rujivan [12] is therefore more time-consuming and more prone to accumulating numerical errors.

5. Numerical Results and Discussions

As shown in Section 2, Section 3 and Section 4, the theoretical framework presented in this paper produces several new explicit formulas for computing the PDF of $Y_n$ defined in (1) and its moments, as well as the TPDF of the ECIR process (19) with time-varying dimension and its conditional moments. A natural question that may be raised by practitioners is whether these newly derived explicit formulas are accurate and efficient, especially considering that an infinite sum has to be truncated. We therefore intensively investigated the accuracy of our explicit formulas, both to confirm that there were no algebraic errors in the derivations and to demonstrate their efficiency compared with either MC simulations or other explicit formulas proposed in the literature. This was done through a series of numerical examples coded in MATHEMATICA 11 and executed on a notebook with the following specifications: Intel(R) Core(TM) i5-6500 CPU @ 3.20 GHz, 16 GB RAM, Windows 10, 64-bit operating system.

5.1. The Accuracy of Our Explicit Formula for $f_{Y_n}^{(\beta)}(y)$

In order to illustrate the accuracy of our explicit Formula (3), we introduced random variables as follows. For any $n \in \mathbb{N}$, we defined
$$Y_n^{(j)} := \sum_{i=1}^{n} \alpha_i^{(j)} X_i^{(j)} \qquad (40)$$
for $j = 1, 2, 3$, where
$$X_i^{(j)} \sim \chi^2_{\nu_i^{(j)}}\!\left(\delta_i^{(j)}\right) \qquad (41)$$
for $i = 1, \ldots, n$, and the parameters were set as
$$\alpha_i^{(1)} = \alpha_i^{(2)} = \alpha_i^{(3)} = \frac{2 i \alpha}{n(n+1)}, \qquad (42)$$
$$\nu_i^{(1)} = \nu_i^{(2)} = \nu_i^{(3)} = \frac{i+3}{2}, \qquad (43)$$
$$\delta_i^{(1)} = \frac{i}{10}, \qquad \delta_i^{(2)} = 0, \qquad \delta_i^{(3)} = \frac{\left(1 - (-1)^i\right) i}{20}, \qquad (44)$$
for $\alpha > 0$. Furthermore, for $n \ge 2$, we assumed that $X_i^{(j)}$, $i = 1, \ldots, n$, were independent for all $j = 1, 2, 3$. By construction, we note that $\alpha$ represents the total of the conic coefficients, i.e., $\alpha = \sum_{i=1}^{n} \alpha_i^{(j)}$, and that each $Y_n^{(j)}$ constitutes a sum of independent noncentral chi-square random variables. We note, though, that through the transformations studied in Theorems 5–9, the distribution of $Y_n^{(j)}$ may be identical to those of other random sums. For instance,
$$Y_n^{(2)} \stackrel{law}{=} \sum_{i=1}^{n} \frac{2\alpha_i^{(2)}}{\theta_{(2,i)}}\, G_i \stackrel{law}{=} \sum_{i=1}^{n} 2\alpha_i^{(2)} \lambda_{(3,i)}\, L_i,$$
where the $G_i$'s are independent random variables distributed according to a gamma distribution with shape parameter $\nu_i^{(2)}/2$ and scale parameter $\theta_{(2,i)} > 0$, and similarly the $L_i$'s are independent random variables distributed according to an Erlang distribution with shape parameter $\nu_i^{(2)}/2$ and rate parameter $\lambda_{(3,i)} > 0$.
Example 1.
We started by considering the PDF of $Y_6^{(1)} = \sum_{i=1}^{6} \alpha_i^{(1)} X_i^{(1)}$, in which the values of the parameters $\alpha_i^{(1)}$, $\nu_i^{(1)}$, and $\delta_i^{(1)}$ for $i = 1, \ldots, 6$ are plotted in Figure 1a. The PDFs of $\alpha_i^{(1)} X_i^{(1)}$, $i = 1, \ldots, 6$, which varied in range and shape, are shown in Figure 2a–f, respectively. The problem of analytically computing the PDF of $Y_6^{(1)}$ was investigated as follows.
To obtain the PDF of $Y_6^{(1)}$, denoted by $f_{Y_6^{(1)}}^{(\beta)}(y)$, we set $n = 6$, $\alpha_i = \alpha_i^{(1)}$, $\nu_i = \nu_i^{(1)}$, and $\delta_i = \delta_i^{(1)}$ for $i = 1, \ldots, 6$ in (3). The $c_k$ coefficients of the Laguerre expansion (3) were computed by using (4)–(7) with $\beta > \frac{1}{2} \max_i \alpha_i$. The sequence of $c_k$'s was then plotted (Figure 3a), showing that $c_k \to 0$ as $k \to \infty$. After inserting the values of the $c_k$'s into (3), the graph of $f_{Y_6^{(1)}}^{(\beta)}(y)$ was displayed against the histogram of $Y_6^{(1)}$ obtained from MC simulations, as shown in Figure 4a. We clearly see from the figure that the histogram representing the PDF of $Y_6^{(1)}$ fit the graph of $f_{Y_6^{(1)}}^{(\beta)}(y)$ computed by using the Laguerre expansion (3) very closely.
To extend our study to various cases of the linear combination (40), as introduced in Section 3.3, with increasing values of $n$, we further considered the PDFs of $Y_{11}^{(1)} = \sum_{i=1}^{11} \alpha_i^{(1)} X_i^{(1)}$, $Y_{15}^{(2)} = \sum_{i=1}^{15} \alpha_i^{(2)} X_i^{(2)}$, and $Y_{20}^{(3)} = \sum_{i=1}^{20} \alpha_i^{(3)} X_i^{(3)}$. The values of the parameters $\alpha_i^{(j)}$, $\nu_i^{(j)}$, and $\delta_i^{(j)}$ for $j = 1, 2, 3$, computed by using (42)–(44), are plotted in Figure 1b–d. By following the procedure previously described for determining the PDF of $Y_6^{(1)}$, we obtained the sequences of $c_k$'s shown in Figure 3b–d along with the PDFs of $Y_{11}^{(1)}$, $Y_{15}^{(2)}$, and $Y_{20}^{(3)}$, denoted by $f_{Y_{11}^{(1)}}^{(\beta)}(y)$, $f_{Y_{15}^{(2)}}^{(\beta)}(y)$, and $f_{Y_{20}^{(3)}}^{(\beta)}(y)$, as shown in Figure 4b–d, respectively. It is clearly seen from Figure 4b–d that the graphs of the PDFs closely matched their corresponding histograms obtained from MC simulations.
Next, we assessed the goodness of fit discussed above by employing the Kolmogorov–Smirnov (K-S) test [42]. Figure 4a–d also display the p-values of the K-S tests computed from 1000 random samples generated by the PDFs $f_{Y_6^{(1)}}^{(\beta)}(y)$, $f_{Y_{11}^{(1)}}^{(\beta)}(y)$, $f_{Y_{15}^{(2)}}^{(\beta)}(y)$, and $f_{Y_{20}^{(3)}}^{(\beta)}(y)$ and their corresponding random variables $Y_6^{(1)}$, $Y_{11}^{(1)}$, $Y_{15}^{(2)}$, and $Y_{20}^{(3)}$, respectively. As shown in Figure 4a–d, the p-values fell within the acceptance region for the significance level of the K-S tests, which we set to 0.1; all the resulting null hypotheses of the K-S tests were accepted. Therefore, the PDFs $f_{Y_6^{(1)}}^{(\beta)}(y)$, $f_{Y_{11}^{(1)}}^{(\beta)}(y)$, $f_{Y_{15}^{(2)}}^{(\beta)}(y)$, and $f_{Y_{20}^{(3)}}^{(\beta)}(y)$ computed by using the Laguerre expansion (3) appeared consistent with the corresponding histograms of random samples generated from $Y_6^{(1)}$, $Y_{11}^{(1)}$, $Y_{15}^{(2)}$, and $Y_{20}^{(3)}$, respectively.
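For readers who want to reproduce this kind of goodness-of-fit check outside MATHEMATICA, the following hedged Python sketch draws MC samples of a small conic combination of our own choosing, integrates the truncated Laguerre PDF from the Section 2 sketch to obtain a CDF, and runs a Kolmogorov–Smirnov test; the parameter values and sample size are ours, not those of Example 1.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(2)
alpha = np.array([0.3, 0.7]); nu = np.array([2.0, 3.0]); delta = np.array([0.5, 1.0])
beta = 0.6 * alpha.max()                                  # satisfies beta > max(alpha)/2

# Monte Carlo samples of Y_2 = 0.3*X_1 + 0.7*X_2 with the chosen parameters.
samples = sum(a * rng.noncentral_chisquare(df, nc, size=500)
              for a, df, nc in zip(alpha, nu, delta))

# CDF obtained by integrating the truncated Laguerre PDF from the Section 2 sketch.
cdf = np.vectorize(lambda y: quad(lambda u: pdf_Yn(u, alpha, nu, delta, beta, K=60), 0.0, y)[0])
print(stats.kstest(samples, cdf))                         # a large p-value indicates a good fit
```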

5.2. The Performance of Our Explicit Formula for $E[Y_n^{\gamma}]$

Example 2.
In our next example, we demonstrate the performance of our explicit Formula (8) by selecting $Y_{11}^{(1)}$, $Y_{15}^{(2)}$, and $Y_{20}^{(3)}$ from Example 1 as our case study.
Firstly, we computed the values of $E\bigl[(Y_{11}^{(1)})^{\gamma}\bigr]$ for $\gamma \in [0, 2]$ by using our explicit Formula (8) with $K = K_1 = 100$. Next, we plotted these values against the results obtained from MC simulations with $n_p = 10, 20, 50, 100, 500, 2000$, the numbers of sample paths used in the MC simulations, in Figure 5a–f, respectively. We then applied the same procedure to investigate the accuracy of our explicit Formula (8) for $E\bigl[(Y_{15}^{(2)})^{\gamma}\bigr]$ and $E\bigl[(Y_{20}^{(3)})^{\gamma}\bigr]$. The results obtained for $E\bigl[(Y_{15}^{(2)})^{\gamma}\bigr]$, where $\gamma = 2, 2.01, \ldots, 3$, with $K = K_2 = 300$, and for $E\bigl[(Y_{20}^{(3)})^{\gamma}\bigr]$, where $\gamma = 3, 3.01, \ldots, 4$, with $K = K_3 = 400$, are displayed in Figure 6a,c,e and Figure 6b,d,f, respectively. Evidently, the variation of the approximate values from the MC simulations decreased as $n_p$ increased for all chosen $\gamma$'s, demonstrating the convergence of the MC approximations to the values computed by using our explicit Formula (8).
In order to verify the result presented in Theorem 3, we used (11) to compute the truncation errors of $E\bigl[(Y_{11}^{(1)})^{\gamma}\bigr]$, $E\bigl[(Y_{15}^{(2)})^{\gamma}\bigr]$, and $E\bigl[(Y_{20}^{(3)})^{\gamma}\bigr]$, denoted by $\mathcal{E}_{k,\infty}(\gamma, 1)$, $\mathcal{E}_{k,\infty}(\gamma, 2)$, and $\mathcal{E}_{k,\infty}(\gamma, 3)$, respectively, for $\gamma = \frac{1}{2} \notin \mathbb{N}$ and $\gamma = 1, 2 \in \mathbb{N}$, and $k = 0, \ldots, 10$. The truncation errors obtained are tabulated in Table 1. We clearly see from Table 1 that $\mathcal{E}_{k,\infty}(\gamma, j)$ tended to zero as $k$ increased for all $j = 1, 2, 3$ and all selected $\gamma$. In particular, $\mathcal{E}_{k,\infty}(\gamma, j) = 0$ for $k \ge \gamma$ when $\gamma = 1, 2$. This confirmed the result presented in Theorem 4 that our explicit Formula (17) can be used to compute $E[Y_n^{\gamma}]$ without producing truncation errors when $\gamma \in \mathbb{N}$; it should also be remarked from Table 1 that the truncation errors can be very large when $K + 1$ is less than $\gamma$ for $\gamma \in \mathbb{N}$. Although utilizing our explicit Formula (8) for computing $E[Y_n^{\gamma}]$ when $\gamma \in \mathbb{R}^+$ and $\gamma \notin \mathbb{N}$ always produces a truncation error, the result presented in Theorem 3 ensures that this error tends to zero as $K$ increases.
We finish this example by illustrating the efficiency of our explicit Formula (8) over MC simulations. As shown in Figure 5a–f and Figure 6a–f, we needed to increase the value of $n_p$ in the MC simulations in order to reduce the variation of the approximate values of $E\bigl[(Y_{11}^{(1)})^{\gamma}\bigr]$, $E\bigl[(Y_{15}^{(2)})^{\gamma}\bigr]$, and $E\bigl[(Y_{20}^{(3)})^{\gamma}\bigr]$, respectively, which can be time-consuming. For example, to bring the absolute difference between the exact value of $E\bigl[(Y_{20}^{(3)})^{2}\bigr]$ computed from our explicit Formula (8) and its approximate value obtained from MC simulations below $10^{-3}$, our numerical experiment required MC simulations with $n_p = 10^7$, consuming 19 s, while implementing our explicit Formula (8) with $K = 3$ took just 0.001 s.
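In the same spirit, the hedged snippet below compares the truncated series (8) with a crude MC estimate for a small test case of our own choosing (not the paper's $Y_{11}^{(1)}$, $Y_{15}^{(2)}$, or $Y_{20}^{(3)}$ configurations); it reuses `moment_Yn` from the sketch in Section 3.1.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.array([0.2, 0.5, 1.0]); nu = np.array([1.5, 2.0, 3.0]); delta = np.array([0.1, 0.3, 0.4])
beta = 0.75 * alpha.max()                # beta > max(alpha)/2, so the series converges (Theorem 3)
gamma = 1.7

samples = sum(a * rng.noncentral_chisquare(df, nc, size=10**6)
              for a, df, nc in zip(alpha, nu, delta))
print(moment_Yn(gamma, alpha, nu, delta, beta, K=150))    # truncated series (8)
print(np.mean(samples ** gamma))                          # Monte Carlo benchmark
```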

5.3. Extended Results for the ECIR Process with Time-Varying Dimension

We next shift our attention to our explicit formulas for the TPDF and the moments of ECIR processes with time-varying dimension. We specifically set the parameter functions of the ECIR process V t described by (19) with time-varying dimension d ( t ) defined in (20) as follows:
$$\kappa(t) = 0.1 + 0.2 t + 0.3\, e^{\cos^2(t+2)}, \qquad (45)$$
$$\theta(t) = 0.1 + 0.5\, e^{2\sin(t+2)}, \qquad (46)$$
and
$$\sigma(t) = 0.4 + 0.1\, t\, e^{\sin(t+2)} \qquad (47)$$
for $t \in [0, 3]$.
Figure 7a–f display the three parameter functions (45)–(47) as well as $d(t)$, $d^{(1)}(t)$, and $\tau(t, 0)$ defined in (21), respectively. It should be noticed from Figure 7d that Assumption 1 was fulfilled on $[0, 3]$, while Assumption 2 was violated, e.g., $d^{(1)}(1.5) < 0$, as shown in Figure 7e. Consequently, we set the time domain for this study to be $D_1 := [0, 1]$, ensuring that both Assumptions 1 and 2 were satisfied.

5.3.1. The Accuracy of Our Explicit Formula for the TPDF of the ECIR Process with Time-Varying Dimension

Example 3.
This example aimed to investigate the accuracy of our explicit Formula (31) for calculating the TPDF of $V_t$. We started by implementing the result presented in Theorem 10, namely the convergence in (22) of the PDF of $\hat{Y}_n(t, v_0) := \sum_{i=1}^{n} \hat{\alpha}_i \hat{X}_i$ to the PDF of $V_t \,|\, v_0$ as $n$ approaches infinity. We set $v_0 = 1, 2$ and $t = 0.1, 0.5, 1 \in D_1$. We then computed the approximate PDFs of $\hat{Y}_n(t, v_0)$ for $n = 1, 2, 3$ based on random samples drawn from populations distributed according to noncentral chi-square and chi-square distributions with sample size $n_p = 10^6$. As a benchmark, the approximate PDFs of $V_t \,|\, v_0$ for all $v_0 = 1, 2$ and $t = 0.1, 0.5, 1$ were obtained by using sample paths generated from (19).
Figure 8a–f show that the PDF of $\hat{Y}_n(t, v_0)$ tended to align better with the histogram of $V_t \,|\, v_0$, representing the PDF of $V_t \,|\, v_0$, as $n$ increased, for all $v_0 = 1, 2$ and $t = 0.1, 0.5, 1$, demonstrating that $\hat{Y}_n(t, v_0)$ converged in distribution to $V_t \,|\, v_0$ as $n$ approached infinity. However, as shown in Figure 8a–f, the number of terms used in the summation $\sum_{i=1}^{n} \hat{\alpha}_i \hat{X}_i$ had to be increased in order to obtain a better approximation of the PDF of $V_t \,|\, v_0$. This is a major drawback of implementing (22), which would require the exact PDFs of $\hat{Y}_n(t, v_0)$ and $V_t \,|\, v_0$ in order to estimate the errors incurred for each $n$.
The problem mentioned above can be completely resolved by employing our explicit Formula (31) to obtain the exact PDF of $V_t \,|\, v_0$. Truncation errors arising when the infinite series in (31) is implemented can be estimated by applying Lemma 1. To demonstrate the accuracy of our explicit Formula (31), we computed $f_{V_t}(v, t \,|\, v_0)$ for $v_0 = 1, 2$, $t = 0.1, 0.5, 1$, and $v > 0$ by setting $K = 20$. The sequences of $\hat{c}_k(t, v_0)$'s computed from (32) and (33) for $v_0 = 1, 2$ and $t = 0.1, 0.5, 1$ are displayed in Figure 9a,b, showing that the coefficients tended to zero as $k$ increased. This ensured that the truncation errors vanished as $K$ approached infinity.
Figure 8a–f also display the graphs of $f_{V_t}(v, t \,|\, v_0)$ against the corresponding histograms of $V_t \,|\, v_0$ for $t = 0.1, 0.5, 1 \in D_1$ and $v_0 = 1, 2$, obtained from the sample paths generated from (19). It is readily seen that the graphs of $f_{V_t}(v, t \,|\, v_0)$ matched the corresponding histograms very well. Following the K-S tests with a significance level of 10%, as employed in Example 1 to assess the equivalence of two distributions, the minimum of the p-values was 0.14, implying that there was no significant difference between $f_{V_t}(v, t \,|\, v_0)$ obtained from our explicit Formula (31) and the corresponding histograms of the random samples generated from (19).
The validity of our explicit Formula (31) admittedly becomes questionable when Assumption 2 is violated. To investigate this, we extended the time domain from $D_1 := [0, 1]$ to $D_2 := [0, 3]$ and considered $f_{V_t}(v, t \,|\, v_0)$ for $v > 0$, $v_0 = 1, 2$, and $t = 0.03, 0.75, 1.5, 2.25, 3 \in D_2$. Figure 7e shows that $d^{(1)}(1.5)$ and $d^{(1)}(3)$ were negative, so Assumption 2 was violated at $t = 1.5, 3$; nevertheless, $f_{V_t}(v, t \,|\, v_0)$ was finite for all $v > 0$, $v_0 = 1, 2$, and $t \in D_2$, as shown in Figure 10a,b. Moreover, the graph of $f_{V_t}(v, t \,|\, v_0)$ appeared to match the corresponding histograms of the random samples from (19) based on MC simulations, as the minimum of the p-values from the K-S tests was 20%. The results obtained suggest that the condition in Assumption 2 could perhaps be relaxed to $d^{(1)}(t) < \infty$ for all $t \in [0, T]$ when our explicit Formula (31) is employed for computing the TPDF of the ECIR process (19). That being said, Assumption 2 remains an important ingredient of our analysis, since the degrees of freedom $\hat{\nu}_i$ given in (27) are not allowed to be negative under the definitions of the noncentral chi-square and chi-square distributions.

5.3.2. The Performance of Our Explicit Formula for $U_E^{(\gamma)}(t \,|\, v_0)$

Example 4.
In our last example, we illustrate the accuracy and efficiency of our explicit Formula (38) for computing the $\gamma$th conditional moment of the ECIR process (19) with time-varying dimension. As previously discussed in Section 4.2.1, we demonstrate the advantages of using our explicit Formula (38) over the explicit formula presented in Theorem 2.1 of Rujivan [12], setting the parameter functions of the ECIR process (19) to (45)–(47) as in Example 3.
Firstly, we considered the accuracy of our explicit Formula (38) and of the one written in Equation (2.2) of Rujivan [12] for $\gamma = 0.5, 1, 1.5, 2, 2.5, 3$. Let $v_0 = 1$ and $t = 1$. For each $\gamma$, we computed two sequences of $U_E^{(\gamma, K)}(t \,|\, v_0)$ for $K = 0, \ldots, 5$, from our explicit Formula (38) and from Equation (2.2) of Rujivan [12], where $K + 1$ is the number of terms used in the infinite series on the RHS of (38) and of Equation (2.2) of Rujivan [12], respectively. Figure 11a–f display the graphs of the two sequences of $U_E^{(\gamma, K)}(t \,|\, v_0)$ against the approximate value of $U_E^{(\gamma)}(t \,|\, v_0)$ from MC simulations of the ECIR process (19), demonstrating that both sequences converged to the corresponding MC approximation of $U_E^{(\gamma)}(t \,|\, v_0)$ as $K$ increased.
It should also be pointed out from Figure 11a–f that the sequence of $U_E^{(\gamma, K)}(t \,|\, v_0)$ computed from (38) converged to the corresponding MC approximation of $U_E^{(\gamma)}(t \,|\, v_0)$ faster than the sequence computed from Equation (2.2) of Rujivan [12]. Moreover, as shown in Figure 11b,d,f, the two sequences of $U_E^{(\gamma, K)}(t \,|\, v_0)$ coincided for $K + 1 \ge m$ when $\gamma = m$ was a positive integer, demonstrating the consistency of our closed-form Formula (39) with the explicit formula written in Equation (2.13) in Theorem 2.2 of Rujivan [12].
Secondly, we investigated the efficiency of our explicit Formula (38) over the one written in Equation (2.2) of Rujivan [12] for calculating $U_E^{(\gamma)}(t \,|\, v_0)$ by choosing $\gamma = 0.5$ in our case study. Figure 12a,b illustrate the computational times (in seconds) used to compute $U_E^{(\gamma, K)}(t \,|\, v_0)$ from the two explicit formulas and the reduction (in folds) of computational time for $K = 1, \ldots, 10$, respectively. As expected, implementing our explicit Formula (38) consumed considerably less time and effort than implementing the explicit formula written in Equation (2.2) of Rujivan [12]; in particular, the reduction was more than sixfold at $K = 10$.

6. Conclusions

In this paper, we presented the first explicit formula, expressed in terms of generalized hypergeometric functions, for computing the $\gamma$th moment of a conic combination of $n$ independent noncentral chi-square random variables defined in (1), for any integer $n \ge 2$ and real number $\gamma \in \mathbb{R}^+$, even when the conic coefficients are not all identical. Moreover, the truncation errors incurred by implementing our explicit formulas were determined analytically. We extended our result to various types of independent random variables that can be transformed into noncentral chi-square random variables. For validation purposes, several numerical examples were presented to show the performance of our explicit formulas compared with MC simulations. Furthermore, we highlighted an interesting application of our explicit formulas in interest rate modeling by expressing the exact TPDF of the ECIR process with time-varying dimension in terms of generalized Laguerre functions. As a result, a novel explicit formula for the $\gamma$th conditional moment of the ECIR process was obtained and tested, and we concluded that the distinguishing feature of our analytical approach lies in its computational efficiency, which is superior to that of the other existing methods from the literature.

Author Contributions

Conceptualization, S.R., A.S., K.C. and N.R.; methodology, S.R. and A.S.; software, S.R. and A.S.; validation, S.R., A.S., K.C. and N.R.; formal analysis, S.R., A.S., K.C. and N.R.; investigation, S.R., A.S., K.C. and N.R.; writing—original draft preparation, S.R. and A.S.; writing—review and editing, S.R., K.C. and N.R.; visualization, S.R. and A.S.; supervision, K.C. and N.R.; project administration, S.R. and K.C.; funding acquisition, S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by the National Research Council of Thailand (NRCT):NRCT5-RGJ63016-150 and Walailak University partial funding contract no.01/2562 for the first and the second authors. Furthermore, the first author received funding support from the NSRF, Thailand, via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (grant no. B05F640202).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful for the suggestions from the anonymous referees that have substantially improved the quality and presentation of the results. All errors are the authors’ own responsibility.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ECIR    Extended Cox–Ingersoll–Ross
CIR     Cox–Ingersoll–Ross
K-S     Kolmogorov–Smirnov
MC      Monte Carlo
ODE     Ordinary differential equation
PDE     Partial differential equation
PDF     Probability density function
SDE     Stochastic differential equation
TPDF    Transition probability density function

Appendix A. Omitted Proofs from Section 2

Proof of Theorem 1.
To obtain the PDF of $Y_n$ as written in (3), we use the results proposed in Section 3 of [16] as follows. From (1), we let the random variable $Q_n$ defined in [16] be $Y_n$ and consider the coefficients $\alpha_i > 0$ and random variables $X_i \sim \chi^2_{\nu_i}(\delta_i)$ for $i = 1, \ldots, n$. Then, we set $p = \mu_0 = \frac{\nu}{2}$ in Equations (3.2), (3.4a), and (3.4b) of [16]. As a result, we have $c_0 = 1$. Moreover, the formulas for the remaining $c_k$ coefficients on the RHS of Equation (3.2), as written in Equations (3.4a) and (3.4b), reduce to (5) and (6)–(7), respectively. □

Appendix B. Omitted Proofs from Section 3

Proof of Theorem 2.
We set
$$C_{k,\gamma}(y) := \frac{1}{(2\beta)^{\frac{\nu}{2}}}\, \frac{k!}{\Gamma\!\left(\frac{\nu}{2}+k\right)}\, c_k\, L_k^{\left(\frac{\nu}{2}-1\right)}\!\left(\frac{y}{2\beta}\right) e^{-\frac{y}{2\beta}}\, y^{\frac{\nu}{2}-1+\gamma}$$
for $\gamma \in \mathbb{R}^+$ and $k = 0, 1, \ldots$, where the $c_k$'s, $\nu$, and $\beta$ are given in Theorem 1.
Observe that
$$\int_0^{\infty} C_{k,\gamma}(y)\, dy = \frac{k!\, c_k}{(2\beta)^{\frac{\nu}{2}} \Gamma\!\left(\frac{\nu}{2}+k\right)} \int_0^{\infty} L_k^{\left(\frac{\nu}{2}-1\right)}\!\left(\frac{y}{2\beta}\right) e^{-\frac{y}{2\beta}}\, y^{\frac{\nu}{2}-1+\gamma}\, dy = \frac{k!\, (2\beta)^{\gamma} c_k}{\Gamma\!\left(\frac{\nu}{2}+k\right)} \int_0^{\infty} L_k^{\left(\frac{\nu}{2}-1\right)}(y)\, e^{-y}\, y^{\frac{\nu}{2}-1+\gamma}\, dy = (2\beta)^{\gamma} c_k\, \frac{\Gamma\!\left(\frac{\nu}{2}+\gamma\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, \frac{\nu}{2}+\gamma;\, \frac{\nu}{2};\, 1\right) = (2\beta)^{\gamma} c_k\, \frac{\Gamma\!\left(\frac{\nu}{2}+\gamma\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}\, \frac{(-\gamma)_k}{\left(\frac{\nu}{2}\right)_k}, \qquad (A1)$$
where the third equality follows from [43] and $(\cdot)_k$ denotes the usual Pochhammer symbol. By substituting the identities
$$\Gamma\!\left(\frac{\nu}{2}\right) = \frac{\Gamma\!\left(\frac{\nu}{2}+k\right)}{\left(\frac{\nu}{2}\right)_k} \qquad \text{and} \qquad \Gamma\!\left(\frac{\nu}{2}+\gamma\right) = \frac{\Gamma\!\left(\frac{\nu}{2}+\gamma+k\right)}{\left(\frac{\nu}{2}+\gamma\right)_k}$$
into the above derivation, we further obtain
$$\int_0^{\infty} C_{k,\gamma}(y)\, dy = (2\beta)^{\gamma} c_k\, \frac{\Gamma\!\left(\gamma+k+\frac{\nu}{2}\right)}{\Gamma\!\left(k+\frac{\nu}{2}\right)}\, \frac{(-\gamma)_k}{\left(\frac{\nu}{2}+\gamma\right)_k} = (2\beta)^{\gamma} (-1)^k c_k\, \frac{\Gamma\!\left(\gamma+k+\frac{\nu}{2}\right)}{\Gamma\!\left(k+\frac{\nu}{2}\right)}\, \frac{(-\gamma)_k}{\left(1-k-\frac{\nu}{2}-\gamma\right)_k} = (2\beta)^{\gamma} (-1)^k\, \frac{\Gamma\!\left(\gamma+k+\frac{\nu}{2}\right)}{\Gamma\!\left(k+\frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1-k-\frac{\nu}{2};\, 1-k-\frac{\nu}{2}-\gamma;\, 1\right) c_k, \qquad (A2)$$
which is manifestly finite for all $\gamma \in \mathbb{R}^+$ and $k \in \mathbb{N} \cup \{0\}$. The uniformly convergent series for $f_{Y_n}$ derived in (3) implies that the series $\sum_{k=0}^{\infty} C_{k,\gamma}(y)$ converges uniformly to $y^{\gamma} f_{Y_n}(y)$, i.e.,
$$y^{\gamma} f_{Y_n}(y) = \sum_{k=0}^{\infty} C_{k,\gamma}(y). \qquad (A3)$$
Applying (A2) and (A3) yields
$$E[Y_n^{\gamma}] = \int_0^{\infty} y^{\gamma} f_{Y_n}(y)\, dy = \sum_{k=0}^{\infty} \int_0^{\infty} C_{k,\gamma}(y)\, dy = (2\beta)^{\gamma} \sum_{k=0}^{\infty} (-1)^k\, \frac{\Gamma\!\left(\gamma+k+\frac{\nu}{2}\right)}{\Gamma\!\left(k+\frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1-k-\frac{\nu}{2};\, 1-k-\frac{\nu}{2}-\gamma;\, 1\right) c_k, \qquad (A4)$$
and this finishes the proof. □
Proof of Corollary 1.
Firstly, we apply the Gauss summation formula [31] to the generalized hypergeometric functions in the infinite series on the RHS of (8) with $\gamma = \frac{1}{2}$ as follows:
$${}_2F_1\!\left(-k,\, 1-k-\frac{\nu}{2};\, 1-k-\frac{\nu}{2}-\frac{1}{2};\, 1\right) = \frac{\Gamma\!\left(1-k-\frac{\nu}{2}-\frac{1}{2}\right) \Gamma\!\left(k-\frac{1}{2}\right)}{\Gamma\!\left(1-\frac{\nu}{2}-\frac{1}{2}\right) \Gamma\!\left(-\frac{1}{2}\right)}. \qquad (A5)$$
Let $z_1 = k + \frac{\nu}{2}$ and $z_2 = \frac{\nu}{2}$. We use the properties of gamma functions to obtain the following relations:
$$\Gamma\!\left(1-k-\frac{\nu}{2}-\frac{1}{2}\right) = \Gamma\!\left(\frac{1}{2}-z_1\right) = \frac{(-4)^{z_1}\, z_1!\, \sqrt{\pi}}{(2z_1)!}, \qquad (A6)$$
$$\Gamma\!\left(\frac{1}{2}+k+\frac{\nu}{2}\right) = \Gamma\!\left(\frac{1}{2}+z_1\right) = \frac{(2z_1)!\, \sqrt{\pi}}{(-4)^{z_1}\, z_1!}, \qquad (A7)$$
$$\Gamma\!\left(1-\frac{\nu}{2}-\frac{1}{2}\right) = \Gamma\!\left(\frac{1}{2}-z_2\right) = \frac{(-4)^{z_2}\, z_2!\, \sqrt{\pi}}{(2z_2)!}, \qquad (A8)$$
$$\Gamma\!\left(\frac{\nu+1}{2}\right) = \Gamma\!\left(\frac{1}{2}+z_2\right) = \frac{(2z_2)!\, \sqrt{\pi}}{(-4)^{z_2}\, z_2!}. \qquad (A9)$$
Applying (A5)–(A9) to the coefficients of the infinite series on the RHS of (8) with $\gamma = \frac{1}{2}$ and simplifying the result obtained by using the Gauss summation formula yield
$$(-1)^k\, \frac{\Gamma\!\left(\frac{1}{2}+k+\frac{\nu}{2}\right)}{\Gamma\!\left(k+\frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, 1-k-\frac{\nu}{2};\, 1-k-\frac{\nu}{2}-\frac{1}{2};\, 1\right) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}\, {}_2F_1\!\left(-k,\, \frac{\nu+1}{2};\, \frac{\nu}{2};\, 1\right), \qquad (A10)$$
where
$${}_2F_1\!\left(-k,\, \frac{\nu+1}{2};\, \frac{\nu}{2};\, 1\right) = \frac{\Gamma\!\left(\frac{\nu}{2}\right) \Gamma\!\left(k-\frac{1}{2}\right)}{\Gamma\!\left(k+\frac{\nu}{2}\right) \Gamma\!\left(-\frac{1}{2}\right)}, \qquad (A11)$$
and this completes the proof. □
Proof of Corollary 2.
From (6) and (7), when $\beta = \alpha_i = 1$ for all $i = 1, \ldots, n$, and $\delta = \sum_{i=1}^{n} \delta_i > 0$, we have $d_1 = -\frac{1}{2}\delta$ and $d_j = 0$ for all $j = 2, 3, \ldots$ Applying the result obtained to (5) yields
$$c_k = \frac{(-1)^k \delta^k}{2^k\, k!} \qquad (A12)$$
for $k = 1, 2, \ldots$ We set $n = 1$ and $X = Y_1$. Replacing the coefficients in the infinite series on the RHS of (8) with (A12) and simplifying the result obtained yield (10). □
Proof of Lemma 1.
We follow the approach presented in Lemma 3.1 of [16] to derive bounds on the $c_k$ for $k = 1, 2, \ldots$ From Inequality (3.7) in Lemma 3.1 of [16], we set $\mu_0 = p = \frac{\nu}{2}$, and this immediately yields (12). The last statement of the lemma holds by Remark 3.1 of [16]. □
Proof of Theorem 3.
Using (11)–(14), we immediately obtain (15). Next, we define
$$P_K(\nu, \zeta) := \sum_{k=K+1}^{\infty} b_k(\gamma, \nu, \zeta) \qquad (A13)$$
for $K \ge 0$ and $\nu > 0$, where $b_k(\gamma, \nu, \zeta)$ is given in (14). Applying Lemma 1, one can show that
$$\lim_{k \to \infty} \frac{b_{k+1}(\gamma, \nu, \zeta)}{b_k(\gamma, \nu, \zeta)} = \zeta < 1, \qquad (A14)$$
provided that $\beta > \frac{1}{2} \max_i \alpha_i$.
Using the ratio test along with (A14), the infinite series $P_K(\nu, \zeta)$ converges absolutely for all $0 < \zeta < 1$ and $\nu > 0$. As a result, one can show from (13) that
$$\lim_{K \to \infty} \mathcal{B}_{K,\infty}^{(\gamma)}(\zeta) = \left(2\beta\right)^{\gamma} e^{\frac{\delta}{2}\zeta} \lim_{K \to \infty} P_K(\nu, \zeta) = 0. \qquad (A15)$$
By utilizing (15) and (A15), we thus obtain (16). □
Proof of Theorem 4.
From Euler's transformation [31], we apply
$${}_2F_1(a, b; c; z) = (1-z)^{c-a-b}\, {}_2F_1(c-a, c-b; c; z) \qquad (A16)$$
for $c > a + b$ to the generalized hypergeometric functions on the RHS of (8) with $\gamma = m$ as follows:
$${}_2F_1\!\left(-k,\, 1-k-\frac{\nu}{2};\, 1-k-\frac{\nu}{2}-m;\, z\right) = (1-z)^{k-m}\, {}_2F_1\!\left(1-\frac{\nu}{2}-m,\, -m;\, 1-k-\frac{\nu}{2}-m;\, z\right). \qquad (A17)$$
Using (A17), it is easy to show that, when $k > m$,
$${}_2F_1\!\left(-k,\, 1-k-\frac{\nu}{2};\, 1-k-\frac{\nu}{2}-m;\, 1\right) = 0 \qquad (A18)$$
for all $m = 1, 2, \ldots$, and this completes the proof. □
Proof of Corollary 3.
We derived in the proof of Corollary 2 that the $c_k$'s satisfy (A12). By replacing the coefficients in the finite series on the RHS of (17) with (A12) and simplifying the result obtained, we immediately obtain (18). □
Proof of Theorem 5.
Set $X_i = \frac{Z_i^2}{\sigma_{(1,i)}^2}$. Thus, $X_i \sim \chi^2_1\!\left(\frac{\mu_{(1,i)}^2}{\sigma_{(1,i)}^2}\right)$ for all $i = 1, \ldots, n$. Moreover, $Y_{(1,n)}$ can be expressed as $Y_{(1,n)} = \sum_{i=1}^{n} a_{(1,i)} \sigma_{(1,i)}^2 X_i$. Utilizing Theorems 1, 2, and 4 with $\alpha_i = a_{(1,i)} \sigma_{(1,i)}^2$, $\nu_i = 1$, and $\delta_i = \frac{\mu_{(1,i)}^2}{\sigma_{(1,i)}^2}$ for all $i = 1, \ldots, n$, we immediately obtain that the PDF of $Y_{(1,n)}$, $E[Y_{(1,n)}^{\gamma}]$, and $E[Y_{(1,n)}^{m}]$ can be computed by using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$. □
Proof of Theorem 6.
Let $\nu_i = 2\kappa_{(2,i)}$ and $X_i \sim \chi^2_{\nu_i}$. By using the property of the gamma distribution, we have that $\frac{1}{2}\theta_{(2,i)} X_i \sim \mathrm{Gamma}\!\left(\kappa_{(2,i)}, \theta_{(2,i)}\right)$ and $G_i$ can be expressed as $G_i = \frac{1}{2}\theta_{(2,i)} X_i$ for all $i = 1, \ldots, n$. As a result, $Y_{(2,n)}$ can be expressed in terms of a linear combination of independent chi-square random variables as $Y_{(2,n)} = \sum_{i=1}^{n} \frac{1}{2} a_{(2,i)} \theta_{(2,i)} X_i$. Applying Theorems 1, 2, and 4 with $\alpha_i = \frac{1}{2} a_{(2,i)} \theta_{(2,i)}$, $\nu_i = 2\kappa_{(2,i)}$, and $\delta_i = 0$ for all $i = 1, \ldots, n$, the PDF of $Y_{(2,n)}$, $E[Y_{(2,n)}^{\gamma}]$, and $E[Y_{(2,n)}^{m}]$ can be computed by using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$. □
Proof of Theorem 7.
Utilizing the property of the Erlang distribution, we have $L_i \sim \mathrm{Gamma}\!\left(\kappa_{(3,i)}, \theta_{(3,i)}\right)$, where $\theta_{(3,i)} = \frac{1}{\lambda_{(3,i)}}$ for all $i = 1, \ldots, n$. Applying Theorem 6, the PDF of $Y_{(3,n)}$, $E[Y_{(3,n)}^{\gamma}]$, and $E[Y_{(3,n)}^{m}]$ can be computed by using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, where we set $\alpha_i = \frac{a_{(3,i)}}{2\lambda_{(3,i)}}$, $\nu_i = 2\kappa_{(3,i)}$, and $\delta_i = 0$ for all $i = 1, \ldots, n$. □
Proof of Theorem 8.
Using the property of the exponential distribution, we have $P_i \sim \mathrm{Gamma}\!\left(\kappa_{(4,i)}, \theta_{(4,i)}\right)$, where $\kappa_{(4,i)} = 1$ and $\theta_{(4,i)} = \frac{1}{\lambda_{(4,i)}}$ for all $i = 1, \ldots, n$. From Theorem 6, the PDF of $Y_{(4,n)}$, $E[Y_{(4,n)}^{\gamma}]$, and $E[Y_{(4,n)}^{m}]$ can be computed by using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, where we set $\alpha_i = \frac{a_{(4,i)}}{2\lambda_{(4,i)}}$, $\nu_i = 2$, and $\delta_i = 0$ for all $i = 1, \ldots, n$. □
 Proof of Theorem 9.
From the property of the Maxwell–Boltzmann distribution, we have $W_i^2 \sim \mathrm{Gamma}\!\left(\kappa_{(5,i)}, \theta_{(5,i)}\right)$, where $\kappa_{(5,i)} = \frac{3}{2}$ and $\theta_{(5,i)} = 2\phi_{(5,i)}^2$ for all $i = 1, \ldots, n$. Applying Theorem 6, the PDF of $Y_{(5,n)}$, $E[Y_{(5,n)}^{\gamma}]$, and $E[Y_{(5,n)}^{m}]$ can be computed by using (3), (8), and (17), respectively, for all $\gamma \in \mathbb{R}^+$ and integers $m \in \mathbb{N}$, where we set $\alpha_i = a_{(5,i)} \phi_{(5,i)}^2$, $\nu_i = 3$, and $\delta_i = 0$ for all $i = 1, \ldots, n$. □

Appendix C. Omitted Proofs from Section 4

Proof of Theorem 11.
First, we define
$$\hat{Y}_n := \sum_{i=1}^{n} \hat{\alpha}_i \hat{X}_i, \qquad (A19)$$
where the $\hat{X}_i$'s are the independent noncentral chi-square and chi-square random variables given in (23) with the coefficients and parameters given in (24)–(28). Next, we use the Laguerre expansion (3) in Theorem 1 to obtain the PDF of $\hat{Y}_n$ by setting $\alpha_i = \hat{\alpha}_i$, $\nu_i = \hat{\nu}_i$, and $\delta_i = \hat{\delta}_i$ for $i = 1, \ldots, n$. As a result, the PDF of $\hat{Y}_n$, denoted by $f_{\hat{Y}_n}^{(\beta)}(\hat{y}_n)$, can be expressed as
$$f_{\hat{Y}_n}^{(\beta)}(\hat{y}_n) = \frac{e^{-\frac{\hat{y}_n}{2\beta}}\, \hat{y}_n^{\frac{\hat{\nu}_n}{2}-1}}{(2\beta)^{\frac{\hat{\nu}_n}{2}}} \sum_{k=0}^{\infty} \frac{k!}{\Gamma\!\left(\frac{\hat{\nu}_n}{2}+k\right)}\, \hat{c}_{k,n}\, L_k^{\left(\frac{\hat{\nu}_n}{2}-1\right)}\!\left(\frac{\hat{y}_n}{2\beta}\right) \qquad (A20)$$
for $\hat{y}_n > 0$ and $\beta > 0$, where
$$\hat{\nu}_n = \sum_{i=1}^{n} \hat{\nu}_i = d(0) + \sum_{i=2}^{n} d^{(1)}\!\left(\frac{(i-1)t}{n}\right) \frac{t}{n}, \qquad (A21)$$
$$\hat{c}_{0,n} = 1, \qquad (A22)$$
$$\hat{c}_{k,n} = \frac{1}{k} \sum_{j=0}^{k-1} \hat{c}_{j,n}\, \hat{d}_{k-j,n} \qquad (A23)$$
for $k \ge 1$,
$$\hat{d}_{1,n} = -\frac{1}{2\beta}\, \hat{\delta}_1 \hat{\alpha}_1 + \frac{1}{2}\, \hat{\nu}_1 \left(1 - \frac{\hat{\alpha}_1}{\beta}\right) + \frac{1}{2} \sum_{i=2}^{n} \hat{\nu}_i \left(1 - \frac{\hat{\alpha}_i}{\beta}\right) = -\frac{1}{2\beta}\, v_0\, e^{-\int_0^t \kappa(u)\, du} + \frac{1}{2}\, d(0) \left(1 - \frac{\tau(t,0)}{\beta}\right) + \frac{1}{2} \sum_{i=2}^{n} d^{(1)}\!\left(\frac{(i-1)t}{n}\right) \frac{t}{n} \left(1 - \frac{\tau\!\left(t, \frac{(i-1)t}{n}\right)}{\beta}\right), \qquad (A24)$$
and
$$\hat{d}_{j,n} = -\frac{j}{2}\, \frac{1}{\beta^{j}}\, \hat{\delta}_1 \hat{\alpha}_1 \left(\beta - \hat{\alpha}_1\right)^{j-1} + \frac{1}{2}\, \hat{\nu}_1 \left(1 - \frac{\hat{\alpha}_1}{\beta}\right)^{j} + \frac{1}{2} \sum_{i=2}^{n} \hat{\nu}_i \left(1 - \frac{\hat{\alpha}_i}{\beta}\right)^{j} = -\frac{j}{2}\, \frac{1}{\beta^{j}}\, v_0\, e^{-\int_0^t \kappa(u)\, du} \left(\beta - \tau(t,0)\right)^{j-1} + \frac{1}{2}\, d(0) \left(1 - \frac{\tau(t,0)}{\beta}\right)^{j} + \frac{1}{2} \sum_{i=2}^{n} d^{(1)}\!\left(\frac{(i-1)t}{n}\right) \frac{t}{n} \left(1 - \frac{\tau\!\left(t, \frac{(i-1)t}{n}\right)}{\beta}\right)^{j} \qquad (A25)$$
for $j \ge 2$. It should be noted from (32)–(35) and (A22)–(A25) that choosing $\beta = \tau(t, 0) > 0$ yields
$$\lim_{n \to \infty} \hat{\nu}_n = d(t), \qquad (A26)$$
$$\lim_{n \to \infty} \hat{d}_{j,n} = \hat{d}_j(t, v_0) \qquad (A27)$$
for $j \ge 1$, and
$$\lim_{n \to \infty} \hat{c}_{k,n} = \hat{c}_k(t, v_0) \qquad (A28)$$
for $k \ge 0$. From Theorem 10, we apply the convergence (22) to $\hat{Y}_n$ as defined in (A19) and use (A26)–(A28) to obtain
$$v = \lim_{n \to \infty} \hat{y}_n \qquad (A29)$$
and
$$f_{V_t}(v, t \,|\, v_0) = \lim_{n \to \infty} f_{\hat{Y}_n}^{(\tau(t,0))}(\hat{y}_n) = \frac{e^{-\frac{v}{2\tau(t,0)}}\, v^{\frac{d(t)}{2}-1}}{\left(2\tau(t,0)\right)^{\frac{d(t)}{2}}} \sum_{k=0}^{\infty} \frac{k!}{\Gamma\!\left(\frac{d(t)}{2}+k\right)}\, \hat{c}_k(t, v_0)\, L_k^{\left(\frac{d(t)}{2}-1\right)}\!\left(\frac{v}{2\tau(t,0)}\right), \qquad (A30)$$
respectively.
Furthermore, if $d(s) = d \ge 2$ for all $s \in [0, t]$, then $d^{(1)}(s) = 0$ for all $s \in [0, t]$. Hence, the $\hat{c}_k(t, v_0)$, $k \ge 0$, written in (32) and (33) simplify to (36) upon replacing $d^{(1)}(s)$ in (34) and (35) with zero, and the proof is now complete. □
Proof of Theorem 12.
According to the proof of Theorem 11, we first apply Theorem 2 to $\hat{Y}_n$ defined in (A19) to obtain
$$\mathbb{E}^{\mathbb{P}}\!\left[\hat{Y}_n^{\gamma}\right] = \int_0^{\infty} \hat{y}^{\gamma} f_{\hat{Y}_n}^{(\beta)}(\hat{y})\, d\hat{y} = (2\beta)^{\gamma} \sum_{k=0}^{\infty} (-1)^{k}\, \frac{\Gamma\!\left(\gamma+k+\frac{\hat{\nu}_n}{2}\right)}{\Gamma\!\left(k+\frac{\hat{\nu}_n}{2}\right)}\, {}_2F_1\!\left(-k,\, 1-k-\frac{\hat{\nu}_n}{2};\, 1-k-\frac{\hat{\nu}_n}{2}-\gamma;\, 1\right) \hat{c}_{k,n}.$$
By choosing $\beta = \tau(t,0) > 0$ and using the results written in (A26)–(A30), an explicit formula for $U_E^{(\gamma)}(t \mid v_0) = \lim_{n\to\infty} \mathbb{E}^{\mathbb{P}}\!\left[\hat{Y}_n^{\gamma}\right]$ is obtained as expressed in (38). In addition, when $m \in \mathbb{N}$, an explicit formula for $U_E^{(m)}(t \mid v_0)$ can be obtained as expressed in (39) by applying Theorem 4 to (38), and this completes the proof. □
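A truncated evaluation of a series of this type can be sketched as follows. This is illustrative code of our own, not the authors' implementation: the weights, degrees of freedom, noncentralities, truncation level, and the choice of $\beta$ are arbitrary examples, and the coefficients $d_j$ below extend the structure of (A24)–(A25) under the assumption that every component may be noncentral. The terminating ${}_2F_1(-k, \cdot;\, \cdot;\, 1)$ factor is summed term by term, and the result is checked against a Monte Carlo estimate.

```python
# Illustrative sketch (not the authors' code) of a truncated series of the form
#   E[Y^g] = (2 beta)^g * sum_k (-1)^k Gamma(g+k+nu/2)/Gamma(k+nu/2)
#                          * 2F1(-k, 1-k-nu/2; 1-k-nu/2-g; 1) * c_k,
# with c_k from the recursion c_k = (1/k) sum_{j<k} c_j d_{k-j}.  All numbers are example values.
import numpy as np
from scipy.special import gammaln

alphas = np.array([0.80, 1.00, 1.25])   # weights alpha_i
nus    = np.array([1.0, 3.0, 2.5])      # degrees of freedom nu_i
deltas = np.array([0.8, 0.0, 1.5])      # noncentralities delta_i
g, K   = 0.7, 40                        # moment order gamma and truncation level
beta   = alphas.max()                   # assumption: |1 - alpha_i/beta| < 1 for every i

w = 1.0 - alphas / beta
d = np.zeros(K + 1)
for j in range(1, K + 1):               # assumed extension of (A24)-(A25) to all-noncentral input
    d[j] = 0.5 * np.sum(nus * w**j) - 0.5 * j * np.sum(deltas * alphas / beta * w**(j - 1))
c = np.zeros(K + 1)
c[0] = 1.0
for k in range(1, K + 1):
    c[k] = sum(c[j] * d[k - j] for j in range(k)) / k

def hyp2f1_at_one(k, b, cc):
    """Terminating 2F1(-k, b; cc; 1), summed term by term."""
    total, term = 0.0, 1.0
    for m in range(k + 1):
        total += term
        if m < k:
            term *= (m - k) * (b + m) / ((cc + m) * (m + 1.0))
    return total

nu = nus.sum()
series = sum((-1)**k * np.exp(gammaln(g + k + nu / 2) - gammaln(k + nu / 2))
             * hyp2f1_at_one(k, 1 - k - nu / 2, 1 - k - nu / 2 - g) * c[k] for k in range(K + 1))
moment = (2.0 * beta)**g * series

rng = np.random.default_rng(1)
y = sum(a * (rng.noncentral_chisquare(df, nc, 10**6) if nc > 0 else rng.chisquare(df, 10**6))
        for a, df, nc in zip(alphas, nus, deltas))
print(moment, np.mean(y**g))            # truncated series vs. Monte Carlo estimate
```

Note that for integer $\gamma$ the hypergeometric factor vanishes for $k > \gamma$, so the series terminates after finitely many terms; this is consistent with the zero truncation errors reported for $\gamma = 1, 2$ in Table 1.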

References

1. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, Volume 2; John Wiley & Sons: Hoboken, NJ, USA, 1995; Volume 289.
2. Moore, D.S.; Spruill, M.C. Unified large-sample theory of general chi-squared statistics for tests of fit. Ann. Stat. 1975, 3, 599–616.
3. Francq, C.; Roy, R.; Zakoïan, J.M. Diagnostic checking in ARMA models with uncorrelated errors. J. Am. Stat. Assoc. 2005, 100, 532–544.
4. Ljung, G.M. Diagnostic testing of univariate time series models. Biometrika 1986, 73, 725–730.
5. Peña, D.; Rodríguez, J. A powerful portmanteau test of lack of fit for time series. J. Am. Stat. Assoc. 2002, 97, 601–610.
6. Chumpong, K.; Mekchay, K.; Rujivan, S.; Thamrongrat, N. Simple Analytical Formulas for Pricing and Hedging Moment Swaps. Thai J. Math. 2022, 20, 693–713.
7. Chumpong, K.; Mekchay, K.; Thamrongrat, N. Analytical formulas for pricing discretely-sampled skewness and kurtosis swaps based on Schwartz's one-factor model. Songklanakarin J. Sci. Technol. 2021, 43, 1–6.
8. Boonklurb, R.; Duangpan, A.; Rakwongwan, U.; Sutthimat, P. A Novel Analytical Formula for the Discounted Moments of the ECIR Process and Interest Rate Swaps Pricing. Fractal Fract. 2022, 6, 58.
9. Duangpan, A.; Boonklurb, R.; Rakwongwan, U.; Sutthimat, P. Analytical Formulas Using Affine Transformation for Pricing Generalized Swaps in Commodity Markets with Stochastic Convenience Yields. Symmetry 2022, 14, 2385.
10. Rujivan, S. Valuation of volatility derivatives with time-varying volatility: An analytical probabilistic approach using a mixture distribution for pricing nonlinear payoff volatility derivatives in discrete observation case. J. Comput. Appl. Math. 2023, 418, 114672.
11. Rujivan, S.; Rakwongwan, U. Analytically pricing volatility swaps and volatility options with discrete sampling: Nonlinear payoff volatility derivatives. Commun. Nonlinear Sci. Numer. Simul. 2021, 100, 105849.
12. Rujivan, S. A closed-form formula for the conditional moments of the extended CIR process. J. Comput. Appl. Math. 2016, 297, 75–84.
13. Marchand, E. Computing the moments of a truncated noncentral chi-square distribution. J. Stat. Comput. Simul. 1996, 54, 387–391.
14. Bodenham, D.A.; Adams, N.M. A comparison of efficient approximations for a weighted sum of chi-squared random variables. Stat. Comput. 2016, 26, 917–928.
15. Castaño-Martínez, A.; López-Blázquez, F. Distribution of a sum of weighted central chi-square variables. Commun. Stat. Theory Methods 2005, 34, 515–524.
16. Castaño-Martínez, A.; López-Blázquez, F. Distribution of a sum of weighted noncentral chi-square variables. Test 2005, 14, 397–415.
17. Davis, A. A differential equation approach to linear combinations of independent chi-squares. J. Am. Stat. Assoc. 1977, 72, 212–214.
18. Duchesne, P.; De Micheaux, P.L. Computing the distribution of quadratic forms: Further comparisons between the Liu–Tang–Zhang approximation and exact methods. Comput. Stat. Data Anal. 2010, 54, 858–862.
19. Grau, D. Moments of the unbalanced non-central chi-square distribution. Stat. Probab. Lett. 2009, 79, 361–367.
20. Ha, H.T. An accurate approximation to the distribution of a linear combination of non-central chi-square random variables. REVSTAT-Stat. J. 2013, 11, 231–254.
21. Kotz, S.; Johnson, N.L.; Boyd, D. Series representations of distributions of quadratic forms in normal variables II. Non-central case. Ann. Math. Stat. 1967, 38, 838–848.
22. Koutras, M. On the generalized noncentral chi-squared distribution induced by an elliptical gamma law. Biometrika 1986, 73, 528–532.
23. Liu, H.; Tang, Y.; Zhang, H.H. A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables. Comput. Stat. Data Anal. 2009, 53, 853–856.
24. Ruben, H. Probability content of regions under spherical normal distributions, IV: The distribution of homogeneous and non-homogeneous quadratic functions of normal variables. Ann. Math. Stat. 1962, 33, 542–570.
25. Shah, B.; Khatri, C. Distribution of a definite quadratic form for non-central normal variates. Ann. Math. Stat. 1961, 32, 883–887.
26. Shah, B. Distribution of definite and of indefinite quadratic forms from a non-central normal distribution. Ann. Math. Stat. 1963, 34, 186–190.
27. Maghsoodi, Y. Solution of the extended CIR term structure and bond option valuation. Math. Financ. 1996, 6, 89–109.
28. Dufresne, D. The integrated square-root process. In Research Paper; Centre for Actuarial Studies, Department of Economics, University of Melbourne: Parkville, VIC, Australia, 2001; pp. 1–34.
29. Mirevski, S.; Boyadjiev, L. On some fractional generalizations of the Laguerre polynomials and the Kummer function. Comput. Math. Appl. 2010, 59, 1271–1277.
30. Slater, L.J. Generalized Hypergeometric Functions; Cambridge University Press: Cambridge, UK, 1966.
31. Kajihara, Y. Euler transformation formula for multiple basic hypergeometric series of type A and some applications. Adv. Math. 2004, 187, 53–97.
32. Mathai, A.M. Storage capacity of a dam with gamma type inputs. Ann. Inst. Stat. Math. 1982, 34, 591–597.
33. Alouini, M.S.; Abdi, A.; Kaveh, M. Sum of gamma variates and performance of wireless communication systems over Nakagami-fading channels. IEEE Trans. Veh. Technol. 2001, 50, 1471–1480.
34. Amari, S.V.; Misra, R.B. Closed-form expressions for distribution of sum of exponential random variables. IEEE Trans. Reliab. 1997, 46, 519–522.
35. Maxwell, J.C. Illustrations of the dynamical theory of gases—Part I. On the motions and collisions of perfectly elastic spheres. London Edinburgh Dublin Philos. Mag. J. Sci. 1860, 19, 19–32.
36. Cox, J.C.; Ingersoll, J.E.; Ross, S. A theory of the term structure of interest rates. Econometrica 1985, 53, 385–407.
37. Peng, Q.; Schellhorn, H. On the distribution of extended CIR model. Stat. Probab. Lett. 2018, 142, 23–29.
38. Chumpong, K.; Mekchay, K.; Rujivan, S. A simple closed-form formula for the conditional moments of the Ornstein-Uhlenbeck process. Songklanakarin J. Sci. Technol. 2020, 42, 836–845.
39. Sutthimat, P.; Mekchay, K. Closed-form formulas for conditional moments of inhomogeneous Pearson diffusion processes. Commun. Nonlinear Sci. Numer. Simul. 2022, 106, 106095.
40. Sutthimat, P.; Mekchay, K.; Rujivan, S. Closed-form formula for conditional moments of generalized nonlinear drift CEV process. Appl. Math. Comput. 2022, 428, 127213.
41. Sutthimat, P.; Rujivan, S.; Mekchay, K.; Rakwongwan, U. Analytical formula for conditional expectations of path-dependent product of polynomial and exponential functions of extended Cox–Ingersoll–Ross process. Res. Math. Sci. 2022, 9, 10.
42. Daniel, W.W. Applied Nonparametric Statistics; Houghton Mifflin: Boston, MA, USA, 1978.
43. Zwillinger, D.; Jeffrey, A. Table of Integrals, Series, and Products; Elsevier: Amsterdam, The Netherlands, 2007.
Figure 1. The values of the parameters set in Example 1, in which we plotted $c_j \nu_i^{(j)}$ with a scaling factor $0 < c_j < 1$ to make its scale comparable to that of the other parameters for all $j = 1, 2, 3$.
Figure 2. The PDFs of $\alpha_i^{(1)} X_i^{(1)}$, in which $X_i^{(1)} \sim \chi^2_{\nu_i^{(1)}}\big(\delta_i^{(1)}\big)$ for $i = 1, \ldots, 6$, and the values of the parameters $\alpha_i^{(1)}$, $\nu_i^{(1)}$, and $\delta_i^{(1)}$ are displayed in Figure 1a.
Figure 3. The sequences of $c_k$'s in the Laguerre expansion (3) for the PDFs of $Y_n^{(j)}$, $j = 1, 2, 3$, set in Example 1, in which the values of the parameters $\alpha_i^{(j)}$, $\nu_i^{(j)}$, and $\delta_i^{(j)}$ are displayed in Figure 1a–d.
Figure 4. The PDFs of the $Y_n^{(j)}$'s in Example 1 computed by using the Laguerre expansion (3), against the corresponding histograms obtained from MC simulations, in which the p-values based on the K-S tests are greater than 0.1 (the significance level).
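The kind of Kolmogorov–Smirnov comparison described in the caption of Figure 4 can be sketched as follows. This toy example is ours, uses a single scaled noncentral chi-square component rather than the paper's $Y_n^{(j)}$, and its parameters are arbitrary.

```python
# Toy sketch of a Kolmogorov-Smirnov check of simulated data against a model distribution.
# Here Y = alpha * X with X ~ chi^2_3(1.5); all parameter values are arbitrary examples.
import numpy as np
from scipy.stats import ncx2, kstest

rng = np.random.default_rng(2)
alpha, df, nc = 0.7, 3.0, 1.5
sample = alpha * rng.noncentral_chisquare(df, nc, 5000)
stat, pvalue = kstest(sample, lambda y: ncx2.cdf(y / alpha, df, nc))
print(stat, pvalue, pvalue > 0.1)        # fail to reject the model when pvalue > 0.1
```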
Figure 5. The variation of the approximate values of $\mathbb{E}\big[\big(Y_{11}^{(1)}\big)^{\gamma}\big]$ obtained from MC simulations in Example 2 as the number of sample paths increases, demonstrating the convergence of the MC approximations, as the number of sample paths approaches infinity, to the value computed by using our Formula (8) with $K = K_1 = 100$ terms of the infinite series.
Figure 6. The variations of the approximate values of $\mathbb{E}\big[\big(Y_{15}^{(2)}\big)^{\gamma}\big]$ and $\mathbb{E}\big[\big(Y_{20}^{(3)}\big)^{\gamma}\big]$ obtained from MC simulations in Example 2 as the number of sample paths increases, demonstrating the convergence of the MC approximations, as the number of sample paths approaches infinity, to the values computed by using our Formula (8) with $K = K_1 = 300$ and $K = K_2 = 400$ terms of the infinite series, respectively.
Figure 7. Variations of the three parameter functions (45)–(47) as well as $d(t)$, $d^{(1)}(t)$, and $\tau(t,0)$ for $t \in [0, 3]$ in Example 3.
Figure 8. The convergence of $\hat{Y}_n(t, v_0) = \sum_{i=1}^{n} \hat{\alpha}_i \hat{X}_i$ in distribution to $V_t \mid v_0$ as $n$ approaches infinity, tested by using MC simulations in Example 3, demonstrating the result (22) presented in Theorem 10, against $f_{V_t}(v, t \mid v_0)$ computed by using our explicit Formula (31).
Figure 9. The sequences of $\hat{c}_k(t, v_0)$'s computed from (32) and (33) in Example 3.
Figure 10. The graphs of $f_{V_t}(v, t \mid v_0)$ computed from our explicit Formula (31), against the corresponding histograms of the random samples generated from (19) based on MC simulations for $t = 0.03, 0.75, 1.5, 2.25, 3$, along with the p-values in parentheses obtained from the K-S tests with a significance level of 0.1 in Example 3.
Figure 11. The sequences of $U_E^{(\gamma, K)}(t \mid v_0)$ for $K = 0, \ldots, 5$, computed from our explicit Formula (38) and the explicit formula written in Equation (2.2) of Rujivan [12], against the approximate value of $U_E^{(\gamma)}(t \mid v_0)$ obtained from MC simulations based on the ECIR process (19) in Example 4 by setting $v_0 = 1$ and $t = 1$.
Figure 12. The efficiency of our explicit Formula (38) over the explicit formula written in Equation (2.2) of Rujivan [12] for calculating $U_E^{(\gamma)}(t \mid v_0)$ in Example 4 by setting $\gamma = 0.5$, $v_0 = 1$, and $t = 1$.
Table 1. The truncation errors of $\mathbb{E}\big[\big(Y_{11}^{(1)}\big)^{\gamma}\big]$, $\mathbb{E}\big[\big(Y_{15}^{(2)}\big)^{\gamma}\big]$, and $\mathbb{E}\big[\big(Y_{20}^{(3)}\big)^{\gamma}\big]$, denoted by $E_{k,(\gamma,1)}$, $E_{k,(\gamma,2)}$, and $E_{k,(\gamma,3)}$, respectively, computed in Example 2 by using (11) when $\gamma = \frac{1}{2} \notin \mathbb{N}$ and $\gamma = 1, 2 \in \mathbb{N}$ for $k = 0, \ldots, 10$.

| $k$ | $E_{k,(\gamma,1)}$ ($\gamma=\frac{1}{2}$) | $E_{k,(\gamma,2)}$ ($\gamma=\frac{1}{2}$) | $E_{k,(\gamma,3)}$ ($\gamma=\frac{1}{2}$) | $E_{k,(\gamma,1)}$ ($\gamma=1$) | $E_{k,(\gamma,2)}$ ($\gamma=1$) | $E_{k,(\gamma,3)}$ ($\gamma=1$) | $E_{k,(\gamma,1)}$ ($\gamma=2$) | $E_{k,(\gamma,2)}$ ($\gamma=2$) | $E_{k,(\gamma,3)}$ ($\gamma=2$) |
|---|---|---|---|---|---|---|---|---|---|
| 0 | $6.2 \times 10^{-1}$ | $1.2 \times 10^{0}$ | $1.6 \times 10^{0}$ | $3.5 \times 10^{0}$ | $7.6 \times 10^{0}$ | $1.2 \times 10^{1}$ | $5.6 \times 10^{1}$ | $1.6 \times 10^{2}$ | $3.8 \times 10^{2}$ |
| 1 | $6.5 \times 10^{-2}$ | $1.9 \times 10^{-1}$ | $2.8 \times 10^{-1}$ | 0 | 0 | 0 | $1.3 \times 10^{1}$ | $5.8 \times 10^{1}$ | $1.5 \times 10^{2}$ |
| 2 | $1.2 \times 10^{-2}$ | $5.5 \times 10^{-2}$ | $9.1 \times 10^{-2}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | $3.2 \times 10^{-3}$ | $2.0 \times 10^{-2}$ | $3.5 \times 10^{-2}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | $9.1 \times 10^{-4}$ | $7.7 \times 10^{-3}$ | $1.5 \times 10^{-2}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | $2.8 \times 10^{-4}$ | $3.2 \times 10^{-3}$ | $6.8 \times 10^{-3}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | $9.2 \times 10^{-5}$ | $1.4 \times 10^{-3}$ | $3.2 \times 10^{-3}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 7 | $3.1 \times 10^{-5}$ | $6.3 \times 10^{-4}$ | $1.5 \times 10^{-3}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | $1.1 \times 10^{-5}$ | $2.9 \times 10^{-4}$ | $7.7 \times 10^{-4}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 9 | $4.1 \times 10^{-6}$ | $1.3 \times 10^{-4}$ | $3.9 \times 10^{-4}$ | 0 | 0 | 0 | 0 | 0 | 0 |
| 10 | $1.6 \times 10^{-6}$ | $6.7 \times 10^{-5}$ | $1.9 \times 10^{-4}$ | 0 | 0 | 0 | 0 | 0 | 0 |