Article

A Modified Multiplicative Thinning-Based INARCH Model: Properties, Saddlepoint Maximum Likelihood Estimation, and Application

Yue Xu, Qi Li and Fukang Zhu
1 School of Mathematics, Jilin University, Changchun 130012, China
2 College of Mathematics, Changchun Normal University, Changchun 130032, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 207; https://doi.org/10.3390/e25020207
Submission received: 19 December 2022 / Revised: 15 January 2023 / Accepted: 18 January 2023 / Published: 21 January 2023
(This article belongs to the Special Issue Discrete-Valued Time Series)

Abstract

In this article, we propose a modified multiplicative thinning-based integer-valued autoregressive conditional heteroscedasticity (INARCH) model and use the saddlepoint maximum likelihood estimation (SPMLE) method to estimate its parameters. A simulation study shows the good performance of the SPMLE. An application to real data, the number of tick changes per minute of the euro to British pound exchange rate, shows the superiority of our modified model and of the SPMLE.

1. Introduction

In practice, we often observe series of integer-valued data with their own distinguishing characteristics, and many models have been proposed for modeling integer-valued time series, such as the integer-valued autoregressive (INAR) process introduced by McKenzie (1985) [1] and Al-Osh and Alzaid (1987) [2]; the integer-valued moving average process proposed by Al-Osh and Alzaid (1988) [3]; the integer-valued autoregressive moving-average model defined by McKenzie (1988) [4]; and the integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) model proposed by Ferland et al. (2006) [5], among others. Here we focus on two kinds of models. One is the INAR process, which was introduced as a convenient way to transfer the usual autoregressive structure to a discrete-valued time series; the $p$-th order model is defined as follows:
$$X_t = \sum_{i=1}^{p} \alpha_i \circ X_{t-i} + \varepsilon_t,$$
where $\alpha_i \in [0,1)$ for $i = 1, \dots, p$, and $\{\varepsilon_t\}$ is a sequence of independent and identically distributed (i.i.d.) non-negative integer-valued random variables with $E(\varepsilon_t) = \mu$ and $\mathrm{Var}(\varepsilon_t) = \sigma_{\varepsilon}^2$. The binomial thinning operator $\circ$ is defined by Steutel and Van Harn (1979) [6] as:
$$\alpha \circ X = \sum_{i=1}^{X} Y_i \ \text{ if } X > 0, \quad \text{and } \alpha \circ X = 0 \text{ otherwise},$$
where the $Y_i$ are i.i.d. Bernoulli random variables, independent of $X$, with success probability $\alpha$. This model has been generalized by Qian and Zhu (2022) [7] and Huang et al. (2023) [8], among others.
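As a concrete illustration (this sketch is ours, not part of the original paper), a binomial thinning $\alpha\circ X$ can be sampled in one step as a Binomial$(X,\alpha)$ draw, since it is a sum of $X$ i.i.d. Bernoulli$(\alpha)$ variables:

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(alpha, x):
    """Binomial thinning alpha o x: a Binomial(x, alpha) draw (0 if x == 0)."""
    return rng.binomial(n=int(x), p=alpha) if x > 0 else 0

# Example: thinning X = 10 with alpha = 0.4; E(alpha o X | X) = 4, Var = 10 * 0.4 * 0.6 = 2.4
draws = np.array([thin(0.4, 10) for _ in range(10000)])
print(draws.mean(), draws.var())
```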
The other is the INGARCH model, which was proposed by Ferland et al. (2006) [5] to model integer-valued time series that exhibit heteroscedasticity; the INGARCH$(p,q)$ model with a Poisson deviate is defined as:
$$X_t \mid \mathcal{F}_{t-1} \sim \mathcal{P}(\lambda_t), \qquad \lambda_t = \alpha_0 + \sum_{i=1}^{p}\alpha_i X_{t-i} + \sum_{j=1}^{q}\beta_j\lambda_{t-j},$$
where $\alpha_0 > 0$, $\alpha_i \ge 0$, $\beta_j \ge 0$, $i = 1,\dots,p$, $j = 1,\dots,q$, $p \ge 1$, $q \ge 0$, and $\mathcal{F}_{t-1}$ is the $\sigma$-field generated by $\{X_{t-1}, X_{t-2}, \dots\}$. This model has been generalized by Hu (2016) [9], Liu et al. (2022) [10], and Weiß et al. (2022) [11], among others; Weiß (2018) [12] and Davis et al. (2021) [13] gave recent reviews. Comparing the definitions of the INAR and INGARCH models, the INAR model is thinning-based, while the INGARCH model is specified by a conditional distribution with a time-varying mean depending on past observations. Combining thinning-based stochastic equations with the INGARCH structure, Aknouche and Scotto (2022) [14] proposed a multiplicative thinning-based INGARCH (MthINGARCH) model for integer-valued time series with high overdispersion and persistence. It fits heavy-tailed data well regardless of the choice of innovation distribution and does not require recourse to complex random coefficient equations. The MthINGARCH model is given by:
$$X_t = \lambda_t\varepsilon_t, \qquad \lambda_t = 1 + \omega\circ m + \sum_{i=1}^{q}\alpha_i\circ X_{t-i} + \sum_{j=1}^{p}\beta_j\circ\lambda_{t-j}, \tag{1}$$
where the symbol $\circ$ stands for the binomial thinning operator, $0 \le \omega \le 1$, $0 \le \alpha_i < 1$, and $0 \le \beta_j < 1$ $(i = 1,\dots,q,\ j = 1,\dots,p)$, and $m$ is a fixed positive integer introduced for extra flexibility. Since there is no explicit probability mass function for the series $\{X_t\}$, the traditional maximum likelihood estimation (MLE) cannot be applied to estimate the parameters; therefore, Aknouche and Scotto (2022) [14] used a two-stage weighted least squares estimation instead.
Note that, in some cases, the probability mass function of the random variables cannot be given directly for the likelihood function; to solve this problem, the saddlepoint approximation has been proposed. Daniels (1954) [15] introduced saddlepoint techniques into statistics, and they were further developed by Field and Ronchetti (1990) [16], Jensen (1995) [17], and Butler (2007) [18]. Saddlepoint techniques have been used successfully in many applications because of the high accuracy with which they approximate intractable densities and tail probabilities. Pedeli et al. (2015) [19] proposed an alternative approach based on the saddlepoint approximation to the log-likelihood, and the saddlepoint maximum likelihood estimation (SPMLE) was used to estimate the parameters of the INAR model, which demonstrates the usefulness of this technique. Thus, by combining the MthINGARCH model of Aknouche and Scotto (2022) [14] and the saddlepoint approximation, we propose a modified multiplicative thinning-based INARCH model for modeling high overdispersion, and apply the saddlepoint method to estimate the parameters. Although the two-stage weighted least squares estimation could be used to estimate the parameters of our modified model, we adopt the SPMLE because it is expected to perform better in practice. Here, we consider only the INARCH model rather than the INGARCH model because it is difficult and complex to give the conditional cumulant-generating function of the random variables for the latter when applying the saddlepoint approximation.
This article has the following structure. The modified multiplicative thinning-based INARCH model and some related properties are given in Section 2; there, we use the Poisson distribution and the geometric distribution for the innovations. Section 3 discusses the SPMLE and its asymptotic properties, and simulation studies for both models with the SPMLE are presented. A real data example is analyzed with our modified models in Section 4, and comparisons with existing models are made; in-sample and out-of-sample forecasts are used to show the superiority of the SPMLE and of our modified model. The conclusion is given in Section 5. Some details of the SPMLE and proofs of the theorems are presented in Appendix A.

2. A Multiplicative Thinning-Based INARCH Model

Let $\mathbb{N} = \{0, 1, 2, \dots\}$ and $\mathbb{Z} = \{\dots, -1, 0, 1, \dots\}$ denote the sets of non-negative integers and integers, respectively. Suppose that $\{\varepsilon_t, t \in \mathbb{Z}\}$ is a sequence of i.i.d. random variables with mean one and finite variance $\sigma^2$. The modified multiplicative thinning-based INARCH model (denoted by MthINARCH$(q)$) that we deal with in this paper is defined by
$$X_t = \lambda_t\varepsilon_t, \qquad \lambda_t = \omega\circ m + \sum_{i=1}^{q}\alpha_i\circ X_{t-i}, \tag{2}$$
where $0 < \omega \le 1$, $0 \le \alpha_i < 1$, $i = 1,\dots,q$, and $m$ is a fixed positive integer. In real applications, we can set $m$ as the upper integer part of the sample mean. It is assumed that the Bernoulli counting series underlying the binomial variables $\omega\circ m$ and $\alpha_i\circ X_{t-i}$ are mutually independent and independent of the sequence $\{\varepsilon_t, t\in\mathbb{Z}\}$. The reason we define the new model in this way is the following: the additive term 1 in $\lambda_t$ in (1) is unnatural and is imposed only to ensure $\lambda_t > 0$, but the same can be achieved by adjusting the range of $\omega$; therefore, we adopt the simpler version of $\lambda_t$ in (2).
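To make the data-generating mechanism of (2) concrete, the following is a small simulation sketch (our own illustration with arbitrary parameter values, not code from the paper): $\lambda_t$ is built from independent binomial thinnings of $m$ and of the past counts, and $X_t = \lambda_t\varepsilon_t$ with $\varepsilon_t \sim \mathcal{P}(1)$, i.e., the Poisson case discussed below.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_pmthinarch(n, omega, alphas, m, burn=200):
    """Simulate X_t = lambda_t * eps_t with lambda_t = omega o m + sum_i alpha_i o X_{t-i},
    eps_t ~ Poisson(1).  Each thinning a o x is drawn as Binomial(x, a)."""
    q = len(alphas)
    x = np.zeros(n + burn + q, dtype=int)
    for t in range(q, len(x)):
        lam = rng.binomial(m, omega)
        for i, a in enumerate(alphas, start=1):
            if x[t - i] > 0:
                lam += rng.binomial(x[t - i], a)
        x[t] = lam * rng.poisson(1.0)
    return x[q + burn:]

x = simulate_pmthinarch(500, omega=0.65, alphas=[0.4, 0.4], m=3)
print(x.mean(), x.var())   # the variance is well above the mean (overdispersion)
```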
We now discuss the conditional mean and conditional variance of $X_t$. Recall that $\mathcal{F}_{t-1}$ is the $\sigma$-field generated by $X_{t-1}, X_{t-2}, \dots$. Since $E(\varepsilon_t) = 1$, let $\mu_t := E(X_t\mid\mathcal{F}_{t-1}) = E(\lambda_t\varepsilon_t\mid\mathcal{F}_{t-1}) = E(\varepsilon_t)E(\lambda_t\mid\mathcal{F}_{t-1}) = E(\lambda_t\mid\mathcal{F}_{t-1}) = \omega m + \sum_{i=1}^{q}\alpha_i X_{t-i}$. For the conditional variance, let $\nu_t := \mathrm{Var}(\lambda_t\mid\mathcal{F}_{t-1})$ and $\sigma_t^2 := \mathrm{Var}(X_t\mid\mathcal{F}_{t-1})$. Since $E(\varepsilon_t) = 1$ and $\mathrm{Var}(\varepsilon_t) = \sigma^2$, we have $E(\varepsilon_t^2) = \sigma^2 + 1$. Therefore,
$$\nu_t := \mathrm{Var}(\lambda_t\mid\mathcal{F}_{t-1}) = \omega(1-\omega)m + \sum_{i=1}^{q}\alpha_i(1-\alpha_i)X_{t-i},$$
$$\begin{aligned}
\sigma_t^2 := \mathrm{Var}(X_t\mid\mathcal{F}_{t-1}) &= E(X_t^2\mid\mathcal{F}_{t-1}) - [E(X_t\mid\mathcal{F}_{t-1})]^2 = E(\lambda_t^2\mid\mathcal{F}_{t-1})E(\varepsilon_t^2) - \mu_t^2\\
&= \big[\mathrm{Var}(\lambda_t\mid\mathcal{F}_{t-1}) + (E(\lambda_t\mid\mathcal{F}_{t-1}))^2\big]E(\varepsilon_t^2) - \mu_t^2 = (\sigma^2+1)(\nu_t+\mu_t^2) - \mu_t^2 = (\sigma^2+1)\nu_t + \sigma^2\mu_t^2.
\end{aligned}$$
Proposition 1.
The necessary and sufficient condition for the first-order stationarity of $X_t$ defined in (2) is that all roots of $1 - \sum_{i=1}^{q}\alpha_i z^i = 0$ lie outside the unit circle.
Proposition 2.
The necessary and sufficient condition for the second-order stationarity of $X_t$ defined in (2) is $(\sigma^2+1)\sum_{i=1}^{q}\alpha_i^2 < 1$.
Proofs of Propositions 1 and 2 are similar to the proofs of Theorems 2.1 and 2.2 in Aknouche and Scotto (2022) [14], so we omit the details.
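As a quick numerical check of these conditions (our own worked example, using a parameter combination that appears in Section 3.3), take Poisson innovations ($\sigma^2 = 1$) and A1 $= (0.65, 0.4, 0.4)^{T}$: then $\sum_{i=1}^{2}\alpha_i = 0.8 < 1$, so the root condition of Proposition 1 holds, and $(\sigma^2+1)\sum_{i=1}^{2}\alpha_i^2 = 2(0.4^2 + 0.4^2) = 0.64 < 1$, so the second-order stationarity condition of Proposition 2 also holds.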
For convenience, we need to specify the distribution of $\{\varepsilon_t\}$ in (2). First, let $\varepsilon_t \sim \mathcal{P}(1)$; then $E(\varepsilon_t) = \mathrm{Var}(\varepsilon_t) = 1$, and this model is denoted by PMthINARCH$(q)$. It is easy to obtain
$$\mu_t = \omega m + \sum_{i=1}^{q}\alpha_i X_{t-i}, \qquad \sigma_t^2 = 2\nu_t + \mu_t^2.$$
Second, let $\varepsilon_t \sim \mathrm{Ge}(p^*)$. The mean of $\varepsilon_t$ is $(1-p^*)/p^* = 1$, so $p^* = 0.5$ and the variance is $\mathrm{Var}(\varepsilon_t) = 2$. This model is denoted by GMthINARCH$(q)$, and we have
$$\mu_t = \omega m + \sum_{i=1}^{q}\alpha_i X_{t-i}, \qquad \sigma_t^2 = 3\nu_t + 2\mu_t^2.$$
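These conditional-moment formulas can be verified by a small Monte Carlo experiment (our own illustration, not from the paper): fix the past counts, draw $\lambda_t$ and $\varepsilon_t$ many times, and compare the empirical conditional mean and variance of $X_t$ with $\mu_t$ and $\sigma_t^2 = 2\nu_t + \mu_t^2$ in the Poisson case.

```python
import numpy as np

rng = np.random.default_rng(0)

omega, alphas, m = 0.65, np.array([0.4, 0.4]), 3
x_past = np.array([5, 2])                       # fixed X_{t-1}, X_{t-2}

mu = omega * m + np.sum(alphas * x_past)        # conditional mean
nu = omega * (1 - omega) * m + np.sum(alphas * (1 - alphas) * x_past)
sigma2 = 2 * nu + mu**2                         # Poisson innovations: sigma^2 = 1

reps = 200_000
lam = rng.binomial(m, omega, reps) + sum(rng.binomial(xp, a, reps)
                                         for xp, a in zip(x_past, alphas))
x_t = lam * rng.poisson(1.0, reps)
print(mu, x_t.mean())     # empirical mean close to mu
print(sigma2, x_t.var())  # empirical variance close to 2*nu + mu^2
```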

3. Parameter Estimation

In this section, we will consider the SPMLE and its asymptotic properties, and a simulation study will be conducted to assess the performance of this estimator.

3.1. Saddlepoint Maximum Likelihood Estimation

Let $\theta = (\omega, \alpha_1, \dots, \alpha_q)^{T}$ be the unknown parameter vector. Note that, given the conditions on $\varepsilon_t$, $\sigma^2$ is not an unknown parameter. The maximum likelihood estimator of $\theta$ would be obtained by maximizing the conditional log-likelihood function
$$\ell(\theta) = \sum_{t=1}^{n}\log P(X_t = x_t \mid X_{t-1} = x_{t-1}, \dots, X_{t-q} = x_{t-q}), \tag{3}$$
giving $\hat{\theta} = \arg\max_{\theta}\ell(\theta)$. However, this procedure is difficult to implement because the likelihood function is intractable due to the thinning operations.
We now turn to the SPMLE. The conditional moment-generating function of $X_t$ is
$$\begin{aligned}
E\big(e^{uX_t}\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}\big) &= E\big(e^{u\lambda_t\varepsilon_t}\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}\big)\\
&= E\big(e^{u(\omega\circ m + \sum_{i=1}^{q}\alpha_i\circ X_{t-i})\varepsilon_t}\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}\big)\\
&= E\big(e^{u(\omega\circ m)\varepsilon_t}\big)\prod_{i=1}^{q}E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big).
\end{aligned}$$
Remark 1.
Here we consider the INARCH model instead of the INGARCH model because, for the INGARCH model, the conditional cumulant-generating function of $X_t$ would require $E\big(e^{uX_t}\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}\big) = E\big(e^{u(\omega\circ m + \sum_{i=1}^{q}\alpha_i\circ X_{t-i} + \sum_{j=1}^{p}\beta_j\circ\lambda_{t-j})\varepsilon_t}\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}\big)$. Since $X_t$ and $\lambda_t$ are correlated, it is difficult and complex to obtain this conditional cumulant-generating function.
Using the binomial theorem $(a+b)^n = \sum_{k=0}^{n}C_n^k a^{n-k}b^k$, we have
$$E\big(e^{u(\omega\circ m)\varepsilon_t}\big) = E\big[E\big(e^{u(\omega\circ m)\varepsilon_t}\mid\varepsilon_t\big)\big] = E\big[\big(\omega e^{u\varepsilon_t} + (1-\omega)\big)^m\big] = E\Big[\sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}e^{u(m-r)\varepsilon_t}\Big] = \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}E\big(e^{u(m-r)\varepsilon_t}\big).$$
Similarly, we also have
$$E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big) = \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}E\big(e^{u(x_{t-i}-r)\varepsilon_t}\big).$$
Therefore, for the PMthINARCH ( q ) model, we have
$$E\big(e^{u(\omega\circ m)\varepsilon_t}\big) = \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}e^{e^{u(m-r)}-1}, \qquad E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big) = \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}e^{e^{u(x_{t-i}-r)}-1},$$
while for the GMthINARCH ( q ) model, we have
$$E\big(e^{u(\omega\circ m)\varepsilon_t}\big) = \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}\frac{1}{2-e^{u(m-r)}}, \qquad E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big) = \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}\frac{1}{2-e^{u(x_{t-i}-r)}}.$$
Thus the conditional cumulant-generating function of X t is:
$$K_t(u) = \log\big[E\big(e^{uX_t}\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}\big)\big] = \log E\big(e^{u(\omega\circ m)\varepsilon_t}\big) + \sum_{i=1}^{q}\log E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big).$$
A highly accurate approximation to the conditional mass function of X t at x t is provided by the saddlepoint approximation:
$$\tilde{f}_{X_t\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}}(x_t) = \big[2\pi K_t''(\tilde{u}_t)\big]^{-1/2}\exp\{K_t(\tilde{u}_t) - \tilde{u}_t x_t\}, \tag{4}$$
where $\tilde{u}_t$ is the unique value of $u$ satisfying the saddlepoint equation $K_t'(u) = x_t$, and $K_t'$ and $K_t''$ denote the first and second derivatives of $K_t$ with respect to $u$. Since the saddlepoint equation $K_t'(u) = x_t$ is difficult to solve analytically, we follow Pedeli et al. (2015) [19] and use the Newton–Raphson method to solve it numerically.
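The sketch below (our own illustration with arbitrary parameter values, not the authors' code) implements the conditional cumulant-generating function for the PMthINARCH case, solves $K_t'(u) = x_t$ by Newton–Raphson, and evaluates the approximation (4). For brevity, $K_t'$ and $K_t''$ are obtained by finite differences; the closed forms of Appendix A.1 could be used instead.

```python
import numpy as np
from scipy.special import comb

def log_mix_mgf_poisson(u, prob, n_bin):
    """log E[exp(u * (prob o n_bin) * eps)] with eps ~ Poisson(1):
    a Binomial(n_bin, prob) mixture of exp(exp(u*k) - 1) terms."""
    k = np.arange(n_bin + 1)
    w = comb(n_bin, k) * prob**k * (1 - prob)**(n_bin - k)
    return np.log(np.sum(w * np.exp(np.exp(u * k) - 1.0)))

def K_t(u, omega, alphas, m, x_past):
    val = log_mix_mgf_poisson(u, omega, m)
    for a, xp in zip(alphas, x_past):
        val += log_mix_mgf_poisson(u, a, int(xp))
    return val

def saddlepoint_logpmf(x_t, omega, alphas, m, x_past, h=1e-5, tol=1e-10):
    K = lambda u: K_t(u, omega, alphas, m, x_past)
    K1 = lambda u: (K(u + h) - K(u - h)) / (2 * h)           # K_t'
    K2 = lambda u: (K(u + h) - 2 * K(u) + K(u - h)) / h**2   # K_t''
    u = 0.0
    for _ in range(100):                                     # Newton-Raphson for K_t'(u) = x_t
        step = (K1(u) - x_t) / K2(u)
        u -= step
        if abs(step) < tol:
            break
    return -0.5 * np.log(2 * np.pi * K2(u)) + K(u) - u * x_t

print(saddlepoint_logpmf(4, omega=0.65, alphas=[0.4, 0.4], m=3, x_past=[5, 2]))
```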
The log-likelihood function (3) can be approximated by summing the logarithms of the corresponding density approximations (4), yielding:
$$\tilde{L}_n(\theta) = \sum_{t=1}^{n}\tilde{\ell}_t(\theta) := \sum_{t=1}^{n}\log\tilde{f}_{X_t\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}}(x_t). \tag{5}$$
The value θ maximizing this expression is called the saddlepoint maximum likelihood estimator (SPMLE).

3.2. Asymptotic Properties of the SPMLE

We now discuss the asymptotic properties of the SPMLE. First, the first-order Taylor expansion of $K_t'(u)$ at $u = 0$ yields
$$K_t'(u) = K_t'(0) + uK_t''(0) + o(u) = \mu_t(\theta) + u\sigma_t^2(\theta) + o(u), \tag{6}$$
where $\mu_t(\theta)$ and $\sigma_t^2(\theta)$ are the conditional mean and conditional variance of $X_t$. Since $\tilde{u}_t$ is determined by $K_t'(\tilde{u}_t) = x_t$, combining this with the Taylor expansion (6) gives
$$\tilde{u}_t = \frac{x_t - \mu_t(\theta)}{\sigma_t^2(\theta)} + o(1), \qquad t = q+1,\dots,n. \tag{7}$$
Then, we can obtain the second-order Taylor expansion of K t ( u ) at u = 0 , which is:
$$K_t(u) \approx uK_t'(0) + \frac{u^2}{2}K_t''(0) = u\mu_t(\theta) + \frac{u^2}{2}\sigma_t^2(\theta). \tag{8}$$
Focusing on the exponent of the saddlepoint approximation (4), Equation (8) gives
$$K_t(u) - ux_t \approx u\big(\mu_t(\theta) - x_t\big) + \frac{u^2}{2}\sigma_t^2(\theta).$$
Then using Equation (7), we have
$$K_t(\tilde{u}_t) - \tilde{u}_t x_t \approx -\frac{[x_t - \mu_t(\theta)]^2}{2\sigma_t^2(\theta)}. \tag{9}$$
Hence, we can derive from (8) and (9) that the first-order saddlepoint approximation to the conditional probability mass function is approximately:
$$\tilde{f}_{X_t\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}}(x_t) = \big[2\pi K_t''(\tilde{u}_t)\big]^{-1/2}\exp\left\{-\frac{\big(x_t - \omega m - \sum_{i=1}^{q}\alpha_i x_{t-i}\big)^2}{2\Big[(\sigma^2+1)\big(\omega(1-\omega)m + \sum_{i=1}^{q}\alpha_i(1-\alpha_i)x_{t-i}\big) + \sigma^2\big(\omega m + \sum_{i=1}^{q}\alpha_i x_{t-i}\big)^2\Big]}\right\}.$$
Therefore, $\tilde{L}_n(\theta) = \sum_{t=1}^{n}\tilde{\ell}_t(\theta) = \sum_{t=1}^{n}\log\tilde{f}_{X_t\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}}(x_t)$ is the quasi-likelihood function for the estimation of $\theta$. To establish its large-sample properties, we consider
$$L_n(\theta) = \sum_{t=1}^{n}\ell_t(\theta) = \sum_{t=1}^{n}\log f_{X_t\mid X_{t-1}=x_{t-1},\dots,X_{t-q}=x_{t-q}}(x_t),$$
which is the ergodic approximation of $\tilde{L}_n(\theta)$. The first and second derivatives of the quasi-likelihood function are given in Appendix A. The strong consistency and asymptotic normality of the SPMLE $\hat{\theta}_n$ are established in the following theorems.
First of all, the assumptions for Theorems 1 and 2 are listed as follows.
Assumption 1.
The solution of the MthINARCH process is strictly stationary and ergodic.
Assumption 2.
$\Theta$ is compact and $\theta_0 \in \Theta^{\circ}$, where $\Theta^{\circ}$ denotes the interior of $\Theta$. For technical reasons, we assume lower and upper bounds for each component of the parameter: $0 < \omega_L \le \omega \le \omega_U \le 1$ and $0 \le \alpha_L \le \alpha_i \le \alpha_U < 1$, $i = 1,\dots,q$.
Theorem 1.
Let $\{\hat{\theta}_n\}$ be a sequence of SPMLEs satisfying $\hat{\theta}_n = \arg\max_{\theta\in\Theta}\tilde{L}_n(\theta)$. Then, under Assumptions 1 and 2, $\hat{\theta}_n$ converges to $\theta_0$ almost surely as $n\to\infty$.
Theorem 2.
Under Assumptions 1 and 2, there exists a sequence of maximizers $\hat{\theta}_n$ of $\tilde{L}_n(\theta)$ such that, as $n\to\infty$,
$$\sqrt{n}\big(\hat{\theta}_n - \theta_0\big) \xrightarrow{d} N\big(0, \Sigma^{-1}\big),$$
where
$$\Sigma = -E_{\theta_0}\left[\frac{\partial^2\ell_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right],$$
and $\Sigma$ is positive definite.

3.3. Simulation Study

In this section, simulation studies of the PMthINARCH$(q)$ and GMthINARCH$(q)$ models with finite sample sizes are presented, where $q = 2$. Several parameter combinations are used to assess the performance of the SPMLE, and the mean absolute deviation error (MADE), $\frac{1}{s}\sum_{j=1}^{s}|\hat{\theta}_j - \theta_j|$, is used as the evaluation criterion, where $s$ is the number of replications. The sample sizes are $n = 100, 200, 500$, and the number of replications is $s = 200$. We used the following combinations of $(\omega, \alpha_1, \alpha_2)^{T}$ as the true values to generate the random samples: A1 $= (0.65, 0.4, 0.4)^{T}$ and A2 $= (0.9, 0.5, 0.3)^{T}$ for the PMthINARCH$(2)$ model, and B1 $= (0.8, 0.4, 0.4)^{T}$ and B2 $= (0.65, 0.3, 0.5)^{T}$ for the GMthINARCH$(2)$ model. Table 1 and Table 2 show the results of these simulations. As the sample size becomes larger, the MADEs become smaller and the estimates approach the true values. Therefore, the SPMLE performs well.
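For readers who wish to reproduce an experiment of this kind, the sketch below (our own illustration, not the authors' code) simulates PMthINARCH$(2)$ samples and estimates $\theta = (\omega, \alpha_1, \alpha_2)^{T}$ by maximizing the Gaussian-type quasi-likelihood built from $\mu_t(\theta)$ and $\sigma_t^2(\theta)$, i.e., the first-order approximation above, rather than the exact saddlepoint likelihood, which keeps the example short; the number of replications is also kept small for speed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def simulate(n, omega, alphas, m, burn=200):
    """Simulate a PMthINARCH(q) path (Poisson innovations with mean one)."""
    q = len(alphas)
    x = np.zeros(n + burn + q, dtype=int)
    for t in range(q, len(x)):
        lam = rng.binomial(m, omega) + sum(rng.binomial(x[t - i - 1], a)
                                           for i, a in enumerate(alphas) if x[t - i - 1] > 0)
        x[t] = lam * rng.poisson(1.0)
    return x[q + burn:]

def neg_quasi_loglik(theta, x, m, q, sigma2_eps=1.0):
    """Negative Gaussian-type quasi-log-likelihood based on mu_t(theta) and sigma_t^2(theta)."""
    omega, alphas = theta[0], theta[1:]
    ll = 0.0
    for t in range(q, len(x)):
        xp = x[t - q:t][::-1]                      # X_{t-1}, ..., X_{t-q}
        mu = omega * m + np.dot(alphas, xp)
        nu = omega * (1 - omega) * m + np.dot(alphas * (1 - alphas), xp)
        s2 = (sigma2_eps + 1) * nu + sigma2_eps * mu**2
        ll += -0.5 * np.log(s2) - (x[t] - mu)**2 / (2 * s2)
    return -ll

true = np.array([0.65, 0.4, 0.4])
m, q, s = 3, 2, 20                                  # s replications (small, for speed)
est = np.zeros((s, 3))
for j in range(s):
    x = simulate(200, true[0], true[1:], m)
    res = minimize(neg_quasi_loglik, x0=[0.5, 0.2, 0.2], args=(x, m, q),
                   bounds=[(0.01, 1.0), (0.0, 0.99), (0.0, 0.99)], method="L-BFGS-B")
    est[j] = res.x
print("MADE:", np.abs(est - true).mean(axis=0))
```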

4. A Real Example

Here, we consider the number of tick changes per minute of the euro to British pound exchange rate (ExRate for short) on 12 December from 9:00 a.m. to 9:00 p.m. The dataset is available at http://www.histdata.com/ (accessed on 17 January 2023). The series comprises 720 observations with a sample mean of 13.2153 and a sample variance of 224.2498. The sample variance is much larger than the sample mean, which indicates high overdispersion; this overdispersion can also be seen in Figure 1a. Figure 1b,c show the autocorrelation function (ACF) and the partial autocorrelation function (PACF), which indicate that the tick changes are correlated.
We analyzed the data using the PMthINARCH$(3)$, GMthINARCH$(3)$, Poisson INAR$(3)$ (denoted by PINAR$(3)$ for short), and INARCH$(3)$ models. The Poisson INAR model is discussed in Pedeli et al. (2015) [19], and the SPMLE was used to estimate its parameters; the innovations in the PINAR model were assumed to be Poisson with mean one. The INARCH model with a Poisson deviate was proposed by Ferland et al. (2006) [5], and the MLE was used to estimate its parameters. According to Aknouche and Scotto (2022) [14], in real applications we can set $m$ as the upper integer part of the sample mean; here the sample mean is 13.2153, so $m$ is set to 14. Table 3 gives the parameter estimates and the values of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). According to Table 3, the AIC and BIC values of PMthINARCH$(3)$ and GMthINARCH$(3)$ are smaller than those of the PINAR$(3)$ and INARCH$(3)$ models, and the AIC and BIC values of INARCH$(3)$ are smaller than those of PINAR$(3)$. Moreover, the AIC and BIC values of PMthINARCH$(3)$ are smaller than those of GMthINARCH$(3)$. In summary, the INARCH model performed better than the PINAR model, while the PMthINARCH and GMthINARCH models performed better than both the PINAR and INARCH models.
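For reference, the AIC and BIC in Table 3 are of the standard form computed from a maximized log-likelihood $\hat{\ell}$, the number $k$ of free parameters, and the sample size $n$; the tiny sketch below is our own reminder of these formulas, and the numerical values in the usage line are hypothetical.

```python
import numpy as np

def aic_bic(loglik, k, n):
    """Akaike and Bayesian information criteria from a maximized log-likelihood."""
    return -2 * loglik + 2 * k, -2 * loglik + k * np.log(n)

# Hypothetical example: loglik = -693.6, k = 4 parameters, n = 720 observations
print(aic_bic(-693.6, 4, 720))
```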
According to Aknouche and Scotto (2022) [14], the two-stage weighted least squares estimation (2SWLSE) was used to estimate the parameters of the MthINGARCH model. Therefore, to compare the performance of the 2SWLSE and the SPMLE, and the performance of the PMthINARCH, GMthINARCH, and PINAR models, we consider the in-sample and out-of-sample forecasts of the two estimation methods and the three models, respectively. First, we consider the in-sample forecasts: we use all of the observations to estimate the model and then forecast the last 10 observations (711–720), the last 15 observations (706–720), and the last 20 observations (701–720); these three time horizons are denoted by C1, C2, and C3, respectively. Similarly, for the out-of-sample forecasts we divide the observations into three splits: the first uses observations 1–710 to forecast 711–720, the second uses 1–705 to forecast 706–720, and the third uses 1–700 to forecast 701–720; these are denoted by D1, D2, and D3, respectively.
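One natural way to form the point forecasts used in such comparisons is the one-step-ahead conditional mean $\hat{X}_t = \hat{\omega}m + \sum_{i=1}^{q}\hat{\alpha}_i X_{t-i}$; the sketch below is our own illustration of computing a forecast MADE over a chosen horizon, using hypothetical parameter values and a placeholder series rather than the ExRate data.

```python
import numpy as np

def one_step_forecasts(x, omega, alphas, m):
    """One-step-ahead conditional-mean forecasts mu_t = omega*m + sum_i alpha_i * x_{t-i}."""
    q = len(alphas)
    return np.array([omega * m + np.dot(alphas, x[t - q:t][::-1])
                     for t in range(q, len(x))])

def forecast_made(x, omega, alphas, m, horizon=10):
    q = len(alphas)
    preds = one_step_forecasts(x, omega, alphas, m)
    return np.mean(np.abs(x[q:][-horizon:] - preds[-horizon:]))

# Placeholder series (not the ExRate data) and hypothetical estimates
x = np.random.default_rng(3).poisson(13, size=720)
print(forecast_made(x, omega=0.32, alphas=np.array([0.52, 0.19, 0.08]), m=14, horizon=10))
```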
We illustrate the performance of the considered models by comparing the MADEs of each forecast. The MADEs of the in-sample and out-of-sample forecasts for the three models with the SPMLE are shown in Table 4. The MADEs of the in-sample and out-of-sample forecasts for the PMthINARCH model with the 2SWLSE and the SPMLE are shown in Table 5, and those for the GMthINARCH model are shown in Table 6. According to Table 4, the MADEs of PMthINARCH$(3)$ and GMthINARCH$(3)$ are smaller than those of PINAR$(3)$; Table 5 and Table 6 show that the MADEs of PMthINARCH$(3)$ and GMthINARCH$(3)$ with the SPMLE are smaller than those with the 2SWLSE; meanwhile, in all three tables, the MADEs of the in-sample forecasts are smaller than those of the out-of-sample forecasts. In summary, the PMthINARCH and GMthINARCH models were superior to the PINAR model in modeling this real data set, and the PMthINARCH model performed better than the GMthINARCH model. Meanwhile, the SPMLE performed better than the 2SWLSE for the MthINARCH models.

5. Conclusions

In this paper, we modified a multiplicative thinning-based INARCH model. The probability mass function of the observations is approximated by the saddlepoint method; we used the SPMLE to estimate the parameters and obtained the asymptotic distribution of the SPMLE. Moreover, to show the advantages of the MthINARCH models and of the SPMLE, we used the PMthINARCH$(q)$ and GMthINARCH$(q)$ processes for discussion and comparison. The SPMLE performs well in the simulation studies. A real dataset indicates that the PMthINARCH and GMthINARCH models are able to describe overdispersed integer-valued data, and the real data example shows a superior performance of the MthINARCH models compared with the PINAR and INARCH models. In addition, the results also show a superior performance of the SPMLE compared with the 2SWLSE.
Several aspects deserve further research. Here we used the Poisson and geometric distributions for $\varepsilon_t$; the negative binomial distribution or some zero-inflated distributions could be used as well. Moreover, we only considered the INARCH model, so the corresponding INGARCH model should be considered in future work.

Author Contributions

Conceptualization, F.Z.; methodology, Y.X.; software, Y.X. and Q.L.; validation, Y.X. and Q.L.; formal analysis, Y.X. and Q.L.; investigation, Y.X. and F.Z.; resources, Q.L.; data curation, Y.X. and Q.L.; writing—original draft preparation, Y.X., Q.L. and F.Z.; writing—review and editing, Y.X., Q.L. and F.Z.; visualization, Y.X.; supervision, F.Z.; project administration, F.Z.; funding acquisition, Q.L. and F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Li’s work is supported by the National Natural Science Foundation of China (No. 12201069), the Natural Science Foundation of Jilin Province (No. 20210101160JC), the Science and Technology Research Project of Education Bureau of Jilin Province (No. JJKH20220820KJ), and Natural Science Foundation Projects of CCNU (CSJJ2022006ZK). Zhu’s work is supported by the National Natural Science Foundation of China (No. 12271206) and the Natural Science Foundation of Jilin Province (No. 20210101143JC).

Data Availability Statement

The dataset is available at the website http://www.histdata.com/ (accessed on 17 January 2023).

Acknowledgments

The authors are very grateful to three reviewers for their constructive suggestions and comments, leading to a substantial improvement in the presentation and contents.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Details of SPMLE

Here, we give the derivatives of $K_t(u)$ mentioned in Section 3.1 for the PMthINARCH$(q)$ and GMthINARCH$(q)$ models. First, consider $K_t'(u)$ and $K_t''(u)$ for PMthINARCH$(q)$. From Section 3.1, we have
$$K_t(u) = \log E\big(e^{u(\omega\circ m)\varepsilon_t}\big) + \sum_{i=1}^{q}\log E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big) = \log a_1 + \sum_{i=1}^{q}\log b_1,$$
so the derivatives of K t ( u ) are given by
$$K_t'(u) = \frac{c_1}{a_1} + \sum_{i=1}^{q}\frac{d_1}{b_1}, \qquad K_t''(u) = \frac{e_1 a_1 - c_1^2}{a_1^2} + \sum_{i=1}^{q}\frac{f_1 b_1 - d_1^2}{b_1^2},$$
where
$$\begin{aligned}
a_1 &= \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}e^{e^{u(m-r)}-1}, & b_1 &= \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}e^{e^{u(x_{t-i}-r)}-1},\\
c_1 &= \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}(m-r)e^{u(m-r)}e^{e^{u(m-r)}-1}, & d_1 &= \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}(x_{t-i}-r)e^{u(x_{t-i}-r)}e^{e^{u(x_{t-i}-r)}-1},\\
e_1 &= \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}(m-r)^2e^{u(m-r)}e^{e^{u(m-r)}-1}\big[1+e^{u(m-r)}\big], & f_1 &= \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}(x_{t-i}-r)^2e^{u(x_{t-i}-r)}e^{e^{u(x_{t-i}-r)}-1}\big[1+e^{u(x_{t-i}-r)}\big].
\end{aligned}$$
Next, we give $K_t'(u)$ and $K_t''(u)$ for GMthINARCH$(q)$. From Section 3.1, we have
$$K_t(u) = \log E\big(e^{u(\omega\circ m)\varepsilon_t}\big) + \sum_{i=1}^{q}\log E\big(e^{u(\alpha_i\circ x_{t-i})\varepsilon_t}\big) = \log a_2 + \sum_{i=1}^{q}\log b_2,$$
so the derivatives of K t ( u ) are given by
$$K_t'(u) = \frac{c_2}{a_2} + \sum_{i=1}^{q}\frac{d_2}{b_2}, \qquad K_t''(u) = \frac{e_2 a_2 - c_2^2}{a_2^2} + \sum_{i=1}^{q}\frac{f_2 b_2 - d_2^2}{b_2^2},$$
where
$$\begin{aligned}
a_2 &= \sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}\frac{1}{2-e^{u(m-r)}}, & b_2 &= \sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}\frac{1}{2-e^{u(x_{t-i}-r)}},\\
c_2 &= \frac{1}{4}\sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}(m-r)e^{u(m-r)}\Big[1-\tfrac{1}{2}e^{u(m-r)}\Big]^{-2}, & d_2 &= \frac{1}{4}\sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}(x_{t-i}-r)e^{u(x_{t-i}-r)}\Big[1-\tfrac{1}{2}e^{u(x_{t-i}-r)}\Big]^{-2},\\
e_2 &= \frac{1}{4}\sum_{r=0}^{m}C_m^r(1-\omega)^r\omega^{m-r}(m-r)^2e^{u(m-r)}\frac{1+\tfrac{1}{2}e^{u(m-r)}}{\big[1-\tfrac{1}{2}e^{u(m-r)}\big]^{3}}, & f_2 &= \frac{1}{4}\sum_{r=0}^{x_{t-i}}C_{x_{t-i}}^r(1-\alpha_i)^r\alpha_i^{x_{t-i}-r}(x_{t-i}-r)^2e^{u(x_{t-i}-r)}\frac{1+\tfrac{1}{2}e^{u(x_{t-i}-r)}}{\big[1-\tfrac{1}{2}e^{u(x_{t-i}-r)}\big]^{3}}.
\end{aligned}$$

Appendix A.2. Derivatives of the Quasi-Likelihood Function

The conditional log-quasi-likelihood contribution $\ell_t(\theta)$ is continuous on $\Theta$: for $1 \le t \le n$,
$$\frac{\partial\ell_t(\theta)}{\partial\theta} = m_1\frac{\partial\mu_t(\theta)}{\partial\theta} + m_2\frac{\partial\sigma_t^2(\theta)}{\partial\theta}, \qquad \frac{\partial^2\ell_t(\theta)}{\partial\theta\,\partial\theta^{T}} = (m_1 - m_3)\frac{\partial^2\mu_t(\theta)}{\partial\theta\,\partial\theta^{T}} - 2m_1m_3\frac{\partial\mu_t(\theta)}{\partial\theta}\frac{\partial\sigma_t^2(\theta)}{\partial\theta^{T}} + \big(m_2 + m_3^2 - 2m_1^2m_3\big)\frac{\partial^2\sigma_t^2(\theta)}{\partial\theta\,\partial\theta^{T}},$$
where
$$m_1 = \frac{X_t - \mu_t(\theta)}{\sigma_t^2(\theta)}, \qquad m_2 = \frac{(X_t - \mu_t(\theta))^2 - \sigma_t^2(\theta)}{2\sigma_t^4(\theta)}, \qquad m_3 = \frac{1}{\sigma_t^2(\theta)}.$$
Then the first and second derivatives of μ t ( θ ) and σ t 2 ( θ ) can be easily expressed by
$$\begin{aligned}
&\frac{\partial\mu_t(\theta)}{\partial\omega} = m, \qquad \frac{\partial\mu_t(\theta)}{\partial\alpha_i} = X_{t-i},\\
&\frac{\partial\sigma_t^2(\theta)}{\partial\omega} = (\sigma^2+1)(m - 2\omega m) + 2\sigma^2\Big(m^2\omega + m\sum_{i=1}^{q}\alpha_i X_{t-i}\Big), \qquad \frac{\partial\sigma_t^2(\theta)}{\partial\alpha_i} = (\sigma^2+1)(X_{t-i} - 2\alpha_i X_{t-i}) + 2\sigma^2\big(m\omega X_{t-i} + \alpha_i X_{t-i}^2\big),\\
&\frac{\partial^2\mu_t(\theta)}{\partial\omega^2} = 0, \qquad \frac{\partial^2\mu_t(\theta)}{\partial\alpha_i^2} = 0, \qquad \frac{\partial^2\mu_t(\theta)}{\partial\omega\,\partial\alpha_i} = 0,\\
&\frac{\partial^2\sigma_t^2(\theta)}{\partial\omega^2} = -2m(\sigma^2+1) + 2m^2\sigma^2, \qquad \frac{\partial^2\sigma_t^2(\theta)}{\partial\alpha_i^2} = -2X_{t-i}(\sigma^2+1) + 2X_{t-i}^2\sigma^2, \qquad \frac{\partial^2\sigma_t^2(\theta)}{\partial\omega\,\partial\alpha_i} = 2m\sigma^2 X_{t-i}.
\end{aligned}$$
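As an illustration of how these pieces combine in practice (our own sketch, not the authors' code), the score contribution $\partial\ell_t/\partial\theta = m_1\,\partial\mu_t/\partial\theta + m_2\,\partial\sigma_t^2/\partial\theta$ for the PMthINARCH case ($\sigma^2 = 1$) can be evaluated as follows, with the gradient of $\sigma_t^2$ taken from the expressions above.

```python
import numpy as np

def score_t(theta, x_t, x_past, m, sigma2_eps=1.0):
    """Score contribution d l_t / d theta = m1 * d mu_t + m2 * d sigma_t^2,
    for theta = (omega, alpha_1, ..., alpha_q) and x_past = (x_{t-1}, ..., x_{t-q})."""
    omega, alphas = theta[0], np.asarray(theta[1:])
    x_past = np.asarray(x_past, dtype=float)
    mu = omega * m + alphas @ x_past
    nu = omega * (1 - omega) * m + (alphas * (1 - alphas)) @ x_past
    s2 = (sigma2_eps + 1) * nu + sigma2_eps * mu**2
    m1 = (x_t - mu) / s2
    m2 = ((x_t - mu)**2 - s2) / (2 * s2**2)
    dmu = np.concatenate(([m], x_past))
    ds2_omega = (sigma2_eps + 1) * (m - 2 * omega * m) + 2 * sigma2_eps * (m**2 * omega + m * alphas @ x_past)
    ds2_alpha = (sigma2_eps + 1) * (x_past - 2 * alphas * x_past) + 2 * sigma2_eps * (omega * m * x_past + alphas * x_past**2)
    ds2 = np.concatenate(([ds2_omega], ds2_alpha))
    return m1 * dmu + m2 * ds2

print(score_t(np.array([0.65, 0.4, 0.4]), x_t=4, x_past=[5, 2], m=3))
```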

Appendix A.3. Proof of Theorem 1

The techniques used here are mainly based on Francq and Zakoïan (2004) [20]. We will establish the following intermediate results:
(i)
$$\lim_{n\to\infty}\sup_{\theta\in\Theta}\frac{1}{n}\big|L_n(\theta) - \tilde{L}_n(\theta)\big| = 0 \quad \text{a.s.}$$
(ii)
E ( l t ( θ ) ) is continuous in θ .
(iii)
If there exists $t\in\mathbb{Z}$ such that $\sigma_t^2(\theta) = \sigma_t^2(\theta_0)$ a.s., then $\theta = \theta_0$.
(iv)
Any $\theta \neq \theta_0$ has a neighbourhood $V(\theta)$ such that
$$\limsup_{n\to\infty}\sup_{\theta^*\in V_k(\theta)\cap\Theta}\frac{1}{n}\tilde{L}_n(\theta^*) < E_{\theta_0}\ell_1(\theta_0) \quad \text{a.s.}$$
First, we prove (i). Let $a_t := \sup_{\theta\in\Theta}\big|\tilde{\mu}_t(\theta) - \mu_t(\theta)\big|$ and $b_t := \sup_{\theta\in\Theta}\big|\tilde{\sigma}_t^2(\theta) - \sigma_t^2(\theta)\big|$. Standard arguments from Corollary 2.2 in Aknouche and Francq (2023) [21] show that $a_t\big(1 + X_t + \sup_{\theta\in\Theta}\mu_t(\theta)\big) \to 0$ a.s. and $b_t\big(1 + X_t^2 + \sup_{\theta\in\Theta}\mu_t^2(\theta)\big) \to 0$ a.s. as $t\to\infty$, so we obtain the inequality
$$\begin{aligned}
\sup_{\theta\in\Theta}\frac{1}{n}\big|L_n(\theta)-\tilde{L}_n(\theta)\big| &= \sup_{\theta\in\Theta}\frac{1}{2n}\left|\sum_{t=1}^{n}\left[\log\frac{\tilde{\sigma}_t^2(\theta)}{\sigma_t^2(\theta)} + \frac{(x_t-\tilde{\mu}_t(\theta))^2}{\tilde{\sigma}_t^2(\theta)} - \frac{(x_t-\mu_t(\theta))^2}{\sigma_t^2(\theta)}\right]\right|\\
&\le \sup_{\theta\in\Theta}\frac{1}{2n}\sum_{t=1}^{n}\left[\frac{\big|\tilde{\sigma}_t^2(\theta)-\sigma_t^2(\theta)\big|}{\sigma_t^2(\theta)} + \left|\frac{(x_t-\tilde{\mu}_t(\theta))^2}{\tilde{\sigma}_t^2(\theta)} - \frac{(x_t-\mu_t(\theta))^2}{\sigma_t^2(\theta)}\right|\right]\\
&\le \sup_{\theta\in\Theta}\frac{1}{2n}\sum_{t=1}^{n}\left[\frac{\big|\tilde{\sigma}_t^2(\theta)-\sigma_t^2(\theta)\big|}{\sigma_t^2(\theta)} + \frac{\big|\tilde{\mu}_t(\theta)-\mu_t(\theta)\big|\,\big|\mu_t(\theta)+\tilde{\mu}_t(\theta)-2X_t\big|}{\tilde{\sigma}_t^2(\theta)} + \big|\tilde{\sigma}_t^2(\theta)-\sigma_t^2(\theta)\big|\,\frac{(X_t-\mu_t(\theta))^2}{\sigma_t^2(\theta)\tilde{\sigma}_t^2(\theta)}\right]\\
&\le \frac{1}{2n}\sum_{t=1}^{n}\left[\frac{2}{\sigma_t^2(\theta)}\,a_t\Big(1+X_t+\sup_{\theta\in\Theta}\mu_t(\theta)\Big) + \frac{1+\tilde{\sigma}_t^2(\theta)}{\sigma_t^2(\theta)\tilde{\sigma}_t^2(\theta)}\,b_t\Big(1+X_t^2+\sup_{\theta\in\Theta}\mu_t^2(\theta)\Big)\right].
\end{aligned}$$
The a.s. limit holds because of the Cesàro lemma.
We now prove (ii). For any $\theta\in\Theta$, let $V_\eta(\theta) = B(\theta,\eta)$ be an open ball centered at $\theta$ with radius $\eta$. For any $\tilde{\theta}\in V_\eta(\theta)$,
$$\big|\ell_t(\tilde{\theta}) - \ell_t(\theta)\big| \le \big|\sigma_t^2(\tilde{\theta}) - \sigma_t^2(\theta)\big|\,\frac{X_t^2 + \mu_t^2(\theta) + \sigma_t^2(\tilde{\theta})}{\sigma_t^2(\theta)\sigma_t^2(\tilde{\theta})} + \frac{\big|\mu_t(\tilde{\theta}) - \mu_t(\theta)\big|\,\big|\mu_t(\theta) + \mu_t(\tilde{\theta}) - 2X_t\big|}{\sigma_t^2(\tilde{\theta})}.$$
Then
$$E\Big[\sup_{\tilde{\theta}\in V_\eta(\theta)}\big|\ell_t(\tilde{\theta})-\ell_t(\theta)\big|\Big] \le \Big\|\sup_{\tilde{\theta}\in V_\eta(\theta)}\big|\sigma_t^2(\tilde{\theta})-\sigma_t^2(\theta)\big|\Big\|_2\,\Big\|\frac{X_t^2+\mu_t^2(\theta)+\sigma_t^2(\tilde{\theta})}{\sigma_t^2(\theta)\sigma_t^2(\tilde{\theta})}\Big\|_2 + \Big\|\sup_{\tilde{\theta}\in V_\eta(\theta)}\big|\mu_t(\tilde{\theta})-\mu_t(\theta)\big|\Big\|_2\,\Big\|\frac{\mu_t(\theta)+\mu_t(\tilde{\theta})-2X_t}{\sigma_t^2(\tilde{\theta})}\Big\|_2 \to 0 \quad \text{as } \eta\to 0.$$
Next, we check (iii). By Jensen’s inequality, we have
$$E\big[\ell_t(\theta)-\ell_t(\theta_0)\big] = E\left\{E\left[\frac{1}{2}\log\frac{\sigma_t^2(\theta_0)}{\sigma_t^2(\theta)} + \frac{(x_t-\mu_t(\theta_0))^2}{2\sigma_t^2(\theta_0)} - \frac{(x_t-\mu_t(\theta))^2}{2\sigma_t^2(\theta)}\,\bigg|\,\mathcal{F}_{t-1}\right]\right\} \le E\left[\log E\left(\frac{\sigma_t^2(\theta_0)}{\sigma_t^2(\theta)}\,\bigg|\,\mathcal{F}_{t-1}\right)\right] = E(\log 1) = 0.$$
The equality holds if and only if $\sigma_t^2(\theta_0)/\sigma_t^2(\theta) = 1$ a.s., i.e., $\theta = \theta_0$.
The proof of (iv) is similar to that in Supplementary Material A.4 of Xu and Zhu (2022) [22], so we omit the details.

Appendix A.4. Proof of the Positive Definiteness of Σ

Here, we prove the positive definiteness of $\Sigma$. By the definition of positive definiteness, we need to show that, for any $\xi = (\xi_0, \xi_1, \dots, \xi_q)^{T}\in\mathbb{R}^{q+1}$, $\xi^{T}\Sigma\xi = 0$ implies $\xi = 0$.
$$\xi^{T}\Sigma\,\xi = \xi^{T}E\left[\frac{1}{2\sigma_t^4(\theta_0)}\frac{\partial\sigma_t^2(\theta_0)}{\partial\theta}\frac{\partial\sigma_t^2(\theta_0)}{\partial\theta^{T}} + \frac{1}{\sigma_t^2(\theta_0)}\frac{\partial\mu_t(\theta_0)}{\partial\theta}\frac{\partial\mu_t(\theta_0)}{\partial\theta^{T}}\right]\xi = E\left[\frac{1}{2\sigma_t^4(\theta_0)}\Big(\xi^{T}\frac{\partial\sigma_t^2(\theta_0)}{\partial\theta}\Big)^2 + \frac{1}{\sigma_t^2(\theta_0)}\Big(\xi^{T}\frac{\partial\mu_t(\theta_0)}{\partial\theta}\Big)^2\right].$$
Suppose the left-hand side is $0$; then, under Assumption 1, the expectation on the right-hand side is $0$ for any $t\in\mathbb{Z}$. Because $\sigma_t^2(\theta_0) > 0$, the expression inside the expectation is non-negative, and it equals $0$ only when $\xi^{T}\partial\sigma_t^2(\theta_0)/\partial\theta = 0$ and $\xi^{T}\partial\mu_t(\theta_0)/\partial\theta = 0$ almost surely. Thus, $\xi^{T}\Sigma\xi = 0$ yields $\xi^{T}\partial\sigma_t^2(\theta_0)/\partial\theta = 0$ and $\xi^{T}\partial\mu_t(\theta_0)/\partial\theta = 0$ a.s. for $t\in\mathbb{Z}$, and vice versa.
Using the vector form of $\partial\sigma_t^2(\theta_0)/\partial\theta$, we have
$$\xi^{T}\frac{\partial\sigma_t^2(\theta_0)}{\partial\theta} = \xi^{T}\begin{pmatrix}
(\sigma^2+1)(m - 2\omega m) + 2\sigma^2\big(\omega m^2 + m\sum_{i=1}^{q}\alpha_i X_{t-i}\big)\\
(\sigma^2+1)(X_{t-1} - 2\alpha_1 X_{t-1}) + 2\sigma^2\big(\omega m X_{t-1} + \alpha_1 X_{t-1}^2\big)\\
\vdots\\
(\sigma^2+1)(X_{t-q} - 2\alpha_q X_{t-q}) + 2\sigma^2\big(\omega m X_{t-q} + \alpha_q X_{t-q}^2\big)
\end{pmatrix}.$$
Suppose the left-hand side is 0 almost surely, then the right-hand side is also 0 almost surely, which can be written as
$$\xi_0(\sigma^2+1)(m - 2\omega m) + 2\sigma^2\xi_0\Big(\omega m^2 + m\sum_{i=1}^{q}\alpha_i X_{t-i}\Big) + \xi_1(\sigma^2+1)(X_{t-1} - 2\alpha_1 X_{t-1}) + 2\sigma^2\xi_1\big(\omega m X_{t-1} + \alpha_1 X_{t-1}^2\big) + M_{t,2} = 0 \quad \text{a.s.},$$
where
$$M_{t,2} = \sum_{k=2}^{q}\xi_k\Big[(\sigma^2+1)(X_{t-k} - 2\alpha_k X_{t-k}) + 2\sigma^2\big(\omega m X_{t-k} + \alpha_k X_{t-k}^2\big)\Big].$$
So the coefficients of the above equation must satisfy
$$\xi_i(\sigma^2+1) = 0, \qquad 2\sigma^2\xi_i = 0, \qquad i = 0,\dots,q.$$
Since $\sigma^2 > 0$, we must have $\xi_i = 0$, $i = 0,\dots,q$. Thus, $\xi = (\xi_0, \xi_1, \dots, \xi_q)^{T} = 0$, which completes the proof of the positive definiteness of $\Sigma$.

Appendix A.5. Lemmas for the Proof of Theorem 2

Similar to the proof of Theorem 1.2 in Hu (2016) [9], we give some lemmas needed for the proof of Theorem 2. From the derivatives of the quasi-likelihood function, we have
$$\frac{\partial\mu_t(\theta)}{\partial\omega} = m, \qquad \frac{\partial\sigma_t^2(\theta)}{\partial\omega} = (\sigma^2+1)(m - 2\omega m) + 2\sigma^2\Big(m^2\omega + m\sum_{i=1}^{q}\alpha_i X_{t-i}\Big) \le (\sigma^2+1)m(1 - 2\omega_L) + 2\sigma^2\Big(m^2\omega_U + m\sum_{i=1}^{q}\alpha_U X_{t-i}\Big),$$
thus, $E\big(\partial\mu_t(\theta)/\partial\omega\big)^2 < \infty$ and $E\big(\partial\sigma_t^2(\theta)/\partial\omega\big)^2 < \infty$; the derivatives with respect to the other parameters can be bounded likewise.
Lemma A1.
Under Assumptions 1 and 2, when n ,
$$\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta} \xrightarrow{d} N(0,\Sigma), \qquad -\frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}} \xrightarrow{P} \Sigma.$$
Proof of Lemma A1.
First, we show that
$$n^{-1/2}\sum_{t=1}^{n}\left(\frac{\partial\ell_t(\theta_0)}{\partial\theta_i} - \frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta_i}\right) \xrightarrow{P} 0, \qquad n^{-1}\sum_{t=1}^{n}\left(\frac{\partial^2\ell_t(\theta_0)}{\partial\theta_i\,\partial\theta_j} - \frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta_i\,\partial\theta_j}\right) \xrightarrow{P} 0.$$
Notice that $\tilde{\mu}_t(\theta)$ and $\tilde{\sigma}_t^2(\theta)$ are stationary approximations of $\mu_t(\theta)$ and $\sigma_t^2(\theta)$. Since $X_t$ is stationary and ergodic, by arguments similar to Proposition 2.1.1 in Straumann (2005) [23], for fixed $\theta\in\Theta$, $\tilde{\mu}_t(\theta)$, $\tilde{\sigma}_t^2(\theta)$, $\mu_t(\theta)$, and $\sigma_t^2(\theta)$ are also stationary and ergodic. Hence, similar to the proof of Lemma A2 in Hu and Andrews (2021) [24], it is easy to obtain
$$n^{-1/2}\sum_{t=1}^{n}\left(\frac{\partial\ell_t(\theta_0)}{\partial\theta_i} - \frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta_i}\right) \xrightarrow{P} 0, \qquad n^{-1}\sum_{t=1}^{n}\left(\frac{\partial^2\ell_t(\theta_0)}{\partial\theta_i\,\partial\theta_j} - \frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta_i\,\partial\theta_j}\right) \xrightarrow{P} 0.$$
Therefore, it suffices to show that
$$\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{\partial\ell_t(\theta_0)}{\partial\theta} \xrightarrow{d} N(0,\Sigma), \qquad -\frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\ell_t(\theta_0)}{\partial\theta\,\partial\theta^{T}} \xrightarrow{P} \Sigma.$$
First, we should guarantee that
$$E_{\theta_0}\left\|\frac{\partial\ell_t(\theta_0)}{\partial\theta}\frac{\partial\ell_t(\theta_0)}{\partial\theta^{T}}\right\| < \infty, \qquad E_{\theta_0}\left\|\frac{\partial^2\ell_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right\| < \infty. \tag{A1}$$
Now we prove the first part of (A1).
$$E_{\theta_0}\left(\frac{\partial\ell_t(\theta_0)}{\partial\omega}\right)^2 = E_{\theta_0}\left[\frac{1}{2\sigma_t^4(\theta_0)}\left(\frac{\partial\sigma_t^2(\theta_0)}{\partial\omega}\right)^2 + \frac{1}{\sigma_t^2(\theta_0)}\left(\frac{\partial\mu_t(\theta_0)}{\partial\omega}\right)^2\right] < \infty.$$
The other terms can be handled similarly; thus, the first part of (A1) holds. The proof of the second part of (A1) is similar, so we omit the details.
Under (A1), $\{\partial\ell_t(\theta_0)/\partial\theta\}$ is a martingale difference sequence with respect to $\mathcal{F}_t$: at $\theta = \theta_0$, $E_{\theta_0}\big[\partial\ell_t(\theta_0)/\partial\theta \mid \mathcal{F}_{t-1}\big] = 0$, so $E_{\theta_0}\big[\partial\ell_t(\theta_0)/\partial\theta\big] = 0$. Moreover, we have $\Sigma = E_{\theta_0}\big[\partial\ell_t(\theta_0)/\partial\theta\;\partial\ell_t(\theta_0)/\partial\theta^{T}\big]$ from Section 3.2. Hence, $\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\partial\tilde{\ell}_t(\theta_0)/\partial\theta \xrightarrow{d} N(0,\Sigma)$ holds by the central limit theorem for martingale difference sequences in Billingsley (1961). Similarly, we have $E_{\theta_0}\big[\partial^2\ell_t(\theta_0)/\partial\theta\,\partial\theta^{T}\big] = -\Sigma$.
Under Assumption 1, $-\frac{1}{n}\sum_{t=1}^{n}\partial^2\tilde{\ell}_t(\theta_0)/\partial\theta\,\partial\theta^{T} \xrightarrow{P} \Sigma$ follows from the ergodic theorem. Thus, Lemma A1 is proved. □
Before stating Lemma A2, define
$$\tilde{T}_n(u) := \tilde{L}_n\Big(\theta_0 + \frac{u}{\sqrt{n}}\Big) - \tilde{L}_n(\theta_0), \qquad u\in\mathbb{R}^{q+1};$$
we use T ˜ n to derive the asymptotic distribution of θ ^ n .
For any u R q + 1 , the Taylor series expansion of T ˜ n ( u ) at θ 0 is
$$\tilde{T}_n(u) = \frac{1}{\sqrt{n}}\sum_{t=1}^{n}u^{T}\frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta} + \frac{1}{2n}\sum_{t=1}^{n}u^{T}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}u + \frac{1}{2n}\sum_{t=1}^{n}u^{T}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}} - \frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right]u, \tag{A2}$$
where $\theta^* = \theta_n^*(u)$ lies on the line segment connecting $\theta_0$ and $\theta_0 + u/\sqrt{n}$. For the Euclidean distance $\|\cdot\|$ and any compact set $K\subset\mathbb{R}^{q+1}$, $\sup_{u\in K}\|\theta^* - \theta_0\| \to 0$ as $n\to\infty$.
Lemma A2.
Under Assumptions 1 and 2, when n ,
$$\frac{1}{n}\sum_{t=1}^{n}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}} - \frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right] \xrightarrow{P} 0.$$
Proof. 
Similar to Lemma A1, for any $1 \le i, j \le q+1$,
$$\frac{1}{n}\sum_{t=1}^{n}\left(\frac{\partial^2\ell_t(\theta_0)}{\partial\theta_i\,\partial\theta_j} - \frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta_i\,\partial\theta_j}\right) \xrightarrow{P} 0. \tag{A3}$$
Using arguments similar to the proof of Theorem 2.2 of Francq and Zakoïan (2004) [20], it suffices to show
$$\frac{1}{n}\sum_{t=1}^{n}\left(\frac{\partial^2\ell_t(\theta^*)}{\partial\theta_i\,\partial\theta_j} - \frac{\partial^2\ell_t(\theta_0)}{\partial\theta_i\,\partial\theta_j}\right) \xrightarrow{P} 0. \tag{A4}$$
By the Taylor series expansion, we have
$$\frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\ell_t(\theta^*)}{\partial\theta_i\,\partial\theta_j} = \frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\ell_t(\theta_0)}{\partial\theta_i\,\partial\theta_j} + \frac{1}{n}\sum_{t=1}^{n}\frac{\partial}{\partial\theta_k}\left(\frac{\partial^2\ell_t(\theta^{**})}{\partial\theta_i\,\partial\theta_j}\right)\big(\theta^* - \theta_0\big),$$
where $\theta^{**} = \theta_n^{**}(u)$ lies on the line segment connecting $\theta_0$ and $\theta^*$, so that for any $u$ we have $\|\theta^{**} - \theta_0\| \to 0$ a.s. as $n\to\infty$.
From (A2), $\|\theta^* - \theta_0\| \to 0$ a.s., so
$$\frac{1}{n}\sum_{t=1}^{n}\frac{\partial}{\partial\theta_k}\left(\frac{\partial^2\ell_t(\theta^{**})}{\partial\theta_i\,\partial\theta_j}\right)\big(\theta^* - \theta_0\big) \to 0 \quad \text{a.s.}$$
if
$$\limsup_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n}\left|\frac{\partial}{\partial\theta_k}\frac{\partial^2\ell_t(\theta^{**})}{\partial\theta_i\,\partial\theta_j}\right| < \infty \quad \text{a.s.} \tag{A5}$$
Then we have
$$\frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\ell_t(\theta^*)}{\partial\theta_i\,\partial\theta_j} - \frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\ell_t(\theta_0)}{\partial\theta_i\,\partial\theta_j} \to 0 \quad \text{a.s.},$$
so (A4) is proved.
Using arguments similar to the proof of Theorem 2.2 of Francq and Zakoïan (2004) [20], there exists a neighborhood $\nu(\theta_0)$ such that
$$E_{\theta_0}\left[\sup_{\theta\in\nu(\theta_0)\cap\Theta}\left|\frac{\partial}{\partial\theta_k}\frac{\partial^2\ell_t(\theta)}{\partial\theta_i\,\partial\theta_j}\right|\right] < \infty, \qquad \sup_{\theta\in\nu(\theta_0)}\left|\frac{1}{n}\sum_{t=1}^{n}\left(\frac{\partial^2\ell_t(\theta)}{\partial\theta_i\,\partial\theta_j} - \frac{\partial^2\tilde{\ell}_t(\theta)}{\partial\theta_i\,\partial\theta_j}\right)\right| \xrightarrow{P} 0. \tag{A6}$$
Therefore, by the ergodic theorem, we have
$$\limsup_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n}\left|\frac{\partial}{\partial\theta_k}\frac{\partial^2\ell_t(\theta^{**})}{\partial\theta_i\,\partial\theta_j}\right| \le \limsup_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n}\sup_{\theta\in\nu(\theta_0)\cap\Theta}\left|\frac{\partial}{\partial\theta_k}\frac{\partial^2\ell_t(\theta)}{\partial\theta_i\,\partial\theta_j}\right| = E_{\theta_0}\left[\sup_{\theta\in\nu(\theta_0)\cap\Theta}\left|\frac{\partial}{\partial\theta_k}\frac{\partial^2\ell_t(\theta)}{\partial\theta_i\,\partial\theta_j}\right|\right] < \infty,$$
so (A5) is proved.
In view of (A3), (A4) and (A6), we obtain Lemma A2. □
Lemma A3.
For any compact set $K\subset\mathbb{R}^{q+1}$ and any $\varepsilon > 0$,
$$\lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\big|\tilde{T}_n(u) - \tilde{T}_n(v)\big| \ge \varepsilon\right) = 0.$$
Proof. 
For any $\varepsilon > 0$, by (A2) we have
$$\begin{aligned}
&\lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\big|\tilde{T}_n(u)-\tilde{T}_n(v)\big| \ge \varepsilon\right)\\
&\quad\le \lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\left|\frac{1}{\sqrt{n}}\sum_{t=1}^{n}(u-v)^{T}\frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta}\right| \ge \frac{\varepsilon}{3}\right)\\
&\qquad + \lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\frac{1}{2n}\left|\sum_{t=1}^{n}u^{T}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}u - \sum_{t=1}^{n}v^{T}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}v\right| \ge \frac{\varepsilon}{3}\right)\\
&\qquad + \lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\frac{1}{2n}\left|\sum_{t=1}^{n}u^{T}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}}-\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right]u - \sum_{t=1}^{n}v^{T}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}}-\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right]v\right| \ge \frac{\varepsilon}{3}\right).
\end{aligned}$$
Because of Lemmas A1 and A2, we have
$$\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta} = O_p(1), \qquad \frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}} = O_p(1),$$
$$\frac{1}{n}\sum_{t=1}^{n}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}} - \frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right] = o_p(1),$$
where $O_p(1)$ and $o_p(1)$ for a vector or matrix mean $O_p(1)$ and $o_p(1)$ for every element. By the compactness of $K$, we have
$$\lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\left|\frac{1}{\sqrt{n}}\sum_{t=1}^{n}(u-v)^{T}\frac{\partial\tilde{\ell}_t(\theta_0)}{\partial\theta}\right| \ge \frac{\varepsilon}{3}\right) = 0,$$
$$\lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\frac{1}{2n}\left|\sum_{t=1}^{n}u^{T}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}u - \sum_{t=1}^{n}v^{T}\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}v\right| \ge \frac{\varepsilon}{3}\right) = 0,$$
$$\lim_{\delta\to 0}\limsup_{n\to\infty}P\left(\sup_{u,v\in K,\,\|u-v\|<\delta}\frac{1}{2n}\left|\sum_{t=1}^{n}u^{T}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}}-\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right]u - \sum_{t=1}^{n}v^{T}\left[\frac{\partial^2\tilde{\ell}_t(\theta^*)}{\partial\theta\,\partial\theta^{T}}-\frac{\partial^2\tilde{\ell}_t(\theta_0)}{\partial\theta\,\partial\theta^{T}}\right]v\right| \ge \frac{\varepsilon}{3}\right) = 0,$$
which completes our proof. □

Appendix A.6. Proof of Theorem 2

Proof. 
Let $T(u) = u^{T}N - \frac{1}{2}u^{T}\Sigma u$, where $N$ is a multivariate Gaussian random vector with mean $0$ and covariance matrix $\Sigma$. By Lemmas A1 and A2, for any $u\in\mathbb{R}^{q+1}$, the finite-dimensional distributions of $\tilde{T}_n$ converge to those of $T$ as $n\to\infty$: $\tilde{T}_n(u)\xrightarrow{d}T(u)$.
By Lemma A3 and similar to Hu (2016) [9], $\tilde{T}_n(\cdot)$ is tight on the space $C(K)$ of continuous functions on any compact set $K\subset\mathbb{R}^{q+1}$. So by Theorem 7.1 in Billingsley (1999) [25], $\tilde{T}_n(\cdot)\Rightarrow T(\cdot)$ on $C(K)$. From Appendix A.4 and Lemma A1, $\Sigma$ is positive definite and invertible; meanwhile, $T(\cdot)$ is concave with unique maximizer $\Sigma^{-1}N \sim N(0,\Sigma^{-1})$, and $\tilde{T}_n(\cdot)$ is maximized at $u_{\max} = \sqrt{n}(\hat{\theta}_n - \theta_0)$. Thus, the result of Theorem 2 follows from the proof of Lemma 2.2 and Remark 1 in Davis et al. (1992) [26]. □

References

1. McKenzie, E. Some simple models for discrete variate time series. Water Resour. Bull. 1985, 21, 645–650.
2. Al-Osh, M.A.; Alzaid, A.A. First-order integer-valued autoregressive (INAR(1)) process. J. Time Ser. Anal. 1987, 8, 261–275.
3. Al-Osh, M.A.; Alzaid, A.A. Integer-valued moving average (INMA) process. Stat. Pap. 1988, 29, 281–300.
4. McKenzie, E. Some ARMA models for dependent sequences of Poisson counts. Adv. Appl. Probab. 1988, 20, 822–835.
5. Ferland, R.; Latour, A.; Oraichi, D. Integer-valued GARCH process. J. Time Ser. Anal. 2006, 27, 923–942.
6. Steutel, F.W.; van Harn, K. Discrete analogues of self-decomposability and stability. Ann. Probab. 1979, 7, 893–899.
7. Qian, L.; Zhu, F. A new minification integer-valued autoregressive process driven by explanatory variables. Aust. N. Z. J. Stat. 2022, 64, 478–494.
8. Huang, J.; Zhu, F.; Deng, D. A mixed generalized Poisson INAR model with applications. J. Stat. Comput. Simul. 2023, forthcoming.
9. Hu, X. Volatility Estimation for Integer-Valued Financial Time Series. Ph.D. Thesis, Northwestern University, Evanston, IL, USA, 2016.
10. Liu, M.; Zhu, F.; Zhu, K. Modeling normalcy-dominant ordinal time series: An application to air quality level. J. Time Ser. Anal. 2022, 43, 460–478.
11. Weiß, C.H.; Zhu, F.; Hoshiyar, A. Softplus INGARCH models. Stat. Sin. 2022, 32, 1099–1120.
12. Weiß, C.H. An Introduction to Discrete-Valued Time Series; John Wiley & Sons: Chichester, UK, 2018.
13. Davis, R.A.; Fokianos, K.; Holan, S.H.; Joe, H.; Livsey, J.; Lund, R.; Pipiras, V.; Ravishanker, N. Count time series: A methodological review. J. Am. Stat. Assoc. 2021, 116, 1533–1547.
14. Aknouche, A.; Scotto, M. A Multiplicative Thinning-Based Integer-Valued GARCH Model. Working Paper, 2022. Available online: https://mpra.ub.uni-muenchen.de/112475 (accessed on 17 January 2023).
15. Daniels, H.E. Saddlepoint approximations in statistics. Ann. Math. Stat. 1954, 25, 631–650.
16. Field, C.; Ronchetti, E. Small Sample Asymptotics; Institute of Mathematical Statistics Lecture Notes—Monograph Series; Institute of Mathematical Statistics: Hayward, CA, USA, 1990.
17. Jensen, J.L. Saddlepoint Approximations; Oxford University Press: Oxford, UK, 1995.
18. Butler, R.W. Saddlepoint Approximations with Applications; Cambridge University Press: Cambridge, UK, 2007.
19. Pedeli, X.; Davison, A.C.; Fokianos, K. Likelihood estimation for the INAR(p) model by saddlepoint approximation. J. Am. Stat. Assoc. 2015, 110, 1229–1238.
20. Francq, C.; Zakoïan, J.M. Maximum likelihood estimation of pure GARCH and ARMA-GARCH processes. Bernoulli 2004, 10, 605–637.
21. Aknouche, A.; Francq, C. Two-stage weighted least squares estimator of the conditional mean of observation-driven time series models. J. Econom. 2023, forthcoming.
22. Xu, Y.; Zhu, F. A new GJR-GARCH model for Z-valued time series. J. Time Ser. Anal. 2022, 43, 490–500.
23. Straumann, D. Estimation in Conditionally Heteroscedastic Time Series Models; Springer: Berlin, Germany, 2005.
24. Hu, X.; Andrews, B. Integer-valued asymmetric GARCH modeling. J. Time Ser. Anal. 2021, 42, 737–751.
25. Billingsley, P. Convergence of Probability Measures, 2nd ed.; Wiley: New York, NY, USA, 1999.
26. Davis, R.A.; Knight, K.; Liu, J. M-estimation for autoregressions with infinite variance. Stoch. Process. Their Appl. 1992, 40, 145–180.
Figure 1. (a) The plot of the integer-valued series of ExRate. (b) The plot of the ACF of the observations. (c) The plot of the PACF of the observations.
Table 1. Mean and MADE of estimates for the PMthINARCH(2) model with SPMLE.

| Model | n | | ω | α1 | α2 |
|---|---|---|---|---|---|
| A1 (m = 3) | 100 | Mean | 0.6069 | 0.5356 | 0.3569 |
| | | MADE | 0.3681 | 0.2866 | 0.2510 |
| | 200 | Mean | 0.5722 | 0.5026 | 0.3952 |
| | | MADE | 0.3557 | 0.2434 | 0.2243 |
| | 500 | Mean | 0.6436 | 0.4888 | 0.4140 |
| | | MADE | 0.2724 | 0.1287 | 0.1005 |
| A2 (m = 8) | 100 | Mean | 0.7782 | 0.5076 | 0.4750 |
| | | MADE | 0.2533 | 0.2752 | 0.3007 |
| | 200 | Mean | 0.7935 | 0.5161 | 0.4701 |
| | | MADE | 0.2318 | 0.2527 | 0.2778 |
| | 500 | Mean | 0.8703 | 0.5170 | 0.4677 |
| | | MADE | 0.1752 | 0.2155 | 0.2390 |
Table 2. Mean and MADE of estimates for the GMthINARCH(2) model with SPMLE.

| Model | n | | ω | α1 | α2 |
|---|---|---|---|---|---|
| B1 (m = 4) | 100 | Mean | 0.7821 | 0.2930 | 0.2870 |
| | | MADE | 0.1195 | 0.1499 | 0.1766 |
| | 200 | Mean | 0.8190 | 0.3611 | 0.3185 |
| | | MADE | 0.1121 | 0.1425 | 0.1640 |
| | 500 | Mean | 0.8456 | 0.3610 | 0.3298 |
| | | MADE | 0.0601 | 0.1331 | 0.1414 |
| B2 (m = 6) | 100 | Mean | 0.4718 | 0.2086 | 0.3811 |
| | | MADE | 0.1965 | 0.1466 | 0.1463 |
| | 200 | Mean | 0.5186 | 0.2632 | 0.5080 |
| | | MADE | 0.1607 | 0.1198 | 0.1412 |
| | 500 | Mean | 0.5468 | 0.2874 | 0.4896 |
| | | MADE | 0.1415 | 0.1050 | 0.0770 |
Table 3. Estimation results and AIC and BIC values for the PMthINARCH(3), GMthINARCH(3), PINAR(3), and INARCH(3) models.

| Model | ω | α1 | α2 | α3 | AIC | BIC |
|---|---|---|---|---|---|---|
| PMthINARCH(3) | 0.3242 | 0.5214 | 0.1945 | 0.0842 | 1395.296 | 1413.613 |
| GMthINARCH(3) | 0.4904 | 0.2532 | 0.2155 | 0.2392 | 1402.472 | 1420.789 |
| PINAR(3) | | 0.1335 | 0.4116 | 0.3901 | 1572.806 | 1586.544 |
| INARCH(3) | 8.5670 | 0.1140 | 0.1379 | 0.1009 | 1524.638 | 1542.955 |
Table 4. MADEs of in-sample and out-of-sample forecasts for the PMthINARCH(3), GMthINARCH(3), and PINAR(3) models with SPMLE.

| Forecast horizon | PMthINARCH | GMthINARCH | PINAR |
|---|---|---|---|
| In-sample C1 | 15.30 | 16.80 | 17.40 |
| In-sample C2 | 15.87 | 17.67 | 18.40 |
| In-sample C3 | 16.65 | 20.70 | 21.90 |
| Out-of-sample D1 | 17.50 | 17.70 | 22.50 |
| Out-of-sample D2 | 19.47 | 19.80 | 23.80 |
| Out-of-sample D3 | 20.50 | 25.25 | 27.50 |
Table 5. MADEs of in-sample and out-of-sample forecasts for the PMthINARCH(3) model with SPMLE and 2SWLSE.

| Forecast horizon | SPMLE | 2SWLSE |
|---|---|---|
| In-sample C1 | 15.30 | 16.20 |
| In-sample C2 | 15.87 | 17.20 |
| In-sample C3 | 16.65 | 18.55 |
| Out-of-sample D1 | 17.50 | 18.60 |
| Out-of-sample D2 | 19.47 | 21.67 |
| Out-of-sample D3 | 20.50 | 22.70 |
Table 6. MADEs of in-sample and out-of-sample forecasts for the GMthINARCH(3) model with SPMLE and 2SWLSE.

| Forecast horizon | SPMLE | 2SWLSE |
|---|---|---|
| In-sample C1 | 16.80 | 17.20 |
| In-sample C2 | 17.67 | 18.07 |
| In-sample C3 | 20.70 | 21.05 |
| Out-of-sample D1 | 17.70 | 19.90 |
| Out-of-sample D2 | 19.80 | 22.87 |
| Out-of-sample D3 | 25.25 | 26.50 |
