Article

A Mixture Autoregressive Model Based on an Asymmetric Exponential Power Distribution

Department of Statistics, College of Economics, Jinan University, Guangzhou 510632, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(2), 196; https://doi.org/10.3390/axioms12020196
Submission received: 26 December 2022 / Revised: 4 February 2023 / Accepted: 8 February 2023 / Published: 13 February 2023
(This article belongs to the Special Issue Methods and Applications of Advanced Statistical Analysis)

Abstract

In nonlinear time series analysis, the mixture autoregressive (MAR) model is an effective statistical tool for capturing the multimodality of data. However, traditional methods usually assume that the error follows a specific distribution, and are therefore not adaptive to the dataset. This paper proposes a mixture autoregressive model based on an asymmetric exponential power distribution, which includes the normal, skew-normal, generalized error, Laplace, asymmetric Laplace, and uniform distributions as special cases. The proposed method can thus be seen as a generalization of several existing models; it adapts to unknown error structures to improve prediction accuracy, even in the presence of fat tails and asymmetry. In addition, an expectation-maximization algorithm is applied to solve the proposed optimization problem. The finite-sample performance of the proposed approach is illustrated via numerical simulations. Finally, we apply the proposed methodology to analyze the daily return series of the Hong Kong Hang Seng Index. The results indicate that the proposed method is more robust and adaptive to the error distribution than other existing methods.

1. Introduction

In the study of time series, autoregressive (AR) models are a fundamental and important statistical tool. Classical AR models allow only unimodal marginal and conditional densities and cannot capture conditional heteroscedasticity. To address this, Wong & Li (2000) introduced the k-component Gaussian mixture AR (GMAR) model, presented as follows [1].
Let $X_t$ be a random variable observed at time $t$, and let $\mathcal{F}_t$ be the information set up to time $t$. Then $X_t$ arises from a $k$-component GMAR model of order $p$ if $X_t \mid \mathcal{F}_{t-1}$ has a density of the form
$$g(x_t \mid \mathcal{F}_{t-1};\gamma)=\sum_{i=1}^{k}\pi_i\,\phi\!\left(x_t;\ \beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j},\ \sigma_i^2\right),\qquad (1)$$
where $\pi_i>0$, $\sum_{i=1}^{k}\pi_i=1$, $\beta_i=(\beta_{i0},\ldots,\beta_{ip})$, and $\sigma_i^2>0$ for all $i=1,\ldots,k$; $\gamma=(\pi_1,\beta_1,\sigma_1^2,\ldots,\pi_k,\beta_k,\sigma_k^2)$ is an unknown parameter vector; and $\phi(x;\mu,\sigma^2)$ is the normal density function with mean $\mu$ and variance $\sigma^2$.
The GMAR model (1) is very useful for modeling nonlinear time series, as it can capture serial correlations, time-varying means, and volatilities [2]. Furthermore, Wong & Li (2001) and Fong et al. (2007) extended the GMAR model to the AR conditional heteroscedastic (ARCH) setting and the vector AR setting, respectively [3,4]. Since the GMAR model relies on the Gaussian assumption, its estimator is not robust to heavy-tailed data or outliers. In order to estimate the occurrence of extreme financial events accurately, Wong et al. (2009) proposed a Student t-mixture AR model [2]. Nguyen et al. (2016) introduced the Laplace mixture AR model [5]. Maleki et al. (2020) considered mixture autoregressive models based on scale mixtures of skew-normal distributions [6]. Meitz et al. (2021) proposed a new mixture autoregressive model based on Student's t-distribution [7]. Virolainen (2021) introduced a mixture autoregressive model that combines Gaussian and Student's t mixture components [8]. Solikhah et al. (2021) studied a Fisher's z distribution-based mixture autoregressive model [9]. Since all of these methods assume a specific error distribution, they are not adaptive to the true error structure, and a wrong distributional assumption may reduce the precision of the model estimates.
In order to develop a robust mixture autoregressive model, in this paper we propose a robust estimation procedure for mixture AR models by replacing the normal density function in (1) with an asymmetric exponential power (AEP) density function [10]. The AEP distribution includes many important statistical distributions as special cases, e.g., the normal, skew-normal, generalized error, Laplace, asymmetric Laplace, and uniform distributions. The proposed method therefore provides a more general approach, which can adapt to many more error structures and automatically chooses its parameters to achieve both efficiency and robustness of the estimators. We apply an expectation-maximization (EM) algorithm [11] to solve the proposed optimization problem. In addition, the finite-sample performance of the proposed method is evaluated via numerical studies and a real-data analysis.
The remainder of the paper is organized as follows. In Section 2, we introduce an estimation procedure for mixture AR models via an AEP density function and introduce an EM algorithm to solve the proposed methodology. In Section 3, simulation studies are conducted to evaluate the finite sample performance of the proposed method. In Section 4, a real data set is analyzed to compare the proposed method with some existing methods. We conclude with some remarks in Section 5.

2. Methodology

According to Fernández et al. (1995), an AEP density $f(x;\mu,\sigma,\alpha,\tau)$ is defined as follows [10].
$$f(x;\mu,\sigma,\alpha,\tau)=\frac{\alpha\,\tau(1-\tau)}{\Gamma(1/\alpha)\,\sigma}\exp\left\{-\frac{|x-\mu|^{\alpha}}{\sigma^{\alpha}}\Big[\tau^{\alpha}\,I(x\ge\mu)+(1-\tau)^{\alpha}\,I(x<\mu)\Big]\right\},\qquad (2)$$
where $\mu\in\mathbb{R}$ is the location parameter, $\sigma>0$ is the scale parameter, $0<\tau<1$ controls the skewness, $\alpha>0$ is the shape parameter, and $I(\cdot)$ is an indicator function. The AEP density is a flexible and general density class that can capture both the fat tails and the asymmetry of the error term. It also includes some important statistical density functions as special cases, e.g.,
  • Normal density function: $f(x;\mu,\sigma,\alpha=2,\tau=0.5)$.
  • Skew-normal density function: $f(x;\mu,\sigma,\alpha=2,\tau)$.
  • Generalized error density function: $f(x;\mu,\sigma,\alpha,\tau=0.5)$.
  • Laplace density function: $f(x;\mu,\sigma,\alpha=1,\tau=0.5)$.
  • Asymmetric Laplace density function: $f(x;\mu,\sigma,\alpha=1,\tau)$.
  • Uniform density function: the limiting case $f(x;\mu,\sigma,\alpha\to\infty,\tau=0.5)$.
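To make the definition concrete, here is a minimal Python sketch of the density (2). This is our own illustration, not the authors' code; the function name and the crude normalization check are ours. (Note that with $\alpha=2$, $\tau=0.5$ the density is normal up to the scale parameterization.)

```python
import math

def aep_pdf(x, mu, sigma, alpha, tau):
    """Asymmetric exponential power density f(x; mu, sigma, alpha, tau), Eq. (2)."""
    # Normalizing constant: alpha * tau * (1 - tau) / (Gamma(1/alpha) * sigma)
    const = alpha * tau * (1.0 - tau) / (math.gamma(1.0 / alpha) * sigma)
    # Side-dependent skewness factor: tau^alpha to the right of mu, (1-tau)^alpha to the left
    skew = tau if x >= mu else 1.0 - tau
    return const * math.exp(-((abs(x - mu) / sigma) ** alpha) * skew ** alpha)

# Sanity check: the density integrates to (approximately) one over a wide grid
grid = [i * 0.01 - 30.0 for i in range(6001)]
total = sum(aep_pdf(x, 0.0, 1.0, 1.5, 0.3) for x in grid) * 0.01
```

The `skew` factor is the only place the two half-lines differ, which is what produces the asymmetry controlled by $\tau$.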
Based on the AEP density function, we propose the k-component AEP-MAR model, which is defined as follows:
$$h(x_t\mid\mathcal{F}_{t-1};\theta)=\sum_{i=1}^{k}\pi_i\, f_i\!\left(x_t;\ \beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j},\ \sigma_i,\alpha_i,\tau_i\right),\qquad (3)$$
where $\theta=(\pi_1,\beta_1,\sigma_1,\alpha_1,\tau_1,\ldots,\pi_k,\beta_k,\sigma_k,\alpha_k,\tau_k)$ is an unknown parameter vector, and $f_i$ is an AEP density function given in (2). For the $k$-component AEP-MAR model, we obtain the conditional expectation and conditional variance as follows:
$$E(x_t\mid\mathcal{F}_{t-1};\theta)=\sum_{i=1}^{k}\pi_i\left(\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\right)=\sum_{i=1}^{k}\pi_i\mu_i,$$
$$\mathrm{Var}(x_t\mid\mathcal{F}_{t-1};\theta)=\sum_{i=1}^{k}\pi_i\sigma_i^2+\sum_{i=1}^{k}\pi_i\mu_i^2-\left(\sum_{i=1}^{k}\pi_i\mu_i\right)^2,$$
where $\mu_i=\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}$.
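These conditional moments follow the usual mixture rules (the variance is the law of total variance). A small sketch, treating each component's conditional variance generically as $\sigma_i^2$ as the display does (our illustration; the function name is ours):

```python
def mixture_moments(pis, mus, sigma2s):
    """Conditional mean and variance of a k-component mixture from the
    mixing weights pi_i, component means mu_i, and component variances sigma_i^2."""
    mean = sum(p * m for p, m in zip(pis, mus))
    # Law of total variance: E[Var] + Var[E]
    var = (sum(p * s2 for p, s2 in zip(pis, sigma2s))
           + sum(p * m * m for p, m in zip(pis, mus))
           - mean ** 2)
    return mean, var

m, v = mixture_moments([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```

Two well-separated components with unit variance give an overall variance of 2: the spread of the component means contributes on top of the within-component variances.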
Let  x 1 , x 2 , , x n  be a random sample from the k-component AEP-MAR model. Then, the sample conditional log-likelihood function can be written as
$$P_n(\theta)=\sum_{t=p+1}^{n}\log\left\{\sum_{i=1}^{k}\pi_i\,\frac{\alpha_i\,\tau_i(1-\tau_i)}{\Gamma(1/\alpha_i)\,\sigma_i}\exp\left[-\frac{\big|x_t-\beta_{i0}-\sum_{j=1}^{p}\beta_{ij}x_{t-j}\big|^{\alpha_i}}{\sigma_i^{\alpha_i}}\left(\tau_i^{\alpha_i}\,I\Big(x_t\ge\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big)+(1-\tau_i)^{\alpha_i}\,I\Big(x_t<\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big)\right)\right]\right\}.$$
Therefore, an estimator $\hat\theta_n$ for $\theta$ is defined as
$$\hat\theta_n=\arg\max_{\theta} P_n(\theta).$$
Theoretically, by selecting the proper parameters of location, skewness, shape, and scale, the AEP-MAR model can select the best likelihood function via the data-driven technique. Under some special conditions, the likelihood function of the AEP-MAR model can also be equivalent to the existing statistical methods, e.g., GMAR [1] and LMAR [5]. This implies that the AEP-MAR model can provide a more general approach that does not need to assume the error distribution in advance. In addition, the proposed method can adapt to the unknown error structures to improve prediction accuracy.

Algorithm

The EM algorithm, proposed by Dempster et al. (1977), is a commonly used algorithm for maximum likelihood estimation with incomplete data. Under proper regularity conditions, the EM algorithm has the ascent property and global convergence [11]. In this subsection, we apply the EM algorithm to solve (3).
Firstly, we define the unobserved random variables
$$z_{tj}=\begin{cases}1, & \text{if sample } x_t \text{ is in the } j\text{-th component},\\ 0, & \text{otherwise},\end{cases}$$
where $t=1,\ldots,n$ and $j=1,\ldots,k$. Let $z_t=(z_{t1},\ldots,z_{tk})$. Then the complete data are $\{(x_t,z_t),\ t=1,\ldots,n\}$. Thus, the log-likelihood function of the complete data can be obtained as follows:
$$R_n(\theta)=\sum_{t=p+1}^{n}\sum_{i=1}^{k}z_{ti}\log\left\{\pi_i\,\frac{\alpha_i\,\tau_i(1-\tau_i)}{\Gamma(1/\alpha_i)\,\sigma_i}\exp\left[-\frac{\big|x_t-\beta_{i0}-\sum_{j=1}^{p}\beta_{ij}x_{t-j}\big|^{\alpha_i}}{\sigma_i^{\alpha_i}}\left(\tau_i^{\alpha_i}\,I\Big(x_t\ge\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big)+(1-\tau_i)^{\alpha_i}\,I\Big(x_t<\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big)\right)\right]\right\}.\qquad (4)$$
In the following, we apply an EM algorithm to implement (4).
E-step: Given the m-th approximation  θ ^ n ( m )  of  θ , the expectation of the latent variable  z t i  is given by
$$p_{ti}^{(m)}=E\big(z_{ti}\mid\mathcal{F}_{t-1};\hat\theta_n^{(m)}\big)=\frac{\hat\pi_{ni}^{(m)}\, f_i\big(x_t;\ \hat\beta_{ni0}^{(m)}+\sum_{j=1}^{p}\hat\beta_{nij}^{(m)}x_{t-j},\ \hat\sigma_{ni}^{(m)},\hat\alpha_{ni}^{(m)},\hat\tau_{ni}^{(m)}\big)}{\sum_{l=1}^{k}\hat\pi_{nl}^{(m)}\, f_l\big(x_t;\ \hat\beta_{nl0}^{(m)}+\sum_{j=1}^{p}\hat\beta_{nlj}^{(m)}x_{t-j},\ \hat\sigma_{nl}^{(m)},\hat\alpha_{nl}^{(m)},\hat\tau_{nl}^{(m)}\big)}.$$
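The E-step is a standard posterior-responsibility computation and can be sketched as follows. This is our own illustrative Python, not the authors' code; the AEP density and the tuple layout of the component parameters are assumptions of the sketch.

```python
import math

def aep_pdf(x, mu, sigma, alpha, tau):
    """AEP density of Eq. (2), as reconstructed above."""
    const = alpha * tau * (1.0 - tau) / (math.gamma(1.0 / alpha) * sigma)
    skew = tau if x >= mu else 1.0 - tau
    return const * math.exp(-((abs(x - mu) / sigma) ** alpha) * skew ** alpha)

def e_step(x_t, lags, components):
    """Posterior probabilities p_ti for one observation x_t.
    `components` holds tuples (pi_i, betas_i, sigma_i, alpha_i, tau_i),
    where betas_i = (b_i0, b_i1, ..., b_ip) and lags = (x_{t-1}, ..., x_{t-p})."""
    weights = []
    for pi_i, betas, sigma, alpha, tau in components:
        mu_i = betas[0] + sum(b * x for b, x in zip(betas[1:], lags))
        weights.append(pi_i * aep_pdf(x_t, mu_i, sigma, alpha, tau))
    total = sum(weights)  # the denominator of the E-step formula
    return [w / total for w in weights]

post = e_step(0.2, (0.1,), [(0.6, (0.0, 0.5), 1.0, 2.0, 0.5),
                            (0.4, (1.0, -0.3), 1.0, 1.0, 0.5)])
```

By construction the responsibilities are nonnegative and sum to one for each observation.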
M-step: By replacing  z t i  with  p t i ( m )  in (4), we obtain the following objective function:
$$R_{n1}(\theta)=\sum_{t=p+1}^{n}\sum_{i=1}^{k}p_{ti}^{(m)}\log\left\{\pi_i\,\frac{\alpha_i\,\tau_i(1-\tau_i)}{\Gamma(1/\alpha_i)\,\sigma_i}\exp\left[-\frac{\big|x_t-\beta_{i0}-\sum_{j=1}^{p}\beta_{ij}x_{t-j}\big|^{\alpha_i}}{\sigma_i^{\alpha_i}}\left(\tau_i^{\alpha_i}\,I\Big(x_t\ge\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big)+(1-\tau_i)^{\alpha_i}\,I\Big(x_t<\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big)\right)\right]\right\}.\qquad (5)$$
By maximizing $R_{n1}(\theta)$ with respect to $\pi_i$, we obtain
$$\hat\pi_{ni}^{(m+1)}=\frac{1}{n-p}\sum_{t=p+1}^{n}p_{ti}^{(m)}.$$
For fixed values of $\beta_i$ and $\alpha_i$, closed-form expressions for $\tau_i$ and $\sigma_i$ follow from maximizing $R_{n1}(\theta)$:
$$\tau_i^{(m+1)}=\tau_i(\beta_i,\alpha_i)=\left[1+\left(\frac{e_{+}(\beta_i,\alpha_i)}{e_{-}(\beta_i,\alpha_i)}\right)^{1/(\alpha_i+1)}\right]^{-1},\qquad \sigma_i^{(m+1)}=\sigma_i(\beta_i,\alpha_i)=\left[\frac{\alpha_i\Big(e_{+}(\beta_i,\alpha_i)\,\big(\tau_i^{(m+1)}\big)^{\alpha_i}+e_{-}(\beta_i,\alpha_i)\,\big(1-\tau_i^{(m+1)}\big)^{\alpha_i}\Big)}{\sum_{t=p+1}^{n}p_{ti}^{(m)}}\right]^{1/\alpha_i},$$
where
$$e_{+}(\beta_i,\alpha_i)=\sum_{t=p+1}^{n}p_{ti}^{(m)}\Big|x_t-\beta_{i0}-\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big|^{\alpha_i}\,I\Big(x_t\ge\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big),\qquad e_{-}(\beta_i,\alpha_i)=\sum_{t=p+1}^{n}p_{ti}^{(m)}\Big|x_t-\beta_{i0}-\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big|^{\alpha_i}\,I\Big(x_t<\beta_{i0}+\sum_{j=1}^{p}\beta_{ij}x_{t-j}\Big).$$
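The two closed-form updates can be sketched as a small helper. This is our illustration under the reconstruction above: it assumes $e_\pm$ are computed without the $\tau_i$ factors, and that $\tau_i$ is updated first and then plugged into the $\sigma_i$ update.

```python
def m_step_sigma_tau(e_plus, e_minus, alpha, n_eff):
    """Closed-form M-step updates for (sigma_i, tau_i) given the weighted
    one-sided residual sums e_plus, e_minus, the shape alpha, and the
    effective sample size n_eff = sum_t p_ti."""
    # tau update: tau / (1 - tau) = (e_- / e_+)^{1/(alpha+1)}
    tau = 1.0 / (1.0 + (e_plus / e_minus) ** (1.0 / (alpha + 1.0)))
    # sigma update from the first-order condition in sigma
    sigma = (alpha * (e_plus * tau ** alpha + e_minus * (1.0 - tau) ** alpha)
             / n_eff) ** (1.0 / alpha)
    return sigma, tau

sigma, tau = m_step_sigma_tau(2.0, 2.0, 1.0, 4.0)   # balanced residuals
s2, t2 = m_step_sigma_tau(4.0, 1.0, 1.0, 4.0)       # heavier right tail
```

Balanced one-sided sums ($e_+=e_-$) give $\tau=0.5$, i.e., a symmetric fit; a heavier right residual mass pushes $\tau$ below $0.5$.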
By replacing $\sigma_i$ and $\tau_i$ with $\sigma_i^{(m+1)}$ and $\tau_i^{(m+1)}$ in (5), the objective function for $\{\beta_i,\alpha_i\}$ can be written as
$$R_{n2}(\beta_i,\alpha_i)=\sum_{t=p+1}^{n}p_{ti}^{(m)}\left\{\log\frac{\alpha_i}{\Gamma(1/\alpha_i)}-\frac{1}{\alpha_i}\log\frac{\alpha_i}{\sum_{t=p+1}^{n}p_{ti}^{(m)}}-\frac{1}{\alpha_i}-\frac{1+\alpha_i}{\alpha_i}\log\Big[e_{+}(\beta_i,\alpha_i)^{1/(1+\alpha_i)}+e_{-}(\beta_i,\alpha_i)^{1/(1+\alpha_i)}\Big]\right\}.$$
Therefore, the  ( m + 1 ) -th approximation  θ ^ n ( m + 1 )  of  θ  can be obtained by:
$$\hat\beta_{ni}^{(m+1)}=\arg\max_{\beta_i} R_{n2}\big(\beta_i,\hat\alpha_{ni}^{(m)}\big),\qquad \hat\alpha_{ni}^{(m+1)}=\arg\max_{\alpha_i} R_{n2}\big(\hat\beta_{ni}^{(m)},\alpha_i\big),\qquad \hat\sigma_{ni}^{(m+1)}=\sigma_i\big(\hat\beta_{ni}^{(m+1)},\hat\alpha_{ni}^{(m+1)}\big),\qquad \hat\tau_{ni}^{(m+1)}=\tau_i\big(\hat\beta_{ni}^{(m+1)},\hat\alpha_{ni}^{(m+1)}\big),\qquad i=1,\ldots,k.$$
Remark 1.
In order to implement the above EM algorithm, we need an initial value $\hat\theta_{ni}^{(0)}=\{\hat\beta_{ni}^{(0)},\hat\alpha_{ni}^{(0)},\hat\sigma_{ni}^{(0)},\hat\tau_{ni}^{(0)},\hat\pi_{ni}^{(0)}\}$, $i=1,\ldots,k$. First, we apply the k-means clustering method to the dataset. According to [12], we obtain $\hat\theta_{ni}^{(0)}$ as follows:
$$\hat\alpha_{ni}^{(0)}=1,\qquad \hat\beta_{ni}^{(0)}=\beta_i\big(\hat\tau_{ni}^{(0)}\big),\qquad \hat\tau_{ni}^{(0)}=\arg\min_{\tau_i}\sum_{t\in O_i}\rho_{\tau_i}\!\Big(x_t-\beta_{i0}(\tau_i)-\sum_{j=1}^{p}\beta_{ij}(\tau_i)\,x_{t-j}\Big)\Big/\big[\tau_i(1-\tau_i)\big],\qquad \hat\sigma_{ni}^{(0)}=\frac{1}{|O_i|}\sum_{t\in O_i}\rho_{\hat\tau_{ni}^{(0)}}\!\Big(x_t-\hat\beta_{ni0}^{(0)}-\sum_{j=1}^{p}\hat\beta_{nij}^{(0)}\,x_{t-j}\Big),$$
where $O_i$ is the set of sample points in the $i$-th cluster, $\rho_\tau(\cdot)$ is the quantile check loss function, and $\beta_i(\tau)=\arg\min_{\beta_i}\sum_{t\in O_i}\rho_\tau\big(x_t-\beta_{i0}-\sum_{j=1}^{p}\beta_{ij}x_{t-j}\big)$. We can use standard numerical software to obtain $\hat\beta_{ni}^{(m+1)}$ and $\hat\alpha_{ni}^{(m+1)}$, e.g., the quantreg package and the optim and optimize functions in R.
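The check loss used in the initialization is the standard quantile loss $\rho_\tau(u)=u\,(\tau-I(u<0))$. A minimal sketch (our illustration; `sample_quantile` is a hypothetical helper showing why minimizing the check loss recovers a quantile):

```python
def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(data, tau):
    """The tau-th sample quantile minimizes the total check loss;
    here we simply search over the data points as candidates."""
    return min(data, key=lambda c: sum(check_loss(x - c, tau) for x in data))

q = sample_quantile(list(range(1, 101)), 0.5)
```

Positive residuals are penalized with weight $\tau$ and negative residuals with weight $1-\tau$, which is exactly the asymmetry exploited by the $\hat\tau_{ni}^{(0)}$ initialization.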

3. Simulation Studies

Example 1.
In this example, some numerical simulations are carried out to illustrate the finite-sample performance of the proposed method. We compare the proposed method (AEP-MAR) with the following three methods: the Gaussian mixture autoregressive model (GMAR) [1], the Student t-mixture autoregressive model (TMAR) [2], and the Laplace mixture autoregressive model (LMAR) [5]. In this simulation, we consider the following two-component time series model (6):
$$y_t=\begin{cases}0.6\,y_{t-1}-0.9\,y_{t-2}+\epsilon_1, & \text{with } \pi_1=0.5,\\ 0.1\,y_{t-1}+0.7\,y_{t-2}+\epsilon_2, & \text{with } \pi_2=0.5.\end{cases}\qquad (6)$$
We generate 200 random samples from model (6) with sample sizes $n=250$ and $n=500$. For the error terms $\epsilon_1$ and $\epsilon_2$, in order to demonstrate that the proposed method is robust to unknown error distributions, we consider the following five scenarios:
  • Scenario 1: The standard normal distribution ($N(0,1)$).
  • Scenario 2: The standard Laplace distribution ($La(0,1)$).
  • Scenario 3: The t-distribution with 3 degrees of freedom ($t(3)$).
  • Scenario 4: An equal mixture of the standard normal and standard Laplace distributions ($0.5\,N(0,1)+0.5\,La(0,1)$).
  • Scenario 5: The chi-square distribution with 3 degrees of freedom ($\chi^2(3)$).
When the error assumption is correct, the corresponding mixture AR model should perform best; when the error assumption is wrong, the estimation accuracy of that method will decrease. Therefore, Scenarios 1–3 are used to compare performance with the existing methods under a correct error assumption, where the proposed method should perform similarly to the correctly specified model. Scenarios 4 and 5 are used to demonstrate that our method is robust to unknown error structures, where the proposed method should rank first among the four methods.
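The data-generating design of Example 1 can be sketched as follows. The generator is our own illustration (the coefficient signs follow our reconstruction of the garbled display of model (6)); the error distribution is passed in as a callable so each scenario can be plugged in.

```python
import random

def simulate_mar2(n, rng, noise, burn=100):
    """Draw a path of length n from the two-component mixture AR(2) design:
    at each step, a component is chosen with probability 1/2 and its AR(2)
    mean is perturbed by a draw from `noise()`. `burn` steps are discarded."""
    y = [0.0, 0.0]  # initial values for the two required lags
    for _ in range(n + burn):
        if rng.random() < 0.5:            # component 1, pi_1 = 0.5
            mean = 0.6 * y[-1] - 0.9 * y[-2]
        else:                             # component 2, pi_2 = 0.5
            mean = 0.1 * y[-1] + 0.7 * y[-2]
        y.append(mean + noise())
    return y[-n:]

rng = random.Random(2023)
series = simulate_mar2(250, rng, lambda: rng.gauss(0.0, 1.0))  # Scenario 1 errors
```

Swapping the lambda for a Laplace, $t(3)$, mixture, or $\chi^2(3)$ sampler reproduces the other scenarios.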
To assess the finite-sample performance, we calculate the bias and the mean squared error (MSE) of the estimators based on 200 simulations. The simulation results are reported in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, respectively. In Table 2 and Table 4, we also report the estimates of the remaining parameters of the two-component AEP-MAR model. From Table 1, we find that the GMAR has a smaller bias and MSE than the other three methods in the case of the normal distribution, while the finite-sample performance of the AEP-MAR is better than that of the remaining two methods. In Table 3, the LMAR performs best in the case of the Laplace distribution; meanwhile, the performance of the AEP-MAR is similar to that of the LMAR. We observe from Table 5 that the TMAR has the smallest bias and MSE among the four methods, while the AEP-MAR and the TMAR perform similarly. In Table 6 and Table 7, the AEP-MAR has the smallest bias and MSE among the four methods, and the effectiveness of the other three methods decreases significantly. The estimators of the AEP-MAR also become more precise as the sample size increases. This illustrates that the AEP-MAR is robust and effective under an unknown error structure. In conclusion, the proposed method is more adaptive to the error distribution than the other three methods; if the error structure is unknown, the proposed method should be considered first.
Example 2.
In this example, we use numerical simulations to illustrate the finite-sample performance of model selection for the proposed AEP-MAR model via the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The dataset is generated according to Scenario 4 in Example 1. We consider the k-component mixture AR model for $k=1,2,3,4,5$ and calculate the AIC and BIC values of the GMAR, TMAR, LMAR, and proposed AEP-MAR models for each k. The corresponding results are shown in Table 8. From Table 8, we find that the two-component AEP-MAR model is selected by minimizing both AIC and BIC.
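The criteria themselves are standard and cheap to compute once the maximized log-likelihood is known. A sketch (our illustration; the parameter count for a k-component AEP-MAR(p) is our reading of the layout of $\theta$, with $p+1$ AR coefficients plus $(\sigma_i,\alpha_i,\tau_i)$ per component and $k-1$ free mixing weights):

```python
import math

def aic_bic(loglik, n_params, n_obs):
    """Standard definitions: AIC = 2m - 2l, BIC = m log(n) - 2l."""
    return 2 * n_params - 2 * loglik, n_params * math.log(n_obs) - 2 * loglik

def aepmar_n_params(k, p):
    """Free parameters in a k-component AEP-MAR(p) (assumed count):
    k * (p + 1 AR coefficients + sigma + alpha + tau) + (k - 1) weights."""
    return k * (p + 4) + (k - 1)

aic, bic = aic_bic(-10.0, aepmar_n_params(2, 2), 250)
```

The model minimizing the criterion over k (and over candidate AR orders) is the one selected, as in Table 8.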

4. A Real Data Analysis

In this section, we apply the proposed methodology to analyze the daily return series of the Hong Kong Hang Seng Index (HSI). The data cover the period from 2 January 2002 to 31 December 2020 and include 4689 observations. The original series is shown in Figure 1, from which we can clearly see that the daily price series is non-stationary. Similar to [13], we let $x_t = 100\,(\log(P_t)-\log(P_{t-1}))$, where $P_t$ is the daily closing price on day $t$. The corresponding $x_t$ series is shown in Figure 2, and we can observe that $x_t$ is stationary. The skewness and excess kurtosis of $x_t$ are −0.3 and 9.01, respectively, which also indicates that the $x_t$ series does not satisfy the normality assumption. Meanwhile, the density of the log of the daily closing price of the HSI is drawn in Figure 3. From Figure 3, we find that the marginal distribution of the series is clearly asymmetric and exhibits multimodality. This indicates that we need a mixture AR model rather than a single AR model to describe the daily trend of the HSI.
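The log-return transformation used above is a one-liner; a minimal sketch (our illustration) for turning a price series into the percentage log-returns $x_t$:

```python
import math

def log_returns(prices):
    """x_t = 100 * (log P_t - log P_{t-1}): percentage log-returns."""
    return [100.0 * (math.log(p1) - math.log(p0))
            for p0, p1 in zip(prices, prices[1:])]

# A price that grows by a factor exp(0.01) then shrinks by exp(-0.02)
r = log_returns([100.0, 100.0 * math.exp(0.01),
                 100.0 * math.exp(0.01) * math.exp(-0.02)])
```

The transformed series has one fewer observation than the price series, and log-returns are additive over time, which is why they are preferred to simple returns here.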
In real data analysis, it is important to choose the number of components k and the order of the AR components for the MAR model. Given the distributional characteristics described above, we first consider a two-component and a three-component mixture AR model for this dataset. Following Wong & Li (2000) [1], we use AIC and BIC as the model-selection criteria; an illustration of their performance is given in Example 2 of Section 3.
The corresponding results are reported in Table 9. From Table 9, we can see that all of the methods rank the two-component model as the best. Additionally, the best model selected by minimizing AIC and BIC is the two-component, second-order AEP-MAR model; its AIC is 5.61 and its BIC is 70.14. This shows that the AEP-MAR model fits the characteristics of high kurtosis and multimodality in the log-return series of the HSI better. The estimation results for the selected model are given in Table 10. According to Table 10, we obtain the following two-component, second-order AEP-MAR model for the daily return series of the HSI:
$$h(x_t\mid\mathcal{F}_{t-1};\hat\theta)=0.7564\, f_1\big(x_t;\ -0.0447\,x_{t-1}+0.0610\,x_{t-2},\ 0.5240,\ 1.2187,\ 0.4808\big)+0.2436\, f_2\big(x_t;\ -0.6032\,x_{t-1}-0.1394\,x_{t-2},\ 0.5473,\ 0.9012,\ 0.5333\big).$$
From the fitted model, the first component can be interpreted as the overall trend of the log-returns with relatively small fluctuations, and the second component can be interpreted as the irrational "Unilateral Overshooting Phenomenon" in financial markets.

5. Discussion

In this paper, we introduced a robust mixture autoregressive procedure based on the asymmetric exponential power distribution. The proposed method has greater flexibility and can adapt to unknown error structures. For particular parameter values, our method reduces to existing methods, e.g., the GMAR and LMAR models, as special cases. In addition, an EM algorithm was introduced to solve the proposed optimization problem. The merits of the proposed method were illustrated by numerical simulations and a real data analysis; the results indicate that the proposed method is robust and adaptive to the error distribution. We leave the study of the large-sample properties of the proposed method for future work.

Author Contributions

Methodology, Y.J.; Formal analysis, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Jiang’s research is partially supported by NSFC (12171203) and the Natural Science Foundation of Guangdong (No. 2022A1515010045).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wong, C.S.; Li, W.K. On a mixture autoregressive model. J. R. Stat. Soc. Ser. B 2000, 62, 95–115.
  2. Wong, C.S.; Chan, W.S.; Kam, P.L. A Student t-mixture autoregressive model with applications to heavy-tailed financial data. Biometrika 2009, 96, 751–760.
  3. Wong, C.S.; Li, W.K. On a mixture autoregressive conditional heteroscedastic model. J. Am. Stat. Assoc. 2001, 96, 982–995.
  4. Fong, P.W.; Li, W.K.; Yau, C.W.; Wong, C.S. On a mixture vector autoregressive model. Can. J. Stat. 2007, 35, 135–150.
  5. Nguyen, H.D.; McLachlan, G.J.; Ullmann, J.F.P.; Janke, A.L. Laplace mixture autoregressive models. Stat. Probab. Lett. 2016, 110, 18–24.
  6. Maleki, M.; Hajrajabi, A.; Arellano-Valle, R.B. Symmetrical and asymmetrical mixture autoregressive processes. Braz. J. Probab. Stat. 2020, 34, 273–290.
  7. Meitz, M.; Preve, D.; Saikkonen, P. A mixture autoregressive model based on Student's t-distribution. Commun. Stat. Theory Methods 2021, 52, 1–76.
  8. Virolainen, S. A mixture autoregressive model based on Gaussian and Student's t-distributions. Stud. Nonlinear Dyn. Econ. 2021, 26, 559–580.
  9. Solikhah, A.; Kuswanto, H.; Iriawan, N.; Fithriasari, K. Fisher's z Distribution-Based Mixture Autoregressive Model. Econometrics 2021, 9, 27.
  10. Fernández, C.; Osiewalski, J.; Steel, M.F.J. Modeling and inference with υ-spherical distributions. J. Am. Stat. Assoc. 1995, 90, 1331–1340.
  11. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B 1977, 39, 1–22.
  12. Yang, T.; Gallagher, C.M.; McMahan, C.S. A robust regression methodology via M-estimation. Commun. Stat. Theory Methods 2019, 48, 1092–1107.
  13. Chen, H.; Chong, T.T.-L.; Bai, J. Theory and applications of tar model with two threshold variables. Econ. Rev. 2012, 31, 142–170.
Figure 1. The price of the Hong Kong Hang Seng Index (HSI), January 2002–December 2020.
Figure 2. The log-return of the Hong Kong Hang Seng Index (HSI), January 2002–December 2020.
Figure 3. Density of the log of the daily closing price of the Hong Kong Hang Seng Index.
Table 1. Simulation results for Scenario 1. Entries are bias (MSE).

| n | Parameter | GMAR | TMAR | LMAR | AEP-MAR |
|---|---|---|---|---|---|
| 250 | β11 | 0.0036 (0.0026) | 0.0064 (0.0031) | −0.0054 (0.0037) | 0.0034 (0.0026) |
| 250 | β12 | 0.0023 (0.0023) | 0.0054 (0.0030) | 0.0151 (0.0037) | 0.0030 (0.0023) |
| 250 | β21 | 0.0010 (0.0022) | 0.0008 (0.0027) | 0.0061 (0.0032) | 0.0008 (0.0024) |
| 250 | β22 | 0.0004 (0.0016) | 0.0019 (0.0018) | 0.0075 (0.0024) | 0.0015 (0.0017) |
| 250 | π1 | 0.0022 (0.0014) | 0.0025 (0.0016) | 0.0024 (0.0023) | 0.0024 (0.0014) |
| 250 | π2 | −0.0022 (0.0014) | −0.0025 (0.0016) | −0.0024 (0.0023) | −0.0024 (0.0014) |
| 500 | β11 | 0.0007 (0.0009) | 0.0018 (0.0012) | −0.0056 (0.0017) | 0.0008 (0.0009) |
| 500 | β12 | 0.0031 (0.0011) | 0.0058 (0.0013) | 0.0078 (0.0017) | 0.0031 (0.0012) |
| 500 | β21 | 0.0029 (0.0013) | 0.0040 (0.0015) | 0.0099 (0.0022) | 0.0035 (0.0014) |
| 500 | β22 | 0.0031 (0.0012) | 0.0104 (0.0014) | 0.0129 (0.0022) | 0.0039 (0.0013) |
| 500 | π1 | 0.0022 (0.0013) | 0.0029 (0.0016) | 0.0038 (0.0015) | 0.0023 (0.0014) |
| 500 | π2 | −0.0022 (0.0013) | −0.0029 (0.0016) | −0.0038 (0.0015) | −0.0023 (0.0014) |
Table 2. Simulation results of AEP-MAR for Scenario 1. Entries are estimate (MSE).

| n | σ1 | σ2 | τ1 | τ2 | α1 | α2 |
|---|---|---|---|---|---|---|
| 250 | 0.6833 (0.0108) | 0.6886 (0.0092) | 0.5031 (0.0012) | 0.4984 (0.0010) | 2.1746 (0.3438) | 2.1036 (0.3445) |
| 500 | 0.7038 (0.0056) | 0.6990 (0.0057) | 0.5003 (0.0007) | 0.5016 (0.0006) | 2.0986 (0.1625) | 2.1009 (0.1638) |
Table 3. Simulation results for Scenario 2. Entries are bias (MSE).

| n | Parameter | GMAR | TMAR | LMAR | AEP-MAR |
|---|---|---|---|---|---|
| 250 | β11 | 0.0061 (0.0018) | 0.0060 (0.0012) | 0.0040 (0.0011) | 0.0046 (0.0011) |
| 250 | β12 | 0.0013 (0.0019) | −0.0012 (0.0012) | 0.0004 (0.0010) | 0.0012 (0.0010) |
| 250 | β21 | 0.0022 (0.0019) | −0.0029 (0.0016) | 0.0009 (0.0014) | −0.0006 (0.0015) |
| 250 | β22 | 0.0088 (0.0019) | 0.0039 (0.0015) | 0.0025 (0.0013) | 0.0026 (0.0013) |
| 250 | π1 | 0.0014 (0.0023) | 0.0019 (0.0022) | 0.0014 (0.0021) | 0.0013 (0.0022) |
| 250 | π2 | −0.0014 (0.0023) | −0.0019 (0.0022) | −0.0014 (0.0021) | −0.0013 (0.0022) |
| 500 | β11 | −0.0008 (0.0008) | 0.0011 (0.0006) | 0.0006 (0.0005) | 0.0006 (0.0005) |
| 500 | β12 | 0.0012 (0.0006) | 0.0010 (0.0005) | 0.0007 (0.0004) | 0.0010 (0.0004) |
| 500 | β21 | 0.0054 (0.0009) | 0.0022 (0.0005) | 0.0012 (0.0005) | 0.0020 (0.0005) |
| 500 | β22 | 0.0048 (0.0010) | 0.0023 (0.0005) | 0.0016 (0.0005) | 0.0019 (0.0005) |
| 500 | π1 | 0.0023 (0.0012) | 0.0014 (0.0009) | 0.0010 (0.0009) | 0.0014 (0.0009) |
| 500 | π2 | −0.0023 (0.0012) | −0.0014 (0.0009) | −0.0010 (0.0009) | −0.0014 (0.0009) |
Table 4. Simulation results of AEP-MAR for Scenario 2. Entries are estimate (MSE).

| n | σ1 | σ2 | τ1 | τ2 | α1 | α2 |
|---|---|---|---|---|---|---|
| 250 | 0.4823 (0.0344) | 0.4857 (0.0420) | 0.5048 (0.0015) | 0.4957 (0.0014) | 1.0618 (0.4893) | 1.0240 (0.4989) |
| 500 | 0.4985 (0.0115) | 0.5031 (0.0137) | 0.5028 (0.0009) | 0.4990 (0.0006) | 1.0173 (0.4212) | 0.9883 (0.4293) |
Table 5. Simulation results for Scenario 3. Entries are bias (MSE).

| n | Parameter | GMAR | TMAR | LMAR | AEP-MAR |
|---|---|---|---|---|---|
| 250 | β11 | 0.0046 (0.0017) | 0.0030 (0.0012) | 0.0035 (0.0015) | 0.0035 (0.0014) |
| 250 | β12 | 0.0053 (0.0020) | −0.0008 (0.0009) | 0.0019 (0.0013) | 0.0008 (0.0012) |
| 250 | β21 | 0.0078 (0.0015) | 0.0023 (0.0008) | 0.0044 (0.0017) | 0.0045 (0.0010) |
| 250 | β22 | 0.0102 (0.0017) | 0.0025 (0.0009) | 0.0051 (0.0013) | 0.0028 (0.0012) |
| 250 | π1 | 0.0035 (0.0028) | 0.0022 (0.0021) | 0.0030 (0.0026) | 0.0026 (0.0022) |
| 250 | π2 | −0.0035 (0.0028) | −0.0022 (0.0021) | −0.0030 (0.0026) | −0.0026 (0.0022) |
| 500 | β11 | −0.0008 (0.0008) | 0.0003 (0.0005) | −0.0016 (0.0008) | 0.0006 (0.0007) |
| 500 | β12 | 0.0129 (0.0012) | 0.0030 (0.0004) | 0.0037 (0.0006) | 0.0033 (0.0006) |
| 500 | β21 | 0.0076 (0.0010) | 0.0015 (0.0005) | 0.0040 (0.0007) | 0.0018 (0.0006) |
| 500 | β22 | 0.0106 (0.0015) | 0.0012 (0.0004) | 0.0040 (0.0005) | 0.0022 (0.0005) |
| 500 | π1 | 0.0018 (0.0024) | 0.0001 (0.0010) | 0.0014 (0.0012) | 0.0008 (0.0011) |
| 500 | π2 | −0.0018 (0.0024) | −0.0001 (0.0010) | −0.0014 (0.0012) | −0.0008 (0.0011) |
Table 6. Simulation results for Scenario 4. Entries are bias (MSE).

| n | Parameter | GMAR | TMAR | LMAR | AEP-MAR |
|---|---|---|---|---|---|
| 250 | β11 | 0.0053 (0.0016) | 0.0063 (0.0019) | 0.0057 (0.0023) | 0.0031 (0.0016) |
| 250 | β12 | 0.0050 (0.0015) | 0.0070 (0.0016) | 0.0112 (0.0022) | −0.0027 (0.0014) |
| 250 | β21 | 0.0116 (0.0023) | 0.0084 (0.0018) | 0.0075 (0.0020) | 0.0055 (0.0015) |
| 250 | β22 | 0.0094 (0.0021) | 0.0050 (0.0018) | 0.0105 (0.0016) | 0.0036 (0.0015) |
| 250 | π1 | 0.0067 (0.0023) | −0.0106 (0.0021) | −0.0031 (0.0021) | −0.0004 (0.0021) |
| 250 | π2 | −0.0067 (0.0023) | 0.0106 (0.0021) | 0.0031 (0.0021) | 0.0004 (0.0021) |
| 500 | β11 | 0.0026 (0.0012) | 0.0022 (0.0013) | 0.0010 (0.0019) | 0.0009 (0.0006) |
| 500 | β12 | −0.0016 (0.0013) | 0.0046 (0.0014) | 0.0065 (0.0017) | −0.0015 (0.0010) |
| 500 | β21 | 0.0100 (0.0022) | 0.0030 (0.0013) | 0.0065 (0.0012) | 0.0026 (0.0011) |
| 500 | β22 | 0.0086 (0.0022) | −0.0032 (0.0013) | 0.0050 (0.0013) | 0.0023 (0.0013) |
| 500 | π1 | −0.0012 (0.0022) | −0.0171 (0.0023) | −0.0045 (0.0021) | −0.0012 (0.0020) |
| 500 | π2 | 0.0012 (0.0022) | 0.0171 (0.0023) | 0.0045 (0.0021) | 0.0012 (0.0020) |
Table 7. Simulation results for Scenario 5. Entries are bias (MSE).

| n | Parameter | GMAR | TMAR | LMAR | AEP-MAR |
|---|---|---|---|---|---|
| 250 | β11 | −0.0149 (0.0079) | −0.1418 (0.0216) | −0.0337 (0.0021) | −0.0033 (0.0018) |
| 250 | β12 | 0.0965 (0.0268) | 0.1251 (0.0170) | 0.0659 (0.0053) | 0.0155 (0.0038) |
| 250 | β21 | −0.0672 (0.0097) | −0.1476 (0.0237) | −0.0761 (0.0068) | −0.0108 (0.0035) |
| 250 | β22 | 0.0173 (0.0264) | −0.1247 (0.0166) | 0.0504 (0.0032) | 0.0075 (0.0029) |
| 250 | π1 | 0.0875 (0.0145) | 0.0577 (0.0054) | 0.0805 (0.0090) | 0.0041 (0.0058) |
| 250 | π2 | −0.0875 (0.0145) | −0.0577 (0.0054) | −0.0805 (0.0090) | −0.0041 (0.0058) |
| 500 | β11 | −0.0218 (0.0030) | −0.1329 (0.0195) | −0.0240 (0.0016) | −0.0019 (0.0002) |
| 500 | β12 | 0.0835 (0.0103) | 0.1183 (0.0151) | 0.0535 (0.0038) | 0.0030 (0.0004) |
| 500 | β21 | −0.0680 (0.0060) | −0.1401 (0.0214) | −0.0704 (0.0056) | −0.0049 (0.0004) |
| 500 | β22 | −0.0062 (0.0085) | −0.0843 (0.0145) | −0.0466 (0.0025) | −0.0057 (0.0003) |
| 500 | π1 | 0.0951 (0.0107) | 0.0647 (0.0044) | 0.0800 (0.0076) | 0.0034 (0.0019) |
| 500 | π2 | −0.0951 (0.0107) | −0.0647 (0.0044) | −0.0800 (0.0076) | −0.0034 (0.0019) |
Table 8. AIC and BIC for Example 2.

| n | Components | GMAR AIC | GMAR BIC | TMAR AIC | TMAR BIC | LMAR AIC | LMAR BIC | AEP-MAR AIC | AEP-MAR BIC |
|---|---|---|---|---|---|---|---|---|---|
| 250 | 1 | 1498.48 | 1512.54 | 1093.55 | 1195.44 | 1337.81 | 1351.87 | 1024.01 | 1035.10 |
| 250 | 2 | 1420.37 | 1448.48 | 1115.97 | 1154.62 | 1274.51 | 1302.62 | 990.35 | 1032.54 |
| 250 | 3 | 1109.58 | 1151.74 | 1095.32 | 1155.05 | 1060.34 | 1102.50 | 1004.53 | 1067.77 |
| 250 | 4 | 1257.33 | 1313.54 | 1079.92 | 1160.73 | 1072.44 | 1128.66 | 1014.00 | 1098.32 |
| 250 | 5 | 1003.50 | 1073.77 | 1094.09 | 1195.98 | 1189.40 | 1259.67 | 1000.94 | 1106.34 |
| 500 | 1 | 2028.47 | 2045.31 | 1992.91 | 2013.96 | 1992.99 | 2019.83 | 1981.64 | 2006.90 |
| 500 | 2 | 2172.48 | 2206.16 | 2064.95 | 2111.27 | 2037.35 | 2071.04 | 1944.68 | 1995.21 |
| 500 | 3 | 2135.74 | 2186.27 | 2132.66 | 2204.24 | 2055.36 | 2105.88 | 2139.67 | 2215.46 |
| 500 | 4 | 2095.95 | 2163.32 | 2120.83 | 2217.68 | 2147.28 | 2214.65 | 2116.72 | 2227.78 |
| 500 | 5 | 2243.17 | 2327.38 | 2112.55 | 2234.66 | 2157.19 | 2241.40 | 2109.66 | 2235.98 |
Table 9. AIC and BIC for the real dataset.

| Components | Method | AR(2) AIC | AR(2) BIC | AR(3) AIC | AR(3) BIC |
|---|---|---|---|---|---|
| k = 2 | GMAR | 5.86 | 70.38 | 9.86 | 87.29 |
| k = 2 | TMAR | 5.79 | 70.32 | 9.80 | 87.23 |
| k = 2 | LMAR | 5.80 | 70.33 | 9.79 | 87.22 |
| k = 2 | AEP-MAR | 5.61 | 70.14 | 9.62 | 87.06 |
| k = 3 | GMAR | 15.82 | 112.61 | 21.82 | 137.97 |
| k = 3 | TMAR | 15.76 | 112.56 | 21.74 | 137.90 |
| k = 3 | LMAR | 15.76 | 112.55 | 21.75 | 137.90 |
| k = 3 | AEP-MAR | 15.68 | 112.48 | 21.68 | 137.83 |
Table 10. The estimation results of the two-component, second-order AEP-MAR model.

| Component | β1 | β2 | σ | τ | α | π |
|---|---|---|---|---|---|---|
| 1 | −0.0447 | 0.0610 | 0.5240 | 0.4808 | 1.2187 | 0.7564 |
| 2 | −0.6032 | −0.1394 | 0.5473 | 0.5333 | 0.9012 | 0.2436 |
Jiang, Y.; Zhuang, Z. A Mixture Autoregressive Model Based on an Asymmetric Exponential Power Distribution. Axioms 2023, 12, 196. https://doi.org/10.3390/axioms12020196