Article

Monitoring Persistence Change in Heavy-Tailed Observations

Dan Wang
School of Mathematics, Northwest University, Xi’an 710127, China
Symmetry 2021, 13(6), 936; https://doi.org/10.3390/sym13060936
Submission received: 18 March 2021 / Revised: 20 May 2021 / Accepted: 21 May 2021 / Published: 25 May 2021

Abstract

In this paper, a ratio test based on bootstrap approximation is proposed to detect persistence change in heavy-tailed observations. The paper focuses on the symmetric testing problems of I(1)-to-I(0) and I(0)-to-I(1) changes. On the basis of residual CUSUM, the test statistic is constructed in ratio form. I derive the null distribution of the test statistic and discuss its consistency under the alternative hypothesis. However, the null distribution contains an unknown tail index. To address this challenge, I present a bootstrap approximation method for determining the rejection region of the test. Simulation studies on artificial data are conducted to assess the finite-sample performance and show that the proposed method outperforms the kernel method in all listed cases. The analysis of real data also demonstrates the good performance of this method.

1. Introduction

Change point analysis is used in a wide range of fields such as biology, medicine, and finance. Central research topics in change point analysis are how to test for structural changes in a statistical model and how to estimate the change points. Detecting change points is valuable because it supports building more appropriate models. Over the past few decades, many economic and financial data have displayed changes in persistence. This type of change causes substantial practical problems, especially concerning inflation rates, short-term interest rates, and government budget deficits. There is a rich literature on the detection of such change points. Kim [1] and Kim et al. [2] proposed the ratio test for a change in persistence. Leybourne et al. [3] discussed the ADF test method. The LBI test was also considered, for example, by Busetti and Taylor [4] and by Leybourne and Taylor [5]. More recently, Leybourne et al. [6] proposed a CUSUM test. Sibbertsen and Kruse [7] studied the change point problem for long-range dependent series. Belaire-Franch and Contreras [8] presented a nonparametric unit root test. Halunga and Osborn [9] summarized the ratio-based estimators of persistence change points. Kejriwal et al. [10] considered Wald tests for detecting multiple persistence changes. Perron et al. [11] proposed a test for the presence of a nonlinear deterministic trend to determine whether the noise component is stationary or contains an autoregressive unit root. In [12], the unit root test for the panel AR(1) model was discussed. Kejriwal et al. [13] presented bootstrap procedures for detecting multiple persistence shifts. The abovementioned work shares a common limitation: it applies only to time series with finite variance.
It is also important to study sequences with infinite variance. This type of sequence has many interesting mathematical properties. In this paper, we present a new approach to test for persistence change in heavy-tailed sequences. We assume that the innovation follows a heavy-tailed distribution that regularly varies with tail index $\kappa$ satisfying $1 < \kappa < 2$, so that the mean exists but the variance is infinite. There are many heavy-tailed models in economics and finance; see, for example, Davis and Mikosch [14], Kokoszka and Wolf [15], and Ahn et al. [16]. Such data sets are better explained by a heavy-tailed model than by a Gaussian model. Therefore, testing for change within such sequences has also attracted significant research interest. Horváth and Kokoszka [17] tested for a unit root under such innovations. Wang et al. [18] presented a detection and estimation method for structural change in heavy-tailed sequences.
This paper focuses on the symmetric $I(1)$-to-$I(0)$ and $I(0)$-to-$I(1)$ persistence change detection problems for observations with heavy-tailed distributions. A bootstrap method is proposed to test for changes in persistence. First, we construct the test statistic in ratio form from the residual CUSUM. The proposed statistic is new and differs from the traditional ratio statistic of Kim [1]. Then, we derive the null distribution of the test statistic and establish its consistency under the alternative hypothesis. Moreover, because the null distribution contains an unknown tail index, a bootstrap procedure is proposed for determining the rejection region of the test. Finally, simulation results on artificial and real data sets demonstrate the good performance of our method.
The remainder of this paper is organized as follows. Section 2 introduces the statistical model. Section 3 presents the test method and theoretical results. Section 4 contains the simulation studies, which show that the proposed method performs well. Section 5 and Section 6 present the discussion and the conclusions, respectively.

2. Statistical Model

The model discussed in this paper is as follows:
$$y_t = \mu_t + \varepsilon_t, \qquad (1)$$
$$\varepsilon_t = \rho_t\,\varepsilon_{t-1} + e_t, \quad t = 1, \dots, T, \qquad (2)$$
where $\mu_t = E(y_t) = \delta^T d_t$ is a linear combination for a vector of nonrandom regressors $d_t$. We assume three typical scenarios: $d_t = 0$, $d_t = 1$, and $d_t = (1, t)^T$. $\{e_t\}$ is a heavy-tailed sequence satisfying the following assumption.
Assumption 1.
The sequence $\{e_t\}$ is strictly stationary with symmetric univariate marginal distributions, which satisfy
$$T \cdot P(e_1 / a_T \in \cdot) \Rightarrow \mu(\cdot), \qquad (3)$$
where $a_T$ is defined by $T \cdot P(|e_1| > a_T) \to 1$, the notation "$\Rightarrow$" means weak convergence, and the measure $\mu(\cdot)$ is given by
$$2\mu(dx) = \kappa\,|x|^{-\kappa-1}\,I\{x < 0\}\,dx + \kappa\,x^{-\kappa-1}\,I\{x > 0\}\,dx, \qquad (4)$$
where $1 < \kappa < 2$.
Lemma 1.
If Assumption 1 holds, then
$$\left(a_T^{-1}\sum_{t=1}^{[T\tau]} e_t,\; a_T^{-2}\sum_{t=1}^{[T\tau]} e_t^2\right) \xrightarrow{d} \left(U_1(\tau),\, U_2(\tau)\right), \qquad (5)$$
where $\{U_1(\tau)\}$ is a $\kappa$-stable and $\{U_2(\tau)\}$ is a $\kappa/2$-stable Lévy process on $[0,1]$. The notation "$\xrightarrow{d}$" stands for convergence in distribution.
Remark 1.
This result was obtained by Resnick [19] and by Kokoszka and Wolf [15]. The quantities $a_T$ can be written as $a_T = T^{1/\kappa} L(T)$ for some slowly varying function L.
Consider the following null hypothesis:
$$H_0: y_t \sim I(1), \quad t = 1, \dots, T, \qquad (6)$$
against the alternative hypothesis
$$H_1: y_t \sim I(1), \quad t = 1, \dots, [T\tau_0], \qquad (7)$$
$$y_t \sim I(0), \quad t = [T\tau_0] + 1, \dots, T, \qquad (8)$$
where $\tau_0 \in (0, 1)$, T is the sample size, and $[\cdot]$ is the rounding function.
We also deal with the following test problem:
$$H_0': y_t \sim I(0), \quad t = 1, \dots, T, \qquad (9)$$
against the alternative hypothesis
$$H_1': y_t \sim I(0), \quad t = 1, \dots, [T\tau_0], \qquad (10)$$
$$y_t \sim I(1), \quad t = [T\tau_0] + 1, \dots, T, \qquad (11)$$
where $\tau_0 \in (0, 1)$.
The goal of this paper is to detect $H_0$ against $H_1$ or to test $H_0'$ against $H_1'$; these are two symmetric test problems. In model (1), the process $\{y_t\}$ is $I(0)$ if $|\rho_t| < 1$ and $I(1)$ if $|\rho_t| = 1$. In this paper, we detect whether the process $\{y_t\}$ is $I(1)$ throughout the sample period or changes from $I(1)$ to $I(0)$, and we test whether it is $I(0)$ throughout the sample period or changes from $I(0)$ to $I(1)$.

3. Monitoring Change in Persistence

In this section, we establish the ratio and bootstrap tests for the persistence change problems and demonstrate the construction of the statistics and their theoretical properties.

3.1. Detecting I(1) to I(0)

Let $\{\hat\varepsilon_{0,t}\}$ be the OLS residuals from the regression of $y_t$ on $d_t$, $t = 1, \dots, [T\tau]$, and let $\{\hat\varepsilon_{1,t}\}$ be the OLS residuals from the regression of $y_t$ on $d_t$, $t = [T\tau]+1, \dots, T$. Under $H_0$, the average information of the sequence $\{\hat\varepsilon_{0,t}, t = 1, \dots, [T\tau]\}$ should not be very different from the average information of the sequence $\{\hat\varepsilon_{1,t}, t = [T\tau]+1, \dots, T\}$. Taking the average over each of the two parts, the numerator and the denominator of the ratio should be close to each other if there is no persistence change; under $H_1$, by contrast, they are very different. The ratio-form statistic is constructed as follows:
$$R_T(\tau) = \frac{[\tau T]^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2}{[(1-\tau)T]^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2}. \qquad (12)$$
Based on (12), we can obtain three test statistics: the maximum-Chow statistic of Andrews [20],
$$\max_{\tau \in \Theta} R_T(\tau); \qquad (13)$$
the mean-score statistic of Hansen [21],
$$\int_{\tau \in \Theta} R_T(\tau)\,d\tau; \qquad (14)$$
and the mean-exponential statistic of Andrews and Ploberger [22],
$$\log\left\{\int_{\tau \in \Theta} \exp(R_T(\tau))\,d\tau\right\}, \qquad (15)$$
where $\Theta$ is a compact subset of $[0,1]$.
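To make the construction concrete, the following sketch computes $R_T(\tau)$ and the maximum-type statistic (13) for the demeaned case $d_t = 1$. It is our illustration rather than the paper's code; the function names and the grid over $\Theta = [0.2, 0.8]$ (taken from the simulation section) are our assumptions.

```python
import numpy as np

def ratio_statistic(y, tau):
    """R_T(tau) of (12) for d_t = 1: the OLS residuals on each segment
    are simply the demeaned observations of that segment."""
    T = len(y)
    k = int(T * tau)                       # split point [T*tau]
    e0 = y[:k] - y[:k].mean()              # residuals on the first segment
    e1 = y[k:] - y[k:].mean()              # residuals on the second segment
    return (k ** -2 * (e0 ** 2).sum()) / ((T - k) ** -2 * (e1 ** 2).sum())

def max_chow(y, grid=np.arange(0.2, 0.81, 0.01)):
    """Maximum-Chow functional (13): H(R_T) = max over tau in Theta of R_T(tau)."""
    return max(ratio_statistic(y, tau) for tau in grid)
```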
Theorem 1.
If Assumption 1 and the null hypothesis $H_0$ hold,
$$R_T(\tau) \Rightarrow \frac{\tau^{-2}\,V_{j,0}(\tau)}{(1-\tau)^{-2}\,V_{j,1}(\tau)} \equiv R(\tau) \qquad (16)$$
and
$$H(R_T) \Rightarrow H(R), \qquad (17)$$
where $H(\cdot)$ denotes the statistics (13)–(15). The notation $j = 1$ signifies the model with $d_t = 0$; $j = 2$ and $j = 3$ denote $d_t = 1$ and $d_t = (1,t)^T$, respectively.
$$V_{1,1}(\tau) = \int_\tau^1 U_1(r)^2\,dr, \quad V_{1,0}(\tau) = \int_0^\tau U_1(r)^2\,dr; \qquad (18)$$
$$V_{2,1}(\tau) = V_{1,1}(\tau) - \frac{1}{1-\tau}\,G_1(\tau)^2, \quad V_{2,0}(\tau) = V_{1,0}(\tau) - \frac{1}{\tau}\,K_1(\tau)^2; \qquad (19)$$
$$V_{3,1}(\tau) = V_{1,1}(\tau) + 4(1-\tau)^{-1}\left(3(1-\tau)^{-1}G_1(\tau)G_2(\tau) - G_1(\tau)^2 - 3(1-\tau)^{-2}G_2(\tau)^2\right); \qquad (20)$$
$$V_{3,0}(\tau) = V_{1,0}(\tau) + 4\tau^{-1}\left(3\tau^{-1}K_1(\tau)K_2(\tau) - K_1(\tau)^2 - 3\tau^{-2}K_2(\tau)^2\right); \qquad (21)$$
$$K_1(\tau) = \int_0^\tau U_1(r)\,dr, \quad K_2(\tau) = \int_0^\tau r\,U_1(r)\,dr; \qquad (22)$$
$$G_1(\tau) = K_1(1) - K_1(\tau), \quad G_2(\tau) = K_2(1) - K_2(\tau). \qquad (23)$$
The following Theorem 2 shows the consistency of the test.
Theorem 2.
If Assumption 1 and the alternative hypothesis $H_1$ hold, then $R_T(\tau) = O_P(1)$ when $0 < \tau < \tau_0$ and $R_T(\tau) = O_P(T)$ when $\tau_0 \le \tau < 1$. Thus, if $[\tau_0, 1] \cap \Theta \neq \emptyset$, $H(R_T) = O_P(T)$.

3.2. Detecting I(0) to I(1)

The notations $\hat\varepsilon_{0,t}$ and $\hat\varepsilon_{1,t}$ have the same meanings as before. The statistic is constructed symmetrically as follows:
$$M_T(\tau) = \frac{[(1-\tau)T]^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2}{[\tau T]^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2}. \qquad (24)$$
Based on (24), we can obtain three analogous test statistics:
$$\max_{\tau \in \Theta} M_T(\tau), \qquad (25)$$
$$\int_{\tau \in \Theta} M_T(\tau)\,d\tau, \qquad (26)$$
$$\log\left\{\int_{\tau \in \Theta} \exp(M_T(\tau))\,d\tau\right\}, \qquad (27)$$
where $\Theta$ is a compact subset of $[0,1]$.
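Since $M_T(\tau)$ is the reciprocal of $R_T(\tau)$, the earlier sketch carries over directly; the helper below (again our naming, not the paper's) evaluates the maximum-type statistic (25):

```python
def max_chow_M(y, grid=np.arange(0.2, 0.81, 0.01)):
    """H(M_T) = max over tau in Theta of M_T(tau) = 1 / R_T(tau)."""
    return max(1.0 / ratio_statistic(y, tau) for tau in grid)
```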
Theorem 3.
If Assumption 1 and the null hypothesis $H_0'$ hold (assuming that $|\rho_t| = |\rho| < 1$), we have
$$M_T(\tau) \Rightarrow \frac{(1-\tau)^{-2}\,W_{j,1}(\tau)}{\tau^{-2}\,W_{j,0}(\tau)} \equiv M(\tau) \qquad (28)$$
and
$$H(M_T) \Rightarrow H(M), \qquad (29)$$
where $H(\cdot)$ denotes the statistics (25)–(27). The notation $j = 1$ signifies the model with $d_t = 0$; $j = 2$ and $j = 3$ denote $d_t = 1$ and $d_t = (1,t)^T$, respectively.
$$W_{1,0}(\tau) = W_{2,0}(\tau) = W_{3,0}(\tau) = \Psi_2^2\,U_2(\tau), \qquad (30)$$
$$W_{1,1}(\tau) = W_{2,1}(\tau) = W_{3,1}(\tau) = \Psi_2^2\,U_2(1-\tau), \qquad (31)$$
where $\Psi_2$ denotes the $\ell_2$ norm of the sequence $\{\varphi_j\}$ with $\varphi_j = \rho^j$.
The following Theorem 4 shows the consistency of the test.
Theorem 4.
If Assumption 1 and the alternative hypothesis $H_1'$ hold, then $M_T(\tau) = O_P(T)$ when $0 < \tau < \tau_0$ and $M_T(\tau) = O_P(1)$ when $\tau_0 \le \tau < 1$. Thus, if $[0, \tau_0] \cap \Theta \neq \emptyset$, $H(M_T) = O_P(T)$.

3.3. Bootstrap Approximation

The drawback of the statistics $R_T(\tau)$ and $M_T(\tau)$ is that their asymptotic distributions depend on the tail index $\kappa$. Mandelbrot [23] proposed a method for estimating the tail index, but its accuracy is not good enough. To solve this problem, we present the bootstrap method. The goal of this section is to derive an approximate rejection region for the test based on the statistic $R_T(\tau)$, even if $\kappa$ is unknown. We take the case of $I(1)$ changing into $I(0)$ as an example.
The algorithm is as follows:
Step 1: Compute the centered residuals
$$\tilde e_i = \hat e_i - \frac{1}{T}\sum_{i=1}^{T}\hat e_i, \quad 1 \le i \le T, \qquad (32)$$
where $\hat e_i = \hat\varepsilon_i - \hat\rho\,\hat\varepsilon_{i-1}$ and $\hat\rho$ is the OLS estimator of $\rho$ based on the residuals $\hat\varepsilon_1, \hat\varepsilon_2, \dots, \hat\varepsilon_T$.
Step 2: For a fixed $N \le T$, select with replacement a bootstrap sample $\{e_i^*, i = 1, \dots, N\}$ from $\{\tilde e_i, i = 1, \dots, T\}$.
Step 3: Calculate the bootstrap process
$$\hat\varepsilon_i^* = \hat\rho\,\hat\varepsilon_{i-1}^* + e_i^*, \quad i = 1, \dots, N, \qquad (33)$$
$$\tilde y_i = \hat\delta^T d_i + \hat\varepsilon_i^*, \quad i = 1, \dots, N, \qquad (34)$$
and the statistic
$$\tilde R_N(\tau) = \frac{[\tau N]^{-2}\sum_{t=1}^{[N\tau]}\hat\eta_{0,t}^2}{[(1-\tau)N]^{-2}\sum_{t=[N\tau]+1}^{N}\hat\eta_{1,t}^2}, \qquad (35)$$
where $\{\hat\eta_{0,t}\}$ are the OLS residuals from the regression of $\tilde y_t$ on $d_t$, $t = 1, \dots, [N\tau]$, and $\{\hat\eta_{1,t}\}$ are the OLS residuals from the regression of $\tilde y_t$ on $d_t$, $t = [N\tau]+1, \dots, N$.
Step 4: Repeat Steps 2 and 3 B times. The asymptotic critical value $H(R)$ of the statistic $H(R_T)$ can be approximated by the empirical quantile of $H(\tilde R_N)$. We reject the null hypothesis if $H(R_T)$ exceeds the corresponding empirical quantile.
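A minimal sketch of Steps 1–4 for the $d_t = 1$ case follows. It reuses the hypothetical max_chow helper above; the function name, the recursion's starting value, and the defaults are our assumptions, not prescriptions from the paper.

```python
import numpy as np

def bootstrap_critical_value(y, N, B=500, level=0.95, seed=None):
    """Approximate the critical value of H(R_T) by the bootstrap (Steps 1-4)."""
    rng = np.random.default_rng(seed)
    eps = y - y.mean()                                        # residuals on d_t = 1
    rho = (eps[1:] * eps[:-1]).sum() / (eps[:-1] ** 2).sum()  # OLS estimator of rho
    e_hat = eps[1:] - rho * eps[:-1]
    e_tilde = e_hat - e_hat.mean()                            # Step 1: centered residuals
    stats = np.empty(B)
    for b in range(B):
        e_star = rng.choice(e_tilde, size=N, replace=True)    # Step 2: resample
        eps_star = np.empty(N)                                # Step 3: AR(1) recursion
        eps_star[0] = e_star[0]                               # starting value (our choice)
        for i in range(1, N):
            eps_star[i] = rho * eps_star[i - 1] + e_star[i]
        y_tilde = y.mean() + eps_star                         # delta_hat^T d_i + eps*_i
        stats[b] = max_chow(y_tilde)                          # H(R~_N) on bootstrap series
    return np.quantile(stats, level)                          # Step 4: empirical quantile
```

The null hypothesis is then rejected when $H(R_T)$ computed on the original sample exceeds this quantile.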
In order to prove the convergence of R ˜ N ( τ ) , we consider the following assumption:
Assumption 2.
As $T \to \infty$, $N \to \infty$ and $N/T \to 0$.
Theorem 5.
If Assumptions 1 and 2 hold, then for every real x,
$$P_\epsilon\big(\tilde R_N(\tau) \le x\big) \xrightarrow{P} P\big(R(\tau) \le x\big), \qquad (36)$$
where $\epsilon = \sigma(\varepsilon_j, j \ge 1)$, $P_\epsilon$ is the conditional probability with respect to $\epsilon$, and "$\xrightarrow{P}$" stands for convergence in probability.
Remark 2.
Theorem 5 implies that the bootstrap test has an asymptotically correct size. It also shows that the test with the bootstrap add-on is consistent.
We can construct the bootstrap algorithm for testing $H_0'$ against $H_1'$ in the same way, using the statistic
$$\tilde M_N(\tau) = \frac{[(1-\tau)N]^{-2}\sum_{t=[N\tau]+1}^{N}\hat\eta_{1,t}^2}{[\tau N]^{-2}\sum_{t=1}^{[N\tau]}\hat\eta_{0,t}^2}. \qquad (37)$$
We can also obtain the corresponding conclusion of Theorem 5.
Theorem 6.
If Assumptions 1 and 2 hold, then for every real x,
$$P_\epsilon\big(\tilde M_N(\tau) \le x\big) \xrightarrow{P} P\big(M(\tau) \le x\big), \qquad (38)$$
where $\epsilon = \sigma(\varepsilon_j, j \ge 1)$, $P_\epsilon$ is the conditional probability with respect to $\epsilon$, and "$\xrightarrow{P}$" stands for convergence in probability.
Appendix A presents the mathematical proofs.

4. Simulation and Real-Data Analysis

In this section, simulation studies of artificial data are conducted to assess the finite sample performance. The empirical sizes and powers perform well. Simulation results show that our method is better than the kernel method in all listed cases. The analysis of real data also demonstrates that this method is effective.

4.1. Simulation

We use R software to complete the simulation. To save computational time, we simply show the results for d t = 1 . The results for the d t = 0 and d t = ( 1 , t ) T cases are quite similar.
To investigate the size and power properties of the test, we consider the following data generating process:
$$y_t = r_0 + \varepsilon_t, \quad \varepsilon_t = \rho\,\varepsilon_{t-1} + e_t, \quad t = 1, \dots, T.$$
Null hypothesis:
$$H_0: \rho = 1, \quad t = 1, \dots, T,$$
against the alternative hypothesis:
$$H_1: \rho = 1, \quad t = 1, \dots, [T\tau_0],$$
$$\rho = \rho_0, \quad t = [T\tau_0] + 1, \dots, T;$$
another null hypothesis:
$$H_0': \rho = \rho_0, \quad t = 1, \dots, T,$$
against the corresponding alternative hypothesis:
$$H_1': \rho = \rho_0, \quad t = 1, \dots, [T\tau_0],$$
$$\rho = 1, \quad t = [T\tau_0] + 1, \dots, T,$$
where $r_0 = 0.1$; $\rho_0 = 0.2, 0.5, 0.8$; $\tau_0 = 0.25, 0.35$ for the first problem; and $\tau_0 = 0.3, 0.5$ for the second. The innovation $\{e_t\}$ satisfies Assumption 1. We set the tail index $\kappa = 1.14, 1.43, 1.97$. This heavy-tailed sequence is generated by a small program that can be downloaded from Professor Nolan’s website: https://edspace.american.edu/jpnolan/, accessed on 20 May 2021.
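The paper uses Nolan's program to generate the heavy-tailed innovations; as a hedged alternative, scipy's levy_stable can draw symmetric $\kappa$-stable variates (skewness $\beta = 0$), on top of which the data generating process above is easily built. The sketch below is our illustration; the function name and defaults are assumptions.

```python
import numpy as np
from scipy.stats import levy_stable

def generate_series(T, kappa, rho0, tau0=None, r0=0.1, seed=None):
    """y_t = r0 + eps_t with AR(1) errors: rho = 1 up to [T*tau0] and rho0
    afterwards (the I(1)-to-I(0) alternative); tau0=None keeps rho = 1
    throughout, i.e., data under H_0."""
    e = levy_stable.rvs(alpha=kappa, beta=0.0, size=T, random_state=seed)
    eps = np.zeros(T)
    change = int(T * tau0) if tau0 is not None else T
    for t in range(1, T):
        rho = 1.0 if t <= change else rho0
        eps[t] = rho * eps[t - 1] + e[t]
    return r0 + eps
```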
The simulation study is based on different sample sizes T = 200, 500, 800 at nominal levels α = 0.1 and α = 0.05. We consider the test statistics (13) and (25). The choice of an appropriate bootstrap sample size N is not treated in detail here; we let N = {20, 25, 35}, {30, 35, 40}, and {70, 100, 120} for the three sample sizes, respectively, and set the bootstrap frequency B = 500 throughout this section.
In the case of detecting $H_0$ against $H_1$, the algorithm to calculate empirical sizes is as follows (Algorithm 1):
Algorithm 1. Calculating empirical sizes for detecting $H_0$ against $H_1$.
  • initialize the count variable k = 0
  • repeat
  •     Step A: Generate the data $y_t$, $t = 1, \dots, T$ under $H_0$ and calculate the statistics $R_T(\tau)$ and $H(R_T) = \max_{0.2 \le \tau \le 0.8} R_T(\tau)$.
  •     Step B: Repeat Steps 1, 2, and 3 of the bootstrap algorithm in Section 3.3 for B = 500 times. Calculate the empirical quantile of $H(\tilde R_N) = \max_{0.2 \le \tau \le 0.8} \tilde R_N(\tau)$, denoted as $R_{T,N}$.
  •     Step C: If $H(R_T) > R_{T,N}$, reject $H_0$ and let k = k + 1.
  • until 5000 replications are completed
  • return k/5000
The empirical sizes can be approximated by the frequency with which the null hypothesis is rejected over the 5000 replications. Calculating the empirical powers is similar; only the data generating process changes to one under $H_1$. The method for obtaining the empirical sizes and powers is also similar in the case of testing $H_0'$ against $H_1'$.
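Putting the pieces together, a compact sketch of Algorithm 1 (built from the hypothetical helpers above; the replication counts follow the text) reads:

```python
def empirical_size(T, N, kappa, alpha=0.05, reps=5000):
    """Fraction of Monte Carlo replications in which H_0 is rejected."""
    k = 0
    for r in range(reps):
        y = generate_series(T, kappa, rho0=0.5)          # Step A: data under H_0
        crit = bootstrap_critical_value(y, N, B=500,     # Step B: bootstrap quantile
                                        level=1 - alpha, seed=r)
        if max_chow(y) > crit:                           # Step C: count rejections
            k += 1
    return k / reps
```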
The empirical sizes and powers of the I ( 1 ) to I ( 0 ) test are provided in Table 1, Table 2, Table 3 and Table 4. The data in parentheses are the corresponding standard errors. We analyze the main conclusions that can be drawn from Table 1, Table 2, Table 3 and Table 4.
(1) The empirical sizes are close to the nominal level α in Table 1.
(2) From Table 2, Table 3 and Table 4, we find that the powers increase as T grows for the same $\rho_0$ and $\tau_0$. For fixed T and $\tau_0$, the powers rise gradually as $\rho_0$ decreases. An earlier change gives higher empirical power for the same T and $\rho_0$, a well-known result in change point detection. Some powers are equal to 1 in Table 4.
(3) The larger the tail index κ, the higher the empirical power. This is due to the special properties of heavy-tailed sequences: the smaller the tail index κ, the more likely the sequence is to contain ‘outliers’. The test statistics behave differently before and after such points, which can seriously affect the performance of the test.
The empirical sizes and powers of the I(0) to I(1) test are provided in Table 5, Table 6, Table 7 and Table 8. The data in parentheses are the corresponding standard errors. We now present the main conclusions of the simulation.
(1) The empirical sizes are almost the same as the nominal level α in Table 5.
(2) From Table 6, Table 7 and Table 8, we find that the powers increase as T grows for the same $\rho_0$ and $\tau_0$. For fixed T and $\tau_0$, the powers rise gradually as $\rho_0$ decreases. An earlier location of the change point results in higher empirical power, a well-known result in change point detection.
(3) The larger the tail index κ, the higher the empirical power. This is due to the special properties of heavy-tailed sequences: the smaller the tail index κ, the more likely the sequence is to contain ‘outliers’. The test statistics behave differently before and after such points, which can seriously affect the performance of the test.
We compare our method with the kernel-weighted ratio method of Chen et al. [24]. The empirical powers of the I(0) to I(1) tests are provided in Table 9. We let T = 200, 500, κ = 1.43, and the location of the change point $\tau_0 = 0.5$; the other parameters are set as before. In the kernel method, we choose the bandwidth h = 0.2M, where the start time M is set to 0.2T or 0.3T.
Table 9 shows that our test method is better than the kernel-weighted test method in all listed cases. The empirical powers of our method are always greater than those of the kernel method at both start times. The powers increase as T grows for the same $\rho_0$; for fixed T, the powers rise gradually as $\rho_0$ decreases. In particular, our advantage is more pronounced when the sample size is 200; in other words, we can obtain high empirical powers with a small sample size, so our method is more efficient. The numerical simulation shows the excellent performance of our method.

4.2. Real-Data Analysis

There is growing evidence to indicate that many economic and financial time sequences have heavy-tailed features. Sometimes, the data contains changes in persistence. We apply the ratio test method to analyze the foreign exchange rate data. The data set contains 300 monthly foreign exchange rates for Sweden/US from January 1971 to December 1995. Figure 1 shows the real data. The data used here can be found on the website of the Federal Reserve Bank of St. Louis. Figure 2 describes the first-order difference of the original data in Figure 1. From Figure 2, we can see that there exist many ’outliers’.
According to Figure 1, the real data may have a persistence change from I(0) to I(1). We apply our method to detect persistence change in this sequence. First, we use the bootstrap approximation method to determine the rejection region based on the statistic (37). We find that the test statistic is larger than the critical value; therefore, we reject the null hypothesis. This means there could be a change point in persistence from I(0) to I(1). Based on Kim’s [1] method, the estimated change point is 104, which coincides with our detection results.
One remaining question is whether the rejection is caused by a persistence change point or by ’outliers’. To address this issue and to make our conclusion more reliable, we also tested the first-order difference data in Figure 2. The monitoring procedure, using the same parameters as before, detects no change in persistence. This result indicates that the initial data contain a possible change point and that the first-order difference series is stationary.
Furthermore, we conclude that there could be a change point in persistence from I(0) to I(1). The estimated change point 104 corresponds to August 1979. Referring to the history of American economic policy, this estimated location can be well interpreted. In the second half of the 1970s, the US government decided to adopt an expansionary fiscal policy and monetary policy to stimulate the economy in response to high inflation, a high unemployment rate, and a low economic growth rate. After President Reagan took office, the dollar began to strengthen, and the foreign exchange rate for Sweden/US reached its highest point in July 1985. Thus, this implies that the sequence goes from stationary to nonstationary because of the stimulus of economic policy.

5. Discussion

We focused on the symmetric I(1)-to-I(0) and I(0)-to-I(1) persistence change testing problems in this paper. A ratio test based on bootstrap approximation was proposed to detect this type of change in heavy-tailed observations. On the basis of residual CUSUM, the test statistic was constructed in ratio form. We derived the null distribution of the test statistic and discussed its consistency under the alternative hypothesis. Because the null distribution of the test statistic contains an unknown tail index, we then presented the bootstrap methodology.
Over the past few decades, many economic and financial data have displayed changes in persistence. This type of change causes substantial practical problems concerning inflation rates, short-term interest rates, and government budget deficits; in particular, inflation persistence plays a key role in the formulation and evaluation of quantitative macroeconomic models; see Korenok et al. [25]. Furthermore, a possible application of testing persistence change is predictive regression, i.e., predicting a low-persistence I(0) variable such as stock returns using a highly persistent predictor; see Kejriwal et al. [13] and Verdickt et al. [26].
Our study thus offers a new strategy for persistence change detection problems. There are still some shortcomings in our work: for example, how to select the bootstrap sample size N for a fixed T, and how to determine the direction of the persistence change. We will conduct further research into these questions.

6. Conclusions

In this paper, a new ratio and bootstrap test for persistence change with heavy-tailed innovations was proposed. The paper focuses on the I(1)-to-I(0) and I(0)-to-I(1) persistence change detection problems. We derived the asymptotic distributions of the ratio tests under the corresponding null hypotheses. However, the asymptotic distributions depend on the tail index κ, which is unknown and difficult to estimate. To solve this problem, we presented an approximation method based on the bootstrap methodology. As with most subsampling methods, our approach relies on the choice of the subsample size N. Under the alternative, we proved the consistency of the ratio and bootstrap test. The simulation results show that the empirical sizes and powers perform well. In conclusion, the ratio test based on the bootstrap method constitutes an effective tool for detecting persistence change in heavy-tailed sequences.

Funding

This research was funded by the Special Research Program of the Shaanxi Provincial Education Department, grant number 15JK1737, and the Shaanxi Province Science and Technology Program, grant number 2018JQ1075.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data set in Section 4.2 contains 300 monthly foreign exchange rates for Sweden/US from January 1971 to December 1995. These data are public and available at the website of the Federal Reserve Bank of St. Louis (https://www.stlouisfed.org/, accessed on 20 May 2021).

Acknowledgments

The author thanks the reviewers for their constructive suggestions, which enriched this research.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Lemma A1
([24]). If the time series $\{\varepsilon_t\}$ is generated by the AR(1) process (2) with $|\rho_t| = |\rho| < 1$ and the innovation process $\{e_t\}$ satisfies Assumption 1, then
$$\left(a_T^{-1}\sum_{t=1}^{[T\tau]}\varepsilon_t,\; a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2\right) \xrightarrow{d} \left(\psi\,U_1(\tau),\; \Psi_2^2\,U_2(\tau)\right), \qquad (A1)$$
where $\psi = \sum_{j=0}^{\infty}\varphi_j$, $\Psi_2$ denotes the $\ell_2$ norm of the sequence $\{\varphi_j\}$, and $\varphi_j = \rho^j$.
Proof of Theorem 1.
Under $H_0$, $\varepsilon_t = \sum_{i=1}^{t} e_i$. If $d_t = 0$, then $\hat\varepsilon_{0,t} = y_t = \varepsilon_t$ and $\hat\varepsilon_{1,t} = y_t = \varepsilon_t$, and Lemma 1 gives that
$$T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = T^{-1}\sum_{t=1}^{[T\tau]}\Big(a_T^{-1}\sum_{i=1}^{t}e_i\Big)^2 \Rightarrow \int_0^\tau U_1(r)^2\,dr \equiv V_{1,0}(\tau),$$
$$T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = T^{-1}\sum_{t=[T\tau]+1}^{T}\Big(a_T^{-1}\sum_{i=1}^{t}e_i\Big)^2 \Rightarrow \int_\tau^1 U_1(r)^2\,dr \equiv V_{1,1}(\tau).$$
If $d_t = 1$,
$$\hat\varepsilon_{0,t} = \varepsilon_t - \frac{1}{[T\tau]}\sum_{i=1}^{[T\tau]}\varepsilon_i, \quad \hat\varepsilon_{1,t} = \varepsilon_t - \frac{1}{[T(1-\tau)]}\sum_{i=[T\tau]+1}^{T}\varepsilon_i,$$
and we have
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 - \frac{1}{[T\tau]}\,a_T^{-2}\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2,$$
$$a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 - \frac{1}{[T(1-\tau)]}\,a_T^{-2}\Big(\sum_{t=[T\tau]+1}^{T}\varepsilon_t\Big)^2.$$
Hence
$$T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = T^{-1}\sum_{t=1}^{[T\tau]}\Big(a_T^{-1}\sum_{i=1}^{t}e_i\Big)^2 - \frac{1}{[T\tau]/T}\Big(T^{-1}a_T^{-1}\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 \Rightarrow \int_0^\tau U_1(r)^2\,dr - \frac{1}{\tau}\Big(\int_0^\tau U_1(r)\,dr\Big)^2 \equiv V_{2,0}(\tau),$$
$$T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = T^{-1}\sum_{t=[T\tau]+1}^{T}\Big(a_T^{-1}\sum_{i=1}^{t}e_i\Big)^2 - \frac{1}{[T(1-\tau)]/T}\Big(T^{-1}a_T^{-1}\sum_{t=[T\tau]+1}^{T}\varepsilon_t\Big)^2 \Rightarrow \int_\tau^1 U_1(r)^2\,dr - \frac{1}{1-\tau}\Big(\int_\tau^1 U_1(r)\,dr\Big)^2 \equiv V_{2,1}(\tau).$$
If $d_t = (1, t)^T$, let $\delta = (\alpha, \beta)^T$; then, by the definition of least squares, we can obtain
$$\begin{pmatrix}\hat\alpha - \alpha\\ \hat\beta - \beta\end{pmatrix} = \begin{pmatrix}\sum 1 & \sum t\\ \sum t & \sum t^2\end{pmatrix}^{-1}\begin{pmatrix}\sum\varepsilon_t\\ \sum t\,\varepsilon_t\end{pmatrix},$$
where $\sum = \sum_{t=1}^{[T\tau]}$ if we estimate $\delta$ using the samples $y_1, \dots, y_{[T\tau]}$, and $\sum = \sum_{t=[T\tau]+1}^{T}$ if we estimate $\delta$ using the samples $y_{[T\tau]+1}, \dots, y_T$.
Consider $\hat\varepsilon_{0,t}$ first; according to the continuous mapping theorem, we obtain
$$T^{-1}a_T^{-1}\sum_{t=1}^{[T\tau]}\varepsilon_t \Rightarrow \int_0^\tau U_1(r)\,dr \equiv K_1(\tau), \qquad (A2)$$
$$T^{-1}a_T^{-1}\sum_{t=1}^{[T\tau]}t\,e_t \Rightarrow \tau\,U_1(\tau) - \int_0^\tau U_1(r)\,dr,$$
$$T^{-2}a_T^{-1}\sum_{t=1}^{[T\tau]}t\,\varepsilon_t = T^{-2}a_T^{-1}\Big(\sum_{t=1}^{[T\tau]}t\,\varepsilon_{t-1} + \sum_{t=1}^{[T\tau]}t\,e_t\Big) = T^{-2}a_T^{-1}\sum_{t=1}^{[T\tau]}t\,\varepsilon_{t-1} + O_P(T^{-1}) \Rightarrow \int_0^\tau r\,U_1(r)\,dr \equiv K_2(\tau). \qquad (A3)$$
We have
$$T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\big(\varepsilon_t - (\hat\alpha-\alpha) - (\hat\beta-\beta)t\big)^2 = T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\big(\varepsilon_t^2 + (\hat\alpha-\alpha)^2 + (\hat\beta-\beta)^2 t^2 - 2\varepsilon_t(\hat\alpha-\alpha) - 2(\hat\beta-\beta)t\,\varepsilon_t + 2(\hat\alpha-\alpha)(\hat\beta-\beta)t\big). \qquad (A4)$$
The proof above gives that
$$T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 \Rightarrow \int_0^\tau U_1(r)^2\,dr \equiv V_{1,0}(\tau);$$
combining (A2) and (A3) with a tedious calculation, we can obtain
$$T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t(\hat\alpha-\alpha) = \frac{(4[T\tau]+2)\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 - 6\Big(\sum_{t=1}^{[T\tau]}t\,\varepsilon_t\Big)\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)}{T\,a_T^2\,[T\tau]([T\tau]-1)} \Rightarrow 4\tau^{-1}K_1(\tau)^2 - 6\tau^{-2}K_1(\tau)K_2(\tau);$$
the rest of (A4) can be analysed similarly. Then,
$$T^{-1}a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 \Rightarrow V_{1,0}(\tau) + 4\tau^{-1}\big(3\tau^{-1}K_1(\tau)K_2(\tau) - K_1(\tau)^2 - 3\tau^{-2}K_2(\tau)^2\big) \equiv V_{3,0}(\tau).$$
Similar arguments give that
$$T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 \Rightarrow V_{1,1}(\tau) + 4(1-\tau)^{-1}\big(3(1-\tau)^{-1}G_1(\tau)G_2(\tau) - G_1(\tau)^2 - 3(1-\tau)^{-2}G_2(\tau)^2\big) \equiv V_{3,1}(\tau).$$
The proof of Theorem 1 is finished. □
Proof of Theorem 2.
We omit the proofs for the $d_t = 0$ and $d_t = (1, t)^T$ cases; these are straightforward but tedious and follow the same logical development as those presented for the $d_t = 1$ case.
If a persistence change point occurs at $[T\tau_0]$ and if $\tau_0 \le \tau < 1$, then Lemma A1 gives that
$$a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 - \frac{1}{[T(1-\tau)]}\,a_T^{-2}\Big(\sum_{t=[T\tau]+1}^{T}\varepsilon_t\Big)^2 = O_P(1). \qquad (A5)$$
Since
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 - \frac{1}{[T\tau]}\,a_T^{-2}\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 = a_T^{-2}\sum_{t=1}^{[T\tau_0]}\varepsilon_t^2 + a_T^{-2}\sum_{t=[T\tau_0]+1}^{[T\tau]}\varepsilon_t^2 - \frac{1}{[T\tau]}\,a_T^{-2}\Big(\Big(\sum_{t=1}^{[T\tau_0]}\varepsilon_t\Big)^2 + \Big(\sum_{t=[T\tau_0]+1}^{[T\tau]}\varepsilon_t\Big)^2 + 2\Big(\sum_{t=1}^{[T\tau_0]}\varepsilon_t\Big)\Big(\sum_{t=[T\tau_0]+1}^{[T\tau]}\varepsilon_t\Big)\Big), \qquad (A6)$$
according to Lemma A1 and (A5), we can get
$$a_T^{-2}\sum_{t=[T\tau_0]+1}^{[T\tau]}\varepsilon_t^2 = O_P(1), \qquad a_T^{-2}\sum_{t=1}^{[T\tau_0]}\varepsilon_t^2 = O_P(T);$$
based on Lemma A1 and (A2), the remainder of (A6) can be dealt with analogously; then,
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = O_P(T).$$
Hence
$$R_T(\tau) = O_P(T).$$
If $0 < \tau < \tau_0$, following the same line of proof as above, we obtain
$$R_T(\tau) = O_P(1).$$
The statistics (13)–(15) are monotonically increasing functionals of $R_T$. Thus,
$$H(R_T) = O_P(T).$$
This completes the proof of Theorem 2. □
Proof of Theorem 3.
Under $H_0'$, if $d_t = 0$, then $\hat\varepsilon_{0,t} = y_t = \varepsilon_t$ and $\hat\varepsilon_{1,t} = y_t = \varepsilon_t$, and Lemma A1 gives that
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 \Rightarrow \Psi_2^2\,U_2(\tau) \equiv W_{1,0}(\tau),$$
$$a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 \Rightarrow \Psi_2^2\,U_2(1-\tau) \equiv W_{1,1}(\tau).$$
If $d_t = 1$,
$$\hat\varepsilon_{0,t} = \varepsilon_t - \frac{1}{[T\tau]}\sum_{i=1}^{[T\tau]}\varepsilon_i, \quad \hat\varepsilon_{1,t} = \varepsilon_t - \frac{1}{[T(1-\tau)]}\sum_{i=[T\tau]+1}^{T}\varepsilon_i,$$
and we have
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 - \frac{1}{[T\tau]}\,a_T^{-2}\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 - O_P(T^{-1}) \Rightarrow \Psi_2^2\,U_2(\tau) \equiv W_{2,0}(\tau),$$
$$a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 - \frac{1}{[T(1-\tau)]}\,a_T^{-2}\Big(\sum_{t=[T\tau]+1}^{T}\varepsilon_t\Big)^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 - O_P(T^{-1}) \Rightarrow \Psi_2^2\,U_2(1-\tau) \equiv W_{2,1}(\tau).$$
If $d_t = (1, t)^T$, let $\delta = (\alpha, \beta)^T$; then, by the definition of least squares, we can obtain
$$\begin{pmatrix}\hat\alpha - \alpha\\ \hat\beta - \beta\end{pmatrix} = \begin{pmatrix}\sum 1 & \sum t\\ \sum t & \sum t^2\end{pmatrix}^{-1}\begin{pmatrix}\sum\varepsilon_t\\ \sum t\,\varepsilon_t\end{pmatrix},$$
where $\sum = \sum_{t=1}^{[T\tau]}$ if we estimate $\delta$ using the samples $y_1, \dots, y_{[T\tau]}$, and $\sum = \sum_{t=[T\tau]+1}^{T}$ if we estimate $\delta$ using the samples $y_{[T\tau]+1}, \dots, y_T$.
Consider $\hat\varepsilon_{0,t}$ first; according to the continuous mapping theorem, we obtain
$$T^{-1}a_T^{-1}\sum_{t=1}^{[T\tau]}t\,\varepsilon_t \Rightarrow \psi\Big(\tau\,U_1(\tau) - \int_0^\tau U_1(r)\,dr\Big), \qquad (A7)$$
and we have
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\big(\varepsilon_t - (\hat\alpha-\alpha) - (\hat\beta-\beta)t\big)^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\big(\varepsilon_t^2 + (\hat\alpha-\alpha)^2 + (\hat\beta-\beta)^2 t^2 - 2\varepsilon_t(\hat\alpha-\alpha) - 2(\hat\beta-\beta)t\,\varepsilon_t + 2(\hat\alpha-\alpha)(\hat\beta-\beta)t\big). \qquad (A8)$$
Lemma A1 gives that
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 \Rightarrow \Psi_2^2\,U_2(\tau);$$
combining Lemma A1 and (A7) with a tedious calculation, we can obtain
$$a_T^{-2}\sum_{t=1}^{[T\tau]}(\hat\alpha-\alpha)^2 = \frac{1}{a_T^2\,[T\tau]([T\tau]-1)^2}\Big((4[T\tau]+2)^2\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 + 36\Big(\sum_{t=1}^{[T\tau]}t\,\varepsilon_t\Big)^2 - 12(4[T\tau]+2)\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)\Big(\sum_{t=1}^{[T\tau]}t\,\varepsilon_t\Big)\Big) = O_P(T^{-1});$$
the rest of (A8) can be analysed similarly. Then,
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 + O_P(T^{-1}) \Rightarrow \Psi_2^2\,U_2(\tau) \equiv W_{3,0}(\tau).$$
Similar arguments give that
$$a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 + O_P(T^{-1}) \Rightarrow \Psi_2^2\,U_2(1-\tau) \equiv W_{3,1}(\tau).$$
The proof of Theorem 3 is finished.
For the remainder of this appendix, we omit the proofs for the $d_t = 0$ and $d_t = (1, t)^T$ cases; these are simple but tedious and follow the same line of argument as those discussed for $d_t = 1$. □
Proof of Theorem 4.
If a persistence change point occurs at $[T\tau_0]$ and if $0 < \tau < \tau_0$, then Lemma A1 gives that
$$a_T^{-2}\sum_{t=1}^{[T\tau]}\hat\varepsilon_{0,t}^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 - \frac{1}{[T\tau]}\,a_T^{-2}\Big(\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 = a_T^{-2}\sum_{t=1}^{[T\tau]}\varepsilon_t^2 - \frac{1}{[T\tau]}\Big(a_T^{-1}\sum_{t=1}^{[T\tau]}\varepsilon_t\Big)^2 = O_P(1).$$
Since
$$a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = a_T^{-2}\sum_{t=[T\tau]+1}^{T}\varepsilon_t^2 - \frac{1}{[T(1-\tau)]}\,a_T^{-2}\Big(\sum_{t=[T\tau]+1}^{T}\varepsilon_t\Big)^2,$$
we have
$$T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{[T\tau_0]}\varepsilon_t^2 + T^{-1}a_T^{-2}\sum_{t=[T\tau_0]+1}^{T}\varepsilon_t^2 - \frac{1}{T[T(1-\tau)]}\,a_T^{-2}\Big(\Big(\sum_{t=[T\tau]+1}^{[T\tau_0]}\varepsilon_t\Big)^2 + \Big(\sum_{t=[T\tau_0]+1}^{T}\varepsilon_t\Big)^2 + 2\Big(\sum_{t=[T\tau]+1}^{[T\tau_0]}\varepsilon_t\Big)\Big(\sum_{t=[T\tau_0]+1}^{T}\varepsilon_t\Big)\Big). \qquad (A9)$$
According to Lemma A1, we obtain
$$T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{[T\tau_0]}\varepsilon_t^2 = O_P(T^{-1}),$$
and based on the continuous mapping theorem, we obtain
$$T^{-1}a_T^{-2}\sum_{t=[T\tau_0]+1}^{T}\varepsilon_t^2 \Rightarrow \int_{\tau_0}^{1}U_1(r)^2\,dr = O_P(1).$$
The remainder of (A9) can be dealt with analogously; then,
$$T^{-1}a_T^{-2}\sum_{t=[T\tau]+1}^{T}\hat\varepsilon_{1,t}^2 = O_P(1).$$
Hence
$$M_T(\tau) = O_P(T).$$
If $\tau_0 \le \tau < 1$, following a similar proof to that above, we obtain
$$M_T(\tau) = O_P(1).$$
The statistics (25)–(27) are monotonically increasing functionals of $M_T$. Thus,
$$H(M_T) = O_P(T).$$
This completes the proof of Theorem 4. □
Proof of Theorem 5.
To simplify the proof, we assume that the parameters $\delta$ and $\rho$ in (1) and (2) are known. If $d_t = 1$, then
$$\hat\varepsilon_i = \varepsilon_i - \frac{1}{T}\sum_{i=1}^{T}\varepsilon_i, \quad i = 1, \dots, T.$$
This means that $\hat e_i$ can be replaced by $e_i$ in Step 1. Consider
$$\tilde e_i = e_i - \frac{1}{T}\sum_{i=1}^{T}e_i, \quad i = 1, \dots, T.$$
When we select one of $\tilde e_1, \dots, \tilde e_T$, we select the corresponding unobservable noise variable, denoted as $\underline{e}_i$. This means that
$$e_i^* = \underline{e}_i - \frac{1}{T}\sum_{i=1}^{T}e_i, \quad i = 1, \dots, N.$$
Therefore,
$$a_N^{-1}\sum_{i=1}^{[Nt]}e_i^* = a_N^{-1}\sum_{i=1}^{[Nt]}\underline{e}_i - \frac{[Nt]}{T\,a_N}\sum_{i=1}^{T}e_i.$$
By Lemma 1 and Assumption 2, we have
$$\frac{[Nt]}{T\,a_N}\sum_{i=1}^{T}e_i = \frac{[Nt]}{T}\cdot\frac{T^{1/\kappa}L(T)}{N^{1/\kappa}L(N)}\cdot\frac{1}{a_T}\sum_{i=1}^{T}e_i \le 2(N/T)^{1-1/\kappa-\gamma}\,\frac{1}{a_T}\sum_{i=1}^{T}e_i = o_p(1).$$
The above inequality follows from $L(T)/L(N) \le 2(T/N)^{\gamma}$ [27], where $\gamma > 0$ is chosen so small that $1 - 1/\kappa - \gamma > 0$.
Horváth and Kokoszka [17] showed that, for any bounded continuous functional f on $D[0,1]$,
$$P_\epsilon\Big(f\Big(a_M^{-1}\sum_{i=1}^{[Mt]}\underline{e}_i\Big) \le x\Big) \xrightarrow{P} P\big(f(U_1(t)) \le x\big).$$
Then, we can obtain
$$a_N^{-1}\sum_{i=1}^{[Nt]}e_i^* \Rightarrow U_1(t).$$
Similar arguments give that
$$a_N^{-2}\sum_{i=1}^{[Nt]}e_i^{*2} \Rightarrow U_2(t).$$
Hence, we can complete this proof by arguing as in the proof of Theorem 1. □
Proof of Theorem 6.
The proof is similar to the proof of Theorem 3. □

References

1. Kim, J.Y. Detection of change in persistence of a linear time series. J. Econom. 2000, 95, 97–116.
2. Kim, J.Y.; Belaire-Franch, J.; Badillo Amador, R. Corrigendum to “Detection of change in persistence of a linear time series”. J. Econom. 2002, 109, 389–392.
3. Leybourne, S.; Kim, T.H.; Smith, V.; Newbold, P. Tests for a change in persistence against the null of difference stationarity. Econom. J. 2003, 6, 291–311.
4. Busetti, F.; Taylor, A.M.R. Tests of stationarity against a change in persistence. J. Econom. 2004, 123, 33–66.
5. Leybourne, S.; Taylor, A.M.R. Persistence change tests and shifting stable autoregressions. Econ. Lett. 2006, 91, 44–49.
6. Leybourne, S.; Taylor, A.M.R.; Kim, T.H. CUSUM of squares-based tests for a change in persistence. J. Time Ser. Anal. 2007, 28, 408–433.
7. Sibbertsen, P.; Kruse, R. Testing for a break in persistence under long-range dependencies. J. Time Ser. Anal. 2009, 30, 263–285.
8. Belaire-Franch, J.; Contreras, D. Nonparametric unit root test and structural breaks. J. Time Ser. Econom. 2011, 3, 1–14.
9. Halunga, A.G.; Osborn, D.R. Ratio-based estimators for a change point in persistence. J. Econom. 2012, 171, 24–31.
10. Kejriwal, M.; Perron, P.; Zhou, J. Wald tests for detecting multiple structural changes in persistence. Econom. Theory 2013, 29, 289–323.
11. Perron, P.; Shintani, M.; Yabu, T. Testing for flexible nonlinear trends with an integrated or stationary noise component. Oxf. Bull. Econ. Stat. 2017, 79, 822–850.
12. In, C. Unit root tests for dependent micropanels. J. Jpn. Econ. Assoc. 2019, 70, 145–167.
13. Kejriwal, M.; Yu, X.; Perron, P. Bootstrap procedures for detecting multiple persistence shifts in heteroskedastic time series. J. Time Ser. Anal. 2020, 41, 676–690.
14. Davis, R.A.; Mikosch, T. The sample autocorrelations of heavy-tailed processes with applications to ARCH. Ann. Stat. 1998, 26, 2049–2080.
15. Kokoszka, P.; Wolf, M. Subsampling the mean of heavy-tailed dependent observations. J. Time Ser. Anal. 2004, 25, 217–234.
16. Ahn, S.; Kim, J.H.T.; Ramaswami, V. A new class of models for heavy tailed distributions in finance and insurance risk. Insur. Math. Econ. 2012, 51, 43–52.
17. Horváth, L.; Kokoszka, P. A bootstrap approximation to a unit root test statistic for heavy-tailed observations. Stat. Probab. Lett. 2003, 63, 163–173.
18. Wang, D.; Guo, P.; Xia, Z. Detection and estimation of structural change in heavy-tailed sequence. Commun. Stat. Theory Methods 2017, 46, 815–827.
19. Resnick, S.I. Point processes, regular variation and weak convergence. Adv. Appl. Probab. 1986, 18, 66–138.
20. Andrews, D.W.K. Tests for parameter instability and structural change with unknown change point. Econometrica 1993, 61, 821–856.
21. Hansen, B.E. Testing for Structural Change of Unknown Form in Models with Nonstationary Regressors; Mimeo, Department of Economics, University of Rochester: Rochester, NY, USA, 1991.
22. Andrews, D.W.K.; Ploberger, W. Optimal tests when a nuisance parameter is present only under the alternative. Econometrica 1994, 62, 1383–1414.
23. Mandelbrot, B.B. The variation of certain speculative prices. J. Bus. 1963, 36, 394–419.
24. Chen, Z.; Tian, Z.; Zhao, C. Monitoring persistence change in infinite variance observations. J. Korean Stat. Soc. 2012, 41, 61–73.
25. Korenok, O.; Radchenko, S.; Swanson, N.R. International evidence on the efficacy of new-Keynesian models of inflation persistence. J. Appl. Econom. 2010, 25, 31–54.
26. Verdickt, G.; Annaert, J.; Deloof, M. Dividend growth and return predictability: A long-run re-examination of conventional wisdom. J. Empir. Financ. 2019, 52, 112–127.
27. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Cambridge University Press: Cambridge, UK, 1987.
Figure 1. Monthly exchange rate data for Sweden/US.
Figure 2. First-order difference of the data in Figure 1.
Table 1. Empirical sizes (ρ0 = 0.5).

T    N    κ = 1.14                      κ = 1.43                      κ = 1.97
          α = 0.1        α = 0.05       α = 0.1        α = 0.05       α = 0.1        α = 0.05
200  20   0.132 (0.024)  0.045 (0.014)  0.087 (0.019)  0.057 (0.016)  0.088 (0.020)  0.045 (0.014)
200  25   0.096 (0.020)  0.062 (0.017)  0.114 (0.022)  0.055 (0.016)  0.095 (0.020)  0.060 (0.017)
200  35   0.115 (0.022)  0.041 (0.014)  0.110 (0.022)  0.053 (0.016)  0.106 (0.022)  0.057 (0.016)
500  30   0.096 (0.013)  0.043 (0.009)  0.110 (0.014)  0.049 (0.009)  0.103 (0.013)  0.053 (0.010)
500  35   0.106 (0.014)  0.052 (0.010)  0.114 (0.014)  0.053 (0.010)  0.091 (0.013)  0.047 (0.009)
500  40   0.105 (0.014)  0.047 (0.009)  0.090 (0.013)  0.052 (0.010)  0.098 (0.013)  0.052 (0.010)
800  70   0.097 (0.010)  0.053 (0.008)  0.105 (0.011)  0.048 (0.008)  0.102 (0.010)  0.049 (0.008)
800  100  0.102 (0.010)  0.048 (0.008)  0.097 (0.010)  0.049 (0.008)  0.099 (0.010)  0.050 (0.008)
800  120  0.101 (0.010)  0.051 (0.008)  0.099 (0.010)  0.050 (0.008)  0.100 (0.010)  0.050 (0.008)
Table 2. Empirical powers (κ = 1.14, α = 0.05).

τ0    T    N    ρ0 = 0.8       ρ0 = 0.5       ρ0 = 0.2
0.35  200  20   0.588 (0.035)  0.697 (0.033)  0.735 (0.031)
0.35  200  25   0.591 (0.035)  0.703 (0.032)  0.700 (0.033)
0.35  200  35   0.612 (0.035)  0.706 (0.032)  0.717 (0.032)
0.25  200  20   0.712 (0.032)  0.823 (0.026)  0.847 (0.025)
0.25  200  25   0.724 (0.032)  0.828 (0.026)  0.839 (0.026)
0.25  200  35   0.725 (0.032)  0.822 (0.026)  0.846 (0.025)
0.35  500  30   0.743 (0.019)  0.780 (0.018)  0.807 (0.018)
0.35  500  35   0.744 (0.019)  0.800 (0.018)  0.792 (0.018)
0.35  500  40   0.746 (0.019)  0.794 (0.018)  0.805 (0.018)
0.25  500  30   0.879 (0.014)  0.937 (0.011)  0.953 (0.009)
0.25  500  35   0.888 (0.014)  0.937 (0.011)  0.954 (0.009)
0.25  500  40   0.881 (0.014)  0.942 (0.010)  0.949 (0.010)
0.35  800  70   0.756 (0.015)  0.808 (0.014)  0.810 (0.014)
0.35  800  100  0.772 (0.015)  0.806 (0.014)  0.815 (0.014)
0.35  800  120  0.760 (0.015)  0.804 (0.014)  0.813 (0.014)
0.25  800  70   0.925 (0.009)  0.952 (0.008)  0.964 (0.007)
0.25  800  100  0.921 (0.010)  0.954 (0.007)  0.965 (0.006)
0.25  800  120  0.924 (0.009)  0.952 (0.008)  0.963 (0.007)
Table 3. Empirical powers (κ = 1.43, α = 0.05).

τ0    T    N    ρ0 = 0.8       ρ0 = 0.5       ρ0 = 0.2
0.35  200  20   0.619 (0.035)  0.731 (0.031)  0.754 (0.030)
0.35  200  25   0.613 (0.035)  0.737 (0.031)  0.749 (0.031)
0.35  200  35   0.615 (0.035)  0.739 (0.031)  0.750 (0.031)
0.25  200  20   0.795 (0.028)  0.897 (0.021)  0.930 (0.018)
0.25  200  25   0.813 (0.028)  0.905 (0.021)  0.924 (0.019)
0.25  200  35   0.804 (0.028)  0.894 (0.022)  0.929 (0.018)
0.35  500  30   0.760 (0.019)  0.806 (0.018)  0.813 (0.017)
0.35  500  35   0.764 (0.019)  0.801 (0.018)  0.813 (0.017)
0.35  500  40   0.761 (0.019)  0.804 (0.018)  0.815 (0.017)
0.25  500  30   0.901 (0.013)  0.955 (0.009)  0.976 (0.007)
0.25  500  35   0.902 (0.013)  0.958 (0.009)  0.976 (0.007)
0.25  500  40   0.904 (0.013)  0.960 (0.009)  0.974 (0.007)
0.35  800  70   0.789 (0.014)  0.816 (0.014)  0.824 (0.013)
0.35  800  100  0.794 (0.014)  0.815 (0.014)  0.826 (0.013)
0.35  800  120  0.790 (0.014)  0.810 (0.014)  0.823 (0.013)
0.25  800  70   0.945 (0.008)  0.971 (0.006)  0.980 (0.005)
0.25  800  100  0.949 (0.008)  0.975 (0.005)  0.982 (0.005)
0.25  800  120  0.941 (0.008)  0.975 (0.005)  0.984 (0.004)
Table 4. Empirical powers (κ = 1.97, α = 0.05).

τ0    T    N    ρ0 = 0.8       ρ0 = 0.5       ρ0 = 0.2
0.35  200  20   0.695 (0.033)  0.742 (0.031)  0.792 (0.028)
0.35  200  25   0.698 (0.033)  0.746 (0.031)  0.778 (0.029)
0.35  200  35   0.695 (0.033)  0.745 (0.031)  0.777 (0.029)
0.25  200  20   0.862 (0.024)  0.949 (0.015)  0.966 (0.013)
0.25  200  25   0.877 (0.023)  0.951 (0.015)  0.969 (0.012)
0.25  200  35   0.870 (0.024)  0.963 (0.013)  0.968 (0.012)
0.35  500  30   0.775 (0.019)  0.814 (0.017)  0.823 (0.017)
0.35  500  35   0.766 (0.019)  0.812 (0.018)  0.824 (0.017)
0.35  500  40   0.777 (0.019)  0.810 (0.018)  0.823 (0.017)
0.25  500  30   0.975 (0.007)  0.988 (0.005)  0.995 (0.003)
0.25  500  35   0.974 (0.007)  0.990 (0.004)  1 (0)
0.25  500  40   0.970 (0.008)  0.991 (0.004)  1 (0)
0.35  800  70   0.806 (0.014)  0.823 (0.013)  0.835 (0.013)
0.35  800  100  0.803 (0.014)  0.826 (0.013)  0.837 (0.013)
0.35  800  120  0.805 (0.014)  0.829 (0.013)  0.838 (0.013)
0.25  800  70   0.990 (0.003)  0.995 (0.002)  1 (0)
0.25  800  100  0.989 (0.004)  0.996 (0.002)  1 (0)
0.25  800  120  0.987 (0.004)  1 (0)          1 (0)
Table 5. Empirical sizes (ρ0 = 0.5).

T    N    κ = 1.14                      κ = 1.43                      κ = 1.97
          α = 0.1        α = 0.05       α = 0.1        α = 0.05       α = 0.1        α = 0.05
200  20   0.083 (0.019)  0.062 (0.017)  0.088 (0.020)  0.058 (0.016)  0.115 (0.022)  0.043 (0.014)
200  25   0.122 (0.023)  0.053 (0.016)  0.102 (0.021)  0.059 (0.017)  0.102 (0.021)  0.067 (0.018)
200  35   0.095 (0.020)  0.055 (0.016)  0.096 (0.020)  0.052 (0.016)  0.086 (0.019)  0.055 (0.016)
500  30   0.097 (0.013)  0.052 (0.010)  0.095 (0.013)  0.053 (0.010)  0.107 (0.014)  0.058 (0.010)
500  35   0.110 (0.014)  0.049 (0.009)  0.114 (0.014)  0.054 (0.010)  0.089 (0.013)  0.045 (0.009)
500  40   0.110 (0.014)  0.050 (0.010)  0.092 (0.013)  0.043 (0.009)  0.096 (0.013)  0.052 (0.010)
800  70   0.098 (0.010)  0.058 (0.008)  0.094 (0.010)  0.051 (0.008)  0.100 (0.010)  0.050 (0.008)
800  100  0.102 (0.010)  0.047 (0.007)  0.107 (0.011)  0.049 (0.008)  0.101 (0.010)  0.051 (0.008)
800  120  0.097 (0.010)  0.054 (0.008)  0.098 (0.010)  0.052 (0.008)  0.099 (0.010)  0.050 (0.008)
Table 6. Empirical powers (κ = 1.14, α = 0.05).

τ0   T    N    ρ0 = 0.2       ρ0 = 0.5       ρ0 = 0.8
0.5  200  20   0.895 (0.022)  0.883 (0.023)  0.817 (0.027)
0.5  200  25   0.897 (0.021)  0.881 (0.023)  0.827 (0.027)
0.5  200  35   0.910 (0.020)  0.882 (0.023)  0.825 (0.027)
0.3  200  20   0.922 (0.019)  0.894 (0.022)  0.834 (0.026)
0.3  200  25   0.926 (0.018)  0.894 (0.022)  0.832 (0.026)
0.3  200  35   0.925 (0.019)  0.893 (0.022)  0.833 (0.026)
0.5  500  30   0.920 (0.012)  0.890 (0.014)  0.843 (0.016)
0.5  500  35   0.922 (0.012)  0.891 (0.014)  0.840 (0.016)
0.5  500  40   0.921 (0.012)  0.892 (0.014)  0.845 (0.016)
0.3  500  30   0.944 (0.010)  0.928 (0.011)  0.865 (0.015)
0.3  500  35   0.939 (0.010)  0.931 (0.011)  0.868 (0.015)
0.3  500  40   0.940 (0.010)  0.932 (0.011)  0.866 (0.015)
0.5  800  70   0.935 (0.009)  0.928 (0.009)  0.858 (0.012)
0.5  800  100  0.938 (0.009)  0.928 (0.009)  0.856 (0.012)
0.5  800  120  0.940 (0.008)  0.929 (0.009)  0.856 (0.012)
0.3  800  70   0.959 (0.007)  0.945 (0.008)  0.905 (0.010)
0.3  800  100  0.960 (0.007)  0.944 (0.008)  0.905 (0.010)
0.3  800  120  0.956 (0.007)  0.942 (0.008)  0.906 (0.010)
Table 7. Empirical powers (κ = 1.43, α = 0.05).

τ0   T    N    ρ0 = 0.2       ρ0 = 0.5       ρ0 = 0.8
0.5  200  20   0.915 (0.020)  0.890 (0.022)  0.844 (0.026)
0.5  200  25   0.917 (0.019)  0.891 (0.022)  0.841 (0.026)
0.5  200  35   0.913 (0.020)  0.887 (0.023)  0.833 (0.026)
0.3  200  20   0.954 (0.015)  0.937 (0.017)  0.847 (0.025)
0.3  200  25   0.950 (0.015)  0.934 (0.018)  0.851 (0.025)
0.3  200  35   0.951 (0.015)  0.935 (0.018)  0.851 (0.025)
0.5  500  30   0.950 (0.010)  0.935 (0.011)  0.880 (0.014)
0.5  500  35   0.945 (0.010)  0.933 (0.011)  0.888 (0.014)
0.5  500  40   0.943 (0.010)  0.928 (0.011)  0.885 (0.014)
0.3  500  30   0.961 (0.009)  0.952 (0.010)  0.912 (0.013)
0.3  500  35   0.961 (0.009)  0.955 (0.009)  0.901 (0.013)
0.3  500  40   0.962 (0.009)  0.954 (0.009)  0.909 (0.013)
0.5  800  70   0.951 (0.008)  0.936 (0.009)  0.872 (0.012)
0.5  800  100  0.953 (0.007)  0.930 (0.009)  0.878 (0.011)
0.5  800  120  0.947 (0.008)  0.929 (0.009)  0.877 (0.011)
0.3  800  70   0.966 (0.006)  0.955 (0.007)  0.933 (0.009)
0.3  800  100  0.970 (0.006)  0.959 (0.007)  0.919 (0.010)
0.3  800  120  0.968 (0.006)  0.957 (0.007)  0.927 (0.009)
Table 8. Empirical powers (κ = 1.97, α = 0.05).

τ0   T    N    ρ0 = 0.2       ρ0 = 0.5       ρ0 = 0.8
0.5  200  20   0.922 (0.019)  0.895 (0.022)  0.863 (0.024)
0.5  200  25   0.921 (0.019)  0.897 (0.021)  0.866 (0.024)
0.5  200  35   0.925 (0.019)  0.896 (0.022)  0.850 (0.0225)
0.3  200  20   0.955 (0.014)  0.949 (0.015)  0.877 (0.023)
0.3  200  25   0.959 (0.014)  0.954 (0.015)  0.885 (0.023)
0.3  200  35   0.956 (0.014)  0.946 (0.016)  0.888 (0.022)
0.5  500  30   0.958 (0.009)  0.941 (0.010)  0.892 (0.014)
0.5  500  35   0.957 (0.009)  0.940 (0.010)  0.895 (0.014)
0.5  500  40   0.957 (0.009)  0.942 (0.010)  0.890 (0.014)
0.3  500  30   0.976 (0.007)  0.957 (0.009)  0.923 (0.012)
0.3  500  35   0.975 (0.007)  0.962 (0.009)  0.921 (0.012)
0.3  500  40   0.973 (0.007)  0.961 (0.009)  0.925 (0.012)
0.5  800  70   0.958 (0.007)  0.944 (0.008)  0.881 (0.011)
0.5  800  100  0.959 (0.007)  0.942 (0.008)  0.883 (0.011)
0.5  800  120  0.959 (0.007)  0.946 (0.008)  0.884 (0.011)
0.3  800  70   0.978 (0.005)  0.964 (0.007)  0.942 (0.008)
0.3  800  100  0.973 (0.006)  0.965 (0.006)  0.941 (0.008)
0.3  800  120  0.973 (0.006)  0.966 (0.006)  0.945 (0.008)
Table 9. Empirical powers of our method and the kernel method.

T    N   ρ0    Our Method    Kernel Method
                             M = 0.2T    M = 0.3T
200  20  0.2   0.915         0.784       0.751
200  20  0.5   0.890         0.766       0.679
200  20  0.8   0.844         0.757       0.644
200  25  0.2   0.917         0.785       0.755
200  25  0.5   0.891         0.765       0.677
200  25  0.8   0.841         0.755       0.642
200  35  0.2   0.913         0.786       0.752
200  35  0.5   0.887         0.766       0.676
200  35  0.8   0.833         0.752       0.641
500  30  0.2   0.950         0.934       0.910
500  30  0.5   0.935         0.880       0.847
500  30  0.8   0.880         0.849       0.731
500  35  0.2   0.945         0.936       0.911
500  35  0.5   0.933         0.888       0.845
500  35  0.8   0.888         0.851       0.729
500  40  0.2   0.943         0.933       0.910
500  40  0.5   0.928         0.886       0.848
500  40  0.8   0.885         0.850       0.730