In this section, we conduct simulation studies on artificial data to assess the finite-sample performance of the proposed tests. The empirical sizes are close to the nominal levels and the empirical powers are satisfactory. The simulation results show that our method outperforms the kernel method in all listed cases, and the analysis of real data also demonstrates that the method is effective.
4.1. Simulation
We use the R software to carry out the simulation. To save computational time, we only report the results for . The results for the and cases are quite similar.
To investigate the size and power properties of the test, we consider the following data generating process:
Null hypothesis:
against the alternative hypothesis:
another null hypothesis:
against the corresponding alternative hypothesis:
where , , , and . The innovation satisfies Assumption 1. We set the tail index . The heavy-tailed sequence is generated by a small program that can be downloaded from Professor Nolan’s website: https://edspace.american.edu/jpnolan/, accessed on 20 May 2021.
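As an alternative to the program on Nolan’s website, symmetric heavy-tailed (α-stable) innovations can be simulated directly with the standard Chambers–Mallows–Stuck method. The sketch below is a minimal illustration; the tail index value used in the paper is not reproduced in this excerpt, so α = 1.5 here is only an example.

```python
import numpy as np

def rsym_stable(alpha, size, rng=None):
    """Symmetric alpha-stable innovations (beta = 0, unit scale) via the
    Chambers-Mallows-Stuck method."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # U ~ Uniform(-pi/2, pi/2)
    w = rng.exponential(1.0, size)                 # W ~ Exp(1)
    if alpha == 1.0:
        return np.tan(u)                           # standard Cauchy case
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

# Illustrative tail index; the paper's actual value is elided in this excerpt.
eps = rsym_stable(1.5, 300, rng=1)
```

For α < 2 the innovations have infinite variance, which is what produces the ‘outliers’ discussed in the results below.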
The simulation study is based on different sample sizes at nominal levels or . We consider the test statistics (13) and (25). We choose appropriate bootstrap sample sizes N, whose selection is not detailed here. Let , respectively, and set the bootstrap frequency in this section.
In the case of detecting against , the algorithm to calculate the empirical sizes is as follows (Algorithm 1):
Algorithm 1. Calculate empirical sizes for testing against .
Initialize the count variable .
Repeat:
Step A: Generate the data under and calculate the statistics and .
Step B: Repeat Steps 1, 2 and 3 of the bootstrap algorithm in Section 3.3 times and calculate the empirical quantile of , denoted by .
Step C: If , reject and let .
Until 5000 replications are completed.
Return .
The empirical sizes are approximated by the frequency with which the null hypothesis is rejected over the 5000 replications. Calculating the empirical powers is similar; only the data generating process is changed to the one under . The procedure for obtaining the empirical sizes and powers in the case of testing against is analogous.
- (1) The empirical sizes are close to the nominal level in Table 1.
- (2) From Table 2, Table 3 and Table 4, we find that the powers increase as T becomes larger for the same and . For fixed T and , the powers rise gradually as decreases. An earlier change gives a higher empirical power for the same T and ; this is a well-known result in change-point detection. Some powers are equal to 1 in Table 4.
- (3) The larger the tail index , the higher the empirical powers. This is due to the special properties of heavy-tailed sequences: the smaller the tail index , the more likely the sequence is to contain ‘outliers’. The test statistic behaves differently before and after such points, which can seriously affect the performance of the test.
The empirical sizes and powers of the to test are provided in Table 5, Table 6, Table 7 and Table 8; the values in parentheses are the corresponding standard errors. We now present the main conclusions of the simulation.
(1) The empirical sizes are almost the same as the nominal level in Table 5.
(2) From Table 6, Table 7 and Table 8, we find that the powers increase as T becomes larger for the same and . For fixed T and , the powers rise gradually as decreases. An earlier location of the change point results in a higher empirical power; this is a well-known result in change-point detection.
(3) The larger the tail index , the higher the empirical powers. This is due to the special properties of heavy-tailed sequences: the smaller the tail index , the more likely the sequence is to contain ‘outliers’. The test statistic behaves differently before and after such points, which can seriously affect the performance of the test.
We compare our method with the kernel-weighted ratio method (Chen et al. [24]). The empirical powers of the to tests are provided in Table 9. We let , , and the location of the change point . The other parameters are set as before. In the kernel method, we choose the bandwidth , where the start time M is set to or .
Table 9 shows that our test method outperforms the kernel-weighted test method in all listed cases. The empirical powers of our method are always greater than those of the kernel method at the two different start times. The powers increase as T becomes larger for the same . For fixed T, the powers rise gradually as decreases. In particular, our advantage is most obvious when the sample size is 200; in other words, we can obtain high empirical powers with a small sample size, so our method is more efficient. The numerical simulation shows the excellent performance of our method.
4.2. Real-Data Analysis
There is growing evidence that many economic and financial time series have heavy-tailed features, and the data sometimes contain changes in persistence. We apply the ratio test method to analyze foreign exchange rate data. The data set contains 300 monthly Sweden/US foreign exchange rates from January 1971 to December 1995.
Figure 1 shows the real data, which can be found on the website of the Federal Reserve Bank of St. Louis. Figure 2 shows the first-order difference of the original data in Figure 1; from Figure 2, we can see that there exist many ’outliers’.
According to Figure 1, the real data may have a persistence change from to . We apply our method to detect a change in persistence in this sequence. First, we use the bootstrap approximation method to determine the rejection region of the statistic (37). The test statistic turns out to be larger than the critical value, so we reject the null hypothesis; this means there could be a change point in persistence from to . Based on Kim’s [1] method, the estimated change point is 104, which coincides with our detection result.
One remaining question is whether the rejection is caused by a persistence change point or by ’outliers’. To address this and make our conclusion more reliable, we also test the first-order differenced data in Figure 2. The test, using the same parameters as before, does not detect a change in persistence. This result suggests that the initial data contain a possible change point and that the first-order differenced series is stationary.
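This diagnostic can be illustrated with a Kim-type ratio statistic, in the spirit of Kim’s [1] method cited above (statistic (37) itself is not reproduced in this excerpt): for each candidate break, the mean squared partial sums of the demeaned data after the break are compared with those before it, for both the level series and its first difference.

```python
import numpy as np

def kim_ratio(x):
    """Kim-type ratio statistic for a change in persistence (sketch).

    For each candidate break k in the middle 60% of the sample, compare
    mean squared partial sums of the demeaned data after k with those
    before k; a large maximum ratio suggests a change from I(0) to I(1).
    """
    t = len(x)
    lam = {}
    for k in range(int(0.2 * t), int(0.8 * t)):
        pre = x[:k] - x[:k].mean()
        post = x[k:] - x[k:].mean()
        num = np.sum(np.cumsum(post) ** 2) / (t - k) ** 2
        den = np.sum(np.cumsum(pre) ** 2) / k ** 2
        lam[k] = num / den
    k_hat = max(lam, key=lam.get)           # candidate break location
    return lam[k_hat], k_hat

# Usage: the level series versus its first difference, e.g.
#   stat_level, k_hat = kim_ratio(data)
#   stat_diff, _ = kim_ratio(np.diff(data))
# A large stat_level with a modest stat_diff is consistent with a
# persistence change in the levels and a stationary differenced series.
```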
Furthermore, we conclude that there could be a change point in persistence from to . The estimated change point, 104, corresponds to August 1979. Referring to the history of American economic policy, this estimated location can be well interpreted. In the second half of the 1970s, the US government adopted expansionary fiscal and monetary policies to stimulate the economy in response to the high inflation, high unemployment, and slow economic growth of the US economy. After President Reagan took office, the dollar began to strengthen, and the Sweden/US exchange rate reached its highest point in July 1985. This implies that the sequence goes from stationary to nonstationary because of the stimulus of economic policy.