Article

A Jackknife Correction to a Test for Cointegration Rank

by
Marcus J. Chambers
Department of Economics, University of Essex, Wivenhoe Park, Colchester, Essex CO4 3SQ, UK
Econometrics 2015, 3(2), 355-375; https://doi.org/10.3390/econometrics3020355
Submission received: 24 July 2014 / Accepted: 15 May 2015 / Published: 20 May 2015

Abstract:
This paper investigates the performance of a jackknife correction to a test for cointegration rank in a vector autoregressive system. The limiting distributions of the jackknife-corrected statistics are derived and the critical values of these distributions are tabulated. Based on these critical values the finite sample size and power properties of the jackknife-corrected tests are compared with the usual rank test statistic as well as statistics involving a small sample correction and a Bartlett correction, in addition to a bootstrap method. The simulations reveal that all of the corrected tests can provide finite sample size improvements, while maintaining power, although the bootstrap procedure is the most robust across the simulation designs considered.
JEL classifications:
C12; C32

1. Introduction

The concept of cointegration has assumed a prominent role in the analysis of economic and financial time series since the pioneering work of Engle and Granger [1], and tests for the cointegration rank of a vector of time series have become an essential part of the applied econometrician’s toolkit. The most popular test for cointegration rank is the trace statistic proposed by Johansen [2,3] which exploits the reduced rank regression techniques of Anderson [4] in the context of a vector autoregressive (VAR) model. The limiting distribution of the test statistic can be expressed as a functional of a vector Brownian motion process, the dimension of which depends upon the difference between the number of variables under consideration and the cointegration rank under the null hypothesis. Percentage points of the limiting distribution have been tabulated by simulation and can be found, for example, in Johansen [5], Doornik [6] and MacKinnon, Haug and Michelis [7].
The accuracy of the limiting distribution as a description of the finite sample distribution has been examined in a number of studies. Toda [8,9] found that the performance of the tests is dependent on the value of the stationary roots of the process, and that a sample of 100 observations is insufficient to detect the true cointegrating rank when the stationary root is close to one (0.8 or above). Doornik [6] proposed Gamma distribution approximations to the limiting distributions, finding that they are more accurate than previously published tables of critical values, while Nielsen [10] used local asymptotic theory to improve the ability of the limiting distribution to act as an approximation in finite samples.
In view of the experimental evidence reported above, there have been a number of further attempts to improve inference in finite samples when using the asymptotic critical values for the trace test of cointegration rank. Johansen [11] demonstrated how Bartlett corrections can be made to the statistic; these rely on various asymptotic expansions of the statistic’s expectation and result in complicated functions of the model’s parameters that can be estimated from the sample data. A simpler small sample correction factor was suggested by Reinsel and Ahn [12] and involves a degrees-of-freedom type of adjustment to the sample size when calculating the value of the test statistic. This small sample correction was shown to work well by Reimers [13] although, as Johansen [5] (p. 99) notes, the “theoretical justification for this result presents a very difficult mathematical problem.” More computationally intensive bootstrap procedures have recently been advocated by, inter alia, Swensen [14] and Cavaliere, Rahbek and Taylor [15].
The aim of this paper is to analyse the properties of a simple jackknife-corrected test statistic for cointegration rank. The approach is far less demanding, computationally, than bootstrap methods and does not require an explicit analytical derivation of an asymptotic expansion for the sample moment as in the Bartlett approach, merely relying on its existence. The idea behind the jackknife is to combine the statistic based on the full sample of observations with statistics based on a set of sub-samples, the weights used in the linear combination being chosen so that the leading term in the bias expansion is eliminated. Although intended primarily as a method of bias reduction in parameter estimation following the work of Quenouille [16] and Tukey [17], the jackknife can equally be applied to test statistics in order to achieve the same type of outcome as the Bartlett correction. In the case of stationary autoregressive time series Chambers [18] has shown that jackknife methods are capable of producing substantial bias reductions as well as reductions in root mean square errors compared to other methods in the estimation of model parameters; the jackknife results were also shown to be robust to departures from normality and conditional heteroskedasticity as well as other types of misspecification. However, some care has to be taken when applying these techniques with non-stationary data, as pointed out by Chambers and Kyriacou [19,20], who propose methods that can be used to ensure that the jackknife procedure achieves the bias reduction as intended in the case of non-stationarity.
The paper is organised as follows. Section 2 begins by defining the model as well as the test statistic of interest. Three variants of the model are considered, corresponding to different specifications of the deterministic linear trend, although most attention is given to the two variants of greatest empirical interest. The jackknife-corrected version of the statistic is defined and the appropriate limiting distributions are derived and presented in Theorem 1. Section 3 is devoted to simulation results and is divided into two subsections. The first sub-section computes (asymptotic) critical values of the limiting distributions of the jackknife-corrected statistics. The second sub-section is concerned with the finite sample size and power properties of the jackknife-corrected statistics, which are compared with the unadjusted statistic as well as the small-sample adjustment of Reinsel and Ahn [12], the Bartlett correction of Johansen [11] and the bootstrap approach of Cavaliere, Rahbek and Taylor [15]. Section 4 concludes and discusses some directions for future research, while an Appendix provides a proof of Theorem 1 and some details on the simulation method used to derive the critical values in Section 3.1.
The following notation will be used. For a $p \times r$ matrix $C$ of rank $r < p$ there exists a full rank $p \times (p-r)$ matrix $C_\perp$ satisfying $C'C_\perp = 0$; $P_C = C(C'C)^{-1}C'$ denotes the projection matrix of $C$; and $\|C\| = (\sum_{i=1}^{p}\sum_{j=1}^{r} c_{ij}^2)^{1/2}$ denotes the Euclidean norm of $C$. In addition, for a square matrix $A$, $\det[A]$ denotes the determinant, and $I_p$ denotes the $p \times p$ identity matrix. The symbol $=_d$ denotes equality in distribution; $\rightarrow_d$ denotes convergence in distribution; $\rightarrow_p$ denotes convergence in probability; $\Rightarrow$ denotes weak convergence of the relevant probability measures; $L$ denotes the lag operator such that $L^j y_t = y_{t-j}$ for some integer $j$ and variable $y_t$; and $B(s)$ denotes a standard vector Brownian motion process. Functionals of $B(s)$, such as $\int_0^1 B(s)B(s)'\,ds$, are denoted $\int_0^1 BB'$ for notational convenience.

2. The Model and Tests for Cointegration Rank

Following Johansen [5] the model under consideration is the following VAR(k) system in the $p \times 1$ vector $y_t$:
$$y_t = \sum_{i=1}^{k} \Pi_i y_{t-i} + \Phi D_t + \epsilon_t, \quad t = 1, \ldots, T$$
where $\epsilon_1, \ldots, \epsilon_T$ are independent and identically distributed $p \times 1$ random vectors with mean vector zero and positive definite covariance matrix $\Omega$, $D_t$ is a $q \times 1$ vector of deterministic terms, $\Pi_1, \ldots, \Pi_k$ are $p \times p$ matrices of autoregressive coefficients, $\Phi$ is a $p \times q$ matrix of coefficients on the deterministic terms, and the initial values, $y_{-k+1}, \ldots, y_0$, are assumed to be fixed. It is convenient, in the analysis of cointegration, to write Equation (1) in the vector error correction model (VECM) form
$$\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta y_{t-i} + \Phi D_t + \epsilon_t, \quad t = 1, \ldots, T$$
where $\Pi = \sum_{i=1}^{k} \Pi_i - I_p$ and $\Gamma_i = -\sum_{j=i+1}^{k} \Pi_j$ ($i = 1, \ldots, k-1$). The following assumption will be made.
Assumption 1. (i) Let $A(z) = (1-z)I_p - \Pi z - \sum_{i=1}^{k-1} \Gamma_i (1-z) z^i$. Then $\det[A(z)] = 0$ implies that $|z| > 1$ or $z = 1$; (ii) The matrix $\Pi$ has rank $r < p$ and has the representation $\Pi = \alpha\beta'$ where $\alpha$ and $\beta$ are $p \times r$ with rank $r$; (iii) The matrix $\alpha_\perp' \Gamma \beta_\perp$ is nonsingular, where $\Gamma = I_p - \sum_{i=1}^{k-1} \Gamma_i$.
This assumption is common in the cointegration literature and implies (see Theorem 4.2 of Johansen [5], for example) that $y_t$ has the representation
$$y_t = C \sum_{i=1}^{t} (\epsilon_i + \Phi D_i) + C^*(L)(\epsilon_t + \Phi D_t) + P_{\beta_\perp} y_0, \quad t = 1, \ldots, T$$
where $C(z) = (1-z)A(z)^{-1} = \sum_{i=0}^{\infty} C_i z^i$, $C = C(1) = \beta_\perp(\alpha_\perp' \Gamma \beta_\perp)^{-1}\alpha_\perp'$, and $C^*(z) = \sum_{i=0}^{\infty} C_i^* z^i$ is defined by the decomposition $C(z) = C + (1-z)C^*(z)$. This representation is convenient because it decomposes $y_t$ into a stochastic trend component, a stationary component, and a term that depends on the initial condition $y_0$. Assumption 1 also ensures that, although the vector $y_t$ is I(1), both $\Delta y_t - E(\Delta y_t)$ and $\beta' y_t - E(\beta' y_t)$ can be given initial distributions such that they are I(0). A proof of Equation (3) and a discussion of its implications can be found in Johansen [5] (pp. 49–52).
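The mapping from the VAR coefficients to the VECM matrices $\Pi$ and $\Gamma_i$ can be checked numerically. The sketch below is illustrative only (the matrices used in the check are arbitrary, not from the paper):

```python
import numpy as np

def var_to_vecm(Pi_list):
    """Map VAR(k) matrices Pi_1,...,Pi_k to the VECM matrices
    Pi = sum_i Pi_i - I_p and Gamma_i = -sum_{j=i+1}^k Pi_j."""
    p = Pi_list[0].shape[0]
    Pi = sum(Pi_list) - np.eye(p)
    # Gamma_i for i = 1,...,k-1 (Pi_list is 0-based, so Pi_{i+1} = Pi_list[i])
    Gammas = [-sum(Pi_list[i:]) for i in range(1, len(Pi_list))]
    return Pi, Gammas

# Check the identity: Delta y_t = Pi y_{t-1} + sum_i Gamma_i Delta y_{t-i}
# reproduces y_t = sum_i Pi_i y_{t-i} for arbitrary lagged values.
rng = np.random.default_rng(0)
k, p = 3, 2
Pis = [0.1 * rng.standard_normal((p, p)) for _ in range(k)]
ys = [rng.standard_normal(p) for _ in range(k)]   # y_{t-1}, y_{t-2}, y_{t-3}
Pi, Gammas = var_to_vecm(Pis)
y_var = sum(Pis[i] @ ys[i] for i in range(k))
dy = Pi @ ys[0] + sum(Gammas[i] @ (ys[i] - ys[i + 1]) for i in range(k - 1))
print(np.allclose(y_var, ys[0] + dy))   # True
```

The final check confirms that the two parameterisations generate identical dynamics.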
The specification of the deterministic component Φ D t in Equation (2) has implications not only for the interpretation of the error correction model but also for the level process y t itself. The following three leading cases have received most attention in the literature:
  • Case 1: no deterministic components. In this case $\Phi D_t = 0$ and so, from Equations (2) and (3),
$$\Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta y_{t-i} + \epsilon_t, \qquad y_t = C \sum_{i=1}^{t} \epsilon_i + C^*(L)\epsilon_t + P_{\beta_\perp} y_0$$
  • Case 2: restricted intercept. Here $\Phi D_t = \alpha\rho_0$, where $\rho_0$ is $r \times 1$, and hence
$$\Delta y_t = \alpha(\beta' y_{t-1} + \rho_0) + \sum_{i=1}^{k-1} \Gamma_i \Delta y_{t-i} + \epsilon_t, \qquad y_t = C \sum_{i=1}^{t} \epsilon_i + C^*(L)\epsilon_t + \tau_0 + P_{\beta_\perp} y_0$$
    where $\tau_0$ is a vector of intercepts.
  • Case 3: restricted linear trend. In this specification $\Phi D_t = \mu_0 + \alpha\rho_1 t$, where $\mu_0$ and $\rho_1$ are $p \times 1$ and $r \times 1$ vectors respectively. It follows that
$$\Delta y_t = \alpha(\beta' y_{t-1} + \rho_1 t) + \sum_{i=1}^{k-1} \Gamma_i \Delta y_{t-i} + \mu_0 + \epsilon_t, \qquad y_t = C \sum_{i=1}^{t} \epsilon_i + C^*(L)\epsilon_t + \tau_0 + \tau_1 t + P_{\beta_\perp} y_0$$
    where $\tau_0$ and $\tau_1$ are vectors of constants.
Hence in case 1 there are no deterministic terms at all, in case 2 the intercept is restricted to the cointegrating relationships, while in case 3 there is an unrestricted intercept and the time trend only enters through the cointegrating relationships although a linear trend is present in the levels representation.
The trace statistic has become a popular method of testing the null hypothesis of $r < p$ cointegrating vectors against the maintained hypothesis that $\Pi$ has full rank $p$, in which case the process $y_t$ is stationary. In what follows it is convenient to further express the VECM Equation (2) in the form
$$Z_{0t} = \alpha\beta^{*\prime} Z_{1t} + \Psi Z_{2t} + \epsilon_t, \quad t = 1, \ldots, T$$
where $Z_{0t} = \Delta y_t$ and the remaining terms are defined with respect to the three cases concerning the specification of the deterministic component $\Phi D_t$ defined above:
  • Case 1: $\beta^* = \beta$, $Z_{1t} = y_{t-1}$, $Z_{2t} = (\Delta y_{t-1}', \ldots, \Delta y_{t-k+1}')'$, $\Psi = [\Gamma_1, \ldots, \Gamma_{k-1}]$.
  • Case 2: $\beta^* = (\beta', \rho_0)'$, $Z_{1t} = (y_{t-1}', 1)'$, and $\Psi$ and $Z_{2t}$ are defined as in case 1.
  • Case 3: $\beta^* = (\beta', \rho_1)'$, $Z_{1t} = (y_{t-1}', t)'$, $Z_{2t} = (\Delta y_{t-1}', \ldots, \Delta y_{t-k+1}', 1)'$, $\Psi = [\Gamma_1, \ldots, \Gamma_{k-1}, \mu_0]$.
Based on Equation (4) it is possible to define the matrices
$$M_{ij} = T^{-1} \sum_{t=1}^{T} Z_{it} Z_{jt}' \;\; (i, j = 0, 1, 2), \qquad S_{ij} = M_{ij} - M_{i2} M_{22}^{-1} M_{2j} \;\; (i, j = 0, 1)$$
The trace statistic is then obtained as
$$S_r = -T \sum_{i=r+1}^{p} \log(1 - \hat\lambda_i)$$
where the (ordered) eigenvalues $1 > \hat\lambda_1 > \cdots > \hat\lambda_p > 0$ solve the determinantal equation $\det[\lambda S_{11} - S_{10} S_{00}^{-1} S_{01}] = 0$. Let $B(s)$ denote a $(p-r)$-dimensional standard Brownian motion process. Then, as $T \to \infty$,
$$S_r \Rightarrow \mathrm{trace}\left\{\int_0^1 dB\, F' \left(\int_0^1 F F'\right)^{-1} \int_0^1 F\, dB'\right\}$$
where the stochastic process $F(s)$ is defined for each case as follows:
$$\text{Case 1: } F(s) = B(s); \quad \text{Case 2: } F(s) = \left(B(s)', 1\right)'; \quad \text{Case 3: } F(s) = \left(\left(B(s) - \int_0^1 B\right)', \, s - \tfrac{1}{2}\right)'$$
A proof of Equation (6) can be found in Theorem 11.1 of Johansen [5].
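To make the construction concrete, the following is a minimal sketch (not the paper's code) of the trace statistic in the simplest setting, $k = 1$ under case 1, where $Z_{2t}$ is absent and $S_{ij} = M_{ij}$; a full implementation would also partial out the lagged differences and deterministic terms:

```python
import numpy as np

def trace_statistic(y, r):
    """Trace statistic S_r for H0: cointegration rank <= r, fitted VAR(1),
    case 1 (no deterministic terms, so S_ij = M_ij).  Minimal sketch."""
    Z0 = np.diff(y, axis=0)      # Z_0t = Delta y_t
    Z1 = y[:-1]                  # Z_1t = y_{t-1}
    T = Z0.shape[0]
    S00 = Z0.T @ Z0 / T
    S01 = Z0.T @ Z1 / T
    S11 = Z1.T @ Z1 / T
    # eigenvalues of S11^{-1} S10 S00^{-1} S01 solve
    # det[lambda * S11 - S10 S00^{-1} S01] = 0
    lam = np.linalg.eigvals(np.linalg.solve(S11, S01.T @ np.linalg.solve(S00, S01)))
    lam = np.sort(lam.real)[::-1]            # ordered 1 > lam_1 > ... > lam_p > 0
    return -T * np.sum(np.log(1.0 - lam[r:]))

rng = np.random.default_rng(0)
rw = np.cumsum(rng.standard_normal((500, 2)), axis=0)  # two independent random walks
print(trace_statistic(rw, 0))   # small value: no cointegration in the data
```

For data without cointegration the statistic is typically below the asymptotic critical value, while for stationary (full rank) data it is far above it.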
The asymptotic distribution given in Equation (6) has been found to provide a poor approximation to the finite sample distribution of $S_r$ in a number of cases, leading to alternative approaches being developed. Recent work has suggested bootstrap techniques as a way of improving inference concerning the cointegration rank in finite samples, for example Swensen [14] and Cavaliere, Rahbek and Taylor [15], while Bartlett corrections have been proposed by Johansen [11]. The idea behind the Bartlett correction is to adjust the statistic $S_r$ so that its finite sample distribution is closer to the limiting distribution. To see how this works, suppose it is possible to expand the expectation of $S_r$ as
$$E(S_r) = a_0 + \frac{a_1}{T} + O(T^{-2})$$
where $a_0$ denotes the limit of the expectation as $T \to \infty$ and $a_1$ is either a known constant (typically a function of the model parameters) or can be estimated consistently from the sample data. Then the adjusted statistic
$$S_r^B = \left(1 + \frac{a_1}{a_0}\frac{1}{T}\right)^{-1} S_r$$
can be shown to satisfy $E(S_r^B) = a_0 + O(T^{-2})$, thereby improving the accuracy of the limiting distribution as an approximation to the finite sample distribution (at least in terms of the mean of the distribution). Johansen [11] derives expressions for the Bartlett correction factor which depend on the parameters of the model in a complicated way and, hence, must be estimated from the sample data. Simulations reveal that the adjusted statistic improves the size of the cointegration rank test in a large part of the parameter space.
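The recentring effect of the adjustment can be checked on a toy statistic (synthetic, not $S_r$ itself) whose expectation is $a_0 + a_1/T$ by construction; here the correction returns a mean of exactly $a_0$, since the toy has no higher-order terms. The values of $a_0$, $a_1$ and $T$ below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a0, a1, T, R = 3.0, 12.0, 50, 100_000
S = a0 + a1 / T + 0.1 * rng.standard_normal(R)   # E(S) = a0 + a1/T = 3.24
SB = S / (1.0 + (a1 / a0) / T)                   # Bartlett-type adjustment
print(round(S.mean(), 2), round(SB.mean(), 2))   # 3.24 3.0
```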
A procedure related to the Bartlett approach is the jackknife. It achieves the same order of reduction in the bias of the statistic but does not require a formal derivation of the precise terms in the expansion in Equation (8), merely relying on the existence of such a representation. The idea is to combine the statistic $S_r$ in a linear combination with the mean of a set of $m$ statistics obtained from sub-samples, the weights being chosen to eliminate the first-order bias term $a_1/T$. Suppose, then, that the full sample of observations is divided into $m$ non-overlapping sub-samples, each sub-sample containing $\ell = T/m$ observations. If $S_{rj}$ denotes the statistic computed from sub-sample $j$ then the jackknife statistic is defined by
$$S_{r,m}^J = \frac{m}{m-1} S_r - \frac{1}{m-1} \cdot \frac{1}{m} \sum_{j=1}^{m} S_{rj}$$
Provided that $E(S_{rj}) = a_0 + a_1 \ell^{-1} + O(\ell^{-2})$ for $j = 1, \ldots, m$ and $\ell = O(T)$, it is straightforward to show that $E(S_{r,m}^J) = a_0 + O(T^{-2})$ by substitution of the relevant expressions. Hence both statistics, $S_r^B$ and $S_{r,m}^J$, achieve the same order of bias reduction but by different means.
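The bias-eliminating weighting can likewise be checked on a synthetic statistic whose expectation is $a_0 + a_1/\ell$ for a sample of length $\ell$; in this toy (no higher-order terms) the combination removes the $O(T^{-1})$ term exactly. All parameter values are illustrative:

```python
import numpy as np

def jackknife(full_stat, sub_stats):
    """S^J_{r,m} = m/(m-1) * S_r - (1/(m-1)) * mean of the m sub-sample stats."""
    m = len(sub_stats)
    return m / (m - 1) * full_stat - np.mean(sub_stats, axis=0) / (m - 1)

rng = np.random.default_rng(0)
a0, a1, T, m, R = 3.0, 12.0, 100, 2, 100_000
ell = T // m                                         # sub-sample length
full = a0 + a1 / T + 0.1 * rng.standard_normal(R)    # E = a0 + a1/T
subs = [a0 + a1 / ell + 0.1 * rng.standard_normal(R) for _ in range(m)]
J = jackknife(full, subs)                            # elementwise over R draws
print(round(full.mean(), 2), round(J.mean(), 2))     # 3.12 3.0
```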
Although valid in cases 2 and 3, the above argument concerning the jackknife statistic $S_{r,m}^J$ falters in case 1 because it assumes that the sub-sample statistics, $S_{rj}$ ($j = 1, \ldots, m$), all share the same expansion in Equation (8) as the full-sample statistic $S_r$. It has been shown by Chambers and Kyriacou [19,20] that, in a univariate setting with a unit root, the sub-sample statistics (for $j > 1$) have different properties from the full sample statistic in the limit, owing to the stochastic order of magnitude of the pre-sub-sample value, and the same phenomenon also arises here in case 1. The implication is that the limiting distributions of the sub-sample statistics $S_{rj}$ (at least for $j > 1$) will differ from that of $S_r$; the expansions of $E(S_{rj})$ will differ from that of $E(S_r)$; and, hence, the jackknife statistic (as defined above) will not fully eliminate the $O(T^{-1})$ term in the bias. These problems do not arise in cases 2 and 3 because the presence of the intercept and/or time trend ensures that the distributions are invariant to the initial (pre-sub-sample) conditions. Although it is possible to overcome these problems in case 1 by simply subtracting $y_{(j-1)\ell}$ from the observations in sub-sample $j$ (an idea proposed in the univariate unit root setting by Chambers and Kyriacou [19]), we do not pursue this avenue any further in view of the limited applicability of case 1 in practice. Instead, we focus on the application of the jackknife correction in the more empirically relevant cases 2 and 3.
In order to economise on notation it is convenient to define the functional
$$Q(U, V, \delta) = \mathrm{trace}\left\{\int_\delta dU\, V' \left(\int_\delta V V'\right)^{-1} \int_\delta V\, dU'\right\}$$
where $U(s)$ and $V(s)$ are vector stochastic processes defined for $s \in \delta$, and to define the intervals $\delta_0 = [0, 1]$ and $\delta_{j,m} = [(j-1)/m, j/m]$ ($j = 1, \ldots, m$). With this notation the limiting distribution in Equation (6), for example, can be represented as
$$S_r \Rightarrow Q(B, F, \delta_0)$$
The formal statement of the main result is as follows.
Theorem 1. Let $y_1, \ldots, y_T$ be generated according to Equation (2) and let Assumption 1 hold. Then, as $T \to \infty$:
(a)
If $m$ is fixed:
  • Case 2. $S_{rj} \Rightarrow Q(B, F_2, \delta_{j,m})$ ($j = 1, \ldots, m$), and
$$S_{r,m}^J \Rightarrow \frac{m}{m-1} Q(B, F_2, \delta_0) - \frac{1}{m-1} \cdot \frac{1}{m} \sum_{j=1}^{m} Q(B, F_2, \delta_{j,m})$$
    where $F_2(s) = (B(s)', 1)'$. Furthermore, $Q(B, F_2, \delta_{j,m})$ and $Q(B, F_2, \delta_{k,m})$ are independent for $j \neq k$.
  • Case 3. $S_{rj} \Rightarrow Q(B, F_{j,m}, \delta_{j,m})$ ($j = 1, \ldots, m$), and
$$S_{r,m}^J \Rightarrow \frac{m}{m-1} Q(B, F_3, \delta_0) - \frac{1}{m-1} \cdot \frac{1}{m} \sum_{j=1}^{m} Q(B, F_{j,m}, \delta_{j,m})$$
    where
$$F_3(s) = \left(\left(B(s) - \int_0^1 B\right)', \, s - \frac{1}{2}\right)'$$
    and
$$F_{j,m}(s) = \left(\left(B(s) - m \int_{(j-1)/m}^{j/m} B(u)\,du\right)', \; s - \frac{j - \frac{1}{2}}{m}\right)', \quad j = 1, \ldots, m$$
    Furthermore, $Q(B, F_{j,m}, \delta_{j,m})$ and $Q(B, F_{k,m}, \delta_{k,m})$ are independent for $j \neq k$.
(b)
If $m^{-1} + mT^{-1} \to 0$:
  • Case 2. $S_{r,m}^J \Rightarrow Q(B, F_2, \delta_0)$.
  • Case 3. $S_{r,m}^J \Rightarrow Q(B, F_3, \delta_0)$.
Theorem 1(a) shows that, when the number of sub-samples $m$ is fixed, the limiting distribution of the jackknife-corrected statistic is the corresponding linear combination of the limiting distributions of the full- and sub-sample statistics. Note that the length of each sub-sample, $\ell = T/m$, increases with $T$ for fixed $m$. However, when $m$ is allowed to increase with $T$, but at a slower rate, Theorem 1(b) shows that the limiting distribution is equivalent to that of the full-sample statistic alone. In this case note that, if $mT^{-1} \to 0$ as $T \to \infty$, then $\ell = m^{-1}T \to \infty$. In order to use these distributions for inference it is necessary to obtain the appropriate critical values; these are provided in Section 3.1 for a range of values of $m$ and $p - r$. Further analysis of the finite sample properties of the jackknife-corrected statistics is provided in the next section by means of Monte Carlo simulations.
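As an illustration of how such nonstandard limiting distributions are simulated, the sketch below draws from $Q(B, F_2, \delta_0)$ by approximating $B$ with a normalised random walk. This is a simplified discretisation for exposition only, not the Appendix method, and the step and replication counts are far below those used for the tabulations:

```python
import numpy as np

def draw_Q_case2(dim, n=400, rng=None):
    """One Monte Carlo draw of Q(B, F_2, delta_0) with F_2(s) = (B(s)', 1)'."""
    rng = rng or np.random.default_rng()
    dB = rng.standard_normal((n, dim)) / np.sqrt(n)   # Brownian increments
    B = np.cumsum(dB, axis=0)
    Blag = np.vstack([np.zeros((1, dim)), B[:-1]])    # B at left endpoints
    F = np.hstack([Blag, np.ones((n, 1))])            # F_2(s) = (B(s)', 1)'
    A = F.T @ dB                                      # approximates int F dB'
    M = F.T @ F / n                                   # approximates int F F' ds
    return np.trace(A.T @ np.linalg.solve(M, A))

rng = np.random.default_rng(0)
draws = np.array([draw_Q_case2(1, rng=rng) for _ in range(2000)])
print(np.quantile(draws, 0.95))   # rough 95% point for p - r = 1, case 2
```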

3. Simulation Results

3.1. Critical Values of Limiting Distributions

The limiting distributions of the jackknife statistic $S_{r,m}^J$ presented in Theorem 1 for fixed values of the jackknife parameter $m$ are nonstandard and, in order to be useful in practice, it is necessary to obtain the appropriate critical values. Table 1 and Table 2 provide the 90%, 95% and 99% points of the distributions for values of $p - r$ ranging from 1 to 12 for the empirically relevant cases 2 and 3, respectively, and for values of $m \in \{2, 3, 4, 5, 6, 8, 10, 12, 16, 20\}$. A total of 100,000 replications were carried out with a sample size of $T = \max\{1200, 100m\}$ spanning the interval $[0, 1]$, ensuring that when $m \geq 12$ each sub-sample (of length $\ell = T/m$) contains 100 points. The method described in Chapter 15 of Johansen [5] was employed, with suitable modifications to allow for the sub-sample statistics used in constructing the jackknife corrections; details are provided in the Appendix. It can be seen that, in all cases, the critical values for $S_{r,m}^J$ are larger than those for $S_r$ that are reported in, for example, Johansen [5], Doornik [6] and MacKinnon, Haug and Michelis [7]. The critical values also decrease as $m$ increases for a given value of $p - r$.
Table 1. Percentage points of limiting distributions of $S_{r,m}^J$: case 2.

90% points:

| $p-r$ | $m=2$ | $m=3$ | $m=4$ | $m=5$ | $m=6$ | $m=8$ | $m=10$ | $m=12$ | $m=16$ | $m=20$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10.05 | 9.08 | 8.66 | 8.42 | 8.26 | 8.06 | 7.96 | 7.88 | 7.80 | 7.75 |
| 2 | 22.25 | 20.50 | 19.76 | 19.38 | 19.11 | 18.78 | 18.62 | 18.50 | 18.36 | 18.28 |
| 3 | 38.21 | 35.81 | 34.79 | 34.22 | 33.88 | 33.43 | 33.20 | 33.04 | 32.85 | 32.74 |
| 4 | 58.09 | 54.96 | 53.67 | 52.98 | 52.54 | 51.98 | 51.66 | 51.46 | 51.20 | 51.06 |
| 5 | 82.03 | 78.19 | 76.62 | 75.75 | 75.19 | 74.55 | 74.18 | 73.92 | 73.61 | 73.43 |
| 6 | 109.89 | 105.40 | 103.53 | 102.51 | 101.85 | 101.06 | 100.61 | 100.32 | 99.97 | 99.75 |
| 7 | 141.58 | 136.48 | 134.26 | 133.11 | 132.40 | 131.53 | 130.98 | 130.65 | 130.24 | 129.98 |
| 8 | 177.55 | 171.72 | 169.24 | 167.89 | 167.05 | 165.99 | 165.43 | 165.06 | 164.59 | 164.30 |
| 9 | 217.22 | 210.62 | 207.94 | 206.44 | 205.49 | 204.39 | 203.73 | 203.29 | 202.75 | 202.41 |
| 10 | 260.95 | 253.74 | 250.84 | 249.23 | 248.23 | 246.90 | 246.18 | 245.69 | 245.07 | 244.68 |
| 11 | 308.49 | 300.83 | 297.62 | 295.80 | 294.65 | 293.28 | 292.50 | 291.92 | 291.25 | 290.82 |
| 12 | 360.20 | 351.68 | 348.22 | 346.20 | 345.04 | 343.50 | 342.64 | 342.07 | 341.29 | 340.80 |

95% points:

| $p-r$ | $m=2$ | $m=3$ | $m=4$ | $m=5$ | $m=6$ | $m=8$ | $m=10$ | $m=12$ | $m=16$ | $m=20$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 12.56 | 11.26 | 10.68 | 10.35 | 10.14 | 9.87 | 9.71 | 9.62 | 9.50 | 9.43 |
| 2 | 25.89 | 23.65 | 22.74 | 22.18 | 21.82 | 21.38 | 21.16 | 20.98 | 20.81 | 20.69 |
| 3 | 42.93 | 39.85 | 38.50 | 37.74 | 37.29 | 36.71 | 36.38 | 36.17 | 35.91 | 35.75 |
| 4 | 63.91 | 59.93 | 58.27 | 57.41 | 56.83 | 56.08 | 55.69 | 55.40 | 55.05 | 54.87 |
| 5 | 89.01 | 84.13 | 82.07 | 80.92 | 80.19 | 79.32 | 78.78 | 78.46 | 78.05 | 77.82 |
| 6 | 117.86 | 112.19 | 109.73 | 108.37 | 107.55 | 106.49 | 105.92 | 105.53 | 105.06 | 104.80 |
| 7 | 150.83 | 144.14 | 141.37 | 139.81 | 138.84 | 137.70 | 137.02 | 136.57 | 136.03 | 135.69 |
| 8 | 187.76 | 180.10 | 177.07 | 175.32 | 174.21 | 172.93 | 172.21 | 171.70 | 171.08 | 170.71 |
| 9 | 228.63 | 220.14 | 216.60 | 214.86 | 213.53 | 212.13 | 211.23 | 210.73 | 210.02 | 209.60 |
| 10 | 273.43 | 264.11 | 260.24 | 258.12 | 256.76 | 255.20 | 254.19 | 253.58 | 252.77 | 252.32 |
| 11 | 321.89 | 311.80 | 307.72 | 305.46 | 303.90 | 302.13 | 301.13 | 300.43 | 299.55 | 299.01 |
| 12 | 374.52 | 363.51 | 359.00 | 356.62 | 355.07 | 353.12 | 351.97 | 351.20 | 350.28 | 349.67 |

99% points:

| $p-r$ | $m=2$ | $m=3$ | $m=4$ | $m=5$ | $m=6$ | $m=8$ | $m=10$ | $m=12$ | $m=16$ | $m=20$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 17.99 | 16.02 | 15.21 | 14.64 | 14.30 | 13.90 | 13.65 | 13.48 | 13.28 | 13.15 |
| 2 | 33.52 | 30.41 | 28.92 | 28.17 | 27.67 | 26.97 | 26.62 | 26.36 | 26.08 | 25.90 |
| 3 | 52.54 | 48.08 | 46.04 | 45.00 | 44.30 | 43.44 | 43.02 | 42.65 | 42.23 | 42.02 |
| 4 | 75.56 | 69.88 | 67.45 | 66.17 | 65.22 | 64.23 | 63.61 | 63.20 | 62.70 | 62.43 |
| 5 | 102.65 | 95.74 | 92.60 | 91.02 | 89.99 | 88.76 | 88.05 | 87.54 | 86.95 | 86.58 |
| 6 | 133.52 | 125.28 | 121.70 | 119.75 | 118.60 | 117.10 | 116.25 | 115.73 | 115.05 | 114.65 |
| 7 | 168.40 | 159.21 | 155.18 | 152.98 | 151.64 | 149.90 | 148.84 | 148.21 | 147.48 | 146.98 |
| 8 | 207.38 | 196.97 | 192.68 | 190.16 | 188.48 | 186.65 | 185.54 | 184.77 | 183.87 | 183.36 |
| 9 | 249.98 | 238.76 | 233.74 | 231.05 | 229.56 | 227.31 | 225.92 | 225.21 | 224.21 | 223.54 |
| 10 | 296.32 | 283.65 | 278.30 | 275.06 | 273.38 | 271.14 | 269.66 | 268.83 | 267.75 | 267.06 |
| 11 | 347.61 | 334.18 | 327.61 | 324.53 | 322.64 | 320.06 | 318.46 | 317.51 | 316.24 | 315.52 |
| 12 | 402.54 | 387.33 | 380.59 | 377.15 | 374.75 | 372.10 | 370.54 | 369.34 | 368.02 | 367.27 |
Table 2. Percentage points of limiting distributions of $S_{r,m}^J$: case 3.

90% points:

| $p-r$ | $m=2$ | $m=3$ | $m=4$ | $m=5$ | $m=6$ | $m=8$ | $m=10$ | $m=12$ | $m=16$ | $m=20$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 18.76 | 14.91 | 13.51 | 12.79 | 12.35 | 11.85 | 11.57 | 11.39 | 11.18 | 11.05 |
| 2 | 35.82 | 29.69 | 27.56 | 26.49 | 25.85 | 25.10 | 24.68 | 24.41 | 24.10 | 23.92 |
| 3 | 56.12 | 47.98 | 45.28 | 43.88 | 43.04 | 42.07 | 41.55 | 41.20 | 40.79 | 40.55 |
| 4 | 80.08 | 70.16 | 66.83 | 65.15 | 64.14 | 62.95 | 62.28 | 61.85 | 61.32 | 61.02 |
| 5 | 107.99 | 96.36 | 92.42 | 90.42 | 89.22 | 87.82 | 87.03 | 86.52 | 85.92 | 85.56 |
| 6 | 139.70 | 126.38 | 121.86 | 119.54 | 118.11 | 116.50 | 115.59 | 114.99 | 114.27 | 113.85 |
| 7 | 175.42 | 160.26 | 155.23 | 152.63 | 151.08 | 149.19 | 148.17 | 147.49 | 146.67 | 146.18 |
| 8 | 215.34 | 198.49 | 192.84 | 189.80 | 188.06 | 185.95 | 184.79 | 184.05 | 183.11 | 182.56 |
| 9 | 258.79 | 240.45 | 234.13 | 230.83 | 228.88 | 226.55 | 225.27 | 224.43 | 223.37 | 222.73 |
| 10 | 306.55 | 286.65 | 279.68 | 276.15 | 273.91 | 271.35 | 269.92 | 268.96 | 267.81 | 267.10 |
| 11 | 358.30 | 336.52 | 329.07 | 325.17 | 322.77 | 319.95 | 318.35 | 317.36 | 316.08 | 315.31 |
| 12 | 413.57 | 390.29 | 382.24 | 377.98 | 375.45 | 372.47 | 370.74 | 369.57 | 368.15 | 367.31 |

95% points:

| $p-r$ | $m=2$ | $m=3$ | $m=4$ | $m=5$ | $m=6$ | $m=8$ | $m=10$ | $m=12$ | $m=16$ | $m=20$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 22.34 | 17.62 | 15.94 | 15.09 | 14.56 | 13.95 | 13.61 | 13.39 | 13.13 | 12.99 |
| 2 | 40.58 | 33.37 | 30.91 | 29.64 | 28.88 | 28.00 | 27.50 | 27.18 | 26.81 | 26.59 |
| 3 | 61.90 | 52.53 | 49.38 | 47.76 | 46.80 | 45.67 | 45.05 | 44.63 | 44.13 | 43.85 |
| 4 | 86.92 | 75.52 | 71.70 | 69.72 | 68.56 | 67.21 | 66.43 | 65.93 | 65.33 | 64.98 |
| 5 | 115.89 | 102.49 | 98.05 | 95.75 | 94.35 | 92.70 | 91.79 | 91.17 | 90.45 | 90.03 |
| 6 | 148.70 | 133.59 | 128.43 | 125.74 | 124.12 | 122.26 | 121.16 | 120.47 | 119.65 | 119.17 |
| 7 | 185.48 | 168.61 | 162.76 | 159.67 | 157.88 | 155.67 | 154.43 | 153.64 | 152.69 | 152.11 |
| 8 | 226.37 | 207.59 | 201.08 | 197.68 | 195.59 | 193.16 | 191.79 | 190.91 | 189.85 | 189.20 |
| 9 | 270.96 | 250.37 | 243.27 | 239.51 | 237.30 | 234.62 | 233.07 | 232.11 | 230.93 | 230.18 |
| 10 | 320.06 | 297.55 | 289.65 | 285.71 | 283.12 | 280.20 | 278.50 | 277.40 | 276.05 | 275.23 |
| 11 | 372.32 | 347.92 | 339.37 | 334.93 | 332.33 | 329.16 | 327.29 | 326.12 | 324.66 | 323.74 |
| 12 | 428.72 | 402.54 | 393.18 | 388.61 | 385.63 | 382.25 | 380.20 | 378.93 | 377.35 | 376.35 |

99% points:

| $p-r$ | $m=2$ | $m=3$ | $m=4$ | $m=5$ | $m=6$ | $m=8$ | $m=10$ | $m=12$ | $m=16$ | $m=20$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 30.24 | 23.55 | 21.28 | 20.10 | 19.36 | 18.50 | 18.02 | 17.73 | 17.38 | 17.16 |
| 2 | 50.52 | 40.95 | 37.71 | 36.13 | 35.12 | 33.99 | 33.31 | 32.89 | 32.40 | 32.10 |
| 3 | 73.77 | 61.94 | 57.89 | 55.85 | 54.50 | 53.10 | 52.28 | 51.75 | 51.14 | 50.75 |
| 4 | 100.20 | 86.44 | 81.56 | 79.09 | 77.66 | 75.85 | 74.89 | 74.23 | 73.47 | 72.99 |
| 5 | 131.58 | 115.32 | 109.74 | 106.87 | 105.06 | 102.93 | 101.78 | 100.97 | 100.02 | 99.55 |
| 6 | 166.69 | 147.92 | 141.48 | 138.09 | 136.03 | 133.67 | 132.33 | 131.43 | 130.33 | 129.73 |
| 7 | 205.79 | 184.90 | 177.73 | 173.88 | 171.62 | 168.78 | 167.29 | 166.32 | 165.11 | 164.36 |
| 8 | 248.25 | 225.64 | 217.36 | 213.17 | 210.70 | 207.55 | 205.74 | 204.69 | 203.27 | 202.46 |
| 9 | 294.75 | 269.95 | 261.04 | 256.53 | 253.66 | 250.36 | 248.36 | 247.22 | 245.70 | 244.75 |
| 10 | 344.98 | 317.65 | 308.29 | 303.22 | 300.04 | 296.43 | 294.31 | 293.04 | 291.35 | 290.38 |
| 11 | 399.79 | 370.35 | 359.97 | 354.38 | 351.15 | 347.01 | 344.72 | 343.19 | 341.39 | 340.35 |
| 12 | 458.22 | 426.77 | 415.51 | 409.54 | 405.96 | 401.61 | 399.07 | 397.46 | 395.48 | 394.31 |

3.2. Finite Sample Properties

The finite sample size and power properties of the jackknife-corrected test statistics were investigated using the simulation model adopted by Cavaliere, Rahbek and Taylor [15] (denoted CRT12) for the purpose of evaluating their bootstrap procedure. The model takes $p = 4$ and is given by
$$\Delta y_t = \alpha\beta' y_{t-1} + \Gamma_1 \Delta y_{t-1} + \epsilon_t, \quad t = 1, \ldots, T$$
where $\epsilon_t$ is a vector of normally distributed independent random variables with covariance matrix $I_4$, the sample size $T \in \{50, 100, 200\}$, and the initial condition is $y_0 = \Delta y_0 = 0$. The short-run adjustment matrices are defined as $\alpha = (a, 0, 0, 0)'$ and
$$\Gamma_1 = \begin{pmatrix} \gamma & \delta & 0 & 0 \\ \delta & \gamma & 0 & 0 \\ 0 & 0 & \gamma & 0 \\ 0 & 0 & 0 & \gamma \end{pmatrix}$$
where $a$, $\gamma$ and $\delta$ are scalar parameters defined below for each of the three data generation processes (DGPs) considered:
  • DGP1: $a = -0.4$, $\beta = (1, 0, 0, 0)'$, $\gamma = 0.8$, $\delta \in \{0, 0.2\}$.
  • DGP2: $a = -0.4$, $\beta = (1, 0, 0, 0)'$, $\gamma = 0.5$, $\delta \in \{0, 0.2\}$.
  • DGP3: $a = 0$, $\delta = 0$, $\gamma \in \{0, 0.5, 0.8, 0.9\}$.
In DGPs 1 and 2 there is a single cointegrating vector, while in DGP3 there is no cointegration and $y_t$ is an I(1) VAR(2) process (or, equivalently, $\Delta y_t$ is an I(0) VAR(1) process). The form of cointegration in DGPs 1 and 2 was considered in CRT12 and implies that $y_{1t}$ is I(0). The value of $\delta$ that appears in the matrix $\Gamma_1$ was used by CRT12 because it is related to some auxiliary conditions relevant for the bootstrap procedure of Swensen [21]. These conditions are satisfied for $\delta = 0$ but not for $\delta = 0.2$. CRT12 also included two additional values of $\delta$ (equal to 0.1 and 0.3) but, as will be seen, the value of $\delta$ does not have a major impact on the test procedures under consideration and so we restrict attention to just two of the four values used in CRT12. Note that, in DGP3, $\delta = 0$ and the matrix $\Gamma_1$ is diagonal with the scalar $\gamma$ forming the diagonal elements.
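The simulation DGP above can be sketched as follows; this is an illustrative re-implementation (not the authors' code), with defaults corresponding to DGP1 with $\delta = 0.2$:

```python
import numpy as np

def simulate_dgp(T, a=-0.4, gamma=0.8, delta=0.2, rng=None):
    """Generate y_1,...,y_T from Delta y_t = alpha beta' y_{t-1}
    + Gamma_1 Delta y_{t-1} + eps_t with y_0 = Delta y_0 = 0 and p = 4."""
    rng = rng or np.random.default_rng()
    alpha = np.array([a, 0.0, 0.0, 0.0])
    beta = np.array([1.0, 0.0, 0.0, 0.0])
    G1 = np.array([[gamma, delta, 0.0, 0.0],
                   [delta, gamma, 0.0, 0.0],
                   [0.0, 0.0, gamma, 0.0],
                   [0.0, 0.0, 0.0, gamma]])
    y = np.zeros((T + 1, 4))
    dy = np.zeros(4)                 # Delta y_0 = 0
    for t in range(1, T + 1):
        dy = alpha * (beta @ y[t - 1]) + G1 @ dy + rng.standard_normal(4)
        y[t] = y[t - 1] + dy
    return y[1:]

y = simulate_dgp(200, rng=np.random.default_rng(0))
print(y.shape)   # (200, 4)
```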
It is also necessary for the DGPs to satisfy the three parts of Assumption 1. The first requires the roots of the equation $\det[A(z)] = 0$ to have modulus greater than or equal to one, where in this case $A(z) = (1-z)I_4 - \alpha\beta' z - \Gamma_1 (1-z) z$. In DGPs 1 and 2 there are three unit roots and in DGP3 there are four; the moduli of the non-unit roots are reported in Table 3, where it can be seen that Assumption 1(i) is satisfied in all cases. Comparing DGP1 with DGP2, the effect of reducing $\gamma$ from 0.8 to 0.5 is to increase the modulus of each of the non-unit roots. In DGP3, increasing the parameter $\gamma$ reduces the non-unit roots towards unity and, in the extreme case of $\gamma = 1$, $y_t$ becomes an I(2) process. It is well known that the rank test performs poorly as this extreme case is approached; see, for example, the simulation evidence in Johansen [11]. Note that, when $\gamma = 0$, there are no roots in addition to the four unit roots because, in this case, $A(z) = (1-z)I_4$ and hence $\det[A(z)] = (1-z)^4$. Assumption 1(ii) is obviously satisfied, while it can be shown that $\det[\alpha_\perp' \Gamma \beta_\perp] = (1-\gamma)^3$ and hence Assumption 1(iii) is satisfied provided $\gamma \neq 1$.
A total of seven test statistics for cointegration rank were considered. The first is the standard (unadjusted) trace statistic $S_r$ defined in Equation (5). The second uses the small sample correction proposed by Reinsel and Ahn [12]; the resulting statistic, denoted $S_r^{RA}$, is defined by
$$S_r^{RA} = -(T - pk) \sum_{i=r+1}^{p} \log(1 - \hat\lambda_i) = \frac{(T - pk)}{T} S_r$$
The third statistic is the Bartlett-corrected statistic defined in Equation (9); full details concerning computation of the correction factors can be found in Johansen [11]. The fourth method is based on the bootstrap procedures of CRT12. The bootstrap samples are obtained by estimating the VECM under the null hypothesis, checking that the roots of the estimated matrix polynomial equation $\det[A(z)] = 0$ satisfy Assumption 1(i), and then generating a total of $N_{BS}$ samples recursively using an appropriate method. In the simulations reported here a wild bootstrap was employed in which, if $\hat\epsilon_{it}$ denotes element $i$ of the residual vector $\hat\epsilon_t$, the residuals used for the bootstrap samples were of the form
$$\hat\epsilon_{it}^{BS} = u_{it} \left(\hat\epsilon_{it} - T^{-1} \sum_{t=1}^{T} \hat\epsilon_{it}\right), \quad i = 1, \ldots, 4, \; t = 1, \ldots, T$$
where the $u_{it}$ are independent normal variates. The statistic $S_r$ is computed in each bootstrap sample and the critical value is obtained from the distribution of $S_r$ over the $N_{BS}$ bootstrap samples. We denote this test by $S_r^{BS}$ but emphasise that the test statistic is actually $S_r$, which is compared with the critical value from the finite sample bootstrap distribution rather than the critical value from the asymptotic distribution. Full details of the procedure can be found in CRT12.
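The wild bootstrap recolouring step, matching the displayed formula, can be sketched as below (a hypothetical helper; the residual matrix used in the demonstration is random, not from an estimated VECM):

```python
import numpy as np

def wild_bootstrap_residuals(resid, rng=None):
    """One wild-bootstrap draw: eps^BS_it = u_it * (eps_it - mean over t)."""
    rng = rng or np.random.default_rng()
    centred = resid - resid.mean(axis=0)        # subtract T^{-1} sum_t eps_it
    u = rng.standard_normal(resid.shape)        # u_it independent N(0, 1)
    return u * centred

rng = np.random.default_rng(0)
resid = rng.standard_normal((100, 4)) + 0.3     # stand-in for VECM residuals
eps_bs = wild_bootstrap_residuals(resid, rng)
print(eps_bs.shape)   # (100, 4)
```

Because the $u_{it}$ have mean zero, the bootstrap residuals are mean zero by construction while preserving the pattern of the original residuals' magnitudes.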
Table 3. Moduli of non-unit roots in simulations.

| DGP | $\delta$ or $\gamma$ | Moduli |
|---|---|---|
| DGP1 | $\delta = 0.0$ | 1.1180, 1.1180, 1.2500, 1.2500, 1.2500 |
| DGP1 | $\delta = 0.2$ | 1.1335, 1.1335, 1.2500, 1.2500, 1.2972 |
| DGP2 | $\delta = 0.0$ | 1.4142, 1.4142, 2.0000, 2.0000, 2.0000 |
| DGP2 | $\delta = 0.2$ | 1.3639, 1.3639, 2.0000, 2.0000, 2.5599 |
| DGP3 | $\gamma = 0.5$ | 2.0000, 2.0000, 2.0000, 2.0000 |
| DGP3 | $\gamma = 0.8$ | 1.2500, 1.2500, 1.2500, 1.2500 |
| DGP3 | $\gamma = 0.9$ | 1.1111, 1.1111, 1.1111, 1.1111 |
In addition to the above statistics, three versions of the jackknife statistic are considered. The first is $S_{r,m}^J$ defined in Equation (10). This was computed for a range of values of $m$ where practicable, although we report mainly the results for $m = 2$; details of how the tests perform for other values of $m$ are also provided. The remaining two jackknife statistics are based on small sample adjustments to either the full sample statistic $S_r$ and/or the sub-sample statistics $S_{rj}$ upon which the jackknife is based. In particular the two additional jackknife statistics are defined by
$$S_{r,m}^{J1} = \frac{m}{m-1} S_r^{RA} - \frac{1}{m-1} \cdot \frac{1}{m} \sum_{j=1}^{m} S_{rj}$$
$$S_{r,m}^{J2} = \frac{m}{m-1} S_r^{RA} - \frac{1}{m-1} \cdot \frac{1}{m} \sum_{j=1}^{m} S_{rj}^{RA}$$
in which the small sample adjusted sub-sample statistics are defined analogously to Equation (11) by
$$S_{rj}^{RA} = \frac{(\ell - pk)}{\ell} S_{rj}, \quad j = 1, \ldots, m.$$
The first of these statistics uses the small sample adjustment purely on $S_r$ while the second also uses it on the sub-sample statistics.
A total of $R = 10{,}000$ replications were performed for each combination of parameter values for each DGP and, as in CRT12, the VAR model was fitted with a restricted intercept (case 2). The bootstrap procedure is the most computationally intensive component in the simulations, requiring a sufficiently large number ($N_{BS}$) of bootstrap samples in each replication in order to compute the critical value from the bootstrap distribution; CRT12, for example, set $N_{BS} = 399$. The bootstrap computations are therefore $O(RN_{BS})$ compared to $O(R)$ for the other statistics. While using a large number of bootstrap samples poses no problem in a single empirical application, it is more of a computational (and time) burden when a large number of bootstrap samples must be computed for each one of a large number of Monte Carlo replications. We therefore employed the approach of Davidson and MacKinnon [22] and Giacomini, Politis and White [23] and used only one bootstrap sample per replication, i.e., $N_{BS} = 1$. Instead of using a large number of bootstrap samples to determine the critical value from the bootstrap distribution in each replication, the critical value is obtained from the bootstrap distribution across the $R$ Monte Carlo replications. This 'warp-speed' method reduces substantially the number of bootstrap computations, from $O(RN_{BS})$ to $O(R)$, in line with the other non-bootstrap statistics.
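The warp-speed idea can be illustrated on a toy test (a t-ratio for a mean with an i.i.d. resampling bootstrap, not the rank test): one bootstrap statistic per replication, with the critical value taken from the $R$ pooled bootstrap statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
R, T = 5000, 50
stats = np.empty(R)
boot = np.empty(R)
for rep in range(R):
    x = rng.standard_normal(T)                    # data under H0
    stats[rep] = np.sqrt(T) * x.mean() / x.std(ddof=1)
    xb = rng.choice(x - x.mean(), size=T)         # a single bootstrap sample
    boot[rep] = np.sqrt(T) * xb.mean() / xb.std(ddof=1)
cv = np.quantile(np.abs(boot), 0.95)   # critical value from pooled bootstrap draws
size = np.mean(np.abs(stats) > cv)     # empirical size, close to the nominal 5%
print(round(size, 3))
```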
The simulation results are summarised in Table 4, Table 5, Table 6 and Table 7; in all cases the tests are based on a nominal size of 5%. Table 4 contains the empirical size of each of the seven test statistics in the case of the VAR with a single cointegrating vector. The value of δ has a relatively small impact on the performance of the tests but the reduction in γ from 0.8 to 0.5 has a much larger impact. It is apparent that, in all DGPs, the unadjusted Johansen statistic $S_1$ has large size distortions, with the empirical size being as large as 45% in DGP 1 with δ = 0.2 and T = 50. The small sample adjustment that results in $S_1^{RA}$ reduces the size closer to its nominal level in all cases, particularly in DGP 2 (where γ = 0.5) but less so in DGP 1 (γ = 0.8), where size distortions remain. The Bartlett correction has a tendency to over-compensate, leading to empirical sizes below 5% (and around 2% for T = 50) in most cases. The bootstrap produces sizes around 5% in DGP 1, but in DGP 2 the empirical size tends to be slightly lower than the nominal size. The jackknife statistic $S_{1,2}^{J}$ manages to reduce the size towards the 5% level compared to the unadjusted statistic $S_1$, with empirical sizes around 6%–7% in DGP 2 and a little higher in DGP 1. The small sample adjustment in $S_{1,2}^{J1}$ reduces the empirical size in all cases, compared to $S_{1,2}^{J}$, while $S_{1,2}^{J2}$ produces sizes close to the nominal level in DGP 2 but shows little improvement (if any) over $S_{1,2}^{J}$ in DGP 1.
The power performance of the tests in the cointegrated VAR is summarised in Table 5, in which the probability of rejecting the null hypothesis that $r = 0$ is reported. Beginning with the unadjusted statistic $S_0$, the high power at the smaller sample sizes in DGP 1 is a reflection of the large size distortions reported in Table 4. All of the adjusted statistics are less powerful than $S_0$, but it should be remembered that they have better size properties. The statistic $S_{0,2}^{J1}$ has particularly low power for T = 50 in DGP 2.
The size properties of the tests in a non-cointegrated VAR are reported in Table 6. As the value of γ increases from 0 to 0.9 the unadjusted statistic $S_0$ suffers from huge size distortions, rising to 92% for T = 50 when γ = 0.9. The size properties of the adjusted statistics are all better than those of $S_0$, with the bootstrap test controlling size best over this range of parameters. The Bartlett adjustment again tends to reduce empirical size to below its nominal level as γ increases while, for the jackknife statistics, $S_{0,2}^{J2}$ performs best for smaller values of γ and $S_{0,2}^{J1}$ produces the best performance of the three for larger values of γ.
Table 4. Empirical size: cointegrated VAR.

| δ | T | $S_1$ | $S_1^{RA}$ | $S_1^{B}$ | $S_1^{BS}$ | $S_{1,2}^{J}$ | $S_{1,2}^{J1}$ | $S_{1,2}^{J2}$ |
|---|---|-------|-----------|-----------|-----------|---------------|----------------|----------------|
| DGP 1 | | | | | | | | |
| 0.0 | 50 | 44.68 | 18.80 | 2.10 | 5.49 | 14.26 | 2.53 | 14.37 |
| | 100 | 23.02 | 13.36 | 3.91 | 4.64 | 10.04 | 4.83 | 9.37 |
| | 200 | 13.03 | 9.87 | 4.73 | 5.22 | 7.85 | 5.03 | 7.28 |
| 0.2 | 50 | 45.26 | 19.07 | 2.42 | 4.98 | 14.53 | 2.30 | 14.61 |
| | 100 | 22.39 | 13.28 | 4.38 | 5.24 | 9.99 | 4.67 | 9.38 |
| | 200 | 12.61 | 9.73 | 5.02 | 5.35 | 8.01 | 5.46 | 7.49 |
| DGP 2 | | | | | | | | |
| 0.0 | 50 | 14.35 | 3.15 | 2.11 | 2.64 | 6.00 | 0.59 | 5.01 |
| | 100 | 10.44 | 5.38 | 4.68 | 4.62 | 7.62 | 3.31 | 6.58 |
| | 200 | 7.14 | 5.21 | 4.75 | 5.11 | 6.03 | 3.95 | 5.50 |
| 0.2 | 50 | 15.40 | 3.42 | 2.21 | 2.69 | 6.27 | 0.60 | 5.16 |
| | 100 | 10.50 | 5.37 | 4.90 | 4.58 | 7.30 | 3.18 | 6.42 |
| | 200 | 7.50 | 5.29 | 4.94 | 4.86 | 5.87 | 3.88 | 5.29 |
Table 5. Empirical power: cointegrated VAR.

| δ | T | $S_0$ | $S_0^{RA}$ | $S_0^{B}$ | $S_0^{BS}$ | $S_{0,2}^{J}$ | $S_{0,2}^{J1}$ | $S_{0,2}^{J2}$ |
|---|---|-------|-----------|-----------|-----------|---------------|----------------|----------------|
| DGP 1 | | | | | | | | |
| 0.0 | 50 | 97.57 | 85.20 | 30.93 | 51.76 | 70.05 | 27.76 | 74.00 |
| | 100 | 99.99 | 99.92 | 98.78 | 99.11 | 99.59 | 98.26 | 99.62 |
| | 200 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 0.2 | 50 | 97.03 | 83.09 | 30.90 | 46.41 | 65.77 | 23.63 | 69.81 |
| | 100 | 99.99 | 99.93 | 98.21 | 99.02 | 99.40 | 97.45 | 99.45 |
| | 200 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| DGP 2 | | | | | | | | |
| 0.0 | 50 | 62.52 | 27.08 | 18.39 | 17.79 | 26.14 | 3.63 | 26.36 |
| | 100 | 92.80 | 83.40 | 77.70 | 78.20 | 81.03 | 61.70 | 79.93 |
| | 200 | 100.00 | 100.00 | 100.00 | 100.00 | 99.98 | 99.95 | 99.98 |
| 0.2 | 50 | 66.70 | 29.74 | 19.45 | 18.88 | 27.87 | 4.19 | 28.26 |
| | 100 | 94.25 | 77.06 | 82.00 | 82.14 | 84.40 | 66.73 | 83.67 |
| | 200 | 100.00 | 100.00 | 99.99 | 99.99 | 99.99 | 99.98 | 99.99 |
Table 6. Empirical size: non-cointegrated VAR (DGP3).

| γ | T | $S_0$ | $S_0^{RA}$ | $S_0^{B}$ | $S_0^{BS}$ | $S_{0,2}^{J}$ | $S_{0,2}^{J1}$ | $S_{0,2}^{J2}$ |
|---|---|-------|-----------|-----------|-----------|---------------|----------------|----------------|
| 0.0 | 50 | 17.30 | 2.75 | 5.40 | 3.76 | 6.33 | 0.27 | 5.19 |
| | 100 | 9.37 | 4.03 | 5.22 | 4.36 | 6.18 | 2.00 | 5.29 |
| | 200 | 7.12 | 4.71 | 5.30 | 4.98 | 5.89 | 3.36 | 5.41 |
| 0.5 | 50 | 37.19 | 9.93 | 4.44 | 4.10 | 8.92 | 0.77 | 8.94 |
| | 100 | 16.94 | 8.07 | 4.90 | 4.73 | 7.73 | 2.62 | 6.82 |
| | 200 | 9.45 | 6.26 | 4.75 | 4.58 | 6.32 | 3.48 | 5.74 |
| 0.8 | 50 | 78.48 | 41.96 | 1.67 | 6.33 | 22.52 | 2.88 | 25.87 |
| | 100 | 44.30 | 27.61 | 3.64 | 5.90 | 13.84 | 5.33 | 13.77 |
| | 200 | 21.33 | 15.45 | 4.82 | 5.48 | 8.97 | 5.77 | 8.61 |
| 0.9 | 50 | 92.73 | 66.06 | 0.76 | 8.56 | 39.16 | 7.08 | 44.76 |
| | 100 | 75.26 | 58.00 | 1.10 | 7.96 | 27.61 | 12.09 | 28.55 |
| | 200 | 44.69 | 35.42 | 3.09 | 5.89 | 14.69 | 9.78 | 14.49 |
Table 7. Empirical size of $S_{1,m}^{J}$ for varying m.

| δ / γ | T | m = 2 | m = 4 | m = 5 | m = 8 | m = 10 |
|-------|---|-------|-------|-------|-------|--------|
| DGP 1 | | | | | | |
| 0.0 | 100 | 10.04 | 9.95 | 10.12 | – | – |
| | 200 | 7.85 | 7.64 | 8.00 | 7.83 | 7.92 |
| 0.2 | 100 | 9.99 | 9.87 | 9.94 | – | – |
| | 200 | 8.01 | 7.70 | 8.01 | 7.95 | 8.01 |
| DGP 2 | | | | | | |
| 0.0 | 100 | 7.62 | 6.60 | 6.01 | – | – |
| | 200 | 6.03 | 5.90 | 5.85 | 5.50 | 5.32 |
| 0.2 | 100 | 7.30 | 6.32 | 5.87 | – | – |
| | 200 | 5.87 | 5.94 | 5.95 | 5.68 | 5.44 |
| DGP 3 | | | | | | |
| 0.0 | 100 | 6.18 | 5.18 | 4.56 | – | – |
| | 200 | 5.89 | 5.64 | 5.35 | 5.17 | 4.81 |
| 0.5 | 100 | 7.73 | 6.56 | 5.90 | – | – |
| | 200 | 6.32 | 5.76 | 5.52 | 5.37 | 5.05 |
| 0.8 | 100 | 13.84 | 16.46 | 16.74 | – | – |
| | 200 | 8.97 | 9.55 | 9.78 | 10.83 | 10.97 |
| 0.9 | 100 | 27.61 | 38.04 | 40.20 | – | – |
| | 200 | 14.69 | 19.45 | 21.55 | 25.45 | 26.62 |
The results for the jackknife tests in Table 4, Table 5 and Table 6 are based on m = 2 sub-samples, but it is of interest to ascertain how the performance of the tests is affected by using different values of m. For T = 50 there is little scope to increase m much further; with m = 2 each sub-sample has only $\ell = 25$ observations, so increasing m soon makes sub-sample estimation infeasible. However, for larger sample sizes some experimentation is possible, and so Table 7 reports the empirical size of $S_{1,m}^{J}$ for $m \in \{2, 4, 5\}$ when T = 100 and $m \in \{2, 4, 5, 8, 10\}$ when T = 200; in each case, for the largest value of m, the sub-samples contain just $\ell = 20$ observations. Table 7 shows that the empirical size of $S_{1,m}^{J}$ is remarkably robust to the value of m, with the exception of DGP 3 when γ = 0.9.
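The feasibility constraint on m amounts to requiring that m divide T while leaving sub-samples of at least some minimum length; a one-line helper (illustrative, with the minimum of 20 observations inferred from the text) reproduces the grids used in Table 7.

```python
def feasible_m(T, min_len=20):
    """Values of m > 1 that divide T and leave sub-samples of length ell = T/m >= min_len.

    The min_len = 20 default is an assumption taken from the discussion of Table 7.
    """
    return [m for m in range(2, T // min_len + 1) if T % m == 0]
```

For example, `feasible_m(100)` gives `[2, 4, 5]` and `feasible_m(200)` gives `[2, 4, 5, 8, 10]`, matching the values of m considered for T = 100 and T = 200.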
To summarise the simulation results, it appears that rank test statistics based on some form of correction factor can provide size improvements over the unadjusted Johansen statistic while still maintaining good power properties, although a bootstrap approach offers the most consistent performance over the range of DGPs considered. It should be stressed, however, that the corrected statistics and the bootstrap operate in rather different ways. All of the corrected statistics—whether the correction is a simple small sample adjustment, a (parametric) Bartlett correction or a (nonparametric) jackknife correction—aim to adjust the raw statistic so that the distribution of the corrected statistic better matches the asymptotic distribution whose critical values it is compared against. The bootstrap, on the other hand, uses the unadjusted statistic itself as the test statistic but, by generating bootstrap samples whose size equals the given finite number of observations, compares it against critical values from the finite sample bootstrap distribution. The evidence obtained here suggests that the latter approach is the most robust in practice.

4. Conclusions

This paper has investigated the asymptotic properties and finite sample performance of jackknife-corrected test statistics for cointegration rank in a VAR system. In particular, the limiting distributions of jackknife-corrected test statistics have been derived for the two trend specifications of most empirical relevance; the asymptotic critical values for these cases have been tabulated; and the finite sample size and power properties of the jackknife-corrected tests have been compared with the usual (unadjusted) rank test statistic as well as statistics using various small sample corrections and a bootstrap approach. The simulations reveal that all the corrected statistics, including the jackknife variants, can provide size improvements over the unadjusted statistic while still maintaining good power properties, although a bootstrap approach offers the most consistent performance over the range of DGPs considered.
There are a number of ways in which the analysis of this paper can be built upon. In practice the precise form of the VAR (i.e., the specification of the deterministic trend function and the number of lags) is unknown, and various pre-tests are often conducted, including the use of information criteria to determine the VAR order. This has an impact on the performance of the rank tests, and it would be of interest to ascertain how well the jackknife methods perform relative to other tests in such a scenario. Another potentially fruitful area of investigation concerns the use of jackknife methods in estimating the cointegrating parameters themselves. Additionally, bootstrap methods have been shown to be adaptable to situations where heteroskedasticity (both conditional and unconditional) is present as well as breaks in variance and correlations; see, for example, Cavaliere, Rahbek and Taylor [24,25]. Jackknife methods have also been found to be robust to conditional heteroskedasticity in stationary autoregressions by Chambers [18], and so a further comparison with bootstrap methods would be of interest in the context of cointegration. Such avenues are left for future work.

Acknowledgments

I am grateful to the Editor-in-Chief, Kerry Patterson, and three anonymous referees for helpful comments on this paper. In particular I thank one of the referees for pointing out the independence properties of the functionals $Q(B, F_2, \delta_{j,m})$ and $Q(B, F_{j,m}, \delta_{j,m})$ that appear in parts (a) and (b), respectively, of Theorem 1. I also thank Giuseppe Cavaliere for providing Matlab code for implementing the bootstrap methods of Cavaliere, Rahbek and Taylor [15], upon which I was able to base my own Gauss code, and Rob Taylor for suggesting I explore 'warp-speed' methods of speeding up the simulations involving the bootstrap. The initial research upon which this paper is based was funded by the Economic and Social Research Council under grant number RES-000-22-3082.

Appendix

A.1. Proof of Theorem 1

(a) Johansen [5] (Theorem 11.1) shows that $S_r \Rightarrow Q(B, F, \delta_0)$ as $T \to \infty$ under the stated conditions, where $F(s)$ is defined in Equation (7) for each case. Taking each of the two relevant cases in turn:
Case 2
Here, $S_r \Rightarrow Q(B, F_2, \delta_0)$ and the sub-sample statistics have the same distribution defined on $\delta_{j,m}$, i.e., $S_r^j \Rightarrow Q(B, F_2, \delta_{j,m})$. The result for $S_{r,m}^J$ follows straightforwardly as $m$ is fixed.
To demonstrate the independence of $Q(B, F_2, \delta_{j,m})$ and $Q(B, F_2, \delta_{k,m})$ for $j \neq k$, consider
$$Q(B, F_2, \delta_{j,m}) = \operatorname{trace}\left\{ \int_{(j-1)/m}^{j/m} dB(s)\, F_2(s)' \left[ \int_{(j-1)/m}^{j/m} F_2(s) F_2(s)'\, ds \right]^{-1} \int_{(j-1)/m}^{j/m} F_2(s)\, dB(s)' \right\}$$
and recall that $F_2(s) = [B(s)', 1]'$. This expression is a function of $B(s)$ for $s \in \delta_{j,m}$, which is clearly not independent of the process $B(r)$ for $r \in \delta_{k,m}$ that enters $Q(B, F_2, \delta_{k,m})$. However, let
$$A_{j,m} = \begin{bmatrix} I_{p-r} & -m \displaystyle\int_{(j-1)/m}^{j/m} B(s)\, ds \\ 0_{p-r}' & 1 \end{bmatrix}$$
where $0_{p-r}$ denotes a $(p-r) \times 1$ vector of zeros. We can then write
$$Q(B, F_2, \delta_{j,m}) = \operatorname{trace}\left\{ \int_{(j-1)/m}^{j/m} dB(s)\, F_2(s)' A_{j,m}' \left[ \int_{(j-1)/m}^{j/m} A_{j,m} F_2(s) F_2(s)' A_{j,m}'\, ds \right]^{-1} \int_{(j-1)/m}^{j/m} A_{j,m} F_2(s)\, dB(s)' \right\}$$
which is a function of the process
$$A_{j,m} F_2(s) = \begin{bmatrix} B(s) - m \displaystyle\int_{(j-1)/m}^{j/m} B(s)\, ds \\ 1 \end{bmatrix}.$$
This shows that $Q(B, F_2, \delta_{j,m})$ can be represented in terms of the quasi-demeaned process $B_{j,m}(s) = B(s) - m \int_{(j-1)/m}^{j/m} B(s)\, ds$ for $s \in \delta_{j,m}$, and it follows that $Q(B, F_2, \delta_{k,m})$ can be written in terms of the process $B_{k,m}(r) = B(r) - m \int_{(k-1)/m}^{k/m} B(r)\, dr$ for $r \in \delta_{k,m}$. The two functionals of interest will be independent if $B_{j,m}(s)$ and $B_{k,m}(r)$ are independent which, due to them being Gaussian processes, only requires their covariance to be zero. The covariance of interest is
$$C_{j,k} = E\left[ B_{j,m}(s) B_{k,m}(r)' \right] = E\left[ B(s) B(r)' \right] - m\, E\left[ \int_{(j-1)/m}^{j/m} B(s)\, ds\; B(r)' \right] - m\, E\left[ B(s) \int_{(k-1)/m}^{k/m} B(r)'\, dr \right] + m^2\, E\left[ \int_{(j-1)/m}^{j/m} B(s)\, ds \int_{(k-1)/m}^{k/m} B(r)'\, dr \right].$$
Suppose, without loss of generality, that $k > j$, which implies $r > s$. Then we have $E[B(s)B(r)'] = \min(s,r) I_{p-r} = s I_{p-r}$,
$$E\left[ \int_{(j-1)/m}^{j/m} B(s)\, ds\; B(r)' \right] = \int_{(j-1)/m}^{j/m} \min(s,r)\, ds\; I_{p-r} = \int_{(j-1)/m}^{j/m} s\, ds\; I_{p-r} = \frac{j - \tfrac{1}{2}}{m^2}\, I_{p-r},$$
$$E\left[ B(s) \int_{(k-1)/m}^{k/m} B(r)'\, dr \right] = \int_{(k-1)/m}^{k/m} \min(s,r)\, dr\; I_{p-r} = s \int_{(k-1)/m}^{k/m} dr\; I_{p-r} = \frac{s}{m}\, I_{p-r},$$
$$E\left[ \int_{(j-1)/m}^{j/m} B(s)\, ds \int_{(k-1)/m}^{k/m} B(r)'\, dr \right] = \int_{(j-1)/m}^{j/m} \int_{(k-1)/m}^{k/m} \min(s,r)\, dr\, ds\; I_{p-r} = \int_{(j-1)/m}^{j/m} s\, ds \int_{(k-1)/m}^{k/m} dr\; I_{p-r} = \frac{j - \tfrac{1}{2}}{m^2} \cdot \frac{1}{m}\, I_{p-r}.$$
Combining these expressions we find that
$$C_{j,k} = \left( s - m \cdot \frac{j - \tfrac{1}{2}}{m^2} - m \cdot \frac{s}{m} + m^2 \cdot \frac{j - \tfrac{1}{2}}{m^2} \cdot \frac{1}{m} \right) I_{p-r} = 0_{(p-r) \times (p-r)},$$
as required, where $0_{k \times k}$ denotes a $k \times k$ matrix of zeros.
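The zero covariance of the block-demeaned processes can be checked numerically (this simulation is illustrative, not from the paper): simulate discretised Brownian motion paths, demean within each of the two sub-intervals, and estimate the covariance between the demeaned processes at a point in each block.

```python
import numpy as np

# Monte Carlo check that the block-demeaned Brownian motions B_{1,2}(s) and
# B_{2,2}(r) are uncorrelated across the two sub-intervals; all settings are
# illustrative choices.
rng = np.random.default_rng(1)
paths, n = 10000, 200
dt = 1.0 / n

# scalar standard Brownian motion paths on [0, 1]
B = np.cumsum(rng.standard_normal((paths, n)) * np.sqrt(dt), axis=1)

half = n // 2
B1 = B[:, :half] - B[:, :half].mean(axis=1, keepdims=True)  # demeaned on (0, 1/2]
B2 = B[:, half:] - B[:, half:].mean(axis=1, keepdims=True)  # demeaned on (1/2, 1]

# sample covariance between the demeaned processes at s ≈ 1/4 and r ≈ 3/4
cov = np.mean(B1[:, half // 2] * B2[:, half // 2])
```

Since each process is exactly mean zero by construction, the sample covariance is simply the average cross-product, and it should be numerically close to the theoretical value of zero derived above.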
Case 3
For the full-sample statistic, $S_r \Rightarrow Q(B, F_3, \delta_0)$, while the appropriate process $F_{j,m}$ for the sub-samples is obtained as the residual from a continuous time projection of $B(s)$ and $s$ on a constant on the interval $\delta_{j,m}$ (the process $F_3(s)$ is obtained in the same way but with the projections taking place on $\delta_0$). The first projection residual is given by
$$B(s) - \frac{\int_{(j-1)/m}^{j/m} B(s)\, ds}{\int_{(j-1)/m}^{j/m} ds} = B(s) - \frac{\int_{(j-1)/m}^{j/m} B(s)\, ds}{1/m} = B(s) - m \int_{(j-1)/m}^{j/m} B(s)\, ds,$$
which provides the first $p - r$ elements of $F_{j,m}(s)$. The second projection residual, which is the final element of $F_{j,m}(s)$, is obtained as
$$s - \frac{\int_{(j-1)/m}^{j/m} s\, ds}{\int_{(j-1)/m}^{j/m} ds} = s - \frac{\left( j - \tfrac{1}{2} \right)/m^2}{1/m} = s - \frac{j - \tfrac{1}{2}}{m}.$$
The sub-sample statistics satisfy $S_r^j \Rightarrow Q(B, F_{j,m}, \delta_{j,m})$ and the result for $S_{r,m}^J$ follows.
The independence of $Q(B, F_{j,m}, \delta_{j,m})$ and $Q(B, F_{k,m}, \delta_{k,m})$ follows from the arguments used for case 2 above, noting that $F_{j,m}$ already contains the process $B_{j,m}$.
(b) In both cases, when $m \to \infty$ as $T \to \infty$ such that $m^{-1} + m T^{-1} \to 0$ (so that $\ell = m^{-1} T \to \infty$), it follows that $S_{r,m}^J = S_r + o_p(1)$ and the results are straightforward. □

A.2. Method of Simulation of Limiting Distributions

The objective is to simulate distributions of the form
$$Q(B, F, \delta) = \operatorname{trace}\left\{ \int_{\delta} dB\, F' \left[ \int_{\delta} F F'\, ds \right]^{-1} \int_{\delta} F\, dB' \right\},$$
where $B(s)$ is a $(p-r)$-dimensional standard Brownian motion process and $F(s)$ is a stochastic process whose precise form is given in Theorem 1 and depends on the specification of the deterministic component in the model. Consider, first, $Q(B, F, \delta_0)$, and define the $(p-r)$-dimensional process $\Delta y_t = \epsilon_t$, $t = 1, \ldots, T$, where the elements of $\epsilon_t$ are independent standard normal random variates and $y_0 = 0$. Then, following Johansen [5] (Chapter 15), the distribution of $Q(B, F, \delta_0)$ is approximated by
$$\hat{Q}_T = \operatorname{trace}\left\{ \sum_{t=1}^{T} \epsilon_t P_t' \left[ \sum_{t=1}^{T} P_t P_t' \right]^{-1} \sum_{t=1}^{T} P_t \epsilon_t' \right\}$$
for an appropriate choice of $P_t$ ($t = 1, \ldots, T$), as follows:
  • Case 2: $P_t = (y_{t-1}', 1)'$;
  • Case 3: $P_t = \left( y_{t-1}' - \bar{y}',\; t - \tfrac{1}{2}(T+1) \right)'$, where $\bar{y} = T^{-1} \sum_{t=1}^{T} y_{t-1}$.
In a similar way the distributions of the sub-sample statistics are simulated using
$$\hat{Q}_{T,j,m} = \operatorname{trace}\left\{ \sum_{t=(j-1)\ell+1}^{j\ell} \epsilon_t P_t' \left[ \sum_{t=(j-1)\ell+1}^{j\ell} P_t P_t' \right]^{-1} \sum_{t=(j-1)\ell+1}^{j\ell} P_t \epsilon_t' \right\}$$
again subject to an appropriate choice of $P_t$ ($t = (j-1)\ell+1, \ldots, j\ell$):
  • Case 2: $P_t = (y_{t-1}', 1)'$;
  • Case 3: $P_t = \left( y_{t-1}' - m\bar{y}_j',\; t - \left( j - \tfrac{1}{2} \right)\ell - \tfrac{1}{2} \right)'$, where $\bar{y}_j = T^{-1} \sum_{t=(j-1)\ell+1}^{j\ell} y_{t-1}$ (so that $m\bar{y}_j$ is the sub-sample mean, since $T = m\ell$).
The simulated values are combined to approximate the distribution in Theorem 1(a) using
$$\frac{m}{m-1}\, \hat{Q}_T - \frac{1}{m-1} \cdot \frac{1}{m} \sum_{j=1}^{m} \hat{Q}_{T,j,m}$$
for a range of values of $m$.
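The case 2 simulation above can be sketched in a few lines (illustrative Python; the sample size, dimension and number of draws are arbitrary choices, not the settings used to produce the tabulated critical values):

```python
import numpy as np

def q_hat(eps, y):
    """trace{ (Σ ε_t P_t')(Σ P_t P_t')^{-1}(Σ P_t ε_t') } with case 2 choice P_t = (y_{t-1}', 1)'."""
    P = np.column_stack([y, np.ones(len(y))])
    A = eps.T @ P                                   # Σ ε_t P_t'
    M = P.T @ P                                     # Σ P_t P_t'
    return np.trace(A @ np.linalg.solve(M, A.T))

def jackknife_Q(T, dim, m, rng):
    """One draw of (m/(m-1)) Q^_T - (1/(m-1))(1/m) Σ_j Q^_{T,j,m}."""
    eps = rng.standard_normal((T, dim))             # Δy_t = ε_t
    y = np.vstack([np.zeros(dim), np.cumsum(eps, axis=0)[:-1]])  # y_{t-1}, with y_0 = 0
    ell = T // m
    Q_full = q_hat(eps, y)
    Q_sub = [q_hat(eps[j*ell:(j+1)*ell], y[j*ell:(j+1)*ell]) for j in range(m)]
    return m / (m - 1) * Q_full - np.mean(Q_sub) / (m - 1)

rng = np.random.default_rng(2)
draws = np.array([jackknife_Q(T=400, dim=2, m=2, rng=rng) for _ in range(2000)])
cv95 = np.quantile(draws, 0.95)   # simulated 95% critical value
```

Note that the sub-sample statistics reuse the same $P_t = (y_{t-1}', 1)'$ as the full sample, and that $y_{t-1}$ is not restarted at the beginning of each sub-sample, mirroring the sub-interval functionals in Theorem 1.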

Conflicts of Interest

The author declares no conflict of interest.

References

  1. R.F. Engle, and C.W.J. Granger. “Cointegration and error correction: Representation, estimation and testing.” Econometrica 55 (1987): 251–276.
  2. S. Johansen. “Statistical analysis of cointegration vectors.” J. Econ. Dyn. Control 12 (1988): 231–254.
  3. S. Johansen. “Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models.” Econometrica 59 (1991): 1551–1580.
  4. T.W. Anderson. “Estimating linear restrictions on regression coefficients for multivariate normal distributions.” Ann. Math. Stat. 22 (1951): 327–351.
  5. S. Johansen. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford, UK: Oxford University Press, 1995.
  6. J.A. Doornik. “Approximations to the asymptotic distribution of cointegration tests.” J. Econ. Surv. 12 (1998): 573–593.
  7. J.G. MacKinnon, A.A. Haug, and L. Michelis. “Numerical distribution functions of likelihood ratio tests for cointegration.” J. Appl. Econom. 14 (1999): 563–577.
  8. H.Y. Toda. “Finite sample properties of likelihood ratio tests for cointegrating ranks when linear trends are present.” Rev. Econ. Stat. 76 (1994): 66–79.
  9. H.Y. Toda. “Finite sample performance of likelihood ratio tests for cointegrating ranks in vector autoregressions.” Econom. Theory 11 (1995): 1015–1032.
  10. B. Nielsen. “On the distribution of likelihood ratio test statistics for cointegration rank.” Econom. Rev. 23 (2004): 1–23.
  11. S. Johansen. “A small sample correction for the test of cointegrating rank in the vector autoregressive model.” Econometrica 70 (2002): 1929–1961.
  12. G.C. Reinsel, and S.K. Ahn. “Vector autoregressive models with unit roots and reduced rank structure: Estimation, likelihood ratio test, and forecasting.” J. Time Series Anal. 13 (1992): 353–375.
  13. H.E. Reimers. “Comparisons of tests for multivariate cointegration.” Stat. Pap. 33 (1992): 335–359.
  14. A.R. Swensen. “Bootstrap algorithms for testing and determining the cointegration rank in VAR models.” Econometrica 74 (2006): 1699–1714.
  15. G. Cavaliere, A. Rahbek, and A.M.R. Taylor. “Bootstrap determination of the co-integration rank in VAR models.” Econometrica 80 (2012): 1721–1740.
  16. M.H. Quenouille. “Notes on bias in estimation.” Biometrika 43 (1956): 353–360.
  17. J.W. Tukey. “Bias and confidence in not-quite large samples.” In Proceedings of the Institute of Mathematical Statistics, Ames, USA, 3–5 April 1958.
  18. M.J. Chambers. “Jackknife estimation of stationary autoregressive models.” J. Econom. 172 (2013): 142–157.
  19. M.J. Chambers, and M. Kyriacou. Jackknife Bias Reduction in the Presence of a Unit Root. Discussion Paper 685; Colchester, UK: University of Essex Department of Economics, 2010.
  20. M.J. Chambers, and M. Kyriacou. “Jackknife estimation with a unit root.” Stat. Probab. Lett. 83 (2013): 1677–1682.
  21. A.R. Swensen. “Corrigendum to “Bootstrap algorithms for testing and determining the cointegration rank in VAR models”.” Econometrica 77 (2009): 1703–1704.
  22. R. Davidson, and J.G. MacKinnon. “Improving the reliability of bootstrap tests with the fast double bootstrap.” Comput. Stat. Data Anal. 51 (2007): 3259–3281.
  23. R. Giacomini, D.N. Politis, and H. White. “A warp-speed method for conducting Monte Carlo experiments involving bootstrap estimators.” Econom. Theory 29 (2013): 567–589.
  24. G. Cavaliere, A. Rahbek, and A.M.R. Taylor. “Testing for co-integration in vector autoregressions with non-stationary volatility.” J. Econom. 158 (2010): 7–24.
  25. G. Cavaliere, A. Rahbek, and A.M.R. Taylor. “Bootstrap determination of the co-integration rank in heteroskedastic VAR models.” Econom. Rev. 33 (2014): 606–650.
