Article

Cointegration and Unit Root Tests: A Fully Bayesian Approach

by Marcio A. Diniz 1,*, Carlos A. B. Pereira 2 and Julio M. Stern 3
1 Statistics Department, Universidade Federal de S. Carlos, Rod. Washington Luis, km 235, S. Carlos 13565-905, Brazil
2 Statistics Department, Universidade de S. Paulo, São Paulo 01000, Brazil
3 Applied Mathematics Department, Universidade de S. Paulo, São Paulo 01000, Brazil
* Author to whom correspondence should be addressed.
Entropy 2020, 22(9), 968; https://doi.org/10.3390/e22090968
Submission received: 3 August 2020 / Revised: 25 August 2020 / Accepted: 27 August 2020 / Published: 31 August 2020
(This article belongs to the Special Issue Data Science: Measuring Uncertainties)

Abstract: To perform statistical inference for time series, one should be able to assess whether they present deterministic or stochastic trends. For univariate analysis, one way to detect stochastic trends is to test if the series has unit roots; for multivariate studies, it is often relevant to search for stationary linear relationships between the series, that is, to test whether they cointegrate. The main goal of this article is to briefly review the shortcomings of unit root and cointegration tests proposed by the Bayesian approach to statistical inference and to show how they can be overcome by the Full Bayesian Significance Test (FBST), a procedure designed to test sharp or precise hypotheses. We compare its performance with the most used frequentist alternatives, namely, the Augmented Dickey–Fuller test for unit roots and the maximum eigenvalue test for cointegration.

Several time series present deterministic or stochastic trends, which implies that the effects of these trends on the level of the series are permanent. Consequently, the mean and variance of the series will not be constant and will not revert to a long-term value. This feature reflects the fact that the stochastic processes generating these series are not (weakly) stationary, which creates problems for inductive inference based on the most traditional estimators or predictors, since the usual properties of these procedures are not valid under such conditions.
Therefore, when modeling non-stationary time series, one should be able to properly detrend the series, either by modeling the trend directly with deterministic functions or by transforming the series to remove stochastic trends. To determine which strategy is suitable, several statistical tests have been developed since the 1970s by the frequentist school of statistical inference.
The Augmented Dickey–Fuller (ADF) test is one of the most popular tests used to assess if a time series has a stochastic trend or, for series described by auto-regressive models, if they have a unit root. When one is searching for long term relationships between multiple series under analysis, it is crucial to know if there are stationary linear combinations of these series, i.e., if the series are cointegrated. Cointegration tests were developed, also by the frequentist school, in the late 1980s [1] and early 1990s [2]. Only in the late 1980s did the Bayesian approach to test the presence of unit roots start to be developed.
Both unit root and cointegration tests may be considered tests on precise or sharp hypotheses, i.e., those in which the dimension of the parameter space under the tested hypothesis is smaller than the dimension of the unrestricted parameter space. Testing sharp hypotheses poses major difficulties for either the frequentist or Bayesian paradigms, such as the need to eliminate nuisance parameters.
The main goal of this article is to briefly review the shortcomings of the tests proposed by the Bayesian school and how they can be overcome by the Full Bayesian Significance Test (FBST). More specifically, we will compare its performance with the most used frequentist alternatives, the ADF for unit roots, and the maximum eigenvalue test for cointegration. Since this is a review article, it is important to remark that the results presented here were published elsewhere by the same authors, see [3,4].
To accomplish this objective, we define the FBST in the next section, also showing how it can be implemented in a general context. The following section discusses the problems of testing the existence of unit roots in univariate time series and how Bayesian tests approach the problem. Section 3 then shows how the FBST is applied to test if a time series has unit roots and illustrates this with applications to a real data set. In the sequel, we discuss the Bayesian alternatives to cointegration tests and then apply the FBST to test for cointegration using real data sets. We conclude with some remarks and possible extensions for future work.

1. FBST

The Full Bayesian Significance Test was proposed in [5] mainly to deal with sharp hypotheses. The procedure has several desirable properties, see [6,7], most notably the fact that it is based only on posterior densities, thus avoiding complications such as the elimination of nuisance parameters or the adoption of priors that attach positive probability to sets of zero Lebesgue measure.
We shall consider general statistical models in which the parameter space is denoted by $\Theta \subseteq \mathbb{R}^m$, $m \in \mathbb{N}$. A sharp hypothesis H assumes that $\theta$, the parameter vector of the chosen statistical model, belongs to a sub-manifold $\Theta_H$ of smaller dimension than $\Theta$. This implies, for continuous parameter spaces, that the subset $\Theta_H$ has null Lebesgue measure whenever H is sharp. The sample space, the set of all possible values of the observable random variables (or vectors), is denoted here by $\mathcal{X}$.
Following the Bayesian paradigm, let $h(\cdot)$ be a prior probability density over $\Theta$, $x \in \mathcal{X}$ the observed sample (scalar or vector), and $L(\cdot \mid x)$ the likelihood derived from the data $x$. To evaluate the Bayesian evidence based on the FBST, the sole relevant entity is the posterior probability density for $\theta$ given $x$,
$$ g(\theta \mid x) \propto h(\theta) \cdot L(\theta \mid x). $$
It is important to highlight that the procedure may also be used when the parameter space is discrete. However, when the posterior probability distribution over $\Theta$ is absolutely continuous, the FBST appears as a more suitable alternative for significance testing. For notational simplicity, we will denote $\Theta_H$ by H in the sequel.
Let $r(\theta)$ be a reference density on $\Theta$ such that the function $s(\theta) = g(\theta \mid x)/r(\theta)$ is a relative surprise function (see [8], pp. 145–146). The reference density is important because it guarantees that the FBST is invariant to reparametrizations, even when $r(\theta)$ is improper, see [6,9]. Thus, when $r(\theta)$ is taken proportional to a constant, the surprise function is, in practical terms, equivalent to the posterior distribution. For the applications considered in this article, we will use the improper uniform density as the reference density on $\Theta$. The authors of [10] remark that it is possible to generalize the procedure using other reference densities, such as neutral, invariant, maximum-entropy or non-informative priors, if they are available and desirable.
Definition 1
(Tangent set). Considering a sharp hypothesis $H: \theta \in \Theta_H$, the tangential set of the hypothesis given the sample is given by
$$ T_x = \{\theta \in \Theta : s(\theta) > s^*\}, \qquad (1) $$
where $s^* = \sup_{\theta \in H} s(\theta)$.
Notice that the tangent set T x is the highest relative surprise set, that is, the set of points of the parameter space with higher relative surprise than any point in H, being tangential to H in this sense. This approach takes into consideration the statistical model in which the hypothesis is defined, using several components of the model to define an evidential measure favoring the hypothesis.
Definition 2
(Evidence). The Bayesian evidence value against H, $\overline{ev}$, is defined as
$$ \overline{ev} = P(\theta \in T_x \mid x) = \int_{T_x} dG_x(\theta), \qquad (2) $$
where $G_x(\theta)$ denotes the posterior distribution function of $\theta$ and the above integral is of the Riemann–Stieltjes type.
Definition 2 sets $\overline{ev}$ as the posterior probability of the tangent set, which is interpreted as an evidence value against H. Hence, the evidence value supporting H is its complement, $ev = 1 - \overline{ev}$. Notwithstanding, $ev$ is not evidence against $A: \theta \notin \Theta_H$, the alternative hypothesis (which is not sharp anyway). Equivalently, $\overline{ev}$ is not evidence in favor of A, although it is evidence against H.
Definition 3
(Test). The FBST is the procedure that rejects H whenever $ev = 1 - \overline{ev}$ is smaller than a critical level, $ev_c$.
Thus, we are left with the problem of deciding the critical level e v c for each particular application. We briefly discuss this and other practical issues in the following subsection.

1.1. Practical Implementation: Critical Values and Numerical Computation

Since $ev$ (also called e-value) is a statistic, it has a sampling distribution derived from the adopted statistical model, and in principle this distribution could be used to find a threshold value. If the likelihood and the posterior distribution satisfy certain regularity conditions (see [11], p. 436), [12] proved that, asymptotically, there is a relationship between $ev$ and the p-values obtained from the frequentist likelihood ratio procedure used to test the same hypotheses. This fact provides a way to find, at least asymptotically, a critical value of $ev$ below which the hypothesis being tested is rejected.
In a recent review [7], the authors discuss different ways to provide a threshold for e v . Among these alternatives, we highlight the standardized e-value, which follows, asymptotically, the uniform distribution on ( 0 , 1 ) . See also [13] for more on the standardized version of e v .
One could also try to define the FBST as a Bayes test derived from a particular loss function and the respective minimization of the posterior expected loss. Following this strategy, [10] showed that there are loss functions which result in $ev$ as a Bayes estimator of $\phi = I_H(\theta)$, where $I_A(x)$ denotes the indicator function, equal to one if $x \in A$ and zero otherwise. Hence, the FBST is in fact a Bayes procedure in the formal sense defined by Wald in [14].
To compute the evidence value supporting H defined in the last section, we need to follow the steps shown in Table 1. Appendix A provides detailed information about the computational resources and codes used to implement the FBST in the examples presented in this work. After defining the statistical model and prior, it is simple to find the surprise function, $s(\theta)$. In step 3, one should find the point of the parameter space in H that maximizes $s(\theta)$, that is, solve a constrained numerical maximization problem. In several applications, this step does not have a closed form solution, requiring the use of numerical optimizers.
Step 4 involves the integration of the posterior distribution over a subset of $\Theta$, the tangent set $T_x$, which can be highly complex. Once more, since in many cases it is fairly difficult to find an explicit expression for $T_x$, one may use various numerical techniques to compute the integral. If it is possible to generate random samples from the posterior distribution, Monte Carlo integration provides an estimate of $\overline{ev}$, as we will show in this work. Another alternative is to use approximation techniques, such as those proposed in [15], based on a Laplace approximation. We discuss how to implement such approximations for unit root and cointegration tests in [3,4].
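To make the five steps concrete, the sketch below works through a toy example that is not part of the article: a Gaussian sample with known unit variance, a flat prior and flat reference density, and the sharp hypothesis $H: \mu = 0$. It is written in GNU Octave, the language of the authors' own implementations (see Appendix A); the data and all variable names are ours.

```octave
% Toy illustration of the five steps of Table 1 (ours, not from the article):
% y_i ~ N(mu, 1) with a flat prior and a flat reference density, so the
% surprise function is the posterior N(ybar, 1/n), and H: mu = 0 is sharp.
pkg load statistics

y = [0.3 -0.1 0.8 0.4 0.2 0.6];            % hypothetical data
n = numel(y);  ybar = mean(y);

% Step 3: supremum of the surprise function over H (a single point here).
s_star = normpdf(0, ybar, 1/sqrt(n));

% Step 4: posterior probability of the tangent set, estimated by Monte Carlo.
M      = 50000;
mu     = ybar + randn(M, 1) / sqrt(n);     % draws from the posterior
ev_bar = mean(normpdf(mu, ybar, 1/sqrt(n)) > s_star);

% Step 5: e-value supporting H.
ev = 1 - ev_bar;

% In this Gaussian toy case the tangent set is {mu : |mu - ybar| < |ybar|},
% so the e-value also has the closed form 2*(1 - normcdf(sqrt(n)*abs(ybar))).
printf("ev (Monte Carlo) = %.4f, ev (exact) = %.4f\n", ...
       ev, 2 * (1 - normcdf(sqrt(n) * abs(ybar))));
```

In this simple location model the tangent set is an interval and the e-value has a closed form, which the Monte Carlo estimate of Step 4 reproduces; for the time series models discussed below, only the Monte Carlo route is practical.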

2. Bayesian Unit Root Tests

Before presenting the Bayesian procedures used to test the presence of unit roots, let us fix notation. We will denote by $y_t$ the t-th value of a univariate time series observed at dates $t = 1, \ldots, T+p$, where T and p are positive integers. The usual approach is to assume that the series under analysis is described by an auto-regressive process with p lags, AR(p), meaning that the data generating process is fully described by a stochastic difference equation of order p, possibly with an intercept or drift and a deterministic linear trend, i.e.,
$$ y_t = \mu + \delta \cdot t + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + \varepsilon_t \qquad (3) $$
with $\varepsilon_t \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)$ for $t = 1, \ldots, T+p$. Using the lag or backshift operator B, we denote $y_{t-k}$ by $B^k y_t$, allowing us to rewrite (3) as
$$ (1 - \phi_1 B - \cdots - \phi_p B^p)\, y_t = \mu + \delta \cdot t + \varepsilon_t \qquad (4) $$
where $\phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$ is the autoregressive polynomial. The difference Equation (3) will be stable, implying that the process generating $\{y_t\}_{t=1}^{T+p}$ is (weakly) stationary, whenever the roots of the characteristic polynomial $\phi(z)$, $z \in \mathbb{C}$ (since there may be complex roots), lie outside the unit circle. The set of polynomial operators, such as lag polynomials like $\phi(B)$, induces an algebra that is isomorphic to the algebra of polynomials in real or complex variables, see [16].
If some of the roots lie exactly on the unit circle, it is said that the process has unit roots. In order to test such a hypothesis statistically, (3) is rewritten as
$$ \Delta y_t = \mu + \delta \cdot t + \Gamma_0 y_{t-1} + \Gamma_1 \Delta y_{t-1} + \cdots + \Gamma_{p-1} \Delta y_{t-p+1} + \varepsilon_t \qquad (5) $$
where $\Delta y_t = y_t - y_{t-1}$, $\Gamma_0 = \phi_1 + \cdots + \phi_p - 1$ and $\Gamma_i = -\sum_{j=i+1}^{p} \phi_j$, for $i = 1, \ldots, p-1$. If the generating process has only one unit root, one root of the complex polynomial $\phi(z)$,
$$ 1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p, $$
is equal to one, meaning that
$$ 1 - \phi_1 - \phi_2 - \cdots - \phi_p = 0, $$
i.e., $\phi(1) = 0$, and all the other roots are on or outside the unit circle. In this case, $\Gamma_0 = 0$, which is the hypothesis to be tested when modeling (5). Even though tests based on these assumptions verify whether the process has a single unit root, there are generalizations based on the same principles that test for the existence of multiple unit roots, see [17].
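The link between the roots of $\phi(z)$ and $\Gamma_0$ can be checked numerically. The snippet below is a small illustration of ours, not taken from the article, for a hypothetical AR(2) whose coefficients sum to one.

```octave
% Illustrative check (ours, with hypothetical coefficients) of the link between
% Gamma_0 and the roots of phi(z): an AR(2) with phi_1 = 1.3 and phi_2 = -0.3
% satisfies 1 - phi_1 - phi_2 = 0, so Gamma_0 = 0 and phi(z) has a root at z = 1.
phi = [1.3 -0.3];                           % AR coefficients phi_1, ..., phi_p

coeffs = [-fliplr(phi) 1];                  % phi(z) in descending powers of z
z      = roots(coeffs);                     % roots of the characteristic polynomial
Gamma0 = sum(phi) - 1;                      % equals zero iff phi(1) = 0

printf("Gamma_0 = %.4f\n", Gamma0);
printf("moduli of the roots of phi(z): %s\n", mat2str(abs(z), 4));
% (Weak) stationarity requires all moduli to exceed 1; a modulus equal to 1
% signals the unit root that the hypothesis Gamma_0 = 0 is meant to detect.
```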
The search for Bayesian unit root tests began in the late 1980s. As far as we know, [18,19] were the first works to propose a Bayesian approach for unit root tests. The frequentist critics of these articles received a proper answer in [20,21], generating a fruitful debate that produced a long list of papers in the literature of Bayesian time series. A good summary of the debate and the Bayesian papers that resulted from it is presented in [22]. We will present here only the most relevant strategies proposed by the Bayesian school to test for unit roots.
Let $\theta = (\rho, \psi)$ be the parameter vector, in which $\rho = \sum_{i=1}^{p} \phi_i$ and $\psi = (\mu, \delta, \Gamma_1, \ldots, \Gamma_{p-1})$. Assuming $\sigma^2$ fixed, the prior density for $\theta$ can be factorized as
$$ h(\theta) = h_0(\rho) \cdot h_1(\psi \mid \rho). $$
The marginal likelihood for ρ , denoted by L m , is:
$$ L_m(\rho \mid y) \propto \int_{\Psi} L(\theta \mid y) \cdot h_1(\psi \mid \rho) \, d\psi, $$
where $y = \{y_t\}_{t=1}^{T+p}$ is the observations vector, $L(\theta \mid y)$ the full likelihood, and $\Psi$ the support of the random vector $\psi$. This marginal likelihood, associated with a prior for $\rho$, is the main ingredient used by standard Bayesian procedures to test the existence of unit roots. Even though the procedure varies among authors according to some specific aspects, mentioned below, basically all of them use Bayes factors and posterior probabilities.
One important issue is the specification of the null hypothesis: some authors, starting from [23], consider H 0 : ρ = 1 against H 1 : ρ < 1 . Starting from [24], this is the way the frequentist school addresses the problem, but following this approach no explosive value for ρ is considered. The decision theoretic Bayesian approach solved the problem using the posterior probabilities ratio or Bayes factor:
$$ B_{01} = \frac{L_m(\rho = 1 \mid y)}{\int_0^1 L_m(\rho \mid y) \cdot h_0(\rho) \, d\rho}. $$
Advocates of this solution argue that one of the advantages of this approach is that the null and the alternative hypotheses are given equal weight. However, the expression above is not defined if $h_0(\rho)$ is not a proper density, since the denominator of the Bayes factor is equal to the predictive density, which is defined only if $h_0(\rho)$ is proper. There are also problems if $L_m(\rho = 1 \mid y)$ is zero or infinite.
The problem is approached by [20,25] by testing $H_0: \rho \geq 1$ against $H_1: \rho < 1$, explicitly considering explosive values for $\rho$. The main advantage of this strategy is the possibility of computing posterior probabilities like
$$ P(\rho \geq 1 \mid y) = \int_1^{\infty} g_m(\rho \mid y) \, d\rho, $$
defined even for improper priors on ρ , where g m is the marginal posterior for ρ .
In [26], the authors do not choose ρ as the parameter of interest, examining instead the largest absolute value of the roots of the characteristic polynomial and then verifying if it is smaller or larger than one. Usually, this value is slightly smaller than ρ , but the authors argue that this small difference may be important. When this approach is used, unit roots are found less frequently. For an AR(3) model with a constant and deterministic trend, [26] derives the posterior density for the dominant root for the 14 series used in [27] and concluded the following: for eleven of the series, the dominant root was smaller than one, that is to say, the series were trend-stationary. These results were based on flat priors for the autoregressive parameters and the deterministic trend coefficient.
Another controversy is about the prior over ρ : [20] argues that the difference between the results given by the frequentist and Bayesian inferences is due to the fact that the flat prior proposed in [18] overweights the stationary region of ρ . Hence, he derived a Jeffreys prior for the AR(1) model: this prior quickly diverges as ρ increases and becomes larger than one. The obtained posterior led to the same results of [27], which will be discussed in detail in the following section. The critics of the approach adopted by Phillips in [20] judged the Jeffreys prior as unrealistic, from a subjective point of view. See the comments on Phillips’s paper on the Journal of Applied Econometrics, volume 6, number 4, 1991. The subsequent papers of the same number support the Bayesian approach. This is a nonsensical objection if one considers that the Jeffreys prior is crucial to ensure an invariant inferential procedure, and invariance is a highly desirable property, for either objective or subjective reasons. See [28] for more on invariance in physics and statistical models.
A final controversial point concerns the modeling of the initial observations. If the likelihood explicitly models the initial observed values (i.e., it is an exact likelihood), the process is implicitly considered stationary. In fact, when it is known that the process is stationary, and it is believed that the data generating process has been working for a long period, it is reasonable to assume that the parameters of the model determine the marginal distribution of the initial observations. In the simplest AR(1) model, this would imply that $y_1 \sim N(0, \sigma^2/(1-\rho^2))$. In this scenario, performing the inference conditionally on the first observation would discard relevant information. On the other hand, there is no marginal distribution defined for $y_1$ if the generating process is not stationary. Then, it is valid to use a likelihood conditional on the initial observations. For the models presented here, we always work with the conditional likelihood. As argued in [18], inferences for stationary models are little affected by the use of conditional likelihoods, especially for large samples; he compares these inferences with the ones based on exact likelihoods under explicit modeling of the initial observations.

3. Implementing the FBST for Unit Root Testing

We will now describe how to use the FBST to test for the presence of unit roots, referring to the general model (5). It is also possible to include $q \in \mathbb{N}$ moving average terms in (3) to model the process, a case that will not be covered in this article but that, in principle, should not imply major problems for the FBST.
Recall Equation (5), where $\varepsilon_t \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)$ for $t = 1, \ldots, T+p$, and that the hypothesis being tested is $\Gamma_0 = 0$. We slightly change the notation of the last section, now using $\psi$ to denote the vector $(\mu, \delta, \Gamma_0, \ldots, \Gamma_{p-1})$ and setting $\theta = (\psi, \sigma)$.
Recalling the steps to implement the FBST displayed in Table 1, we have just specified the statistical model. The likelihood, conditional on the first p observations, derived from the Gaussian model is
$$ L(\theta \mid y) = (2\pi)^{-T/2} \sigma^{-T} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{t=p+1}^{T+p} \varepsilon_t^2 \right\}, $$
in which $\varepsilon_t = \Delta y_t - \mu - \delta \cdot t - \Gamma_0 y_{t-1} - \Gamma_1 \Delta y_{t-1} - \cdots - \Gamma_{p-1} \Delta y_{t-p+1}$. To complete step 1 of Table 1, we need a prior distribution for $\theta$. For all the series modeled in this article, we will use the following non-informative prior:
$$ h(\theta) = h(\psi, \sigma) \propto 1/\sigma. $$
We are aware of the problems caused by improper priors applied to this problem when one uses alternative approaches, like those mentioned by [22]. However, one of our goals is to show how the FBST can be implemented even for a potentially problematic prior like this one. To write the posterior, we use the following notation:
$$ \Delta Y = \begin{bmatrix} \Delta y_{p+1} \\ \Delta y_{p+2} \\ \vdots \\ \Delta y_{T+p} \end{bmatrix}, \quad X = \begin{bmatrix} 1 & p+1 & y_p & \Delta y_p & \cdots & \Delta y_2 \\ 1 & p+2 & y_{p+1} & \Delta y_{p+1} & \cdots & \Delta y_3 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 1 & T+p & y_{T+p-1} & \Delta y_{T+p-1} & \cdots & \Delta y_{T+1} \end{bmatrix}, \quad \psi = \begin{bmatrix} \mu \\ \delta \\ \Gamma_0 \\ \vdots \\ \Gamma_{p-1} \end{bmatrix}, $$
being Δ Y of dimension T × 1 , X of dimension T × ( p + 2 ) and ψ , ( p + 2 ) × 1 . Thanks to this notation, we can write, using primes to denote transposed matrices:
$$ \sum_{t=p+1}^{T+p} \varepsilon_t^2 = (\Delta Y - X\psi)'(\Delta Y - X\psi) = (\Delta Y - \widehat{\Delta Y})'(\Delta Y - \widehat{\Delta Y}) + (\psi - \hat{\psi})' X'X (\psi - \hat{\psi}), $$
where $\hat{\psi} = (X'X)^{-1} X' \Delta Y$ is the ordinary least squares (OLS) estimator of $\psi$ and $\widehat{\Delta Y} = X \hat{\psi}$ its prediction for $\Delta Y$. Thus, the full posterior is
$$ g(\theta \mid y) \propto \sigma^{-(T+1)} \exp\left\{ -\frac{1}{2\sigma^2} \left[ (\Delta Y - \widehat{\Delta Y})'(\Delta Y - \widehat{\Delta Y}) + (\psi - \hat{\psi})' X'X (\psi - \hat{\psi}) \right] \right\}, \qquad (8) $$
a Normal-Inverse Gamma density.
Step 2 demands a reference density in order to define the relative surprise function. Since we will use the improper density $r(\theta) \propto 1$, the surprise function will be equivalent to the posterior distribution in our applications. Given this, to find $s^*$ (Step 3), we need to find the maximum value of the posterior under the hypothesis being tested, in our case, $\Gamma_0 = 0$.
This maximization step is fairly simple to implement given the modeling choices made here: Gaussian likelihood, non informative prior and reference density proportional to a constant. The restricted (assuming H) posterior distribution is
$$ g_r(\theta_r \mid y) \propto \sigma^{-(T+1)} \exp\left\{ -\frac{1}{2\sigma^2} \left[ (\Delta Y - \widehat{\Delta Y}_r)'(\Delta Y - \widehat{\Delta Y}_r) + (\psi_r - \hat{\psi}_r)' X_r'X_r (\psi_r - \hat{\psi}_r) \right] \right\}, \qquad (9) $$
in which $\theta_r = (\psi_r, \sigma)$, $\psi_r$ being the vector $\psi$ without $\Gamma_0$,
$$ X_r = \begin{bmatrix} 1 & p+1 & \Delta y_p & \cdots & \Delta y_2 \\ 1 & p+2 & \Delta y_{p+1} & \cdots & \Delta y_3 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & T+p & \Delta y_{T+p-1} & \cdots & \Delta y_{T+1} \end{bmatrix}, \quad \hat{\psi}_r = (X_r'X_r)^{-1} X_r' \Delta Y, \quad \text{and} \quad \widehat{\Delta Y}_r = X_r \hat{\psi}_r, $$
that is, $X_r$ is simply matrix X above without its third column, since the coefficient of the third column of X is $\Gamma_0$, which is null under $H: \Gamma_0 = 0$—see Equation (5). Here $\hat{\psi}_r$ is the least squares estimator of $\psi_r$ and $\widehat{\Delta Y}_r$ denotes the predicted values for $\Delta Y$ given by the restricted model. From (9), it is easy to show that the maximum a posteriori (MAP) estimator of $\theta_r$ is given by $(\hat{\psi}_r, \hat{\sigma}_r)$, with
$$ \hat{\sigma}_r = \sqrt{\frac{(\Delta Y - \widehat{\Delta Y}_r)'(\Delta Y - \widehat{\Delta Y}_r)}{T+1}}. $$
Plugging the values of $\hat{\psi}_r$ and $\hat{\sigma}_r$ into (9), we find $s^*$, as requested in Step 3. Step 4 will also be easy to implement thanks to the structure of the models assumed in this section. Since the full posterior, (8), is a Normal-Inverse Gamma density, a simple Gibbs sampler allows us to obtain a random sample from this distribution, suggesting a Monte Carlo approach to compute $\overline{ev}$. From (8), the conditional posteriors of $\psi$ and $\sigma^2$ are, respectively,
$$ g_{\psi}(\psi \mid \sigma, y) \sim N\big(\hat{\psi},\; \sigma^2 (X'X)^{-1}\big), $$
$$ g_{\sigma^2}(\sigma^2 \mid \psi, y) \sim IG\left( \frac{T+1}{2},\; H \right), $$
in which $H = 0.5\,[(\Delta Y - \widehat{\Delta Y})'(\Delta Y - \widehat{\Delta Y}) + (\psi - \hat{\psi})' X'X (\psi - \hat{\psi})]$, IG denotes the Inverse-Gamma distribution and $\hat{\psi}$ is the OLS estimator of $\psi$, as above. Appendix B gives the parametrization and the probability density function of the Inverse-Gamma distribution. With a sizable random sample from the full posterior, we estimate $\overline{ev}$ as the proportion of sampled vectors that yield a posterior value greater than $s^*$, found in Step 3. Hence, in Step 5, we simply compute one minus the estimate of $\overline{ev}$ found in Step 4. The whole procedure is summarized in Table 2. For the implementations in this article we sampled 51,000 vectors from (8) and discarded the first 1000 as a burn-in sample.
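The following Octave sketch condenses these steps under the assumptions of this section. It is an illustration of ours rather than the authors' code (see Appendix A for the latter): dY (the $T \times 1$ vector $\Delta Y$) and X (the $T \times (p+2)$ matrix, with $\Gamma_0$ multiplying its third column) are assumed to have been built beforehand as in the display above.

```octave
% Octave sketch of Table 2 (ours, not the authors' code; see Appendix A for the
% latter). dY is the (T x 1) vector Delta Y and X the (T x (p+2)) matrix built
% as in the display above, with Gamma_0 multiplying its third column.
pkg load statistics

[T, k]  = size(X);
psi_hat = X \ dY;                           % OLS / MAP estimate of psi
resid   = dY - X * psi_hat;
XtX     = X' * X;
XtX_inv = inv(XtX);  XtX_inv = (XtX_inv + XtX_inv') / 2;   % enforce symmetry

% Step 3: restricted MAP under H (Gamma_0 = 0, i.e., drop the third column of X).
Xr      = X(:, [1 2 4:k]);
psi_r   = Xr \ dY;
resid_r = dY - Xr * psi_r;
sigma_r = sqrt((resid_r' * resid_r) / (T + 1));
logpost = @(psi, sigma) -(T + 1) * log(sigma) ...
                        - sum((dY - X * psi).^2) / (2 * sigma^2);
log_s_star = logpost([psi_r(1:2); 0; psi_r(3:end)], sigma_r);

% Step 4: Gibbs sampler from the Normal-Inverse Gamma posterior.
M = 51000;  burn = 1000;  hits = 0;
sigma2 = (resid' * resid) / T;              % starting value
for m = 1:M
  psi    = mvnrnd(psi_hat', sigma2 * XtX_inv)';
  Hpar   = 0.5 * (resid' * resid + (psi - psi_hat)' * XtX * (psi - psi_hat));
  sigma2 = 1 / gamrnd((T + 1) / 2, 1 / Hpar);   % Inverse-Gamma draw
  if m > burn && logpost(psi, sqrt(sigma2)) > log_s_star
    hits = hits + 1;                        % sampled point lies in the tangent set
  end
end

ev = 1 - hits / (M - burn);                 % Step 5: e-value supporting Gamma_0 = 0
printf("e-value for H: Gamma_0 = 0 is %.3f\n", ev);
```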

Results

We implemented the FBST as described above to test the presence of unit roots in 14 U.S. macroeconomic time series, all with annual frequency, first mentioned in [27]. We used the extended series, analyzed in [23]. Appendix A brings more information on the data set and the computational resources and codes used to obtain the results displayed in Table 3 below.
Table 3 reports the names of the tested series, the number of available observations (sample size), the adopted value for p—as denoted in Equation (8)—whether a linear (deterministic) trend was included in the model, the ADF test statistic and its respective p-value. We have used the computer package described in [29], available in the R library urca, to find the ADF p-values. The last two columns give the posterior probability of non-stationarity, $\Gamma_0 \geq 0$, and the FBST e-values for the specified models. In order to obtain comparable results, we have adopted the models chosen by [22] for all the series. All the models considered the intercept or constant term, $\mu$ in (9).
The results show that the non-stationary posterior probabilities are quite distant from the ADF p-values. These differences were highlighted in [18,19]. Considering the simplest AR(1) model, they argued that, since frequentist inference is based on the distribution of $\hat{\rho} \mid \rho = 1$, which is skewed, the non-stationary posterior probabilities lead to seemingly counterintuitive conclusions. Their main argument is that Bayesian inference uses a distribution (the marginal posterior of $\rho$) that is not skewed.
As mentioned before, ref. [20] claims that the difference in results between frequentist and Bayesian approaches is due to the flat prior that puts much weight on the stationary region. He proposed the use of Jeffreys priors, which restored the conclusions drawn by the frequentist test. Phillips argued that the flat prior was, actually, informative when used in time series models like those for unit root tests. Using simulations, he shows that “ [the use of a] flat prior has a tendency to bias the posterior towards stationarity. … even when [the estimate] is close to unity, there may still be a non negligible downward bias in the [flat] posterior probabilities”. Notwithstanding, the e-values reported in the last column are quite close to the ADF p-values even using the flat prior criticized by Phillips.

4. Bayesian Cointegration Tests

Before starting our brief review of the most relevant Bayesian cointegration tests, we fix notation and present the definitions to which we will refer in the sequel.
All the tests mentioned here are based on the following multivariate framework. Let $Y_t = [y_{1t} \cdots y_{nt}]'$ be a vector with $n \in \mathbb{N}$ time series, all of them assumed to be integrated of order $d \in \mathbb{N}$, i.e., to have d unit roots. The series are said to be cointegrated if there is a nontrivial linear combination of them that has $b \in \mathbb{N}$ unit roots, $b < d$. We will assume that, as in most applications, $d = 1$ and $b = 0$, meaning that, if the time series in $Y_t$ are cointegrated, there is a linear combination $a'Y_t$ that is stationary, where $a \in \mathbb{R}^n$ is the cointegrating vector. Since the linear combination $a'Y_t$ is often motivated by problems found in economics, it is called a long-run equilibrium relationship. The explanation is that non-stationary time series that are related by a long-run relationship cannot drift too far from the equilibrium because economic forces will act to restore the relationship.
Notice also that: (i) the cointegrating vector is not uniquely determined since, for any scalar s, ( s · a ) is a cointegrating vector; and (ii) if Y t has more than two series, it is possible that there is more than one cointegrating vector generating a stationary linear combination.
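A short simulation, ours and purely illustrative, makes the definition tangible: two random walks driven by the same stochastic trend are each non-stationary, yet their difference is stationary, so $a = [1\; -1]'$ is a cointegrating vector.

```octave
% Illustrative simulation (ours): two I(1) series driven by the same stochastic
% trend, so the combination y1 - y2 is stationary and a = [1 -1]' cointegrates.
T  = 500;
w  = cumsum(randn(T, 1));                  % common random-walk trend
y1 = w + randn(T, 1);                      % first series: trend plus noise
y2 = w + randn(T, 1);                      % second series: same trend plus noise

z = y1 - y2;                               % candidate equilibrium error a'Y_t
% y1 and y2 wander without reverting to any fixed level, while z fluctuates
% around zero with bounded sample variance (close to 2 here):
printf("var(y1) = %.1f, var(y2) = %.1f, var(y1 - y2) = %.2f\n", ...
       var(y1), var(y2), var(z));
```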
It is assumed that the data generating process of Y t is described by the following vector autoregression with p N lags, denoted VAR(p), and given by:
$$ Y_t = c + \Phi_0 D_t + \Phi_1 Y_{t-1} + \cdots + \Phi_p Y_{t-p} + E_t, \qquad (12) $$
in which c is a ( n × 1 ) vector of constants, D t a vector ( n × 1 ) with some deterministic variable, such as deterministic trends or seasonal dummies, Φ i are ( n × n ) coefficients matrices and E t is a ( n × 1 ) stochastic vector with multivariate normal distribution with null expected value and covariance matrix Ω , denoted N n ( 0 , Ω ) . This dynamic model is assumed valid for t = 1 , , T + p , the available span of observations of Y t . As in the univariate case, one may include moving average terms in (12), i.e., lags for E t , but this, in principle, would not cause major problems in the Bayesian framework. Model (12) can be rewritten using the lag or backshift operator as
$$ (I_n - \Phi_1 B - \cdots - \Phi_p B^p)\, Y_t = c + \Phi_0 D_t + E_t, \qquad (13) $$
where $\Phi(B) = I_n - \Phi_1 B - \cdots - \Phi_p B^p$ is the (multivariate) autoregressive polynomial and $I_n$ denotes the n-dimensional identity matrix. The associated characteristic polynomial in this context is the determinant of $\Phi(z)$, $z \in \mathbb{C}$. If all the roots of the characteristic polynomial lie outside the unit circle, it is possible to show that $Y_t$ has a stationary representation—see [30]—such as Equation (13). In order to determine whether this is the case, model (12) is rewritten as a (vectorial) error correction model (VECM):
$$ \Delta Y_t = c + \Phi_0 D_t + \Gamma_1 \Delta Y_{t-1} + \cdots + \Gamma_{p-1} \Delta Y_{t-p+1} + \Pi Y_{t-1} + E_t, \qquad (14) $$
where $\Delta Y_t = [\Delta y_{1t} \cdots \Delta y_{nt}]'$, $\Gamma_i = -(\Phi_{i+1} + \cdots + \Phi_p)$ for $i = 1, 2, \ldots, p-1$ and $\Pi = -\Phi(1) = -(I_n - \Phi_1 - \cdots - \Phi_p)$. It is possible to show that, when all the roots of det($\Phi(z)$) are outside the unit circle, matrix $\Pi$ in (14) has full rank, i.e., all the n eigenvalues of $\Pi$ are non null. If the rank of $\Pi$ is null, this matrix cannot be distinguished from a null matrix, implying that the series in $Y_t$ have at least one unit root and a valid representation is a VAR of order $p-1$ for the differenced series, i.e., model (14) without the term $\Pi Y_{t-1}$. It is also possible that the series in $Y_t$ have two unit roots each, implying that the correct VECM must be written with $\Delta^2 Y_t$ as the dependent variable.
Finally, if the $(n \times n)$ matrix $\Pi$ has rank r, $0 < r < n$, it has $n - r$ null eigenvalues, implying that the series in $Y_t$ have at least one unit root and a valid representation is given by the VECM in Equation (14). In this case, $\Pi = \alpha \beta'$, where $\alpha$ and $\beta$ are $(n \times r)$ matrices of rank r. Matrix $\beta$ contains the cointegrating vectors and matrix $\alpha$ is called the loading matrix, since it contains the weights of the equilibrium relationships. The tests developed in [2] focus on the rank of matrix $\Pi$.
The pioneer Bayesian works to study VAR models and reduced rank regressions are [31,32,33]. However, the main concern of these papers is to estimate the model parameters and their (marginal) posterior distributions. The usual approach is to assume a given rank for the long run matrix Π , and proceed with all the computations conditional on the given rank. The Bayesian initiatives to test the rank of the referred matrix are recent, the main reference for Bayesian inference on VECM’s being [34].
To justify inferential procedures based on prespecified ranks of matrix Π , [22] argued that an empirical cointegration analysis should be based on economic theory, which proposes models obeying equilibrium relationships. According to this view, cointegration research should be “confirmatory” rather than “exploratory”. Even though the advocated conditional inference is of simple implementation and very useful for small samples, [22] recognized that tests for the rank of matrix Π should be developed. To our knowledge, few initiatives with this purpose were developed up to now.
One common approach to test sharp hypotheses in the Bayesian framework is by means of Bayes factors. Testing the rank of matrix Π by Bayes factors implies several computational complications and requires the use of proper priors, as shown in [35]. Following an informal approach, [33] obtained the posterior distribution of the ordered eigenvalues of the “squared” long run matrix, Π · Π , obtained from a VAR model without assuming the existence of cointegration relations. As the long run matrix has a reduced rank, it has some null eigenvalues, and this should be revealed by the fact that the smallest eigenvalues should have a lot of probability mass accumulated on values close to zero. The computations can be made straightforwardly, simulating values for the long run matrix from its (marginal) posterior distribution, which is a matrix t-Student distribution under the non informative prior (16), also considered in the sequel.
Another common procedure is to estimate the rank of Π as the value r that maximizes the (marginal) posterior distribution of the rank. Conditioned on such an estimate, one proceeds to derive the full posterior and eventually estimate the cointegration space, i.e., the linear space spanned by β .
A different approach was proposed by [36], who used the Posterior Information Criterion (PIC), developed in [37], as a criterion to choose the mode of the posterior distribution of the rank of Π . However, as highlighted in [34], one of the advantages of the Bayesian approach is the possibility to incorporate the uncertainty about the parameters in the analysis, represented by the posterior distribution of the rank and, whatever the tool the scientist uses to infer the value of r, it is derived from this posterior distribution.
The authors of [38] nested the reduced rank models in an unrestricted VAR and used Metropolis–Hastings sampling with the Savage–Dickey density ratio—see [39]—to estimate the Bayes Factors of all the models with incomplete rank up to the model with full rank. The Bayes Factor derivation requires the estimation of an error correction factor for the incomplete rank. This factor, however, is not defined for improper priors due to a problem known as Bartlett paradox, which arises whenever one compares models of different dimensions. The difficulty is relevant in the present case because, after deriving the rank posterior density, one may consider that models of different dimensions are being compared. The paradox is stated informally as: improper priors should be avoided when one computes Bayes Factors (except for parameters common to both models) as they depend on arbitrary constants (that are integrals).
More recently, [40] developed an efficient procedure to obtain the posterior distribution of the rank using a uniform proper prior over the cointegration space linearly normalized. The author derived solutions for the posterior probabilities for the null rank and for the full rank of Π . The posterior probabilities of each intermediate rank are derived from the posterior samples of the matrices that compose the long run matrix ( α and β ) , properly normalized, under each rank and using the method proposed by [41].

5. Implementing the FBST as a Cointegration Test

This section describes how to implement the FBST to test for cointegration. We will proceed in the same spirit of Section 3, i.e., describing the steps given in Table 1 to implement the test for cointegration.
Let us begin recalling the VECM given by Equation (14):
See Equation (14), where $t = 1, \ldots, T+p$ and $E_t \overset{\text{i.i.d.}}{\sim} N_n(0, \Omega)$, with 0 a null vector of dimension $n \times 1$ and $\Omega$ a symmetric positive definite real matrix. Notice that these assumptions already specify the statistical model (Gaussian) and its implied likelihood. Before giving it explicitly, let us rewrite Equation (14) using matrix notation:
$$ \Delta \mathbf{Y} = Z \cdot \eta + E, \qquad (15) $$
where
$$ \Delta \mathbf{Y} = \begin{bmatrix} \Delta Y_{p+1}' \\ \Delta Y_{p+2}' \\ \vdots \\ \Delta Y_{T+p}' \end{bmatrix}, \quad Z = \begin{bmatrix} 1 & D_{p+1}' & \Delta Y_p' & \cdots & \Delta Y_2' & Y_p' \\ 1 & D_{p+2}' & \Delta Y_{p+1}' & \cdots & \Delta Y_3' & Y_{p+1}' \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 1 & D_{T+p}' & \Delta Y_{T+p-1}' & \cdots & \Delta Y_{T+1}' & Y_{T+p-1}' \end{bmatrix}, \quad \eta = \begin{bmatrix} c' \\ \Phi_0' \\ \Gamma_1' \\ \vdots \\ \Gamma_{p-1}' \\ \Pi' \end{bmatrix}, $$
and the error matrix is given by $E \sim MN_{T \times n}(0, I_T, \Omega)$, denoting the matrix normal distribution. See Appendix B for more information on this distribution. Now the parameter vector is given by $\Theta = (\eta, \Omega)$.
Notice that $\Delta \mathbf{Y}$ is formed by piling up the T transposed vectors $\Delta Y_t'$, thus resulting in a matrix with T rows and n columns (n is the number of time series in vector $Y_t$), these being also the dimensions of matrix E. Matrix Z is constructed likewise—always piling up the transposed vectors—resulting in a matrix with T rows and $pn + n + 1$ columns. Finally, matrix $\eta$ gathers the matrices of coefficients, all piled up properly, resulting in a matrix with $pn + n + 1$ rows and n columns.
Given the assumptions above, $\Delta \mathbf{Y} \sim MN_{T \times n}(Z \cdot \eta, I_T, \Omega)$, implying that the likelihood is
$$ L(\Theta \mid y) \propto |\Omega|^{-T/2} \exp\left\{ -\frac{1}{2} \, \mathrm{tr}\left[ \Omega^{-1} (\Delta \mathbf{Y} - Z\eta)'(\Delta \mathbf{Y} - Z\eta) \right] \right\}, $$
where y denotes the set of observed values of the vectors $Y_t$ for $t = 1, \ldots, T+p$. As in Section 3, we will consider an improper prior for $\Theta$, given by
$$ h(\Theta) = h(\eta, \Omega) \propto |\Omega|^{-(n+1)/2}, \qquad (16) $$
and our reference density, r ( Θ ) , will be proportional to a constant, leading to a surprise function equivalent to the (full) posterior distribution. These choices correspond to steps 1 and 2 of Table 1. These modeling choices imply the following posterior density:
$$ g(\Theta \mid y) \propto |\Omega|^{-(T+n+1)/2} \exp\left\{ -\frac{1}{2} \, \mathrm{tr}\left[ \Omega^{-1} (\Delta \mathbf{Y} - Z\eta)'(\Delta \mathbf{Y} - Z\eta) \right] \right\} = |\Omega|^{-(T+n+1)/2} \exp\left\{ -\frac{1}{2} \, \mathrm{tr}\left\{ \Omega^{-1} \left[ S + (\eta - \hat{\eta})' Z'Z (\eta - \hat{\eta}) \right] \right\} \right\}, \qquad (17) $$
where $\hat{\eta} = (Z'Z)^{-1} Z' \Delta \mathbf{Y}$ and $S = \Delta \mathbf{Y}' \Delta \mathbf{Y} - \Delta \mathbf{Y}' Z (Z'Z)^{-1} Z' \Delta \mathbf{Y}$.
To implement Step 3 of Table 1, we need to find the maximum a posteriori of (17) under the constraint $\Theta \in \Theta_H$, i.e., we need to maximize the posterior on $\Theta_H$. Since we are testing the rank of matrix $\Pi$, as discussed in the beginning of Section 4, it is necessary to maximize the posterior assuming the rank of $\Pi$ is r, $0 \leq r \leq n$. Thanks to the modeling choices made here—Gaussian likelihood and Equation (16) as prior—our posterior is almost identical to a Gaussian likelihood, allowing us to find this maximum using a strategy similar to that proposed by [2], who derived the maximum of the (Gaussian) likelihood function assuming a reduced rank for $\Pi$. We will summarize Johansen's algorithm, providing in Appendix C a heuristic argument of why it indeed provides the maximum value of the posterior under the assumed hypotheses.
We begin by estimating a VAR($p-1$) model for $\Delta Y_t$ with all the explanatory variables shown in (14) except for $Y_{t-1}$. Using the matrix notation established above, this corresponds to estimating
$$ \Delta \mathbf{Y} = Z_1 \cdot \eta_1 + U, $$
where
$$ Z_1 = \begin{bmatrix} 1 & D_{p+1}' & \Delta Y_p' & \cdots & \Delta Y_2' \\ 1 & D_{p+2}' & \Delta Y_{p+1}' & \cdots & \Delta Y_3' \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & D_{T+p}' & \Delta Y_{T+p-1}' & \cdots & \Delta Y_{T+1}' \end{bmatrix} \quad \text{and} \quad \eta_1 = \begin{bmatrix} e' \\ \tau_0' \\ \upsilon_1' \\ \vdots \\ \upsilon_{p-1}' \end{bmatrix}, $$
showing that $Z_1$ is obtained from matrix Z by extracting its last n columns, exactly those corresponding to $Y_{t-1}$.
We also estimate a second set of auxiliary equations, regressing $Y_{t-1}$ on a vector of constants and $D_t, \Delta Y_{t-1}, \ldots, \Delta Y_{t-p+1}$. By piling up all the (transposed) vectors $Y_{t-1}$ for $t = p+1, \ldots, T+p$, we obtain a $(T \times n)$ matrix, denoted by $\mathbf{Y}_{-1}$. As above, these equations can be represented by
$$ \mathbf{Y}_{-1} = Z_1 \cdot \eta_2 + V, $$
where
$$ \mathbf{Y}_{-1} = \begin{bmatrix} Y_p' \\ Y_{p+1}' \\ \vdots \\ Y_{T+p-1}' \end{bmatrix} \quad \text{and} \quad \eta_2 = \begin{bmatrix} m' \\ \nu_0' \\ \zeta_1' \\ \vdots \\ \zeta_{p-1}' \end{bmatrix}. $$
Considering the OLS estimates of these sets of equations and their respective estimated residuals, we may write
$$ \Delta \mathbf{Y} = Z_1 \cdot \hat{\eta}_1 + \hat{U}, $$
$$ \mathbf{Y}_{-1} = Z_1 \cdot \hat{\eta}_2 + \hat{V}, $$
where $\hat{\eta}_1 = (Z_1'Z_1)^{-1} Z_1' \Delta \mathbf{Y}$, $\hat{\eta}_2 = (Z_1'Z_1)^{-1} Z_1' \mathbf{Y}_{-1}$, and $\hat{U}$ and $\hat{V}$ are the respective matrices of estimated residuals. Thanks to the Frisch–Waugh–Lovell theorem—see [42], Theorem 3.3, or [43], Section 2.4—it is possible to show that the estimated residuals of these auxiliary regressions are related by $\Pi$ in the regression
$$ \hat{U} = \hat{V} \, \Pi' + \hat{W}. \qquad (20) $$
One can prove that the OLS estimate of $\Pi$ obtained from (15) and that obtained from (20) are numerically identical, as are the estimated residuals $\hat{E}$ and $\hat{W}$.
The second stage of Johansen’s algorithm requires the computation of the following sample covariance matrices of the OLS residuals obtained above:
$$ \hat{\Sigma}_{VV} = \frac{1}{T} \hat{V}'\hat{V}, \quad \hat{\Sigma}_{UU} = \frac{1}{T} \hat{U}'\hat{U}, \quad \hat{\Sigma}_{UV} = \frac{1}{T} \hat{U}'\hat{V}, \quad \hat{\Sigma}_{VU} = \hat{\Sigma}_{UV}', $$
and, from these, we find the n eigenvalues of matrix
$$ \hat{\Sigma}_{VV}^{-1} \cdot \hat{\Sigma}_{VU} \cdot \hat{\Sigma}_{UU}^{-1} \cdot \hat{\Sigma}_{UV}, $$
ordering them decreasingly, $\hat{\lambda}_1 > \hat{\lambda}_2 > \cdots > \hat{\lambda}_n$. The maximum value attained by the log posterior subject to the constraint that there are r ($0 \leq r \leq n$) cointegration relationships is
$$ \ell^* = K - \frac{T+n+1}{2} \cdot \log|\hat{\Sigma}_{UU}| - \frac{T+n+1}{2} \cdot \sum_{i=1}^{r} \log(1 - \hat{\lambda}_i), \qquad (21) $$
where K is a constant that depends only on T, n and y by means of the marginal distribution of the data set, y. Since $\ell^*$ represents the maximum of the log-posterior, to obtain $s^*$ one should take $s^* = \exp(\ell^*)$, completing step 3 of Table 1.
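A compact Octave sketch of this maximization stage follows; it is our illustration under the assumptions of this section, with dY (the $T \times n$ matrix of stacked $\Delta Y_t'$), Z1 (Z without its last n columns) and Y1 (the matrix $\mathbf{Y}_{-1}$ of piled-up lagged levels) assumed to have been built beforehand, and the constant K omitted since only the rank-dependent part of $\ell^*$ varies with the hypothesis.

```octave
% Octave sketch of this maximization stage (ours): dY is the (T x n) matrix of
% stacked Delta Y_t', Z1 the regressor matrix Z without its last n columns and
% Y1 the (T x n) matrix of stacked lagged levels, all assumed built beforehand.
% The constant K is omitted: only the rank-dependent part of the log posterior
% is needed when the sampled posterior values use the same normalization.
[T, n] = size(dY);

U = dY - Z1 * (Z1 \ dY);          % residuals of the first auxiliary regression
V = Y1 - Z1 * (Z1 \ Y1);          % residuals of the second auxiliary regression

Svv = (V' * V) / T;   Suu = (U' * U) / T;
Suv = (U' * V) / T;   Svu = Suv';

lambda = sort(real(eig(Svv \ Svu * (Suu \ Suv))), 'descend');

% Rank-dependent part of the maximized log posterior for r = 0, 1, ..., n;
% by construction the sequence is non-decreasing in r.
ell = -((T + n + 1) / 2) * (log(det(Suu)) + cumsum([0; log(1 - lambda)]));
disp(ell');
```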
As in Section 3, we compute e v ¯ in step 4 by means of a Monte Carlo algorithm. It is easy to factor the full posterior (17) as a product of a (matrix) normal and an Inverse-Wishart, suggesting a Gibbs sampler to generate random samples from the full posterior. See Appendix B for more on the Inverse-Wishart distribution. Thus, the conditional posteriors for η and Ω are, respectively,
$$ g_{\eta}(\eta \mid \Omega, y) \sim MN_{k \times n}\big(\hat{\eta},\; (Z'Z)^{-1},\; \Omega\big), $$
$$ g_{\Omega}(\Omega \mid \eta, y) \sim IW\big(S + (\eta - \hat{\eta})' Z'Z (\eta - \hat{\eta}),\; T\big), $$
where $S = \Delta \mathbf{Y}' \Delta \mathbf{Y} - \Delta \mathbf{Y}' Z (Z'Z)^{-1} Z' \Delta \mathbf{Y}$, IW denotes the Inverse-Wishart distribution, $k = pn + n + 1$ is the number of rows of $\eta$, and $\hat{\eta}$ its OLS estimator, as above. From a Gibbs sampler with these conditionals, we obtain a random sample from the full posterior and estimate $\overline{ev}$ as the proportion of sampled vectors that generate a value for the posterior greater than $s^*$. Finally, we obtain $ev = 1 - \overline{ev}$ in the final step (5). The whole implementation for cointegration tests, following the assumptions made in this section, is summarized in Table 4. See Appendix A for more information on the computational resources needed to implement the steps given by Table 4.
Before presenting the results of the procedure applied to real data sets, it is important to remark on one feature of the FBST applied to cointegration tests. The estimated eigenvalues $\hat{\lambda}_i$ correspond to the squared canonical correlations between $\Delta Y_t$ and $Y_{t-1}$, corrected for the variables in $Z_1$, and therefore lie between 0 and 1. Therefore, (21) shows that $\ell_0^* \leq \ell_1^* \leq \cdots \leq \ell_n^*$, where $\ell_r^*$ denotes the maximum of the log-posterior of model (14) assuming $\Pi$ has rank r, $0 \leq r \leq n$. Therefore, one may say that the hypotheses rank($\Pi$) = r are nested, in the sense that the respective e-values obtained by the FBST for these hypotheses are always non-decreasing: $ev(0) \leq ev(1) \leq \cdots \leq ev(n)$.
This nested formulation is also present in the frequentist procedure proposed by [2], based on the likelihood ratio statistics for successive ranks of Π . Thus, the FBST should be used, like the maximum eigenvalue test, in a sequential procedure to test for the number of cointegrating relationships. We will show how this should be done in presenting the applied results in the sequel.

Results

Now we present, by means of four examples, the application of FBST as a cointegration test. In all the examples, we have adopted a Gaussian likelihood and the improper prior (16). The Gibbs sampler was implemented as described above, providing 51,000 random vectors from the posterior (17). The first 1000 samples were discarded as a burn-in sample, the remaining 50,000 being used to estimate the integral (2). The tables show the e-value computed from the FBST and the maximum eigenvalue test statistics with their respective p-values.
Example 1.
We analyzed four electroencephalography (EEG) signals from a subject who had previously presented epileptic seizures. The original study, [44], had the aim of detecting seizures based on multiple hours of recordings for each individual; the cointegration analysis of the mentioned signals was presented by [45]. In fact, the cointegration hypothesis is tested using the phase processes estimated from the original signals. This is done by passing the signal through the Hilbert transform and then "unwrapping" the resulting transform. Sections 2 and 5 of [45,46] provide more details on the Hilbert transform and unwrapping.
The labels of the modeled series refer to the electrodes on the scalp. As seen in Figure 1 and Figure 2, the series are called FP1-F7, FP1-F3, FP2-F4, and FP2-F8, where FP refers to the frontal lobes and F refers to a row of electrodes placed behind these. Even numbered electrodes are on the right side and odd numbered electrodes are on the left side. The electrodes for these four signals mirror each other on the left and right sides of the scalp. The recordings of the studied subject, an 11-year-old female, identified a seizure in the interval (measured in seconds) [2996, 3036]. Therefore, like [45], we analyze the period of 41 seconds prior to the seizure—interval [2956, 2996]—and the subsequent 41 seconds—interval [2996, 3036]—the seizure period. In the sequel, we will refer to these as prior to seizure and during seizure, respectively. Since the sampling frequency is 256 measurements per second, there are a total of 10,496 measurements for each of the four signals. Ref. [45] used 40 seconds for each period, obtaining slightly different results.
Figure 1 and Figure 2 display the estimated phases based on the original signals. The model proposed by [45] is a VAR(1), resulting in a VECM given by
$$ \Delta Y_t = c + \Pi Y_{t-1} + E_t. $$
Table 5 and Table 6 present the results that essentially lead to the same conclusions obtained by [45], even though they have based their findings on the trace test. See Table 8 of [45].
The comparison between p-values and FBST e-values must be made carefully, the main reason being that p-values are not measures of support for the null hypothesis, while e-values provide exactly that kind of support. That being said, a possible way to compare them is by checking the decision their use recommends regarding the hypothesis being tested, i.e., whether or not to reject the null hypothesis.
Frequentist tests often adopt a significance level approach: given an observed p-value, the hypothesis is rejected if the p-value is smaller or equal to the mentioned level, usually 0.1, 0.05, or 0.01. Since the cointegration ranks generate nested likelihoods, the hypotheses are tested sequentially, starting with null rank, r = 0 . For Table 5, adopting a 0.01 significance level, the maximum eigenvalue test would reject r = 0 and r = 1 , and would not reject r = 2 . The same conclusions follow for Table 6. Thus, the recommended action is to work, for estimation purposes for instance, assuming two cointegration relationships.
The question of which threshold value to adopt for the FBST was already mentioned in Section 1.1, but it is worthwhile to underline it once more. We highly recommend a principled approach, deriving the cut-off value from a loss function that is specific to the problem at hand and the purposes of the analysis. A naive but simpler approach would be to reject the hypothesis if the e-value is smaller than 0.05 or 0.01, emulating the frequentist strategy. Even though we do not recommend this path, since p-values are not supporting measures for the hypothesis being tested while e-values are, the researcher may numerically compare p-values and e-values in a specific scenario. If the p-values were derived from a generalized likelihood ratio test, it is possible to compare them asymptotically. The relationship is $ev = 1 - F_m\left[F_{m-h}^{-1}(1-p)\right]$, where m is the dimension of the full parameter space, h the dimension of the parameter space under the null hypothesis, $F_k$ the chi-square cumulative distribution function with k degrees of freedom, and p the corresponding p-value. See [9,12] for the proof of the asymptotic relationship between e-values and p-values.
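For reference, a one-line Octave helper (ours, with the hypothetical name ev_from_pvalue) implementing this conversion, using the chi-square quantile and distribution functions from the statistics package:

```octave
% Hypothetical helper (ours, file ev_from_pvalue.m) implementing the asymptotic
% relationship quoted above, ev = 1 - F_m[ F_{m-h}^{-1}(1 - p) ]; chi2inv and
% chi2cdf come from the Octave statistics package.
function ev = ev_from_pvalue(p, m, h)
  % p: likelihood-ratio p-value; m, h: dimensions of the full and null
  % parameter spaces, so the LR statistic has m - h degrees of freedom.
  ev = 1 - chi2cdf(chi2inv(1 - p, m - h), m);
end
```

With the dimensions reported in Example 3 below (m = 58 and h = 42 for the null rank), ev_from_pvalue(0.01, 58, 42) returns approximately 0.998, in line with the value quoted there.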
Since the maximum eigenvalue test is derived as a likelihood ratio test, this comparison may be done for the results of all the examples presented here, and it is particularly appropriate for this example, given its sample size of 10,496 observations. Regarding Table 5 and Table 6, one could be in doubt about whether or not to reject the hypothesis r = 1, since the e-values are larger than 0.01. However, for this model and hypothesis, the e-value corresponding to a 0.01 p-value is 0.436. Therefore, in both tables, one could reject the hypothesis and proceed to the next rank, which has plenty of evidence in its favor. In conclusion, the practical decisions of both tests (FBST and maximum eigenvalue) would be the same: not to reject r = 2.
Example 2.
The authors of [47] compare three methods for modeling empirical seasonal temperature forecasts over South America. One of these methods is based on a (possible) long-term cointegration relationship between the temperatures of the quarter March–April–May (MAM) of each year and the temperature of the previous months of November–December–January (NDJ). When there is such a relationship, the authors used the NDJ temperatures (of the previous year) as a predictor for the following MAM season.
The original data set has monthly temperatures for each coordinate (latitude and longitude) of the covered area. The mentioned series of temperatures (MAM and NDJ) are computed as seasonal averages from this monthly data set by averaging over three consecutive months. Since we have data available from January 1949 to May 2020, the resulting time series of seasonal average surface temperatures has length 72 for each grid point.
The authors of [47] consider Y t as a two-dimensional vector, its first component being the seasonal (average) MAM temperature of year t and the second component the seasonal NDJ temperature of the previous year. They consider a VAR(2) without deterministic terms to model the series, resulting in a VECM
$$ \Delta Y_t = \Gamma_1 \Delta Y_{t-1} + \Pi Y_{t-1} + E_t. $$
We have chosen five grid points corresponding to major Brazilian cities to test the cointegration hypothesis for the mentioned seasonal series. The coordinates chosen were the closest ones to: 23.5505° S, 46.6333° W for São Paulo; 22.9068° S, 43.1729° W for Rio de Janeiro; 19.9167° S, 43.9345° W for Belo Horizonte; 15.8267° S, 47.9218° W for Brasília; and 12.9777° S, 38.5016° W for Salvador. Figure 3 and Figure 4 show the seasonal temperatures for São Paulo and Brasília, respectively, indicating that the cointegration hypothesis is plausible for both cities.
The results are shown in Table 7. Assuming a significance level of 0.01, the maximum eigenvalue test rejects the null rank and does not reject r = 1 for all five cities. If we adopt the asymptotic relationship between p-values and e-values for the model under analysis, we obtain an e-value of 0.276 corresponding to a 0.01 p-value for r = 0. Therefore, the FBST would also reject the null rank for all the cities. The hypothesis r = 1 is not rejected since all the e-values are close to 1, once more in agreement with the maximum eigenvalue test.
One remark about Brasília seems in order. The city was built to be the federal capital, being officially inaugurated on 21 April 1960. The construction began circa 1957 and before that the site had no human occupation. The process of moving all the administration from Rio de Janeiro, the former capital, was slow and only the 1980 census detected a population over 1 million inhabitants. The present population is almost 3.2 million inhabitants living in the Federal District that includes Brasília and minor surrounding cities. Figure 4 indicates that the seasonal temperatures began to rise exactly after 1980.
Example 3.
We applied the FBST to the Finnish data set used in the seminal work [2].
The authors used the logarithms of the series of the M1 monetary aggregate, the inflation rate, real income, and the primary interest rate set by the Bank of Finland to model money demand, which, in theory, follows a long-term relationship. The sample has 106 quarterly observations of the mentioned variables, starting in the second quarter of 1958 and finishing in the third quarter of 1984. The chosen model was a VAR(2) with unrestricted constant, meaning that the series in $Y_t$ have one unit root with drift vector c and the cointegrating relations may have a non-zero mean. For more information about how to specify deterministic terms in a VAR, see [48], chapter 6. Seasonal dummies for the first three quarters of the year were also considered in the model chosen by [2]. Writing the model in the error correction form, we have:
$$ \Delta Y_t = c + \Phi_{0,1} D_{1t} + \Phi_{0,2} D_{2t} + \Phi_{0,3} D_{3t} + \Gamma_1 \Delta Y_{t-1} + \Pi Y_{t-1} + E_t, \qquad (26) $$
where $\Pi = \Phi_1 + \Phi_2 - I_4$, $\Gamma_1 = -\Phi_2$, c is a vector of constants and $D_{it}$ denote the seasonal dummies for quarter $i = 1, 2, 3$. The results are displayed in Table 8.
In [2], the authors concluded that there are at least two cointegration vectors, a conclusion that also follows here if one adopts a 0.01 significance level, for instance. Using the asymptotic relationship between p-values and e-values for Equation (26), we obtain, for r = 0, an e-value of 0.998 and, for r = 1, an e-value of 0.999 corresponding to a 0.01 p-value. These apparently discrepant values for the e-values are due to the high dimensions of the unrestricted (m = 58) and restricted (h = 42 for r = 0 and h = 43 for r = 1) parameter spaces. Therefore, under this criterion, the FBST also rejects the null rank and r = 1 (since 0.132 < 0.998 and 0.994 < 0.999, respectively) and does not reject r = 2, recommending the same action as the maximum eigenvalue test.
Example 4.
As a final example, we apply the FBST to a US data set discussed in [49]. The observations have annual periodicity and run from 1900 to 1985. We tested for cointegration between real national income, the M1 monetary aggregate deflated by the GDP deflator, and the commercial paper return rate. The chosen model was a VAR(1) with unrestricted constant. The series were used in natural logarithms and the results follow below:
Table 9 shows that the maximum eigenvalue test rejects r = 0 and does not reject r = 1 at a 0.05 significance level. Once more adopting the asymptotic relationship between p-values and e-values for the chosen model, we obtain, for r = 0 , an e-value of 0.247 corresponding to a 0.01 p-value. Thus, under this criterion, the FBST also rejects the null rank and does not reject r = 1 .

6. Conclusions

In the past few decades, the econometric literature has introduced statistical tests to identify unit roots and cointegration relationships in time series. The Bayesian approach to these topics advanced considerably after the 1990s, developing interesting alternatives, mostly for unit root testing. The (parametric) frequentist tests mentioned here may not be suitable since these procedures rely on the distribution of the test statistic—usually derived assuming the hypothesis being tested is true—which depends on a particular statistical model, usually Gaussian. When the distributions of such statistics cannot be obtained, the procedure is rescued by asymptotic results. If the researcher considers different statistical models and the available sample is small, the results of the tests may be quite misleading.
The present work reviewed a simple and powerful Bayesian procedure that can be applied to both purposes: unit root and cointegration testing. We have also shown that the FBST works considerably well even when one uses improper priors, a choice that may preclude the derivation of Bayes Factors, a standard Bayesian procedure in hypotheses testing.
A long series of articles, reviewed in [7] and the references therein, has shown the versatility and properties of the FBST, such as: a. the e-value derivation and computation are straightforward from its general definition; b. it uses absolutely no artificial restrictions, such as a distinct probability measure on the hypothesis set induced by some specific parametrization; c. it is in strict compliance with the likelihood principle; d. it can conduct the test with any prior distribution; e. it does not need closed conjectures concerning error distributions, even for small samples; f. it is an exact procedure, since it does not rely on asymptotic assumptions; and g. it is invariant with respect to the null hypothesis parametrization and with respect to the parameter space parametrization. See [9], p. 253 for this property.
To proceed with this research agenda, it would be interesting to perform more simulation studies with the FBST applied to unit root testing for a larger group of parametric and semi-parametric models (likelihoods). Another possibility is to include moving average terms in the data generating processes and work with Gaussian and non-Gaussian ARMA models. Notice that, given the points made above, these extensions would not impose major problems to the FBST as they would to the frequentist procedures. Regarding cointegration, the same extensions may be studied in future works, although the adoption of statistical models outside the Gaussian family would require further efforts to numerically implement the FBST. We shall also investigate the effect of the prior choice in the estimates of cointegration relations, especially for small samples.

Author Contributions

M.A.D. was responsible for conceptualization, computational implementation of the methods, formal analysis, investigation, and visualization. C.A.B.P. and J.M.S. were responsible for conceptualization, methodology, formal analysis, supervision, and funding acquisition. All the authors were responsible for writing, reviewing, and editing the original text. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by CNPq—the Brazilian National Council of Technological and Scientific Development (grants PQ 307648/2018-4, 302767/2017-7, 301892/2015-6, and 308776/2014-3); and FAPESP—the State of São Paulo Research Foundation (grants CEPID Shell-RCGI 2014/50279-4; CEPID CeMEAI 2013/07375-0). The authors are extremely grateful for the support received from their colleagues, collaborators, users, and critics in the construction of their research projects.

Acknowledgments

The authors would like to thank J. Østergaard and C. A. Coelho for kindly providing access to the data sets used in [45,47], respectively. We would also like to acknowledge the support provided by UFSCar—Federal University of São Carlos, USP—University of São Paulo, and UFMS—Federal University of Mato Grosso do Sul.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Computational Resources

The FBST was implemented in all the examples using codes written by the authors in the Matlab/Octave programming language. The results displayed in Table 3, Table 5, Table 6, Table 7, Table 8, and Table 9 were obtained using GNU Octave version 4.4.1. The only package required to run the routines was the statistics package (version 1.4.1), necessary to simulate vectors of random variables from the distributions mentioned in the text. The codes are briefly described at https://www.ime.usp.br/~jstern/software/, where they can be freely downloaded.
The original data sets used in the examples presented in this work can be obtained from the following sources:
  • Table 3: fourteen U.S. economic time series used by [23]. Available at the R library urca, where it is named “npext”.
  • Example 1: the original series used in [44,45] are available at https://physionet.org/content/chbmit/1.0.0/. The data for the subject analyzed in Example 1 is from file chb01_03.edf, found inside folder chb01. To obtain Table 5 and Table 6, the data were transformed as described in Example 1.
  • Example 2: the original data set used in [47] is available at https://climexp.knmi.nl/NCEPData/ghcn_cams_05.nc, provided by the Global Historical Climatology Network (GHCN)/Climate Anomaly Monitoring System (CAMS). The data set studied here is the 2 m temperature analysis (0.5 × 0.5) data, a high resolution (0.5 × 0.5 degrees in latitude and longitude) global land surface temperature data set covering the period 1949 to near present, in our case May 2020.
  • Example 3: the original data set with four macroeconomic series used by [2] to estimate the money demand of Finland is available in the R library urca with the name “finland”.
  • Example 4: the original data used in [49] can be downloaded from https://www.ime.usp.br/~jstern/software/.

Appendix B. Non-Standard Distributions Used in This Article

Appendix B.1. Inverse-Gamma

The probability density function of the Inverse-Gamma distribution is given by
f_0(x \mid a, b) = \frac{b^{a}}{\Gamma(a)} \cdot \frac{1}{x^{a+1}} \exp\!\left( -\frac{b}{x} \right)
for x > 0 and zero, otherwise. The parameters a and b are both positive real numbers and Γ is the gamma function.
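For concreteness, the following minimal GNU Octave sketch shows how draws from this density can be obtained with the statistics package mentioned in Appendix A, using the fact that the reciprocal of a Gamma variate with shape a and scale 1/b follows the Inverse-Gamma(a, b) law; the parameter values below are merely illustrative and not taken from the paper.

pkg load statistics
a = 3; b = 2;                    % illustrative shape and scale parameters
N = 10000;
x = 1 ./ gamrnd(a, 1/b, N, 1);   % if Y ~ Gamma(shape a, scale 1/b), then 1/Y ~ Inverse-Gamma(a, b)
mean(x)                          % should be close to the theoretical mean b/(a - 1) = 1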

Appendix B.2. Matrix Normal

The probability density function of the random matrix X with dimensions p × q that follows the matrix normal distribution MN_{p×q}(M, U, V) has the form:
f_1(X \mid M, U, V) = \frac{\exp\!\left( -\tfrac{1}{2}\,\mathrm{tr}\!\left[ V^{-1} (X - M)' \, U^{-1} (X - M) \right] \right)}{(2\pi)^{pq/2}\, |V|^{p/2}\, |U|^{q/2}}
where M ∈ ℝ^{p×q}, U ∈ ℝ^{p×p}, and V ∈ ℝ^{q×q}, with U and V symmetric positive semidefinite matrices. The matrix normal distribution can be characterized by the multivariate normal distribution as follows: X ∼ MN_{p×q}(M, U, V) if and only if vec(X) ∼ N_{pq}(vec(M), V ⊗ U), where ⊗ denotes the Kronecker product and vec(·) the vectorization operator.
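The vec/Kronecker characterization above suggests a direct way of simulating matrix normal variates. The GNU Octave sketch below is only illustrative: the particular M, U, and V are arbitrary choices, not quantities used in the paper.

p = 2; q = 3;
M = zeros(p, q);                      % mean matrix
U = eye(p);                           % row covariance (p x p)
V = 0.5 * eye(q) + 0.5 * ones(q, q);  % column covariance (q x q), positive definite
S = kron(V, U);                       % covariance of vec(X)
L = chol(S, "lower");
x = M(:) + L * randn(p * q, 1);       % vec(X) ~ N(vec(M), kron(V, U))
X = reshape(x, p, q);                 % X ~ MN_{p x q}(M, U, V)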

Appendix B.3. Inverse-Wishart

The probability density function of the Inverse-Wishart distribution is
f_2(x \mid \Lambda, \nu) = \frac{|\Lambda|^{\nu/2}}{2^{\nu p/2}\, \Gamma_p\!\left(\tfrac{\nu}{2}\right)}\, |x|^{-(\nu+p+1)/2} \exp\!\left( -\tfrac{1}{2}\,\mathrm{tr}\!\left( \Lambda x^{-1} \right) \right)
where x and Λ are p × p positive-definite matrices, and Γ p is the multivariate gamma function. Notice that we may also write the same density with tr ( x 1 Λ ) inside the exponential function, as would be convenient in our implementation of the Gibbs sampler in Section 5.
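Draws from this distribution can be obtained by inverting a Wishart variate, which in turn can be simulated through the Bartlett decomposition. The GNU Octave sketch below assumes the statistics package is available; the values of Λ and ν are illustrative only.

pkg load statistics
p = 3; nu = 10;
Lambda = eye(p);                          % illustrative scale matrix
L = chol(inv(Lambda), "lower");           % Cholesky factor of the Wishart scale Lambda^{-1}
A = zeros(p);
for i = 1:p
  A(i, i) = sqrt(chi2rnd(nu - i + 1));    % Bartlett decomposition: chi-distributed diagonal
  A(i, 1:i-1) = randn(1, i - 1);          % standard normal entries below the diagonal
end
W = (L * A) * (L * A)';                   % W ~ Wishart(Lambda^{-1}, nu)
X = inv(W);                               % X ~ Inverse-Wishart(Lambda, nu)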

Appendix C. Heuristic Proof of Johansen’s Procedure

The goal of this appendix is to provide a brief heuristic explanation of the procedure, discussed in Section 5, that finds the maximum of the posterior (17) subject to the hypothesis that the matrix Π has reduced rank r, 0 ≤ r ≤ n. The procedure is based on the algorithm proposed in [2,50] to maximize a Gaussian likelihood under the same assumption (reduced rank of the matrix Π). The formal proof of Johansen's algorithm can be found in [51], chapter 20. As mentioned in Section 5, Johansen's algorithm can be applied to the posterior (17) since this distribution is very close to a (multivariate) Gaussian likelihood.
The first step of the algorithm involves "concentrating" the posterior, i.e., assuming Ω and Π are given and maximizing the posterior with respect to all the other parameters in Θ. Hence, let γ denote the blocks of the matrix η other than Π, i.e., γ = [c  Φ_0  Γ_1  ⋯  Γ_{p−1}]. The concentrated log-posterior, denoted by M, is found by replacing γ with its maximizer γ̂(Π) in (17):
M(\Pi, \Omega \mid y) = \ln\!\left[ g\!\left( \hat{\gamma}(\Pi); \Pi, \Omega \mid y \right) \right] = C + \frac{T+n+1}{2} \ln\left| \Omega^{-1} \right| - \frac{1}{2}\, \mathrm{tr}\!\left[ \Omega^{-1} (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right] \qquad \text{(A1)}
where C is a constant that depends on T, n, and y. The strategy behind concentrating the posterior is that, if we can find the values Ω̂ and Π̂ that maximize M, then these same values, along with γ̂(Π̂), will maximize (17) under the constraint rank(Π) = r. Carrying the concentration one step further, we can find the value of Ω that maximizes (A1) assuming Π known, giving
\hat{\Omega}(\Pi) = \frac{1}{T+n+1}\, (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \qquad \text{(A2)}
To evaluate the concentrated log-posterior at Ω̂(Π), notice that
\mathrm{tr}\!\left[ \hat{\Omega}(\Pi)^{-1} (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right] = \mathrm{tr}\!\left[ (T+n+1)\, I_n \right] = n\,(T+n+1) \qquad \text{(A3)}
and, therefore, denoting by M* this new concentrated log-posterior, we have
\begin{aligned}
M^{*}(\Pi \mid y) &= C - \frac{(T+n+1)\,n}{2} - \frac{T+n+1}{2} \ln\left| \frac{1}{T+n+1}\, (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right| \qquad \text{(A4)} \\
&= C - \frac{(T+n+1)\,n}{2} - \frac{T+n+1}{2} \ln\left| \frac{T}{T+n+1} \cdot \frac{1}{T}\, (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right| \\
&= C - \frac{(T+n+1)\,n}{2} - \frac{T+n+1}{2} \ln\!\left[ \left( \frac{T}{T+n+1} \right)^{\!n} \left| \frac{1}{T}\, (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right| \right] \\
&= K - \frac{T+n+1}{2} \ln\left| \frac{1}{T}\, (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right| \qquad \text{(A5)}
\end{aligned}
where K is a new constant depending only on T, n and y . Equation (A5) represents the maximum value one can achieve for the log-posterior for any given matrix Π . Thus, maximizing the posterior comes down to choosing Π so as to minimize the determinant
\left| \frac{1}{T}\, (\hat{U} - \Pi\hat{V})(\hat{U} - \Pi\hat{V})' \right|
subject to the constraint rank(Π) = r. The solution of this problem demands the analysis of the sample covariance matrices of the OLS residuals Û and V̂, and here we only present the final expression for the maximum value achieved by the log-posterior, denoted ℓ* in Section 5:
\ell^{*} = K - \frac{T+n+1}{2} \ln\left| \hat{\Sigma}_{UU} \right| - \frac{T+n+1}{2} \sum_{i=1}^{r} \ln\!\left( 1 - \hat{\lambda}_i \right) \qquad \text{(A6)}
Chapter 20 of [51] provides the formal derivation of (A6).
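For readers who wish to reproduce the computation, the following GNU Octave sketch implements the eigenvalue step and Equation (A6). It assumes Û and V̂ are stored as n × T matrices of OLS residuals from the auxiliary regressions of Section 5; the function name and the convention K = 0 are merely illustrative, since K cancels when comparing different ranks.

function [lstar, lambda] = max_logpost_rank(Uhat, Vhat, r)
  % Maximum of the concentrated log-posterior under rank(Pi) = r, up to the constant K
  [n, T] = size(Uhat);
  Suu = (Uhat * Uhat') / T;
  Suv = (Uhat * Vhat') / T;
  Svv = (Vhat * Vhat') / T;
  % Johansen's eigenvalue problem: eigenvalues of Svv^{-1} Svu Suu^{-1} Suv
  lambda = sort(real(eig(Svv \ (Suv' * (Suu \ Suv)))), "descend");
  K = 0;                                   % arbitrary constant; irrelevant for rank comparisons
  lstar = K - (T + n + 1) / 2 * log(det(Suu)) ...
            - (T + n + 1) / 2 * sum(log(1 - lambda(1:r)));
end

In the notation of (A6), Suu plays the role of Σ̂_UU and lambda(1:r) collects the r largest eigenvalues λ̂_i.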

References

  1. Engle, R.F.; Granger, C.W.J. Co-Integration and Error Correction: Representation, Estimation, and Testing. Econometrica 1987, 55, 251–276.
  2. Johansen, S.; Juselius, K. Maximum likelihood estimation and inference on cointegration—With application to the demand for money. Oxf. Bull. Econ. Stat. 1990, 52, 169–210.
  3. Diniz, M.A.; Pereira, C.A.B.; Stern, J.M. Unit Roots: Bayesian Significance Test. Commun. Stats. Theory Methods 2011, 40, 4200–4213.
  4. Diniz, M.A.; Pereira, C.A.B.; Stern, J.M. Cointegration: Bayesian Significance Test. Commun. Stats. Theory Methods 2012, 41, 3562–3574.
  5. Pereira, C.A.B.; Stern, J.M. Evidence and credibility: Full Bayesian Significance Test for precise hypotheses. Entropy 1999, 1, 69–80.
  6. Pereira, C.A.B.; Stern, J.M.; Wechsler, S. Can a Significance Test Be Genuinely Bayesian. Bayesian Anal. 2008, 1, 79–100.
  7. Stern, J.M.; Pereira, C.A.B. The e-value: A Fully Bayesian Significance Measure for Precise Statistical Hypotheses and its Research Program. São Paulo J. Math. Sci. 2020. Available online: https://link.springer.com/article/10.1007%2Fs40863-020-00171-7 (accessed on 20 August 2020).
  8. Good, I.J. Good Thinking: The Foundations of Probability and Its Applications; University of Minnesota Press: Minneapolis, MN, USA, 1983.
  9. Stern, J.M. Cognitive Constructivism and the Epistemic Significance of Sharp Statistical Hypotheses in Natural Sciences. arXiv 2010, arXiv:1006.5471. Available online: https://arxiv.org/abs/1006.5471 (accessed on 20 August 2020).
  10. Madruga, M.R.; Esteves, L.G.; Wechsler, S. On the Bayesianity of Pereira-Stern tests. Test 2001, 10, 291–299.
  11. Schervish, M. Theory of Statistics; Springer: New York, NY, USA, 1995.
  12. Diniz, M.A.; Pereira, C.A.B.; Polpo, A.; Stern, J.M.; Wechsler, S. Relationship between Bayesian and frequentist significance indices. Int. J. Uncertain. Quantif. 2012, 2, 161–172.
  13. Borges, W.; Stern, J.M. The rules of logic composition for the Bayesian epistemic E-values. Log. J. IGPL 2007, 15, 401–420.
  14. Wald, A. Statistical Decision Functions; John Wiley and Sons: New York, NY, USA, 1950.
  15. Tierney, L.; Kadane, J.B. Accurate approximation for posterior moments and marginal densities. J. Am. Stat. Assoc. 1986, 81, 82–86.
  16. Dhrymes, P.J. Mathematics for Econometrics; Springer: New York, NY, USA, 1978.
  17. Dickey, D.A.; Pantula, S.G. Determining the Ordering of Differencing in Autoregressive Processes. J. Bus. Econ. Stat. 1987, 5, 455–461.
  18. Sims, C.A. Bayesian skepticism on unit root econometrics. J. Econ. Dyn. Control 1988, 12, 463–474.
  19. Sims, C.A.; Uhlig, H. Understanding unit rooters: A helicopter tour. Econometrica 1991, 59, 1591–1600.
  20. Phillips, P.C. To criticize the critics: An objective Bayesian analysis of stochastic trends. J. Appl. Econ. 1991, 6, 333–364.
  21. Phillips, P.C. Bayesian routes and unit roots: De rebus prioribus semper est disputandum. J. Appl. Econ. 1991, 6, 435–474.
  22. Bauwens, L.; Lubrano, M.; Richard, J.-F. Bayesian Inference in Dynamic Econometric Models; Oxford University Press: Oxford, UK, 1999.
  23. Schotman, P.C.; van Dijk, H.K. On Bayesian routes to unit roots. J. Appl. Econ. 1991, 49, 387–401.
  24. Dickey, D.A.; Fuller, W.A. Distribution of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc. 1979, 74, 427–431.
  25. Lubrano, M. Testing for unit roots in a Bayesian framework. J. Econ. 1995, 69, 81–109.
  26. DeJong, D.; Whiteman, C.H. Reconsidering Trends and random walks in macroeconomic time series. J. Econ. 1991, 28, 221–254.
  27. Nelson, C.; Plosser, C. Trends and random walks in macroeconomic time series: Some evidence and implications. J. Monet. Econ. 1982, 10, 139–162.
  28. Stern, J.M. Symmetry, Invariance and Ontology in Physics and Statistics. Symmetry 2011, 3, 611–635.
  29. MacKinnon, J.G. Approximate asymptotic distribution functions for unit-root and cointegration tests. J. Bus. Econ. Stat. 1994, 12, 167–176.
  30. Johansen, S. Likelihood-based Inference in Cointegrated Vector Autoregressive Models; Oxford University Press: Oxford, UK, 1996.
  31. DeJong, D. Co-integration and trend-stationary in macroeconomic time series. J. Econ. 1992, 52, 347–370.
  32. Geweke, J. Bayesian reduced rank regression in econometrics. J. Econ. 1996, 75, 121–146.
  33. Bauwens, L.; Lubrano, M. Advances in Econometrics; JAI Press: Greenwich, CT, USA, 1996.
  34. Koop, G.; Strachan, R.; van Dijk, H.K.; Villani, M. The Palgrave Handbook of Theoretical Econometrics; Palgrave McMillan: London, UK, 2006.
  35. Kleibergen, F.; Paap, R. Priors, posterior odds and Lagrange multiplier statistics in Bayesian analyses of cointegration. Econ. Inst. Res. Pap. 1996. Available online: https://repub.eur.nl/pub/1398/ (accessed on 20 August 2020).
  36. Chao, J.; Phillips, P.C. Model selection in partially nonstationary vector autoregressive processes with reduced rank structure. J. Econ. 1999, 91, 227–271.
  37. Phillips, P.C. Econometric model determination. Econometrica 1996, 59, 283–306.
  38. Kleibergen, F.; Paap, R. Priors, posterior odds and bayes factors for a Bayesian analysis of cointegration. J. Econ. 2002, 111, 223–249.
  39. Verdinelli, I.; Wasserman, L. Computing Bayes factors using a generalization of the Savage-Dickey density ratio. J. Am. Stat. Assoc. 1995, 90, 614–618.
  40. Villani, M. Bayesian reference analysis of cointegration. Econ. Theory 2005, 21, 326–357.
  41. Chib, S.; Greenberg, E. Understanding the Metropolis-Hastings algorithm. Am. Stat. 1995, 49, 327–335.
  42. Greene, W.H. Econometric Analysis; Prentice Hall: Bergen County, NJ, USA, 2008.
  43. Davidson, R.; MacKinnon, J.G. Econometric Theory and Methods; Oxford University Press: Oxford, UK, 2004.
  44. Shoeb, A.H. Application of Machine Learning to Epileptic Seizure Onset Detection and Treatment; MIT Press: Cambridge, MA, USA, 2009.
  45. Østergaard, J.; Rahbeck, A.; Ditlevsen, S. Oscillating systems with cointegrated phase processes. J. Math. Biol. 2017, 75, 845–883.
  46. Freeman, W.J. Hilbert transform for brain waves. Scholarpedia 2007, 2, 1338.
  47. Turasie, A.A.; Coelho, C.A.S. Cointegration modeling for empirical South American seasonal temperature forecasts. Int. J. Climatol. 2016, 36, 4523–4533.
  48. Lütkepohl, H. New Introduction to Multiple Time Series Analysis; Springer: Berlin, Germany, 2005.
  49. Lucas, R. Inflation and welfare. Econometrica 2000, 68, 247–274.
  50. Johansen, S. Statistical analysis of cointegration vectors. J. Econ. Dyn. Control 1988, 12, 231–254.
  51. Hamilton, J.D. Time Series Analysis; Princeton University Press: Princeton, NJ, USA, 1994.
Figure 1. Estimated phase processes prior to a seizure.
Figure 2. Estimated phase processes during a seizure.
Figure 3. Seasonal (MAM and NDJ) temperatures for São Paulo from 1949 to 2020.
Figure 4. Seasonal (MAM and NDJ) temperatures for Brasília from 1949 to 2020.
Table 1. Pseudocode to implement the FBST.
General algorithm: compute ev supporting hypothesis H: θ ∈ Θ_H
1. Specify the statistical model (likelihood) and the prior distribution on Θ.
2. Specify the reference density, r(θ), and derive the relative surprise function, s(θ).
3. Find s*, the maximum value of s(θ) under the constraint θ ∈ Θ_H.
4. Integrate the posterior distribution over the tangent set—Equation (2)—to find ev̄.
5. Find ev = 1 − ev̄.
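Steps 4 and 5 of Table 1 are typically carried out by Monte Carlo. As an illustration only, the GNU Octave sketch below estimates the e-value from a matrix theta_draws whose rows are posterior samples, a function handle surprise implementing s(θ), and the supremum s_star found in step 3; all names here are hypothetical and not part of the authors' released code.

function ev = fbst_evalue(theta_draws, surprise, s_star)
  % Monte Carlo estimate of the e-value supporting H (steps 4-5 of Table 1)
  N = rows(theta_draws);
  s = zeros(N, 1);
  for i = 1:N
    s(i) = surprise(theta_draws(i, :));   % relative surprise at each posterior draw
  end
  ev_bar = mean(s > s_star);              % estimated posterior mass of the tangent set
  ev = 1 - ev_bar;                        % e-value supporting H
end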
Table 2. Pseudocode to implement the FBST for unit root tests.
General algorithm: compute ev supporting hypothesis H: Γ_0 = 0 in model (5)
1. Statistical model: Gaussian; prior: h(θ) ∝ 1/σ.
2. Reference density: r(θ) ∝ 1; relative surprise function: g(θ | y).
3. Find s*: Equation (9) evaluated at ψ̂_r and σ̂_r.
4. Gibbs sampler (from Equations (10) and (11)) to obtain N random samples of parameter vectors from (8). Evaluate the posterior at the sampled vectors and estimate ev̄ as the proportion of the N samples at which the evaluated values are larger than s*.
5. Find ev = 1 − ev̄.
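As a rough template for step 4, the GNU Octave sketch below implements a Gibbs sampler for a Gaussian linear model y = Xψ + ε with prior h(θ) ∝ 1/σ, for which the full conditionals are ψ | σ^2, y ~ N(ψ_OLS, σ^2 (X'X)^{-1}) and σ^2 | ψ, y ~ Inverse-Gamma(T/2, SSR/2). The exact conditionals used in the paper are those of Equations (10) and (11); the code and the names below are only illustrative.

pkg load statistics
function draws = gibbs_linear(y, X, N)
  % Gibbs sampler for y = X*psi + e, e ~ N(0, sigma^2 I), prior proportional to 1/sigma
  [T, k] = size(X);
  XtX = X' * X;
  psi_ols = XtX \ (X' * y);
  R = chol(inv(XtX));                      % upper triangular, R' * R = (X'X)^{-1}
  sigma2 = mean((y - X * psi_ols) .^ 2);   % starting value
  draws = zeros(N, k + 1);
  for i = 1:N
    psi = psi_ols + sqrt(sigma2) * (R' * randn(k, 1));  % draw psi | sigma^2, y
    ssr = sum((y - X * psi) .^ 2);
    sigma2 = 1 / gamrnd(T / 2, 2 / ssr);                % draw sigma^2 | psi, y (Inverse-Gamma)
    draws(i, :) = [psi', sigma2];
  end
end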
Table 3. Unit root tests for the extended Nelson and Plosser data set.
Series | Sample Size | p | Trend | ADF | p-Value | P(Γ_0 ≥ 0 | y) | e-Value
Real GNP | 80 | 2 | yes | −3.52 | 0.044 | 0.0005 | 0.040
Nominal GNP | 80 | 2 | yes | −2.06 | 0.559 | 0.0238 | 0.523
Real GNP per capita | 80 | 2 | yes | −3.59 | 0.037 | 0.0004 | 0.034
Industrial prod. | 129 | 2 | yes | −3.62 | 0.032 | 0.0003 | 0.028
Employment | 99 | 2 | yes | −3.47 | 0.048 | 0.0004 | 0.043
Unemployment rate | 99 | 4 | no | −4.04 | 0.019 | 0.0001 | 0.020
GNP deflator | 100 | 2 | yes | −1.62 | 0.778 | 0.0584 | 0.762
Consumer prices | 129 | 4 | yes | −1.22 | 0.902 | 0.1154 | 0.983
Nominal wages | 89 | 2 | yes | −2.40 | 0.377 | 0.0106 | 0.341
Real wages | 89 | 2 | yes | −1.71 | 0.739 | 0.0475 | 0.715
Money stock | 100 | 2 | yes | −2.91 | 0.164 | 0.0029 | 0.147
Velocity | 119 | 2 | yes | −1.62 | 0.779 | 0.0620 | 0.777
Bond yield | 89 | 4 | no | −1.35 | 0.602 | 0.0962 | 0.936
Stock prices | 118 | 2 | yes | −2.44 | 0.357 | 0.0103 | 0.349
Table 4. Pseudocode to implement the FBST for cointegration tests.
General algorithm: compute ev supporting hypothesis H: rank(Π) = r (0 ≤ r ≤ n) in model (14)
1. Statistical model: Gaussian; prior: h(Θ) ∝ |Ω|^{−(n+1)/2}.
2. Reference density: r(Θ) ∝ 1; relative surprise function: g(Θ | y).
3. Find s*: Johansen's algorithm; obtain ℓ* from Equation (21), with s* = exp(ℓ*).
4. Gibbs sampler (from Equations (22) and (23)) to obtain N random samples of parameter vectors from (17). Evaluate the posterior at the sampled vectors and estimate ev̄ as the proportion of the N samples for which the evaluated values are larger than s*.
5. Find ev = 1 − ev̄.
Table 5. FBST and max. eig. test: prior to seizure.
H_0 | FBST | Max. | p-Value
r = 0 | ≃0 | 60.966 | ≃0
r = 1 | 0.0691 | 30.727 | 0.0010
r = 2 | 0.9990 | 11.458 | 0.1337
r = 3 | ≃1 | 0.0812 | 0.7757
Table 6. FBST and max. eig. test: during seizure.
H_0 | FBST | Max. | p-Value
r = 0 | ≃0 | 1120.5 | ≃0
r = 1 | 0.1144 | 31.563 | 0.0007
r = 2 | 0.9999 | 6.5015 | 0.5574
r = 3 | ≃1 | 1.4383 | 0.2304
Table 7. FBST and maximum eigenvalue test applied to temperature data (MAM and NDJ series) of the mentioned Brazilian cities.
Cities | FBST (H_0: r = 0) | Max. | p-Value | FBST (H_0: r = 1) | Max. | p-Value
São Paulo | 0.0012 | 33.302 | ≃0 | ≃1 | 0.0893 | 0.8205
Rio de Janeiro | 0.0273 | 23.294 | 0.0004 | ≃1 | 2.43e-5 | 0.9986
Belo Horizonte | 0.0173 | 24.621 | 0.0001 | ≃1 | 0.0963 | 0.8126
Brasília | 0.1129 | 18.008 | 0.0045 | 0.9999 | 1.3321 | 0.2892
Salvador | 0.0172 | 24.431 | 0.0001 | ≃1 | 0.2450 | 0.6838
Table 8. FBST and maximum eigenvalue test applied to Finnish data of Johansen and Juselius (1990).
H_0 | FBST | Max. | p-Value
r = 0 | 0.132 | 38.489 | 0.0007
r = 1 | 0.994 | 26.642 | 0.0060
r = 2 | ≃1 | 7.8924 | 0.3983
Table 9. FBST and maximum eigenvalue test applied to US data of Lucas (2000).
H_0 | FBST | Max. | p-Value
r = 0 | 0.042 | 25.334 | 0.0101
r = 1 | 0.996 | 4.2507 | 0.8271
