
Polynomial Regressions and Nonsense Inference

by Daniel Ventosa-Santaulària 1,* and Carlos Vladimir Rodríguez-Caballero 2
1 Centro de Investigación y Docencia Económicas (CIDE), División de Economía, Carretera México-Toluca 3655, Col. Lomas de Santa Fe, Delegación Álvaro Obregón, México 01210, Mexico
2 Center for Research in Econometric Analysis of Time Series (CREATES) and Department of Economics and Business, Aarhus University, Fuglesangs Allé 4, Building 2622 (203), Aarhus V 8210, Denmark
* Author to whom correspondence should be addressed.
Econometrics 2013, 1(3), 236-248; https://doi.org/10.3390/econometrics1030236
Submission received: 6 August 2013 / Revised: 28 October 2013 / Accepted: 7 November 2013 / Published: 18 November 2013
(This article belongs to the Special Issue Econometric Model Selection)

Abstract:
Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340.) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
MSC Classification:
62J05; 62M10; 62F05; 62F12; 91B84

1. Introduction

There is some research on the effects of nonstationarity of the variables on nonlinear relationships (spurious inference in linear regressions was uncovered by [1] and later explained by [2]). In [3], it is shown (both in finite samples and asymptotically) that six nonlinearity tests (the Ramsey Regression Equation Specification Error Test (RESET), the McLeod and Li test, the Keenan test, the Neural Network test, White’s information matrix test, and the one proposed by [4]), when applied to independent random walks, tend to identify spurious (non-existing) nonlinear relationships (it is noteworthy that [5] studied the spurious regression phenomenon under stochastic nonstationarity when the logarithms of independent integrated of order one, I(1), variables are used; logarithmic transformations are commonly employed in applied studies to deal with nonlinearity). The author of [6] extends these results by studying the behavior of two additional tests: the Brock-Dechert-Scheinkman (BDS) test and another one proposed by [7]; he finds that the former also yields results that do not make sense, whilst the latter proves to have good power properties even in small samples. The author of [8] studies the properties of the nonparametric Phillips unit root test applied to polynomials of integrated processes and concludes, broadly speaking, that the test does not possess an asymptotic nuisance-parameter-free distribution, except under very specific conditions.
To the best of our knowledge, the “nonlinear relationship-spurious inference” literature (briefly sketched earlier) focuses on statistical tests rather than polynomial regressions. The latter linearly relate the dependent variable, y, to a $k$th-order polynomial in an independent variable, x. Such regressions therefore fit, through ordinary least squares (OLS), a nonlinear relationship between the independent variable and the conditional mean of y.
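As a point of reference for readers outside econometrics, the following minimal sketch (ours, not part of the original article; the function name fit_polynomial_ols and the toy data are purely illustrative) shows what such a specification amounts to in practice: the regression is linear in its parameters, so it is fit by ordinary OLS applied to the powers of x.

```python
import numpy as np

def fit_polynomial_ols(y, x, k):
    """OLS fit of y on a constant and the powers x, x^2, ..., x^k."""
    X = np.column_stack([x**i for i in range(k + 1)])  # column i holds x^i (i = 0 gives the constant)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares coefficient estimates
    return beta

# Toy illustration with stationary data and a genuine cubic signal.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
y = 1.0 - 0.5 * x + 0.3 * x**3 + rng.normal(scale=0.2, size=x.size)
print(fit_polynomial_ols(y, x, k=3))  # estimates close to (1.0, -0.5, 0.0, 0.3)
```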
These specifications can be traced back to the nineteenth century, when they were used to impute series (see [9]). Despite their age, polynomial regressions remain widely used in a large number of scientific fields, including epidemiology/disease progression [10], geophysics [11], physics [12], political analysis [13], psychology [14] and, of course, statistics. Spline regression models (cubic splines, for example) can be used to smooth/impute series.
In empirical economics, polynomial specifications can be found in many subfields, such as financial economics [15,16], labor economics [17,18], agricultural economics [19], macroeconomics (exchange rates, [20]) and environmental economics [21,22]. An evocative example can be found in the empirical research dealing with the Kuznets curve and the environmental Kuznets curve; the inverse U-shaped relationship between the variables is typically specified as the dependent variable regressed on the independent variable and its square (see [23,24]; it is noteworthy that Kuznets-type specifications usually employ even-order polynomials).
Even though polynomial regressions remain an important empirical tool, we could not find in the literature any attempt to study their properties when the variables behave as independent nonstationary processes. This might be because the effect of nonstationarity is rather intuitive, and econometricians, at least those familiar with the spurious regression problem, could anticipate that the t-ratios diverge and that $R^2$ does not collapse. However, many researchers in diverse fields seem to be unaware of this possibility.
In this paper, we confirm that inference drawn from a polynomial regression, when the variables are generated as independent integrated processes, is misleading (when the variables cointegrate, inference drawn from such a specification is no longer misleading). We provide evidence that generalizes Phillips’ results in two new directions: (i) we allow the exponents of the variables, both explanatory and dependent, in a bivariate regression to take any natural number; (ii) we allow for an arbitrary (natural number) order of the polynomial in x in a k-variate regression. The main objective of this work is to warn practitioners about the considerable risk of spurious inference when the powers of a nonstationary variable are used as regressors.
This paper is organized in a very simple manner. The next section presents the data-generating processes (DGPs) and the main results, divided into two theorems. A small Monte Carlo experiment shows that the asymptotics are a sufficiently accurate representation of the finite-sample behavior of the regressions.

2. Asymptotics of Polynomial Regressions

The variables, both dependent and independent, are generated as independent driftless unit roots:
$$z_t = z_{t-1} + u_{z,t} \qquad (1)$$
for $z = x, y$. The innovations, $u_{y,t}$ and $u_{x,t}$, are independent of each other and obey the conditions stated by Phillips ([2], p. 313, Assumption 1). We use these variables to estimate the following specification:
$$y_t^m = \alpha + \beta x_t^k + u_t \qquad (2)$$
where $m, k \in \mathbb{N}$. A word on notation: the symbol $\overset{D}{\longrightarrow}$ denotes weak convergence and, for simplicity, $W_z \equiv W_z(r)$, for $z = x, y$, denotes a standard Wiener process. The stochastic integral $\int_0^1$ is written as $\int$.
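To make the setting concrete before stating the results, here is a brief simulation sketch (our illustration; it is not the authors' supplementary code, and the chosen values T = 500, m = 2, k = 3 are arbitrary). It draws one realization of DGP Equation (1) and estimates specification Equation (2) by OLS; even though x and y are independent, the t-ratio on β̂ is usually far beyond conventional critical values.

```python
import numpy as np

rng = np.random.default_rng(42)
T, m, k = 500, 2, 3                          # sample size and the powers in Equation (2)

# DGP Equation (1): two independent driftless random walks.
x = np.cumsum(rng.normal(size=T))
y = np.cumsum(rng.normal(size=T))

# Specification Equation (2): regress y_t^m on a constant and x_t^k.
Y = y**m
X = np.column_stack([np.ones(T), x**k])
XtX_inv = np.linalg.inv(X.T @ X)
coef = XtX_inv @ X.T @ Y
resid = Y - X @ coef
s2 = resid @ resid / (T - 2)                 # classical OLS residual variance
t_beta = coef[1] / np.sqrt(s2 * XtX_inv[1, 1])
print(f"t-ratio on beta_hat: {t_beta:.2f}")  # usually |t| far above 1.96 despite independence
```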
Theorem 1.
Let $\{y_t\}_{t=1}^{T}$ and $\{x_t\}_{t=1}^{T}$ be independently generated by Equation (1). Estimate specification Equation (2) by OLS. Then, as $T \to \infty$:
  • $T^{-\frac{m}{2}}\hat{\alpha} \overset{D}{\longrightarrow} \sigma_y^m \dfrac{\int W_y^m \int W_x^{2k} - \int W_x^k W_y^m \int W_x^k}{\int W_x^{2k} - \left(\int W_x^k\right)^2}$
  • $T^{-\frac{1}{2}(m-k)}\hat{\beta} \overset{D}{\longrightarrow} \dfrac{\sigma_y^m}{\sigma_x^k}\,\dfrac{\int W_x^k W_y^m - \int W_x^k \int W_y^m}{\int W_x^{2k} - \left(\int W_x^k\right)^2} \equiv \dfrac{\sigma_y^m}{\sigma_x^k}\,\tilde{\beta}$
  • $T^{-\frac{1}{2}} t_{\hat{\beta}} \overset{D}{\longrightarrow} \dfrac{\int W_x^k W_y^m - \int W_x^k \int W_y^m}{\left\{\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]\left[\int W_y^{2m} - \left(\int W_y^m\right)^2\right] - \left[\int W_x^k W_y^m - \int W_x^k \int W_y^m\right]^2\right\}^{\frac{1}{2}}}$
  • $R^2 \overset{D}{\longrightarrow} \tilde{\beta}^2\,\dfrac{\int W_x^{2k} - \left(\int W_x^k\right)^2}{\int W_y^{2m} - \left(\int W_y^m\right)^2}$
Proof: See Appendix A. ☐
Note that all of these results are an extension of [2]; indeed, for $k = m = 1$, our results are exactly those of [2]. This implies that, no matter which powers the practitioner applies to the variables, the spurious regression phenomenon remains identical. That said, a more interesting specification should allow for a more complete polynomial in the independent variable, as in:
$$y_t = \beta_0 + \beta_1 x_t + \beta_2 x_t^2 + \cdots + \beta_k x_t^k + u_t \qquad (3)$$
where $k \in \mathbb{N}$. In this case, OLS estimates still generate a spurious regression:
Theorem 2.
Let $\{y_t\}_{t=1}^{T}$ and $\{x_t\}_{t=1}^{T}$ be independently generated by Equation (1). Estimate specification Equation (3) by OLS. Then, as $T \to \infty$:
  • $\begin{pmatrix} T^{-\frac{1}{2}}\hat{\beta}_0 \\ \hat{\beta}_1 \\ T^{\frac{1}{2}}\hat{\beta}_2 \\ \vdots \\ T^{\frac{1}{2}(k-1)}\hat{\beta}_k \end{pmatrix} \overset{D}{\longrightarrow} \sigma_y \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & \sigma_x & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_x^k \end{pmatrix}^{-1} \begin{pmatrix} 1 & \int W_x & \int W_x^2 & \cdots & \int W_x^k \\ \int W_x & \int W_x^2 & \int W_x^3 & \cdots & \int W_x^{k+1} \\ \int W_x^2 & \int W_x^3 & \int W_x^4 & \cdots & \int W_x^{k+2} \\ \vdots & & & \ddots & \vdots \\ \int W_x^k & \int W_x^{k+1} & \int W_x^{k+2} & \cdots & \int W_x^{2k} \end{pmatrix}^{-1} \begin{pmatrix} \int W_y \\ \int W_x W_y \\ \int W_x^2 W_y \\ \vdots \\ \int W_x^k W_y \end{pmatrix}$
  • $S^2 = O_p(T)$, where $S^2 = T^{-1}\sum_{t=1}^{T}\left(y_t - \sum_{i=0}^{k}\hat{\beta}_i x_t^i\right)^2$
  • $t_{\hat{\beta}_i} = O_p\!\left(T^{\frac{1}{2}}\right)$ for $i = 0, 1, 2, \ldots, k$
Proof: See Appendix B. ☐
Table 1. Rejection rates of t-ratios.

T     m    Specification (2)           Specification (3), with k = 4
           k = 1   k = 2   k = 3       β1      β2      β3      β4
100   1    0.77    0.71    0.71
      2    0.71    0.66    0.65        0.46    0.35    0.33    0.31
      3    0.72    0.66    0.66
250   1    0.85    0.82    0.82
      2    0.81    0.78    0.78        0.64    0.56    0.52    0.50
      3    0.82    0.78    0.78
500   1    0.89    0.87    0.87
      2    0.86    0.84    0.84        0.73    0.67    0.64    0.63
      3    0.88    0.84    0.84

Rejection rates of the t-ratio associated with: (i) $\hat{\beta}$ for specification Equation (2); (ii) all β’s for specification Equation (3). Data-generating process (DGP) parameters: $u_{z,t} \sim i.i.d.\ N(0,1)$, for $z = x, y$. The code for this Monte Carlo experiment is available as supplementary material.
Note the linear pattern in the order of convergence of the parameters: whilst the constant term, $\hat{\beta}_0$, diverges at rate $T^{\frac{1}{2}}$, $\hat{\beta}_1$ neither diverges nor collapses, $\hat{\beta}_2$ collapses at rate $T^{-\frac{1}{2}}$, and so on. Nonetheless, all the t-ratios of the estimated parameters diverge at the usual rate $T^{\frac{1}{2}}$.
In both theorems, the t-ratios associated with the estimates diverge. This implies that, for a sufficiently large sample, the null hypothesis that the parameters are equal to zero will eventually be rejected. Finite-sample evidence suggests that this occurs even in rather small samples of 100–500 observations (Table 1).
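The following sketch (ours; the authors' own Monte Carlo code is in the supplementary material, so this is only an approximate re-implementation under assumed settings such as 2,000 replications and a 5% two-sided normal critical value) reproduces the flavor of the Specification (3) block of Table 1: for independent random walks, the rejection frequencies of the individual t-ratios are far above the nominal 5% level and grow with T.

```python
import numpy as np

def rejection_rates(T, k=4, reps=2000, seed=0):
    """Share of replications in which |t| for beta_1, ..., beta_k exceeds 1.96 when
    y and x are independent driftless random walks (specification Equation (3))."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(k)
    for _ in range(reps):
        x = np.cumsum(rng.normal(size=T))
        y = np.cumsum(rng.normal(size=T))
        x = x / x.std()                    # rescaling a regressor leaves t-ratios unchanged,
                                           # but keeps X'X well conditioned for large powers
        X = np.column_stack([x**i for i in range(k + 1)])   # 1, x, x^2, ..., x^k
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (T - k - 1)
        t = beta / np.sqrt(s2 * np.diag(XtX_inv))
        hits += np.abs(t[1:]) > 1.96
    return hits / reps

for T in (100, 250, 500):
    print(T, np.round(rejection_rates(T), 2))   # broadly in line with the k = 4 block of Table 1
```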

3. Concluding Remarks

In this paper, we extended the results of what is known as spurious inference by studying the asymptotic and finite-sample behavior of the t-ratios in an OLS-estimated regression in which the dependent variable and/or the explanatory variable are nonlinearly transformed by means of a polynomial. When the variables are independent and stochastically nonstationary, inference based on the OLS estimates is misleading. Our results concern pure integrated of order one, I(1), processes, but they provide a natural guide for future research: near-integrated, integrated of order two, and broken-linear-trend processes should be studied further. This result should be understood as a call to be cautious whenever practitioners estimate polynomial regressions.

Acknowledgments

The first draft of the article was written while Carlos Vladimir Rodríguez-Caballero was visiting the Center for Research and Teaching in Economics (CIDE). He gratefully acknowledges Alejandro López-Feldman for his support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

A. Proof of Theorem 1

Proof.
In order to get all of the results, we use the asymptotic results provided in [25]:
  • $T^{-\frac{1}{2}(k+2)}\sum \xi_{z,t-1}^{k} \overset{D}{\longrightarrow} \sigma_z^k \int_0^1 W_z(r)^k\,dr$
  • $T^{-\frac{1}{2}(k+m+2)}\sum \xi_{x,t-1}^{k}\,\xi_{y,t-1}^{m} \overset{D}{\longrightarrow} \sigma_x^k \sigma_y^m \int W_x^k W_y^m$
We now define the rates of convergence of the OLS estimates ($\sum$ is short for $\sum_{t=1}^{T}$).
$$\begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix} = \begin{pmatrix} T & \sum x_t^k \\ \sum x_t^k & \sum x_t^{2k} \end{pmatrix}^{-1} \begin{pmatrix} \sum y_t^m \\ \sum x_t^k y_t^m \end{pmatrix} = \begin{pmatrix} O(T) & O_p\!\left(T^{\frac{1}{2}(k+2)}\right) \\ O_p\!\left(T^{\frac{1}{2}(k+2)}\right) & O_p\!\left(T^{k+1}\right) \end{pmatrix}^{-1} \begin{pmatrix} O_p\!\left(T^{\frac{1}{2}(m+2)}\right) \\ O_p\!\left(T^{\frac{1}{2}(m+k+2)}\right) \end{pmatrix}$$
By simple algebra, we get:
$$\begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix} = \begin{pmatrix} O_p\!\left(T^{\frac{m}{2}}\right) \\ O_p\!\left(T^{\frac{1}{2}(m-k)}\right) \end{pmatrix}$$
Therefore:
$$\begin{pmatrix} T^{-\frac{m}{2}}\hat{\alpha} \\ T^{-\frac{1}{2}(m-k)}\hat{\beta} \end{pmatrix} \overset{D}{\longrightarrow} \begin{pmatrix} 1 & \sigma_x^k \int W_x^k \\ \sigma_x^k \int W_x^k & \sigma_x^{2k} \int W_x^{2k} \end{pmatrix}^{-1} \begin{pmatrix} \sigma_y^m \int W_y^m \\ \sigma_y^m \sigma_x^k \int W_x^k W_y^m \end{pmatrix} = \frac{1}{\sigma_x^{2k}\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]} \begin{pmatrix} \sigma_x^{2k}\int W_x^{2k} & -\sigma_x^k \int W_x^k \\ -\sigma_x^k \int W_x^k & 1 \end{pmatrix} \begin{pmatrix} \sigma_y^m \int W_y^m \\ \sigma_y^m \sigma_x^k \int W_x^k W_y^m \end{pmatrix} = \frac{1}{\sigma_x^{2k}\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]} \begin{pmatrix} \sigma_y^m \sigma_x^{2k}\left[\int W_y^m \int W_x^{2k} - \int W_x^k \int W_x^k W_y^m\right] \\ \sigma_y^m \sigma_x^k \left[\int W_x^k W_y^m - \int W_x^k \int W_y^m\right] \end{pmatrix}$$
Finally:
$$\begin{pmatrix} T^{-\frac{m}{2}}\hat{\alpha} \\ T^{-\frac{1}{2}(m-k)}\hat{\beta} \end{pmatrix} \overset{D}{\longrightarrow} \begin{pmatrix} \sigma_y^m \dfrac{\int W_y^m \int W_x^{2k} - \int W_x^k W_y^m \int W_x^k}{\int W_x^{2k} - \left(\int W_x^k\right)^2} \\[2ex] \dfrac{\sigma_y^m}{\sigma_x^k}\,\dfrac{\int W_x^k W_y^m - \int W_x^k \int W_y^m}{\int W_x^{2k} - \left(\int W_x^k\right)^2} \end{pmatrix} \quad \text{as } T \to \infty \qquad \text{(A1)}$$
which proves results 1 and 2 in Theorem 1.
Let $\tilde{\beta} \equiv \dfrac{\int W_x^k W_y^m - \int W_x^k \int W_y^m}{\int W_x^{2k} - \left(\int W_x^k\right)^2}$; following [2], to obtain $t_{\hat{\beta}}$, we define $S^2 = T^{-1}\sum\left(y_t^m - \hat{\alpha} - \hat{\beta} x_t^k\right)^2$. Then:
$$T^{-m} S^2 = T^{-(m+1)}\sum\left[\left(y_t^m - \bar{y}\right) - \hat{\beta}\left(x_t^k - \bar{x}\right)\right]^2 = T^{-(m+1)}\sum\left(y_t^m - \bar{y}\right)^2 - \hat{\beta}^2\, T^{-(m+1)}\sum\left(x_t^k - \bar{x}\right)^2$$
$$T^{-m} S^2 \overset{D}{\longrightarrow} \sigma_y^{2m}\left\{\int W_y^{2m} - \left(\int W_y^m\right)^2 - \tilde{\beta}^2\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]\right\} \quad \text{as } T \to \infty \qquad \text{(A2)}$$
where $\bar{y}$ and $\bar{x}$ denote the sample means of $y_t^m$ and $x_t^k$, respectively.
Then, we use Equation (A1) and Equation (A2) to get:
$$T^{-\frac{1}{2}} t_{\hat{\beta}} = \frac{\hat{\beta}}{T^{\frac{1}{2}} S_{\hat{\beta}}} = \frac{\hat{\beta}}{T^{\frac{1}{2}} S \left[\sum\left(x_t^k - \bar{x}\right)^2\right]^{-\frac{1}{2}}} = \frac{T^{-\frac{1}{2}(m-k)}\hat{\beta}\left[T^{-(k+1)}\sum\left(x_t^k - \bar{x}\right)^2\right]^{\frac{1}{2}}}{T^{-\frac{m}{2}} S} \overset{D}{\longrightarrow} \frac{\dfrac{\sigma_y^m}{\sigma_x^k}\tilde{\beta}\,\sigma_x^k\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]^{\frac{1}{2}}}{\sigma_y^m\left\{\int W_y^{2m} - \left(\int W_y^m\right)^2 - \tilde{\beta}^2\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]\right\}^{\frac{1}{2}}} \quad \text{as } T \to \infty$$
Then, after simple algebra, we get:
$$T^{-\frac{1}{2}} t_{\hat{\beta}} \overset{D}{\longrightarrow} \frac{\int W_x^k W_y^m - \int W_y^m \int W_x^k}{\left\{\left[\int W_x^{2k} - \left(\int W_x^k\right)^2\right]\left[\int W_y^{2m} - \left(\int W_y^m\right)^2\right] - \left[\int W_x^k W_y^m - \int W_x^k \int W_y^m\right]^2\right\}^{\frac{1}{2}}} \quad \text{as } T \to \infty$$
proving result 3 of Theorem 1.
Finally, the asymptotic nonstandard distribution of $R^2$ is given by:
$$R^2 = \frac{\sum\left(\widehat{y_t^m} - \bar{y}\right)^2}{\sum\left(y_t^m - \bar{y}\right)^2} = \hat{\beta}^2\, T^{-(m-k)}\,\frac{T^{-(k+1)}\sum\left(x_t^k - \bar{x}\right)^2}{T^{-(m+1)}\sum\left(y_t^m - \bar{y}\right)^2} \overset{D}{\longrightarrow} \tilde{\beta}^2\,\frac{\int W_x^{2k} - \left(\int W_x^k\right)^2}{\int W_y^{2m} - \left(\int W_y^m\right)^2}, \quad \text{as } T \to \infty$$
This proves the last result of Theorem 1. ☐

B. Proof of Theorem 2

Proof.
Polynomial specification Equation (3) has the following OLS estimators:
$$\begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \hat{\beta}_2 \\ \vdots \\ \hat{\beta}_k \end{pmatrix} = \begin{pmatrix} T & \sum x_t & \sum x_t^2 & \cdots & \sum x_t^k \\ \sum x_t & \sum x_t^2 & \sum x_t^3 & \cdots & \sum x_t^{k+1} \\ \sum x_t^2 & \sum x_t^3 & \sum x_t^4 & \cdots & \sum x_t^{k+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum x_t^k & \sum x_t^{k+1} & \sum x_t^{k+2} & \cdots & \sum x_t^{2k} \end{pmatrix}^{-1} \begin{pmatrix} \sum y_t \\ \sum x_t y_t \\ \sum x_t^2 y_t \\ \vdots \\ \sum x_t^k y_t \end{pmatrix}$$
or $\hat{B} = \Sigma_{xx}^{-1}\Sigma_{xy}$, for short. To obtain the rates of convergence of the OLS estimates, note that $\Sigma_{xx}$ is a Hankel matrix. The orders of convergence of its elements are given by:
$$\begin{pmatrix} O(T) & O_p\!\left(T^{\frac{3}{2}}\right) & O_p\!\left(T^{2}\right) & \cdots & O_p\!\left(T^{\frac{1}{2}(k+2)}\right) \\ O_p\!\left(T^{\frac{3}{2}}\right) & O_p\!\left(T^{2}\right) & O_p\!\left(T^{\frac{5}{2}}\right) & \cdots & O_p\!\left(T^{\frac{1}{2}(k+3)}\right) \\ O_p\!\left(T^{2}\right) & O_p\!\left(T^{\frac{5}{2}}\right) & O_p\!\left(T^{3}\right) & \cdots & O_p\!\left(T^{\frac{1}{2}(k+4)}\right) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ O_p\!\left(T^{\frac{1}{2}(k+2)}\right) & O_p\!\left(T^{\frac{1}{2}(k+3)}\right) & O_p\!\left(T^{\frac{1}{2}(k+4)}\right) & \cdots & O_p\!\left(T^{k+1}\right) \end{pmatrix}$$
The Hankel matrix can be inverted using standard results from linear algebra (spectral decomposition). Furthermore, while it would be possible to analyze several interesting properties of Hankel matrices given by [26] or [27], inter alia, there are numerical algorithms, such as [28] or, for polynomial regressions, [29], that work with a Hankel matrix. That said, we are not interested in computing the exact inverse, but rather in the cases $k = 1, 2, 3, \ldots$ For specification Equation (3), it is straightforward to see that:
$$\Sigma_{xx}^{-1} = \begin{pmatrix} O(T^{-1}) & O_p\!\left(T^{-\frac{3}{2}}\right) & O_p\!\left(T^{-2}\right) & \cdots & O_p\!\left(T^{-\frac{1}{2}(k+2)}\right) \\ O_p\!\left(T^{-\frac{3}{2}}\right) & O_p\!\left(T^{-2}\right) & O_p\!\left(T^{-\frac{5}{2}}\right) & \cdots & O_p\!\left(T^{-\frac{1}{2}(k+3)}\right) \\ O_p\!\left(T^{-2}\right) & O_p\!\left(T^{-\frac{5}{2}}\right) & O_p\!\left(T^{-3}\right) & \cdots & O_p\!\left(T^{-\frac{1}{2}(k+4)}\right) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ O_p\!\left(T^{-\frac{1}{2}(k+2)}\right) & O_p\!\left(T^{-\frac{1}{2}(k+3)}\right) & O_p\!\left(T^{-\frac{1}{2}(k+4)}\right) & \cdots & O_p\!\left(T^{-(k+1)}\right) \end{pmatrix}$$
which is again a Hankel matrix. We follow [30] to obtain the orders of convergence and the asymptotic distributions of OLS estimates. We first define the following matrices:
$$\gamma_1 = \begin{pmatrix} T^{-\frac{1}{2}} & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & T^{\frac{1}{2}} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & T^{\frac{1}{2}(k-1)} \end{pmatrix} \qquad \text{(A3)}$$
and:
$$\gamma_2 = \begin{pmatrix} T^{\frac{3}{2}} & 0 & 0 & \cdots & 0 \\ 0 & T^{2} & 0 & \cdots & 0 \\ 0 & 0 & T^{\frac{5}{2}} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & T^{\frac{1}{2}(k+3)} \end{pmatrix} \qquad \text{(A4)}$$
Then, using matrices Equation (A3) and Equation (A4), we have $\gamma_1 \hat{B} = \gamma_1 \Sigma_{xx}^{-1}\gamma_2\,\gamma_2^{-1}\Sigma_{xy}$ and, hence:
$$\gamma_1 \hat{B} = \begin{pmatrix} T^{-\frac{1}{2}}\hat{\beta}_0 \\ \hat{\beta}_1 \\ T^{\frac{1}{2}}\hat{\beta}_2 \\ \vdots \\ T^{\frac{1}{2}(k-1)}\hat{\beta}_k \end{pmatrix} = \left(\gamma_2^{-1}\,\Sigma_{xx}\,\gamma_1^{-1}\right)^{-1}\gamma_2^{-1}\,\Sigma_{xy} \qquad \text{(A5)}$$
where $\Sigma_{xx}$ and $\Sigma_{xy}$ denote the moment matrices in the display above.
Taking limits of each factor in Equation (A5) and multiplying the matrices, we get:
$$\begin{pmatrix} T^{-\frac{1}{2}}\hat{\beta}_0 \\ \hat{\beta}_1 \\ T^{\frac{1}{2}}\hat{\beta}_2 \\ \vdots \\ T^{\frac{1}{2}(k-1)}\hat{\beta}_k \end{pmatrix} \overset{D}{\longrightarrow} \begin{pmatrix} 1 & \sigma_x \int W_x & \sigma_x^2 \int W_x^2 & \cdots & \sigma_x^k \int W_x^k \\ \sigma_x \int W_x & \sigma_x^2 \int W_x^2 & \sigma_x^3 \int W_x^3 & \cdots & \sigma_x^{k+1} \int W_x^{k+1} \\ \sigma_x^2 \int W_x^2 & \sigma_x^3 \int W_x^3 & \sigma_x^4 \int W_x^4 & \cdots & \sigma_x^{k+2} \int W_x^{k+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sigma_x^k \int W_x^k & \sigma_x^{k+1} \int W_x^{k+1} & \sigma_x^{k+2} \int W_x^{k+2} & \cdots & \sigma_x^{2k} \int W_x^{2k} \end{pmatrix}^{-1} \begin{pmatrix} \sigma_y \int W_y \\ \sigma_x \sigma_y \int W_x W_y \\ \sigma_x^2 \sigma_y \int W_x^2 W_y \\ \vdots \\ \sigma_x^k \sigma_y \int W_x^k W_y \end{pmatrix} \qquad \text{(A6)}$$
Finally, we factor the variances, $\sigma_x$ and $\sigma_y$, out of Equation (A6):
$$\begin{pmatrix} T^{-\frac{1}{2}}\hat{\beta}_0 \\ \hat{\beta}_1 \\ T^{\frac{1}{2}}\hat{\beta}_2 \\ \vdots \\ T^{\frac{1}{2}(k-1)}\hat{\beta}_k \end{pmatrix} \overset{D}{\longrightarrow} \sigma_y \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & \sigma_x & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_x^k \end{pmatrix}^{-1} \begin{pmatrix} 1 & \int W_x & \int W_x^2 & \cdots & \int W_x^k \\ \int W_x & \int W_x^2 & \int W_x^3 & \cdots & \int W_x^{k+1} \\ \int W_x^2 & \int W_x^3 & \int W_x^4 & \cdots & \int W_x^{k+2} \\ \vdots & & & \ddots & \vdots \\ \int W_x^k & \int W_x^{k+1} & \int W_x^{k+2} & \cdots & \int W_x^{2k} \end{pmatrix}^{-1} \begin{pmatrix} \int W_y \\ \int W_x W_y \\ \int W_x^2 W_y \\ \vdots \\ \int W_x^k W_y \end{pmatrix}$$
which proves result 1 of Theorem 2.
To obtain the asymptotics of the t-ratios, note the order of convergence of the estimated variance, $S^2$:
$$S^2 = T^{-1}\sum_{t=1}^{T}\left(y_t - \hat{\beta}_0 - \hat{\beta}_1 x_t - \hat{\beta}_2 x_t^2 - \cdots - \hat{\beta}_k x_t^k\right)^2 \qquad \text{(A7)}$$
Every term in the expansion of the sum is of order $T^2$. To see this, we expand expression Equation (A7) and analyze the convergence order of each element, given the previous result:
$$\begin{aligned}
S^2 ={}& T^{-1}\sum_{t=1}^{T}\left(y_t - \hat{\beta}_0 - \hat{\beta}_1 x_t - \hat{\beta}_2 x_t^2 - \cdots - \hat{\beta}_k x_t^k\right)^2 \\
={}& T^{-1}\Bigg[\underbrace{\sum y_t^2}_{O_p(T^2)} + \underbrace{T\hat{\beta}_0^2}_{O_p(T^2)} + \underbrace{\hat{\beta}_1^2 \sum x_t^2}_{O_p(T^2)} + \underbrace{\hat{\beta}_2^2 \sum x_t^4}_{O_p(T^2)} + \cdots + \underbrace{\hat{\beta}_k^2 \sum x_t^{2k}}_{O_p(T^2)} \\
& - 2\bigg(\underbrace{\hat{\beta}_0 \sum y_t}_{O_p(T^2)} + \underbrace{\hat{\beta}_1 \sum y_t x_t}_{O_p(T^2)} + \underbrace{\hat{\beta}_2 \sum y_t x_t^2}_{O_p(T^2)} + \cdots + \underbrace{\hat{\beta}_k \sum y_t x_t^k}_{O_p(T^2)}\bigg) \\
& + 2\underbrace{\hat{\beta}_0}_{O_p(T^{1/2})}\bigg(\underbrace{\hat{\beta}_1 \sum x_t}_{O_p(T^{3/2})} + \underbrace{\hat{\beta}_2 \sum x_t^2}_{O_p(T^{3/2})} + \underbrace{\hat{\beta}_3 \sum x_t^3}_{O_p(T^{3/2})} + \cdots + \underbrace{\hat{\beta}_k \sum x_t^k}_{O_p(T^{3/2})}\bigg) \\
& + 2\underbrace{\hat{\beta}_1}_{O_p(1)}\bigg(\underbrace{\hat{\beta}_2 \sum x_t^3}_{O_p(T^2)} + \underbrace{\hat{\beta}_3 \sum x_t^4}_{O_p(T^2)} + \cdots + \underbrace{\hat{\beta}_k \sum x_t^{k+1}}_{O_p(T^2)}\bigg) + \cdots + 2\underbrace{\hat{\beta}_{k-1}\hat{\beta}_k \sum x_t^{2k-1}}_{O_p(T^2)}\Bigg]
\end{aligned}$$
Therefore, $S^2 = O_p(T)$, which proves result 2 of Theorem 2.
Finally, $t_{\hat{\beta}_k} = \dfrac{\hat{\beta}_k}{\left[S^2\left(\Sigma_{xx}^{-1}\right)_{(k,k)}\right]^{\frac{1}{2}}} = \dfrac{O_p\!\left(T^{-\frac{1}{2}(k-1)}\right)}{\left[O_p(T)\,O_p\!\left(T^{-(k+1)}\right)\right]^{\frac{1}{2}}} = \dfrac{O_p\!\left(T^{-\frac{1}{2}(k-1)}\right)}{O_p\!\left(T^{-\frac{k}{2}}\right)} = O_p\!\left(T^{\frac{1}{2}}\right)$. This proves Theorem 2. ☐
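As an informal numerical companion to these orders (our sketch, not part of the proof; k = 3 and the replication count are arbitrary choices), one can verify that $S^2$ grows roughly linearly in T while $t_{\hat{\beta}_k}$ grows roughly like $T^{1/2}$, so the medians of $S^2/T$ and $|t_{\hat{\beta}_k}|/\sqrt{T}$ across replications stay of the same order of magnitude as T increases.

```python
import numpy as np

def median_scaled_stats(T, k=3, reps=500, seed=1):
    """Medians of S^2 / T and |t_{beta_k}| / sqrt(T) over Monte Carlo replications of
    specification Equation (3) estimated on independent random-walk data."""
    rng = np.random.default_rng(seed)
    s2_scaled, t_scaled = [], []
    for _ in range(reps):
        x = np.cumsum(rng.normal(size=T))
        y = np.cumsum(rng.normal(size=T))
        xs = x / x.std()                       # rescaling x leaves residuals and t-ratios unchanged
        X = np.column_stack([xs**i for i in range(k + 1)])  # and keeps X'X well conditioned
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        S2 = resid @ resid / T                 # S^2 as defined in Equation (A7)
        t_k = beta[k] / np.sqrt((resid @ resid / (T - k - 1)) * XtX_inv[k, k])
        s2_scaled.append(S2 / T)
        t_scaled.append(abs(t_k) / np.sqrt(T))
    return float(np.median(s2_scaled)), float(np.median(t_scaled))

for T in (100, 250, 500, 1000):
    print(T, median_scaled_stats(T))   # both medians stay of the same order as T grows
```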

References

  1. C. Granger, and P. Newbold. “Spurious regressions in econometrics.” J. Econom. 2 (1974): 111–120.
  2. P. Phillips. “Understanding spurious regressions in econometrics.” J. Econom. 33 (1986): 311–340.
  3. Y.-S. Lee, T.-H. Kim, and P. Newbold. “Spurious nonlinear regressions in econometrics.” Econ. Lett. 87 (2005): 301–306.
  4. J.D. Hamilton. “A parametric approach to flexible nonlinear inference.” Econometrica 69 (2001): 537–573.
  5. R.M. De Jong. “Logarithmic spurious regressions.” Econ. Lett. 81 (2003): 13–21.
  6. E. O’Brien. “A note on spurious nonlinear regression.” Econ. Lett. 3 (2008): 366–368.
  7. D. Peña, and J. Rodriguez. “Detecting nonlinearity in time series by model selection criteria.” Int. J. Forecast. 21 (2005): 731–748.
  8. M. Wagner. “The Phillips unit root tests for polynomials of integrated processes.” Econ. Lett. 114 (2012): 299–303.
  9. J. Gergonne. “The application of the method of least squares to the interpolation of sequences.” Hist. Math. 1 (1815): 439–447. (Translated by Ralph St. John and S. M. Stigler from the 1815 French edition).
  10. C. Chatterjee, and R. Sarkar. “Multi-step polynomial regression method to model and forecast malaria incidence.” PLoS One 3 (2009): e4726.
  11. S. Verma. “Evaluation of polynomial regression models for the Student t and Fisher F critical values, the best interpolation equations from double and triple natural logarithm transformation of degrees of freedom up to 1000, and their applications to quality control in science and engineering.” Revista Mex. Ciencias Geol. 26 (2009): 79–92.
  12. P. Barker, F. Street-Perrott, M. Leng, P. Greenwood, D. Swain, R. Perrott, R. Telford, and K. Ficken. “A 14,000-year oxygen isotope record from diatom silica in two alpine lakes on Mt. Kenya.” Science 292 (2001): 2307–2310.
  13. D. Green, T. Leong, H. Kern, A. Gerber, and C. Larimer. “Testing the accuracy of regression discontinuity analysis using experimental benchmarks.” Polit. Anal. 17 (2009): 400–417.
  14. L. Shanock, B. Baran, W. Gentry, S. Pattison, and E. Heggestad. “Polynomial regression with response surface analysis: A powerful approach for examining moderation and overcoming limitations of difference scores.” J. Bus. Psychol. 25 (2010): 543–554.
  15. R. Ferrer, C. González, and G. Soto. “Linear and nonlinear interest rate exposure in Spain.” Manag. Financ. 36 (2010): 431–451.
  16. C. Ioannidis, D. Peel, and M. Peel. “The time series properties of financial ratios: Lev revisited.” J. Bus. Financ. Account. 30 (2003): 699–714.
  17. M. Leonardi, and G. Pica. “Who pays for it? The heterogeneous wage effects of employment protection legislation.” Econ. J., 2013.
  18. J. Straka. “Is poor worker morale costly to firms?” Ind. Labor Relat. Rev., 1993, 381–394.
  19. C. Ackello-Ogutu, Q. Paris, and W. Williams. “Testing a von Liebig crop response function against polynomial specifications.” Am. J. Agric. Econ. 67 (1985): 873–880.
  20. Z. Darvas. “Estimation bias and inference in overlapping autoregressions: Implications for the target-zone literature.” Oxf. Bull. Econ. Stat. 70 (2008): 1–22.
  21. M. Auffhammer, and R. Kellogg. “Clearing the air? The effects of gasoline content regulation on air quality.” Am. Econ. Rev. 101 (2011): 2687–2722.
  22. D. Kellenberg. “Trading wastes.” J. Environ. Econ. Manag. 61 (2012): 68–87.
  23. G. Grossman, and A. Krueger. “Environmental Impacts of a North American Free Trade Agreement.” In The Mexico-U.S. Free Trade Agreement. Edited by P.M. Garber. Cambridge, MA, USA: MIT Press, 1993, pp. 13–56.
  24. S. Labson, and P. Crompton. “Common trends in economic activity and metals demand: Cointegration and the intensity of use debate.” J. Environ. Econ. Manag. 25 (1993): 147–161.
  25. D. Giles. “Spurious regressions with time-series data: Further asymptotic results.” Commun. Stat. Theory Methods 36 (2007): 967–979.
  26. E. Hannan, and M. Deistler. The Statistical Theory of Linear Systems. Cambridge, MA, USA: Cambridge University Press, 2012.
  27. G. Golub, and C. van Loan. Matrix Computations. Baltimore, MD, USA: The Johns Hopkins University Press, 2013.
  28. W. Trench. “An algorithm for the inversion of finite Hankel matrices.” J. Soc. Ind. Appl. Math. 13 (1965): 1102–1107.
  29. K. Kumar, and M. Alsaleh. “Application of Hankel matrices in polynomial regression.” Appl. Math. Comput. 77 (1996): 205–211.
  30. J.D. Hamilton. Time Series Analysis. Princeton, NJ, USA: Princeton University Press, 1994.
