Article

Parametric and Nonparametric Frequentist Model Selection and Model Averaging

Department of Economics, University of California, Riverside, CA 92521-0427, USA
*
Author to whom correspondence should be addressed.
Econometrics 2013, 1(2), 157-179; https://doi.org/10.3390/econometrics1020157
Submission received: 27 June 2013 / Revised: 17 July 2013 / Accepted: 13 September 2013 / Published: 20 September 2013
(This article belongs to the Special Issue Econometric Model Selection)

Abstract

This paper presents recent developments in model selection and model averaging for parametric and nonparametric models. While there is an extensive literature on model selection in parametric settings, we also present recently developed results in the context of nonparametric models. In applications, estimation and inference are often conducted under the selected model without accounting for the uncertainty from the selection process. This often leads to inefficient results and misleading confidence intervals. An alternative to model selection is model averaging, in which the estimated model is a weighted sum of all the submodels; this reduces model uncertainty. In recent years there has been significant interest in model averaging, and some important developments have taken place in this area. We present results for both the parametric and nonparametric cases. Some possible topics for future research are also indicated.

1. Introduction

Over the last several years, many econometricians and statisticians have persistently devoted their efforts to finding various paths to the true model. The uncertainty in correctly specifying the regression model has generated a large literature in two major directions: first, which variables are to be included and, second, how they are related to the dependent variable in the model. Thus “what” refers to determining the variables to be included in constructing the model, and “how” refers to finding the correct functional form, e.g., parametric specifications (linear, quadratic, etc.) or, more generally, nonparametric smoothing methods that do not require specifying a parametric functional form but instead let the data search for a suitable function that describes the available data well; see [1,2], among others.
To determine “what”, model selection was introduced first, and it has a huge literature in statistics and econometrics. In fact, in recent years, model selection (variable selection) procedures have become more popular due to the emergence of econometric and statistical models with a large number of variables. For example, in labor economics, wage equations can have a large number of regressors [3], and in financial econometrics, portfolio allocation may be among hundreds or thousands of stocks [4]. Such models raise additional challenges of econometric modeling and inference along with the selection of variables. Different tools have been developed based on various estimation criteria. The majority of such procedures involve variable selection by minimizing penalized loss functions based on least squares or the log-likelihood, and their variants. The adjusted $R^2$ and the residual sum of squares are the usual variable selection criteria without any penalization. Among the penalized procedures we have the Akaike information criterion (AIC) [5], Mallows $C_p$ procedure [6], Bayesian information criterion (BIC) [7], the cross-validation method [8], generalized cross-validation (GCV) [9], and the focused information criterion (FIC) [10]. We note that the traditional AIC and BIC are based on least squares (LS), maximum likelihood (ML), or Bayesian principles, and their penalization is based on the $\ell_0$-norm of the parameters entering the model, with the result that the penalization is proportional to the number of nonzero parameters. Both AIC and BIC are variable selection procedures and do not provide estimators simultaneously. On the other hand, the bridge estimator in [11,12] uses the $\ell_q$-norm ($q > 0$) and, for $0 < q \le 1$, provides a way to combine variable selection and parameter estimation simultaneously. Within this class, the least absolute shrinkage and selection operator (LASSO; $q = 1$) has become the most popular. For $q = 2$ we get the ridge estimator [13]. For a detailed review of model selection in high-dimensional modeling, see [14] and the books [15,16]. Similarly, in the context of empirical likelihood estimation and generalized method of moments estimation, model selection criteria have been introduced by [17,18], among others.
Model selection is an important step for empirical policy evaluation and forecasting. However, it may produce unstable estimators because of bias in model selection; for example, a small data perturbation or an alternative selection procedure may give a different model. Reference [19] shows that AIC selection results in distorted inference, and [20] explores the negative impact on confidence regions. Reference [21] gives conditions under which post-model-selection estimators are adaptive, but see [22,23] for comments that their distributions cannot be estimated uniformly. For a selected model with unstable estimators, [24] provides a bagging (bootstrap averaging) procedure to reduce their variances for i.i.d. data, and [25] does so for dependent time series data. But this averaging does not always work, e.g., for large samples and/or over the entire parameter space.
Taking the above reasons into consideration, model averaging has been introduced as an alternative to model selection. Unlike model selection, where the econometrician deals with model uncertainty by selecting one model from a set of models, in model averaging we resolve the uncertainty by averaging over the set of models. There is a large recent literature on Bayesian model averaging (BMA) and, more recently, on frequentist model averaging (FMA). Among the BMA contributions, model uncertainty is handled by assigning a prior probability to each candidate model, see [26,27,28,29,30]; for interesting applications in econometrics, see, e.g., [31,32,33]. Also, see [10] for comments on the BMA approach. The main focus here is on the FMA method, which is determined entirely by the data and assumes no priors; it has received much attention in recent years, see [34,35,36,37,38,39,40,41]. Reference [10] provides asymptotic theory. For applications, see [16,42,43]. The concept behind FMA estimators is related to the idea of combining procedures based on the same data, which has been considered before in several research areas. For instance, [44] introduces forecast combination, and [45,46] suggest combining parametric and kernel estimators of the density and the regression, respectively. Other works include bootstrap-based averaging (“stacking”) by [24,47,48], information-theoretic methods to combine densities by [49,50], and the mixtures-of-experts models by [51,52]. Similar kinds of combining have been used in computational learning theory by [53,54] and in information theory by [55].
Related to “how”, or rather determining the unknown functional forms of econometric models, we use data-based nonparametric procedures (e.g., kernel, smoothing spline, series approximation). See, for example, [1,2,56,57] for kernel smoothing procedures, [58] for spline methods, and [59,60] for series methods. These procedures help in dealing with the problems of bias and inconsistency in estimation and testing due to misspecified functional forms. Because of this, recent developments in nonparametric model selection and model averaging have taken place.
The current paper is hence focused on a review of parametric and nonparametric approaches to model selection and model averaging, mainly from a frequentist point of view and for independently and identically distributed (i.i.d.) observations. Earlier, [14] provides a review of parametric model selection, [61] surveys FMA estimation, and [62] covers variable selection in semiparametric regression models. To distinguish itself, our paper concentrates on a review of frequentist model selection and model averaging under both parametric and nonparametric settings.
The paper is organized as follows. We first introduce a review of parametric model selection and parametric model averaging in Section 2. Then, in Section 3 we present nonparametric model selection and model averaging procedures. A conclusion follows in Section 4.

2. Parametric Model Selection and Model Averaging

2.1. Model Selection

Let us consider $y_i$ as a dependent variable and $x_i = (x_{i1},\ldots,x_{iq})'$ a $q\times 1$ vector of explanatory variables/covariates. Then the linear regression model can be written as
$$y_i = x_i'\beta + u_i = \sum_{j=1}^{q} x_{ij}\beta_j + u_i, \qquad i = 1,\ldots,n$$
or
$$y = X\beta + u$$
where $y$ is $n\times 1$, $X$ is $n\times q$, $\beta = (\beta_1,\ldots,\beta_q)'$, and $u$ is $n\times 1$.
Among the well-known procedures for model selection, often used routinely, are the goodness-of-fit measure $R^2$, the adjusted $R^2$ ($R_a^2$), and the residual sum of squares (RSS), given by
$$R^2 = 1 - \frac{\sum \hat u_i^2}{\sum (y_i - \bar y)^2}, \qquad R_a^2 = 1 - \frac{(n-1)\sum \hat u_i^2}{(n-q)\sum (y_i - \bar y)^2}, \qquad RSS = \sum \hat u_i^2$$
where $0 \le R^2 \le 1$. The model with the highest $R^2$ (or $R_a^2$) or the smallest RSS is chosen. However, $R^2$ increases, and RSS decreases, monotonically as $q$ increases. Further, between $R^2$ and $R_a^2$, $\mathrm{Bias}(R_a^2) \le \mathrm{Bias}(R^2)$ but $V(R_a^2) \ge V(R^2)$; thus $R_a^2$ may not always be statistically more efficient, that is, $\mathrm{MSE}(R_a^2) \le \mathrm{MSE}(R^2)$ need not hold, see [63] for further detail. Thus $R_a^2$ and RSS are not preferred measures for goodness of fit or model selection. Recently, [64] develops a model selection procedure based on the mean squared prediction error (MSPE). Consider $(x_{i1},\ldots,x_{iq}, z_i)$, $i = 1,\ldots,n$, as a new observed sample in which $z_i$ is the “new observed value” and $\hat y_i$ is its prediction, so that $MSPE = E\sum(z_i - \hat y_i)^2/n = \sigma_u^2 (n+q+1)/n$. When a model has $q = 0$ (no explanatory variable), $MSPE = \sigma_y^2 (n+1)/n$. Then, using the unbiased estimator of $MSPE_0$, $FPE_0 = s_y^2 (n+1)/n$, and of $MSPE$, $FPE = s_{\hat u}^2 (n+q+1)/n$, [64] introduces
$$R^2_{FPE} = 1 - \frac{FPE}{FPE_0} = \frac{(n-1)(n+q+1)R^2 - 2qn}{(n-q-1)(n+1)}$$
such that $R^2_{FPE} \le R_a^2 \le R^2$, where FPE stands for final prediction error. The statistical properties of the bias and MSE of $R^2_{FPE}$, compared to those of $R_a^2$ and $R^2$, are analyzed in [65]. Reference [64] demonstrates that one of the advantages of $R^2_{FPE}$ is that it can be used for choosing the model with the best prediction ability. Furthermore, $R^2_{FPE}$ not only overcomes the inflation in $R^2$, it also avoids the problem of selecting an overfitted model with some irrelevant explanatory variables that arises when using $R_a^2$. In addition, they indicate that $R^2_{FPE}$ and AIC, discussed below, are asymptotically equivalent and that, in model selection, $R^2_{FPE}$ is highly consistent with AIC and closest to BIC. Thus $R^2_{FPE}$ can be used simultaneously for goodness of fit as well as for model selection.
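As a concrete illustration of these goodness-of-fit measures, the sketch below computes $R^2$, $R_a^2$, and $R^2_{FPE}$ for a simulated linear regression; the data-generating process and variable names are illustrative assumptions, not taken from the paper or from [64].

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 100, 3                            # sample size and number of slope regressors (illustrative)
X = rng.normal(size=(n, q))
beta = np.array([1.0, 0.5, 0.0])
y = X @ beta + rng.normal(size=n)

Xc = np.column_stack([np.ones(n), X])    # add an intercept
bhat, *_ = np.linalg.lstsq(Xc, y, rcond=None)
uhat = y - Xc @ bhat

rss = np.sum(uhat**2)
tss = np.sum((y - y.mean())**2)
r2 = 1 - rss / tss
r2_adj = 1 - (n - 1) * rss / ((n - q) * tss)                        # adjusted R^2 as in the text
r2_fpe = ((n - 1) * (n + q + 1) * r2 - 2 * q * n) / ((n - q - 1) * (n + 1))

print(r2, r2_adj, r2_fpe)                # ordering R^2 >= R_a^2 >= R^2_FPE
```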

2.1.1. AIC, TIC, and BIC

Now we turn to the methods of model selection: AIC [5], the Takeuchi information criterion (TIC) [66], and BIC [7]. For this, we first note that if $f(y)$ is the unknown true density and $g(y,\theta)$ is an assumed density, then the Kullback-Leibler Information Criterion (KLIC) is given by
$$D(f,g) = KLIC(f,g) = E_f \log\Big(\frac{f(y)}{g(y,\theta)}\Big) = E_f \log f(y) - E_f \log g(y,\theta),$$
where $E_f$ is the expectation with respect to $f(y)$. This is the expected “surprise” from using $g$ when $f$ is in fact the true density of $y$. We note that $D(f,g) \ge 0$, where equality holds if and only if $g = f$ almost everywhere. Further, $E_f \log f(y)$ is called the entropy of the distribution $f$; for more on entropy and information, see [67,68].
A concept related to entropy is the quasi maximum likelihood estimator (QMLE) $\hat\theta_{QML}$, which maximizes the quasi log-likelihood function
$$L(\theta) = L_n(\theta) = \frac{1}{n}\sum_{i=1}^{n}\log g(y_i,\theta)$$
based on the random sample $Y = (y_1,\ldots,y_n)$ from $f(y)$. Since $L_n(\theta) \to_p E_f[\log g(y_1,\theta)]$, it is expected that $\hat\theta_{QML}$ converges in probability to the maximizer $\theta^*$ of $E_f[\log g(y_1,\theta)]$ under suitable conditions. Since $E_f[\log f(y_1)]$ does not depend on $\theta$, the QMLE minimizes a random function which converges to
$$KLIC(f,g) = E_f \log f(y_1) - E_f \log g(y_1,\theta) = D(f,g).$$
Thus $\hat\theta_{QML} \to_p \theta^*$, where $\theta^* = \arg\min_\theta D(f, g(\theta))$ is often referred to as the pseudo-true value of $\theta$. It is well known that, under some regularity conditions,
$$\sqrt{n}\big(\hat\theta_{QML} - \theta^*\big) \to_d N\big(0,\, G(\theta^*)^{-1} I(\theta^*) G(\theta^*)^{-1}\big)$$
where $G(\theta) = -E[\partial^2 \log g(y,\theta)/\partial\theta\,\partial\theta']$ and $I(\theta) = E[\partial\log g(y_1,\theta)/\partial\theta\cdot\partial\log g(y_1,\theta)/\partial\theta']$. When $f(\cdot) = g(\cdot,\theta^*)$, $G(\theta^*) = I(\theta^*)$, and $\hat\theta_{QML}$ is the MLE and is asymptotically efficient.
Now consider the fitted density $\hat g(y) = g(y,\hat\theta_{QML})$ and
$$KLIC(f,\hat g) = E_f \log\Big(\frac{f(y)}{\hat g(y)}\Big) = c - E_y \log g(y,\hat\theta_{QML})$$
where $c = \int f(y)\log f(y)\,dy$ is free of the fitted model and $E_y(\cdot)$ denotes the expectation with respect to the true density of $y$, here $f(y)$. Then $E[KLIC(f,\hat g)] = c - E_Y E_y[\log g(y,\hat\theta_{QML})] = c - n^{-1}\sum_i E_Y E_{y_i}[\log g(y_i,\hat\theta_{QML})]$, where $Y$ and $y$ are independent. The expected KLIC can be interpreted as the expected likelihood when $Y$ is used to obtain $\hat\theta_{QML}$ and an independent sample $y$ (a single observation here) is used for evaluation. In linear regression, the expected KLIC is the expected squared prediction error. Dropping $c$ and using a second-order Taylor expansion, it can be shown that $T = E[KLIC(f,\hat g)] - c$ satisfies
$$nT = -E[L_n(\hat\theta)] + \mathrm{tr}\big[I(\theta^*)G(\theta^*)^{-1}\big].$$
Further, an asymptotically unbiased estimator of $T$ can be written as
$$\hat T = -n^{-1}\big\{L_n(\hat\theta) - \mathrm{tr}(\hat I\hat G^{-1})\big\}$$
where $L_n(\hat\theta) = \log g(Y,\hat\theta)$ and $\hat I\hat G^{-1}$ is a consistent estimator of $I(\theta^*)G(\theta^*)^{-1}$, in which $\hat I = \frac{1}{n}\sum_i \partial\log g(y_i,\hat\theta)/\partial\theta\cdot\partial\log g(y_i,\hat\theta)/\partial\theta'$ and $\hat G = -\frac{1}{n}\sum_i \partial^2\log g(y_i,\hat\theta)/\partial\theta\,\partial\theta'$.
When the model is correctly specified, that is, $g(y,\theta^*) = f(y)$, we have $G(\theta^*) = I(\theta^*)$ and $\mathrm{tr}(I(\theta^*)G(\theta^*)^{-1}) = q$, so that
$$\hat T = -n^{-1}L_n(\hat\theta) + n^{-1}q,$$
which is related to AIC, given by $2\hat T$:
$$AIC = -\frac{2L_n(\hat\theta)}{n} + \frac{2q}{n}.$$
Thus, we can think of AIC as an estimate of twice the expected KLIC under the assumption that the model is correctly specified. Therefore, selecting a model with the smallest AIC amounts to choosing the best-fitting model in the sense of having the smallest KLIC. A robust AIC by Takeuchi [66], known as the Takeuchi Information Criterion (TIC), is
$$TIC = -\frac{2L_n(\hat\theta)}{n} + \frac{2\,\mathrm{tr}(\hat I\hat G^{-1})}{n},$$
which, unlike AIC, does not require $g(y,\theta)$ to be correctly specified. In general, picking the model with the smallest AIC/TIC amounts to selecting the fitted model whose density is close to the true density.
We note that in a linear regression model, the minimization of the AIC reduces to the minimization of
$$AIC = \log\hat\sigma^2 + \frac{2q}{n}$$
where $\hat\sigma^2 = \hat u'\hat u/n$. It can be shown that $G(\theta^*) = I(\theta^*)$ if $u_i\,|\,x_i \sim N(0,\sigma^2)$. Thus AIC is most appropriate under normality; otherwise it is an approximation for the non-normal and heteroskedastic regression cases.
Further, in the linear regression case, the minimization of TIC can be shown to be the minimization of
$$TIC = \log\hat\sigma^2 + \frac{2}{n\hat\sigma^2}\sum_{i=1}^{n} h_i\hat u_i^2 + \frac{\hat k_4}{n}$$
where $\hat k_4 = \frac{1}{n\hat\sigma^4}\sum_{i=1}^{n}(\hat u_i^2 - \hat\sigma^2)^2$ and $h_i = x_i'(X'X)^{-1}x_i$. When the errors are homoskedastic and normal,
$$TIC \approx \log\hat\sigma^2 + \frac{2(q+1)}{n},$$
which is close to AIC, although differences may arise under heteroskedasticity and non-normality. However, as we change models, the residuals $\hat u_i^2$, and hence $\hat k_4$, typically do not change much; in that case, TIC and AIC may give similar model selection results.
We note that the BIC due to [7] is
$$BIC = \log\hat\sigma^2 + \frac{(\log n)\,q}{n},$$
in which the penalty term depends on the sample size and is generally larger than the penalty term appearing in the AIC. BIC provides a large-sample estimator of a transformation of the Bayesian posterior probability associated with the approximating model. In general, by choosing the fitted candidate model that minimizes BIC, one is selecting the candidate model with the highest posterior probability. A good property of BIC selection is that it is consistent, see for example [69]. That is, when the true model is of finite dimension, BIC will choose it with probability tending to 1 as the sample size $n$ increases.
In general, a penalized criterion can only be consistent if its penalty term ($\log n$ in BIC) is a fast enough increasing function of $n$ (see [70]). Thus AIC is not consistent, as it always has some probability of selecting models that are too large. However, we note that in finite samples, adjusted versions of AIC can behave much better, see for example [71]. Further, since the penalty term of BIC is more stringent than that of AIC, BIC tends to select smaller models than AIC. However, BIC provides a large-sample estimator of the transformation of the Bayesian posterior probability associated with the approximating model, whereas AIC provides an asymptotically unbiased estimator of the expected Kullback discrepancy between the generating model and the fitted approximating model. In addition, AIC is asymptotically efficient in the sense that it asymptotically selects the fitted candidate model which minimizes the MSE of prediction, while BIC is not asymptotically efficient. In view of this, AIC can be advocated when the primary goal of the model is to identify the meaningful factors influencing the outcome based on their relative importance.
In summary, both AIC and BIC provide well-founded and self-contained approaches to model selection, although with different motivations and penalty objectives. Both are typically good approximations of their own theoretical target quantities. Often this also means that they will identify good models for the observed data, but both criteria can still fail in this respect. For a detailed simulation and empirical comparison of these two approaches, see [72], and for their properties see [69,73,74]. Both the AIC and the TIC are designed for the likelihood or quasi-likelihood context, and they perform in a similar way; their relationship is similar to that between the conventional and the White covariance matrix estimators for the MLE/QMLE or LS. Unfortunately, despite its theoretical merit, TIC does not appear to be widely used, perhaps because it needs a very large sample to obtain good estimates of its penalty term.
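To make the AIC/BIC comparison concrete, here is a minimal sketch that applies the regression forms $AIC = \log\hat\sigma^2 + 2q/n$ and $BIC = \log\hat\sigma^2 + q\log n/n$ to a sequence of nested candidate models; the simulated design and helper function are my own assumptions, not taken from the references.

```python
import numpy as np

def fit_and_ic(X, y):
    """Return (AIC, BIC) of a linear regression: AIC = log s2 + 2q/n, BIC = log s2 + q log(n)/n."""
    n, q = X.shape
    bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = np.mean((y - X @ bhat) ** 2)          # sigma^2_hat = u'u/n
    return np.log(s2) + 2 * q / n, np.log(s2) + q * np.log(n) / n

rng = np.random.default_rng(1)
n, qmax = 200, 8
X = rng.normal(size=(n, qmax))
y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)   # only the first 3 regressors matter

aic = [fit_and_ic(X[:, :q], y)[0] for q in range(1, qmax + 1)]
bic = [fit_and_ic(X[:, :q], y)[1] for q in range(1, qmax + 1)]
print("AIC picks q =", 1 + int(np.argmin(aic)))
print("BIC picks q =", 1 + int(np.argmin(bic)))
```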

2.1.2. FIC

Let us start from the model
$$y_i = x_i'\beta + z_i'\gamma + u_i, \qquad i = 1,\ldots,n$$
or
$$y = X\beta + Z\gamma + u$$
where $X$ is an $n\times p$ matrix of variables intended (focused) to be included all the time, while the variables in the $n\times q$ matrix $Z$ may or may not be included. From the ML estimators $(\hat\beta_l,\hat\gamma_l)$ corresponding to the $l$-th model, the predictor of $m_l = x'\beta_l + z'\gamma_l$ at $(x,z)$ can be written as $\hat m_l = x'\hat\beta_l + z'\hat\gamma_l$. Reference [10] provides the MSE of $\hat m_l$. The basic idea of FIC is to develop a model selection criterion that chooses the model with the smallest estimated MSE. Such an MSE-based FIC for the $l$-th submodel is
$$\widehat{FIC}_l = \big(\hat\omega'(I - \hat\Psi_l\hat L^{-1})\hat\gamma\big)^2 + 2\,\hat\omega'\hat\Psi_l\hat\omega$$
where $\hat\Psi_l = \pi_l'(\pi_l\hat L^{-1}\pi_l')^{-1}\pi_l$, $\hat L = (Z'M_XZ)^{-1}$ with $M_X = I - X(X'X)^{-1}X'$, $\hat\omega = Z'X(X'X)^{-1}x - z$, and $\pi_l$ captures the projection mapping from the full model to the $l$-th submodel, such that $\omega_l = \pi_l\omega$.
In contrast, [10] shows that with
$$AIC_l = -\hat\gamma'\hat L^{-1}\hat\Psi_l\hat L^{-1}\hat\gamma + 2q_l,$$
where $q_l$ is the number of uncertain parameters in the $l$-th submodel, and with the estimand taken to be $m = \log f(y,\beta,\gamma)$, where $f(y,\beta,\gamma)$ is the probability density function of the data, the MSE-based FIC is asymptotically equivalent to AIC.

2.1.3. Mallows Model Selection

Let us write the regression model (2) as
$$y = m + u$$
where $m = X\beta$. Then $\hat m = \hat m(q) = P(q)y$, where $P(q) = X(X'X)^{-1}X'$.
The objective is to choose $q$ such that the average mean squared error (risk) $E[L(q)\,|\,X]$ is minimized, where
$$L(q) = \frac{1}{n}\big[m - \hat m(q)\big]'\big[m - \hat m(q)\big] = \frac{1}{n}(\hat\beta - \beta)'X'X(\hat\beta - \beta) = \frac{1}{n}u'P(q)u$$
such that
$$R(q) = E[L(q)\,|\,X] = \frac{1}{n}\sigma^2\,\mathrm{tr}(P(q)) = \frac{\sigma^2 q}{n}.$$
Mallows criterion for selecting $q$ is to minimize
$$C(q) = \frac{\hat u'\hat u}{n} + \frac{2\sigma^2 q}{n}$$
where the second term on the right-hand side is a penalty.
In fact, Mallows criterion is an unbiased estimator (up to the constant $\sigma^2$) of the risk of the predictive estimator $\hat m$ of $m$. This is because $E[L(q)\,|\,X] = E[(\hat m - m)'(\hat m - m)/n] = E[u'P(q)u/n] = \sigma^2\,\mathrm{tr}P(q)/n$ and $E[C(q)\,|\,X] = \sigma^2(n-q)/n + 2\sigma^2 q/n = \sigma^2 + \sigma^2\,\mathrm{tr}P(q)/n$. But the minimization of $E[L(q)\,|\,X]$ with respect to $q$ is the same as the minimization of $E[C(q)\,|\,X]$, since $\sigma^2$ does not depend on $q$.
Alternatively,
$$\frac{1}{n}(\hat m - m)'(\hat m - m) = \frac{1}{n}(\hat m - y + y - m)'(\hat m - y + y - m) = \frac{1}{n}\big[\hat u'\hat u + u'u - 2\hat u'u\big]$$
and $E\big[\frac{1}{n}(\hat m - m)'(\hat m - m)\big] = \frac{1}{n}E\big[\hat u'\hat u + 2\sigma^2\,\mathrm{tr}P - n\sigma^2\big]$. So an unbiased estimator of the risk is $(\hat u'\hat u + 2\sigma^2 q - n\sigma^2)/n$, and its minimization is equivalent to the Mallows criterion.
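A minimal sketch of Mallows selection over nested submodels follows; estimating $\sigma^2$ from the largest candidate model is a common practical choice assumed here, and the simulated design is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, qmax = 200, 10
X = rng.normal(size=(n, qmax))
y = X[:, :4] @ np.array([1.0, 0.8, -0.6, 0.4]) + rng.normal(size=n)

# Estimate sigma^2 from the largest candidate model (a common practical choice).
b_full, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = np.sum((y - X @ b_full) ** 2) / (n - qmax)

def mallows(q):
    Xq = X[:, :q]
    b, *_ = np.linalg.lstsq(Xq, y, rcond=None)
    rss = np.sum((y - Xq @ b) ** 2)
    return rss / n + 2 * sigma2 * q / n     # C(q) = u'u/n + 2 sigma^2 q/n

Cq = [mallows(q) for q in range(1, qmax + 1)]
print("Mallows criterion selects q =", 1 + int(np.argmin(Cq)))
```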

2.1.4. Cross-Validation (CV)

CV is a commonly used procedure for model selection. According to this, the selection of $q$ is made by minimizing
$$CV(q) = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - x_i'\hat\beta_{-i}\big)^2$$
where $\hat\beta_{-i}$ is the LS estimator of $\beta$ obtained by dropping the $i$-th observation $(y_i, x_i)$ from the sample. It can be shown that $E[CV(q)] \simeq MSPE(q)$, where
$$MSPE(q) \equiv E\big(y_{n+1} - x_{n+1}'\hat\beta\big)^2 = E\,\hat u_{n+1}^2$$
is the MSE of the forecast error $\hat u_{n+1} = y_{n+1} - \hat y_{n+1}$ with $\hat y_{n+1} = x_{n+1}'\hat\beta$. Thus, CV is an almost unbiased estimator of $MSPE(q)$.
This can be shown by first writing the MSPE, based on an out-of-sample observation from the same distribution as the in-sample observations, as
$$MSPE(q) = E\big(y_{n+1} - x_{n+1}'\hat\beta\big)^2 = E\,\hat u_{n+1}^2 = E\,u_{n+1}^2 + E\big[(\hat\beta - \beta)'x_{n+1}x_{n+1}'(\hat\beta - \beta)\big] = E\,u_{n+1}^2 + MSE(q)$$
where $MSE(q) = E\big[(\hat m(x_{n+1}) - m(x_{n+1}))'(\hat m(x_{n+1}) - m(x_{n+1}))\big] = E\big[(\hat\beta - \beta)'x_{n+1}x_{n+1}'(\hat\beta - \beta)\big]$. Since $E\,u_{n+1}^2 = \sigma^2$ does not depend on $q$, selection by $MSPE(q)$ and by $MSE(q)$ are equivalent.
We observe that $\hat u_{n+1} = y_{n+1} - x_{n+1}'\hat\beta$ is a prediction error based on first estimating $\hat\beta$ from the $n$ in-sample observations and then calculating the error at the out-of-sample observation $n+1$. Therefore, $MSPE(q)$ is the expectation of a squared leave-one-out prediction error when the sample length is $n+1$. Using this idea we can obtain a similar leave-one-out prediction error for each observation $i$, given by $\hat u_i = y_i - x_i'\hat\beta_{-i}$, based on the remaining observations. Thus $E\,\hat u_i^2$ is the corresponding MSPE for each $i$, and
$$E[CV(q)] = E\Big[\frac{1}{n}\sum_{i=1}^{n}\hat u_i^2\Big] = MSPE(q).$$
Further, since $E\,\hat u_{n+1}^2$ based on $n+1$ observations will be close to $E\,\hat u_i^2$ based on $n$ observations, $CV(q)$ is an almost unbiased estimator of $MSPE(q)$.
The $CV(q)$ written above can be rewritten as
$$CV(q) = \frac{1}{n}\sum_{i=1}^{n}\frac{\tilde u_i^2}{(1 - h_{ii})^2}$$
where $\tilde u_i = y_i - x_i'\hat\beta$ and $h_{ii}$, referred to as the leverage, is the $i$-th diagonal element of the projection matrix $X(X'X)^{-1}X'$, see [75]. This expression is useful for computation, since it requires only the full-sample fit. Also, see [74] for a link between $CV(q)$ and AIC.
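The leverage formula makes leave-one-out CV cheap, since a single full-sample fit delivers all $n$ prediction errors. The sketch below (with an illustrative simulated design) checks the shortcut against explicit refitting.

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 80, 4
X = rng.normal(size=(n, q))
y = X @ rng.normal(size=q) + rng.normal(size=n)

# Shortcut: CV(q) = n^{-1} sum_i [u_i / (1 - h_ii)]^2 using one full-sample fit.
H = X @ np.linalg.solve(X.T @ X, X.T)
u = y - H @ y
cv_fast = np.mean((u / (1 - np.diag(H))) ** 2)

# Brute force: refit n times, each time leaving one observation out.
errs = []
for i in range(n):
    keep = np.arange(n) != i
    b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    errs.append(y[i] - X[i] @ b)
cv_slow = np.mean(np.square(errs))

print(cv_fast, cv_slow)    # identical up to numerical error
```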

2.1.5. Model Selection by Other Penalty Functions

The issue of model selection has received more attention in recent years because of the challenging problem of estimating models with large numbers of regressors, which may increase with the sample size; examples include earnings models in labor economics with a large number of regressors, financial portfolio models with a large number of stocks, and VAR models with hundreds of macro variables.
A different method of variable selection and estimation for such models is penalized least squares (PLS), see [14] for a review. In this literature, estimation of the parameters and selection of the variables are done jointly by minimizing a criterion that combines a loss function with a penalization function. Using an $\ell_p$ penalty, the PLS estimation and variable selection problem is
$$\min_\beta\Big[\sum_{i=1}^{n}(y_i - x_i'\beta)^2 + \lambda\Big(\sum_{j=1}^{q}|\beta_j|^p\Big)^{1/p}\Big]$$
where $\lambda$ is a tuning or shrinkage parameter and the penalty corresponds to the restriction $(\sum_{j=1}^{q}|\beta_j|^p)^{1/p} \le c$ (another tuning parameter). For $p = 0$, the $\ell_0$-norm becomes $\sum_{j=1}^{q}I(\beta_j \ne 0)$, with $I(\cdot)$ the usual indicator function, which counts the number of nonzero $\beta_j$ for $j = 1,\ldots,q$; the AIC and BIC belong to this norm. For $p = 1$, the restriction becomes $\sum_{j=1}^{q}|\beta_j| \le c$, which is used in the LASSO for simultaneous shrinkage estimation [76] and variable selection. It can be shown analytically that the LASSO method estimates zero coefficients as exactly zero with positive probability as $n\to\infty$. Next, for $p = 2$ the $\ell_2$-norm uses $\sum_{j=1}^{q}\beta_j^2 \le c$ and provides ridge-type [13] shrinkage estimation but not variable selection. However, if we consider the generalized ridge estimator under $\sum\hat\lambda_j\beta_j^2 \le c$, then the coefficient estimates corresponding to $\hat\lambda_j \to \infty$ tend to zero, see [77].
Further, when $0 < p \le 1$ we get the bridge estimator [11,12], which provides a way to combine variable selection and parameter estimation, with $p = 1$ as the LASSO. For the adaptive LASSO and other forms of the LASSO, see [62,78,79,80]. Also, see the link of the LASSO with least angle regression selection (LARS) in [81].
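As an illustration of the $p = 1$ case, the following bare-bones coordinate-descent LASSO sketch minimizes $\tfrac{1}{2}\|y - X\beta\|^2 + \lambda\sum_j|\beta_j|$; the implementation details (soft-thresholding updates, the choice of $\lambda$, and the simulated data) are assumptions of this sketch rather than the exact algorithms used in [76,81].

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2)||y - X b||^2 + lam * sum_j |b_j|."""
    n, q = X.shape
    b = np.zeros(q)
    col_ss = np.sum(X**2, axis=0)                # X_j'X_j for each column
    for _ in range(n_iter):
        for j in range(q):
            r_j = y - X @ b + X[:, j] * b[j]     # partial residual excluding column j
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_ss[j]
    return b

rng = np.random.default_rng(4)
n, q = 100, 10
X = rng.normal(size=(n, q))
beta = np.zeros(q); beta[:3] = [2.0, -1.5, 1.0]   # sparse truth
y = X @ beta + rng.normal(size=n)

print(np.round(lasso_cd(X, y, lam=20.0), 3))      # several coefficients are shrunk exactly to zero
```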

2.2. Model Averaging

Let $m$ be a parametric or nonparametric object of interest, which can be a conditional mean or a conditional variance, and let $\hat m_l$, $l = 1,\ldots,M$, be the set of estimators of $m$ corresponding to the different sets of regressors considered in the model selection problem. Consider $w_l$, $l = 1,\ldots,M$, to be the weights corresponding to $\hat m_l$, where $0 \le w_l \le 1$ and $\sum_{l=1}^{M}w_l = 1$. We can then define a model averaging estimator of $m$ as
$$\hat m(w) = \sum_{l=1}^{M}w_l\hat m_l.$$
Below we present the choice of $w_l$ in linear regression models. For the linear regression model, consider the model in (1) or (2), where the dimension of $\beta$ can tend to $\infty$ as $n\to\infty$. We take $M$ models where the $l$-th model contains $q_l$ regressors, a subvector of $x_i$. The corresponding model can be written as
$$y = X_l\beta_l + u,$$
and the LS estimator of $\beta_l$ is
$$\hat\beta_l = (X_l'X_l)^{-1}X_l'y.$$
This gives
$$\hat m_l = X_l\hat\beta_l = P_l y$$
where $P_l = X_l(X_l'X_l)^{-1}X_l'$. The model averaging estimator (MAE) of $m$ is given by
$$\hat m(w) = \sum_{l=1}^{M}w_l\hat m_l = P(w)y$$
where $P(w) = \sum_{l=1}^{M}w_lP_l$. An alternative expression is
$$\hat m(w) = \sum_{l=1}^{M}w_l\hat m_l = \sum_{l=1}^{M}w_lX_l\hat\beta_l = X\hat\beta(w)$$
where we write $\tilde\beta_l = (\hat\beta_l'\;\;0')'$ such that $X_l\hat\beta_l = [X_l\;\;X_{-l}](\hat\beta_l'\;\;0')' = X\tilde\beta_l$, and $\hat\beta(w) = \sum_{l=1}^{M}w_l\tilde\beta_l$ is the MAE of $\beta$. Thus, for the linear model, the MAE of $m$ corresponds to the MAE of $\beta$, but this may not hold for models that are nonlinear in the parameters.
Now we consider the ways to determine weights.

2.2.1. Bayesian and FIC Weights

Under the Bayesian procedure we assume that there are $M$ potential models and that one of them is the true model. Then, using the prior probability that each potential model is the true model, together with prior distributions for the parameters, the posterior distribution of the quantity of interest is obtained as a weighted average over the submodels, where the weights are the posterior probabilities that each model is the true model given the data.
The two types of weights considered are then
$$w_l = \frac{\exp\{-\tfrac{1}{2}AIC_l\}}{\sum_{l=1}^{M}\exp\{-\tfrac{1}{2}AIC_l\}} \qquad\text{and}\qquad w_l = \frac{\exp\{-\tfrac{1}{2}BIC_l\}}{\sum_{l=1}^{M}\exp\{-\tfrac{1}{2}BIC_l\}}$$
where $AIC_l = -2\log L_l + 2q_l$ and $BIC_l = -2\log L_l + q_l\log n$. These are known as smoothed AIC (SAIC) and smoothed BIC (SBIC) weights. While the Bayesian model averaging estimator (BMAE) has a neat interpretation, it searches for the true model instead of selecting an estimator with a low loss. In simulations it has been found that the SAIC and SBIC averaging estimators tend to outperform the AIC- and BIC-selected estimators, see [82].
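A small sketch of the smoothed weights follows; the $AIC_l$ values are placeholders, since in practice they come from the fitted candidate models.

```python
import numpy as np

def smoothed_weights(ic_values):
    """Smoothed AIC/BIC weights: w_l proportional to exp(-IC_l / 2)."""
    ic = np.asarray(ic_values, dtype=float)
    ic = ic - ic.min()                 # subtract the minimum for numerical stability (ratios unchanged)
    w = np.exp(-0.5 * ic)
    return w / w.sum()

aic_values = [210.3, 208.1, 209.0, 215.7]       # illustrative AIC_l values for M = 4 models
print(smoothed_weights(aic_values))             # weights sum to one; the best model gets the largest weight
```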
As for the FIC, consider the model averaging estimator
$$\tilde m = \sum_{l=1}^{M}w_l\hat m_l$$
with weights
$$w_l = \exp\Big(-\frac{\kappa}{2}\,\frac{FIC_l}{\hat\omega'\hat L\hat\omega}\Big)\Big/\sum_{\text{all }l'}\exp\Big(-\frac{\kappa}{2}\,\frac{FIC_{l'}}{\hat\omega'\hat L\hat\omega}\Big),$$
where $\kappa$ is an algorithmic parameter bridging from uniform weighting ($\kappa$ close to 0) to hard-core FIC selection ($\kappa$ large). For this, and for further properties and applications of the FIC, see [10] and [82].

2.2.2. Mallows Weight Selection Method

In the linear regression model, $\hat m(w) = P(w)y$ is a linear estimator with $w \in W_M$, the set of weight vectors satisfying $0 \le w_l \le 1$ and $\sum_{l=1}^{M}w_l = 1$. So an optimal choice of $w$ can be found following the Mallows criterion described above. The Mallows criterion for choosing the weights $w$ is
$$C(w) = \hat u(w)'\hat u(w) + 2\sigma^2\,\mathrm{tr}(P(w))$$
where $\hat u(w) = y - \hat m(w) = y - \sum_{l=1}^{M}w_l\hat m_l = \sum_{l=1}^{M}w_l(y - \hat m_l) = \sum_{l=1}^{M}w_l\hat u_l = \hat U w$ and
$$\mathrm{tr}(P(w)) = \sum_{l=1}^{M}w_l\,\mathrm{tr}P_l = \sum_{l=1}^{M}w_lq_l = q'w$$
in which $q = (q_1,\ldots,q_M)'$, $w = (w_1,\ldots,w_M)'$, $\hat u_l$ is the residual vector from the $l$-th model, and $\hat U = (\hat u_1,\ldots,\hat u_M)$ is an $n\times M$ matrix of residuals from all the models. Thus
$$C(w) = w'\hat U'\hat U w + 2\sigma^2 q'w$$
is quadratic in $w$. Thus
$$\hat w = \arg\min_{w\in W_M}C(w),$$
which can be obtained by quadratic programming with inequality constraints, e.g., in GAUSS or MATLAB. Then Hansen’s Mallows model averaging (MMA) estimator is
$$\hat m(\hat w) = \sum_{l=1}^{M}\hat w_l\hat m_l.$$
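A sketch of this weight optimization is given below, using scipy's SLSQP solver over the unit simplex rather than a dedicated quadratic programming routine; the candidate models, the $\sigma^2$ estimate from the largest model, and the solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, qmax, M = 200, 6, 6
X = rng.normal(size=(n, qmax))
y = X[:, :3] @ np.array([1.0, 0.7, -0.4]) + rng.normal(size=n)

# Nested candidate models l = 1,...,M with q_l = l regressors; collect residuals and q_l.
Uhat = np.empty((n, M))
qvec = np.arange(1, M + 1)
for l in range(M):
    Xl = X[:, : l + 1]
    Uhat[:, l] = y - Xl @ np.linalg.lstsq(Xl, y, rcond=None)[0]
sigma2 = np.sum(Uhat[:, -1] ** 2) / (n - qmax)     # sigma^2 from the largest model

def C(w):                                          # Mallows criterion C(w) = w'U'Uw + 2 sigma^2 q'w
    return w @ (Uhat.T @ Uhat) @ w + 2 * sigma2 * qvec @ w

w0 = np.full(M, 1.0 / M)
res = minimize(C, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * M,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(np.round(res.x, 3))                          # MMA weights on the unit simplex
```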
Following [83], [39] shows that
$$\frac{L(\hat w)}{\inf_{w\in W_M^*}L(w)} \to_p 1$$
as $n\to\infty$, so that $\hat w$ is asymptotically optimal in Li’s sense, where $L(\hat w) = (m - \hat m(\hat w))'(m - \hat m(\hat w))$. However, Hansen’s result requires the weights to belong to a discrete set and the models to be nested. Reference [41] improves the result by relaxing discreteness and by not assuming that the models are nested; their approach is based on deriving an unbiased estimator of the exact MSE of $\hat m(w)$.
Reference [84] also proposes a corresponding forecast combination method based on Mallows model averaging (MMA). He proves that the criterion is an asymptotically unbiased estimator of both the in-sample and the out-of-sample one-step-ahead MSE.

2.2.3. Jackknife Model Averaging Method (CV)

Utilizing the leave-one-out cross-validation (CV) procedure, also known as the jackknife, the jackknife model averaging (JMA) method of [40] for estimating $m(w)$ relaxes the assumptions in [39]: the submodels are now allowed to be non-nested and the error terms can be heteroskedastic. The sum of squared residuals in the JMA method is
$$CV(w) = \frac{1}{n}\big(y - \tilde m(w)\big)'\big(y - \tilde m(w)\big)$$
where $\tilde m(w)$ is the vector of jackknife estimators, whose $i$-th element is computed with the $i$-th observation deleted. More specifically, the $i$-th element of $\tilde m_l$ is $x_{li}'(X_{l(-i)}'X_{l(-i)})^{-1}X_{l(-i)}'y_{-i}$, where $X_{l(-i)}$ is $X_l$ with its $i$-th row deleted and $y_{-i}$ is $y$ with its $i$-th element deleted. Thus
$$\tilde u(w) = \sum_{l=1}^{M}w_l(y - \tilde m_l) = \sum_{l=1}^{M}w_l\tilde u_l = \tilde U w$$
where $\tilde U = (\tilde u_1,\ldots,\tilde u_M)$ is an $n\times M$ matrix and $\tilde u_l = (\tilde u_{1l},\ldots,\tilde u_{nl})'$ is an $n\times 1$ vector in which $\tilde u_{il}$ is computed with the $i$-th observation deleted. Then
$$CV(w) = \frac{1}{n}\tilde u(w)'\tilde u(w) = \frac{1}{n}w'\tilde U'\tilde U w$$
and the JMA weights $\tilde w$ are obtained by minimizing $CV(w)$ with respect to $w\in W_M$; the JMA estimator is $\tilde m(\tilde w) = \sum_{l=1}^{M}\tilde w_l\tilde m_l$. Reference [40] shows asymptotic optimality, using [83,85], in the sense of minimizing the conditional risk, which is equivalent to the out-of-sample prediction MSE.
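For linear submodels the jackknife residuals need no refitting, since $\tilde u_{il} = \hat u_{il}/(1 - h_{ii,l})$ by the leave-one-out identity of Section 2.1.4. The sketch below uses that shortcut (an assumption of this illustration) to form $\tilde U$ and minimize $CV(w)$ over the simplex.

```python
import numpy as np
from scipy.optimize import minimize

def jackknife_residuals(Xl, y):
    """Leave-one-out residuals of a linear submodel via the leverage identity."""
    H = Xl @ np.linalg.solve(Xl.T @ Xl, Xl.T)
    u = y - H @ y
    return u / (1.0 - np.diag(H))

rng = np.random.default_rng(6)
n, qmax, M = 150, 5, 5
X = rng.normal(size=(n, qmax))
y = X[:, :2] @ np.array([1.0, -1.0]) + rng.normal(size=n)

# Column l of U~ holds the jackknife residuals of the l-th nested submodel.
Utilde = np.column_stack([jackknife_residuals(X[:, : l + 1], y) for l in range(M)])

cv = lambda w: w @ (Utilde.T @ Utilde) @ w / n     # CV(w) = n^{-1} w'U~'U~ w
res = minimize(cv, np.full(M, 1.0 / M), method="SLSQP",
               bounds=[(0.0, 1.0)] * M,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(np.round(res.x, 3))                          # JMA weights
```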
There are many extensions of the JMA method to various other econometric models. Reference [86] does this for the quantile regression model. Reference [82] extends it to dependent time series models and models with GARCH errors. Also, using the MMA method of [39] for models with endogeneity, [87] develops MMA-based two-stage least squares (MATSLS), model averaging limited information maximum likelihood (MALIML), and model averaging Fuller (MAF) estimators.
However, it would be useful to have extensions of the MMA and JMA procedures to models estimated by GMM or IV. In addition, the sampling properties of the averaging estimators need to be developed for the purpose of statistical inference.

3. Nonparametric (NP) Model Selection and Model Averaging

3.1. NP Model Selection

Let us write the NP model as
$$y_i = m(x_i) + u_i$$
where $x_i$ is i.i.d. with density $f$ and the error $u_i$ is independent of $x_i$.
We can write the local linear model as
$$y_i = m(x) + (x_i - x)'\beta(x) + u_i = z_i(x)'\delta(x) + u_i$$
or
$$y = Z(x)\delta(x) + u$$
where $z_i(x) = [1\;\;(x_i - x)']'$, so that $Z(x)$ is an $n\times(q+1)$ matrix, and $\delta(x) = [m(x)\;\;\beta(x)']'$. Then the local linear LS (LLLS) estimator of $\delta(x)$ is
$$\hat\delta(x) = \big(Z(x)'K(x)Z(x)\big)^{-1}Z(x)'K(x)y = P(x)y$$
where $P(x) = (Z(x)'K(x)Z(x))^{-1}Z(x)'K(x)$, $K(x) = \mathrm{diag}\big(K((x_1-x)/h),\ldots,K((x_n-x)/h)\big)$ is a diagonal matrix in which the kernel is $K((x_i-x)/h) = \prod_{j=1}^{q}K((x_{ij}-x_j)/h_j)$, and $h_j$ is the window-width for the $j$-th variable. From this, pointwise $\hat m(x) = [1\;\;0]\hat\delta(x)$ and $\hat\beta(x) = [0\;\;1]\hat\delta(x)$. Further, the profiled $\hat m = (\hat m(x_1),\ldots,\hat m(x_n))'$ can be written as
$$\hat m = Py$$
where $P = P(h)$ is an $n\times n$ matrix whose $i$-th row is $[1\;\;0]P(x_i) = [1\;\;0](Z(x_i)'K(x_i)Z(x_i))^{-1}Z(x_i)'K(x_i)$, for $i = 1,\ldots,n$. If $h$ is fixed, then $\hat m$ is a linear estimator in $y$; it will be a nonlinear estimator in $y$ if $h = \hat h$ is obtained either by a plug-in method or by cross-validation.
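A minimal LLLS sketch for a scalar regressor with a Gaussian kernel and a fixed bandwidth follows; both choices, and the simulated data, are illustrative assumptions.

```python
import numpy as np

def llls(x0, x, y, h):
    """Local linear fit at x0: returns (m_hat(x0), beta_hat(x0))."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    Z = np.column_stack([np.ones_like(x), x - x0])  # z_i(x0) = [1, x_i - x0]
    ZKZ = Z.T @ (k[:, None] * Z)
    delta = np.linalg.solve(ZKZ, Z.T @ (k * y))     # (Z'KZ)^{-1} Z'Ky
    return delta[0], delta[1]

rng = np.random.default_rng(7)
n = 300
x = rng.uniform(-2, 2, n)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=n)

m_hat, slope_hat = llls(0.5, x, y, h=0.25)
print(m_hat, np.sin(np.pi * 0.5))                   # estimate vs true regression value at x = 0.5
```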
With respect to the goodness of fit measures for the NP models we note that
$$V(y) = V(m(x)) + E[\sigma^2(x)].$$
So the global population goodness of fit is
$$\rho^2 = \frac{V(m(x))}{V(y)} = 1 - \frac{E[y - m(x)]^2}{V(y)}, \qquad 0 \le \rho^2 \le 1,$$
and its global sample estimator is given by
$$R^2 = 1 - \frac{\sum\hat u_i^2}{\sum(y_i - \bar y)^2} = 1 - \frac{\hat u'\hat u}{y'M_2y} = 1 - \frac{y'M_1(h)y}{y'M_2y} = \frac{y'M_1^*(h)y}{y'M_2y}$$
where $\hat u = y - \hat m = y - P(h)y = M(h)y$ (with $M(h) = I - P(h)$), $M_1(h) = M(h)'M(h)$, $M_1^*(h) = M_2 - M_1(h)$, and $M_2 = I - \iota\iota'/n$ with $\iota$ an $n\times 1$ vector of ones. However, $0 \le R^2 \le 1$ may not hold, since $\sum(y_i - \bar y)^2 \ne \sum(\hat m(x_i) - \bar y)^2 + \sum\hat u_i^2$ in general. Therefore, one can use the following modified measure satisfying $0 \le R_1^2 \le 1$:
$$R_1^2 = R^2\,I(a \le 1)$$
where $a = \sum\hat u_i^2/\sum(y_i - \bar y)^2$ and $I(\cdot)$ is an indicator function.
Another way to define a proper global $R^2$ is to first consider a local $R^2(x)$. This is based on the fact that, at the point $x$,
$$\sum(y_i - \bar y)^2K\Big(\frac{x_i - x}{h}\Big) = \sum(\hat m(x_i) - \bar y)^2K\Big(\frac{x_i - x}{h}\Big) + \sum\hat u_i^2K\Big(\frac{x_i - x}{h}\Big)$$
because $\sum\hat u_iK\big(\frac{x_i - x}{h}\big) = 0$ and $\sum(x_i - x)\hat u_iK\big(\frac{x_i - x}{h}\big) = 0$ by the first-order conditions of local linear LS estimation. Thus a local $R^2(x)$ can be defined as
$$R^2(x) = \frac{\sum(\hat m(x_i) - \bar y)^2K\big(\frac{x_i - x}{h}\big)}{\sum(y_i - \bar y)^2K\big(\frac{x_i - x}{h}\big)} = \frac{SSR(x)}{SST(x)},$$
which satisfies $0 \le R^2(x) \le 1$. A global $R_2^2$ is then
$$R_2^2 = \frac{\int_x SSR(x)\,dx}{\int_x SST(x)\,dx}, \qquad 0 \le R_2^2 \le 1.$$
The goodness-of-fit measure $R_1^2$ is considered in [88], where its application to the selection of statistically significant variables in NP regression is shown; $R_2^2$ is introduced in [89,90]. For variable selection it may be more appropriate to consider an adjusted $R_1^2$,
$$R_{1a}^2 = R_a^2\,I(b \le 1)$$
where $R_a^2 = 1 - \dfrac{(n-1)\,y'M_1(h)y}{\mathrm{tr}(M_1(h))\,y'M_2y} = 1 - b$. As a practical matter, the most critical choices for model selection in the nonparametric regression above are the window-width $h$ and the number of variables $q$. Further, if instead of the local linear estimator used above we consider a local polynomial of degree $d$, then $Z(x)$ in $\hat\delta(x)$ becomes an $n\times(qd+1)$ matrix and we need an additional selection of $d$. Thus the nonparametric goodness-of-fit measures described above should be viewed as $R_1^2 = R_1^2(h,q,d)$ and $R_{1a}^2 = R_{1a}^2(h,q,d)$, and they can be used for choosing, say, $h$ for fixed $q$ and $d$ as the value which maximizes $R_{1a}^2(h,q,d)$. We note that $d = 0$ gives the well-known Nadaraya-Watson local constant estimator and $d = 1$ the local linear estimator. Further, for given $d$ and $h$, $R_1^2 = R_1^2(q)$ and $R_2^2 = R_2^2(q)$ can be used to choose $q$.

3.1.1. AIC, BIC, and GCV

In the NP case, model selection (choosing $q$) using AIC is proposed by [91]. Based on the LCLS estimator,
$$AIC = \log\hat\sigma^2 + \frac{1 + \mathrm{tr}P(h)/n}{1 - (\mathrm{tr}P(h) + 2)/n}$$
where $\hat\sigma^2 = \hat u'\hat u/n = y'M_1(h)y/n$, in which $M_1(h) = M(h)'M(h)$ and $M(h) = I - P(h)$, where the $(i,j)$-th element of $P(h)$ is $P_{ij}(h) = K_{ij}/\sum_{l=1}^{n}K_{il}$ and $K_{ij} = \prod_{s=1}^{q}h_s^{-1}K((x_{is} - x_{js})/h_s)$.
In the same way, we note that $AIC = AIC(h,q,d)$, and it can be used to select, for example, $h$ given $q$ and $d$ ([92]) or $q$ given $h$ and $d$; in the latter case $AIC = AIC(q)$. A corresponding $BIC = BIC(q)$ procedure for the NP kernel model is not yet known. However, if one considers NP sieve regression of the type $m(x) = \sum_{j=1}^{q}z_j(x)\beta_j$, where the $z_j(x)$ are nonlinear functions of $x$ and $q\to\infty$, then the BIC is similar to the BIC given in [96]. This includes, for example, the special case of a series expansion in which $z_j(x) = x^j$, and a spline regression in which $m(x) = \sum_{j=1}^{p}x^j\beta_j + \sum_{j=1}^{r}\beta_{p+j}(x - t_j)I(x \ge t_j)$, with $q = p + r$, $t_j$ the $j$-th knot, and $I(x \ge t_j) = 1$ if $x \ge t_j$ and 0 otherwise.
In [9], an estimate of the minimizer of $EL(q)$, called the GCV, is proposed which does not require knowledge of $\sigma^2$. This can be written as the minimization of
$$V(q) = \frac{n^{-1}\sum_{i=1}^{n}\big(y_i - \hat m(x_i)\big)^2}{\big(1 - n^{-1}\mathrm{tr}P\big)^2}$$
with respect to $q$. It has been shown by [9] that $E[V(q)\,|\,x] - \sigma^2 \approx E[L(q)\,|\,x]$ for large $n$, and the minimizer $\hat q$ of $EV(q)$ is asymptotically optimal in the sense that $EL(\hat q)/\min_q EL(q) \to 1$ as $n\to\infty$; that is, the MSE achieved by $\hat q$ tends to the minimum as $n\to\infty$. We note that $L(q)$ in the parametric and nonparametric cases is given in Section 2.1.3 and Section 3.1.2, respectively.
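Although the text states GCV for choosing $q$, the same formula applies to the window-width; the sketch below evaluates $V(\cdot)$ over a bandwidth grid for the local constant smoother matrix $P(h)$ defined above (the Gaussian kernel, grid, and simulated data are illustrative assumptions).

```python
import numpy as np

def nw_smoother_matrix(x, h):
    """Local constant smoother matrix P(h) with P_ij = K_ij / sum_l K_il (Gaussian kernel)."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def gcv(x, y, h):
    P = nw_smoother_matrix(x, h)
    resid = y - P @ y
    n = len(y)
    return np.mean(resid**2) / (1 - np.trace(P) / n) ** 2   # V = n^{-1} sum resid^2 / (1 - tr(P)/n)^2

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(0, 1, n)
y = np.cos(2 * np.pi * x) + 0.2 * rng.normal(size=n)

grid = np.linspace(0.01, 0.3, 30)
scores = [gcv(x, y, h) for h in grid]
print("GCV bandwidth:", grid[int(np.argmin(scores))])
```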

3.1.2. Mallows Model Selection

Let us write the regression model
$$y_i = m(x_i) + u_i$$
where $E[u_i\,|\,x_i] = 0$ and $E(u_i^2\,|\,x_i) = \sigma^2$. Then, for $m = (m(x_1),\ldots,m(x_n))'$, $y = (y_1,\ldots,y_n)'$ and $u = (u_1,\ldots,u_n)'$,
$$y = m + u.$$
Let us consider the LLLS estimator of $m$, which is linear in $y$,
$$\hat m = \hat m(q) = P(q)y$$
where $P = P(h) = P(q)$ is as defined in Section 3.1. When $\hat h \to h$ for large $n$, $\hat m$ becomes asymptotically linear.
Our objective is to choose $q$ such that the average mean squared error (risk) $E[L(q)\,|\,x]$ is minimized, where
$$L(q) = \frac{1}{n}\big(m - \hat m(q)\big)'\big(m - \hat m(q)\big).$$
We note that, for $\hat u = y - \hat m(q)$,
$$L(q) = \frac{1}{n}\big(m - \hat m(q)\big)'\big(m - \hat m(q)\big) = \frac{1}{n}\big[\hat u'\hat u + u'u - 2\hat u'u\big]$$
and
$$R(q) = E(L(q)\,|\,x) = \frac{1}{n}E\big[\hat u'\hat u + 2\sigma^2\,\mathrm{tr}P(q) - n\sigma^2\big].$$
Further, Mallows criterion for selecting $q$ (the number of variables in $x_i$) is the minimization of
$$C(q) = \frac{1}{n}\big(y - \hat m(q)\big)'\big(y - \hat m(q)\big) + \frac{2\sigma^2}{n}\mathrm{tr}P(q)$$
where the second term on the right-hand side is the penalty. Essentially, the minimization of $C(q)$ is the same as the minimization of an unbiased estimator of $E[L(q)\,|\,x] = R(q)$, since $\sigma^2$ does not depend on $q$, see Section 2.1.3 and [6,9].

3.1.3. Cross Validation (CV)

The CV method is one of the most widely used window-width selectors for NP kernel smoothing. We note that the cross-validation estimator of the integrated squared error weighted by the density $f(x)$,
$$ISE(q) = \int_x\big(\hat m(x) - m(x)\big)^2f(x)\,dx,$$
is given by
$$CV(q) = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat m_{-i}(x_i)\big)^2$$
where $\hat m_{-i}(x_i)$ is $\hat m(x_i)$ computed after deleting the $i$-th observation $(y_i, x_i)$ from the sample. In fact,
$$CV(q) = \frac{1}{n}\sum_{i=1}^{n}\big(m(x_i) - \hat m_{-i}(x_i)\big)^2 + \frac{2}{n}\sum_{i=1}^{n}\big(m(x_i) - \hat m_{-i}(x_i)\big)u_i + \frac{1}{n}\sum_{i=1}^{n}u_i^2$$
where the first term on the right-hand side is a good approximation to $ISE(q)$, the second term is generally negligibly small, and the third term converges to the constant $\sigma^2 = E[\sigma^2(x)]$, which is free of $h$. Therefore $CV(q) \approx ISE(q) + \sigma^2$ asymptotically.
Also, in the case where $m(x)$ is a sieve regression, [96] shows that CV is an unbiased estimator of the MSE of the prediction error (MSEPE) of $m$, $MSEPE = E[y_{n+1} - \hat m(x_{n+1})]^2$, see Section 2.1.4. In addition, the minimization of MSEPE is equivalent to the minimization of the MSE and the integrated MSE (IMSE) of the estimated $m$ for conditional and unconditional $x$, respectively.
If, instead of the local linear estimator of $m(x_i)$, we consider a local polynomial of order $d$, then $\hat m(x_i)$ is the LPLS estimator [2], and $CV(q) = CV(h,q,d)$ continues to apply. For $d = 0$ we have the local constant LS (LCLS) estimator developed by [98,99], and for $d = 1$ the LLLS estimator considered above. In practice, the values of $h$ and $d$ can be determined by minimizing $CV(h,q,d)$ with respect to $h$ and $d$ for given $q$, as developed by [100]. For a vector $x_i$, if the chosen $h_j = \hat h_j$ for some $j$ tends to infinity (is very large), then the corresponding variable is irrelevant. This can be seen from a simple example. Suppose $\hat m(x)$ for two variables $x_{i1}, x_{i2}$, using the LCLS estimator, is $\hat m(x_1,x_2) = \sum y_iK\big(\frac{x_{i1}-x_1}{h_1}\big)K\big(\frac{x_{i2}-x_2}{h_2}\big)\big/\sum K\big(\frac{x_{i1}-x_1}{h_1}\big)K\big(\frac{x_{i2}-x_2}{h_2}\big)$. If $h_2\to\infty$, then $K\big(\frac{x_{i2}-x_2}{h_2}\big)\to K(0)$, a constant, and $\hat m(x) = \hat m(x_1,x_2) = \sum y_iK\big(\frac{x_{i1}-x_1}{h_1}\big)\big/\sum K\big(\frac{x_{i1}-x_1}{h_1}\big)$. Thus a large estimated window-width leads to the exclusion of the corresponding variable, and hence to variable selection.
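The sketch below illustrates this point: leave-one-out CV for the local constant estimator with two regressors, one of which is irrelevant; the kernel, optimizer, and simulated design are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def loo_cv(log_h, X, y):
    """Leave-one-out CV for the local constant (Nadaraya-Watson) estimator with a product Gaussian kernel."""
    h = np.exp(log_h)                                   # optimize on the log scale to keep h > 0
    D = (X[:, None, :] - X[None, :, :]) / h             # (n, n, q) scaled differences
    K = np.exp(-0.5 * np.sum(D**2, axis=2))
    np.fill_diagonal(K, 0.0)                            # delete the i-th observation
    m_loo = (K @ y) / K.sum(axis=1)
    return np.mean((y - m_loo) ** 2)

rng = np.random.default_rng(9)
n = 300
X = rng.uniform(-1, 1, size=(n, 2))
y = np.sin(np.pi * X[:, 0]) + 0.3 * rng.normal(size=n)  # x_2 is irrelevant

res = minimize(loo_cv, x0=np.log([0.2, 0.2]), args=(X, y), method="Nelder-Mead")
print("CV bandwidths:", np.exp(res.x))                  # h_2 typically much larger than h_1
```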
In a seminal paper, [83] shows that the Mallows, GCV, and CV procedures are asymptotically equivalent and all of them lead to optimal smoothing in the sense that
$$\frac{\int\big(\hat m(x,\hat q) - m(x)\big)^2dF(x)}{\inf_q\int\big(\hat m(x,q) - m(x)\big)^2dF(x)} \to_p 1$$
where $\hat m(x) = \hat m(x,\hat q)$, given $h$ and $d$, is the estimator of $m(x)$ with $\hat q$ obtained using one of the above procedures.
Also, [101] demonstrates that, for the local constant estimator ($d = 0$ and given $q$), the $CV = CV(h,q,0)$ smoothing selectors of $h$ are asymptotically equivalent to the GCV selectors. In an important paper, [92] shows the asymptotic normality of $\hat m(x) = \hat m(x,\hat h)$, where $\hat h$ is obtained by the CV method and $x_i$ is a vector of mixed continuous and discrete variables. Their extensive simulation results suggest (without a theoretical proof) that the AIC window-width selection criterion is asymptotically equivalent to the CV method, but that in small samples AIC tends to perform better than CV. Further, with respect to the comparison of NP and parametric models, their results explain the observations of [102], which finds that NP estimators with smoothing parameters $h$ chosen by CV can yield better predictions than commonly used parametric methods for datasets from several countries. Reference [85] shows that CV is optimal under heteroskedasticity. For GMM model selection, which involves selecting moment conditions, see [93]. Also, see [94] for using the minimization of empirical likelihood/KLIC, and the comments by [95] claiming a fundamental flaw in that application of KLIC.

3.2. NP Model Averaging

Let us consider $\hat m_l$, $l = 1,\ldots,M$, to be the set of estimators of $m$ corresponding to the different sets of regressors considered in model selection. Then
$$\hat m(w) = \sum_{l=1}^{M}w_l\hat m_l = P(w)y$$
where $\hat m_l = P_ly$, $P(w) = \sum_{l=1}^{M}w_lP_l$, and $P_l$ is the $P$ matrix, as defined before, based on the variables in the $l$-th model. Then the choice of $w$ can be determined by applying the Mallows criterion (see Section 2.2.2),
$$C(w) = w'\hat U'\hat U w + 2\sigma^2q^{*\prime}w$$
where $q^* = (\mathrm{tr}P(q_1),\ldots,\mathrm{tr}P(q_M))'$ and $\hat U = (\hat u_1,\ldots,\hat u_M)$ is the matrix of NP residuals of all the models. Thus we get $\hat m(\hat w) = \sum_{l=1}^{M}\hat w_l\hat m_l$.
Similarly, as in Section 2.2.3, if we compute $\tilde m_l$ by leave-one-out deletion, then $w$ can be determined by minimizing
$$CV(w) = \frac{1}{n}w'\tilde U'\tilde U w$$
in which the NP residual matrix is $\tilde U = (\tilde u_1,\ldots,\tilde u_M)$ with $\tilde u_l = (\tilde u_{1l},\ldots,\tilde u_{nl})'$, where $\tilde u_{il}$ is computed with the $i$-th observation deleted.
For a fixed window-width, the optimality of $\hat w$ can be shown to follow from [83]. However, for $h = \hat h$ the validity of Li’s result needs further investigation.

4. Conclusions

Nonparametric and parametric models are widely studied in econometric theory and practice. In all applications, an important issue is to reduce model uncertainty by using model selection or model averaging. This paper selectively reviews frequentist results on model selection and model averaging in the regression context.
Most of the results presented are under the i.i.d. assumption. It would be useful to relax this assumption to allow dependence or heterogeneity in the data, see [103] for model selection in dependent time series models using various CV procedures. A systematic study of the properties of estimators based on FMA is warranted. Further, results need to be developed for more complicated nonparametric models, e.g., panel data models and models where variables are endogenous, although for the parametric case see [104,105,106,107,108]. Also, the properties of NP model averaging estimators when the window-width in kernel regression is estimated remain to be developed, although readers can see [96] for NP results on estimators based on the sieve method.

Acknowledgements

The authors are thankful to L. Su, A. Wan, X. Zhang, and G. Zou for discussions and references on the subject matter of this paper. They are also grateful to the guest editor, Tomohiro Ando, and the anonymous referees for their constructive suggestions and comments. The first author is also thankful to the Academic Senate, UCR, for its financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. A. Pagan, and A. Ullah. Nonparametric Econometrics. Cambridge, UK: Cambridge University Press, 1999. [Google Scholar]
  2. Q. Li, and J.S. Racine. Nonparametric Econometrics: Theory and Practice. Princeton, NJ, USA: Princeton University Press, 2007. [Google Scholar]
  3. A. Belloni, and V. Chernozhukov. “L1-penalized quantile regression in high-dimensional sparse models.” Ann. Stat. 39 (2011): 82–130. [Google Scholar] [CrossRef]
  4. C. Zhang, J. Fan, and T. Yu. “Multiple testing via FDRL for large-scale imaging data.” Ann. Stat. 39 (2011): 613–642. [Google Scholar] [CrossRef] [PubMed]
  5. H. Akaike. “Information Theory and An Extension of the Maximum Likelihood Principle.” In International Symposium on Information Theory. Edited by B.N. Petrov and F. Csaki. New York, USA: Springer-Verlag, 1973, pp. 267–281. [Google Scholar]
  6. C.L. Mallows. “Some comments on Cp.” Technometrics 15 (1973): 661–675. [Google Scholar]
  7. G. Schwarz. “Estimating the dimension of a model.” Ann. Stat. 6 (1978): 461–464. [Google Scholar] [CrossRef]
  8. M. Stone. “Cross-validatory choice and assessment of statistical predictions.” J. R. Stat. Soc. 36 (1974): 111–147. [Google Scholar]
  9. P. Craven, and G. Wahba. “Smoothing noisy data with spline functions.” Numer. Math. 31 (1979): 377–403. [Google Scholar] [CrossRef]
  10. G. Claeskens, and N.L. Hjort. “The focused information criterion.” J. Am. Stat. Assoc. 98 (2003): 900–945. [Google Scholar] [CrossRef]
  11. I.E. Frank, and J.H. Friedman. “A statistical view of some chemometrics regression tools.” Technometrics 35 (1993): 109–135. [Google Scholar] [CrossRef]
  12. W. Fu, and K. Knight. “Asymptotics for lasso-type estimators.” Ann. Stat. 28 (2000): 1356–1378. [Google Scholar] [CrossRef]
  13. A.E. Hoerl, and R.W. Kennard. “Ridge regression: Biased estimation for nonorthogonal problems.” Technometrics 12 (1970): 55–67. [Google Scholar] [CrossRef]
  14. J. Fan, and J. Lv. “A selective overview of variable selection in high dimensional feature space.” Stat. Sin. 20 (2010): 101–148. [Google Scholar] [PubMed]
  15. P. Bühlmann, and S. Van de Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. New York, NY, USA: Springer, 2011. [Google Scholar]
  16. G. Claeskens, and N.L. Hjort. Model Selection and Model Averaging. Cambridge, UK: Cambridge University Press, 2008. [Google Scholar]
  17. D. Andrews, and B. Lu. “Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models.” J. Econom. 101 (2001): 123–164. [Google Scholar] [CrossRef]
  18. A.R. Hall, A. Inoue, K. Jana, and C. Shin. “Information in generalized method of moments estimation and entropy-based moment selection.” J. Econom. 138 (2007): 488–512. [Google Scholar] [CrossRef]
  19. B.M. Pötscher. “Effects of model selection on inference.” Econom. Theory 7 (1991): 163–185. [Google Scholar] [CrossRef]
  20. P. Kabaila. “The Effect of Model Selection on Confidence Regions and Prediction Regions.” Econom. Theory 11 (1995): 537–549. [Google Scholar] [CrossRef]
  21. P. Bühlmann. “Efficient and adaptive post-model-selection estimators.” J. Stat. Plan. Inference 79 (1999): 1–9. [Google Scholar] [CrossRef]
  22. H. Leeb, and B.M. Pötscher. “The finite-sample distribution of post-model-selection estimators and uniform versus nonuniform approximations.” Econom. Theory 19 (2003): 100–142. [Google Scholar] [CrossRef]
  23. H. Leeb, and B.M. Pötscher. “Can one estimate the conditional distribution of post-model-selection estimators? ” Ann. Stat. 34 (2006): 2554–2591. [Google Scholar] [CrossRef]
  24. L. Breiman. “Heuristics of instability and stabilization in model selection.” Ann. Stat. 24 (1996): 2350–2383. [Google Scholar] [CrossRef]
  25. S. Jin, L. Su, and A. Ullah. “Robustify financial time series forecasting.” Econom. Rev., 2013, in press. [Google Scholar] [CrossRef]
  26. J.F. Geweke. Contemporary Bayesian Econometrics and Statistics. Hoboken, NJ, USA: John Wiley and Sons Inc., 2005. [Google Scholar]
  27. J.F. Geweke. “Bayesian model comparison and validation.” Am. Econ. Rev. Pap. Proc. 97 (2007): 60–64. [Google Scholar] [CrossRef]
  28. D. Draper. “Assessment and propagation of model uncertainty.” J. R. Stat. Soc. 57 (1995): 45–97. [Google Scholar]
  29. J.A. Hoeting, D. Madigan, A.E. Raftery, and C.T. Volinsky. “Bayesian model averaging: A tutorial (with discussion).” Stat. Sci. 14 (1999): 382–417. [Google Scholar]
  30. M. Clyde, and E.I. George. “Model uncertainty.” Stat. Sci. 19 (2004): 81–94. [Google Scholar]
  31. W.A. Brock, S.N. Durlauf, and K.D. West. “Policy evaluation in uncertain economic environments.” Brook. Pap. Econ. Act. 2003 (2003): 235–301. [Google Scholar] [CrossRef]
  32. X. Sala-i-Martin, G. Doppelhofer, and R.I. Miller. “Determinants of long-term growth: A Bayesian Averaging of Classical Estimates (BACE) approach.” Am. Econ. Rev. 94 (2004): 813–835. [Google Scholar] [CrossRef]
  33. J.R. Magnus, O. Powell, and P. Prüfer. “A comparison of two model averaging techniques with an application to growth empirics.” J. Econom. 154 (2010): 139–153. [Google Scholar] [CrossRef]
  34. S.T. Buckland, K.P. Burnham, and N.H. Augustin. “Model selection: An integral part of inference.” Biometrics 53 (1997): 603–618. [Google Scholar] [CrossRef]
  35. Y. Yang. “Adaptive regression by mixing.” J. Am. Stat. Assoc. 96 (2001): 574–586. [Google Scholar] [CrossRef]
  36. K.P. Burnham, and D.R. Anderson. Model Selection and Multimodel Inference: A Practical Information-Theoretical Approach. New York, NY, USA: Springer-Verlag, 2002. [Google Scholar]
  37. G. Leung, and A.R. Barron. “Information theory and mixing least-squares regressions.” IEEE Trans. Inf. Theory 52 (2006): 3396–3410. [Google Scholar] [CrossRef]
  38. Z. Yuan, and Y. Yang. “Combining linear regression models: When and how? ” J. Bus. Econ. Stat. 100 (2005): 1202–1204. [Google Scholar] [CrossRef]
  39. B.E. Hansen. “Notes and comments least squares model averaging.” Econometrica 75 (2007): 1175–1189. [Google Scholar] [CrossRef]
  40. B.E. Hansen, and J. Racine. “Jackknife model averaging.” J. Econom. 167 (2012): 38–46. [Google Scholar] [CrossRef]
  41. A.T.K. Wan, X. Zhang, and G. Zou. “Least squares model averaging by mallows criterion.” J. Econom. 156 (2010): 277–283. [Google Scholar] [CrossRef]
  42. G. Kapetanios, V. Labhard, and S. Price. “Forecasting using predictive likelihood model averaging.” Econ. Lett. 91 (2006): 373–379. [Google Scholar] [CrossRef]
  43. A.T.K. Wan, and X. Zhang. “On the use of model averaging in tourism research.” Ann. Tour. Res. 36 (2009): 525–532. [Google Scholar] [CrossRef]
  44. J.M. Bates, and C.W. Granger. “The combination of forecasts.” Oper. Res. Q. 20 (1969): 451–468. [Google Scholar] [CrossRef]
  45. I. Olkin, and C.H. Speigelman. “A semiparametric approach to density estimation.” J. Am. Stat. Assoc. 82 (1987): 858–865. [Google Scholar] [CrossRef]
  46. Y. Fan, and A. Ullah. “Asymptotic normality of a combined regression estimator.” J. Multivar. Anal. 71 (1999): 191–240. [Google Scholar] [CrossRef]
  47. D.H. Wolpert. “Stacked generalization.” Neural Netw. 5 (1992): 241–259. [Google Scholar] [CrossRef]
  48. M. LeBlanc, and R. Tibshirani. “Combining estimates in regression and classification.” J. Am. Stat. Assoc. 91 (1996): 1641–1650. [Google Scholar] [CrossRef]
  49. Y. Yang. “Mixing strategies for density estimation.” Ann. Stat. 28 (2000): 75–87. [Google Scholar] [CrossRef]
  50. O. Catoni. The Mixture Approach to Universal Model Selection. Technical Report; Paris, France: Ecole Normale Superieure, 1997. [Google Scholar]
  51. M.I. Jordan, and R.A. Jacobs. “Hierarchical mixtures of experts and the EM algorithm.” Neural Comput. 6 (1994): 181–214. [Google Scholar] [CrossRef]
  52. X. Jiang, and M.A. Tanner. “On the asymptotic normality of hierarchical mixtures-of-experts for generalized linear models.” IEEE Trans. Inf. Theory 46 (2000): 1005–1013. [Google Scholar] [CrossRef]
  53. V.G. Vovk. “Aggregating Strategies.” In Proceedings of the 3rd Annual Workshop on Computational Learning Theory, Rochester, NY, USA, 06–08 August 1990; Volume 56, pp. 371–383.
  54. V.G. Vovk. “A game of prediction with expert advice.” J. Comput. Syst. Sci. 56 (1998): 153–173. [Google Scholar] [CrossRef]
  55. N. Merhav, and M. Feder. “Universal prediction.” IEEE Trans. Inf. Theory 44 (1998): 2124–2147. [Google Scholar] [CrossRef]
  56. A. Ullah. “Nonparametric estimation of econometric functionals.” Can. J. Econ. 21 (1988): 625–658. [Google Scholar] [CrossRef]
  57. J. Fan, and I. Gijbels. Local Polynomial Modelling and Its Applications. London, UK: Chapman and Hall, 1996. [Google Scholar]
  58. R.L. Eubank. Nonparametric Regression and Spline Smoothing. New York, NY, USA: CRC Press, 1999. [Google Scholar]
  59. S. Geman, and C. Hwang. “Diffusions for global optimization.” SIAM J. Control Optim. 24 (1982): 1031–1043. [Google Scholar] [CrossRef]
  60. W.K. Newey. “Convergence rates and asymptotic normality for series estimators.” J. Econom. 79 (1997): 147–168. [Google Scholar] [CrossRef]
  61. H. Wang, X. Zhang, and G. Zou. “Frequentist model averaging estimation: A review.” J. Syst. Sci. Complex. 22 (2009): 732–748. [Google Scholar] [CrossRef]
  62. L. Su, and Y. Zhang. “Variable Selection in Nonparametric and Semiparametric Regression Models.” In Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics. Edited by A. Ullah, J. Racine and L. Su. Oxford, UK: Oxford University Press, 2013, in press. [Google Scholar]
  63. A.K. Srivastava, V.K. Srivastava, and A. Ullah. “The coefficient of determination and its adjusted version in linear regression models.” Econom. Rev. 14 (1995): 229–240. [Google Scholar] [CrossRef]
  64. V. Rousson, and N.F. Gosoniu. “An R-square coefficient based on final prediction error.” Stat. Methodol. 4 (2007): 331–340. [Google Scholar] [CrossRef]
  65. Y. Wang. On Efficiency Properties of An R-square Coefficient Based on Final Prediction Error. Working Paper; Beijing, China: School of International Trade and Economics, University of International Business and Economics, 2013. [Google Scholar]
  66. K. Takeuchi. “Distribution of information statistics and criteria for adequacy of models.” Math. Sci. 153 (1976): 12–18 (in Japanese). [Google Scholar]
  67. E. Maasoumi. “A compendium to information theory in economics and econometrics.” Econom. Rev. 12 (1993): 137–181. [Google Scholar] [CrossRef]
  68. A. Ullah. “Entropy, divergence and distance measures with econometric applications.” J. Stat. Plan. Inference 49 (1996): 137–162. [Google Scholar] [CrossRef]
  69. R. Nishi. “Asymptotic properties of criteria for selection of variables in multiple regression.” Ann. Stat. 12 (1984): 758–765. [Google Scholar] [CrossRef]
  70. E.J. Hannan, and B.G. Quinn. “The determination of the order of an autoregression.” J. R. Stat. Soc. 41 (1979): 190–195. [Google Scholar]
  71. C.M. Hurvich, and C.L. Tsai. “Regression and time series model selection in small samples.” Biometrika 76 (1989): 297–307. [Google Scholar] [CrossRef]
  72. J. Kuha. “AIC and BIC: Comparisons of assumptions and performance.” Sociol. Methods Res. 33 (2004): 188–229. [Google Scholar] [CrossRef]
  73. M. Stone. “An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion.” J. R. Stat. Soc. 39 (1977): 44–47. [Google Scholar]
  74. M. Stone. “Comments on model selection criteria of Akaike and Schwarz.” J. R. Stat. Soc. 41 (1979): 276–278. [Google Scholar]
  75. G.S. Maddala. Introduction to Econometrics. New York, NY, USA: Macmillan, 1988. [Google Scholar]
  76. R. Tibshirani. “Regression shrinkage and selection via the lasso.” J. R. Stat. Soc. 58 (1996): 267–288. [Google Scholar]
  77. A. Ullah, A.T.K. Wan, H. Wang, X. Zhang, and G. Zou. A Semiparametric Generalized Ridge Estimator and Link with Model Averaging. Working Paper; Riverside, CA, USA: Department of Economics, University of California, 2013. [Google Scholar]
  78. H. Zou. “The adaptive lasso and its oracle properties.” J. Am. Stat. Assoc. 101 (2006): 1418–1429. [Google Scholar] [CrossRef]
  79. C. Zhang. “Nearly unbiased variable selection under minimax concave penalty.” Ann. Stat. 38 (2010): 894–942. [Google Scholar] [CrossRef]
  80. J. Fan, and R. Li. “Variable selection via nonconcave penalized likelihood and its oracle properties.” J. Am. Stat. Assoc. 96 (2001): 1348–1360. [Google Scholar] [CrossRef]
  81. B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. “Least angle regression.” Ann. Stat. 32 (2004): 407–499. [Google Scholar]
  82. X. Zhang, A.T.K. Wan, and S.Z. Zhou. “Focused information criteria, model selection, and model averaging in a tobit model with a nonzero threshold.” J. Bus. Econ. Stat. 30 (2012): 132–143. [Google Scholar] [CrossRef]
  83. K.C. Li. “Asymptotic optimality for Cp, CL, cross-validation and generalized cross-validation: discrete index set.” Ann. Stat. 15 (1987): 958–975. [Google Scholar] [CrossRef]
  84. B. Hansen. “Least-squares forecast averaging.” J. Econom. 146 (2008): 342–350. [Google Scholar] [CrossRef]
  85. D.W.K. Andrews. “Asymptotic optimality of generalized CL, cross-validation, and generalized cross-validation in regression with heteroskedastic errors.” J. Econom. 47 (1991): 359–377. [Google Scholar] [CrossRef]
  86. X. Lu, and L. Su. Jackknife Model Averaging for Quantile Regressions. Working Paper; Singapore: School of Economics, Singapore Management University, 2012. [Google Scholar]
  87. G. Kuersteiner, and R. Okui. “Constructing optimal instruments by first-stage prediction averaging.” Econometrica 78 (2010): 697–718. [Google Scholar]
  88. F. Yao, and A. Ullah. “A nonparametric R2 test for the presence of relevant variables.” J. Stat. Plan. Inference 143 (2013): 1527–1547. [Google Scholar] [CrossRef]
  89. L. Su, and A. Ullah. “A nonparametric goodness-of-fit-based test for conditional heteroskedasticity.” Econom. Theory 29 (2013): 187–212. [Google Scholar] [CrossRef]
  90. L.H. Huang, and J. Chen. “Analysis of variance, coefficient of determination and F-test for local polynomial regression.” Ann. Stat. 36 (2008): 2085–2109. [Google Scholar] [CrossRef]
  91. C. Hurvich, J. Simonoff, and C. Tsai. “Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion.” J. R. Stat. Soc. 60 (1998): 271–293. [Google Scholar] [CrossRef]
  92. J. Racine, and Q. Li. “Nonparametric estimation of regression functions with both categorical and continuous data.” J. Econom. 119 (2004): 99–130. [Google Scholar] [CrossRef]
  93. D.W.K. Andrews. “Consistent moment selection procedures for generalized method of moments estimation.” Econometrica 67 (1999): 543–564. [Google Scholar] [CrossRef]
  94. X. Chen, H. Hong, and M. Shum. “Nonparametric likelihood ratio model selection tests between parametric likelihood and moment condition models.” J. Econom. 141 (2007): 109–140. [Google Scholar] [CrossRef]
  95. S.M. Schennach. “Instrumental variable estimation of nonlinear errors-in-variables models.” Econometrica 75 (2007): 201–239. [Google Scholar] [CrossRef]
  96. B. Hansen. Nonparametric Sieve Regression: Least Squares, Averaging Least Squares, and Cross-Validation. Working Paper; Madison, WI, USA: University of Wisconsin, 2012. [Google Scholar]
  97. H. Liang, G. Zou, A.T.K. Wan, and X. Zhang. “Optimal weight choice for frequentist model average estimators.” J. Am. Stat. Assoc. 106 (2011): 1053–1066. [Google Scholar] [CrossRef]
  98. E.A. Nadaraya. “Some new estimates for distribution functions.” Theory Probab. Its Appl. 9 (1964): 497–500. [Google Scholar] [CrossRef]
  99. G.S. Watson. “Smooth regression analysis.” Sankhya Ser. A 26 (1964): 359–372. [Google Scholar]
  100. P.G. Hall, and J.S. Racine. Infinite Order Cross-validated Local Polynomial Regression. Working Paper; Ontario, Canada: Department of Economics, McMaster University, 2013. [Google Scholar]
  101. W. Härdle, P. Hall, and J.S. Marron. “How far are automatically chosen regression smoothing parameters from their optimum?” J. Am. Stat. Assoc. 83 (1988): 86–99. [Google Scholar] [CrossRef]
  102. Q. Li, and J. Racine. Empirical Applications of Smoothing Categorical Variables. Working Paper; Ontario, Canada: Department of Economics, McMaster University, 2001. [Google Scholar]
  103. J. Racine. “Consistent cross-validatory model-selection for dependent data: Hv-block cross-validation.” J. Econom. 99 (2000): 39–61. [Google Scholar] [CrossRef]
  104. M. Caner. “A lasso type GMM estimator.” Econom. Theory 25 (2009): 270–290. [Google Scholar] [CrossRef]
  105. M. Caner, and M. Fan. A Near Minimax Risk Bound: Adaptive Lasso with Heteroskedastic Data in Instrumental Variable Selection. Working Paper; Raleigh, NC, USA: North Carolina State University, 2011. [Google Scholar]
  106. P.E. Garcia. Instrumental Variable Estimation and Selection with Many Weak and Irrelevant Instruments. Working Paper; Madison, WI, USA: University of Wisconsin, 2011. [Google Scholar]
  107. Z. Liao. “Adaptive GMM shrinkage estimation with consistent moment selection.” Econom. Theory FirstView (2013): 1–48. [Google Scholar] [CrossRef]
  108. E. Gautier, and A. Tsybakov. High-Dimensional Instrumental Variables Regression and Confidence Sets. Working Paper; Malakoff Cedex, France: Centre de Recherche en Economie et Statistique, 2011. [Google Scholar]
