Article

Forecast Combination under Heavy-Tailed Errors

School of Statistics, University of Minnesota at Twin Cities, 313 Ford Hall, 224 Church Street SE, Minneapolis, MN 55455, USA
* Author to whom correspondence should be addressed.
Econometrics 2015, 3(4), 797-824; https://doi.org/10.3390/econometrics3040797
Submission received: 22 August 2015 / Revised: 9 November 2015 / Accepted: 10 November 2015 / Published: 23 November 2015
(This article belongs to the Special Issue Nonparametric Methods in Econometrics)

Abstract

Forecast combination has been proven to be a very important technique for obtaining accurate predictions in various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as the simple average, least squares regression or those based on the variance-covariance of the forecasts, may perform very poorly because outliers tend to occur and destabilize the combination weights, leading to non-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially proposed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student’s t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds of both methods are developed. They show that the resulting combined forecasts yield near optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance: they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.

1. Introduction

When multiple forecasts are available for a target variable, well-designed forecast combination methods can often outperform the best individual forecaster, as has been demonstrated over the last fifty years in applications of forecast combination in areas such as economics, finance, tourism and wind power generation.
Many combination methods have been proposed from different perspectives since the seminal work of forecast combination by Bates & Granger [1]. See the discussions and summaries in Clemen [2], Newbold & Harvey [3] and Timmermann [4] for key developments and many references. More recently, Lahiri et al. [5] provide theoretical and numerical comparisons between adaptive and simple forecast combination methods; Armstrong, Green and Graefe [6] propose important principles to follow, centered on the golden rule of being conservative, for building accurate forecasts, and verify them empirically based on an examination of previously-published studies; Green & Armstrong [7] review studies that compare simple and complicated methods and conclude that complexity actually substantially increases forecast error. They advocate the use of sophisticatedly simple methods instead of complicated ones that are hard to understand. This is in line with the fact that complicated methods often incur unnecessarily larger instability and variability in prediction (see, e.g., Subsection 3.1 of Yang [8]). While it seems clear that researchers agree that forecast combination is very useful, they differ in their opinions on how to do forecast combination properly. Needless to say, there are many possibly drastically different scenarios one can envision for the problem of forecast combination in terms of the accuracy of the candidate forecasts, their relationships, the structure changes, the characteristics of the forecast errors and more, which naturally favor different methods to be top performers. Therefore, the availability of many combination methods and disputes on their rankings and merits, in our view, are not only expected, but also helpful to collectively reach a better understanding of the key issues in the research area by further rigorous theoretical and empirical investigations.
The present work concerns forecast combination when the forecast errors exhibit heavy-tailed behaviors, which means that the decay of the probability density function (or an estimate) of the forecast errors is much slower than that of the normal distribution. To our knowledge, few studies have proposed/discussed forecast combination methods that target such situations, where the familiar forecast combination methods, such as simple average, least squares regression with or without constraints or those based on the variance-covariance of the forecasts, may perform very poorly (some numerical examples are provided in Section 4 and Section 5 in this paper).
Heavy-tailed behaviors of forecast errors may come from different sources. First, many important variables in finance, economics and other areas are known to have heavy tails. For example, currency exchange rates have long been believed to exhibit heavy-tailed behaviors, and Marinelli et al. [9], for instance, discussed the evidence for using heavy-tailed distributions to model them. Some key macroeconomic indices, such as GDP, are also believed to have heavy-tailed tendencies, and Harvey [10], for instance, modeled U.S. GDP with Student’s t-distributions with low degrees of freedom. The heavy tails of the variables to be forecast naturally tend to cause heavy-tailed behaviors of the forecast errors. Second, even if the target variables themselves have light tails, the variables in the information set may have long tails for various reasons, which can induce heavy tails in the forecast errors. Third, for a difficult target variable, we may also observe heavy-tailed forecast errors from predictive models when the data available for model training are limited, even if the true data-generating processes have relatively normal tails.
Clearly, when some of the forecast errors of the candidate forecasters are unusually large, if a forecast combination method does not take it into consideration, the final forecast may even fully inherit large prediction errors, which may then have severe practical consequences on decisions based on the forecast. Therefore, it is crucial to devise combination methods that can deal with heavy-tailed forecast errors for robust and reliable final performances. In the rest of the work, for convenience, heavy-tailed distributions may sometimes loosely refer to distributions with tails heavier than Gaussian distributions, although specific choices, such as scaled t-distributions, will be studied.
In this paper, we propose two forecast combination methods. One is specially designed for situations when there is strong evidence that the forecast errors are heavy tailed and can be modeled by a scaled Student’s t-distribution (see below). The other is designed for more general uses. The design of these two methods follows the spirit of the adaptive forecasting through exponential re-weighting (AFTER) combination scheme of Yang [8]. The idea of the AFTER scheme is that the exponentiated cumulative historical performances of the candidate forecasts are informative and can be used to assign their combination weights for the future. This way of using the historical performances of the candidate forecasts for weighting has a natural tie to information theory and provides near optimal final performance in mean forecast errors. For example, if the random errors in the true model are from a normal distribution, then the weight of a candidate forecast by AFTER is proportional to $\exp(-L_2)$, where $L_2$ is the cumulative historical mean squared forecast error of that forecast. For the first method mentioned above, we assume that the forecast errors follow a scaled Student’s t-distribution with a possibly unknown scale parameter and degrees of freedom. Note that if a random variable $X$ satisfies $X/s \sim t_\nu$ for some $s > 0$, where $t_\nu$ is a standard t-distribution with degrees of freedom ν, we say $X$ has a $t_\nu$ distribution with scale parameter $s$. For situations when the identification of the heaviness of the tails of the forecast errors is not feasible, normal, double-exponential and scaled Student’s t-distributions are considered at the same time as candidates for the distribution form of the forecast errors for the second method. In either case, no parametric assumptions are needed on the relationships of the candidate forecasts.
Technically, if the forecast errors are assumed to follow a normal or a double-exponential distribution with zero mean, then the conditional probability density functions used in the combining process of the AFTER scheme can be estimated relatively easily for all of the candidate forecasters, because the estimation of the conditional scale parameters is straightforward (see, e.g., Zou & Yang [11] and Wei & Yang [12] for more details). However, this is not true if a scaled t-distribution is assumed. Among the literature discussing maximum likelihood parameter estimation in Student’s t-regressions over the last few decades, Fernandez & Steel [13] and Fonseca et al. [14] provided comprehensive summaries of the convergence properties of the parameter estimates in different situations. Both showed that estimating the degrees of freedom and the scale parameter simultaneously in a scaled Student’s t-regression model suffers from an unbounded likelihood: the likelihood goes to infinity as the scale parameter goes to zero if the degrees of freedom ν are not large enough. To deal with this difficulty, methods other than maximum likelihood estimation have been proposed in the literature. For example, one may fix the degrees of freedom first and then estimate the scale parameter using the method of moments or other tools (see, e.g., Kan & Zhou [15]).
We follow a two-step procedure to estimate the density function given a forecast error sequence. First, estimate the scale parameter for each element in a given candidate pool of degrees of freedom; each combination of degrees of freedom and scale parameter leads to a different estimate of the density function. Second, assign each density estimate a weight based on its relative historical performance. The final density estimate is a mixture of all of the candidate density estimates using these weights. More details about this procedure, including how to determine the pool of candidates, are given in Section 2. There are three major advantages of this procedure: First, because a pool of degrees of freedom (rather than a single candidate) is considered, it reduces the risk of picking degrees of freedom that are far from the truth. Second, how likely each candidate density estimate is to be the best is decided purely by the data. Third, the calculation of the combined estimator is easy and fast.
It is worth pointing out that some popular combination methods in the literature make assumptions on the distributions of forecast errors that do not necessarily exclude heavy-tailed behaviors. For example, methods that are based on the estimation of the variance-covariance of forecasters require the existence of variances. Regression-based forecast combination methods (see, e.g., Granger and Ramanathan [16]) assume the existence of certain moments of the forecast errors. However, to our knowledge, these methods are not really designed to handle heavy-tailed errors and are not expected to work well for such situations.
Prior to our work, efforts have been made to deal with error distributions that have tails heavier than normal by adaptive forecast combination methods. For example, Sancetta [17] assumed that the tails of the target variables are no heavier than exponential decays, which restricts the heaviness of the tails of the forecast errors. Wei & Yang [12] designed a method for errors heavier than the normal distributions, but not heavier than the double-exponential distributions. More recently, Cheng & Yang [18] advocate the incorporation of a smooth surrogate of the $L_0$-loss in the performance measure for weighting to reduce the occurrence of outlier forecasts. However, none of these methods can deal with forecast errors with tails as heavy as those of Student’s t-distributions. The new AFTER methods in this paper will be shown to handle such situations.
The performance of the proposed methods will be examined via simulations and a real data example. We consider two simulation settings, depending on whether the data-generating processes come from regression models or time series models. Several error distributions are used, and they have different degrees of heavy tails. The new methods are compared to earlier versions of AFTER, as well as some popular combination methods. Their performances in heavy-tailed situations are indeed better than those of the competitors and are still among or close to the best, even if the forecast errors have normal tails. For a real data application, we use 1428 time series variables from the M3-competition data (see Makridakis & Hibon [19]). The M3-competition data are very popular in empirical studies in econometrics, machine learning and statistics to validate the performances of forecasting methods. For each of the variables in this dataset, forecast sequences based on 24 popular forecast methods are provided. The overall evaluation on the 1428 variables shows that our proposed methods, especially the one for general purposes, compare favorably to others. To gain more insight, we pick out a subset of the 1428 variables that have heavy-tailed forecast errors, and it is seen that the new methods behave nicely, as intended.
The plan of the paper is as follows: Section 2 introduces the forecast combination method designed for heavy-tailed error distributions. In Section 3, a more general combination method is proposed. Simulations are presented in Section 4, and Section 5 provides a real data example. Section 6 includes a brief concluding discussion. The proofs of the theoretical results are in the Appendix.

2. The t-AFTER Methodology

In this section, we propose a forecast combination method when there is strong evidence that the random errors in the data-generating process are heavy tailed and can be modeled by a scaled Student’s t-distribution.

2.1. Problem Setting

Suppose that at each time period $i \ge 1$ there are $J$ forecasters available for predicting $y_i$ and that the forecast combination starts at $i_0 \ge 1$. Note that some combination methods may require $i_0$ to be large enough, e.g., 10, to give reasonably accurate combinations. Let $\hat{y}_{i,j}$ be the forecast of $y_i$ from the $j$-th forecaster, and let $\hat{Y}_i := (\hat{y}_{i,1}, \ldots, \hat{y}_{i,J})$ be the vector of candidate forecasts for $y_i$ made at time point $i-1$.
Suppose $y_i := m_i + \epsilon_i$, where $m_i$ is the conditional mean of $y_i$ given all available information prior to observing $y_i$ and $\epsilon_i$ is the random error at time $i$. Assume $\epsilon_i$ is from a distribution with probability density function (pdf) $\frac{1}{s_i} h\!\left(\frac{x}{s_i}\right)$, where $s_i$ is the scale parameter that depends on the data before observing $y_i$ and $h(\cdot)$ is a pdf with mean zero and scale parameter one.
Let $W_i := (W_{i,1}, \ldots, W_{i,J})$ be a vector of combination weights for $\hat{Y}_i$. It is assumed that $\sum_{j=1}^J W_{i,j} = 1$ and $W_{i,j} \ge 0$ for any $i \ge i_0$, $1 \le j \le J$. Let $W_{i_0} = (w_1, \ldots, w_J)$ be the initial weight vector. The combined forecast for $y_i$ from a combination method is:
\[ \hat{y}_i = \langle \hat{Y}_i, W_i \rangle, \tag{1} \]
where $\langle a, b \rangle$ stands for the inner product of the vectors $a$ and $b$. Specifically, when needed, we use a superscript δ on each $W_i$ to denote the combination weights that correspond to the method δ. For example, in the following sections, $W_i^{A2}$ and $W_i^{A1}$ stand for the combination weights from the $L_2$- and $L_1$-AFTER methods, respectively.

2.2. The Existing AFTER Methods: The L 2 - and L 1 -AFTER Methods

As one recent approach to adaptive forecast combination, the general scheme of adaptive forecasting through exponential re-weighting (AFTER) was proposed by Yang [8]. It has been applied and studied in, e.g., Fonseca et al. [14], Inoue & Kilian [20], Sanchez [21], Altavilla & De Grauwe [22] and Lahiri et al. [5]; Zhang et al. [23] handled the case in which the variable to be predicted is categorical.
In the general AFTER formulation, the relative cumulative predictive accuracies of the forecasters are used to decide their combination weights. Let $\|x\|_1 := \sum_{i=1}^n |x_i|$ be the $l_1$-norm of a vector $x = (x_1, \ldots, x_n)$.
The general form of $W_i$ for the AFTER approach is:
\[ W_i = \frac{l_{i-1}}{\|l_{i-1}\|_1}, \tag{2} \]
where $l_{i-1} = (l_{i-1,1}, \ldots, l_{i-1,J})$, and for any $1 \le j \le J$,
\[ l_{i-1,j} = w_j \prod_{i'=i_0}^{i-1} \frac{1}{\hat{s}_{i',j}} h\!\left(\frac{y_{i'} - \hat{y}_{i',j}}{\hat{s}_{i',j}}\right), \tag{3} \]
where $\hat{s}_{i,j}$ is an estimate of $s_i$ from the $j$-th forecaster at time point $i-1$.
Below, the most commonly-used AFTER procedures, the L 2 -AFTER from Zou & Yang [11] and the L 1 -AFTER from Wei & Yang [12], are briefly introduced.
$L_2$-AFTER: When the random errors in the data-generating process follow a normal distribution or a distribution close to a normal distribution, the $L_2$-AFTER is both theoretically and empirically competitive: in any performance evaluation period, its combined forecasts perform at least as well as any individual forecaster, up to a small penalty. Let $f_N$ be the pdf of $N(0,1)$. To get $W_i^{A2}$, first use $f_N$ as the $h$ in (3), then plug the new $l_{i-1}$ into (2). The $\hat{s}_{i,j}$ used in the $L_2$-AFTER, denoted $\hat{\sigma}_{i,j}$, is the sample standard deviation of $\{y_{i'} - \hat{y}_{i',j}\}_{i'=1}^{i-1}$, assuming the random errors are independent and identically distributed.
$L_1$-AFTER: Let $f_{DE}$ be the pdf of a double-exponential distribution with scale parameter one and location parameter zero. To get $W_i^{A1}$, one can follow the same procedure as for $W_i^{A2}$, but use $f_{DE}$ as the $h$ in (3). The $\hat{s}_{i,j}$ used in the $L_1$-AFTER, denoted $\hat{d}_{i,j}$, is the mean of $\{|y_{i'} - \hat{y}_{i',j}|\}_{i'=1}^{i-1}$. The $L_1$-AFTER method was designed for robust combination when the random errors have occasional outliers. See Wei and Yang [12] for details.
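To make the weighting scheme concrete, here is a minimal sketch (in Python; not the authors' code) of the exponential re-weighting in (2)-(3) for the $L_2$- and $L_1$-AFTER. The function name, the equal initial weights and the simple scale estimates based on i.i.d. errors are illustrative choices on our part.

    import numpy as np
    from scipy import stats

    def after_weights(y, yhat, family="normal", w0=None):
        """Sketch of the exponential re-weighting in (2)-(3).

        y      : length-n array of realized values y_1, ..., y_n
        yhat   : (n, J) array; yhat[i, j] is forecaster j's forecast of y[i]
        family : "normal" for the L2-AFTER, "laplace" for the L1-AFTER
        Returns an (n, J) array whose row i holds the weights for period i + 1.
        """
        y, yhat = np.asarray(y, float), np.asarray(yhat, float)
        n, J = yhat.shape
        w = np.full(J, 1.0 / J) if w0 is None else np.asarray(w0, float)
        log_l = np.log(w)                       # log of l_{.,j}, kept in logs for stability
        weights = np.empty((n, J))
        for i in range(n):
            past = y[:i, None] - yhat[:i]       # forecast errors strictly before time i
            if family == "normal":              # L2-AFTER: sample SD as the scale estimate
                s = past.std(axis=0, ddof=1) if i >= 2 else np.ones(J)
            else:                               # L1-AFTER: mean absolute error as the scale
                s = np.abs(past).mean(axis=0) if i >= 1 else np.ones(J)
            s = np.maximum(s, 1e-8)             # guard against a zero scale estimate
            e = y[i] - yhat[i]                  # forecast errors at time i
            if family == "normal":
                log_l += stats.norm.logpdf(e, scale=s)
            else:
                log_l += stats.laplace.logpdf(e, scale=s)
            weights[i] = np.exp(log_l - log_l.max())
            weights[i] /= weights[i].sum()      # W = l / ||l||_1
        return weights

The combined forecast for period i + 1 is then the inner product of weights[i] with the next vector of candidate forecasts, as in (1).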

2.3. The t-AFTER Methods

Since the estimation of the degrees of freedom and the scale parameter simultaneously in a scaled Student’s t-regression setting suffers from certain theoretical difficulties, as mentioned in the Introduction, we use a different strategy in this paper. Specifically, we take an estimation procedure that has two steps:
  • We choose a pool of K candidate degrees of freedom. The elements of the pool are meant to be close to the degrees of freedom of the Student’s t-distribution that describes the random errors well. For each element of the pool, we treat it as the true degrees of freedom and estimate the corresponding scale parameter. This gives K estimated pairs of degrees of freedom and scale parameter.
  • For each of the K estimated pairs, we assess its probability of being the true one based on the relative historical performances.
This two-step procedure is used in the t-AFTER method for forecast combination when the random errors have heavy tails that can be described well by a Student’s t-distribution.
Let $\Omega := \{\nu_1, \ldots, \nu_K\}$ be a set of degrees of freedom for Student’s t-distributions. The choice of $\Omega$ will be discussed later in this subsection. Let $w_{j,k}$ (with $w_{j,k} \ge 0$ and $\sum_{k=1}^K \sum_{j=1}^J w_{j,k} = 1$) be the initial combination weight of forecaster $j$ under the degrees of freedom $\nu_k$.
Let the combination weight of $\hat{Y}_i$ from a t-AFTER method be $W_i^{At}$ and the combined forecast be $\hat{y}_i^{At}$. Then, $W_i^{At}$ and $\hat{y}_i^{At}$ are obtained via the following two steps (a small sketch follows the list):
  • Estimate (e.g., by MLE) $s_i$ for each $\nu_k \in \Omega$ and for each candidate forecaster. The estimate of $s_i$ from the $j$-th forecaster given $\nu_k$ is denoted $\hat{s}_{i,j,k}$.
  • Calculate $W_i^{At}$ and $\hat{y}_i^{At}$:
    \[ W_i^{At} = \frac{l_{i-1}^{At}}{\|l_{i-1}^{At}\|_1}, \qquad \hat{y}_i^{At} = \langle \hat{Y}_i, W_i^{At} \rangle, \tag{4} \]
    where $l_{i-1}^{At} = (l_{i-1,1}^{At}, \ldots, l_{i-1,J}^{At})$ and, for $1 \le j \le J$ and any $i \ge i_0 + 1$,
    \[ l_{i-1,j}^{At} = \sum_{k=1}^K l_{i-1,j,k}^{At} \quad \text{with} \quad l_{i-1,j,k}^{At} = w_{j,k} \prod_{i'=i_0}^{i-1} \frac{1}{\hat{s}_{i',j,k}} f_t\!\left(\frac{y_{i'} - \hat{y}_{i',j}}{\hat{s}_{i',j,k}} \,\Big|\, \nu_k\right), \tag{5} \]
    where $f_t(\cdot \mid \nu)$ is the pdf of a Student’s t-distribution with degrees of freedom ν.
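For illustration only (a sketch under the above description, not the authors' implementation), the t-AFTER weights in (4)-(5) can be accumulated as follows, with the scale estimates $\hat{s}_{i,j,k}$ supplied as an array and equal initial weights $w_{j,k} = 1/(JK)$ assumed:

    import numpy as np
    from scipy import stats

    def t_after_weights(y, yhat, s_hat, dof_pool=(1, 3)):
        """Sketch of the t-AFTER weights in (4)-(5).

        y        : length-n array of realized values
        yhat     : (n, J) array of candidate forecasts
        s_hat    : (n, J, K) array; s_hat[i, j, k] is the scale estimate for forecaster j
                   under degrees of freedom dof_pool[k], computed from data before time i
        Returns an (n, J) array whose row i holds the combination weights for period i + 1.
        """
        y, yhat, s_hat = np.asarray(y, float), np.asarray(yhat, float), np.asarray(s_hat, float)
        n, J = yhat.shape
        K = len(dof_pool)
        log_l = np.full((J, K), -np.log(J * K))       # log of the initial weights w_{j,k}
        weights = np.empty((n, J))
        for i in range(n):
            e = y[i] - yhat[i]                        # forecast errors at time i, shape (J,)
            for k, nu in enumerate(dof_pool):
                # accumulate log of (1/s) f_t((y_i - yhat_{i,j}) / s | nu_k)
                log_l[:, k] += stats.t.logpdf(e, df=nu, scale=s_hat[i, :, k])
            lj = np.logaddexp.reduce(log_l, axis=1)   # l^{At}_j = sum_k l^{At}_{j,k}, in logs
            weights[i] = np.exp(lj - lj.max())
            weights[i] /= weights[i].sum()            # W^{At} = l^{At} / ||l^{At}||_1
        return weights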
It is assumed that the elements of Ω are natural numbers for the sake of convenience. In general, when no specific information is available to narrow down the candidate degrees of freedom, one can start with a large but relatively sparse pool (say, $\{1, 3, 5, 8, 12, 15, 20, 30\}$) and then narrow it down based on performance on some training datasets. When there is strong evidence that the tails of the forecast errors are heavy, the size of Ω can be relatively small, say no more than three or five. In this situation, in our experience, $\Omega = \{1, 3\}$ or $\{1, 3, 5\}$ works well.
Obviously, when the random errors in the true model follow a scaled Student’s t-distribution with a known degree of freedom ν, then $\Omega := \{\nu\}$, and (5) simplifies to:
\[ l_{i-1,j}^{At} = w_j \prod_{i'=i_0}^{i-1} \frac{1}{\hat{s}_{i',j}} f_t\!\left(\frac{y_{i'} - \hat{y}_{i',j}}{\hat{s}_{i',j}} \,\Big|\, \nu\right), \tag{6} \]
where $w_j$ is the initial weight of the $j$-th forecaster and $\hat{s}_{i,j}$ is an estimate of $s_i$ from the $j$-th forecaster using all information at and before time point $i-1$ when the true ν is known.

2.4. Risk Bounds of the t-AFTER

To avoid potential redundancy, we first give a risk bound for the t-AFTER assuming ν is known. A more general theorem that treats ν (and even the form of the error distribution) as unknown will be given in Section 3 (the third remark after Theorem 2).

2.4.1. Conditions

Condition 1: There exists a constant $\tau > 0$, such that for any $i \ge i_0$,
\[ \Pr\!\left( \sup_{1 \le j \le J} |\hat{y}_{i,j} - m_i| / s_i \le \tau \right) = 1. \]
Condition 2: There exists a constant $\xi_1 > 0$, such that for any $i \ge i_0$ and $1 \le j \le J$:
\[ \Pr\!\left( \frac{\hat{s}_{i,j}}{s_i} \ge \xi_1 \right) = 1. \]
Condition 2′: There exists a constant $0 < \xi_1' < 1$, such that for any $i \ge i_0$ and $1 \le j \le J$:
\[ \Pr\!\left( \xi_1' \le \frac{\hat{s}_{i,j}}{s_i} \le \frac{1}{\xi_1'} \right) = 1. \]
Condition 1 holds when the forecast errors are bounded, which is true in many real applications, although it excludes some time series models, such as AR(1). It is required for the development of the theorems in this paper. Note that this condition does not require $y_i$ to be bounded, so it allows large outliers to occur in the random errors. When the conditional mean of $y_i$ is known to stay in a certain range and the related forecasts are relatively restricted, the condition holds. See Subsection 3.1 of Wei & Yang [12] for more discussion of this condition.
Condition 2 generally requires that the estimates of the scale parameters are not too small compared to the truth. Condition 2′ requires that they are not too far from the truth in either direction.

2.4.2. Risk Bounds for the t-AFTER with a Known ν

Assume the true forecast errors follow a scaled Student’s t-distribution with a known degree of freedom ν. Let σ i and s i be the conditional standard deviation and scale parameter, respectively, of ϵ i at time point i, and let s ^ i , j be an estimator of s i from the j-th forecaster.
Let $q_i = \frac{1}{s_i} f_t\!\left(\frac{y_i - m_i}{s_i} \,\Big|\, \nu\right)$ be the actual conditional error density function at time point $i$ and $\hat{q}_i^{At} = \sum_{j=1}^J W_{i,j}^{At} \frac{1}{\hat{s}_{i,j}} f_t\!\left(\frac{\hat{y}_{i,j} - y_i}{\hat{s}_{i,j}} \,\Big|\, \nu\right)$, where $W_i^{At}$ is defined in (4). Therefore, $\hat{q}_i^{At}$ is the mixture estimator of $q_i$ from the t-AFTER procedure. Let $D(f \| g) := \int f \log \frac{f}{g}$ be the Kullback–Leibler divergence between two density functions $f$ and $g$. Then $E(D(q_i \| \hat{q}_i^{At}))$ measures the performance of $\hat{q}_i^{At}$ as an estimate of $q_i$ under the Kullback–Leibler divergence at time point $i$.
Theorem 1. If the random errors are from a scaled Student’s t-distribution with degrees of freedom ν and Condition 2 holds, then:
\[ \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} E\, D(q_i \| \hat{q}_i^{At}) \le \inf_{1 \le j \le J} \left[ \frac{\log(1/w_j)}{n} + \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(m_i - \hat{y}_{i,j})^2}{2 s_i^2} + \frac{B_1}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(\hat{s}_{i,j} - s_i)^2}{s_i^2} \right]. \]
Further, if ν is strictly larger than two and Conditions 1 and 2′ hold, then
\[ \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(m_i - \hat{y}_i^{At})^2}{\sigma_i^2} \le C \inf_{1 \le j \le J} \left[ \frac{\log(1/w_j)}{n} + \frac{B_2}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(m_i - \hat{y}_{i,j})^2}{\sigma_i^2} + \frac{B_3}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(\hat{s}_{i,j} - s_i)^2}{s_i^2} \right]. \]
In the above, $C$, $B_1$, $B_2$ and $B_3$ are constants. $B_1$ and $B_3$ depend on $\xi_1$ and $\xi_1'$, respectively. $B_2$ is a function of ν, and $C$ depends on τ and $\xi_1'$.
Remarks:
  • When only Condition 2 is satisfied, Theorem 1 shows that the cumulative distance between the true densities and their estimators from the t-AFTER is upper bounded by the cumulative (standardized) forecast errors of the best candidate forecaster plus a penalty that has two parts: the squared relative estimation errors of the scale parameters and the logarithm of the initial weights. This risk bound is obtained without assuming the existence of the variances of the random errors, and $\hat{s}_{i,j}/s_i$ is only required to be bounded from below.
  • When ν is assumed to be strictly larger than two and both Conditions 1 and 2′ are satisfied, Theorem 1 shows that the cumulative forecast errors converge at the same rate as the cumulative forecast errors of the best candidate forecaster, up to a penalty that depends on the initial weights and the efficiency of the scale parameter estimation. The risk bounds hold even if the distribution of the random errors has tails as heavy as $t_3$.
  • If there is no prior information to decide the w j ’s in (6), then equal initial weights could be applied. That is, w j = 1 / J for all j. In this case, it is easy to see that the number of candidate forecasters plays a role in the penalty. When the candidate pool is large, some preliminary analysis should be done to eliminate the significantly less competitive ones before applying the t-AFTER.

3. The g-AFTER Methodology

In Section 2, the theoretical risk bounds of the combined forecasts from the t-AFTER are provided when the random errors are known to have Student’s t-distributions. However, the error distribution is typically unknown.
In this section, we propose a forecast combination method, g-AFTER, for situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. A theorem that allows the random errors to be from one of three popular distribution families (normal, double-exponential and scaled Student’s t) is provided to characterize the performance of the g-AFTER.

3.1. The g-AFTER Method

Let the combination weight of $\hat{Y}_i$ from the g-AFTER be $W_i^{Ag}$. For any $i > i_0$, $W_i^{Ag}$ and the associated combined forecast $\hat{y}_i^{Ag}$ are:
\[ W_i^{Ag} = \frac{l_{i-1}^{Ag}}{\|l_{i-1}^{Ag}\|_1}, \qquad \hat{y}_i^{Ag} = \langle \hat{Y}_i, W_i^{Ag} \rangle, \]
where $l_{i-1}^{Ag} = (l_{i-1,1}^{Ag}, \ldots, l_{i-1,J}^{Ag})$ and, for $1 \le j \le J$,
\[ l_{i-1,j}^{Ag} = l_{i-1,j}^{A2} + c_1 l_{i-1,j}^{A1} + c_2 l_{i-1,j}^{At}, \tag{8} \]
where $l_{i-1,j}^{A2}$, $l_{i-1,j}^{A1}$ and $l_{i-1,j}^{At}$ are from the $L_2$-, $L_1$- and t-AFTERs, respectively, and $c_1$ and $c_2$ are non-negative constants that control the relative importance of the $L_2$-, $L_1$- and t-AFTERs in the g-AFTER. For instance, $c_1$ and $c_2$ can be small when one has evidence suggesting that the random errors are likely to be normally distributed.
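As a rough sketch of (8) (the function name and the log-scale bookkeeping are our own illustrative choices, not the authors' code), the three likelihood streams can be merged per forecaster as follows, assuming the $L_2$-, $L_1$- and t-AFTER accumulators are maintained in logs as in the earlier sketches:

    import numpy as np
    from scipy.special import logsumexp

    def g_after_weights(log_l2, log_l1, log_lt, c1=1.0, c2=2.0):
        """Merge the L2-, L1- and t-AFTER accumulators as in (8), working in logs.

        log_l2, log_l1, log_lt : length-J arrays holding log l^{A2}_{i-1,j},
                                 log l^{A1}_{i-1,j} and log l^{At}_{i-1,j}
        c1, c2                 : non-negative constants weighting the L1 and t components
        """
        # log of l^{Ag}_{i-1,j} = l^{A2}_{i-1,j} + c1 * l^{A1}_{i-1,j} + c2 * l^{At}_{i-1,j}
        stacked = np.stack([log_l2, np.log(c1) + log_l1, np.log(c2) + log_lt])
        log_lg = logsumexp(stacked, axis=0)
        w = np.exp(log_lg - log_lg.max())
        return w / w.sum()                    # W^{Ag}_i, normalized to sum to one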

3.2. Conditions

Condition 3: Suppose the random errors have zero mean and are from one of the three families (normal, double-exponential and scaled Student’s t), and there exists a constant $0 < \xi_2 \le 1$, such that for any $i \ge i_0$, with probability one, we have:
\[ \xi_2 \le \frac{\hat{s}_i}{s_i} \le \frac{1}{\xi_2}, \]
where $s_i$ is the actual conditional scale parameter at time point $i$ and $\hat{s}_i$ refers to any estimate of $s_i$ used in the g-AFTER.
This condition requires all of the estimates of the scale parameters to stay in a reasonable range around the true values. For the j-th candidate forecaster, s ^ i is σ ^ i , j when associated with normal errors, is d ^ i , j when associated with the double exponential and is s ^ i , j , k when associated with the scaled Student’s t with degrees of freedom ν k , where σ ^ i , j , d ^ i , j , s ^ i , j , k and ν k are defined in Subsection 2.2 and Subsection 2.3.
Condition 4: When the random errors in the true model follow a scaled Student’s t-distribution with degrees of freedom ν, assume there exist positive constants $\underline{\nu}$, λ and $\bar{\nu}$, such that
\[ \underline{\nu} \le \min_{\nu_k \in \Omega}(\nu_k, \nu) - 2 \le \bar{\nu}, \qquad \max_{\nu_k \in \Omega} |\nu_k - \nu| \le \lambda. \]

3.3. Risk Bounds for the g-AFTER

Let w j A 2 and w j A 1 be the initial combination weights of the forecaster j in the L 2 - and L 1 -AFTERs, respectively, and w j , k A t be the initial combination weight of the j-th forecaster under the degrees of freedom ν k in the t-AFTER.
Let $\hat{W}_{i,j}^{A2} = \frac{l_{i-1,j}^{A2}}{\|l_{i-1}^{Ag}\|_1}$, $\hat{W}_{i,j}^{A1} = \frac{c_1 l_{i-1,j}^{A1}}{\|l_{i-1}^{Ag}\|_1}$ and $\hat{W}_{i,j,k}^{At} = \frac{c_2 l_{i-1,j,k}^{At}}{\|l_{i-1}^{Ag}\|_1}$, where $l_{i-1,j,k}^{At}$ is defined in (5) and $l_{i-1}^{Ag}$ is defined in (8). Therefore, $\hat{W}_{i,j}^{A2}$, $\hat{W}_{i,j}^{A1}$ and $\hat{W}_{i,j,k}^{At}$ are the weights of the density estimates under the normal, the double-exponential and the scaled Student’s t with degrees of freedom $\nu_k$, respectively, in the g-AFTER procedure at time point $i-1$ from the $j$-th forecaster. Let $G = \sum_{j=1}^J \left( w_j^{A2} + c_1 w_j^{A1} + c_2 \sum_k w_{j,k}^{At} \right)$, where $c_1$ and $c_2$ are defined in (8).
Let $q_i$ be the pdf of $\epsilon_i$ at time point $i$ and let its estimator from a g-AFTER procedure be:
\[ \hat{q}_i^{Ag} = \sum_{j=1}^J \left[ \hat{W}_{i,j}^{A2} \frac{1}{\hat{\sigma}_{i,j}} f_N\!\left(\frac{\hat{y}_{i,j} - y_i}{\hat{\sigma}_{i,j}}\right) + \hat{W}_{i,j}^{A1} \frac{1}{\hat{d}_{i,j}} f_{DE}\!\left(\frac{\hat{y}_{i,j} - y_i}{\hat{d}_{i,j}}\right) + \sum_{k=1}^K \hat{W}_{i,j,k}^{At} \frac{1}{\hat{s}_{i,j,k}} f_t\!\left(\frac{\hat{y}_{i,j} - y_i}{\hat{s}_{i,j,k}} \,\Big|\, \nu_k\right) \right]. \]
Theorem 2. If Conditions 3 and 4 hold, then for $\hat{y}_i^{Ag}$ from a g-AFTER procedure, we have:
\[ \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} E\, D(q_i \| \hat{q}_i^{Ag}) \le \inf_{1 \le j \le J} \left( \frac{B_1}{n} \sum_{i=i_0+1}^{i_0+n} E \frac{(m_i - \hat{y}_{i,j})^2}{\sigma_i^2} + R \right), \]
where:
\[ R = \begin{cases} \dfrac{\log(G / w_j^{A2})}{n} + \dfrac{B_2}{n} \displaystyle\sum_{i=i_0+1}^{i_0+n} E \frac{(\hat{\sigma}_{i,j} - \sigma_i)^2}{\sigma_i^2}, & \text{under normal errors;} \\[1.5ex] \dfrac{\log(G / (c_1 w_j^{A1}))}{n} + \dfrac{B_2}{n} \displaystyle\sum_{i=i_0+1}^{i_0+n} E \frac{(\hat{d}_{i,j} - d_i)^2}{d_i^2}, & \text{under double-exponential errors;} \\[1.5ex] \displaystyle\inf_{1 \le k \le K} \left[ \frac{\log(G / (c_2 w_{j,k}^{At}))}{n} + \frac{B_2}{n} \sum_{i=i_0+1}^{i_0+n} E \frac{(\hat{s}_{i,j,k} - s_i)^2}{s_i^2} + B_3 \left| \frac{\nu - \nu_k}{\nu} \right| \right], & \text{under scaled t errors.} \end{cases} \]
If Condition 1 also holds, then:
\[ \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(m_i - \hat{y}_i^{Ag})^2}{\sigma_i^2} \le C \inf_{1 \le j \le J} \left[ \frac{B_1}{n} \sum_{i=i_0+1}^{i_0+n} E \frac{(m_i - \hat{y}_{i,j})^2}{\sigma_i^2} + R \right]. \]
In the above, $C$, $B_1$, $B_2$ and $B_3$ are constants depending on τ, $\xi_2$ and the parameters in Condition 4.
Remarks:
  • Theorem 2 provides a risk bound for more general situations than Theorem 1. That is, as long as the true random errors are from one of the three popular families, similar risk bounds hold.
  • When there is strong evidence that the errors are highly heavy tailed, Ω can be very small with only small degrees of freedom, and the $c_2 w_{j,k}^{At}$ in $G$ can be made relatively large (relative to $w_j^{A2}$ and $c_1 w_j^{A1}$). The more information on the tails of the error distributions is available, the more efficiently the initial weights can be allocated.
  • In particular, when the true random errors have tails significantly heavier than normal and double-exponential, they can be assumed to follow a scaled Student’s t-distribution with unknown ν, and a (general) t-AFTER procedure is more reasonable. In this case, $l_{i-1,j}^{Ag} = l_{i-1,j}^{At}$.
    Let $q_i = \frac{1}{s_i} f_t\!\left(\frac{y_i - m_i}{s_i} \,\Big|\, \nu\right)$ and $\hat{q}_i^{At} = \sum_{j,k} \hat{w}_{i,j,k}^{At} \frac{1}{\hat{s}_{i,j,k}} f_t\!\left(\frac{\hat{y}_{i,j} - y_i}{\hat{s}_{i,j,k}} \,\Big|\, \nu_k\right)$ with $\hat{w}_{i,j,k}^{At} \ge 0$ for all $j$ and $k$. Without assuming that Condition 1 is satisfied, it follows that for any $n \ge 1$:
    \[ \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} E\, D(q_i \| \hat{q}_i^{At}) \le \inf_{1 \le j \le J} \left[ \frac{\log(1/w_{j,k}^{At})}{n} + \frac{B_1}{n} \sum_{i=i_0+1}^{i_0+n} E \frac{(m_i - \hat{y}_{i,j})^2}{\sigma_i^2} + R^* \right], \]
    where $w_{j,k}^{At}$ is defined as in Subsection 2.3 and:
    \[ R^* = \inf_{1 \le k \le K} \left[ \frac{B_2}{n} \sum_{i=i_0+1}^{i_0+n} E \frac{(\hat{s}_{i,j,k} - s_i)^2}{s_i^2} + B_3 \left| \frac{\nu - \nu_k}{\nu} \right| \right]. \]
    If Condition 1 is also satisfied, then it follows that:
    \[ \frac{1}{n} \sum_{i=i_0+1}^{i_0+n} \frac{E(m_i - \hat{y}_i^{At})^2}{\sigma_i^2} \le C \inf_{1 \le j \le J} \left[ \frac{\log(1/w_{j,k}^{At})}{n} + \frac{B_1}{n} \sum_{i=i_0+1}^{i_0+n} E \frac{(m_i - \hat{y}_{i,j})^2}{\sigma_i^2} + R^* \right], \]
    where $C$, $B_1$, $B_2$ and $B_3$ are the same as in Theorem 2.

4. Simulations

We consider two simulation scenarios, with candidate forecasters from linear regression models and from autoregressive (AR) models. Results from the linear regression models show improvements of the t- and g-AFTERs over the $L_1$- and $L_2$-AFTERs when the random errors have heavy tails. In the AR settings, the t- and g-AFTERs are compared to many other popular combination methods in various situations, including cases in which the forecast errors have extreme symmetric or asymmetric heavy tails. We also compared the performances of the t- and g-AFTERs to other combination methods on the linear regression models, and similar results were found. Only representative results are given here.
In this and the following sections, we have the following settings:
  • Use $\Omega = \{1, 3\}$ as the set of candidate degrees of freedom for the scaled Student’s t-distributions considered in the t-AFTER method. The t-AFTER is proposed mostly for situations in which the error terms exhibit very strong heavy-tailed behaviors. As the degrees of freedom of the Student’s t-distribution get larger, the t-AFTER becomes similar to the $L_1$- or $L_2$-AFTER. Thus, a choice of Ω with relatively small degrees of freedom in the g-AFTER should provide good enough adaptation capability. In fact, other options for Ω, such as $\Omega = \{1, 3, 5, 8, 15\}$, were considered, and similar results were found.
  • Since the g-AFTER is usually preferred when the users have no consistent and strong evidence to identify the distribution of the error terms from the three candidate distribution families, we give equal initial weights to the candidate distributions. Therefore, $c_1 = 1$, $c_2 = 2$, $w_j^{A1} = w_j^{A2} = 1/J$ and $w_{j,k}^{At} = \frac{1}{2J}$ are used in the g-AFTER. Note that, for example, if there is clear and consistent evidence that the error distribution is more likely to be from the normal family, then putting relatively large initial weights on the $L_2$-AFTER component of the g-AFTER can be more appropriate than using equal weights.
  • The $\hat{s}_{i,j,k}$’s are the sample median of the absolute forecast errors before time point $i$ from forecaster $j$, divided by the theoretical median of the absolute value of a random variable with distribution $t_{\nu_k}$ (a small sketch of this estimate follows the list).
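For concreteness, here is a small sketch of this scale estimate (the helper name is ours): since $t_{\nu_k}$ is symmetric about zero, the median of its absolute value equals its 0.75-quantile, so the estimate divides the sample median absolute error by that quantile.

    import numpy as np
    from scipy import stats

    def t_scale_estimate(errors, nu):
        """Estimate the scale s of a scaled t_nu from past forecast errors.

        errors : 1-D array of past forecast errors of one candidate forecaster
        nu     : assumed degrees of freedom
        """
        theo_med = stats.t.ppf(0.75, df=nu)   # median of |X| for X ~ t_nu, by symmetry
        return np.median(np.abs(errors)) / theo_med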

4.1. Linear Regression Models

4.1.1. Simulation Settings

There are $p$ predictors $(X_1, \ldots, X_p)$ available, and the true model uses the first $p_0$ predictors with coefficients $\beta = (\beta_1, \ldots, \beta_{p_0})$. That is, $Y = \sum_{i=1}^{p_0} X_i \beta_i + \epsilon$. The $p$ candidate forecasters are generated from the following $p$ models: $Y = \beta_0 + X_1 \beta_1 + e$, $Y = \beta_0 + \sum_{i=1}^{2} X_i \beta_i + e$, ⋯, $Y = \beta_0 + \sum_{i=1}^{p} X_i \beta_i + e$. We take $p = 2 p_0 - 1$ for this scenario. Other settings for $p$ and $p_0$ were also considered, and they gave similar results.
The $p$ predictors are generated from a multivariate normal distribution with zero mean and covariance matrix Σ, with sample size $n = 125$. For the entries of Σ, the diagonal elements are one and the off-diagonal elements are 0.8. The forecasters are generated after the 90th observation, and the combination is generated after the fifth forecast. Various distributions for the random errors (ϵ) are considered. Note that we also tried other structures of Σ, including ones with $\Sigma_{i,j} = 0.5^{|i-j|}$ and $\Sigma_{i,j} = I(i = j)$, $1 \le i, j \le p$. The results are similar.
For each set of β, we generate 200 sets of $(X_1, \ldots, X_p, Y)$, and on each of the 200 sets, we record the average squared estimation error (ASEE), $\frac{1}{20} \sum_{i=106}^{125} (m_i - \hat{y}_i)^2$, of each combination method, where $\hat{y}_i$ is the forecast of $y_i$ from that method. Although we focus on the ASEE in our presentation of the numerical results, another measure, the averaged absolute estimation error (AAEE), $\frac{1}{20} \sum_{i=106}^{125} |m_i - \hat{y}_i|$, is also considered. The main results are similar under the two performance measures. In Sub-subsection 4.2.3, some results are given under both the ASEE and the AAEE to demonstrate that the comparisons are robust to the selection of the performance measure. Note that, since this is a simulation study, the combined forecasts are compared to the conditional means (the $m_i$’s) instead of the observations (the $y_i$’s) to better compare the competing methods. For each competing method, the mean ASEE (or AAEE) over the 200 datasets is recorded.
We sample β 200 times, with each of its $p_0$ components drawn independently from Unif[1, 3], so 200 sets of mean ASEEs are recorded. In order to compare the performances of the four AFTER-based methods, the $L_2$-, $L_1$-, t- and g-AFTERs, for each β the ratios of the mean ASEEs of the $L_2$-, t- and g-AFTERs over the mean ASEE of the $L_1$-AFTER are recorded. The summaries (means and their standard errors) of the 200 sets of ratios are presented.

4.1.2. Results

Three sets of results, corresponding to three choices of the number of variables in the true models ($p_0 = 3, 5, 10$), are presented in Table 1. In this table, $A_2$, $A_t$ and $A_g$ stand for the ratios of the mean ASEEs of the $L_2$-, t- and g-AFTERs over those of the $L_1$-AFTER. It is expected that the t-AFTER and g-AFTER will outperform the $L_1$-AFTER and $L_2$-AFTER when forecasting data-generating processes (DGPs) with heavy-tailed error distributions. Thus, we run simulations with errors following scaled $t_3$, $t_{10}$, double-exponential (DE) and normal distributions. As one can see in Table 1, the t- and g-AFTERs are the best forecasters for DGPs with errors coming from the $t_3$, $t_{10}$ and DE distributions. In those cases, the $L_1$-AFTER also outperforms the $L_2$-AFTER. In addition, the g-AFTER and the $L_2$-AFTER are the best forecasters in the normal case. In summary, the t-AFTER and g-AFTER are better choices for heavy-tailed distributions, and the general-purpose g-AFTER also performs very well for DE and normal errors.
Table 1. Simulation results on the linear regression models.

               t3                 DE                 t10               Normal
            σ²=1     σ²=9      σ²=1     σ²=9      σ²=1     σ²=9      σ²=1     σ²=9
p0 = 3
  A2        1.302    1.043     1.116    1.028     0.983    0.958     0.926    0.931
           (0.009)  (0.003)   (0.004)  (0.001)   (0.003)  (0.001)   (0.002)  (0.001)
  At        0.943    0.980     0.983    0.995     0.941    0.955     0.932    0.942
           (0.002)  (0.001)   (0.001)  (0.001)   (0.003)  (0.001)   (0.001)  (0.001)
  Ag        0.944    0.967     0.974    0.977     0.940    0.950     0.926    0.938
           (0.002)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)
p0 = 5
  A2        1.257    1.066     1.088    1.026     0.980    0.955     0.937    0.927
           (0.008)  (0.004)   (0.003)  (0.001)   (0.002)  (0.001)   (0.002)  (0.001)
  At        0.950    0.967     0.976    0.982     0.951    0.950     0.943    0.938
           (0.002)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)
  Ag        0.951    0.958     0.971    0.970     0.949    0.944     0.939    0.933
           (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)
p0 = 10
  A2        1.166    1.056     1.035    0.998     0.968    0.949     0.946    0.929
           (0.006)  (0.003)   (0.002)  (0.001)   (0.002)  (0.001)   (0.001)  (0.001)
  At        0.950    0.957     0.964    0.965     0.949    0.946     0.948    0.939
           (0.002)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)
  Ag        0.945    0.949     0.961    0.955     0.944    0.939     0.942    0.933
           (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)   (0.001)  (0.001)
Note: The first row shows the distributions of the random errors in the true data-generating regression model, and DE stands for double-exponential distribution. The second row describes the noise variance in data generation. The A2, At and Ag stand for the L2-, t- and g-adaptive forecasting through exponential re-weighting (AFTER) methods, respectively. The parameter p0 is the number of explanatory variables in the data-generating model. The true parameter (β) values are randomly generated from a uniform distribution, and the candidate forecasts are obtained from linear regressions with 1, 2 and up to the maximum number of explanatory variables. For each set of true parameters, 200 replicated datasets are generated to simulate the mean average squared estimation error (ASEE) for each combination method. The ratio of the mean ASEE of each method over that of the L1-AFTER is used to measure the relative performance of the competitors. The process is replicated 200 times, each time with independently-generated true β values. The means and their standard errors of the 200 sets of ratios are summarized in this table (the numbers in the parentheses are the standard errors).

4.2. AR Models

4.2.1. Simulation Settings

Let the true model be an AR($p_0$) process with random errors from certain distributions and the candidate forecasters be based on AR(1), AR(2), ..., AR($p$) models ($1 \le p_0 \le p$), respectively. For results on asymptotically-optimal model selection for AR models, see, e.g., Ing [24] and Ing et al. [25]. Here we compare forecast combination methods.
In this scenario, given $p$, $p_0$ is randomly sampled from a uniform distribution on $\{1, 2, \ldots, p\}$. Given $p_0$, the β in the true model is generated from $[-1, 1]$. Any β leading to a non-stationary AR model is not considered. Given a valid β, 200 samples of size $n = 125$ from the true model are generated. On each data sample, the candidate forecasters are generated after the 90th observation, and the ASEE of the last 20 forecasts is recorded. Furthermore, the combined forecasts are compared to the conditional means instead of the observations. For each β, the mean ASEE of each combining method over the 200 samples is recorded, and the ratios of the mean ASEEs of the other methods over that of the $L_1$-AFTER are recorded.
We replicate the generation of p 0 ’s (and β’s) 200 times and report the mean and its standard error of the 200 ratios for each combination method.
Only the results of p = 5 are presented (other choices, such as p = 8 and 10, provide similar results).

4.2.2. Other Combination Methods

Some other popular combination methods are included in this part and compared to the newly-proposed methods. The simple average combination strategy (SA) uses the average of the candidate forecasts as the combined forecast. The MD and TM strategies use the median and the trimmed mean (removing the largest and smallest before averaging) of the candidate forecasts, respectively. The variance-covariance estimation-based combination method (denoted BG, because it was first proposed by Bates & Granger [1]) we use in this paper is the version in Hansen [26]. Furthermore, a modified BG method with a discount factor $0 < \rho < 1$ is considered, and the results for multiple ρ’s are presented. In the modified BG, the estimate of the (conditional) variance of the forecast errors of a forecaster at any time point is the associated discounted mean squared forecast error with factor ρ. See, e.g., Stock & Watson [27], for more details. Hereafter, for example, $BG_{0.9}$ denotes a BG method with $\rho = 0.9$. Two linear-regression-based combination methods are also considered: one is the combination via ordinary linear regression (LR), and the other is a constrained linear regression (CLR) combination. The constraints of the CLR are: all coefficients are non-negative, and the sum of the coefficients is one (without an intercept in the regression).
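As a rough sketch under the description above (not the exact implementation of Hansen [26] or Stock & Watson [27], and with an illustrative function name), the discounted BG weights can be computed from past forecast errors as follows; ρ = 1 recovers the plain BG weights.

    import numpy as np

    def bg_weights(errors, rho=1.0):
        """Discounted Bates-Granger style weights from past forecast errors.

        errors : (n, J) array; errors[i, j] is the forecast error of candidate j at time i
        rho    : discount factor in (0, 1]; rho = 1 gives the plain BG weights
        Weights are proportional to the inverse of the discounted mean squared errors.
        """
        n, J = errors.shape
        discounts = rho ** np.arange(n - 1, -1, -1)   # the most recent error gets weight 1
        msfe = (discounts[:, None] * errors**2).sum(axis=0) / discounts.sum()
        inv = 1.0 / np.maximum(msfe, 1e-12)           # guard against a zero estimate
        return inv / inv.sum()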

4.2.3. Results

In order to demonstrate the advantageous performance of the t- and g-AFTERs for heavy-tailed DGPs in various scenarios, we simulated two major cases for comparison. Table 2 and Table 3 provide summaries of the simulation results. In these two tables, $A_2$, $A_t$, $A_g$, SA, MD, TM, BG, LR and CLR stand for the relative performances of these methods over that of the $L_1$-AFTER. See Sub-subsections 4.1.2 and 4.2.2 for the descriptions of these methods. The other entries are defined as in Table 1. Table 2 presents the results for the cases in which the random errors are not (or only mildly) heavy tailed, while Table 3 contains the results when the random errors have significant heavy tails.
One can see that the t- and g-AFTERs consistently outperform all other non-AFTER-based combination methods in all of the simulated situations (heavy tailed or not) and outperform the $L_1$- and $L_2$-AFTERs when the random errors have tails heavier than normal. The CLR is competitive because its constraints make the combination weights of the candidate forecasts relatively more stable and resistant to dramatic changes, and it becomes more competitive when the random errors have heavier tails. The SA and TM are vulnerable to outliers, which hurts their overall performances: their ASEEs are 35% to 150% larger than those of the proposed methods, as can be seen from both tables.
Table 2. Simulation results on the AR models with p = 5 (not or only mildly heavy tailed).

               Normal                        t10                           DE
            σ²=1     σ²=4     σ²=9       σ²=1     σ²=4     σ²=9       σ²=1     σ²=4     σ²=9
A2          0.941    0.940    0.940      0.972    0.972    0.971      1.030    1.032    1.033
           (0.004)  (0.004)  (0.004)    (0.004)  (0.003)  (0.003)    (0.004)  (0.003)  (0.004)
At          0.954    0.953    0.954      0.961    0.962    0.962      0.997    1.001    0.995
           (0.003)  (0.003)  (0.003)    (0.002)  (0.003)  (0.003)    (0.001)  (0.001)  (0.001)
Ag          0.948    0.947    0.948      0.957    0.959    0.958      0.978    0.983    0.976
           (0.003)  (0.004)  (0.004)    (0.003)  (0.003)  (0.003)    (0.002)  (0.001)  (0.002)
SA          2.892    2.484    2.408      2.372    2.297    2.070      2.278    2.176    2.483
           (0.268)  (0.166)  (0.189)    (0.167)  (0.174)  (0.127)    (0.148)  (0.151)  (0.148)
MD          1.681    2.025    1.824      1.884    1.874    1.421      1.740    1.602    1.943
           (0.137)  (0.191)  (0.187)    (0.243)  (0.197)  (0.076)    (0.137)  (0.144)  (0.168)
TM          1.805    1.946    1.754      1.838    1.705    1.469      1.723    1.571    1.885
           (0.121)  (0.144)  (0.134)    (0.156)  (0.138)  (0.066)    (0.109)  (0.093)  (0.120)
BG          1.441    1.462    1.389      1.425    1.364    1.321      1.431    1.357    1.500
           (0.047)  (0.051)  (0.047)    (0.042)  (0.040)  (0.032)    (0.046)  (0.035)  (0.045)
BG_0.95     1.432    1.453    1.381      1.417    1.358    1.315      1.427    1.353    1.495
           (0.047)  (0.050)  (0.047)    (0.042)  (0.040)  (0.032)    (0.045)  (0.035)  (0.045)
BG_0.9      1.429    1.449    1.378      1.414    1.355    1.313      1.425    1.352    1.492
           (0.047)  (0.049)  (0.047)    (0.042)  (0.039)  (0.032)    (0.045)  (0.035)  (0.045)
BG_0.8      1.433    1.452    1.382      1.417    1.357    1.315      1.427    1.353    1.491
           (0.047)  (0.050)  (0.047)    (0.042)  (0.040)  (0.032)    (0.045)  (0.035)  (0.044)
BG_0.7      1.447    1.464    1.394      1.428    1.366    1.322      1.432    1.357    1.495
           (0.048)  (0.051)  (0.049)    (0.043)  (0.040)  (0.033)    (0.046)  (0.036)  (0.045)
LR          7.956    8.355    8.491      8.856   10.210    9.138     11.110   11.240   10.040
           (0.346)  (0.339)  (0.342)    (0.387)  (1.032)  (0.363)    (0.504)  (0.509)  (0.513)
CLR         1.036    1.024    1.036      1.032    1.036    1.042      1.072    1.070    1.045
           (0.011)  (0.013)  (0.012)    (0.011)  (0.010)  (0.011)    (0.011)  (0.011)  (0.013)
Note: The first column lists the competing forecast combination methods. The true models for this study are autoregressive ( A R ) models with the true order randomly generated up to p = 5 . The true parameters in the A R model are uniformly generated (with the parameters leading to non-stationary A R models removed). The 5 candidate forecasts are obtained from A R ( 1 ) , A R ( 2 ) , up to A R ( 5 ) models. All other aspects are similar to Table 1. Some other popular combination methods are included in the comparison. The S A , M D and T M methods use the average, median and trimmed mean (removing the largest and smallest before averaging) as the combined forecasts, respectively. The B G method uses the inverse of the historical mean squared forecast errors of the candidate forecasts to assign combination weights. A modified B G method is used with a discount factor 0 < ρ < 1 to discount the contribution of forecast errors at an earlier time when estimating the variances of the candidates (e.g., Stock & Watson [27]). Here, for instance, B G 0 . 9 denotes this B G method with ρ = 0 . 9 . The L R method uses linear regression with the actual value as the response and the candidate forecasts as the regressors in linear regression to assign the combination weights. The C L R method is L R with the constraint that the coefficients are non-negative and sum to 1.
Table 3. Simulation results on the AR models with p = 5 (heavy tailed) under squared estimation error.

               t3                            Log-Normal
            σ²=1     σ²=4     σ²=9       σ=0.25   σ=0.5    σ=1
A2          1.058    1.056    1.053      0.964    1.024    1.051
           (0.009)  (0.008)  (0.008)    (0.003)  (0.004)  (0.010)
At          0.955    0.947    0.961      0.951    0.940    0.921
           (0.006)  (0.006)  (0.006)    (0.003)  (0.004)  (0.008)
Ag          0.950    0.943    0.957      0.950    0.946    0.926
           (0.006)  (0.006)  (0.006)    (0.003)  (0.004)  (0.008)
SA          2.047    1.889    1.931      2.253    2.143    1.730
           (0.107)  (0.098)  (0.139)    (0.173)  (0.115)  (0.087)
MD          1.692    1.396    1.657      1.517    1.441    1.370
           (0.135)  (0.066)  (0.182)    (0.097)  (0.085)  (0.078)
TM          1.625    1.438    1.508      1.559    1.555    1.404
           (0.091)  (0.060)  (0.112)    (0.086)  (0.080)  (0.057)
BG          1.369    1.307    1.286      1.329    1.374    1.278
           (0.034)  (0.025)  (0.033)    (0.039)  (0.038)  (0.025)
BG_0.95     1.365    1.303    1.282      1.322    1.370    1.275
           (0.033)  (0.025)  (0.033)    (0.038)  (0.038)  (0.025)
BG_0.9      1.360    1.299    1.277      1.319    1.367    1.271
           (0.033)  (0.025)  (0.032)    (0.037)  (0.037)  (0.024)
BG_0.8      1.352    1.290    1.269      1.320    1.366    1.259
           (0.032)  (0.024)  (0.030)    (0.038)  (0.037)  (0.023)
BG_0.7      1.345    1.284    1.263      1.327    1.368    1.248
           (0.032)  (0.023)  (0.030)    (0.039)  (0.037)  (0.023)
LR         95.280   38.290   46.220      9.316   13.180  174.000
          (60.670)  (7.566)  (9.192)    (0.375)  (0.891) (56.286)
CLR         1.014    1.007    1.016      1.046    1.032    0.974
           (0.010)  (0.010)  (0.010)    (0.011)  (0.011)  (0.010)
Note: In the columns of “log-normal”, the σ’s are the scale parameters instead of the standard deviations of the log-normal distributions. The setting is basically the same as that in Table 2, only the innovation errors in the true models have heavier tails.
Between the t- and g-AFTER, the latter is more robust, since its performance under all scenarios is the best or close to the best. For the t-AFTER, its advantages over the $L_1$- and $L_2$-AFTERs are clear and consistent as the tails of the random error distributions get heavier. In both Table 2 and Table 3, the CLR is the most competitive method outside the AFTER family, but it still has 3% to 7% larger ASEEs than the new methods on average.
In our settings, similar to many real application situations, using the conditional variances only to assign relative combining weights may not be enough, since some of the candidate forecasters are highly correlated. This explains why the B G and the discounted B G ’s are not quite competitive, as seen in Table 2 and Table 3. The B G related methods have at least 20% larger ASEEs than the AFTER-based methods on average.
To demonstrate that our results are not sensitive to the performance measure, we redo Table 3 under the AAEE (instead of ASEE), and the comparisons are given in Table 4. The results are similar.
Table 4. Simulation results on the AR models with p = 5 (heavy tailed) under absolute estimation error.

               t3                            Log-Normal
            σ²=1     σ²=4     σ²=9       σ=0.25   σ=0.5    σ=1
A2          1.018    1.019    1.019      0.981    0.997    1.017
           (0.003)  (0.002)  (0.008)    (0.002)  (0.002)  (0.003)
At          0.990    0.988    0.993      0.982    0.976    0.975
           (0.002)  (0.002)  (0.002)    (0.002)  (0.002)  (0.003)
Ag          0.988    0.986    0.991      0.979    0.978    0.977
           (0.002)  (0.002)  (0.002)    (0.002)  (0.002)  (0.002)
SA          1.469    1.666    1.724      1.435    1.543    1.483
           (0.064)  (0.076)  (0.080)    (0.069)  (0.064)  (0.064)
MD          1.209    1.314    1.412      1.129    1.279    1.196
           (0.043)  (0.068)  (0.094)    (0.035)  (0.060)  (0.037)
TM          1.226    1.367    1.312      1.183    1.331    1.272
           (0.040)  (0.056)  (0.085)    (0.033)  (0.050)  (0.040)
BG          1.187    1.272    1.489      1.159    1.245    1.210
           (0.023)  (0.029)  (0.035)    (0.021)  (0.027)  (0.023)
BG_0.95     1.184    1.269    1.401      1.157    1.242    1.206
           (0.022)  (0.029)  (0.034)    (0.021)  (0.027)  (0.023)
BG_0.9      1.181    1.266    1.378      1.156    1.240    1.201
           (0.022)  (0.029)  (0.033)    (0.021)  (0.027)  (0.022)
BG_0.8      1.176    1.260    1.450      1.156    1.237    1.192
           (0.021)  (0.028)  (0.033)    (0.021)  (0.027)  (0.021)
BG_0.7      1.173    1.256    1.352      1.159    1.236    1.185
           (0.021)  (0.028)  (0.032)    (0.021)  (0.026)  (0.020)
LR          2.891    2.862    3.647      2.690    2.610    3.296
           (0.084)  (0.097)  (1.393)    (0.074)  (0.077)  (0.121)
CLR         1.029    1.025    1.022      1.019    1.004    1.015
           (0.006)  (0.006)  (0.006)    (0.008)  (0.008)  (0.006)
Note: The settings are the same as those for Table 3, but this table uses averaged absolute estimation errors (AAEE) instead of averaged squared estimation errors (ASEE) to measure the performance of forecasters. See Sub-subsection 4.1.1 for the detailed definitions of the AAEE and ASEE. The results are similar to Table 3.

5. Real Data Example

The M3-competition data are popular and often used to compare and validate the performances of prediction methods. It contains 3003 micro, industry, macro, financial, demographic and other variables (see [28] for more details). There are 24 different forecast sequences from 24 different candidate forecast models/methods/procedures for each of the 3003 variables (N1 to N3003). For each variable, the last few (6, 8 or 18) observations are not used to train the predictive models, and they are used to evaluate the model performance. Notice that the forecasts are generated all at once (1-, 2-, ⋯ and up to 6, 8 or 18 steps ahead) by each forecast model. We use the 1428 variables (N1402 to N2829) with 18 observations as performance evaluating sets to conduct our study because some combination methods need a few forecasts to train the parameters before achieving a reasonable level of reliability.

5.1. Data and Settings

Let $\hat{y}_i$ be the forecast of $y_i$ for $n_0 \le i \le n_1$; then the mean squared forecast error (MSFE) is $\frac{1}{n_1 - n_0 + 1} \sum_{i=n_0}^{n_1} (y_i - \hat{y}_i)^2$. We use the MSFE to measure the prediction performance of the combination methods on each of the 1428 variables. For each variable, the ratio of the MSFE of each of the other combination methods over the MSFE of the SA is reported. In addition, the mean absolute percentage error (MAPE) is also considered.
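For reference, here is a minimal sketch of the two evaluation measures; the MAPE is written in its standard percentage form, which is our assumption since the text does not spell it out.

    import numpy as np

    def msfe(y, yhat):
        """Mean squared forecast error over the evaluation period."""
        y, yhat = np.asarray(y, float), np.asarray(yhat, float)
        return np.mean((y - yhat) ** 2)

    def mape(y, yhat):
        """Mean absolute percentage error (in percent); standard form assumed."""
        y, yhat = np.asarray(y, float), np.asarray(yhat, float)
        return 100.0 * np.mean(np.abs((y - yhat) / y))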
Specifically, using the same notation as in Subsection 4.2, the averaged relative performances (MSFE) of the MD, TM, BG, discounted BG’s, $A_2$, $A_1$, $A_t$ and $A_g$ over the SA across the 1428 variables are presented. See Sub-subsections 4.1.2 and 4.2.2 for the descriptions of these methods. The main reason that we use the SA as the benchmark on this real dataset is that the SA is one of the most popular combination methods, with a great reputation in a broad range of applications. Since there are too many candidate forecasters compared to the forecast periods available, the two linear regression-related combination methods discussed in Subsection 4.2 are not considered here.
For each of the variables with 18 forecast periods, the combination starts after the sixth forecast, and the MSFE of the last nine forecasts of each method is recorded for performance comparison. For each variable, the MSFE ratio of each method over that of the SA is reported. The summaries, i.e., mean (and its standard error), median, minimum, first and third quartiles (denoted $Q_1$ and $Q_3$, respectively) and maximum of the 1428 ratios of each method, are reported in Table 5. Note that the table also contains the comparisons under the MAPE (all of the other aspects are the same).
Furthermore, a comparison on a subset of the M3-competition data is provided. On this subset, the variables are considered to have a high potential to be heavy tailed. All 1428 variables are monthly series, and for each of them there are some training data (about 70 to 128 months). We modeled the training data to find the series with a high potential to have heavy-tailed errors. Specifically, let $y_t$ be the observed value of a variable at time $t$; we fit each variable with the model $y_t = \beta_0 + \sum_{j=1}^{11} \beta_j I(m_t = j) + \beta_{12} y_{t-1} + \cdots + \beta_{16} y_{t-5}$, where $m_t$ is the month at time point $t$. We used the AIC in backward selection, and the variables with a kurtosis of the forecast errors larger than three were considered to have heavy tails. In this way, 199 out of the 1428 variables were selected.
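The screening step can be sketched roughly as follows; this simplified version (our own illustration) fits the full model by ordinary least squares and omits the backward AIC selection used in the paper.

    import numpy as np
    from scipy.stats import kurtosis

    def looks_heavy_tailed(y, months, n_lags=5):
        """Rough screen for heavy-tailed in-sample errors of a monthly series.

        y      : 1-D array of training observations
        months : integer month (1-12) for each observation
        Fits y_t on month dummies and n_lags own lags by OLS (no AIC selection here)
        and flags the series if the residual kurtosis exceeds 3.
        """
        y = np.asarray(y, float)
        months = np.asarray(months)
        t = np.arange(n_lags, len(y))
        cols = [np.ones(len(t))]
        for j in range(1, 12):                        # 11 month dummies (December as baseline)
            cols.append((months[t] == j).astype(float))
        for lag in range(1, n_lags + 1):              # lagged responses y_{t-1}, ..., y_{t-5}
            cols.append(y[t - lag])
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y[t], rcond=None)
        resid = y[t] - X @ beta
        return kurtosis(resid, fisher=False) > 3      # standard (non-excess) kurtosis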
Table 5. Results on the 1428 variables of the M3-competition data.

                     Mean     Se      Median   Min     Q1      Q3      Max
A1        MSFE       0.708    0.016   0.649    0.001   0.307   0.994   11.50
          MAPE       0.758    0.009   0.773    0.038   0.507   0.990   2.901
A2        MSFE       0.697    0.017   0.639    0.001   0.309   0.979   13.32
          MAPE       0.766    0.010   0.766    0.030   0.517   0.992   4.138
At        MSFE       0.708    0.015   0.646    0.001   0.312   1.003   8.632
          MAPE       0.760    0.009   0.769    0.034   0.509   0.993   3.717
Ag        MSFE       0.696    0.014   0.645    0.001   0.308   0.987   7.710
          MAPE       0.757    0.009   0.770    0.033   0.508   0.990   3.298
MD        MSFE       1.050    0.010   1.022    0.002   0.910   1.143   5.341
          MAPE       1.015    0.005   1.015    0.065   0.944   1.078   2.821
TM        MSFE       0.990    0.004   1.000    0.002   0.974   1.023   2.437
          MAPE       0.992    0.002   0.999    0.062   0.984   1.013   1.747
BG        MSFE       0.784    0.010   0.838    0.001   0.596   0.973   5.227
          MAPE       0.849    0.006   0.902    0.039   0.758   0.983   3.051
BG_0.95   MSFE       0.775    0.010   0.832    0.001   0.582   0.969   7.715
          MAPE       0.842    0.006   0.896    0.037   0.749   0.981   2.841
BG_0.9    MSFE       0.768    0.012   0.825    0.001   0.564   0.966   11.45
          MAPE       0.835    0.006   0.893    0.036   0.739   0.978   2.643
BG_0.8    MSFE       0.758    0.019   0.806    0.001   0.529   0.960   24.08
          MAPE       0.822    0.006   0.883    0.040   0.709   0.974   2.712
BG_0.7    MSFE       0.757    0.031   0.793    0.001   0.503   0.956   43.19
          MAPE       0.810    0.007   0.870    0.036   0.684   0.971   3.517
Note: For each of the 1428 variables, the methods in the first column are used to combine the 24 candidate forecasts over the last 9 of the 18 evaluation periods. The mean squared forecast error (MSFE) and mean absolute percentage error (MAPE) of each method are recorded, and the ratios of these MSFEs and MAPEs over those of the $SA$ are used as the relative performances of the competing methods. For each forecast combination method other than the $SA$, the mean (with its standard error), median, minimum, first quartile, third quartile and maximum of these ratios over the 1428 series are summarized in the table. For each method, the MSFE-based summaries are given in the first row and the MAPE-based summaries in the second row.
For the heavy-tailed subset, we focus on the comparison between the g-AFTER and the non-AFTER methods, because the comparisons within the AFTER family are already well addressed in the simulation settings. We choose the g-AFTER rather than the t-AFTER for this further comparison because the g-AFTER is the more practical choice: it performs well even when the evidence of heavy tails is not very strong. Therefore, for this subset, the benchmark method is the g-AFTER, and the results are reported in Table 6. The comparisons under both the MSFE and the MAPE are provided in the table.
Table 6. Results on the heavy-tailed subset.
Method | Metric | Mean | Se | Median | Min | $Q_1$ | $Q_3$ | Max
$SA$ | MSFE | 7.738 | 1.695 | 2.259 | 0.131 | 1.311 | 5.244 | 82.734
$SA$ | MAPE | 2.044 | 0.166 | 1.422 | 0.327 | 1.056 | 2.147 | 25.784
$MD$ | MSFE | 8.088 | 2.005 | 1.912 | 0.222 | 1.162 | 4.974 | 120.428
$MD$ | MAPE | 1.998 | 0.153 | 1.406 | 0.477 | 1.030 | 2.055 | 21.229
$TM$ | MSFE | 7.607 | 1.664 | 2.299 | 0.129 | 1.267 | 5.175 | 78.481
$TM$ | MAPE | 2.014 | 0.165 | 1.416 | 0.316 | 1.035 | 2.150 | 26.039
$BG$ | MSFE | 2.073 | 0.245 | 1.266 | 0.245 | 0.961 | 2.160 | 40.137
$BG$ | MAPE | 1.349 | 0.053 | 1.157 | 0.468 | 0.971 | 1.565 | 7.845
$BG_{0.95}$ | MSFE | 2.017 | 0.217 | 1.431 | 0.241 | 0.965 | 2.472 | 12.551
$BG_{0.95}$ | MAPE | 1.322 | 0.048 | 1.154 | 0.465 | 0.965 | 1.525 | 6.703
$BG_{0.9}$ | MSFE | 1.846 | 0.182 | 1.337 | 0.208 | 0.958 | 2.444 | 10.383
$BG_{0.9}$ | MAPE | 1.295 | 0.043 | 1.114 | 0.461 | 0.954 | 1.497 | 5.655
$BG_{0.8}$ | MSFE | 1.656 | 0.150 | 1.340 | 0.179 | 0.851 | 2.074 | 8.577
$BG_{0.8}$ | MAPE | 1.246 | 0.036 | 1.100 | 0.454 | 0.940 | 1.448 | 3.985
$BG_{0.7}$ | MSFE | 1.536 | 0.141 | 1.256 | 0.158 | 0.813 | 1.673 | 7.746
$BG_{0.7}$ | MAPE | 1.202 | 0.032 | 1.089 | 0.431 | 0.928 | 1.371 | 3.461
Note: Out of the 1428 variables, 199 are identified as having heavy tails in the forecast errors. For these 199 variables, the g-AFTER is used as the benchmark method for comparison, and all other aspects of the setting are the same as in Table 5.

5.2. Results

As Table 5 shows, the overall performances of the AFTER-based methods are better than those of the other popular combination methods considered, with MSFEs that are at least 6% to 7% smaller on average. The table also suggests that the t- and g-AFTERs are competitive in general while being more robust than the others: their overall performances are outstanding, and even their worst cases remain acceptable, as seen from the last column.
In the scenario in which the DGPs are believed to have heavy-tailed distributions, the g-AFTER is significantly better than the non-AFTER methods, with an MSFE about 33% smaller than that of the best competitor on average, as seen in Table 6. Therefore, the robustness of the g-AFTER is supported by the M3-competition data.
Table 5 also shows that the AFTERs can occasionally be significantly worse than the $SA$ and the other methods. It is worth noticing, however, that the AFTERs can be up to a thousand times better than the $SA$ while at worst only about ten times worse. An examination reveals that for certain variables, such as N1837 and N2217, some candidate forecasters are consistently and substantially worse than the others. In this situation, since the $SA$ cannot remove the extreme “disturbing” forecasts before averaging, its performance is extremely poor, whereas the AFTERs essentially ignore the “unreasonable” candidate forecasts and can therefore be significantly better than the $SA$.
From Table 5 and Table 6, it is clear that the advantages of the t- and g-AFTER hold under both MSFE and MAPE.

6. Conclusions

Forecast combination is an important tool for achieving better forecasting accuracy when multiple candidate forecasters are available. Although many popular forecast combination methods do not necessarily exclude heavy-tailed situations, little work in the literature examines the performance of forecast combination methods in such situations with theoretical characterizations.
In this paper, we propose combination methods designed for cases in which the forecast errors exhibit heavy-tailed behaviors that can be modeled by a scaled Student's t-distribution and for cases in which the heaviness of the tails of the forecast errors is not easy to identify. The t-AFTER models the heavy-tailed random errors with scaled Student's t-distributions with unknown (or known) degrees of freedom and scale parameters. A candidate pool of degrees of freedom is proposed to handle the estimation problem, and the resulting t-AFTER works well, as seen in the simulations and the real example.
However, in many cases, the heaviness of the tails of the random errors is difficult to identify. We therefore design a combination process for general use and call it the g-AFTER. For these situations, instead of assuming a particular distributional form for the random errors, a set of possible tail-heaviness levels is considered, and the combination process automatically decides which ones are more reasonable by giving them high weights. The numerical results suggest that the performance of the g-AFTER is more robust than that of other popular combination methods because of this adaptive capability. The design of the g-AFTER also conveys a general idea: when there are multiple reasonable candidate distributions for the random errors, combining them in an AFTER scheme like the g-AFTER should work well for forecast combination.
In the present numerical work, the numbers of candidate forecasts considered are relatively small. In some situations, there is a large number of candidate forecasts to begin with. It has been shown in the literature that proper screening before combining can be beneficial, and information criteria can be used to choose top performers to be combined (see, e.g., Yuan & Yang [29] and Zhang et al. [23]). Alternatively, one may also use model confidence sets (see Hansen et al. [30] and Ferrari & Yang [31]) to narrow the pool of candidates before applying a combining method. Samuels & Sekkel [32] provide an interesting comparative study on the effect of screening via the model confidence set of Hansen et al. [30], which shows that removing poor candidates indeed improves the final performance of the combined forecast. In the future, it will be useful to investigate how the t- and g-AFTER methods behave when a screening step is applied before combining.

Acknowledgments

We thank the associate editor and two reviewers for their very helpful comments and suggestions for improving the paper. This work was partially supported by the U.S. National Science Foundation Grant DMS-1106576. We thank the Minnesota Supercomputing Institute for providing computing resources.

Author Contributions

All the authors contributed to the formulation of the problem, its solution, numerical work and the writing of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

A.1

In this subsection, some simple facts are given. They are used in Subsection A.2 of the Appendix.
  • Fact 1: $1-(1-t)^a \le \frac{at}{1-t}$ for $a \ge 0$ and $0 \le t < 1$. Let $f(t,a) = 1-(1-t)^a - \frac{at}{1-t}$; then $f(t,a) \le 0$, since $\partial f/\partial t = a(1-t)^{-2}\left((1-t)^{a+1}-1\right) \le 0$ and $f(0,a)=0$.
  • Fact 2: $\log(x) \le x-1$ for $x > 0$.
  • Fact 3: For any $c > 0$, $B(a,b)/B(a,b+c)$ decreases as $b$ increases. The proof is pure arithmetic; the key point is the product representation $B(x,y) = \frac{x+y}{xy}\prod_{n=1}^{\infty}\left(1+\frac{xy}{n(x+y+n)}\right)^{-1}$.
  • Fact 4: $E\left(1+\frac{Y^2}{\nu}\right)^{-1} = \frac{\nu}{\nu+1}$, where $Y \sim t_\nu$ conditional on $\nu$. Let $Z = Y\sqrt{(\nu+2)/\nu}$; then it is easy to show that $E\left(1+\frac{Y^2}{\nu}\right)^{-1} = \frac{B(1/2,(\nu+2)/2)}{B(1/2,\nu/2)} = \frac{\nu}{\nu+1}$. (A quick numerical check of this identity is sketched after this list.)
  • Fact 5: $\frac{s^2-1}{2} - \log(s) \le \frac{s_0+2}{2s_0}(1-s)^2$ if $s \ge s_0 > 0$. Use Fact 2 to show that $-\log(s) = \log\left(1+\frac{1-s}{s}\right) \le \frac{1-s}{s}$.
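As a sanity check on Fact 4 (not part of the original argument), the identity can be verified by simulation; the following minimal snippet is added purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
for nu in (3.0, 5.0, 10.0):
    y = rng.standard_t(df=nu, size=1_000_000)      # Y ~ t_nu
    lhs = np.mean(1.0 / (1.0 + y ** 2 / nu))        # Monte Carlo E(1 + Y^2/nu)^(-1)
    rhs = nu / (nu + 1.0)                            # claimed closed form
    print(f"nu = {nu}: simulated {lhs:.4f} vs nu/(nu+1) = {rhs:.4f}")
```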

A.2

Lemma 1. Let $h_\nu(x)$ be the density function of $t_\nu$, and let $\underline{\nu} > 0$ and $\lambda > 0$ be constants. Then, for any $0 < s_0 \le s$, $\underline{\nu} \le \min(\nu,\nu') - 2$, $\max(\nu,\nu') \le \bar{\nu}$ and $|\nu-\nu'| \le \lambda$, we have:
$$\int h_\nu(x)\log\frac{h_\nu(x)}{\frac{1}{s}h_{\nu'}\!\left(\frac{x-t}{s}\right)}\,dx \;\le\; C_1(1-s)^2 + C_2 t^2 + C_3\,\frac{|\nu-\nu'|}{\nu},$$
where $C_1$, $C_2$ and $C_3$ are constants depending on $s_0$, $\underline{\nu}$, $\bar{\nu}$ and $\lambda$.
Proof. After a proper reorganization, we have:
$$E\log\frac{h_\nu(X)}{\frac{1}{s}h_{\nu'}\!\left(\frac{X-t}{s}\right)} = \log(s) + \frac{1}{2}\log\frac{\nu'}{\nu} + \log\frac{B(\frac12,\frac{\nu'}{2})}{B(\frac12,\frac{\nu}{2})} + E\left[\frac{1+\nu'}{2}\log\!\left(1+\frac{(X-t)^2}{s^2\nu'}\right) - \frac{1+\nu}{2}\log\frac{X^2+\nu}{\nu}\right].$$
  • Let $\nu^* = \min(\nu,\nu')$; using Facts 1, 2 and 3, then:
$$\log\frac{B(\frac12,\frac{\nu'}{2})}{B(\frac12,\frac{\nu}{2})} \le \frac{\big|B(\frac12,\frac{\nu'}{2})-B(\frac12,\frac{\nu}{2})\big|}{B(\frac12,\frac{\nu}{2})} = \frac{\int_0^1 t^{-1/2}(1-t)^{\nu^*/2-1}\big(1-(1-t)^{|\nu-\nu'|/2}\big)\,dt}{B(\frac12,\frac{\nu}{2})} \le \frac{\frac{|\nu-\nu'|}{2}\int_0^1 t^{1/2}(1-t)^{\nu^*/2-2}\,dt}{B(\frac12,\frac{\nu}{2})} = \frac{|\nu-\nu'|}{2}\,\frac{B(\frac32,\frac{\nu^*-2}{2})}{B(\frac12,\frac{\nu}{2})}$$
$$= \frac{|\nu-\nu'|}{2}\,\frac{B(\frac32,\frac{\nu^*-2}{2})}{B(\frac12,\frac{\nu^*-2}{2})}\,\frac{B(\frac12,\frac{\nu^*-2}{2})}{B(\frac12,\frac{\nu}{2})} \le \frac{|\nu-\nu'|}{2}\,\frac{1}{\nu^*-1}\,\frac{B(\frac12,\frac{\underline{\nu}}{2})}{B(\frac12,\frac{\underline{\nu}+2}{2})} \le \frac{\underline{\nu}+\lambda}{\underline{\nu}+1}\,\frac{B(\frac12,\frac{\underline{\nu}}{2})}{B(\frac12,\frac{\underline{\nu}+2}{2})}\,\frac{|\nu-\nu'|}{\nu},$$
so this term is bounded by a constant depending only on $\underline{\nu}$ and $\lambda$ times $|\nu-\nu'|/\nu$.
  • Using Fact 2 in Subsection A.1, it follows that $\frac12\log\frac{\nu'}{\nu} \le \frac12\,\frac{\nu'-\nu}{\nu} \le \frac12\,\frac{|\nu-\nu'|}{\nu}$.
  • It is easy to show that:
$$E\left[\log(s) + \frac{1+\nu'}{2}\log\!\left(1+\frac{(X-t)^2}{s^2\nu'}\right) - \frac{1+\nu}{2}\log\!\left(1+\frac{X^2}{\nu}\right)\right] = E\left[\log(s) - (1+\nu')\log(s) + \frac{1+\nu'}{2}\log\frac{s^2+\frac{(X-t)^2}{\nu'}}{1+\frac{X^2}{\nu}} + \frac{\nu'-\nu}{2}\log\!\left(1+\frac{X^2}{\nu}\right)\right]$$
$$\le -\nu'\log(s) + E\left[\frac{1+\nu'}{2}\,\frac{s^2-1+\frac{(X-t)^2}{\nu'}-\frac{X^2}{\nu}}{1+\frac{X^2}{\nu}} + \frac{X^2|\nu-\nu'|}{\nu}\right] \le (2+\bar{\nu})\,\frac{2+s_0}{2s_0}(1-s)^2 + \frac{\underline{\nu}+3}{\underline{\nu}+2}\,t^2 + C_3^*\,\frac{|\nu-\nu'|}{\nu},$$
    where $C_3^*$ is a constant depending on $s_0$, $\underline{\nu}$, $\bar{\nu}$ and $\lambda$.
Note that if $\nu$ is known, then $\nu' = \nu$, and
$$E\log\frac{h_\nu(X)}{\frac{1}{s}h_\nu\!\left(\frac{X-t}{s}\right)} \le \nu\,\frac{2+s_0}{2s_0}(1-s)^2 + \frac12 t^2.$$
The proof can be completed by combining these steps. ☐
Lemma 2. Let $h(x)$ be the density function of a double-exponential distribution with $\mu = 0$ and $d = 1$. Then, for $s_0 > 0$ and $s \ge s_0$, it follows that:
$$\int h(x)\log\frac{h(x)}{\frac{1}{s}h\!\left(\frac{x-t}{s}\right)}\,dx \le C_4(1-s)^2 + C_5 t^2,$$
where $C_4$ and $C_5$ are constants depending only on $s_0$.
Proof. Since $h(y) = \frac12\exp(-|y|)$ and $\exp(-x) \le 1-x+\frac{x^2}{2}$ for $x \ge 0$, then:
$$E\log\frac{h(Y)}{\frac{1}{s}h\!\left(\frac{Y-t}{s}\right)} = \log(s) + E\,\frac{|Y-t|}{s} - E|Y| = \log(s) + \frac{\exp(-t)+t}{s} - 1 \le (s-1) + \frac{1+t^2/2}{s} - 1 = \frac{t^2}{2s} + (1-s)^2\,\frac{1}{s} \le \frac{t^2}{2s_0} + \frac{1}{s_0}(1-s)^2.$$
Lemma 3. Let $h(y)$ be the density function of the standard normal distribution. Then, for $s_0 > 0$ and $s \ge s_0$, it follows that:
$$\int h(x)\log\frac{h(x)}{\frac{1}{s}h\!\left(\frac{x-t}{s}\right)}\,dx \le C_6(1-s)^2 + C_7 t^2,$$
where $C_6$ and $C_7$ are constants depending only on $s_0$.
Proof. Using Fact 2,
$$E\log\frac{h(Y)}{\frac{1}{s}h\!\left(\frac{Y-t}{s}\right)} = \log(s) + \frac{1+t^2-s^2}{2s^2} = \frac{1}{2s^2}t^2 + \log(s) + \frac{1-s^2}{2s^2} \le \frac{1}{2s^2}t^2 + (s-1) + \frac{1-s^2}{2s^2} = \frac{1}{2s^2}t^2 + \frac{2s+1}{2s^2}(s-1)^2 \le \frac{1}{2s_0^2}t^2 + \frac{2s_0+1}{2s_0^2}(s-1)^2.$$

A.3

In this subsection, we prove Theorem 1.
Conditional on the information available up to time point $i$, it is assumed that $\frac{Y_i - m_i}{s_i} \sim t_\nu$, where $s_i$ is the conditional scale parameter at time $i$. Let $\hat{s}_{i,j}$ be the estimator of $s_i$ from the $j$-th forecaster.
Let $f_n = \prod_{i=i_0+1}^{i_0+n}\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)$ and $q_n = \sum_{j=1}^{J}\pi_j\prod_{i=i_0+1}^{i_0+n}\frac{1}{\hat{s}_{i,j}}h\!\left(\frac{y_i-\hat{y}_{i,j}}{\hat{s}_{i,j}}\right)$, where $h(\cdot)$ is the density function of $t_\nu$ and $\pi_j$ is the initial combining weight of the $j$-th forecaster. Thus, $q_n$ serves as an estimator of $f_n$.
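As an aside, the mixture $q_n$ can be maintained sequentially by accumulating each candidate's log-likelihood as new observations arrive; the weights used to combine the forecasts at each period are then the normalized cumulative products. The following Python sketch illustrates this structure under a scaled-t error density with a fixed degree of freedom; it is an illustration of the weighting scheme implied by $q_n$ (with made-up function and argument names), not the authors' exact t-AFTER implementation.

```python
import numpy as np
from scipy.stats import t as student_t

def after_weights_t(y, forecasts, scales, nu, prior=None):
    """Sequential weights w_{i,j} proportional to
    prior_j * prod_{l<i} (1/s_hat_{l,j}) * h_nu((y_l - yhat_{l,j}) / s_hat_{l,j}),
    accumulated in log space for numerical stability.
    y: (n,) realized values; forecasts, scales: (n, J) candidate forecasts and scale estimates."""
    n, J = forecasts.shape
    log_w = np.log(np.full(J, 1.0 / J) if prior is None else np.asarray(prior, float))
    weights = np.empty((n, J))
    for i in range(n):
        w = np.exp(log_w - log_w.max())
        weights[i] = w / w.sum()                  # weights used to combine forecast i
        # after y[i] is observed, update each candidate's cumulative log "likelihood"
        log_w += student_t.logpdf(y[i], df=nu, loc=forecasts[i], scale=scales[i])
    return weights
```

The combined forecast for period $i$ would then be $\sum_j w_{i,j}\hat{y}_{i,j}$.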
Then, for any $1 \le j \le J$,
$$\log(f_n/q_n) \le \log\frac{\prod_{i=i_0+1}^{i_0+n}\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)}{\pi_j\prod_{i=i_0+1}^{i_0+n}\frac{1}{\hat{s}_{i,j}}h\!\left(\frac{y_i-\hat{y}_{i,j}}{\hat{s}_{i,j}}\right)} = \log\frac{1}{\pi_j} + \sum_{i=i_0+1}^{i_0+n}\log\frac{\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)}{\frac{1}{\hat{s}_{i,j}}h\!\left(\frac{y_i-\hat{y}_{i,j}}{\hat{s}_{i,j}}\right)}.$$
Conditional on all of the information before time point $i$,
$$E_i\log\frac{\frac{1}{s_i}h\!\left(\frac{Y_i-m_i}{s_i}\right)}{\frac{1}{\hat{s}_{i,j}}h\!\left(\frac{Y_i-\hat{y}_{i,j}}{\hat{s}_{i,j}}\right)} = \int\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)\log\frac{\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)}{\frac{1}{\hat{s}_{i,j}}h\!\left(\frac{y_i-\hat{y}_{i,j}}{\hat{s}_{i,j}}\right)}\,dy_i = \int h(x)\log\frac{h(x)}{\frac{1}{\hat{s}_{i,j}/s_i}\,h\!\left(\frac{x-(\hat{y}_{i,j}-m_i)/s_i}{\hat{s}_{i,j}/s_i}\right)}\,dx.$$
By Lemma 1 in Subsection A.2,
$$E_i\log\frac{\frac{1}{s_i}h\!\left(\frac{Y_i-m_i}{s_i}\right)}{\frac{1}{\hat{s}_{i,j}}h\!\left(\frac{Y_i-\hat{y}_{i,j}}{\hat{s}_{i,j}}\right)} \le \frac{(\hat{y}_{i,j}-m_i)^2}{2s_i^2} + B_1\,\frac{(\hat{s}_{i,j}-s_i)^2}{s_i^2},$$
where $B_1 = \nu\,\frac{2+s_0}{2s_0}$. Therefore,
$$\frac{1}{n}\sum_{i=i_0+1}^{i_0+n}E\,D(q_i\,\|\,\hat{q}_i^{A_t}) \le \inf_{1\le j\le J}\left\{\frac{\log\frac{1}{w_j^{A_t}}}{n} + \frac{1}{n}\sum_{i=i_0+1}^{i_0+n}E\,\frac{(\hat{y}_{i,j}-m_i)^2}{2s_i^2} + \frac{B_1}{n}\sum_{i=i_0+1}^{i_0+n}E\,\frac{(\hat{s}_{i,j}-s_i)^2}{s_i^2}\right\}.$$
From Theorem 1 of Yang [8], there exists a constant $C$ depending on the parameters in Conditions 1 and 2, such that
$$E\,D(q_i\,\|\,\hat{q}_i^{A_t}) \ge \frac{1}{C}\,E\,\frac{(m_i-\hat{y}_i^{A_t})^2}{\sigma_i^2}.$$
Therefore,
$$\frac{1}{n}\sum_{i=i_0+1}^{i_0+n}E\,\frac{(m_i-\hat{y}_i^{A_t})^2}{\sigma_i^2} \le C\,\inf_{1\le j\le J}\left\{\frac{\log\frac{1}{w_j^{A_t}}}{n} + \frac{B_2}{n}\sum_{i=i_0+1}^{i_0+n}E\,\frac{(\hat{y}_{i,j}-m_i)^2}{\sigma_i^2} + \frac{B_3}{n}\sum_{i=i_0+1}^{i_0+n}E\,\frac{(\hat{s}_{i,j}-s_i)^2}{s_i^2}\right\},$$
where $B_2$ is a function of $\nu$, and $B_3$ is derived in the same way as $B_1$, but under Condition 2′ instead of Condition 2.

A.4

The essential part of the proof of Theorem 2 is provided in this subsection. We only provide the steps of the proof for the case in which the random errors are scaled Student's t-distributed, since the proofs for the other situations are similar.
Let $\hat{s}_{i,j,k}$ be the estimator of $s_i$ from the $j$-th forecaster assuming $\nu_k$ is the true degree of freedom. If Condition 4 holds, then obviously:
$$q_n \ge \sum_{k=1}^{K}\sum_{j=1}^{J}\frac{c_2 w_{j,k}^{A_t}}{G}\prod_{i=i_0+1}^{i_0+n}\frac{1}{\hat{s}_{i,j,k}}h_{\nu_k}\!\left(\frac{y_i-\hat{y}_{i,j}}{\hat{s}_{i,j,k}}\right).$$
Therefore, for any $j^*$ and $k^*$,
$$\log\frac{f_n}{q_n} \le \log\frac{\prod_{i=i_0+1}^{i_0+n}\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)}{\frac{c_2 w_{j^*,k^*}^{A_t}}{G}\prod_{i=i_0+1}^{i_0+n}\frac{1}{\hat{s}_{i,j^*,k^*}}h_{\nu_{k^*}}\!\left(\frac{y_i-\hat{y}_{i,j^*}}{\hat{s}_{i,j^*,k^*}}\right)} = \log\frac{G}{c_2 w_{j^*,k^*}^{A_t}} + \sum_{i=i_0+1}^{i_0+n}\log\frac{\frac{1}{s_i}h\!\left(\frac{y_i-m_i}{s_i}\right)}{\frac{1}{\hat{s}_{i,j^*,k^*}}h_{\nu_{k^*}}\!\left(\frac{y_i-\hat{y}_{i,j^*}}{\hat{s}_{i,j^*,k^*}}\right)}.$$
Similarly, by Lemma 1 in Subsection A.2,
$$E_i\log\frac{\frac{1}{s_i}h\!\left(\frac{Y_i-m_i}{s_i}\right)}{\frac{1}{\hat{s}_{i,j^*,k^*}}h_{\nu_{k^*}}\!\left(\frac{Y_i-\hat{y}_{i,j^*}}{\hat{s}_{i,j^*,k^*}}\right)} \le B_1\,\frac{(\hat{y}_{i,j^*}-m_i)^2}{\sigma_i^2} + B_2\,\frac{(\hat{s}_{i,j^*,k^*}-s_i)^2}{s_i^2} + B_3\,\frac{|\nu_{k^*}-\nu|}{\nu}.$$
The rest of the proof is similar to that of Theorem 1.

References

  1. J.M. Bates, and C.W.J. Granger. “The combination of forecasts.” OR 20 (1969): 451–468. [Google Scholar] [CrossRef]
  2. R.T. Clemen. “Combining forecasts: A review and annotated bibliography.” Int. J. Forecast. 5 (1989): 559–583. [Google Scholar] [CrossRef]
  3. P. Newbold, and D.I. Harvey. “Forecast combination and encompassing.” In A Companion to Economic Forecasting. Edited by M.P. Clements and D.F. Hendry. Malden, MA, USA: WILEY, 2002, pp. 268–283. [Google Scholar]
  4. A. Timmermann. “Forecast combinations.” In Handbook of Economic Forecasting. Amsterdam, The Netherlands: NORTH-HOLLAND, 2006, Volume 1, pp. 135–196. [Google Scholar]
  5. K. Lahiri, H. Peng, and Y. Zhao. “Machine Learning and Forecast Combination in Incomplete Panels.” 2013. Available online: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2359523 (accessed on 10 October 2015).
  6. J.S. Armstrong, K.C. Green, and A. Graefe. “Golden rule of forecasting: Be conservative.” J. Bus. Res. 68 (2015): 1717–1731. [Google Scholar] [CrossRef]
  7. K.C. Green, and J.S. Armstrong. “Simple versus complex forecasting: The evidence.” J. Bus. Res. 68 (2015): 1678–1685. [Google Scholar] [CrossRef]
  8. Y. Yang. “Combining forecasting procedures: Some theoretical results.” Econom. Theory 20 (2004): 176–222. [Google Scholar] [CrossRef]
  9. C. Marinelli, S. Rachev, and R. Roll. “Subordinated exchange rate models: Evidence for heavy tailed distributions and long-range dependence.” Math. Comput. Model. 34 (2001): 955–1001. [Google Scholar] [CrossRef]
  10. A.C. Harvey. Dynamic Models for Volatility and Heavy Tails: With Applications to Financial and Economical Time Series. New York, NY, USA: Cambridge University Press, 2013, p. 69. [Google Scholar]
  11. H. Zou, and Y. Yang. “Combining time series models for forecasting.” Int. J. Forecast. 20 (2004): 69–84. [Google Scholar] [CrossRef]
  12. X. Wei, and Y. Yang. “Robust forecast combinations.” J. Econom. 166 (2012): 224–236. [Google Scholar] [CrossRef]
  13. C. Fernandez, and M.F.J. Steel. “Multivariate Student-t regression models: Pitfalls and inference.” Biometrika 86 (1999): 153–167. [Google Scholar] [CrossRef]
  14. T.C.O. Fonseca, M.A.R. Ferreira, and H.S. Migon. “Objective bayesian analysis for the Student-t regression model.” Biometrika 95 (2008): 325–333. [Google Scholar] [CrossRef]
  15. R. Kan, and G. Zhou. Modeling Non-Normality Using Multivariate T: Implications for Asset Pricing. Technical Report; Toronto, ON, Canada: Rotman School of Management, University of Toronto, 2003. [Google Scholar]
  16. C.W.J. Granger, and R. Ramanathan. “Improved methods of forecasting.” J. Forecast. 3 (1984): 197–204. [Google Scholar] [CrossRef]
  17. A. Sancetta. “Recursive forecast combination for dependent heterogeneous data.” Econom. Theory 26 (2010): 598–631. [Google Scholar] [CrossRef]
  18. G. Cheng, and Y. Yang. “Forecast combination with outlier protection.” Int. J. Forecast. 31 (2015): 223–237. [Google Scholar] [CrossRef]
  19. S. Makridakis, and M. Hibon. “The M3-Competition: Results, conclusions and implications.” Int. J. Forecast. 16 (2000): 451–476. [Google Scholar] [CrossRef]
  20. A. Inoue, and L. Kilian. “How useful is bagging in forecasting economic time series? A case study of U.S. consumer price inflation.” J. Am. Stat. Assoc. 103 (2008): 511–522. [Google Scholar] [CrossRef]
  21. I. Sanchez. “Adaptive combination of forecasts with application to wind energy.” Int. J. Forecast. 24 (2008): 679–693. [Google Scholar] [CrossRef]
  22. C. Altavilla, and P. de Grauwe. “Forecasting and combining competing models of exchange rate determination.” Appl. Econ. 42 (2010): 3455–3480. [Google Scholar] [CrossRef]
  23. X. Zhang, Z. Lu, and G. Zou. “Adaptively combined forecasting for discrete response time series.” J. Econom. 176 (2013): 80–91. [Google Scholar] [CrossRef]
  24. C.K. Ing. “Accumulated prediction errors, information criteria and optimal forecasting for autoregressive time series.” Ann. Stat. 35 (2007): 1238–1277. [Google Scholar] [CrossRef]
  25. C.K. Ing, C.-Y. Sin, and S.-H. Yu. “Model selection for integrated autoregressive processes of infinite order.” J. Multivar. Anal. 106 (2012): 57–71. [Google Scholar] [CrossRef]
  26. B.E. Hansen. “Least squares forecast averaging.” J. Econom. 146 (2008): 342–350. [Google Scholar] [CrossRef]
  27. J.H. Stock, and M.W. Watson. “Forecasting with many predictors.” In Handbook of Economic Forecasting. Amsterdam, The Netherlands: NORTH-HOLLAND, 2006, Volume 1, pp. 515–554. [Google Scholar]
  28. “M3-Competition data.” Available online: http://forecasters.org/resources/time-series-data/m3-competition/ (accessed on 9 October 2015).
  29. Z. Yuan, and Y. Yang. “Combining Linear Regression Models: When and How? ” J. Am. Stat. Assoc. 100 (2005): 1202–1204. [Google Scholar] [CrossRef]
  30. P. Hansen, A. Lunde, and J. Nason. “The model confidence set.” Econometrica 79 (2011): 453–497. [Google Scholar] [CrossRef]
  31. D. Ferrari, and Y. Yang. “Confidence sets for model selection by F-testing.” Stat. Sin. 25 (2015): 1637–1658. [Google Scholar] [CrossRef]
  32. J.D. Samuels, and R.M. Sekkel. “Forecasting with Many Models: Model Confidence Sets and Forecast Combination.” Working Paper. 2013. Available online: http://www.bankofcanada.ca/wp-content/uploads/2013/04/wp2013-11.pdf (accessed on 10 October 2015).
