Forecast Combination under Heavy-Tailed Errors
Abstract
:1. Introduction
2. The t-AFTER Methodology
2.1. Problem Setting
2.2. The Existing AFTER Methods: The - and -AFTER Methods
2.3. The t-AFTER Methods
- We decide a pool of candidate degrees of freedom with size K. The elements in the pool are considered to be close to the degrees of freedom of the Students’ t-distribution that describes the random errors well. For each element in the set, we assume it is the true degrees of freedom to estimate the related scale parameter. Therefore, we have K sets of estimate for the degrees of freedom and scale parameter pair.
- For each of the K sets of the estimate, we find its probability to be the true one based on the relative historical performances.
- Estimate (e.g., by MLE) for each and for each candidate forecaster. The estimate for from the j-th forecaster given is denoted as .
- Calculate and :
2.4. Risk Bounds of the t-AFTER
2.4.1. Conditions
2.4.2. Risk Bounds for the t-AFTER with a Known ν
- When only Condition 2 is satisfied, Theorem 1 shows that the cumulative distance between the true densities and their estimators from the t-AFTER is upper bounded by the cumulative (standardized) forecast errors of the best candidate forecaster plus a penalty that has two parts: the squared relative estimation errors of the scale parameters and the logarithm of the initial weights. This risk bound is obtained without assuming the existence of the variances of the random errors, and is only required to be lower bounded.
- When ν is assumed to be strictly larger than two and both Conditions 1 and 2 are satisfied, Theorem 1 shows that the cumulative forecast errors have the same convergence rate of the cumulative forecast errors of the best candidate forecaster plus a penalty that depends on the initial weights and efficiency of scale parameter estimation. The risk bounds hold even if the the distribution of random errors has tails as heavy as .
- If there is no prior information to decide the ’s in (6), then equal initial weights could be applied. That is, for all j. In this case, it is easy to see that the number of candidate forecasters plays a role in the penalty. When the candidate pool is large, some preliminary analysis should be done to eliminate the significantly less competitive ones before applying the t-AFTER.
3. The g-AFTER Methodology
3.1. The g-AFTER Method
3.2. Conditions
3.3. Risk Bounds for the g-AFTER
- Theorem 2 provides a risk bound for more general situations compared to Theorem 1. That is, as long as the the true random errors are from one of the three popular families, similar risk bounds hold.
- When strong evidence is shown that the errors are highly heavy tailed, Ω can be very small with only small degrees of freedom, and the in G can be relatively large (relative to and ). The more information on the tails of the error distributions is available, the more efficient the allocation of the initial weights can be.
- Specially, when the true random errors have tails significantly heavier than normal and double-exponential, they could be assumed to be from a scaled Student’s t-distribution with unknown ν, and a (general) t-AFTER procedure is more reasonable. In this case, .Let and and for all j and k. Without assuming Condition 1 is satisfied, it follows for any :If Condition 1 is also satisfied, then it follows:
4. Simulations
- Use as the set of candidate degrees of freedom for the scaled Student’s t-distributions considered in the t-AFTER method. The t-AFTER is proposed mostly to be applied when the error terms exhibit very strong heavy-tailed behaviors. When the degrees of freedom of the Student’s t-distribution gets larger, the t-AFTER becomes similar to the - or -AFTER. Thus, a choice of Ω with relatively small degrees of freedom in the g-AFTER should provide a good enough adaption capability. In fact, other options for Ω, such as , were considered, and similar results were found.
- Since it is usually the case that g-AFTER is preferred when the users have no consistent and strong evidence to identify the distribution of the error terms from the three candidate distribution families, we give equal initial weights to the candidate distributions. Therefore, , , and are used in the g-AFTER. Note that, for example, if there is clear and consistent evidence that the error distribution is more likely to be from the normal distribution family, then putting relatively large initial weights on the -AFTER procedure in a g-AFTER can be more appropriate than using equal weights.
- The ’s are the sample median of the absolute forecast errors before time point i from the forecaster j divided by the theoretical median of the absolute value of a random variable with distribution .
4.1. Linear Regression Models
4.1.1. Simulation Settings
4.1.2. Results
1.302 | 1.043 | 1.116 | 1.028 | 0.983 | 0.958 | 0.926 | 0.931 | |
(0.009) | (0.003) | (0.004) | (0.001) | (0.003) | (0.001) | (0.002) | (0.001) | |
0.943 | 0.980 | 0.983 | 0.995 | 0.941 | 0.955 | 0.932 | 0.942 | |
(0.002) | (0.001) | (0.001) | (0.001) | (0.003) | (0.001) | (0.001) | (0.001) | |
0.944 | 0.967 | 0.974 | 0.977 | 0.940 | 0.950 | 0.926 | 0.938 | |
(0.002) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | |
1.257 | 1.066 | 1.088 | 1.026 | 0.980 | 0.955 | 0.937 | 0.927 | |
(0.008) | (0.004) | (0.003) | (0.001) | (0.002) | (0.001) | (0.002) | (0.001) | |
0.950 | 0.967 | 0.976 | 0.982 | 0.951 | 0.950 | 0.943 | 0.938 | |
(0.002) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | |
0.951 | 0.958 | 0.971 | 0.970 | 0.949 | 0.944 | 0.939 | 0.933 | |
(0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | |
1.166 | 1.056 | 1.035 | 0.998 | 0.968 | 0.949 | 0.946 | 0.929 | |
(0.006) | (0.003) | (0.002) | (0.001) | (0.002) | (0.001) | (0.001) | (0.001) | |
0.950 | 0.957 | 0.964 | 0.965 | 0.949 | 0.946 | 0.948 | 0.939 | |
(0.002) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | |
0.945 | 0.949 | 0.961 | 0.955 | 0.944 | 0.939 | 0.942 | 0.933 | |
(0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) |
4.2. AR Models
4.2.1. Simulation Settings
4.2.2. Other Combination Methods
4.2.3. Results
0.941 | 0.940 | 0.940 | 0.972 | 0.972 | 0.971 | 1.030 | 1.032 | 1.033 | |
(0.004) | (0.004) | (0.004) | (0.004) | (0.003) | (0.003) | (0.004) | (0.003) | (0.004) | |
0.954 | 0.953 | 0.954 | 0.961 | 0.962 | 0.962 | 0.997 | 1.001 | 0.995 | |
(0.003) | (0.003) | (0.003) | (0.002) | (0.003) | (0.003) | (0.001) | (0.001) | (0.001) | |
0.948 | 0.947 | 0.948 | 0.957 | 0.959 | 0.958 | 0.978 | 0.983 | 0.976 | |
(0.003) | (0.004) | (0.004) | (0.003) | (0.003) | (0.003) | (0.002) | (0.001) | (0.002) | |
2.892 | 2.484 | 2.408 | 2.372 | 2.297 | 2.070 | 2.278 | 2.176 | 2.483 | |
(0.268) | (0.166) | (0.189) | (0.167) | (0.174) | (0.127) | (0.148) | (0.151) | (0.148) | |
1.681 | 2.025 | 1.824 | 1.884 | 1.874 | 1.421 | 1.740 | 1.602 | 1.943 | |
(0.137) | (0.191) | (0.187) | (0.243) | (0.197) | (0.076) | (0.137) | (0.144) | (0.168) | |
1.805 | 1.946 | 1.754 | 1.838 | 1.705 | 1.469 | 1.723 | 1.571 | 1.885 | |
(0.121) | (0.144) | (0.134) | (0.156) | (0.138) | (0.066) | (0.109) | (0.093) | (0.120) | |
1.441 | 1.462 | 1.389 | 1.425 | 1.364 | 1.321 | 1.431 | 1.357 | 1.500 | |
(0.047) | (0.051) | (0.047) | (0.042) | (0.040) | (0.032) | (0.046) | (0.035) | (0.045) | |
1.432 | 1.453 | 1.381 | 1.417 | 1.358 | 1.315 | 1.427 | 1.353 | 1.495 | |
(0.047) | (0.050) | (0.047) | (0.042) | (0.040) | (0.032) | (0.045) | (0.035) | (0.045) | |
1.429 | 1.449 | 1.378 | 1.414 | 1.355 | 1.313 | 1.425 | 1.352 | 1.492 | |
(0.047) | (0.049) | (0.047) | (0.042) | (0.039) | (0.032) | (0.045) | (0.035) | (0.045) | |
1.433 | 1.452 | 1.382 | 1.417 | 1.357 | 1.315 | 1.427 | 1.353 | 1.491 | |
(0.047) | (0.050) | (0.047) | (0.042) | (0.040) | (0.032) | (0.045) | (0.035) | (0.044) | |
1.447 | 1.464 | 1.394 | 1.428 | 1.366 | 1.322 | 1.432 | 1.357 | 1.495 | |
(0.048) | (0.051) | (0.049) | (0.043) | (0.040) | (0.033) | (0.046) | (0.036) | (0.045) | |
7.956 | 8.355 | 8.491 | 8.856 | 10.210 | 9.138 | 11.110 | 11.240 | 10.040 | |
(0.346) | (0.339) | (0.342) | (0.387) | (1.032) | (0.363) | (0.504) | (0.509) | (0.513) | |
1.036 | 1.024 | 1.036 | 1.032 | 1.036 | 1.042 | 1.072 | 1.070 | 1.045 | |
(0.011) | (0.013) | (0.012) | (0.011) | (0.010) | (0.011) | (0.011) | (0.011) | (0.013) |
Log-Normal | ||||||
---|---|---|---|---|---|---|
1.058 | 1.056 | 1.053 | 0.964 | 1.024 | 1.051 | |
(0.009) | (0.008) | (0.008) | (0.003) | (0.004) | (0.010) | |
0.955 | 0.947 | 0.961 | 0.951 | 0.940 | 0.921 | |
(0.006) | (0.006) | (0.006) | (0.003) | (0.004) | (0.008) | |
0.950 | 0.943 | 0.957 | 0.950 | 0.946 | 0.926 | |
(0.006) | (0.006) | (0.006) | (0.003) | (0.004) | (0.008) | |
2.047 | 1.889 | 1.931 | 2.253 | 2.143 | 1.730 | |
(0.107) | (0.098) | (0.139) | (0.173) | (0.115) | (0.087) | |
1.692 | 1.396 | 1.657 | 1.517 | 1.441 | 1.370 | |
(0.135) | (0.066) | (0.182) | (0.097) | (0.085) | (0.078) | |
1.625 | 1.438 | 1.508 | 1.559 | 1.555 | 1.404 | |
(0.091) | (0.060) | (0.112) | (0.086) | (0.080) | (0.057) | |
1.369 | 1.307 | 1.286 | 1.329 | 1.374 | 1.278 | |
(0.034) | (0.025) | (0.033) | (0.039) | (0.038) | (0.025) | |
1.365 | 1.303 | 1.282 | 1.322 | 1.370 | 1.275 | |
(0.033) | (0.025) | (0.033) | (0.038) | (0.038) | (0.025) | |
1.360 | 1.299 | 1.277 | 1.319 | 1.367 | 1.271 | |
(0.033) | (0.025) | (0.032) | (0.037) | (0.037) | (0.024) | |
1.352 | 1.290 | 1.269 | 1.320 | 1.366 | 1.259 | |
(0.032) | (0.024) | (0.030) | (0.038) | (0.037) | (0.023) | |
1.345 | 1.284 | 1.263 | 1.327 | 1.368 | 1.248 | |
(0.032) | (0.023) | (0.030) | (0.039) | (0.037) | (0.023) | |
95.280 | 38.290 | 46.220 | 9.316 | 13.180 | 174.000 | |
(60.670) | (7.566) | (9.192) | (0.375) | (0.891) | (56.286) | |
1.014 | 1.007 | 1.016 | 1.046 | 1.032 | 0.974 | |
(0.010) | (0.010) | (0.010) | (0.011) | (0.011) | (0.010) |
Log-Normal | ||||||
---|---|---|---|---|---|---|
1.018 | 1.019 | 1.019 | 0.981 | 0.997 | 1.017 | |
(0.003) | (0.002) | (0.008) | (0.002) | (0.002) | (0.003) | |
0.990 | 0.988 | 0.993 | 0.982 | 0.976 | 0.975 | |
(0.002) | (0.002) | (0.002) | (0.002) | (0.002) | (0.003) | |
0.988 | 0.986 | 0.991 | 0.979 | 0.978 | 0.977 | |
(0.002) | (0.002) | (0.002) | (0.002) | (0.002) | (0.002) | |
1.469 | 1.666 | 1.724 | 1.435 | 1.543 | 1.483 | |
(0.064) | (0.076) | (0.080) | (0.069) | (0.064) | (0.064) | |
1.209 | 1.314 | 1.412 | 1.129 | 1.279 | 1.196 | |
(0.043) | (0.068) | (0.094) | (0.035) | (0.060) | (0.037) | |
1.226 | 1.367 | 1.312 | 1.183 | 1.331 | 1.272 | |
(0.040) | (0.056) | (0.085) | (0.033) | (0.050) | (0.040) | |
1.187 | 1.272 | 1.489 | 1.159 | 1.245 | 1.210 | |
(0.023) | (0.029) | (0.035) | (0.021) | (0.027) | (0.023) | |
1.184 | 1.269 | 1.401 | 1.157 | 1.242 | 1.206 | |
(0.022) | (0.029) | (0.034) | (0.021) | (0.027) | (0.023) | |
1.181 | 1.266 | 1.378 | 1.156 | 1.240 | 1.201 | |
(0.022) | (0.029) | (0.033) | (0.021) | (0.027) | (0.022) | |
1.176 | 1.260 | 1.450 | 1.156 | 1.237 | 1.192 | |
(0.021) | (0.028) | (0.033) | (0.021) | (0.027) | (0.021) | |
1.173 | 1.256 | 1.352 | 1.159 | 1.236 | 1.185 | |
(0.021) | (0.028) | (0.032) | (0.021) | (0.026) | (0.020) | |
2.891 | 2.862 | 3.647 | 2.690 | 2.610 | 3.296 | |
(0.084) | (0.097) | (1.393) | (0.074) | (0.077) | (0.121) | |
1.029 | 1.025 | 1.022 | 1.019 | 1.004 | 1.015 | |
(0.006) | (0.006) | (0.006) | (0.008) | (0.008) | (0.006) |
5. Real Data Example
5.1. Data and Settings
Mean | Se | Median | Min | Max | |||
---|---|---|---|---|---|---|---|
0.708 | 0.016 | 0.649 | 0.001 | 0.307 | 0.994 | 11.50 | |
0.758 | 0.009 | 0.773 | 0.038 | 0.507 | 0.990 | 2.901 | |
0.697 | 0.017 | 0.639 | 0.001 | 0.309 | 0.979 | 13.32 | |
0.766 | 0.010 | 0.766 | 0.030 | 0.517 | 0.992 | 4.138 | |
0.708 | 0.015 | 0.646 | 0.001 | 0.312 | 1.003 | 8.632 | |
0.760 | 0.009 | 0.769 | 0.034 | 0.509 | 0.993 | 3.717 | |
0.696 | 0.014 | 0.645 | 0.001 | 0.308 | 0.987 | 7.710 | |
0.757 | 0.009 | 0.770 | 0.033 | 0.508 | 0.990 | 3.298 | |
1.050 | 0.010 | 1.022 | 0.002 | 0.910 | 1.143 | 5.341 | |
1.015 | 0.005 | 1.015 | 0.065 | 0.944 | 1.078 | 2.821 | |
0.990 | 0.004 | 1.000 | 0.002 | 0.974 | 1.023 | 2.437 | |
0.992 | 0.002 | 0.999 | 0.062 | 0.984 | 1.013 | 1.747 | |
0.784 | 0.010 | 0.838 | 0.001 | 0.596 | 0.973 | 5.227 | |
0.849 | 0.006 | 0.902 | 0.039 | 0.758 | 0.983 | 3.051 | |
0.775 | 0.010 | 0.832 | 0.001 | 0.582 | 0.969 | 7.715 | |
0.842 | 0.006 | 0.896 | 0.037 | 0.749 | 0.981 | 2.841 | |
0.768 | 0.012 | 0.825 | 0.001 | 0.564 | 0.966 | 11.45 | |
0.835 | 0.006 | 0.893 | 0.036 | 0.739 | 0.978 | 2.643 | |
0.758 | 0.019 | 0.806 | 0.001 | 0.529 | 0.960 | 24.08 | |
0.822 | 0.006 | 0.883 | 0.040 | 0.709 | 0.974 | 2.712 | |
0.757 | 0.031 | 0.793 | 0.001 | 0.503 | 0.956 | 43.19 | |
0.810 | 0.007 | 0.870 | 0.036 | 0.684 | 0.971 | 3.517 |
Mean | Se | Median | Min | Max | |||
---|---|---|---|---|---|---|---|
7.738 | 1.695 | 2.259 | 0.131 | 1.311 | 5.244 | 82.734 | |
2.044 | 0.166 | 1.422 | 0.327 | 1.056 | 2.147 | 25.784 | |
8.088 | 2.005 | 1.912 | 0.222 | 1.162 | 4.974 | 120.428 | |
1.998 | 0.153 | 1.406 | 0.477 | 1.030 | 2.055 | 21.229 | |
7.607 | 1.664 | 2.299 | 0.129 | 1.267 | 5.175 | 78.481 | |
2.014 | 0.165 | 1.416 | 0.316 | 1.035 | 2.150 | 26.039 | |
2.073 | 0.245 | 1.266 | 0.245 | 0.961 | 2.160 | 40.137 | |
1.349 | 0.053 | 1.157 | 0.468 | 0.971 | 1.565 | 7.845 | |
2.017 | 0.217 | 1.431 | 0.241 | 0.965 | 2.472 | 12.551 | |
1.322 | 0.048 | 1.154 | 0.465 | 0.965 | 1.525 | 6.703 | |
1.846 | 0.182 | 1.337 | 0.208 | 0.958 | 2.444 | 10.383 | |
1.295 | 0.043 | 1.114 | 0.461 | 0.954 | 1.497 | 5.655 | |
1.656 | 0.150 | 1.340 | 0.179 | 0.851 | 2.074 | 8.577 | |
1.246 | 0.036 | 1.100 | 0.454 | 0.940 | 1.448 | 3.985 | |
1.536 | 0.141 | 1.256 | 0.158 | 0.813 | 1.673 | 7.746 | |
1.202 | 0.032 | 1.089 | 0.431 | 0.928 | 1.371 | 3.461 |
5.2. Results
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
Appendix
A.1
- Fact 1: for . Let , then , since and .
- Fact 2: for .
- Fact 3: For any , decreases as b increases. The proof is pure arithmetic, and the key point is using the fact that .
- Fact 4: , where conditional on ν. Let , then it is easy to show that .
- Fact 5: if . Use Fact 2 to show that .
A.2
- Let and using Facts 1, 2 and 3, then:
- Using Fact 2 in Subsection A.1, it follows:
- It is easy to show that:
A.3
A.4
References
- J.M. Bates, and C.W.J. Granger. “The combination of forecasts.” OR 20 (1969): 451–468. [Google Scholar] [CrossRef]
- R.T. Clemen. “Combining forecasts: A review and annotated bibliography.” Int. J. Forecast. 5 (1989): 559–583. [Google Scholar] [CrossRef]
- P. Newbold, and D.I. Harvey. “Forecast combination and encompassing.” In A Companion to Economic Forecasting. Edited by M.P. Clements and D.F. Hendry. Malden, MA, USA: WILEY, 2002, pp. 268–283. [Google Scholar]
- A. Timmermann. “Forecast combinations.” In Handbook of Economic Forecasting. Amsterdam, The Netherlands: NORTH-HOLLAND, 2006, Volume 1, pp. 135–196. [Google Scholar]
- K. Lahiri, H. Peng, and Y. Zhao. “Machine Learning and Forecast Combination in Incomplete Panels.” 2013. Available online: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2359523 (accessed on 10 October 2015).
- J.S. Armstrong, K.C. Green, and A. Graefe. “Golden rule of forecasting: Be conservative.” J. Bus. Res. 68 (2015): 1717–1731. [Google Scholar] [CrossRef]
- K.C. Green, and J.S. Armstrong. “Simple versus complex forecasting: The evidence.” J. Bus. Res. 68 (2015): 1678–1685. [Google Scholar] [CrossRef]
- Y. Yang. “Combining forecasting procedures: Some theoretical results.” Econom. Theory 20 (2004): 176–222. [Google Scholar] [CrossRef]
- C. Marinelli, S. Rachev, and R. Roll. “Subordinated exchange rate models: Evidence for heavy tailed distributions and long-range dependence.” Math. Comput. Model. 34 (2001): 955–1001. [Google Scholar] [CrossRef]
- A.C. Harvey. Dynamic Models for Volatility and Heavy Tails: With Applications to Financial and Economical Time Series. New York, NY, USA: Cambridge University Press, 2013, p. 69. [Google Scholar]
- H. Zou, and Y. Yang. “Combining time series models for forecasting.” Int. J. Forecast. 20 (2004): 69–84. [Google Scholar] [CrossRef]
- X. Wei, and Y. Yang. “Robust forecast combinations.” J. Econom. 166 (2012): 224–236. [Google Scholar] [CrossRef]
- C. Fernandez, and M.F.J. Steel. “Multivariate Student-t regression models: Pitfalls and inference.” Biometrika 86 (1999): 153–167. [Google Scholar] [CrossRef]
- T.C.O. Fonseca, M.A.R. Ferreira, and H.S. Migon. “Objective bayesian analysis for the Student-t regression model.” Biometrika 95 (2008): 325–333. [Google Scholar] [CrossRef]
- R. Kan, and G. Zhou. Modeling Non-Normality Using Multivariate T: Implications for Asset Pricing. Technical Report; Toronto, ON, Canada: Rotman School of Management, University of Toronto, 2003. [Google Scholar]
- C.W.J. Granger, and R. Ramanathan. “Improved methods of forecasting.” J. Forecast. 3 (1984): 197–204. [Google Scholar] [CrossRef]
- A. Sancetta. “Recursive forecast combination for dependent heterogeneous data.” Econom. Theory 26 (2010): 598–631. [Google Scholar] [CrossRef]
- G. Cheng, and Y. Yang. “Forecast combination with outlier protection.” Int. J. Forecast. 31 (2015): 223–237. [Google Scholar] [CrossRef]
- S. Makridakis, and M. Hibon. “The M3-Competition: Results, conclusions and implications.” Int. J. Forecast. 16 (2000): 451–476. [Google Scholar] [CrossRef]
- A. Inoue, and L. Kilian. “How useful is bagging in forecasting economic time series? A case study of U.S. consumer price inflation.” J. Am. Stat. Assoc. 103 (2008): 511–522. [Google Scholar] [CrossRef]
- I. Sanchez. “Adaptive combination of forecasts with application to wind energy.” Int. J. Forecast. 24 (2008): 679–693. [Google Scholar] [CrossRef]
- C. Altavilla, and P. de Grauwe. “Forecasting and combining competing models of exchange rate determination.” Appl. Econ. 42 (2010): 3455–3480. [Google Scholar] [CrossRef]
- X. Zhang, Z. Lu, and G. Zou. “Adaptively combined forecasting for discrete response time series.” J. Econom. 176 (2013): 80–91. [Google Scholar] [CrossRef]
- C.K. Ing. “Accumulated prediction errors, information criteria and optimal forecasting for autoregressive time series.” Ann. Stat. 35 (2007): 1238–1277. [Google Scholar] [CrossRef]
- C.K. Ing, C.-Y. Sin, and S.-H. Yu. “Model selection for integrated autoregressive processes of infinite order.” J. Multivar. Anal. 106 (2012): 57–71. [Google Scholar] [CrossRef]
- B.E. Hansen. “Least squares forecast averaging.” J. Econom. 146 (2008): 342–350. [Google Scholar] [CrossRef]
- J.H. Stock, and M.W. Watson. “Forecasting with many predictors.” In Handbook of Economic Forecasting. Amsterdam, The Netherlands: NORTH-HOLLAND, 2006, Volume 1, pp. 515–554. [Google Scholar]
- “M3-Competition data.” Available online: http://forecasters.org/resources/time-series-data/m3-comp-etition/ (accessed on 9 October 2015).
- Z. Yuan, and Y. Yang. “Combining Linear Regression Models: When and How? ” J. Am. Stat. Assoc. 100 (2005): 1202–1204. [Google Scholar] [CrossRef]
- P. Hansen, A. Lunde, and J. Nason. “The model confidence set.” Econometrica 79 (2011): 453–497. [Google Scholar] [CrossRef]
- D. Ferrari, and Y. Yang. “Confidence sets for model selection by F-testing.” Stat. Sin. 25 (2015): 1637–1658. [Google Scholar] [CrossRef]
- J.D. Samuels, and R.M. Sekkel. “Forecasting with Many Models: Model Confidence Sets and Forecast Combination.” Working Paper. 2013. Available online: http://www.bankofcanada.ca/wp-content/uploads/2013/04/wp2013-11.pdf (accessed on 10 October 2015).
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license ( http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cheng, G.; Wang, S.; Yang, Y. Forecast Combination under Heavy-Tailed Errors. Econometrics 2015, 3, 797-824. https://doi.org/10.3390/econometrics3040797
Cheng G, Wang S, Yang Y. Forecast Combination under Heavy-Tailed Errors. Econometrics. 2015; 3(4):797-824. https://doi.org/10.3390/econometrics3040797
Chicago/Turabian StyleCheng, Gang, Sicong Wang, and Yuhong Yang. 2015. "Forecast Combination under Heavy-Tailed Errors" Econometrics 3, no. 4: 797-824. https://doi.org/10.3390/econometrics3040797
APA StyleCheng, G., Wang, S., & Yang, Y. (2015). Forecast Combination under Heavy-Tailed Errors. Econometrics, 3(4), 797-824. https://doi.org/10.3390/econometrics3040797