Article

The Evolving Transmission of Uncertainty Shocks in the United Kingdom

School of Economics and Finance, Queen Mary College, London E1 4NS, UK
Econometrics 2016, 4(1), 16; https://doi.org/10.3390/econometrics4010016
Submission received: 4 September 2015 / Revised: 30 November 2015 / Accepted: 28 January 2016 / Published: 14 March 2016
(This article belongs to the Special Issue Computational Complexity in Bayesian Econometric Analysis)

Abstract

This paper investigates whether the impact of uncertainty shocks on the U.K. economy has changed over time. To this end, we propose an extended time-varying VAR model that simultaneously allows the estimation of a measure of uncertainty and of its time-varying impact on key macroeconomic and financial variables. We find that the impact of uncertainty shocks on these variables has declined over time. The timing of the change coincides with the introduction of inflation targeting in the U.K.
JEL:
C15; C32; E32

1. Introduction

The recent financial crisis and ensuing recession have led to a renewed interest in the possible relationship between economic uncertainty and macroeconomic variables. A number of papers use VAR-based analyses to estimate the impact of uncertainty shocks for the U.S. and the U.K. (see, for example, [1] for the U.S. and [2] for the U.K.). In general, these studies report that uncertainty shocks have an adverse impact on the economy. For example, [2] find that uncertainty shocks depress GDP and industrial production.
However, the estimates reported in these papers are typically based on data that span the last three or four decades and, thus, cover periods potentially characterised by changing dynamics, policy regimes and economic shocks. There has been limited focus on exploring whether the impact of uncertainty shocks has changed over time in the United Kingdom and identifying the factors that can possibly explain any temporal shifts.1
This paper attempts to fill this gap. We propose an extended TVP-VAR model that allows the estimation of a measure of uncertainty that encompasses volatility from the real and financial sectors of the economy and is a proxy for macroeconomic uncertainty. The proposed model incorporates time-varying parameters and simultaneously provides an estimate of the time-varying response of macroeconomic variables to shocks to this uncertainty measure, thus allowing the investigation of temporal shifts in a coherent manner.
Our results suggest that the impact of uncertainty shocks on measures of real activity, inflation and interest rates has declined systematically over time, with the change coinciding with the introduction of inflation targeting in 1992. The impact of these shocks on stock returns has also declined, but the degree of the shift is smaller.
The analysis in the paper adds to the literature on uncertainty by systematically investigating how the impact of uncertainty has changed over time in the U.K. The empirical model proposed in the paper builds upon existing VAR models by simultaneously allowing the estimation of time-varying volatility and the time-varying impact of this volatility on the endogenous variables. Our results have important implications. Our empirical findings suggest that uncertainty is less of a concern for real activity and inflation than in the past, but continues to have an important impact on the stock market. This suggests that uncertainty shocks mainly affect the U.K. through a financial channel, and policies designed to ameliorate the impact of these shocks need to take this mechanism into account.
The paper is organised as follows: Section 2 and Section 3 introduce the empirical model and discuss the estimation method. The results from the empirical model are presented in Section 4.3. Section 5 concludes.

2. Empirical Model

The core of the empirical model is the following time-varying parameter vector autoregression (TVP VAR):
Z_t = c_t + Σ_{j=1}^{P} β_{t,j} Z_{t−j} + Σ_{j=0}^{J} γ_{t,j} ln λ_{t−j} + Ω_t^{1/2} e_t    (1)
where Z_t is a vector of endogenous variables that we describe below. The law of motion for the VAR coefficients is given by:
B_t = vec([c_t; β_t; γ_t]),   B_t = B_{t−1} + η_t,   VAR(η_t) = Q_B
As in [5], the covariance matrix of the residuals is defined as:
Ω_t = A_t^{−1} H_t (A_t^{−1})′
where A_t is lower triangular. Each non-zero element of A_t evolves as a random walk:
a_t = a_{t−1} + g_t,   VAR(g_t) = G
where G is block diagonal, as in [5].
Following [6], the volatility of the shocks e_t is given by:
H_t = λ_t S,   S = diag(s_1, ..., s_N)
The overall volatility evolves as an AR(1) process:
ln λ_t = α + F ln λ_{t−1} + η̄_t,   VAR(η̄_t) = Q_λ
and the diagonal elements of S are scaling factors.
The structure defined by Equation (4) implies that the specification has an important feature: the model does not distinguish between the common and idiosyncratic components of volatility, and λ_t is a convolution of the two. In other words, Equation (5) implicitly imposes a factor structure on the volatilities in which the loadings equal one and the idiosyncratic components are suppressed. With such a structure, λ_t is approximately the average volatility.
While separating the unobserved components in Equation (4) may be interesting in its own right, it is not directly relevant for our application, where the key aim is to estimate a measure of the common volatility of the shocks, which, by definition, is a combination of the two components. As we show below, this simple scheme produces volatility estimates that are plausible from a historical perspective.
The shock η̄_t represents the innovation to the volatility of the residuals e_t and is interpreted as an uncertainty shock. In the empirical analysis below, we examine the time-varying response of the endogenous variables to this shock.
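To make the structure of the model above concrete, the following minimal sketch simulates a small bivariate version with constant (non-drifting) coefficients. All parameter values are illustrative assumptions, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 2                                 # sample length and number of variables

# Illustrative (assumed) parameter values, held fixed over time for clarity
c     = np.array([0.1, 0.05])                 # intercepts c_t
beta  = np.array([[0.5, 0.1],
                  [0.1, 0.5]])                # VAR(1) coefficients (P = 1)
gamma = np.array([-0.5, 0.5])                 # impact of ln(lambda_t) on Z_t (J = 0)
A     = np.array([[1.0, 0.0],
                  [-1.0, 1.0]])               # lower-triangular A_t
S     = np.diag([1.0, 2.0])                   # scaling factors s_1, s_2
alpha, F, Q_lam = -0.1, 0.75, 0.05            # parameters of the ln(lambda_t) AR(1)

A_inv = np.linalg.inv(A)
Z, lnlam = np.zeros((T, N)), np.zeros(T)

for t in range(1, T):
    # Transition equation: ln(lambda_t) = alpha + F ln(lambda_{t-1}) + eta_bar_t
    lnlam[t] = alpha + F * lnlam[t - 1] + np.sqrt(Q_lam) * rng.standard_normal()
    # Residual covariance: Omega_t = A^{-1} (lambda_t S) A^{-1}'
    Omega_t = A_inv @ (np.exp(lnlam[t]) * S) @ A_inv.T
    # Observation equation: the common volatility also shifts the conditional mean
    Z[t] = c + beta @ Z[t - 1] + gamma * lnlam[t] + np.linalg.cholesky(Omega_t) @ rng.standard_normal(N)
```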
The formulation presented in Equations (4) and (5) is related to a number of recent empirical contributions. For example, the structure of the stochastic volatility model used above closely resembles the formulations used in time-varying VAR models (see [5,7]). Our model differs from these studies in that it allows a direct impact of the volatilities on the level of the endogenous variables. The model proposed above can be thought of as a multivariate extension of the stochastic volatility in the mean model proposed in [8] and applied in [9,10,11]. In addition, our model has similarities with the stochastic volatility models with leverage studied in [12] and the non-linear model proposed in Aruoba et al. [13]. Finally, the model is based on the VAR with stochastic volatility introduced in [14]. While [14] focus on the impact of volatility associated with the output shock, we focus on an overall measure of uncertainty that incorporates the variance of all shocks in the model. In addition, the model proposed above incorporates time variation, a feature missing from the studies that consider stochastic volatility in mean models.2

3. Estimation and Model Specification

The model defined in Equations (1) and (5) is estimated using an MCMC algorithm. In this section, we summarise the key steps of the algorithm and provide the details of the prior distributions.

3.1. Priors and Starting Values

3.1.1. VAR Coefficients

The initial conditions for the VAR coefficients, B_0, are obtained via an OLS estimate of a fixed-coefficient VAR using the first T_0 = 180 observations of the sample period, which correspond to the first 15 years of the monthly dataset described below. Let B̂_ols and v̂_ols denote the OLS estimates of the VAR coefficients and of the covariance matrix on this pre-sample. The prior for B_0 is N(B̂_ols, var(B̂_ols)). The prior on Q_B is assumed to be inverse Wishart, Q_{B,0} ~ IW(Q̄_{B,0}, T_{T0}), where Q̄_{B,0} = T_0 × var(B̂_ols) × k, T_0 is the length of the sample used for calibration and T_{T0} equals 10 plus the number of columns of Q_B. Following [7], the scaling factor k is set to 3.5 × 10^{−4}.
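A minimal sketch of this calibration step, using a plain fixed-coefficient VAR on a placeholder pre-sample. The `ols_var` helper and the placeholder data are hypothetical, and the ln λ terms of the full model are omitted for simplicity.

```python
import numpy as np

def ols_var(Z, p):
    """Fixed-coefficient VAR(p) by OLS; returns vec(B), var(vec(B)) and the residual covariance."""
    T, N = Z.shape
    X = np.hstack([np.ones((T - p, 1))] + [Z[p - j - 1:T - j - 1] for j in range(p)])
    Y = Z[p:]
    B = np.linalg.solve(X.T @ X, X.T @ Y)                  # (1 + N*p) x N coefficient matrix
    U = Y - X @ B
    sigma = U.T @ U / (Y.shape[0] - X.shape[1])            # residual covariance
    var_vecB = np.kron(sigma, np.linalg.inv(X.T @ X))      # var of vec(B), columns stacked
    return B.flatten(order="F"), var_vecB, sigma

T0 = 180                                                   # first 15 years of monthly data
Z_pre = np.random.default_rng(1).standard_normal((T0, 4))  # placeholder for the pre-sample data
B_ols, var_B_ols, v_ols = ols_var(Z_pre, p=6)

# Prior: B_0 ~ N(B_ols, var(B_ols));  Q_B ~ IW(Q_bar, dof) with the scaling described above
k = 3.5e-4
Q_bar = T0 * var_B_ols * k
dof_QB = 10 + Q_bar.shape[1]
```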

3.1.2. Elements of the A Matrix

The prior for the off-diagonal elements of A_t is A_0 ~ N(â_ols, V(â_ols)), where â_ols are the off-diagonal elements of v̂_ols, with each row scaled by the corresponding element on the diagonal. V(â_ols) is assumed to be diagonal, with the elements set equal to 10 times the absolute value of the corresponding element of â_ols. The prior distribution for the blocks of G is inverse Wishart: G_{i,0} ~ IW(Ḡ_i, K_i), where i = 1, ..., N − 1 indexes the blocks of G. Ḡ_i is calibrated using â_ols; specifically, Ḡ_i is a diagonal matrix with the relevant elements of â_ols multiplied by 10^{−3}. This prior specification follows previous studies, such as [15].
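As a small illustration, one possible reading of this construction is sketched below; the function name and the example covariance matrix are hypothetical.

```python
import numpy as np

def a_prior_from_cov(v_ols):
    """Prior moments for the free elements of A_t, row by row, from the pre-sample covariance."""
    N = v_ols.shape[0]
    a0, V_a0, G_bar = [], [], []
    for i in range(1, N):
        row = v_ols[i, :i] / v_ols[i, i]              # off-diagonal elements, scaled by the diagonal
        a0.append(row)
        V_a0.append(np.diag(10.0 * np.abs(row)))      # diagonal prior variance: 10 x |element|
        G_bar.append(np.diag(np.abs(row)) * 1e-3)     # IW scale matrix for block i of G
    return a0, V_a0, G_bar

# Illustrative use with a hypothetical pre-sample residual covariance
v_ols_example = np.array([[1.0, 0.2, 0.1],
                          [0.2, 1.5, 0.3],
                          [0.1, 0.3, 0.8]])
a0, V_a0, G_bar = a_prior_from_cov(v_ols_example)
```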

3.1.3. Elements of S and the Parameters of the Stochastic Volatility Transition Equation

The elements of S are assumed to have an inverse Gamma prior: P(s_i) ~ IG(S_{0,i}, V_0). The degrees of freedom V_0 are set equal to five. The prior scale parameters are set by estimating the regression λ̄_{it} = S_{0,i} λ̄_t + ε_t, where λ̄_t is the first principal component of the stochastic volatilities λ̄_{it}, which are obtained by fitting a univariate stochastic volatility model to the residuals of each equation of a VAR estimated via OLS on the endogenous variables Z_t.
We set a normal prior for the unconditional mean μ = α / (1 − F). This prior is N(μ_0, Z_0), where μ_0 = 0 and Z_0 = 10. The prior for Q_λ is IG(Q_0, V_{Q_0}), where Q_0 is the average of the variances of the transition equations of the initial univariate stochastic volatility estimates and V_{Q_0} = 5. The prior for F is N(F_0, L_0), where F_0 = 0.8 and L_0 = 1.
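A sketch of the scale-parameter calibration described above, assuming the univariate stochastic-volatility estimates have already been obtained and stacked in a T × N array `lam_bar`; the data here are synthetic and the helper name is hypothetical.

```python
import numpy as np

def calibrate_s_prior(lam_bar):
    """Scale parameters S_{0,i}: slope of each volatility path on their first principal component."""
    # First principal component of the (T x N) matrix of univariate volatility estimates
    _, _, Vt = np.linalg.svd(lam_bar, full_matrices=False)
    pc1 = lam_bar @ Vt[0]
    # OLS slope of lam_bar_i on pc1, equation by equation, with no intercept as in the regression above
    S0 = lam_bar.T @ pc1 / (pc1 @ pc1)
    return S0, pc1

# Illustrative use with synthetic volatility paths
rng = np.random.default_rng(2)
common = 1.0 + 0.1 * np.abs(np.cumsum(rng.standard_normal(300)))
lam_bar = np.column_stack([s * common + 0.05 * rng.standard_normal(300) for s in (1.0, 2.0, 0.5, 1.5)])
S0_prior_scales, _ = calibrate_s_prior(lam_bar)
```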

3.1.4. Common Volatility λ_t

The prior for the initial value of λ_t is defined as ln λ_0 ~ N(ln μ_0, I), where μ_0 is the initial value of λ̄_t defined above.

3.2. MCMC Algorithm

The Gibbs sampling algorithm is based on drawing from the following conditional posterior distributions:
  • G(B_t \ Ξ). The distribution of the time-varying VAR coefficients B_t conditional on all other parameters Ξ is linear and Gaussian: B_T \ Z_T, Ξ ~ N(B_{T|T}, P_{T|T}) and B_t \ B_{t+1}, Z_t, Ξ ~ N(B_{t|t+1}, P_{t|t+1}) for t = T − 1, ..., 1, where Ξ denotes a vector that holds all of the other VAR parameters. As shown by [16], the simulation proceeds as follows. First, we use the Kalman filter to obtain B_{T|T} and P_{T|T} and draw B_T; we then proceed backwards in time using B_{t|t+1} = B_{t|t} + P_{t|t} P_{t+1|t}^{−1} (B_{t+1} − B_{t|t}) and P_{t|t+1} = P_{t|t} − P_{t|t} P_{t+1|t}^{−1} P_{t|t}.
  • G(Q_B \ Ξ). The conditional posterior for Q_B is inverse Wishart: IW(Σ_t η_t η_t′ + Q̄_{B,0}, T + T_0), i.e., the posterior scale matrix is given by Σ_t η_t η_t′ + Q̄_{B,0}, and the degrees of freedom are T + T_0.
  • G(A_t \ Ξ). Given a draw for the VAR parameters, the model can be written as A_t v_t = e_t, where v_t denotes the VAR residuals. This is a system of linear equations with time-varying coefficients and a known form of heteroscedasticity. The j-th equation of this system is v_{jt} = −a_{jt} v_{−jt} + e_{jt}, where the subscript j denotes the j-th column of v, while −j denotes Columns 1 to j − 1. Note that the variance of e_{jt} is time-varying and given by λ_t s_j. The time-varying coefficient follows the process a_{jt} = a_{jt−1} + g_{jt}, with the shocks g_{jt} to the j-th equation uncorrelated with those from the other equations; in other words, the covariance matrix var(g_t) is assumed to be block diagonal, as in [5]. With this assumption in place, the algorithm of [16] can be applied to draw the time-varying coefficients for each equation of this system separately.
  • G(S \ Ξ). Given a draw for the VAR parameters, the model can be written as A_t v_t = e_t. The j-th equation of this system is v_{jt} = −a_{jt} v_{−jt} + e_{jt}, where the variance of e_{jt} is time-varying and given by λ_t s_j. Given a draw for λ_t, this equation can be re-written as v̄_{jt} = −a_{jt} v̄_{−jt} + ē_{jt}, where v̄_{jt} = v_{jt} / λ_t^{1/2} and the variance of ē_{jt} is s_j. The conditional posterior for this variance is inverse Gamma with scale parameter ē_{jt}′ ē_{jt} + S_{0,j} and degrees of freedom V_0 + T.
  • G(λ_t \ Ξ). Conditional on the VAR parameters and the parameters of the transition equation, the model has a multivariate non-linear state-space representation. The work in [17] shows that the conditional distribution of the state variables in a general state-space model can be written as the product of three terms:
    f(h̃_t \ Z_t, Ξ) ∝ f(h̃_t \ h̃_{t−1}) × f(h̃_{t+1} \ h̃_t) × f(Z_t \ h̃_t, Ξ)    (6)
    where Ξ denotes all other parameters and h̃_t = ln λ_t. In the context of stochastic volatility models, [18] show that this density is a product of log-normal densities for λ_t and λ_{t+1} and a normal density for Z_t. The work in [17] derives the general form of the mean and variance of the underlying normal density for f(h̃_t \ h̃_{t−1}, h̃_{t+1}, Ξ) ∝ f(h̃_t \ h̃_{t−1}) × f(h̃_{t+1} \ h̃_t) and shows that this is given as:
    f(h̃_t \ h̃_{t−1}, h̃_{t+1}, Ξ) ~ N(B_{2t} b_{2t}, B_{2t})    (7)
    where B_{2t}^{−1} = Q_λ^{−1} + F′ Q_λ^{−1} F and b_{2t} = h̃_{t−1} F′ Q_λ^{−1} + h̃_{t+1} Q_λ^{−1} F. Note that, due to the non-linearity of the observation equation of the model, an analytical expression for the complete conditional h̃_t \ Z_t, Ξ is unavailable, and a Metropolis step is required. Following [18], we draw from Equation (6) using a date-by-date independence Metropolis step with the density in Equation (7) as the candidate-generating density. This choice implies that the acceptance probability is given by the ratio of the conditional likelihood f(Z_t \ h̃_t, Ξ) at the old and the new draw. To implement the algorithm, we begin with an initial estimate h̃ = ln λ̄_t and set h̃^{old} equal to this initial volatility estimate. Then, at each date, the following two steps are implemented (a code sketch of this step is given after this list):
    (a) Draw a candidate for the volatility h̃_t^{new} using the density in Equation (7), with b_{2t} = h̃_{t−1}^{new} F′ Q_λ^{−1} + h̃_{t+1}^{old} Q_λ^{−1} F and B_{2t}^{−1} = Q_λ^{−1} + F′ Q_λ^{−1} F.
    (b) Update h̃_t^{old} = h̃_t^{new} with acceptance probability f(Z_t \ h̃_t^{new}, Ξ) / f(Z_t \ h̃_t^{old}, Ξ), where f(Z_t \ h̃_t, Ξ) is the likelihood of the VAR for observation t, defined as |Ω_t|^{−0.5} exp(−0.5 ẽ_t′ Ω_t^{−1} ẽ_t), with ẽ_t = Z_t − (c_t + Σ_{j=1}^{P} β_{t,j} Z_{t−j} + Σ_{j=0}^{J} γ_{t,j} ln λ_{t−j}) and Ω_t = A_t^{−1} (exp(h̃_t) S) A_t^{−1}′.
    Repeating these steps for the entire time series delivers a draw of the stochastic volatilities.3
  • G(α, F \ Ξ). We re-write the transition equation in deviations from the mean:
    h̃_t − μ = F (h̃_{t−1} − μ) + η̄_t    (8)
    where the elements of the mean vector are defined as μ_i = α_i / (1 − F_i). Conditional on a draw for h̃_t and μ, the transition Equation (8) is simply a linear regression, and the standard normal and inverse Gamma conditional posteriors apply. Writing h̃_t^* = h̃_t − μ and h̃_{t−1}^* = h̃_{t−1} − μ, the regression is h̃_t^* = F h̃_{t−1}^* + η̄_t with VAR(η̄_t) = Q_λ. The conditional posterior of F is N(θ^*, L^*), where:
    θ^* = (L_0^{−1} + (1/Q_λ) h̃_{t−1}^{*′} h̃_{t−1}^*)^{−1} (L_0^{−1} F_0 + (1/Q_λ) h̃_{t−1}^{*′} h̃_t^*),    L^* = (L_0^{−1} + (1/Q_λ) h̃_{t−1}^{*′} h̃_{t−1}^*)^{−1}
    The conditional posterior of Q_λ is inverse Gamma with scale parameter η̄_t′ η̄_t + Q_0 and degrees of freedom T + V_{Q_0}.
    Given a draw for F, Equation (8) can be expressed as Δ̄h̃_t = Cμ + η̄_t, where Δ̄h̃_t = h̃_t − F h̃_{t−1} and C = 1 − F. The conditional posterior of μ is N(μ^*, Z^*), where:
    μ^* = (Z_0^{−1} + (1/Q_λ) C′C)^{−1} (Z_0^{−1} μ_0 + (1/Q_λ) C′ Δ̄h̃_t),    Z^* = (Z_0^{−1} + (1/Q_λ) C′C)^{−1}
    Note that α can then be recovered as α = μ (1 − F).
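The volatility step is the least standard part of the sampler. The sketch below illustrates one sweep of the date-by-date independence Metropolis update for h̃_t = ln λ_t described above, with the VAR part collapsed into a user-supplied function `var_loglik(t, h)` returning ln f(Z_t \ h̃_t, Ξ); the function, the stand-in likelihood and the parameter values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def draw_volatility(h_old, F, Q_lam, var_loglik, rng):
    """One sweep of the date-by-date independence Metropolis step for h_t = ln(lambda_t)."""
    T = len(h_old)
    h = h_old.copy()
    B2 = 1.0 / (1.0 / Q_lam + F ** 2 / Q_lam)                  # candidate variance for interior dates
    for t in range(1, T - 1):                                   # endpoints handled separately, as in [18]
        b2 = (h[t - 1] + h[t + 1]) * F / Q_lam                  # candidate mean components
        cand = B2 * b2 + np.sqrt(B2) * rng.standard_normal()    # draw from N(B2 * b2, B2)
        log_accept = var_loglik(t, cand) - var_loglik(t, h[t])  # log ratio of conditional likelihoods
        if np.log(rng.uniform()) < log_accept:
            h[t] = cand
    return h

# Illustrative use: a stand-in log-likelihood that penalises deviations from a fixed path
rng = np.random.default_rng(3)
target = np.sin(np.linspace(0, 6, 200))
var_loglik = lambda t, h: -0.5 * (h - target[t]) ** 2
h_draw = draw_volatility(np.zeros(200), F=0.75, Q_lam=0.05, var_loglik=var_loglik, rng=rng)
```

In the full algorithm, `var_loglik` would evaluate |Ω_t|^{−0.5} exp(−0.5 ẽ_t′ Ω_t^{−1} ẽ_t) at the current draw of the VAR parameters, and the endpoints would be treated as in [18].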

3.3. Estimation Using Artificial Data

To test the algorithm, we conduct a small Monte Carlo experiment. Seven hundred twenty observations are generated from the following data-generating process with the number of variables N = 2 . The first 100 observations are discarded to remove the impact of initial conditions, and 120 observations of the remaining series are used as a training sample. Estimation is carried out using 500 observations. The length of the artificial sample broadly matches the monthly dataset used in the empirical analysis below. The DGP is defined as:
Z_t = β_t Z_{t−1} + γ_t ln λ_t + c_t + Ω_t^{1/2} e_t,   e_t ~ N(0, 1)
Ω_t = A_t^{−1} H_t (A_t^{−1})′,   H_t = λ_t S,   S = [1 0; 0 2],   ln λ_t = −0.1 + 0.75 ln λ_{t−1} + 0.5^{1/2} v_t,   β_t = [β_{11,t} β_{12,t}; β_{21,t} β_{22,t}],   γ_t = [γ_{11,t}; γ_{21,t}]
where λ_t is generated once using v_t ~ N(0, 1) and held fixed across all iterations of the experiment. Following [19], we assume that a one-time shift defines the change in the VAR coefficients and in the non-zero element of A_t. During the first 250 observations, these coefficients equal β_t = [0.5 0.1; 0.1 0.5], γ_t = [−0.5; 0.5] and A = −1. During the next 250 observations, the coefficients change to β_t = [0.5 0.1; 0.1 0.5], γ_t = [−1.5; 1.5] and A = 0.1.
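A sketch of this data-generating process, following the description above. The intercept c_t is not specified there and is set to zero here, and treating ln λ_t as the AR(1) state is an assumption consistent with the model's transition equation.

```python
import numpy as np

rng = np.random.default_rng(0)
burn, train, keep = 100, 120, 500
T = burn + train + keep                               # 720 observations in total

S = np.diag([1.0, 2.0])
beta = np.array([[0.5, 0.1], [0.1, 0.5]])             # unchanged across regimes
gamma_regimes = (np.array([-0.5, 0.5]), np.array([-1.5, 1.5]))
a_regimes = (-1.0, 0.1)                               # non-zero element of the lower-triangular A
c = np.zeros(2)                                       # intercept c_t: not specified above, set to zero

Z, lnlam = np.zeros((T, 2)), np.zeros(T)
break_point = burn + train + 250                      # one-time shift halfway through the estimation sample

for t in range(1, T):
    lnlam[t] = -0.1 + 0.75 * lnlam[t - 1] + np.sqrt(0.5) * rng.standard_normal()
    regime = 0 if t < break_point else 1
    gamma, a = gamma_regimes[regime], a_regimes[regime]
    A_inv = np.linalg.inv(np.array([[1.0, 0.0], [a, 1.0]]))
    Omega = A_inv @ (np.exp(lnlam[t]) * S) @ A_inv.T
    Z[t] = c + beta @ Z[t - 1] + gamma * lnlam[t] + np.linalg.cholesky(Omega) @ rng.standard_normal(2)

Z_est = Z[burn + train:]                              # the 500 observations used for estimation
```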
The data are generated 1000 times. For each replication, the MCMC algorithm described above is run for 5000 iterations, and the last 1000 draws are used to compute the posterior means of λ_t, A_t and B_t. The figures below plot the median estimate and the 16th and 84th percentiles of λ_t, A_t and B_t across Monte Carlo replications and compare these with the true underlying values. Figure 1 shows that the estimated change in γ_{11,t} and γ_{21,t} closely matches the assumed shift in these coefficients. Note, however, that the model estimates the shift in these coefficients to be smoother than assumed in the DGP. This is not surprising, given the random walk transition assumed for the VAR coefficients in the model, which contrasts with the one-time change in the DGP. The results nevertheless show that the model is able to pick up changes in the impact of uncertainty and is suited to the type of investigation undertaken in this paper. Figure 2 and Figure 3 show that the Monte Carlo estimates of A_t and ln λ_t are close to their true values. Overall, the results provide some evidence that the MCMC algorithm delivers a satisfactory performance.

4. Empirical Analysis for the U.K.

4.1. Data

The TVP-VAR model in Equation (1) is estimated using the following endogenous variables: (1) the growth rate of industrial production; (2) CPI inflation; (3) the three-month T-Bill rate; and (4) FTSE returns. By using these four variables, we aim to capture broadly the real and financial sectors of the economy and to account for changes in monetary policy. The data are monthly and available over the period January 1960 to April 2015, with the first 15 years employed as a training sample. All series are obtained from the Global Financial Database. The growth rates of CPI and industrial production are calculated as log differences times 100. FTSE returns are based on the FTSE All-Share index.
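As a small illustration of these transformations (the column names and the use of log differences for FTSE returns are assumptions for the sketch, not details taken from the paper):

```python
import numpy as np
import pandas as pd

def build_var_dataset(levels: pd.DataFrame) -> pd.DataFrame:
    """Monthly VAR dataset: growth rates as 100 x log differences; the T-Bill rate enters in levels."""
    Z = pd.DataFrame(index=levels.index)
    Z["ip_growth"]     = 100 * np.log(levels["industrial_production"]).diff()
    Z["cpi_inflation"] = 100 * np.log(levels["cpi"]).diff()
    Z["tbill_3m"]      = levels["tbill_3m"]
    Z["ftse_return"]   = 100 * np.log(levels["ftse_all_share"]).diff()
    return Z.dropna()
```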

4.2. Model Specification

The lag length P in the VAR model is set to six, and we allow three lags of ln λ_t to affect the endogenous variables (J = 3). The choice of P partly reflects the convention in TVP-VAR studies of allowing dependence on data up to two quarters in the past. More importantly, this parsimonious specification reduces the probability of instability in the VAR coefficients and thus allows reliable computation of impulse response functions. By setting J = 3, we allow uncertainty shocks to have an impact over a horizon of up to one quarter. The Gibbs sampling algorithm described above is run for 25,000 iterations, with the final 3000 iterations used for inference. Appendix A below shows that the recursive means of the retained draws are fairly stable, providing evidence in favour of convergence.

4.3. Empirical Results

4.3.1. Estimated Volatility

Figure 4 plots the posterior estimate of the common volatility λ_t. We interpret this estimate as a measure of uncertainty, as it summarises the common variance of the unpredictable component of the real and financial variables included in our VAR model.
Uncertainty was high during the mid-1970s following the first oil shock and a period of extremely high inflation. The late 1970s saw the highest peak in the uncertainty measure, in the aftermath of the second oil shock. Uncertainty then peaked in September 1981 following a stock market collapse (dubbed 'Blue Monday') in the U.K. and other industrial countries. Uncertainty was high during the sterling crisis of the mid-1980s and then during the stock market crash on 'Black Monday' in October 1987. Uncertainty increased in 1991 as the U.K. entered a recession, with the measure peaking again with the U.K.'s exit from the ERM. It is interesting to note that fluctuations in the uncertainty measure were smaller and less frequent over the 1992 to 2007 period, again providing evidence of a 'Great Moderation' in the U.K. One key episode of elevated uncertainty occurred around 2003, coinciding with the invasion of Iraq. However, this stability was broken in late 2008 as stock markets across the world crashed following the sub-prime crisis in the U.S. Note that the recent debt crisis in the Euro-zone has also translated into higher U.K. uncertainty in 2012 and 2013.
Figure 5 presents the quarterly uncertainty index developed by [20] along with the estimate of λ_t. The figure shows that the periods of high uncertainty identified by the quarterly index match those indicated by λ_t. Note, however, that the use of monthly data in our study implies that λ_t also incorporates higher frequency movements in U.K. uncertainty.

4.3.2. Impulse Response to Volatility Shocks

Figure 6 plots the time-varying impulse response to a one standard deviation shock to λ_t. Following [5], the impulse responses are computed assuming that the parameters are fixed at their estimated values and do not vary over the impulse response horizon. As shown by [21], this 'anticipated utility' assumption provides a reasonable approximation to the underlying object of interest.
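A sketch of how such a response could be computed under the 'anticipated utility' assumption: hold the period-t parameters fixed, shock the volatility innovation by one standard deviation, and iterate the conditional mean of the system forward. The function below is a simplified illustration (one VAR lag, contemporaneous λ_t only, assumed parameter values), not the paper's code.

```python
import numpy as np

def irf_to_uncertainty_shock(beta, gamma, F, Q_lam, horizon=48):
    """Response of Z to a one-s.d. shock to the volatility innovation, parameters held fixed."""
    dlnlam = np.zeros(horizon)
    dZ = np.zeros((horizon, beta.shape[0]))
    dlnlam[0] = np.sqrt(Q_lam)                   # one standard deviation of the volatility innovation
    dZ[0] = gamma * dlnlam[0]
    for h in range(1, horizon):
        dlnlam[h] = F * dlnlam[h - 1]            # the shock decays through the AR(1) for ln(lambda_t)
        dZ[h] = beta @ dZ[h - 1] + gamma * dlnlam[h]
    return dZ                                    # difference between the shocked and baseline mean paths

# Illustrative use with assumed period-t parameter values
beta_t  = np.array([[0.5, 0.1], [0.1, 0.5]])
gamma_t = np.array([-0.5, 0.5])
irf = irf_to_uncertainty_shock(beta_t, gamma_t, F=0.75, Q_lam=0.05)
```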
The left panel of the figure shows the median impulse response, while the right panel displays the cumulated response at the one-year horizon. Consider the top row of the figure, which shows the response of industrial production. The results show that the response has declined over time. During the 1970s and the 1980s, an increase in uncertainty reduced industrial production by about 0.4% at the one-year horizon. The magnitude of the response is similar to that reported by [2] using a fixed coefficient VAR model. After the early 1990s, this response declined sharply, with the median close to zero in 2005. There is weak evidence that the magnitude of this response has increased again over the recent past, albeit to a value less than the estimate in the earlier part of the sample.
The response of inflation to the uncertainty shock is positive. This supports the existence of the pricing bias channel postulated in [22]; in other words, when the economy is characterised by price and wage rigidity, inflation rises in the face of uncertainty, because forward-looking agents bias their pricing decision upwards in order to avoid supplying goods when demand and costs are high. The estimated cumulated response displays a decline, but the degree of the change is estimated to be smaller than the shift in the industrial production response. The cumulated response at the one-year horizon averages 0.5% to 0.6% over the 1970s and the 1980s. This falls to around 0.2% to 0.3% over the last two decades.
The decline in the response of the short-term interest rate is more dramatic: while the cumulated response is clearly positive before the early 1990s, after this period the null hypothesis of a zero response cannot be rejected.
The final sub-plot shows that the response of stock market returns to this shock has also declined over time. After the late 1990s, the cumulated response of returns is about −2% in contrast to a decline of about 3% earlier in the sample. In comparison to the macroeconomic variables, the magnitude of the decline in the stock returns response appears to be smaller, and uncertainty shocks still have a substantial impact on the stock market.
The time variation in the impulse responses is not sensitive to the prior distribution assumed for Q_B. As discussed above, the benchmark model uses a relatively loose prior, with the degrees of freedom of the inverse Wishart prior distribution set to a fairly low value. If the degrees of freedom are increased and more weight is placed on the prior distribution, the resulting impulse responses are very similar. In particular, if the degrees of freedom are set equal to the length of the training sample (i.e., 180 observations), the estimated impulse responses still indicate that the response to uncertainty shocks has declined over time. These additional impulse responses are presented in Appendix B.
It is interesting to consider the possible factors that can explain the decline in the response to uncertainty shocks. A detailed DSGE-based analysis of this question is undertaken in [4] for the U.S. The simulations in that paper suggest that the decline in the response to uncertainty shocks may be consistent with an increase in the weight placed on inflation in the policy rule employed by the central bank. When this coefficient rises and the authorities react strongly to inflation, future inflation is expected to be on target. This reduces firms' concerns about expected inflation and makes them less forward looking. In other words, the pricing bias decreases, and the link between inflation and marginal cost is restored. In this case, the authorities are able to cut the policy rate by more and for a longer period, which helps them to address the adverse effects of elevated uncertainty, thus ameliorating the decline in output and stock returns. Note that the empirical results point to a change in the responses after the early 1990s, when the Bank of England introduced inflation targeting. As documented in [23], there is strong evidence that the Bank placed a greater weight on inflation control after this date. This provides tentative evidence that the change in the response to uncertainty may be linked to a change in the practice of monetary policy. Of course, the U.K. economy was subject to other changes at the same time, and a more structural analysis is required to distinguish between the different factors affecting the transmission mechanism of uncertainty shocks.

5. Conclusions

This paper considers whether the impact of uncertainty shocks on the U.K. economy has changed over time. Using an extended TVP VAR model that allows the estimation of the time-varying impact of uncertainty shocks, we find that the responses of industrial production growth, CPI inflation, the short-term interest rate and stock market returns have declined over time. The main change in the response coincides with the adoption of inflation targeting in the U.K. and is consistent with simulations from a DSGE model that assumes an increase in the inflation coefficient in the monetary policy rule.
In future work, it may be interesting to investigate more thoroughly the factors that may have led to the change in the response to uncertainty in the U.K. In addition, it may be useful to apply the model to a cross-section of countries that have had different historical experiences with regards to policy and structural changes. This would also allow the estimation of uncertainty indices for a larger range of countries.

Acknowledgments

We thank the anonymous referees and the editor for useful comments.

Conflicts of Interest

The author declares no conflict of interest.

Appendix

A. Convergence

Figure A1. Recursive means calculated every 100 Gibbs draws.

B. Sensitivity Analysis

Figure A2. Impulse responses from the model using a tighter prior.

References

  1. N. Bloom. “The Impact of Uncertainty Shocks.” Econometrica 77 (2009): 623–685. [Google Scholar]
  2. S. Denis, and P. Kannan. The Impact of Uncertainty Shocks on the UK Economy. IMF Working Papers 13/66; Washington, DC, USA: International Monetary Fund, 2013. [Google Scholar]
  3. R. Beetsma, and M. Giuliodori. “The changing macroeconomic response to stock market volatility shocks.” J. Macroecon. 34 (2012): 281–293. [Google Scholar] [CrossRef]
  4. H. Mumtaz, and K. Theodoridis. The Changing Transmission of Uncertainty shocks in the US: An Empirical Analysis. Working Papers 735; London, UK: Queen Mary University of London, School of Economics and Finance, 2014. [Google Scholar]
  5. G. Primiceri. “Time varying structural vector autoregressions and monetary policy.” Rev. Econ. Stud. 72 (2005): 821–852. [Google Scholar] [CrossRef]
  6. A. Carriero, T.E. Clark, and M. Marcellino. “Common Drifting Volatility in Large Bayesian VARs.” J. Bus. Econ. Stat., 2015. [Google Scholar] [CrossRef]
  7. T. Cogley, and T.J. Sargent. “Drifts and Volatilities: Monetary policies and outcomes in the Post WWII U.S.” Rev. Econ. Dyn. 8 (2005): 262–302. [Google Scholar] [CrossRef]
  8. S.J. Koopman, and E. Hol Uspensky. “The stochastic volatility in mean model: Empirical evidence from international stock markets.” J. Appl. Econom. 17 (2002): 667–689. [Google Scholar] [CrossRef]
  9. H. Berument, Y. Yalcin, and J. Yildirim. “The effect of inflation uncertainty on inflation: Stochastic volatility in mean model within a dynamic framework.” Econ. Model. 26 (2009): 1201–1207. [Google Scholar] [CrossRef]
  10. L. Kwiatkowski. “Markov Switching In-Mean Effect. Bayesian Analysis in Stochastic Volatility Framework.” Cent. Eur. J. Econ. Model. Econ. 2 (2010): 59–94. [Google Scholar]
  11. M. Lemoine, and C. Mougin. The Growth-Volatility Relationship: New Evidence Based on Stochastic Volatility in Mean Models. Working Paper 285; Paris, France: Banque de France, 2010. [Google Scholar]
  12. M. Asai, and M. McAleer. “Multivariate stochastic volatility, leverage and news impact surfaces.” Econom. J. 12 (2009): 292–309. [Google Scholar] [CrossRef]
  13. B. Aruoba, L. Bocola, and F. Schorfheide. “A New Class of Nonlinear Time Series Models for the Evaluation of DSGE Models.” 2011, unpublished work. [Google Scholar]
  14. H. Mumtaz, and K. Theodoridis. “The international transmission of volatility shocks: An empirical analysis.” J. Eur. Econ. Assoc. 13 (2015): 512–533. [Google Scholar] [CrossRef]
  15. L. Benati, and H. Mumtaz. U.S. evolving macroeconomic dynamics—A structural investigation. Working Paper Series 746; Frankfurt am Main, Germany: European Central Bank, 2007. [Google Scholar]
  16. C. Carter, and R. Kohn. “On Gibbs sampling for state space models.” Biometrika 81 (1994): 541–553. [Google Scholar] [CrossRef]
  17. B.P. Carlin, N.G. Polson, and D.S. Stoffer. “A Monte Carlo Approach to Nonnormal and Nonlinear State-Space Modeling.” J. Am. Stat. Assoc. 87 (1992): 493–500. [Google Scholar] [CrossRef]
  18. E. Jacquier, N. Polson, and P. Rossi. “Bayesian analysis of stochastic volatility models.” J. Bus. Econ. Stat. 12 (1994): 371–418. [Google Scholar]
  19. J.A. Gamble, and J.P. LeSage. “A Monte Carlo Comparison of Time Varying Parameter and Multiprocess Mixture Models in the Presence of Structural Shifts and Outliers.” Rev. Econ. Stat. 75 (1993): 515–519. [Google Scholar] [CrossRef]
  20. A. Haddow, C. Hare, J. Hooley, and T. Shakir. “Macroeconomic uncertainty: What is it, how can we measure it and why does it matter? ” Bank Engl. Q. Bull. 53 (2013): 100–109. [Google Scholar]
  21. T. Cogley, and T.J. Sargent. “Anticipated utility and rational expectations as approximations of bayesian decision making.” Int. Econ. Rev. 49 (2008): 185–221. [Google Scholar] [CrossRef]
  22. J. Fernández-Villaverde, P. Guerrón-Quintana, K. Kuester, and J. Rubio-Ramírez. “Fiscal Volatility Shocks and Economic Activity.” Am. Econ. Rev. 105 (2015): 3352–3384. [Google Scholar] [CrossRef]
  23. E. Nelson. UK Monetary Policy 1972-97: A Guide Using Taylor Rules. CEPR Discussion Paper 2931; London, UK: Centre for Economic Policy Research, 2001. [Google Scholar]
  • 1. Beetsma and Giuliodori [3] and Mumtaz and Theodoridis [4] investigate this question for the U.S.
  • 2. An exception is [4], who use an extended version of the proposed model to investigate the time-varying impact of uncertainty shocks in the U.S. The model in [4] incorporates a factor structure in the observation Equation (1) and, thus, incorporates more information.
  • 3. In order to take the endpoints into account, the algorithm is modified slightly for the initial condition and the last observation. Details of these changes can be found in [18].
Figure 1. Monte Carlo estimates of VAR coefficients. The black line is the true value of the parameter. The red line is the median estimate across 1000 replications, and the shaded area represents the 68% interval.
Figure 2. Monte Carlo estimates of the non-zero and non-unity element of the A matrix. The black line is the true value of the parameter. The red line is the median estimate across 1000 replications, and the shaded area represents the 68% interval.
Figure 3. Monte Carlo estimates of the stochastic volatility. The black line is the true value of the parameter. The red line is the median estimate across 1000 replications, and the shaded area represents the 68% interval.
Figure 4. Estimate of the common volatility λ_t. The figure shows the median (red line) and 68% error bands (green area).
Figure 5. Estimate of the common volatility λ_t. The figure shows the median (red line) and 68% error bands (green area). The index of [20] is presented as the dashed-dotted line for comparison.
Figure 6. Impulse response to a one standard deviation volatility shock. The left panel shows the median response. The right panel shows the median and the 68% error bands.
