We develop uncertainty measures for point forecasts from surveys such as the Survey of Professional Forecasters, the Blue Chip Consensus, or the Federal Open Market Committee's Summary of Economic Projections. At a given point in time, these surveys provide forecasts for macroeconomic variables at multiple horizons. To track time-varying uncertainty in the associated forecast errors, we derive a multiple-horizon specification of stochastic volatility that pools the information embedded in observed forecast errors across horizons. Applied to forecasts for various macroeconomic variables from the Survey of Professional Forecasters, our stochastic-volatility model yields more accurate uncertainty measures than constant-variance approaches.
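The contrast between constant-variance and time-varying uncertainty measures can be illustrated with a minimal sketch. The code below is not the authors' estimator: it uses an exponentially weighted moving variance of forecast errors as a crude stand-in for stochastic volatility, with an illustrative decay parameter `lam` and a deterministic error series whose volatility doubles mid-sample.

```python
import math

def ewma_variance(errors, lam=0.9):
    """Exponentially weighted variance of forecast errors: a crude
    stand-in for stochastic volatility (time-varying error variance).
    lam is an illustrative decay parameter, not taken from the paper."""
    v = errors[0] ** 2          # initialize at the first squared error
    path = [v]
    for e in errors[1:]:
        v = lam * v + (1.0 - lam) * e ** 2
        path.append(v)
    return path

def constant_variance(errors):
    """Constant-variance benchmark: full-sample mean squared error."""
    return sum(e ** 2 for e in errors) / len(errors)

# Stylized forecast errors: calm first half (|e| = 1), volatile second half (|e| = 2).
errors = [1.0, -1.0] * 50 + [2.0, -2.0] * 50

const_sd = math.sqrt(constant_variance(errors))
tv_sd = [math.sqrt(v) for v in ewma_variance(errors)]

# The constant band averages over both regimes, while the time-varying
# estimate tracks the regime shift: narrow early, wide late.
print(round(const_sd, 2), round(tv_sd[99], 2), round(tv_sd[-1], 2))
# → 1.58 1.0 2.0
```

The point the abstract makes is visible here: a single full-sample variance overstates uncertainty in calm periods and understates it in volatile ones, whereas a time-varying specification adapts to both.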
Many forecasts are conditional in nature. For example, a number of central banks routinely report forecasts conditional on particular paths of policy instruments.
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989).