Following Manzan (2021), this paper examines how professional forecasters revise their fixed-event uncertainty (variance) forecasts and tests the Bayesian learning prediction that variance forecasts should decrease as the horizon shortens. We show that the first-moment "efficiency" tests used by Manzan (2021) are not applicable to studying revisions of variance forecasts. Instead, we employ the monotonicity tests developed by Patton and Timmermann (2012), in the first application of these tests to second moments of survey expectations. We find strong evidence that the variance forecasts are consistent with the Bayesian learning prediction of monotonically declining variance.
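The monotonicity prediction being tested can be illustrated with a simplified sketch on hypothetical data. The actual Patton and Timmermann (2012) tests use bootstrap inference over the full set of adjacent-horizon differences; the check below only inspects whether the average variance forecast falls as the horizon shortens, which is the pattern Bayesian learning implies:

```python
import numpy as np

def declining_variance_check(var_forecasts):
    """var_forecasts: (n_forecasters, n_horizons) array of fixed-event
    variance forecasts, columns ordered from longest to shortest horizon.
    Returns the adjacent-horizon differences of the cross-forecaster
    means and a flag for whether every difference is negative, i.e.
    whether average forecast variance declines monotonically as the
    horizon shortens."""
    mean_by_horizon = var_forecasts.mean(axis=0)
    diffs = np.diff(mean_by_horizon)  # difference toward shorter horizons
    return diffs, bool(np.all(diffs < 0))

# Hypothetical data: 5 forecasters, horizons of 4, 3, 2, 1 quarters
# before the fixed event, with variance shrinking as the event nears.
rng = np.random.default_rng(0)
base = np.array([1.0, 0.8, 0.6, 0.4])
data = base + 0.05 * rng.standard_normal((5, 4))
diffs, monotone = declining_variance_check(data)
```

In the paper's setting the formal test replaces this all-negative check with bootstrap p-values for the joint hypothesis that all adjacent-horizon mean differences are weakly negative.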
This chapter provides an overview of surveys of professional forecasters, with a focus on the U.S. Survey of Professional Forecasters and the European Central Bank Survey of Professional Forecasters. A distinguishing feature of these surveys is that they collect both point and density forecasts and make the data publicly available. We discuss their structure, issues involved in using the data, and the construction of measures such as disagreement and uncertainty at the aggregate and individual levels. Our review also summarizes the findings of studies on issues such as the alignment of point forecasts with measures of central tendency from the associated density forecasts, the coverage of density forecasts, the rounding of point and density forecasts, comparisons of forecast accuracy across respondents, and heterogeneity in forecasting behavior and the persistence of these differences. We conclude with some observations for future work.