
Acknowledging Uncertainty

Loretta J. Mester, Former President and CEO (2014-2024), Federal Reserve Bank of Cleveland
Uncertainty is an uncomfortable position, but certainty is an absurd one.
— Voltaire

I thank Mickey Levy and the Shadow Open Market Committee for inviting me to speak today. I have known many of the Shadow Committee’s members for quite a long time — longer than I care to admit! Over the years, I have learned much from the position papers and conferences put together by this serious group of economists. I share their view that the active exchange of diverse ideas and careful deliberations ultimately result in better policy decisions.

Today, I would like to share my perspective as someone who has participated in some of those policy decisions. I will comment on how I approach monetary policymaking in an uncertain world, review the types of uncertainty policymakers and economists need to deal with, and provide some recommendations for improving monetary policy communications. Of course, the views I’ll present today are my own and not necessarily those of the Federal Reserve System or my colleagues on the Federal Open Market Committee.

Monetary Policy Communications

I believe monetary policy should be set based on the outlook for the economy over the medium run because this is the time horizon over which monetary policy can affect the economy. I focus on underlying fundamentals in determining that medium-run outlook, and I have cautioned against over-reacting to short-term fluctuations in the economic and financial data.1 I believe credible policy communications play a key role in policymaking. It has been well established that when the public has a clearer understanding about how monetary policy is likely to change as economic conditions evolve — whether those changes in conditions are anticipated or not — monetary policy is more effective. Policymakers can improve the public’s understanding by being clear about the goals of monetary policy, those aspects of the economy monetary policy can and can’t influence, and the economic information that influences their forecasts and policy decisions, as well as by striving to be systematic in their policy responses to changes in economic conditions that influence the outlook. When the public has a clearer understanding of the strategy monetary policymakers follow in normal times, not only will they be able to make better financial and employment decisions, they will also understand when nonstandard monetary policy action is required in extraordinary circumstances.

The Federal Reserve has taken many steps over time to improve its policy communications. Recent enhancements include the Chair’s press briefings four times a year, the Summary of Economic Projections, and the Statement on Longer-Run Goals and Monetary Policy Strategy, which established an explicit numerical goal for inflation.

Against this backdrop, the results of CNBC’s August Fed survey of market economists, fund managers, and strategists were, to my mind, pretty troubling. Nearly half of the respondents reported that they believe current Federal Reserve policy is mostly influenced by the current data, while less than 40 percent said they think it’s influenced by the medium-run outlook, and the rest were unsure. Sixty percent said they thought the Fed doesn’t have a framework for deciding when to adjust policy, while only about a quarter of the respondents said they think we do.

These results suggest to me that our policy communications could benefit from further enhancements. Recently, the FOMC has been describing its policymaking approach as being “data dependent.” Unfortunately, I believe there is some confusion about what the Fed actually means by “data dependent.” This phrase has provided a transition from a period of explicit forward guidance, which was used as a policy tool during the recession and early in the recovery, back to more normal policymaking times.2 But this transition has posed somewhat of a challenge for FOMC communications. After the Great Inflation of the 1970s, the FOMC became more predictable and systematic in how it reacted to changes in economic activity and inflation.3 So the public had a pretty good sense of the Fed’s so-called reaction function and explicit forward guidance was rarely used. But the Great Recession required the Fed to behave in a way quite distinct from its past behavior, and consequently, there is less understanding today about how policymakers are likely to react to incoming economic information. Another factor complicating communication is that market participants prefer more explicit statements and less uncertainty. Thus, they may interpret the forecasts of the economy and the appropriate policy path as having more certitude than they actually do, which creates some communications issues when the forecasts and policy path change.

The concept of “data dependence” was meant to reinforce the idea that the economy is dynamic and will be hit by economic disturbances that can’t be known in advance. Some shocks will result in an accumulation of economic information that changes the medium-run outlook for the economy and the risks around the outlook in a way to which monetary policy will want to respond. But some of these shocks will not materially change the outlook or policymakers’ view of appropriate policy. Unfortunately, referring to policy as “data-dependent” could be giving the wrong impression that policy is driven by short-run movements in a couple of different data reports. It may even suggest that policy-setting is unsystematic in that the salient data reports may be viewed as changing from meeting to meeting. We seem to find ourselves in a situation where market participants and commentators view any one monthly or quarterly data release as the definitive piece of evidence that will result in either a policy action or no action.

Types of Uncertainty

I mentioned that market participants tend to like certainty. But that applies more broadly — in many situations, people prefer certainty. But the world is an uncertain place, and I think policymakers should find a better way to acknowledge and convey that uncertainty. The 18th century French philosopher Voltaire said, “Uncertainty is an uncomfortable position, but certainty is an absurd one.” In other words, we might prefer to live in a world with more certainty, but we don’t. And to pretend we do live in such a world is absurd — it can lead to bad outcomes.

In terms of economics and monetary policymaking, uncertainty comes into play in a number of ways. For example, price stability and monetary policy are intimately linked, but setting monetary policy to achieve price stability is not trivial. There is uncertainty around our measures and forecasts of inflation and about the transmission of monetary policy to inflation. Recently, economists have been focusing on the uncertainty surrounding the underlying structural aspects of the economy, such as the longer-run levels of the unemployment rate, trend output growth, structural productivity growth, and equilibrium interest rates, and their implications for monetary policy. As former Federal Reserve Chair Alan Greenspan pointed out: “... uncertainty is not just a pervasive feature of the monetary policy landscape; it is the defining characteristic of that landscape.”4

Data uncertainty

One type of uncertainty economists and policymakers need to confront is data uncertainty. The U.S. statistical agencies provide excellent service using best-practice techniques to gather large volumes of high-quality data on numerous aspects of the economy. But even the highest quality data are inevitably measured with some error and are sometimes subject to revision as more information is gathered. For example, according to the Bureau of Labor Statistics, the 90 percent confidence interval due to sampling error for the monthly change in nonfarm payroll employment is about plus or minus 115,000 jobs.5 Of course, users of the data know this, but we tend to ignore this issue. Charles Manski has written extensively on what he calls the problem of “incredible certitude.”6 Downplaying the fact that the official statistics are measured with error can lead less sophisticated users of the data to believe they are more precisely measured than they actually are. Manski points out that the idea that economists, policymakers, and the statistical agencies should do much more to convey the sense of error around the statistics is not a new idea. It was strongly encouraged more than 50 years ago by Oskar Morgenstern, a founder of the field of game theory.7
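To make that sampling-error band concrete, here is a minimal sketch in Python, using a hypothetical headline number, of the range of monthly payroll changes a single reported figure is statistically consistent with.

```python
# Minimal illustration (hypothetical headline number): the reported monthly
# change in nonfarm payrolls is a point estimate, and the BLS 90 percent
# confidence interval due to sampling error is roughly +/-115,000 jobs.

def payroll_interval(reported_change, half_width=115_000):
    """Approximate 90% confidence interval around a reported monthly change
    in nonfarm payroll employment, given the BLS sampling-error band."""
    return reported_change - half_width, reported_change + half_width

if __name__ == "__main__":
    reported = 150_000  # hypothetical headline number
    low, high = payroll_interval(reported)
    print(f"Reported: {reported:+,}; 90% interval: {low:+,} to {high:+,}")
    # A +150,000 headline is statistically consistent with anything from
    # roughly +35,000 to +265,000 jobs.
```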

Things have improved since then. Revisions in some of the data reports, like GDP growth and employment, routinely get media coverage, and a body of research investigating the implications of data revisions for forecasting, structural modeling, and policy is growing.8 Still, it seems likely that the imprecision in some of the data and the difficulties in forecasting are not fully appreciated. This comes to light every month in the days leading up to the release of the monthly employment report. Economists are polled to see what they expect the monthly number to be, and then when the report is released, the financial press often reports the number as good if it exceeds the consensus estimate or bad if it is weaker than the consensus. Little attention is paid to the dispersion in the economists’ estimates in the first place, to the fact that the number in the release is measured with some error, or to the fact that even a job growth number that comes in less than analysts expected could be strong enough to put further downward pressure on the unemployment rate.

Data revisions complicate making monetary policy in real time.9 PCE inflation, the measure the Fed uses for its inflation goal, is subject to revision. For example, FOMC transcripts and minutes show that in early 2002, policymakers were concerned about a drop in inflation. Ultimately, much of this drop was revised away. Someone reading the transcripts today not knowing that the data were subsequently revised could be quite confused by the discussion.

Measurement issues also affect some of the important constructs in macroeconomic models. In a number of papers, Athanasios Orphanides has laid out a convincing case that mismeasurement of slack and other unobservables like the natural rate of interest led to monetary policy mistakes that contributed to the Great Inflation of the 1970s. He argues that these mismeasured concepts continue to unduly influence monetary policy today and can lead to poor policy decisions that induce undesirable fluctuations in the economy.10, 11

Model uncertainty

Economists and policymakers also need to confront model uncertainty.12 Even if we were all to agree on one model of the economy — a heroic assumption to be sure — the parameters governing how economic agents interact with one another would be estimated from the data and would not be precisely known. So there would be uncertainty around forecasts derived from the model and the appropriate policy stance based on the model, even if we knew with certainty what shocks were going to hit the economy in the future.

Of course, the situation is even more complicated because economists don’t agree on a single model or a single set of assumptions within a general class of models. Often, there are competing models or different sets of assumptions that are consistent with the observable data.13 Before the financial crisis, we may have convinced ourselves that we could rely on representative agent models, linearized around a steady state, with one interest rate. But the nature of the financial crisis pointed out the inadequacies of these models for understanding the interplay between the real economy and financial markets. The good news is that macroeconomic models are being developed that include more than just a rudimentary financial sector, and policymakers at the Fed and elsewhere are broadening the set of models we routinely consult.14 Nonetheless, while our usual models can give us a pretty good sense of the employment and inflation costs of a change in monetary policy, we are still less able to quantify the financial stability costs and benefits of particular monetary policy paths. So we need to remain humble, and continue to examine the economy’s performance to assess these costs and benefits.

Addressing Uncertainty in Theory and Practice

Economists and forecasters have developed several techniques to handle uncertainty. Bayesian estimation techniques are commonly used in macroeconomic modeling to handle parameter uncertainty. Given the model and the available data, these Bayesian methods yield probability distributions of forecasts that reflect both uncertainty about the future evolution of the economy and uncertainty about the parameters of the model. Model uncertainty is more difficult to address. But if we know the set of relevant models and can write them down, then Bayesian techniques can also be used to address model uncertainty. In particular, Bayesian techniques can be used to average across multiple models, based on the models’ relative abilities to fit the data. In this model-averaging approach, appropriate policy would be the policy that performs well on average across the set of models but is not necessarily the best policy in any one particular model.15 A related literature studies setting policy using simple rules that are robust across a variety of models and economic circumstances.16
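The sketch below, with made-up losses and stand-in model weights, illustrates the model-averaging idea: the chosen policy minimizes the weighted-average loss across a set of models rather than the loss in any single model. In practice the weights would be posterior model probabilities estimated from the data; here they are simply assumed.

```python
import numpy as np

# Stylized sketch of Bayesian model averaging for policy choice (all numbers
# hypothetical). Each candidate model assigns a quadratic loss to each
# candidate policy rate; the weights stand in for posterior model
# probabilities, which in practice would come from the models' fit to the data.

policies = np.linspace(0.0, 3.0, 61)              # candidate policy rates, percent
model_best_rates = np.array([0.75, 1.50, 2.25])   # each model's preferred rate (hypothetical)
model_weights = np.array([0.5, 0.3, 0.2])         # stand-in posterior model probabilities

losses = (policies[None, :] - model_best_rates[:, None]) ** 2  # models x policies
avg_loss = model_weights @ losses                              # weighted-average loss per policy

best_policy = policies[np.argmin(avg_loss)]
print(f"Policy that performs best on average across models: {best_policy:.2f}%")
# Note: this rate is not the optimum of any single model, which is the point
# of the model-averaging approach.
```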

In some cases, it may not be easy to write down all the models that could characterize the economy or to assign probabilities to various outcomes. Nobel Laureates Tom Sargent and Lars Hansen have developed a robust-control approach that can address model uncertainty and misspecification even in these circumstances. Their approach confronts head-on the fact that models are only approximations to reality, and they show the benefit of choosing the policy that produces the best outcome in the worst-case scenario across models. The policymaker doesn’t necessarily expect the worst, but she should plan against it because doing so will lead to acceptable performance across a wide array of circumstances.17
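For comparison, the same stylized setup can illustrate the min-max flavor of the robust-control approach: rather than weighting models by probabilities, the policymaker chooses the policy whose worst-case loss across the set of models is smallest. Again, every number is hypothetical.

```python
import numpy as np

# Stylized min-max comparison using the same hypothetical losses as above:
# pick the policy that minimizes the worst loss across the set of models.

policies = np.linspace(0.0, 3.0, 61)
model_best_rates = np.array([0.75, 1.50, 2.25])   # hypothetical
losses = (policies[None, :] - model_best_rates[:, None]) ** 2

worst_case = losses.max(axis=0)                    # worst loss over models, per policy
robust_policy = policies[np.argmin(worst_case)]
print(f"Min-max policy: {robust_policy:.2f}%")
# The policymaker does not expect the worst case, but planning against it
# delivers acceptable performance across the whole set of models.
```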

From a practical policymaking standpoint, I find that looking at forecasts from several models gives me a better sense not only of the most likely forecast but also the risks around the forecast. I don’t believe we are at the state of knowledge where a single policy rule can be used to set policy because no rule works well enough across a variety of economic models and in a variety of economic circumstances. But I do find it useful to look at the outcomes of an array of simple, robust monetary policy rules as a benchmark against which to assess current policy. The Cleveland Fed website now publishes the outcomes of seven simple monetary policy rules based on three publicly available forecasts.18
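As one illustration of such a benchmark, the sketch below evaluates a Taylor (1993)-type rule at hypothetical readings for inflation and the output gap. It is not the Cleveland Fed tool itself, only an example of the kind of simple rule such tools report.

```python
# Minimal sketch of a Taylor (1993)-type benchmark rule with hypothetical
# inputs; an equilibrium real rate of 2 percent and an inflation goal of
# 2 percent are the rule's conventional settings.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal funds rate: r* + pi + 0.5*(pi - pi*) + 0.5*gap."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

if __name__ == "__main__":
    prescription = taylor_rule(inflation=1.7, output_gap=-0.5)  # hypothetical readings
    print(f"Benchmark funds rate from this rule: {prescription:.2f}%")
    # Comparing the actual policy stance with a range of such benchmarks
    # imposes discipline: a large deviation calls for explicit justification.
```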

Despite the diversity across the outcomes, I find that the rules provide some discipline in systematically relating incoming data to policy decisions: if the current policy stance is quite different from what the rules suggest, one must carefully consider the factors that support that deviation. One caveat about looking at the outcomes of several models and several rules is that you need to be consistent about it: you must guard against changing which model or rule you favor merely because it happens to produce results that confirm your intuition or preferred policy stance at the time.

In terms of policy responses to uncertainty, some results in the literature suggest that when policymakers confront more uncertainty either in their data or models, they should be more cautious in acting, that is, be more inertial in their responses.19 However, subsequent research has shown that this is not generally true. For example, Sargent (1998) points out that caution does not necessarily mean doing less. When there’s uncertainty, it might be better in some cases for policymakers to act more aggressively, not less, because aggressive and preemptive action can prevent the worst-case outcomes from actually coming about.20 Another factor that can affect whether the policymaker should be inertial or not is the public’s understanding of the policymaker’s reaction function and the policymaker’s commitment to following that reaction function. For example, if the policymaker hasn’t effectively communicated, retaining a very accommodative monetary policy stance might be interpreted as signaling a gloomy economic outlook rather than as a preemptive move against downside risk.21 This points out the importance of clear communications, the starting point for this talk and where I’d like to conclude.

Three Recommendations for Monetary Policy Communications

It might seem counterintuitive, but I think we would clarify things for the public by acknowledging uncertainty and focusing attention on the medium-run outlook rather than on short-run fluctuations in the data. Let me offer three recommendations that I believe would improve FOMC communications.

First, the FOMC should publish confidence bands around the projections in the Summary of Economic Projections (SEP). Four times a year, the FOMC summarizes Committee participants’ projections of output growth, the unemployment rate, inflation, and the associated appropriate policy path. For the past year, we have also been providing the median projections across the participants for each variable. Although it is a topic of discussion,22 the FOMC does not publish error bands around these projections. I believe we could improve our communications if we did.23 Confidence bands are a standard part of forecasting, illustrating that the future is inherently uncertain. The confidence bands would give the public a better sense of the normal type of forecast variation one should expect to see, so they could better understand some of the risks around the forecast and subsequent changes in the forecast. The confidence bands would also be a helpful reminder to policymakers to remain humble about our ability to know the future with much certainty.

Although the public at large may not be aware of it, the Committee does publish a summary table of the average historical errors of projections from 1996 through 2015 made by various private and government forecasters. We can apply these historical errors to the median FOMC projections to get an approximate, symmetric 70 percent confidence interval for each variable, as illustrated in Figures 1-4.24
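The construction just described is mechanical; the sketch below applies it with hypothetical inputs: take a median SEP projection and add and subtract the corresponding average historical projection error. Under a normal approximation, plus or minus one root-mean-squared error covers roughly 68 percent of outcomes, which is the sense in which the band is an approximate 70 percent interval.

```python
# Minimal sketch with hypothetical numbers: a symmetric band built from a
# median projection and an average historical projection error (treated as a
# root-mean-squared error). Plus or minus one RMSE covers roughly 68 percent
# of outcomes under normality, i.e., an approximate 70 percent band.

median_inflation_projection = 1.9   # hypothetical median PCE inflation projection, percent
historical_rmse = 1.0               # hypothetical average historical error, percentage points

lower = median_inflation_projection - historical_rmse
upper = median_inflation_projection + historical_rmse
print(f"Approximate 70% confidence band: {lower:.1f}% to {upper:.1f}%")
```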

As you can see in Figure 3, the error band around the inflation forecast one or two years out is about ±1 percentage point. Keeping those confidence bands in mind helps one to judge progress toward our policy goals. In addition, the figure clearly shows that even though the dispersion across FOMC participants often gets media attention, it is actually quite narrow when compared with the confidence band around the inflation forecast.

The federal funds rate path differs from the other variables in the SEP because policymakers choose the path. But because there is uncertainty around each participant’s projections of growth, the unemployment rate, and inflation, there is also uncertainty around the appropriate policy path. Providing a confidence band would help remind people that the median policy path in the SEP is not meant to be a firm commitment on the part of the FOMC. Instead, policy should be expected to respond to changes in economic and financial conditions that materially affect the medium-run outlook. As you can see in Figure 4, the range of reasonable outcomes for the policy path is actually quite wide, and considerably wider than some of the variation we’ve seen in the SEP policy path over time, even though those shifts have often drawn considerable media attention.

My second recommendation is that the FOMC present a forecast that could serve as the benchmark for understanding the FOMC’s policy actions and post-meeting statements. The median paths in the SEP are a step in that direction, but the variables are not linked. So, for example, there is no guarantee that someone projecting the median inflation path would necessarily be projecting the median output path. Publishing a benchmark forecast with error bands, as many other central banks do, would make it somewhat easier to explain how the economic outlook depends on the future path of monetary policy. In 2012, the FOMC experimented with developing a forecast representing the consensus of the Committee.25 It proved difficult to reach a consensus on a consensus forecast, but I think we should continue to pursue this.26 In the meantime, we should consider publishing the staff’s forecast. Policymakers need not agree with the staff’s forecast, but they could use it as a benchmark against which to explain how and why their forecasts may differ.

My third recommendation pertains to our post-meeting FOMC statement. While it continues to serve the Committee well, I believe the statement could do more to dissuade people from thinking short term, and to illuminate that policy is being formulated based on the medium-run outlook, the risks around the outlook, and the progress on our policy goals. The statement is an important part of FOMC communications, providing information on the mapping from economic conditions to the outlook, and then to policy actions. The current formulation of the statement does highlight factors that are important in that reaction function, namely, the medium-run outlook for inflation, resource utilization, and inflation expectations. But the first paragraph in the statement tends to concentrate on changes in economic conditions since the last FOMC meeting, which can spur a short-run focus. The facts in the paragraph are always true — investment has been soft, unemployment is little changed, employment growth has been solid, on average, and so on. But we could improve the public’s understanding of our monetary policy strategy if we provided more interpretation of those facts — namely, our assessment of how recent changes in economic and financial data have or have not changed the medium-run outlook, the risks around that outlook, and therefore the appropriate policy path. We could also strive for more consistency about the conditions we systematically assess in calibrating the stance of policy so that the public would get a better sense of the Committee’s reaction function over time.

Conclusion

Uncertainty is the norm, not the exception. I believe it will serve both the public and the FOMC well if we more explicitly acknowledge this uncertainty. Doing so will help the public evaluate whether changes in economic conditions or in the outlook are significant or not. It will help them see that the economy often evolves differently than the modal forecast, and that it is better to focus on the medium run than on short-run fluctuations in the data. It will give them a better sense of what policymakers mean when they say their policy is data-dependent. My suggestions here are simple ones, but I believe they are consistent with the evolutionary changes the FOMC has been making on its journey to increased transparency. Although policy communications will likely always remain somewhat of a challenge, I believe striving for even clearer communications is worth the effort.


Video: Shadow Open Market Committee: President Loretta J. Mester, Federal Reserve Bank of Cleveland (https://youtu.be/KkSgen4ybK8)

Figure 1. FOMC Summary of Economic Projections: Change in Real GDP (Q4/Q4)
Figure 2. FOMC Summary of Economic Projections: Unemployment Rate (Q4 avg)
Figure 3. FOMC Summary of Economic Projections: PCE Inflation (Q4/Q4)
Figure 4. FOMC Summary of Economic Projections: Federal Funds Rate (year-end)
Footnotes
  1. See Mester (April 1, 2016). Return to 1
  2. See Mester (November 20, 2014) for an overview of the use and evolution of forward guidance during the Great Recession and its aftermath. Return to 2
  3. See Taylor (2012). Return to 3
  4. See Greenspan (2004). Return to 4
  5. The surveys are also affected by nonsampling errors, including data collection errors or variation in response rates. See Technical Notes in “The Employment Situation — August 2016,” Bureau of Labor Statistics, U.S. Department of Labor. Return to 5
  6. See Manski (2011 and 2015). Return to 6
  7. See Morgenstern (1950 and 1963). Return to 7
  8. See Croushore (2011) for a review of the literature on real-time data analysis. Return to 8
  9. As discussed by Croushore (2011), the largest revision ever recorded for quarterly GDP growth was for the fourth quarter of 2008. The first release was made in January 2009 and showed GDP declining 3.8 percent at an annual rate. Just a month later, this number was revised down by 2.4 percentage points, to minus 6.2 percent, confirming the extent of the worst recession since the Great Depression. In the third monthly release in March 2009, the number was revised up a bit to minus 5.4 percent. With the benchmark revisions since then, the reading is now minus 8.2 percent. (Data are available in the Federal Reserve Bank of Philadelphia’s Real-Time Data Set for Macroeconomists.) Return to 9
  10. See, for example, Orphanides and Van Norden (2005) and Orphanides (2015). Return to 10
  11. A thorny issue related to data uncertainty is uncertainty about the nature of the shocks hitting the economy. For example, to understand the implications of incoming data on jobs, one needs to understand the nature of the factors affecting recent employment growth — are they demand-side factors like growth in output or supply-side factors like a mismatch between skills available and skills in demand? Return to 11
  12. Dennis (2005) presents a useful nontechnical summary of some of the types of uncertainty confronting monetary policymakers. Return to 12
  13. Manski (2011) calls different assumptions generating different projections “dueling certitudes.” He posits a “Law of Decreasing Credibility: The credibility of inference decreases with the strength of the assumptions maintained.” Strong assumptions can yield more definitive conclusions, but if the assumptions are questionable, then those definitive conclusions will be questionable as well. Return to 13
  14. The FOMC has been expanding the models it routinely examines as a part of the policymaking process. These include the Board of Governors staff’s large-scale FRB/US model and two smaller-scale dynamic stochastic general equilibrium (DSGE) models called EDO and SIGMA, as well as various models maintained and utilized at the Federal Reserve Banks. See the discussion of the Federal Reserve System’s ongoing research on DSGE models in the Minutes of the Federal Open Market Committee, June 21-22, 2011. Academic researchers are now building model archives to aid in the systematic comparison of empirical results and policy implications across a large set of economic models as an aid to policy analysis. One such archive, The Macroeconomic Model Data Base (MMB), headed by Volker Wieland of Goethe University Frankfurt, currently includes 61 models. See The Macroeconomic Model Data Base (MMB) web page at www.macromodelbase.com for more information on the database; for a discussion of the approach, see Wieland, Cwik, Müller, Schmidt, and Wolters (2012). Return to 14
  15. Bernanke (2007) discusses these techniques in an accessible way. See Waggoner and Zha (2012) for an application. Return to 15
  16. See, for example, Orphanides and Williams (2002 and 2007). Return to 16
  17. See Sargent (1998) and Hansen and Sargent (2007 and 2011). As discussed in Hansen and Sargent (2001), Brunner and Meltzer (1967) were early proponents of using a min-max strategy for handling model ambiguity. Return to 17
  18. See “Simple Monetary Policy Rules,” Federal Reserve Bank of Cleveland. In addition to posting current outcomes for the set of rules, the web page includes a tool that allows the user to customize the rules and the forecasted inputs into the rules to generate alternative policy paths. Return to 18
  19. Aoki (2003) studied the optimal policy response when data are measured with error and concluded that the degree of response to a variable in the policy rule should be less the higher the variable’s measurement error. Brainard (1967) studied optimal policy in response to a shock when there is uncertainty about the effect of policy on the economy and concluded that policy should respond less when there is uncertainty than when there is no uncertainty. This result has been shown not to be general across models. Return to 19
  20. Giannoni (2002 and 2007) shows policymakers averse to uncertainty will react more strongly to fluctuations in inflation and the output gap than if there were no uncertainty. They would put more weight on stabilizing inflation and the output gap and less weight on stabilizing the nominal interest rate. Return to 20
  21. See Woodford (2012). Return to 21
  22. See Minutes of the FOMC Meeting of January 26-27, 2016. Return to 22
  23. The Bank of Canada, Bank of England, European Central Bank, Norges Bank, and the Riksbank all publish a forecast with error bands as part of their communications; in some cases, it is the policymakers’ forecast, and in other cases, it is a staff forecast. Return to 23
  24. Because the September SEP has not yet been published, in the figures, the confidence bands for GDP growth, the unemployment rate, and inflation are constructed using the average historical projection errors in Table 2 in the June SEP, and the confidence band for the fed funds rate is constructed using the band illustrated in Chair Yellen’s speech in Jackson Hole in August (Yellen, 2016). Return to 24
  25. See the minutes from the July, September, and October 2012 FOMC meetings (www.federalreserve.gov/monetarypolicy/fomccalendars.htm#11655). Return to 25
  26. Hetzel (2016) provides one proposal for how this might be implemented. Return to 26
References
  • Aoki, Kosuke, “On the Optimal Monetary Policy Response to Noisy Indicators,” Journal of Monetary Economics 50, April 2003, pp. 501-523.
  • Bernanke, Ben S., “Monetary Policy under Uncertainty,” remarks at the 32nd Annual Economic Policy Conference, Federal Reserve Bank of St. Louis, St. Louis, MO, October 19, 2007.
  • Brainard, William C., “Uncertainty and the Effectiveness of Policy,” American Economic Review: Papers and Proceedings 57, May 1967, pp. 411-425.
  • Brunner, Karl, and Allan H. Meltzer, “The Meaning of Monetary Indicators,” in G. Horwich, ed. Monetary Process and Policy: A Symposium, Homewood, Illinois: Richard D. Irwin, Inc., 1967.
  • Croushore, Dean, “Frontiers of Real-Time Data Analysis,” Journal of Economic Literature 49, 2011, pp. 72-100.
  • Dennis, Richard, “Uncertainty and Monetary Policy,” Economic Letter, Federal Reserve Bank of San Francisco, Number 2005-33, November 30, 2005.
  • Giannoni, Marc P., “Does Model Uncertainty Justify Caution? Robust Optimal Monetary Policy in a Forward-Looking Model,” Macroeconomic Dynamics 6, 2002, pp. 111-141.
  • Giannoni, Marc P., “Robust Optimal Monetary Policy in a Forward-Looking Model with Parameter and Shock Uncertainty,” Journal of Applied Econometrics 22, 2007, pp. 179-213.
  • Greenspan, Alan, “Risk and Uncertainty in Monetary Policy,” remarks at the Meetings of the American Economic Association, San Diego, CA, January 3, 2004 (and published in the American Economic Review: Papers and Proceedings 94, May 2004, pp. 33-40).
  • Hansen, Lars Peter, and Thomas J. Sargent, “Wanting Robustness in Macroeconomics,” Chapter 20 in Benjamin J. Friedman and Michael Woodford, eds., Handbook of Monetary Economics 3B, Amsterdam, Netherlands: Elsevier-North-Holland, 2011, pp. 1097-1157.
  • Hansen, Lars Peter, and Thomas J. Sargent, Robustness, Princeton, NJ: Princeton University Press, 2007.
  • Hansen, Lars Peter, and Thomas J. Sargent, “Acknowledging Misspecification in Macroeconomic Theory,” Monetary and Economic Studies (Special Edition), Bank of Japan’s Institute for Monetary and Economic Studies, February 2001, pp. 213-225.
  • Hetzel, Robert L., “A Proposal to Clarify the Objectives and Strategy of Monetary Policy,” Federal Reserve Bank of Richmond Working Paper No. 16-11, September 12, 2016.
  • Manski, Charles F., “Policy Analysis with Incredible Certitude,” The Economic Journal 121, August 2011, pp. F261-F288.
  • Manski, Charles F., “Communicating Uncertainty in Official Economic Statistics: An Appraisal Fifty Years after Morgenstern,” Journal of Economic Literature 53, 2015, pp. 631-653.
  • Mester, Loretta J., “Forward Guidance and Communications in U.S. Monetary Policy,” remarks at the Imperial Business Insights Series, Imperial College, London, UK, November 20, 2014.
  • Mester, Loretta J., “The Outlook for the Economy and Monetary Policy: Low-Frequency Policymaking in a High-Frequency World,” remarks at the New York Association for Business Economics, New York, NY, April 1, 2016.
  • Morgenstern, Oskar, On the Accuracy of Economic Observations, Princeton, NJ: Princeton University Press, 1950; second edition, 1963.
  • Orphanides, Athanasios, “Inflation Dynamics: Lessons From Past Debates for Current Policy,” remarks at the Federal Reserve Bank of Kansas City Economic Policy Symposium, Jackson Hole, WY, August 29, 2015.
  • Orphanides, Athanasios, and Simon Van Norden, “The Reliability of Inflation Forecasts Based on Output Gap Estimates in Real Time,” Journal of Money, Credit, and Banking, 37, 2005, pp. 583-601.
  • Orphanides, Athanasios, and John C. Williams, “Robust Monetary Policy with Imperfect Knowledge,” Journal of Monetary Economics 54, 2007, pp. 1406-1435.
  • Orphanides, Athanasios, and John C. Williams, “Robust Monetary Policy Rules with Unknown Natural Rates,” Brookings Papers on Economic Activity 2, 2002, pp. 63-145.
  • Sargent, Thomas J., “Discussion of ‘Policy Rules for Open Economies,’ by Laurence Ball,” remarks at the NBER Conference on Monetary Policy Rules, Islamorada, FL, January 15-17, 1998.
  • Taylor, John B. “Monetary Policy During the Past 30 Years with Lessons for the Next 30 Years,” luncheon address at the Cato Institute’s 30th Annual Monetary Conference on Money, Markets and Government: The Next 30 Years, Washington, D.C., November 15, 2012 (published in Cato Journal 33, Fall 2013, pp. 333-334).
  • Waggoner, Daniel F., and Tao Zha, “Confronting Model Misspecification in Macroeconomics,” Journal of Econometrics 171, 2012, pp. 167-184.
  • Wieland, Volker, Tobias Cwik, Gernot J. Müller, Sebastian Schmidt, and Maik Wolters, “A New Comparative Approach to Macroeconomic Modeling and Policy Analysis,” Journal of Economic Behavior and Organization 83, 2012, pp. 523-541.
  • Woodford, Michael, “Methods of Policy Accommodation at the Interest-Rate Lower Bound,” in The Changing Policy Landscape, Federal Reserve Bank of Kansas City Economic Symposium, Jackson Hole, WY, 2012, pp. 185-288.
  • Yellen, Janet L., “The Federal Reserve’s Monetary Policy Toolkit: Past, Present, and Future,” remarks at “Designing Resilient Monetary Policy Frameworks for the Future,” a symposium sponsored by the Federal Reserve Bank of Kansas City, Jackson Hole, WY, August 26, 2016.