We study the use of a zero-mean first-difference model to forecast the level of a scalar time series that is stationary in levels. Let bias be the average value of a series of forecast errors. Then the bias of forecasts from a misspecified ARMA model for the first difference of the series will tend to be smaller in magnitude than the bias of forecasts from a correctly specified model for the level of the series. Formally, let P be the number of forecasts. Then the bias from the first-difference model has expectation zero and a variance that is O(1/P^2), while the variance of the bias from the levels model is generally O(1/P). With a driftless random walk as our first-difference model, we confirm this theoretical result with simulations and empirical work: random walk bias is generally one-tenth to one-half that of an appropriately specified model fit to levels.
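The O(1/P^2) versus O(1/P) contrast can be illustrated with a small Monte Carlo sketch. The setup below is purely illustrative (the AR(1) data-generating process, sample sizes, and parameter values are our own choices, not the paper's design): for a stationary AR(1) in levels, the one-step random-walk forecast errors telescope, so their average is (y_{T+P} - y_T)/P with variance of order 1/P^2, while the average error from the correctly specified levels model behaves like a mean of roughly uncorrelated errors, with variance of order 1/P.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, mu, rho, sigma):
    """Simulate a stationary AR(1) in levels: y_t = mu + rho*(y_{t-1} - mu) + e_t."""
    y = np.empty(n)
    y[0] = mu + rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))  # draw from stationary dist.
    for t in range(1, n):
        y[t] = mu + rho * (y[t - 1] - mu) + rng.normal(0.0, sigma)
    return y

def bias_comparison(R=300, T=100, P=40, mu=2.0, rho=0.7, sigma=1.0):
    """Mean absolute bias of one-step forecasts over R Monte Carlo replications.

    'Bias' here is the average forecast error within one replication, matching
    the abstract's definition; we then average its magnitude across replications.
    """
    rw_bias, ar_bias = [], []
    for _ in range(R):
        y = simulate_ar1(T + P, mu, rho, sigma)
        rw_err, ar_err = [], []
        for p in range(P):
            hist = y[:T + p]
            # Driftless random walk: forecast of the level = last observed level.
            rw_err.append(y[T + p] - hist[-1])
            # Correctly specified AR(1) in levels, re-estimated by OLS each period.
            slope, intercept = np.polyfit(hist[:-1], hist[1:], 1)
            ar_err.append(y[T + p] - (intercept + slope * hist[-1]))
        rw_bias.append(np.mean(rw_err))   # telescopes to (y_{T+P-1+1} - y_T)/P
        ar_bias.append(np.mean(ar_err))
    return np.mean(np.abs(rw_bias)), np.mean(np.abs(ar_bias))

rw, ar = bias_comparison()
print(f"mean |bias|: random walk {rw:.4f}, AR(1) in levels {ar:.4f}")
```

Under these illustrative parameters, the random-walk bias magnitude comes out well below that of the levels model, in line with the O(1/P^2) versus O(1/P) rates.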
We study long-run correlations between safe real interest rates in the United States and over 20 variables that have been hypothesized to influence real rates. The list of variables is motivated by the familiar intertemporal IS equation, by models of aggregate savings and investment, and by reduced form studies. We use annual data, mostly from 1890 to 2016. We find that safe real interest rates are correlated as expected with demographic measures. For example, the long-run correlation with labor force hours growth is positive, which is consistent with overlapping generations models. For another example, the long-run correlation with the proportion of 40- to 64-year-olds in the population is negative. This is consistent with standard theory where middle-aged workers are high savers who drive down real interest rates. In contrast to standard theory, we do not find productivity to be positively correlated with real rates. Most other variables have a mixed relationship with the real rate, with long-run correlations that are statistically or economically large in some samples and by some measures but not in others.
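One simple way to make "long-run correlation" concrete is to correlate low-frequency components of two annual series, for instance non-overlapping multi-year averages. This is only an illustrative device, not the paper's estimator; the window length, the synthetic series, and their names are all our own assumptions.

```python
import numpy as np

def long_run_correlation(x, y, window=10):
    """Correlation of non-overlapping `window`-year averages of two annual series.

    Averaging within windows damps year-to-year noise and keeps the slow-moving
    component, so this correlation emphasizes low-frequency comovement.
    """
    n = (min(len(x), len(y)) // window) * window  # drop an incomplete final window
    xm = np.asarray(x[:n]).reshape(-1, window).mean(axis=1)
    ym = np.asarray(y[:n]).reshape(-1, window).mean(axis=1)
    return float(np.corrcoef(xm, ym)[0, 1])

# Synthetic example: two annual series (127 "years") sharing a slow-moving component.
rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(0.0, 0.2, 127))          # shared low-frequency driver
real_rate = trend + rng.normal(0.0, 1.0, 127)          # hypothetical real rate
hours_growth = trend + rng.normal(0.0, 1.0, 127)       # hypothetical covariate

lr = long_run_correlation(real_rate, hours_growth, window=10)
annual = float(np.corrcoef(real_rate, hours_growth)[0, 1])
print(f"long-run correlation: {lr:.3f}, annual correlation: {annual:.3f}")
```

With a strong shared slow component, the window-averaged correlation will typically exceed the year-by-year correlation, which is the sense in which two series can be tightly linked in the long run while noisy at annual frequency.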