Theory Ahead of Rhetoric: Economic Policy for a "New Economy"
What accounts for the U.S. economic miracle? The end of the Cold War? The high-tech boom? Brilliant economic policy? Or simply luck, perhaps? Economists, historians, and political scientists will undoubtedly analyze the period and make their attributions. But this essay pursues a more modest objective. We suggest that the past decade is a telling reminder of how little economists actually know about managing the business cycle and, ironically, how much they know about promoting economic welfare.
Advances in economic theory support the growing skepticism over the efficacy and desirability of economic policies geared toward smoothing what has come to be known as "the business cycle." This claim goes beyond the familiar observation that fine-tuning the economy is difficult. Indeed, the most important theoretical developments of the past 20 years call into question the notion that substantial benefits are to be had from policies aimed at smoothing such economic fluctuations. And the costs of trying to do so may be great if stability comes at the price of higher, unpredictable inflation.
But the language of monetary policy is replete with concepts and empirical constructs inherited from an era when damping business-cycle fluctuations was the sine qua non of successful economic policy. The deep theoretical weaknesses of these ideas — embodied in notions such as "potential" output, "the" noninflationary rate of unemployment, growth "speed limits," and the like — have manifested themselves with a vengeance over the past decade, prompting casual observers to hail the so-called "New Economy." In fact, it's not that the economy is new, but that the policy lexicon is old. That is, the puzzling evolution of the current expansion is not a failure of economic theory, but of economic rhetoric.
Looking ahead, economic policymakers will face new and different obstacles to promoting the nation's welfare. To meet these challenges, they will need to look at the world through a different filter and adopt a new language that is consistent with that perspective.
According to every conventional measure, the U.S. economy is operating above its potential. And this, as we are told, is not a good thing. An economy that exceeds its potential is "overheated" — a situation that causes inflation to rise and, ultimately, the economy to slump. The Bureau of Labor Statistics tells us that more than 2,500,000 net new jobs were generated in 1999, and the rate of joblessness fell to a 30-year low. Bad news. The stock market is high, capital is flowing into the nation from around the world, and wages are rising. Bad, bad, and very bad.
At what point did economists begin to regard bad news as — well, bad news — and good news as just bad news waiting to happen? Where did this philosophy of pessimism come from? It is, we believe, a legacy of the 1930s, the Great Depression. True, only a fraction of our population can remember this unfortunate time in our economic history, but the economic philosophies and policy prescriptions born of that era remain with us today.
The trauma of the Depression left an indelible mark on macroeconomic policy. By the time Arthur Burns and Wesley Mitchell published their landmark study of business-cycle measurement in 1946, the intellectual tradition of postwar macroeconomics was well entrenched.1 Central to this tradition, which persisted for at least 30 years, is the notion that economic fluctuations are simply smaller versions of the dramatic boom-bust pattern of the 1920s and 1930s. According to this theory, such fluctuations are, by nature, economic defects, and the goal of economic policy is their elimination.
This was a significant deviation from the classical tradition articulated by Adam Smith in his 1776 work, An Inquiry into the Nature and Causes of the Wealth of Nations, which sought to develop a basic understanding of the sources of national prosperity and the institutional policies that would maximize the general welfare.
It is arguable that Congress, by creating the Federal Reserve System, expected its central bank to move beyond the laissez-faire mind-set of the classical tradition. And indeed, the laws defining the goals of the U.S. central bank have been reformed several times since 1913. But early in its history, the Federal Reserve's role was strictly to provide a financial infrastructure that would facilitate a national payments system. Its mission, described in the original Federal Reserve Act, was "to furnish an elastic currency, to afford the means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes."
This language clearly contemplates that the Reserve Banks should have the means and mission to address episodes of financial distress, and perhaps to provide some economic stability, at least as it concerns banking and other monetary crises. But this is a decidedly different kind of economic stabilization than what has come to be characterized as monetary policy today — the manipulation of national spending. In fact, prior to the 1920s, the dominant theory of monetary policy was the so-called "real-bills doctrine," which prescribed that growth in the money stock should passively accommodate the expansion and contraction of commercial activity. Modern stabilization policy was not envisioned, if only because the framework for thinking about such a mandate had not been developed.
Fear and Loathing on the Business-Cycle Trail
In the long run, we are all dead. Economists set themselves too easy, too useless a task if in the tempestuous seasons they can only tell us that when the storm is long past the ocean will be flat.
— John Maynard Keynes2
The Depression marked a turning point in the theories used to evaluate the successes and failures of economic policy. The business-cycle models that dominated the policy mind-set of the ensuing decades originated from one of the most influential books of the twentieth century, John Maynard Keynes' The General Theory of Employment, Interest and Money, published in 1936.
The problem, as Keynes saw it, was that an economy's equilibrium can be achieved at a less-than-optimal level of employment and production.3 Keynes turned classical economic theory on its head, arguing that, in the short run, fluctuations in a nation's spending are instrumental in determining its income. Keynes and his disciples proposed economic models in which low levels of spending (or high levels of saving) produced a drop in national output.
Keynes' view of the economy seemed to square with the ghastly economic performance of the 1930s. Perhaps even more important, the Keynesian model offered a solution that other theories could not provide so confidently. Keynes conjectured that nations could lessen the severity of economic downturns by prescribing government fiscal policies, like reduced taxation or increased government spending, that encouraged the expansion of demand. Stimulate demand, and production and income are sure to follow. Keynes' solution created a revolution among the era's young economists, who would play important and decisive roles in shaping national economic policies for the next 40 years.4
The logic of Keynes' framework would eventually be understood as applying equally to "overly" good times and to overly bad. If economic fluctuations result from imperfections in the operation of the economy, then smoothing all fluctuations would be a desirable policy goal. In other words, "appropriate" economic policy embodies the goal of eliminating deviations from the trend growth rate.
Aspiring to Be Average
Numerical estimates of a nation's economic potential began as simple trend lines drawn, after the fact, through the ups and downs of the aggregate data, as Paul Samuelson discusses in his classic college textbook on economics:
"If we draw a smooth trend line or curve, either by eye or by some statistical formula, through the growing components of NNP [net national product], we discover the business cycle in the twistings of the data above and below the trend line."5
In 1961, a novel technique for estimating the economy's potential was developed by Arthur M. Okun and later was given official sanction by the President's Council of Economic Advisers. Okun's procedure connected the "problem" of the business cycle to the "problem" of unemployment. He conjectured that "a 4 percent unemployment rate is a reasonable target under existing labor market conditions," and estimated that for every percentage point the unemployment rate rises above this optimal level, the economy (measured by real GNP) will fall 3 percent below its potential. Using this three-to-one rule, Okun argued that policymakers could translate the rate of joblessness into a measure of actual output in relation to national potential.6
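Okun's three-to-one rule is simple arithmetic: potential GNP equals actual GNP scaled up by 3 percent for each point of unemployment above the 4 percent benchmark. A minimal sketch in Python (the numbers are illustrative, not data):

```python
def potential_gnp(actual_gnp: float, unemployment_rate: float) -> float:
    """Okun's rule of thumb: P = A * [1 + 0.03 * (U - 4)],
    where U is the civilian unemployment rate in percent."""
    return actual_gnp * (1 + 0.03 * (unemployment_rate - 4.0))

# With unemployment at 7 percent (3 points above Okun's benchmark),
# measured output of 100 implies a potential of 109.
print(round(potential_gnp(100.0, 7.0), 2))  # 109.0

# At the 4 percent benchmark, actual and potential coincide.
print(round(potential_gnp(100.0, 4.0), 2))  # 100.0
```

Note that the 4 percent benchmark and the 0.03 coefficient are Okun's estimates for early-1960s labor-market conditions, not constants of nature, which is precisely why the rule proved fragile as a policy guide.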
At about the same time Okun was giving policymakers a target for national economic performance, a New Zealander named Alban W. Phillips was documenting a negative correlation between the rate of joblessness and another undesirable economic condition, inflation. In what is now known as the "Phillips curve," economists observed that underperforming economies tend to see inflation fall, while overperforming economies see inflation rise.7 Eventually, Okun's and Phillips' ideas would be connected and captured in what economists call the "nonaccelerating inflation rate of unemployment," or NAIRU. According to this labor-market indicator of potential GDP, sustained movements in measured unemployment below the NAIRU portend accelerating prices, while unemployment rates above the NAIRU precede disinflation.
The Costly Experiment
A major problem with the use of potential output and the NAIRU as the basis for economic policy emerged in the 1960s. In 1964, despite three years of strong growth (averaging about 4 percent annually after inflation), the Council of Economic Advisers estimated that the economy was operating well under its "potential." In its 1964 report, the Council claimed that "only a significant acceleration of expansion can enable the Nation to make full use of its growing labor force and productive potential."8 The report proposed a major tax reduction program that "would add $30 billion to total output and create 2 to 3 million extra jobs" and called for monetary policy to work in conjunction with the fiscal authority to stimulate demand conditions.
By 1966, the Council reported that "the economy [had] caught up with its potential" and heralded the closing of the gap as "a great achievement."9 But in subsequent reports, the Council noted that the economy had probably overshot its potential in mid-1965 and was operating above it during the latter half of the 1960s. That view was based not only on GNP statistics, but also on the unexpected acceleration of inflation during 1968-69.
During the first two years of the 1970s, attempts were made to curb inflation by restraining the demands that were presumably pushing the economy above its potential. But those steps proved less effective than hoped. In 1971, inflation was slightly above its level of two years earlier and the unemployment rate had nearly doubled. In August 1971, President Nixon took more drastic measures by imposing a 90-day freeze on wages and prices, followed by still other price-control measures that continued through the spring of 1974. In the end, the dismal economic performance of the 1970s — a succession of fits and starts leading to ever-higher unemployment and inflation — introduced the term "stagflation" into public discourse.
What went wrong? Economists now accept that the policy prescriptions suggested by the Phillips curve failed to account for the important role that expectations play in the observed inflation-unemployment trade-off. As inflation's trend escalated, people changed their behavior. The patterns in the data that economists had used to derive their trade-off theories — and that policymakers had relied on in responding to economic conditions — did not remain stable when inflation expectations changed. Specifically, the lower rates of joblessness that policymakers believed could be "bought" with higher inflation were not realized for long, as employees adjusted their wage demands upward to compensate for their rising cost of living.
It became clear — painfully so — that there is no fixed mapping between the rates of unemployment and inflation that is independent of the public's inflationary expectations. In the 1975 Economic Report of the President, the Council declared that "In the long run...there would not appear to be a mechanism linking the rate of unemployment to any one rate of stable wage or price increase."12 Although this statement seems, in isolation, to cast off the Okun's law–NAIRU–Phillips curve troika as a meaningful policy guide, that certainly wasn't the result. The passage laments not the Phillips-curve framework itself, but the inability to use it better.
This belief persists today. A growing number of economists are coming to the conclusion that the policy failures of the late 1960s and 1970s (and perhaps other episodes) can be attributed less to the inadequacy of the framework than to the inherent uncertainty of determining the economy's potential.13 To many, the undisputed improvement in monetary policy from the 1980s through the 1990s was the happy consequence of simply learning the economy's true potential.14 The promise for sustaining this improvement, then, was to be found in better statistical techniques and enhanced information collection.
There is another interpretation: potential output and the NAIRU cannot be made more useful concepts, even with better measurement or better econometrics. The policy successes of the past two decades have not been the result of more precise knowledge of the NAIRU or potential GDP, but of a more determined concentration on long-term goals and a deeper appreciation of the dynamic forces driving modern economies.
Losing the Forest Amid the Trees
An intriguing analogy to the postwar history of U.S. monetary policy can be found in the Forest Service's war against fires. It began with a simple enough question: How do we reduce the number of forest fires? Many solutions resulted, each with a measurable degree of success: educate the public about the harm caused by forest fires, put more resources into fighting them, and encourage the development of fire-retarding technologies. And fires were, in fact, reduced — initially.
Unfortunately, it turned out that reducing forest fires had the unexpected consequence of allowing underbrush to grow more dense, creating an unnatural change in the ecological balance of the forests. Fires are naturally occurring phenomena that serve to clean up the accumulated debris on the forest floor, thereby creating opportunities for wildlife and growth that would otherwise have been squeezed out by the heavy undergrowth.
Even more ironic, the excess buildup of debris increased the severity of fires when they did occur, so that the occasional fire was more catastrophic than the smaller fires the Forest Service had hoped to contain. In the end, the well-intended policy considered too narrow a model of the forest. Instead of asking how to prevent forest fires, the Forest Service should have asked, what is the function of fires in the forest ecology?
The lesson of this example is that it is easy to lose the forest amid the trees — in this case, literally. It is absolutely understandable that the dominant question to come out of the Depression would be, how do we avoid a catastrophic collapse of economic activity? Likewise, it was reasonable that the creation of the Federal Reserve System would be motivated by the question, how do we avoid a catastrophic collapse of the financial sector?
However, as we understand them today, these two questions are likely related. In an important and influential paper published in 1983, Ben Bernanke of Princeton University proposed that the systemic collapse of financial intermediation converted what might have been a significant, but otherwise unexceptional, downturn into the Great Depression.17 Embracing this view leads one to ask about reforming the institutional structure of financial institutions and markets, questions far removed from that of how to eliminate the business cycle as it has been understood since Keynes.
In fact, the post-Depression view that ups and downs in economic activity are, by and large, pathological sidestepped the real question: What is the role of business-cycle fluctuations in the macroeconomic ecology? It would be some 40 years before economists addressed this question in earnest, but attendant on its answer came a discernible shift toward the establishment of long-term goals for monetary policy.
Lessons in Long-run Policy Dynamics
If you ask us to name the three theoretical developments that have had the most significant influence on economic policy thinking in the past 30 years, we answer: rational expectations, time inconsistency, and "real" business cycles.
The first two would raise few eyebrows among academics. Rational expectations, brought to modern macroeconomics by Nobel laureate Robert E. Lucas, Jr., introduced forward-looking behavior into policy discussions in a formal and systematic way. This sounded the death knell for the Phillips curve as an exploitable tool of policy and spawned a rich, varied literature on the vital role of expectations in the dynamics of economic activity.
Related to rational expectations, time inconsistency predicted adverse consequences from economic policies that failed to commit to clear and consistent long-term objectives. This was an old but underappreciated principle applied to the formulation of economic policies: because expectations are forward-looking, policies that individually appear reasonable (if not optimal) in the short run are decidedly less than optimal when considered over time.
These two contributions emphasize the importance of rules, as opposed to discretion, in economic policy. But not any rule will do. The policy rule must commit to future actions today and the policymaker must be held accountable to them. In the case of monetary policy, the problem of time inconsistency implies that the monetary authority should emphasize transparent, credible policies regarding the future purchasing power of money.18 Without commitment, the rule on which inflation expectations are formed is not credible, since the public knows that at any point, the monetary authority will be tempted to renege on its long-run promise in the interest of short-run expedience.
Clearly these ideas have taken hold, and they provide much of the current intellectual underpinning of central banks' behavior all over the world — not least because they explain how policy had previously erred. In the United States, the economic stabilization policies of the 1960s and 1970s, which caused instability in the purchasing power of money, produced a reduction in the national welfare. Inflation, the nation learned, redistributes wealth capriciously. If the general price level unexpectedly rises because of an excess supply of money, people who made decisions based on the expectation of a stable purchasing power of money lose. Savers come to realize that they lent money at too small a return when they are paid back in dollars that have less purchasing power than before. And employees will regret that their dollar-denominated earnings did not anticipate the drop in the dollar's purchasing power. These are just two examples of the countless bad decisions caused by unexpected inflation.
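The saver's loss is easy to quantify. A small numerical sketch (the rates here are hypothetical, chosen only for illustration):

```python
# Hypothetical rates for illustration: a saver lends for one year at 5 percent,
# expecting stable prices, but inflation unexpectedly runs at 8 percent.
nominal_rate = 0.05
unexpected_inflation = 0.08

# Realized real return: the repaid dollars are deflated by the price-level rise.
real_return = (1 + nominal_rate) / (1 + unexpected_inflation) - 1
print(round(real_return, 4))  # -0.0278: the loan's purchasing power shrank
```

Had inflation been fully anticipated, the lender would have demanded a nominal rate high enough to preserve the real return; it is the unexpected component that redistributes wealth from lender to borrower.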
At this point, the importance of dynamics is revealed as a crucial shortcoming of the original Phillips-curve approach. Losers, it turns out, don't like to lose. Once people have experienced a loss caused by capricious changes in the purchasing power of their money, they take precautions to prevent future losses. That is, they alter their behavior and redirect their resources to protect against losses from future inflation, leaving the economy with fewer resources to devote to production.
These reallocations can take many forms: People may buy land or homes as an inflation hedge, or financial institutions may raise borrowing rates to compensate for the risk associated with the uncertain purchasing power of a dollar. Indeed, any decision with a dollar-denominated outcome will involve an added cost associated with uncertainty about the future purchasing power of money. In short, knowing that the purchasing power of a dollar is stable will lead to better allocation of resources than is possible in an environment that suffers from inflation.
The Real-Business-Cycle Approach to Economic Modeling
While the ideas of rational expectations and time inconsistency have had a profound impact on monetary policy over the past two decades, can the same be said of real-business-cycle theory? After all, here is a line of research originating in two articles — Kydland and Prescott (1982) and Long and Plosser (1983) — that pointedly omitted money altogether.19 That is, these models had no clear role for the monetary authority.
Real-business-cycle theory now refers generically to a class of models in which aggregate outcomes are the sum of the decisions made by individual firms and households operating in fully dynamic environments with explicitly modeled constraints, opportunities, market structures, and coordination mechanisms. These models incorporate money, taxes, and a variety of market frictions and imperfections.20
Despite a promising body of research incorporating the older Keynesian notions of market imperfections — sticky prices and such — the lessons of the original real-business-cycle models have survived. These models are still "real" in the sense that their economic fluctuations come from informed decisions of perfectly competitive, efficiently functioning households and businesses as they respond to changes in productivity. Real-business-cycle models can account for the economic patterns we actually observe — large fluctuations in output around a statistical trend. Furthermore, these fluctuations are quantitatively significant, suggesting that the bulk of typical business-cycle fluctuations might best be characterized as the economy's optimal response to random external forces that — fortunate or unfortunate — are not appropriate objects of policy response.
Indeed, the real-business-cycle framework leads to the conclusion that the concept of potential output is hollow. It is always possible to measure some average or trend level of output after the fact. But if one views the path of the economy, approximately and excepting extreme circumstances, as the dynamic unfolding of a sequence of optimal outcomes given the inherited structure of the economy, then actual and potential output become one and the same.
Further theoretical advances have subjected the NAIRU to the same fate as potential output. So-called "search-theoretic" models, of the kind pioneered by Mortensen and Pissarides (1994),21 generate variations in equilibrium unemployment analogous to output fluctuations in the real-business-cycle tradition, making the notion of NAIRU equally vacuous. As with potential output, it is always possible (after the fact) to correlate some level of unemployment with accelerating inflation. But without an explicit description of how economic policies can be used to alter the matching of workers and jobs in the labor market, that correlation is meaningless to economic policymakers.22
Aligning Rhetoric with Reality
A critical feature of the real-business-cycle framework and its offspring is the intentional and explicit connection to the theory of economic growth. The economist or policymaker viewing the world through the lens of dynamic general-equilibrium intuition is never far-removed from the long-run consequences of his or her reasoning. And this is the true legacy of the empirical failure of traditional postwar thinking and the attendant theoretical advances in macroeconomics from the early 1970s on: The breakdown of support for activist stabilization policies in favor of policies and institutional structures that tether the short-run behavior of policymakers to long-run economic welfare.
That monetary policy can wreak havoc on financial markets and can be a disruptive influence on the economy is unquestioned. This was a hard lesson learned. But whether a central bank can systematically and predictably "create" prosperity is another matter entirely.
This is not to say that monetary policy has no important role to play in the economy, but rather that "good policy" is not synonymous with accurate demand management. An effective policy is one that aims to promote long-run national growth, not one that manages movements around a statistical growth trend.
In the short run, it is important to strike a balance between the quantity of money demanded in the economy and the amount the central bank supplies. Such a balance keeps the purchasing power of money constant. If policy is backed by commitment, thus making it time consistent, the Federal Reserve promotes economic prosperity by reducing the risk associated with dollar-denominated decisions. In so doing, it helps to promote the creation of wealth. While Congress requires the Federal Reserve to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates, it does not specify how these objectives are to be accomplished. Over time many Federal Reserve officials have come to regard the attainment of price stability as the most effective means of achieving these legislated goals.
We contend that this perspective has been pervasive in U.S. monetary policy over the past decade. The resolutely forward-looking focus on potential price pressures reflects the increasingly popular view that maintaining a relatively stable and predictable purchasing power of money is the primary welfare-enhancing role of monetary policy. The increasing openness of Federal Reserve decisionmakers — reflected in announced policies aimed at more rapid and transparent dissemination of Federal Open Market Committee decisions — needs to be appreciated in light of the established importance of credibility in the policymaking process. And policymakers' growing unwillingness, absent discernible inflationary pressures, to respond aggressively to output and unemployment levels merely because they diverge from presumed estimates of potential and the NAIRU suggests the waning influence of these ideas on the establishment of economic policy.
If the principles guiding monetary policy have changed, why do some analysts still talk about "overheating," "growth above potential," unemployment rates that are "too low," and "wage pressures"? One explanation is that our assertion is wrong, and old-style stabilization policy is still the order of the day, at least for some policymakers.
Another explanation is that the rhetoric of monetary policy has failed to keep pace with theory and practice. Although policymakers may have conquered the fine-tuning impulse, they have yet to fully abandon the language that accompanies it. In a world where expectations matter, the language of policymakers can have consequences. As we confront the real challenges that financial innovation, rapid globalization, and the "new economy" will bring, these are complications we can ill afford. It is time to align rhetoric with reality.
1. Arthur F. Burns and Wesley C. Mitchell, Measuring Business Cycles (New York: National Bureau of Economic Research, 1946).
2. John Maynard Keynes, A Tract on Monetary Reform (London: Macmillan, 1924).
3. The term “equilibrium” in this context is somewhat different from its more familiar meaning of an economic outcome generated from competitive, efficient resource markets. Here, equilibrium is merely the short-run outcome in an economy that need not correspond with its long-run (steady-state) value.
4. See John Kenneth Galbraith, The Age of Uncertainty (Boston: Houghton Mifflin, 1977), pp. 216-26.
5. Paul A. Samuelson, Economics (New York: McGraw-Hill, 1951), p. 253. Net national product was a commonly used measure of national production, similar in spirit to the gross domestic product measure used today.
6. Okun’s formula for potential GNP is P = A[1 + 0.03(U – 4)], where P is potential GNP, A is actual GNP, and U is the percentage of civilian unemployment. See Arthur M. Okun, “Potential GNP: Its Measurement and Significance,” American Statistical Association, Proceedings of the Business and Economic Statistics Sections, Washington, D.C., 1962.
7. A paper published in 1926 by the famous Yale economist Irving Fisher is now credited with observing the link between unemployment and price growth for the U.S. economy. See “A Statistical Relation between Unemployment and Price Changes,” International Labor Review (June 1926), reprinted as “I Discovered the Phillips Curve,” Journal of Political Economy, vol. 81, no. 2, pt. 1 (March/April 1973), pp. 496-502.
8. Economic Report of the President, 1964, p. 37.
9. Economic Report of the President, 1967, pp. 44-45.
10. See Alban W. Phillips, “Mechanical Models in Economic Dynamics,” Economica (August 1950), p. 283. In another surprising connection to Phillips (see note 7), machines similar in spirit were proposed much earlier in Irving Fisher’s 1891 Ph.D. thesis. Fisher’s machines solved for equilibrium prices by monitoring rods and floats that fluctuated with the water levels flowing through a system of cisterns connected by rubber tubing.
11. C. Archibald Blyth, “Alban W. Phillips,” The New Palgrave: A Dictionary of Economics (London: The Macmillan Press Limited, 1992), p. 857.
12. Economic Report of the President, 1975, p. 94.
13. See, for example, J. Bradford DeLong, “America’s Peacetime Inflation: The 1970s,” in Christina D. Romer and David H. Romer, eds., Reducing Inflation: Motivation and Strategy (Chicago: University of Chicago Press, 1997); and Athanasios Orphanides, “Activist Stabilization Policy and Inflation: The Taylor Rule in the 1970s,” Federal Reserve Board, Finance and Economics Discussion Series no. 2000-13 (February 2000).
14. See, for example, Thomas J. Sargent, The Conquest of American Inflation (Princeton, N.J.: Princeton University Press, 1999).
15. The principle that bad money tends to drive away good is what economists call “Gresham’s Law.”
16. H. Michell, “The Edict of Diocletian: A Study of Price Fixing in the Roman Empire,” Canadian Journal of Economics and Political Science, vol. 13, no. 1 (February 1947), pp. 1-12.
17. Ben S. Bernanke, “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression,” American Economic Review, vol. 73, no. 3 (June 1983), pp. 257-76.
18. Although the idea of time inconsistency has a long history, Finn E. Kydland and Edward C. Prescott are usually credited with bringing the notion to prominence in modern discussions of economic policy. See “Rules Rather Than Discretion: The Inconsistency of Optimal Plans,” Journal of Political Economy, vol. 85, no. 3 (June 1977), pp. 473-91. The first specific application to monetary policy is generally attributed to Robert J. Barro and David B. Gordon, “Rules, Discretion and Reputation in a Model of Monetary Policy,” Journal of Monetary Economics, vol. 12, no. 1 (July 1983), pp. 101-21, and “A Positive Theory of Monetary Policy in a Natural Rate Model,” Journal of Political Economy, vol. 91, no. 4 (August 1983), pp. 589-610.
19. Finn E. Kydland and Edward C. Prescott, “Time to Build and Aggregate Fluctuations,” Econometrica, vol. 50, no. 6 (November 1982), pp. 1345-70, and John B. Long, Jr. and Charles I. Plosser, “Real Business Cycles,” Journal of Political Economy, vol. 91, no. 1 (February 1983), pp. 39-69.
20. For a spirited presentation of this point of view, see Randall Wright, “Search, Evolution, and Money,” Journal of Economic Dynamics and Control, vol. 19, no. 1/2 (January/February 1995), pp. 181-206.
21. Dale T. Mortensen and Christopher A. Pissarides, “Job Creation and Job Destruction in the Theory of Unemployment,” Review of Economic Studies, vol. 61, no. 3 (July 1994), pp. 397-415.
22. For a complete discussion of this issue, see Richard Rogerson, “Theory Ahead of Language in the Economics of Unemployment,” Journal of Economic Perspectives, vol. 11, no. 1 (Winter 1997), pp. 73-92. The similarity to the present article’s title is not coincidental.