The Dry, Wonky, and Utterly Essential World of Financial Stability Analysis

In November 2007, the stock market was approaching all-time highs. The unemployment rate stood at a healthy 4.7 percent. And a variety of consumer sentiment measures showed Americans to be in generally decent spirits.

The Cleveland Fed’s early warning systemic risk model didn’t exist then. If it had, it would have given us a glimpse underneath the shiny facade: lurking financial stress that indicators commonly used at the time didn’t flag. We can’t know whether policymakers could have used the early warning to thwart the ensuing financial crisis, but we do know that it would at least have put them on notice.

Much as retailers use data and algorithms to predict what their customers will purchase, financial system supervisors today use similar tools to spot the next potential financial crisis. Other tools are designed to help policymakers know what to do when crisis conditions emerge.

On this topic, the Cleveland Fed joined with the newly created Office of Financial Research earlier this year to gather some of the world’s top financial stability modelers and data junkies at the 2013 Financial Stability Analysis Conference. And though it may have been “dry” and “wonky,” as the Wall Street Journal put it, the conference underlined both the importance of the work and how much of it remains to be done.

The science of financial stability analysis remains imperfect. The data is rife with holes and abnormalities. The models are largely untested in real-life situations. The financial system itself is so large and complex that supervisors are at a fundamental disadvantage. “We have only begun to fill in the gaps to assess and monitor threats to financial stability,” said Richard Berner, director of the Treasury Department’s Office of Financial Research, at the conference’s opening.

The old adage “you can’t manage what you can’t measure” applies to financial stability supervision. To keep the financial system safe, you need to first know what combinations of conditions are likely to make the system unsafe. Then, the job is to calculate how financial institutions can be nudged back into less-risky behaviors.

Today’s bank examiners have evolved into financial system supervisors, and they—or at least some of them—must be skilled in modeling techniques that capture the financial system’s full array of interconnected activities. They must be able to spot signs of systemic risk and then know what to do, with enough lead time for an effective response.

In the aftermath of the 2008 financial crisis, financial stability analysis tools have come into vogue. They tend to fall into several categories, including stress indexes, early warning systems, asset price/real estate valuation models, and contagion risk models. All of them share the goal of trying to tell us what combinations of economic conditions could lead to systemic risk events—replicating thousands upon thousands of balance-sheet and income-statement permutations across financial institutions.

A sampling of the indicators used to monitor systemic risk (a short computational sketch follows the list):

  • The ratio of household debt to GDP
  • Capital adequacy
  • Bankruptcy proceedings
  • Real estate prices
  • Indicators of liquidity (such as liquid assets to total assets)
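
To make that list concrete, here is a minimal sketch, in Python and with invented aggregate figures rather than actual data, of how two of these indicators are typically computed.

```python
# Illustrative only: the dollar amounts below are made up for the example.
household_debt = 13.0e12   # total household debt outstanding, in dollars
nominal_gdp = 16.5e12      # nominal GDP, in dollars
liquid_assets = 2.1e12     # banking system's liquid assets, in dollars
total_assets = 14.0e12     # banking system's total assets, in dollars

debt_to_gdp = household_debt / nominal_gdp        # rises as households lever up
liquidity_ratio = liquid_assets / total_assets    # falls as balance sheets grow less liquid

print(f"Household debt to GDP: {debt_to_gdp:.2f}")
print(f"Liquid assets to total assets: {liquidity_ratio:.2f}")
```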

If that strikes you as awfully complicated, it’s because it is. Today’s financial stability analysis tools must take into account the interconnectedness of the financial system and associated institutional factors. For example, the extent to which the default of a large bank is going to affect other players depends on the resolution regime in place in any given country. Large and medium-sized financial institutions conduct business with one another in ways that are complex and hard to track. Some models use algorithms capable of pinpointing signs of overheating in different nodes in the network.

The Cleveland approach

Let’s take a closer look at one class of tools—early warning models. Early warning models are growing in number, and each has its own methodology and favored data sources. Some focus on measures of credit, others on liquidity. The trick is figuring out which factor or arrangement of factors is most likely to trigger a crisis, because the policy response must fit the problem or risk creating an even bigger one.

The Cleveland Fed’s early warning model (dubbed Systemic Assessment of the Financial Environment, or SAFE) has a couple of unique features. For one, it combines confidential information gleaned from regular bank examinations with publicly available data on asset prices. Additionally, it looks for structural weaknesses in the system that might make it particularly vulnerable to shocks. In this context, weaknesses are certain macroeconomic variables—asset quality, for example—that have veered from their historical norms.

Another unique feature is how the model defines “stress.” Whereas stress is typically measured over a single timeframe, SAFE considers it in two time periods—short term and long term. That way, policymakers have the information to respond immediately to imbalances that are known to create problems relatively quickly. And they can look even further ahead at the potential implications of imbalances known to generate stress over a longer period (such as 18 months) and respond accordingly.
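
To illustrate the two-horizon idea only, and not SAFE’s actual specification, the sketch below flags “imbalances” in a single quarterly indicator two ways: a recent spike of the sort that can create problems quickly, and a sustained drift away from the series’ historical norm that builds stress over a longer period. The windows and threshold are invented for the example.

```python
import statistics

def imbalance_flags(series, short_window=2, long_window=4, z_threshold=1.5):
    """Flag short- and long-horizon imbalances in a quarterly indicator.

    An "imbalance" here is simply a reading far from the series' own
    historical norm; the windows and threshold are illustrative only.
    """
    mean = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0     # avoid dividing by zero
    z = [(x - mean) / sd for x in series]
    short_term = max(z[-short_window:]) > z_threshold             # recent spike
    long_term = statistics.mean(z[-long_window:]) > z_threshold   # sustained drift
    return {"short_term": short_term, "long_term": long_term}

# Hypothetical asset-quality indicator drifting above its norm:
# flags a short-horizon imbalance but not (yet) a long-horizon one.
print(imbalance_flags([1.0, 1.1, 0.9, 1.0, 2.4, 2.8]))
```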

Another way to approach it

The flip side of paying attention, as the Cleveland Fed does, to potential shocks that may trigger stress is paying attention to underlying vulnerabilities in the financial system. Shocks are called shocks for a reason—they’re surprising. Financial system vulnerabilities, on the other hand, are easier to spot.

Nellie Liang, director of the Office of Financial Stability Research at the Federal Reserve, explains the two approaches as the difference between assessing the likelihood of a shock and the consequences of a shock. “Less time is spent on debating whether or not there is an asset bubble, and more time is spent on the consequences of what would happen if it were a bubble and it were to burst,” Liang says.

Tools to use

Beyond anticipating or tracking systemic risk, there is a class of financial stability analysis tools designed expressly to suggest ways for supervisors to calm the landscape once risk is detected. These tools recommend corrective actions to pop bubbles that have emerged in pockets of financial excess.

These tools are called “macroprudential” because they apply to the entire financial system, not just individual firms. And they should be thought of as distinct from the blunt interest-rate adjustments of monetary policy. The actions they prescribe might be as simple as raising the floor on down payments as a means of cooling the housing market. Or they could suggest something a bit more complicated, such as taxing a financial firm’s short-term borrowing.
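
To see what the simpler of those prescriptions looks like in practice, here is a minimal sketch of a down-payment floor, which is the same thing as a loan-to-value cap. The 20 percent figure is purely illustrative.

```python
def meets_downpayment_floor(price: float, loan: float, floor: float = 0.20) -> bool:
    """Return True if the buyer's down payment meets the macroprudential floor.

    A 20 percent floor is equivalent to an 80 percent loan-to-value cap;
    the number is illustrative, not a policy recommendation.
    """
    downpayment = price - loan
    return downpayment / price >= floor

# A $300,000 house with a $255,000 loan carries a 15 percent down payment,
# so it would be blocked under a 20 percent floor.
print(meets_downpayment_floor(300_000, 255_000))   # False
```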

A pair of Purdue University researchers, for example, has studied how risk propagates within the financial system, in order to determine the best strategies for controlling it. They suggest providing certain kinds of loans to different institutions as a means of maximizing the welfare of the entire network.

Another option: A firm exceeding certain numerical thresholds might be forced to pay a tax on certain kinds of assets—a way for supervisors to change behaviors and incentivize firms to move into less-risky asset classes. Or perhaps capital positions would have to go up or down depending on a firm’s exposures.

The tradeoff is that while early intervention may well stave off a crisis (or not), it is almost certain to come with some costs to the financial system. First, there is the straightforward resource burden of complying with regulations. Then there is the harder question of which level of required compliance is most efficient. The difference between a 14 percent capital buffer and a 15 percent buffer may seem inconsequential, when in fact it could mean billions of dollars lost or gained.
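
The arithmetic behind that last point, assuming a hypothetical bank with $500 billion in risk-weighted assets:

```python
# Hypothetical balance sheet: $500 billion in risk-weighted assets.
risk_weighted_assets = 500e9

capital_at_14_percent = 0.14 * risk_weighted_assets   # $70 billion of capital
capital_at_15_percent = 0.15 * risk_weighted_assets   # $75 billion of capital

# One percentage point of buffer is $5 billion of capital for this one bank.
print(f"${capital_at_15_percent - capital_at_14_percent:,.0f}")
```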

For example, a pension fund may react differently to financial stress than an insurance firm, so policies aimed at curbing stress may have unintended consequences for certain institutions. A policy aimed at tamping down instability may catch one set of firms in the midst of contraction but another in the midst of expansion. That’s why it’s important to get financial stability analysis right—the potential impact is powerful and far-reaching.

“The Holy Grail for me would be some ability to explain this information with a better understanding of the behavior of economic agents,” says Mikhail Oet, an economist at the Cleveland Fed who helped design its early warning model. “That would go a long way toward improving our ability to make meaningful interpretations of what we observe and make thoughtful contributions to potential policies for mitigating some of the adverse conditions.”

This cost-benefit tradeoff is especially acute in emerging-market countries. The World Bank is particularly wary of financial stress models that rely on rigid measures of credit that underestimate the role of financial development. The amount of indebtedness a developing country is likely to have—and, truly, to need—as it grows is going to be higher than that of a more developed nation. Risk aversion should only go so far, in this case, in an attempt to balance financial stability and financial development. “Development has to happen,” says Martin Melecky, an economist with the World Bank. “So how do we prevent a financial crisis on the one hand but also enable enough risk-taking in the private sector to support development? Because not all risk-taking is bad, especially if it’s well managed and taken in pursuit of development opportunities.”

Moreover, one size most definitely does not fit all when it comes to financial stability tools. The International Monetary Fund, in a survey of tools, concluded that the best approach was to use multiple tools. This way, supervisors could cast a wider net to better differentiate among exposures. In addition, the IMF paper emphasized the importance of using tools that incorporate the impact of policy actions on market conditions and behaviors. Really good models will have to somehow incorporate how policy changes or new regulatory regimes will affect behaviors.

It must be said that macroprudential tools have their detractors. The University of Chicago economist John Cochrane, writing in the Wall Street Journal, called such tactics “active, discretionary micromanagement of the whole financial system…The US experienced a financial crisis just a few years ago. Doesn’t the country need the Fed to stop another one? Yes, but not this way,” Cochrane wrote.

The tools that supervisors use to monitor the financial system are only as good as the data that populate those tools. And the data, in many cases, is in less than perfect shape. Although much of it is out there in one form or another, most is proprietary and not always accessible to all parties in comparable forms.

“Financial data is really in a terrible state,” says Allan Mendelowitz, a Deloitte Consulting executive and former chief of the Federal Housing Finance Agency. Mendelowitz was among those whose advocacy led to the creation of the Office of Financial Research, part of whose mission is to bring order to the current chaos of financial data.

A mish-mash

Data that can’t be standardized can’t very well be aggregated. Data that can’t be aggregated is of little use to early warning models. That is why good data helps supervisors get more out of their models, both within their own agencies and across them.

Data that is clear and accessible carries the added virtue of helping private-sector players understand just how much risk is out there. If the risk becomes more clear-cut with better data, then market participants are more likely to discipline themselves or others for taking on too much risk. That was a problem in the run-up to the financial crisis; firms entered into bets that in reality were far riskier than the existing data had led them to believe.

Efforts are being made on two fronts—on one, improvements are being made to existing data, and on the other, a whole new set of data that captures granular transactions and positions is being produced.

Accessibility

The data needs to be not only high quality but also accessible to every financial market regulator and, as appropriate, to the public. The ideal would be regular reporting of granular transaction and position data, which would give supervisors a continuous view of both the stock and flow of financial market operations.

Among the initiatives of the Office of Financial Research is to establish a global “legal entity identifier” that would make it easier to track the parties to financial transactions instantly. It’s described as a unique alphanumeric code for each financial entity in the world. Somewhere down the road, it may even encompass individual financial instruments, not just the entities themselves. More immediately, efforts are underway to establish “mortgage identifiers” to keep tabs on these financial instruments as they get sliced and diced and scattered throughout the financial system.
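
The identifier the global system adopted, standardized as ISO 17442, is a 20-character alphanumeric code whose final two characters are check digits, verified with the same MOD 97-10 arithmetic used for IBANs. A minimal validator, offered as a sketch of the idea rather than a production implementation:

```python
import string

_ALPHANUM = set(string.ascii_uppercase + string.digits)

def is_valid_lei(lei: str) -> bool:
    """Validate a Legal Entity Identifier (ISO 17442): 20 alphanumeric
    characters, the last two of which are MOD 97-10 check digits."""
    lei = lei.strip().upper()
    if len(lei) != 20 or not set(lei) <= _ALPHANUM:
        return False
    # Map digits to themselves and letters A-Z to 10-35 (as in IBAN checks);
    # the resulting number must leave remainder 1 when divided by 97.
    as_digits = "".join(str(int(ch, 36)) for ch in lei)
    return int(as_digits) % 97 == 1
```

Any published LEI should pass this check, while a single mistyped character will almost always fail it, which is the point of carrying check digits in the first place.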

Had a legal entity identifier existed in 2008, for example, it would have been much more possible for supervisors and risk managers to quickly assess the extent of counterparty exposures to Lehman Brothers. Indeed, new troves of data are available from the shadow banking sector, which had been lightly overseen before the crisis.

The global nature of data standards is increasingly important. Though the central bank of, say, Canada, may have ample information about exposures between Canadian banks and some international banks, it may have much less information about exposures within the broader set of international banks. To have a comprehensive view of the potential for contagion and systemic risk, that kind of data is absolutely vital. One cannot even know how to begin modeling the possible exposures without a decent picture of the actual ones.

Beyond possibly making systemic risk analysis easier, standardized data might also somewhat relieve the burden of regulatory compliance for many firms. Streamlined reporting requirements could make regulatory reporting cheaper, improve transparency for investors, and provide new opportunities in the technology sector.

Too much already?

Some turn the problem on its head, arguing that there is already too much data. Perhaps “better data” is a worthy goal, but right now the data available is overwhelmingly vast. The challenge, as Harry Mamaysky of Citigroup puts it, is “how to look at a small enough subset of the data so that you can actually understand what you’re looking at but that still captures enough of the big picture.”

At best, data limitations make early warning models and financial stress indexes less accurate than they could be. At worst, the data that populates them poisons their results. If the data is standardized, accurate, and accessible, then it’s up to the modelers to figure out how to use it.

One lesson from the financial crisis was that each financial regulatory agency tends to be dominated by a single discipline. The SEC’s enforcement area uses mainly attorneys. Banking supervisory agencies naturally employ bank examiners. At the Federal Reserve, macroeconomists often prevail. To avoid blind spots in the future, all these viewpoints will need to be incorporated.

Consider the presenters at the 2013 Financial Stability Analysis Conference, who included economists, accountants, lawyers, mathematicians, cryptographers, computer scientists, bankers, and an array of central-bank regulators. They resembled the sort of crew that central casting would assemble for a crowd of disparate wonks, but the mix was deliberate.

Society can benefit from early warning models, but only if they work. And to make them work, institutions must fork over their private information. Many institutions will look for opportunities to take a free ride on others’ disclosed information while attempting to withhold their own. This kind of free riding could reduce the quality of the data being plugged into the models.

How might private data become public? Let’s say that in submitting information about their activities, 10 percent of banks provide information about a certain kind of transaction they’re involved in. To the other 90 percent of banks, this might be news—a transaction they may not have conducted, or one that they hadn’t realized had become so ubiquitous. For the 10 percent of banks that disclosed the information, business might suffer.

Enter cryptographers, epidemiologists

Cryptographer Adam Smith of Pennsylvania State University illustrates the importance of a multidisciplinary approach to sharing the right—and the right amount of—data. One problem with making data accessible is ensuring that it isn’t too accessible. Trade secrets should be kept secret. But efforts to aggregate data invariably encounter instances when confidential data could be released inadvertently.

“Cryptography and some related fields have been thinking a lot about data-sharing problems for many years,” Smith says. “They’ve developed some very powerful and counterintuitive tools. These tools have the potential to really change the ways regulators think of tradeoffs between transparency and confidentiality.”
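
One of those counterintuitive tools is differential privacy, an area Smith works in. It lets an aggregator publish statistics drawn from confidential filings while provably limiting what anyone can infer about a single filer. The sketch below shows the basic Laplace mechanism applied to a simple count; it is an illustration of the concept, not a regulatory proposal, and the numbers are made up.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise, the basic mechanism of
    differential privacy. Smaller epsilon means more noise, more privacy."""
    # Adding or removing one record changes a count by at most 1, so noise
    # drawn from Laplace(0, 1/epsilon) suffices. The difference of two
    # independent Exponential(rate=epsilon) draws has exactly that distribution.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Say 37 of 400 reporting firms hold a particular exposure. The published
# figure will be close to 37, but no single firm's presence can be pinned down.
print(dp_count(37, epsilon=0.5))
```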

Another discipline that has, perhaps surprisingly, been brought into the financial stability analysis mix is epidemiology, the branch of medicine that studies the causes, distribution, and control of disease in populations. Some researchers are applying the modeling techniques developed to understand how diseases are transmitted to predict whether certain shocks will cause wider crises.

A cough may be harmless, or it may spread into an epidemic. Similarly, the failure of a regional bank may be an isolated incident, or only the first domino to fall. Supervisors must be able to forecast with some reliability how much contagion the failure of one or more institutions will spread through the financial network.
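
A minimal sketch of that kind of forecast, using an invented three-bank network rather than real exposure data: when a bank fails, its creditors write off what they are owed, and any creditor whose accumulated losses exceed its capital fails in turn.

```python
def default_cascade(capital, exposures, initially_failed):
    """Propagate failures through an interbank network.

    capital:   {bank: loss-absorbing capital}
    exposures: {(creditor, debtor): amount the creditor is owed}
    Returns the set of banks that ultimately fail. A stylized cascade
    (claims are written off in full), not a calibrated contagion model.
    """
    failed = set(initially_failed)
    frontier = list(initially_failed)
    losses = {bank: 0.0 for bank in capital}
    while frontier:
        debtor = frontier.pop()
        for (creditor, d), amount in exposures.items():
            if d != debtor or creditor in failed:
                continue
            losses[creditor] += amount               # write off the claim
            if losses[creditor] >= capital[creditor]:
                failed.add(creditor)
                frontier.append(creditor)
    return failed

# Invented numbers: C's failure wipes out B's capital, and B's failure
# then topples A, so all three banks end up in the failed set.
capital = {"A": 5.0, "B": 3.0, "C": 2.0}
exposures = {("A", "B"): 6.0, ("B", "C"): 4.0}
print(default_cascade(capital, exposures, {"C"}))
```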

As monitoring improves, it becomes possible that the activities of financial institutions will change in ways that aren’t currently built into modeling assumptions. That is, a shift in the structure of the market will change the strategic considerations of market participants.

“None of these tools are ever going to be static, or can be static,” says Stephen Ong, vice president in the Supervision, Credit, and Statistics Department of the Cleveland Fed. “To the extent that the financial system continues to evolve and always will, our financial stability monitoring tools will also need to evolve.”

Moreover, this is an effort with no end. Financial innovation and risk-taking are more than a way of life—they are crucial drivers of economic growth. The next crisis may rear its head in ways that current models haven’t even considered possible. Even as disciplined reviews of methodological contributions are valuable, so is the imaginative, if subjective, work of considering possibilities that aren’t in the models right now.

Before the crisis, financial stability analysis was most likely to be practiced only episodically, if at all, in the official and private sectors. But the need for some benchmark measures of stress and some recommended policy responses has since grown acute. Today, stress indexes, early warning systems, and macroprudential policy tools are abundant, and the need is to accurately evaluate which tools are best in which situations, and how to improve them with better data.

“We are definitely making progress in this,” says the Cleveland Fed’s Joe Haubrich. “But in some sense, the big question is, are we going to have a quiet period like we had after 1933?

“We didn’t have banking crises for 75 years or so. Will we be able to keep things quiet for that long? I certainly hope so, but at this point it’s too early to tell.”

“I believe we would all agree that the recent financial crisis developed, in large part, due to both a lack of information transparency, and a lack of regulatory transparency. By ‘information transparency,’ I mean transparency of information about individual firms and the financial system. By ‘regulatory transparency,’ I mean transparency related to the actions of regulators. Regulators play an important role in promoting both information and regulatory transparency. Information transparency should reveal the risks in financial firms and markets, and regulatory transparency should communicate how supervisors will respond to situations that threaten the financial system.”

--Federal Reserve Bank of Cleveland President Sandra Pianalto, speaking at the May 2013 Financial Stability Analysis Conference

Meet the Author

Douglas W. Campbell | Executive Speechwriter

Doug Campbell is the executive speechwriter in the Public Affairs Department at the Federal Reserve Bank of Cleveland. He also leads the public-official outreach function and is founding editor of Forefront, the Cleveland Fed’s policy publication. He previously served as a policy advisor, economics writer, and editor.
