The Map and the Territory
The 2008 financial collapse has provided reams of new data to identify the shape of the critical, heretofore unknown tails of investors’ “loss functions”; the challenge will be to use the new data to develop a more realistic assessment of the range and probabilities of financial outcomes, with an emphasis on those that pose the greatest dangers to the financial system and the economy. One can hope that in a future deep financial crisis—and there will surely be one—we will be more informed as to the way fat-tail markets work.
CREDIT-RATING AGENCIES FAIL
Another important source of the failure of risk management was the almost indecipherable complexity of the broad spectrum of new financial products and markets that developed as number-crunching and communication capabilities soared.10 Investment managers subcontracted an inordinately large part of their task to the “safe harbor” risk designations of the credit-rating agencies, especially Moody’s, Standard & Poor’s, and Fitch. Most investment officers believed no further judgment was required of them because they were effectively held harmless by the judgments of these government-sanctioned rating organizations. Especially problematic were the triple-A ratings bestowed by the credit-rating agencies on many securities that in fact proved highly toxic. Despite decades of experience, the analysts at the credit-rating agencies proved no more adept at anticipating the onset of crisis than the investment community at large, and their favorable ratings of many securities offered a false sense of security to a great many investors.
REGULATION FAILS
Even with the breakdown of our sophisticated risk management models and the failures of the credit-rating agencies, the financial system would likely have held together had the third bulwark against crisis—our regulatory system—functioned effectively. But it, too, failed for many of the same reasons that risk management and the credit-rating agencies failed: an underappreciation of the risks faced by the financial system and an increasing complexity that made effective oversight especially difficult. Along with the vast majority of market participants, regulators did not anticipate the onset of crisis. Not only did regulators in the United States fail, but abroad, the heavily praised U.K. Financial Services Authority was unable to anticipate and prevent the bank run that threatened one of that country’s largest commercial banks, Northern Rock, the first such run in Britain in a century. Moreover, the Basel Committee on Banking Supervision, representing regulatory authorities from the world’s major financial systems, promulgated a set of capital rules (Basel II) that did not foresee the rapidly rising capital needs of the institutions under their purview.
It was not a lack of regulatory depth that was at fault. U.S. commercial and savings banks are extensively regulated; despite the fact that for years our ten to fifteen largest banking institutions have had permanently assigned on-site examiners to oversee daily operations, many of these banks still were able to take on toxic assets that brought them to their knees. Bank regulators had always relied on the thought that “prompt corrective action” would be a key weapon to be wielded against default; weak institutions would be shut down well before they ran out of capital, thereby preventing losses to the FDIC’s reserves and ultimately to taxpayers. In the event, and contrary to every expectation of regulatory practice, the FDIC has had to charge off well upward of a half trillion dollars since the Lehman default.
THE SHORTFALL OF CAPITAL
One of my very first experiences as Federal Reserve chairman was at a staff meeting where I naively asked, “How do you determine the appropriate level of capital?” I was surprised at the lack of response. I soon realized that such fundamental issues are taken as a given and rarely addressed other than in the aftermath of a crisis. And through all of the years of my tenure at the Fed, bank capital had always seemed adequate to regulators. (See, for example, the 2006 FDIC statement quoted earlier in this chapter.) I have since regretted that we regulators never pursued the issue of capital adequacy in a timely manner.
No regulatory structures anywhere in the developed world required all of the major global financial institutions to maintain adequate capital buffers. And there can be little doubt that had capital levels of banks and other financial intermediaries worldwide been high enough to absorb all of the losses that surfaced after the Lehman default, no contagious defaults could have occurred and the crisis of 2008 would have been contained. In the normal course of banking, unexpected adverse economic events diminish a bank’s capital, but in almost all cases, the buffer (provision for loan losses plus capital) remains adequate to fend off default. And with time, the flow of undistributed earnings and newly raised equity replenishes the depleted bank capital.
However, as 2008 starkly demonstrated, not all such events end so benignly. On rare occasions a bank’s capital buffer is breached, or wiped out entirely, setting off an avalanche of serial defaults in which the suspension of payments by one firm throws its often highly leveraged financial counterparties into default. Those cascading defaults lead cumulatively to a full-blown crisis. Default contagion has many of the same characteristics as a snow avalanche, in which a small breach in the snow cover progressively builds until the surface tension breaks and a whole hillside of snow collapses.
For the same reasons that it is difficult to determine when a small crack in the snow cover will trigger a full-blown avalanche, it has proved difficult to judge in advance what will trigger a full-blown financial crisis, especially on the scale of September 2008.
DEBT MATTERS
Still, the question remains: Why did the bursting of the housing bubble set off an avalanche of financial failure when the deflation of the dot-com bubble in 2000 left so mild an imprint on the financial system and on the macroeconomy? To be sure, a recession followed the stock market bust, but the recession was one of the mildest on record and was relatively short-lived. Real GDP and employment in that downturn exhibited scarcely anything close to the savage contraction that followed the bursting of the housing bubble six years later. Reaching even further back, despite the (still) record one-day destruction of stock market wealth on October 19, 1987, there was virtually no mark left on overall economic activity.
Because the U.S. economy had so readily weathered the dot-com and 1987 bubbles, I had hopes at the outset of the 2008 crisis that the reaction to the housing bubble collapse would be similar. I did raise an early caution flag before a Federal Open Market Committee meeting in 2002 when I asserted that “our extraordinary housing boom . . . financed by very large increases in mortgage debt, cannot continue indefinitely.” It did—for four more years. And I thought its effect could be contained. It wasn’t.11
The critical reason for the much more severe outcome in the wake of the bursting of the housing bubble is that debt matters. In retrospect, and as I discuss in detail in Chapter 3, there can be little doubt that escalating defaults of securitized subprime mortgages were the trigger of the recent financial crisis. However, even after financial subprime problems arose in August 2007, there was little awareness of what was on the horizon.12 When defaults of the underlying collateral for mortgage pools (primarily of privately issued subprime and Alt-A mortgage-backed securities) became widespread in 2007, the capital buffers of many banks (commercial and shadow) were dangerously impaired.13 And as the demand in the United States for homeownership collapsed and home prices fell, widespread defaults of mortgage-backed securities saddled banks and other highly leveraged financial institutions with heavy losses, both in the United States and Europe.
In contrast, on the eve of the dot-com stock market crash of 2000, highly leveraged institutions held a relatively small share of equities, and an especially small share of technology stocks, the toxic asset of that bubble. Most stock was held by households (who were considerably less leveraged at that time than they became as the decade progressed) and pension funds. Their losses, while severe, were readily absorbed without contagious bankruptcies because the amount of debt held to fund equity investment was small. Accordingly, few lenders went into default and an avalanche was avoided. A similar scenario played out following the crash of 1987.
One can imagine how the crisis would have played out if the stocks that fell sharply in 2000 (or 1987) had been held by leveraged institutions in the proportions that mortgages and mortgage securities were held in 2008. The U.S. economy almost certainly would have experienced a far more destabilizing scenario than in fact occurred.
Alternatively, if mortgage-backed securities in 2008 had been held in unleveraged institutions—defined contribution pension funds (401ks) and mutual funds, for example—as had been the case for stocks in 2000, those institutions would still have suffered large losses, but bankruptcies, triggered by debt defaults, would have been far fewer.
Whether the toxic assets precipitating the bubble collapses of 1987, 2000, and 2008 were equities or mortgage-backed securities probably mattered little. It was the capital impairment on the balance sheets of financial institutions that provoked the crisis. Debt securities were the problem in 2008, but the same effect would have been experienced by the financial system had the dollar amount of losses incurred by highly leveraged financial institutions in the wake of the collapsing housing bubble been in equity investments rather than mortgage-backed securities.
Had Bear Stearns, the smallest of the investment banks, been allowed to fail, it might merely have advanced the crisis by six months. Alternatively, had the market absorbed the Bear failure without contagion, Lehman Brothers might have been put on notice, with ample time, to aggressively lower its high-risk profile. We will never know. But I assume that, seeing a successful Bear Stearns rescue, Lehman concluded that all investment banks larger than Bear would have been judged “too big to fail,” offering the prospect of a similar rescue to Lehman had it been necessary. That scenario conceivably dulled Lehman’s incentive to take (costly) precautionary actions to augment its capital.
IDENTIFYING TOXIC ASSETS
A related obstacle for forecasting and policy setting is that we seek to identify in advance which assets or markets could turn toxic and precipitate a crisis. It was not apparent in the early 2000s, as many commentators retroactively assume, that subprime securities were headed toward being the toxic asset that in 2007 they turned out to be. AAA-rated collateralized debt obligations based on subprime mortgages issued in 2005, for example, were bid effectively at par through mid-2007. They were still bid at over 90 percent of par just prior to the crisis. By March 2009, six months after the crisis erupted, they had fallen to 60 percent of par.14
Bankers, like all asset managers, try to avoid a heavy concentration of related assets in highly leveraged portfolios in order to avoid the risk that they will all turn sour simultaneously. Nonetheless, such a concentration of assets—securitized mortgages—did end up on the balance sheets of innumerable banks, both in the United States and abroad. At the time, presumably knowledgeable bankers judged the assets, at acquisition, sufficiently sound to leverage them. For most it was only in retrospect that they were able to differentiate good assets from bad. Securitization conveyed a false sense of financial well-being. Large bundles of seemingly diversified mortgages appeared a lot less risky than stand-alone mortgages. The problem was that if all those mortgages were vulnerable to the same macroshock (a decline in house prices), there was in the end more risk and less diversification than mortgage investors realized.
Regulators, in my experience, are no better qualified to make such judgments than the initiators of the investments. This is the reason I have long argued that regulators should let banks buy (within limits) whatever they choose, but impose large generic equity capital requirements as reserve against losses that will happen, but which cannot be identified in advance.15 As I demonstrate in Chapter 5, regulations whose effectiveness relies heavily on regulators’ forecasts of the future credit quality of the portfolios they regulate have almost always proved ineffective.
BEWARE OF POLICY SUCCESS
All speculative bubbles have a roughly similar trajectory and time frame over which the expansion leg of a bubble takes place.16 Bubbles often emerge from growing expectations of stable long-term productivity and output growth combined with stable prices.
The near quarter century from 1983 to 2007 was a period of very shallow recessions and seemingly extraordinary stability. But protracted economic stability is precisely the tinder that ignites bubbles. All that is necessary is that a modest proportion of market participants view the change as structural. A quarter century of stability is rationally intoxicating. Herd behavior then takes over to enhance the uptrend.
Central banks have increasingly been confronted by the prospect that their success in achieving stable prices has laid the groundwork for asset price bubbles. This issue has concerned me for years. I expressed my discomfort in a Federal Open Market Committee meeting in May 1995. “The disequilibrium that is implicit in this [current] forecast is an asset price bubble . . . I am not sure at this stage that we know how, or by what means, we ought to be responding to that, and whether we dare . . . I almost hope that the economy will be a little less tranquil, buoyant, and pleasant because the end result of that [has] not [been] terribly helpful.”17
How to deal with this prospect remains a challenge without a simple solution, at least to date. As copycat herd behavior converts “skeptical” investors to “believers,” stock prices, capital investment, and the economy are thought to have nowhere to go but up. With different assets and actors, the numerous bubbles of the last century have followed similar paths.
HISTORY REPEATS
Nonetheless, given the repetitiveness of history, I could never get beyond the general notion that as the years of only modestly interrupted economic expansion rolled on, we would eventually be assaulted by disabling financial crises. As I put it in 2000, “we do not, and probably cannot, know the precise nature of the next international financial crisis. That there will be one is as certain as the persistence of human financial indiscretion.”18 The evidence was compelling that these episodes, though only occurring once or twice in a century, were nonetheless too recurrent and eerily similar in nature to be wholly sui generis.
In the chapters ahead, I will delve more deeply into the causes of the current crisis and its aftermath, and evaluate the tools that we economists have created to peer into the future, parsing the major policy disagreements that have plagued the economics profession in recent years. Every policy initiative reflects both a forecast of the future and a paradigm of the way an economy works. The current debates are part of an ongoing evolution of economic forecasting.
REGRESSION PRIMER
REGRESSION ANALYSIS
Astronomers have the capacity to forecast when the sun will rise outside my bedroom window exactly six months from now. Economists have no such capabilities. We seek instead to infer what history tells us about our future by disaggregating the “causes” of our economic past and assuming they will prevail in the future. In short, we endeavor to learn what caused, for example, capital investments to behave as they did in the past, and where they will settle if those forces are replicated in our future. To assist in that daunting task, economists rely heavily on the discipline of regression analysis19—statistical techniques whose roots lie in probability analysis, a discipline well known to all who play games of chance.
The raw material of business forecasting is the extensive body of time series that trace, for example, retail sales, industrial production, and housing starts. We seek to understand the economic factors determining monthly single-family housing starts, for example, and hope to forecast them. As a result of conversations with home builders, I might initially choose home prices and household formation as plausible explanatory variables. We call the time series being analyzed the dependent variable, and those explaining it—home prices and household formation—the independent variables. Regression analysis, then, statistically seeks out how a change in each independent variable impacts housing starts. The cleverness of such a filtering process is that it infers the relative statistical weights—coefficients—that, when applied to both home prices and household formation, yield a “fitted” time series that most closely approximates the history of actual housing starts.
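A minimal sketch of such a fit in Python, using ordinary least squares; the monthly figures for home prices, household formation, and housing starts below are invented for illustration, not actual housing data:

```python
import numpy as np

# Hypothetical monthly observations (not actual housing data).
home_prices = np.array([100.0, 102.0, 105.0, 103.0, 108.0, 112.0])      # price index
household_formation = np.array([90.0, 95.0, 98.0, 96.0, 104.0, 110.0])  # thousands
housing_starts = np.array([120.0, 128.0, 135.0, 130.0, 146.0, 158.0])   # thousands (dependent variable)

# Design matrix: a constant (intercept) plus the two independent variables.
X = np.column_stack([np.ones_like(housing_starts), home_prices, household_formation])

# Least squares infers the statistical weights -- the coefficients -- that
# make the fitted series track actual housing starts as closely as possible.
coefficients, _, _, _ = np.linalg.lstsq(X, housing_starts, rcond=None)

fitted = X @ coefficients            # the "fitted" time series
residuals = housing_starts - fitted  # what the model fails to explain
```

With an intercept included, the residuals sum to (essentially) zero by construction; everything systematic that the two variables can capture ends up in the fitted series.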
With these data we can measure the fraction of the fluctuations (variance) of the dependent variable that is “explained” by the fluctuations of the independent variables in the model. That fraction is what we call the coefficient of determination (R2). The higher the R2, the closer the fitted time series is to the actual historical series. An R2 of 1.0 means the fitted series exactly reproduces the actual data and explains all of the variance in the dependent variable.
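That fraction can be computed directly from the fitted series; a short sketch, using contrived numbers for the dependent series:

```python
import numpy as np

# R2 is the share of the dependent variable's variance "explained" by the
# model: 1 minus (unexplained residual variation / total variation).
def r_squared(actual, fitted):
    residual_ss = np.sum((actual - fitted) ** 2)        # unexplained variation
    total_ss = np.sum((actual - np.mean(actual)) ** 2)  # total variation
    return 1.0 - residual_ss / total_ss

starts = np.array([120.0, 128.0, 135.0, 130.0, 146.0])

# A fitted series identical to the actual series explains everything.
assert r_squared(starts, starts) == 1.0

# A "model" that only ever predicts the historical mean explains nothing.
assert r_squared(starts, np.full_like(starts, starts.mean())) == 0.0
```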
But the reliability of the results rests on a number of mathematical conditions required of the regression variables. For example, the independent variables have to be completely uncorrelated with one another—that is, home price must not be correlated with household formation. In addition, the residuals of the regression, the difference between actual housing starts and their fitted (calculated) value in each period, cannot be “serially correlated”—that is, the residual in one period cannot influence the residual in the next.
In the real world, these conditions are almost never met. So statisticians have devised ways to measure and partially correct the extent to which the assumptions fall short. For instance, the Durbin-Watson statistic (D-W) measures the extent to which the sequential residuals are serially correlated. The D-W ranges between 0 and 4.0. A D-W of 2.0 indicates that the residuals are uncorrelated while a D-W of less than 2.0 indicates positive serial correlation, a bias that creates an overestimation of the statistical significance of the independent variables (see discussion of the t-statistic and statistical significance below).20 Serial correlation is a characteristic of virtually all economic time series because the previous quarter’s residual, in reality, does economically impact that of the current quarter. Converting the level of a time series to its absolute change will decrease serial correlation in a regression, but such a transformation will eliminate important information contained in the level form of the data. In my analyses, I prefer to live with serial correlation.
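The D-W statistic is simple to compute from the residuals. The two residual series below are contrived to show the extremes: a slowly drifting run signals positive serial correlation, while sign-flipping residuals signal negative serial correlation:

```python
import numpy as np

# D-W = (sum of squared successive residual differences) / (sum of squared
# residuals). It ranges from 0 to 4.0; 2.0 indicates no serial correlation.
def durbin_watson(residuals):
    diff = np.diff(residuals)
    return np.sum(diff ** 2) / np.sum(residuals ** 2)

# Residuals that drift slowly (each close to the last) push D-W toward 0,
# the positive serial correlation typical of economic time series ...
drifting = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5])

# ... while residuals that flip sign each period push D-W toward 4.0.
alternating = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])

print(durbin_watson(drifting))     # well below 2.0
print(durbin_watson(alternating))  # well above 2.0

# np.diff applied to the level series itself is the level-to-change
# transformation the text mentions: it reduces serial correlation but
# discards the information carried by the levels.
```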