by Robert Litan
Politicians promoted home ownership and so did many economists, who argued with some justification that those who owned their homes were more likely to take care of them, benefiting surrounding neighborhoods and making them nicer, safer places to live. What the home-ownership advocates did not take fully into account—at least not until after the financial crisis and the crash in home prices—is that most Americans’ wealth consists of the equity in their homes, and when that equity is slashed or disappears, so do their sense of wellbeing and their willingness to spend, a pullback that surely dampened the pace of the subsequent recovery. Moreover, with as many as one-quarter of home owners under water after the crisis—that is, their home values were less than their mortgages—even households that were able to keep their homes out of foreclosure were reluctant to move to other areas of the country where jobs were more plentiful. When the economic wheel of fortune turned, the large numbers of people who felt locked into their homes added to the normal frictions in the labor market and slowed the pace of hiring and economic growth after the economy began to recover.20
This is not to say that the financial crisis proved that home ownership is bad, only that any full accounting of the social benefits and costs of owning a home must take into account the very real macroeconomic costs associated with high levels of home ownership. With the benefit of hindsight, those economists who joined the politicians (either privately or publicly) in supporting the great expansion of home ownership before the crisis were mistaken. Knowing what we know now, there is a strong case that a home ownership rate of roughly 65 percent, which prevailed before the explosive rise in real estate prices and cheap mortgage money of the 2000s, is much closer to the social optimum than the 69 percent at which the ownership rate peaked in 2004, before the proverbial roof on the housing market caved in several years later.
But not all economists supported the great expansion of home ownership, and I doubt whether there were many who missed the two main reasons for the crisis: excessive subprime mortgage lending and securitization and excessive financial institution leverage. Nonetheless, were economists or the economics profession guilty of errors of omission? Did enough economists fail to warn policy makers of the dangers to the economy that were brewing because mortgage lending standards were too loose and the financial sector was too highly leveraged? And if more economists had issued warnings, would it have made any difference?
Subprime Lending
Certainly more economists should have expressed concern about the bubble in the housing market and the contribution made by subprime lending. A few did, notably 2013 Nobel winner Robert Shiller of Yale, who warned of the housing price bubble well before it burst. In addition, the late Edward “Ned” Gramlich warned his colleagues on the Federal Reserve’s Board of Governors, policy makers, and the public about the unsustainable growth in subprime lending. Gramlich’s main villains were the lenders, many of whom Gramlich (and others) believed were misleading borrowers into taking out mortgages they were not financially equipped to handle.21 Gramlich urged the Fed to crack down on such lending, but he lost his battle with then Federal Reserve Chairman Alan Greenspan—part of Greenspan’s broader failure to recognize that free markets do not always regulate themselves, which he later admitted was an error.22
Clearly, Greenspan, as Fed chairman, was in a position to rein in subprime lending through the Fed’s supervisory control over the largest banks and their holding companies, many of which were lenders to the nonbank originators of most subprime loans. It is also likely that had there been an earlier, publicly stated consensus among economists about doing this, Greenspan and his Fed colleagues, as well as other federal bank regulators, would have felt more comfortable acting earlier to restrain subprime mortgage origination.
Regulators very likely still would have faced stiff political opposition in Congress, however, from advocates of more lending to low- and moderate-income households, especially minorities. Even if economists had been more united earlier in warning about the egregious ways in which mortgage lenders relaxed their underwriting—by reducing borrowers’ down payments, extending interest-only loans and loans with negative amortization (allowing borrowers to add to their mortgage balances rather than reduce them over time), and offering low initial teaser rates that adjusted to much higher rates after an initial period—it is possible that this would not have been enough to change the trajectory of subprime lending significantly. The same can be said had more economists known about and warned of the mistakes the credit-rating agencies were making by assigning excessively optimistic ratings to securities backed by subprime loans.
My bottom line on economists and subprime lending is that we all should have been there earlier; it might have made a difference, but unfortunately, probably not.
Excessive Leverage: An Introduction
On the leverage front, the story is longer and more complicated, but in a different way. All financial economists I know have long supported strong, well-enforced capital standards for banks in particular, because more capital limits leverage and provides a thicker cushion against losses. Since the financial crisis, however, there has been no clear consensus among economists (and policy makers) about how those standards should be defined, and how high they should be.
First, some background: For much of American history, except for rules setting minimum capital amounts for new banks, the bank regulatory agencies had no formal ongoing rules for bank capital—the sum of shareholders’ equity and the bank’s retained earnings—or what some economists call skin in the game. For decades before federal deposit insurance, banks maintained capital-to-asset ratios well above 10 percent, meaning that a bank could absorb losses of at least 10 percent of its assets before failing.
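To make the arithmetic concrete, here is a minimal sketch, using invented balance-sheet figures rather than data from any actual bank, of how a capital-to-asset ratio measures the losses a bank can absorb:

```python
# A minimal sketch of capital-to-asset arithmetic.
# All figures are hypothetical, invented purely for illustration.

assets = 100.0                    # total assets (loans and securities), at book value
deposits_and_other_debt = 90.0    # what the bank owes depositors and other creditors
capital = assets - deposits_and_other_debt   # shareholders' equity plus retained earnings

capital_ratio = capital / assets
print(f"capital-to-asset ratio: {capital_ratio:.0%}")        # 10%

# A 10 percent ratio means the bank can absorb losses of up to 10 percent of
# its assets before its capital is wiped out and it becomes insolvent.
loan_losses = 12.0
print(f"capital after losses: {capital - loan_losses:.0f}")  # -2, i.e., insolvent
```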
Capital-to-asset ratios are defined on a book value or historical cost basis, meaning that bank loans are counted at face value, less an amount projected for losses (typically only a few percentage points or less), and thus do not reflect the current market values of those loans (where a market exists, which often is not the case). Accordingly, even reported bank capital ratios at the 10 percent level or more did not prevent thousands of banks from failing when the Great Depression hit and real estate values plummeted. The huge drop in those prices, coupled with mass unemployment, meant that too many banks could not collect on their loans, whether business loans or home mortgages. In those days, mortgage maturities generally were no longer than five years, so borrowers had to refinance when the mortgages came due or pay the balances off, options that were unavailable to large numbers of borrowers during the Depression.
To make matters worse, after the 1929 stock market crash and the subsequent decline in economic activity, depositors feared that their banks were truly insolvent and ran to take their money back, a panic that hastened bank failures and caused President Roosevelt to declare a short-term bank holiday as one of his very first actions upon taking office in March 1933. The rash of bank failures during the Depression underscores the fact that it is not only banks’ capital ratios that matter for survival (whether those ratios are computed with loans recorded at face value less a sizeable reserve for future loan losses, or measured at some estimate of their market values), but also their liquidity, or the ability to pay depositors on demand. Of course, solvency and liquidity can be interrelated, as both the Depression and the financial crisis of 2008 demonstrated. If banks are forced to sell their loans or securities at fire-sale prices in order to raise cash to pay off depositors who suddenly want much of their money back, then banks that may be solvent in normal circumstances can become insolvent in a general crisis.
The banking panic in the Depression caused Roosevelt to embrace an idea he had rejected before and that has been a staple of financial policy ever since: federal deposit insurance. Coverage was initially established for accounts up to $2,500 and over time has been raised to $250,000. The presence of deposit insurance has had conflicting impacts on bank capital: On the one hand, it strengthens the case for regulatory minimum capital-to-asset ratios in order to protect the deposit insurance fund (which is financed by banks, though it has a line of credit from the U.S. Treasury); on the other hand, by reducing the risk of deposit runs, deposit insurance has allowed banks to operate with lower capital ratios than in the era before that insurance. For the latter reason, some critics of deposit insurance have charged that it leads to “moral hazard”—an economic term that has nothing to do with morals but everything to do with hazard, or the taking of extra risks in the knowledge that if things turn out badly, the bank will not bear the full loss because the deposit insurer will.
Of course, all insurance entails this problem. Some people, knowing they have insurance on homes or cars, may be less careful. Private insurers attempt to curtail risky behavior by adding deductibles to their insurance policies, so insured customers bear at least some loss in case of an insured event. Health insurers add co-insurance provisions to deductibles to address the moral hazard challenge.
In the case of banks, however, co-insurance or deductibles are likely to be self-defeating, since the purpose of the insurance in the first place is to prevent depositors from running; depositors might still run even if they had to bear the first loss of a fixed amount or a percentage of their account balances. A good example was the run on the Northern Rock bank in the United Kingdom during the most recent financial crisis, which shook public confidence in that country’s banking system at the time. Bank regulation and supervision, coupled with minimum capital-to-asset ratios—effectively a deductible borne by shareholders—are the policy tools that substitute for the deductibles and co-insurance typically found in other types of insurance.
Federal bank regulators began to give more formal guidance about minimum bank capital ratios in the late 1970s and early 1980s, but a number of financial economists were long uncomfortable with the informal way that regulators enforced their capital guidelines. This was especially true of the regulators overseeing savings and loan associations, which disappeared in massive numbers in the 1980s.23 Federal regulators looked the other way—in supervisory jargon, they engaged in “regulatory forbearance”—because the federal insurance fund for thrift depositors had far too little money to cover the losses that recognizing all the thrift insolvencies would have entailed. Accordingly, regulators waited until they could find buyers for the troubled firms or, in rare cases, shut them down. Banking regulators did the same thing during the 1980s with the nation’s largest banks, which had been left severely troubled, if not insolvent, by lending too much to governments of less developed countries (LDCs) that could not service their debts. Like the thrift insurance fund, which could not come close to covering the rash of small thrift insolvencies, the bank insurance fund could not have covered the potential losses if the large banks had failed.
Two economists watching these events unfold, the late George Benston of Emory University and George Kaufman of Loyola University of Chicago, laid out a system during the 1980s for bringing much greater rigor to enforcing minimum capital requirements so that policy makers need never again be forced to engage in forbearance. Dubbed “structured early intervention and resolution” or SEIR, the Benston/Kaufman proposal spelled out specific sanctions that regulators should apply as bank ratios of capital-to-assets declined: the suspension of dividends to shareholders, limits on managerial salaries, and ultimately the takeover by regulators of weak institutions before they technically became insolvent. Having clear and progressively stiffer sanctions ideally would encourage the owners and managers of banks to steer clear of the risks that could get them punished, while sanctions, if they had to be applied, would force troubled institutions either to shrink or raise new shareholder money so they could stay afloat and not become a burden on the deposit insurance fund.
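A rough sketch of how SEIR’s escalating sanctions were meant to work appears below; the thresholds and the specific sanctions attached to each are hypothetical, chosen only to illustrate the idea of pre-announced, progressively stiffer sanctions as capital erodes, not the figures regulators later adopted.

```python
# Stylized sketch of structured early intervention and resolution (SEIR).
# The thresholds and sanctions below are hypothetical illustrations only,
# not the actual rules regulators adopted.

def seir_sanctions(capital_ratio: float) -> list[str]:
    """Return the sanctions triggered at a given capital-to-asset ratio."""
    sanctions = []
    if capital_ratio < 0.08:
        sanctions.append("heightened supervisory scrutiny")
    if capital_ratio < 0.06:
        sanctions.append("suspend dividends to shareholders")
    if capital_ratio < 0.04:
        sanctions.append("limit managerial salaries; require new capital or shrinkage")
    if capital_ratio < 0.02:
        sanctions.append("regulatory takeover before technical insolvency")
    return sanctions

# As a bank's capital ratio declines, the sanctions accumulate.
for ratio in (0.09, 0.05, 0.015):
    print(f"{ratio:.1%}: {seir_sanctions(ratio) or ['no action required']}")
```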
Benston and Kaufman presented their proposal to a conference at the American Enterprise Institute shortly after Kaufman and Robert Eisenbeis (then at the University of North Carolina) formed the Shadow Financial Regulatory Committee (SFRC),24 a group of market-oriented experts in banking and finance (mainly but not always from academia) that was modeled on a similar shadow committee on monetary policy (the Shadow Open Market Committee) launched in the 1970s by economists Karl Brunner of the University of Rochester and Allan Meltzer of Carnegie Mellon University. Like its monetary policy counterpart, the SFRC meets regularly and issues statements, aimed at policy makers and the media, to both support and criticize legislative and regulatory policies that affect the financial sector. The U.S. SFRC now has counterparts in five other regions of the world, and the committees from the various regions attempt to meet every other year to discuss and issue statements on financial regulatory issues that are now common among countries in these regions.
The American SFRC had plenty to comment on: The 1980s had the largest number of bank and thrift failures since the Great Depression. Benston and Kaufman had an easy time persuading the full SFRC to endorse their idea (SEIR), which the committee did in several statements. Individual members of the SFRC also endorsed the idea in their own writings. [Full disclosure: I was asked to join the SFRC in the mid-1980s and was privileged to serve as a member of the group, with a few years off for government service, until 2012.]
Despite its free-market orientation, the SFRC and its members saw a proper role for government regulation of financial institutions, primarily to protect the deposit insurance fund and to offset the moral hazard created by deposit insurance (an idea the committee did not contest, though it never wanted that insurance to go too far; namely, to morph into protection of all the liabilities of banks or their holding companies, which is in fact what regulators did during the financial crisis of 2008).
Looking back, what is remarkable is how quickly policy makers implemented the idea of formalizing capital requirements and their enforcement for insured depositories. Congress took a first, small step in 1989 when it enacted a minimum capital-to-assets ratio of 3 percent for thrift institutions. Two years later, Congress instructed bank regulators to set a higher capital ratio for banks, backed by an enforcement regime that was almost identical to the concept Benston and Kaufman had originally suggested and the SFRC had consistently championed. In less than a decade, SEIR went from an academic idea to policy—a truly remarkable success story, and one I suspect is little known outside the small circle of people who developed, embraced, and implemented the idea.
The Pre-crisis Demise of SEIR
SEIR appeared to work for over a decade after it was implemented, well into the mid-2000s, when bank regulators missed the signs of trouble in the housing market and in the loans and securities that were supporting it. Bank failures were uncommon during this period, harking back to the postwar decades before the 1980s, when bank failures were similarly rare. But it wasn’t just that regulators suddenly forgot about SEIR in the mid to late 2000s; two other important developments played central roles in the effective demise of SEIR, which contributed to the subsequent financial crisis.
The first reason why the SEIR system eroded is that regulators essentially failed to stick to it. They did not compel banks to provide sufficient reserves against mounting mortgage losses, which would have lowered reported capital (under either standard leverage ratios or the risk-based measures, which I discuss shortly), and thus would have triggered SEIR enforcement measures much earlier in the 2000s. The principal error was the failure to recognize the potential threat to the solvency of some of the largest banks posed by their supposedly independent structured investment vehicles (SIVs), which the banks created to warehouse, theoretically only for a short time, mortgage-backed securities fully or partially composed of subprime loans.
Even after reading the multiple books on the crisis, I am still not clear when bank regulators became aware that SIVs existed, but outsiders, including academics, members of Congress, and the public, did not become aware of them until sometime in 2007, when it was too late. By then, the SIVs were unraveling, and their largest bank sponsors came to the Treasury asking for the government to bail them out, which Treasury was reluctant to do. When no bailout materialized, the banks took the SIVs back onto their balance sheets, stripping them of their illusory independence but thereby importing their financial troubles back to the bank sponsors. These decisions played major roles in weakening the banks’ own solvency.
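A stylized balance-sheet calculation, with invented numbers and deliberately simplified accounting, suggests why re-absorbing the SIVs weakened the sponsoring banks’ capital positions:

```python
# Hypothetical illustration of how re-absorbing a troubled SIV erodes a
# sponsoring bank's capital ratio. All numbers are invented and the
# accounting is deliberately simplified.

bank_assets = 1000.0
bank_capital = 80.0                    # an 8% capital-to-asset ratio

siv_liabilities = 100.0                # short-term paper the SIV must repay
siv_assets_marked_down = 70.0          # its subprime-backed securities, after losses

# Consolidation adds the SIV's written-down assets to the bank's balance sheet,
# and the shortfall against the SIV's liabilities comes out of the bank's capital.
new_assets = bank_assets + siv_assets_marked_down
new_capital = bank_capital - (siv_liabilities - siv_assets_marked_down)

print(f"before: {bank_capital / bank_assets:.1%}")   # 8.0%
print(f"after:  {new_capital / new_assets:.1%}")     # roughly 4.7%
```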
Economists were not to blame for the SIV end-run around bank capital standards or the demise of SEIR, designed to enforce them. They were as much in the dark about what was happening as the rest of the public.
The second reason for the failure of SEIR to prevent or limit the crisis is far more complicated, and so in advance, I ask readers to bear with me. It’s a story that has not been widely told and thus bears some explanation.
The short version, and I don’t know how else to say this, is that bank regulators got too cute, or too complicated, depending on which description one prefers. The initial actors were central bank regulators in the United States and the United Kingdom in the mid-1980s, who started from an obviously correct theoretical insight—not all bank assets pose equivalent risks of causing a bank’s failure—but had the arrogance to believe that they could redefine minimum capital standards in a way that would accurately account for these differential risks without causing unintended harm. Events would eventually prove that the dangers outweighed any potential benefits.