Bailout Nation
Let's go back to the end of the last recession: The nation had suffered through a wrenching three-year stock market crash (2000 to 2003). NASDAQ, where the hottest stocks had been listed, plummeted 78 percent from peak to trough. The losses were nearly identical to those of the 1929 crash and subsequent bear market. After the crash came the 2001 recession. Companies cut back their hiring and spending. Unusually for a recession, consumers barely paused (consumer spending accounts for nearly 70 percent of the U.S. economy). The official National Bureau of Economic Research (NBER) dates for the economic contraction were March to November 2001—when 9/11 and its economic aftershocks hit, the recession was actually near its end.
With the U.S. economy under the weather, the government prescribed the usual medicine: big tax cuts in 2001, lots of deficit spending, increased money supply, military spending for two wars, and significant interest rate cuts. This tried-and-true treatment is usually effective in jump-starting economic growth. Some theorists argue that, left alone, any economy subjected to a run-of-the-mill recession would eventually self-correct anyway, but that's a discussion best saved for another day.
Figure 8.1 Real Gross Domestic Product
SOURCE: U.S. Bureau of Economic Analysis
A funny thing happened on the way to the recovery: Nothing. Despite the massive stimulus, the economy failed to respond. Even after the Tax Relief Act of 2001, plenty of deficit spending in 2002, and lower rates (and even more tax cuts in 2003), the economy was barely limping along. Real gross domestic product (GDP) was nearly flat in Q4 of 2002 (see Figure 8.1). The possibility of a double-dip recession was real, and that was making the Federal Reserve very nervous.
In the aftermath of the 2001 recession, nonfarm payroll growth was anemic. With the exception of only one quarter (Q4 2006), real (i.e., after-inflation) wage growth was flat or negative. As of the third quarter of 2006, there were only 3.5 percent more jobs than there had been at the end of the recession. This compares very unfavorably with prior recoveries.
Consider the 1953-1954 period, which was followed by the worst of the nine prior recession recoveries since World War II. Yet even that recovery had job gains more than double those of the current cycle: Following the 1953-1954 recession, total employment gained 7.6 percent over the ensuing five years. Even more astounding, that relatively poor showing was actually held down by the recession of 1957-1958.
To put this into context, five years after each of the previous nine recessions, “there were an average of 11.9 percent more jobs in the economy than there had been at the end of the recession.”1
New job creation during the 2002-2007 recovery cycle was the worst on record since World War II. And income fared no better: Wage gains failed to keep up with inflation for most of this period, even as home prices and asset prices soared.
The Federal Open Market Committee (FOMC) had watched Japan get caught in a decade-long recession, compounded by a nasty case of deflation. After Japan's own real estate and stock bubbles burst in 1989, consumers there had become increasingly cautious. While the Japanese are culturally much more likely to save than Americans are, they had taken frugality to new extremes. And the less the Japanese spent, the more manufacturers and retailers slashed prices, hoping to lure them back into a spending mood. The longer consumers waited, the cheaper goods got. This vicious deflationary cycle, once started, is difficult to break.
On November 21, 2002, then Fed Governor Ben Bernanke gave a speech entitled “Deflation: Making Sure ‘It’ Doesn’t Happen Here.” Bernanke made reference to the government’s not-so-secret antideflation weapon:
The U.S. government has a technology, called a printing press (or, today, its electronic equivalent), that allows it to produce as many U.S. dollars as it wishes at essentially no cost. By increasing the number of U.S. dollars in circulation, or even by credibly threatening to do so, the U.S. government can also reduce the value of a dollar in terms of goods and services, which is equivalent to raising the prices in dollars of those goods and services. We conclude that, under a paper-money system, a determined government can always generate higher spending and hence positive inflation.2
That antideflation speech turned out to be quite prophetic: Bernanke eventually became Fed chair, and he put those printing presses to good use. Bond desks would nickname him "Helicopter Ben," thanks to that speech's reference to a metaphorical helicopter drop of money as a way to stave off deflation.
Figure 8.2 Federal Funds Rate, 2000-2006
SOURCE: Economagic
But that nickname was still off in the future. Circa 2001, the Federal Reserve was getting increasingly nervous. Under the leadership of then Chair Alan Greenspan, the Fed began the most significant rate-cutting cycle in its history (see Figure 8.2). From precrash highs of 6.5 percent, the Fed took rates all the way down to 1.75 percent. As discussed in Chapter 7, the Greenspan Fed kept the federal funds rate at or below 1.75 percent for 33 months (December 2001 to September 2004), at or below 1.25 percent for 21 months (November 2002 to August 2004), and, last, at 1 percent for over 12 months (June 2003 to June 2004). While the fed funds rate had been as low as 1 percent some 46 years earlier, it had never been allowed to stay that low for more than a year! This was simply unprecedented.
Money is often described as “cheap” or “expensive,” depending on how costly it is to borrow. This money wasn’t cheap—it was ultra-cheap. That fueled the housing fire, sending prices skyward.
As Figure 8.3 shows, this degree of stimulus—and for such an extended period—had never occurred before.
The global reaction was a boom in dollar-denominated assets. Consumers responded with a new round of cheap debt-funded spending. Residential real estate prices soared, and automobile sales spiked. Industrial metals reached all-time highs. And corporate profitability, as a percentage of GDP, reached never-before-seen heights. All thanks to the Fed’s easy money prescription.
The Maestro had apparently done it again. Greenspan turned a market collapse into a full-blown recovery. If the tech boom and crash were caused by low rates and easy money, then a hair of the dog that bit you was just the hangover cure for the economy.
Or so it seemed.
Figure 8.3 Federal Funds Rate, 1954-2006
SOURCE: Economagic
Beneath the surface, the economy was much less rosy than it appeared. Inflation was starting to heat up. Commodity prices exploded, and oil broke out to record highs. That old inflation standby, gold, reached levels not seen in decades. At the same time, salaries remained flat, and compensation as a percentage of GDP dropped to multidecade lows. This was very unusual for a post-recession recovery, and not something typically seen in a healthy economy.
Also anomalous was the housing boom’s outsized impact on jobs. According to a 2005 study by Asha Bangalore of Northern Trust Company, 43 percent of all new job creation between November 2001 and April 2005 was real estate related:
Residential investment outlays have made a sizable contribution to the growth of real GDP in the current business expansion and sales of new and existing homes have soared to set new records. The future of the housing market is tied to employment conditions in the economy. The sluggish performance of payroll employment is the primary reason for the FOMC to take a measured path toward bringing the federal funds rate to a neutral level. At the same time, the performance of the housing market has played a visible role in payroll growth. Employment in housing and related industries (sum of employment in the establishment survey under various categories related to housing industry) accounted for about 43.0% of the increase in private sector payrolls since the economic recovery began in November 2001.3
The housing boom was creating jobs for builders, contractors, real estate agents, mortgage brokers, and even employees at Home Depot and Lowe's. But the most significant impact on the economy came from home equity lines of credit (HELOCs) and cash-out mortgage refinancings. With wages stagnant, Americans turned to home equity withdrawals in order to maintain their standard of living.
This was one of the biggest and most unexpected elements of the debt-driven economic expansion. Outside of real estate, employment gains were modest and real wage gains flat. It was debt that drove the increase in consumer spending. Mortgage equity withdrawals (MEWs)—normally a small portion of consumer debt—exploded. Accelerating borrowing against their homes allowed consumers to keep on spending, even as their savings rate went negative for the first time since the 1930s.
Without this home equity-based consumption, the nation would have been in recession territory, with GDP growth flat to 1 percent. At least, that's according to an unofficial Fed study by none other than Alan Greenspan (see Figures 8.4 and 8.5).
After rates hit their lows in 2003, the impact of mortgage equity withdrawal was nothing short of breathtaking: MEW was responsible for more than 75 percent of GDP growth from 2003 to 2006.
It’s helpful to consider what MEW had been like in the past: For most of the 1990s, the net equity pulled out by homeowners—either through sales or through home equity refinancing—was fairly modest: about $25 billion per quarter, or about 1 percent of disposable personal income.
The impact of MEW began to accelerate once the Fed cut rates so spectacularly. By mid-2002, the quarterly average MEW was north of $100 billion, up nearly 400 percent since 1997 and greater than 4 percent of disposable income. By 2003, those quarterly numbers were $150 billion and 6 percent.
Then, things exploded: MEW hit a peak in 2004, as quarterly withdrawals were almost a quarter of a trillion dollars—over 10 percent of disposable personal income. To put that into context, that was a 1,000 percent gain in the 10 years since 1995.
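As a rough check on those ratios (the roughly $9 trillion figure for annual disposable personal income in the mid-2000s is an outside approximation, not a number from this chapter), the peak quarterly withdrawal works out to:

\[
\frac{\$250\ \text{billion per quarter}}{\$9{,}000\ \text{billion} \div 4\ \text{quarters}} \approx 11\%
\]

which squares with the figure of over 10 percent of disposable personal income cited above.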
In addition to the actual dollars extracted from housing, the psychological impact that feeling financially flush has on spending cannot be overstated. The wealth effect, as it is known, suggests that every $100 gain in a stock portfolio creates $4 in additional consumer spending. But this wealth effect is even more significant for homes. A recent study found that an increase in owned housing value of $100 will boost spending by $9—more than twice the impact of stock market wealth gains.4
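To make that comparison concrete, here is a back-of-the-envelope illustration using the two propensities just cited; the $50,000 wealth gain is a hypothetical figure, not one drawn from the study:

\[
\Delta C_{\text{stocks}} = 0.04 \times \$50{,}000 = \$2{,}000,
\qquad
\Delta C_{\text{housing}} = 0.09 \times \$50{,}000 = \$4{,}500
\]

The same paper gain produces more than twice the additional spending when it shows up in a home's value rather than in a brokerage account.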
Considering how widespread home ownership is in the United States, this is quite significant: About 68.5 percent of American families live in their own homes (the rate was as high as 70 percent recently). While ownership of stocks is widespread—studies put market participation at nearly 50 percent of all Americans—the typical family has a rather small percentage of its net worth in equities. Indeed, in most cases, stocks are a family's second- or third-largest asset. For the vast majority of Americans, their home is their largest asset.
Figure 8.4 Mortgage Equity Withdrawal, Net Extraction, and Percentage of Disposable Personal Income
SOURCE: Calculated Risk, www.calculatedriskblog.com
Figure 8.5 GDP Growth: With and Without Mortgage Equity Withdrawal
SOURCE: Calculated Risk, www.calculatedriskblog.com
Now factor that into family wealth: Increases in home prices provided 70 percent of the gains in household net worth since 2001.
The wealth effect of home price appreciation is much more widely distributed than that of stocks. This made generationally low interest rates the single largest factor in resuscitating the economy. Sure, tax cuts, deficit spending, increased money supply, war spending, and the like all played a role—but it was the ultralow rates and the mortgage equity withdrawal they allowed that dominated U.S. economic activity.
Even China’s explosive growth was indirectly related to FOMC actions. Chinese apparel, electronics, and durable goods manufacturers were prime beneficiaries of America’s debt-fueled spending binge. Beijing returned the favor, buying a trillion dollars’ worth of U.S. Treasuries. This helped to keep rates relatively low, even as the Fed shifted into tightening mode, raising rates from 1 percent to over 5 percent. This “conundrum,” as Fed Chief Greenspan called it, reinforced the virtuous real estate cycle, extending it even further.
The first rule of economics is that there is no free lunch, and the massive ultralow rate stimulus came at a price: Cheap money led to inflation, fueled Americans' worst consumption habits, and added a ton to consumers' total debt.
But it was not just Main Street that binged on easy money. On Wall Street, cheap money would become even more addictive. Leverage (borrowing capital to invest) fueled investment banks, while liquidity powered hedge funds. Private equity gorged on cheap cash and used it to go on a buying spree. How else can one explain the ridiculous purchase of Chrysler in the spring of 2007 by the private equity firm Cerberus, if not for the nearly free cost of capital used to make this bone-headed investment? It wasn't the only dumb acquisition of the time; plenty of other ill-advised mergers and acquisitions (M&A) were funded via easy money.
Corporate America also rushed to grab cheap cash; many companies increased their dividend issuance, and quite a few borrowed heavily to do so. Others used the money to make stock buybacks as markets rallied higher. Most of these share repurchases have since proven to be horrific investments.
In the age of Sarbanes-Oxley, earnings manipulation via accounting trickery was out. What was in was the simple form of financial engineering enabled by easy money: reducing share count via stock repurchases.
TrimTabs Investment Research estimated that $456 billion worth of stock repurchases—nearly half a trillion dollars!—took place in 2005. A study by David Rosenberg, Merrill Lynch's chief economist, discovered that in Q3 2006, nearly a third of per-share earnings gains were due to share repurchases.5
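The mechanics behind that finding are simple arithmetic. As an illustration with made-up numbers (these are not TrimTabs' or Rosenberg's figures), consider a company earning $1 billion on 1 billion shares outstanding that retires 5 percent of its stock:

\[
\text{EPS}_{\text{before}} = \frac{\$1{,}000\ \text{million}}{1{,}000\ \text{million shares}} = \$1.00,
\qquad
\text{EPS}_{\text{after}} = \frac{\$1{,}000\ \text{million}}{950\ \text{million shares}} \approx \$1.05
\]

Per-share earnings rise roughly 5 percent even though the underlying business has not grown at all, which is how buybacks flattered reported earnings growth across the market.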
With so much cheap money sloshing through the system, the new mantra seemed to be to borrow freely. There was no need to worry about the debt or leverage—the day of reckoning was far, far off in the future.
Or so it seemed.
INTERMEZZO
A Brief History of Finance and Credit
The great credit boom-and-bust cycle of the early twenty-first century was a typical bubble. It had its supporters and early detractors; there were the usual tortured rationales to explain what was unusual economic behavior; and there was a land rush to grab short-term profits despite increasingly obvious risks. As is often the case, it went on much longer than one would have reasonably expected.
One aspect of this credit boom, however, stands out as particularly unusual: the astonishing shift in the fundamental basis of credit transactions.
Throughout the history of human finance, the underlying premise of any lending, credit, or financing—indeed, all loans, mortgages, and debt instruments—has always been the borrower’s ability to repay the loan. It is the most basic aspect of all finance.
This system of economic transactions goes as far back as when Og lent the guy in the next cave a dozen clamshells so he could purchase that newfangled wheel. If Og didn’t have a reasonable basis to expect his neighbor would be able to service that debt—Is he a good hunter? Is he trustworthy? Will he be able to repay those clamshells?—he never would’ve entered into what was the first commercial loan.
From 1 million B.C. up until the present day, the ability to repay the debt has always been the dominant factor—except, however, for a brief five-year period starting around 2002. During that short era, the fundamental basis of all credit transactions was turned on its head. It was no longer the borrower’s ability to repay the loan that was of paramount importance. Rather, the basis of lending money shifted to the lender’s ability to sell the debt for securitization purposes.
As the world soon discovered, this was enormously important, and it was the basis for what came afterward: credit bubble, housing boom and bust, derivatives explosion, economic chaos. It can all be traced back to that shift in how lending decisions were made.
Since the crisis began with real estate loans, let’s use the typical mortgage as an example of how these earth-shattering changes occurred.
The basis for making a mortgage loan to a potential home buyer had long rested on several simple factors: Banks looked at the home buyer's employment history and income, the size of the down payment, and the person's credit rating to determine the borrower's ability to service the debt. They also considered the loan-to-value ratio of the property, as well as other assets the borrower might own, to ensure that the loan was secured by the property.
Those factors went away during the early 2000s housing boom when the basis for mortgage lending was no longer the borrower’s ability to pay—it was the lender’s ability to securitize and repackage a mortgage.
This was a game changer. Any loan originator that could process the paperwork and quickly ship the loan off to Wall Street stood to make a lot of money from this process.
If we were to sum up the entire history of finance on a time line, this five-year paradigm shift would stand out as an unusual aberration relative to the prior million years.
It is the duty of the Federal Reserve to supervise credit and lending. We have since discovered that numerous people, including the late Fed Governor Ed Gramlich, tried to bring the problems with this lending to the attention of then Fed Chairman Alan Greenspan. You will be astonished to learn that the Federal Reserve of the United States did nothing about this shift. Indeed, Greenspan praised the change in lending standards as an important innovation.
I call this “nonfeasance”—failure to carry out an official charge or duty.
The so-called innovation turned out to be nothing of the sort. It was a deeply flawed lending process camouflaged, at first, by strongly rising home prices. Once prices peaked, the fault lines became clearly visible. Since late 2006, 306 major U.S. lending operations have imploded, and over two million U.S. homes have been lost to foreclosure, a number that is still rising.3