Aftershock


by Robert B. Reich


  It is a problem few of us are acquainted with, but the richest human beings find it difficult to spend more than a fraction of their fortunes, notwithstanding an abundance of pricey temptations. Spending obscene amounts of money is a surprisingly demanding task; few people have the time, energy, or stamina it requires. The true advantage of a fortune lies less in its purchasing power than in its power to confer high social standing and attract the adoring and enthusiastic attention of other people who want some of it.

  Nor, it turns out, do most rich people have the appetite to spend all the money they accumulate. Being rich changes the very nature of desire. As we shall examine in more detail later, happiness diminishes rapidly after the first flush of acquisitive excitement. That second piece of pie never tastes quite as good as the first. Once we have had our fill of anything, additional portions aren’t as attractive to us. (In some cases they can even make us sick.) How much additional bliss can one obtain from owning a fourth home or a fifth sports car or from sitting down, for the hundredth time, to a dinner of $80-an-ounce Beluga caviar and Corton-Charlemagne wine? Paul Allen’s first 757 jet may have lifted his spirits as well as his body, but it’s doubtful he experiences the same rush from having two.

  One ethical argument for redistributing income from rich to poor comes from this psychological truth. The nineteenth-century founder of a branch of ethics called “utilitarianism,” Jeremy Bentham, thought the purpose of all law should be to produce the greatest possible happiness, counting each person’s happiness equally. Taking a thousand dollars from someone who’s rich and giving it to someone who’s poor might diminish the former’s happiness slightly, Bentham reasoned, but would almost certainly increase the happiness of the poor person far more. Taxing the wealthy to help the poor, as Bentham saw it, therefore increases the sum total of happiness.
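
  Bentham's logic can be made concrete with a stylized calculation. In the sketch below, the logarithmic utility function is an illustrative assumption, not anything Bentham specified; it simply encodes the idea that each additional dollar adds less happiness the more one already has:

```latex
% A stylized illustration, not Bentham's own formula: assume each person's
% happiness from wealth w is u(w) = ln(w), so each additional dollar
% matters less the more one already holds. Transfer $1,000 from a person
% holding $1,000,000 to a person holding $20,000:
\begin{align*}
\Delta u_{\text{rich}} &= \ln(999{,}000) - \ln(1{,}000{,}000) \approx -0.0010,\\
\Delta u_{\text{poor}} &= \ln(21{,}000) - \ln(20{,}000) \approx +0.0488,\\
\Delta u_{\text{total}} &\approx +0.0478 > 0.
\end{align*}
```

  Any utility function with this diminishing-returns property yields the same sign: the poor person's gain exceeds the rich person's loss, so the transfer raises the Benthamite total.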

  But Marriner Eccles and John Maynard Keynes saw a broader economic justification for organizing the economy in such a way that the rich did not accumulate a disproportionate share to begin with: the need to maintain enough total demand. Assume that Ken Lewis somehow managed to spend a quarter of his $100 million compensation in 2007. That would have left him with $75 million. While his $25 million of spending likely would have created lots of American jobs—construction workers who built new additions to his estates; restaurant and retail workers who catered to his appetites; doctors and hospital workers who attended to his health; financial consultants, accountants, and tax attorneys who managed his money; personal trainers, therapists, coaches, and masseurs who attended to his psychological stresses; technicians who repaired and upgraded his music systems, his personal communications systems, and his cars; people who cleaned his homes, laundered his clothing, and tended to his gardens; those who piloted his personal jets and drove his limousine—his $75 million of savings would have created far fewer. Even if invested rather than hoarded or circulated in a frenzy of speculation, the money would have moved at the speed of an electronic impulse wherever around the world it could get the highest return. (To be sure, many of the goods he bought likely would have been assembled abroad; but the largest portion of his direct spending would remain here, mostly for services.)

  Now suppose Ken Lewis’s $100 million had instead been divided among five hundred people, each of whom took home $200,000 that year. Assume that each spent $150,000—hardly difficult in and around New York City, or in other big cities—and saved $50,000. Total spending by those five hundred would have added up to $75 million, most of it supporting jobs in the United States. Take the logic a step further. Suppose Lewis’s $100 million had been paid instead to two thousand people, each of whom took home $50,000—just about what the typical American family earned in 2007. Each of those two thousand families would likely have spent all, or nearly all, of that money, the lion’s share of it on services. Most of that $100 million would have gone directly into the U.S. economy, sustaining jobs.
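
  The first-round arithmetic of these scenarios is easy to check. Here is a minimal sketch in Python, using the spending rates quoted above and assuming, as an approximation of my own, that the $50,000 families spend 98 percent of what they take home:

```python
# Illustrative first-round spending arithmetic for the three scenarios
# described above. The 25 and 75 percent spending rates come from the
# text; the near-total spending of the $50,000 families is approximated
# here as 98 percent (an assumption, not a figure from the text).

def total_spending(people: int, income_each: float, spend_rate: float) -> float:
    """First-round consumer spending from splitting $100 million this way."""
    return people * income_each * spend_rate

scenarios = [
    ("One executive at $100 million", 1, 100_000_000, 0.25),
    ("500 earners at $200,000", 500, 200_000, 0.75),
    ("2,000 families at $50,000", 2_000, 50_000, 0.98),
]

for label, people, income, rate in scenarios:
    print(f"{label}: ${total_spending(people, income, rate):,.0f} spent")

# Output:
# One executive at $100 million: $25,000,000 spent
# 500 earners at $200,000: $75,000,000 spent
# 2,000 families at $50,000: $98,000,000 spent
```

  The sketch counts only the first round of spending and ignores multipliers and where the savings flow, but that is precisely the passage's point: the same $100 million creates very different amounts of immediate demand depending on how it is divided.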

  Before the 2008 meltdown, about half of U.S. consumer spending was done by the highest-earning fifth of the population. Roughly 40 percent of total spending came from the top 10 percent. But that was hardly because richer Americans were spendthrifts; it was because the top 10 percent took home almost 50 percent of total income. Had the broad middle class taken home a larger portion, total spending would likely have been far greater—and the middle class would not have had to go so deeply into debt.

  This is not an argument for more personal consumption, per se. It is rather an argument for paying attention to total demand for all the goods and services a society might need and be capable of producing—including, one hopes, those that conserve energy and reduce carbon emissions. Greater consumption of education, public recreation, and the arts would also, presumably, make daily life more fulfilling and pleasant for more people without increasing material consumption at all.

  Many of America’s very rich have been exceedingly generous. Andrew Carnegie built libraries and opera houses. John D. Rockefeller and his sons established a famously important foundation. During the most recent swing of the pendulum, Bill and Melinda Gates set up another large foundation—and in June 2006, Warren Buffett pledged a huge share of his total wealth to support its activities. These are all commendable acts, but they are beside the point I am trying to make, which is that they do not, in and of themselves, generate more jobs and more economic growth than would be the case had a larger percentage of the nation’s people shared a bigger portion of the nation’s bounty. (From a moral perspective, the balance here is delicate, of course. Had Microsoft been broken up and its software made generally available at lower prices, for example, middle-class Americans would have had more money to spend on, say, flat-screen TVs, while Bill Gates’s correspondingly smaller fortune might have caused him to donate less to AIDS research and other Gates Foundation priorities.)

  The lure of great wealth undoubtedly inspires great entrepreneurial zeal, to the benefit of all. Businesses need to be able to attract the necessary talent. The question is what portion of total national income must go to the very top in order to provide adequate motivation. On the evidence of what occurred after 2007, for example, it seems fair to conclude that Richard Fuld’s $500 million compensation that year failed to provide the incentive needed for him to act in ways that benefited Lehman Brothers’ shareholders and customers, and it seems doubtful that a higher sum would have produced a much better result. Indeed, it seems unlikely that he would have performed any worse had he earned just $10 million, or even a paltry $2 million. The high-stakes lure of vast sums can spur great achievement, but as Keynes observed when considering the large disparities of income and wealth in Britain in the 1920s, “much lower stakes will serve the purpose equally well.”

  Eccles’s insight is no criticism of the rich. It points instead to a different organization of the economy and society, one that allows a broader sharing of the gains of economic growth. To this end it requires that policymakers focus on the real economy, not only the financial one.

  5

  Why Policymakers Obsess About the Financial Economy Instead of About the Real One

  September 26, 2008. “This sucker could go down,” President George W. Bush warns congressional leaders meeting with him in the White House, as he tries to wrest their agreement to a $700 billion bailout of Wall Street. A few weeks later, the most dogmatically conservative administration in recent American history—which had consistently and vociferously argued against giving anyone a helping hand for fear they’d become dependent on government—delivers the goods. “Without this rescue plan,” President Bush explains to the nation, “the costs to the American economy could be disastrous.” New Hampshire senator Judd Gregg, the leading Republican negotiator of the bailout bill, adds ominously, “If we do not do this, the trauma, the chaos, and the disruption to everyday Americans’ lives will be overwhelming, and that’s a price we can’t afford to risk paying.”

  In less than a year, Wall Street was back. The six largest remaining banks had grown larger; their executives and traders were as rich as or richer than before, their strategies of placing large bets with other people’s money no less bold than they were before the meltdown of September 2008. The possibility of new financial regulations emanating from Congress barely inhibited the Street’s exuberance. The Dow Jones Industrial Average had made up for some of its losses, and the financial recovery was proceeding nicely.

  But Senator Gregg notwithstanding, the everyday lives of large numbers of Americans continued to be subject to overwhelming trauma, chaos, and disruption.

  Economic policymakers commonly, fervently, and sincerely believe not only that Wall Street’s financial health is a precondition for a prosperous real economy but that when the former thrives, the latter will necessarily follow. Few fictions of modern economic life are more assiduously defended than the central importance of the Street to the well-being of the rest of us.

  Inhabitants of the real economy, including corporations and small business owners, do need to borrow money from the financial economy. But their overwhelming reliance on Wall Street is a relatively recent phenomenon. Back when middle-class Americans earned enough to be able to save more of their incomes, they borrowed from one another through intermediaries called local commercial banks and “savings and loans”—the sorts of institutions Marriner Eccles ran. Wall Street’s main function was to shepherd new issues of stock. But over time, rules were loosened. The Depression-era law separating investment from commercial banking was repealed in 1999 when the Street convinced Congress (and the Clinton administration) that it had outlived its usefulness, and today the Street’s major function is to make financial bets. Wall Street is a casino in which high-stakes wagers are placed within a limited number of betting houses that keep a percentage of the wins for themselves and fob off losses on others, including taxpayers.

  Many economic policymakers cannot see the real economy because their formative years have been spent on Wall Street and they share its myopic view of finance as the crucial center of the economy. Presidents routinely appoint Treasury secretaries from the Street who cannot help but double as the Street’s ambassador to the White House. It is easy to understand policymakers’ being seduced by the great flows of wealth created by the denizens of the Street, from whom they invariably seek advice. One of the basic assumptions of capitalism is that anyone paid huge sums of money must be worth it. Policymakers are not immune to this logic. Who among us is? Besides, the culture of high finance is attractive. It promises exquisite and unimaginable levels of comfort. The limousines and private jets that transport financiers, the hushed conference rooms, the luxurious accommodations, all add to the mystique. But this has as much relevance to the everyday economy in which most Americans work as does a masquerade ball.

  The costly bailout of the Street, accompanied by massive lending to banks by the Federal Reserve, was just the largest and most recent version of what has become the standard response of policymakers to financial tremors. Officials of the Treasury and the Federal Reserve instinctively throw money in the direction of whatever assets are threatened. They talk solemnly of the importance of “stabilizing” the system and “recapitalizing” it. Roughly translated, this means saving the assets—and the asses—of bankers. We were told in 1994 that Mexico’s “peso crisis” required financial rescue; in 1997, that East Asia’s crisis demanded capital infusions; in 1998, that Long-Term Capital Management had to be bailed out; that after the dot-com crash and the financial anxieties set off by Enron’s majestic plunge, capital markets needed additional coddling. Financial officials viewed all these rescues (Lehman Brothers’ death notwithstanding) as necessary and regrettable exceptions to the heroic assumption that rational, privately interested investors are never threatened by financial crises because they are smart enough to effectively evaluate all relevant information and properly weigh all risks beforehand.

  The biggest banks and a very large insurer that backed them up (AIG) were bailed out in 2008. But as the real economy on Main Streets steadily worsened, policymakers looked the other way. Officials preferred to view the meltdown as the consequence of excessively risky lending rather than the culmination of ever greater borrowing by millions of Americans who had no other way to maintain what they considered a decent living standard. To be sure, the financial economy had gone on a binge. The relative calm of preceding decades, conveniently punctuated by financial bailouts, had lured investors into taking ever greater risks, with the expectations of ever larger returns. But the locus of the problem was not in the financial economy; it was in the real economy.

  When policymakers viewed the debt load taken on by ordinary Americans, they saw it as a problem to be remedied; they did not examine carefully the circumstances that caused the borrowing. Officials assumed that Americans had splurged—saving too little and buying more than they could afford—as Secretary Geithner said. China, it was assumed, had saved too much and consumed too little, while we had done the reverse.

  To be sure, in the years leading up to the Great Recession, China accumulated a substantial amount of savings and lent a big portion to the United States, where it could earn a good return. Savings also flowed to the United States from Japan, Germany, and oil-producing nations. These inflows undoubtedly made it easier for Americans to afford the costs of borrowing. But to conclude from this that the long-term answer to the nation’s economic ills is for typical Americans to borrow less, save more, tighten their belts, and spend “within our means” entails a giant and questionable leap of economic logic.

  Had most Americans’ share of the economy’s gains kept pace, they could have afforded a lifestyle as good as, if not better than, the one they had before. They would have been able to save for rainy days, meeting their expenses even if they lost their jobs or their wages dipped. Consequently, they would not have felt the need to borrow as much. The problem was not that they had been living beyond their means but that their earnings had not kept up with what they could reasonably expect to afford as the economy grew.

  The problem, in short, was that the basic bargain had been broken.

  6

  The Great Prosperity: 1947–1975

  One of my earliest memories is the day my father brought home a TV—a large, square box with a tiny, round tube on the front, which, when switched on, would pick up shadowy shapes and voices from somewhere beyond. We weren’t rich by a long shot, but my father had returned from the war with enough money to rent a store and fill it with women’s cotton blouses and skirts. Factories were humming. Workers had paychecks, and the blouses and skirts sold. And with some of the profits my father bought our TV. We were the first family on the block to have one, and I remember neighbors crowding around it to watch Milton Berle in Texaco Star Theater. Within the decade, almost every family had its own TV.

  Call it the Great Prosperity—the three decades from about 1947 to 1975. During this era, America as a whole implemented the basic bargain. The nation provided its workers enough money to buy what they produced. Mass production and mass consumption proved perfect complements. Almost everyone who wanted a job could find one with good wages, or at least wages that were trending upward. During these three decades, everyone’s wages grew—not just the wages of those in the top 1 percent, or the top 10 percent.

  Go back to Figure 1 and that long valley between the peaks, and you’ll see that these years were a time of widely shared prosperity. The wages of lower-income Americans grew faster than those at or near the top. The pay of workers in the bottom fifth more than doubled over these years—growing at a faster pace than the pay of those in the top fifth. By the late 1940s, the nation was “more than halfway to perfect equality,” as the National Bureau of Economic Research dryly observed.

  Productivity also grew quickly during these years, defying the self-serving predictions of those who said wide inequality was necessary for rapid growth because top executives and innovators needed the incentive of outsized earnings. Labor productivity—average output per hour worked—doubled, as did median incomes. Expressed in 2007 dollars, the typical family’s annual income rose from about $25,000 to $55,000. The bargain was cinched.

  So how did we go from the Great Depression to three decades of Great Prosperity? And from there, to thirty years of stagnant incomes and widening inequality, culminating in the Great Recession? It was no accident.

  It is still possible to find people who believe that government policy did not end the Great Depression and undergird the Great Prosperity, just as it is possible to uncover people who do not believe in evolution. To be sure, the U.S. government refrained from doing what many of Europe’s social democratic countries did—directly redistribute income from the rich to the poor and middle class, and nationalize industries. Nonetheless, it actively created the conditions for the middle class to fully share in the nation’s prosperity. It did so by pushing the economy toward full employment, creating a more progressive income tax, enhancing the bargaining power of average workers, building up Social Security, providing workers with a strong safety net when they couldn’t work, and improving their productivity.

  Franklin D. Roosevelt never fully understood Keynesian economics, despite the efforts of Marriner Eccles and others to educate him, but FDR proved the success of Keynesianism. He proved it not so much by the relatively tempered government spending of the New Deal as by the astonishingly huge spending demanded by World War II. By the end of the war, when the national debt equaled almost 120 percent of the entire economy, most Americans who survived were better off than they had been before the war began, because they had been put to work. And although policymakers worried that the economy would thereafter slip back into depression or stagnation—“All alike expect and fear a post-war collapse,” wrote economist Alvin Hansen of Harvard University—the feared collapse never came. By then the middle class—its pockets bulging with pay accumulated during the war that wartime rationing had kept it from spending—had the means to buy, and its pent-up demand for houses, cars, appliances, and almost every bit of baby paraphernalia imaginable created new jobs. And as the economy grew, the debt shrank as a percentage of it. “We’re all Keynesians now,” Richard Nixon purportedly proclaimed in 1971.* By then even a conservative like Nixon had accepted government’s ability to keep people employed and to fill the breach when consumers and businesses did not spend enough.

 
