Every time investors become enthusiastic about some new investment proposition, they assure us that this time will be different. During the dot-com bubble the surge of productivity enabled by information technology allowed us to believe that the historical moment was unique. During the housing boom, we were sure that high-tech financial engineering would protect us from financial risks, spreading them among investors who knew how to handle them. Each time the Pollyannas were wrong.
WHAT RATIONALITY?
Interestingly, there are economists—prominent ones—who believe bubbles don’t exist. Indeed, during the past four decades the prevailing view among many if not most economists was that prices could never be wrong. The insight that prices set in a free exchange between willing buyers and sellers can allocate resources to where they would be most profitably used somehow transformed into a blind belief in the infallibility of markets. According to this model of reality, processes that took prices way above their reasonable, true value, luring people into big mistakes, could not possibly exist.
The seeds of this ideology were sown in nineteenth-century Vienna before taking root in the middle of the twentieth century at the University of Chicago, perhaps the most influential school of economics of the last thirty years. It held that the free market was the only legitimate way to organize society because it started with individuals’ free will. Markets would organize the world impeccably by assigning relative values to goods, services, and individual courses of action. Humans being rational—meaning that they had a consistent set of preferences and beliefs about how their choices would improve their well-being—their decisions had to be the right ones. Government intervention, imposing the will of the state upon the people it ruled, was in this view necessarily inefficient and wrong.
To be sure, the so-called rational actor model has been enormously powerful in understanding people’s choices. Its simple core idea, that we set out to maximize our well-being, provides a convincing immediate explanation of people’s behavior. And it meshes with our understanding of the evolutionary processes shaping the development of species: if each decision we make leads to a set of probable outcomes with different odds of genetic survival, natural selection would shape preferences in such a way as to maximize biological fitness. But our faith in this theory went much too far. In the 1970s, the rational actor model was extended into the theory of “rational expectations.” This adapted the belief in humanity’s rationality to the fact that we cannot predict the future and thus must make decisions based only on what we expect the consequences of our actions will be, fitting the probable outcomes of our choices to our set of preferences. For instance, it posits that we plan our lives by coolly estimating our likely future income paths, adjusting our savings accordingly in order to smooth our consumption through our entire lifetimes—consuming less during our peak earning years in order to be able to consume more in retirement.
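To make that consumption-smoothing logic concrete, here is a minimal sketch in Python (my own illustration, with invented figures, not drawn from the book): a consumer with perfect foresight and, for simplicity, no interest earnings spreads her lifetime income evenly across every year, saving during her working years to pay for her retirement.

```python
# A minimal, hypothetical illustration of lifetime consumption smoothing
# (invented numbers, zero interest rate): the "rational" consumer spends
# the same amount every year, saving during her peak earning years to
# finance consumption in retirement.

income_path = [60_000] * 40 + [0] * 20   # 40 working years, then 20 retired

lifetime_income = sum(income_path)
years = len(income_path)
smoothed_consumption = lifetime_income / years   # 40,000 a year, every year

savings = 0.0
for income in income_path:
    savings += income - smoothed_consumption   # builds up, then is drawn down

print(f"Consume {smoothed_consumption:,.0f} per year; final savings: {savings:,.0f}")
```

Under these assumptions the nest egg is exhausted exactly at the end of life; the point of the exercise is only to show how much cool calculation the theory attributes to ordinary households.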
This was a perfect perch for a theory of perfect prices to latch on to. It posited that the price at which rational people would trade an asset, like orange juice futures, would reflect the available information affecting the asset, such as the weather and its impact on the orange crop. If a set of unusual expectations led a group of investors to push prices away from this rational path, the other investors in the market would make money by betting against them and bring prices back to reason. Economists called this the hypothesis of “efficient markets.” These views reached their zenith in the 1980s, after Ronald Reagan and Margaret Thatcher rose to power in the United States and Britain amid the economic stagnation and high inflation produced by the oil crisis of the 1970s. They were on a mission to reduce the role of government in the economy. And the Chicago economists served them with a body of theory.
TO THE EFFICIENT-MARKETS crowd, the financial zigzags that often look like crazy booms and busts are the natural outcomes of the actions of rational investors who face an uncertain future and have to constantly update their expectations in response to new information about the potential profitability of investments. In this land the dot-com bubble was only a bubble in hindsight. In 1999 it might have made sense to put all one’s money into the online grocer Webvan. It did go bankrupt two years later. But in 1999 one could believe it might evolve into the next Microsoft. As the economy reeled following the collapse of the housing bubble, the leading lights of Chicago stuck to their guns. “Economists are arrogant people. And because they can’t explain something, it becomes irrational,” said Eugene Fama, one of the leading economists of this school. “The word ‘bubble’ drives me nuts.”
Yet in the wake of the disaster sparked by the frenzied lurch of housing prices, the assumption of rational investors relentlessly driving prices to their true value looks either wrong or irrelevant.
A Cambridge don and Bloomsbury habitué, and a British representative to the peace talks at Versailles, where he argued that imposing tough reparation payments on Germany after World War I would impoverish Germans and lead them to extremism, John Maynard Keynes was also a savvy investor who made a lot of money in the market. His experience in finance informed his perception that most of the time investors don’t know what they are doing. Investment decisions, he thought, are the result of “animal spirits—of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.”
Robert Shiller, an economist at Yale, has proposed a model based on Keynes’s insight. In it, rationality takes a hike: a plausible new economic opportunity—say the Internet or new trade routes across the Atlantic—leads early investors to make a lot of money. This generates enthusiasm. The prices of the hot new asset—dot-com stocks, shares in shipping companies, whatever—are bid up as investors rush to partake of the profits. This leads to euphoria. Eventually the investments overrun the underlying logic. Investors see the price of stocks go up and assume they will continue to do so. They construct a narrative about how the new economic opportunity changes the conventional rules of the game, justifying stratospheric valuations. They borrow to double up on their investments.
Unfortunately, pessimism inevitably sets in when it turns out that the world was not really transformed by the new investment opportunity but operates in the same way it used to. Then the bubble bursts. Prices fall, begetting more pessimism and further price declines. Investors are forced to liquidate their depreciating portfolios to cover their debts, so asset prices fall further. It ends badly. As I watched estimates for my old Los Angeles condo soar above $900,000, two and a half times what I paid for it, I couldn’t help thinking that home buyers and the banks that financed them were insane. But they could all justify their strategies by pointing to what other investors were doing. And their justification made some sense at the time. It doesn’t make any sense now.
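A toy simulation can make Shiller’s feedback loop concrete. The sketch below is a stylized illustration of my own, not his formal model: investors extrapolate recent price gains into their expectations, which pushes prices far above an unchanged fundamental value; once the initial good news fades, the same extrapolation amplifies the decline, and prices overshoot on the way down.

```python
# A stylized sketch of a price-feedback bubble (an illustration only, not
# Shiller's formal model). Prices respond partly to genuine news and partly
# to extrapolation of recent gains; once the news stops, the same feedback
# drives the bust, and prices undershoot the unchanged fundamental value.

FUNDAMENTAL = 100.0   # assumed "true" value of the asset, constant throughout

price = FUNDAMENTAL
momentum = 0.0        # investors' extrapolated expectation of future gains
last_price = price
history = []

for period in range(60):
    # A plausible new opportunity delivers genuine good news early on.
    news = 3.0 if period < 10 else 0.0
    # Extrapolative expectations: recent gains feed expected future gains.
    momentum = 0.6 * momentum + 0.4 * (price - last_price)
    last_price = price
    # Price moves with news and momentum, but is slowly pulled back
    # toward the fundamental value.
    price += news + momentum - 0.08 * (price - FUNDAMENTAL)
    history.append(price)

peak, trough = max(history), min(history[10:])
print(f"peak {peak:.0f}, post-boom trough {trough:.0f}, fundamental {FUNDAMENTAL:.0f}")
```

With these made-up parameters the price climbs roughly 50 percent above fundamental value, then collapses well below it before slowly recovering, even though nothing about the asset’s true worth ever changed.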
ECONOMICS FOR A NEW WORLD
The financial disaster spawned in the American housing market is changing economics. Forced to reassess, many economists have suddenly acknowledged that we’ve known for a long time that the proposition of unerring reason has limitations. We know that sometimes people’s preferences do not increase their welfare. Preferences can change unpredictably in response to events. And our belief about what set of choices will lead to our preferred outcomes is also a moving target. Add to that our limited ability to process information and compute the probable outcomes of our decisions, and the notion that we should always allow people’s individual preferences to guide prices across society starts looking reckless. Change is even coming to the Booth School of Business at the University of Chicago, the cathedral of efficient markets. The school recently took out an ad in the Financial Times trumpeting that despite its reputed belief in boundless rationality, it actually has psychologists on staff too, studying what happens when rationality fails.
Belief in unbounded rationality is not economics’ only flaw. The self-serving Homo economicus—willing to relentlessly pursue her individual preferences—is too narrow a being. The model fails to explain behaviors that are a fundamental part of who we are. Wedded to the notion that individuals will only do something if they get something in return, economics cannot properly explain why people help strangers whom they will never see again. It believes that people who reject free money must be crazy. But there are many instances of people rejecting payment for doing things they believe to be intrinsically good for society. In one experiment, Swedish women who were offered fifty kronor for their blood donations cut their donation rates in half. It was as if the payment crowded out an intrinsic nonmonetary incentive to give.
Homo economicus must be stripped of unbridled selfishness and modeled to fit a world where the relative distribution of prosperity is often more important than individual satisfaction. It must incorporate how the social norms built over evolutionary time to enhance societies’ ability to survive feed into people’s preferences even though they may not contribute to their immediate well-being. A comprehensive model of humankind must understand that people pursue not what they want but what they think they want, and how these objectives can diverge. It must include people who will pay an exorbitant price for a license plate precisely because its price is exorbitant, as if sticker shock were a desirable attribute. It must incorporate people’s persistent lack of self-control, even when they know that indulging their appetites—whether smoking, overeating, or forgetting to save for a rainy day—will carry a high price in the end.
Including all these dimensions of humanity is likely to turn economics into a messier, less mathematically elegant discipline than the one we’ve been used to for the past half century, which thought that one simple process—a relentless drive to maximize our objective well-being—could explain every human behavior. It will have to tag on other considerations, and understand how they interact with self-gratification. It is likely to be more tentative. But, in exchange, the new economics will provide a more comprehensive understanding of the world. Also important, it will be able to grapple with the many ways in which the decisions we make based on the prices arrayed before us can take us in directions that, individually or as a society, we would rather avoid.
BEYOND THE CHANGE of the discipline of economics, the more interesting question is how the global meltdown will change capitalism itself. In 2008, as the financial disaster spread outward from New York to London, Zurich, and around the world, many announced the end of the era of so-called Anglo-Saxon capitalism of small government and unfettered markets. “Self-regulation is finished, laissez-faire is finished, the idea of an all-powerful market which is always right is finished,” said France’s president, Nicolas Sarkozy. Peer Steinbrück, Germany’s finance minister at the time, argued that “the US will lose its status as the superpower of the world financial system.” Some policy makers have touted a Chinese model of capitalism, in which the state exerts direct control over huge swaths of economic activity, including credit allocation and the price of the nation’s currency, to fuel export-led development. As the global economic balance shifts—the OECD club of industrial nations estimates that nonmembers will make up 57 percent of the world economy by 2030, up from 49 percent today—perhaps the liberal model of democracy and market capitalism that powered prosperity in the West will lose influence.
I am somewhat skeptical that China could provide a model for countries that are not accustomed to totalitarian rule. But it seems inevitable that the rules of the economic order will change as we incorporate the lessons of the crash. Crashes like the one we just experienced affect people’s attitudes deeply. Opinion surveys in the United States over the past few decades suggest that Americans who experienced a deep recession between the ages of eighteen and twenty-five were more likely, later in life, to believe that success is achieved through luck rather than effort and were more likely to support redistributing income from the lucky rich to the unlucky poor. Paradoxically, the shock also diminished their trust in public institutions, like the presidency and Congress, so even as they demanded more of government, they doubted government’s ability to deliver necessary services.
History has many examples of crises causing deep changes in economic and political governance. At the beginning of the twentieth century, France was a highly evolved capitalist economy. The market capitalization of companies listed on the Parisian bourse reached 78 percent of French GDP, more than the value of the firms on the New York Stock Exchange as a share of the American economy. But the Great Depression and the German occupation delivered a shock to the faith of the French in the Third Republic. And their faith in laissez-faire capitalism suffered a permanent blow too.
The history of capitalism is punctuated by changes of direction in response to crises. In the 1930s, even as most major economies were mired in what would come to be known as the Great Depression, economic orthodoxy had it that government had no role to play in economic management. After the stock market crash of 1929, Secretary of the Treasury Andrew Mellon argued that government should stay out. According to the memoirs of President Herbert Hoover, Mellon’s formula was “liquidate labor, liquidate stocks, liquidate farmers, liquidate real estate . . . It will purge the rottenness out of the system.” Keynes, who proposed vigorous government spending to replace collapsing private demand, had a hard time being heard. There is a document in the archive of the British Treasury that shows the reaction of the permanent secretary of the Treasury to Keynes’s proposal for government spending to boost Britain’s economy, scribbled in three words: “Extravagance, Inflation, Bankruptcy.” By the end of the decade, however, Keynes’s work had become the basis for a new economic orthodoxy that persisted until the 1970s, based on the view that governments had a substantial role to play in economic management. And Keynes was the hero who saved the world.
THE STAGFLATION OF the 1970s and early 1980s provided a similar shock to the world’s economic organization, but in the opposite direction. Sparked by skyrocketing oil prices and bad economic management by overconfident governments willing to print money at will to meet their spending requirements, a combination of high inflation and high unemployment that the world had never seen before fatally undermined people’s trust in the state. This set the stage for a three-decade-long period of government withdrawal. Starting with the election of Margaret Thatcher in Britain in 1979 and of Ronald Reagan in the United States a year later, governments around the world cut taxes, privatized state enterprises, and deregulated economies. Even in France, where President François Mitterrand nationalized the banking system, increased government employment, and raised public-sector pay soon after being elected in 1981, the new orthodoxy ultimately prevailed. In 1983, President Mitterrand did a complete U-turn, froze the budget, and put in place a policy regime he called “La Rigueur”: The Austerity.
The financial crisis of 2008–9 has all the markings of such a watershed moment. But what will the new era on the other side of it look like? The crisis squarely undermined the belief that markets are always better than policy makers at allocating resources, setting prices freely by force of supply and demand. Could it point us toward a more aggressive social democracy, with a more active role for government in allocating resources and steering the economy? Could unfettered market capitalism have had its last hurrah?
The Obama administration seems to be trying to steer such a course. It is apparent in the Federal Communications Commission’s efforts to establish oversight over access to the Internet and in the increasingly activist stance of the trustbusters at the Federal Trade Commission and the Department of Justice. It is evident in the president’s ultimately successful battle to extend health insurance to all Americans. So, too, governments around the world have been working on new rules to further regulate and constrain the activities of banks—forcing them to amass bigger precautionary cushions of money, limiting the kinds of businesses they can engage in, and targeting them for special taxes to pay for the potential costs of any future financial disaster.
But it would be naive to believe that industrial nations will inevitably move back to a Big Government era, clipping bankers’ horns, clamping down on monopolies, and generally taking a decisive role in shaping the economic order. Bank shares jumped the day after the U.S. Congress passed a new law to regulate the financial industry. They surged again after global regulators agreed on new, higher capital cushions. Both jumps suggest the new rules will do little to curtail banks’ risk taking.
What’s more, President Obama’s efforts to reform health insurance and stimulate the economy with government spending provoked a furious populist backlash. Forty-eight percent of Americans tell Gallup their taxes are too high. As I write this passage, the loudest protesters on the streets are not railing against bankers. They are members of the Tea Party, who accuse President Obama of being a “socialist” out to undermine the nation’s values. In Europe, faith in government is not doing much better. Following the battering of the bonds of weaker countries like Greece and Spain in the spring of 2010, European Union governments virtually across the board declared it was time to start slashing their budget deficits. This, despite the fact that employment was still contracting, unemployment remained around 10 percent, and there was no plausible alternative source of demand to take the place of the spending that governments planned to withdraw from their economies. In other words, they risked a deeper economic downturn just for the sake of pulling the government out of the economy.
These are early days. Little more than two years have passed since the demise of Lehman Brothers. Finding a new equilibrium between government action and private markets was always going to take longer than that. And citizens’ mistrust of their governments seems about as high as their mistrust of bankers. But if we learn only one thing from the economic disaster of the last two years, it should be this: we should never again accept unchallenged the notion that the prices set by unfettered markets must inevitably be right. Sometimes they are. Sometimes they are not.