by Paul Krugman
The current conventional wisdom is that the budget burden of health care will be cured with rationing—the Federal government will simply decline to pay for many of the expensive procedures that medical science makes available. But what if, as seems likely, those procedures really work—if there comes a time when those who can afford it can expect to be vigorous centenarians, and perhaps even buy themselves smarter children, while those who cannot can look forward only to the Biblical threescore and ten? Is this really a tolerable prospect?
There is, some might say, no alternative. But of course there is. It is possible to imagine a society that taxes itself heavily in order to provide advanced medical care to everyone, and that rations that care not by wealth but by other criteria. (Bruce Sterling’s imaginary future is ruled by “the polity,” a nanny state that rewards not wealth but personal hygiene: Society takes care of those who take care of themselves.)
Such an outcome sounds unthinkable in the current political climate, which is dominated by a low-tax, antigovernment ideology. But history is not over; ideologies may change. For all we know, the future may belong to the medical welfare state, a state whose slogan might be, “From each according to his ability, to each according to his needs.”
The CPI and the Rat Race
Let’s talk about inflation indexing and the meaning of life.
Late in 1996 a panel of economists, led by Stanford's Michael Boskin, made semiofficial what most experts had been saying for some time: The Consumer Price Index overstates inflation. Nobody really knows by how much, but Boskin and company made a guesstimate of 1.1 percent annually. Compounded over decades, this is a huge error.
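To see just how huge, here is a minimal sketch of the compounding arithmetic in Python (the 1.1 percent figure is Boskin's; the 30-year horizon is an illustrative assumption):

```python
# Illustrative only: compound Boskin's estimated 1.1% annual CPI
# overstatement over a 30-year horizon (horizon chosen for illustration).
bias = 0.011          # Boskin panel's guesstimate of annual overstatement
years = 30

cumulative = (1 + bias) ** years
print(f"Cumulative overstatement factor: {cumulative:.2f}")
# ~1.39: prices "rose" about 39% more on paper than in reality, so
# measured real incomes were understated by roughly the same margin.
```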
This conclusion is controversial. Some people are upset because any reduction of inflation estimates will reduce Social Security benefits, which are indexed to the CPI. Others are upset because a revision of recent price history would mean abandoning a worldview on which they have staked their reputations. Quite a few people have committed themselves to the story line that productivity is up but real wages are down. If inflation has been lower than was previously assumed, that means the real value of wages may have gone up after all. And some economists with no particular ax to grind simply have doubts about the methodology.
Boskin may be right or wrong, but one argument by his critics is clearly wrong. They say, suppose it’s true that inflation has been less than the official increase in the CPI over the past few decades. If you assume a lower inflation rate and recalculate real incomes back to, say, 1950, you reach what seems to be a crazy conclusion: That in the early 1950s, the era of postwar affluence, most Americans were living below what we now regard as the poverty line. Some critics of the Boskin report regard this as a decisive blow to its credibility.
The idea that most Americans were poor in 1950 is indeed absurd, but not because of Boskin's numbers. After all, even if you use an unadjusted CPI, the standard of living of the median family (fiftieth percentile) in 1950 America appears startlingly low by current standards. In that year, median family income in 1994 dollars was only about $18,000. That's about the twentieth percentile today. Families at the twentieth percentile—that is, poorer than 80 percent of the population—may not be legally poor (only about 12 percent of families are officially below the poverty line), but they are likely to regard themselves as very disadvantaged and unsuccessful. So even using the old numbers, most families in 1950 had a material standard of living no better than that of today's poor or near-poor.
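As a hedged sketch of the critics' recalculation described above: the $18,000 figure comes from the text, while the 1950-to-1994 span and the roughly $15,000 mid-1990s poverty threshold for a family of four are illustrative assumptions.

```python
# Illustrative only: redo the critics' back-of-the-envelope calculation.
# If measured inflation overstated true inflation by 1.1% a year, the
# true price level rose less, so 1950 incomes translate into fewer
# 1994 dollars than the official CPI implies.
median_1950_official = 18_000   # 1950 median family income in 1994 dollars (from the text)
bias = 0.011                    # Boskin's estimated annual overstatement
years = 1994 - 1950             # 44 years of compounding (span is an assumption)

median_1950_adjusted = median_1950_official / (1 + bias) ** years
print(f"Adjusted 1950 median: ${median_1950_adjusted:,.0f}")
# ~$11,100 -- below a mid-1990s poverty threshold of roughly $15,000
# for a family of four (threshold figure is an assumption), which is
# exactly the "crazy conclusion" the critics point to.
```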
We can confirm this with more direct measures of the way people lived. In 1950 some 35 percent of dwellings lacked full indoor plumbing. Many families still did not have telephones or cars. And of course very few people had televisions. A modern American family at the twelfth percentile (that is, right at the poverty line) surely has a flushing toilet, a working shower, and a telephone with direct-dial long-distance service; probably has a color television; and may well even have a car. Take into account improvements in the quality of many other products, and it does not seem at all absurd to say that the material standard of living of that poverty-level family in 1996 is as good as or better than that of the median family in 1950.
What do we mean by this? We mean that if you could choose between the two material standards of living, other things being the same, you might well prefer the twelfth percentile standard of 1996 to the fiftieth percentile standard of 1950. But does that mean that most people were poor in 1950? No—because man does not live by bread, cars, televisions, or even plumbing alone.
Imagine that a mad scientist went back to 1950 and offered to transport the median family to the wondrous world of the 1990s, and to place them at, say, the twenty-fifth percentile level. The twenty-fifth percentile of 1996 is a clear material improvement over the median of 1950. Would they accept his offer? Almost surely not—because in 1950 they were middle class, while in 1996 they would be poor, even if they lived better in material terms. People don’t just care about their absolute material level—they care about their level compared with others’.
I know quite a few academics who have nice houses, two cars, and enviable working conditions, yet are disappointed and bitter men—because they have never received an offer from Harvard and will probably not get a Nobel Prize. They live very well in material terms, but they judge themselves relative to their reference group, and so they feel deprived. And on the other hand, it is an open secret that the chief payoff from being really rich is, as Tom Wolfe once put it, the pleasure of “seeing ’em jump.” Privilege is not merely a means to other ends, it is an end in itself.
My fellow Slate columnist Robert Wright would undoubtedly emphasize that our concern over status exists for good evolutionary reasons. In the ancestral environment a man would be likely to have more offspring if he got his pick of the most fertile-seeming women. That, in turn, would depend on his status, not his absolute standard of living. So males with a predisposition to status-seeking left more offspring than those without, and the end result is Bill G-g-g—I mean, Ronald Perelman.
Is my license as a practicing economist about to be revoked? Aren’t we supposed to believe in Economic Man? And doesn’t admitting that people care about fuzzy things like status undermine the whole economic method? Not really: Homo economicus is not a central pillar of my faith—he is merely a working assumption, albeit one that is extremely useful in many circumstances.
But admitting that people’s happiness depends on their relative economic level as well as their absolute economic resources has some subversive implications. For example: Many conservatives have seized on the Boskin report as a club with which to beat all those liberals who have been whining about declining incomes and increasing poverty in America. It was all, they insist, a statistical hoax. But you could very well make the opposite argument. America in the 1950s was a middle-class society in a way that America in the 1990s is not. That is, it had a much flatter income distribution, so that people had much more sense of sharing a common national lifestyle. And people in that relatively equal America felt good about their lives, even though by modern standards, they were poor—poorer, if Boskin is correct, than we previously thought. Doesn’t this mean, then, that having a more or less equal distribution of income makes for a happier society, even if it does not raise anyone’s material standard of living? That is, you can use the fact that people did not feel poor in the 1950s as an argument for a more radical egalitarianism than even most leftists would be willing to espouse.
You could even argue that American society in the 1990s is an engine that maximizes achievement yet minimizes satisfaction. In a society with a very flat distribution of income and status, nobody feels left out. In a society with rigid ranks, people do not expect to rise above their station and therefore do not feel that they have failed if they do not rise. (Aristocrats are not part of a peasant's reference group.) Modern America, however, is a hugely unequal society in which anyone can achieve awesome success, but not many actually do. The result is that many—perhaps even most—people feel that they have failed to make the cut, no matter how comfortable their lives. (In a land where anyone can become president, anyone who doesn't become president is a failure.) My European friends always marvel at how hard Americans work, even those who already have plenty of money. Why don't we take more time to enjoy what we have? The answer, of course, is that we work so hard because we are determined to get ahead—an effort that (for Americans as a society) is doomed to failure, because competition for status is a zero-sum game. We can't all "get ahead." No matter how fast we all run, someone must be behind.
If one follows this line of thought one might well be led to some extremely radical ideas about economic policy, ideas that are completely at odds with all current orthodoxies. But I won’t try to come to grips with such ideas in this column. Frankly, I don’t have the time. I have to get back to my research—otherwise, somebody else might get that Nobel.
Looking Backward
When looking backward, one must always be prepared to make allowances: It is unfair to blame late twentieth-century observers for their failure to foresee everything about the century to come. Long-term social forecasting is an inexact science even now, and in 1996 the founders of modern nonlinear socioeconomics were still obscure graduate students. Still, even then many people understood that the major forces driving economic change would be the continuing advance of digital technology, on one side, and the spread of economic development to previously backward nations, on the other; in that sense there were no big surprises. The puzzle is why the pundits of the time completely misjudged the consequences of those changes.
Perhaps the best way to describe the flawed vision of fin-de-siècle futurists is to say that, with few exceptions, they expected the coming of an “immaculate” economy—an economy in which people would be largely emancipated from any grubby involvement with the physical world. The future, everyone insisted, would bring an “information economy,” which would mainly produce intangibles; the good jobs would go to “symbolic analysts,” who would push icons around on computer screens; and knowledge rather than traditionally important resources like oil or land would become the main source of wealth and power.
But even in 1996 it should have been obvious that this was silly. First, for all the talk of an information economy, ultimately an economy must serve consumers—and consumers don’t want information, they want tangible goods. In particular, the billions of Third World families who finally began to have some purchasing power as the twentieth century ended did not want to watch pretty graphics on the Internet—they wanted to live in nice houses, drive cars, and eat meat. Second, the Information Revolution of the late twentieth century was—as everyone should have realized—a spectacular but only partial success. Simple information processing became faster and cheaper than anyone had imagined possible; but the once confident Artificial Intelligence movement went from defeat to defeat. As Marvin Minsky, one of the movement’s founders, despairingly remarked, “What people vaguely call common sense is actually more intricate than most of the technical expertise we admire.” And it takes common sense to deal with the physical world—which is why, even at the end of the twenty-first century, there are still no robot plumbers.
Most important of all, the prophets of an “information economy” seem to have forgotten basic economics. When something becomes abundant, it also becomes cheap. A world awash in information will be a world in which information per se has very little market value. And in general when the economy becomes extremely good at doing something, that activity becomes less rather than more important. Late-twentieth-century America was supremely efficient at growing food; that was why it had hardly any farmers. Late-twenty-first-century America is supremely efficient at processing routine information; that is why the traditional white-collar worker has virtually disappeared from the scene.
With these observations as background, then, let us turn to the five great economic trends that observers in 1996 should have expected but didn’t.
Soaring resource prices. The first half of the 1990s was an era of extraordinarily low raw-material prices. Yet it is hard to see why anyone thought this situation would continue. The Earth is, as a few lonely voices continued to insist, a finite planet; when two billion Asians began to aspire to Western levels of consumption, it was inevitable that they would set off a scramble for limited supplies of minerals, fossil fuels, and even food.
In fact, there were some warning signs as early as 1996. There was a temporary surge in gasoline prices during the spring of that year, due to an unusually cold winter and miscalculations about Middle East oil supplies. Although prices soon subsided, the episode should have reminded people that by the mid-nineties the world’s industrial nations were once again as vulnerable to disruptions of oil supply as they had been in the early 1970s; but the warning was ignored.
Quite soon, however, it became clear that natural resources, far from becoming irrelevant, had become more crucial than ever before. In the nineteenth century great fortunes were made in industry; in the late twentieth they were made in technology; but today’s superrich are, more often than not, those who own prime land or mineral rights.
The environment as property. In the twentieth century people used some quaint expressions—“free as air,” “spending money like water”—as if such things as air and water were available in unlimited supply. But in a world where billions of people have enough money to buy cars, take vacations, and buy food in plastic packages, the limited carrying capacity of the environment has become perhaps the single most important constraint on the average standard of living.
By 1996 it was already clear that one way to cope with environmental limits was to use the market mechanism—in effect to convert those limits into new forms of property rights. A first step in this direction was taken in the early 1990s, when the U.S. government began allowing electric utilities to buy and sell rights to emit certain kinds of pollution; the principle was extended in 1995 when the government began auctioning off rights to use the electromagnetic spectrum. Today, of course, practically every activity with an adverse impact on the environment carries a hefty price tag. It is hard to believe that as late as 1995 an ordinary family could fill up a Winnebago with dollar-a-gallon gasoline, then pay only five dollars to drive it into Yosemite. Today such a trip would cost about fifteen times as much even after adjusting for inflation.
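As a minimal sketch of how such tradable permits turn an environmental limit into a market price, consider this toy model (every firm, cost figure, and the cap itself are invented for illustration; this is not a description of the actual 1990s emissions program):

```python
# Toy cap-and-trade model (all numbers invented for illustration).
# A regulator fixes total allowed emissions; firms with the cheapest
# abatement cut back first, and the permit price settles at the
# marginal cost of the last unit that must be abated.
firms = {"A": 2.0, "B": 5.0, "C": 9.0}   # $/ton marginal abatement cost
emissions_each = 100                      # uncontrolled emissions per firm (tons)
cap = 150                                 # total permits issued (tons)

must_abate = len(firms) * emissions_each - cap   # 150 tons must be cut
# Cheapest abaters cut first; walk up the cost curve until the cap binds.
for name, cost in sorted(firms.items(), key=lambda kv: kv[1]):
    cut = min(must_abate, emissions_each)
    must_abate -= cut
    if must_abate == 0:
        print(f"Permit price settles near ${cost:.2f}/ton (firm {name}'s marginal cost)")
        break
# Firms with higher abatement costs buy permits instead of cutting back;
# the environmental limit itself has become a priced, tradable asset.
```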
The economic consequences of the conversion of environmental limits into property were unexpected. Once governments got serious about making people pay for the pollution and congestion they caused, the cost of environmental licenses became a major part of the cost of doing business. Today license fees account for more than 30 percent of GDP. And such fees have become the main source of government revenue; after repeated reductions, the Federal income tax was finally abolished in 2043.
The rebirth of the big city. During the second half of the twentieth century, the traditional densely populated, high-rise city seemed to be in unstoppable decline. Modern telecommunications had eliminated much of the need for close physical proximity between routine office workers, leading more and more companies to shift their back-office operations from lower Manhattan and other central business districts to suburban office parks. It began to seem as if cities as we knew them would vanish, replaced with an endless low-rise sprawl punctuated by an occasional cluster of ten-story office towers.
But this proved to be a transitory phase. For one thing, high gasoline prices and the cost of environmental permits made a one-person, one-car commuting pattern impractical. Today the roads belong mainly to hordes of share-a-ride minivans, efficiently routed by a web of intercommunicating computers. However, although this semi-mass-transit system works better than twentieth-century commuters could have imagined—and employs more than four million drivers—suburban door-to-door transportation still takes considerably longer than it did when ordinary commuters and shoppers could afford to drive their own cars. Moreover, the jobs that had temporarily flourished in the suburbs—mainly relatively routine office work—were precisely the jobs that were eliminated in vast numbers beginning in the mid-nineties. Some white-collar jobs migrated to low-wage countries; others were taken over by computers. The jobs that could not be shipped abroad or handled by machines were those that required the human touch—that required face-to-face interaction, or close physical proximity between people working directly with physical materials. In short, they were jobs best done in the middle of dense urban areas, areas served by what is still the most effective mass-transit system yet devised: the elevator.
Here again, there were straws in the wind. At the beginning of the 1990s, there was much speculation about which region would become the center of the burgeoning multimedia industry. Would it be Silicon Valley? Los Angeles? By 1996 the answer was clear; the winner was…Manhattan, whose urban density favored the kind of close, face-to-face interaction that turned out to be essential. Today, of course, Manhattan boasts almost as many 200-story buildings as St. Petersburg or Bangalore.
The devaluation of higher education. In the 1990s everyone believed that education was the key to economic success, for both individuals and nations. A college degree, maybe even a postgraduate degree, was essential for anyone who wanted a good job as one of those “symbolic analysts.”
But computers are very good at analyzing symbols; it’s the messiness of the real world they have trouble with. Furthermore, symbols can be quite easily transmitted to Asmara or La Paz and analyzed there for a fraction of the cost of doing it in Boston. So over the course of this century many of the jobs that used to require a college degree have been eliminated, while many of the rest can, it turns out, be done quite well by an intelligent person whether or not she has studied world literature.