by Robert Litan
Ironically, Samuelson earned his undergraduate degree in economics at the University of Chicago, the intellectual home of staunch defenders of free markets, whose views Samuelson later did not always share. This was especially true of Milton Friedman, who was Samuelson’s academic foil through much of both their careers. Samuelson earned his doctorate at Harvard, but thereafter joined the faculty at MIT, where he remained until he died at the age of 94 in 2009.
Samuelson published research on virtually every major topic in economics, or as he stated in one essay about his life, he “had his finger in every pie.”19 Not surprisingly, therefore, he authored (and later coauthored, with William Nordhaus of Yale University, one of many of Samuelson’s outstanding students you will meet in Chapter 14) an introductory economics textbook that for several decades was the leading one in the field (I used it in my freshman year at Penn).
Like Friedman, Samuelson wrote for popular magazines, including Newsweek, and counseled politicians, most notably candidate and then President John F. Kennedy, and President Lyndon Johnson.
Samuelson influenced so many people and students that inevitably he had a major effect on business simply through his teaching. He was also an avid investor, and you will meet in Chapter 8 a legend in that field (John Bogle) who traces one of his important innovations to Samuelson’s work in finance.
Later, while I was in graduate school, the United States suffered through the stagflation of 1973–1974, a combination of high inflation and high unemployment that some claimed could not easily be reconciled with the prevailing Keynesian framework and so demanded a new macroeconomic paradigm. The result was a new skepticism about the ability of any governmental policies to have a long-run impact on unemployment, coupled with a continuing worry that excessive monetary growth would lead to runaway inflation. What emerged was a quasi-Keynesian synthesis asserting that there is a natural rate of unemployment that the government can affect marginally in the short run through fiscal and monetary policies, and perhaps nudge lower in the long run through better education and training. Still, there remained a number of skeptics, three of whom eventually won Nobel Prizes (Robert Lucas of Chicago, Thomas Sargent of the University of Minnesota, and Edward Prescott of Arizona State University), who questioned the effectiveness of any governmental policies to affect the economy even in the short run. Once the financial crisis and recession of 2008–2009 was under way, the intellectual and political fights about the impact of macroeconomic policies broke out again, and they continue to this day.
This is the side of the field that the public, to the extent it follows economics, sees most: vigorous and sometimes ad hominem arguments with seemingly no consensus. But as I asserted at the outset of this chapter, relatively few economists actually are involved in these disputes or spend much of their research time seeking to resolve them. Far more economists spend their day-to-day work attempting to understand the behavior of the subunits of the macroeconomy: individual firms, industries, and consumers. These activities are lumped under the broad field of microeconomics, and it is the insights of some of this work that are the focus of this book.
In recent years, and especially in the wake of the global financial crisis, macroeconomics has become a humbler science. Prior to the crisis, many economists were convinced that our understanding of the macroeconomy, including the nature of economic fluctuations and the effects of fiscal and monetary actions, was overall pretty solid.20 The financial crisis, however, severely challenged many of the assumptions and conclusions that had gained consensus among macroeconomists over the years. It turns out that it is extremely difficult to estimate, assess, or forecast many dynamics in the macroeconomy with any reasonable degree of certainty. Indeed, one of the most contentious policy debates during the financial crisis concerned the unknown effects of the fiscal stimulus: Was it too small or too large? What would be its impact on consumer spending, unemployment, or GDP? It is simply impossible to run controlled experiments with the entire economy. For these reasons, macroeconomics remains an imperfect science at best.21
Microeconomics, on the other hand, is a much better understood segment of the field, for at least two reasons. First, understanding and predicting the behavior of individuals and firms is much easier than predicting what happens in the entire economy, which is really the summation of literally millions of micro units. Second, microeconomists often have richer data and more tools at their disposal (such as running experiments) to understand and predict consumer and firm behavior. To be sure, microeconomics is also an imperfect science, but as I demonstrate throughout this book, many of its key insights have spurred innovation and led to trillion-dollar benefits to the economy.
Economic Growth in the Short and Long Run
There is one branch of macroeconomics that plays a background role for much of what follows, and this involves attempts to understand what really drives, and ideally predicts, the rate of growth for entire economies over the long run. This is a distinctly different subject from what determines the quarterly or annual ups and downs of the economy, which is what most macroeconomists spend their professional lives trying to understand.
Economists often discuss long-run growth in terms of an economy’s potential growth rate. This is the rate at which total output (putting aside how it is distributed) is capable of growing, assuming those who are willing and able to work have jobs. This doesn’t mean the unemployment rate is zero. Rather, because of frictions in the labor market and the mismatch at any given time between the qualities employers seek in employees and the skills of available workers, full employment is taken to mean the lowest unemployment rate that is consistent with stable inflation. Until the 2008–2009 recession, the so-called natural rate of unemployment in the United States was thought to be in the neighborhood of 5 percent. Since the recession, with so many workers out of jobs for so long and skills requirements continuing to increase, there seems to be a rough consensus that the natural rate has inched above 5 percent, perhaps closer to 6 percent.
Assuming this to be the case, the long-run growth rate of the United States economy, or any economy, is at once simple, because it is a mathematical identity, and inherently complex, because one large component of that identity is virtually impossible to project with any accuracy, even by the best of economists.
The simple mathematics is that total output growth equals the growth of the labor force plus the growth of productivity, defined as output per worker. Of the two halves of this equation, productivity is far more important, and it is also the more difficult to project. That is because the easiest component of productivity growth to predict, the part that depends on projected growth in the equipment and buildings that enable workers to be more productive, is also the less important one. By far the most important factor affecting productivity growth, at least in countries that have the most advanced technology, is the pace of innovation: new and more highly valued products and services, and new methods of generating and delivering them.
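The identity just described is simple enough to sketch in a few lines of Python. The growth rates below are purely illustrative assumptions, not forecasts or official projections:

```python
# Back-of-the-envelope sketch of the long-run growth identity:
# output growth = labor force growth + productivity growth.
# The numbers are illustrative assumptions, not projections.

def potential_output_growth(labor_force_growth: float,
                            productivity_growth: float) -> float:
    """Approximate potential output growth as the sum of its two components."""
    return labor_force_growth + productivity_growth

# Example: 0.5% labor force growth plus 1.5% productivity growth
# implies roughly 2.0% potential output growth.
growth = potential_output_growth(0.005, 0.015)
print(f"{growth:.1%}")  # 2.0%
```

As the surrounding text notes, the hard part is not the addition but projecting the productivity term with any confidence.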
Innovation and growth matter because they determine the rate at which general living standards advance. The average American has about 10 times the income of an American living at the beginning of the nineteenth century because of the wave of innovations that now characterize modern life: indoor plumbing, air conditioning, modern means of communication and transportation, huge advances in medical care, and amazing advances in information technology. Similar benefits are enjoyed by the average citizens of other now rich or almost rich countries in Europe (despite that continent’s post-2008 difficulties), parts of Asia (Singapore), Latin America (Chile), Canada, and Australia. Meanwhile, much of the rest of the world wants to be like the rich world, and China and India are racing ahead as fast as they can to catch up (though on a per capita basis they still have a long way to go).
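A quick back-of-the-envelope calculation shows how modest the annual growth rate behind that tenfold rise actually is; the 210-year horizon below is an illustrative assumption:

```python
# If average income rose roughly tenfold over about 210 years, what
# annual growth rate does that imply? Solve (1 + r)**years = multiple.
# Both numbers are rough, illustrative assumptions.

years = 210
multiple = 10
annual_growth = multiple ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.2%}")  # roughly 1.1% per year
```

The point, of course, is that even a growth rate of barely 1 percent per year, compounded over two centuries, transforms living standards.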
Averages, of course, conceal how income and wealth are actually distributed. A society can be relatively rich on average by both measures, but have extremes of poverty and prosperity. As the former Secretary of Labor Robert Reich has quipped, “Shaquille O’Neal [the former 7 foot-plus center for the Los Angeles Lakers] and I [Reich] have an average height of six feet.”22
The Equity–Efficiency Tradeoff
Ideally, there is a balance between distribution and growth. Societies need some degree of inequality to give high performers an incentive to keep working hard and innovating, and thereby lift the economic fortunes of everyone, even those at the bottom. But too much inequality can leave the rich feeling physically vulnerable behind their gated communities and personal bodyguards, while stoking support for leaders, even in quasi-democratic regimes such as Hugo Chavez’s Venezuela, who favor populist policies that forcibly redistribute income and wealth to the poor.
Many (but not all) economists believe in a limited amount of redistribution (it has been the basis for the income tax code in the United States and other advanced economies) as a way of giving families at the bottom of the income distribution at least some chance to help themselves and their children gain access to goods and services, most importantly education, that will enable them to improve their lives and climb over time into better stations in life. It remains a stubborn fact that one’s own income is heavily influenced by the socioeconomic status of one’s parents.23
In addition, redistribution provides a safety net for the aged, those with limited skills and therefore limited incomes, and those temporarily out of work. In doing so, redistribution is a form of social glue that can help hold a society together, especially one as heterogeneous as the United States. But too much redistribution can drive those at the very top to invest in nontaxed assets (such as municipal bonds) or even to leave for other countries, something private jets and Internet access make easier to do.
More broadly, there has long been a conversation not only about redistribution but also about the proper role of government in the economy: When is it appropriate for the government to intervene? The standard textbook answer is that government intervention can be justified on both efficiency and equity grounds.24
The efficiency rationale is that the government should intervene where there is market failure. As we have seen, market failures deliver outcomes that do not maximize efficiency for society. That is, the pie is not as large as it could be, so the government can intervene and make it larger.
The second rationale for government intervention has to do with equity, and particularly redistribution: the size of each person’s slice of the pie. The utilitarian view posits that since an extra dollar is worth more to a poor person than to a rich person, society would be better off if some amount of redistribution took place. However, and this is a huge point, too much redistribution, changing the size of each person’s slice, can shrink the entire pie. Why? Because of incentives: the most productive and richest members of society may be less willing to work hard and innovate if they see a larger share of their pie taken away and given to less productive or less well-off individuals.
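The utilitarian logic can be made concrete with a small sketch, assuming (purely for illustration) logarithmic utility and hypothetical incomes:

```python
# A minimal illustration of the utilitarian argument: under diminishing
# marginal utility (here, an assumed logarithmic utility function), an
# extra dollar raises a poor person's utility by more than it lowers a
# rich person's, so a small transfer raises total utility.
# The incomes and transfer amount are hypothetical.
import math

def total_utility(incomes):
    """Sum of log utilities, one common (assumed) utilitarian metric."""
    return sum(math.log(y) for y in incomes)

before = [10_000, 100_000]   # poor person, rich person
after = [11_000, 99_000]     # transfer $1,000 from rich to poor

print(total_utility(after) > total_utility(before))  # True
```

Note that this toy calculation deliberately leaves out the incentive effects the paragraph above warns about; with large enough transfers, total income itself would shrink.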
The subject of distribution is a hot button issue that deserves its own very different book from this one. It is a topic that involves both positive and normative economics—another big distinction in economics. Positive economics is the branch of economics concerned with explaining phenomena without making value judgments as to the merits or fairness of a given outcome. Normative economics, on the other hand, is the branch of economics that specifically deals with expressing how things should be, not just the way they are.
The debate about redistribution and whether to greatly expand it through much higher taxes, specifically a global wealth tax, really heated up in 2014 with the publication of Capital in the Twenty-First Century, by French economist Thomas Piketty.25 Since this book is about how economics is used in business, I will not wade into either the technical or the policy aspects of that bestselling book, but simply note that it exists and has clearly become a lightning rod for economists and noneconomists alike.
One much less controversial but I believe equally important book relating to the subject of distribution, and one that I highly recommend to readers, is The Race between Education and Technology by Claudia Goldin and Lawrence Katz. These two highly regarded Harvard economists advance the view that most economists would agree with—that eventually the best way to narrow income inequality is to widen opportunities for education, especially among those in disadvantaged neighborhoods and in single-parent families.26
This challenge grows harder when technology is racing ahead, rapidly increasing the minimum level of skills workers need to secure even moderately well-paid jobs, while public resources available for teaching those from disadvantaged homes are under stress. Tyler Cowen, one of the more ingenious economists of this generation, has written a provocative, disturbing, but also very persuasive book, Average Is Over,27 which argues that the U.S. economy (and by logical extension other rich-country economies) is really becoming two economies: one for those whose earnings put them in the top 20 percent of incomes, and another for the other 80 percent. The keys to being in the top 20 percent are facility with information technology and the ability to market oneself, a characteristic of successful entrepreneurs.
There is no amount of redistribution that will rectify this 20/80 situation without severely undermining incentives for growth. Nor, in Cowen’s view, is education the whole answer. People in the bottom 80 percent have to be motivated to learn the skills that can put them in the top 20 percent, which in the process would expand that top group. Figuring out how to encourage students and adults to constantly retrain themselves to keep pace with fast-moving technology is one of the great challenges of our time.
Whether or not Cowen turns out to be right that at least for the next couple of decades the 20/80 division will continue and possibly widen, economic growth can still increase the incomes of even those at the bottom of the income distribution in each society. The poorest Americans today are better off than those in the middle and upper classes of the nineteenth century, and that is because of the invention and commercialization of many of those technologies listed earlier that characterize our modern society and have powered its growth.
As for the rest of the world, literally billions of people today are living above the level of extreme poverty—one or two dollars a day—because of economic growth, especially in the once really poor countries, China and India. If you care about all people living better and longer lives, you cannot be opposed to economic growth.
Innovation and Growth: The Role of Economists
In the meantime, the main focus in the following chapters is on one aspect of economic growth that even many economists have not fully recognized: how the ideas of economists have contributed and, if given the chance, will continue to contribute to innovation and growth, and thereby enhance human welfare. The importance of this theme is best understood if I first give you some context on the much broader debate over the prospects for future growth, one being vigorously waged among some academic economists and, I predict, not likely to die down anytime soon.
Shortly after the recovery from the Great Recession of 2008–2009 began, the same Tyler Cowen to whom I have just referred penned a short e-book (later published in print), The Great Stagnation, the first e-book of its kind in economics, a format that is becoming more widely used. In brief, the book’s main thesis was that the United States, even as a technological leader, has already picked all the low-hanging fruit available for increasing growth: moving women into the labor force, topping out the percentage of the population that goes to college, and wringing about as much innovation as it can from its universities and private sector. The future outlook for productivity growth, at least for the next several decades, therefore looks dim in Cowen’s view, much dimmer than the long-term forecasts of somewhat less than 2 percent per year issued by the Congressional Budget Office (which is itself more than a percentage point below the 3 percent pace of the “golden” quarter century after the end of World War II and of the revival years of the 1990s).28
Cowen’s dour view of the future was reinforced by an even more pessimistic outlook painted by one of America’s leading macroeconomists, Robert Gordon of Northwestern University, in two widely read academic studies.29 Gordon’s story, broadly speaking, is that the information technology revolution was never what it was cracked up to be, and that the innovative streak that has driven growth in the United States and other advanced countries since the Industrial Revolution has run its course. Gordon went out on a limb, at least relative to other economists, and projected that productivity would essentially stop growing altogether at some point in the near future.
Among academic economists, the leading rebuttal to Cowen’s and Gordon’s pessimism was advanced in another e-book (later also published in hard copy), The Race Against the Machine, authored by two technology economists, Erik Brynjolfsson and Andrew McAfee. These authors argue that Moore’s law (the historical doubling of computing power every 12 to 18 months, named after Intel cofounder Gordon Moore) will continue making the information technology industry even more productive, while improving productivity in a wide range of sectors that use IT, such as education and health care. At the same time, as Cowen would write more expansively later, the two authors worried that continued IT-driven innovation would widen income inequalities.30 Brynjolfsson and McAfee have since published a second book, The Second Machine Age, which amplifies the themes of their first.31
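A few lines of Python show the compounding such a doubling rate implies; the 18-month doubling period used below is the longer (more conservative) end of the range quoted above:

```python
# Compounding behind the Moore's law characterization above: if computing
# power doubles every 18 months, how much does it grow over a decade?
# The 18-month period is an assumption taken from the range in the text.

doubling_period_years = 1.5
horizon_years = 10
doublings = horizon_years / doubling_period_years  # about 6.7 doublings
multiple = 2 ** doublings
print(f"Growth over {horizon_years} years: about {multiple:.0f}x")
```

Roughly a hundredfold gain in a decade is what makes Brynjolfsson and McAfee's optimism about IT-driven productivity plausible, whatever one thinks of its distributional consequences.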