by Kai-Fu Lee
This, the techno-optimists assert, is the real story of technological change and economic development. Technology improves human productivity and lowers the price of goods and services. Those lower prices mean consumers have greater spending power, and they either buy more of the original goods or spend that money on something else. Both of these outcomes increase the demand for labor and thus jobs. Yes, shifts in technology might lead to some short-term displacement. But just as millions of farmers became factory workers, those laid-off factory workers can become yoga teachers and software programmers. Over the long term, technological progress never leads to a lasting reduction in jobs or a rise in unemployment.
It’s a simple and elegant explanation of the ever-increasing material wealth and relatively stable job markets in the industrialized world. It also serves as a lucid rebuttal to a series of “boy who cried wolf” moments around technological unemployment. Ever since the Industrial Revolution, people have feared that everything from weaving looms to tractors to ATMs will lead to massive job losses. But each time, increasing productivity has paired with the magic of the market to smooth things out.
Economists who look to history—and the corporate juggernauts who will profit tremendously from AI—use these examples from the past to dismiss claims of AI-induced unemployment in the future. They point to millions of inventions—the cotton gin, lightbulbs, cars, video cameras, and cell phones—none of which led to widespread unemployment. Artificial intelligence, they say, will be no different. It will greatly increase productivity and promote healthy growth in jobs and human welfare. So what is there to worry about?
THE END OF BLIND OPTIMISM
If we think of all inventions as data points and weight them equally, the techno-optimists have a compelling and data-driven argument. But not all inventions are created equal. Some of them change how we perform a single task (typewriters), some of them eliminate the need for one kind of labor (calculators), and some of them disrupt a whole industry (the cotton gin).
And then there are technological changes on an entirely different scale. The ramifications of these breakthroughs will cut across dozens of industries, with the potential to fundamentally alter economic processes and even social organization. These are what economists call general purpose technologies, or GPTs. In their landmark book The Second Machine Age, MIT professors Erik Brynjolfsson and Andrew McAfee described GPTs as the technologies that “really matter,” the ones that “interrupt and accelerate the normal march of economic progress.”
Looking only at GPTs dramatically shrinks the number of data points available for evaluating technological change and job losses. Economic historians have many quibbles over exactly which innovations of the modern era should qualify (railroads? the internal combustion engine?), but surveys of the literature reveal three technologies that receive broad support: the steam engine, electricity, and information and communication technology (such as computers and the internet). These have been the game changers, the disruptive technologies that extended their reach into many corners of the economy and radically altered how we live and work.
These three GPTs have been rare enough to warrant evaluation on their own, not simply to be lumped in with millions of more narrow innovations like the ballpoint pen or automatic transmission. And while it’s true that the long-term historical trend has been toward more jobs and greater prosperity, when looking at GPTs alone, three data points are not enough to extract an ironclad principle. Instead, we should look to the historical record to see how each of these groundbreaking innovations has affected jobs and wages.
The steam engine and electrification were crucial pieces of the first and second Industrial Revolutions (1760–1830 and 1870–1914, respectively). Both of these GPTs facilitated the creation of the modern factory system, bringing immense power and abundant light to the buildings that were upending traditional modes of production. Broadly speaking, this change in the mode of production was one of deskilling. These factories took tasks that once required high-skilled workers (for example, handcrafting textiles) and broke the work down into far simpler tasks that could be done by low-skilled workers (operating a steam-driven power loom). In the process, these technologies greatly increased the amount of these goods produced and drove down prices.
In terms of employment, early GPTs enabled process innovations like the assembly line, which gave thousands—and eventually hundreds of millions—of former farmers a productive role in the new industrial economy. Yes, they displaced a relatively small number of skilled craftspeople (some of whom would become Luddites), but they empowered much larger numbers of low-skilled workers to take on repetitive, machine-enabled jobs that increased their productivity. Both the economic pie and overall standards of living grew.
But what about the most recent GPT, information and communication technologies (ICT)? So far, its impact on labor markets and wealth inequality has been far more ambiguous. As Brynjolfsson and McAfee point out in The Second Machine Age, over the past thirty years, the United States has seen steady growth in worker productivity but stagnant growth in median income and employment. Brynjolfsson and McAfee call this “the great decoupling.” After decades when productivity, wages, and jobs rose in almost lockstep fashion, that once tightly woven thread has begun to fray. While productivity has continued to shoot upward, wages and jobs have flatlined or fallen.
This has led to growing economic stratification in developed countries like the United States, with the economic gains of ICT increasingly accruing to the top 1 percent. That elite group in the United States roughly doubled its share of national income between 1980 and 2016. By 2017, the top 1 percent of Americans possessed almost twice as much wealth as the bottom 90 percent combined. While the most recent GPT proliferated across the economy, real wages for the median American worker have remained flat for over thirty years, and they’ve actually fallen for the poorest Americans.
One reason why ICT may differ from the steam engine and electrification is because of its “skill bias.” While the two other GPTs ramped up productivity by deskilling the production of goods, ICT is instead often—though not always—skill biased in favor of high-skilled workers. Digital communications tools allow top performers to efficiently manage much larger organizations and reach much larger audiences. By breaking down the barriers to disseminating information, ICT empowers the world’s top knowledge workers and undercuts the economic role of many in the middle.
Debates over how large a role ICT has played in job and wage stagnation in the United States are complex. Globalization, the decline of labor unions, and outsourcing are all factors here, providing economists with fodder for endless academic arguments. But one thing is increasingly clear: there is no guarantee that GPTs that increase our productivity will also lead to more jobs or higher wages for workers.
Techno-optimists can continue to dismiss these concerns as the same old Luddite fallacy, but they are now arguing against some of the brightest economic minds of today. Lawrence Summers has served as the chief economist of the World Bank, as the treasury secretary under President Bill Clinton, and as the director of President Barack Obama’s National Economic Council. In recent years, he has been warning against the no-questions-asked optimism around technological change and employment.
“The answer is surely not to try to stop technical change,” Summers told the New York Times in 2014, “but the answer is not to just suppose that everything’s going to be O.K. because the magic of the market will assure that’s true.”
Erik Brynjolfsson has issued similar warnings about the growing disconnect between the creation of wealth and jobs, calling it “the biggest challenge of our society for the next decade.”
AI: PUTTING THE G IN GPT
What does all this have to do with AI? I am confident that AI will soon enter the elite club of universally recognized GPTs, spurring a revolution in economic production and even social organization. The AI revolution will be on the scale of the Industrial Revolution, but probably larger and definitely faster. Consulting firm PwC predicts that AI will add $15.7 trillion to the global economy by 2030. If that prediction holds up, it will be an amount larger than the entire GDP of China today and equal to approximately 80 percent of the GDP of the United States in 2017. Seventy percent of those gains are predicted to accrue in the United States and China.
These disruptions will be more broad-based than prior economic revolutions. Steam power fundamentally altered the nature of manual labor, and ICT did the same for certain kinds of cognitive labor. AI will cut across both. It will perform many kinds of physical and intellectual tasks with a speed and power that far outstrip any human, dramatically increasing productivity in everything from transportation to manufacturing to medicine.
Unlike the GPTs of the first and second Industrial Revolutions, AI will not facilitate the deskilling of economic production. It won’t take advanced tasks done by a small number of people and break them down further for a larger number of low-skill workers to do. Instead, it will simply take over the execution of tasks that meet two criteria: they can be optimized using data, and they do not require social interaction. (I will be going into greater detail about exactly which jobs AI can and cannot replace.)
Yes, there will be some new jobs created along the way—robot repair technicians and AI data scientists, for example. But the main thrust of AI’s employment impact is not one of job creation through deskilling but of job replacement through increasingly intelligent machines. Displaced workers can theoretically transition into other industries that are more difficult to automate, but this is itself a highly disruptive process that will take a long time.
HARDER, BETTER, FASTER, STRONGER
And time is one thing that the AI revolution is not inclined to grant us. The transition to an AI-driven economy will be far faster than any of the prior GPT-induced transformations, leaving workers and organizations in a mad scramble to adjust. Whereas the Industrial Revolution took place across several generations, the AI revolution will have a major impact within one generation. That’s because AI adoption will be accelerated by three catalysts that didn’t exist during the introduction of steam power and electricity.
First, many productivity-increasing AI products are just digital algorithms: infinitely replicable and instantly distributable around the world. This makes for a stark contrast to the hardware-intensive revolutions of steam power, electricity, and even large parts of ICT. For these transitions to gain traction, physical products had to be invented, prototyped, built, sold, and shipped to end users. Each time a marginal improvement was made to one of these pieces of hardware, it required that the earlier process be repeated, with the attendant costs and social frictions that slowed down adoption of each new tweak. All of these frictions slowed down development of new technologies and extended the time until a product was cost-effective for businesses to adopt.
In contrast, the AI revolution is largely free of these limitations. Digital algorithms can be distributed at virtually no cost, and once distributed, they can be updated and improved for free. These algorithms—not advanced robotics—will roll out quickly and take a large chunk out of white-collar jobs. Much of today’s white-collar workforce is paid to take in and process information, and then make a decision or recommendation based on that information—which is precisely what AI algorithms do best. In industries with a minimal social component, that human-for-machine replacement can be made rapidly and done en masse, without any need to deal with the messy details of manufacturing, shipping, installation, and on-site repairs. While the hardware of AI-powered robots or self-driving cars will bear some of these legacy costs, the underlying software does not, allowing for the sale of machines that actually get better over time. Lowering these barriers to distribution and improvement will rapidly accelerate AI adoption.
The second catalyst is one that many in the technology world today take for granted: the creation of the venture-capital industry. VC funding—early investments in high-risk, high-potential companies—barely existed before the 1970s. That meant the inventors and innovators during the first two Industrial Revolutions had to rely on a thin patchwork of financing mechanisms to get their products off the ground, usually via personal wealth, family members, rich patrons, or bank loans. None of these have incentive structures built for the high-risk, high-reward game of funding transformative innovation. That dearth of innovation financing meant many good ideas likely never got off the ground, and successful implementation of the GPTs scaled far more slowly.
Today, VC funding is a well-oiled machine dedicated to the creation and commercialization of new technology. In 2017, global venture funding set a new record with $148 billion invested, egged on by the creation of Softbank’s $100 billion “vision fund,” which will be disbursed in the coming years. That same year, global VC funding for AI startups leaped to $15.2 billion, a 141 percent increase over 2016. That money relentlessly seeks out ways to wring every dollar of productivity out of a GPT like artificial intelligence, with a particular fondness for moonshot ideas that could disrupt and recreate an entire industry. Over the coming decade, voracious VCs will drive the rapid application of the technology and the iteration of business models, leaving no stone unturned in exploring everything that AI can do.
Finally, the third catalyst is one that’s equally obvious and yet often overlooked: China. Artificial intelligence will be the first GPT of the modern era in which China stands shoulder to shoulder with the West in both advancing and applying the technology. During the eras of industrialization, electrification, and computerization, China lagged so far behind that its people could contribute little, if anything, to the field. It’s only in the past five years that China has caught up enough in internet technologies to feed ideas and talent back into the global ecosystem, a trend that has dramatically accelerated innovation in the mobile internet.
With artificial intelligence, China’s progress allows for the research talent and creative capacity of nearly one-fifth of humanity to contribute to the task of distributing and utilizing artificial intelligence. Combine this with the country’s gladiatorial entrepreneurs, unique internet ecosystem, and proactive government push, and China’s entrance to the field of AI constitutes a major accelerant to AI that was absent for previous GPTs.
Reviewing the preceding arguments, I believe we can confidently state a few things. First, during the industrial era, new technology has been associated with long-term job creation and wage growth. Second, despite this general trend toward economic improvement, GPTs are rare and substantial enough that each one’s impact on jobs should be evaluated independently. Third, of the three widely recognized GPTs of the modern era, the skill biases of steam power and electrification boosted both productivity and employment. ICT has lifted the former but not necessarily the latter, contributing to falling wages for many workers in the developed world and greater inequality. Finally, AI will be a GPT, one whose skill biases and speed of adoption—catalyzed by digital dissemination, VC funding, and China—suggest it will lead to negative impacts on employment and income distribution.
If the above arguments hold true, the next questions are clear: What jobs are really at risk? And how bad will it be?
WHAT AI CAN AND CAN’T DO: THE RISK-OF-REPLACEMENT GRAPHS
When it comes to job replacement, AI’s biases don’t fit the traditional one-dimensional metric of low-skill versus high-skill labor. Instead, AI creates a mixed bag of winners and losers depending on the particular content of job tasks performed. While AI has far surpassed humans at narrow tasks that can be optimized based on data, it remains stubbornly unable to interact naturally with people or imitate the dexterity of our fingers and limbs. It also cannot engage in cross-domain thinking on creative tasks or ones requiring complex strategy, jobs whose inputs and outcomes aren’t easily quantified. What this means for job replacement can be expressed simply through two X–Y graphs, one for physical labor and one for cognitive labor.
Risk of Replacement: Cognitive Labor
Risk of Replacement: Physical Labor
For physical labor, the X-axis extends from “low dexterity and structured environment” on the left side, to “high dexterity and unstructured environment” on the right side. The Y-axis moves from “asocial” at the bottom to “highly social” at the top. The cognitive labor chart shares the same Y-axis (asocial to highly social) but uses a different X-axis: “optimization-based” on the left, to “creativity- or strategy-based” on the right. Cognitive tasks are categorized as “optimization-based” if their core tasks involve maximizing quantifiable variables that can be captured in data (for example, setting an optimal insurance rate or maximizing a tax refund).
These axes divide both charts into four quadrants: the bottom-left quadrant is the “Danger Zone,” the top-right is the “Safe Zone,” the top-left is the “Human Veneer,” and the bottom-right is the “Slow Creep.” Jobs whose tasks primarily fall in the “Danger Zone” (dishwashers, entry-level translators) are at a high risk of replacement in the coming years. Those in the “Safe Zone” (psychiatrists, home-care nurses, etc.) are likely out of reach of automation for the foreseeable future. The “Human Veneer” and “Slow Creep” quadrants are less clear-cut: while not fully replaceable right now, reorganization of work tasks or steady advances in technology could lead to widespread job reductions in these quadrants. As we will see, occupations often involve many different activities outside of the “core tasks” that we have used to place them in a given quadrant. This task diversity will complicate the automation of many professions, but for now we can use these axes and quadrants as general guidance for thinking about what occupations are at risk.
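The quadrant scheme above can be sketched as a simple classifier. The axis scores, the 0.5 thresholds, and the example placements below are illustrative assumptions for demonstration, not values given in the text.

```python
def quadrant(x: float, y: float) -> str:
    """Map a job's axis scores (each assumed to run 0..1) to a quadrant.

    For physical labor, x measures dexterity and how unstructured the
    environment is; for cognitive labor, x measures reliance on creativity
    or complex strategy. In both charts, y measures how social the work is.
    """
    if x < 0.5 and y < 0.5:
        return "Danger Zone"   # asocial, optimization-based / low-dexterity work
    if x >= 0.5 and y >= 0.5:
        return "Safe Zone"     # highly social and creative / high-dexterity work
    if x < 0.5:
        return "Human Veneer"  # social veneer over an automatable core
    return "Slow Creep"        # asocial but, for now, hard-to-automate tasks

# Illustrative placements (the scores here are guesses, not measurements):
print(quadrant(0.2, 0.1))  # a dishwasher-like task -> "Danger Zone"
print(quadrant(0.9, 0.9))  # a psychiatrist-like task -> "Safe Zone"
```

The point of the sketch is only that two independent dimensions, not a single low-skill/high-skill axis, determine where a job lands.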