The Technology Trap


by Carl Benedikt Frey


  It is indeed telling that resistance to machinery in America ended at the advent of the Second Industrial Revolution. In the nineteenth century, workers at times rebelled against mechanization. But the twentieth century did not witness such incidents. Other factors besides technology also played an important, if secondary, role. The early advent of American democracy, with universal white male suffrage achieved in the 1820s, meant that people no longer had to resort to violent protest to have their views heard. But even so, American labor history was exceedingly violent, and there was resistance to mechanization as late as 1879. The rise of the welfare state unquestionably made losing one’s job less harsh, but welfare spending took off only with the Great Depression and World War II. The expansion of education and additional years of schooling made the young better equipped for the evolving labor market, but those who found their skills made redundant didn’t go back to school. Perhaps more importantly, workers began to unionize and push for better pay and working conditions. But unlike the craft guilds, the unions rarely resisted new technologies. Even when unrest erupted, people didn’t target machines to express their misgivings. Though workers organized and became a force of growing political power, resistance to mechanization was feeble, if not nonexistent. The benefits of progress for labor, it seems, were simply too great for the unions and their members to resist it.

  PART IV

  THE GREAT REVERSAL

  Since the Industrial Revolution, mechanization has been controversial. Machines pushed up productivity, raising incomes per capita. But they threatened to put people out of work, to lower their wages and to divert all the gains from growth to the owners of businesses.… Now, it is robots that threaten work, wages and equality.… There have been long periods of economic history in which things did not work out well, and we must wonder whether we are in another.… The Luddites and other opponents of mechanization are often portrayed as irrational enemies of progress, but they were not the people set to benefit from the new machinery, so their opposition makes sense.

  —ROBERT C. ALLEN, “LESSONS FROM HISTORY FOR THE FUTURE OF WORK”

  One of the greatest achievements of the twentieth century was unquestionably the creation of a diverse and prosperous middle class. It is therefore a matter of great concern that American society is now experiencing a dramatic decline in the fortunes of those people who might be described as middle class. The previous chapters have shown that technology played a key role in their rise. This part of the book will show the role it has played in their fall. As discussed above, several factors have shaped the trajectories of people’s wages, but over the grand sweep of history, technology has been the predominant factor. The breathtaking rise in inequality after 1980 has without doubt been affected by other significant variables, like financial sector deregulation and superstar compensation. But these factors are primarily relevant in explaining the rise of the top 1 percent. The bigger story, however, is the decline of the middle class. The top pulling away from the rest would be much less troubling if the middle had continued to prosper. For all the talk of rising inequality per se, the greatest tragedy is that large parts of the workforce have actually seen their wages fall, adjusted for inflation. In the age of computers, the ranks of the affluent have grown—but at the cost of a withering middle class.

  FIGURE 10: Real Weekly Wages for Full-Time, Full-Year Workers by Educational Attainment, 1963–2015

  Sources: D. Acemoglu and D. H. Autor, 2011, “Skills, Tasks and Technologies: Implications for Employment and Earnings,” in Handbook of Labor Economics, ed. David Card and Orley Ashenfelter, 4:1043–171 (Amsterdam: Elsevier).

  Note: Their analysis has been extended to include the years 2009–15 using data from the Current Population Survey and a DO file provided by David Autor.

  Since the pioneering work of Jan Tinbergen, economists have tended to think about inequality as a race between technology and education. Skill-biased technological change means that new technologies increase the demand for workers with more sophisticated skills, relative to those without such skills. Thus, inequality between the skilled and the unskilled will rise unless the educational system churns out skilled workers at a greater pace than technology increases the demand for them. We saw in chapter 8 that the supply of skilled workers outpaced demand during the period of the great leveling, compressing the wage differential between the skilled and the rest. The post-1980 upsurge in wage inequality could simply reflect the growing market reward for skills and the failure of the educational system to meet skill demand in the higher-tech economy. Yet if economic progress were just a race between technology and education, we would expect the wages of the skilled to pull away from those of the rest, but we would not expect the wages of the unskilled to fall. Inequality could grow, but everyone would still see their wages rise—though at different speeds. The great reversal depicted in figure 10 was first noted by Daron Acemoglu and David Autor.1 It shows that up until the 1970s, wages rose for people at all educational levels, but after the first oil shock in 1973, wages fell and then stagnated for all Americans for about a decade. The great reversal began in the 1980s, when the wages of those with no more than a high school diploma began to fall again and continued to do so for three consecutive decades. This decline, as figure 10 shows, has primarily occurred among unskilled men who would have taken on jobs in the factories before the dawn of automation.
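
  The race can be made precise with the canonical relative supply-and-demand sketch below. The notation is standard textbook shorthand rather than anything taken from the book itself: the elasticity of substitution, the relative supply of skilled workers, and the technology-driven demand index are all assumptions of the sketch.

```latex
% A minimal sketch of the race between education and technology, in standard
% textbook notation (not the book's own): the skill premium w_H/w_L rises when
% technology-driven relative demand D_t outruns the relative supply of skilled
% workers H_t/L_t.
\[
  \ln\!\left(\frac{w_{H,t}}{w_{L,t}}\right)
    = \frac{1}{\sigma}\left( D_t - \ln\frac{H_t}{L_t} \right)
\]
% Here \sigma is the elasticity of substitution between skilled and unskilled
% labor. If schooling raises H_t/L_t faster than technology raises D_t, the
% premium is compressed (the great leveling); if demand wins, the premium
% widens (the post-1980 upsurge). Note that in this framework unskilled wages
% need not fall in absolute terms, which is why the great reversal requires
% more than the race metaphor to explain.
```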

  9

  THE DESCENT OF THE MIDDLE CLASS

  The computer era does not just mark a shift in labor markets. It also marks a shift in how economists think about technological progress. Daron Acemoglu and Pascual Restrepo have recently argued that the wage trends depicted in figure 10 are best understood as a race between enabling and replacing technologies. In a world of enabling technologies, the view of progress as a race between technology and education holds. New technologies augment the capabilities of some workers and enable them to perform new functions, making them more productive in a way that also increases their wages. Replacing technologies, by contrast, have the opposite effect. They render some workers’ skills redundant in the tasks and jobs they perform, putting downward pressure on those people’s wages.

  In the 1960s, the management guru Peter F. Drucker argued that automation was merely a more fashionable term for what once had been known as mechanization, and he took both words to mean the displacement of hand labor by machines.1 As discussed above, replacing technologies did render the skills of some workers redundant during the first three-quarters of the twentieth century. Lamplighters, longshoremen, and elevator operators, to name just a few, saw their jobs disappear. Still, the age of automation must be distinguished from the age of mechanization. At the time when Drucker was writing, all workers saw their wages rise (figure 10). Indeed, before the spread of computers, machines could not operate on their own. They required operatives to keep production lines running. The explosive growth of semiskilled clerical and blue-collar jobs meant that even people who found themselves being replaced faced a much greater variety of job options. Factory and office machines alike were enabling technologies that made workers more productive, allowing them to take home better wages. In this regard, the computer revolution was not a continuation of twentieth-century mechanization but the reversal of it. Computer-controlled machines have eliminated precisely the jobs created for a host of machine operators during the Second Industrial Revolution. The workers who were once pulled into decent-paying jobs in mass-production industries are now being pushed out.

  What Computers Do

  In The Wealth of Nations, Adam Smith observed the division of labor in Britain’s pin factories. He found that by dividing activities into narrow tasks, the first factories were able to increase efficiency enormously. While his observation concerned the division of labor between human workers, the age of automation came with a new division of labor: tasks can now be divided between humans and computers. Before the advent of the first electronic computer in 1946, the distinction between humans and computers was meaningless. Humans were computers. “Computer” was an occupation, typically performed by women who specialized in basic arithmetic.2

  The division of labor between human and machine crucially depends on the tasks computers can do more effectively. Prior to the age of artificial intelligence (AI)—to which we shall return in chapter 12—computerization was largely confined to routine work. The simple reason is that computer-controlled machines have a comparative advantage over people in activities that can be described by a programmer using rule-based logic. Until very recently, automation was technically feasible only when an activity could be broken down into a sequence of steps, with an action specified at every contingency. A mortgage underwriter, for example, decides whether a mortgage application should be approved on the basis of explicit criteria. Because we know the “rules” for obtaining a mortgage, we can use computers instead of underwriters.3 But in other cases, we know the rules for only some of the tasks involved in an occupation. As is evident from the existence of ATMs, we can easily write a set of rules that allows computers to substitute for bank tellers in accepting deposits and paying out withdrawals. Yet we struggle to define the rules for dealing with a dissatisfied customer. Naturally, banks have taken advantage of this by reorganizing work so that tellers are no longer checkout clerks but relationship managers, advising customers on loans and other investment products. Consequently, as the handling of money has been automated, tellers have taken on nonroutine functions.
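
  To make the rule-based logic concrete, here is a minimal sketch in the spirit of the mortgage underwriting example above. The criteria, thresholds, and function name are hypothetical illustrations, not drawn from the book or from any actual lender; the point is only that once every contingency maps to a specified action, the decision can be handed to a machine.

```python
# A hypothetical, deliberately simplified rule-based underwriting check.
# Thresholds are illustrative only; real lenders use far richer criteria.

def approve_mortgage(credit_score: int, annual_income: float,
                     loan_amount: float, monthly_debt: float) -> bool:
    """Return True only if every explicit rule is satisfied."""
    monthly_income = annual_income / 12
    debt_to_income = monthly_debt / monthly_income

    if credit_score < 620:               # rule 1: minimum creditworthiness
        return False
    if debt_to_income > 0.43:            # rule 2: cap debt relative to income
        return False
    if loan_amount > 5 * annual_income:  # rule 3: cap loan size
        return False
    return True                          # every contingency has a specified action

# Example: a clear-cut application is decided without human judgment.
print(approve_mortgage(credit_score=700, annual_income=80_000,
                       loan_amount=300_000, monthly_debt=1_500))  # True
```

  No comparable set of rules captures how to calm a dissatisfied customer, which is why that part of the teller’s job resisted automation.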

  On the eve of the computer revolution, many occupations—like mortgage underwriter—were essentially rule-based. The majority of Americans still worked in what economists call routine occupations. In 1974, Harry Braverman, an American Marxist, drew attention to the dehumanizing nature of routine work, which he observed had persisted since the birth of the factory system. “The earliest innovative principle of the capitalist mode of production,” he argued, “was the manufacturing division of labor, and in one form or another the division of labor has remained the fundamental principle of industrial organization.”4 In this regard, Braverman merely revived an old concern. As discussed above, in the 1830s, non-Marxist writers like Peter Gaskell and Sir James Kay-Shuttleworth argued that the repetitive motions of machines absorbed workers’ attention to an extent that adversely affected their moral and intellectual capabilities. Braverman, who lived through the age of mass production, found that the Fordization of America had accelerated routinization. Machine operations had become even more subdivided. Workers’ jobs were turned into mechanical motions, in which conveyors brought the task to the worker. Such specialization greatly increased productivity in American factories but brought greater monotony for the worker. From this point of view, factory automation can be regarded as a blessing because it meant that industrial robots, controlled by computers, could eliminate the need for direct human intervention in operating machines. Instead of having workers specializing in machine tending, many routine tasks could suddenly be performed by robots with a higher degree of accuracy. As automation progressed, more complex and creative functions became more plentiful. Computers, as Norbert Wiener declared, made possible “more human use of human beings.”5

  On the downside, these allegedly mindless, degrading, machine-tending, routine jobs were the ones that employed a large share of the American middle class. Numerous studies have shown that routine jobs were overwhelmingly clustered at the middle of both the skill and the income distribution.6 As computer-controlled machines reduced the need for routinized chores, middle-class Americans saw their jobs disappear. As recently as 1970, more than half of working Americans were employed in blue-collar or clerical jobs. While few of them got rich, these jobs supported a broad and relatively prosperous middle class. And perhaps more important, most of these jobs were open to people with no more than a high school degree.7 What Braverman was challenging, however, was the notion that mechanization had increased the demand for skilled workers. He had little data to prove his point, but the idea that jobs had become more routine and required more skills does seem like a contradiction in terms. Many of the routine jobs that emerged over the course of the twentieth century were surely not too intellectually demanding, yet as we saw in chapter 8, the growing complexity of heavy industrial machinery and the expanding array of office machines did require more skilled operators.

  The great reversal, depicted in figure 10, is in large part a consequence of computers making the skills of machine-tending workers obsolete. As the scope of automation has expanded from one routine task to another, those workers have faced worsening options in the labor market. But like electrification and the adoption of steam power, computerization did not happen overnight. Its impact on the labor market came decades after the birth of the electronic computer. William Nordhaus’s heroic study of computer performance over the centuries shows that the first major discontinuity occurred around World War II.8 The real cost of computing fell by a factor of 1.7 trillion over the course of the twentieth century, with the greatest leap occurring in the second half of the century. The timing is no mystery: the first programmable and fully electronic computer—the Electronic Numerical Integrator and Computer (ENIAC)—arrived in 1946 and was accompanied by the invention of the transistor a year later. But for all its virtues, ENIAC was hardly fit for office use. It contained 18,000 vacuum tubes and 70,000 resistors, and it weighed thirty tons. And while it was a general-purpose computer, it was primarily built to calculate artillery firing tables. As discussed above, computers were a key source of automation anxiety in the 1950s and 1960s. But like the hype surrounding autonomous vehicles and AI today, concerns over computers’ taking people’s jobs merely reflected a few early use cases (chapter 7). At the 1958 annual convention of the National Retail Merchants Association, for example, there was much excitement over the new computers and merchandise handling systems, but few attendees opened their wallets. Computers were still too bulky and expensive for widespread adoption.9
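
  A quick back-of-the-envelope calculation gives a sense of the pace implied by Nordhaus’s figure. It assumes, purely for illustration and contrary to the back-loaded pattern the text describes, that the decline was spread evenly over a hundred years.

```python
# Implied average annual improvement if the real cost of computing fell by a
# factor of 1.7 trillion over roughly 100 years (an even-pace assumption made
# only for illustration; the actual decline was heavily back-loaded).
total_factor = 1.7e12
years = 100
annual_factor = total_factor ** (1 / years)  # ≈ 1.33 per year
print(f"computing per dollar grew ≈{annual_factor - 1:.0%} per year on average")
```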

  Though ENIAC can justly be regarded as the symbolic inception of the computer revolution, it was the personal computer (PC) that heralded the dawn of the age of automation.10 When Time displaced humans on its front cover and declared the PC the “Machine of the Year” in 1982, America had just begun to computerize. According to Time, “Now, thanks to the transistor and the silicon chip, the computer has been reduced so dramatically in both bulk and price that it is accessible to millions.… In contrast to the $487,000 paid for ENIAC, a top IBM PC today costs about $4,000, and some discounters offer a basic Timex-Sinclair 1000 for $77.95. One computer expert illustrates the trend by estimating that if the automobile business had developed like the computer business, a Rolls-Royce would now cost $2.75 and run 3 million miles on a gallon of gas.”11

  At the time, in America’s largest five hundred industrial companies, only 10 percent of typewriters had given way to the word processor. Robots, for which computers provided the mechanical brains, had taken over some of the nation’s dull and dirty jobs, but few industries had robotized. Of the 6,300 robots operating in America’s factories in 1982, 57 percent were at four companies: General Motors, Ford, Chrysler, and IBM.12 Yet from the 1980s onward, a growing share of routine tasks was transferred to computer-controlled machines. As computers became smaller, cheaper, and more powerful, routine employment began to shrink (figure 11). But we now know that the consequence was not widespread technological unemployment, as many had predicted in the 1950s and 1960s. While automation replaced workers in some jobs, it also created new ones. Robots replaced workers in repetitive assembly work, but the machines also required skilled personnel capable of programming, reprogramming, and occasionally repairing them. Job titles like robot engineer and computer-software programmer are a direct consequence of automation. Thus, the erosion of old jobs gave rise to new ones. When automatic flight reservation systems arrived, for example, “the strictly routine tasks of posting each sale on a sales control chart and the cumbersome method of using a visual display board to denote availability of flight space were both eliminated.”13 But another outcome was the enlargement of the sales function: “The job title of clerk was replaced by sales or service agent. An upgrading took place for two employees who perform the functions of Specialist (Reservisor Information) and Assistant to the Specialist.”14

  FIGURE 11: The Falling Cost of Computing and Disappearance of Routine Jobs, 1980–2010

 
