The Technology Trap


by Carl Benedikt Frey


  The performance of the technology is not all that matters. Realizing the productivity gains of computers required complementary organizational, process, and strategic changes. In the early days of automation, the training and retraining of employees often took longer than expected, and many companies did not fully appreciate the obstacles involved in getting machines, computers, and sophisticated software to work together effectively. In a number of studies, the economists Erik Brynjolfsson, Timothy Bresnahan, and Lorin Hitt consistently found that investments in computer technology contributed to firm productivity mainly when complementary organizational changes were made.68 In the 1980s, the computer revolution centered on productivity improvements in individual tasks, such as word processing and manufacturing operations control. Yet preexisting business processes remained intact for the most part. In 1990, Michael Hammer, a management scholar and former professor of computer science, published his famous essay “Reengineering Work: Don’t Automate, Obliterate.” In the article, which appeared in the Harvard Business Review, Hammer argued that productivity gains would not come from using automation to make existing processes more efficient.69 Managers trying to do so had gotten it wrong from the outset. Unleashing the full potential of automation, he declared, required analyzing and redesigning workflows and business processes to improve customer service and cut operational costs. By the mid-1990s, the majority of Fortune 500 companies claimed to have reengineering plans.70 It was also around then that computers began to have an impact on productivity.

  Just like the switch from group drive to unit drive in the age of mass production, computerization and reorganization were gradual processes that required rethinking how the firm worked. Thus, the productivity puzzle of the late 1980s was not a puzzle to everyone. Economic historians realized that they had heard this story before. Studying the evolution of factory electrification, Oxford University’s Paul David noted that it took roughly four decades after the construction of Thomas Edison’s first power station in 1882 for electricity to appear in the productivity statistics. As discussed in chapter 6, harnessing the mysterious force of electricity required a complete reorganization of the factory, and the switch to unit drive as the organizing principle took plenty of experimentation—so the productivity gains of electrification did not show up until the 1920s.71 David went on to predict a similar trajectory for computer-led productivity growth. And he was right on target: the similarities between the 1920s and 1990s are tantalizing. Both decades saw productivity blossom and an explosion in the application of GPTs (electricity in the 1920s and computers in the 1990s).72 The former, economists agree, was the consequence of the latter. About 70 percent of the productivity acceleration in the years 1996–99, relative to that in the period 1991–95, has been attributed to computer technologies.73 And the productivity rebound was not narrowly focused on a few sectors but was extremely broad based, with wholesale trade, retail, and services showing sizable gains—which pointed to GPTs at work.74

  AI has only recently expanded the realm of what computers can do. Thus, there are good reasons to believe that the greatest productivity gains from automation are still to come. Multipurpose robots, as noted above, are already being adopted, but though their contributions to productivity growth have been significant, their use is still largely confined to heavy industry.75 And AI, more broadly, is still in its infancy. A 2017 survey of three thousand executives by the McKinsey Global Institute found that AI adoption outside of the tech sector was still at an early stage. Few firms had deployed it at scale, with many declaring that they were uncertain of the business case or return on investment. And a review of more than 160 use cases further showed that AI had been deployed commercially in only 12 percent of the cases.76

  As is well known, productivity growth has slowed since 2005, but that can happen when technologies are at an experimental stage.77 Technology improves productivity only after long delays, and it primarily incurs costs in the early stages of development. And after a new discovery is made, it often takes years until prototypes become economically viable in production. Thus, the contribution of new technologies to aggregate economic variables has always been delayed: “The case of self-driving cars discussed earlier provides a more prospective example of how productivity might lag technology. Consider what happens to the current pools of vehicle production and vehicle operation workers when autonomous vehicles are introduced. Employment on the production side will initially increase to handle R&D, AI development, and new vehicle engineering.”78 The Brookings Institution, for example, calculates that investments in autonomous driving amounted to roughly $80 billion in the period 2014–17, with only a few early cases of adoption.79 Over those three years, this is estimated to have lowered labor productivity by 0.1 percent per year.80 In this light, it is not all that surprising that economists have found current productivity growth to be a bad predictor of future productivity growth.81

  It is true that the smartphone and the internet have spread much faster than the electric motor or the tractor once did. Yet it makes little sense to compare the spread of consumer goods and services to technologies being used in production. The latter requires the reconfiguration of production processes while the former does not. What’s more, firms faced with the decision to automate or not have to weigh not just the engineering bottlenecks to be overcome. Beyond the technology, they must also consider increased overheads, the availability of sufficiently large markets, the cost of scrapping existing machines, the cost of financing new ones, and (as Harry Jerome pointed out) “the possible opposition of [their] workers, and sometimes adverse public opinion and even restrictive legislation.”82 While one might think that in the age of AI, much less capital expenditure is required for automation to happen, significant complementary investments are still needed to deploy a machine learning system. As Google’s chief economist Hal Varian explains:

  The first requirement is to have a data infrastructure that collects and organizes the data of interest—a data pipeline. For example, a retailer would need a system that can collect data at point of sale, and then upload it to a computer that can then organize the data into a database. This data would then be combined with other data, such as inventory data, logistics data, and perhaps information about the customer. Constructing this data pipeline is often the most labor intensive and expensive part of building a data infrastructure, since different businesses often have idiosyncratic legacy systems that are difficult to interconnect.83
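
  To make Varian’s description concrete, below is a minimal sketch of such a pipeline in Python, using only the standard library. The retailer, file name, column names, and table layout are hypothetical illustrations, not details taken from Varian or this book:

    # A toy "data pipeline" in the spirit of Varian's description.
    # All names below (retail.db, pos_export.csv, the columns) are
    # hypothetical, chosen only for illustration.
    import csv
    import sqlite3

    conn = sqlite3.connect("retail.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS sales
                    (sku TEXT, store_id TEXT, sold_at TEXT,
                     quantity INTEGER, price REAL)""")
    conn.execute("""CREATE TABLE IF NOT EXISTS inventory
                    (sku TEXT PRIMARY KEY, on_hand INTEGER)""")

    # Step 1: collect data at the point of sale (here, a CSV export).
    with open("pos_export.csv", newline="") as f:
        rows = [(r["sku"], r["store_id"], r["sold_at"],
                 int(r["quantity"]), float(r["price"]))
                for r in csv.DictReader(f)]

    # Step 2: upload and organize the data into a database.
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()

    # Step 3: combine it with other data, such as inventory.
    query = """SELECT s.sku, SUM(s.quantity), i.on_hand
               FROM sales s JOIN inventory i ON s.sku = i.sku
               GROUP BY s.sku"""
    for sku, sold, on_hand in conn.execute(query):
        print(f"{sku}: sold {sold}, {on_hand} on hand")

  Even in this toy version, step 1 is where the real expense hides: actual point-of-sale systems export idiosyncratic legacy formats that must each be mapped onto a common schema, which is precisely the labor-intensive interconnection problem Varian highlights.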

  And while data may be the new oil, the bottlenecks are often not related just to data but also to skills and training:

  In my experience, the problem is not lack of resources, but is lack of skills. A company that has data but no one to analyze it is in a poor position to take advantage of that data. If there is no existing expertise internally, it is hard to make intelligent choices about what skills are needed and how to find and hire people with those skills. Hiring good people has always been a critical issue for competitive advantage. But since the widespread availability of data is comparatively recent, this problem is particularly acute. Automobile companies can hire people who know how to build automobiles since that is part of their core competency. They may or may not have sufficient internal expertise to hire good data scientists, which is why we can expect to see heterogeneity in productivity as this new skill percolates through the labor markets.84

  For these reasons, Amara’s Law will likely apply to AI, too. Myriad ancillary inventions and adjustments are required for automation to happen. Erik Brynjolfsson, who was among those investigating the role of computer technologies in the productivity boom of the late 1990s, thinks that the trajectory of AI adoption is likely to mirror the past in this regard. In a joint paper with Daniel Rock and Chad Syverson, two economists, he argues that, as happened with computers back in the 1990s, the adoption of AI will require not only improvements in the technology itself but also significant complementary investment and plenty of experimentation to exploit its full potential.85 During this phase, history tells us, the economy goes through an adjustment process with slow productivity growth.

  * * *

  The Industrial Revolution in Britain followed a strikingly similar pattern. As Nicholas Crafts has shown, James Watt’s steam engine delivered its main boost to productivity some eight decades after it was invented.86 When John Smeaton examined Watt’s invention, patented in 1769, he declared that “neither the tools nor the workmen existed that could manufacture so complex a machine with sufficient precision.”87 Complementary skills had to be developed to perfect the technology. But ten years later, the combined genius of Matthew Boulton and Watt had made the engine a commercial success. Writing in 1815, Patrick Colquhoun, a Scottish merchant and statistician, declared: “It is impossible to contemplate the progress of manufactures in Great Britain within the last thirty years without wonder and astonishment. Its rapidity … exceeds all credibility. The improvement of the steam engines, but above all the facilities afforded to the great branches of the woollen and cotton manufactories by ingenious machinery, invigorated by capital and skill, are beyond all calculation.”88 Yet water power remained a cheaper source of energy for some time, so the steam engine’s contribution to productivity growth remained negligible for decades.

  Had Malthus been given the modern statistical apparatus in 1800, he would not have found much suggestive of the coming productivity boom. In the early stages of technological revolutions, current productivity growth does not tell us much about future productivity growth. We have to examine what is going on inside the labs instead. Malthus was dismissive of this view, and consequently he had no way of seeing what was coming. As he declared in his famous 1798 essay, “The moment we leave past experience as the foundation of our conjectures concerning the future, and, still more, if our conjectures absolutely contradict past experience, we are thrown upon a wide field of uncertainty, and any one supposition is then just as good as another.… Persons almost entirely unacquainted with the powers of a machine cannot be expected to guess at its effects.”89

  When Malthus wrote his essay, of course, the world barely knew of Schumpeterian growth. We now also know from past experience that what is going on in the labs is a better guide to the future of productivity at times of accelerating innovation. Great inventions may deliver enormous economic benefits, but often with long time lags. At the same time, we must acknowledge that this approach also has shortcomings. The mere existence of new technology does not tell us whether it will find widespread use. Even if Malthus had looked more to the wave of gadgets that made the Industrial Revolution and had realized the pervasiveness of the first machine age, how could he have known that they would be adopted so eagerly? For most of history, as noted, worker-replacing technologies have been fiercely resisted by angry workmen, leading governments to implement policies to restrict their use due to the fear of social upheaval (see chapter 3). As Malthus was writing, the British government had only recently begun to side with the innovators.

  Looking forward, worker resistance and adverse public opinion could slow the pace of change, as they have in the past. And some economists have begun to point to the risk of opposition. As Harvard University’s Rebecca Henderson warned at a recent National Bureau of Economic Research conference, “There is a real risk of a public backlash against AI that could dramatically reduce its diffusion rate.… Productivity seems likely to skyrocket, while with luck tens of thousands of people will no longer perish in car crashes every year. But ‘driving’ is one of the largest occupations there is. What will happen when millions of people begin to be laid off? … I’m worried about the transition problem at the societal level quite as much as I’m worried about it at the organizational level.”90 Those societal consequences are already being felt. The return of Engels’s pause, as discussed above, has fueled the populist backlash, and attitudes toward automation itself are seemingly shifting (see chapter 11). The pervasiveness of AI and citizens’ reactions to displacement will jointly determine future productivity growth. Any attempt to analyze the role of globalization in shaping the labor market going forward would be misleading if it overlooked the political economy of trade: the future impact of globalization on labor markets, for example, cannot be analyzed in isolation from the Trump administration’s trade war with China. The same likely goes for automation. A worry, as automation progresses, is that resistance will grow. As we have seen, historically, when machines have threatened to take people’s jobs and governments have feared instability as a consequence, implementation has often been blocked for entirely political reasons.

  If Amara’s Law ceases to hold, it will likely be due to the return of Luddite sentiment.

  Work and Leisure

  Will there be enough jobs if automation is allowed to progress uninterrupted? In the public mind, there is a widespread dystopian belief that the rise of brilliant machines will ruin working people’s lives by causing wages to fall and unemployment to rise. By contrast, an equally common utopian belief is that technology will herald a new age of leisure, where people will prefer to work less and play more. Neither of these beliefs is new. And over the long run, both have so far been proven wrong, or at least vastly exaggerated. Though there have clearly been episodes when workers have suffered hardships as technology has advanced, fears over end-of-work scenarios have always been overblown, as has the idea that we would all give up work and live a life of fulfillment and leisure.

  In his 1930 essay “Economic Possibilities for Our Grandchildren,” John Maynard Keynes famously declared that mechanization was progressing at a rate greater than at any other time in history. Our discovery of ways to replace people with machines, he suggested, was outrunning the pace at which new uses for labor could be found—which he held would lead to widespread technological unemployment. Keynes’s essay was a reflection of the productivity boom of the 1920s, which did indeed come with some adjustment problems that sparked a revival of the machinery question (see chapter 7). But Keynes was still optimistic about the long run. Technology, he argued, would solve mankind’s economic problem and deprive us of subsistence as our traditional purpose. Instead, our main concern would become how to occupy our leisure. In a century, Keynes predicted, people would enjoy a fifteen-hour workweek.91

  Keynes was right that mechanization was progressing at a more rapid pace than had been previously seen, yet things still unfolded quite differently. It is true that people in richer countries have shorter workweeks, take more vacations, and spend more years in retirement as they live longer. But the time citizens have decided to take as leisure, as they have grown richer, has not increased by as much as is commonly believed, and certainly not by as much as Keynes predicted. That is what the economists Valerie Ramey and Neville Francis found when they traced the trajectories of work and leisure in America over the past century.92 True, in 1900, a typical workweek in manufacturing was around fifty-nine hours. Yet manufacturing still accounted for only about a fifth of total employment at the time, and industrial laborers worked much longer hours than those in other sectors of the economy.93 When government and farm workers are taken into account, Americans in 1900 worked around fifty-three hours per week on average. By 2005, this figure had fallen to roughly thirty-eight hours. However, looking merely at changes in hours per worker misses the fact that a larger share of the population works today than did a century ago, as a growing percentage of women have entered the workforce (see chapter 6). When they accounted for the growing share of citizens at work, Ramey and Francis found a much less pronounced decline in working hours: average weekly hours worked per person fell by 4.7 hours between 1900 and 2005.94

  All of this decline, in turn, occurred among the young and the elderly. Among those ages 25–54, in contrast, the average workweek actually got longer, even though weekly hours among men declined. The upsurge was driven entirely by working women. Among the young, the reason for the decline in working hours is straightforward: it followed from more children going to school and additional years spent in school, as farmers realized that their children would need an education to prosper in the age of the Second Industrial Revolution. And the fall in weekly hours among the elderly is no mystery, either. Before the Social Security Act of 1935, which provided a nationwide pension system, most people worked until they dropped; private pension plans were available only to a fraction of the population. As pension coverage gradually increased thereafter, citizens who reached retirement age could suddenly enjoy a life of leisure—which, if anything, served to create more jobs. The demands of a new class of leisured but active citizens caused a massive boom in the construction of retirement homes, golf courses, and shopping centers, and retirement cities like Sun City, Arizona, were built to accommodate the massive exodus from the Northeast to the Sun Belt.

  Factoring in weekly hours of paid work, hours of schooling, household work, and so on, Ramey and Francis estimate average lifetime leisure over a century. This entails estimating people’s average weekly hours of leisure for each year of life from age fourteen to expected death for different cohorts.95 In doing so, they show that average weekly leisure increased from 39.3 to 43.1 hours per week between 1890 and 2000. Most of the increase was due to the welcome fact that people live longer today. Their findings allow us to shed some light on Keynes’s predictions, in which he suggested that productivity would increase by four to eight times over the next century. Despite the unforeseeable event of World War II, his productivity forecast was quite accurate: labor productivity is now almost nine times higher than it was in 1900, yet the time citizens decided to take as leisure had increased by a mere 10 percent by 2000 (figure 19). And after 1930, when Keynes was writing, labor productivity experienced a fivefold increase while leisure grew by just 3 percent.96
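
  The roughly 10 percent figure can be checked directly against the leisure estimates quoted above, using nothing beyond the numbers themselves:

  \[
  \frac{43.1 - 39.3}{39.3} \approx 0.097 \approx 10\%.
  \]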

 
