While it is still early days, autonomous vehicles are being deployed in a number of settings. Some agricultural vehicles, forklifts, and cargo-handling vehicles are already autonomous; and in recent years hospitals have begun to use autonomous robots to transport food, prescriptions, and samples.24 In 2017, Rio Tinto, an Anglo-Australian metals and mining giant, announced that it would expand its fleet of autonomous hauling trucks in its Pilbara mine by 50 percent by 2019, making operations fully autonomous.25 But so far, the adoption of autonomous vehicles has mostly been limited to relatively structured environments like warehouses, hospitals, factories, and mines. When computer programs can better anticipate the range of objects and scenarios a vehicle may encounter, automation is relatively straightforward. Using explicit if-then-do rules, the programs can simply tell the vehicle to stop or slow down if another object approaches it. But in unstructured environments, like the streets of major cities, there are so many possible scenarios that this approach would require an almost infinite number of such rules.
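The kind of explicit if-then-do rule described above can be sketched in a few lines of code. This is a minimal illustration, not any real vehicle's control logic; the thresholds and action names are invented for the example.

```python
# Rule-based control in a structured environment: hand-written
# if-then-do rules map sensor readings directly to actions.
# Thresholds here are illustrative, not from any real system.

def decide_action(distance_to_object_m: float, speed_kmh: float) -> str:
    """Return a driving action from explicit hand-written rules."""
    if distance_to_object_m < 2.0:
        return "stop"          # object dangerously close
    if distance_to_object_m < 10.0 and speed_kmh > 10.0:
        return "slow_down"     # object nearby while moving: reduce speed
    return "continue"          # path is clear

print(decide_action(1.5, 20.0))   # stop
print(decide_action(8.0, 25.0))   # slow_down
print(decide_action(50.0, 30.0))  # continue
```

In a warehouse or mine, a handful of such rules may suffice; on a city street, the combinatorial explosion of possible situations is precisely why this approach breaks down.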
AI combined with cheap and powerful digital sensors has recently raised the prospect of fully autonomous vehicles operating even in unstructured environments. By equipping vehicles with a host of sensors, car companies have now collected millions of miles of human driving data for algorithms to learn from. As Ajay Agrawal, Joshua Gans, and Avi Goldfarb write, “By linking the incoming environmental data from sensors on the outside of the car to the driving decisions made by the human inside the car (steering, braking, accelerating), the AI learned to predict how humans would react to each second of incoming data about their environment.”26 Still, one obvious limiting factor for all AI models is that they struggle to predict outcomes in new situations that are not represented in their training data. And in city traffic, vehicles constantly encounter new situations. One way forward has been to reduce the complexity of the environment. In Frisco, Texas, the company Drive.ai deploys autonomous minivans to transport people, but they are used only within specific office and retail areas. Instead of trying to mimic a human driver, engineers try to simplify things. All pickups and drop-offs take place at designated stops: “Riders hail the vans using an app and go to the nearest stop; a vehicle then appears to pick them up.”27
We all know that the path to autonomous driving has been one of impressive progress but also one of setbacks. In 2018, one of Uber’s self-driving vehicles tragically killed a woman who was crossing a street with her bicycle in Tempe, Arizona, sparking concerns over safety and, more fundamentally, over the future of autonomous driving. Yet similar and equally tragic setbacks were just as prevalent with earlier transportation technologies. As noted in chapter 4, the first public railroad demonstration in 1830 ended with a member of Parliament being fatally injured because the brakes on the train were slow to respond. The incident was reported in nearly every British media outlet, but that did not hinder the adoption of railroad technology. And in 1931—just before tractor adoption accelerated—the New York Times reported that, in Somerville, New Jersey, a tractor had crushed a four-year-old boy to death, and one tractor was reported to have exploded, killing several people.28 It is also worth recalling that as engineers push autonomous driving forward, accidents involving human drivers are happening every minute. A survey of car crashes prepared for the National Highway Traffic Safety Administration found that human error was responsible for 92.6 percent of them.29 And the casualties are many: in 2013 alone, 1.25 million people died in car accidents globally, 32,000 of them in the United States.30 Thus, autonomous cars do not need to be perfect to be justifiable. Human drivers are certainly not.
There are still situations that autonomous vehicles struggle to handle, especially in crowded cities where pedestrians and cyclists add complicating elements. In Singapore, autonomous taxis carry a safety driver who takes over in emergencies, to minimize the possibility of accidents. But while self-driving cars are still at an experimental stage, successful trips in city traffic have already been accomplished. In Tokyo a self-driving taxi—also with a safety driver—has already carried paying passengers, “raising the prospect that the service will be ready in time to ferry athletes and tourists between sports venues and the city centre during the 2020 Summer Olympics.”31 These events are important because the underlying AI systems require the collection of millions of miles of real-world data from vehicles’ sensors. And the quantity of data is not all that matters. Driving on the interstate highway or through some quiet Midwestern town is hardly the same as driving in Manhattan. This is just as true for algorithms as it is for human drivers. Allowing algorithms to practice in city traffic is therefore an important step toward the age of driverless transportation.
Progress is likely to be more rapid outside of cities, however, where there are fewer complicating elements. In May 2015, Daimler put the first autonomous big rig on the road. Approved by the state of Nevada, the autonomous system would take hauls only on highways, to keep things simple for now. And in Colorado in October 2016, an autonomous semitrailer successfully delivered fifty thousand cans of Budweiser beer from Fort Collins to Colorado Springs. The truck drove itself 100 miles on the interstate, but when it reached the city limits, a human driver took over.
These achievements produce mixed responses. There are 1.9 million Americans working as heavy and tractor-trailer truck drivers today. Worries that autonomous trucks will cause a “tsunami of displacement” are widespread, though this is unlikely to happen in the next few years.32 In light of these concerns, it is also important to remember that the barriers to technology adoption are not just technological. As we have seen in the preceding chapters, replacing technologies are likely to be resisted if workers face poor alternative options—an issue to which we shall return.
* * *
Not all human performers of transportation and delivery tasks are at immediate risk from the rise of autonomous vehicles, of course. As AI skeptics like Robert Gordon have pointed out, even if “the car drives up in front of my house, how does the package get from the Amazon car to my front porch? Who carries it up when I’m away from home?”33 At the same time, we have been able to overcome seemingly more complicated engineering problems in the past through clever task redesign. As Hans Moravec has noted, it is hard for computers to do many tasks that are easy for humans, and vice versa. But while this remains true, engineers have also been able to take steps toward resolving Moravec’s paradox (see chapter 9) by making simple tasks even simpler.
Indeed, a common misconception is that for a task to be automated, a machine must replicate the exact procedures of the worker it is intended to replace. Simplification is mostly how automation happens. Even state-of-the-art robotics would not be able to replicate the motions and procedures carried out by medieval craftsmen. Production became automatable only because previously unstructured tasks were subdivided and simplified in the factory setting. The factory assembly line turned the nonroutine tasks of the artisan shop into repetitive tasks that were automatable once robots arrived. In similar fashion, we did not automate the jobs of laundresses by inventing multipurpose robots capable of chopping down trees; carrying water, wood, or coal from the outside to the stove; and performing the motions that washing clothes by hand entails. And we did not automate the jobs of lamplighters by inventing robots capable of climbing lampposts.
A contemporary example of task simplification is prefabrication:34 “On-site construction tasks typically demand a high degree of adaptability, so as to accommodate work environments that are typically irregularly laid out, and vary according to weather. Prefabrication, in which the construction object is partially assembled in a factory before being transported to the construction site, provides a way of largely removing the requirement for adaptability. It allows many construction tasks to be performed by robots under controlled conditions that eliminate task variability—a method that is becoming increasingly widespread, particularly in Japan.”35 Not just in construction but also in retailing, clever task redesign has yielded promising results. For example, Kiva Systems, acquired by Amazon, solved the problem of warehouse navigation simply by placing bar-code stickers on the floor that inform robots of their precise location. With clever task redesign, engineers are already breaking the rules about what robots can do.
In the late 1990s, computers lent steam to retailing operations. But productivity growth could not be sustained, as companies soon ran into bottlenecks. Goods still needed to be moved from the factory to the warehouse, then to the retail store, and finally to the ultimate buyer. Freight trucking was an “inherently unproductive activity, as delivery drivers navigate congested and potholed streets, search for parking spaces, ring doorbells, and wait for an answer.”36 To work around this, Amazon is now experimenting with using drones (which can bypass congested streets) for delivery. To return to Gordon’s question of “How does the package get from the Amazon car to my front porch?,” it looks increasingly likely that many packages will not arrive by car. In London, for example, a company called Skyports is already acquiring rooftop spaces that it plans to convert into vertiports, where drones can take off and land. And in March 2018, Amazon was granted a patent for a delivery drone that responds to human gestures. The technology should help address the issue of how “flying robots might interact with human bystanders and customers waiting on their doorsteps. Depending on a person’s gestures—a welcoming thumbs up, shouting or frantic arm waving—the drone can adjust its behavior, according to the patent. The machine could release the package it’s carrying, alter its flight path to avoid crashing, [and] ask humans a question or abort the delivery, the patent says.”37
Aided by AI, engineers have also come up with clever ways of reducing labor requirements within stores, without offloading the tasks done by cashiers onto consumers through complicated self-service checkout procedures. One example is Amazon Go, an archetypical example of a replacing technology. Today, some 3.5 million Americans work as cashiers across the country. But if you go to an Amazon Go store, you will not see a single cashier or even a self-service checkout stand. Customers walk in, scan their phones, and walk out with what they need. To achieve this, Amazon is leveraging recent advances in computer vision, deep learning, and sensors that track customers and the items they pick up and take with them. Amazon then bills the credit card passed through the turnstile when the customer leaves the store and sends the receipt to the Go app. While the rollout of the first Seattle, Washington, prototype store was delayed because of issues with tracking multiple users and objects, Amazon now runs three Go stores in Seattle and another in Chicago, Illinois, and plans to launch another three thousand by 2021. Globally, companies like Tencent, Alibaba, and JD.com are also investing in AI to achieve the same goal.
Chinese companies like JD.com have also started to invest more heavily in unmanned warehouses. Inside JD.com’s Shanghai warehouse, machines are guided by image scanners. They handle all the goods, most of which are consumer electronic products: “Packages travel along a highway of belts. Mechanical arms stationed throughout the network place the items on the right tracks, wrap them in plastic or cardboard and set them onto motorized pucks [that] carry the parcels across a floor that resembles a giant checkerboard and plunk them down chutes to sacks. Computerized shelves on wheels retrieve the loads and transport them to trucks, which deliver most orders within 24 hours of a shopper’s click.”38 While JD.com employs some one hundred sixty thousand workers throughout Asia today, it has made clear its intent to trim that number to fewer than eight thousand over the next decade. And those jobs, it expects, will require a very different set of skills.39
The main reason why warehouses still employ large swaths of the population is that order picking remains a largely manual process. Humans still hold the comparative advantage in complex perception and manipulation tasks. But here, too, AI has made many recent breakthroughs possible. At the OpenAI lab in San Francisco, California, set up by Elon Musk, a robotic five-fingered hand called Dactyl bears witness to impressive progress in recent years: “If you give Dactyl an alphabet block and ask it to show you particular letters—let’s say the red O, the orange P and the blue I—it will show them to you and spin, twist and flip the toy in nimble ways.”40 Though this is an easy task for any human, the achievement lies in the fact that AI allows Dactyl to learn new tasks, largely on its own through trial and error.
For robots to become effective manipulators, however, they must also learn to identify and distinguish between various items. In this domain, the state of the art only a few years ago is probably best exemplified by the Gripper—a machine equipped with a two-fingered gripper, which is much easier to control than a five-fingered hand. The Gripper is able to identify, manipulate, and sort familiar objects like a screwdriver or a ketchup bottle. But when faced with an object it has not seen before, all bets are off.41 This might not be a problem in a warehouse with a limited set of items, but in warehouses that store thousands of objects and receive a steady flow of new items, robots are needed that can pick up just about anything. Researchers at Autolab, a robotics lab inside the University of California, Berkeley, are now building such systems using AI:
The Berkeley researchers modeled the physics of more than 10,000 objects, identifying the best way to pick up each one. Then, using an algorithm called a neural network, the system analyzed all this data, learning to recognize the best way to pick up any item. In the past, researchers had to program a robot to perform each task. Now it can learn these tasks on its own. When confronted with, say, a plastic Yoda toy, the system recognizes it should use the gripper to pick the toy up. But when it faces the ketchup bottle, it opts for the suction cup. The picker can do this with a bin full of random stuff. It is not perfect, but because the system can learn on its own, it is improving at a far faster rate than machines of the past.42
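The core decision the passage describes, choosing between a two-fingered gripper and a suction cup based on an object's properties, can be sketched as a tiny scoring function. This is a toy illustration only: the real Berkeley system learns its policy from thousands of modeled objects with a neural network, whereas the features and weights below are invented stand-ins.

```python
# Toy sketch of learned grasp-mode selection. Each tool gets a score
# from object features; the higher-scoring tool is chosen. In the real
# system these weights would be learned from data, not hand-set.

def choose_tool(surface_flatness: float, graspable_width: float) -> str:
    """Pick 'suction' or 'gripper' from simple feature scores (0-1 features)."""
    suction_score = 2.0 * surface_flatness - 0.5   # suction needs a flat, smooth face
    gripper_score = 1.5 * graspable_width - 0.5    # gripper needs something to pinch
    return "suction" if suction_score > gripper_score else "gripper"

# A smooth ketchup bottle (large flat surface) favors the suction cup;
# an irregular plastic toy (little flat area, thin pinchable limbs)
# favors the gripper.
print(choose_tool(surface_flatness=0.9, graspable_width=0.3))  # suction
print(choose_tool(surface_flatness=0.2, graspable_width=0.6))  # gripper
```

The point of learning such a policy from data rather than hand-coding it, as the passage notes, is that the system keeps improving as it encounters objects its programmers never anticipated.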
Thus, while robots are still far from having human-level capabilities when it comes to perception and manipulation tasks, they are becoming sufficiently sophisticated to handle gripping tasks in a structured warehouse setting, like picking items and parcels off a pallet and placing them into cartons or boxes. Just as robots entered the factories, they are gradually making an appearance outside manufacturing. Warehouse automation today is probably where factory automation was in the 1980s.
It is true that many of the AI technologies discussed above are still imperfect prototypes. But it is important to remember that just about every technology was imperfect in its early days. To most observers, for example, the first telephones seemed ridiculous. Getting used to hearing a disembodied voice through an earpiece was an experience entirely different from any previous form of communication. An early article in Scientific American argued that it was a silly invention, for which people would find little use: “The dignity of talking consists of having a listener, and it seems absurd to be addressing a piece of iron.”43 Retrospectively, this might seem like a silly thing to think. But early telephony was made with a single-wire system that suffered great losses in clarity: “In 1878 the recently invented telephone was hardly more than a scientific toy. In order to use it a person was required to briskly turn a crank and to scream into a crude mouthpiece. One could faintly hear the return message only if the satanic screechings and groanings of static permitted.”44 But just a decade later the technology looked much more promising. In 1890, a reporter for Time was invited by the American Telephone and Telegraph Company (AT&T) to inspect the status of long-distance telephony. General Superintendent A. S. Hibbard made a test call to showcase the technology: “Boston, 300 miles away was rung up, and then ensued a pleasant conversation. The operator at the other end was a young woman, who at once struck up a spirited discussion of the latest development in Theosophic Buddhism. Her voice was not so much raised as in ordinary talking, and its expressiveness was perfect.”45
The Next Wave
More and more jobs lend themselves to automation. But anecdotes alone cannot tell us much about the extent to which jobs will be replaced in the future, or the types of job that will be affected. So in a 2013 paper titled “The Future of Employment: How Susceptible Are Jobs to Computerisation?,” my Oxford University colleague Michael Osborne and I set out to identify the near-term engineering bottlenecks to automation as a way of estimating the exposure of current jobs to recent advances in AI. As noted above, until recently, computers had a comparative advantage in tasks involving routine rule-based activities, while humans did better at everything else.
Routine jobs began to disappear in large numbers in the 1980s, but some economists made accurate predictions about the domains in which humans would be replaced much earlier simply by observing what computers do. One Bureau of Labor Statistics case study, conducted in 1960, found that “a little over 80 percent of the employees affected by the change were in routine jobs involving posting, checking, and maintaining records; filing; computing; or tabulating, keypunch, and related machine operations. The rest were mainly in administrative, supervisory, and accounting work.”46 But if there was a Nobel Prize for predicting the future of work, it should have gone to Herbert Simon for his essay titled “The Corporation: Will It Be Managed by Machines?,” first published in 1960.47 (Of course, Simon did win one in economics for his work on the decision-making process within economic organizations.) While Simon did not lay out an explicit framework, he got things spectacularly right by looking at trends in technology. He was right to think that computers would take over many routine factory and office jobs. He correctly predicted that there would still be many jobs responsible for the design of products, processes, and general management. And he saw that a growing share of the population would be employed in personal service jobs. In other words, he basically predicted the hollowing out of middle-class jobs decades before it happened.
The Technology Trap Page 35