
Machines of Loving Grace


by John Markoff


  Another aspect of the liability issue is what has been described as a version of the “trolley problem,” which is generally stated thus: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks that has only one person on it, but that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? First posed as a thought problem in a paper about the ethics of abortion by British philosopher Philippa Foot in 1967, it has led to endless philosophical discussions on the implications of choosing the lesser evil.9 More recently it has been similarly framed for robot vehicles deciding between avoiding five schoolchildren who have run out onto the road when the only option is swerving onto the sidewalk to avoid them, thus killing a single adult bystander.

  Software could generally be designed to choose the lesser evil; however, the framing of the question seems wrong on other levels. Because 90 percent of road accidents result from driver error, it is likely that a transition to autonomous vehicles will result in a dramatic drop in the overall number of injuries and deaths. So, clearly the greater good would be served even though there will still be a small number of accidents purely due to technological failures. In some respects, the automobile industry has already agreed with this logic. Air bags, for example, save more lives than are lost due to faulty air bag deployments.

  Secondly, the narrow focus of the question ignores how autonomous vehicles will probably operate in the future, when it is highly likely that road workers, cops, emergency vehicles, cars, pedestrians, and cyclists will electronically signal their presence to each other, a feature that even without complete automation should dramatically increase safety. A technology known as V2X that continuously transmits the location of nearby vehicles to each other is now being tested globally. In the future, even schoolchildren will be carrying sensors to alert cars to their presence and reduce the chance of an accident.

  It’s puzzling, then, that the philosophers generally don’t explore the trolley problem from the point of view of the greater good, but rather as an artifact of individual choice. Certainly it would be an individual tragedy if the technology fails—and of course it will fail. Systems that improve the overall safety of transportation seem vital, even if they aren’t perfect. The more interesting philosophical conundrum is over the economic, social, and even cultural consequences of taking humans out of the loop in driving. More than 34,000 people died in 2013 in the United States in automobile accidents, and 2.36 million were injured. Balance that against the 3.8 million people who earned a living by driving commercially in the United States in 2012.10 Driverless cars and trucks would potentially displace many if not most of those jobs as they emerge during the next two decades.

  Indeed, the question is more nuanced than one narrowly posed as a choice of saving lives or jobs. When Doug Engelbart gave what would later be billed as “The Mother of All Demos” in 1968—a demonstration of the technologies that would lead to personal computing and the Internet—he implicitly adopted the metaphor of driving. He sat at a keyboard and a display and showed how graphical interactive computing could be used to control computing and “drive” through what would become known as cyberspace. The human was very much in control in this model of intelligence augmentation. Driving was the original metaphor for interactive computing, but today Google’s vision has changed the metaphor. The new analogy will be closer to traveling in an elevator or a train without human intervention. In Google’s world you will press a button and be taken to your destination. This conception of transportation undermines several notions that are deeply ingrained in American culture. In the last century the car became synonymous with the American ideal of freedom and independence. That era is now ending. What will replace it?

  It is significant that Google is instrumental in changing the metaphor. In one sense the company began as the quintessential intelligence augmentation, or IA, company. The PageRank algorithm Larry Page developed to improve Internet search results essentially mined human intelligence by using the crowd-sourced accumulation of human decisions about valuable information sources. Google initially began by collecting and organizing human knowledge and then making it available to humans as part of a glorified Memex, the original global information retrieval system first proposed by Vannevar Bush in the Atlantic Monthly in 1945.11

  As the company has evolved, however, it has started to push heavily toward systems that replace rather than extend humans. Google’s executives have obviously thought to some degree about the societal consequences of the systems they are creating. Their corporate motto remains “Don’t be evil.” Of course, that is nebulous enough to be construed to mean almost anything. Yet it does suggest that as a company Google is concerned with more than simply maximizing shareholder value. For example, Peter Norvig, a veteran AI scientist who has been director of research at Google since 2001, points to partnerships between human and computer as the way out of the conundrum presented by the emergence of increasingly intelligent machines. A partnership between human chess experts and a chess-playing computer program can outplay even the best AI chess program, he notes. “As a society that’s what we’re going to have to do. Computers are going to be more flexible and they’re going to do more, and the people who are going to thrive are probably the ones who work in a partnership with machines,” he told a NASA conference in 2014.12

  What will the partnerships between humans and intelligent cars of the future look like? What began as a military plan to automate battlefield logistics, lowering costs and keeping soldiers out of harm’s way, is now on the verge of reframing modern transportation. The world is plunging ahead and automating transportation systems, but the consequences are only dimly understood today. There will be huge positive consequences in safety, efficiency, and environmental quality. But what about the millions of people now employed driving throughout the world? What will they do when they become the twenty-first-century equivalent of the blacksmith or buggy-whip maker?

  3 | A TOUGH YEAR FOR THE HUMAN RACE

  “With these machines, we can make any consumer device in the world,” enthused Binne Visser, a Philips factory engineer who helped create a robot assembly line that disgorges an unending stream of electric shavers. His point was that the line could just as easily build smartphones, computers, or virtually anything that is made today by hand or by machine.1

  The Philips electric razor factory in Drachten, a three-hour train ride north from Amsterdam through pancake-flat Dutch farmland, offers a clear view of the endgame of factory robots: that “lights-out,” completely automated factories are already a reality, but so far only in limited circumstances. The Drachten plant feels from the outside like a slightly faded relic of an earlier era when Philips, which started out making lightbulbs and vacuum tubes, grew to be one of the world’s dominant consumer electronics brands. Having lost its edge to Asian upstarts in consumer products such as television sets, Philips remains one of the world’s leading makers of electric shavers and a range of other consumer products. Like many European and U.S. companies, it has based much of its manufacturing in Asia where labor is less expensive. A turning point came in 2012 when Philips scrapped a plan to move a high-end shaver assembly operation to China. Because of the falling prices of sensors, robots, and cameras and the increasing transportation costs to ship finished goods to markets outside Asia, Philips built an almost entirely automated assembly line of robot arms at the Drachten factory. Defeated in many consumer electronics categories, Philips decided to invest to maintain its edge in an eclectic array of precomputer home appliances.

  The brightly lit single-story automated shaver factory is a modular mega-machine composed of more than 128 linked stations—each one a shining transparent cage connected to its siblings by a conveyor, resembling the glass-enclosed popcorn makers found in movie theaters. The manufacturing line itself is a vast Rube Goldberg–esque orchestra. Each of the 128 arms has a unique “end effector,” a specialized hand for performing the same operation over and over and over again at two-second intervals. One assembly every two seconds translates into 30 shavers a minute, 1,800 an hour, 1,314,000 a month, and an astounding 15,768,000 a year.
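The throughput figures follow directly from the two-second cycle time; a quick sketch confirms the arithmetic (assuming the line runs around the clock, as the text describes, and taking a month as one-twelfth of a year):

```python
# Sanity-check of the Drachten line's throughput: one shaver every
# two seconds, running continuously, 24 hours a day, 365 days a year.
CYCLE_SECONDS = 2

per_minute = 60 // CYCLE_SECONDS   # 30 shavers a minute
per_hour = per_minute * 60         # 1,800 an hour
per_year = per_hour * 24 * 365     # 15,768,000 a year
per_month = per_year // 12         # 1,314,000 in an average month

print(per_minute, per_hour, per_month, per_year)
```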

  A Philips shaver assembly plant in Drachten, Netherlands, that operates without assembly workers. (Photo courtesy of Philips)

  The robots are remarkably dexterous, each specialized to repeat its single task endlessly. One robot arm simultaneously picks up two toothpick-thin two-inch pieces of wire, precisely bends the wires, and then delicately places their stripped ends into tiny holes in a circuit board. The wires themselves are picked from a parts feeder called a shake table. A human technician loads them into a bin that then spills them onto a brightly lit surface observed by a camera placed overhead. As if playing Pick Up Sticks, the robot arm grabs two wires simultaneously. Every so often, when the wires are jumbled, it shakes the table to separate them so it can see them better and then quickly grabs two more. Meanwhile, a handful of humans flutter around the edges of the shaver manufacturing line. A team of engineers dressed in blue lab coats keeps the system running by feeding it raw materials. A special “tiger team” is on call around the clock so no robot arm is ever down for more than two hours. Unlike human factory workers, the line never sleeps.

  The factory is composed of American robot arms programmed by a team of European automation experts. Is it a harbinger of an era of manufacturing in which human factory line workers will vanish? Despite the fact that in China millions of workers labor to hand-assemble similar consumer gadgets, the Drachten plant is assembling devices more mechanically complex than a smartphone—entirely without human labor. In the automated factory mistakes are rare—the system is meant to be tolerant of small errors. At one station, toward the end of the line, small plastic pieces of the shaver case are snapped in place just beneath the rotary cutting head. One of the pieces, resembling a guitar pick, pops off onto the floor, like a Tiddlywink. The line doesn’t stutter. A down-the-line sensor recognizes that the part is missing and the shaver is shunted aside into a special rework area. The only humans directly working on the shaver factory line are eight women performing the last step in the process: quality inspection, not yet automated because the human ear is still the best instrument for determining that each shaver is functioning correctly.

  Lights-out factories, defined as robotic manufacturing lines without humans, create a “good news, bad news” scenario. To minimize the total cost of goods, it makes sense to place factories either near sources of raw materials, labor, and energy or near the customers for the finished goods. If robots can build virtually any product more cheaply than human workers, then it is more economical for factories to be close to the markets they serve rather than near sources of low-cost labor. Indeed, factories are already returning to the United States. A solar panel manufacturing factory run by Flextronics is now located in Milpitas, south of San Francisco, where a large banner proudly proclaims, BRINGING JOBS & MANUFACTURING BACK TO CALIFORNIA! Walking the factory line, however, it quickly becomes clear that the facility is a testament to highly automated manufacturing rather than to job creation; fewer than ten workers actually handle products on an assembly line that produces almost as many panels as hundreds of employees do in the company’s conventional factory in Asia. “At what point does the chainsaw replace Paul Bunyan?” a Flextronics executive asks. “There’s always a price point, and we’re very close to that point.”2

  At the dawn of the Information Age, the pace and consequences of automation were very much on Norbert Wiener’s mind. During the summer of 1949, Wiener wrote a single-spaced three-page letter to Walter Reuther, the head of the United Auto Workers, to tell Reuther that he had turned down a consulting opportunity with the General Electric corporation to offer technical advice on designing automated machinery. GE had approached the MIT scientist twice in 1949, asking him both to lecture and to consult on the design of servomechanisms for industrial control applications. Servos used feedback to precisely control a component’s position, which was essential for the automated machinery poised to enter the factory after World War II. Wiener had refused both offers for what he called ethical reasons, even though he realized that others with similar knowledge but no sense of obligation to factory workers would likely accept.

  Wiener, deeply attuned to the potential dire “social consequences,” had already unsuccessfully attempted to contact other unions, and his frustration came through clearly in the Reuther letter. By late 1942 it was clear to Wiener that a computer could be programmed to run a factory, and he worried about the ensuing consequences of an “assembly line without human agents.”3 Software had not yet become a force that, in the words of browser pioneer Marc Andreessen, would “eat the world,” but Wiener portrayed the trajectory clearly to Reuther. “The detailed development of the machine for particular industrial purpose is a very skilled task, but not a mechanical task,” he wrote. “It is done by what is called ‘taping’ the machine in the proper way, much as present computing machines are taped.”4 Today we call it “programming,” and software animates the economy and virtually every aspect of modern society.

  Writing to Reuther, Wiener foresaw an apocalypse: “This apparatus is extremely flexible, and susceptible to mass production, and will undoubtedly lead to the factory without employees; as for example, the automatic automobile assembly line,” he wrote. “In the hands of the present industrial set-up, the unemployment produced by such plants can only be disastrous.” Reuther responded by telegram: DEEPLY INTERESTED IN YOUR LETTER. WOULD LIKE TO DISCUSS IT WITH YOU AT EARLIEST OPPORTUNITY.

  Reuther’s response was sent in August 1949, but it was not until March 1951 that the two men met in a Boston hotel.5 They sat together in the hotel restaurant and agreed to form a joint “Labor-Science-Education Association”6 to attempt to deflect the worst consequences of the impending automation era for the nation’s industrial workers. By the time Wiener met with Reuther he had already published The Human Use of Human Beings, a book that both argued for the potential benefits of automation and warned about the possibility of human subjugation by machines. He would become a sought-after national speaker during the first half of the 1950s, spreading his message of concern both about the possibility of runaway automation and about the concept of robot weapons. After the meeting Wiener enthused that he had “found in Mr. Reuther and the men about him exactly that more universal union statesmanship which I had missed in my first sporadic attempts to make union contacts.”7

  Wiener was not the only one to attempt to draw Reuther’s attention to the threat of automation. Several years after meeting with Wiener, Alfred Granakis, president of UAW 1250, also wrote to Reuther, warning him about the loss of jobs after he was confronted with new workplace automation technologies at a Ford Motor engine plant and foundry in Cleveland, Ohio. He described the plant as “today’s nearest approach to a fully automated factory in the automobile industry,” adding: “What is the economic solution to all this, Walter? I am greatly afraid of embracing an economic ‘Frankenstein’ that I helped create in its infancy. It is my opinion that troubled days lie ahead for Labor.”8

  Wiener had broken with the scientific and technical establishment some years earlier. He expressed strong beliefs about ethics in science in a letter to the Atlantic Monthly titled “A Scientist Rebels,” published in December 1946, a year after he had suffered a crisis of conscience resulting from the bombing of Hiroshima and Nagasaki. The essay contained this response to a Boeing research scientist’s request for a technical analysis of a guided missile system during the Second World War: “The practical use of guided missiles can only be used to kill foreign civilians indiscriminately, and it furnishes no protection whatever to civilians in this country.”9 The same letter raises the moral question of the dropping of the atomic bomb: “The interchange of ideas which is one of the great traditions of science must of course receive certain limitations when the scientist becomes an arbiter of life and death.”10

  In January of 1947 he withdrew from a symposium on calculating machinery at Harvard University, in protest that the systems were to be used for “war purposes.” In the 1940s robots were entirely the stuff of science fiction and computers were in their infancy, so it is striking how clearly fleshed out Wiener’s understanding was of a technological impact that is only today playing out. In 1949, the New York Times invited Wiener to summarize his views about “what the ultimate machine age is likely to be,” in the words of its longtime Sunday editor, Lester Markel. Wiener accepted the invitation and wrote a draft of the article; the legendarily autocratic Markel was dissatisfied and asked him to rewrite it. He did. But through a distinctly pre-Internet series of fumbles and missed opportunities, neither version appeared at the time.

  In August of 1949, according to Wiener’s papers at MIT, the Times asked him to resend the first draft of the article to be combined with the second draft. (It is unclear why the editors had misplaced the first draft.) “Could you send the first draft to me, and we’ll see whether we can combine the two into one story?” wrote an editor in the paper’s Sunday department, then separate from the daily paper. “I may be mistaken, but I think you lost some of your best material.” But by then Wiener was traveling in Mexico, and he responded: “I had assumed that the first version of my article was finished business. To get hold of the paper in my office at the Massachusetts Institute of Technology would involve considerable cross-correspondence and annoyance to several people. I therefore do not consider it a practical thing to do. Under the circumstances I think that it is best for me to abandon this undertaking.”

  The following week the Times editor returned the second draft to Wiener, and it eventually ended up with his papers in MIT Libraries’ Archives and Special Collections, languishing there until December 2012, when it was discovered by Anders Fernstedt, an independent scholar researching the work of Karl Popper, Friedrich Hayek, and Ernst Gombrich, three Viennese philosophers active in London for most of the twentieth century.11 In the unpublished essay Wiener’s reservations were clear: “The tendency of these new machines is to replace human judgment on all levels but a fairly high one, rather than to replace human energy and power by machine energy and power. It is already clear that this new replacement will have a profound influence upon our lives,” he wrote.

 
