The Glass Cage: Automation and Us

by Nicholas Carr


  CHAPTER SEVEN

  AUTOMATION FOR THE PEOPLE

  WHO NEEDS HUMANS, anyway?

  That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers are advancing so rapidly, and if people by comparison seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation altogether? “We need to let robots take over,” declared the technology theorist Kevin Kelly in a 2013 Wired cover story. He pointed to aviation as an example: “A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot ‘just in case.’ ”1 The news that a person was driving the Google car that crashed in 2011 prompted a writer at a prominent technology blog to exclaim, “More robo-drivers!”2 Commenting on the struggles of Chicago’s public schools, Wall Street Journal writer Andy Kessler remarked, only half-jokingly, “Why not forget the teachers and issue all 404,151 students an iPad or Android tablet?”3 In a 2012 essay, the respected Silicon Valley venture capitalist Vinod Khosla suggested that health care will be much improved when medical software—which he dubs “Doctor Algorithm”—goes from assisting primary-care physicians in making diagnoses to replacing the doctors entirely. “Eventually,” he wrote, “we won’t need the average doctor.”4 The cure for imperfect automation is total automation.

  That’s a seductive idea, but it’s simplistic. Machines share the fallibility of their makers. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter a cluster of circumstances that its designers and programmers never anticipated and that leave its algorithms baffled. In early 2009, just a few weeks before the Continental Connection crash in Buffalo, a US Airways Airbus A320 lost all engine power after hitting a flock of Canada geese on takeoff from LaGuardia Airport in New York. Acting quickly and coolly, Captain Chesley Sullenberger and his first officer, Jeffrey Skiles, managed, in three harrowing minutes, to ditch the crippled jet safely in the Hudson River. All passengers and crew were evacuated. If the pilots hadn’t been there to “babysit” the A320, a craft with state-of-the-art automation, the jet would have crashed and everyone on board would almost certainly have perished. For a passenger jet to have all its engines fail is rare. But it’s not rare for pilots to rescue planes from mechanical malfunctions, autopilot glitches, rough weather, and other unexpected events. “Again and again,” Germany’s Der Spiegel reported in a 2009 feature on airline safety, the pilots of fly-by-wire planes “run into new, nasty surprises that none of the engineers had predicted.”5

  The same is true elsewhere. The mishap that occurred while a person was driving Google’s Prius was widely reported in the press; what we don’t hear much about are all the times the backup drivers in Google cars, and other automated test vehicles, have to take the wheel to perform maneuvers the computers can’t handle. Google requires that people drive its cars manually on most urban and residential streets, and any employee who wants to operate one of the vehicles has to complete rigorous training in emergency driving techniques.6 Driverless cars aren’t quite as driverless as they seem.

  In medicine, caregivers often have to overrule misguided instructions or suggestions offered by clinical computers. Hospitals have found that while computerized drug-ordering systems alleviate some common errors in dispensing medication, they introduce new problems. A 2011 study at one hospital revealed that the incidence of duplicated medication orders actually increased after drug ordering was automated.7 Diagnostic software is also far from perfect. Doctor Algorithm may well give you the right diagnosis and treatment most of the time, but if your particular set of symptoms doesn’t fit the probability profile, you’re going to be glad that Doctor Human was there in the examination room to review and overrule the computer’s calculations.

  As automation technologies become more complicated and more interconnected, with a welter of links and dependencies among software instructions, databases, network protocols, sensors, and mechanical parts, the potential sources of failure multiply. Systems become susceptible to what scientists call “cascading failures,” in which a small malfunction in one component sets off a far-flung and catastrophic chain of breakdowns. Ours is a world of “interdependent networks,” a group of physicists reported in a 2010 Nature article. “Diverse infrastructures such as water supply, transportation, fuel and power stations are coupled together” through electronic and other links, which ends up making all of them “extremely sensitive to random failure.” That’s true even when the connections are limited to exchanges of data.8
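
  To make the physicists’ point concrete, here is a minimal sketch, in Python, of a cascade spreading through two interdependent networks. The coupling rule is an illustrative assumption rather than the model from the Nature study: each node in network A draws on exactly one randomly assigned node in network B, and vice versa, and a node fails the moment its supporter fails.

```python
import random

# Toy sketch of a cascading failure across two interdependent networks.
# Illustrative assumptions (not the model from the Nature study): each
# node in network A is supported by exactly one randomly assigned node
# in network B, and vice versa; a node fails as soon as its supporter fails.

random.seed(2010)
N = 1000  # nodes in each network

a_needs = list(range(N))  # a_needs[i] is the B-node that A-node i depends on
b_needs = list(range(N))  # b_needs[j] is the A-node that B-node j depends on
random.shuffle(a_needs)
random.shuffle(b_needs)

failed_a = [False] * N
failed_b = [False] * N

# Seed the cascade: a single random breakdown in network A.
failed_a[random.randrange(N)] = True

# Let failures propagate back and forth until nothing new fails.
changed = True
while changed:
    changed = False
    for j in range(N):
        if not failed_b[j] and failed_a[b_needs[j]]:
            failed_b[j] = True
            changed = True
    for i in range(N):
        if not failed_a[i] and failed_b[a_needs[i]]:
            failed_a[i] = True
            changed = True

print(f"One initial failure disabled {sum(failed_a)} of {N} nodes in A "
      f"and {sum(failed_b)} of {N} nodes in B.")
```

  Run it a few times and a single seed failure will typically end up disabling a large fraction of both networks, which is the “extreme sensitivity to random failure” the physicists describe.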

  Vulnerabilities become harder to discern too. With the industrial machinery of the past, explains MIT computer scientist Nancy Leveson in her book Engineering a Safer World, “interactions among components could be thoroughly planned, understood, anticipated, and guarded against,” and the overall design of a system could be tested exhaustively before it was put into everyday use. “Modern, high-tech systems no longer have these properties.” They’re less “intellectually manageable” than were their nuts-and-bolts predecessors.9 All the parts may work flawlessly, but a small error or oversight in system design—a glitch that might be buried in hundreds of thousands of lines of software code—can still cause a major accident.

  The dangers are compounded by the incredible speed at which computers can make decisions and trigger actions. That was demonstrated over the course of a hair-raising hour on the morning of August 1, 2012, when Wall Street’s largest trading firm, Knight Capital Group, rolled out a new automated program for buying and selling shares. The cutting-edge software had a bug that went undetected during testing. The program immediately flooded exchanges with unauthorized and irrational orders, trading $2.6 million worth of stocks every second. In the forty-five minutes that passed before Knight’s mathematicians and computer scientists were able to track the problem to its source and shut the offending program down, the software racked up $7 billion in errant trades. The company ended up losing almost half a billion dollars, putting it on the verge of bankruptcy. Within a week, a consortium of other Wall Street firms bailed Knight out to avoid yet another disaster in the financial industry.
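
  The reported figures hang together arithmetically. Here is a quick back-of-the-envelope check, using only the numbers in the paragraph above:

```python
# Back-of-the-envelope check of the Knight Capital figures cited above:
# roughly $2.6 million of unintended trades per second, sustained for
# about forty-five minutes.
dollars_per_second = 2.6e6
duration_seconds = 45 * 60  # forty-five minutes

total = dollars_per_second * duration_seconds
print(f"${total:,.0f}")  # about $7 billion, in line with the errant-trade total
```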

  Technology improves, of course, and bugs get fixed. Flawlessness, though, remains an ideal that can never be achieved. Even if a perfect automated system could be designed and built, it would still operate in an imperfect world. Autonomous cars don’t drive the streets of utopia. Robots don’t ply their trades in Elysian factories. Geese flock. Lightning strikes. The conviction that we can build an entirely self-sufficient, entirely reliable automated system is itself a manifestation of automation bias.

  Unfortunately, that conviction is common not only among technology pundits but also among engineers and software programmers—the very people who design the systems. In a classic 1983 article in the journal Automatica, Lisanne Bainbridge, an engineering psychologist at University College London, described a conundrum that lies at the core of computer automation. Because designers often assume that human beings are “unreliable and inefficient,” at least when compared to a computer, they strive to give them as small a role as possible in the operation of systems. People end up functioning as mere monitors, passive watchers of screens.10 That’s a job that humans, with our notoriously wandering minds, are particularly bad at. Research on vigilance, dating back to studies of British radar operators watching for German submarines during World War II, shows that even highly motivated people can’t keep their attention focused on a display of relatively stable information for more than about half an hour.11 They get bored; they daydream; their concentration drifts. “This means,” Bainbridge wrote, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”12

  And because a person’s skills “deteriorate when they are not used,” she added, even an experienced system operator will eventually begin to act like “an inexperienced one” if his main job consists of watching rather than acting. As his instincts and reflexes grow rusty from disuse, he’ll have trouble spotting and diagnosing problems, and his responses will be slow and deliberate rather than quick and automatic. Combined with the loss of situational awareness, the degradation of know-how raises the odds that when something goes wrong, as it sooner or later will, the operator will react ineptly. And once that happens, system designers will work to place even greater limits on the operator’s role, taking him further out of the action and making it more likely that he’ll mess up in the future. The assumption that the human being will be the weakest link in the system becomes self-fulfilling.

  ERGONOMICS, THE art and science of fitting tools and workplaces to the people who use them, dates back at least to the Ancient Greeks. Hippocrates, in “On Things Relating to the Surgery,” provides precise instructions for how operating rooms should be lit and furnished, how medical instruments should be arranged and handled, even how surgeons should dress. In the design of many Greek tools, we see evidence of an exquisite consideration of the ways an implement’s form, weight, and balance affect a worker’s productivity, stamina, and health. In early Asian civilizations, too, there are signs that the instruments of labor were carefully designed with the physical and psychological well-being of the worker in mind.13

  It wasn’t until the Second World War, though, that ergonomics began to emerge, together with its more theoretical cousin cybernetics, as a formal discipline. Many thousands of inexperienced soldiers and other recruits had to be entrusted with complicated and dangerous weapons and machinery, and there was little time for training. Awkward designs and confusing controls could no longer be tolerated. Thanks to trailblazing thinkers like Norbert Wiener and U.S. Air Force psychologists Paul Fitts and Alphonse Chapanis, military and industrial planners came to appreciate that human beings play as integral a role in the successful workings of a complex technological system as do the system’s mechanical components and electronic regulators. You can’t optimize a machine and then force the worker to adapt to it, in rigid Taylorist fashion; you have to design the machine to suit the worker.

  Inspired at first by the war effort and then by the drive to incorporate computers into commerce, government, and science, a large and dedicated group of psychologists, physiologists, neurobiologists, engineers, sociologists, and designers began to devote their varied talents to studying the interactions of people and machines. Their focus may have been the battlefield and the factory, but their aspiration was deeply humanistic: to bring people and technology together in a productive, resilient, and safe symbiosis, a harmonious human-machine partnership that would get the best from both sides. If ours is an age of complex systems, then ergonomists are our metaphysicians.

  At least they should be. All too often, discoveries and insights from the field of ergonomics, or, as it’s now commonly known, human-factors engineering, are ignored or given short shrift. Concerns about the effects of computers and other machines on people’s minds and bodies have routinely been trumped by the desire to achieve maximum efficiency, speed, and precision—or simply to turn as big a profit as possible. Software programmers receive little or no training in ergonomics, and they remain largely oblivious to relevant human-factors research. It doesn’t help that engineers and computer scientists, with their strict focus on math and logic, have a natural antipathy toward the “softer” concerns of their counterparts in the human-factors field. A few years before his death in 2006, the ergonomics pioneer David Meister, recalling his own career, wrote that he and his colleagues “always worked against the odds so that anything that was accomplished was almost unexpected.” The course of technological progress, he wistfully concluded, “is tied to the profit motive; consequently, it has little appreciation of the human.”14

  It wasn’t always so. People first began thinking about technological progress as a force in history in the latter half of the eighteenth century, when the scientific discoveries of the Enlightenment began to be translated into the practical machinery of the Industrial Revolution. That was also, and not coincidentally, a time of political upheaval. The democratic, humanitarian ideals of the Enlightenment culminated in the revolutions in America and France, and those ideals also infused society’s view of science and technology. Technical advances were valued—by intellectuals, if not always by workers—as means to political reform. Progress was defined in social terms, with technology playing a supporting role. Enlightenment thinkers such as Voltaire, Joseph Priestley, and Thomas Jefferson saw, in the words of the cultural historian Leo Marx, “the new sciences and technologies not as ends in themselves, but as instruments for carrying out a comprehensive transformation of society.”

  By the middle of the nineteenth century, however, the reformist view had, at least in the United States, been eclipsed by a new and very different concept of progress in which technology itself played the starring role. “With the further development of industrial capitalism,” writes Marx, “Americans celebrated the advance of science and technology with increasing fervor, but they began to detach the idea from the goal of social and political liberation.” Instead, they embraced “the now familiar view that innovations in science-based technologies are in themselves a sufficient and reliable basis for progress.”15 New technology, once valued as a means to a greater good, came to be revered as a good in itself.

  It’s hardly a surprise, then, that in our own time the capabilities of computers have, as Bainbridge suggested, determined the division of labor in complex automated systems. To boost productivity, reduce labor costs, and avoid human error—to further progress—you simply allocate control over as many activities as possible to software, and as software’s capabilities advance, you extend the scope of its authority even further. The more technology, the better. The flesh-and-blood operators are left with responsibility only for those tasks that the designers can’t figure out how to automate, such as watching for anomalies or providing an emergency backup in the event of a system failure. People are pushed further and further out of what engineers term “the loop”—the cycle of action, feedback, and decision making that controls a system’s moment-by-moment operations.
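
  A caricature of that division of labor, sketched in Python below, may help make “the loop” concrete. The sensor, the control rule, and the anomaly threshold are all invented for illustration; the point is only the shape of the arrangement, in which the software runs the action-feedback-decision cycle on its own and the operator is consulted only when a reading looks abnormal.

```python
import random

# A caricature of technology-centered task allocation: the software runs
# the whole action-feedback-decision loop, and the human operator is
# brought in only when the automation flags an anomaly. The sensor,
# control rule, and anomaly threshold below are invented for illustration.

random.seed(7)

def read_sensor():
    """Stand-in for feedback from the controlled process."""
    return random.gauss(100.0, 5.0)

def automated_action(reading):
    """Stand-in for the software's control decision."""
    return "reduce output" if reading > 100.0 else "increase output"

def looks_abnormal(reading):
    """The one test that decides whether a person re-enters the loop."""
    return abs(reading - 100.0) > 15.0  # a deliberately unlikely event

human_interventions = 0
for cycle in range(10_000):
    reading = read_sensor()
    if looks_abnormal(reading):
        human_interventions += 1  # operator pulled back in, cold and out of practice
    else:
        automated_action(reading)  # the normal case: no human involved at all

print(f"Operator consulted on {human_interventions} of 10,000 control cycles.")
```

  In a typical run the operator is consulted on well under 1 percent of the cycles, which is precisely the passive monitoring role that, as Bainbridge observed, people are so poorly suited to perform.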

  Ergonomists call the prevailing approach technology-centered automation. Reflecting an almost religious faith in technology, and an equally fervent distrust of human beings, it substitutes misanthropic goals for humanistic ones. It turns the glib “who needs humans?” attitude of the technophilic dreamer into a design ethic. As the resulting machines and software tools make their way into workplaces and homes, they carry that misanthropic ideal into our lives. “Society,” writes Donald Norman, a cognitive scientist and author of several influential books about product design, “has unwittingly fallen into a machine-centered orientation to life, one that emphasizes the needs of technology over those of people, thereby forcing people into a supporting role, one for which we are most unsuited. Worse, the machine-centered viewpoint compares people to machines and finds us wanting, incapable of precise, repetitive, accurate actions.” Although it now “pervades society,” this view warps our sense of ourselves. “It emphasizes tasks and activities that we should not be performing and ignores our primary skills and attributes—activities that are done poorly, if at all, by machines. When we take the machine-centered point of view, we judge things on artificial, mechanical merits.”16

  It’s entirely logical that those with a mechanical bent would take a mechanical view of life. The impetus behind invention is often, as Norbert Wiener put it, “the desires of the gadgeteer to see the wheels go round.”17 And it’s equally logical that such people would come to control the design and construction of the intricate systems and software programs that now govern or mediate society’s workings. They’re the ones who know the code. As society becomes ever more computerized, the programmer becomes its unacknowledged legislator. By defining the human factor as a peripheral concern, the technologist also removes the main impediment to the fulfillment of his desires; the unbridled pursuit of technological progress becomes self-justifying. To judge technology primarily on its technological merits is to give the gadgeteer carte blanche.

  In addition to fitting the dominant ideology of progress, the bias to let technology guide decisions about automation has practical advantages. It greatly simplifies the work of the system builders. Engineers and programmers need only take into account what computers and machines can do. That allows them to narrow their focus and winnow a project’s specifications. It relieves them of having to wrestle with the complexities, vagaries, and frailties of the human body and psyche. But however compelling as a design tactic, the simplicity of technology-centered automation is a mirage. Ignoring the human factor does not remove the human factor.

  In a much-cited 1997 paper, “Automation Surprises,” the human-factors experts Nadine Sarter, David Woods, and Charles Billings traced the origins of the technology-focused approach. They described how it grew out of and continues to reflect the “myths, false hopes, and misguided intentions associated with modern technology.” The arrival of the computer, first as an analogue machine and then in its familiar digital form, encouraged engineers and industrialists to take an idealistic view of electronically controlled systems, to see them as a kind of cure-all for human inefficiency and fallibility. The order and cleanliness of computer operations and outputs seemed heaven-sent when contrasted with the earthly messiness of human affairs. “Automation technology,” Sarter and her colleagues wrote, “was originally developed in hope of increasing the precision and economy of operations while, at the same time, reducing operator workload and training requirements. It was considered possible to create an autonomous system that required little if any human involvement and therefore reduced or eliminated the opportunity for human error.” That belief led, again with pristine logic, to the further assumption that “automated systems could be designed without much consideration for the human element in the overall system.”18

 
