The Glass Cage: Automation and Us

by Nicholas Carr


  The desires and beliefs underpinning the dominant design approach, the authors continued, have proved naive and damaging. While automated systems have often enhanced the “precision and economy of operations,” they have fallen short of expectations in other respects, and they have introduced a whole new set of problems. Most of the shortcomings stem from “the fact that even highly automated systems still require operator involvement and therefore communication and coordination between human and machine.” But because the systems have been designed without sufficient regard for the people who operate them, their communication and coordination capabilities are feeble. In consequence, the computerized systems lack the “complete knowledge” of the work and the “comprehensive access to the outside world” that only people can provide. “Automated systems do not know when to initiate communication with the human about their intentions and activities or when to request additional information from the human. They do not always provide adequate feedback to the human who, in turn, has difficulties tracking automation status and behavior and realizing there is a need to intervene to avoid undesirable actions by the automation.” Many of the problems that bedevil automated systems stem from “the failure to design human-machine interaction to exhibit the basic competencies of human-human interaction.”19

  Engineers and programmers compound the problems when they hide the workings of their creations from the operators, turning every system into an inscrutable black box. Normal human beings, the unstated assumption goes, don’t have the smarts or the training to grasp the intricacies of a software program or robotic apparatus. If you tell them too much about the algorithms or procedures that govern its operations and decisions, you’ll just confuse them or, worse yet, encourage them to tinker with the system. It’s safer to keep people in the dark. Here again, though, the attempt to avoid human errors by removing personal responsibility ends up making the errors more likely. An ignorant operator is a dangerous operator. As the University of Iowa human-factors professor John Lee explains, it’s common for an automated system to use “control algorithms that are at odds with the control strategies and mental model of the person [operating it].” If the person doesn’t understand those algorithms, there’s no way she can “anticipate the actions and limits of the automation.” The human and the machine, operating under conflicting assumptions, end up working at cross-purposes. People’s inability to comprehend the machines they use can also undermine their self-confidence, Lee reports, which “can make them less inclined to intervene” when something goes wrong.20

  HUMAN-FACTORS EXPERTS have long urged designers to move away from the technology-first approach and instead embrace human-centered automation. Rather than beginning with an assessment of the capabilities of the machine, human-centered design begins with a careful evaluation of the strengths and limitations of the people who will be operating or otherwise interacting with the machine. It brings technological development back to the humanistic principles that inspired the original ergonomists. The goal is to divide roles and responsibilities in a way that not only capitalizes on the computer’s speed and precision but also keeps workers engaged, active, and alert—in the loop rather than out of it.21

  Striking that kind of balance isn’t hard. Decades of ergonomic research show it can be achieved in a number of straightforward ways. A system’s software can be programmed to shift control over critical functions from the computer back to the operator at frequent but irregular intervals. Knowing that they may need to take command at any moment keeps people attentive and engaged, promoting situational awareness and learning. A design engineer can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than being relegated to passive, observational roles. Giving people more to do helps sustain the generation effect. A designer can also give the operator direct sensory feedback on the system’s performance, using audio and tactile alerts as well as visual displays, even for those activities that the computer is handling. Regular feedback heightens engagement and helps operators remain vigilant.
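
  To make the first of those strategies concrete, here is a minimal sketch, entirely my own illustration rather than anything drawn from the book, of how a scheduler might hand control back to the operator at frequent but irregular intervals. The function name, the twenty-minute average gap, and the jitter value are all assumptions chosen for the example.

```python
import random

# Illustrative only: return control to the operator at frequent but
# irregular times, so a handoff is always near but never predictable.
def handoff_times(shift_minutes, mean_gap=20.0, jitter=10.0, seed=None):
    """Return the minutes into the shift at which control reverts to the operator."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += mean_gap + rng.uniform(-jitter, jitter)  # roughly 10-30 minutes between handoffs
        if t >= shift_minutes:
            return times
        times.append(round(t, 1))

print(handoff_times(120, seed=7))  # a handful of irregularly spaced handoffs in a two-hour shift
```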

  One of the most intriguing applications of the human-centered approach is adaptive automation. In adaptive systems, the computer is programmed to pay close attention to the person operating it. The division of labor between the software and the human operator is adjusted continually, depending on what’s happening at any given moment.22 When the computer senses that the operator has to perform a tricky maneuver, for example, it might take over all the other tasks. Freed from distractions, the operator can concentrate her full attention on the critical challenge. Under routine conditions, the computer might shift more tasks over to the operator, increasing her workload to ensure that she maintains her situational awareness and practices her skills. Putting the analytical capabilities of the computer to humanistic use, adaptive automation aims to keep the operator at the peak of the Yerkes-Dodson performance curve, preventing both cognitive overload and cognitive underload. DARPA, the Department of Defense research agency that spearheaded the creation of the internet, is even working on developing “neuroergonomic” systems that, using various brain and body sensors, can “detect an individual’s cognitive state and then manipulate task parameters to overcome perceptual, attentional, and working memory bottlenecks.”23 Adaptive automation also holds promise for injecting a dose of humanity into the working relationships between people and computers. Some early users of the systems report that they feel as though they’re collaborating with a colleague rather than operating a machine.
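
  As a rough illustration of the adaptive idea, and not a description of any real DARPA or cockpit system, the sketch below shifts a task toward or away from the operator whenever an estimated workload score drifts outside an assumed comfortable band. The thresholds, the task names, and the workload score itself are placeholders of my own.

```python
# Hypothetical sketch of adaptive task allocation: keep the operator's
# estimated workload inside a middle band (neither overloaded nor idle).
LOW, HIGH = 0.3, 0.7   # assumed bounds on a 0-1 workload scale

def reallocate(workload, operator_tasks, automated_tasks):
    """Move one task between the operator and the automation, based on workload."""
    if workload > HIGH and operator_tasks:
        # Overload: let the automation absorb a task so the operator can focus.
        automated_tasks.append(operator_tasks.pop())
    elif workload < LOW and automated_tasks:
        # Underload: hand a task back to keep the operator engaged and practiced.
        operator_tasks.append(automated_tasks.pop())
    return operator_tasks, automated_tasks

operator = ["monitor traffic", "adjust heading"]
automation = ["hold altitude", "manage fuel balance"]
print(reallocate(0.15, operator, automation))
# During a quiet stretch, the operator picks up "manage fuel balance".
```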

  Studies of automation have tended to focus on large, complex, and risk-laden systems, the kind used on flight decks, in control rooms, and on battlefields. When these systems fail, many lives and a great deal of money can be lost. But the research is also relevant to the design of decision-support applications used by doctors, lawyers, managers, and others in analytical trades. Such programs go through a lot of usability testing to make them easy to learn and operate, but once you dig beneath the user-friendly interface, you find that the technology-centered ethic still holds sway. “Typically,” writes John Lee, “expert systems act as a prosthesis, supposedly replacing flawed and inconsistent human reasoning with more precise computer algorithms.”24 They’re intended to supplant, rather than supplement, human judgment. With each upgrade in an application’s data-crunching speed and predictive acumen, the programmer shifts more decision-making responsibility from the professional to the software.

  Raja Parasuraman, who has studied the personal consequences of automation as deeply as anyone, believes this is the wrong approach. He argues that decision-support applications work best when they deliver pertinent information to professionals at the moment they need it, without recommending specific courses of action.25 The smartest, most creative ideas come when people are afforded room to think. Lee agrees. “A less automated approach, which places the automation in the role of critiquing the operator, has met with much more success,” he writes. The best expert systems present people with “alternative interpretations, hypotheses, or choices.” The added and often unexpected information helps counteract the natural cognitive biases that sometimes skew human judgment. It pushes analysts and decision makers to look at problems from different perspectives and consider broader sets of options. But Lee stresses that the systems should leave the final verdict to the person. In the absence of perfect automation, he counsels, the evidence shows that “a lower level of automation, such as that used in the critiquing approach, is less likely to induce errors.”26 Computers do a superior job of sorting through lots of data quickly, but human experts remain subtler and wiser thinkers than their digital partners.
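
  The difference between the two roles Lee describes can be sketched in a few lines. The candidate hypotheses and their scores below are invented for illustration: a “prosthesis” hands back a single answer, while a “critiquing” system presents several ranked alternatives and leaves the decision to the professional.

```python
# Invented example data: three candidate interpretations with rough scores.
scores = {"hypothesis A": 0.62, "hypothesis B": 0.55, "hypothesis C": 0.21}

def prosthesis(scores):
    """Replace judgment: return only the single top-scoring answer."""
    return max(scores, key=scores.get)

def critique(scores, top_n=3):
    """Support judgment: show ranked alternatives and let the person decide."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]

print(prosthesis(scores))   # 'hypothesis A' -- one answer, take it or leave it
print(critique(scores))     # all three, with scores, for the human to weigh
```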

  Carving out a protected space for the thoughts and judgments of expert practitioners is also a goal of those seeking a more humanistic approach to automation in the creative trades. Many designers criticize popular CAD programs for their pushiness. Ben Tranel, an architect with the Gensler firm in San Francisco, praises computers for expanding the possibilities of design. He points to the new, Gensler-designed Shanghai Tower in China, a spiraling, energy-efficient skyscraper, as an example of a building that “couldn’t have been built” without computers. But he worries that the literalism of design software—the way it forces architects to define the meaning and use of every geometric element they input—is foreclosing the open-ended, unstructured explorations that freehand sketching encouraged. “A drawn line can be many things,” he says, whereas a digitized line has to be just one thing.27

  Back in 1996, the architecture professors Mark Gross and Ellen Yi-Luen Do proposed an alternative to literal-minded CAD software. They created a conceptual blueprint of an application with a “paper-like” interface that would be able to “capture users’ intended ambiguity, vagueness, and imprecision and convey these qualities visually.” It would lend design software “the suggestive power of the sketch.”28 Since then, many other scholars have made similar proposals. Recently, a team led by Yale computer scientist Julie Dorsey created a prototype of a design application that provides a “mental canvas.” Rather than having the computer automatically translate two-dimensional drawings into three-dimensional virtual models, the system, which uses a touchscreen tablet as an input device, allows an architect to do rough sketches in three dimensions. “Designers can draw and redraw lines without being bound by the constraints of a polygonal mesh or the inflexibility of a parametric pipeline,” the team explained. “Our system allows easy iterative refinement throughout the development of an idea, without imposing geometric precision before the idea is ready for it.”29 With less pushy software, a designer’s imagination has more chance to flourish.

  THE TENSION between technology-centered and human-centered automation is not just a theoretical concern of academics. It affects decisions made every day by business executives, engineers and programmers, and government regulators. In the aviation business, the two dominant airliner manufacturers have been on different sides of the design question since the introduction of fly-by-wire systems thirty years ago. Airbus pursues a technology-centered approach. Its goal is to make its planes essentially “pilot-proof.”30 The company’s decision to replace the bulky, front-mounted control yokes that have traditionally steered planes with diminutive, side-mounted joysticks was one expression of that goal. The game-like controllers send inputs to the flight computers efficiently, with minimal manual effort, but they don’t provide pilots with tactile feedback. Consistent with the ideal of the glass cockpit, they emphasize the pilot’s role as a computer operator rather than as an aviator. Airbus has also programmed its computers to override pilots’ instructions in certain situations in order to keep the jet within the software-specified parameters of its flight envelope. The software, not the pilot, wields ultimate control.

  Boeing has taken a more human-centered tack in designing its fly-by-wire craft. In a move that would have made the Wright brothers happy, the company decided that it wouldn’t allow its flight software to override the pilot. The aviator retains final authority over maneuvers, even in extreme circumstances. And not only has Boeing kept the big yokes of yore; it has designed them to provide artificial feedback that mimics what pilots felt back when they had direct control over a plane’s steering mechanisms. Although the yokes are just sending electronic signals to computers, they’ve been programmed to provide resistance and other tactile cues that simulate the feel of the movements of the plane’s ailerons, elevators, and other control surfaces. Research has found that tactile, or haptic, feedback is significantly more effective than visual cues alone in alerting pilots to important changes in a plane’s orientation and operation, according to John Lee. And because the brain processes tactile signals in a different way than visual signals, “haptic warnings” don’t tend to “interfere with the performance of concurrent visual tasks.”31 In a sense, the synthetic, tactile feedback takes Boeing pilots out of the glass cockpit. They may not wear their jumbo jets the way Wiley Post wore his little Lockheed Vega, but they are more involved in the bodily experience of flight than are their counterparts on Airbus flight decks.
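
  The contrast between the two philosophies can be reduced, with heavy simplification on my part, to whether the software clamps a pilot’s command or merely warns about it. The pitch limit and function names below are invented; neither manufacturer’s actual control laws are anywhere near this simple.

```python
MAX_PITCH_DEG = 30.0   # assumed envelope limit, for illustration only

def hard_protection(commanded_pitch_deg):
    """Technology-centered: the command is limited to the envelope, no matter what."""
    return max(-MAX_PITCH_DEG, min(MAX_PITCH_DEG, commanded_pitch_deg))

def soft_protection(commanded_pitch_deg, warn=print):
    """Human-centered: the pilot is warned, but the command is honored."""
    if abs(commanded_pitch_deg) > MAX_PITCH_DEG:
        warn(f"pitch {commanded_pitch_deg:.0f} deg exceeds the envelope")
    return commanded_pitch_deg

print(hard_protection(45))   # 30.0 -- the request is quietly trimmed back
print(soft_protection(45))   # warns, then returns 45 -- authority stays with the pilot
```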

  Airbus makes magnificent planes. Some commercial pilots prefer them to Boeing’s jets, and the safety records of the two manufacturers are pretty much identical. But recent incidents reveal the shortcomings of Airbus’s technology-centered approach. Some aviation experts believe that the design of the Airbus cockpit played a part in the Air France disaster. The voice-recorder transcript revealed that the whole time the pilot controlling the plane, Pierre-Cédric Bonin, was pulling back on his sidestick, his copilot, David Robert, was oblivious to Bonin’s fateful mistake. In a Boeing cockpit, each pilot has a clear view of the other pilot’s yoke and how it’s being handled. If that weren’t enough, the two yokes operate as a single unit. If one pilot pulls back on his yoke, the other pilot’s goes back too. Through both visual and haptic cues, the pilots stay in sync. The Airbus sidesticks, in contrast, are not in clear view, they work with much subtler motions, and they operate independently. It’s easy for a pilot to miss what his colleague is doing, particularly in emergencies when stress rises and focus narrows.

  Had Robert seen and corrected Bonin’s error early on, the pilots might well have regained control of the A330. The Air France crash, Chesley Sullenberger has said, would have been “much less likely to happen” if the pilots had been flying in a Boeing cockpit with its human-centered controls.32 Even Bernard Ziegler, the brilliant and proud French engineer who served as Airbus’s top designer until his retirement in 1997, recently expressed misgivings about his company’s design philosophy. “Sometimes I wonder if we made an airplane that is too easy to fly,” he said to William Langewiesche, the writer, during an interview in Toulouse, where Airbus has its headquarters. “Because in a difficult airplane the crews may stay more alert.” He went on to suggest that Airbus “should have built a kicker into the pilots’ seats.”33 He may have been joking, but his comment jibes with what human-factors researchers have learned about the maintenance of human skills and attentiveness. Sometimes a good kick, or its technological equivalent, is exactly what an automated system needs to give its operators.

  When the FAA, in its 2013 safety alert for operators, suggested that airlines encourage pilots to assume manual control of their planes more frequently during flights, it was also taking a stand, if a tentative one, in favor of human-centered automation. Keeping the pilot more firmly in the loop, the agency had come to realize, could reduce the chances of human error, temper the consequences of automation failure, and make air travel even safer than it already is. More automation is not always the wisest choice. The FAA, which employs a large and respected group of human-factors researchers, is also paying close attention to ergonomics as it plans its ambitious “NextGen” overhaul of the nation’s air-traffic-control system. One of the project’s overarching goals is to “create aerospace systems that adapt to, compensate for, and augment the performance of the human.”34

  In the financial industry, the Royal Bank of Canada is also going against the grain of technology-centered automation. At its Wall Street trading desk, it has installed a proprietary software program, called THOR, that actually slows down the transmission of buy and sell orders in a way that protects them from the algorithmic manipulations of high-speed traders. By slowing the orders, RBC has found, trades often end up being executed on more attractive terms for its customers. The bank admits that it’s making a trade-off in resisting the prevailing technological imperative of speedy data flows. By eschewing high-speed trading, it makes a little less money on each trade. But it believes that, over the long run, the strengthening of client loyalty and the reduction of risk will lead to higher profits overall.35
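
  The book doesn’t detail how THOR works internally. One commonly described way to achieve this kind of protection, assumed here purely for illustration, is to pace the pieces of an order so that they reach every trading venue at the same instant, leaving fast intermediaries no window in which to react. The venue names and latencies below are made up.

```python
# Hypothetical one-way latencies, in milliseconds, from the trading desk to each venue.
latency_ms = {"venue A": 0.3, "venue B": 1.1, "venue C": 2.0}

def pacing_delays(latency_ms):
    """Hold back the faster routes so every slice of the order lands at the same moment."""
    slowest = max(latency_ms.values())
    return {venue: round(slowest - lat, 2) for venue, lat in latency_ms.items()}

print(pacing_delays(latency_ms))
# {'venue A': 1.7, 'venue B': 0.9, 'venue C': 0.0}
```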

  One former RBC executive, Brad Katsuyama, is going even further. Having watched stock markets become skewed in favor of high-frequency traders, he spearheaded the creation of a new and fairer exchange, called IEX. Opened late in 2013, IEX imposes controls on automated systems. Its software manages the flow of data to ensure that all members of the exchange receive pricing and other information at the same time, neutralizing the advantages enjoyed by predatory trading firms that situate their computers next door to exchanges. And IEX forbids certain kinds of trades and fee schemes that give an edge to speedy algorithms. Katsuyama and his colleagues are using sophisticated technology to level the playing field between people and computers. Some national regulatory agencies are also trying to put the brakes on automated trading, through laws and regulations. In 2012, France placed a small tax on stock trades, and Italy followed suit a year later. Because high-frequency-trading algorithms are usually designed to execute volume-based arbitrage strategies—each trade returns only a minuscule profit, but millions of trades are made in a matter of moments—even a tiny transaction tax can render the programs much less attractive.
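
  A bit of back-of-the-envelope arithmetic, with numbers invented for the purpose rather than drawn from either country’s actual tax, shows why even a small levy bites when per-trade profits are razor thin.

```python
trades_per_day = 1_000_000   # assumed daily volume for a high-frequency strategy
notional       = 10_000.00   # assumed dollars traded per trade
profit_rate    = 0.0001      # assume 0.01% of notional captured per trade
tax_rate       = 0.0002      # assume a 0.02% transaction tax

gross = trades_per_day * notional * profit_rate
tax   = trades_per_day * notional * tax_rate
print(f"gross: ${gross:,.0f}  tax: ${tax:,.0f}  net: ${gross - tax:,.0f}")
# gross: $1,000,000  tax: $2,000,000  net: $-1,000,000
```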

  SUCH ATTEMPTS to rein in automation are encouraging. They show that at least some businesses and government agencies are willing to question the prevailing technology-first attitude. But these efforts remain exceptions to the rule, and their continued success is far from assured. Once technology-centered automation has taken hold in a field, it becomes very hard to alter the course of progress. The software comes to shape how work is done, how operations are organized, what consumers expect, and how profits are made. It becomes an economic and a social fixture. This process is an example of what the historian Thomas Hughes calls “technological momentum.”36 In its early development, a new technology is malleable; its form and use can be shaped not only by the desires of its designers but also by the concerns of those who use it and the interests of society as a whole. But once the technology becomes embedded in physical infrastructure, commercial and economic arrangements, and personal and political norms and expectations, changing it becomes enormously difficult. The technology is at that point an integral component of the social status quo. Having amassed great inertial force, it continues down the path it’s on. Particular technological components will still become outdated, of course, but they’ll tend to be replaced by new ones that refine and perpetuate the existing modes of operation and the related measures of performance and success.

 
