The commercial aviation system, for example, now depends on the precision of computer control. Computers are better than pilots at plotting the most fuel-efficient routes, and computer-controlled planes can fly closer together than can planes operated by people. There’s a fundamental tension between the desire to enhance pilots’ manual flying skills and the pursuit of ever higher levels of automation in the skies. Airlines are unlikely to sacrifice profits and regulators are unlikely to curtail the capacity of the aviation system in order to give pilots significantly more time to practice flying by hand. The rare automation-related disaster, however horrifying, may be accepted as a cost of an efficient and profitable transport system. In health care, insurers and hospital companies, not to mention politicians, look to automation as a quick fix to lower costs and boost productivity. They’ll almost certainly keep ratcheting up the pressure on providers to automate medical practices and procedures in order to save money, even if doctors have worries about the long-term erosion of their most subtle and valuable talents. On financial exchanges, computers can execute a trade in ten microseconds—that’s one hundred-thousandth of a second—but it takes the human brain nearly a quarter of a second to respond to an event or other stimulus. A computer can process tens of thousands of trades in the blink of a trader’s eye.37 The speed of the computer has taken the person out of the picture.
It’s commonly assumed that any technology that comes to be broadly adopted in a field, and hence gains momentum, must be the best one for the job. Progress, in this view, is a quasi-Darwinian process. Many different technologies are invented, they compete for users and buyers, and after a period of rigorous testing and comparison the marketplace chooses the best of the bunch. Only the fittest tools survive. Society can thus be confident that the technologies it employs are the optimum ones—and that the alternatives discarded along the way were flawed in some fatal way. It’s a reassuring view of progress, founded on, in the words of the late historian David Noble, “a simple faith in objective science, economic rationality, and the market.” But as Noble went on to explain in his 1984 book Forces of Production, it’s a distorted view: “It portrays technological development as an autonomous and neutral technical process, on the one hand, and a coldly rational and self-regulating process, on the other, neither of which accounts for people, power, institutions, competing values, or different dreams.”38 In place of the complexities, vagaries, and intrigues of history, the prevailing view of technological progress presents us with a simplistic, retrospective fantasy.
Noble illustrated the tangled way technologies actually gain acceptance and momentum through the story of the automation of the machine tool industry in the years after World War II. Inventors and engineers developed several different techniques for programming lathes, drill presses, and other factory tools, and each of the control methods had advantages and disadvantages. One of the simplest and most ingenious of the systems, called Specialmatic, was invented by a Princeton-trained engineer named Felix P. Caruthers and marketed by a small New York company called Automation Specialties. Using an array of keys and dials to encode and control the workings of a machine, Specialmatic put the power of programming into the hands of skilled machinists on the factory floor. A machine operator, explained Noble, “could set and adjust feeds and speeds, relying upon accumulated experience with the sights, sounds, and smells of metal cutting.”39 In addition to bringing the tacit know-how of the experienced craftsman into the automated system, Specialmatic had an economic advantage: a manufacturer did not have to pay a squad of engineers and consultants to program its equipment. Caruthers’s technology earned accolades from American Machinist magazine, which noted that Specialmatic “is designed to permit complete set-up and programming at the machine.” It would allow the machinist to gain the efficiency benefits of automation while retaining “full control of his machine throughout its entire machining cycle.”40
But Specialmatic never gained a foothold in the market. While Caruthers was working on his invention, the U.S. Air Force was plowing money into a research program, conducted by an MIT team with long-standing ties to the military, to develop “numerical control,” a digital coding technique that was a forerunner of modern software programming. Not only did numerical control enjoy the benefits of a generous government subsidy and a prestigious academic pedigree; it appealed to business owners and managers who, faced with unremitting labor tensions, yearned to gain more control over the operation of machinery in order to undercut the power of workers and their unions. Numerical control also had the glow of a cutting-edge technology—it was carried along by the burgeoning postwar excitement over digital computers. The MIT system may have been, as the author of a Society of Manufacturing Engineers paper would later write, “a complicated, expensive monstrosity,”41 but industrial giants like GE and Westinghouse rushed to embrace the technology, never giving alternatives like Specialmatic a chance. Far from winning a tough evolutionary battle for survival, numerical control was declared the victor before competition even began. Programming took precedence over people, and the momentum behind the technology-first design philosophy grew. As for the general public, it never knew that a choice had been made.
Engineers and programmers shouldn’t bear all the blame for the ill effects of technology-centered automation. They may be guilty at times of pursuing narrowly mechanistic dreams and desires, and they may be susceptible to the “technical arrogance” that “gives people an illusion of illimitable power,” in the words of the physicist Freeman Dyson.42 But they’re also responding to the demands of employers and clients. Software developers always face a trade-off in writing programs for automating work. Taking the steps necessary to promote the development of expertise—restricting the scope of automation, giving a greater and more active role to people, encouraging the development of automaticity through rehearsal and repetition—entails a sacrifice of speed and yield. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely, if ever, accept such a trade-off. The main reason they invest in automation, after all, is to reduce labor costs and streamline operations.
As individuals, too, we almost always seek efficiency and convenience when we decide which software application or computing device to use. We pick the program or gadget that lightens our load and frees up our time, not the one that makes us work harder and longer. Technology companies naturally cater to such desires when they design their wares. They compete fiercely to offer the product that requires the least effort and thought to use. “At Google and all these places,” says Google executive Alan Eagle, explaining the guiding philosophy of many software and internet businesses, “we make technology as brain-dead easy to use as possible.”43 When it comes to the development and use of commercial software, whether it underpins an industrial system or a smartphone app, abstract concerns about the fate of human talent can’t compete with the prospect of saving time and money.
I asked Parasuraman whether he thinks society will come to use automation more wisely in the future, striking a better balance between computer calculation and personal judgment, between the pursuit of efficiency and the development of expertise. He paused a moment and then, with a wry laugh, said, “I’m not very sanguine.”
Interlude, with Grave Robber
I WAS IN A FIX. I had—by necessity, not choice—struck up an alliance with a demented grave robber named Seth Briars. “I don’t eat, I don’t sleep, I don’t wash, and I don’t care,” Seth had informed me, not without a measure of pride, shortly after we met in the cemetery beside Coot’s Chapel. He knew the whereabouts of certain individuals I was seeking, and in exchange for leading me to them, he had demanded that I help him cart a load of fresh corpses out past Critchley’s Ranch to a dusty ghost town called Tumbleweed. I drove Seth’s horse-drawn wagon, while he stayed in the back, rifling the dead for valuables. The trip was a trial. We made it through an ambush by highwaymen along the route—with firearms, I was more than handy—but when I tried to cross a rickety bridge near Gaptooth Ridge, the weight of the bodies shifted and I lost control of the horses. The wagon careened into a ravine, and I died in a volcanic, screen-coating eruption of blood. I came back to life after a couple of purgatorial seconds, only to go through the ordeal again. After a half-dozen failed attempts, I began to despair of ever completing the mission.
The game I was playing, an exquisitely crafted, goofily written open-world shooter called Red Dead Redemption, is set in the early years of the last century, in a mythical southwestern border territory named New Austin. Its plot is pure Peckinpah. When you start the game, you assume the role of a stoic outlaw-turned-rancher named John Marston, whose right cheek is riven by a couple of long, symbolically deep scars. Marston is being blackmailed into tracking down his old criminal associates by federal agents who are holding his wife and young son hostage. To complete the game, you have to guide the gunslinger through various feats of skill and cunning, each a little tougher than the one preceding it.
After a few more tries, I finally did make it over that bridge, grisly cargo in tow. In fact, after many mayhem-filled hours in front of my Xbox-connected flat-screen TV, I managed to get through all of the game’s fifty-odd missions. As my reward, I got to watch myself—John Marston, that is—be gunned down by the very agents who had forced him into the quest. Gruesome ending aside, I came away from the game with a feeling of accomplishment. I had roped mustangs, shot and skinned coyotes, robbed trains, won a small fortune playing poker, fought alongside Mexican revolutionaries, rescued harlots from drunken louts, and, in true Wild Bunch fashion, used a Gatling gun to send an army of thugs to Kingdom Come. I had been tested, and my middle-aged reflexes had risen to the challenge. It may not have been an epic win, but it was a win.
Video games tend to be loathed by people who have never played them. That’s understandable, given the gore involved, but it’s a shame. In addition to their considerable ingenuity and occasional beauty, the best games provide a model for the design of software. They show how applications can encourage the development of skills rather than their atrophy. To master a video game, a player has to struggle through challenges of increasing difficulty, always pushing the limits of his talent. Every mission has a goal, there are rewards for doing well, and the feedback (an eruption of blood, perhaps) is immediate and often visceral. Games promote a state of flow, inspiring players to repeat tricky maneuvers until they become second nature. The skill a gamer learns may be trivial—how to manipulate a plastic controller to drive an imaginary wagon over an imaginary bridge, say—but he’ll learn it thoroughly, and he’ll be able to exercise it again in the next mission or the next game. He’ll become an expert, and he’ll have a blast along the way.*
When it comes to the software we use in our personal lives, video games are an exception. Most popular apps, gadgets, and online services are built for convenience, or, as their makers say, “usability.” Requiring only a few taps, swipes, or clicks, the programs can be mastered with little study or practice. Like the automated systems used in industry and commerce, they’ve been carefully designed to shift the burden of thought from people to computers. Even the high-end programs used by musicians, record producers, filmmakers, and photographers place an ever stronger emphasis on ease of use. Complex audio and visual effects, which once demanded expert know-how, can be achieved by pushing a button or dragging a slider. The underlying concepts need not be understood, as they’ve been incorporated into software routines. This has the very real benefit of making the software useful to a broader group of people—those who want to get the effects without the effort. But the cost of accommodating the dilettante is a demeaning of expertise.
Peter Merholz, a respected software-design consultant, counsels programmers to seek “frictionlessness” and “simplicity” in their products. Successful devices and applications, he says, hide their technical complexity behind user-friendly interfaces. They minimize the cognitive load they place on users: “Simple things don’t require a lot of thought. Choices are eliminated, recall is not required.”1 That’s a recipe for creating the kinds of applications that, as Christof van Nimwegen’s Cannibals and Missionaries experiment demonstrated, bypass the mental processes of learning, skill building, and memorization. The tools demand little of us and, cognitively speaking, give little to us.
What Merholz calls the “it just works” design philosophy has a lot going for it. Anyone who has struggled to set the alarm on a digital clock or change the settings on a WiFi router or figure out Microsoft Word’s toolbars knows the value of simplicity. Needlessly complicated products waste time without much compensation. It’s true we don’t need to be experts at everything, but as software writers take to scripting processes of intellectual inquiry and social attachment, frictionlessness becomes a problematic ideal. It can sap us not only of know-how but of our sense that know-how is something important and worth cultivating. Think of the algorithms for reviewing and correcting spelling that are built into virtually every writing and messaging application these days. Spell checkers once served as tutors. They’d highlight possible errors, calling your attention to them and, in the process, giving you a little spelling lesson. You learned as you used them. Now, the tools incorporate autocorrect functions. They instantly and surreptitiously clean up your mistakes, without alerting you to them. There’s no feedback, no “friction.” You see nothing and learn nothing.
Or think of Google’s search engine. In its original form, it presented you with nothing but an empty text box. The interface was a model of simplicity, but the service still required you to think about your query, to consciously compose and refine a set of keywords to get the best results. That’s no longer necessary. In 2008, the company introduced Google Suggest, an autocomplete routine that uses prediction algorithms to anticipate what you’re looking for. Now, as soon as you type a letter into the search box, Google offers a set of suggestions for how to phrase your query. With each succeeding letter, a new set of suggestions pops up. Underlying the company’s hyperactive solicitude is a dogged, almost monomaniacal pursuit of efficiency. Taking the misanthropic view of automation, Google has come to see human cognition as creaky and inexact, a cumbersome biological process better handled by a computer. “I envision some years from now that the majority of search queries will be answered without you actually asking,” says Ray Kurzweil, the inventor and futurist who in 2012 was appointed Google’s director of engineering. The company will “just know this is something that you’re going to want to see.”2 The ultimate goal is to fully automate the act of searching, to take human volition out of the picture.
Social networks like Facebook seem impelled by a similar aspiration. Through the statistical “discovery” of potential friends, the provision of “Like” buttons and other clickable tokens of affection, and the automated management of many of the time-consuming aspects of personal relations, they seek to streamline the messy process of affiliation. Facebook’s founder, Mark Zuckerberg, celebrates all of this as “frictionless sharing”—the removal of conscious effort from socializing. But there’s something repugnant about applying the bureaucratic ideals of speed, productivity, and standardization to our relations with others. The most meaningful bonds aren’t forged through transactions in a marketplace or other routinized exchanges of data. People aren’t nodes on a network grid. The bonds require trust and courtesy and sacrifice, all of which, at least to a technocrat’s mind, are sources of inefficiency and inconvenience. Removing the friction from social attachments doesn’t strengthen them; it weakens them. It makes them more like the attachments between consumers and products—easily formed and just as easily broken.
Like meddlesome parents who never let their kids do anything on their own, Google, Facebook, and other makers of personal software end up demeaning and diminishing qualities of character that, at least in the past, have been seen as essential to a full and vigorous life: ingenuity, curiosity, independence, perseverance, daring. It may be that in the future we’ll only experience such virtues vicariously, through the exploits of action figures like John Marston in the fantasy worlds we enter through screens.
* In suggesting video games as a model for programmers, I’m not endorsing the voguish software-design practice that goes by the ugly name “gamification.” That’s when an app or a website uses a game-like reward system to motivate or manipulate people into repeating some prescribed activity. Building on the operant-conditioning experiments of the psychologist B. F. Skinner, gamification exploits the flow state’s dark side. Seeking to sustain the pleasures and rewards of flow, people can become obsessive in their use of the software. Computerized slot machines, to take one notorious example, are carefully designed to promote an addictive form of flow in their players, as Natasha Dow Schüll describes in her chilling book Addiction by Design: Machine Gambling in Las Vegas (Princeton: Princeton University Press, 2012). An experience that is normally “life affirming, restorative, and enriching,” she writes, becomes for gamblers “depleting, entrapping, and associated with a loss of autonomy.” Even when used for ostensibly benign purposes, such as dieting, gamification wields a cynical power. Far from being an antidote to technology-centered design, it takes the practice to an extreme. It seeks to automate human will.