The congeniality of hand tools encourages us to take responsibility for their use. Because we sense the tools as extensions of our bodies, parts of ourselves, we have little choice but to be intimately involved in the ethical choices they present. The scythe doesn’t choose to slash or spare the flowers; the mower does. As we become more expert in the use of a tool, our sense of responsibility for it naturally strengthens. To the novice mower, a scythe may feel like a foreign object in the hands; to the accomplished mower, hands and scythe become one thing. Talent tightens the bond between an instrument and its user. This feeling of physical and ethical entanglement doesn’t have to go away as technologies become more complex. In reporting on his historic solo flight across the Atlantic in 1927, Charles Lindbergh spoke of his plane and himself as if they were a single being: “We have made this flight across the ocean, not I or it.”24 The airplane was a complicated system encompassing many components, but to a skilled pilot it still had the intimate quality of a hand tool. The love that lays the swale in rows is also the love that parts the clouds for the stick-and-rudder man.
Automation weakens the bond between tool and user not because computer-controlled systems are complex but because they ask so little of us. They hide their workings in secret code. They resist any involvement of the operator beyond the bare minimum. They discourage the development of skillfulness in their use. Automation ends up having an anesthetizing effect. We no longer feel our tools as parts of ourselves. In a seminal 1960 paper called “Man-Computer Symbiosis,” the psychologist and engineer J. C. R. Licklider described the shift in our relation to technology well. “In the man-machine systems of the past,” he wrote, “the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye.” The introduction of the computer changed all that. “ ‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.”25 The more automated everything gets, the easier it becomes to see technology as a kind of implacable, alien force that lies beyond our control and influence. Attempting to alter the path of its development seems futile. We press the on switch and follow the programmed routine.
To adopt such a submissive posture, however understandable it may be, is to shirk our responsibility for managing progress. A robotic harvesting machine may have no one in the driver’s seat, but it is every bit as much a product of conscious human thought as a humble scythe is. We may not incorporate the machine into our brain maps, as we do the hand tool, but on an ethical level the machine still operates as an extension of our will. Its intentions are our intentions. If a robot scares a bright green snake (or worse), we’re still to blame. We shirk a deeper responsibility as well: that of overseeing the conditions for the construction of the self. As computer systems and software applications come to play an ever larger role in shaping our lives and the world, we have an obligation to be more, not less, involved in decisions about their design and use—before technological momentum forecloses our options. We should be careful about what we make.
If that sounds naive or hopeless, it’s because we have been misled by a metaphor. We’ve defined our relation with technology not as that of body and limb or even that of sibling and sibling but as that of master and slave. The idea goes way back. It took hold at the dawn of Western philosophical thought, emerging first, as Langdon Winner has described, with the ancient Athenians.26 Aristotle, in discussing the operation of households at the beginning of his Politics, argued that slaves and tools are essentially equivalent, the former acting as “animate instruments” and the latter as “inanimate instruments” in the service of the master of the house. If tools could somehow become animate, Aristotle posited, they would be able to substitute directly for the labor of slaves. “There is only one condition on which we can imagine managers not needing subordinates, and masters not needing slaves,” he mused, anticipating the arrival of computer automation and even machine learning. “This condition would be that each [inanimate] instrument could do its own work, at the word of command or by intelligent anticipation.” It would be “as if a shuttle should weave itself, and a plectrum should do its own harp-playing.”27
The conception of tools as slaves has colored our thinking ever since. It informs society’s recurring dream of emancipation from toil, the one that was voiced by Marx and Wilde and Keynes and that continues to find expression in the works of technophiles and technophobes alike. “Wilde was right,” Evgeny Morozov, the technology critic, wrote in his 2013 book To Save Everything, Click Here: “mechanical slavery is the enabler of human liberation.”28 We’ll all soon have “personal workbots” at our “beck and call,” Kevin Kelly, the technology enthusiast, proclaimed in a Wired essay that same year. “They will do jobs we have been doing, and do them much better than we can.” More than that, they will free us to discover “new tasks that expand who we are. They will let us focus on becoming more human than we were.”29 Mother Jones’s Kevin Drum, also writing in 2013, declared that “a robotic paradise of leisure and contemplation eventually awaits us.” By 2040, he predicted, our super-smart, super-reliable, super-compliant computer slaves—“they never get tired, they’re never ill-tempered, they never make mistakes”—will have rescued us from labor and delivered us into an upgraded Eden. “Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”30
With its roles reversed, the metaphor also informs society’s nightmares about technology. As we become dependent on our technological slaves, the thinking goes, we turn into slaves ourselves. From the eighteenth century on, social critics have routinely portrayed factory machinery as forcing workers into bondage. “Masses of labourers,” wrote Marx and Engels in their Communist Manifesto, “are daily and hourly enslaved by the machine.”31 Today, people complain all the time about feeling like slaves to their appliances and gadgets. “Smart devices are sometimes empowering,” observed The Economist in “Slaves to the Smartphone,” an article published in 2012. “But for most people the servant has become the master.”32 More dramatically still, the idea of a robot uprising, in which computers with artificial intelligence transform themselves from our slaves to our masters, has for a century been a central theme in dystopian fantasies about the future. The very word robot, coined by a science-fiction writer in 1920, comes from robota, a Czech term for servitude.
The master-slave metaphor, in addition to being morally fraught, distorts the way we look at technology. It reinforces the sense that our tools are separate from ourselves, that our instruments have an agency independent of our own. We start to judge our technologies not on what they enable us to do but rather on their intrinsic qualities as products—their cleverness, their efficiency, their novelty, their style. We choose a tool because it’s new or it’s cool or it’s fast, not because it brings us more fully into the world and expands the ground of our experiences and perceptions. We become mere consumers of technology.
More broadly, the metaphor encourages society to take a simplistic and fatalistic view of technology and progress. If we assume that our tools act as slaves on our behalf, always working in our best interest, then any attempt to place limits on technology becomes hard to defend. Each advance grants us greater freedom and takes us a stride closer to, if not utopia, then at least the best of all possible worlds. Any misstep, we tell ourselves, will be quickly corrected by subsequent innovations. If we just let progress do its thing, it will find remedies for the problems it creates. “Technology is not neutral but serves as an overwhelming positive force in human culture,” writes Kelly, expressing the self-serving Silicon Valley ideology that in recent years has gained wide currency. “We have a moral obligation to increase technology because it increases opportunities.”33 The sense of moral obligation strengthens with the advance of automation, which, after all, provides us with the most animate of instruments, the slaves that, as Aristotle anticipated, are most capable of releasing us from our labors.
The belief in technology as a benevolent, self-healing, autonomous force is seductive. It allows us to feel optimistic about the future while relieving us of responsibility for that future. It particularly suits the interests of those who have become extraordinarily wealthy through the labor-saving, profit-concentrating effects of automated systems and the computers that control them. It provides our new plutocrats with a heroic narrative in which they play starring roles: recent job losses may be unfortunate, but they’re a necessary evil on the path to the human race’s eventual emancipation by the computerized slaves that our benevolent enterprises are creating. Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley’s most prominent thinkers, grants that “a robotics revolution would basically have the effect of people losing their jobs.” But, he hastens to add, “it would have the benefit of freeing people up to do many other things.”34 Being freed up sounds a lot more pleasant than being fired.
There’s a callousness to such grandiose futurism. As history reminds us, high-flown rhetoric about using technology to liberate workers often masks a contempt for labor. It strains credulity to imagine today’s technology moguls, with their libertarian leanings and impatience with government, agreeing to the kind of vast wealth-redistribution scheme that would be necessary to fund the self-actualizing leisure-time pursuits of the jobless multitudes. Even if society were to come up with some magic spell, or magic algorithm, for equitably parceling out the spoils of automation, there’s good reason to doubt whether anything resembling the “economic bliss” imagined by Keynes would ensue. In a prescient passage in The Human Condition, Hannah Arendt observed that if automation’s utopian promise were actually to pan out, the result would probably feel less like paradise than like a cruel practical joke. The whole of modern society, she wrote, has been organized as “a laboring society,” where working for pay, and then spending that pay, is the way people define themselves and measure their worth. Most of the “higher and more meaningful activities” revered in the distant past have been pushed to the margin or forgotten, and “only solitary individuals are left who consider what they are doing in terms of work and not in terms of making a living.” For technology to fulfill humankind’s abiding “wish to be liberated from labor’s ‘toil and trouble’ ” at this point would be perverse. It would cast us deeper into a purgatory of malaise. What automation confronts us with, Arendt concluded, “is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.”35 Utopianism, she understood, is a form of miswanting.
The social and economic problems caused or exacerbated by automation aren’t going to be solved by throwing more software at them. Our inanimate slaves aren’t going to chauffeur us to a utopia of comfort and harmony. If the problems are to be solved, or at least attenuated, the public will need to grapple with them in their full complexity. To ensure society’s well-being in the future, we may need to place limits on automation. We may have to shift our view of progress, putting the emphasis on social and personal flourishing rather than technological advancement. We may even have to entertain an idea that’s come to be considered unthinkable, at least in business circles: giving people precedence over machines.
IN 1986, a Canadian ethnographer named Richard Kool wrote Mihaly Csikszentmihalyi a letter. Kool had read some of the professor’s early work about flow, and he had been reminded of his own research into the Shushwap tribe, an aboriginal people who lived in the Thompson River Valley in what is now British Columbia. The Shushwap territory was “a plentiful land,” Kool noted. It was blessed with an abundance of fish and game and edible roots and berries. The Shushwaps did not have to wander to survive. They built villages and developed “elaborate technologies for very effectively using the resources in the environment.” They viewed their lives as good and rich. But the tribe’s elders saw that in such comfortable circumstances lay danger. “The world became too predictable and the challenge began to go out of life. Without challenge, life had no meaning.” And so, every thirty years or so, the Shushwaps, led by their elders, would uproot themselves. They’d leave their homes, abandon their villages, and head off into the wilds. “The entire population,” Kool reported, “would move to a different part of the Shushwap land.” And there they would discover a fresh set of challenges. “There were new streams to figure out, new game trails to learn, new areas where the balsam root would be plentiful. Now life would regain its meaning and be worth living. Everyone would feel rejuvenated and happy.”36
E. J. MEADE, the Colorado architect, said something revealing when I talked to him about his firm’s adoption of computer-aided design systems. The hard part wasn’t learning how to use the software. That was pretty easy. What was tough was learning how not to use it. The speed, ease, and sheer novelty of CAD made it enticing. The first instinct of the firm’s designers was to plop themselves down at their computers at the start of a project. But when they took a hard look at their work, they realized that the software was a hindrance to creativity. It was closing off aesthetic and functional possibilities even as it was quickening the pace of production. As Meade and his colleagues thought more critically about the effects of automation, they began to resist the technology’s temptations. They found themselves “bringing the computer in later and later” in the course of a project. For the early, formative stages of the work, they returned to their sketchbooks and sheets of tracing paper, their models of cardboard and foam core. “At the back end, it’s brilliant,” Meade said in summing up what he’s learned about CAD. “The convenience factor is great.” But the computer’s “expedience” can be perilous. For the unwary and the uncritical, it can overwhelm other, more important considerations. “You have to dig deep into the tool to avoid being manipulated by it.”
A year or so before I talked to Meade—just as I was beginning the research for this book—I had a chance meeting on a college campus with a freelance photographer who was working on an assignment for the school. He was standing idly under a tree, waiting for some uncooperative clouds to get out of the way of the sun. I noticed he had a large-format film camera set up on a bulky tripod—it was hard to miss, as it looked almost absurdly old-fashioned—and I asked him why he was still using film. He told me that he had eagerly embraced digital photography a few years earlier. He had replaced his film cameras and his darkroom with digital cameras and a computer running the latest image-processing software. But after a few months, he switched back. It wasn’t that he was dissatisfied with the operation of the equipment or the resolution or accuracy of the images. It was that the way he went about his work had changed, and not for the better.
The constraints inherent in taking and developing pictures on film—the expense, the toil, the uncertainty—had encouraged him to work slowly when he was on a shoot, with deliberation, thoughtfulness, and a deep, physical sense of presence. Before he took a picture, he would compose the shot meticulously in his mind, attending to the scene’s light, color, framing, and form. He would wait patiently for the right moment to release the shutter. With a digital camera, he could work faster. He could take a slew of images, one after the other, and then use his computer to sort through them and crop and tweak the most promising ones. The act of composition took place after a photo was taken. The change felt intoxicating at first. But he found himself disappointed with the results. The images left him cold. Film, he realized, imposed a discipline of perception, of seeing, which led to richer, more artful, more moving photographs. Film demanded more of him. And so he went back to the older technology.
Neither the architect nor the photographer was the least bit antagonistic toward computers. Neither was motivated by abstract concerns about a loss of agency or autonomy. Neither was a crusader. Both just wanted the best tool for the job—the tool that would encourage and enable them to do their finest, most fulfilling work.
What they came to realize was that the newest, most automated, most expedient tool is not always the best choice. Although I’m sure they would bristle at being likened to the Luddites, their decision to forgo the latest technology, at least in some stages of their work, was an act of rebellion resembling that of the old English machine-breakers, if without the fury and the violence. Like the Luddites, they understood that decisions about technology are also decisions about ways of working and ways of living—and they took control of those decisions rather than ceding them to others or giving way to the momentum of progress. They stepped back and thought critically about technology.
As a society, we’ve become suspicious of such acts. Out of ignorance or laziness or timidity, we’ve turned the Luddites into caricatures, emblems of backwardness. We assume that anyone who rejects a new tool in favor of an older one is guilty of nostalgia, of making choices sentimentally rather than rationally. But the real sentimental fallacy is the assumption that the new thing is always better suited to our purposes and intentions than the old thing. That’s the view of a child, naive and pliable. What makes one tool superior to another has nothing to do with how new it is. What matters is how it enlarges us or diminishes us, how it shapes our experience of nature and culture and one another. To cede choices about the texture of our daily lives to a grand abstraction called progress is folly.