Brin is mistaken, though, in suggesting that Glass and other such devices represent a break from computing’s past. They give the established technological momentum even more force. As the smartphone and then the tablet made general-purpose, networked computers more portable and personable, they also made it possible for software companies to program many more aspects of our lives. Together with cheap, friendly apps, they allowed the cloud-computing infrastructure to be used to automate even the most mundane of chores. Computerized glasses and wristwatches further extend automation’s reach. They make it easier to receive turn-by-turn directions when walking or riding a bike, for instance, or to get algorithmically generated advice on where to grab your next meal or what clothes to put on for a night out. They also serve as sensors for the body, allowing information about your location, thoughts, and health to be transmitted back to the cloud. That in turn provides software writers and entrepreneurs with yet more opportunities to automate the quotidian.
WE’VE PUT into motion a cycle that, depending on your point of view, is either virtuous or vicious. As we grow more reliant on applications and algorithms, we become less capable of acting without their aid—we experience skill tunneling as well as attentional tunneling. That makes the software more indispensable still. Automation breeds automation. With everyone expecting to manage their lives through screens, society naturally adapts its routines and procedures to fit the routines and procedures of the computer. What can’t be accomplished with software—what isn’t amenable to computation and hence resists automation—begins to seem dispensable.
The PARC researchers argued, back in the early 1990s, that we’d know computing had achieved ubiquity when we were no longer aware of its presence. Computers would be so thoroughly enmeshed in our lives that they’d be invisible to us. We’d “use them unconsciously to accomplish everyday tasks.”26 That seemed a pipe dream in the days when bulky PCs drew attention to themselves by freezing, crashing, or otherwise misbehaving at inopportune moments. It doesn’t seem like such a pipe dream anymore. Many computer companies and software houses now say they’re working to make their products invisible. “I am super excited about technologies that disappear completely,” declares Jack Dorsey, a prominent Silicon Valley entrepreneur. “We’re doing this with Twitter, and we’re doing this with [the online credit-card processor] Square.”27 When Mark Zuckerberg calls Facebook “a utility,” as he frequently does, he’s signaling that he wants the social network to merge into our lives the way the telephone system and electric grid did.28 Apple has promoted the iPad as a device that “gets out of the way.” Picking up on the theme, Google markets Glass as a means of “getting technology out of the way.” In a 2013 speech, the company’s then head of social networking, Vic Gundotra, even put a flower-power spin on the slogan: “Technology should get out of the way so you can live, learn, and love.”29
The technologists may be guilty of bombast, but they’re not guilty of cynicism. They’re genuine in their belief that the more computerized our lives become, the happier we’ll be. That, after all, has been their own experience. But their aspiration is self-serving nonetheless. For a popular technology to become invisible, it first has to become so essential to people’s existence that they can no longer imagine being without it. It’s only when a technology surrounds us that it disappears from view. Justin Rattner, Intel’s chief technology officer, has said that he expects his company’s products to become so much a part of people’s “context” that Intel will be able to provide them with “pervasive assistance.”30 Instilling such dependency in customers would also, it seems safe to say, bring in a lot more money for Intel and other computer companies. For a business, there’s nothing like turning a customer into a supplicant.
The prospect of having a complicated technology fade into the background, so it can be employed with little effort or thought, can be as appealing to those who use it as to those who sell it. “When technology gets out of the way, we are liberated from it,” the New York Times columnist Nick Bilton has written.31 But it’s not that simple. You don’t just flip a switch to make a technology invisible. It disappears only after a slow process of cultural and personal acclimation. As we habituate ourselves to it, the technology comes to exert more power over us, not less. We may be oblivious to the constraints it imposes on our lives, but the constraints remain. As the French sociologist Bruno Latour points out, the invisibility of a familiar technology is “a kind of optical illusion.” It obscures the way we’ve refashioned ourselves to accommodate the technology. The tool that we originally used to fulfill some particular intention of our own begins to impose on us its intentions, or the intentions of its maker. “If we fail to recognize,” Latour writes, “how much the use of a technique, however simple, has displaced, translated, modified, or inflected the initial intention, it is simply because we have changed the end in changing the means, and because, through a slipping of the will, we have begun to wish something quite else from what we at first desired.”32
The difficult ethical questions raised by the prospect of programming robotic cars and soldiers—who controls the software? who chooses what’s to be optimized? whose intentions and interests are reflected in the code?—are equally pertinent to the development of the applications used to automate our lives. As the programs gain more sway over us—shaping the way we work, the information we see, the routes we travel, our interactions with others—they become a form of remote control. Unlike robots or drones, we have the freedom to reject the software’s instructions and suggestions. It’s difficult, though, to escape their influence. When we launch an app, we ask to be guided—we place ourselves in the machine’s care.
Look more closely at Google Maps. When you’re traveling through a city and you consult the app, it gives you more than navigational tips; it gives you a way to think about cities. Embedded in the software is a philosophy of place, which reflects, among other things, Google’s commercial interests, the backgrounds and biases of its programmers, and the strengths and limitations of software in representing space. In 2013, the company rolled out a new version of Google Maps. Instead of providing you with the same representation of a city that everyone else sees, it generates a map that’s tailored to what Google perceives as your needs and desires, based on information the company has collected about you. The app will highlight nearby restaurants and other points of interest that friends in your social network have recommended. It will give you directions that reflect your past navigational choices. The views you see, the company says, are “unique to you, always adapting to the task you want to perform right this minute.”33
That sounds appealing, but it’s limiting. Google filters out serendipity in favor of insularity. It douses the infectious messiness of a city with an algorithmic antiseptic. What is arguably the most important way of looking at a city, as a public space shared not just with your pals but with an enormously varied group of strangers, gets lost. “Google’s urbanism,” comments the technology critic Evgeny Morozov, “is that of someone who is trying to get to a shopping mall in their self-driving car. It’s profoundly utilitarian, even selfish in character, with little to no concern for how public space is experienced. In Google’s world, public space is just something that stands between your house and the well-reviewed restaurant that you are dying to get to.”34 Expedience trumps all.
Social networks push us to present ourselves in ways that conform to the interests and prejudices of the companies that run them. Facebook, through its Timeline and other documentary features, encourages its members to think of their public image as indistinguishable from their identity. It wants to lock them into a single, uniform “self” that persists throughout their lives, unfolding in a coherent narrative beginning in childhood and ending, one presumes, with death. This fits with its founder’s narrow conception of the self and its possibilities. “You have one identity,” Mark Zuckerberg has said. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” He even argues that “having two identities for yourself is an example of a lack of integrity.”35 That view, not surprisingly, dovetails with Facebook’s desire to package its members as neat and coherent sets of data for advertisers. It has the added benefit, for the company, of making concerns about personal privacy seem less valid. If having more than one identity indicates a lack of integrity, then a yearning to keep certain thoughts or activities out of public view suggests a weakness of character. But the conception of selfhood that Facebook imposes through its software can be stifling. The self is rarely fixed. It has a protean quality. It emerges through personal exploration, and it shifts with circumstances. That’s especially true in youth, when a person’s self-conception is fluid, subject to testing, experimentation, and revision. To be locked into an identity, particularly early in one’s life, may foreclose opportunities for personal growth and fulfillment.
Every piece of software contains such hidden assumptions. Search engines, in automating intellectual inquiry, give precedence to popularity and recency over diversity of opinion, rigor of argument, or quality of expression. Like all analytical programs, they have a bias toward criteria that lend themselves to statistical analysis, downplaying those that entail the exercise of taste or other subjective judgments. Automated essay-grading algorithms encourage in students a rote mastery of the mechanics of writing. The programs are deaf to tone, uninterested in knowledge’s nuances, and actively resistant to creative expression. The deliberate breaking of a grammatical rule may delight a reader, but it’s anathema to a computer. Recommendation engines, whether suggesting a movie or a potential love interest, cater to our established desires rather than challenging us with the new and unexpected. They assume we prefer custom to adventure, predictability to whimsy. The technologies of home automation, which allow things like lighting, heating, cooking, and entertainment to be meticulously programmed, impose a Taylorist mentality on domestic life. They subtly encourage people to adapt themselves to established routines and schedules, making homes more like workplaces.
The biases in software can distort societal decisions as well as personal ones. In promoting its self-driving cars, Google has suggested that the vehicles will dramatically reduce the number of crashes, if not eliminate them entirely. “Do you know that driving accidents are the number one cause of death for young people?” Sebastian Thrun said in a 2011 speech. “And do you realize that almost all of those are due to human error and not machine error, and can therefore be prevented by machines?”36 Thrun’s argument is compelling. In regulating hazardous activities like driving, society has long given safety a high priority, and everyone appreciates the role technological innovation can play in reducing the risk of mishaps and injuries. Even here, though, things aren’t as black-and-white as Thrun implies. The ability of autonomous cars to prevent accidents and deaths remains theoretical at this point. As we’ve seen, the relationship between machinery and human error is complicated; it rarely plays out as expected. Society’s goals, moreover, are never one-dimensional. Even the desire for safety requires interrogation. We’ve always recognized that laws and behavioral norms entail trade-offs between safety and liberty, between protecting ourselves and putting ourselves at risk. We allow and sometimes encourage people to engage in dangerous hobbies, sports, and other pursuits. A full life, we know, is not a perfectly insulated life. Even when it comes to setting speed limits on highways, we balance the goal of safety with other aims.
Difficult and often politically contentious, such trade-offs shape the kind of society we live in. The question is, do we want to cede the choices to software companies? When we look to automation as a panacea for human failings, we foreclose other options. A rush to embrace autonomous cars might do more than curtail personal freedom and responsibility; it might preclude us from exploring alternative ways to reduce the probability of traffic accidents, such as strengthening driver education or promoting mass transit.
It’s worth noting that Silicon Valley’s concern with highway safety, though no doubt sincere, has been selective. The distractions caused by cell phones and smartphones have in recent years become a major factor in car crashes. An analysis by the National Safety Council implicated phone use in one-fourth of all accidents on U.S. roads in 2012.37 Yet Google and other top tech firms have made little or no effort to develop software to prevent people from calling, texting, or using apps while driving—surely a modest undertaking compared with building a car that can drive itself. Google has even sent its lobbyists into state capitals to block bills that would ban drivers from wearing Glass and other distracting eyewear. We should welcome the important contributions computer companies can make to society’s well-being, but we shouldn’t confuse those companies’ interests with our own.
IF WE don’t understand the commercial, political, intellectual, and ethical motivations of the people writing our software, or the limitations inherent in automated data processing, we open ourselves to manipulation. We risk, as Latour suggests, replacing our own intentions with those of others, without even realizing that the swap has occurred. The more we habituate ourselves to the technology, the greater the risk grows.
It’s one thing for indoor plumbing to become invisible, to fade from our view as we adapt ourselves, happily, to its presence. Even if we’re incapable of fixing a leaky faucet or troubleshooting a balky toilet, we tend to have a pretty good sense of what the pipes in our homes do—and why. Most technologies that have become invisible to us through their ubiquity are like that. Their workings, and the assumptions and interests underlying their workings, are self-evident, or at least discernible. The technologies may have unintended effects—indoor plumbing changed the way people think about hygiene and privacy38—but they rarely have hidden agendas.
It’s a very different thing for information technologies to become invisible. Even when we’re conscious of their presence in our lives, computer systems are opaque to us. Software codes are hidden from our eyes, legally protected as trade secrets in many cases. Even if we could see them, few of us would be able to make sense of them. They’re written in languages we don’t understand. The data fed into algorithms is also concealed from us, often stored in distant, tightly guarded data centers. We have little knowledge of how the data is collected, what it’s used for, or who has access to it. Now that software and data are stored in the cloud, rather than on personal hard drives, we can’t even be sure when the workings of systems have changed. Revisions to popular programs are made all the time without our awareness. The application we used yesterday is probably not the application we use today.
The modern world has always been complicated. Fragmented into specialized domains of skill and knowledge, coiled with economic and other systems, it rebuffs any attempt to comprehend it in its entirety. But now, to a degree far beyond anything we’ve experienced before, the complexity itself is hidden from us. It’s veiled behind the artfully contrived simplicity of the screen, the user-friendly, frictionless interface. We’re surrounded by what the political scientist Langdon Winner has termed “concealed electronic complexity.” The “relationships and connections” that were “once part of mundane experience,” manifest in direct interactions among people and between people and things, have become “enshrouded in abstraction.”39 When an inscrutable technology becomes an invisible technology, we would be wise to be concerned. At that point, the technology’s assumptions and intentions have infiltrated our own desires and actions. We no longer know whether the software is aiding us or controlling us. We’re behind the wheel, but we can’t be sure who’s driving.
CHAPTER NINE
THE LOVE THAT LAYS THE SWALE IN ROWS
THERE’S A LINE OF VERSE I’M ALWAYS COMING BACK TO, and it’s been on my mind even more than usual as I’ve worked my way through the manuscript of this book:
The fact is the sweetest dream that labor knows.
It’s the second to last line of one of Robert Frost’s earliest and best poems, a sonnet called “Mowing.” He wrote it just after the turn of the twentieth century, when he was a young man, in his twenties, with a young family. He was working as a farmer, raising chickens and tending a few apple trees on a small plot of land his grandfather had bought for him in Derry, New Hampshire. It was a difficult time in his life. He had little money and few prospects. He had dropped out of two colleges, Dartmouth and Harvard, without earning a degree. He had been unsuccessful in a succession of petty jobs. He was sickly. He had nightmares. His firstborn child, a son, had died of cholera at the age of three. His marriage was troubled. “Life was peremptory,” Frost would later recall, “and threw me into confusion.”1
But it was during those lonely years in Derry that he came into his own as a writer and an artist. Something about farming—the long, repetitive days, the solitary work, the closeness to nature’s beauty and carelessness—inspired him. The burden of labor eased the burden of life. “If I feel timeless and immortal it is from having lost track of time for five or six years there,” he would write of his stay in Derry. “We gave up winding clocks. Our ideas got untimely from not taking newspapers for a long period. It couldn’t have been more perfect if we had planned it or foreseen what we were getting into.”2 In the breaks between chores on the farm, Frost somehow managed to write most of the poems for his first book, A Boy’s Will; about half the poems for his second book, North of Boston; and a good number of other poems that would find their way into subsequent volumes.
“Mowing,” from A Boy’s Will, was the greatest of his Derry lyrics. It was the poem in which he found his distinctive voice: plainspoken and conversational, but also sly and dissembling. (To really understand Frost—to really understand anything, including yourself—requires as much mistrust as trust.) As with many of his best works, “Mowing” has an enigmatic, almost hallucinatory quality that belies the simple and homely picture it paints—in this case of a man cutting a field of grass for hay. The more you read the poem, the deeper and stranger it becomes: