The Glass Cage: Automation and Us

by Nicholas Carr


  The slope gets only more slippery. The military and political advantages of robot soldiers bring moral quandaries of their own. The deployment of LARs won’t just change the way battles and skirmishes are fought, Heyns pointed out. It will change the calculations that politicians and generals make about whether to go to war in the first place. The public’s distaste for casualties has always been a deterrent to fighting and a spur to negotiation. Because LARs will reduce the “human costs of armed conflict,” the public may “become increasingly disengaged” from military debates and “leave the decision to use force as a largely financial or diplomatic question for the State, leading to the ‘normalization’ of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.”9

  The introduction of a new class of armaments always alters the nature of warfare, and weapons that can be launched or detonated from afar—catapults, mines, mortars, missiles—tend to have the greatest effects, both intended and unintended. The consequences of autonomous killing machines would likely go beyond anything that’s come before. The first shot freely taken by a robot will be a shot heard round the world. It will change war, and maybe society, forever.

  THE SOCIAL and ethical challenges posed by killer robots and self-driving cars point to something important and unsettling about where automation is headed. The substitution myth has traditionally been defined as the erroneous assumption that a job can be divided into separate tasks and those tasks can be automated piecemeal without changing the nature of the job as a whole. That definition may need to be broadened. As the scope of automation expands, we’re learning that it’s also a mistake to assume that society can be divided into discrete spheres of activity—occupations or pastimes, say, or domains of governmental purview—and those spheres can be automated individually without changing the nature of society as a whole. Everything is connected—change the weapon, and you change the war—and the connections tighten when they’re made explicit in computer networks. At some point, automation reaches a critical mass. It begins to shape society’s norms, assumptions, and ethics. People see themselves and their relations to others in a different light, and they adjust their sense of personal agency and responsibility to account for technology’s expanding role. They behave differently too. They expect the aid of computers, and in those rare instances when it’s not forthcoming, they feel bewildered. Software takes on what the MIT computer scientist Joseph Weizenbaum termed a “compelling urgency.” It becomes “the very stuff out of which man builds his world.”10

  In the 1990s, just as the dot-com bubble was beginning to inflate, there was much excited talk about “ubiquitous computing.” Soon, pundits assured us, microchips would be everywhere—embedded in factory machinery and warehouse shelving, affixed to the walls of offices and shops and homes, buried in the ground and floating in the air, installed in consumer goods and woven into clothing, even swimming around in our bodies. Equipped with sensors and transceivers, the tiny computers would measure every variable imaginable, from metal fatigue to soil temperature to blood sugar, and they’d send their readings, via the internet, to data-processing centers, where bigger computers would crunch the numbers and output instructions for keeping everything in spec and in sync. Computing would be pervasive; automation, ambient. We’d live in a geek’s paradise, the world a programmable machine.

  One of the main sources of the hype was Xerox PARC, the fabled Silicon Valley research lab where Steve Jobs found the inspiration for the Macintosh. PARC’s engineers and information scientists published a series of papers portraying a future in which computers would be so deeply woven into “the fabric of everyday life” that they’d be “indistinguishable from it.”11 We would no longer even notice all the computations going on around us. We’d be so saturated with data, so catered to by software, that, instead of experiencing the anxiety of information overload, we’d feel “encalmed.”12 It sounded idyllic. But the PARC researchers weren’t Pollyannas. They also expressed misgivings about the world they foresaw. They worried that a ubiquitous computing system would be an ideal place for Big Brother to hide. “If the computational system is invisible as well as extensive,” the lab’s chief technologist, Mark Weiser, wrote in a 1999 article in IBM Systems Journal, “it becomes hard to know what is controlling what, what is connected to what, where information is flowing, [and] how it is being used.”13 We’d have to place a whole lot of trust in the people and companies running the system.

  The excitement about ubiquitous computing proved premature, as did the anxiety. The technology of the 1990s was not up to making the world machine-readable, and after the dot-com crash, investors were in no mood to bankroll the installation of expensive microchips and sensors everywhere. But much has changed in the succeeding fifteen years. The economic equations are different now. The price of computing gear has fallen sharply, as has the cost of high-speed data transmission. Companies like Amazon, Google, and Microsoft have turned data processing into a utility. They’ve built a cloud-computing grid that allows vast amounts of information to be collected and processed at efficient centralized plants and then fed into applications running on smartphones and tablets or into the control circuits of machines.14 Manufacturers are spending billions of dollars to outfit factories with network-connected sensors, and technology giants like GE, IBM, and Cisco, hoping to spearhead the creation of an “internet of things,” are rushing to develop standards for sharing the resulting data. Computers are pretty much omnipresent now, and even the faintest of the world’s twitches and tremblings are being recorded as streams of binary digits. We may not be encalmed, but we are data saturated. The PARC researchers are starting to look like prophets.

  There’s a big difference between a set of tools and an infrastructure. The Industrial Revolution gained its full force only after its operational assumptions were built into expansive systems and networks. The construction of the railroads in the middle of the nineteenth century enlarged the markets companies could serve, providing the impetus for mechanized mass production and ever larger economies of scale. The creation of the electric grid a few decades later opened the way for factory assembly lines and, by making all sorts of electrical appliances feasible and affordable, spurred consumerism and pushed industrialization into the home. These new networks of transport and power, together with the telegraph, telephone, and broadcasting systems that arose alongside them, gave society a different character. They altered the way people thought about work, entertainment, travel, education, even the organization of communities and families. They transformed the pace and texture of life in ways that went well beyond what steam-powered factory machines had done.

  Thomas Hughes, in reviewing the consequences of the arrival of the electric grid in his book Networks of Power, described how first the engineering culture, then the business culture, and finally the general culture shaped themselves to the new system. “Men and institutions developed characteristics that suited them to the characteristics of the technology,” he wrote. “And the systematic interaction of men, ideas, and institutions, both technical and nontechnical, led to the development of a supersystem—a sociotechnical one—with mass movement and direction.” It was at this point that technological momentum took hold, both for the power industry and for the modes of production and living it supported. “The universal system gathered a conservative momentum. Its growth generally was steady, and change became a diversification of function.”15 Progress had found its groove.

  We’ve reached a similar juncture in the history of automation. Society is adapting to the universal computing infrastructure—more quickly than it adapted to the electric grid—and a new status quo is taking shape. The assumptions underlying industrial operations and commercial relations have already changed. “Business processes that once took place among human beings are now being executed electronically,” explains W. Brian Arthur, an economist and technology theorist at the Santa Fe Institute. “They are taking place in an unseen domain that is strictly digital.”16 As an example, he points to the process of moving a shipment of freight through Europe. A few years ago, this would have required a legion of clipboard-wielding agents. They’d log arrivals and departures, check manifests, perform inspections, sign and stamp authorizations, fill out and file paperwork, and send letters or make phone calls to a variety of other functionaries involved in coordinating or regulating international freight. Changing the shipment’s routing would have involved laborious communications among representatives of various concerned parties—shippers, receivers, carriers, government agencies—and more piles of paperwork. Now, pieces of cargo carry radio-frequency identification tags. When a shipment passes through a port or other way station, scanners read the tags and pass the information along to computers. The computers relay the information to other computers, which in concert perform the necessary checks, provide the required authorizations, revise schedules as needed, and make sure all parties have current data on the shipment’s status. If a new routing is required, it’s generated automatically and the tags and related data repositories are updated.
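  To make the shape of that machine-to-machine exchange concrete, here is a minimal, purely illustrative Python sketch of the event-driven flow Arthur describes. Every name in it (Shipment, process_scan, customs_check, and so on) is hypothetical; real freight-tracking systems are far more elaborate and standards-driven, and this is only a toy model of the pattern: a tag read triggers checks, authorizations, rerouting, and status updates with no human in the loop.

```python
# Illustrative sketch only: a tag scan at a way station triggers automated
# checks, authorization, optional rerouting, and notification of all parties.
# All names and logic here are hypothetical, not drawn from any real system.
from dataclasses import dataclass, field


@dataclass
class Shipment:
    tag_id: str                      # RFID tag attached to the cargo
    route: list[str]                 # planned sequence of way stations
    status: str = "in transit"
    log: list[str] = field(default_factory=list)


def customs_check(shipment: Shipment, waypoint: str) -> bool:
    """Stand-in for the automated checks and authorizations."""
    return True  # assume the digital paperwork clears


def notify_parties(shipment: Shipment, message: str) -> None:
    """Stand-in for keeping shippers, receivers, carriers, and agencies in sync."""
    shipment.log.append(message)


def process_scan(shipment: Shipment, waypoint: str) -> None:
    """Run when a scanner at a way station reads the shipment's tag."""
    if not customs_check(shipment, waypoint):
        shipment.status = "held"
        notify_parties(shipment, f"{shipment.tag_id} held at {waypoint}")
        return
    notify_parties(shipment, f"{shipment.tag_id} cleared {waypoint}")
    # If the scan doesn't match the planned next stop, reroute automatically.
    if shipment.route and shipment.route[0] != waypoint:
        shipment.route = [waypoint] + shipment.route[1:]
        notify_parties(shipment, f"{shipment.tag_id} rerouted via {waypoint}")


# Example: one tag read updates the record and informs every party at once.
cargo = Shipment(tag_id="TAG-001", route=["Rotterdam", "Duisburg", "Vienna"])
process_scan(cargo, "Rotterdam")
print(cargo.log)
```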

  Such automated and far-flung exchanges of information have become routine throughout the economy. Commerce is increasingly managed through, as Arthur puts it, “a huge conversation conducted entirely among machines.”17 To be in business is to have networked computers capable of taking part in that conversation. “You know you have built an excellent digital nervous system,” Bill Gates tells executives, “when information flows through your organization as quickly and naturally as thought in a human being.”18 Any sizable company, if it wants to remain viable, has little choice but to automate and then automate some more. It has to redesign its work flows and its products to allow for ever greater computer monitoring and control, and it has to restrict the involvement of people in its supply and production processes. People, after all, can’t keep up with computer chatter; they just slow down the conversation.

  The science-fiction writer Arthur C. Clarke once asked, “Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded?”19 In the business world at least, no stability in the division of work between human and computer seems in the offing. The prevailing methods of computerized communication and coordination pretty much ensure that the role of people will go on shrinking. We’ve designed a system that discards us. If technological unemployment worsens in the years ahead, it will be more a result of our new, subterranean infrastructure of automation than of any particular installation of robots in factories or decision-support applications in offices. The robots and applications are the visible flora of automation’s deep, extensive, and implacably invasive root system.

  That root system is also feeding automation’s spread into the broader culture. From the provision of government services to the tending of friendships and familial ties, society is reshaping itself to fit the contours of the new computing infrastructure. The infrastructure orchestrates the instantaneous data exchanges that make fleets of self-driving cars and armies of killer robots possible. It provides the raw material for the predictive algorithms that inform the decisions of individuals and groups. It underpins the automation of classrooms, libraries, hospitals, shops, churches, and homes—places traditionally associated with the human touch. It allows the NSA and other spy agencies, as well as crime syndicates and nosy corporations, to conduct surveillance and espionage on an unprecedented scale. It’s what has shunted so much of our public discourse and private conversation onto tiny screens. And it’s what gives our various computing devices the ability to guide us through the day, offering a steady stream of personalized alerts, instructions, and advice.

  Once again, men and institutions are developing characteristics that suit them to the characteristics of the prevailing technology. Industrialization didn’t turn us into machines, and automation isn’t going to turn us into automatons. We’re not that simple. But automation’s spread is making our lives more programmatic. We have fewer opportunities to demonstrate our own resourcefulness and ingenuity, to display the self-reliance that was once considered the mainstay of character. Unless we start having second thoughts about where we’re heading, that trend will only accelerate.

  IT WAS a curious speech. The event was the 2013 TED conference, held in late February at the Long Beach Performing Arts Center near Los Angeles. The scruffy guy on stage, fidgeting uncomfortably and talking in a halting voice, was Sergey Brin, reputedly the more outgoing of Google’s two founders. He was there to deliver a marketing pitch for Glass, the company’s “head-mounted computer.” After airing a brief promotional video, he launched into a scornful critique of the smartphone, a device that Google, with its Android system, had helped push into the mainstream. Pulling his own phone from his pocket, Brin looked at it with disdain. Using a smartphone is “kind of emasculating,” he said. “You know, you’re standing around there, and you’re just like rubbing this featureless piece of glass.” In addition to being “socially isolating,” staring down at a screen weakens a person’s sensory engagement with the physical world, he suggested. “Is this what you were meant to do with your body?”20

  Having dispatched the smartphone, Brin went on to extol the benefits of Glass. The new device would provide a far superior “form factor” for personal computing, he said. By freeing people’s hands and allowing them to keep their head up and eyes forward, it would reconnect them with their surroundings. They’d rejoin the world. It had other advantages too. By putting a computer screen permanently within view, the high-tech eyeglasses would allow Google, through its Google Now service and other tracking and personalization routines, to deliver pertinent information to people whenever the device sensed they required advice or assistance. The company would fulfill the greatest of its ambitions: to automate the flow of information into the mind. Forget the autocomplete functions of Google Suggest. With Glass on your brow, Brin said, echoing his colleague Ray Kurzweil, you would no longer have to search the web at all. You wouldn’t have to formulate queries or sort through results or follow trails of links. “You’d just have information come to you as you needed it.”21 To the computer’s omnipresence would be added omniscience.

  Brin’s awkward presentation earned him the ridicule of technology bloggers. Still, he had a point. Smartphones enchant, but they also enervate. The human brain is incapable of concentrating on two things at once. Every glance or swipe at a touchscreen draws us away from our immediate surroundings. With a smartphone in hand, we become a little ghostly, wavering between worlds. People have always been distractible, of course. Minds wander. Attention drifts. But we’ve never carried on our person a tool that so insistently captivates our senses and divides our attention. By connecting us to a symbolic elsewhere, the smartphone, as Brin implied, exiles us from the here and now. We lose the power of presence.

  Brin’s assurance that Glass would solve the problem was less convincing. No doubt there are times when having your hands free while consulting a computer or using a camera would be an advantage. But peering into a screen that floats in front of you requires no less an investment of attention than glancing at one held in your lap. It may require more. Research on pilots and drivers who use head-up displays reveals that when people look at text or graphics projected as an overlay on the environment, they become susceptible to “attentional tunneling.” Their focus narrows, their eyes fix on the display, and they become oblivious to everything else going on in their field of view.22 In one experiment, performed in a flight simulator, pilots using a head-up display during a landing took longer to see a large plane obstructing the runway than did pilots who had to glance down to check their instrument readings. Two of the pilots using the head-up display never even saw the plane sitting directly in front of them.23 “Perception requires both your eyes and your mind,” psychology professors Daniel Simons and Christopher Chabris explained in a 2013 article on the dangers of Glass, “and if your mind is engaged, you can fail to see something that would otherwise be utterly obvious.”24

  Glass’s display is also, by design, hard to escape. Hovering above your eye, it’s always at the ready, requiring but a glance to call into view. At least a phone can be stuffed into a pocket or handbag, or slipped into a car’s cup holder. The fact that you interact with Glass through spoken words, head movements, hand gestures, and finger taps further tightens its claim on the mind and senses. As for the audio signals that announce incoming alerts and messages—sent, as Brin boasted in his TED talk, “right through the bones in your cranium”—they hardly seem less intrusive than the beeps and buzzes of a phone. However emasculating a smartphone may be, metaphorically speaking, a computer attached to your forehead promises to be worse.

  Wearable computers, whether sported on the head like Google’s Glass and Facebook’s Oculus Rift or on the wrist like the Pebble smartwatch, are new, and their appeal remains unproven. They’ll have to overcome some big obstacles if they’re to gain wide popularity. Their features are at this point sparse, they look dorky—London’s Guardian newspaper refers to Glass as “those dreadful specs”25—and their tiny built-in cameras make a lot of people nervous. But, like other personal computers before them, they’ll improve quickly, and they’ll almost certainly morph into less obtrusive, more useful forms. The idea of wearing a computer may seem strange today, but in ten years it could be the norm. We may even find ourselves swallowing pill-sized nanocomputers to monitor our biochemistry and organ function.
