To Be a Machine


by Mark O'Connell


  Not everyone found these robots falling on their faces so amusing. I saw one of the stewards, a woman in her early twenties with a blue DRC STAFF T-shirt, greet a colleague on the steps. “Did you see that robot who fell over just there? It was really sad to watch.” Her colleague agreed; it made her sad, too. “I felt terrible for him.”

  On the Jumbotron, a robot accomplished a flawless exit from its SUV, and began its approach to the door.

  “Impressive performance now from Momaro over on that yellow course,” said the announcer. “Just a very impressive egress.”

  At midday, the robots broke for lunch. There was a burst of enthusiastic applause, and the robust drumbeat and rumbling bass line of the Foo Fighters’ “My Hero” blasted from the PA system, as footage of some of the morning’s more audacious feats of vehicle-exiting and door-opening and lever-pulling played on the Jumbotron.

  I noticed, then, a dark form hovering above the racetrack, high and lonesome as a buzzard in the shimmering heat of noon. It was a small drone: another reminder of DARPA’s history of innovation in unmanned combat, its wider project of panoptic surveillance, signature strikes, screamings across the sky. As I watched the silent machine rise against the backdrop of the San Jose Hills, flashing in the sun with a glitter of knives, I felt with sudden force the schizoid strangeness of the event I had come to witness, which seemed designed both to reassure the world of DARPA’s humanitarian intentions and to expedite the development of technologies that would, in the fullness of time, be trained upon kill zones far from this fairground, with its onsite Sheraton, its conference center, its dedicated RV parking lot.

  I looked around, and saw the crowd—the families with their young children; the clusters of twenty- and thirty-something programmers; the uniformed Marines who were themselves human components of the machinery of government referred to by Hobbes as the “great Leviathan called a Common-Wealth or State which is but an Artificiall Man”—filtering downward out of the grandstand toward the burger concessions and hot dog carts, and was overtaken by a sudden bleak intimation of technology as an instrument of human perversity, in the service of power and money and war.

  —

  Out in the fairground, in the Technology Exposition area where the industry had come to set out its various stalls, the general understanding was that robots were the future. The word for what was going on here would most likely be something like “outreach,” or “engagement.” Walking under a large canvas DARPA banner (“Thank You for Cheering Us On!”), I entered a kind of scaffolded tunnel that housed a “DARPA Through the Decades” exhibit. Highlighted here were some of the organization’s major accomplishments, among the more recent of which were the 2003 launch of the X-45A, an early prototype of the Predator and Reaper drones responsible for the deaths of hundreds of Pakistani civilians and children, and a monstrous unmanned armored ground vehicle named, with admirable frankness, “The Crusher.”

  Further on, I passed a black quadruped robot in a glass display cabinet, a nightmarish pastiche of a Damien Hirst installation. The encased specimen was a creature known as Cheetah, developed with DARPA funding by Boston Dynamics, an industry-leading robotics laboratory that had been acquired by Google in 2013. This robot was capable of running at 28.3 miles per hour, faster than any recorded human. I had seen it in action on YouTube—itself a wholly owned subsidiary of Google—and it was somehow thrilling and abominable: this rough beast, its hour come at last, emerging at an uncanny gallop from some final merger of corporate and state power in the crucible of technology.

  I walked on, and saw a tall, sickly looking young man wearing dark sunglasses, a black fedora, and a black suit with a vaguely clerical purple silk shirt. A toy monkey was perched on his shoulder, and in his black leather-gloved hands he held a small device with which he was controlling an arachnoid robot roughly the size of a bull terrier. Standing next to him was another man, who wore a laminated DARPA lanyard around his neck, and who was presumably the father of the sun-hatted toddler, some eight or ten feet away, being chased in a widening circle by the mechanical spider.

  At the stall of a company called SoftBank Robotics, a Frenchman was attempting to convince a four-foot humanoid to hug a three-year-old girl.

  “Pepper,” he said. “Please hug the little girl.”

  “I’m sorry,” said Pepper, in an appealingly childlike voice lightly inflected with a Japanese accent and genuine regret. “I didn’t understand.”

  “Pepper,” said the Frenchman, with elaborate clarity and forbearance. “Can you please give this little girl a hug.”

  The little girl in question, who was sullen and silent and clutching the leg of her father, did not look much like she wanted Pepper to give her a hug.

  “I’m sorry,” said Pepper again. “I didn’t understand.”

  I felt a sudden surge of compassion for this winsome creature, with its huge innocent eyes, its touchscreen chest, its beautiful human failure to understand.

  The Frenchman smiled tightly, and bent down to the side of the robot’s head, where its auditory receptors were located.

  “Pepper! Please! Hug! The girl!”

  Pepper at last raised her arms, and made her wheeled approach to the child, who then gave herself up, tentatively and with obvious misgivings, to the robot’s embrace, before quickly backing out of the whole deal and returning to the shelter of her father’s legs.

  The Frenchman explained to me that Pepper was a customer service humanoid, designed to “interface with people in a natural and social manner.” It was capable, apparently, of feeling emotions ranging from joy to sadness to anger to doubt, its “mood” influenced by data received through touch sensors and cameras.

  “It is mostly for greeting people when they come in. At a mobile phone store, for instance, it will come to you and ask you if you need something, and maybe explain you some special offers that the mobile phone store is running. It will give you a fist-bump, or maybe a hug. As you can see, we are still perfecting this, but we are close. You would be surprised how difficult it is to solve the problem of hugging.”

  I asked him whether robots like these were intended eventually to replace the human beings who currently worked in mobile phone stores, and he told me that, although this was a likely eventuality of the progress of robotics, Pepper’s immediate function was purely a “social and emotional” one: she was a kind of corporate ambassador from the future, intended to put customers at their ease in the presence of humanoid robots.

  “We need to break that barrier first,” he said. “People will eventually become comfortable.”

  I did not doubt that this was true. Already, we had become comfortable with automated checkouts at supermarkets, with touchscreens and instructions from computerized voices where previously there would have been a human being, earning a paycheck.

  Earlier that week, in Seattle, Amazon had held a robotics competition of its own. The Amazon Picking Challenge set companies the task of developing a robot capable of replacing its human stock pickers. And you could see how this would make sense for Amazon, a company that had long been known for its poor treatment of its warehouse workers, and for its monomaniacal focus on the elimination of every kind of middleman—of booksellers, editors, publishers, postal workers, couriers. (Amazon was, at that point, on the verge of launching a drone delivery program, whereby consumer goods made and packaged by robots could be delivered into your hands by an unmanned carrier-drone within thirty minutes of you placing your order.) Robots don’t need toilet breaks, and drones don’t get tired, and neither are likely to form unions.

  And so this seemed like the ultimate fulfillment of the logic of techno-capitalism: the outright ownership not just of the means of production, but of the labor force itself. Čapek’s term “robot” was, after all, taken from the Czech word for “forced labor.” The image and valence of the human body have always shaped how we think about machines; humans have always succeeded in reducing the bodies of other humans to mechanisms, components in systems of their own design. As Lewis Mumford put it in his book Technics and Civilization, written during the early years of the Great Depression:

  Long before the peoples of the Western World turned to the machine, mechanism as an element in social life had come into existence. Before inventors created engines to take the place of men, the leaders of men had drilled and regimented multitudes of human beings: they had discovered how to reduce men to machines. The slaves and peasants who hauled the stones for the pyramids, pulling in rhythm to the crack of the whip, the slaves working in the Roman galley, each man chained to his seat and unable to perform any other motion than the limited mechanical one, the order and march and system of attack of the Macedonian phalanx—these were all machine phenomena. Whatever limits the actions of human beings to their bare mechanical elements belongs to the physiology, if not the mechanics, of the machine age.

  Recently, on the website of the World Economic Forum, I had seen a list of the “20 Jobs That Robots Are Most Likely to Take Over.” Jobs with a 95 percent or higher chance of their practitioners being made obsolete by machines within twenty years included postal workers, jewelers, chefs, corporate bookkeepers, legal secretaries, credit analysts, loan officers, bank tellers, tax accountants, and drivers.

  This last occupation, which was the largest category of employment for American men, was particularly ripe for automated disruption. The original DARPA Grand Challenge, held in 2004 to stimulate the development of driverless vehicles, was a 150-mile race across the Mojave Desert from Barstow to the Nevada border. The event was a fiasco: not one of the robotic vehicles even came close to finishing the route. The car that got the farthest from the starting pistol made it just under seven and a half miles before finally coming to grief on a large rock, and DARPA declined to award its million-dollar prize.

  But when the race was held again the following year, five cars finished the route, and the winning team went on to form the nucleus of Google’s Self-Driving Car Project, under the auspices of which, even now, California’s roads were being successfully navigated by vehicles unguided by human hands, luxury ghostmobiles on the decaying highways, an advance guard of an automated future. Uber, the ride-sharing service that had seriously damaged the taxi sector in recent years, was already speaking openly about its plans to replace all of its drivers with automated cars as soon as the technology allowed. At a conference in 2014, the company’s preeminently obnoxious CEO Travis Kalanick had explained that “the reason Uber could be expensive is because you’re not just paying for the car, you’re paying for the other dude in the car. When there’s no other dude in the car, the cost of taking an Uber anywhere becomes cheaper than owning a vehicle.” When asked how he might explain to these other dudes the reality of their obsolescence, their versioning out, he said this: “Look, this is the way of the world, and the world isn’t always great. We all have to find ways to change the world.” Kalanick, I had heard, was here in Pomona today, in search of further ways to change a world that was increasingly his to change.

  The Frenchman asked if I would like a hug from Pepper, and I assented as much out of politeness as journalistic rigor.

  “Pepper,” he said. “This man would like a hug.”

  I fancied that I detected something like ambivalence in Pepper’s impassive gaze; but she raised her arms and I bent toward her, and suffered her to enfold me in her unnatural clasp. It was, frankly, an underwhelming experience; I felt that we were both, in our own ways, phoning it in. I patted her on the back, lightly and perhaps a little passive-aggressively, and we went our separate ways.

  —

  Hans Moravec (the Carnegie Mellon robotics professor who outlined a speculative procedure for transferring the material of human brains to machines) projects a future in which, “by performing better and cheaper, the robots will displace humans from essential roles.” Soon after that, he writes, “they could displace us from existence.” But as a transhumanist, Moravec doesn’t see this as something to be feared, or even necessarily avoided; because these robots will be our evolutionary heirs, our “mind children,” as he puts it, “built in our image and our likeness, ourselves in more potent and efficient form. Like biological children of previous generations, they will embody humanity’s best chance for a long-term future. It behooves us to give them every advantage and to bow out when we can no longer contribute.”

  There is, obviously, something about the idea of intelligent robots that frightens and titillates us, that fuels our feverish visions of omnipotence and obsolescence. The technological imagination projects a fantasy of godhood, with its attendant Promethean anxieties, onto the figure of the automaton. A few days after I returned from Pomona, I read that Steve Wozniak, the cofounder of Apple, had spoken at a conference about his conviction that humans were destined to become the pets of superintelligent robots. But this, he stressed, would not necessarily be an especially undesirable outcome. “It’s actually going to turn out really good for humans,” he said. Robots “will be so smart by then that they’ll know they have to keep nature, and humans are part of nature.” The robots, he believed, would treat us with respect and kindness, with a patrician generosity, because we humans were “the gods originally.”

  It would seem to be among the oldest collective fantasies of our species, this fantasy of creation. It would seem to be part of us, a thing we carry with us across cultures and centuries, a dream of burnished hardware that replicates our bodies and acts in accordance with our desires. Frustrated gods that we are, we have always dreamt of creating machines in our own image, and of re-creating ourselves in the image of these machines.

  Hellenic mythology had its automata, its living statues. The artificer Daedalus, remembered mostly for his disastrous efforts at human enhancement (labyrinth, waxen wings, tragic but morally instructive drowning), was also a maker of mechanical men, animated effigies capable of walking, speaking, weeping. Hephaestus, the blacksmith god of fire and metal and technology, constructed a bronze giant named Talos, to protect Europa (whom his father, Zeus, had abducted) from any further abductions.

  Medieval alchemists were obsessed with the idea of creating men from scratch, believing it was possible to bring into being tiny humanoid creatures called homunculi; this they insisted could be done through arcane practices involving such diverse substances as cow’s wombs, sulfur, magnets, animal blood, and locally sourced semen (preferably the alchemist’s own).*2

  Saint Albertus Magnus, a thirteenth-century Bavarian bishop, was said to have constructed a metal statue with the power of reason and speech. According to popular accounts from the time, this alchemical AI, which Albertus referred to as his “android,” met a violent end at the hands of a young Saint Thomas Aquinas, who was then a student of Albertus, and had serious issues with the android’s incessant chatter and, even more problematically, its obvious origins in some kind of diabolical covenant.

  In Europe, with the increasing popularity of clockwork during the Renaissance, and as the Enlightenment project supposedly cleared the mists of occult superstition from the field of science, there was a surge of interest in automata. In the 1490s, in an expansion of his own anatomical studies likely inspired by reading of the ancient Greek automata, Leonardo da Vinci designed and built a robotic knight. This automaton, often considered the world’s first humanoid robot, was a suit of armor animated by internal cables and pulleys and gears. The knight, built for display at the home of Ludovico Sforza, the Milanese duke who had commissioned The Last Supper, was capable of a range of movements, including sitting, standing, waving, and simulating speech by moving its armored jaw.

  Descartes’ Treatise on Man—which he never published in his lifetime for fear of the Church’s reaction to its central thesis—is predicated on the idea that our bodies are essentially machines, moving statues of flesh and bone animated by a divine infusion of spirit or soul. Part one, entitled “On the Machine of the Body,” draws an explicit analogy between the clockwork mechanisms so popular at the time and the inner operations of the human body. “We see clocks, artificial fountains, mills, and other similar machines which, even though they are only made by men, have the power to move of their own accord in various ways. And, as I am supposing that this machine is made by God, I think you will agree that it is capable of a greater variety of movements than I could possibly imagine in it, and that it exhibits a greater ingenuity than I could possibly ascribe to it.” Descartes wanted us to consider that everything we are—all our “functions,” including “passion, memory and imagination”—follow “from the mere arrangement of the machine’s organs every bit as naturally as the movements of a clock or other automaton follow from the arrangement of its counter-weights and wheels.”

  The Treatise is a weird and vaguely disturbing text, more for how it is written than for its mechanistic message. It is a work less of philosophy than straightforward anatomy, which reads like a kind of technological primer. Descartes’ insistence on repeatedly referring to the body and its constituent parts as “this machine” has a powerfully estranging effect; reading it, you begin to feel a growing distance from your own body, this complex edifice of interconnected and autonomous systems—this soft machine within which you yourself, the impalpable reader of the Treatise, reside and hold sway. That this idea seems both utterly absurd and utterly familiar is a testament to the extent to which Cartesian dualism has, over the centuries, become a rigid orthotic structure around our relationships with our bodies. (The fact that a distinction between “us” and “our bodies” is even intelligible seems itself largely a result of his philosophy’s despotic influence over how we think about these machines of ours.)

  Descartes was also subject to what you’d imagine to be a peculiarly modern, or postmodern, preoccupation: the anxious imagination of actual machines that might pass themselves off as human. In his Meditations on First Philosophy, the famously rigorous austerity of his doubt is brought to bear on the contemporary vogue for automata, and its epistemological implications. Gazing out his window, he draws our attention to the people passing below. “In this case, I do not fail to say that I see the men themselves,” he writes, “and yet, what do I see from the window beyond hats and cloaks that might cover artificial machines, whose motions might be determined by springs.” If you’re going to take your doubts seriously, in other words, if you’re going to have the courage of your solipsism, what grounds do you have for believing that the man on the street—or for that matter the other dude driving your Uber—is not literally a machine, a replicant passing itself off as human?

 
