Machines of Loving Grace

by John Markoff


  Nevertheless, the DARPA Robotics Challenge did what it was designed to do: expose the limits of today’s robotic systems. Truly autonomous robots are not yet a reality. Even the prancing and trotting Boston Dynamics machines that performed on the racetrack tarmac were wirelessly tethered to human controllers. It is equally clear, however, that truly autonomous robots will arrive soon. Just as the autonomous vehicle challenges of 2004 through 2007 significantly accelerated the development of self-driving cars, the Robotics Challenge will bring us close to Gill Pratt’s dream of a robot that can work in hazardous environments and Andy Rubin’s vision of the automated Google delivery robot. What Homestead-Miami also made clear was that there are two separate paths forward in defining the approaching world of humans and robots: one moving toward the man-machine symbiosis that J. C. R. Licklider had espoused, and another in which machines will increasingly supplant humans. As Norbert Wiener realized at the onset of the computer and robotics age, the second possibility is bleak for humans. The way out of that cul-de-sac will be to follow in Terry Winograd’s footsteps by placing the human at the center of the design.

  Darkness had just fallen on the pit lane at Homestead-Miami Speedway, giving the robotic bull trotting on the roadway a ghostlike form. The bull’s machinery growled softly as its mechanical legs swung back and forth, the crate latched to its side snapping against its trunk in a staccato rhythm. A human operator trailed the robot at a comfortable pace. Wearing a radio headset and a backpack full of communications gear, he used an oversized video game–style controller to guide the beast’s pace and direction. The contraption trotted past the garages where clusters of engineers and software hackers were busy packing up robots from the day’s competition.

  The DRC evoked the bar scene in the Star Wars movie Episode IV: A New Hope. Boston Dynamics designed most of its robots in humanoid form, a conscious decision: a biped interacts better with man-made environments than other forms do. There were also weirder designs at the contest, like a “transformer” from Carnegie Mellon that was reminiscent of robots in Japanese sci-fi films, and a couple of spiderlike walking machines as well. The most attractive entrant was Valkyrie, a NASA robot that resembled a female Star Wars Imperial Stormtrooper. Sadly, Valkyrie was one of the three underperformers in the competition; it completed none of the tasks successfully. NASA engineers had little time to refine its machinery because the shutdown of the federal government cut funds for development.

  The star of the two-day event was clearly the Team Schaft robot. Its designers, a crew of about a dozen Japanese engineers, had fielded the only machine to complete all the tasks almost perfectly, and so they easily won the first Robotics Challenge. Indeed, the Schaft robot had made only a single error: it tried to walk through a door that was slammed shut by the wind. Gusts had repeatedly blown the door out of the Japanese robot’s grasp before it could extend its second arm to secure the door’s spring closing mechanism.

  While the competition took place, Rubin was busy moving his Japanese roboticists into a sprawling thirty-thousand-square-foot office perched high atop a Tokyo skyscraper. To ensure that the designers did not disturb the building’s other tenants—lawyers, in this case—Google had purchased two floors and decided to leave one empty as a buffer for sound isolation.

  In the run-up to the Robotics Challenge, both Boston Dynamics and several of the competing teams had released videos showcasing Atlas’s abilities. Most featured garden-variety demonstrations of the robot walking, balancing, or twisting in interesting ways. One video of a predecessor to Atlas, however, showed the robot climbing stairs and crossing an obstacle course, spreading its legs across a wide gap while bracing its arms against the walls of the enclosure. It moved at human speed and with human dexterity. The video had been carefully staged, and the robot was being teleoperated—it was not acting autonomously.13 But the implications were clear: the hardware was already capable of real-world mobility; only the software and sensors had to catch up.

  While public reaction to the video was mixed, the Schaft team loved it. In the wake of their victory, they watched in amazement as the Boston Dynamics robotic bull trotted toward their garage. It squatted on the ground and shut down. The team members swarmed around the robot and opened the crate that was strapped to its back. It contained a case of champagne, brought as a congratulatory offering from the Boston Dynamics engineers in an attempt to bond the two groups of roboticists who would soon be working together on some future Google mobile robot.

  Several of the company’s engineers had considered doing something splashier. While planning the Boston Dynamics demonstrations at the speedway, executives at another of Rubin’s AI companies came up with a PR stunt to unveil during both afternoons of the Robotics Challenge. The highlight of the two-day contest had not been watching the robots as they tried to complete a set of tasks. The real crowd-pleasers were the LS3 and Wildcat four-legged robots, both of which came out on the raceway tarmac to trot back and forth. LS3, a headless bull-like machine, growled as it moved at a determined pace. Every once in a while, a Boston Dynamics employee pushed the machine to set it off balance. The robot nimbly stepped to one side and recovered quickly—as if nothing had happened. Google initially wanted to stage something more impressive: What if they could show off a robot dog chasing a robot car? That would be a real tour de force. DARPA quickly nixed the idea, however. It would have smacked of a Google promotion, and the “optics” might not play well either. After all, if robots could chase each other, what else might they chase?

  Team Schaft finished the champagne as quickly as they had cracked it open. It was a heady night for the young Japanese engineers. One researcher, who staggered around with a whole bottle of champagne in his hand, ended up in a hospital and woke with a fierce headache the next day. As the evening wound down, the implications of Schaft’s win were very clear to the crowd of about three dozen robot builders gathered in front of the Schaft garage. Rubin’s new team shared a common purpose. Machines would soon routinely move among people and would inevitably assume even more of the drudgery of human work. Designing robots that could do anything from making coffee to loading trucks was well within the engineers’ reach.

  The Google roboticists believed passionately that, in the long run, machines would inevitably substitute for humans. Given enough computing power and software ingenuity, it now seemed possible that engineers could model all human qualities and capabilities, including vision, speech, perception, manipulation, and perhaps even self-awareness. To be sure, it was important to these designers that they operate in the best interests of society. However, they believed that while the short-term displacement of humans would stoke conflict, in the long run automation would improve the overall well-being of humanity. This is what Rubin had set out to accomplish. That evening he hung back from the crowd and spoke quietly with several of the engineers who were about to embark on a new journey to introduce robots into the world. He was at the outset of his quest but had already won a significant wager, a sign of his confidence in his team. He had bet Google CEO Larry Page his entire salary for a year that the Schaft team would win the DARPA trials. Luckily for Page, Rubin’s annual salary was just one dollar; like that of many Google executives, his actual compensation was much, much higher. However, a year after launching the company’s robotics division, Rubin would depart the company. He had acquired a reputation as one of the Valley’s most elite technologists, but, by his own admission, he was more interested in creating new projects than in running them. The robot kingdom he set out to build would remain very much a work in progress after his abrupt departure at the end of 2014.

  In the weeks after Homestead, Andy Rubin made it clear that his ultimate goal was to build a robot that could complete each of the competitive tasks in the challenge at the push of a button. Ultimately, it was not to be. Months later, Google would withdraw Schaft from the finals to focus on supplying state-of-the-art second-generation Atlas robots for other teams to use.

  Today Google’s robot laboratory can be found in the very heart of Silicon Valley, on South California Avenue, which divides College Terrace, a traditional student neighborhood that was once full of bungalows and has now grown increasingly tony, from the Stanford Industrial Park, which might properly be called the birthplace of the Valley. Occupying seven hundred acres of the original Leland Stanford Jr. family farm, the industrial park was the brainchild of Frederick Terman, the Stanford dean who convinced his students William Hewlett and David Packard to stay on the West Coast and start their own business instead of following a more traditional career path and heading east to work for the electronics giants of the first half of the last century.

  The Stanford Industrial Park has long since grown from a manufacturing center into a sprawling cluster of corporate campuses. Headquarters, research and development centers, law offices, and finance firms have gathered in the shadow of Stanford University. In 1970, Xerox Corp. temporarily located its Palo Alto Research Center at South California Avenue and Hanover Street, where, shortly thereafter, a small group of researchers designed the Alto computer. Smalltalk, the Alto’s software, was created by another PARC group led by computer scientist Alan Kay, a student of Ivan Sutherland’s at the University of Utah. Looking for a way to compete with IBM in the emerging market for office computing, Xerox had set out to build a world-class computer science lab from scratch in the Industrial Park.

  More than a decade ahead of its time, the Alto was the first modern personal computer, with a windows-based graphical display that included fonts and graphics, making possible on-screen pages that corresponded precisely to final printed documents (ergo WYSIWYG, pronounced “whizziwig,” which stands for “what you see is what you get”). The machine was controlled by an oddly shaped rolling appendage with three buttons, wired to the computer and known as a mouse. For those who saw the Alto while it was still a research secret, it drove home the meaning of Engelbart’s augmentation ideas. Indeed, one of those early observers was Stewart Brand, a counterculture impresario—photographer, writer, and editor—who had masterminded the Whole Earth Catalog. In an article for Rolling Stone, Brand referred to PARC as the “Shy Research Center,” and he coined the term “personal computing.” Now, more than four decades later, the descendants of PARC’s desktop personal computers are handheld, and they are in the hands of much of the world’s population.

  Today Google’s robot laboratory sits just several hundred feet from the building where the Xerox pioneers conceived of personal computing. The proximity of the two laboratories underscores Andy Rubin’s observation that “computers are starting to sprout legs and move around in the environment.” From William Shockley’s initial plan to build an “automatic trainable robot” at the very inception of Silicon Valley, to Xerox PARC and the rise of the PC, and now to Google’s mobile robotics start-up, the region has moved back and forth in its efforts to alternately extend and replace humans, from AI to IA and back again.

  There is no sign that identifies Google’s robot laboratory. Inside the entryway, however, stands an imposing ten-foot-high steel statue—of what? It doesn’t quite look like a robot. Maybe it is meant to signify some kind of alien creature. Maybe it is a replicant? The code name of Rubin’s project was “Replicant,” inspired, of course, by the movie Blade Runner. Rubin’s goal was to build and commercially introduce a humanoid robot that could move around in the world: a robot that could deliver packages, work in factories, provide elder care, and generally collaborate with and potentially replace human workers. He had set out to finish what had in effect begun nearby almost a half century earlier at the Stanford Artificial Intelligence Laboratory.

  The earlier research spawned by SAIL had created a generation of students like Ken Salisbury. As a young engineer, Salisbury viewed himself less as an “AI guy” and more as a “control person.” He was trained in the Norbert Wiener tradition and so didn’t believe that intelligent machines needed autonomy. He had been involved in automation long enough to see the shifting balance between human and machine, and he preferred to keep humans in the loop. He wanted to build a robot that, for example, could shake hands with you without crushing your hand. Luckily for Salisbury, autonomy was slow to arrive. Giving machines the manipulation skills humans take for granted—“pick up that red rag over there”—has remained a hard problem.

  Salisbury lived at the heart of the paradox described by Hans Moravec—things that are hardest for humans are easiest for machines, and vice versa. This paradox was first clarified by AI researchers in the 1980s, and Moravec had written about it in his book Mind Children: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”14 John McCarthy would frame the problem by challenging his students to reach into their pocket, feel a coin, and identify that coin as a nickel. Build a robot that could do that! Decades later, Rodney Brooks was still beginning his lectures and talks with the same scenario. It was something that a human could do effortlessly. Despite machines that could play chess and Jeopardy! and drive cars, little progress had been made in the realms of touch and perception.

  Salisbury was a product of the generation of students that emerged from SAIL during its heyday in the 1970s. While he was a graduate student at Stanford, he designed the Stanford/JPL hand, an example of the first evolution of robotic manipulators from jawed mechanical grippers into more articulated devices that mimicked human hands. His thesis had been about the geometric design of a robotic hand, but Salisbury was committed to the idea of something that worked. He stayed up all night before commencement day to get a final finger moving.

  He received his Ph.D. in 1982, just a year after Brooks. Both would ultimately migrate to MIT as young professors. There, Salisbury explored the science of touch because he thought it was key to a range of unsolved problems in robotics. At MIT he became friendly with Marvin Minsky and the two spent hours discussing and debating robot hands. Minsky wanted to build hands covered with sensors, but Salisbury felt durability was more important than perception, and many designs forced a trade-off between those two qualities.

  While a professor at the MIT Artificial Intelligence Laboratory, he worked with a student, Thomas Massie, on a handheld controller that served as a computer interface, making three-dimensional images on a computer display something that people could touch and feel. The technology effectively blurred the line between the virtual computer world and the real world. Massie—who would later become a Tea Party congressman representing Kentucky—and his wife, both mechanical engineers, turned the idea into Sensable Devices, a company that created an inexpensive haptic, or touch, control device. After taking a sabbatical year to help found both Sensable and Intuitive Surgical, a robotic surgery start-up based in Silicon Valley, Salisbury returned to Stanford, where he established a robotics laboratory in 1999.

  In 2007, he created the PR1, or Personal Robot One, with his students Eric Berger and Keenan Wyrobek. The machine was a largely unnoticed tour de force: it was capable of leaving the building, buying coffee for Salisbury, and returning. The robot asked Salisbury for some money, then made its way through a series of three heavy doors, opening each by pulling the handle halfway and then turning sideways to fit through the opening. It found its way to an elevator, called it, checked to make sure that no humans were inside, entered, and pressed the button for the third floor, using visual cues to confirm that the elevator had indeed reached the correct floor. The robot then left the elevator, made its way to the coffee vendor, purchased the coffee, and brought it back to the lab—without spilling it and before it got cold.

  The PR1 looked a little like a giant coffee can with arms, motorized wheels for traction, and stereo cameras for vision. Building it cost about $300,000 over about eighteen months. It was generally run by teleoperation except for specific preprogrammed tasks, such as fetching coffee or a beer. Capable of holding about eleven pounds in each arm, it could perform a variety of household chores. An impressive YouTube video shows the PR1 cleaning a living room. Like the Boston Dynamics Atlas, however, it was teleoperated, and that particular video was sped up eight times to make it look like it moved at human speed.15

  The PR1 project emerged from Salisbury’s lab at the same time that Andrew Ng, a young Stanford professor and an expert in machine vision and statistical techniques, was working on a similar but more software-focused project, the Stanford Artificial Intelligence Robot, or STAIR. At one point Ng gave a talk describing STAIR to the Stanford Industrial Affiliates program. In the audience was Scott Hassan, the former Stanford graduate student who had done the original heavy lifting for Google as the first programmer of PageRank, the algorithm at the heart of the company’s core search engine.

  It’s time to build an AI robot, Ng told the group. He said his dream was to put a robot in every home. The idea resonated with Hassan. He had studied computer science at the State University of New York at Buffalo, then entered graduate programs in the field at both Washington University in St. Louis and Stanford, dropping out of both before receiving an advanced degree. Once on the West Coast, he got involved with Brewster Kahle’s Internet Archive project, which sought to save a copy of every Web page on the Internet.

  Larry Page and Sergey Brin had given Hassan stock for programming PageRank, and Hassan had also sold eGroups, another of his information retrieval projects, to Yahoo! for almost a half-billion dollars. By then, he was a very wealthy Silicon Valley technologist looking for interesting projects.

  In 2006 he backed both Ng and Salisbury and hired Salisbury’s students to join Willow Garage, a laboratory he had already created to foster the next generation of robotics technology, such as driverless cars. Hassan believed that building a home robot was a more marketable and achievable goal, so he set Willow Garage to work designing the PR2, a successor robot whose technology he could ultimately introduce into more commercial projects.
