Machines of Loving Grace

by John Markoff


  Bradski’s new company set up operations in an industrial neighborhood in South Palo Alto. The office was in a big garage, which featured one room of office cubicles and a large unfinished space where they set up stacks of boxes for the robots to endlessly load and unload. By this point, Industrial Perception had garnered interest from giant companies like Procter & Gamble, which was anxious to integrate automation technologies into its manufacturing and distribution operations. More importantly, Industrial Perception had a potential first customer: UPS, the giant package delivery firm, had a very specific application in mind—replacing human workers who loaded and unloaded their trucks.

  Industrial Perception made an appearance at just one trade show, Automatica, in Chicago in January 2012. As it turned out, they didn’t even need that much publicity. A year later, Andy Rubin visited their offices. He was traveling the country, scouting and acquiring robotics firms. He told those he visited that in ten to fifteen years, Google would become the world’s delivery service for information and material goods. He needed machine vision and navigation technologies, and Industrial Perception had seamlessly integrated both into its robotic arms so they could move boxes. Rubin quietly acquired Industrial Perception, along with Boston Dynamics and six other companies. The deals, treated as “nonmaterial” by Google, would not become public for more than six months. Even when the public found out about Google’s new ambitions, the company was circumspect about its plans. Just as with the Google car, the company would keep any broader visions to itself until it made sense to do otherwise.

  For Rubin, however, the vision was short-lived. He tried to persuade Google to let him run his new start-up independently from what he now saw as a claustrophobic corporate culture. He lost that battle, so at the end of 2014 he left the company and moved on to create an incubator for new consumer electronics start-up ideas.

  The majority of the Industrial Perception team was integrated into Google’s new robotics division. Bradski, however, turned out to be too much of a Wild Duck for Google as well—which was fortuitous, because Hassan still had plans for him. He introduced Bradski to Rony Abovitz, a successful young roboticist who had recently sold Mako Surgical, a robotic surgery company whose machines supported less-experienced surgeons. Abovitz had another, potentially even bigger idea, and he needed a machine vision expert.

  Abovitz believed he could reinvent personal computing so it could serve as the ultimate tool for augmenting the human mind. If he was right, it would offer a clear path to merging the divergent worlds of artificial intelligence and augmentation. At Mako, Abovitz had used a range of technologies to digitally capture the skills of the world’s best surgeons and integrate them into a robotic assistant. This made it possible for a less-skilled surgeon to use a robotic template to get consistently good results using a difficult technique. The other major robot surgery company, Intuitive Surgical, was an SRI spin-off that sold teleoperated robotic instruments that allowed surgeons to operate remotely with great precision. Abovitz instead focused on the use of haptics—giving the robot’s operators a sense of touch—to attempt to construct a synthesis of human and robot, a surgeon more skilled than a human surgeon alone. It helped that Mako focused on operations that dealt with bone instead of soft tissue surgery (which, incidentally, was the focus of Intuitive’s research). Bone, a harder material, was much easier to “feel” with touch feedback. In this system, the machine and the human would each do what they were good at to create a powerful symbiosis.
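  The passage glosses over why bone is easier to “feel” than soft tissue. The sketch below assumes a generic penalty-force (“virtual spring”) model from the haptics literature, not anything from Mako’s actual controllers: a rigid material can be rendered as a very stiff spring, so its boundary pushes back crisply, while soft tissue demands low stiffness that feels vague under the operator’s hand.

```python
# Illustrative sketch only: a generic penalty ("virtual spring") force model,
# not Mako's control code. When the tool tip penetrates a virtual boundary,
# the haptic device pushes back in proportion to the penetration depth.

def penalty_force(penetration_mm: float, stiffness_n_per_mm: float) -> float:
    """Restoring force (newtons) for a tool tip penetrating a virtual
    surface by penetration_mm; zero when the tip is outside the surface."""
    if penetration_mm <= 0.0:
        return 0.0
    return stiffness_n_per_mm * penetration_mm

# Bone-like boundary: high stiffness gives a crisp, wall-like sensation.
print(penalty_force(0.5, 20.0))  # 10.0 N of pushback at 0.5 mm
# Tissue-like boundary: low stiffness feels mushy and indistinct.
print(penalty_force(0.5, 1.0))   # 0.5 N at the same depth
```

  The stiffness values here are made up for illustration; the point is only that a hard boundary maps onto a force law that is easy to render stably, which is one intuition for why Mako’s bone-focused procedures suited touch feedback.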

  It’s important to note that the resulting surgeon isn’t a “cyborg”—a half-man, half-machine. A bright line between the surgeon and the robot is maintained. In this case the human surgeon works with the separate aid of a robotic surgery tool. In contrast, a cyborg is a creature in which the line between human and machine becomes blurred. Abovitz believed that “Strong” artificial intelligence—a machine with human-level intelligence—was an extremely difficult problem and would take decades to develop, if it was ever possible. From his Mako experience designing a robot to aid a surgeon, he believed the most effective way to design systems was instead to use artificial intelligence technology to enhance human powers.

  After selling Mako Surgical for $1.65 billion in late 2013, Abovitz set out to pursue his broader and more powerful augmentation idea—Magic Leap, a start-up with the modest goal of replacing both televisions and personal computers with a technology known as augmented reality. In 2013, the Magic Leap system worked only in a bulky helmet. However, the company’s goal was to shrink the system into a pair of glasses less obtrusive and many times more powerful than Google Glass. Instead of joining Google, Bradski went to work for Abovitz’s Magic Leap.

  In 2014, there was already early evidence that Abovitz had made significant headway in uniting AI and IA. It could be seen in Gerald, a half-foot-high animated creature floating in an anonymous office complex in a Miami suburb. His four arms waved gently while he hung in space and walked in circles in front of a viewer. Gerald wasn’t really there. He was actually an animated projection that resembled a three-dimensional hologram. Users could watch him through transparent lenses that project what computer scientists and optical engineers describe as a “digital light field” into the eyes of a human observer. Although Gerald doesn’t exist in the real world, Abovitz is trying to create an unobtrusive pair of computer-augmented glasses with which to view animations like him. And it doesn’t stop with imaginary creatures. In principle, the technology can project any visual object at a fidelity that matches the acuity of the human eye. For example, Abovitz says the Magic Leap system will make it possible for someone wearing the glasses to simply gesture with their hands to create a high-resolution screen as crisp as a flat-panel television. If they are perfected, the glasses will replace not only our TVs and computers, but many of the other consumer electronics gadgets that surround us.

  The glasses are based on a transparent array of tiny electronic light emitters that are installed in each lens to project the light field—and so the image—onto each retina. In practice, computer-generated light fields attempt to mimic what the human eye sees in the physical world. A digital light field is a computer-generated version of the analog light field: the sum of all of the light rays that form a visual scene for the human eye. Digital light fields simulate the way light behaves in the physical world. When photons bounce off objects in the world, they act like rivers of light. The human neuro-optic system has evolved so that the lenses in our eyes adjust to match the wavefront of the natural light field and focus on objects. Watching Gerald wander in space through a prototype of the Magic Leap glasses gives a hint that in the future it will be possible to visually merge computer-generated objects with the real world. Significantly, Abovitz claims that digital light field technology holds out the promise of circumventing the limitations that have plagued stereoscopic displays for decades: they cause motion sickness in some users and do not offer “true” depth-of-field perception.
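  To make the “digital light field” idea concrete: in computer graphics, a light field is often stored in the classic two-plane parameterization, with radiance recorded for every ray according to where it crosses two parallel planes. The sketch below is purely illustrative, with made-up dimensions and random stand-in data; it shows the textbook abstraction the passage alludes to, not Magic Leap’s actual display pipeline.

```python
# Illustrative sketch of a two-plane light field, not Magic Leap's method.
# Radiance is indexed by a ray's crossing points (u, v) and (s, t) on two
# parallel planes; rendering a view samples one such ray per pixel.

import numpy as np

U = V = S = T = 16  # made-up grid resolutions for the two planes
light_field = np.random.rand(U, V, S, T)  # stand-in for captured radiance

def sample_ray(u: float, v: float, s: float, t: float) -> float:
    """Nearest-neighbor radiance lookup for the ray through (u, v) and (s, t)."""
    return light_field[int(round(u)) % U, int(round(v)) % V,
                       int(round(s)) % S, int(round(t)) % T]

def render_pinhole_view(eye_u: float, eye_v: float) -> np.ndarray:
    """Image seen by a pinhole 'eye' fixed at (eye_u, eye_v) on the first
    plane: gather the ray through every (s, t) pixel on the second plane."""
    image = np.empty((S, T))
    for s in range(S):
        for t in range(T):
            image[s, t] = sample_ray(eye_u, eye_v, s, t)
    return image

view = render_pinhole_view(7.5, 8.2)
print(view.shape)  # (16, 16): one small synthetic view of the scene
```

  Moving the eye position re-samples a different bundle of rays, which is why a sufficiently dense light field can supply the focus and parallax cues that ordinary stereoscopic displays lack.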

  By January of 2015 it had become clear that augmented reality was no longer a fringe idea. With great fanfare Microsoft demonstrated a similar system called HoloLens based on a competing technology. Is it possible to imagine a world where the ubiquitous LCDs of today’s world—televisions, computer monitors, smartphone screens—simply disappear? In Hollywood, Florida, Magic Leap’s demonstration suggests that workable augmented reality is much closer than we might assume. If Magic Leap is right, such an advance would also change the way we think about and experience augmentation and automation. In October 2014, Magic Leap’s technology received a significant boost when Google led a $542 million investment round in the tiny start-up.

  The Magic Leap prototype glasses look like ordinary glasses, save for the thin cable that runs down a user’s back and connects to a small, smartphone-sized computer. These glasses don’t simply represent a break with existing display technologies. The technology behind them makes extensive use of artificial intelligence and machine vision to remake reality. The glasses are compelling for two reasons. First, their resolution will approach the resolving power of the human eye. The best computer displays are just reaching this level of resolution. As a result, the animations and imagery will surpass those of today’s best consumer video game systems. Second, they are the first indication that it is possible to seamlessly blend computer-generated imagery with physical reality. Until now, the limits of consumer computing technology have been defined by what is known as the “WIMP” graphical interface—the windows, icons, menus, and pointer of the Macintosh and Windows. The Magic Leap glasses, however, will introduce augmented reality as a way of revitalizing personal computing and, by extension, presenting new ways to augment the human mind.

  In an augmented reality world, the “Web” will become the space that surrounds you. Cameras embedded in the glasses will recognize the objects in people’s environments, making it possible to annotate and possibly transform them. For example, reading a book might become a three-dimensional experience: images could float over the text, hyperlinks might be animated, readers could turn pages with the movement of their eyes, and there would be no need for limits to the size of a page.

  Augmented reality is also a profoundly human-centered version of computing, in line with Xerox PARC computer scientist Mark Weiser’s original vision of “calm” ubiquitous computing. It will be a world in which computers “disappear” and everyday objects acquire “magical” powers. This presents a host of new and interesting ways for humans to interact with robots. The iPod and the iPhone were the first examples of this transition as a reimagining of the phonograph and the telephone. Augmented reality would also make the idea of telepresence far more compelling. Two people separated by great distance could gain the illusion of sharing the same space. This would be a radical improvement on today’s videoconferencing and awkward telepresence robots like Scott Hassan’s Beam, which place a human face on a mobile robot.

  Gary Bradski left the world of robots to join Abovitz’s effort to build what will potentially become the most intimate and powerful augmentation technology. Now he spends his days refining computer vision technologies to fundamentally remake computing in a human-centered way. Like Bill Duvall and Terry Winograd, he has made the leap from AI to IA.

  8 | “ONE LAST THING”

  Set on the Pacific Ocean a little more than an hour’s drive south of San Francisco, Santa Cruz exudes a Northern California sensibility. The city blends the Bohemian flavor of a college town with the tech-savvy spillover from Silicon Valley just over the hill. Its proximity to the heart of the computing universe and its deep countercultural roots are distinct counterpoints to the tilt-up office and manufacturing buildings that are sprinkled north from San Jose on the other side of the mountains. Geographically and culturally, Santa Cruz is about as far away from the Homestead-Miami Speedway as you can get.

  It was a foggy Saturday morning in this eclectic beach town, just months after the Boston Dynamics galloping robots stole the show at the steamy Florida racetrack. Bundled against the morning chill, Tom Gruber and his friend Rhia Gowen wandered into The 418 Project, a storefront dance studio that backs up against the river. They were among the first to arrive. Gruber is a wiry salt-and-pepper-goateed software designer and Gowen is a dance instructor. Before returning to the United States several years ago, she spent two decades in Japan, where she directed a Butoh dance theater company.

  Tom Gruber began his career as an artificial intelligence researcher and later swung from AI to work on augmenting human intelligence. He cofounded the team of programmers that designed Siri, Apple’s iPhone personal assistant. (Photo © 2015 by Tom Gruber)

  In Santa Cruz, Gowen teaches a style of dance known as Contact Improvisation, in which partners stay physically in touch with each other while moving in concert with a wide range of musical styles. To the untrained eye, “Contact Improv” appears to be part dance, part gymnastics, a bit of tumbling, and even part wrestling. Dancers use their bodies in a way that provides a sturdy platform for their partners, who may roll over and even bounce off them in sync with the music. The Saturday-morning session that Gruber and Gowen attended was even more eclectic: it was a weekend ritual for the Santa Cruz Ecstatic Dance Community. Some basic rules are spelled out at ecstaticdance.org:

  1. Move however you wish;

  2. No talking on the dance floor;

  3. Respect yourself and one another.

  There is also an etiquette that requires that partners be “sensitive” if they want to dance with someone and that offers a way out if they don’t: “If you’d rather not dance with someone, or are ending a dance with someone, simply thank them by placing your hands in prayer at your heart.”

  The music mix that morning moved from meditative jazz to country, rock, and then to a cascade of electronic music styles. The room gradually filled with people, and the dancers each entered a personal zone. Some danced together, some traded partners, some swayed to an inner rhythm. It was free-form dance evocative of a New Age gym class.

  Gruber and Gowen wove through the throng. Sometimes they were in contact, and sometimes they broke off to dance with other partners, then returned. He picked her up and bent down and let her roll across his back. It wasn’t exactly “do-si-do your partner,” but if the move was done well, one body formed a platform that shouldered the other partner’s weight without strain. Gruber was a confident dancer and comfortable with moves that evoked a modern dance sensibility. It offered a marked contrast to the style of many of the more hippie, middle-aged Californians, who were skipping and waving in all directions against a quickening beat. The pace of the dancers ascended to a frenzy and then backed down to a mellower groove. Gradually, the dancers melted away from the dance floor. Gruber and Gowen donned their jackets and stepped out into the still-foggy morning air.

  Gruber casually pulled an iPhone from his pocket and asked Siri, the software personal assistant he designed, a simple question about his next stop. On Monday he would be back in the fluorescent-lit hallways of Apple, amid endless offices overloaded with flat-panel displays. On that morning, however, he wandered in a more human-centric world, where computers had disappeared and everyday devices like phones were magical.

  Apple’s corporate campus is circumscribed by Infinite Loop, a balloon-shaped street set just off the Interstate 280 freeway in Cupertino. The road wraps in a protective circle around a modern cluster of six office buildings facing inward onto a grassy courtyard, a layout that mirrors Apple’s secretive style. The campus was built during the era in which John Sculley ran the company. When originally completed, it served as a research and development center, but as Apple scaled down after Sculley left in 1993, it became a fortress for an increasingly besieged company. When Steve Jobs returned, first as “iCEO” in 1997, there were many noticeable changes, including a dramatic improvement in the cafeteria food. The fine silver that had marked the executive suite during the brief era when semiconductor chief Gilbert Amelio ran the company also disappeared.

  As his health declined during a battle with pancreatic cancer in 2011, Steve Jobs came back for one last chapter at Apple. He had taken his third medical leave, but he was still the guiding force at the company. He had stopped driving and so he would come to Apple’s corporate headquarters with the aid of a chauffeur. He was bone-thin and in meetings he would mention his health problems, although never directly acknowledging the battle was with cancer. He sipped 7UP, which hinted to others that he might have been struggling through chemotherapy.

  The previous spring Jobs had acquired Siri, a tiny developer of a natural language software application designed to act as a virtual assistant on the iPhone. The acquisition had drawn a great deal of attention in Silicon Valley. Apple acquisitions, particularly large ones, are extremely rare. When word circulated that the firm had been acquired, possibly for more than $200 million, it sent shock waves up and down Sand Hill Road and within the burgeoning “app economy” that the iPhone had spawned. After Apple acquired Siri, the program was immediately pulled from the App Store, the iPhone service through which programs were screened and sold, and the small team of programmers who had designed Siri vanished back into “stealth mode” inside the Cupertino campus. The larger implications of the acquisition weren’t immediately obvious to many in the Valley, but as one of his last acts as the leader of Apple, Steve Jobs had paved the way for yet another dramatic shift in the way humans would interact with computers. He had come down squarely on the side of those who placed humans in control of their computing systems.

  Jobs had made a vital earlier contribution to the computing world by championing the graphical desktop computing approach as a more powerful way to operate a PC. The shift from the command line interface of the IBM DOS era to the desktop metaphor of the Macintosh had opened the way for the personal computer to be broadly adopted by students, designers, and office workers—a computer for “the rest of us,” in Apple parlance. Steve Jobs’s visits to PARC are the stuff of legend. With the giant copier company’s blessing and a small but lucrative Xerox investment in Apple pre-IPO, he visited several times in 1979 and then over the next half decade created first the Lisa and then the Macintosh.

  But the PC era was already giving way to a second Xerox PARC concept—ubiquitous computing. Mark Weiser, the PARC computer scientist, had conceived the idea during the late 1980s. Although he had been given less credit for the insight and the shift, Jobs had been the first to successfully translate Weiser’s ideas for general consumer audiences. The iPod and then the iPhone were truly ubiquitous computing devices. Jobs first transformed the phonograph and then the telephone by adding computing: “a thousand songs in your pocket” and “something wonderful for your hand.” He was the consummate showman, and “one more thing” had become a trademark slogan that Jobs used at product introductions, just before announcing something “insanely great.” For Jobs, however, Siri was genuinely his “one last thing.” By acquiring Siri he took his final bow for reshaping the computing world. He bridged the gap between Alan Kay’s Dynabook and the Knowledge Navigator, the elaborate Apple promotional video imagining a virtual personal assistant. The philosophical distance between AI and IA had resulted in two separate fields that rarely spoke. Even today, in most universities artificial intelligence and human-computer interaction remain entirely separate disciplines. In a design approach that resonated with Lee Felsenstein’s original golemics vision, Siri would become a software robot—equipped with a sense of humor—intended to serve as a partner, not a slave.

 
