The Imagineers of War


by Sharon Weinberger


  Separate from the Grand Challenge, however, were DARPA’s regular robotics programs, now run by Jackel, who was fast learning the limitations of state-of-the-art autonomous vehicles. One DARPA-funded robot, nicknamed Spinner, was essentially a giant sport-utility vehicle designed for “extreme mobility,” meaning it could traverse some of the most rugged terrain. “It was meant to be able to flip over and run upside down,” Jackel said. “The cargo bay was on a pivot: if the vehicle went upside down the cargo bay could flip over.” That sounded great until Spinner went out to the desert for testing and everyone realized that the more than ten-thousand-pound vehicle was so hard to tip over that there was really no reason for all the complicated mechanisms that let it run upside down. “It just wasn’t needed,” Jackel said.

  The bigger problem for robots was not agility but brains. Vision, or the lack of it, was what had flummoxed DARPA’s 1980s-era smart truck in Pittsburgh’s Schenley Park. Twenty years later, DARPA was still trying to solve the fundamental problem of giving robots the ability to process what they see and navigate around obstacles. Robotic vehicles over the years had added all kinds of sensors, such as lidar, which sends out laser pulses and measures the reflected light to sense objects. But with most ground robots, Jackel found, an encounter with an obstacle set off a familiar routine: the robot would back up, often hitting yet another obstacle, then move forward into the original one, essentially getting stuck in a loop, just like his Roomba trapped by chair legs.
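
  To make the ranging arithmetic concrete (an illustrative aside, not a detail from the book): a lidar unit times each pulse’s round trip and converts the delay to distance,

      d = \frac{c \, \Delta t}{2},

  where c is the speed of light and Δt the measured delay; an object fifteen meters away returns its pulse in roughly a hundred nanoseconds.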

  Jackel had inherited a program called PerceptOR, which was supposed to improve robotic navigation, but it was not clear DARPA was making much progress. Then, one day, Jackel was walking his two dogs, American Eskimos, in the woods behind his house and watched as they bounded forward. The dogs’ stereovision was similar to a human’s, limited to about forty or fifty feet. But he watched with fascination as his dogs would spot something of interest, perhaps another animal, and then dart forward at full speed, navigating around trees with ease. “I thought, ‘Gee, I don’t know what these dogs are doing, but they’re not running on lidar and they aren’t running on stereo.’ They’re somehow interpreting the image. The dog doesn’t go around and label that this is a tree and this is a bush.”

  That became Jackel’s inspiration for a new program called Learning Applied to Ground Vehicles, or LAGR, focused on machine learning. Rather than having to identify each specific object, the LAGR robots would learn by experience how to navigate the terrain, mapping out a path in the distance. The robots did this using stereo cameras that created three-dimensional models out to about nine meters ahead, where obstacles could be more easily identified, and then comparing those models with the color and shading of more distant scenes, where objects were not as easily identified. In that way, the robots could identify a clear path. The program ended up extending the robots’ effective vision out to a hundred meters. “We never got to the point where they were as good as the dogs, but they were a whole lot better than when they started,” said Jackel.
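
  The technique the paragraph sketches is what robotics researchers call “near-to-far” learning: terrain inside stereo range is labeled from 3-D geometry, and a classifier learns to predict those labels from appearance alone, so it can judge ground far beyond stereo range. Below is a minimal Python sketch of the idea, assuming synthetic patch colors and a simple logistic classifier; every number and name is an illustrative assumption, not LAGR’s actual code.

    # Near-to-far learning, minimal sketch (illustrative, not LAGR's code):
    # patches within stereo range get geometric labels; a classifier learns
    # to predict those labels from color, then scores distant patches.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical near-field training data (within ~9 m of the robot).
    # Each sample is the mean RGB color of a patch; 1 = clear, 0 = obstacle.
    # In a real system the labels would come from stereo-derived geometry.
    n = 500
    clear = rng.normal(loc=[0.55, 0.45, 0.30], scale=0.08, size=(n, 3))
    blocked = rng.normal(loc=[0.25, 0.35, 0.20], scale=0.08, size=(n, 3))
    X = np.vstack([clear, blocked])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    # Train a tiny logistic-regression classifier by gradient descent.
    w, b = np.zeros(3), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(clear)
        w -= 0.5 * (X.T @ (p - y)) / len(y)
        b -= 0.5 * np.mean(p - y)

    # Score far-field patches (beyond stereo range) from color alone.
    far = rng.normal(loc=[0.55, 0.45, 0.30], scale=0.08, size=(5, 3))
    print(1.0 / (1.0 + np.exp(-(far @ w + b))))  # P(clear), near 1.0 here

  The robot, like the dogs, never has to decide that this is a tree and that is a bush; it only learns which appearances predict passable ground.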

  —

  In 2005, Tony Tether returned to Barstow to kick off the second Grand Challenge competition. While spectators and press were still excited, Tether was even more nervous than he had been in 2003 when he showed up at the Petersen Automotive Museum. “I never said anything to anybody in DARPA, but I knew that we had to get somebody across that finish line, or at least we had to get them really close,” he later recounted.

  Many of the same competitors from the first Grand Challenge lined up for the second race. Carnegie Mellon, a longtime leader in robotics and a race favorite, entered two vehicles. The course was even more challenging the second time around and included stretches like Beer Bottle Pass, a treacherous strip sandwiched between a rock face and a sheer cliff. There was also a new entrant from a group at Stanford University led by the German-born computer scientist Sebastian Thrun. The Stanford team’s vehicle, an unassuming blue Volkswagen Touareg named Stanley, was outfitted with lidar, cameras, GPS, and an inertial guidance system. Thrun, along with several other competitors, had participated in Jackel’s computer vision program.

  As in the first Grand Challenge, DARPA provided GPS waypoints just prior to the race to guide the cars along the course. GPS, however, was of little use when vehicles had to navigate rocks, shrubbery, and other desert obstacles or manage sharp turns and the occasional cliff. While some vehicles focused on identifying each obstacle, Stanley’s sensors scanned the road, not concentrating on specific objects but looking ahead and identifying the best course. Like Jackel’s bounding dogs, Stanley did not need to identify every single obstacle; it just needed to choose a good enough course to let it move along at a decent pace. That application of machine learning is what Thrun and his team had been practicing in the desert. “It was our secret weapon,” he told a reporter from The New Yorker.

  Even then, the second Grand Challenge was no high-speed robotic drag race. Stanley’s average speed was about nineteen miles per hour, but that was enough to pull ahead of its main competitors, the two Carnegie Mellon vehicles. Stanley nabbed first place, while Carnegie Mellon took second and third prizes. A dark horse entry headed by an IT manager for a Louisiana insurance company took fourth place. When Stanley crossed the finish line, Tether took a deep breath. “Holy cow, we did it,” he said to himself.

  The Stanford team took home the $2 million jackpot. In all, five vehicles crossed the finish line, compared with none in the first event. What exactly enabled the winning teams to pull ahead of the others is hard to pinpoint. All the teams learned from studying the experience of the first Grand Challenge, according to Jackel, and knew what to expect the second time around. But it is impossible to ignore that the winning teams, Stanford and Carnegie Mellon, had received significant DARPA support for their robotics programs over the years.

  The Grand Challenge did not produce any new technology; its success was simply in demonstrating that self-driving cars would work. Though that demonstration was itself a critical achievement, Jackel was hesitant to give it an unwavering endorsement. There were benefits to incentive prizes, but they should not replace funded research, he argued. In the first two competitions, teams had to fund themselves or find corporate sponsorship. Jackel was concerned about the long-term implications of such competitions for research and for the survival of the institutions that support it. “At some place, money had to flow into the system,” he said.

  Jackel knew how precarious the support for research institutions could be, even those that were nationally revered. He was a refugee from Bell Labs, the storied research and development arm of the Bell System. Ma Bell, as the monopoly was affectionately called, ran its lab as a quasi-academic institution, allowing its scientists to work with a large degree of independence. The scientists were encouraged to work on problems facing the telecommunications industry, but their research was judged by its scientific merit, not by the dollar figure their innovations generated. “Basically, the U.S. population funded Bell Labs through their phone bills,” Jackel said.

  In its heyday, Bell Labs gave birth to the first transistor, enabling a revolution in electronic devices. The lab was also home to scientific giants like Claude Shannon, the father of information theory, whose work laid the foundations of digital communication and computing. In a rough sense, Bell Labs was to Ma Bell what DARPA was to the Pentagon: a problem-solving organization afforded wide latitude to explore scientific and technological solutions. And that worked well as long as Bell had a monopoly on telecommunications, the way the Pentagon has a monopoly on running the military. When the telephone monopoly was broken up, the lab was downsized and its autonomy all but eliminated.

  The Grand Challenge was good publicity for robotics and for DARPA, but Jackel worried that it would overshadow the need to support long-term research. Without funding for early scientific exploration, challenges would not accomplish much. The contests cost much more than the $1 million or $2 million prize money; DARPA also had to pay for logistics, which was the most expensive part of the competition. And yet no money went to research. “It’s not self-sustaining,” Jackel said. “You can do it based on something that already exists, but if all we did was have challenges, then at some point we’d just stagnate.”

  —

  In 2007, DARPA held a third and final Grand Challenge, called the Urban Challenge, which took place on a former military base in Victorville, California. Instead of just driving along a single road, teams had six hours to navigate a course in a city-like environment. It was not exactly The Fast and the Furious; the emphasis was on avoiding collisions while obeying traffic laws, so the average speed was around fourteen miles per hour. At one point, competing vehicles from MIT and Cornell University ended up in a bizarre slow-motion collision, with both cars creeping along at just five miles per hour.

  Carnegie Mellon University took first place. By that point, the Grand Challenge was already a national sensation, featured on magazine covers and in television documentaries. It also ended up, by tragic happenstance, being prescient. By 2007, roadside bombs were the leading cause of casualties for American and coalition troops fighting in Iraq and Afghanistan. “Imagine if we had convoys being driven by robots,” Tether said.

  The Grand Challenge was about the future of DARPA more than about robots. In 2003, in the midst of the Total Information Awareness imbroglio, the agency had been a hairbreadth away from congressional intervention that would have permanently ended its independence. “Total Information Awareness got to the point where, quite frankly, I almost lost the agency,” Tether said. “The Grand Challenge really saved DARPA.” The agency that just a few years earlier was accused by one California senator of paving the way for a “George Orwell America” was now the hero of politicians, techies, and science fiction enthusiasts. “The Grand Challenge was one of the greatest public relations efforts, I mean worldwide, and that instantly changed the whole image of DARPA back to where it was,” Tether said.

  The Grand Challenge did more than restore the agency’s image. Tether would soon be presiding over the largest expansion of DARPA’s budget since the agency’s creation. When Tether took over in 2001, the DARPA budget had been stable at about $2 billion a year, but it started to climb dramatically along with the rest of the Pentagon budget, rising to $3 billion a year by 2005. Tether, the former Fuller Brush salesman, had read the mood correctly.

  —

  By the middle of the first decade of the twenty-first century, DARPA was facing a paradox: it was billing itself as a science fiction agency in the midst of a war with mounting casualties and military leaders demanding immediate solutions. The Grand Challenge might have saved DARPA, or at least the agency’s image, but it had no immediate effect on the wars in Afghanistan and Iraq, nor was it intended to, because robotic vehicles that could go beyond a racecourse were still years in the future.

  The Pentagon’s immediate response to the casualties, mostly caused by homemade bombs, was to establish an agency known as the Joint Improvised Explosive Device Defeat Organization. A few former officials wondered why the Pentagon did not turn to DARPA, whose resident technical expertise and ability to work quickly outside the bureaucracy would have made it an ideal home for the bomb-fighting mission. No one in the Pentagon appears to have even considered the option.

  If during the Vietnam War DARPA had sent social scientists to the battlefield, this time around it employed them at home, designing computer programs to predict future conflicts. The army sent anthropologists to Iraq and Afghanistan but without DARPA’s support or involvement. DARPA did contribute to the war, but in piecemeal fashion. It deployed a few technologies, like the Wasp, a handheld drone that troops could put in a backpack. But the most public face of DARPA’s war effort was the Phraselator, a handheld translation device that was rushed to Afghanistan after the U.S. invasion in 2001. In hearings and interviews over the next several years, Tether touted the Phraselator as a prime example of DARPA’s battlefield innovations. The technology press also lavished praise on DARPA’s “universal translator,” even though the device did not really translate; it essentially had preloaded phrases that could be activated by voice recognition of the English equivalent or manual selection. The Phraselator fast became one of DARPA’s most public accomplishments in Afghanistan.

  —

  Some eight thousand miles away from Disneyland, Ken Zemach was on foot patrol with American troops in Afghanistan when the soldiers he was with decided it was time to test the Phraselator on an Afghan villager they encountered. Zemach, a PhD engineer from MIT, was working for the Rapid Equipping Force, an army organization created in 2002 to rush technology to soldiers in Afghanistan, and later in Iraq, without having to go through the typical military bureaucracy, which could take years or even decades. Without even realizing it, Zemach and his colleagues were doing a bit of what DARPA’s AGILE program had done in Vietnam in the 1960s: field-testing off-the-shelf or rapidly developed technology in a war zone.

  The Phraselator’s road to war had started shortly after the September 11 attacks, when Tether called for possible DARPA technologies that might be deployed quickly to troops in the field. A DARPA program manager working on automated speech recognition had suggested a handheld translator, and the agency awarded a $1 million contract to a Maryland-based company called Voxtec to build what became known as the Phraselator. By 2002, the clunky-looking devices started showing up in Afghanistan, and at DARPATech two years later Tether praised the Phraselator as a prime example of what DARPA was doing to help troops.

  DARPA was once again trying to send technology into war, only this time around it had no deployed personnel or any sort of larger strategy. And Zemach was quickly growing disillusioned with what would become the most public face of DARPA’s wartime efforts to field technology in Afghanistan and Iraq. The Phraselator was held up in Washington as a grand success, but Zemach had a different assessment: “It sucked.”

  On patrol in the Afghan village, Zemach held up the Phraselator, which looked more like a Star Trek tricorder than a universal translator. The device spit out a few sentences in the local language. The Phraselator had just said it was going to ask some questions and instructed the man being addressed to raise one hand for yes and two hands for no. The first question was whether the man understood this. The Afghan smiled and raised one hand. The next question was whether there were any foreign fighters in the area. The man raised two hands. No. Were there any minefields in the area? The man raised two hands. No, again.

  Then the group brought over a local interpreter. Suddenly the man’s answers changed. There was a minefield in the area, he reported. It was not that he was lying, Zemach concluded after many similar experiences; it was just that Afghans did not feel comfortable giving information to an electronic device. This scenario repeated itself in village after village, Zemach recalled a decade later.

  The device was designed to recognize a select set of English phrases and then translate them into different languages, like Pashto, Dari, and Arabic. Though the hope was to eventually build a two-way device that could translate the replies, the Phraselator was one-way, limiting it to simple commands and questions. Even with those limitations, the Phraselator was fielded to troops in Afghanistan, spitting out questions to confused Afghans, who often found themselves faced with a device speaking a dialect they did not understand. Even in the rarefied atmosphere of a government office building, military and law enforcement officials trying out the Phraselator were perplexed. One navy tester expressed frustration that the Phraselator, even after five tries, failed to translate a simple question like “Do you speak English?” instead rendering it into phrases like “Follow me,” “Drop it,” and “Can you walk?”
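
  Functionally, the one-way design amounted to a phrase lookup: recognize an English trigger, play back a canned recording. Here is a minimal Python sketch of that design, with hypothetical phrases, file names, and matching logic (assumptions throughout, not Voxtec’s actual software):

    # One-way phrase lookup, minimal sketch (illustrative, not Voxtec's code):
    # match recognized English speech against a fixed phrase table and return
    # the corresponding pre-recorded translation, if any.
    import difflib

    # Hypothetical phrase table: English trigger -> pre-recorded Pashto clip.
    PHRASES = {
        "do you speak english": "pashto/do_you_speak_english.wav",
        "raise one hand for yes": "pashto/raise_one_hand_for_yes.wav",
        "are there foreign fighters in the area": "pashto/foreign_fighters.wav",
        "put your hands up": "pashto/hands_up.wav",
    }

    def lookup(recognized_text, cutoff=0.6):
        """Return the audio clip for the closest preloaded phrase, or None."""
        match = difflib.get_close_matches(
            recognized_text.lower().strip(), list(PHRASES), n=1, cutoff=cutoff)
        return PHRASES[match[0]] if match else None

    print(lookup("Do you speak English?"))  # pashto/do_you_speak_english.wav
    print(lookup("where is the bridge"))    # None: replies can't be handled

  Nothing in such a design can interpret an answer, which is why the device was confined to commands and yes-or-no questions, and why a misheard trigger produced an unrelated phrase.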

  More important, the Phraselator lacked what troops really needed. Most of the preloaded phrases were either yes-or-no questions, asking about the presence of foreign fighters, or direct orders, such as telling people to put up their hands. What troops usually needed were simple instructions to help defuse a potential confrontation when clearing villages. “You are effectively the invading army,” Zemach said. “You are going into a man’s home in front of his family with weapons and going through his stuff. It’s emasculating.”

  What they needed were phrases explaining that the soldiers were Americans and needed to search the village and its homes. Of course, those phrases could be loaded onto a Phraselator, but there was really no need for a custom device that cost thousands of dollars. Zemach had an interpreter record the phrases they needed on a pocket computer, and then he used a web page interface to call up the phrases as needed. It cost nothing and required no additional technology. “You didn’t need this power,” he said of the Phraselator. “You needed something simple.”

  Yet for months, and even years, after the introduction of the Phraselator, the talking phrase book was hauled out at Capitol Hill hearings and in Pentagon meeting rooms, usually by people who did not speak the foreign languages it was programmed with and had never used it in an operational situation. In 2009, an army report collected surveys from soldiers in the field, which failed to garner a single positive comment about the device. “Took too long to translate the correct phrase.” “Translation wrong more often than not.” “It translated the wrong words.” “Is not adequate for ‘heat of the moment’ situations.”

  Zemach said he encountered similar problems a few years later with another DARPA quick-reaction technology fielded in Iraq. The device, called Boomerang, was an acoustic sniper detection system. “DARPA swore up and down they never had a single false positive in the seven months of testing,” he said. “By the time they got from Kuwait to Iraq, they had over five thousand false positives.” DARPA suggested downloading updates on a weekly basis, without realizing that many of the soldiers using the system had no easy access to the Internet at the time. DARPA eventually fixed Boomerang, but the agency’s lack of knowledge about war made the process tortuous. “You have no right deploying the stuff” is what Zemach said he wanted to tell DARPA. “You have no idea how war works.”

 
