
The Imagineers of War


by Sharon Weinberger


  —

  While Tether was pushing the frontier of robotics with the Grand Challenge, DARPA-supported scientists were seeing the flip side of science fiction fandom. Tether was a disciple of George Heilmeier’s and his “catechism”—the seven questions the then director used to evaluate whether to start a program—to the point that chocolate bars with wrappers imprinted with the catechism were distributed at DARPATech. Like Heilmeier, Tether wanted breakthroughs, and he wanted those breakthroughs on a schedule, which he called go/no-go points. Programs that could not reach specific goals on schedule, such as six months or a year, were swiftly ended. University researchers accustomed to long-term funding took a hit. The New York Times reported in 2005 that DARPA had slashed its computer science funding for academics; by 2004, it had dropped to $123 million, down from $214 million in 2001. Tether defended the cuts, saying that he had not seen any fresh ideas from computer science departments. “The message of the complaints seems to be that the computer science community did good work in the past and, therefore, is entitled to be funded at the levels to which it has become accustomed,” Tether shot back, when faced with criticism.

  This was not Tether’s first clash with the scientific community. In 2002, he suddenly ended DARPA’s four-decade-long relationship with the JASONs, the independent scientific advisory group, by taking away their funding. Tether never publicly commented on what precisely led to his decision to sever ties, but according to several accounts the conflict was over the group’s membership, which some Pentagon officials perceived as being weighted toward older physicists. Secretary of Defense Donald Rumsfeld wanted to add specific members to the JASONs, “a couple of young Silicon Valley 30-year-old types,” he later told Fortune magazine in an interview. The JASONs, who had long prided themselves on selecting their own members, refused, and Tether canceled their contract. After a congressional outcry, the Pentagon’s director of defense research and engineering funded the contract; the JASONs lived, but the break with DARPA was permanent.

  The JASON controversy, combined with the cuts to university computer science departments, was beginning to paint Tether as antiacademic. But Tether insisted DARPA was not cutting back on funding for universities; it was merely redirecting funds into interdisciplinary research that he believed would result in major breakthroughs. One of the examples he highlighted at a congressional hearing was computers that could read people’s thoughts, or what DARPA ended up calling “augmented cognition.”

  The term “augmented cognition” came from Eric Horvitz, the Microsoft scientist who had theorized the “hall of mirrors” privacy device for Total Information Awareness. At a DARPA-sponsored meeting in 2000, Horvitz proposed a computer that could adapt directly to a person’s mental state, an extension of J. C. R. Licklider’s dream of man-computer symbiosis. Licklider had wanted computers to help with decision making, because computers could do calculations faster than the human brain. The focus for years had been on making computers more powerful, while the human brain remained the same. Now the computers were much faster and smarter than they had been in Licklider’s time, and the problem was the human brain could not keep up with them. Horvitz wanted to combine cognitive psychology with computer science to find ways to allow the human brain to work faster with computers.

  Horvitz envisioned computers that would sense when a person was tired, overtaxed, or forgetful, reorienting their displays, for example, or providing auditory cues to alert an inattentive user. To demonstrate this vision, he and his colleagues in the Information Science and Technology study group even arranged at one meeting for “mind reading” helmets that showcased one aspect of augmented cognition: the use of sensors to detect brain signals. The idea was that a helmet with electrodes could be placed on someone’s head to detect neural signals—or infrared sensors could be pointed at the brain to look for changes in blood flow—and the computer could then use that information to adjust what it presented to the user.

  Augmented cognition captured the attention of Dylan Schmorrow, a new DARPA program manager, who funded a formal study. Horvitz envisioned augmented cognition as a mix of basic science and engineering—a broad research program studying how people integrate information from across the senses. That research could improve how people interact with computers, such as by creating displays that adapt to the user by highlighting important information. Schmorrow liked the idea, but he liked the helmets even better. The Augmented Cognition program DARPA created ended up focusing on a single application—a device that would detect and respond to someone’s cognitive state. “We actually thought it was interesting still, but we were surprised at just the narrow-focused course,” Horvitz said. “Then again, the program resonated with DARPA’s interest in hardware, devices, and putting a cap on someone’s head.”

  Horvitz and Schmorrow did not realize initially that augmented cognition had a predecessor at DARPA known as biocybernetics, the research program George Lawrence had led in the 1970s. When word of a prior DARPA program eventually filtered down to Schmorrow, he called Emanuel Donchin, one of the original DARPA-funded biocybernetics researchers, asking him to come to DARPA to discuss his earlier work. Donchin was shocked when he showed up at DARPA’s headquarters in Northern Virginia. Back in the 1970s, Donchin might drop in at DARPA to see Lawrence about brain-driven computers, but he would also poke his head in Licklider’s office or chat with other program managers with similar interests. DARPA back then was an open office building, at least for its unclassified projects, and visiting one research manager was an invitation to chat and meet with other officials and swap ideas. “When I came to see Dylan [Schmorrow], there were security people in the lobby,” Donchin said. “I couldn’t speak to anybody about anything except Dylan Schmorrow. It was an amazing transformation.” More shocking for Donchin was that DARPA officials had no idea that the agency had been involved in similar work in the past. “They had zero information about the biocybernetics program,” he said. “DARPA had no capacity for institutional memory. It was very strange.”

  Biocybernetics had focused not on working devices but on investing in a new field of science. In the late 1960s and early 1970s, the technology for detecting neural signals was rudimentary; forty years after that initial DARPA program, the technology had evolved, but scientists still disagreed over the interpretation of those signals. For example, scientists had developed better ways to detect the P300, a brain signal that occurs about three hundred milliseconds after a stimulus, like a specific sight or sound. Yet DARPA’s vision for augmented cognition assumed that such brain signals, which were still being studied in the lab, could be used to start immediately building the equivalent of mind-reading caps for the military. It was science fiction, quite literally.

  To illustrate its vision, the agency enlisted the services of Alexander Singer, a Hollywood television director best known for the new Star Trek series, Deep Space Nine, to create a half-hour mini-film depicting augmented cognition. The video, inspired by the Star Trek holodeck, opened with lingering shots of groundbreaking scientists: Charles Darwin, father of evolution theory; B. F. Skinner, famous for operant conditioning; and Hans Berger, inventor of electroencephalography. It then flashed to DARPA’s Dylan Schmorrow, credited as the father of augmented cognition. The science fiction story line featured a cyber-security officer, Claudia, who must head off a cyber attack designed to destabilize Africa. Claudia is outfitted with a headpiece that monitors her cognitive state, and the computer parses out information to speed her decisions, with occasional interruptions by a Yoda-like cyber chief teleconferenced in from a fishing vacation. “We may be looking at an anomaly, big time,” Claudia declares.

  Alan Gevins, a neuroscientist and longtime researcher in brain-computer interface, was perplexed. Gevins, whose forty-year career had spanned the original DARPA biocybernetics program, was invited to take part in the Augmented Cognition program but was disillusioned with the emphasis on science fiction visions over experimentation. “I’m a data guy, not a philosopher,” he said. DARPA was calling the researchers “performers,” which was accurate, Gevins joked, because some of those under contract to DARPA were acting like performers at a circus. Gevins recalled watching one DARPA-funded researcher demonstrate the use of a brain signal to move a cursor on a computer screen. Picking up these sorts of signals required careful controls and knowledge of the equipment, but the researcher simply stomped his foot when he wanted the cursor to move, introducing a deliberate “artifact,” or error. (Ironically, this was the same method that Uri Geller, who claimed to have psychic powers, was accused of using three decades earlier.) “It clearly was fake, and it wasn’t subtle at all,” Gevins said. “I pointed that out, but it didn’t seem to make any difference. It was astounding actually.” Aghast that DARPA was spending huge amounts of money on efforts that yielded dubious scientific results without using any sort of peer review, Gevins soon dropped out of the program.

  The Augmented Cognition program was less interested in exploring a field of science than in producing hardware. The first phase of the program ended with what DARPA called the Augmented Cognition Technical Integration Experiment, which tested some twenty different gauges of cognitive state, ranging from electroencephalography to pupil tracking. Subjects were monitored as they played a video game called Warship Commander Task, which tested a person’s ability to respond to threatening aircraft. In a sense, it was like playing an old Atari game, where the primary goal was to spot and destroy an enemy aircraft without shooting down friendly aircraft. As the game progressed, sensors would monitor the player’s cognitive state to identify when the brain was overloaded and then parse out information in the most efficient way possible. In an overview paper describing the work, Schmorrow and two colleagues called the results “promising” and said they pointed to the “great potential” of using such sensors for applications.

  Not everyone was so optimistic. Reviewing the 2003 experiment, Mary Cummings, a human-computer interface expert, noted that even the published results indicated that none of the signs of “overload” that the researchers were testing were consistent across all three variations of the test (the researchers varied the number of aircraft, the level of difficulty, and authority). And the two measurable signals that worked consistently across two variations of the game—mouse clicks and pressure—were only indirectly related to someone’s cognitive state. In a published critique of DARPA’s claimed success, she noted the experiment’s errors, data problems, and, most critically, the potential absurdity of developing military equipment that would require someone in combat to carry a thirty-five-pound device and wear an EEG cap with gel sensors attached to the scalp.

  Cummings’s criticisms were particularly stinging. She had been one of the navy’s first female fighter pilots and later went on to get a PhD in systems engineering, then worked at MIT. As an experienced pilot and specialist in human-computer interaction, she knew more than most about creating military technology that could pass scientific muster while also being usable in a realistic military environment, and she was underwhelmed on both fronts by the DARPA Augmented Cognition program. Cummings laughed when asked if she saw scripted demonstrations, like foot stomping. “What didn’t I see?” she replied.

  She recalled being briefed by a company that had already spent several million dollars of DARPA funding on eye tracking—looking at gaze and blink rate—which can be used to gauge someone’s attention to features on a computer screen or even determine whether someone is mentally overloaded. The company officials were claiming an order of magnitude improvement in reaction time using eye tracking, which sounded impressive. “When I asked them to show the experimental results—the results, which were the basis of millions of dollars of funding, I found out that it had only been tested on two people—the creators of the system.”

  The Augmented Cognition program wound down by 2007, although DARPA continued work on two related technologies: brain-reading goggles that were supposed to help soldiers detect possible threats, and a wearable head device that would allow intelligence analysts to sort through imagery quickly. Both programs were based on detecting the P300, the neural signal sparked when someone has unconsciously recognized an object. In the case of the goggles, the device would act as a sort of “sixth sense,” alerting the wearer to a possible threat, like a sniper or someone planting a bomb, before the conscious brain recognized it. For the intelligence analyst, it would tap his or her unconscious thoughts to sort through thousands of images quickly.

  Todd Hughes, a DARPA official who ran the program that created wearable brain-reading devices for imagery analysts, admitted it required some imagination to see the applications. The technology required electrodes attached to the scalp with gel—not the way most government employees would like to work. Hughes joked that his vision was a special team of analysts: “There would be a dozen guys with shaved heads; they would wear special armbands. When the plane crashes and they don’t know where it is, they run into the lab, put on their headgear, and start searching imagery until they find it. Then they walk out of the room heroes.”

  DARPA eventually turned over the imagery analyst gear to the National Geospatial-Intelligence Agency, and the brain-reading goggles went to the army’s Night Vision Laboratory. Technically, that meant both programs “transitioned,” DARPA-speak for technologies that have successfully gone to the military, but it appears that neither was ever used outside of a lab.

  As a research program, augmented cognition was a great idea, Cummings maintained. The problem with the program, she said, was that researchers were being asked to show concrete results in an area that was still basic science. “Where DARPA started to fall overboard is when they started to try and make it applied, ready for some sort of operational results,” she said. The allure of science fiction, without the checks and balances of rigorous science, had led the promising field of augmented cognition down a rabbit hole. The question was whether the same would be true for robotic cars.

  —

  Back in the 1980s, DARPA, as part of the Strategic Computing Initiative, had funded an autonomous land vehicle, dubbed the “smart truck,” which the historian Alex Roland described as a “large, ungainly, box-shaped monster.” Instead of a windshield, the front of the vehicle sported a “large Cyclopean eye” that housed the robot’s sensors. It looked more like 1950s camp science fiction than Terminator, but the exterior was not important. What mattered were the rows of computers stacked inside the fiberglass shell of the truck and the algorithms that were supposed to make sense of the outside world. Those algorithms did not work very well.

  The truck was equipped with television cameras, whose pictures were analyzed by the onboard computers to create what is known as “computer vision,” the term for how computers process and analyze images. The human brain does this well, letting someone know, for example, the difference between a tree and the shadow of a tree. The smart truck did this badly, so researchers found it was best to test it in the noonday sun, when there were no shadows. When Carnegie Mellon researchers took the truck out for a spin in Pittsburgh’s Schenley Park, they had to use masking tape to denote borders, because the truck’s computer vision would mistake inanimate objects, like a tree trunk, for the edge of the pavement. If robots were really going to take to the road, computer vision had to be improved well beyond the smart truck.

  The year of the first Grand Challenge, Larry Jackel, a physicist by training, came to DARPA to take over the agency’s robotics programs. One of the first things he did was buy himself a Roomba, the autonomous vacuum cleaner that has spawned thousands of YouTube videos, many involving the robot’s interaction with people’s pets. The Roomba was made by iRobot, a company that also produced military robots, including its flagship PackBot, which had been developed with DARPA funding in the 1990s. The PackBot showed up in Afghanistan in 2002 to help clear caves and was not particularly effective (the robots lost communications and got stuck). The robot soon found a higher calling in explosive ordnance disposal. Eventually, thousands of modified PackBots were sent to Iraq and Afghanistan to help defuse roadside bombs. But the PackBot’s civilian cousin, the Roomba, frustrated Jackel: it got stuck on the modern shag area rug in his New Jersey home; it was flummoxed by computer cords; and it could not navigate inside the four legs of a chair, or if it did, it got stuck there, as if trapped in a virtual jail of invisible walls. Frustrated, he finally got rid of the Roomba and went back to a regular vacuum cleaner.

  In popular culture, robotic vehicles—or even robotic soldiers—are often portrayed as right over the horizon. The threat of armed Terminators is debated as if the Pentagon were already building armies of them. In reality, most of DARPA’s programs focused on advancing different aspects of robotics rather than on building war robots. For example, DARPA sponsored Boston Dynamics to build LittleDog, a four-legged vehicle (which actually looked more like a bug than a canine) that was designed to travel on rough terrain. LittleDog was followed by BigDog, a larger version that could carry supplies for troops, like a robotic mule. While tech blogs and popular magazines often called the headless BigDog a “war robot,” it was more accurately a lab robot, meant to demonstrate a specific ability, in this case how to move a legged robot over rough terrain. BigDog was not destined for the battlefield.

  Even Congress was possessed by technological optimism in 2000, writing into law that by 2015 one-third of all military ground vehicles should be unmanned. It was an ambitious if misinformed goal. The enthusiasm stemmed from growing use of unmanned aerial vehicles. Drones in the first decade of the twenty-first century were rapidly replacing manned aircraft, so unmanned ground vehicles sounded like the next logical step. What was not immediately apparent to Congress was how different an unmanned aerial vehicle was from an unmanned ground vehicle. Drones, particularly at high altitudes, are mostly in danger of hitting other aircraft. On the ground, robots have to contend with every type and size of obstacle. Differentiating between a rock and its shadow, as the 1980s DARPA smart truck demonstrated, could be difficult for even the most advanced robots. The 2004 Grand Challenge demonstrated all of the limitations of technology in gory, tire-burning detail.

 
