The Naked Future


by Patrick Tucker


  Today’s computerized lie detectors take the form of Embodied Avatar kiosks. These watch eye dilation and other factors to discern whether passengers are being truthful or deceitful. No, the kiosk isn’t going to do a cavity search, but it can summon an agent if it robotically determines you’re just a bit too shifty to be allowed on a plane without an interview.1

  Their functioning is based on the work of Dr. Paul Ekman, one of the world’s foremost experts on lie detection, specifically on how deception reveals itself through facial expression. Ekman’s previous work has shown that with just a bit of training a person can learn to spot active deceit with 90 percent accuracy simply by observing certain visual and auditory cues—wide, fearful eyes and fidgeting, primarily—and do so in just thirty seconds. If you’re a TSA agent who has to screen hundreds of passengers at a busy airport, thirty seconds is about as much time as you can take to decide whether to pull a suspicious person out of line or let her board a plane.2

  The biometric detection of lies could involve a number of methods, the most promising of which is thermal image analysis for anxiety. If you look at the heat coming off someone’s face with a thermal camera, you can see large hot spots in the area around the eyes (the periorbital region). This indicates activity in the sympathetic-adrenergic nervous system, which is a sign of fear, not necessarily of lying. Someone standing in a checkpoint line with hot eyes is probably nervous about something.3 The presence of a high degree of nervousness at an airport checkpoint could be considered enough justification for additional screening. The hope of people in the lie detection business is that very sensitive sensors placed a couple of inches away from a subject’s face would provide reliable data on deception.
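  The Embodied Avatar’s actual algorithms aren’t public, but the basic logic of thermal anxiety screening is simple enough to sketch. Below is a minimal, hypothetical Python illustration: it compares the mean temperature of the periorbital region against the rest of the face and raises a flag when the difference exceeds a threshold. The region coordinates and the 0.7-degree cutoff are assumptions invented for the example, not values from any deployed system.

import numpy as np

def periorbital_anxiety_flag(thermal_frame, eye_box, face_box, delta_c=0.7):
    """Flag elevated periorbital warming in a thermal image.

    thermal_frame: 2D array of temperatures in degrees Celsius.
    eye_box, face_box: (row_start, row_stop, col_start, col_stop) regions.
    delta_c: hypothetical cutoff for how much warmer the periorbital
             region must be than the face overall.
    """
    r0, r1, c0, c1 = eye_box
    eye_mean = thermal_frame[r0:r1, c0:c1].mean()

    fr0, fr1, fc0, fc1 = face_box
    face_mean = thermal_frame[fr0:fr1, fc0:fc1].mean()

    # Hot eyes relative to the face are read here as sympathetic arousal:
    # a sign of fear or stress, not proof of deception.
    return (eye_mean - face_mean) > delta_c

# Illustrative call on a fake 120 x 160 thermal frame.
frame = np.full((120, 160), 34.0)
frame[40:55, 50:110] += 1.2  # simulated warm periorbital band
print(periorbital_anxiety_flag(frame, (40, 55, 50, 110), (20, 110, 40, 120)))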

  In 2006, the TSA began to experiment with live screeners who were being taught to examine people’s facial expressions, mannerisms, and so on, for signs of lying as part of a program called SPOT (Screening of Passengers by Observation Techniques).4, 5 When an airport-stationed police officer trained in “behavior detection” harassed King Downing, an ACLU coordinator and an African American, an embarrassing lawsuit followed. As Downing’s lawyer John Reinstein told New York Times reporter Eric Lipton, “There is a significant prospect this security method is going to be applied in a discriminatory manner. It introduces into the screening system a number of highly subjective elements left to the discretion of the individual officer.”6

  Later the U.S. Government Accountability Office (GAO) would tell Congress that the TSA had “deployed its behavior detection program nationwide before first determining whether there was a scientifically valid basis for the program.”

  DARPA’s Larry Willis defended the program before the U.S. Congress, noting that “a high-risk traveler is nine times more likely to be identified using Operational SPOT versus random screening.”7

  You may feel that computerized behavior surveillance at airports is creepy, but isn’t a future where robots analyze our eye movements and face heat maps to detect lying preferable to one where policemen make inferences about intent on the basis of what they see? And aren’t both of these methods, cop and robot, better than what we’ve got, a system that will deny someone a seat on a plane because her name bears a slight similarity to that of someone on a watch list? Probably the worst aspect of our airport security system as it currently exists is that evidence suggests we’re not actually getting the security we think we are. As I originally wrote for the Futurist, recent research suggests that ever more strict security measures in place in U.S. airports are making air travel less safe and airports more vulnerable. So much money is spent screening passengers who pose little risk that it’s hurting the TSA’s ability to identify real threats, according to research from University of Illinois mathematics professor Sheldon H. Jacobson. Consider that for a second. We’ve finally reached a point where a stranger with a badge can order us to disrobe . . . in public . . . while we’re walking . . . we accept this without the slightest complaint . . . and it’s not actually making us any safer.

  Our present system cannot endure forever. We won’t be X-raying our shoes ten years from now. But what will replace it? What is the optimal way of making sure maniacs can’t destroy planes while also keeping intercontinental air traffic on schedule?

  The best solution, Jacobson’s research suggests, is to separate the relatively few high-risk passengers from the vast pool of low-risk passengers long before anybody approaches the checkpoint line. The use of passenger data to separate the sheep from the goats would shorten airport screening lines, catch more threats, and improve overall system efficiency. To realize those three benefits we will all be asked to give up more privacy. We’ll grumble at first, write indignant tweets and blog posts as though George Orwell had an opinion on the TSA, but in time we will exhaust ourselves and submit to predictive screening in order to save twenty minutes here or there. Our surrender, like so many aspects of our future, is already perfectly predictable. Here’s why:

  Our resistance to ever more capable security systems originates from a natural and appropriate suspicion of authority but also the fear of being found guilty of some trespass we did not in fact commit, of becoming a “false positive.” This fear is what allows us to sympathize with defendants in a courtroom setting, and indeed, with folks who have been put on the wrong watch list and kept off an aircraft through no fault of their own. In fact, the entire functioning of our criminal justice system depends on all of us, as witnesses, jury members, and taxpayers, caring a lot about false positives. As the number of false positives decreases, our acceptance of additional security actually grows.

  Convicting the wrong person for a crime is a high-cost false positive (often of higher cost than the crime). Those costs are borne mostly by the accused individual but also by society. Arresting an innocent bystander is also high cost, but less so. Relatively speaking, pulling the wrong person out of a checkpoint line for additional screening has a low cost, but if you do it often enough, the costs add up. You increase wait time for everyone else (time that could otherwise be spent doing something else), and, as Jacobson’s model shows, repeated pull-asides erode overall system performance very quickly.
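  To see how those “low-cost” pull-asides add up, here is a back-of-the-envelope Python sketch using invented numbers rather than anything from Jacobson’s model:

# All figures below are illustrative assumptions, not Jacobson's data.
passengers_per_hour = 1200     # throughput of a single checkpoint
false_positive_rate = 0.05     # share pulled aside who pose no threat
secondary_minutes = 10         # extra screening time per pull-aside
secondary_lanes = 2            # officers available for secondary screening

pull_asides_per_hour = passengers_per_hour * false_positive_rate
officer_minutes_needed = pull_asides_per_hour * secondary_minutes
officer_minutes_available = secondary_lanes * 60

print(f"{pull_asides_per_hour:.0f} pull-asides per hour need "
      f"{officer_minutes_needed:.0f} officer-minutes of secondary screening; "
      f"only {officer_minutes_available} are available.")

  Sixty pull-asides an hour demand six hundred officer-minutes of secondary screening against one hundred twenty available; the shortfall becomes everyone else’s wait.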

  Now here’s the tyranny of numbers: decrease the number of high-cost false positives and you can afford to make more premature arrests; bring that number down and you can afford more stop-and-frisks or security-check pat downs. Bring that number down again and the balance sheet looks like progress. The average citizen knows only that the system is improving. The crime rate appears to be going down; the security line at the airport seems to be moving faster. Life is getting better. If cameras, robots, big data systems, and predictive analytics played a part in that, we respond by becoming more accepting of robots, cameras, and systems quantifying our threat potential when we’re about to get on a plane. We grow more accustomed to surveillance in general, especially when submitting to extra surveillance has a voluntary component, one that makes submission convenient and resistance incredibly inconvenient. This is why, in the week following the disclosure of the massive NSA metadata surveillance program, a majority (56 percent) of Americans polled by Pew said they believed the tactics the NSA was employing were acceptable. That’s astounding considering that at the time, the media narrative was running clearly in the opposite direction.8

  Another example of the opt-in surveillance state is the TSA’s PreCheck program, which expedites screening for eligible passengers by rating their risk against that of the entire flying population. In order to be eligible for PreCheck, you’re required to give the Department of Homeland Security a window into your personal life, including where you go, your occupation, your green card number if you’re a legal alien, your fingerprints, and various other facts and tidbits of the sort that you could be forgiven for assuming the TSA had already (certainly the IRS has a lot of it). It’s not exactly more invasive than a full body scan but in many respects it is more personal. Homeland Security uses the information it gets to calculate the probability that you might be a security threat. If you, like most people in the United States, are a natural-born citizen and don’t have any outstanding warrants, you’re not a big risk.

  People who use TSA PreCheck compare it with being back in a simpler and more innocent time. But there’s a downside, just as there is with customer loyalty programs at grocery stores. Programs such as PreCheck make a higher level of constant surveillance acceptable to more people. Suddenly, individuals who don’t want to go along look extra suspicious. By definition, they are abnormal.

  Security becomes faster, more efficient, and more effective through predictive analytics and automation, so you should expect to be interacting with predictive screeners in more places beyond the X-ray line at the airport. But for computer programs, clearing people to get on a plane isn’t as clear-cut as putting widgets in a box. Trained algorithms are more sensitive than an old-school southern sheriff when it comes to what is “abnormal.” Yet when a deputy or state trooper questions you on the side of the road, he knows only as much about you as he can perceive with his eyes, his ears, and his nose (or perhaps his dog’s nose if his dog’s at the border). Because the digital trail we leave behind is so extensive, the potential reach of these programs is far greater. And they’re sniffing you already. Today, many of these programs are in use to scan for “insider threats.” If you don’t consider yourself an insider, think again.

  Abnormal on the “Inside”

  The location is Fort Hood, Texas. The date is November 5, 2009. It is shortly after 1 P.M.

  Army psychiatrist Major Nidal Hasan approaches the densely packed Soldier Readiness Processing Center where hundreds of soldiers are awaiting medical screening. At 1:20 P.M., Hasan, who is a Muslim, bows his head and utters a brief Islamic prayer. Then he withdraws an FN Herstal Five-seven semiautomatic pistol (a weapon he selected based on the high capacity of its magazine) and another pistol.9

  As he begins firing, the unarmed soldiers take cover. Hasan discharges the weapon methodically, in controlled bursts. Soldiers hear rapid shots, then silence, then shots. Several wounded men attempt to flee from the building and Hasan chases them. This is how thirty-four-year-old police sergeant Kimberly D. Munley encounters him, walking quickly after a group of bleeding soldiers who have managed to make it out of the center. Hasan is firing on them as though shooting at a covey of quail that has jumped up from a bluff of tall grass. Munley draws her gun and pulls the trigger. Hasan turns, charges, fires, and hits Munley in the legs and wrists before she lands several rounds in his torso and he collapses. The entire assault has lasted seven minutes and has left thirteen dead and thirty-eight wounded.10

  Following the Fort Hood incident, the Federal Bureau of Investigation, Texas Rangers, and U.S. chattering classes went about the usual business of disaster forensics, piecing together (or inventing) the hidden story of what made Hasan snap, finding the “unmistakable” warning signs in Hasan’s behavior that pointed to the crime he was about to commit. After systematic abuse from other soldiers, Hasan had become withdrawn. He wanted out of the military but felt trapped. Some of Hasan’s superiors had pegged him as a potential “insider threat” years before the Hood shootings, but when they reported their concerns, nothing came of it. The biggest warning signal sounded in the summer of 2009 when Hasan went out shopping for very specific and nonstandard-issue firearms.11

  The army had a lot of data on Hasan, much of which could have yielded clues to his intentions. The problem was that the army has a lot of data on everybody in the army. Some sixty-five thousand personnel were stationed at Fort Hood alone. The higher-ups soon realized that if they were to screen every e-mail or text message between soldiers and their correspondents for signs of future violence, it would work out to 14,950,000 people and 4,680,000,000 potential messages. Valuable warning signs of future insider threats were contained in those messages, which had been exchanged on systems and devices to which the army had access. But it was too much data for any human team to work through.

  Not long after Fort Hood, army private Bradley Manning was arrested for giving confidential material to the Web site WikiLeaks, material that showed the United States was involved in killing civilians in Iraq. President Obama, who has proven to be exceedingly hard on whistle-blowing, responded with Executive Order 13587, which established an Insider Threat Task Force and mandated that the NSA and DOD each set up their own insider threat program.12 DARPA issued a broad agency announcement on October 22, 2010, indicating that it was looking to develop a technology it called Anomaly Detection at Multiple Scales (ADAMS).13

  The goal of this program is to “create, adapt and apply technology to the problem of anomaly characterization and detection in massive data sets . . . The focus is on malevolent insiders that started out as ‘good guys.’ The specific goal of ADAMS is to detect anomalous behaviors before or shortly after they turn,” to train a computer system to detect the subtle signals of intent in e-mails and text messages of the sort that might have stopped the Fort Hood disaster, the Bradley Manning disclosure of classified information to WikiLeaks, or the Edward Snowden leak to the Guardian newspaper.
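  DARPA has not published ADAMS’s internals, but the generic idea, scoring each insider’s behavior against the population and flagging the outliers, can be sketched in a few lines of Python. The features, figures, and threshold below are invented purely for illustration.

import numpy as np

# Rows: users. Columns: per-user behavioral features, e.g. messages per day,
# fraction sent after hours, fraction sent outside the unit.
# The feature set and the cutoff are assumptions, not ADAMS's.
features = np.array([
    [40.0, 0.05, 0.10],
    [35.0, 0.07, 0.12],
    [38.0, 0.04, 0.08],
    [90.0, 0.60, 0.55],  # one user whose behavior deviates sharply
    [42.0, 0.06, 0.11],
])

# Robust z-scores: deviation from the median scaled by the median absolute
# deviation, so a handful of outliers can't hide themselves in the average.
median = np.median(features, axis=0)
mad = np.median(np.abs(features - median), axis=0) + 1e-9
z = np.abs(features - median) / mad

# Aggregate anomaly score per user; flag anyone far outside the norm.
scores = z.mean(axis=1)
THRESHOLD = 5.0
for user, score in enumerate(scores):
    if score > THRESHOLD:
        print(f"user {user}: anomaly score {score:.1f} -> flag for review")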

  Varying bodies have differing definitions of what constitutes an “insider” in a military context, but most agree that an insider is anyone with authorized access to any sensitive information that could be used against U.S. interests if disclosed improperly. What constitutes that sensitive information is a rather open-ended question, but we know that it extends beyond the files, reports, or data that has been officially labeled top secret.14 For instance, the 2013 disclosures about the NSA PRISM system showed that several prominent Silicon Valley companies were forced to comply with NSA programs and orders from the secret Foreign Intelligence Surveillance Act (FISA) court. In that instance, an insider would include not just government workers or government contractors such as Edward Snowden but also any person at any of those private companies such as Google, Facebook, or Microsoft who simply knew of the existence of particular FISA orders.15

  That broadness in the definition of both insider and outsider information is important for anyone concerned that an insider threat program could be abused. From the perspective of an algorithm, there is no meaningful difference between someone who is inside the military, inside the TSA PreCheck program, or inside Facebook. The same methods of anomaly detection can be applied to any observable domain.

  We are all insiders.16

  What are the telltale marks of a dangerous traitor? Some studies by military scholars list such seemingly benign traits as “lacks positive identity with unit or country,” “strange habits,” “behavior shifts,” and “choice of questionable reading materials,” phrases that describe virtually every American teenager since Rebel Without a Cause. The more provocative “stores ammunition” and “exhibits sudden interest in particular headquarters” seem of greater use but are something of a lagging indicator. And, naturally, they represent only one specific type of threat. In cyber-sabotage, attempting to gain access to a system or information source unrelated to your job function is often considered “abnormal” behavior. But how do you separate actionable abnormal from regular curiosity?17
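  One hedged way to draw that line, sketched below in Python with entirely hypothetical resource names and cutoffs, is to compare a user’s accesses against a role baseline but tolerate a small allowance of out-of-role touches before escalating, so ordinary curiosity alone doesn’t trip the alarm.

from collections import Counter

# Hypothetical baseline: resources a payroll analyst normally touches.
ROLE_BASELINE = {"payroll_db", "hr_portal", "timesheets"}

# Allowance for ordinary curiosity: a few out-of-role touches per week are
# ignored; a sustained pattern is escalated. The cutoff is a guess.
CURIOSITY_ALLOWANCE = 3

def review_access_log(user, weekly_accesses):
    """Return True if the week's out-of-role accesses exceed the allowance."""
    off_role = [r for r in weekly_accesses if r not in ROLE_BASELINE]
    if len(off_role) > CURIOSITY_ALLOWANCE:
        print(f"{user}: {len(off_role)} out-of-role accesses "
              f"{dict(Counter(off_role))} -> escalate")
        return True
    return False

# Illustrative week: one stray wiki visit plus repeated pulls from a vault
# the analyst has no business in.
review_access_log("analyst_17",
                  ["payroll_db", "timesheets", "wiki", "engineering_vault",
                   "engineering_vault", "engineering_vault", "payroll_db"])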

  No one at DARPA or any of the academic teams applying for the money is eager to discuss their research with reporters. Some of the most interesting work in predicting insider threats that has been made available to the public comes from Oliver Brdiczka and several of his colleagues at PARC. Instead of trying to pin down the traits associated with a treasonous personality, they sought to create a telemetric experiment in which they could actually observe the threat develop. Here’s another example of a simulation that would have been extremely costly a few years ago but has become cheap and relatively easy to perform thanks to more people living more of their lives online.18

  Brdiczka and his colleagues looked at World of Warcraft, a massively multiplayer online game in which people develop a character, join teams called guilds, and go on quests that can last for days, during which time players effectively shun conventional hygiene practices or real-world contact with the opposite sex. Brdiczka had read the literature on how interpersonal dynamics, little exchanges between coworkers or between workers and supervisors, can predict workplace problems. World of Warcraft provided a perfect environment to telemetrically explore how group dynamics can turn a happy worker into a player who is willing to sabotage or steal from the members of his guild (a proxy for teammates). The researchers had each subject fill out a twenty-question survey to scan for the key personality traits of extroversion, agreeableness, conscientiousness, risk taking, and neuroticism, and they looked at the subjects’ Facebook pages (and other social networking profiles) for similar personality clues. Then the researchers let them loose in the land of orcs and trolls.

  Brdiczka and his team measured everything from which characters played more defensively to how quickly certain players achieved certain goals, what kinds of assignments they took, whether they gave their characters pets, and how likely a subject was to shove a fellow player in front of a dragon to buy time to use a healing potion. Then they ran every verbal or text exchange between characters through a sentiment analysis algorithm to get a sense of how the subjects were communicating. In all, they looked at sixty-eight behavioral features related to how the players played the game. When they coupled those scores with the scores from the surveys and social network profiles, they found they could predict the players most likely to “quit,” and thus sabotage their guild, within a six-month survey window with 89 percent accuracy.
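  PARC has not released its code, but the general shape of the approach, concatenating in-game behavioral features with personality-survey scores and fitting a standard classifier to labeled outcomes, can be sketched in Python with synthetic data standing in for the real telemetry. Only the structure below mirrors the study; every number is invented.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_players = 500

# Synthetic stand-ins: 68 in-game behavioral features plus the five survey
# traits (extroversion, agreeableness, conscientiousness, risk taking,
# neuroticism). Real telemetry would replace these random draws.
behavior = rng.normal(size=(n_players, 68))
survey = rng.normal(size=(n_players, 5))
X = np.hstack([behavior, survey])

# Fake labels: 1 = quit and sabotaged the guild within six months. The label
# is driven by two arbitrary features purely so there is something to learn.
signal = 0.8 * behavior[:, 0] - 0.6 * survey[:, 4]
y = (signal + rng.normal(scale=0.5, size=n_players) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))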

  World of Warcraft functions as a useful proxy for all Internet interaction, and the government believes it has the right to access any of it. In 2012 the FBI created what it calls its Domestic Communications Assistance Center (DCAC) for the purpose of building back doors into the Internet, and particularly into social networks, as part of a sweeping Electronic Surveillance (ELSUR) Strategy. These online collection devices join the sensors, cameras, and scopes of the physical world, and all of it contributes to an ever more revealing picture of our naked future.

 
