Raj De, then the NSA general counsel, told me soon after the first Snowden disclosures, “Even if you agree, as I do, that we should be having public discussion about some of these issues, it’s still okay to think what he did was wrong and this wasn’t the way to go about it. To my mind in some ways it’s pretty antidemocratic for any individual to put their judgment above everyone else’s.”
Yet this was always the leaker’s role, taking authority upon himself to spill secrets. It was a reporter’s role, too. I never heard a plausible scenario for a national reckoning with NSA surveillance in a universe that did not include a leak like Snowden’s. Without those disclosures, Comey told me, “I don’t think we were going to have the conversation in public, certainly. There’s a good argument to make that—I won’t say we’re better off for it, but not as bad off as people thought we would be and said we would be, and there have been some benefits to the conversation.”
I asked Raj De, “If we could have had this debate without the Snowden leaks, why didn’t we?”
“I think it’s because our political system is broken at some level, probably,” he replied. “The national security bureaucracy is naturally tilted in a conservative way. . . . By and large, these are really good people who are trying to keep everyone safe. So, if that’s your job, of course you’re going to be risk-averse about being transparent. The systems are not geared to allow for greater public debate. Every incentive is to not do anything that would increase risk. Transparency is an abstract value in some ways, and a bomb going off is a concrete nightmare, and when just human beings are worried about raising the risk of one by trying to fulfill this other value it gets really hard.”
Some Snowden critics acknowledged that, in principle, a leak of classified information could bring democratic benefits. There might be justifiable leaks, some said, but most of Snowden’s did not qualify.
“You can make an argument—I don’t fully agree with it—that the disclosure of the [domestic telephone] metadata program was in the public interest,” Ledgett told me over seafood soup at the Elkridge Furnace Inn, a few miles north of Fort Meade. “The argument stretches more thinly when you talk about the PRISM program. And with all the stuff after that, it breaks down entirely.”
The idea was that only those programs, if any, counted as domestic surveillance, and only domestic surveillance raised legitimate points of debate. The rest of the disclosures were “national security porn,” Ledgett said, gratuitously damaging to valuable intelligence techniques.
I disagreed, or I would not have written the stories I wrote. Even if you set the bar at eavesdropping on Americans, you still had to consider the impact of collection overseas. The NSA’s and the CIA’s technical collection machinery tapped foreign circuits in bulk, and Americans were ubiquitous on those lines. Google and Yahoo cloud exploitation gathered American data. Bulk location tracking followed Americans. Address book collection swept Americans into the NSA’s social networking database. Bulk acquisition methods were, by their nature, as close to comprehensive as possible.
Two years in a row, in public presentations, Gus Hunt, the CIA’s chief technical officer, made exactly that point. In 2012, he displayed a slide that spelled out the reasoning behind Keith Alexander’s haystack metaphor.
Don’t know the future value of a dot today
We cannot connect dots we don’t have
The old collect, winnow, dissem[inate] model fails spectacularly in the Big Data world
Therefore, Hunt said, the intelligence community had to collect all the dots, as many as data science allowed. It might sound surprising to a layperson, but scale was not an obstacle. With Big Data techniques, he wrote, “6,998,329,787 is a small number.” That was the estimated population of the entire world that year. “It is nearly within our grasp to compute on all human generated information,” he wrote, underscoring the word “all.” One year later, at a conference held by GigaOm in New York City, he made himself still more explicit. “The value of any piece of information is only known when you can connect it with something else that arrives at a future point in time,” he told the audience. “Since you can’t connect dots you don’t have, it drives us into a mode of, we fundamentally try to collect everything and hang on to it forever.”
I had no quarrel with foreign intelligence gathering. It was essential to national defense and rational policymaking. It was simply wrong to say, however, that “everything . . . forever” could be configured to leave Americans alone. In our lunch together, Ledgett did not insist on that point, but he thought the risks it posed were fanciful. He was game to discuss the hard problems, but he was also exasperated.
“People are uncomfortable with the idea of the NSA penetrating technology and services that everybody uses,” I said.
“How would you propose that we understand what the nation’s adversaries are doing?” he asked. “If you could talk them into going to badguy.com, that would be cool. Then we’ll only concentrate on that part.”
This was the root of the problem. Intelligence targets in days gone by—the Nazi high command in the 1940s was the classic case—had used unique ciphers, codes, and communications technology. Nobody else was on those channels. The NSA’s predecessor, alongside British allies, could break in without any prospect of “incidental” bystanders. Today there were still intelligence targets who used bespoke technology, but they were exceptions. Most of them used the same pipes as the rest of us.
Why not find your targets where they are and other people are not? Go after their individual devices, local networks, junction points?
“Some of that’s opportunity-based,” Ledgett said. “Sometimes we don’t have the opportunity. Then there’s efficiencies as well. Do you have enough—mainly people, but also money—to go do that everywhere you need to do that?”
We were back to efficiency and my question for Comey at Fordham. At times in its history, the NSA had achieved something like a God’s-eye view of its targets on their specialized communications channels. Now, with the same goal, the agency wanted something it had never had before: efficient means to read and listen to anything on any channel at all. My instincts rebelled against a too-efficient state on this scale of operations. I worried about the Dark Mirror, so transparent on one side and so opaque on the other. The power gradient of government to citizens became too steep.
I asked Ledgett about the artificial divide between surveillance conducted at home and overseas. Even before September 11, 2001, the intelligence establishment had pressed for relief from the strictures of FISA, the 1978 law that governed electronic surveillance inside the United States. They pointed out that purely foreign communications—from Russia to Italy, for example—might pass through internet infrastructure in New York. Under the original FISA law, intercepting those communications required an individual warrant. Why, they asked, should foreigners, while located overseas, be granted the protection of the Fourth Amendment? It was a reasonable question. Congress passed the Protect America Act of 2007 and the FISA Amendments Act of 2008 to ease restrictions on operations of this kind. What legislators never really considered was the flip side. The same revolution in global telecoms sent purely domestic communications overseas, as I have described above. Almost nobody called for tightening controls on U.S. intelligence gathering abroad to account for all the Americans swept in. Congress passed no such law. Twelve Triple Three remained the only legal framework outside U.S. territory.
“The default assumption and the reason FISA applies if you’re collecting inside the United States—or from a collection system based in the U.S.—is you’ll run into a U.S. person, more likely than not,” Ledgett said. “So you need procedures in place that are more restrictive in authorization. The default assumption in E.O. 12333 is that you’re more likely to find a non-U.S. person. I don’t think those assumptions are necessarily false.”
But Americans are all over the foreign circuits, I pointed out.
“The fact is you run into Americans in every part of the global network,” Ledgett acknowledged. “When I started in this business it was all about the Soviet Union, and if you got into a network, all it was was Soviets. They built and deployed and operated their own networks for their own use. The global system [today] means that everything’s interconnected. That’s why there are minimization procedures that are designed so when you inevitably encounter Americans in pursuing foreign intelligence targets—everything from targeting to collection to processing to storage to dissemination—there are gates that are designed to protect the identity of the U.S. communicant. So that they’re not subjected to . . .”
He trailed off.
“There’s no legal regime you can establish that can eliminate that,” he finished.
* * *
“Minimization,” the term Ledgett used, was the intelligence community’s answer to the bystander problem. It was jargon for a dense thicket of rules intended to limit intrusions into the privacy of Americans who were incidentally swept in by surveillance of others. Minimization did not stop the collection of American communications. It imposed a set of procedures after the fact. The procedures told intelligence authorities what they could and could not do with the U.S. data once they had it in their hands. “Collection rules prevent the government from having the ability to misuse data,” as Jennifer Granick, then director of civil liberties at Stanford Law School’s Center for Internet and Society, wrote in her book American Spies. “Minimization rules, in contrast, deny government officials permission to misuse data in particular ways.”
Ledgett was right to say that incidental collection could not be eliminated entirely. If the NSA wiretapped only a single telephone line, it could still collect personal calls made by the target’s spouse or conversations between the target and the target’s American fishing buddy. Even in that simplest of cases, there could easily be more nontargets than targets in the surveillance take. Incidental collection was exponentially greater when it came to digital content. The contents of a target’s laptop computer or a Gmail account, for example, could—and often did—contain personal photos and documents belonging to other people who were not pertinent to foreign intelligence. In the real world of surveillance operations at scale, this imbalance was a fact of life.
In order to demonstrate what that meant concretely, Snowden had given me the content of 160,000 actual communications intercepted in the PRISM operation. Ashkan, Julie Tate, and I did weeks of computer-assisted analysis on the cache, which filled about a quarter million pages. Picture it as one big pile of conversations intercepted by the NSA. In it were the texts of chats and emails along with photos and other kinds of files sent as attachments. We counted the number of unique accounts in the pile. More than 9 out of 10 of the accounts we found were not the intended targets of NSA surveillance.
That figure—9 out of 10—represented the “incidentally collected” bystanders. They accounted for more than 10,000 of the 11,400 unique accounts whose contents were intercepted. Some of the bystanders knew the NSA targets and conversed with them. Many others fell into the pile by joining a chat room, regardless of subject, or using an online service hosted on a server that a target used for something else.
Half of the files that held those intercepted conversations included Americans. The NSA ingested so much content as it spied on 1,250 foreign targets that it had to black out 65,000 references to U.S. citizens and green card holders. We also found roughly 900 U.S. accounts that NSA analysts had neglected to black out.
Even when the analysts explicitly described intercepted files as useless for intelligence purposes, the NSA retained them. The contents had an intimate, even voyeuristic quality. They told stories of love and heartbreak, illicit sexual liaisons, mental health crises, political and religious conversions, financial anxieties, and disappointed hopes. They included medical records sent from one family member to another, résumés from job hunters, and academic transcripts of schoolchildren. In one photo, a young girl in religious dress beamed at a camera outside a mosque. Scores of pictures showed infants and toddlers in bathtubs, on swings, sprawled on their backs, and kissed by their mothers. In some photos, men showed off their physiques. In others, women modeled lingerie, leaning suggestively into a webcam or striking risqué poses in shorts and bikini tops.
All of those examples were from nontargets. “None of the hits that were received were relevant,” two Navy cryptologic technicians wrote in one of many summaries of nonproductive surveillance. “No additional information,” wrote a civilian analyst. If a target entered an online chat room, the NSA collected the words and identities of every person who posted there, regardless of subject, as well as every person who simply “lurked,” reading passively what other people wrote. “1 target, 38 others on there,” one analyst wrote. She collected data on them all. In other cases, the NSA designated as its target the internet protocol, or IP, address of a computer server used by hundreds of people.
The NSA treated all content intercepted incidentally from third parties as permissible to retain, store, search, and distribute to its government customers. Raj De testified that the NSA did not generally attempt to remove irrelevant personal content, because it was difficult for one analyst to know what might become relevant to another.
Minimization was fiendishly difficult to explain because there were so many nuances and conditional clauses. Bob Litt described it this way for a lay audience at the Brookings Institution: “Minimization procedures are procedures . . . that must be ‘reasonably designed in light of the purpose and technique of the particular surveillance, to minimize the acquisition and retention, and prohibit the dissemination, of nonpublicly available information concerning unconsenting United States persons consistent with the need of the United States to obtain, produce, and disseminate foreign intelligence information.’” Later, in another public appearance, he said it had taken him years to understand the safeguards. With rules so complex, as Granick noted, it was reasonable to worry about their application and effectiveness in practice.
At the simplest level, minimization required that the names of Americans be redacted before the NSA distributed an intelligence report. Usually. Contingently. The practice was qualified. For one thing, the names were masked, not deleted. They could be unmasked at will, and unmasking was fairly routine.
If the NSA had reported on the telephone call I received from the Israeli prime minister near the end of March 1997, when I was the Washington Post Jerusalem correspondent, the report would have conveyed that Benjamin Netanyahu “told MINIMIZED U.S. JOURNALIST that his story ‘was bullshit and you know it’s bullshit and you did it on purpose.’” If, however, a recipient of that report asked for more details, including my name, in order to understand the meaning or significance of the intercepted telephone call, then the NSA would identify me. Probably. Contingently. There was some discretion involved. The names of Americans could also be unmasked and reported to the FBI or another law enforcement agency if the NSA believed it had come across evidence of a crime. That alone was a significant exception because the evidence, in such a case, would have been obtained without a criminal warrant.
It was apt that Ledgett used the term “gates,” not “prohibitions,” to describe the limits imposed by minimization procedures. Every barrier set by the rules could be opened. Nonpertinent information about Americans was supposed to be deleted from NSA data stores, for example, but that restriction applied only if the information “could not be” foreign intelligence and an analyst had affirmative reason to believe the person in question was an American. If the communication was “enciphered” or “reasonably believed” to have secret meaning, the government could keep the contents regardless of the time limits that otherwise would apply. (Cryptanalysis might break the cipher later.) Some of the minimization procedures, moreover, were classified. If you did not have a government clearance, you could not even read what the standards were supposed to be. And as Litt pointed out, the rules “can and do differ depending on the purpose of the surveillance and the technique used to implement it.” He described the tailoring of rules, despite their secrecy, as “an important way in which we provide appropriate protections for privacy.” It would be easier to have confidence in privacy safeguards that were a lot less opaque. Based on my interviews and the evidence in the files, I believed that NSA personnel took the procedures seriously, as best they could understand them; that they wanted to do the right thing; and that minimization could reduce the harms of overcollection. Granick, I thought, took skepticism too far when she wrote that “malleable secret rules are a lot like no rules at all.”
Still, minimization could easily be oversold. It was sometimes depicted as a cure for whatever ailed electronic surveillance in its contemporary form, especially when it came to bulk collection. The intelligence establishment would say, in as many words: We know we collect too much (“incidentally”), and American communications are swept in, but you need not worry because we minimize the results after the fact—we close our eyes to prevent ourselves from looking at things we are not supposed to see. There was nothing wrong with that argument if you read it narrowly. Overcollection was bound to happen, and after-the-fact safeguards reduced the impact on privacy. But sometimes the argument was offered more broadly, as though minimization disposed of any question about whether to limit collection in the first place. That did not follow. It would not have followed even if minimization were a great deal stricter, a great deal clearer in its meaning, and a great deal more transparent to the public.