Dark Mirror


by Barton Gellman


  The NSA, CIA, and other U.S. agencies piggybacked on the work of those hobbyists. Jared Osborne of the Applied Physics Laboratory at Johns Hopkins University gave a classified presentation at the 2010 Jamboree that summarized “‘jailbreak’ methods used in the iPhone community and how those capabilities can be leveraged” by government surveillance tools. What troubled NSA hackers was that Apple, more aggressively than its competitors, kept raising the bar on iPhone security. The company released frequent software updates and a new model every year. “The Intelligence Community (IC) is highly dependent on a very small number of security flaws, many of which are public,” wrote a trio of researchers from the Sandia National Laboratories in a classified presentation at Jamboree 2011. They added that “Apple is quick to address these flaws with each new release of firmware and hardware.”

  When Apple introduced the iPhone 4, it featured a custom-built chip, the A4, that incorporated strong cryptography into the phone’s main processor. The key to the firmware, which controlled all other functions of the phone, was now enciphered with the government’s own Advanced Encryption Standard. Even the NSA could not break the cipher in a frontal attack.

  In 2011, two teams of Sandia researchers attacked the problem on separate paths. Both looked promising but neither was ready by the time they reported their work at Jamboree. Importantly, use of either one would require physical access to the phone. One team explored a technique called differential power analysis, which took extremely sensitive measurements of the new Apple chip’s electronic emissions. The procedure could be likened to recording the sounds of the silicon circuits as the decryption key traveled their paths. The other technique required disassembly of the phone and attachment of specialized hardware. Researchers were closing in, but they still did not know exactly where the new chip stored its keys.

  Both approaches had a ways to go. If and when ready for real-world deployment, they would be suitable only for closely targeted surveillance operations. Neither method attempted or had any potential to compromise iPhones in quantity or at a distance.

  The Jamboree conference of 2012 marked the end of that self-restraint. This time a more audacious plan was on the table. Clyde Rogers, the project lead, called it STRAWHORSE. His research team, he told the Jamboree audience, had already tested components of the new digital weapon. It worked. What stood out about the breakthrough was that STRAWHORSE looked to be capable of compromising iPhones in quantity—and from half a world away.

  STRAWHORSE did not bother trying to break down the walls that Apple had built to prevent execution of unauthorized code. Instead, it looked for ways to induce an iPhone to drop its guard. If the device could be persuaded to unlock its defenses, the agency could build malware into apps that an iPhone would freely allow to be installed. As a bonus, the STRAWHORSE method worked just as well on Apple’s laptop and desktop computers.

  Ken Thompson, a celebrated computer scientist, had pointed out what he called a “chicken and egg” problem for computer security as long ago as 1984. Programmers write software in the form of source code that they can review and verify. Before a computer can run the software, however, the source code must be compiled into binary instructions that are all but impossible for humans to read. The compiler is itself a piece of software, susceptible to attack, but programmers have little choice but to trust it. If the compiler is somehow compromised, it can alter the software it builds in ways that are very hard to detect. STRAWHORSE took inspiration from that dilemma.

  Apple’s compiler is part of what the company calls the Xcode software development kit. Xcode is like a software factory. Car makers build their vehicles in an assembly plant. Apple developers build iPhone apps in Xcode. STRAWHORSE, the Sandia team reported, was a “whacked,” or maliciously modified, version of the Apple compiler. If the tool worked as advertised, the NSA would have the means to install surveillance implants in any iPhone app created on a STRAWHORSE-infected machine.

  NSA hackers would no longer need to break into one iPhone at a time. STRAWHORSE would position them as if they were factory managers who could reconfigure an assembly line to install a hidden microphone in every car.

  In the executive summary of his talk, Rogers described how his Sandia team had used “our whacked Xcode” to insert a remote-controlled back door into each app it compiled. The implant had already been tested to “send embedded data to a listening post,” to rewrite the iPhone’s security checks, and to sign uninfected iPhone apps with a digital certificate that allowed the malware to spread. STRAWHORSE added one more devious touch to extend its useful life: it modified Apple’s software installer “to include our . . . whacks” in any future installation of Xcode on the app developer’s machine.

  As STRAWHORSE entered research and development, Jon Callas was nearing the end of his tour as Apple’s “security privateer.” It was a self-deprecating title for a pioneering software engineer. Among his other credentials, he was a principal designer of the cryptographic protocols and security architecture of the iPhone and Macintosh operating systems. When we discovered the STRAWHORSE documents, Soltani and I reached out to Callas for his assessment. His first reaction was anger. “I’m pretty irritated,” he told us. “It’s the reputation of the company. You know, it’s my responsibility as a developer to make a secure system. You can’t make security that decides whether or not you fit certain political persuasions. It’s a machine. It either works or not.” Callas gave special attention to a STRAWHORSE feature that removed and replaced the iPhone’s built-in “security daemon,” a background process that keeps continuous watch over the operating system. “If you put your own ‘securityd’ in there, you can do anything,” he said.

  The STRAWHORSE project, unlike previous attacks on the iPhone, was not designed to “target” its malware in the usual understanding of that word. On the one hand, it did not aspire to mass surveillance. The STRAWHORSE designers had no plan or plausible path to place their malware in Apple’s official App Store for download by all the company’s registered developers. On the other hand, STRAWHORSE could not be called a discriminate weapon. Its purpose was to infect in-house developers employed by organizations, agencies, and companies whose software might be used by an NSA surveillance target. In-house developers typically work for large companies. It was inherent in the STRAWHORSE design that it could compromise hundreds or thousands of iPhones in order to reach one or two. The malware did not take aim at software developers because they were intelligence targets themselves. It used them as stepping-stones. In order to reach a single iPhone, STRAWHORSE would infect every app on every iPhone that used any software that the developer wrote. Every car in the factory came with a microphone, but the target drove only one.

  It was a perfectly rational strategy, and it fit the hacker ethos of circumvention and victory at all costs. If you can’t break into one device, try slipping past the perimeter that protects them all. Power up. Beat the game. Deployment of STRAWHORSE in this way would probably comply with U.S. law, no matter how many iPhone owners were needlessly compromised, so long as it took place overseas and focused on a valid foreign intelligence target. Would it be a wise and proportionate use of awesome powers? The people who make those choices in practice, driven to breach barriers and win the game, are not trained to consider such policy questions, nor to be attentive to the privacy of bystanders. That is not their culture or their job.

  The Snowden files are full of operations like STRAWHORSE, some of them more ambitious still. The Intercept reported on another jaw-dropping example. The NSA and GCHQ tunneled jointly into a Netherlands company called Gemalto, which makes the lion’s share of SIM chips used in mobile phones around the world, including those sold by Verizon, AT&T, T-Mobile, and Sprint. Each of those chips is embedded with a unique encryption key, used on the latest LTE mobile networks to safeguard against eavesdropping. The NSA and GCHQ broke into the online accounts of Gemalto engineers and stole tens of millions of those encryption keys. Breaking the encryption itself would have been hard, maybe too hard even for the NSA. Somebody looked at the problem, dreamed up the Gemalto gambit, and thought, “Game over.” Now the two intelligence allies can listen in on conversations held on tens of millions of phones.

  The game is always evolving. It never really ends. A few months after STRAWHORSE was unveiled at Jamboree, the NSA’s elite hackers put out a new call for talent. One of the Rock’s offensive teams, code-named POLITERAIN, sent word that it was “looking for interns who want to break things.”

  “We are tasked to remotely degrade or destroy opponent computers, routers, servers and network enabled devices by attacking the hardware using low-level programming,” the classified announcement said. “We are also always open for ideas.”

  SEVEN

  FIRSTFRUITS

  You can’t tell everybody without telling the bad guys.

  —Edward Snowden to author, December 5, 2013

  Late that first summer of Snowden, as I made my way through the NSA archive, I came across my own name in the documents. I gawped at the screen and bit back an impulse to swear. The jolt of alarm felt naïve. I knew perfectly well that government agencies prefer not to read their secrets on the front page. Sometimes they resent a story enough to investigate. How in blazes did the reporter find that out? In serious cases maybe the Justice Department steps in. I knew all that, but I had not often felt it personally. Until Snowden upended my professional life, I seldom imagined myself a target of special attention. I put time and thought into protecting people who spoke to me in confidence, but the risks felt abstract. Most of the time I lacked conviction that anybody was watching.

  I had skimmed the first page of this document and set it aside in the first tumultuous weeks after receiving the NSA archive. More than two months passed before I returned to the memo and noticed my name on page seven. Why the discovery took so long is hard to explain. I am not above a vanity check for Gellman, Barton, in the index of a book. (Did the author mention my work? Why not?) A few keystrokes would have found me in the Snowden materials, but I did not make the search. The idea seemed melodramatic.

  The document that proved me wrong was more than a decade old, a TOP SECRET//COMINT//ORCON//NOFORN memorandum for the attorney general of the United States about “unauthorized disclosures . . . of high-level concern to U.S. policy makers.” Three of my stories, the memo said, had been referred to the Justice Department for criminal investigation in early 1999. A sensation of exposure crawled up my spine. The FBI had been put on the case. I had no inkling at the time. How much did the bureau find out? The memo did not say. No harm, as far as I knew, had come to my sources, but I realized that in some cases I could not really say. It had been a long time.

  An intriguing file name, “Denial and Deception—Ashcroft.doc,” had attracted my attention to this document. John Ashcroft was attorney general under President George W. Bush when al Qaeda killed 2,996 people in the attacks of September 11, 2001. In preparation for the coming war with Osama bin Laden, the Justice Department launched a task force to deter the leak of classified secrets. The NSA was eager to take part. “We are aware that your Committee is interested in unauthorized disclosures that may have affected intelligence operations,” said a nine-page memo for Ashcroft from NSA director Michael V. Hayden near the end of that year. The memo, an undated draft, nominated forty-nine recent stories with “disclosures that we deem especially egregious, and in apparent violation of federal criminal laws.”

  I was in good journalistic company: other bylines on the list included New York Times correspondents James Risen and Don Van Natta Jr., Washington Post reporters Doug Farah, Steve Mufson, Thomas Lippman, and Kathy Sawyer, and the New Yorker’s Seymour Hersh. The roster could have been a lot longer. It is not easy to write about diplomacy or war without touching on something classified. And the accounting, I noted, stopped a few weeks after 9/11. Many of us on this list did our deepest digging into national security in the decade that followed. I had to assume that my subsequent work may have come under an FBI microscope more than once.

  In the three stories singled out in this memo, I had described an intelligence operation gone wrong in the aftermath of the Gulf War of 1990–91. Back then, Iraq really did have a nuclear weapons development program and an arsenal of biological and chemical munitions. Seven years after the war, United Nations arms inspectors were still finding remnants. The United States accused Baghdad of continuing to hide weapons of mass destruction. Iraqi president Saddam Hussein accused Washington of packing the UN Special Commission, or UNSCOM, with American spies. Both claims turned out to be true, my stories revealed. The U.S. government was using inspections as cover for straight-up espionage. The NSA, with CIA help, placed cleverly concealed microwave antennas inside UN facilities around Iraq, enabling the agency to listen to conversations inside the Baghdad government. UNSCOM, unbeknownst to its own personnel, had become the Trojan horse that Saddam always alleged. Technically ingenious, the U.S. operation produced a diplomatic debacle. As word of the surveillance began to circulate inside the UN Secretariat, weeks before I published my stories, UNSCOM lost the last of its political support as a neutral disarmament authority. By the end of 1999 the Security Council disbanded it.

  According to the NSA “compromise assessment report,” my reporting

  disclosed the basic concept for a highly classified NSA collection system, as well as Iraqi diplomatic communications. This communications system was inactive prior to the disclosure, and no efforts were made to regain access due to the increased risk following the media disclosure. A report of this disclosure was made to the Department of Justice.

  The most intriguing part of the memo was the framing of the harm that the NSA ascribed to my stories. The harm fell into a category called denial and deception. That is a counterintelligence term of art for keeping valuable secrets away from prying eyes. The lessons of history, the NSA wrote, suggest that “press leaks could result in our adversaries implementing Denial and Deception (D&D) practices.” If adversaries know how the United States spies on them, in other words, they can do a better job of covering their tracks. That is a legitimate concern, but there is a flip side. Good journalism sometimes exposes deception by the U.S. government itself—not only in tradecraft but also in matters of basic policy and principle. The Clinton administration publicly defended UNSCOM’s impartiality even as it turned the inspectors into unwitting spies. My coverage of the ensuing meltdown revealed the strategic price of subverting an international mission for tactical gain. Exposure of this betrayal on the American side, from the NSA’s point of view, constituted a crime.

  A whole folder in the Snowden archive was devoted to denial and deception. It was not about foreign spies. The counterintelligence adversaries in these documents were journalists and the people who gave us information. The memos and slide decks laid out the grave dangers posed by news reporting in theory and practice. They also sketched the beginnings of a plan to do something about it. National security was a playing field on which the government was at the peak of its advantage over journalists. U.S. elected and appointed officials held more power here than on any other subject to prevent, discourage, shape, and punish unwelcome disclosures. And when it came to the Snowden story my own government might not be the most serious threat. My reporting took place in a perilous environment, and I never had the luxury of forgetting that.

  Every file in the “denial and deception” folder mentioned a cryptonym. None defined the term fully, but it seemed to be the cover name for an effort to track and trace journalistic leaks.

  FIRSTFRUITS. I had heard that name before. I had thought it a myth.

  * * *


  “By the way,” I told Snowden a few days later in a live chat, feigning nonchalance, “my name is in the file.” I noted the reference to FIRSTFRUITS. He did not know what it meant.

  “Maybe you should FOIA yourself and reference the program,” he typed back. “Just for the delicious pages of black bars.”

  “Doing that. I can make street art of the redactions.”

  We were sharing a nerdy little joke. It was not likely that I would learn more about FIRSTFRUITS by asking nicely in a Freedom of Information Act request.

  Sure enough, when the FOIA results came in, years later, the interesting ones mostly looked like the example I reproduce on this page: an exchange among senior White House, Justice Department, and DNI officials.

  Banter with Snowden, regardless of subject, came as a relief to me. It was our first contact in months. Snowden had stopped speaking to me after I wrote a newspaper profile about him in June. I deserved some of his ire. When my story referred to his alias, Verax, I inadvertently exposed an online handle that he was still using. The next day he vanished from the hidden server where we met for live chat. I had no other way to reach him.

  When I logged in on August 24, 2013, and finally found him in our old meeting place, he had a new anonymous handle that he chose as a rebuke—something akin to “Bart sucks,” but subtler. I let it pass without comment. I was northbound on an Amtrak train from Washington when Snowden popped up on my screen. My encrypted link to the server cut in and out, but I had to keep him talking. There were so many more questions to ask. I could not afford a permanent breach between us.

  “Thanks for coming back,” I wrote. “Blowing the handle was really stupid.”

 
