Permanent Record

by Edward Snowden


  I’d usually try to banter with the guards, and this was where my Rubik’s Cube came in most handy. I was known to the guards and to everybody else at the Tunnel as the Rubik’s Cube guy, because I was always working the cube as I walked down the halls. I got so adept I could even solve it one-handed. It became my totem, my spirit toy, and a distraction device as much for myself as for my coworkers. Most of them thought it was an affectation, or a nerdy conversation starter. And it was, but primarily it relieved my anxiety. It calmed me.

  I bought a few cubes and handed them out. Anyone who took to it, I’d give them pointers. The more that people got used to them, the less they’d ever want a closer look at mine.

  I got along with the guards, or I told myself I did, mostly because I knew where their minds were: elsewhere. I’d done something like their job before, back at CASL. I knew how mind-numbing it was to spend all night standing, feigning vigilance. Your feet hurt. After a while, all the rest of you hurts. And you can get so lonely that you’ll talk to a wall.

  I aimed to be more entertaining than the wall, developing my own patter for each human obstacle. There was the one guard I talked to about insomnia and the difficulties of day-sleeping (remember, I was on nights, so this would’ve been around two in the morning). Another guy, we discussed politics. He called Democrats “Demon Rats,” so I’d read Breitbart News in preparation for the conversation. What they all had in common was a reaction to my cube: it made them smile. Over the course of my employment at the Tunnel, pretty much all the guards said some variation of, “Oh man, I used to play with that when I was a kid,” and then, invariably, “I tried to take the stickers off to solve it.” Me too, buddy. Me too.

  It was only once I got home that I was able to relax, even just slightly. I was still worried about the house being wired—that was another one of those charming methods the FBI used against those it suspected of inadequate loyalty. I’d rebuff Lindsay’s concerns about my insomniac ways until she hated me and I hated myself. She’d go to bed and I’d go to the couch, hiding with my laptop under a blanket like a child because cotton beats cameras. With the threat of immediate arrest out of the way, I could focus on transferring the files to a larger external storage device via my laptop—only somebody who didn’t understand technology very well would think I’d keep them on the laptop forever—and locking them down under multiple layers of encryption algorithms using differing implementations, so that even if one failed the others would keep them safe.

  I’d been careful not to leave any traces at my work, and I took care that my encryption left no traces of the documents at home. Still, I knew the documents could lead back to me once I’d sent them to the journalists and they’d been decrypted. Any investigator looking at which agency employees had accessed, or could access, all these materials would come up with a list with probably only a single name on it: mine. I could provide the journalists with fewer materials, of course, but then they wouldn’t be able to do their work as effectively. Ultimately, I had to contend with the fact that even one briefing slide or PDF left me vulnerable, because all digital files contain metadata, invisible tags that can be used to identify their origins.
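
  To see what those tags look like, here is a minimal sketch in Python for reading the metadata embedded in a PDF. It assumes the third-party PyPDF2 library, and the file name is purely an example:

    # A minimal sketch: reading the metadata embedded in a PDF, using
    # the third-party PyPDF2 library (pip install PyPDF2).
    from PyPDF2 import PdfReader

    reader = PdfReader("briefing.pdf")   # hypothetical file name
    info = reader.metadata               # the PDF's information dictionary

    if info is not None:
        # Tags like these travel invisibly with the file and can point
        # back to the person, or the machine, that produced it.
        print("Author:  ", info.author)
        print("Creator: ", info.creator)
        print("Producer:", info.producer)
        print("Created: ", info.creation_date)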

  I struggled with how to handle this metadata situation. I worried that if I didn’t strip the identifying information from the documents, they might incriminate me the moment the journalists decrypted and opened them. But I also worried that by thoroughly stripping the metadata, I risked altering the files—if they were changed in any way, that could cast doubt on their authenticity. Which was more important: personal safety, or the public good? It might sound like an easy choice, but it took me quite a while to bite the bullet. I owned the risk, and left the metadata intact.

  Part of what convinced me was my fear that even if I had stripped away the metadata I knew about, there could be other digital watermarks I wasn’t aware of and couldn’t scan for. Another part had to do with the difficulty of scrubbing single-user documents. A single-user document is a document marked with a user-specific code, so that if any publication’s editorial staff decided to run it by the government, the government would know its source. Sometimes the unique identifier was hidden in the date and time-stamp coding, sometimes it involved the pattern of microdots in a graphic or logo. But it might also be embedded in something, in some way, I hadn’t even thought of. This phenomenon should have discouraged me, but instead it emboldened me. The technological difficulty forced me, for the first time, to confront the prospect of discarding my lifetime practice of anonymity and coming forward to identify myself as the source. I would embrace my principles by signing my name to them and let myself be condemned.

  Altogether, the documents I selected fit on a single drive, which I left out in the open on my desk at home. I knew that the materials were just as secure now as they had ever been at the office. Actually, they were more secure, thanks to multiple levels and methods of encryption. That’s the incomparable beauty of the cryptological art. A little bit of math can accomplish what all the guns and barbed wire can’t: a little bit of math can keep a secret.

  24

  Encrypt

  Most people who use computers, and that includes members of the Fourth Estate, think there’s a fourth basic permission besides Read, Write, and Execute, called “Delete.”

  Delete is everywhere on the user side of computing. It’s in the hardware as a key on the keyboard, and it’s in the software as an option that can be chosen from a drop-down menu. There’s a certain finality that comes with choosing Delete, and a certain sense of responsibility. Sometimes a box even pops up to double-check: “Are you sure?” If the computer is second-guessing you by requiring confirmation—click “Yes”—it makes sense that Delete would be a consequential, perhaps even the ultimate decision.

  Undoubtedly, that’s true in the world outside of computing, where the powers of deletion have historically been vast. Even so, as countless despots have been reminded, to truly get rid of a document you can’t just destroy every copy of it. You also have to destroy every memory of it, which is to say you have to destroy all the people who remember it, along with every copy of all the other documents that mention it and all the people who remember all those other documents. And then, maybe, just maybe, it’s gone.

  Delete functions appeared from the very start of digital computing. Engineers understood that in a world of effectively unlimited options, some choices would inevitably turn out to be mistakes. Users, regardless of whether or not they were really in control at the technical level, had to feel in control, especially with regard to anything that they themselves had created. If they made a file, they should be able to unmake it at will. The ability to destroy what they created and start over afresh was a primary function that imparted a sense of agency to the user, despite the fact that they might be dependent on proprietary hardware they couldn’t repair and software they couldn’t modify, and bound by the rules of third-party platforms.

  Think about the reasons that you yourself press Delete. On your personal computer, you might want to get rid of some document you screwed up, or some file you downloaded but no longer need—or some file you don’t want anyone to know you ever needed. On your email, you might delete an email from a former lover that you don’t want to remember or don’t want your spouse to find, or an RSVP for that protest you went to. On your phone, you might delete the history of everywhere that phone has traveled, or some of the pictures, videos, and private records it automatically uploaded to the cloud. In every instance, you delete, and the thing—the file—appears to be gone.

  The truth, though, is that deletion has never existed technologically in the way that we conceive of it. Deletion is just a ruse, a figment, a public fiction, a not-quite-noble lie that computing tells you to reassure you and give you comfort. Although the deleted file disappears from view, it is rarely gone. In technical terms, deletion is really just a form of the middle permission, a kind of Write. Normally, when you press Delete for one of your files, its data—which has been stashed deep down on a disk somewhere—is not actually touched. Efficient modern operating systems are not designed to go all the way into the bowels of a disk purely for the purposes of erasure. Instead, only the computer’s map of where each file is stored—a map called the “file table”—is rewritten to say “I’m no longer using this space for anything important.” What this means is that, like a neglected book in a vast library, the supposedly erased file can still be read by anyone who looks hard enough for it. If you erase only the reference to it, the book itself remains.
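
  A toy model in Python makes the point concrete. Nothing here is a real filesystem, of course; a bytearray stands in for the disk and a dictionary stands in for the file table:

    # Illustrative only: "deleting" rewrites the map, never the disk.
    disk = bytearray(64)               # stand-in for raw disk sectors
    file_table = {}                    # map: file name -> (offset, length)

    def write_file(name, data):
        offset = 0                     # naive allocator: always use the start
        disk[offset:offset + len(data)] = data
        file_table[name] = (offset, len(data))

    def delete_file(name):
        del file_table[name]           # note what is NOT touched: `disk`

    write_file("secret.txt", b"meet at midnight")
    delete_file("secret.txt")

    print("secret.txt" in file_table)  # False: the file is "gone"
    print(bytes(disk[:16]))            # b'meet at midnight': the data remains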

  This can be confirmed through experience, actually. Next time you copy a file, ask yourself why it takes so long when compared with the instantaneous act of deletion. The answer is that deletion doesn’t really do anything to a file besides conceal it. Put simply, computers were not designed to correct mistakes, but to hide them—and to hide them only from those parties who don’t know where to look.
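
  The experiment is easy to run yourself. A sketch, using a scratch file of roughly 100 MB (the paths and size are arbitrary):

    import os, shutil, time

    with open("big.bin", "wb") as f:          # create a ~100 MB scratch file
        f.write(os.urandom(100 * 1024 * 1024))

    start = time.perf_counter()
    shutil.copy("big.bin", "copy.bin")        # a copy touches every byte
    print("copy:  ", time.perf_counter() - start, "seconds")

    start = time.perf_counter()
    os.remove("copy.bin")                     # a delete merely drops the reference
    print("delete:", time.perf_counter() - start, "seconds")

    os.remove("big.bin")                      # clean up the scratch file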

  * * *

  THE WANING DAYS of 2012 brought grim news: the few remaining legal protections that prohibited mass surveillance by some of the most prominent members of the Five Eyes network were being dismantled. The governments of both Australia and the UK were proposing legislation for the mandatory recording of telephony and Internet metadata. This was the first time that notionally democratic governments publicly avowed the ambition to establish a sort of surveillance time machine, which would enable them to technologically rewind the events of any person’s life for a period going back months and even years. These attempts definitively marked, to my mind at least, the so-called Western world’s transformation from the creator and defender of the free Internet to its opponent and prospective destroyer. Though these laws were justified as public safety measures, they represented such a breathtaking intrusion into the daily lives of the innocent that they terrified—quite rightly—even the citizens of other countries who didn’t think themselves affected (perhaps because their own governments chose to surveil them in secret).

  These public initiatives of mass surveillance proved, once and for all, that there could be no natural alliance between technology and government. The rift between my two strangely interrelated communities, the American IC and the global online tribe of technologists, became pretty much definitive. In my earliest years in the IC, I could still reconcile the two cultures, transitioning smoothly between my spy work and my relationships with civilian Internet privacy folks—everyone from the anarchist hackers to the more sober academic Tor types who kept me current about computing research and inspired me politically. For years, I was able to fool myself that we were all, ultimately, on the same side of history: we were all trying to protect the Internet, to keep it free for speech and free of fear. But my ability to sustain that delusion was gone. Now the government, my employer, was definitively the adversary. What my technologist peers had always suspected, I’d only recently confirmed, and I couldn’t tell them. Or I couldn’t tell them yet.

  What I could do, however, was help them out, so long as that didn’t imperil my plans. This was how I found myself in Honolulu, a beautiful city in which I’d never had much interest, as one of the hosts and teachers of a CryptoParty. This was a new type of gathering invented by an international grassroots cryptological movement, at which technologists volunteered their time to teach free classes to the public on the topic of digital self-defense—essentially, showing anyone who was interested how to protect the security of their communications. In many ways, this was the same topic I taught for JCITA, so I jumped at the chance to participate.

  Though this might strike you as a dangerous thing for me to have done, given the other activities I was involved with at the time, it should instead just reaffirm how much faith I had in the encryption methods I taught—the very methods that protected that drive full of IC abuses sitting back at my house, with locks that couldn’t be cracked even by the NSA. I knew that no number of documents, and no amount of journalism, would ever be enough to address the threat the world was facing. People needed tools to protect themselves, and they needed to know how to use them. Given that I was also trying to provide these tools to journalists, I was worried that my approach had become too technical. After so many sessions spent lecturing colleagues, this opportunity to simplify my treatment of the subject for a general audience would benefit me as much as anyone. Also, I honestly missed teaching: it had been a year since I’d stood at the front of a class, and the moment I was back in that position I realized I’d been teaching the right things to the wrong people all along.

  When I say class, I don’t mean anything like the IC’s schools or briefing rooms. The CryptoParty was held in a one-room art gallery behind a furniture store and coworking space. While I was setting up the projector so I could share slides showing how easy it was to run a Tor server to help, for example, the citizens of Iran—but also the citizens of Australia, the UK, and the States—my students drifted in, a diverse crew of strangers and a few new friends I’d only met online. All in all, I’d say about twenty people showed up that December night to learn from me and my co-lecturer, Runa Sandvik, a bright young Norwegian woman from the Tor Project. (Runa would go on to work as the senior director of information security for the New York Times, which would sponsor her later CryptoParties.) What united our audience wasn’t an interest in Tor, or even a fear of being spied on, as much as a desire to re-establish a sense of control over the private spaces in their lives. There were some grandparent types who’d wandered in off the street, a local journalist covering the Hawaiian “Occupy!” movement, and a woman who’d been victimized by revenge porn. I’d also invited some of my NSA colleagues, hoping to interest them in the movement and wanting to show that I wasn’t concealing my involvement from the agency. Only one of them showed up, though, and sat in the back, legs spread, arms crossed, smirking throughout.

  I began my presentation by discussing the illusory nature of deletion, whose objective of total erasure could never be accomplished. The crowd understood this instantly. I went on to explain that, at best, the data they wanted no one to see couldn’t be unwritten so much as overwritten: scribbled over, in a sense, with random or pseudo-random data until the original was rendered unreadable. But, I cautioned, even this approach had its drawbacks. There was always a chance that their operating system had silently hidden away a copy of the file they were hoping to delete in some temporary storage nook they weren’t privy to.
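
  What the overwriting approach looks like in practice, sketched in Python under the same caveats (the operating system may hold copies elsewhere, and solid-state drives complicate matters further):

    import os

    def overwrite_and_delete(path, passes=3):
        # Scribble pseudorandom bytes over the file before unlinking it.
        # Illustrative, not a guarantee: temp copies, journaling filesystems,
        # and SSD wear-leveling can all preserve the data elsewhere.
        length = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(length))  # overwrite with random data
                f.flush()
                os.fsync(f.fileno())         # push this pass out to the disk
        os.remove(path)                      # only now drop the reference

    with open("draft.txt", "wb") as f:       # a file to make disappear
        f.write(b"the original, soon to be scribbled over")
    overwrite_and_delete("draft.txt")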

  That’s when I pivoted to encryption.

  Deletion is a dream for the surveillant and a nightmare for the surveilled, but encryption is, or should be, a reality for all. It is the only true protection against surveillance. If the whole of your storage drive is encrypted to begin with, your adversaries can’t rummage through it for deleted files, or for anything else—unless they have the encryption key. If all the emails in your inbox are encrypted, Google can’t read them to profile you—unless they have the encryption key. If all your communications that pass through hostile Australian or British or American or Chinese or Russian networks are encrypted, spies can’t read them—unless they have the encryption key. This is the ordering principle of encryption: all power to the key holder.

  Encryption works, I explained, by way of algorithms. An encryption algorithm sounds intimidating, and certainly looks intimidating when written out, but its concept is quite elementary. It’s a mathematical method of reversibly transforming information—such as your emails, phone calls, photos, videos, and files—in such a way that it becomes incomprehensible to anyone who doesn’t have a copy of the encryption key. You can think of a modern encryption algorithm as a magic wand that you can wave over a document to change each letter into a language that only you and those you trust can read, and the encryption key as the unique magic words that complete the incantation and put the wand to work. It doesn’t matter how many people know that you used the wand, so long as you can keep your personal magic words from the people you don’t trust.
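
  The wand and the magic words, sketched in Python. This assumes the third-party cryptography package (pip install cryptography), and the plaintext is just an example:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the "magic words": keep these secret
    wand = Fernet(key)                 # the wand itself can be public knowledge

    plaintext = b"meet me at the gallery"
    ciphertext = wand.encrypt(plaintext)          # readable -> gibberish
    print(ciphertext)                             # useless without the key

    recovered = Fernet(key).decrypt(ciphertext)   # gibberish -> readable again
    assert recovered == plaintext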

  Encryption algorithms are basically just sets of math problems designed to be incredibly difficult even for computers to solve. The encryption key is the one clue that allows a computer to solve the particular set of math problems being used. You push your readable data, called plaintext, into one end of an encryption algorithm, and incomprehensible gibberish, called ciphertext, comes out the other end. When somebody wants to read the ciphertext, they feed it back into the algorithm along with—crucially—the correct key, and out comes the plaintext again. While different algorithms provide different degrees of protection, the security of an encryption key is often based on its length, which indicates the level of difficulty involved in solving a specific algorithm’s underlying math problem. In algorithms that correlate longer keys with better security, the improvement is exponential. If we presume that an attacker takes one day to crack a 64-bit key—which scrambles your data in one of 2⁶⁴ possible ways (18,446,744,073,709,551,616 unique permutations)—then it would take double that amount of time, two days, to break a 65-bit key, and four days to break a 66-bit key. Breaking a 128-bit key would take 2⁶⁴ times longer than a day, or fifty million billion years. By that time, I might even be pardoned.
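
  The arithmetic is easy to check, under the paragraph’s assumption that a 64-bit key falls in one day and every added bit doubles the work:

    def days_to_crack(bits, baseline_bits=64, baseline_days=1):
        # Each additional key bit doubles the attacker's work.
        return baseline_days * 2 ** (bits - baseline_bits)

    print(days_to_crack(64))            # 1 day
    print(days_to_crack(65))            # 2 days
    print(days_to_crack(66))            # 4 days
    print(days_to_crack(128) / 365.25)  # ~5.0e16 years: fifty million billion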

 
