by Yasha Levine
During his trial, it came out that the FBI and DHS had infiltrated Silk Road almost from the very beginning. A DHS agent had even taken over a senior Silk Road administrator account, which gave federal agents access to the back end of Silk Road’s system, a job for which Ulbricht paid the DHS agent $1,000 a week in Bitcoins.136 Meaning, one of Ulbricht’s top lieutenants was a fed, and he had no idea. But it was Silk Road’s leaked IP address that led DHS agents to trace the site’s connection to a cafe in San Francisco, and ultimately to Ulbricht himself.137
Ulbricht confessed to being Dread Pirate Roberts and to setting up Silk Road. After being found guilty of seven felonies, including money laundering, drug trafficking, running a criminal enterprise, and identity fraud, he went from calling for revolution to begging the judge for leniency. “Even now I understand what a terrible mistake I made. I’ve had my youth, and I know you must take away my middle years, but please leave me my old age. Please leave a small light at the end of the tunnel, an excuse to stay healthy, an excuse to dream of better days ahead, and a chance to redeem myself in the free world before I meet my maker,” he said to the court. The judge had no pity. She hit him with a double life sentence without the possibility of parole. And more years may be added to the clock if he is convicted on any of the alleged murder-for-hire charges.
The fall of Silk Road pricked Tor’s invincibility. Even as Edward Snowden and organizations like the Electronic Frontier Foundation promoted Tor as a powerful tool against the US surveillance state, that very surveillance state was poking Tor full of holes.138
In 2014, the FBI along with the DHS and European law enforcement agencies went on the hunt for Silk Road copycat stores, taking down fifty marketplaces hawking everything from drugs to weapons to credit cards to child abuse pornography in an international sweep codenamed Operation Onymous. In 2015, international law enforcement in conjunction with the FBI arrested more than five hundred people linked with Playpen, a notorious child pornography network that ran on the Tor cloud. Seventy-six people were prosecuted in the United States, and nearly three hundred child victims from around the world were rescued from their abusers.139 These raids were targeted and extremely effective. It seemed that cops knew exactly where to hit and how to do it.
What was going on? How did law enforcement penetrate what was supposed to be ironclad anonymity strong enough to withstand an onslaught by the NSA?
Confirmation was hard to come by, but Tor’s Roger Dingledine was convinced that at least some of these stings were using an exploit developed by a group at Carnegie Mellon University in Pennsylvania. Working under a Pentagon contract, researchers had figured out a cheap and easy way to crack Tor’s super-secure network with just $3,000 worth of computer equipment.140 Dingledine accused the researchers of selling this method to the FBI.
“The Tor Project has learned more about last year’s attack by Carnegie Mellon researchers on the hidden service subsystem. Apparently these researchers were paid by the FBI to attack hidden services users in a broad sweep, and then sift through their data to find people whom they could accuse of crimes,” he lashed out in a blog post in November 2015, saying that he had been told the FBI paid at least $1 million for these services.141
It was strange to see Dingledine getting angry about researchers taking money from law enforcement when his own salary was paid almost entirely by military and intelligence-linked contracts. But Dingledine did something that was even stranger. He accused Carnegie Mellon researchers of violating academic standards for ethical research by working with law enforcement. He then announced that the Tor Project would publish guidelines for people who might want to hack or crack Tor for “academic” and “independent research” purposes in the future but do so in an ethical manner by first obtaining consent of the people who were being hacked.
“Research on humans’ data is human research. Over the last century, we have made enormous strides in what research we consider ethical to perform on people in other domains,” read a draft of this “Ethical Tor Research” guide. “We should make sure that privacy research is at least as ethical as research in other fields.” The requirements set forth in this document include sections like: “Only collect data that is acceptable to publish” and “Only collect as much data as is needed: practice data minimization.”142
Although demands like this make sense in a research context, they were baffling when applied to Tor. After all, Tor and its backers, including Edward Snowden, presented the project as a real-world anonymity tool that could resist the most powerful attackers. If it was so frail that it needed academic researchers to abide by an ethical honor code to avoid deanonymizing users without their consent, how could it hold up to the FBI or NSA or the scores of foreign intelligence agencies from Russia to China to Australia that might want to punch through its anonymity systems?
In 2015, when I first read these statements from the Tor Project, I was shocked. This was nothing less than a veiled admission that Tor was useless at guaranteeing anonymity and that it required attackers to behave “ethically” in order for it to remain secure. It must have come as an even greater shock to the cypherpunk believers like Ross Ulbricht, who trusted Tor to run his highly illegal Internet business and who is now in jail for the rest of his life.
Tor’s spat with the researchers at Carnegie Mellon University revealed another confusing dynamic. Whereas one part of the federal government—which included the Pentagon, State Department, and the Broadcasting Board of Governors—funded the ongoing development of the Tor Project, another wing of this same federal government—which included the Pentagon, the FBI, and possibly other agencies—was working just as hard to crack it.
What was going on? Why was the government working at cross-purposes? Did one part simply not know what the other was doing?
Strangely enough, Edward Snowden’s NSA documents provided the beginnings of an answer. They showed that multiple NSA programs could punch through Tor’s defenses and possibly even uncloak the network’s traffic on a “wide scale.” They also showed that the spy agency saw Tor as a useful tool that concentrated potential “targets” in one convenient location.143 In a word, the NSA saw Tor as a honeypot.
In October 2013, the Washington Post reported on several of these programs, revealing that the NSA had been working to crack Tor since at least 2006, the same year that Dingledine signed his first contract with the BBG.144 One of these programs, codenamed EGOTISTICALGIRAFFE, was actively used to trace the identity of Al-Qaeda operatives. “One document provided by Snowden included an internal exchange among NSA hackers in which one of them said the agency’s Remote Operations Center was capable of targeting anyone who visited an al-Qaeda Web site using Tor.”145 Another set of documents, made public by the Guardian that same month, showed that the agency viewed Tor in a positive light. “Critical mass of targets use Tor. Scaring them away might be counterproductive. We will never get 100% but we don’t need to provide true IPs for every target every time they use Tor,” explained a 2012 NSA presentation.146 Its point was clear: people with something to hide—whether terrorists, foreign spies, or drug dealers—believed in Tor’s promise of anonymity and used the network en masse. By doing so, they proceeded with a false sense of safety, doing things on the network they would never do out in the open, all while helping to mark themselves for further surveillance.147
This wasn’t surprising. The bigger lesson of Snowden’s NSA cache was that almost nothing happened on the Internet without passing through some kind of US government bug. Naturally, popular tools used by the public that promised to obfuscate and hide people’s communications were targets regardless of who funded them.
As for the other crypto tools financed by the US government? They suffered similar security and honeypot pitfalls. Take Signal, the encrypted app Edward Snowden said he used every day. Marketed as a secure communication tool for political activists, the app had strange features built in from the very beginning. It required that users link their active mobile phone number and upload their entire address book into Signal’s servers—both questionable features of a tool designed to protect political activists from law enforcement in authoritarian countries. In most cases, a person’s phone number was effectively that person’s identity, tied to a bank account and home address. Meanwhile, a person’s address book contained that user’s friends, colleagues, fellow political activists, and organizers, virtually the person’s entire social network.
Then there was the fact that Signal ran on Amazon’s servers, which meant that all its data were available to a partner in the NSA’s PRISM surveillance program. Equally problematic, Signal needed Apple and Google to install and run the app on people’s mobile phones. Both companies were, and as far as we know still are, partners in PRISM as well. “Google usually has root access to the phone, there’s the issue of integrity,” writes Sander Venema, a respected developer and secure-technology trainer, in a blog post explaining why he no longer recommends people use Signal for encrypted chat. “Google is still cooperating with the NSA and other intelligence agencies. PRISM is also still a thing. I’m pretty sure that Google could serve a specially modified update or version of Signal to specific targets for surveillance, and they would be none the wiser that they installed malware on their phones.”148
Equally weird was the way the app was designed to make it easy for anyone monitoring Internet traffic to flag people using Signal to communicate. All that the FBI or, say, Egyptian or Russian security services had to do was watch for the mobile phones that pinged a particular Amazon server used by Signal, and it was trivial to isolate activists from the general smartphone population. So, although the app encrypted the content of people’s messages, it also marked them with a flashing red sign: “Follow Me. I Have Something To Hide.” (Indeed, activists protesting at the Democratic National Convention in Philadelphia in 2016 told me that they were bewildered by the fact that police seemed to know and anticipate their every move despite their having used Signal to organize.)149
Debate about Signal’s technical design was moot anyway. Snowden’s leaks showed that the NSA had developed tools that could grab everything people did on their smartphones, which presumably included texts sent and received by Signal. In early March 2017, WikiLeaks published a cache of CIA hacking tools that confirmed the inevitable. The agency worked with the NSA as well as other “cyber arms contractors” to develop hacking tools that targeted smartphones, allowing it to bypass the encryption of Signal and any other encrypted chat apps, including Facebook’s WhatsApp.150 “The CIA’s Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user’s geolocation, audio and text communications as well as covertly activate the phone’s camera and microphone,” explained a WikiLeaks press release. “These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the ‘smart’ phones that they run on and collecting audio and message traffic before encryption is applied.”
Disclosure of these hacking tools showed that, in the end, Signal’s encryption didn’t really matter, not when the CIA and NSA owned the underlying operating system and could grab whatever they wanted before encryption or obfuscation algorithms were applied. This flaw went beyond Signal and applied to every type of encryption technology on every type of consumer computer system. Sure, encryption apps might work against low-level opponents when used by a trained army intelligence analyst like Pvt. Chelsea Manning, who had used Tor while stationed in Iraq to monitor forums used by Sunni insurgents without giving away her identity.151 They also might work for someone with a high degree of technical savvy—say, a wily hacker like Julian Assange or a spy like Edward Snowden—who can use Signal and Tor combined with other techniques to effectively cover their tracks from the NSA. But, for the average user, these tools provided a false sense of security and offered the opposite of privacy.
The old cypherpunk dream, the idea that regular people could use grassroots encryption tools to carve out cyber islands free of government control, was proving to be just that, a dream.
Crypto War, Who Is It Good For?
Convoluted as the story may be, US government support for Internet Freedom and its underwriting of crypto culture makes perfect sense. The Internet came out of a 1960s military project to develop an information weapon. It was born out of a need to quickly communicate, process data, and control a chaotic world. Today, the network is more than a weapon; it is also a field of battle, a place where vital military and intelligence operations take place. Geopolitical struggle has moved online, and Internet Freedom is a weapon in that fight.
If you take a big-picture view, Silicon Valley’s support for Internet Freedom makes sense as well. Companies like Google and Facebook first supported it as a part of a geopolitical business strategy, a way of subtly pressuring countries that closed their networks and markets to Western technology companies. But after Edward Snowden’s revelations exposed the industry’s rampant private surveillance practices to the public, Internet Freedom offered another powerful benefit.
For years, public opinion has been stacked firmly against Silicon Valley’s underlying business model. In poll after poll, a majority of Americans have voiced their opposition to corporate surveillance and have signaled support for increased regulation of the industry.152 This has always been a deal breaker for Silicon Valley. For many Internet companies, including Google and Facebook, surveillance is the business model. It is the base on which their corporate and economic power rests. Disentangle surveillance and profit, and these companies would collapse. Limit data collection, and the companies would see investors flee and their stock prices plummet.
Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable alternative. Tools like Signal and Tor provide a false solution to the privacy problem, focusing people’s attention on government surveillance and distracting them from the private spying carried out by the Internet companies they use every day. All the while, crypto tools give people a sense that they’re doing something to protect themselves, a feeling of personal empowerment and control. And all those crypto radicals? Well, they just enhance the illusion, heightening the impression of risk and danger. With Signal or Tor installed, using an iPhone or Android suddenly becomes edgy and radical. So instead of pushing for political and democratic solutions to surveillance, we outsource our privacy politics to crypto apps—software made by the very same powerful entities that these apps are supposed to protect us from.
In that sense, Edward Snowden is like the branded face of an Internet consumerism-as-rebellion lifestyle campaign, like the old Apple ad about shattering Big Brother or the Nike spot set to the Beatles’ “Revolution.” While Internet billionaires like Larry Page, Sergey Brin, and Mark Zuckerberg slam government surveillance, talk up freedom, and embrace Snowden and crypto privacy culture, their companies still cut deals with the Pentagon, work with the NSA and CIA, and continue to track and profile people for profit. It is the same old split-screen marketing trick: the public branding and the behind-the-scenes reality.
Internet Freedom is a win-win for everyone involved—everyone except regular users, who trust their privacy to double-dealing military contractors, while powerful Surveillance Valley corporations continue to build out the old military cybernetic dream of a world where everyone is watched, predicted, and controlled.
Epilogue
Mauthausen, Austria
It is a crisp and sunny morning in late December 2015 when I take a right turn off a small country highway and drive into Mauthausen, a tiny medieval town in northern Austria about thirty-five miles from the border with the Czech Republic. I pass through a cluster of low-slung apartment buildings and continue on, driving through spotless green pastures and pretty little farmsteads.
I park on a hill overlooking the town. Below is the wide Danube River. Clusters of rural homes poke out from the cusp of two soft green hills, smoke lazily wafting out of their chimneys. A small group of cows is out to pasture, and I can hear the periodic braying of a flock of sheep. Out in the distance, the hills recede in layers of hazy green upon green, like the scales of a giant sleeping dragon. The whole scene is framed by the jagged white peaks of the Austrian Alps.
Mauthausen is an idyllic place. Calm, almost magical. Yet I drove here not to enjoy the view but to get close to something I came to fully understand only while writing this book.
Today, computer technology frequently operates unseen, hidden in gadgets, wires, chips, wireless signals, operating systems, and software. We are surrounded by computers and networks, yet we barely notice them. If we think about them at all, we tend to associate them with progress. We rarely stop to think about the dark side of information technology—all the ways it can be used and abused to control societies, to inflict pain and suffering. Here, in this quiet country setting, stands a forgotten monument to that power: the Mauthausen Concentration Camp.
Built on a mound above the town, it is amazingly well preserved: thick stone walls, squat guard towers, a pair of ominous smokestacks connected to the camp’s gas chamber and crematorium. A few jagged metal bars stick out of the wall above the camp’s enormous gates, remnants of a giant iron Nazi eagle that was torn down immediately after liberation. It is quiet now, just a few solemn visitors. But in the 1930s, Mauthausen had been a vital economic engine of Hitler’s genocidal plan to remake Europe and the Soviet Union into his own backyard utopia. It started out as a granite quarry but quickly grew into the largest slave labor complex in Nazi Germany, with fifty subcamps that spanned most of modern-day Austria. Here, hundreds of thousands of prisoners—mostly European Jews but also Roma, Spaniards, Russians, Serbs, Slovenes, Germans, Bulgarians, even Cubans—were worked to death. They refined oil, built fighter aircraft, assembled cannons, developed rocket technology, and were leased out to private German businesses. Volkswagen, Siemens, Daimler-Benz, BMW, Bosch—all benefited from the camp’s slave labor pool. Mauthausen, the administrative nerve center, was centrally directed from Berlin using the latest in early computer technology: IBM punch card tabulators.