
Data and Goliath


by Bruce Schneier


  We know of a few examples. In Chapter 6, I talked about Microsoft weakening Skype for the NSA. The NSA also pressured Microsoft to put a backdoor in its BitLocker hard drive encryption software, although the company seems to have resisted. Presumably there have been other efforts involving other products; I’ve heard about several unsuccessful attempts privately.

  Deliberately created vulnerabilities are very risky, because there is no way to implement backdoor access to any system that will ensure that only the government can take advantage of it. Government-mandated access forces companies to make their products and services less secure for everyone.

  For example, between June 2004 and March 2005 someone wiretapped more than 100 cell phones belonging to members of the Greek government—the prime minister and the ministers of defense, foreign affairs, and justice—and other prominent Greek citizens. Swedish telecommunications provider Ericsson built this wiretapping capability into Vodafone products, but enabled it only for governments that requested it. Greece wasn’t one of those governments, but some still-unknown party—a rival political group? organized crime?—figured out how to surreptitiously turn the feature on.

  This wasn’t an isolated incident. Something similar occurred in Italy in 2006. In 2010, Chinese hackers exploited an intercept system Google had put into Gmail to comply with US government surveillance requests. And in 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system; we don’t know whether they were inadvertent or deliberately inserted.

  The NSA regularly exploits backdoors built into systems by other countries for other purposes. For example, it used the wiretap capabilities built into the Bermuda phone system to secretly intercept all the country’s phone calls. Why does it believe the same thing won’t be done to us?

  Undermining encryption algorithms and standards. Another objective of the SIGINT Enabling Project is to “influence policies, standards and specifications for commercial public key technologies.” Again, details are few, but I assume these efforts are more focused on proprietary standards like cell phone security than on public standards like encryption algorithms. For example, the NSA influenced the adoption of an encryption algorithm for GSM phones that it can easily break. The one public example we know of is the NSA’s insertion of a backdoored random number generator into a common Internet standard, followed by efforts to get that generator used more widely. The intent was to subvert the encryption that people use to protect their Internet communications and web browsing, but it wasn’t very successful.

  Hacking the Internet. In Chapter 5, I talked about the NSA’s TAO group and its hacking mission. Aside from directly breaking into computers and networking equipment, the NSA masquerades as Facebook and LinkedIn (and presumably other websites as well) to infiltrate target computers and redirect Internet traffic to its own dummy sites for eavesdropping purposes. The UK’s GCHQ can find your private photos on Facebook, artificially increase traffic to a website, disrupt video from a website, delete computer accounts, hack online polls, and much more.

  In addition to the extreme distrust that all these tactics engender amongst Internet users, they require the NSA to ensure that surveillance takes precedence over security. Instead of improving the security of the Internet for everyone’s benefit, the NSA is ensuring that the Internet remains insecure for the agency’s own convenience.

  This hurts us all, because the NSA isn’t the only actor out there that thrives on insecurity. Other governments and criminals benefit from the subversion of security. And a surprising number of the secret surveillance technologies revealed by Snowden aren’t exclusive to the NSA, or even to other national intelligence organizations. They’re just better-funded hacker tools. Academics have discussed ways to recreate much of the NSA’s collection and analysis tools with open-source and commercial systems.

  For example, when I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA’s program for what is called packet injection—basically, a technology that allows the agency to hack into computers. Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well. All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the Internet’s defenses, the NSA has worked to ensure that anyone can use packet injection to hack into computers.

  Even when technologies are developed inside the NSA, they don’t remain exclusive for long. Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools. Techniques first developed for the military cyberweapon Stuxnet have ended up in criminal malware. The same password-cracking software that Elcomsoft sells to governments was used by hackers to hack celebrity photos from iCloud accounts. And once-secret techniques to monitor people’s cell phones are now in common use.

  The US government’s desire for unfettered surveillance has already affected how the Internet works. When surveillance becomes multinational and cooperative, those needs will increasingly take precedence over others. And the architecture choices network engineers make to comply with government surveillance demands are likely to be around for decades, simply because it’s easier to keep doing the same things than to change. By putting surveillance ahead of security, the NSA ensures the insecurity of us all.

  COLLATERAL DAMAGE FROM CYBERATTACKS

  As nations continue to hack each other, the Internet-using public is increasingly part of the collateral damage. Most of the time we don’t know the details, but sometimes enough information bubbles to the surface that we do.

  Three examples: Stuxnet’s target was Iran, but the malware accidentally infected over 50,000 computers in India, Indonesia, Pakistan, and elsewhere, including computers owned by Chevron, and industrial plants in Germany; it may have been responsible for the failure of an Indian satellite in 2010. Snowden claims that the NSA accidentally caused an Internet blackout in Syria in 2012. Similarly, China’s Great Firewall uses a technique called DNS injection to block access to certain websites; this technique regularly disrupts communications having nothing to do with China or the censored links.

  The more nations attack each other through the global Internet—whether to gain intelligence or to inflict damage—the more civilian networks will become collateral damage.

  HARM TO NATIONAL INTERESTS

  In Chapter 9, I discussed how the NSA’s activities harm US economic interests. They also harm the country’s political interests.

  Political scientist Ian Bremmer has argued that public revelations of the NSA’s activities “have badly undermined US credibility with many of its allies.” US interests have been significantly harmed on the world stage, as one country after another has learned about our snooping on its citizens or leaders: friendly countries in Europe, Latin America, and Asia. Relations between the US and Germany have been particularly strained since it became public that the NSA was tapping the cell phone of German chancellor Angela Merkel. And Brazil’s president Dilma Rousseff turned down an invitation to a US state dinner—the first time any world leader did that—because she and the rest of her country were incensed at NSA surveillance.

  Much more is happening behind the scenes, over more private diplomatic channels. There’s no soft-pedaling it; the US is undermining its global stature and leadership with its aggressive surveillance program.

  12

  Principles

  The harms from mass surveillance are many, and the costs to individuals and society as a whole disproportionately outweigh the benefits. We can and must do something to rein it in. Before offering specific legal, technical, and social proposals, I want to start this section with some general principles. These are universal truths about surveillance and how we should deal with it that apply to both governments and corporations.

  Articulating principles is the easy part. It’s far more difficult to apply them in specific circumstances. “Life, liberty, and the pursuit of happiness” are principles we all agree on, but we only need to look at Washington, DC, to see how difficult it can be to apply them. I’ve been on many panels and debates where people on all sides of this issue agree on general principles about data collection, surveillance, oversight, security, and privacy, even though they disagree vehemently on how to apply those principles to the world at hand.

  SECURITY AND PRIVACY

  Often the debate is characterized as “security versus privacy.” This simplistic view requires us to make some kind of fundamental trade-off between the two: in order to become secure, we must sacrifice our privacy and subject ourselves to surveillance. And if we want some level of privacy, we must accept that we sacrifice some security in order to get it.

  It’s a false trade-off. First, some security measures require people to give up privacy, but others don’t impinge on privacy at all: door locks, tall fences, guards, reinforced cockpit doors on airplanes. And second, privacy and security are fundamentally aligned. When we have no privacy, we feel exposed and vulnerable; we feel less secure. Similarly, if our personal spaces and records are not secure, we have less privacy. The Fourth Amendment of the US Constitution talks about “the right of the people to be secure in their persons, houses, papers, and effects” (italics mine). Its authors recognized that privacy is fundamental to the security of the individual.

  Framing the conversation as trading security for privacy leads to lopsided evaluations. Often, the trade-off is presented in terms of monetary cost: “How much would you pay for privacy?” or “How much would you pay for security?” But that’s a false trade-off, too. The costs of insecurity are real and visceral, even in the abstract; the costs of privacy loss are nebulous in the abstract, and only become tangible when someone is faced with their aftereffects. This is why we undervalue privacy when we have it, and only recognize its true value when we don’t. This is also why we often hear that no one wants to pay for privacy and that therefore security trumps privacy absolutely.

  When the security versus privacy trade-off is framed as a life-and-death choice, all rational debate ends. How can anyone talk about privacy when lives are at stake? People who are scared will more readily sacrifice privacy in order to feel safer. This explains why the US government was given such free rein to conduct mass surveillance after 9/11. The government basically said that we all had to give up our privacy in exchange for security; most of us didn’t know better, and thus accepted the Faustian bargain.

  The problem is that the entire weight of insecurity is compared with the incremental invasion of privacy. US courts do this a lot, saying things on the order of, “We agree that there is a loss of privacy at stake in this or that government program, but the risk of a nuclear bomb going off in New York is just too great.” That’s a sloppy characterization of the trade-off. It’s not the case that a nuclear detonation is impossible if we surveil, or inevitable if we don’t. The probability is already very small, and the theoretical privacy-invading security program being considered could only reduce that number very slightly. That’s the trade-off that needs to be considered.
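
  To see why this framing matters, here is a small illustration in Python. Every number in it is a hypothetical assumption of mine, not a figure from this book or from any real risk assessment; the only point is that the privacy cost should be weighed against the marginal reduction in expected harm, not against the full catastrophe.

    # Illustrative only: every number here is a hypothetical assumption,
    # not an estimate from the book or from any real analysis.
    baseline_probability = 1e-6  # assumed annual chance of the catastrophic attack
    harm = 1e12                  # assumed cost of that attack, in dollars
    risk_reduction = 0.05        # assumed fraction of the risk the program removes

    # Sloppy framing: weigh the privacy loss against the entire expected harm,
    # as if the program were all that stood between us and the catastrophe.
    sloppy_benefit = baseline_probability * harm

    # Better framing: weigh the privacy loss against the marginal reduction
    # in expected harm that the program actually delivers.
    marginal_benefit = baseline_probability * risk_reduction * harm

    print(f"Benefit implied by the sloppy framing:  ${sloppy_benefit:,.0f} per year")
    print(f"Marginal benefit of the actual program: ${marginal_benefit:,.0f} per year")

  With these made-up numbers, the sloppy framing makes the program look twenty times more valuable than it actually is.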

  More generally, our goal shouldn’t be to find an acceptable trade-off between security and privacy, because we can and should maintain both together.

  SECURITY OVER SURVEILLANCE

  Security and surveillance are conflicting design requirements. A system built for security is harder to surveil. Conversely, a system built for easy surveillance is harder to secure. A built-in surveillance capability in a system is insecure, because we don’t know how to build a system that only permits surveillance by the right sort of people. We saw this in Chapter 11.

  We need to recognize that, to society as a whole, security is more critical than surveillance. That is, we need to choose a secure information infrastructure that inhibits surveillance instead of an insecure infrastructure that allows for easy surveillance.

  The reasoning applies generally. Our infrastructure can be used for both good and bad purposes. Bank robbers drive on highways, use electricity, shop at hardware stores, and eat at all-night restaurants, just like honest people. Innocents and criminals alike use cell phones, e-mail, and Dropbox. It rains on the just and the unjust.

  Despite this, society continues to function, because the honest, positive, and beneficial uses of our infrastructure far outweigh the dishonest, negative, and harmful ones. The percentage of the drivers on our highways who are bank robbers is negligible, as is the percentage of e-mail users who are criminals. It makes far more sense to design all of these systems for the majority of us who need security from criminals, telemarketers, and sometimes our own governments.

  By prioritizing security, we would be protecting the world’s information flows—including our own—from eavesdropping as well as more damaging attacks like theft and destruction. We would protect our information flows from governments, non-state actors, and criminals. We would be making the world safer overall.

  Tor is an excellent example. It’s free open-source software that you can use to browse anonymously on the Internet. First developed with funding from the US Naval Research Laboratory and then from the State Department, it’s used by dissidents all over the world to evade surveillance and censorship. Of course, it’s also used by criminals for the same purpose. Tor’s developers are constantly updating the program to evade the Chinese government’s attempts to ban it. We know that the NSA is continually trying to break it, and—at least as of a 2007 NSA document disclosed by Snowden—has been unsuccessful. We know that the FBI was hacking into computers in 2013 and 2014 because it couldn’t break Tor. At the same time, we believe that individuals who work at both the NSA and the GCHQ are anonymously helping keep Tor secure. But this is the quandary: Tor is either strong enough to protect the anonymity of both those we like and those we don’t like, or it’s not strong enough to protect the anonymity of either.

  Of course, there will never be a future in which no one spies. That’s naïve. Governments have always spied, since the beginning of history; there are even a few spy stories in the Old Testament. The question is which sort of world we want to move towards. Do we want to reduce power imbalances by limiting government’s abilities to monitor, censor, and control? Or do we allow governments to have increasingly more power over us?

  “Security over surveillance” isn’t an absolute rule, of course. There are times when it’s necessary to design a system for protection from the minority of us who are dishonest. Airplane security is an example of that. The number of terrorists flying on planes is negligible compared with the number of nonterrorists, yet we design entire airports around those few, because a failure of security on an airplane is catastrophically more deadly than a terrorist bomb just about anywhere else. We don’t (yet) design our entire society around terrorism prevention, though.

  There are also times when we need to design appropriate surveillance into systems. We want shipping services to be able to track packages in real time. We want first responders to know where an emergency cell phone call is coming from. We don’t use the word “surveillance” in these cases, of course; we use some less emotionally laden term like “package tracking.”

  The general principle here is that systems should be designed with the minimum surveillance necessary for them to function, and where surveillance is required they should gather the minimum necessary amount of information and retain it for the shortest time possible.
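
  As a minimal sketch of what that principle can look like in practice, consider the hypothetical package-tracking record below, written in Python. The field names and the thirty-day retention window are assumptions for illustration, not details of any real shipping system: the record keeps only what the service needs in order to function, and it is purged once the retention window passes.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical example: a tracking record that collects only the minimum
    # data needed to do its job, and is discarded once that job is done.

    RETENTION = timedelta(days=30)  # assumed retention window; purely illustrative

    @dataclass
    class TrackingEvent:
        package_id: str      # needed to match the event to a shipment
        status: str          # e.g. "in transit", "delivered"
        timestamp: datetime  # when the status changed
        # Deliberately omitted: recipient name, precise location history,
        # and device identifiers; none of them are necessary for tracking.

    def purge_expired(events: list[TrackingEvent]) -> list[TrackingEvent]:
        """Keep only the events still inside the retention window."""
        cutoff = datetime.now(timezone.utc) - RETENTION
        return [e for e in events if e.timestamp >= cutoff]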

  TRANSPARENCY

  Transparency is vital to any open and free society. Open government laws and freedom of information laws let citizens know what the government is doing, and enable them to carry out their democratic duty to oversee its activities. Corporate disclosure laws perform similar functions in the private sphere. Of course, both corporations and governments have some need for secrecy, but the more they can be open, the more we can knowledgeably decide whether to trust them. Right now in the US, we have strong open government and freedom of information laws, but far too much information is exempted from them.

  For personal data, transparency is pretty straightforward: people should be entitled to know what data is being collected about them, what data is being archived about them, and how data about them is being used—and by whom. And in a world that combines an international Internet with country-specific laws about surveillance and control, we need to know where data about us is being stored. We are much more likely to be comfortable with surveillance at any level if we know these things. Privacy policies should provide this information, instead of being so long and deliberately obfuscating that they shed little light.

  We also need transparency in the algorithms that judge us on the basis of our data, either by publishing the code or by explaining how they work. Right now, we cannot judge the fairness of TSA algorithms that select some of us for “special screening.” Nor can we judge the IRS’s algorithms that select some of us for auditing. It’s the same with search engine algorithms that determine what Internet pages we see, predictive policing algorithms that decide whom to bring in for questioning and what neighborhoods to patrol, or credit score algorithms that determine who gets a mortgage. Some of this secrecy is necessary so people don’t figure out how to game the system, but much of it is not. The EU Data Protection Directive already requires disclosure of much of this information.

 
