Data and Goliath

by Bruce Schneier


  This point was made in the 9/11 Commission Report. That report described a failure to “connect the dots,” which proponents of mass surveillance claim requires collection of more data. But what the report actually said was that the intelligence community had all the information about the plot without mass surveillance, and that the failures were the result of inadequate analysis.

  Mass surveillance didn’t catch underwear bomber Umar Farouk Abdulmutallab in 2009, even though his father had repeatedly warned the US government that he was dangerous. And the liquid bombers (they’re the reason governments prohibit passengers from bringing large bottles of liquids, creams, and gels on airplanes in their carry-on luggage) were captured in 2006 in their London apartment not due to mass surveillance but through traditional investigative police work. Whenever we learn about an NSA success, it invariably comes from targeted surveillance rather than from mass surveillance. One analysis showed that the FBI identifies potential terrorist plots from reports of suspicious activity, reports of plots, and investigations of other, unrelated, crimes.

  This is a critical point. Ubiquitous surveillance and data mining are not suitable tools for finding dedicated criminals or terrorists. We taxpayers are wasting billions on mass-surveillance programs, and not getting the security we’ve been promised. More importantly, the money we’re wasting on these ineffective surveillance programs is not being spent on investigation, intelligence, and emergency response: tactics that have been proven to work.

  Mass surveillance and data mining are much more suitable for tasks of population discrimination: finding people with certain political beliefs, people who are friends with certain individuals, people who are members of secret societies, and people who attend certain meetings and rallies. Those are all individuals of interest to a government intent on social control, like China’s. The reason data mining works to find them is that, like credit card fraudsters, political dissidents are likely to share a well-defined profile. Additionally, under authoritarian rule the inevitable false alarms are less of a problem; charging innocent people with sedition instills fear in the populace.

  More than just being ineffective, the NSA’s surveillance efforts have actually made us less secure. In order to understand how, I need to explain a bit about Internet security, encryption, and computer vulnerabilities. The following three sections are short but important.

  INTERNET ATTACK VERSUS DEFENSE

  In any security situation, there’s a basic arms race between attack and defense. One side might have an advantage for a while, and then technology changes and gives the other side an advantage. And then it changes back.

  Think about the history of military technology and tactics. In the early 1800s, military defenders had an advantage; charging a line was much more dangerous than defending it. Napoleon first figured out how to attack effectively using the weaponry of the time. By World War I, firearms—particularly the machine gun—had become so powerful that the defender again had an advantage; trench warfare was devastating to the attacker. The tide turned again in World War II with the invention of blitzkrieg warfare, and the attacker again gained the advantage.

  Right now, both on the Internet and with computers in general, the attacker has the advantage. This is true for several reasons.

  • It’s easier to break things than to fix them.

  • Complexity is the worst enemy of security, and our systems are getting more complex all the time.

  • The nature of computerized systems makes it easier for the attacker to find one exploitable vulnerability in a system than for the defender to find and fix all vulnerabilities in the system.

  • An attacker can choose a particular attack and concentrate his efforts, whereas the defender has to defend against every possibility.

  • Software security is generally poor; we simply don’t know how to write secure software and create secure computer systems. Yes, we keep improving, but we’re still not doing very well.

  • Computer security is very technical, and it’s easy for average users to get it wrong and subvert whatever security they might have.

  This isn’t to say that Internet security is useless; far from it. Attack might be easier, but defense is still possible. Good security makes many kinds of attack harder, more costly, and more risky. Against an attacker who isn’t sufficiently skilled, good security may protect you completely.

  In the security field, we think in terms of risk management. You identify what your risks are and what reasonable precautions you should take. So, as someone with a computer at home, you should run a good antivirus program, turn on automatic updates so your software stays up-to-date, avoid dodgy websites and e-mail attachments from strangers, and keep good backups. These, plus several more essential steps that are fairly easy to implement, will leave you secure enough against common criminals and hackers. On the other hand, if you’re a political dissident in China, Syria, or Ukraine trying to avoid arrest or assassination, your precautions must be more comprehensive. Ditto if you’re a criminal trying to evade the police, a businessman trying to prevent corporate espionage, or a government embassy trying to thwart military espionage. If you’re particularly concerned about corporations collecting your data, you’ll need a different set of security measures.

  For many organizations, security comes down to basic economics. If the cost of security is less than the likely cost of losses due to lack of security, security wins. If the cost of security is more than the likely cost of losses, accept the losses. For individuals, a lot of psychology mixes in with the economics. It’s hard to put a dollar value on a privacy violation, or on being put on a government watch list. But the general idea is the same: cost versus benefit.
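
  As a minimal sketch of that comparison (with made-up numbers and a hypothetical worth_buying helper, not anything from the book), the rule reduces to weighing the annual cost of a defense against the expected annual loss it prevents:

      # Hypothetical cost-benefit check: adopt a security measure only if it
      # costs less than the expected annual loss it prevents.
      def worth_buying(annual_cost: float,
                       incident_probability: float,
                       loss_per_incident: float) -> bool:
          expected_annual_loss = incident_probability * loss_per_incident
          return annual_cost < expected_annual_loss

      # A $10,000/year control against a 5% yearly chance of a $500,000 breach
      # (expected loss: $25,000/year) passes the test.
      print(worth_buying(10_000, 0.05, 500_000))  # True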

  Of critical import to this analysis is the difference between random and targeted attacks.

  Most criminal attacks are opportunistic. In 2013, hackers broke into the network of the retailer Target Corporation and stole credit card and other personal information belonging to 40 million people. It was the biggest known breach of its kind at the time, and a catastrophe for the company—its CEO, Gregg Steinhafel, resigned over the incident—but the criminals didn’t specifically pick Target for any ideological reasons. They were interested in obtaining credit card numbers to commit fraud, and any company’s database would have done. If Target had had better security, the criminals would have gone elsewhere. It’s like the typical home burglar. He wants to rob a home. And while he might have some selection criteria as to neighborhood and home type, he doesn’t particularly care which one he chooses. Your job as a homeowner is to make your home less attractive to the burglar than your neighbor’s home. Against such undirected attacks, what counts as good security is relative.

  Compare this with the 2012 attack against the New York Times by Chinese hackers, possibly ones associated with the government. In this case, the attackers were trying to monitor reporters’ communications with Chinese dissidents. They specifically targeted the New York Times’ e-mails and internal network because that’s where the information they wanted was located. Against targeted attacks, what matters is your absolute level of security. It is irrelevant what kind of security your neighbors have; you need to be secure against the specific capabilities of your attackers.

  Another example: Google scans the e-mail of all Gmail users, and uses information gleaned from it to target advertising. Of course, there isn’t a Google employee doing this; a computer does it automatically. So if you write your e-mail in some obscure language that Google doesn’t automatically translate, you’ll be secure against Google’s algorithms—because it’s not worth it to Google to manually translate your e-mails. But if you’re suddenly under targeted investigation by the FBI, agents will take the time to translate your e-mails.

  Keep this security distinction between mass and targeted surveillance in mind; we’ll return to it again and again.

  THE VALUE OF ENCRYPTION

  I just described Internet security as an arms race, with the attacker having an advantage over the defender. The advantage might be major, but it’s still an advantage of degree. It’s never the case that one side has some technology so powerful that the other side can’t possibly win—except in movies and comic books.

  Encryption, and cryptography in general, is the one exception to this. Not only is defense easier than attack; defense is so much easier than attack that attack is basically impossible.

  There’s an enormous inherent mathematical advantage in encrypting versus trying to break encryption. Fundamentally, security is based on the length of the key; a small change in key length results in an enormous amount of extra work for the attacker. The difficulty increases exponentially. A 64-bit key might take an attacker a day to break. A 65-bit key would take the same attacker twice the amount of time to break, or two days. And a 128-bit key—which is at most twice the work to use for encryption—would take the same attacker 2⁶⁴ times longer, or about fifty million billion years, to break. (For comparison, Earth is 4.5 billion years old.)
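
  To make that scaling concrete, here is a minimal Python sketch (mine, not the book’s) that assumes, hypothetically, an attacker fast enough to exhaust a 64-bit keyspace in one day:

      # How brute-force effort scales with key length, assuming an attacker
      # who can try all 2**64 keys of a 64-bit key in a single day.
      SECONDS_PER_DAY = 86_400
      SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY
      KEYS_PER_SECOND = 2**64 / SECONDS_PER_DAY  # implied attacker speed

      def years_to_break(key_bits: int) -> float:
          """Worst-case time, in years, to try every key of the given length."""
          return (2**key_bits / KEYS_PER_SECOND) / SECONDS_PER_YEAR

      for bits in (64, 65, 128):
          print(f"{bits}-bit key: about {years_to_break(bits):.3g} years")
      # 64-bit key: about 0.00274 years (one day)
      # 65-bit key: about 0.00548 years (two days)
      # 128-bit key: about 5.05e+16 years (tens of millions of billions)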

  This is why you hear statements like “This can’t be broken before the heat death of the universe, even if you assume the attacker builds a giant computer using all the atoms of the planet.” The weird thing is that those are not exaggerations. They’re just effects of the mathematical imbalance between encrypting and breaking.

  At least, that’s the theory. The problem is that encryption is just a bunch of math, and math has no agency. To turn that encryption math into something that can actually provide some security for you, it has to be written in computer code. And that code needs to run on a computer: one with hardware, an operating system, and other software. And that computer needs to be operated by a person and be on a network. All of those things will invariably introduce vulnerabilities that undermine the perfection of the mathematics, and put us back in the security situation discussed earlier—one that is strongly biased towards attack.

  The NSA certainly has some classified mathematics and massive computation capabilities that let it break some types of encryption more easily. It built the Multiprogram Research Facility in Oak Ridge, Tennessee, for this purpose. But advanced as the agency’s cryptanalytic capabilities are, we’ve learned from Snowden’s documents that it largely uses those other vulnerabilities—in computers, people, and networks—to circumvent encryption rather than tackling it head-on. The NSA hacks systems, just as Internet criminals do. It has its Tailored Access Operations group break into networks and steal keys. It exploits bad user-chosen passwords, and default or weak keys. It obtains court orders and demands copies of encryption keys. It secretly inserts weaknesses into products and standards.

  Snowden put it like this in an online Q&A in 2013: “Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on. Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”

  But those other methods the NSA can use to get at encrypted data demonstrate exactly why encryption is so important. By leveraging that mathematical imbalance, cryptography forces an attacker to pursue these other routes. Instead of passively eavesdropping on a communications channel and collecting data on everyone, the attacker might have to break into a specific computer system and grab the plaintext. Those routes around the encryption require more work, more risk of exposure, and more targeting than bulk collection of unencrypted data does.

  Remember the economics of big data: just as it is easier to save everything than to figure out what to save, it is easier to spy on everyone than to figure out who deserves to be spied on. Widespread encryption has the potential to render mass surveillance ineffective and to force eavesdroppers to choose their targets. This would be an enormous win for privacy, because attackers don’t have the budget to target everyone.

  THE PREVALENCE OF VULNERABILITIES

  Vulnerabilities are mistakes. They’re errors in design or implementation—glitches in the code or hardware—that allow unauthorized intrusion into a system. So, for example, a cybercriminal might exploit a vulnerability to break into your computer, eavesdrop on your web connection, and steal the password you use to log in to your bank account. A government intelligence agency might use a vulnerability to break into the network of a foreign terrorist organization and disrupt its operations, or to steal a foreign corporation’s intellectual property. Another government intelligence agency might take advantage of a vulnerability to eavesdrop on political dissidents, or terrorist cells, or rival government leaders. And a military might use a vulnerability to launch a cyberweapon. This is all hacking.
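
  As a toy illustration (my example, not the book’s), here is one classic implementation mistake in Python: building a shell command out of unsanitized user input, which lets an attacker run commands of his own.

      import subprocess

      # Deliberately vulnerable: the hostname is pasted into a shell command.
      # Input like "example.com; cat /etc/passwd" also runs the attacker's command.
      def ping_host(hostname: str) -> str:
          result = subprocess.run(f"ping -c 1 {hostname}",
                                  shell=True, capture_output=True, text=True)
          return result.stdout

      # The fix: skip the shell and pass arguments as a list, so input is
      # treated as data rather than as code.
      def ping_host_safely(hostname: str) -> str:
          result = subprocess.run(["ping", "-c", "1", hostname],
                                  capture_output=True, text=True)
          return result.stdout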

  When someone discovers a vulnerability, she can use it either for defense or for offense. Defense means alerting the vendor and getting it patched—and publishing it so the community can learn from it. Lots of vulnerabilities are discovered by vendors themselves and patched without any fanfare. Others are discovered by researchers and ethical hackers.

  Offense involves using the vulnerability to attack others. Unpublished vulnerabilities are called “zero-day” vulnerabilities; they’re very valuable to attackers because no one is protected against them, and they can be used worldwide with impunity. Eventually the affected software’s vendor finds out—the timing depends on how widely the vulnerability is exploited—and issues a patch to close it.

  If an offensive military cyber unit or a cyberweapons manufacturer discovers the vulnerability, it will keep it secret for future use to build a cyberweapon. If used rarely and stealthily, the vulnerability might remain secret for a long time. If unused, it will remain secret until someone else discovers it.

  Discoverers can sell vulnerabilities. There’s a robust market in zero-days for attack purposes—both governments and cyberweapons manufacturers that sell to governments are buyers—and black markets where discoverers can sell to criminals. Some vendors offer bounties for vulnerabilities to spur defense research, but the rewards are much lower.

  Undiscovered zero-day vulnerabilities are common. Every piece of commercial software—your smartphone, your computer, the embedded systems that run nuclear power plants—has hundreds if not thousands of vulnerabilities, most of them undiscovered. The science and engineering of programming just isn’t good enough to produce flawless software, and that isn’t going to change anytime soon. The economics of software development prioritize features and speed to market, not security.

  What all this means is that the threat of hacking isn’t going away. For the foreseeable future, it will always be possible for a sufficiently skilled attacker to find a vulnerability in a defender’s system. This will be true for militaries building cyberweapons, intelligence agencies trying to break into systems in order to eavesdrop, and criminals of all kinds.

  MAINTAINING AN INSECURE INTERNET

  In Chapter 6, I discussed how the NSA uses both existing and specially created vulnerabilities to hack into systems. Its actions put surveillance ahead of security, and end up making us all less secure. Here’s how the NSA and GCHQ think, according to a Guardian article on some of the Snowden documents: “Classified briefings between the agencies celebrate their success at ‘defeating network security and privacy. . . .’ ”

  Just how do governments go about defeating security and privacy? We know the NSA uses the following four main practices. Assume that the Russians, Chinese, and various other countries are using similar methods. And cybercriminals aren’t far behind.

  Stockpiling vulnerabilities in commercial software that we use every day, rather than making sure those security flaws get fixed. When the NSA discovers (or buys) a vulnerability, it can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on target computer systems. Both tactics support important US policy goals, but the NSA has to choose which one to pursue in each case.

  Right now, the US—both at the NSA and at US Cyber Command—stockpiles zero-day vulnerabilities. How many it has is unclear. In 2014, the White House tried to clarify the country’s policy on this in a blog post, but didn’t really explain it. We know that a single cyberweapon, Stuxnet, used four zero-days. Using up that many for a single cyberattack implies that the government’s stockpile is in the hundreds.

  In congressional testimony, former NSA director Michael Hayden introduced the agency jargon NOBUS, “nobody but us”—that is, a vulnerability that nobody but us is likely to find or use. The NSA has a classified process to determine what it should do about vulnerabilities. The agency claims that it discloses and closes most of the vulnerabilities it finds, but holds back some—we don’t know how many—that it believes are NOBUSes.

  This approach seems to be the appropriate general framework, but it’s impossible to apply in practice. Many of us in the security field don’t know how to make NOBUS decisions, and we worry that the government can’t, either.

  This stockpiling puts everyone at risk. Unpatched vulnerabilities make us all less safe, because anyone can independently discover them and use them to attack us. They’re inherently destabilizing, especially because they are only effective for a limited time. Even worse, each use runs the risk that others will learn about the vulnerability and use it for themselves. And they come in families; keeping one secret might mean that an entire class of vulnerabilities remains undiscovered and unpatched. The US and other Western countries are highly vulnerable to zero-days, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable—North Korea much less—so they have considerably less incentive to get vulnerabilities fixed.

  Inserting backdoors into widely used computer hardware and software products. Backdoors aren’t new. The security industry has long worried about backdoors left in software by hackers, and has spent considerable effort trying to find and fix them. But now we know that the US government is deliberately inserting them into hardware and software products.

  One of the NSA documents disclosed by Snowden describes the “SIGINT Enabling Project,” one tactic of which is to “insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets.” We don’t know much about this project: how much of it is done with the knowledge and consent of the manufacturers involved, and how much is done surreptitiously by either employees secretly working for the government or clandestine manipulation of the company’s master source code files. We also don’t know how well it has succeeded—the documents don’t give us a lot of details—but we know it was funded at $250 million per year. We also don’t know which other countries do the same things to systems designed by companies under their political control.

 
