Given that this theater of war is so new and unknown, given that everything happens so fast, and given the military's default belief in the righteousness of its mission, however it is framed, militaries have tended to rush in and fill what they regard as a void in security. The corresponding danger is the perception that we have military problems that beg for military solutions. Those solutions tend to be totalitarian at worst, and extralegal at best.
We need to fix this.
In the US, a series of laws prevents the military from playing a role in normal peacetime domestic affairs, while ensuring that it is prepared to counter foreign threats. The 1878 Posse Comitatus Act and other directives prevent the military from engaging in domestic security matters. Because we limit the military's role to making war against foreign powers, we have felt confident in allowing it to operate with more latitude. For example, the normal rules of search and seizure that apply to law enforcement don't apply to the military, because such rules just don't make sense in the middle of a war.
Offensive military operations in cyberspace, be they espionage or attack, should remain within the purview of the military. In the US, that's Cyber Command. If we're going to attack another country's electronic infrastructure, we should treat it like any other attack on a foreign country: not as simple espionage (cyber or real-world), but as an attack. Such operations should be recognized as offensive military actions, approved at the highest levels of the executive branch, and subject to the same international law standards that govern acts of war in the offline world.
BREAK UP THE NSA
I have just proposed that the NSA’s espionage mission be separated from its surveillance mission, and that the military’s role in cyberspace be restricted to actions against foreign military targets. To accomplish this, I advocate breaking up the NSA and restoring and strengthening the various agencies’ responsibilities that existed prior to 9/11:
• As part of the Department of Defense, the NSA should focus on espionage against foreign governments.
• The Department of Justice should be responsible for law enforcement and terrorism investigations. To that end, it should conduct only targeted and legally permissible surveillance activities, domestic and foreign, and should pursue leads based on the expertise of FBI agents and not NSA databases.
• The NSA’s defensive capabilities in cryptography, computer security, and network defense should be spun off and become much more prominent and public. The National Institute of Standards and Technology (NIST), a civilian agency outside the Department of Defense, should reassert control over the development of technical standards for network security. The Computer Security Act of 1987 attempted to keep the NSA out of domestic security by making it clear that NIST—then called the National Bureau of Standards—had the lead in establishing technical security standards. We need to strengthen that law and ensure it’s obeyed.
• The US’s offensive cyber capabilities should remain with US Cyber Command. That organization should subsume the NSA’s hacking capabilities (that’s TAO). The general in charge of US Cyber Command should not also be the director of the NSA.
This is a long-range plan, but it’s the right one. In the meantime, we should reduce the NSA’s funding to pre-9/11 levels. That in itself would do an enormous amount of good.
FIGHT THE CYBER SOVEREIGNTY MOVEMENT
Twenty years ago, few governments had any policies regulating the Internet. Today, every country does, and some of them are pretty draconian. This shouldn’t come as a surprise; the Internet became too big a deal for governments to ignore. But this change took many Internet watchers by surprise, and continues to do so.
Increasingly, the world’s governments are fighting against the Internet’s inherently international nature. If a regime wants to surveil its people, limit what they can read, and censor what they can say, the international free-and-open nature of the Internet presents a problem.
For this reason, countries like Russia, China, and Saudi Arabia have pushed for years for more national control over their domestic Internet. Through international organizations like the International Telecommunication Union—that's the UN agency that sets telephony standards—they are trying to wrest control of the Internet from the more informal multi-stakeholder international organizations that are currently in charge. Their arguments sound benign, but their motivations are not. They want an Internet that recognizes both national borders and the rights of governments to exert control within those borders: control that, in this case, means more surveillance and censorship.
The disclosure of the NSA’s surveillance activities has given this position a huge boost. Several governments have pushed back against US dominance of the Internet because they’re concerned about their citizens’ privacy. Countries like Brazil and Germany have called for more of their citizens’ data to be stored within their own borders. Other countries, with the opposite agenda, have seized on the same rhetoric. Russia passed a law in 2014 mandating that online businesses store data on its citizens within the country, beyond the reach of the NSA but within easy reach of the Russian government.
I hold conflicting views about this. On one hand, I want countries with stronger privacy laws to protect their citizens’ data by demanding that it be subject to their jurisdiction. On the other hand, I don’t think this will protect such data against NSA surveillance. At least the NSA has some constraints on what it may access within the US. If that same data were stored in Brazilian and German servers, those legal restrictions would not apply. And given what we know about the NSA’s technical capabilities, I have no doubt that the agency will gain access in any case.
The fundamentally international nature of the Internet is an enormous benefit for people living in countries that engage in surveillance and censorship. Cyber sovereignty is often a smoke screen for the desires of political leaders to monitor and control their citizens without interference from foreign governments or corporations. And the fight against cyber sovereignty is often viewed as a smoke screen for the NSA’s efforts to gain access to more of the world’s communications. We need to reaffirm our support for a free, open, and global Internet, and then work to ensure its continued existence.
PROVIDE FOR COMMONS
Unowned public spaces have enormous social value. Our public parks, our sidewalks, our roads are not owned by any private concern, and we have laws that reflect that public ownership. On the Internet, everything is owned by some private entity; even that website independently run by your friend is hosted on some corporate server somewhere. There is no commons.
We don’t perceive our online experience this way. Chatting on Facebook feels like chatting in person, and we’re surprised when the company exercises its right to delete posts and ban people. We’re even more surprised when we learn that we have no right to appeal—or even to our data. Yes, we agreed to hand over all those rights when we clicked that end-user license agreement. But because we didn’t bother reading it, we weren’t aware of it.
The concept of public space is important because a lot of our freedoms in the offline world are based on that notion. In the US, the First Amendment protects free speech in public places. Other laws limit behaviors like public drunkenness and lewdness. These laws don’t apply to the Internet, because everything there is private space. The laws don’t apply to things we say on Facebook, Twitter, Instagram, or Medium—or to comments we make on news sites—even if they are publicly readable.
Back in the dawn of the Internet, public discussion forums were conducted on something called Usenet. It was a decentralized system, and no one company could control who could speak and what they could say. As discussion forums moved to websites and corporate-owned platforms, that freedom disappeared.
We need places on the Internet that are not controlled by private parties—places to speak, places to converse, places to gather, places to protest. They could be government-run, or they could be corporate-run with special rules treating them as a true commons. Similar to common-carrier rules by which telcos are not allowed to discriminate among different types of traffic, there could be common-carrier social networking areas that the owners are not allowed to monitor or censor.
Whatever the solution, commons are vital to society. We should deliberately work to ensure that we always have them in cyberspace.
14
Solutions for Corporations
As we look to limit corporate surveillance, it's important to remember that we all reap enormous benefits from data collection and use. It gives us conveniences that just weren't possible before: real-time driving directions based on actual congestion data, grocery lists that remember what we bought last time, the ability to get a store refund even if you don't save your receipts, the ability to remotely verify that you turned out the lights and locked the door, instant communication with anyone anywhere in the world. There's more coming. Watch any science fiction movie or television show and pay attention to the marvels of a fully computerized world; much of it assumes that computers know, respond to, and remember what people are doing. This sort of surveillance is our future, and it's a future filled with things that make our lives better and more enjoyable.
Similarly, there is value to unfettered access to technology. Although much of this book focuses on the dark side of technology, we must remember that technology has been an enormous benefit to us all. Technology enables us to accomplish complex tasks more quickly, easily, and accurately for many purposes: to develop more durable construction materials; to find and disseminate information; to precisely depict physical phenomena; to communicate with others free of geographical constraints; to document events; to grow more food; to live longer. I could not have written this book without the Internet. It’s not perfect, of course. Technology is unevenly distributed on the planet, and there are haves and have-nots, but—in general—more technology is better.
The last thing we want to do is derail that future. We simply don’t know what sorts of inventions are coming, or what major human problems they will be able to solve. We need to be able to experiment with new technologies and with new businesses based on those technologies, and this includes surveillance technologies. The trick will be maximizing the benefits that come from companies collecting, storing, and analyzing our data, while minimizing the harms.
There are lots of solutions out there to consider. The 1980 OECD Privacy Framework is a great place to start; it lays out limitations on data collection, data storage, and data use. In 1995, the European Union passed the EU Data Protection Directive, which regulated personal data collected by corporations. American corporations, accustomed to the much more permissive legal regime in the US, are constantly running afoul of European law. And reforms, bringing that law up to date with modern technology, are currently being debated.
The solutions offered in this chapter are all directed at the private collection and use of our data. Sometimes these changes can be spurred by the market, but most of the time they will be facilitated by laws. This is really a list of things governments need to do, which in turn is really a list of things citizens need to demand that their governments do. Since they affect corporations, they’re in this chapter.
MAKE VENDORS LIABLE FOR PRIVACY BREACHES
One way to improve the security of collected data is to make companies liable for data breaches.
Corporations are continually balancing costs and benefits. In this case, the costs are the expense of securing the data they collect and save plus the cost of insecurity if there's a breach; the benefit is the value of the data they collect. Right now, the cost of insecurity is low. A few very public breaches aside—Target is an example here—corporations find it cheaper to spend money on PR campaigns touting good security, weather the occasional press storm and round of lawsuits when they're proven wrong, and fix problems after they become public.
OECD Privacy Framework (1980)
COLLECTION LIMITATION PRINCIPLE: There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.
DATA QUALITY PRINCIPLE: Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date.
PURPOSE SPECIFICATION PRINCIPLE: The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfilment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.
USE LIMITATION PRINCIPLE: Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with Paragraph 9 except: a) with the consent of the data subject; or b) by the authority of law.
SECURITY SAFEGUARDS PRINCIPLE: Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification or disclosure of data.
OPENNESS PRINCIPLE: There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.
INDIVIDUAL PARTICIPATION PRINCIPLE: Individuals should have the right: a) to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to them; b) to have communicated to them, data relating to them i. within a reasonable time; ii. at a charge, if any, that is not excessive; iii. in a reasonable manner; and iv. in a form that is readily intelligible to them; c) to be given reasons if a request made under subparagraphs (a) and (b) is denied, and to be able to challenge such denial; and d) to challenge data relating to them and, if the challenge is successful to have the data erased, rectified, completed or amended.
ACCOUNTABILITY PRINCIPLE: A data controller should be accountable for complying with measures which give effect to the principles stated above.
This is because most of the cost of privacy breaches falls on the people whose data is exposed. In economics, this is known as an externality: an effect of a decision not borne by the decision maker. Externalities limit the incentive for companies to improve their security.
You might expect users to respond by favoring secure services over insecure ones—after all, they’re making their own buying decisions on the basis of the same market model. But that’s not generally possible. In some cases, software monopolies limit the available product choice. In other cases, the “lock-in effect” created by proprietary file formats, existing infrastructure, compatibility requirements, or software-as-a-service makes it harder to switch. In many cases, we don’t know who is collecting our data; recall the discussion of hidden surveillance in Chapter 2. In all cases, it’s hard for buyers to assess the security of any data service. And it’s not just nontechnical buyers; even I can’t tell you whether or not to entrust your privacy to any particular service provider.
Liabilities change this. By raising the cost of privacy breaches, we can make companies accept the costs of the externality and force them to expend more effort protecting the privacy of those whose data they have acquired. We’re already doing this in the US with healthcare data; privacy violations in that industry come with serious fines.
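To make this incentive shift concrete, here is a minimal sketch of the underlying arithmetic (mine, not the book's), using invented numbers: when breach costs are an externality, skimping on security looks cheaper; once liability forces the company to internalize those costs, stronger security wins the same comparison.

```python
# Illustrative only: a toy expected-cost model with invented numbers,
# not data from the book. It shows how liability changes the calculus.

def expected_cost(security_spend, breach_probability, breach_cost):
    """Expected annual cost = security spending plus the expected loss
    the company itself bears if a breach occurs."""
    return security_spend + breach_probability * breach_cost

# Without liability, most breach costs fall on users (an externality);
# the company internalizes only a modest cleanup-and-PR cost.
weak = expected_cost(1_000_000, 0.30, 2_000_000)      # skimp on security
strong = expected_cost(5_000_000, 0.05, 2_000_000)    # invest in security
print(f"No liability:   weak ${weak:,.0f} vs. strong ${strong:,.0f}")
# -> weak security looks cheaper, so it wins.

# With liability (fines, damages), the company bears the full breach cost.
weak = expected_cost(1_000_000, 0.30, 50_000_000)
strong = expected_cost(5_000_000, 0.05, 50_000_000)
print(f"With liability: weak ${weak:,.0f} vs. strong ${strong:,.0f}")
# -> strong security is now the rational choice.
```

The specific figures are arbitrary; the point is only that liability moves the breach cost from the externality column onto the company's own ledger.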
And it's starting to happen here with retail data as well. Target is facing several lawsuits as a result of its 2013 breach. In other cases, banks are being sued for inadequately protecting the privacy of their customers. One way to help would be to require companies to inform users about all of the data about them that might have been compromised.
These cases can be complicated, with multiple companies involved in a particular incident, and apportioning liability will be hard. Courts have been reluctant to find a value in privacy, because people willingly give it away in exchange for so little. And because it is difficult to link harms from loss of privacy to specific actions that caused those harms, such cases have been very difficult to win.
There's a better way to approach this: make the privacy breach itself the harm, rather than any ensuing damages. Companies must be compelled to comply with regulations like the 1973 Code of Fair Information Practices and other similar privacy regulations that are currently not mandatory; the violation then becomes the failure to adhere to those regulations, not some downstream harm that has to be proven.
There’s a parallel with how the EPA regulates environmental pollutants. Factories have a legal obligation to limit the amount of particulates they emit into the environment. If they exceed these limits, they are subject to fines. There’s no need to wait until there is an unexpected surge of cancer cases. The problem is understood, the regulations are in place, companies can decide whether to build factories with coal furnaces or solar panels, and sanctions are waiting if they fail to comply with what are essentially best practices. That is the direction we need to go.
The US Code of Fair Information Practices (1973)
The Code of Fair Information Practices is based on five principles:
1. There must be no personal data record-keeping systems whose very existence is secret.
2. There must be a way for a person to find out what information about the person is in a record and how it is used.
3. There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
4. There must be a way for a person to correct or amend a record of identifiable information about the person.
5. Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.