Thus, having seen to the ethics requirements of the first principle, and following the second, maximin principle, social policy aimed at resolving the conflicting interests and preferences inherent in the cases we have discussed should take heed of the important work these practices are potentially doing to raise the standing of those on the losing end of entrenched power and knowledge asymmetries.
For the welfare of others
We end this section with what may well be the toughest challenge confronting data obfuscation: whether it can be tolerated when it aims at systems that promise societal benefits extending beyond the individual subjects themselves. As we enter deeper and deeper into the epistemic and decision-making
paradigm of big data, and as hope is stoked by its potential to serve the common good, questions arise concerning the obligation of individuals to participate.23 Obfuscators may be faulted for being unwilling to pay costs for benefits, failing to pitch in for the sake of the common good. But what exactly is the extent of this obligation, and its limits? Are individuals obligated to pay whatever is asked, succumb to any terms of service, and pitch in even if there is a cost? Do sufferers from a rare disease, for example, owe it to others to participate in studies, and to allow data about them to be integrated into statistical analyses in which the size of N improves the results? And what if there is a cost?
The plight of the ethical obfuscator resembles that of the ethical citizen expected to contribute to the common good by, say, paying taxes or serving in the military. Some might say, equivalently, that we must fulfill an obligation not only by contributing to the common store of data but also by doing so
honestly, accurately, and conscientiously. Even if there is some sense of obligation, what principles govern its shape, particularly if there is risk or cost associated with it? Ethics, generally, doesn’t require supererogation, and liberal democracies don’t demand or condone the sacrifice of innocent individuals, even a few, for the benefit of the majority. Where to draw the line?
What principles of justice offer guidance on these matters?
Jeremy Waldron observed that after the terrorist attacks of September
11, 2001, citizens were asked to allow the balance of security and liberty to be tipped in favor of security.24 Although it isn’t unusual for social policy to require tradeoffs—one value, one right against another or others—Waldron reminds
us that such tradeoffs must be made wisely with fastidious attention to consequences. One particular consequence is the distributional impact; losses and gains, costs and benefits should be borne fairly among individuals and between groups. Waldron’s worry is that when we say that we collectively give up a measure of freedom in return for our collective security there is an important elision: some individuals or groups suffer a disproportionate loss of freedom for the security benefit of all, or, as sometimes happens with tradeoffs in general, may even be excluded entirely from the collective benefits. Generalizing this warning to questions about paying for the collective good with individual data prompts us to consider not only the total sum of costs over benefits but also who is paying the cost and who is enjoying the benefits. Often, companies defend data avarice by citing service improvements or security but are
vague about crucial details—for example, whether existing customers and data contributors are supporting new ones who haven’t pitched in, and what proportion of the value extracted accrues to “all” and what proportion to the company. These questions must be answered in order to address questions
about the nature and extent of the obligations data subjects have to contribute to the common data store.
Risk and data
The language of risk frequently crops up in hailing the promise of big data for the good of all. Proponents would have us believe that data will help reduce risks of terror and crime, of inefficacious medical treatment, of bad credit decisions, of inadequate education, of inefficient energy use, and so forth. These claims should persuade or even compel individuals to give generously of
information, as we graciously expose the contents of our suitcases in airports.
By the logic of these claims, obfuscators are unethical in diminishing, depriving, or subverting the common stock. Persuasive? Irrefutable? Yet here, too, justice demands attention to distribution and fairness: who risks and who benefits? We do not flatly reject the claims, but until these questions are answered and issues of harm and costs are addressed there can be no such obligation.
Take, for example, the trivial and ubiquitous practice of online tracking for the purpose of behavioral advertising.25 Ad networks claim that online tracking and behavioral advertising reduce the “risk” of costly advertising to unsuitable targets or of targeting attractive offers to unprofitable customers. Risk reduction it may indeed be, but information contributions by all are improving the lot of only a few: primarily the ad networks providing the service, possibly the advertisers, and perhaps the attractive customers they seek to lure. We made a similar point above when we discussed data aggregation for the purpose of reducing credit fraud: that citing risk reduction often oversimplifies a picture in which risk may not be reduced overall, or, even if it is reduced, not reduced for all. What actually occurs is that risk is shifted and redistributed. We offer similar cautions against inappropriate disclosure of medical information,
which may increase risk for some information subjects while decreasing it for others; or collecting and mining data for the purposes of price discrimination, imposing risks on consumers under surveillance while reducing risks for merchants who engage in schemes of data profiling.
In sum
Data obfuscation raises important ethical challenges that anyone designing or using obfuscating systems would do well to heed. We have scrutinized the
challenges and explored contexts and conditions that are relevant to their adjudication in ethical terms. But we also have discovered that adjudicating ethical challenges often invokes considerations that are political and expedient. Politics comes into play when disputes hinge on disagreements over the relative importance of societal ends and relative significance of ethical and societal values. It also comes into play when addressing the merits of competing non-moral claims, the allocation of goods, and the distribution of risks.
When entering the realms of the political, obfuscation must be tested against the demands of justice. But if obfuscators are so tested, so must we test the data collectors, the information services, the trackers, and the profilers. We have found that breathless rhetoric surrounding the promise and practice of data does not say enough about justice and the problem of risk shifting.
Incumbents have embedded few protections and mitigations into the edifices of data they are constructing. Against this backdrop, obfuscation offers a means of striving for balance, defensible when it functions to resist the domination of the weaker by the stronger. A just society leaves this escape hatch open.
5 WILL OBFUSCATION WORK?
How can obfuscation succeed? How can the efforts of a few individuals generating extraneous data work against well-funded, determined institutions, let alone against such behemoths of data as Google, Facebook, Acxiom, and the
National Security Agency? Encountering these doubts again and again, we
have come to see that when people ask, about particular instantiations of obfuscation or about obfuscation generally, “But does it work?” the reasonable answer is “Yes, but it depends.” It depends on the goals, the obfuscator, the adversary, the resources available, and more. These, in turn, suggest means, methods, and principles for design and execution.
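To make “generating extraneous data” concrete, here is a minimal sketch, in Python, of the kind of thing an obfuscating client might do: intersperse a genuine search query with plausible decoy queries so that the genuine one is harder to single out. The decoy phrases, the timing parameters, and the endpoint URL are hypothetical placeholders of our own, not the implementation of any tool discussed in this book.

```python
import random
import time
import urllib.parse
import urllib.request

# Hypothetical pool of decoy phrases; a real tool would draw on a much
# larger, regularly refreshed source so that the noise stays plausible.
DECOY_PHRASES = [
    "weather this weekend",
    "how to repot a fern",
    "train schedule downtown",
    "banana bread recipe",
    "local library hours",
]

SEARCH_URL = "https://example.com/search"  # placeholder endpoint


def send_query(phrase: str) -> None:
    """Issue a single query so that it appears in the provider's logs."""
    url = SEARCH_URL + "?q=" + urllib.parse.quote(phrase)
    try:
        urllib.request.urlopen(url, timeout=10).read()
    except OSError:
        pass  # decoy traffic need not succeed to serve its purpose


def obfuscated_search(real_query: str, decoys_per_query: int = 3) -> None:
    """Hide one genuine query among randomly timed decoy queries."""
    queries = [real_query] + random.sample(DECOY_PHRASES, decoys_per_query)
    random.shuffle(queries)  # the genuine query sits somewhere in the batch
    for q in queries:
        send_query(q)
        time.sleep(random.uniform(1.0, 30.0))  # irregular timing is harder to filter out


if __name__ == "__main__":
    obfuscated_search("symptoms of a rare disease")
```

Even this toy version illustrates why the answer is “it depends”: whether it works turns on how plausible the decoys are, how their timing is distributed, and how much effort the adversary is willing to spend separating signal from noise.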
The typical scenario we imagined earlier involves individuals functioning
within information ecosystems often not of their own making or choosing.
Against the designers, operators, managers, and owners of these ecosystems, individual data subjects stand in an asymmetric relation of knowledge, power, or both. Although these individuals are aware that information about them or produced by them is necessary for the relationship, there is much that they don’t know. How much is taken? What is done with it? How will they be affected? They may grasp enough about the ecosystems in which they are
wittingly or unwittingly enrolled, from Web searching to facial recognition, to believe or recognize that their practices are inappropriate, but, at the same time, recognize that they aren’t capable of reasonably functioning outside them, or of reasonably inducing change within them.
Whether obfuscation works—whether unilateral shifting of terms of
engagement over personal information is fulfilled by a particular obfuscation project—may seem to be a straightforward question about a specific problem-solving technique, but upon closer scrutiny it is actually several questions.
Whether obfuscation works depends on characteristics of the existing circumstances, the desired alteration in terms, what counts as fulfillment of these desires, and the architecture and features of the particular obfuscation project under consideration. This is why answering the question “Does it work?” with
“It depends” isn’t facetious; instead it is an invitation to consider in systematic terms what characteristics of an information ecosystem make it one in which obfuscation could work. Beyond these considerations, we seek to map design possibilities for obfuscation projects into an array of diverse goals that the instigators and users of such projects may have.
Therefore, we have to answer two questions with this chapter. We can take the question “Will obfuscation work?” in the sense “How can obfuscation work for me and my particular situation?” or in the sense “Does obfuscation work in general?” We will respond to both questions. The overall answer is straightforward: Yes, obfuscation can work, but whether it does and to what extent depends on how it is implemented to respond to a threat, fulfill a goal, and meet other specific parameters. This chapter presents a set of questions that we think should be addressed if obfuscation is to be applied well.
5.1 Obfuscation is about goals
In the world of security and privacy theory, it is by now well established that the answer to every “Does it work?” question is “It depends.” To secure something, to make it private or safe or secret, entails tradeoffs, many of which we have already discussed. Securing things requires time, money, effort, and
attention, and adds organizational and personal friction while diminishing convenience and access to many tools and services. Near-total freedom from
digital surveillance for an individual is simple, after all: just lead the life of an undocumented migrant laborer of the 1920s, with no Internet, no phones, no insurance, no assets, riding the rails, being paid off the books for illegal manual work. Simple, but with a very high cost, because the threat model of “everything” is ludicrously broad. When we think of organizational security tradeoffs, we can think of the “Cone of Silence” in the spy-movie-parody television series Get Smart.1 Used for conducting top secret meetings, the Cone works so well that the people in it can’t hear one another—it is perfectly private and amusingly useless.2
Threat models lower the costs of security and privacy by helping us
understand what our adversaries are looking for and what they are capable of finding, so that we can defend against those dangers specifically.3 If you know that your organization faces a danger that includes sophisticated attacks on its information security, you should fill in all the USB ports on the organization’s computers with rubber cement and keep sensitive information on “airgapped”
machines that are never connected to the network. But if you don’t believe that your organization faces such a danger, why deprive people of the utility of USB
sticks? Obfuscation in general is useful in relation to a specific type of threat, shaped by necessary visibility. As we have emphasized throughout, the obfuscator is already exposed to some degree—visible to radar, to people
scrutinizing public legal filings, to security cameras, to eavesdropping, to Web search providers, and generally to data collection defined by the terms of service. Furthermore, he or she is exposed, to a largely unknown degree, on the wrong side of the information asymmetry, and this unknown exposure is
further aggravated by time—by future circulation of data and systems of analysis. We take this visibility as a starting point for working out the role that obfuscation can play.
To put that another way, we don’t have a best-practices threat model
available—in fact, an obfuscator may not have sufficient resources, research, or training to put such a model together. We are operating from a position of weakness, obligated to accept choices we should probably refuse. If this is the case, we have to make do (more on that below) and we must have a clear
sense of what we want to accomplish. Consider danah boyd’s research on
American teenagers’ use of social media. Teens in the United States are
subject to an enormous amount of scrutiny, almost all of it without their
consent or control (parents, school, other authority figures). Social media would seem to make them subject to even more. They are exposed to scrutiny by default—in fact, it is to their benefit, from a privacy perspective, to appear to be visible to everyone. “As teens encounter particular technologies, they make decisions based on what they’re trying to achieve,” boyd writes,4 and what they are trying to achieve is often to share content without sharing
meaning. They can’t necessarily create secret social spaces for their community—parents can and do demand passwords to their social-network
accounts and access to their phones. Instead, they use a variety of practices that assume everyone can see what they do, and then behave so that only a
few people can understand the meaning of their actions. “Limiting access to meaning,” boyd writes, “can be a much more powerful tool for achieving
privacy than trying to limit access to the content itself.”5 Their methods don’t necessarily use obfuscation (they lean heavily on subtle social cues, references, and nuance to create material that reads differently to different audiences, a practice of “social steganography”), but they emphasize the importance of
understanding goals. The goal is not to disappear or to maintain total
informational control (which may be impossible); it is to limit and shape the community that can accurately interpret actions that everyone can see.
Much the same is true of obfuscation. Many instances and practices
that we have gathered under that heading are expressions of particular goals
that take discovery, visibility, or vulnerability as a starting point. For all the reasons we have already discussed, people now can’t escape certain kinds of data collection and analysis, so the question then becomes “What does the
obfuscator want to do with obfuscation?” The answer to that question gives us a set of parameters (choices, constraints, mechanisms) that we can use to
shape our approach to obfuscation.
5.2 I want to use obfuscation …
A safe that can’t be cracked does not exist. Safes are rated in hours—in how long it would take an attacker (given various sets of tools) to open them.6 A safe is purchased as a source of security in addition to other elements of security, including locked doors, alarms, guards, and law-enforcement personnel.
A one-hour safe with an alarm probably is adequate in a precinct where the police reliably show up in twenty minutes. If we abstract this a little bit, we can use it to characterize the goals of obfuscation. The strength of an obfuscation approach isn’t measured by a single objective standard (as safes are) but in relation to a goal and
a context: to be strong enough. It may be used on its own or in concert with other privacy techniques. The success of obfuscation is always relative to its purposes, and to consideration of constraints, obstacles, and the un-level playing field of epistemic and power asymmetries.
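The safe example can be abstracted into a toy comparison (our own illustration, not a formula from the security literature): a protection is “strong enough” when defeating it takes longer than the defender needs to detect and respond.

```python
def protection_suffices(hours_to_defeat: float, hours_to_respond: float) -> bool:
    """The safe-and-alarm logic: protection holds if breaking it takes
    longer than the time needed for a response to arrive."""
    return hours_to_defeat > hours_to_respond


# The chapter's example: a one-hour safe in a precinct where the police
# reliably show up in twenty minutes.
print(protection_suffices(hours_to_defeat=1.0, hours_to_respond=20 / 60))  # True
```

Obfuscation rarely admits numbers this tidy, which is exactly the point: “strong enough” has to be judged against a particular goal, adversary, and context rather than against a single rating.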
When gathering different obfuscation examples, we observed that there
was convergence around general aims and purposes that cropped up numerous times, even though a single system could be associated with several ends or purposes and even though intermediate ends sometimes served as means
to achieve other ends. There are subtler distinctions, too, but we have simplified and unified purposes and ends into goals to make them more readily
applicable to design and practice. They are arranged roughly in order of inclusion, from buying time to expressing protest. Interfering with profiling, the fifth goal, can include some of the earlier goals, such as providing cover, within it, and can be in turn contained by expressing protest (the sixth goal). (Since virtually all obfuscation contributes to the difficulty of rapidly analyzing and processing data for surveillance purposes, all the higher-order goals include the first goal: buying time.) As you identify the goal suited to your project, you ascend a ladder of complexity and diversity of possible types of obfuscation.
Skeptical readers—and we all should be skeptical—will notice that we
are no longer relying heavily on examples of obfuscation used by powerful
groups for malign ends, such as the use of Twitter bots to hamper election protests, the use of likefarming in social-network scams, or inter-business corporate warfare. We want this section to focus on how obfuscation can be used for positive purposes.
If you can answer the questions in the previous chapter to your satisfaction, then this chapter is intended for you. We begin with the possibility that you want to use obfuscation to buy some time.