over someone’s eyes, spoiling a dataset, or impairing the functioning of a system, even to achieve laudable ends, the ethical obfuscator still should investigate whether other means are as readily available with lesser moral costs. We can ask whether the costs associated with different forms of obfuscation vary significantly, but we also can ask whether other means might achieve the same goals without the costs we have been considering thus far.

The question of whether less disruptive but equally or more effective alternatives to obfuscation can be found is worth asking—although in chapter 3, where we reviewed some of the standard approaches to resisting troubling data-surveillance practices, we found little cause for optimism. Opting out, suggested by critics who say “If you don’t like this practice, you can always choose not to engage,” may be feasible when it comes to nifty mobile apps, digital games, and various forms of social media, but inconvenient and expensive when it comes to online shopping, EZ Pass, and Frequent Flyer programs—and forgoing many vectors of surveillance—mobile phones, credit cards, insurance, motor vehicles, public transportation—is now nearly infeasible for many people.

Other alternatives, including corporate best practices and legal regulation, though promising in theory, are limited in practice. For structural reasons having to do with radically misaligned interests and the proverbial folly of leaving the fox to guard the henhouse, meaningful limits on data practice aren’t likely to be set by corporate actors. Further, a history of unsuccessful attempts to have various industries regulate their respective data practices leaves little hope for meaningful reform. Although governmental legislation has also been variably effective,9 its effects haven’t reached the commercial sector, particularly when it comes to regulating online and mobile tracking.

Despite dogged efforts and the intense commitment of the Federal Trade Commission, the Department of Commerce’s National Telecommunications and Information Administration, and other government agencies, general progress has been minimal. For example, notice and consent expressed in privacy policies remain the dominant mechanisms for protecting privacy online, despite decisive evidence that they are incomprehensible to data subjects, are expressed ambiguously, are continuously revised, and have not constrained the degree and scope of data collection and use in practice. Further, by most accounts, concerted efforts to establish a Do-Not-Track standard for Web browsing were sabotaged by the advertising industry,10 and the Snowden revelations11 have revealed that the U.S. government and other governments have long been conducting mass surveillance. Individuals have good reason to question whether their privacy interests in appropriate gathering and use of information will be secured any time soon by conventional means.

  Justice and fairness

So far, we have shown that when obfuscators and their critics disagree over the ethics of obfuscation, their disagreements sometimes boil down to clashes over ends and values. The critic accuses the obfuscator of violating legitimate ends; the obfuscator accuses the target of precisely the same. Clashes such as these would benefit from public airing and deliberation in the political arena, something we strongly support. But in our discussion of the ethics of obfuscation, we also identified clashes that concerned conflicting interests and preferences more than competing ends and values. A clear instance of this emerged in our discussion of free riding. Charged with unseemly behavior, obfuscators may point to the terms of interaction unilaterally set by data collectors, which enable the seizure by these data collectors of surplus value generated during the course of the interaction. In relation to peers, complaints of free riding have opened tricky questions, such as whether blame is more appropriately assigned to an obfuscator who may have exposed peers to even greater scrutiny or disadvantage or to the agents of that scrutiny or disadvantage.

A purely ethical resolution of such claims and counterclaims might not be possible when, taken in isolation, they amount to favoring either the obfuscator’s interests and preferences or those of the obfuscator’s target. Within a broader societal context, however, disputes over whose preferences and interests are given greatest credence are deeply political. They recognize certain entitlements over others, and in so doing they often bring about systematic allocation or reconfiguration of power, authority, and goods as well as of burdens and subjection. These are among the questions of justice and fairness that, for centuries, have troubled political philosophers when resolving clashes over what values trump other values and whose rights count more than the rights of others. Beyond rights and values, however, societies have sought principles to govern the distribution of a wide range of goods, to ameliorate deeply unfair, unjust, and indecent outcomes, rather than leaving it to brute competition among actors (individuals, institutions, and organizations), or to the fiat of incumbency as the strong incumbents would prefer.

To guide our reasoning about just and fair distribution of goods (power, wealth, authority, etc.), we have dipped into recent writings in political philosophy. We beg our readers’ forbearance as we sample from a vast disciplinary tradition for insights that will help us address the standoff we have identified between target and obfuscator in all its particularities. It might seem unnecessary to drill down to first principles when technologically advanced, liberal, and progressive democracies would already presumably have integrated such principles into their laws and regulations. This would mean that we would need only to refer to existing law and regulation for answers to political questions concerning privacy and obfuscation. It is, however, precisely because existing laws and policies have not, or not yet, adequately confronted overwhelming gaps in privacy protection that the need exists to refer to fundamental principles for better answers.

Returning to situations in which obfuscators’ resistance confounds a target’s will or interests, we ask how these considerations of justice might guide our assessment. John Rawls, in A Theory of Justice,12 demands as a basic requirement that the obfuscation practices in question not violate or erode basic rights and liberties. This requirement calls into question obfuscating systems relying on deception, system subversion, and exploitation that have the potential to violate rights of property, security, and autonomy. This principle establishes a presumption against such systems unless strong countervailing claims of equal or greater weight can clearly be demonstrated, including autonomy, fair treatment, freedom of speech, and freedom of political association—generally freedoms associated with a right to privacy. The first principle makes short work of obfuscation as used by criminals to mask their attacks and confuse their trails.

For nuanced cases in which neither adversary holds a clear ethical advantage in their competing claims, Rawls’ second principle, that of maximin, is relevant. This principle demands that a just society should favor “the alternative the worst outcome of which is superior to the worst outcomes of the others.”13 In practical terms this means that when weighing policy options, a just society should not necessarily look to equalize the standing of different individuals or groups, but where this is not possible or makes no sense should focus on the plight of those on the lower end of the socioeconomic spectrum, ensuring that whatever policy is chosen is one that maximizes outcomes for these stakeholders. A just society’s policies, in other words, should maximize the minimum.
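
To make the selection rule concrete, here is a minimal sketch in Python. The policies, stakeholder groups, and scores are invented purely for illustration; only the logic matters: rate each alternative by how its worst-off stakeholders fare, then choose the alternative whose worst outcome is best.

```python
# Illustrative maximin comparison (hypothetical policies and scores, not from the text).
# Each policy maps stakeholder groups to an outcome score; maximin picks the policy
# whose worst-off group fares best.

policies = {
    "status quo":         {"data collectors": 9,  "tracked individuals": 2},
    "permit obfuscation": {"data collectors": 6,  "tracked individuals": 5},
    "ban obfuscation":    {"data collectors": 10, "tracked individuals": 1},
}

def maximin_choice(options):
    # For each option, find the outcome of its worst-off stakeholder,
    # then select the option that maximizes that minimum.
    return max(options, key=lambda name: min(options[name].values()))

print(maximin_choice(policies))  # -> "permit obfuscation"
```

In the disputes that follow, the “outcomes” are contested qualitative judgments rather than tidy numbers; the sketch captures only the form of the comparison, not a procedure endorsed in the text.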

Returning to earlier cases, let us now consider the debate over wasted resources—not common resources, which we have already addressed, but privately owned resources, as when obfuscation purportedly wastes Facebook’s resources with misleading profiles. Here service providers and owners of resources declare that, because proprietary rights allow them to set terms of use at will and to their advantage, unauthorized actions, by definition, make unethical or wasteful use of their services or resources. Obfuscators, by contrast, claim that they are weakened, exploited, made vulnerable, and compromised, and that they are merely acting to rectify an imbalance of control, power, and advantage and to reduce risk and ambiguity. As was noted earlier, how we evaluate the competing claims affects whether we deem obfuscating activity, such as TrackMeNot’s generating of fake queries, wasteful or legitimate, prohibited or allowed. Where no obvious ethical issue is at stake, these political choices about the exercise of power and privilege are subject to the maximin principle of justice. How this plays out will depend on details of specific instances—for example, concrete differences in the properties of TrackMeNot, Vula, and Russian nationalist Twitterbots, as well as the contexts in which they operate.
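
For readers unfamiliar with the mechanism under discussion, the sketch below illustrates the general decoy-query idea behind TrackMeNot-style tools: plausible dummy searches are mixed in with a user’s real ones so that the logged query stream no longer supports a clean profile. The term list, timing, and search URL are hypothetical placeholders, and the sketch only prints the URLs it would request; it is not TrackMeNot’s actual implementation.

```python
import random
import time
import urllib.parse

# Hypothetical decoy vocabulary; a real tool draws terms from changing sources
# (e.g., news feeds) so the noise stays plausible over time.
DECOY_TERMS = [
    "weather radar", "pasta recipes", "used bicycles",
    "local election results", "battery recycling",
]

def decoy_query_url(term: str) -> str:
    # Build a search URL for a dummy query; a search engine's logs would record
    # it alongside the user's genuine queries. Placeholder endpoint.
    return "https://search.example.com/search?q=" + urllib.parse.quote(term)

def run_decoys(n: int = 3) -> None:
    # Emit a few decoys at randomized intervals so their timing does not
    # trivially distinguish them from human activity.
    for _ in range(n):
        print(decoy_query_url(random.choice(DECOY_TERMS)))
        time.sleep(random.uniform(1, 5))  # shortened here; real spacing would be minutes

if __name__ == "__main__":
    run_decoys()
```

Whether emitting such queries is “wasteful” or a legitimate act of resistance is exactly the political question the maximin analysis is meant to adjudicate.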

In relation to free riding, Rawls’ second principle forces a question about whether the data services whose terms enable them to capture surplus value from personal information are entitled to that surplus value. It allows us to see that the entitlements of profit and control that these firms have unilaterally asserted through their terms of service are, in fact, open to redistribution through the adoption of different social policies. Obfuscators aren’t free riding if the disadvantage of a particular engagement is excessive and unfair, and if the only claims they may be violating are those asserted by service providers under a regime that doesn’t fully recognize its implications for information flows newly enabled by sociotechnical systems. A similar point applies to pollution. Although there are some who presume in favor of data collectors merely on the grounds that they have collected and assembled data and hence are entitled to its integrity, we believe that no charge of pollution will stick unless societal worth can be demonstrated. If that can’t be done, an argument is needed to support the claim that any value should accrue only or mainly to the data collectors; it can’t simply be presumed. Though it is true that individuals using obfuscation to take cover may diminish the purity of a data pool, impose costs on data gatherers, or deny data gatherers the benefits of surplus generated through collection, aggregation, and analysis of data, a full picture considers the value of the data and the legitimacy of data gatherers’ claims.

When there are charges of free riding or when there are charges of pollution, private claims of data owners and counterclaims of obfuscators are viewed as conflicts of preferences or interests. In our view, seeking resolution by pointing to property rights begs the question of the extent of these rights in the fluid environment of technology and data. This issue remains open to political negotiation and adjustment. General prosperity and societal welfare should be considered, ideally in light of Rawls’ second principle.

Assignment of blame and moral responsibility may also be assessed politically. When considering liability for free riding and data pollution, we have argued that, although the obfuscator is a causal agent in both those cases, moral responsibility may nevertheless reasonably accrue to the target of obfuscation unless the target’s activities and business or data practices are beyond reproach. Considerations of justice apply as much to fair distribution of costs as they do to fair distribution of benefits.14

In the various theories of justice offered by political philosophers, including Rawls, there is a fairly uniform idea of those on the bottom end of the socioeconomic spectrum toward whom great concern is directed. In highlighting various ways in which the maximin principle is relevant to the political standing of obfuscation, we have presumed that traditional or standard views of what it means to be better off or worse off—powerful or weak, rich or poor, well or poorly educated, healthy or sick—remain relevant. To those dimensions of inequality, our theme of informational asymmetries of power and of knowledge adds two dimensions of difference between haves and have-nots, crucial to the maximin principle.15

  Informational justice and the asymmetries of power and knowledge

Circumstances surrounding the obfuscating systems we introduced in part I of this book are typically characterized by both asymmetries of power and asymmetries of knowledge. The power differential between individuals and the corporate and governmental institutions and organizations that place them under surveillance, capture information about their activities, and subsequently assemble it and mine it is clear. The judging, preying eye of unspecified, digital publics16 also may train its disciplining gaze on individuals. Although, as we demonstrated in part I, obfuscation can be and has been used by the more powerful against the less powerful, the more powerful usually have more direct ways to impose their will. Obfuscation is generally not as strong or certain as these more direct methods, and it is only rarely adopted by powerful actors—and then usually to evade the notice of other powerful actors.17 Stronger actors have less of a need to resort to obfuscation because they have better methods available if they want to hide something—among them secret classifications, censorship, trade secrets, and threats of state violence. So let us consider the less powerful members of society who may reach for obfuscation to even the odds.

To people who are not well off or politically influential and not in a position to refuse terms of engagement, to people who aren’t technically sophisticated or savvy enough to utilize strong encryption, and to people who want discounts at the supermarket, free email accounts, and cheap mobile phones, obfuscation offers some measure of resistance, obscurity, and dignity, if not a permanent reconfiguration of control or an inversion of the entrenched hierarchy. As Anatole France put it, “the law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges and steal bread.”18 For those whom circumstance and necessity oblige to give up data about themselves—those who most need the shelter of the bridge, however ad hoc and unsatisfying it may be in comparison with a proper house—obfuscation provides a means of redress.

What we have called power asymmetries map closely onto traditional vectors of power—wealth, social class, education, race, and so forth. In today’s data-driven societies, epistemic or information asymmetries are highly consequential. Obfuscation may provide cover against known, specific threats, but also may offer protection against lurking but poorly understood threats from uncertain sources (government or corporate), whose presence we sense but about which we know little. We suspect these “others” are able to capture information that we generate and emanate as we move about online, engage in transactions online and off, work, communicate, and socialize, but precisely what information they capture, where they send it, how it then is used, and the logic of its impact on us we simply do not know. This is the nature of the epistemic asymmetry in its most extreme form. Under these circumstances, obfuscation may seem like flailing about in the dark, but it offers some hope against the unknown knowers.

Obfuscating against direct exertions of power and control is resistance of a familiar kind, but the shield that obfuscation may promise against lurking, unknown adversaries calls to mind a different political threat. In his book Republicanism: A Theory of Freedom and Government, Philip Pettit prefers a definition of freedom not as actual non-interference but as non-domination—that is, security against arbitrary interference: “not just that people (or other actors, such as governments or corporations) with a power of arbitrary interference probably will not exercise it, but that the agents in question lose that power: they are deprived of the capacity to exercise it, or at least their capacity to exercise it is severely reduced.”19 Viewed from the weak side of the epistemic asymmetry, we may be aware that information about us and information emanating from our activities, online and off, is accessible to those higher up on the scale, often in the form of rationalized information assemblages—profiles that can be used to control us directly or indirectly and to decide what we can and can’t have and where we can and can’t go. As societies embrace the promise of big-data analytics, and as correlation and clustering assume a dominant role in decision making, individuals may increasingly be subjected to decisions that “work” statistically but don’t “make sense.”20 Our freedom is compromised not only when we are prevented from having or doing what we want, but also when others have the capacity to exercise this power in ways that we don’t understand and that we experience as arbitrary. Domination is precisely this, according to Pettit. Republicanism doesn’t preclude non-arbitrary subjection to suitable forms of law and government; it requires only that individuals be secure against arbitrary interference, “controlled by the arbitrium—the will or judgment—of the interferer: to the extent, in particular, that it is not forced to track the interests and ideas of those who suffer the interference.”21

Those on the wrong side of the power and knowledge asymmetries of an information society are, as we have argued, effectively members of its less well-off class—subjects of surveillance, uncertain how it affects their fates, and lacking power to set terms of engagement. Consequently, in developing policies for a society deemed just according to Rawls’ two principles,22 those on the wrong side of the asymmetries should be allowed the freedom to assert their values, interests, and preferences through obfuscation (in keeping with ethical requirements), even if this means impinging on the interests and preferences of those on the right side of knowledge and power asymmetries.

 
