The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?


by David Brin


  Disagreeable, yes. But wholly unanticipated? In how many cases did someone warn against the very unpleasantness that eventually happened? Someone who might have seemed irritating at the time, and was pushed aside? Would it serve a useful purpose to grant high prediction scores after the fact, as consolation prizes to Cassandras whose original dire warnings were ignored?

  The answer is, it couldn’t hurt. There might even come a time when prediction becomes a captivating spectator sport, as fans suspensefully follow champion seers competing for prizes and honor, staking their vaunted reputations on one of the most valued human skills, being right.

  All kidding aside, the point is that predictions registries will happen—perhaps scores of them, maintained by both august institutions and private aficionados. In the awkward beginning they may be objects of fun or ridicule. Then we’ll wonder how we lived without them. Such forums represent another tool for accountability in a world that can no longer afford vague murkiness, or leaders who blithely dismiss their mistakes with arm wavings and eloquent nonexplanations.

  As society becomes more transparent, we may learn to be more forgiving of each other’s flaws, for nobody is perfect, or on target all the time. On the other hand, when someone makes bold assertions of having special insight, it seems fair to arm people with the means to verify such claims.

  Everybody wants everything. History moves in some of these directions.

  BORIS STRUGATSKY

  Preserving “Basic” Privacy

  Some people believe we will be able to preserve privacy neither through exhortation nor through a reciprocal balance of power, but only because the mighty will not bother spying on innocuous little guys like you and me. In other words, we know about the Prince of Wales’s mistress’s underwear because his cell phone conversations were intrinsically interesting (to a few million “celebrity watchers”). Prince Charles could listen in on you or me, but why would he care? (Also, since he is so closely watched, he’d fear getting caught.)

  EFF cofounder John Gilmore has pointed out that there is a big difference between individual and mass surveillance. Acknowledging that specific encryption regimes might be broken, one at a time, by top government computers making a concerted effort, he nevertheless maintains that applying the same approach to millions will prove impossible. But there are flaws in the argument that the mighty will find it too onerous and tedious to spy on small folk, if each of us takes pains to fade back into the crowd.

  First, who wants to fade into a crowd? Not outspoken citizens in a civilization of aficionados and individualists—and certainly not John Gilmore! Anyway, technology keeps enlarging the power of big shots to stare at the teeming masses. We cannot count on sheltering our privacy in a throng of peasants, hoping predators won’t notice us amid the “practical obscurity” of other sheep. Whether through software agents, or keyword scans, or some closely held breakthrough in decryption technology, there may at any time come a sudden, many-orders-of-magnitude improvement in the power of government agencies (or oligarchs) to see. If this happened, obscurity among the masses would be a frail refuge.

  Which is a shame, since a way truly must be found to protect “bedroom privacy.” As we discussed in chapter 3, there is a realm that each of us calls deeply personal, wherein we seek either solitude or intimacy. A place to hold things we want kept private, from love letters to facts about embarrassing physical limitations (incontinence, infertility, psychiatric disorders, or the tragedy of a miscarriage). There is a long-standing legal concept called “curtilage,” which stands for the protected area of a person’s home and its immediate surroundings. In the coming era, when camera-bearing robots may swarm the skies, we will all need something like this, some zone of sanctuary where we can feel unobserved. Some corner where our hearts can remain forever just our own.

  As we’ll see in future chapters, basic personal privacy may yet thrive in a relatively transparent world, but only if we apply the right tools, combining principle and practicality with the ability of each citizen to enforce the Golden Rule.

  In this chapter, we sampled just a few items from the transparency tool kit of tomorrow. These and other innovations share one essential trait. No authorities must be assuaged—no priests, kings, or Senate committees propitiated—in order for them to come to pass. In a century of amateurs, it won’t occur to citizens to ask permission. The resources required to start a predictions registry, for instance, or percolation, are so small that some experiments will doubtless be initiated by readers of this very book.

  As for “watching the watchers,” this trend is already under way. In the neo-West, it will be stopped only if all cameras are banned. This won’t eliminate the cameras, just make them smaller, restricting their use to the mighty and influential. (For a much darker set of scenarios, describing how the camera-filled world of tomorrow could go sour, see the section “How Things Might Go Wrong” in chapter 9.)

  The important point is what these examples say about openness and candor in the world to come. Without tools for encouraging accountability, all the fancy toys and high-speed dataways will do little for our lives in the area that counts most, fostering a confident civilization of free individuals. A culture with the courage to strike out along new paths, and the wisdom to look out for errors and pitfalls along the way.

  In a transparent society, citizens must have the habit of knowing.

  They will refuse not to know.

  THE PLAUSIBILITY MATRIX

  Show me the assumptions on which you base your facts.

  ANDREW CUTLER

  In history there is no such thing as the sum of many vectors.

  LEO TOLSTOY

  The debate between transparency and strong privacy involves underlying suppositions about what will be technically possible in the twenty-first century. If certain social trends or scientific accomplishments come to pass, openness will fail to deliver real accountability. In that case, secrecy may wind up being the little guy’s only hope.

  Does that sound like a strange admission on my part?

  Perhaps it does, after so many pages spent vigorously opposing a world of masks. But this book is an exploration, not an ideological polemic. An honest person should willingly consider the possibility of being wrong. So chapter 9 will examine a number of ways in which transparency might bring truly unpleasant consequences.

  But first, let’s discuss whether transparency can happen at all.

  Recall the accountability matrix at the end of chapter 3. There we saw how our individual desires may clash with our own long-range interests. Now let’s consider a different kind of chart, shown on the following page, dealing with what may be plausible in the coming century.
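
  Briefly, the matrix turns on two questions about the coming century: will it be technically possible to watch those who hold power, and will it be technically possible to watch ordinary citizens? Box 1 answers yes to both, a fully transparent society. Box 2 holds the mighty accountable while leaving the little guy genuine privacy. Box 3 lets the mighty see downward while evading scrutiny themselves. Box 4 answers no to both, a world of masks and strong privacy.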

  Just as in the accountability matrix, it is quite possible for more than one of these boxes to be true at the same time! Two sets of technologies may exist, one that helps shine light on centers of power (such as government or potential oligarchs) and another that helps guard their secret conspiracies. If two boxes are both plausible, then a debate over policy becomes relevant. As a civilization, we could then use law, research, and social persuasion to decide which technology to emphasize, and thus sway the direction in which things go.

  What we cannot do is use policy to cram our way into a situation that is technologically unfeasible. The most eloquent argument in the world will be impotent if its aim proves impossible.

  Suppose box number 1 became totally dominant, making the other three improbable. For instance, if “gnat cameras” became utterly pervasive, cheap, and universally accessible, a world of obligate openness might be inevitable. Transparency would happen automatically, whether or not openness aficionados like me succeed in convincing a single person. Strong privacy could win every debate, and still become an ideological relic within a few decades.

  Contrariwise, we may see that powerful trends drive us toward the situation described in box 3, where the mighty can look at those below them, while thwarting vision or accountability directed their way. This is the classic predicament that occurred in almost every major human civilization to this date. If technology offers a range of powers, then the rich or those in authority will surely gain access to the very best tools. It will be only natural for them to enhance and take advantage of this difference. A cynic, influenced by history, might predict that only box 3 is a credible long-term prospect. We may be fated to be drawn into its unyielding grasp, no matter how hard we thrash and squirm.

  But suppose it is a matter of choice. If several boxes are plausible at once, then both the strong privacy advocates and I share a common goal. All of us will fight like hell to escape the world of obligate tyranny arising out of box 3!

  That still leaves a lot of space between me and the cypherpunks. For instance, I see countless threats to freedom, looming in all directions, while they tend to fixate primarily or solely on government. But an even more important difference has to do with which heading we would take, in fleeing from box 3.

  I would aim “upward” across the plausibility matrix, heading toward box 1, whereas the strong privacy advocates’ belief is that we can best preserve liberty by moving to the “right,” toward box 4.

  This latter point of view was expressed by a prominent cypherpunk, Hal Finney.

  I’d say that encryption offers for the first time a chance to put the little guy more on an even footing with the big powers of the world. There is an asymmetry between what big governments and big companies know about me and what I know about them. With encryption there is for the first time a chance that I can draw a shield of privacy around my activities. This will put us on more even ground.

  Keep this passage in mind for later, when we consider the “garden of Akademos.” I very much favor the concept of even ground, but can it be accomplished with shields?

  In effect, we are carrying out a debate on two levels. First, what is plausible, and second, what is desirable.

  Let us assume for a moment that boxes 1, 3, and 4 are all realistic. Each one might seed a culture, depending on which technologies are emphasized, encouraged, supported, and socially sanctioned. This implies, among other things, that publicly available encryption schemes might truly conceal the little guy’s secrets as advertised, even from the NSA or a Mafia clan.

  In that case, our decision about which way to go should depend on three issues:

  a. Where do we want to end up?

  b. Who has the ultimate advantage in each situation?

  c. Which situation is robust or stable?

  I won’t go deeply into question A right now. Throughout this book I have painted the world of box 4, filled with widespread and habitual secrecy, as a dour place, massively paranoid and rather inhuman. But I am no prophet or seer. I can only argue my case and let time prove it right or wrong.

  Where logic does shed some light is on questions B and C.

  Who has an ultimate advantage in each world? In a transparent society (arising from box 1), the rich and mighty do have an edge, but only in direct proportion to their legitimate wealth or exertion of lawfully supervised authority. Moreover, their boons are counterbalanced by the fact that average people will look at such individuals a lot more.

  At the opposite extreme, in a world of shadows and masks (arising from box 4), perhaps the mighty know nothing about you, and you know nothing about them. Each of you is free to conceal whatever you like, any machination or scheme. In that case, the reader may contemplate who will be better equipped to take advantage of this sovereign darkness. Obviously, I believe the mighty will use such pervasive secrecy far more effectively to advance their own unchecked designs.

  Regarding our third question—issue C—we must ask which is sturdier, a transparent society or a masked society? Which one is robust against the buffets and upsets that will inevitably occur with the passage of time?

  One strong privacy advocate accused me of basing my arguments on “pure speculation,” because I cite the possibility that a gnat camera may someday plant itself in your ceiling to stare down at your keystrokes as you type at your computer, bypassing and neutralizing all your fancy encryption software.

  “We can’t make decisions based on speculative possibilities,” my critic said.

  Well, yes we can and should—if those speculations are relatively plausible, and if they point out a potentially devastating failure mode for the opposing plan!

  In fact, the world arising out of box 4 is a frail one. Suppose that someday a new technology comes along, one that promises to shred the veils, blow through the encryption haze, or send light piercing past all the cherished opaque walls. As we shall see in chapter 9, such possibilities are quite credible. They range from new decryption algorithms, quantum computers, and gnat cameras, all the way to noninvasive mapping of another person’s cerebral activity down to the neuron level (a field where immense strides have recently been made). Who can say what tomorrow will bring? The point is that in a world where secrecy already reigns, such advances will likely be snatched up by some nexus of power accumulation: a government agency, or some cabal of the rich, or a band of merry techies in the cyberelite. The rest of us probably won’t even hear of the breakthrough. Moreover, it will be in the interest of the mighty to make sure that we never do. To our eyes, the haze will continue. We won’t even be aware that new gods have been born. A race of supermen with X-ray eyes, who can see through our beloved veils.

  Italian novelist and philosopher Umberto Eco expressed this concern eloquently.

  There is a risk that we might be heading toward an online 1984, in which Orwell’s “proles” are represented by the passive, television-fed masses that have no access to this new tool [the Net].... Above them there’ll be a petite bourgeoisie of passive users, office workers, airline clerks. And finally we’ll see the masters of the game, the nomenklatura—in the Soviet sense. This has nothing to do with class in the traditional sense—the nomenklatura are just as likely to be inner-city hackers as rich executives. But they will have one thing in common: the knowledge that brings control.

  The same cannot be said for the world emerging from box 1. If every technological advance is instantly revealed and discussed by a feisty, argumentative society, each potential misapplication or failure mode will be loudly assailed by innumerable T-cells. Use of any new technique by power centers will receive especially close scrutiny. Moreover, a conspiracy to evade such supervision will be risky, to say the least. In a society where whistleblowers are popular heroes, it will be hard to trust your henchmen. This is not to suggest that something truly staggering won’t happen, a breakthrough so unexpected and earth-shaking that it fractures even a transparent society. But when it comes to robustness against surprise and change, there can simply be no comparison with the fragile, masked world of box 4.

  But what if box number 1 is not feasible?

  In fact, the entire discussion in this book is based on a single, and possibly flawed, premise: that all of the boxes are possible. That we actually have a choice about what will happen.

  It might not be so.

  Indeed, one can imagine technical and social trends that make box 1 entirely untenable. In that case, historians of the obscure will look back on this book as a weird piece of semiutopian literature by a forgotten astronomer turned science fiction author. A tract with as little long-term relevance as the ravings of Savonarola, Lysenko, or Rand. If this turns out to be the case (and we’ll discuss some possibilities in chapter 9), there may be no choice but to take the cypherpunks’ advice and strive as hard as we can for the sanctuary of box 4.

  The haze may be blinding, and the air in your mask may grow stale. You will worry all the time about what is going on in the dark, mysterious towers of the rich. But the world of anonymous strangers and secret codes may offer a kind of safety. For a little while.

  Before we move on, the reader may be wondering about box 2! It represents a quirky, inverted situation in which those on top are held accountable, while those below have genuine privacy.

  Hey, it sounds like the best deal of all, eh?

  In fact, this is the very world that many strong privacy advocates claim they are aiming for! After all, the policies they promote seem to demand openness from the great enemy, government, while protecting the little guy’s right to keep secrets.

  The flaw in this claim bears repeating over and over again. Government is not the sole source of peril. Any potential power center can be dangerous, and none of them, not even cypherpunks, can claim exemption just by saying, “Oh, don’t worry about me, because I’m harmless!”

  I can see the validity of box 2, in principle. Certainly, a sliding scale of scrutiny may apply, from relatively little needed for some middle-class schoolteacher all the way up to a relentless light that must shine on the dealings of billionaires and attorneys general. In fact, this commonsense approach is already the de facto situation in America and several other countries when it comes to privacy tort protections. Public figures are deemed to enjoy less shelter against observation by news media, for instance, than average citizens. Moreover, a general sense of fairness and decency makes people tend to stare less at quiet neighbors than at the famous, or those who deliberately seek wealth and power.

  Perhaps we who stand at the extremes, both strong privacy advocates and believers in transparency, underestimate how smart folks really are. Over the long haul, people may work their way toward a clear-headed mixture of our purist positions, finding pragmatic ways to aim heaps of light toward governments and corporations and criminals and the techno-elite, while at the same time securing an enviable curtilage of privacy for average citizens and their families.

 
