
You Are Not a Gadget: A Manifesto


by Jaron Lanier


  This suggests a hypothesis to join the ranks of ideas about how the circumstances of our evolution influenced our nature. We, the big-brained species, probably didn’t get that way to fill a single, highly specific niche. Instead, we must have evolved with the ability to switch between different niches. We evolved to be both loners and pack members. We are optimized not so much to be one or the other, but to be able to switch between them.

  New patterns of social connection that are unique to online culture have played a role in the spread of modern networked terrorism. If you look at an online chat about anything, from guitars to poodles to aerobics, you’ll see a consistent pattern: jihadi chat looks just like poodle chat. A pack emerges, and either you are with it or against it. If you join the pack, then you join the collective ritual hatred.

  If we are to continue to focus the powers of digital technology on the project of making human affairs less personal and more collective, then we ought to consider how that project might interact with human nature.

  The genetic aspects of behavior that have received the most attention (under rubrics like sociobiology or evolutionary psychology) have tended to focus on things like gender differences and mating behaviors, but my guess is that clan orientation and its relationship to violence will turn out to be the most important area of study.

  Design Underlies Ethics in the Digital World

  People are not universally nasty online. Behavior varies considerably from site to site. There are reasonable theories about what brings out the best or worst online behaviors: demographics, economics, child-rearing trends, perhaps even the average time of day of usage could play a role. My opinion, however, is that certain details in the design of the user interface experience of a website are the most important factors.

  People who can spontaneously invent a pseudonym in order to post a comment on a blog or on YouTube are often remarkably mean. Buyers and sellers on eBay are a little more civil, despite occasional disappointments, such as encounters with flakiness and fraud. Based on those data, you could conclude that it isn’t exactly anonymity, but transient anonymity, coupled with a lack of consequences, that brings out online idiocy.

  With more data, that hypothesis can be refined. Participants in Second Life (a virtual online world) are generally not quite as mean to one another as are people posting comments to Slashdot (a popular technology news site) or engaging in edit wars on Wikipedia, even though all allow pseudonyms. The difference might be that on Second Life the pseudonymous personality itself is highly valuable and requires a lot of work to create.

  So a better portrait of the troll-evoking design is effortless, consequence-free, transient anonymity in the service of a goal, such as promoting a point of view, that stands entirely apart from one’s identity or personality. Call it drive-by anonymity.

  Computers have an unfortunate tendency to present us with binary choices at every level, not just at the lowest one, where the bits are switching. It is easy to be anonymous or fully revealed, but hard to be revealed just enough. Still, that does happen, to varying degrees. Sites like eBay and Second Life give hints about how design can promote a middle path.
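  To make the design point concrete, here is a minimal sketch, in Python, of what a "middle path" identity design might look like. It is not drawn from any actual site; the names and the reputation threshold are invented for illustration. The idea is simply that a pseudonym that accrues value over time, as on eBay or Second Life, raises the cost of drive-by behavior.

    # Illustrative sketch only: a pseudonym that is cheap to discard
    # invites drive-by behavior; one that accrues value does not.
    from dataclasses import dataclass, field

    @dataclass
    class Pseudonym:
        handle: str
        reputation: int = 0          # earned slowly, e.g., via feedback
        history: list = field(default_factory=list)

        def post(self, comment: str) -> None:
            # The post is tied to a persistent identity, so consequences
            # (good or bad) attach to the pseudonym instead of vanishing.
            self.history.append(comment)

    def may_post(identity: Pseudonym, min_reputation: int = 5) -> bool:
        # A hypothetical design knob: new, throwaway pseudonyms can read
        # but must earn standing before posting, which removes the
        # "effortless, consequence-free, transient" part of anonymity.
        return identity.reputation >= min_reputation

  The design choice being illustrated is that consequences attach to a persistent identity rather than evaporating with a throwaway handle; the person behind the pseudonym can remain unknown.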

  Anonymity certainly has a place, but that place needs to be designed carefully. Voting and peer review are preinternet examples of beneficial anonymity. Sometimes it is desirable for people to be free of fear of reprisal or stigma in order to elicit honest opinions. To have a substantial exchange, however, you need to be fully present. That is why facing one’s accuser is a fundamental right of the accused.

  Could Drive-by Anonymity Scale Up the Way Communism and Fascism Did?

  For the most part, the net has delivered happy surprises about human potential. As I pointed out earlier, the rise of the web in the early 1990s took place without leaders, ideology, advertising, commerce, or anything other than a positive sensibility shared by millions of people. Who would have thought that was possible? Ever since, there has been a constant barrage of utopian extrapolations from positive online events. Whenever a blogger humiliates a corporation by posting documentation of an infelicitous service representative, we can expect triumphant hollers about the end of the era of corporate abuses.

  It stands to reason, however, that the net can also accentuate negative patterns of behavior or even bring about unforeseen social pathology. Over the last century, new media technologies have often become prominent as components of massive outbreaks of organized violence.

  For example, the Nazi regime was a major pioneer of radio and cinematic propaganda. The Soviets were also obsessed with propaganda technologies. Stalin even nurtured a “Manhattan Project” to develop a 3-D theater with incredible, massive optical elements that would deliver perfected propaganda. It would have been virtual reality’s evil twin if it had been completed. Many people in the Muslim world have only gained access to satellite TV and the internet in the last decade. These media certainly have contributed to the current wave of violent radicalism. In all these cases, there was an intent to propagandize, but intent isn’t everything.

  It’s not crazy to worry that, with millions of people connected through a medium that sometimes brings out their worst tendencies, massive, fascist-style mobs could rise up suddenly. I worry about the next generation of young people around the world growing up with internet-based technology that emphasizes crowd aggregation, as is the current fad. Will they be more likely to succumb to pack dynamics when they come of age?

  What’s to prevent the acrimony from scaling up? Unfortunately, history tells us that collectivist ideals can mushroom into large-scale social disasters. The fascist and communist movements of the past started out with small numbers of idealistic revolutionaries.

  I am afraid we might be setting ourselves up for a reprise. The recipe that led to social catastrophe in the past was economic humiliation combined with collectivist ideology. We already have the ideology in its new digital packaging, and it’s entirely possible we could face dangerously traumatic economic shocks in the coming decades.

  An Ideology of Violation

  The internet has come to be saturated with an ideology of violation. For instance, when some of the more charismatic figures in the online world, including Jimmy Wales, one of the founders of Wikipedia, and Tim O’Reilly, the coiner of the term “web 2.0,” proposed a voluntary code of conduct in the wake of the bullying of Kathy Sierra, there was a widespread outcry, and the proposals went nowhere.

  The ideology of violation does not radiate from the lowest depths of trolldom, but from the highest heights of academia. There are respectable academic conferences devoted to methods of violating sanctities of all kinds. The only criterion is that researchers come up with some way of using digital technology to harm innocent people who thought they were safe.

  In 2008, researchers from the University of Massachusetts at Amherst and the University of Washington presented papers at two of these conferences (called Defcon and Black Hat), disclosing a bizarre form of attack that had apparently not been expressed in public before, even in works of fiction. They had spent two years of team effort figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person. (While they withheld some of the details in their public presentation, they certainly described enough to assure protégés that success was possible.)

  The reason I call this an expression of ideology is that there is a strenuously constructed lattice of arguments that decorate this murderous behavior so that it looks grand and new. If the same researchers had done something similar without digital technology, they would at the very least have lost their jobs. Suppose they had spent a couple of years and significant funds figuring out how to rig a washing machine to poison clothing in order to (hypothetically) kill a child once dressed. Or what if they had devoted a lab in an elite university to finding a new way to imperceptibly tamper with skis to cause fatal accidents on the slopes? These are certainly doable projects, but because they are not digital, they don’t support an illusion of ethics.

  A summary of the ideology goes like this: All those nontechnical, ignorant, innocent people out there are going about their lives thinking that they are safe, when in actuality they are terribly vulnerable to those who are smarter than they are. Therefore, we smartest technical people ought to invent ways to attack the innocents, and publicize our results, so that everyone is alerted to the dangers of our superior powers. After all, a clever evil person might come along.

  There are some cases in which the ideology of violation does lead to practical, positive outcomes. For instance, any bright young technical person has the potential to discover a new way to infect a personal computer with a virus. When that happens, there are several possible next steps. The least ethical would be for the “hacker” to infect computers. The most ethical would be for the hacker to quietly let the companies that support the computers know, so that users can download fixes. An intermediate option would be to publicize the “exploit” for glory. A fix can usually be distributed before the exploit does harm.

  But the example of the pacemakers is entirely different. The rules of the cloud apply poorly to reality. It took two top academic labs two years of focused effort to demonstrate the exploit, and that was only possible because a third lab at a medical school was able to procure pacemakers and information about them that would normally be very hard to come by. Would high school students or terrorists, or any other imaginable party, have been able to assemble the resources necessary to figure out whether it was possible to kill people in this new way?

  The fix in this case would require many surgeries—more than one for each person who wears a pacemaker. New designs of pacemakers will only inspire new exploits. There will always be a new exploit, because there is no such thing as perfect security. Will each heart patient have to schedule heart surgeries on an annual basis in order to keep ahead of academic do-gooders, just in order to stay alive? How much would it cost? How many would die from the side effects of surgery? Given the endless opportunity for harm, no one will be able to act on the information the researchers have graciously provided, so everyone with a pacemaker will forever be at greater risk than they otherwise would have been. No improvement has taken place, only harm.

  Those who disagree with the ideology of violation are said to subscribe to a fallacious idea known as “security through obscurity.” Smart people aren’t supposed to accept this strategy for security, because the internet is supposed to have made obscurity obsolete.

  Therefore, another group of elite researchers spent years figuring out how to pick one of the toughest-to-pick door locks, and posted the results on the internet. This was a lock that thieves had not learned to pick on their own. The researchers compared their triumph to Turing’s cracking of Enigma. The method used to defeat the lock would have remained obscure were it not for the ideology that has entranced much of the academic world, especially computer science departments.

  Surely obscurity is the only fundamental form of security that exists, and the internet by itself doesn’t make it obsolete. One way to deprogram academics who buy into the pervasive ideology of violation is to point out that security through obscurity has another name in the world of biology: biodiversity.

  The reason some people are immune to a virus like HIV is that their particular bodies are obscure to the virus. The reason that computer viruses infect PCs more than Macs is not that a Mac is any better engineered, but that it is relatively obscure. PCs are more commonplace. This means that there is more return on the effort to crack PCs.

  There is no such thing as an unbreakable lock. In fact, the vast majority of security systems are not too hard to break. But there is always effort required to figure out how to break them. In the case of pacemakers, it took two years at two labs, which must have entailed a significant expense.
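  The point about return on effort can be reduced to back-of-the-envelope arithmetic. The sketch below is illustrative only; every number in it is invented, and the formula is merely the simplest way of stating the economics.

    # Hypothetical numbers, for illustration only: an attacker's expected
    # return scales with the installed base, while the effort needed to
    # break a given platform is roughly fixed.
    def attack_roi(installed_base: int, value_per_machine: float,
                   effort_cost: float) -> float:
        """Expected return per unit of cracking effort."""
        return (installed_base * value_per_machine) / effort_cost

    pcs  = attack_roi(installed_base=1_000_000_000,
                      value_per_machine=0.01, effort_cost=100_000)
    macs = attack_roi(installed_base=50_000_000,
                      value_per_machine=0.01, effort_cost=100_000)
    print(pcs, macs)  # 100.0 vs 5.0: the less "obscure" platform pays far better

  Under these made-up figures the commonplace platform repays cracking effort twenty times over relative to the obscure one, which is all that "security through obscurity has another name: biodiversity" requires.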

  Another predictable element of the ideology of violation is that anyone who complains about the rituals of the elite violators will be accused of spreading FUD—fear, uncertainty, and doubt. But actually it’s the ideologues who seek publicity. The whole point of publicizing exploits like the attack on pacemakers is the glory. If that notoriety isn’t based on spreading FUD, what is?

  The MIDI of Anonymity

  Just as the idea of a musical note was formalized and rigidified by MIDI, the idea of drive-by, trollish, pack-switch anonymity is being plucked from the platonic realm and made into immovable eternal architecture by software. Fortunately, the process isn’t complete yet, so there is still time to promote alternative designs that resonate with human kindness. When people don’t become aware of, or fail to take responsibility for, their role, accidents of time and place can determine the outcomes of the standards wars between digital ideologies. Whenever we notice an instance when history was swayed by accident, we also notice the latitude we have to shape the future.

  Hive mind ideology wasn’t running the show during earlier eras of the internet’s development. The ideology became dominant after certain patterns were set, because it sat comfortably with those patterns. The origins of today’s outbreaks of nasty online behavior go back quite a way, to the history of the counterculture in America, and in particular to the war on drugs.

  Before the World Wide Web, there were other types of online connections, of which Usenet was probably the most influential. Usenet was an online directory of topics where anyone could post comments, drive-by style. One portion of Usenet, called “alt,” was reserved for nonacademic topics, including those that were oddball, pornographic, illegal, or offensive. A lot of the alt material was wonderful, such as information about obscure musical instruments, while some of it was sickening, such as tutorials on cannibalism.

  To get online in those days you usually had to have an academic, corporate, or military connection, so the Usenet population was mostly adult and educated. That didn’t help. Some users still turned into mean idiots online. This is one piece of evidence that it’s the design, not the demographic, that concentrates bad behavior. Since there were so few people online, though, bad “netiquette” was then more of a curiosity than a problem.

  Why did Usenet support drive-by anonymity? You could argue that it was the easiest design to implement at the time, but I’m not sure that’s true. All those academic, corporate, and military users belonged to large, well-structured organizations, so the hooks were immediately available to create a nonanonymous design. If that had happened, today’s websites might not have inherited the drive-by design aesthetic.

  So if it wasn’t laziness that promoted online anonymity, what was it?

  Facebook Is Similar to No Child Left Behind

  Personal reductionism has always been present in information systems. You have to declare your status in reductive ways when you file a tax return. Your real life is represented by a silly, phony set of database entries in order for you to make use of a service in an approximate way. Most people are aware of the difference between reality and database entries when they file taxes.

  But the order is reversed when you perform the same kind of self-reduction in order to create a profile on a social networking site. You fill in the data: profession, marital status, and residence. But in this case digital reduction becomes a causal element, mediating contact between new friends. That is new. It used to be that government was famous for being impersonal, but in a postpersonal world, that will no longer be a distinction.

  It might at first seem that the experience of youth is now sharply divided between the old world of school and parents, and the new world of social networking on the internet, but actually school now belongs on the new side of the ledger. Education has gone through a parallel transformation, and for similar reasons.

  Information systems need to have information in order to run, but information underrepresents reality. Demand more from information than it can give, and you end up with monstrous designs. Under the No Child Left Behind Act of 2002, for example, U.S. teachers are forced to choose between teaching general knowledge and “teaching to the test.” The best teachers are thus often disenfranchised by the improper use of educational information systems.

  What computerized analysis of all the country’s school tests has done to education is exactly what Facebook has done to friendships. In both cases, life is turned into a database. Both degradations are based on the same philosophical mistake, which is the belief that computers can presently represent human thought or human relationships. These are things computers cannot currently do.

  Whether one expects computers to improve in the future is a different issue. In a less idealistic atmosphere it would go without saying that software should only be designed to perform tasks that can be successfully performed at a given time. That is not the atmosphere in which internet software is designed, however.

  If we build a computer model of an automobile engine, we know how to test whether it’s any good. It turns out to be easy to build bad models! But it is possible to build good ones. We must model the materials, the fluid dynamics, the electrical subsystem. In each case, we have extremely solid physics to rely on, but we have lots of room for making mistakes in the logic or conception of how the pieces fit together. It is inevitably a long, unpredictable grind to debug a serious simulation of any complicated system. I’ve worked on varied simulations of such things as surgical procedures, and it is a humbling process. A good surgical simulation can take years to refine.
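  The contrast can be made concrete with a toy example. The "physics" and tolerances below are invented, and a real engine model couples combustion, fluid dynamics, and electrical subsystems; the point is only that a physical model admits a ground-truth test in a way a model of friendship does not.

    # Illustrative sketch: validating a physical model is possible because
    # there is measured ground truth to compare against. No analogous
    # test exists for a "model" of friendship or learning.
    def simulated_torque(rpm: float) -> float:
        # Toy stand-in for a real engine model.
        return 200.0 - 0.00002 * (rpm - 3000.0) ** 2

    measured = {1000: 120.0, 3000: 200.0, 5000: 118.0}  # invented bench data

    def model_is_good(tolerance: float = 5.0) -> bool:
        # The debugging "grind": compare prediction to measurement and
        # iterate until every point falls within tolerance.
        return all(abs(simulated_torque(rpm) - torque) <= tolerance
                   for rpm, torque in measured.items())

    print(model_is_good())  # True once the model has been debugged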

  When it comes to people, we technologists must use a completely different methodology. We don’t understand the brain well enough to comprehend phenomena like education or friendship on a scientific basis. So when we deploy a computer model of something like learning or friendship in a way that has an effect on real lives, we are relying on faith. When we ask people to live their lives through our models, we are potentially reducing life itself. How can we ever know what we might be losing?

 
