You are not a Gadget: A Manifesto


by Jaron Lanier


  It is also important to notice the similarity between the lords and peasants of the cloud. A hedge fund manager might make money by using the computational power of the cloud to calculate fantastical financial instruments that make bets on derivatives in such a way as to invent out of thin air the phony virtual collateral for stupendous risks. This is a subtle form of counterfeiting, and is precisely the same maneuver a socially competitive teenager makes in accumulating fantastical numbers of “friends” on a service like Facebook.

  Ritually Faked Relationships Beckon to Messiahs Who May Never Arrive

  But let’s suppose you disagree that the idea of friendship is being reduced, and are confident that we can keep straight the two uses of the word, the old use and the new use. Even then one must remember that the customers of social networks are not the members of those networks.

  The real customer is the advertiser of the future, but this creature has yet to appear in any significant way as this is being written. The whole artifice, the whole idea of fake friendship, is just bait laid by the lords of the clouds to lure hypothetical advertisers—we might call them messianic advertisers—who could someday show up.

  The hope of a thousand Silicon Valley start-ups is that firms like Facebook are capturing extremely valuable information called the “social graph.” Using this information, an advertiser might hypothetically be able to target all the members of a peer group just as they are forming their opinions about brands, habits, and so on.

  Peer pressure is the great power behind adolescent behavior, goes the reasoning, and adolescent choices become life choices. So if someone could crack the mystery of how to make perfect ads using the social graph, an advertiser would be able to design peer pressure biases in a population of real people who would then be primed to buy whatever the advertiser is selling for their whole lives.

  The situation with social networks is layered with multiple absurdities. The advertising idea hasn’t made any money so far, because ad dollars appear to be better spent on searches and in web pages. If the revenue never appears, then a weird imposition of a database-as-reality ideology will have colored generations of teen peer group and romantic experiences for no business or other purpose.

  If, on the other hand, the revenue does appear, evidence suggests that its impact will be truly negative. When Facebook has attempted to turn the social graph into a profit center in the past, it has created ethical disasters.

  A famous example was 2007’s Beacon. This was a suddenly imposed feature that was hard to opt out of. When a Facebook user made a purchase anywhere on the internet, the event was broadcast to all the so-called friends in that person’s network. The motivation was to find a way to package peer pressure as a service that could be sold to advertisers. But it meant that, for example, there was no longer a way to buy a surprise birthday present. The commercial lives of Facebook users were no longer their own.

  The idea was instantly disastrous, and inspired a revolt. The MoveOn network, for instance, which is usually involved in electoral politics, activated its huge membership to complain loudly. Facebook made a quick retreat.

  The Beacon episode cheered me, and strengthened my sense that people are still able to steer the evolution of the net. It was one good piece of evidence against metahuman technological determinism. The net doesn’t design itself. We design it.

  But even after the Beacon debacle, the rush to pour money into social networking sites continued without letup. The only hope for social networking sites from a business point of view is for a magic formula to appear in which some method of violating privacy and dignity becomes acceptable. The Beacon episode proved that this cannot happen too quickly, so the question now is whether the empire of Facebook users can be lulled into accepting it gradually.

  The Truth About Crowds

  The term “wisdom of crowds” is the title of a book by James Surowiecki and is often introduced with the story of an ox in a marketplace. In the story, a bunch of people all guess the animal’s weight, and the average of the guesses turns out to be generally more reliable than any one person’s estimate.

  A common idea about why this works is that the mistakes various people make cancel one another out; an additional, more important idea is that there’s at least a little bit of correctness in the logic and assumptions underlying many of the guesses, so they center around the right answer. (This latter formulation emphasizes that individual intelligence is still at the core of the collective phenomenon.) At any rate, the effect is repeatable and is widely held to be one of the foundations of both market economies and democracies.
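The error-cancellation idea can be made concrete with a small simulation, a minimal sketch in Python (the ox's weight and the spread of guesses are invented for illustration): if each guesser is independently a little bit right, individual errors mostly cancel in the average.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # the ox's actual weight, a hypothetical figure

# Each guess scatters around the true value with an independent error,
# so the errors tend to cancel when the guesses are averaged.
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(500)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_individual_error = statistics.mean(
    abs(g - TRUE_WEIGHT) for g in guesses
)

print(f"crowd error: {crowd_error:.1f} lbs")
print(f"typical individual error: {typical_individual_error:.1f} lbs")
```

The crowd's error comes out far smaller than the typical individual's, which is the repeatable effect the passage describes; note that the simulation builds in the "little bit of correctness" assumption by centering every guess on the true value.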

  People have tried to use computing clouds to tap into this collective wisdom effect with fanatic fervor in recent years. There are, for instance, well-funded—and prematurely well-trusted—schemes to apply stock market-like systems to programs in which people bet on the viability of answers to seemingly unanswerable questions, such as when terrorist events will occur or when stem cell therapy will allow a person to grow new teeth. There is also an enormous amount of energy being put into aggregating the judgments of internet users to create “content,” as in the collectively generated link website Digg.

  How to Use a Crowd Well

  The reason the collective can be valuable is precisely that its peaks of intelligence and stupidity are not the same as the ones usually displayed by individuals.

  What makes a market work, for instance, is the marriage of collective and individual intelligence. A marketplace can’t exist only on the basis of having prices determined by competition. It also needs entrepreneurs to come up with the products that are competing in the first place.

  In other words, clever individuals, the heroes of the marketplace, ask the questions that are answered by collective behavior. They bring the ox to the market.

  There are certain types of answers that ought not be provided by an individual. When a government bureaucrat sets a price, for instance, the result is often inferior to the answer that would come from a reasonably informed collective that is reasonably free of manipulation or runaway internal resonances. But when a collective designs a product, you get design by committee, which is a derogatory expression for a reason.

  Collectives can be just as stupid as any individual—and, in important cases, stupider. The interesting question is whether it’s possible to map out where the one is smarter than the many.

  There is a substantial history to this topic, and varied disciplines have accumulated instructive results. Every authentic example of collective intelligence that I am aware of also shows how that collective was guided or inspired by well-meaning individuals. These people focused the collective and in some cases also corrected for some of the common hive mind failure modes. The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing success stories.

  The preinternet world provides some great examples of how individual human-driven quality control can improve collective intelligence. For example, an independent press provides tasty news about politicians by journalists with strong voices and reputations, like the Watergate reporting of Bob Woodward and Carl Bernstein. Without an independent press, composed of heroic voices, the collective becomes stupid and unreliable, as has been demonstrated in many historical instances—most recently, as many have suggested, during the administration of George W. Bush.

  Scientific communities likewise achieve quality through a cooperative process that includes checks and balances, and ultimately rests on a foundation of goodwill and “blind” elitism (blind in the sense that ideally anyone can gain entry, but only on the basis of a meritocracy). The tenure system and many other aspects of the academy are designed to support the idea that individual scholars matter, not just the process or the collective.

  Yes, there have been plenty of scandals in government, the academy, and the press. No mechanism is perfect. But still here we are, having benefited from all of these institutions. There certainly have been plenty of bad reporters, self-deluded academic scientists, incompetent bureaucrats, and so on. Can the hive mind help keep them in check? The answer provided by experiments in the preinternet world is yes—but only if some signal processing has been placed in the loop.

  Signal processing is a bag of tricks engineers use to tweak flows of information. A common example is the way you can set the treble and bass on an audio signal. If you turn down the treble, you are reducing the amount of energy going into higher frequencies, which are composed of tighter, smaller sound waves. Similarly, if you turn up the bass, you are heightening the biggest, broadest waves of sound.
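The treble control the passage describes is, in engineering terms, a low-pass filter. A minimal sketch in Python (the smoothing constant and test signal are invented for illustration) shows how damping rapid changes lets a slow trend through:

```python
import math
import random

def low_pass(signal, alpha=0.2):
    """Exponential smoothing: 'turning down the treble' by damping
    rapid step-to-step changes while letting slow trends through."""
    out = []
    prev = signal[0]
    for x in signal:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

random.seed(0)

# A slow trend with jittery high-frequency noise layered on top.
raw = [math.sin(t / 20) + random.uniform(-0.5, 0.5) for t in range(200)]
smooth = low_pass(raw)

# Total step-to-step variation: the filtered signal jitters far less.
raw_jitter = sum(abs(b - a) for a, b in zip(raw, raw[1:]))
smooth_jitter = sum(abs(b - a) for a, b in zip(smooth, smooth[1:]))
print(f"raw jitter: {raw_jitter:.1f}, smoothed jitter: {smooth_jitter:.1f}")
```

The filtered output preserves the underlying sine wave while suppressing the noise, which is the same trade a regulating mechanism makes when it slows a collective down.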

  Some of the regulating mechanisms for collectives that have been most successful in the preinternet world can be understood as being like treble and bass controls. For instance, what if a collective moves too readily and quickly, jittering instead of settling down to provide a stable answer? This happens on the most active Wikipedia entries, for example, and has also been seen in some speculation frenzies in open markets.

  One service performed by representative democracy is low-pass filtering, which is like turning up the bass and turning down the treble. Imagine the jittery shifts that would take place if a wiki were put in charge of writing laws. It’s a terrifying thing to consider. Superenergized people would be struggling to shift the wording of the tax code on a frantic, never-ending basis. The internet would be swamped.

  Such chaos can be avoided in the same way it already is, albeit imperfectly: by the slower processes of elections and court proceedings. These are like bass waves. The calming effect of orderly democracy achieves more than just the smoothing out of peripatetic struggles for consensus. It also reduces the potential for the collective to suddenly jump into an overexcited state when too many rapid changes coincide in such a way that they don’t cancel one another out.

  For instance, stock markets might adopt automatic trading shutoffs, which are triggered by overly abrupt shifts in price or trading volume. (In Chapter 6 I will tell how Silicon Valley ideologues recently played a role in convincing Wall Street that it could do without some of these checks on the crowd, with disastrous consequences.)

  Wikipedia had to slap a crude low-pass filter on the jitteriest entries, such as “President George W. Bush.” There’s now a limit to how often a particular person can remove someone else’s text fragments. I suspect that these kinds of adjustments will eventually evolve into an approximate mirror of democracy as it was before the internet arrived.
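The throttle on removals described above is, in software terms, a per-user rate limit. A minimal sketch in Python (the cap of three edits per hour is an invented figure, not Wikipedia's actual policy) illustrates the mechanism:

```python
import time
from collections import deque

class EditRateLimiter:
    """Allow each user at most `max_edits` removals per `window` seconds."""

    def __init__(self, max_edits=3, window=3600.0):
        self.max_edits = max_edits
        self.window = window
        self.history = {}  # user -> deque of edit timestamps

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        times = self.history.setdefault(user, deque())
        # Drop timestamps that have aged out of the sliding window.
        while times and now - times[0] > self.window:
            times.popleft()
        if len(times) >= self.max_edits:
            return False  # too many recent edits: refuse this one
        times.append(now)
        return True

limiter = EditRateLimiter(max_edits=3, window=3600.0)
results = [limiter.allow("editor_a", now=t) for t in (0, 10, 20, 30)]
print(results)  # the fourth edit inside the hour is refused
```

Like the low-pass filters discussed earlier, the limiter does not judge the content of any edit; it only slows the rate of change, which is exactly the crude quality Lanier ascribes to it.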

  The reverse problem can also appear. The hive mind can be on the right track, but moving too slowly. Sometimes collectives can yield brilliant results given enough time—but sometimes there isn’t enough time. A problem like global warming might automatically be addressed eventually if the market had enough time to respond to it. (Insurance rates, for instance, would climb.) Alas, in this case there doesn’t appear to be enough time, because the market conversation is slowed down by the legacy effect of existing investments. Therefore some other process has to intervene, such as politics invoked by individuals.

  Another example of the slow hive problem: there was a lot of technology developed—but very slowly—in the millennia before there was a clear idea of how to be empirical, before we knew how to have a peer-reviewed technical literature and an education based on it, and before there was an efficient market to determine the value of inventions.

  What is crucial about modernity is that structure and constraints were part of what sped up the process of technological development, not just pure openness and concessions to the collective. This is an idea that will be examined in Chapter 10.

  An Odd Lack of Curiosity

  The “wisdom of crowds” effect should be thought of as a tool. The value of a tool is its usefulness in accomplishing a task. The point should never be the glorification of the tool. Unfortunately, simplistic free market ideologues and noospherians tend to reinforce one another’s unjustified sentimentalities about their chosen tools.

  Since the internet makes crowds more accessible, it would be beneficial to have a wide-ranging, clear set of rules explaining when the wisdom of crowds is likely to produce meaningful results. Surowiecki proposes four principles in his book, framed from the perspective of the interior dynamics of the crowd. He suggests there should be limits on the ability of members of the crowd to see how others are about to decide on a question, in order to preserve independence and avoid mob behavior. Among other safeguards, I would add that a crowd should never be allowed to frame its own questions, and its answers should never be more complicated than a single number or multiple choice answer.

  More recently, Nassim Nicholas Taleb has argued that applications of statistics, such as crowd wisdom schemes, should be divided into four quadrants. He defines the dangerous “Fourth Quadrant” as comprising problems that have both complex outcomes and unknown distributions of outcomes. He suggests making that quadrant taboo for crowds.

  Maybe if you combined all our approaches you’d get a practical set of rules for avoiding crowd failures. Then again, maybe we are all on the wrong track. The problem is that there’s been inadequate focus on the testing of such ideas.

  There’s an odd lack of curiosity about the limits of crowd wisdom. This is an indication of the faith-based motivations behind such schemes. Numerous projects have looked at how to improve specific markets and other crowd wisdom systems, but too few projects have framed the question in more general terms or tested general hypotheses about how crowd systems work.

  Trolls

  “Troll” is a term for an anonymous person who is abusive in an online environment. It would be nice to believe that there is only a minute troll population living among us. But in fact, a great many people have experienced being drawn into nasty exchanges online. Everyone who has experienced that has been introduced to his or her inner troll.

  I have tried to learn to be aware of the troll within myself. I notice that I can suddenly become relieved when someone else in an online exchange is getting pounded or humiliated, because that means I’m safe for the moment. If someone else’s video is being ridiculed on YouTube, then mine is temporarily protected. But that also means I’m complicit in a mob dynamic. Have I ever planted a seed of mob-beckoning ridicule in order to guide the mob to a target other than myself? Yes, I have, though I shouldn’t have. I observe others doing that very thing routinely in anonymous online meeting places.

  I’ve also found that I can be drawn into ridiculous pissing matches online in ways that just wouldn’t happen otherwise, and I’ve never noticed any benefit. There is never a lesson learned, or a catharsis of victory or defeat. If you win anonymously, no one knows, and if you lose, you just change your pseudonym and start over, without having modified your point of view one bit.

  If the troll is anonymous and the target is known, then the dynamic is even worse than an encounter between anonymous fragmentary pseudo-people. That’s when the hive turns against personhood. For instance, in 2007 a series of “Scarlet Letter” postings in China incited online throngs to hunt down accused adulterers. In 2008, the focus shifted to Tibet sympathizers. Korea has one of the most intense online cultures in the world, so it has also suffered some of the most extreme trolling. Korean movie star Choi Jin-sil, sometimes described as the “Nation’s Actress,” committed suicide in 2008 after being hounded online by trolls, but she was only the most famous of a series of similar suicides.

  In the United States, anonymous internet users have ganged up on targets like Lori Drew, the woman who created a fake boy persona on the internet in order to break the heart of a classmate of her daughter’s, which caused the girl to commit suicide.

  But more often the targets are chosen randomly, following the pattern described in the short story “The Lottery” by Shirley Jackson. In the story, residents of a placid small town draw lots to decide which individual will be stoned to death each year. It is as if a measure of human cruelty must be released, and to do so in a contained yet random way limits the damage by using the fairest possible method.

  Some of the better-known random victims of troll mobs include the blogger Kathy Sierra. She was suddenly targeted in a multitude of ways, such as having images of her as a sexually mutilated corpse posted prominently, apparently in the hopes that her children would see them. There was no discernible reason Sierra was targeted. Her number was somehow drawn from the lot.

  Another famous example is the tormenting of the parents of Mitchell Henderson, a boy who committed suicide. They were subjected to gruesome audio-video creations and other tools at the disposal of virtual sadists. Another occurrence is the targeting of epileptic people with flashing web designs in the hope of inducing seizures.

  There is a vast online flood of videos of humiliating assaults on helpless victims. The culture of sadism online has its own vocabulary and has gone mainstream. The common term “lulz,” for instance, refers to the gratification of watching others suffer over the cloud.*

  When I criticize this type of online culture, I am often accused of being either an old fart or an advocate of censorship. Neither is the case. I don’t think I’m necessarily any better, or more moral, than the people who tend the lulzy websites. What I’m saying, though, is that the user interface designs that arise from the ideology of the computing cloud make people—all of us—less kind. Trolling is not a string of isolated incidents, but the status quo in the online world.

  The Standard Sequence of Troll Invocation

  There are recognizable stages in the degradation of anonymous, fragmentary communication. If no pack has emerged, then individuals start to fight. This is what happens all the time in online settings. A later stage appears once a pecking order is established. Then the members of the pack become sweet and supportive of one another, even as they goad one another into ever more intense hatred of nonmembers.

 
