Smart Mobs

by Howard Rheingold


  The naturalist Thomas H. Huxley championed Darwinian theory in Kropotkin’s day, especially in his 1888 essay “The Struggle for Existence,” which promoted competition as the most important driver of human evolution.20 Kropotkin asserted that Huxley’s interpretation of Darwinian theory was inaccurate and misconstrued Darwin’s ideas. The publication of Huxley’s essay was the impetus for Kropotkin to begin writing Mutual Aid: A Factor of Evolution as a reply to Huxley, and the series of articles that eventually made up Kropotkin’s most famous book were originally published in the same journal, The Nineteenth Century.21

  Cooperation, Kropotkin claimed, has been observed extensively in the animal kingdom. Horses and deer unite to protect each other from their foes, wolves and lions gather to hunt, and bees and ants work together in many different ways. Since Kropotkin’s day, corroboration for some of his ideas has surfaced, and interest in his biological work, long eclipsed by his anarchist writing, was revitalized when biologist Stephen Jay Gould concluded that Kropotkin had been onto something.22 Symbiosis and cooperation have indeed been observed at every level from cell to ecosystem.

  Kropotkin also contended that humans are predisposed to help one another without authoritarian coercion. A centralized government, he insisted, is not needed to set an example or to make people do the right thing. People were doing so before the rise of the state. In fact, Kropotkin maintained that it is government that represses our natural tendency for cooperation. His belief in the principle of grassroots power was strong enough to land him in the czar’s prison.

  Kropotkin wrote of the temporary guilds of the Middle Ages—cooperative, “just in time” groups formed by the union of like-minded individuals who shared a common goal and space. These groups could be found aboard ships, at the building sites of large-scale public construction projects such as cathedrals, and anywhere “fishermen, hunters, travelling merchants, builders, or settled craftsmen—came together for a common pursuit.”23 After leaving port, the captain of a ship would gather the crew and passengers on deck and tell them that they were all in this together and that the success of the voyage depended upon all of them working as one. Everyone on board then elected a “governor” and “enforcers,” who would gather “taxes” from those who broke the rules. At the end of the voyage the levies would be given to the poor in the port city.

  Kropotkin’s incontestable observation that cooperation crops up all over biology eventually fomented a revolution in evolutionary theory in the 1950s and 1960s. Marine biologist George Williams stated the problem posed by the cooperative behavior exhibited by social insects: “A modern biologist seeing an animal doing something to benefit another assumes either that it is being manipulated by the other individual or that it is being subtly selfish.”24 If every organism seeks only to benefit itself against all, why would bees sacrifice themselves for the hive, as they clearly do?

  In 1964, social insect specialist William Hamilton came up with an answer now known as “kin selection”: Because hivemates are sisters (in fact, owing to their unusual genetics, they share more of their genes with one another than ordinary sisters do), saving the lives of several hivemates at the cost of one’s own is a net gain in the number of the same genes transmitted to future generations.25 The most radical interpretation of kin selection was popularized by Richard Dawkins’s book The Selfish Gene in a startling formulation: “We are survival machines . . . robot vehicles blindly programmed to preserve the selfish molecules known as genes.”26
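  Hamilton’s logic reduces to a simple inequality, now known as Hamilton’s rule: helping pays, in gene-transmission terms, when r × b > c, where r is the helper’s relatedness to the beneficiaries, b the benefit to them, and c the cost to the helper. A minimal arithmetic sketch in Python (the function name is mine; the 0.75 relatedness figure for honeybee sisters follows from their haplodiploid genetics):

    # Hamilton's rule: an altruistic act is favored when r * b > c, where
    # r = relatedness to the beneficiaries, b = benefit to them, c = cost to the actor.

    def altruism_pays(r: float, b: float, c: float) -> bool:
        """True when the inclusive-fitness gain exceeds the cost."""
        return r * b > c

    # Honeybee workers share r = 0.75 of their genes with each sister, versus
    # r = 0.5 for full sisters in most species. Dying (c = 1 life) to save two
    # hivemates (b = 2 lives):
    print(altruism_pays(0.75, 2, 1))  # True:  0.75 * 2 = 1.5 > 1
    print(altruism_pays(0.50, 2, 1))  # False: 0.50 * 2 = 1.0, not > 1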

  The difference between predisposition and predestination is outside the scope of this book, but I recommend contemplating another of Hobbes’s statements in regard to the behavior of insects versus that of humans: “The agreement of these creatures is natural; that of men is by covenant only, which is artificial; and therefore it is no wonder if there be somewhat else required.”27 The “somewhat else required” to achieve human cooperative behavior is as important as evolutionary influences and is the focus of its own discipline. And the bulk of the “artificial” part is what we now call “technology.”

  Those “covenants” mentioned by Hobbes turn out to be tricky because humans play elaborate games of trust and deception. Economists have long sought the mathematical grail that could predict the behavior of markets. In 1944, John von Neumann and Oskar Morgenstern’s Theory of Games and Economic Behavior provided, if not a grail, a means of looking at the way people compete and collude, cooperate and defect, in competitive situations.28

  John von Neumann was arguably the most influential but least-famous scientist in history, considering his fundamental contributions to mathematics, quantum physics, game theory, and the development of the atomic bomb, digital computer, and intercontinental ballistic missile.29 Von Neumann was a prodigy who joked with his father in classical Latin and Greek at the age of six, was a colleague of Einstein at Princeton’s Institute for Advanced Study, and was perhaps the most brilliant of the stellar collection of scientists gathered at Los Alamos to undertake the Manhattan Project. Jacob Bronowski, a Manhattan Project colleague, recounted that von Neumann had told him, during a taxicab ride in London, that “real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.”30

  Game theory is based on several assumptions: that the players are in conflict, that they must take action, that the results of those actions will determine which player wins according to definite rules, and that all players (this is the kicker) are expected to always act “rationally” by choosing the strategy that will maximize their gain regardless of the consequence to others. These are the kinds of rules that don’t fit real life with predictive precision but that do attract economists, because they map onto observable phenomena such as markets, arms races, cartels, and traffic.

  After World War II, von Neumann joined other mathematicians and economists to brainstorm game theory at the RAND Corporation, housed then as now in a mundane building near the Santa Monica beach. RAND was the first think tank, where intellectuals with security clearances thought about the unthinkable, as RANDite Herman Kahn referred to the craft of thermonuclear war strategy.31 Because the arms race seemed to be closely related to the kind of bluff and counter-bluff described by game theory, the new field became popular among the first nuclear war strategists. In 1950, RAND researchers came up with four fundamental Morgenstern- and von Neumann-style games: Chicken, Stag Hunt, Deadlock, and Prisoner’s Dilemma. Keep in mind that although they can be described as stories, they are represented by exact mathematical payoff matrices.

  Chicken is the game portrayed in movies about juvenile delinquents: two opponents rush toward oblivion, and the one who stops or swerves first loses. Deadlock is endless betrayal: each player refuses to cooperate, ever. The next two are more interesting. Stag Hunt was first described by Jean-Jacques Rousseau in 1755: “If it was a matter of hunting deer, everyone well realized that he must remain faithfully at his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit of it without scruple and, having caught his own prey, he would have cared very little about having caused his companions to lose theirs.”32 Stag Hunt is a classic illustration of the problem of provisioning a public good in the face of individual temptation to defect to self-interest. Should a hunter remain with the group and bet on the smaller chance of bringing down large prey for the entire tribe, or break away and pursue the more certain prospect of bringing a rabbit home to his own family?

  The fourth game hatched at RAND has grown into an interdisciplinary Schelling point. The game was invented in 1950 by RAND researchers Merrill Flood and Melvin Dresher.33 A few months later, RAND consultant Albert W. Tucker gave it its name at a seminar at Stanford University. Tucker described the game situation: “Two men, charged with a joint violation of law, are held separately by the police. Each is told that (1) if one confesses and the other does not, the former will be given a reward . . . and the latter will be fined . . . , (2) if both confess, each will be fined . . . . At the same time, each has a good reason to believe that (3) if neither confesses, both will go clear.”34

  Over the years, the popular version has changed Tucker’s rendition of Prisoner’s Dilemma: threatening jail sentences makes a better story than offering rewards. Remember that the prisoners are “held separately” and unable to communicate, so each can only guess what the other is likely to do. The prisoner who testifies against his partner will go free, and the partner will be sentenced to three years. If both prisoners testify against each other, they will each get a two-year sentence. And if neither testifies, they will each receive a one-year sentence. Because this is game theory, each player is interested only in his own welfare. Rationally, each player will conclude that testifying takes a year off his sentence, regardless of what the other player does. Defecting also prevents a player from being a sucker—remaining loyally silent while the other player rats him out. However, if both refuse to testify, they each get away with only one year. There’s the dilemma: each player, acting in his own interest, brings about an outcome that neither prefers.
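  The dominance argument can be checked mechanically. Below is a minimal sketch in Python (names and layout are mine) using the jail terms of the popular version, with each prisoner trying to minimize his own sentence:

    # Sentences (in years) in the popular Prisoner's Dilemma story.
    # Key: (my_move, partner_move) -> my sentence; "testify" = defect.
    SENTENCE = {
        ("silent",  "silent"):  1,  # neither testifies
        ("silent",  "testify"): 3,  # I stay loyal, partner rats me out
        ("testify", "silent"):  0,  # I rat him out and go free
        ("testify", "testify"): 2,  # both testify
    }

    # Whatever the partner does, testifying shaves a year off my sentence:
    for partner in ("silent", "testify"):
        silent_years = SENTENCE[("silent", partner)]
        testify_years = SENTENCE[("testify", partner)]
        print(f"partner {partner}: silent={silent_years}y, testify={testify_years}y")
    # partner silent:  silent=1y, testify=0y
    # partner testify: silent=3y, testify=2y
    # Defection dominates, yet mutual silence (1y each) beats mutual defection (2y each).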

  The mathematical version represents the results of the two players’ strategies, pitted against each other, in the form of a table. Each row represents a strategy for one player, and each column represents a strategy for the other. The pairs of numbers in the table cells represent the respective payoffs for the players. The payoffs are structured so that, in the RAND researchers’ original terms, the reward payoff for mutual cooperation is greater than the punishment payoff for mutual defection; both are greater than the sucker’s payoff for cooperating when the other player defects and less than the temptation payoff for defecting when the other player cooperates. All four of the RAND social dilemmas are variations of the same model: Reverse the sucker and punishment payoffs, and Prisoner’s Dilemma becomes Chicken. Switch the reward and temptation payoffs, and Prisoner’s Dilemma becomes Stag Hunt.

               B cooperates    B defects
  A cooperates      2,2            0,3
  A defects         3,0            1,1
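  The relationships among the four RAND games can be made concrete in code. A minimal sketch in Python (the function and variable names are mine) that encodes player A’s payoffs from the table above and applies the two swaps just described:

    # Player A's payoffs, from the table above: temptation=3, reward=2,
    # punishment=1, sucker=0. Player B's payoffs are symmetric.
    def game(reward, sucker, temptation, punishment):
        """Player A's payoff for each (A's move, B's move) pair."""
        return {
            ("cooperate", "cooperate"): reward,
            ("cooperate", "defect"):    sucker,
            ("defect",    "cooperate"): temptation,
            ("defect",    "defect"):    punishment,
        }

    prisoners_dilemma = game(reward=2, sucker=0, temptation=3, punishment=1)

    # Swap sucker and punishment: mutual defection becomes the worst outcome.
    chicken = game(reward=2, sucker=1, temptation=3, punishment=0)

    # Swap reward and temptation: mutual cooperation becomes the best outcome.
    stag_hunt = game(reward=3, sucker=0, temptation=2, punishment=1)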

  In 1979, political scientist Robert Axelrod grew interested in cooperation, a turning point in the history of smart mob theory:

  This project began with a simple question. When should a person cooperate, and when should a person be selfish, in an ongoing interaction with another person? Should a friend keep providing favors to another friend who never reciprocates? Should a business provide prompt service to another business that is about to be bankrupt? How intensely should the United States try to punish the Soviet Union for a particular hostile act, and what pattern of behavior can the United States use to best elicit cooperative behavior from the Soviet Union? There is a simple way to represent the type of situation that gives rise to these problems. This is to use a particular kind of game called the iterated Prisoner’s Dilemma. The game allows the players to achieve mutual gains from cooperation, but it also allows for the possibility that one player will exploit the other, or the possibility that neither will cooperate.35

  The Prisoner’s Dilemma game takes on interesting new properties when it is repeated over and over (“iterated”). Although the players cannot communicate their intentions regarding the current move, the history of previous decisions becomes a factor in assessing the other player’s intentions. In Axelrod’s words, “What makes it possible for cooperation to emerge is the fact that the players might meet again. This possibility means that the choices made today not only determine the outcome of this move, but can also influence the later choices of the players. The future can cast a shadow back upon the present and thereby affect the current strategic situation.”36 “Reputation” is another way of looking at this “shadow of the future.”

  Axelrod proposed a “Computer Prisoner’s Dilemma Tournament” pitting computer programs against one another. Each program would choose to cooperate or defect on each move, gaining points according to the game’s payoff matrix, and each could take into account the history of its opponent’s prior moves. Axelrod received entries from game theorists in economics, psychology, sociology, political science, and mathematics. He ran the fourteen entries against each other and against a random rule, over and over. “To my considerable surprise,” Axelrod reported, “the winner was the simplest of all the programs submitted, TIT FOR TAT. TIT FOR TAT is merely the strategy of starting with cooperation and thereafter doing what the other player did on the previous move.”37 If the opponent cooperates on the first move, TIT FOR TAT cooperates on the next; if the opponent defects on the first move, TIT FOR TAT defects on the next. If the opponent then switches from defection to cooperation, TIT FOR TAT switches back to cooperation on the following move: it punishes each defection exactly once and then forgives.
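  The strategy is short enough to state in a few lines. Here is a minimal sketch in Python of TIT FOR TAT in an iterated match, scored with the payoff table shown earlier (the function names and round count are mine):

    import random

    # Payoff to a player given (own move, opponent's move), from the table above.
    # "C" = cooperate, "D" = defect.
    PAYOFF = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

    def tit_for_tat(opponent_history):
        """Cooperate first, then repeat the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def random_rule(opponent_history):
        return random.choice("CD")

    def play(strategy_a, strategy_b, rounds=200):
        """Iterate the game; each strategy sees only its opponent's past moves."""
        seen_by_a, seen_by_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (400, 400)
    print(play(tit_for_tat, always_defect))  # suckered once only: (199, 202)
    print(play(tit_for_tat, random_rule))    # varies from run to run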

  Axelrod invited professors of evolutionary biology, physics, and computer science to join the original entrants on a second round. Designers of strategies were allowed to take into account the results of the first tournament. TIT FOR TAT won again. Axelrod found this intriguing:

  Something very interesting was happening here. I suspected that the properties that made TIT FOR TAT so successful in the tournaments would work in a world where any strategy was possible. If so, then cooperation based solely on reciprocity seemed possible. But I wanted to know the exact conditions that would be needed to foster cooperation on these terms. This led me to an evolutionary perspective: a consideration of how cooperation can emerge among egoists without central authority. The evolutionary perspective suggested three distinct questions. First, how can a potentially cooperative strategy get an initial foothold in an environment which is predominantly noncooperative? Second, what type of strategy can thrive in a variegated environment composed of other individuals using a wide diversity of more or less sophisticated strategies? Third, under what conditions can such a strategy, once fully established among a group of people, resist invasion by a less cooperative strategy?38

  Tinkering with the game simulation revealed an answer, at least on the game-theoretic level, to Axelrod’s first question: Within a pool of entirely uncooperative strategies, cooperative strategies evolve from small clusters of individuals who reciprocate cooperation, even if the cooperative strategies have only a small proportion of their interactions with each other. Clusters of cooperators amass points for themselves faster than defectors can. Strategies based on reciprocity can survive against a variety of strategies, and “cooperation, once established on the basis of reciprocity, can protect itself from invasion by less cooperative strategies. Thus, the gear wheels of social evolution have a ratchet.”39
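  Axelrod’s clustering result can be illustrated numerically. The sketch below (a simplification of his analysis; the parameter values are mine) compares the expected score of a cluster member who has a fraction p of its matches with fellow cooperators against the score of the surrounding ALL DEFECT natives, who almost always meet each other:

    # Expected scores over n rounds, using the payoffs from the table above:
    # reward R=2, punishment P=1, sucker S=0, temptation T=3.
    n, R, P, S, T = 200, 2, 1, 0, 3

    tft_vs_tft = n * R             # cooperators cooperate throughout
    tft_vs_alld = S + (n - 1) * P  # suckered once, mutual defection after
    native = n * P                 # ALL DEFECT natives meeting each other

    # A cluster member does better than the natives as soon as
    #   p * tft_vs_tft + (1 - p) * tft_vs_alld > native
    p_threshold = (native - tft_vs_alld) / (tft_vs_tft - tft_vs_alld)
    print(f"the cluster invades when p > {p_threshold:.4f}")  # p > 0.0050

Under these assumptions, even when only half a percent of a cluster member’s interactions are with fellow cooperators, the cluster outscores the natives, which is the sense in which reciprocity needs only a small foothold.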

  Axelrod, a political scientist at the University of Michigan, wasn’t a biologist, so he called “selfish gene” biologist Richard Dawkins in England, who told him to speak to William Hamilton, the discoverer of kin selection in insects, who, unknown to Axelrod until then, was also at the University of Michigan. Hamilton recalled a Harvard graduate student, Robert Trivers, who had presented evidence for reciprocity as the mechanism that enables self-interested individuals to cooperate.40 The “shadow of the future” enabled individuals to do favors for others, who would do favors for them in the future. Years before Axelrod and TIT FOR TAT, Trivers had uncovered the link between self-interest and cooperation. The publication of Axelrod’s The Evolution of Cooperation ignited interest in the biological basis of cooperation.41

  In 1983, biologist Gerald Wilkinson reported that vampire bats in Costa Rica regurgitate blood to share with other bats that have been less successful in the night’s hunt, and that the bats played TIT FOR TAT, feeding those that had shared in the past and refusing those that had not.42 Wilkinson suggested that the bats’ frequent social grooming rituals furnished a means by which this social memory functioned.

  In related research, Manfred Milinski performed a clever experiment with a species of small fish called sticklebacks.43 Schools of sticklebacks send out scouting pairs to assess the danger posed by nearby predators. Why would an individual dart out from the safety of the school to probe the reactions of a fish that would like to eat it? Milinski noted that each pair of sticklebacks probing a predator took turns moving toward the larger fish in short darting movements. If the predator showed interest, the scouts scooted back to the school. Milinski proposed that the turn taking was an instance of the Prisoner’s Dilemma. He tested his hypothesis by putting a mirror near a predator in an aquarium. Lone sticklebacks reacted in a TIT FOR TAT-like manner when observing what their mirror image did; that is, when they darted forward or backward spontaneously, they repeated the action after seeing their image do the same.

  Later, when discussing zero-sum games versus non-zero-sum games, I’ll point out the ways that cooperative and competitive behaviors are nested within one another. Recall the first public goods, where early hunters may have cooperated in order to bring down game but reverted to more competitive strategies such as dominance hierarchies when it came to allocating that meat (although one of the oft-quoted observations about the emergence of food sharing is that “the Inuit knows that the best place for him to store his surplus is in someone else’s stomach”44).

  Cooperation and conflict are both aspects of the same phenomenon. One of the important ways that humans cooperate is by banding together into clans, tribes, and nations in order to compete more effectively against other bands. Cooperators can thrive amid populations of defectors if they learn to recognize and interact with one another. Are Ostrom’s “clearly defined group boundaries” another way of saying that cooperators learn to recognize each other? Cooperators who clump together can outcompete noncooperative strategies by creating public goods that benefit themselves but not the defectors. One time-tested way of inducing a group to work together is to introduce an external threat. Cooperative enterprise and intergroup conflict have coevolved because the ability to recognize who is inside and who is outside a group’s boundaries is integral to both intragroup cooperation and intergroup conflict.

 
