A Beautiful Math


by Tom Siegfried


  Rufus Johnstone, of the University of Cambridge, extended the math of the hawk-dove game in just this manner to evaluate the eavesdropper factor. In this game, the eavesdropper knows whether its opponent has won or lost its previous fight. An eavesdropper encountering a loser will act hawkish, but if encountering a winner the eavesdropper will adopt a dove strategy and forgo the chance to win the resource.

  "An individual that is victorious in one round is more likely to win in the next, because its opponent is less likely to mount an escalated challenge," Johnstone concluded.11

  Since eavesdroppers have the advantage of knowing when to run, avoiding fights with dangerous foes, you might guess that eavesdropping would reduce the amount of violent conflict in a society. Alas, the math shows otherwise. Adding eavesdroppers to the hawk-dove game raises the rate of "escalated" fighting—occasions where both combatants take the hawk approach.

  Why? Because of the presence of spectators! If nobody is watching, it is not so bad to be a dove. But in the jungle, reputation is everything. With spectators around, acting like a dove guarantees that you'll face an aggressive opponent in your next fight. Whereas if everybody sees that you're a ferocious hawk, your next opponent may head for the hills at the sight of you.

  So the presence of spectators encourages violence, and watching violence today offers an advantage for the spectators who may be fighters tomorrow. In other words, the benefit to an individual of eavesdropping—helping that individual avoid high-risk conflict—drives a tendency toward a higher level of high-risk conflict in the society as a whole.
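The arithmetic underlying this kind of result can be sketched in a few lines of Python. The sketch below is the baseline hawk-dove game without Johnstone's eavesdroppers, and the resource value V and injury cost C are illustrative numbers of my choosing, not figures from his paper; it simply shows how a stable mix of hawks and doves falls out of the payoffs.

```python
# Baseline hawk-dove game (no eavesdroppers). V and C are illustrative.
V, C = 4.0, 10.0  # value of the resource, cost of losing an escalated fight

# Row player's payoff against each opponent type.
payoff = {
    ("H", "H"): (V - C) / 2,  # escalated fight: win half the time, risk injury
    ("H", "D"): V,            # hawk takes the resource from a retreating dove
    ("D", "H"): 0.0,          # dove retreats, gets nothing
    ("D", "D"): V / 2,        # two doves share the resource on average
}

def replicator_step(p, baseline=C):
    """One discrete replicator update on the hawk fraction p.
    A constant baseline fitness keeps all fitnesses positive."""
    f_hawk = p * payoff[("H", "H")] + (1 - p) * payoff[("H", "D")]
    f_dove = p * payoff[("D", "H")] + (1 - p) * payoff[("D", "D")]
    f_mean = p * f_hawk + (1 - p) * f_dove
    return p * (baseline + f_hawk) / (baseline + f_mean)

p = 0.5
for _ in range(2000):
    p = replicator_step(p)

# The population settles at the classic hawk fraction V/C (here 0.4),
# the equilibrium where hawks and doves do equally well.
```

Models like Johnstone's add a third, eavesdropping strategy and extra payoff terms on top of this skeleton; the equilibrium then shifts toward more escalated fights, as described above.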

  But don't forget that adding spectators is just one of many complications that could be considered in the still very simplified hawk-dove game. Fights depend on more than just aggressiveness. Size and skill come into play as well. And one study noted that a bird's self-assessment of its own fighting skills can also influence the fight-or-flight decision. If the birds know their own skill levels accurately, overall fighting might be diminished. (You can think of this as the Clint Eastwood version of the hawk-dove game: A bird has got to know its limitations.)12

  In any case, policy makers who would feel justified in advocating wars based on game theory should pause and realize that real life is more complicated than biologists' mathematical games. Humans, after all, have supposedly advanced to a civilized state where the law of the jungle doesn't call all the shots. And in fact, game theory can help show how that civilized state came about. Game theory describes how the circumstances can arise that make cooperation and communication a stable strategy for the members of a species. Without game theory, cooperative human social behavior is hard to understand.

  EVOLVING ON A LANDSCAPE

  Game theory can help illuminate how different strategies fare in the battle to survive. Even more important, game theory helps to show how the best strategies might differ as circumstances change. After all, a set of behavioral propensities that's successful in the jungle might not be such a hot idea in the Antarctic.

  When evolutionists talk about circumstances changing, typically they'll be referring to something like the climate, or the trauma of a recent asteroid impact. But the changing strategies of the organisms themselves can be just as important. And that's why game theory is essential for understanding evolution. Remember the basic concept of a Nash equilibrium—it's when everybody is doing the best they can do, given what everybody else is doing. In other words, the best survival strategy depends on who else is around and how they are behaving. If your survival hinges on the actions of others, you're in a game whether you like it or not.

  In the language of evolution, success in the survival game equates to "fitness." The fittest survive and procreate, and obviously some individuals score better in this game than others. Biologists like to describe such differences in fitness in geographic terms, using the metaphor of a landscape. In this metaphor, fitness (the goal of the game) is a good vantage point: living on a mountain peak with a commanding view of your surroundings. For convenience, you can specify your fitness just by giving your latitude and longitude on the landscape map. Some positions put you on high ground; some leave you in a chasm. In other words, some positions are more fit than others, which is just another way of saying that some combinations of features and behaviors improve your chances of surviving and reproducing.

  In a fitness landscape (just like a real landscape) there can, of course, be more than one peak—more than one combination of properties with a high likelihood for having viable offspring. (In the simple landscape of the all-bird island, you'd have a dove peak and a hawk peak.) In a landscape with many fitness peaks, some would be "higher" than others (meaning your odds of reproducing are more favorable), but still many peaks would be good enough for a species to survive.

  On a real landscape, your vantage point can be disturbed by many kinds of events. A natural disaster—a hurricane like Katrina, say, or an earthquake and tsunami—can literally reshape the landscape, and a latitude and longitude that previously gave you a great view may now be a muddy rut. Similarly in evolution, a change in the fitness landscape can leave a once successful species in a survival valley. Something like this seems to be what happened to the dinosaurs.

  You don't need an asteroid impact to change the biological fitness landscape, though. Simply suppose that some new species moves into the neighborhood. What used to be a good strategy—say, swimming in the lake, away from waterphobic predators—might not be so smart if crocodiles move in. So as evolution proceeds, the fitness landscape changes. Your best evolutionary strategy, in other words, depends on who else is evolving along with you. No species is a Robinson Crusoe alone on an island. And when what you should do depends on what others are doing, game theory is the name of the game.

  Recognizing this ever-shifting evolution landscape is the key to explaining how cooperative behavior comes about. In particular, it helps to explain the vastly more elaborate cooperation exhibited by humans compared with other animals.

  KIN AND COOPERATION

  It's not that nonhuman animals never cooperate. Look at ants, for instance. But such social insect societies can easily be explained by evolution's basis in genetic inheritance. The ants in an ant colony are all closely related. By cooperating they enhance the prospect that their shared genes will be passed along to future colonies.

  Similar reasoning should explain some human cooperation—that between relatives. As Maynard Smith's teacher J. B. S. Haldane once remarked, it would make sense to dive into a river to save two drowning siblings or eight drowning cousins. (On average, you share one-half of a sibling's genes, one-eighth of a cousin's.) But human cooperation is not limited to planning family reunion picnics. Somehow, humans evolved to cooperate with strangers.
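Haldane's quip reduces to simple arithmetic with the coefficients of relatedness, which a short sketch makes explicit (the function name is mine, not Haldane's):

```python
def genes_saved(relatedness, n_relatives):
    """Expected copies of your genes preserved by saving n relatives,
    each sharing (on average) the given fraction of genes with you."""
    return relatedness * n_relatives

# Haldane's break-even point: the rescue "pays" genetically when the
# expected shared genes saved reach one full copy of yourself.
assert genes_saved(0.5, 2) >= 1     # two siblings
assert genes_saved(0.125, 8) >= 1   # eight cousins
assert genes_saved(0.125, 7) < 1    # seven cousins fall short
```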

  When I visited Martin Nowak, he emphasized that such nonkin cooperation was one of the defining differences between humans and the rest of the planet's species. The other was language. "I think humans are really distinct from animals in two different ways," he said. "One is that they have a language which allows us to talk about everything. No other animal species has evolved such a system of unlimited communication. Animals can talk about a lot of things and signal about a lot of things to each other, but it seems that they are limited to a certain finite number of things that they can actually tell each other."

  Humans, though, have a "combinatorial" language, a mix-and-match system of sounds that can describe any number of circumstances, even those never previously encountered. "There must have been a transition in evolution," Nowak said, that allowed humans to develop this "infinite" communication system. Such a flexible language system no doubt helped humans evolve their other distinction—widespread cooperation. "Humans are the only species that have solved the problem of large-scale cooperation between nonrelated individuals," Nowak pointed out. "That cooperation is interesting because evolution is based on competition, and if you want survival of the fittest, this competition makes it difficult to explain cooperation."13

  Charles Darwin himself noted this "altruism" problem. Behaving altruistically—helping someone else out, at a cost to you with no benefit in return—does seem to be a rather foolish strategy in the struggle to survive. But humans (many of them, at least) possess a compelling instinct to be helpful. There must have been some survival advantage to being a nice guy, no matter what Leo Durocher might have thought. (He was the baseball manager of the mid-20th century who was famous for saying "Nice guys finish last.")

  One early guess was that altruism works to the altruist's advantage in some way, like mutual backscratching. If you help out your neighbor, maybe someday your neighbor will return the favor. (This is the notion of "reciprocal altruism.") But that explanation doesn't take you very far. It only works if you will encounter the recipient of your help again in the future. Yet people often help others whom they will probably never see again.

  Maybe you can still get an advantage from being nice in an indirect way. Suppose you help out a stranger whom you never see again, but that stranger—overwhelmed by your kindness— becomes a traveling Good Samaritan, rendering aid to all sorts of disadvantaged souls. Someday maybe one of the Samaritan's beneficiaries will encounter you and help you out, thanks to the lesson learned from the Samaritan you initially inspired.

  Such "indirect reciprocity," Nowak told me, had been mentioned long ago by the biologist Richard Alexander but was generally dismissed by evolutionary biologists. And on the face of it, it sounds a little far-fetched. Nowak, though, had explored the idea of indirect reciprocity in detail with the mathematician Karl Sigmund in Vienna. They had recently published a paper showing how indirect reciprocity might actually work, using the mathematics of game theory (in the form of the Prisoner's Dilemma) to make the point. The secret to altruism, Nowak suggested, is the power of reputation. "By helping someone we can increase our reputation," he said, "and to have a higher reputation in the group increases the chance that someone will help you."

  The importance of reputation explains why human language became important—so people could gossip. Gossip spreads reputation, making altruistic behavior based on reputation more likely. "It's interesting how much time humans spend talking about other people, as though they were constantly evaluating the reputations of other people," Nowak said. "Language helped the evolution of cooperation and vice versa. A cooperative population makes language more important…. With indirect reciprocity you can either observe the person, you can look at how he behaves, or more efficiently you can just talk to people…. Language is essential for this."14

  Reputation breeds cooperation because it permits players in the game of life to better predict the actions of others. In the Prisoner's Dilemma game, for instance, both players come out ahead if they cooperate. But if you suspect your opponent won't cooperate, you're better off defecting. In a one-shot game against an unknown opponent, the smart play is to defect. If, however, your opponent has a well-known reputation as a cooperator, it's a better idea to cooperate also, so both of you are better off. In situations where the game is played repeatedly, cooperation offers the added benefit of enhancing your reputation.
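The one-shot logic can be spelled out with the conventional Prisoner's Dilemma payoff numbers (these particular values are the textbook convention, not figures from this book):

```python
# Row player's payoff: C = cooperate, D = defect.
# Conventional values: mutual cooperation 3, sucker's payoff 0,
# temptation to defect 5, mutual defection 1.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Defection strictly dominates: whatever the opponent does,
# you score more by defecting...
for opponent_move in ("C", "D"):
    assert payoff[("D", opponent_move)] > payoff[("C", opponent_move)]

# ...yet mutual cooperation beats mutual defection, which is exactly
# the dilemma that reputation and repeated play help escape.
assert payoff[("C", "C")] > payoff[("D", "D")]
```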

  TIT FOR TAT

  Gossip about reputations may not be enough to create a cooperative society, though. Working out the math to prove that indirect reciprocity can infuse a large society with altruistic behavior turned up some problems. Nowak and Sigmund's model of indirect reciprocity was criticized by several other experts who pointed out that it was unlikely to work except in very small groups. When I next encountered Nowak, in 2004 at a complexity conference in Boston, his story had grown more elaborate.

  In his talk, Nowak recounted the role of the Prisoner's Dilemma game in analyzing evolutionary cooperation. The essential backdrop was a famous game theory tournament held in 1980, organized by the political scientist Robert Axelrod at the University of Michigan. Axelrod conceived the brilliant idea of testing the skill of game theoreticians themselves in a Prisoner's Dilemma contest. He invited game theory experts to submit a strategy for playing Prisoner's Dilemma (in the form of a computer program) and then let the programs battle it out in a round-robin competition. Each program played repeated games against each of the other programs to determine which strategy would be the most "fit" in the Darwinian sense.

  Of the 14 strategies submitted, the winner was the simplest— an imitative approach called tit for tat, submitted by the game theorist Anatol Rapoport.15 In a tit-for-tat strategy, a player begins by cooperating in the first round of the game. After that, the player does whatever its opponent did in the preceding round. If the other player cooperates, the tit-for-tat player does also. Whenever the opponent defects, though, the tit-for-tat player defects on the next play and continues to defect until the opponent cooperates again.

  In any given series of games against a particular opponent, tit for tat is likely to lose. But in a large number of rounds versus many different opposition strategies, tit for tat outperforms the others on average. Or at least it did in Axelrod's tournament.
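A miniature version of such a round robin is easy to sketch. The four-strategy field below is an illustrative stand-in for Axelrod's actual 14 entries, and the payoff numbers are the conventional ones; the point it reproduces is that tit for tat never beats any opponent head to head, yet still ties for the best total.

```python
# Payoff to the row player: C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's last move.
    return opp_hist[-1] if opp_hist else "C"

def always_defect(my_hist, opp_hist):
    return "D"

def always_cooperate(my_hist, opp_hist):
    return "C"

def grudger(my_hist, opp_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp_hist else "C"

def match(strat_a, strat_b, rounds=200):
    """Play an iterated game; return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect,
              "always_cooperate": always_cooperate, "grudger": grudger}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for strat_b in strategies.values():  # round robin, self-play included
        totals[name_a] += match(strat_a, strat_b)[0]

# Tit for tat loses narrowly to always-defect and merely ties everyone
# else, yet no strategy's total exceeds its total.
assert totals["tit_for_tat"] == max(totals.values())
```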

  Once tit for tat emerged as the winner, it seemed possible that even better strategies might be developed. So Axelrod held a second tournament, this time attracting 62 entries. Of the contestants in the second tournament, only one entered tit for tat. It was Rapoport, and he won again.

  You can see how playing tit for tat enhances opportunities for cooperation in a society. A reputation as a tit-for-tat player will induce opponents to cooperate with you, knowing that if they do, you will. And if they don't, you won't.

  Alas, the story gets even more complicated. Just because tit for tat won Axelrod's tournament, that doesn't mean it's the best strategy in the real world. For one thing, it rarely won in head-to-head competition against any other strategy; it just did the best overall (because strategies that defeated tit for tat often lost badly against other strategies).

  In his talk at the conference, Nowak explored some of the nuances of the tit-for-tat strategy in a broader context. At first glance, tit for tat's success seems to defy the Nash equilibrium implication that everyone's best strategy is to always defect. The mathematics of evolutionary game theory, based on analyzing an infinitely large population, seems to confirm that expectation. However, Nowak pointed out, for a more realistic finite population, you can show that a tit-for-tat strategy, under certain circumstances, can successfully invade the all-defect population.

  But if you keep calculating what would happen if the game continues, it gets still more complicated. Tit for tat is an unforgiving strategy—if your opponent meant to cooperate but accidentally defected, you would then start defecting and cooperation would diminish. If you work out what would happen in such a game, the tit-for-tat strategy becomes less successful than a modified strategy called "generous tit for tat." So a generous tit-for-tat strategy would take over the population.

  "Generous tit for tat is a strategy that starts with cooperation, and I cooperate whenever you cooperate, but sometimes I will cooperate even when you defect," Nowak explained. "This allows me to correct for mistakes—if it's an accidental mistake, you can correct for it."16

  As the games go on, the situation gets even more surprising, Nowak said. The generous tit-for-tat approach gets replaced by a strategy of full-scale cooperation! "Because if everybody plays generous tit for tat, or tit for tat, then nobody deliberately tries to defect; everybody is a cooperator." Oh Happy Days.

  Except that "always cooperate" is not a stable strategy. As soon as everybody cooperates, an always-defect strategy can invade, just like a hawk among the doves, and clean up. So you start with all defect, go to tit for tat, then generous tit for tat, then all cooperate, then all defect. "And this," said Nowak, "is the theory of war and peace in human history."17

  GAMES AND PUNISHMENT

  Nevertheless, humans do cooperate. If indirect reciprocity isn't responsible for that cooperation, what is? Lately, one popular view seems to be that cooperation thrives because it is enforced by the threat of punishment. And game theory shows how that can work.

  Among the advocates of this view are the economists Samuel Bowles and Herbert Gintis and the anthropologist Robert Boyd. They call this idea "strong reciprocity." A strong reciprocator rewards cooperators but punishes defectors. In this case, a more complicated game illustrates the interaction. Rather than playing the Prisoner's Dilemma game—a series of one-on-one encounters—strong reciprocity researchers conduct experiments with various versions of public goods games.

  These are just the sorts of games, described in Chapter 3, that show how different individuals adopt different strategies—some are selfish, some are cooperators, some are reciprocators. In a typical public goods game, players are given "points" at the outset (redeemable for real money later). In each round, players may contribute some of their points to a community fund and keep the rest. Then each player receives a fraction of the community fund. A greedy player will donate nothing, assuring a maximum personal payoff, although the group as a whole would then be worse off. Altruistic players will share some of their points to increase the payoff to the whole group. Reciprocators base their contributions on what others are contributing, thereby punishing the "free riders" who would donate little but reap the benefits of the group (but in so doing punish the rest of the group, including themselves, as well). As we've seen, humankind comprises all three sorts of players. Further studies suggest why the human race might have evolved to include punishers.
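One round of such a game fits in a few lines. The endowment, the contribution amounts, and the multiplier below are illustrative experimental parameters (the multiplier must exceed 1, so contributing helps the group, but stay below the group size, so free riding still helps the individual):

```python
def public_goods_payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Three contributors and one free rider:
mixed = public_goods_payoffs([10, 10, 10, 0])
# The free rider earns the most in this group...
assert mixed[3] == max(mixed)  # free rider 32 vs. 22 per contributor

# ...but if everyone contributes, each player ends up better off than
# the contributors did in the mixed group:
everyone = public_goods_payoffs([10, 10, 10, 10])
assert everyone[0] > mixed[0]  # 26 vs. 22
```

This is the tension reciprocators respond to: withholding their own contributions punishes the free riders, but at a cost to the whole group, themselves included.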

 
