Behave: The Biology of Humans at Our Best and Worst

by Robert M. Sapolsky


  This system can indicate that this mouse is John Smith. How does it also tell that he’s your never-before-encountered brother? The closer the relative, the more similar their cluster of MHC genes and the more similar their olfactory signature. Olfactory neurons in a mouse contain receptors that respond most strongly to the mouse’s own MHC protein. Thus, if the receptor is maximally stimulated, it means the mouse is sniffing its armpit. If near maximally stimulated, it’s a close relative. Moderately, a distant relative. Not at all (though the MHC protein is being detected by other olfactory receptors), it’s a hippo’s armpit.*

  Olfactory recognition of kin accounts for a fascinating phenomenon. Recall from chapter 5 how the adult brain makes new neurons. In rats, pregnancy triggers neurogenesis in the olfactory system. Why there? So that olfactory recognition is in top form when it’s time to recognize your newborn; if the neurogenesis doesn’t occur, maternal behavior is impaired.25

  Then there is kin recognition based on imprinted sensory cues. How do I know which newborn to nurse? The one who smells like my vaginal fluid. Which kid do I hang out near? The one who smells like Mom’s milk. Many ungulates use such rules. So do birds. Which bird do I know is Mom? The bird whose distinctive song I learned before hatching.

  And there are species that figure out relatedness by reasoning; my guess is that male baboons make statistical inferences when identifying their likely offspring: “How much of this mom’s peak estrus swelling was spent with me? All. Okay, this is my kid; act accordingly.” Which brings us to the most cognitively strategic species, namely us. How do we do kin recognition? In ways that are far from accurate, with interesting consequences.

  We start with a long-theorized type of pseudo–kin recognition. What if you operate with the rule that you cooperate with (i.e., act related to) individuals who share conspicuous traits with you? This facilitates passing on copies of genes if you possess a gene (or genes) with three properties: (a) it generates that conspicuous signal; (b) it recognizes the signal in others; and (c) it makes you cooperate with others who carry that signal. It’s a kind of primitive, stripped-down kin selection.

  Hamilton speculated about the existence of such a “green-beard effect”; if an organism has a gene that codes for both growing a green beard and cooperating with other green bearders, green bearders will flourish when mixed with non–green bearders.26 Thus, “the crucial requirement for altruism is genetic relatedness at the altruism locus [i.e., merely a multifaceted green-beard gene] and not genealogical relationship over the whole genome.”27

  Green-beard genes exist. Among yeast, cells form cooperative aggregates that need not be identical or even closely related. Instead, it could be any yeast that expresses a gene coding for a cell-surface adhesion protein that sticks to copies of the same molecule on other cells.28
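
  To see why such a gene can spread, here is a toy calculation in Python, a deliberately stripped-down sketch rather than a model of the yeast system; the benefit, cost, and starting frequency are made-up numbers, and it assumes every green-bearder finds a green-bearded partner:

  # Toy sketch of a green-beard gene spreading; all numbers are illustrative.
  b, c = 0.5, 0.1          # benefit of being helped, cost of helping a fellow green-bearder
  baseline = 1.0           # everyone's baseline fitness
  p = 0.05                 # starting frequency of green-bearders

  for generation in range(1, 31):
      # green-bearders recognize one another and cooperate; everyone else gets nothing
      fitness_gb = baseline + b - c
      fitness_other = baseline
      mean_fitness = p * fitness_gb + (1 - p) * fitness_other
      p = p * fitness_gb / mean_fitness        # standard frequency update under selection
      if generation % 10 == 0:
          print(generation, round(p, 3))
  # As long as b exceeds c, the green-beard frequency climbs toward fixation.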

  Humans show green-beard effects. Crucially, we differ as to what counts as a green-beard trait. Define it narrowly, and we call it parochialism. Include enmity toward those without that green-beard trait and it’s xenophobia. Define the green-beard trait as being a member of your species, and you’ve described a deep sense of humanity.

  RECIPROCAL ALTRUISM

  So sometimes a chicken is an egg’s way of making another egg, genes can be selfish, and sometimes we gladly lay down our lives for two brothers or eight cousins. Does everything have to be about competition, about individuals or groups of relatives leaving more copies of their genes than the others, being more fit, having more reproductive success?* Is the driving force of behavioral evolution always that someone be vanquished?

  Not at all. One exception is elegant, if specialized. Remember rock/paper/scissors? Paper envelops rock; rock breaks scissors; scissors cut paper. Would rocks want to bash every scissors into extinction? No way. Because then all those papers would enwrap the rocks into extinction. Each participant has an incentive for restraint, producing an equilibrium.

  Remarkably, such equilibriums occur in living systems, as shown in a study of the bacterium Escherichia coli.29 The authors generated three strains of E. coli, each with a strength and a weakness. To simplify: Strain 1 secretes a toxin. Strength: it can kill competitor cells. Weakness: making the toxin is energetically costly. Strain 2 is vulnerable to the toxin, in that it has a membrane transporter that absorbs nutrients, and the toxin slips in via that transporter. Strength: it’s good at getting food. Weakness: vulnerability to the toxin. Strain 3 doesn’t have the transporter and thus isn’t vulnerable to the toxin, and it doesn’t make the toxin. Strength: it doesn’t bear the cost of making the toxin and is insensitive to it. Weakness: it doesn’t absorb as many nutrients. Thus, destruction of strain 2 by strain 1 causes the demise of strain 1 thanks to strain 3. The study showed that the strains could exist in equilibrium, each limiting its own growth.
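
  You can watch that restraint-driven equilibrium play out in a toy replicator-dynamics sketch in Python; this is my illustration, not the E. coli study’s actual model, and all it encodes is “1 beats 2, 2 beats 3, 3 beats 1,” with made-up starting frequencies:

  # Toy replicator-dynamics sketch of cyclic dominance (rock/paper/scissors logic).
  import numpy as np

  # payoff[i][j]: +1 if type i beats type j, -1 if it loses, 0 against its own kind
  payoff = np.array([[ 0,  1, -1],   # strain 1: poisons strain 2, loses to strain 3
                     [-1,  0,  1],   # strain 2: poisoned by 1, outgrows strain 3
                     [ 1, -1,  0]])  # strain 3: resists the toxin, outgrown by 2

  x = np.array([0.5, 0.3, 0.2])      # made-up starting frequencies
  dt = 0.005
  for step in range(20001):
      advantage = payoff @ x                       # each type's current edge
      x += dt * x * (advantage - x @ advantage)    # replicator update
      x /= x.sum()                                 # keep frequencies summing to one
      if step % 5000 == 0:
          print(round(step * dt, 1), np.round(x, 3))
  # The three frequencies keep cycling around one another; none goes extinct here.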

  Cool. But it doesn’t quite fit our intuitions about cooperation. Rock/paper/scissors is to cooperation as peace due to nuclear weapons–based mutually assured destruction is to the Garden of Eden.

  Which raises a third fundamental, alongside individual selection and kin selection: reciprocal altruism. “I’ll scratch your back if you scratch mine. I’d rather not actually scratch yours if I can get away with it. And I’m watching you in case you try the same.”

  Despite what you might expect from kin selection, unrelated animals frequently cooperate. Fish swarm in a school, birds fly in formation. Meerkats take risks by giving alarm calls that aid everyone, vampire bats who maintain communal colonies feed one another’s babies.*30 Depending on the species, unrelated primates groom one another, mob predators, and share meat.

  Why should nonrelatives cooperate? Because many hands lighten the load. School with other fish, and you’re less likely to be eaten (competition for the safest spot—the center—produces what Hamilton termed the “geometry of the selfish herd”). Birds flying in a V formation save energy by catching the updraft of the bird in front (raising the question of who gets stuck in front).31 If chimps groom one another, there are fewer parasites.

  In a key 1971 paper biologist Robert Trivers laid out the evolutionary logic and parameters by which unrelated organisms engage in “reciprocal altruism”—incurring a fitness cost to enhance a nonrelative’s fitness, with the expectation of reciprocation.32

  It doesn’t require consciousness to evolve reciprocal altruism; back to the metaphor of the airplane wing in the wind tunnel. But there are some requirements for its occurrence. Obviously, the species must be social. Furthermore, social interactions have to be frequent enough that the altruist and the indebted are likely to encounter each other again. And individuals must be able to recognize each other.

  Amid reciprocal altruism occurring in numerous species, individuals often attempt to cheat (i.e., to not reciprocate) and monitor attempts by others to do the same to them. This raises the realpolitik world of cheating and counterstrategies, the two coevolving in escalating arms races. This is called a “Red Queen” scenario, for the Red Queen in Through the Looking-Glass, who must run faster and faster to stay in place.33

  This raises two key interrelated questions:

  Amid the cold calculations of evolutionary fitness, when is it optimal to cooperate, when to cheat?

  In a world of noncooperators it’s disadvantageous to be the first altruist. How do systems of cooperation ever start?*

  Gigantic Question #1: What Strategy for Cooperating Is Optimal?

  While biologists were formulating these questions, other scientists were already starting to answer them. In the 1940s “game theory” was founded by the polymath John von Neumann, one of the fathers of computer science. Game theory is the study of strategic decision making. Framed slightly differently, it’s the mathematical study of when to cooperate and when to cheat. The topic was already being explored with respect to economics, diplomacy, and warfare. What was needed was for game theorists and biologists to start talking. This occurred around 1980 concerning the Prisoner’s Dilemma (PD), introduced in chapter 3. Time to see its parameters in detail.

  Two members of a gang, A and B, are arrested. Prosecutors lack evidence to convict them of a major crime but can get them on a lesser charge, for which they’ll serve a year in prison. A and B can’t communicate with each other. Prosecutors offer each a deal—inform on the other and your sentence is reduced. There are four possible outcomes:

  Both A and B refuse to inform on each other: each serves one year.

  Both A and B inform on each other: each serves two years.

  A informs on B, who remains silent: A walks free and B serves three years.

  B informs on A, who remains silent: B walks and A serves three years.

  Thus, each prisoner’s dilemma is whether to be loyal to your partner (“cooperate”) or betray him (“defect”). The thinking might go, “Best to cooperate. This is my partner; he’ll also cooperate, and we’ll each serve only a year. But what if I cooperate and he stabs me in the back? He walks, and I’m in for three years. Better defect. But what if we both defect—that’s two years. But maybe defect, in case he cooperates . . .” Round and round.*

  If you play PD once, there is a rational solution. If you, prisoner A, defect, your sentence averages out to one year (zero years if B cooperates, two years if B defects); if you cooperate, the average is two years (one year if B cooperates, three years if B defects). Thus you should defect. In single-round versions of PD, it’s always optimal to defect. Not very encouraging for the state of the world.
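
  The same arithmetic in a few lines of Python; the payoff numbers are the prison years above, and the dictionary layout is just my bookkeeping:

  # Sketch of the single-round payoffs above, in years of prison (fewer is better).
  SENTENCE = {              # (your move, partner's move) -> (your years, partner's years)
      ("C", "C"): (1, 1),   # both stay silent: one year each
      ("D", "D"): (2, 2),   # both inform: two years each
      ("D", "C"): (0, 3),   # you inform, partner stays silent: you walk, partner gets three
      ("C", "D"): (3, 0),   # you stay silent, partner informs: three years for you
  }

  for my_move in ("C", "D"):
      # average over the partner's two possible moves, as in the reasoning above
      average = sum(SENTENCE[(my_move, partner)][0] for partner in ("C", "D")) / 2
      print(my_move, "averages", average, "years")
  # Prints C averages 2.0 years, D averages 1.0 years: defecting is better either way.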

  Suppose there are two rounds of PD. The optimal strategy for the second round is just like in a single-round version—always defect. Given that, the first-round defaults into being like a single-round game—and thus, defect during it also.

  What about a three-round game? Defect in the third, meaning that things default into a two-round game. In which case, defect in the second, meaning defect in the first.

  It’s always optimal to defect in round Z, the final round. And thus it’s always optimal to defect in round Z−1, and thus round Z−2. . . . In other words, when two individuals play for a known number of rounds, the optimal strategy precludes cooperation.

  But what if the number of rounds is unknown (an “iterated” PD)? Things get interesting. Which is when the game theorists and biologists met.

  The catalyst was political scientist Robert Axelrod of the University of Michigan. He explained to his colleagues how PD works and asked them what strategy they’d use in a game with an unknown number of rounds. The strategies offered varied enormously, with some being hair-raisingly complicated. Axelrod then programmed the various strategies and pitted them against each other in a simulated massive round-robin tournament. Which strategy won; which was optimal?

  It was provided by a mathematician at the University of Toronto, Anatol Rapoport, and, as in mythic hero stories, it was the simplest strategy: cooperate in the first round; after that, do whatever the other player did in the previous round. It was called Tit for Tat. More details:

  You cooperate (C) in the first round, and if the other player always cooperates (C), you both happily cooperate into the sunset:

  Example 1:

  You: C C C C C C C C C C. . . .

  Her: C C C C C C C C C C. . . .

  Suppose the other player starts cooperating but then, tempted by Satan, defects (D) in round 10. You cooperated, and thus you take a hit:

  Example 2:

  You: C C C C C C C C C C

  Her: C C C C C C C C C D

  Thus, you Tit for Tat her, punishing her in the next round:

  Example 3:

  You: C C C C C C C C C C D

  Her: C C C C C C C C C D ?

  If by then she’s resumed cooperating, you do as well; peace returns:

  Example 4:

  You: C C C C C C C C C C D C C C. . . .

  Her: C C C C C C C C C D C C C C. . . .

  If she continues defecting, you do as well:

  Example 5:

  You: C C C C C C C C C C D D D D D. . . .

  Her: C C C C C C C C C D D D D D D. . . .

  Suppose you play against someone who always defects. Things look like this:

  Example 6:

  You: C D D D D D D D D D. . . .

  Her: D D D D D D D D D D. . . .

  This is the Tit for Tat strategy. Note that it can never win. The best it can do is draw, when playing against another Tit for Tat player or someone using an “always cooperate” strategy; every other strategy beats Tit for Tat by a small margin. However, other strategies playing against each other can produce catastrophic losses. And when everything is summed, Tit for Tat wins. It lost nearly every battle but won the war. Or rather, the peace. In other words, Tit for Tat drives other strategies to extinction.

  Tit for Tat has four things going for it: Its proclivity is to cooperate (i.e., that’s its starting state). But it isn’t a sucker and punishes defectors. It’s forgiving—if the defector resumes cooperating, so will Tit for Tat. And the strategy is simple.
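
  Here is a minimal sketch of the strategy in Python, scored in the prison years used above (lower totals are better); the function names and the ten-round length are my own choices:

  YEARS = {("C", "C"): (1, 1), ("D", "D"): (2, 2),
           ("D", "C"): (0, 3), ("C", "D"): (3, 0)}

  def tit_for_tat(my_moves, their_moves):
      return "C" if not their_moves else their_moves[-1]   # copy their last move

  def always_cooperate(my_moves, their_moves):
      return "C"

  def always_defect(my_moves, their_moves):
      return "D"

  def play(strategy_a, strategy_b, rounds=10):
      moves_a, moves_b, years_a, years_b = [], [], 0, 0
      for _ in range(rounds):
          a = strategy_a(moves_a, moves_b)
          b = strategy_b(moves_b, moves_a)
          ya, yb = YEARS[(a, b)]
          years_a, years_b = years_a + ya, years_b + yb
          moves_a.append(a)
          moves_b.append(b)
      return years_a, years_b

  print(play(tit_for_tat, always_cooperate))   # (10, 10): cooperation into the sunset
  print(play(tit_for_tat, always_defect))      # (21, 18): loses the matchup, by a little
  # Head to head, Tit for Tat never does better than its opponent, but a pair of
  # cooperators ends up with far fewer years than a pair of defectors (10 vs. 20).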

  Axelrod’s tournament launched a zillion papers about Tit for Tat in PD and related games (more later). Then something crucial occurred—Axelrod and Hamilton hooked up. Biologists studying the evolution of behavior longed to be as quantitative as those studying the evolution of kidneys in desert rats. And here was this world of social scientists studying this very topic, even if they didn’t know it. PD provided a framework for thinking about the strategic evolution of cooperation and competition, as Axelrod and Hamilton explored in a 1981 paper (famous enough that it’s a buzz phrase—e.g., “How’d your lecture go today?” “Terrible, way behind schedule; I didn’t even get to Axelrod and Hamilton”).34

  As the evolutionary biologists started hanging with the political scientists, they inserted real-world possibilities into game scenarios. One addressed a flaw in Tit for Tat.

  Let’s introduce signal errors—a message is misunderstood, someone forgets to tell someone something, or there’s a hiccup of noise in the system. Like in the real world.

  Suppose there has been a signal error in round 5 between two individuals using a Tit for Tat strategy. This is what each player means to do:

  Example 7:

  You: C C C C C

  Her: C C C C C

  But thanks to a signal error, this is what you think happened:

  Example 8:

  You: C C C C C

  Her: C C C C D

  You think, “What a creep, defecting like that.” You defect in the next round. Thus, what you think has happened:

  Example 9:

  You: C C C C C D

  Her: C C C C D C

  What she thinks is happening, being unaware of the signal error:

  Example 10:

  You: C C C C C D

  Her: C C C C C C

  She thinks, “What a creep, defecting like that.” Thus she defects the next round. “Oh, so you want more? I’ll give you more,” you think, and defect. “Oh, so you want more? I’ll give you more,” she thinks:

  Example 11:

  You: C C C C C D C D C D C D C D C D. . . .

  Her: C C C C D C D C D C D C D C D C. . . .

  When signal errors are possible, a pair of Tit for Tat players are vulnerable to being locked forever in this seesawing of defection.*
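
  You can watch the echo happen in a few lines of Python; this is a sketch of the scenario above, and which player misreads which round is my arbitrary choice:

  # Two Tit for Tat players; in round 5 you misread her cooperation as a defection.
  def tit_for_tat(their_moves_as_seen):
      return "C" if not their_moves_as_seen else their_moves_as_seen[-1]

  you_sees, her_sees = [], []        # each player's possibly garbled view of the other
  you_plays, her_plays = [], []
  for rnd in range(12):
      you = tit_for_tat(you_sees)
      her = tit_for_tat(her_sees)
      you_plays.append(you)
      her_plays.append(her)
      you_sees.append("D" if rnd == 4 else her)   # the one misread move
      her_sees.append(you)                        # she sees your moves accurately

  print("You:", " ".join(you_plays))   # C C C C C D C D C D C D
  print("Her:", " ".join(her_plays))   # C C C C C C D C D C D C
  # One garbled signal, and two well-intentioned cooperators seesaw indefinitely.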

  The discovery of this vulnerability prompted evolutionary biologists Martin Nowak of Harvard, Karl Sigmund of the University of Vienna, and Robert Boyd of UCLA to provide two solutions.35 “Contrite Tit for Tat” retaliates only if the other side has defected twice in a row. “Forgiving Tit for Tat” automatically forgives one third of defections. Both avoid doomsday signal-error scenarios but are vulnerable to exploitation.*
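
  A sketch of the forgiving version: the one-in-three forgiveness rate follows the description above, while the coin-flip implementation details are my own guesses, not the authors’ code:

  # Forgiving Tit for Tat: answer a defection with cooperation about one time in three.
  import random

  def forgiving_tit_for_tat(their_moves, forgive_probability=1/3, rng=random):
      if not their_moves or their_moves[-1] == "C":
          return "C"
      return "C" if rng.random() < forgive_probability else "D"   # sometimes let it slide

  print([forgiving_tit_for_tat(["D"]) for _ in range(9)])   # mostly D, with occasional C
  # One forgiving response is enough to break the signal-error seesaw and restore
  # mutual cooperation; the cost is that an Always Defect player gets a free ride
  # now and then.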

  A solution to this vulnerability is to shift the frequency of forgiveness in accordance with the likelihood of signal error (“Sorry I’m late again; the train was delayed” being assessed as more plausible and forgivable than “Sorry I’m late again; a meteorite hit my driveway again”).

  Another solution to Tit for Tat’s signal-error vulnerability is to use a shifting strategy. At the beginning, in an ocean of heterogeneous strategies, many heavily biased toward defection, start with Tit for Tat. Once the exploitative strategies have gone extinct, switch to Forgiving Tit for Tat, which outcompetes Tit for Tat when signal errors occur. What is this transition from hard-assed, punitive Tit for Tat to incorporating forgiveness? Establishing trust.

  Other elaborations simulate living systems. The computer scientist John Holland of the University of Michigan introduced “genetic algorithms”—strategies that mutate over time.

  Another real-world elaboration was to factor in the “cost” of certain strategies—for example, with Tit for Tat, the costs of monitoring for and then punishing cheating—costly alarm systems, police salaries, and jail construction. These are superfluous in a world of no signal errors and nothing but Tit for Tat–ers, and Tit for Tat can be replaced by the cheaper Always Cooperate.

  Thus, when there are signal errors, differing costs to different strategies, and the existence of mutations, a cycle emerges: a heterogeneous population of strategies, including exploitative, noncooperative ones, are replaced by Tit for Tat, then replaced by Forgiving Tit for Tat, then by Always Cooperate—until a mutation reintroduces an exploitative strategy that spreads like wildfire, a wolf among Always Cooperate sheep, starting the cycle all over again. . . .*36 More and more modifications made the models closer to the real world. Soon the computerized game strategies were having sex with each other, which must have been the most exciting thing ever for the mathematicians involved.

 
