Behave: The Biology of Humans at Our Best and Worst


by Robert M. Sapolsky


  —

  Okay, we’re not perfect reasoning machines. But that’s our goal, and numerous moral philosophers emphasize the preeminence of reasoning, where emotion and intuition, if they happen to show up, just soil the carpet. Such philosophers range from Kant, with his search for a mathematics of morality, to Princeton philosopher Peter Singer, who kvetches that if things like sex and bodily functions are pertinent to philosophizing, it’s time to hang up his spurs: “It would be best to forget all about our particular moral judgments.” Morality is anchored in reason.5

  YEAH, SURE IT IS: SOCIAL INTUITIONISM

  Except there’s a problem with this conclusion—people often haven’t a clue why they’ve made some judgment, yet they fervently believe it’s correct.

  This is straight out of chapter 11’s rapid implicit assessments of Us versus Them and our post-hoc rational justifications for visceral prejudice. Scientists studying moral philosophy increasingly emphasize moral decision making as implicit, intuitive, and anchored in emotion.

  The king of this “social intuitionist” school is Jonathan Haidt, whom we’ve encountered previously.6 Haidt views moral decisions as primarily based on intuition and believes reasoning is what we then use to convince everyone, including ourselves, that we’re making sense. In an apt phrase of Haidt’s, “moral thinking is for social doing,” and sociality always has an emotional component.

  The evidence for the social intuitionist school is plentiful:

  When contemplating moral decisions, we don’t just activate the eggheady dlPFC.7 There’s also activation of the usual emotional cast—amygdala, vmPFC and the related orbitofrontal cortex, insular cortex, anterior cingulate. Different types of moral transgressions preferentially activate different subsets of these regions. For example, moral quandaries eliciting pity preferentially activate the insula; those eliciting indignation activate the orbitofrontal cortex. Quandaries generating intense conflict preferentially activate the anterior cingulate. Finally, for acts assessed as equally morally wrong, those involving nonsexual transgressions (e.g., stealing from a sibling) activate the amygdala, whereas those involving sexual transgressions (e.g., sex with a sibling) also activate the insula.*

  Moreover, when such activation is strong enough, we also activate the sympathetic nervous system and feel arousal—and we know how those peripheral effects feed back and influence behavior. When we confront a moral choice, the dlPFC doesn’t adjudicate in contemplative silence. The waters roil below.

  The pattern of activation in these regions predicts moral decisions better than does the dlPFC’s profile. And this matches behavior—people punish to the extent that they feel angered by someone acting unethically.8

  People tend toward instantaneous moral reactions; moreover, when subjects shift from judging nonmoral elements of acts to moral ones, they make assessments faster, the antithesis of moral decision making being about grinding cognition. Most strikingly, when facing a moral quandary, activation in the amygdala, vmPFC, and insula typically precedes dlPFC activation.9

  Damage to these intuitionist brain regions makes moral judgments more pragmatic, even coldhearted. Recall from chapter 10 how people with damage to the (emotional) vmPFC readily advocate sacrificing one relative to save five strangers, something control subjects never do.

  Most telling is when we have strong moral opinions but can’t tell why, something Haidt calls “moral dumbfounding”—followed by clunky post-hoc rationalizing.10 Moreover, such moral decisions can differ markedly in different affective or visceral circumstances, generating very different rationalizations. Recall from the last chapter how people become more conservative in their social judgments when they’re smelling a foul odor or sitting at a dirty desk. And then there’s that doozy of a finding—knowing a judge’s opinions about Plato, Nietzsche, Rawls, and any other philosopher whose name I just looked up gives you less predictive power about her judicial decisions than knowing if she’s hungry.

  The social intuitionist roots of morality are bolstered further by evidence of moral judgment in two classes of individuals with limited capacities for moral reasoning.

  AGAIN WITH BABIES AND ANIMALS

  Much as infants demonstrate the rudiments of hierarchical and Us/Them thinking, they possess building blocks of moral reasoning as well. For starters, infants have the bias concerning commission versus omission. In one clever study, six-month-olds watched a scene containing two of the same objects, one blue and one red; repeatedly, the scene would show a person picking the blue object. Then, one time, the red one is picked. The kid becomes interested, looks more, breathes faster, showing that this seems discrepant. Now, the scene shows two of the same objects, one blue, one a different color. In each repetition of the scene, a person picks the one that is not blue (its color changes with each repetition). Suddenly, the blue one is picked. The kid isn’t particularly interested. “He always picks the blue one” is easier to comprehend than “He never picks the blue one.” Commission is weightier.11

  Infants and toddlers also have hints of a sense of justice, as shown by Kiley Hamlin of the University of British Columbia, and Paul Bloom and Karen Wynn of Yale. Six- to twelve-month-olds watch a circle moving up a hill. A nice triangle helps to push it. A mean square blocks it. Afterward the infants can reach for a triangle or a square. They choose the triangle.* Do infants prefer nice beings, or shun mean ones? Both. Nice triangles were preferred over neutral shapes, which were preferred over mean squares.

  Such infants advocate punishing bad acts. A kid watches puppets, one good, one bad (sharing versus not). The child is then presented with the puppets, each sitting on a pile of sweets. Who should lose a sweet? The bad puppet. Who should gain one? The good puppet.

  Remarkably, toddlers even assess secondary punishment. The good and bad puppets then interact with two additional puppets, who can be nice or bad. And whom did kids prefer of those second-layer puppets? Those who were nice to nice puppets and those who punished mean ones.

  Other primates also show the beginnings of moral judgments. Things started with a superb 2003 paper by Frans de Waal and Sarah Brosnan.12 Capuchin monkeys were trained in a task: A human gives them a mildly interesting small object—a pebble. The human then extends her hand palm up, a capuchin begging gesture. If the monkey puts the pebble in her hand, there’s a food reward. In other words, the animals learned how to buy food.

  Now there are two capuchins, side by side. Each gets a pebble. Each gives it to the human. Each gets a grape, very rewarding.

  Now change things. Both monkeys pay their pebble. Monkey 1 gets a grape. But monkey 2 gets some cucumber, which blows compared with grapes—capuchins prefer grapes to cucumber 90 percent of the time. Monkey 2 was shortchanged.

  And monkey 2 would then typically fling the cucumber at the human or bash around in frustration. Most consistently, they wouldn’t give the pebble the next time. As the Nature paper was entitled, “Monkeys reject unequal pay.”

  This response has since been demonstrated in various macaque monkey species, crows, ravens, and dogs (where the dog’s “work” would be shaking her paw).*13

  Subsequent work by Brosnan, de Waal, and others fleshed out this phenomenon further:14

  One criticism of the original study was that maybe capuchins refused to work for cucumbers because grapes were visible, regardless of whether the other guy was getting paid in grapes. But no—the phenomenon required unfair payment.

  Both animals are getting grapes, then one gets switched to cucumber. What’s key—that the other guy is still getting grapes, or that I no longer am? The former—if doing the study with a single monkey, switching from grapes to cucumbers would not evoke refusal. Nor would it if both monkeys got cucumbers.

  Across the various species, males were more likely than females to reject “lower pay”; dominant animals were more likely than subordinates to reject.

  It’s about the work—give one monkey a free grape, the other free cucumber, and the latter doesn’t get pissed.

  The closer in proximity the two animals are, the more likely the one getting cucumber is to go on strike.

  Finally, rejection of unfair pay isn’t seen in species that are solitary (e.g., orangutans) or have minimal social cooperation (e.g., owl monkeys).

  Okay, very impressive—other social species show hints of a sense of justice, reacting negatively to unequal reward. But this is worlds away from juries awarding money to plaintiffs harmed by employers. Instead it’s self-interest—“This isn’t fair; I’m getting screwed.”

  How about evidence of a sense of fairness in the treatment of another individual? Two studies have examined this in a chimp version of the Ultimatum Game. Recall the human version—in repeated rounds, player 1 in a pair decides how money is divided between the two of them. Player 2 is powerless in the decision making but, if unhappy with the split, can refuse, and no one gets any money. In other words, player 2 can forgo immediate reward to punish selfish player 1. As we saw in chapter 10, player 2s tend to accept 60:40 splits.

  In the chimp version, chimp 1, the proposer, has two tokens. One indicates that each chimp gets two grapes. The other indicates that the proposer gets three grapes, the partner only one. The proposer chooses a token and passes it to chimp 2, who then decides whether to pass the token to the human grape dispenser. In other words, if chimp 2 thinks chimp 1 is being unfair, no one gets grapes.

  In one such study, Michael Tomasello (a frequent critic of de Waal—stay tuned) at the Max Planck Institutes in Germany found no evidence of chimp fairness—the proposer always chose the selfish split, and the partner always accepted it.15 De Waal and Brosnan did the study in more ethologically valid conditions and reported something different: proposer chimps tended toward equitable splits, but if they could give the token directly to the human (robbing chimp 2 of veto power), they’d favor unfair splits. So chimps will opt for fairer splits—but only when there is a downside to being unfair.

  Sometimes other primates are fair when it’s at no cost to themselves. Back to capuchin monkeys. Monkey 1 chooses whether both he and the other guy get marshmallows or it’s a marshmallow for him and yucky celery for the other guy. Monkeys tended to choose marshmallows for the other guy.* Similar “other-regarding preference” was shown with marmoset monkeys, where the first individual got nothing and merely chose whether the other guy got a cricket to eat (of note, a number of studies have failed to find other-regarding preference in chimps).16

  Really interesting evidence for a nonhuman sense of justice comes in a small side study in a Brosnan/de Waal paper. Back to the two monkeys getting cucumbers for work. Suddenly one guy gets shifted to grapes. As we saw, the one still getting the cucumber refuses to work. Fascinatingly, the grape mogul often refuses as well.

  What is this? Solidarity? “I’m no strike-breaking scab”? Self-interest, but with an atypically long view about the possible consequences of the cucumber victim’s resentment? Scratch an altruistic capuchin and a hypocritical one bleeds? In other words, all the questions raised by human altruism.

  Given the relatively limited reasoning capacities of monkeys, these findings support the importance of social intuitionism. De Waal perceives even deeper implications—the roots of human morality are older than our cultural institutions, than our laws and sermons. Rather than human morality being spiritually transcendent (enter deities, stage right), it transcends our species boundaries.17

  MR. SPOCK AND JOSEPH STALIN

  Many moral philosophers believe not only that moral judgment is built on reasoning but also that it should be. This is obvious to fans of Mr. Spock, since the emotional component of moral intuitionism just introduces sentimentality, self-interest, and parochial biases. But one remarkable finding counters this.

  Relatives are special. Chapter 10 attests to that. Any social organism would tell you so. Joseph Stalin thought so concerning Pavlik Morozov ratting out his father. As do most American courts, where there is either de facto or de jure resistance to making someone testify against their own parent or child. Relatives are special. But not to people lacking social intuitionism. As noted, people with vmPFC damage make extraordinarily practical, unemotional moral decisions. And in the process they do something that everyone, from clonal yeast to Uncle Joe to the Texas Rules of Criminal Evidence considers morally suspect: they advocate harming kin as readily as strangers in an “Is it okay to sacrifice one person to save five?” scenario.18

  Emotion and social intuition are not some primordial ooze that gums up that human specialty of moral reasoning. Instead, they anchor some of the few moral judgments that most humans agree upon.

  CONTEXT

  So social intuitions can have large, useful roles in moral decision making. Should we now debate whether reasoning or intuition is more important? This is silly, not least of all because there is considerable overlap between the two. Consider, for example, protesters shutting down a capital to highlight income inequity. This could be framed as the Kohlbergian reasoning of people in a postconventional stage. But it could also be framed à la Haidt in a social intuitionist way—these are people who resonate more with moral intuitions about fairness than with respect for authority.

  More interesting than squabbling about the relative importance of reasoning and intuition are two related questions: What circumstances bias toward emphasizing one over the other? Can the differing emphases produce different decisions?

  As we’ve seen, then–graduate student Josh Greene and colleagues helped jump-start “neuroethics” by exploring these questions using the poster child of “Do the ends justify the means?” philosophizing, namely the runaway trolley problem. A trolley’s brake has failed, and it is hurtling down the tracks and will hit and kill five people. Is it okay to do something that saves the five but kills someone else in the process?

  People have pondered this since Aristotle took his first trolley ride;* Greene et al. added neuroscience. Subjects were neuroimaged while pondering trolley ethics. Crucially, they considered two scenarios. Scenario 1: Here comes the trolley; five people are goners. Would you pull a lever that diverts the trolley onto a different track, where it will hit and kill someone (the original scenario)? Scenario 2: Same circumstance. Would you push the person onto the tracks to stop the trolley?19

  By now I bet readers can predict which brain region(s) activates in each circumstance. Contemplate pulling the lever, and dlPFC activity predominates, the detached, cerebral profile of moral reasoning. Contemplate consigning the person to death by pushing them, and it’s vmPFC (and amygdala), the visceral profile of moral intuition.

  Would you pull the lever? Consistently, 60 to 70 percent of people, with their dlPFCs churning away, say yes to this utilitarian solution—kill one to save five. Would you push the person with your own hands? Only 30 percent are willing; the more the vmPFC and/or amygdaloid activation, the more likely they are to refuse.* This is hugely important—a relatively minor variable determines whether people emphasize moral reasoning or intuition, and they engage different brain circuits in the process, producing radically different decisions. Greene has explored this further.

  Are people resistant to the utilitarian trade-off of killing one to save five in the pushing scenario because of the visceral reality of actually touching the person whom they have consigned to death? Greene’s work suggests not—if instead of pushing with your hands, you push with a pole, people are still resistant. There’s something about the personal force involved that fuels the resistance.

  Are people willing in the lever scenario because the victim is at a distance, rather than right in front of them? Probably not—people are just as willing if the lever is right next to the person who will die.

  Greene suggests that intuitions about intentionality are key. In the lever scenario, the five people are saved because the trolley has been diverted to another track; the killing of the individual is a side effect and the five would still have been saved if that person hadn’t been standing on the tracks. In contrast, in the pushing scenario the five are saved because the person is killed, and the intentionality feels intuitively wrong. As evidence, Greene would give subjects another scenario: Here comes the trolley, and you are rushing to throw a switch that will halt it. Is it okay to do this if you know that in the process of lunging for the switch, you must push a person out of the way, who falls to the ground and dies? About 80 percent of people say yes. Same pushing the person, same proximity, but done unintentionally, as a side effect. The person wasn’t killed as a means to save the five. Which seems much more okay.

  Now a complication. In the “loop” scenario, you pull a lever that diverts the trolley to another track. But—oh no!—it’s just a loop; it merges back onto the original track. The trolley will still kill the five people—except that there’s a person on the side loop who will be killed, stopping the trolley. This is as intentional a scenario as is pushing with your hands—diverting to another track isn’t enough; the person has to be killed. By all logic only about 30 percent of people should sign on, but instead it’s in the 60 to 70 percent range.

  Greene concludes (from this and additional scenarios resembling the loop) that the intuitionist universe is very local. Killing someone intentionally as a means to save five feels intuitively wrong, but the intuition is strongest when the killing would occur right here, right now; doing it in more complicated sequences of intentionality doesn’t feel as bad. This is not because of a cognitive limit—it’s not that subjects don’t realize the necessity of killing the person in the loop scenario. It just doesn’t feel the same. In other words, intuitions discount heavily over space and time. Exactly the myopia about cause and effect you’d expect from a brain system that operates rapidly and automatically. This is the same sort of myopia that makes sins of commission feel worse than those of omission.

 
