Behave: The Biology of Humans at Our Best and Worst

by Robert M. Sapolsky


  Thus these studies suggest that when a sacrifice of one requires active, intentional, and local actions, more intuitive brain circuitry is engaged, and ends don’t justify means. And in circumstances where either the harm is unintentional or the intentionality plays out at a psychological distance, different neural circuitry predominates, producing an opposite conclusion about the morality of ends and means.

  These trolleyology studies raise a larger point, which is that moral decision making can be wildly context dependent.20 Often the key thing that a change in context does is alter the locality of one’s intuitionist morals, as summarized by Dan Ariely of Duke University in his wonderful book Predictably Irrational. Leave money around a common work area and no one takes it; it’s not okay to steal money. Leave some cans of Coke and they’re all taken; the one-step distance from the money involved blunts the intuitions about the wrongness of stealing, making it easier to start rationalizing (e.g., someone must have left them out for the taking).

  The effects of proximity on moral intuitionism are shown in a thought experiment by Peter Singer.21 You’re walking by a river in your hometown. You see that a child has fallen in. Most people feel morally obliged to jump in and save the child, even if the water destroys their $500 suit. Alternatively, a friend in Somalia calls and tells you about a poor child there who will die without $500 worth of medical care. Can you send money? Typically not. The locality and moral discounting over distance is obvious—the child in danger in your hometown is far more of an Us than is this dying child far away. And this is an intuitive rather than cognitive process—if you were walking along in Somalia and saw a child fall into a river, you’d be more likely to jump in and sacrifice the suit than to send $500 to that friend making the phone call. Someone being right there, in the flesh, in front of our eyes is a strong implicit prime that they are an Us.

  Moral context dependency can also revolve around language, as noted in chapter 3.22 Recall, for example, people using different rules about the morality of cooperation if you call the same economic game the “Wall Street game” or the “community game.” Framing an experimental drug as having a “5 percent mortality rate” versus a “95 percent survival rate” produces different decisions about the ethics of using it.

  Framing also taps into the themes of people having multiple identities, belonging to multiple Us groups and hierarchies. This was shown in a hugely interesting 2014 Nature paper by Alain Cohn and colleagues at the University of Zurich.23 Subjects, who worked for an (unnamed) international bank, played a coin-toss game with financial rewards for guessing outcomes correctly. Crucially, the game’s design made it possible for subjects to cheat at various points (and for the investigators to detect the cheating).

  In one version subjects first completed a questionnaire filled with mundane questions about their everyday lives (e.g., “How many hours of television do you watch each week?”). This produced a low, baseline level of cheating.

  Then, in the experimental version, the questionnaire was about their bank job. Questions like these primed the subjects to implicitly think more about banking (e.g., they became more likely in a word task to complete “__oker” with “broker” than with “smoker”).

  So subjects were thinking about their banking identity. And when they did, rates of cheating rose 20 percent. Priming people in other professions (e.g., manufacturing) to think about their jobs, or about the banking world, didn’t increase cheating. These bankers carried in their heads two different sets of ethical rules concerning cheating (banking and nonbanking), and unconscious cuing brought one or the other to the forefront.* Know thyself. Especially in differing contexts.

  “But This Circumstance Is Different”

  The context dependency of morality is crucial in an additional realm.

  It is a nightmare of a person who, with remorseless sociopathy, believes it is okay to steal, kill, rape, and plunder. But far more of humanity’s worst behaviors are due to a different kind of person, namely most of the rest of us, who will say that of course it is wrong to do X . . . but here is why these special circumstances make me an exception right now.

  We use different brain circuits when contemplating our own moral failings (heavy activation of the vmPFC) versus those of others (more of the insula and dlPFC).24 And we consistently make different judgments, being more likely to exempt ourselves than others from moral condemnation. Why? Part of it is simply self-serving; sometimes a hypocrite bleeds because you’ve scratched a hypocrite. The difference may also reflect different emotions being involved when we analyze our own actions versus those of others. Considering the moral failings of the latter may evoke anger and indignation, while their moral triumphs prompt emulation and inspiration. In contrast, considering our own moral failings calls forth shame and guilt, while our triumphs elicit pride.

  The affective aspects of going easy on ourselves are shown when stress makes us more this way.25 When experimentally stressed, subjects make more egoistic, rationalizing judgments regarding emotional moral dilemmas and are less likely to make utilitarian judgments—but only when the latter involve a personal moral issue. Moreover, the bigger the glucocorticoid response to the stressor, the more this is the case.

  Going easy on ourselves also reflects a key cognitive fact: we judge ourselves by our internal motives and everyone else by their external actions.26 And thus, in considering our own misdeeds, we have more access to mitigating, situational information. This is straight out of Us/Them—when Thems do something wrong, it’s because they’re simply rotten; when Us-es do it, it’s because of an extenuating circumstance, and “Me” is the most focal Us there is, coming with the most insight into internal state. Thus, on this cognitive level there is no inconsistency or hypocrisy, and we might readily perceive a wrong to be mitigated by internal motives in the case of anyone’s misdeeds. It’s just easier to know those motives when we are the perpetrator.

  The adverse consequences of this are wide and deep. Moreover, the pull toward judging yourself less harshly than others easily resists the rationality of deterrence. As Ariely writes in his book, “Overall cheating is not limited by risk; it is limited by our ability to rationalize the cheating to ourselves.”

  Cultural Context

  So people make different moral judgments about the same circumstance depending on whether it’s about them or someone else, which of their identities has been primed, the language used, how many steps the intentionality is removed, and even the levels of their stress hormones, the fullness of their stomach, or the smelliness of their environment. After chapter 9 it is no surprise that moral decision making can also vary dramatically by culture. One culture’s sacred cow is another’s meal, and the discrepancy can be agonizing.

  When thinking about cross-cultural differences in morality, key issues are what universals of moral judgment exist and whether the universals or the differences are more interesting and important.

  Chapter 9 noted some moral stances that are virtually universal, whether de facto or de jure. These include condemnation of at least some forms of murder and of theft. Oh, and of some form of sexual practice.

  More broadly, there is the near universal of the Golden Rule (with cultures differing as to whether it is framed as “Do only things you’d want done to you” or “Don’t do things you wouldn’t want done to you”). Amid the power of its simplicity, the Golden Rule does not incorporate people differing as to what they would/wouldn’t want done to them; we have entered complicated terrain when we can make sense of an interchange where a masochist says, “Beat me,” and the sadist sadistically answers, “No.”

  This criticism is overcome with the use of a more generalized, common currency of reciprocity, where we are enjoined to give concern and legitimacy to the needs and desires of people in circumstances where we would want the same done for us.

  Cross-cultural universals of morality arise from shared categories of rules of moral behavior. The anthropologist Richard Shweder has proposed that all cultures recognize rules of morality pertinent to autonomy, community, and divinity. As we saw in the last chapter, Jonathan Haidt breaks this continuum into his foundations of morality that humans have strong intuitions about. These are issues related to harm, fairness and reciprocity (both of which Shweder would call autonomy), in-group loyalty and respect for authority (both of which Shweder would call community), and issues of purity and sanctity (i.e., Shweder’s realm of divinity).*27

  The existence of universals of morality raises the issue of whether that means that they should trump more local, provincial moral rules. Between the moral absolutists on one side and the relativists on the other, people like the historian of science Michael Shermer argue reasonably for provisional morality—if a moral stance is widespread across cultures, start off by giving the benefit of the doubt to its importance, but watch your wallet.28

  It’s certainly interesting that, for example, all cultures designate certain things as sacred; but it is far more so to look at the variability in what is considered sacred, how worked up people get when such sanctity is violated,* and what is done to keep such violations from reoccurring. I’ll touch on this huge topic with three subjects—cross-cultural differences concerning the morality of cooperation and competition, affronts to honor, and the reliance on shame versus guilt.

  COOPERATION AND COMPETITION

  Some of the most dramatic cross-cultural variability in moral judgment concerns cooperation and competition. This was shown to an extraordinary extent in a 2008 Science paper by a team of British and Swiss economists.

  Subjects played a “public goods” economic game where players begin with a certain number of tokens and then decide, in each of a series of rounds, how many to contribute to a shared pool; the pool is then multiplied and shared evenly among all the players. The alternative to contributing is for subjects to keep the tokens for themselves. Thus, the worst payoff for an individual player would be if they contributed all their tokens to the pool, while no other player contributed any; the best would be if the individual contributed no tokens and everyone else contributed everything. As a feature of the design, subjects could “pay” to punish other players for the size of their contribution. Subjects were from around the world.
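  The payoff logic of such a game can be sketched in a few lines. This is a minimal illustration only; the endowment, group size, and pool multiplier here are hypothetical, not the parameters of the 2008 study.

```python
# Illustrative payoff arithmetic for one round of a public goods game.
# Endowment (20 tokens), group size (4), and multiplier (1.6) are
# assumed values for the sketch, not the study's actual parameters.

def payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps unspent tokens plus an equal share of the
    multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributes fully: the group as a whole does best.
print(payoffs([20, 20, 20, 20]))  # each player ends with 32.0

# One free rider among full contributors earns the most individually,
# which is why sheer economic rationality predicts no contributions.
print(payoffs([0, 20, 20, 20]))   # free rider gets 44.0, others 24.0
```

  The arithmetic makes the dilemma visible: full cooperation beats universal defection for everyone, yet each individual does even better by withholding while others give.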

  First finding: Across all cultures, people were more prosocial than sheer economic rationality would predict. If everyone played in the most brutally asocial, realpolitik manner, no one would contribute to the pool. Instead subjects from all cultures consistently contributed. Perhaps as an explanation, subjects from all cultures punished people who made lowball contributions, and to roughly equal extents.

  Where the startling difference came was with a behavior that I’d never even seen before in the behavioral economics literature, something called “antisocial punishment.” Free-riding punishment is when you punish another player for contributing less than you (i.e., being selfish). Antisocial punishment is when you punish another player for contributing more than you (i.e., being generous).

  What is that about? Interpretation: This hostility toward someone being overly generous is because they’re going to up the ante, and soon everyone (i.e., me) will be expected to be generous. Kill ’em, they’re spoiling things for everyone. It’s a phenomenon where you punish someone for being nice, because what if that sort of crazy deviance becomes the norm and you feel pressure to be nice back?

  At one extreme were subjects from countries (the United States and Australia) where this weird antisocial punishment was nearly nonexistent. And at the mind-boggling other extreme were subjects from Oman and Greece, who were willing to spend more to punish generosity than to punish selfishness. And this was not a comparison of, say, theologians in Boston with Omani pirates. Subjects were all urban university students.

  So what’s different among these countries? The authors found a key correlation—the lower the social capital in a country, the higher the rates of antisocial punishment. In other words, when do people’s moral systems include the idea that being generous deserves punishment? When they live in a society where people don’t trust one another and feel as if they have no efficacy.

  Fascinating work has also been done specifically on people in non-Western cultures, as reported in a pair of studies by Joseph Henrich, of the University of British Columbia, and colleagues.29 Subjects were in the thousands and came from twenty-five different “small-scale” cultures from around the world—they were nomadic pastoralists, hunter-gatherers, sedentary forager/horticulturalists, and subsistence farmers/wage earners. There were two control groups, namely urbanites from Missouri and Accra, Ghana. As a particularly thorough feature of the study, subjects played three economic games: (a) The Dictator Game, where the subject simply decides how money is split between them and another player. This measures a pure sense of fairness, independent of consequence. (b) The Ultimatum Game, where you can pay to punish someone treating you unfairly (i.e., self-interested second-party punishment). (c) A third-party punishment scenario, where you can pay to punish someone treating a third party unfairly (i.e., altruistic punishment).
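  The three games isolate different facets of fairness, and their payoff rules are simple enough to write down. The sketch below is a hedged illustration only; the stakes, the punishment fee, and the 3:1 fine ratio are assumptions, not the study's actual parameters.

```python
# Illustrative payoff rules for the three games described.
# Pot sizes, fees, and the fine ratio are hypothetical values.

def dictator(pot, offer):
    """Dictator Game: one player splits the pot unilaterally;
    the recipient has no say. Measures pure fairness."""
    return pot - offer, offer

def ultimatum(pot, offer, accepted):
    """Ultimatum Game: the responder can reject an unfair offer,
    costing both players everything (self-interested,
    second-party punishment)."""
    return (pot - offer, offer) if accepted else (0, 0)

def third_party(pot, offer, punisher_fee, fine_ratio=3):
    """Third-party punishment: an observer pays a fee to fine an
    unfair dictator (altruistic punishment); the fine is a
    multiple of the fee."""
    fine = punisher_fee * fine_ratio if punisher_fee > 0 else 0
    return pot - offer - fine, offer, -punisher_fee

print(dictator(100, 50))                       # (50, 50)
print(ultimatum(100, 10, accepted=False))      # (0, 0): spite costs both
print(third_party(100, 20, punisher_fee=5))    # (65, 20, -5)
```

  Note the gradient: the Dictator Game costs the fair player nothing to be unfair, the Ultimatum Game lets the victim retaliate at a price, and third-party punishment asks an uninvolved bystander to pay out of pocket to enforce fairness.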

  B. Herrmann et al., “Antisocial Punishment Across Societies,” Sci 319 (2008): 1362.

  Visit bit.ly/2neVZaA for a larger version of this graph.

  The authors identified three fascinating variables that predicted patterns of play:

  Market integration: How much do people in a culture interact economically, with trade items? The authors operationalized this as the percentage of people’s calories that came from purchases in market interactions, and it ranged from 0 percent for the hunter-gathering Hadza of Tanzania to nearly 90 percent for sedentary fishing cultures. And across the cultures a greater degree of market integration strongly predicted people making fairer offers in all three games and being willing to pay for both self-interested second-party and altruistic third-party punishment of creeps. For example, the Hadza, at one extreme, kept an average of 73 percent of the spoils for themselves in the Dictator Game, while the sedentary fishing Sanquianga of Colombia, along with people in the United States and Accra, approached dictating a 50:50 split. Market integration predicts more willingness to punish selfishness and, no surprise, less selfishness.

  Community size: The bigger the community, the more the incidence of second- and third-party punishment of cheapskates. Hadza, for example, in their tiny bands of fifty or fewer, would pretty much accept any offer above zero in the Ultimatum Game—there was no punishment. At the other extreme, in communities of five thousand or more (sedentary agriculturalists and aquaculturalists, plus the Ghanaian and American urbanites), offers that weren’t in the ballpark of 50:50 were typically rejected and/or punished.

  Religion: What percentage of the population belonged to a worldwide religion (i.e., Christianity or Islam)? This ranged from none of the Hadza to 60 to 100 percent for all the other groups. The greater the incidence of belonging to a worldwide religion, the more third-party punishment (i.e., willingness to pay to punish person A for being unfair to person B).

  What to make of these findings?

  First the religion angle. This was a finding not about religiosity generally but about religiosity within a worldwide religion, and not about generosity or fairness but about altruistic third-party punishment. What is it about worldwide religions? As we saw in chapter 9, it is only when groups get large enough that people regularly interact with strangers that cultures invent moralizing gods. These are not gods who sit around the banquet table laughing with detachment at the foibles of humans down below, or gods who punish humans for lousy sacrificial offerings. These are gods who punish humans for being rotten to other humans—in other words, the large religions invent gods who do third-party punishment. No wonder this predicts these religions’ adherents being third-party punishers themselves.

  Next the twin findings that more market integration and bigger community size were associated with fairer offers (for the former) and more willingness to punish unfair players (for both). I find this to be a particularly challenging pair of findings, especially when framed as the authors thoughtfully did.

  The authors ask where the uniquely extreme sense of fairness comes from in humans, particularly in the context of large-scale societies with strangers frequently interacting. And they offer two traditional types of explanations that are closely related to our dichotomies of intuition versus reasoning and animal roots versus cultural inventions:

  Our moral anchoring in fairness in large-scale societies is a residue and extension of our hunter-gatherer and nonhuman primate past. This was life in small bands, where fairness was mostly driven by kin selection and easy scenarios of reciprocal altruism. As our community size has expanded and we now mostly have one-shot interactions with unrelated strangers, our prosociality just represents an expansion of our small-band mind-set, as we use various green-beard marker shibboleths as proxies for relatedness. I’d gladly lay down my life for two brothers, eight cousins, or a guy who is a fellow Packers fan.
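  The brothers-and-cousins quip (originally J. B. S. Haldane's) rests on Hamilton's rule of kin selection: an altruistic sacrifice "pays" genetically when relatedness times benefit exceeds cost. A minimal check, using the standard coefficients of relatedness for full siblings and first cousins:

```python
# Hamilton's rule arithmetic behind Haldane's quip: sacrificing
# yourself (cost = 1 life) breaks even genetically once the summed
# relatedness of those saved reaches 1.

relatedness = {"brother": 0.5, "cousin": 0.125}

print(2 * relatedness["brother"])  # 1.0: two brothers break even
print(8 * relatedness["cousin"])   # 1.0: so do eight cousins
```

  The Packers fan, of course, carries a relatedness of roughly zero; that is the point of the green-beard-shibboleth extension.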

  The moral underpinnings of a sense of fairness lie in cultural institutions and mind-sets that we invented as our groups became larger and more sophisticated (as reflected in the emergence of markets, cash economies, and the like).
