Behave: The Biology of Humans at Our Best and Worst

by Robert M. Sapolsky


  This issue, namely how to jump-start and then maintain cooperation in a sea of noncooperators, ran through all of chapter 10 and, as shown in the widespread existence of social species that cooperate, this is solvable (stay tuned for more in the final chapter). When framed in the context of morality, averting the tragedy of the commons requires getting people in groups to not be selfish; it is an issue of Me versus Us.

  But Greene outlines a second type of tragedy. Now there are two different groups of shepherds, and the challenge is that each group has a different approach to grazing. One, for example, treats the pasture as a classic commons, while the other believes that the pasture should be divided up into parcels of land belonging to individual shepherds, with high, strong fences in between. In other words, mutually contradictory views about using the pasture.

  The thing that fuels the danger and tragedy of this situation is that each group has such a tightly reasoned structure in their heads as to why their way is correct that it can acquire moral weight, be seen as a “right.” Greene dissects that word brilliantly. For each side, perceiving themselves as having a “right” to do things their way mostly means that they have slathered enough post-hoc, Haidtian rationalizations on a shapeless, self-serving, parochial moral intuition; have lined up enough of their gray-bearded philosopher-king shepherds to proclaim the moral force of their stance; feel in the most sincere, pained way that the very essence of what they value and who they are is at stake, that the very moral rightness of the universe is wobbling; all of that so strongly that they can’t recognize the “right” for what it is, namely “I can’t tell you why, but this is how things should be done.” To cite a quote attributed to Oscar Wilde, “Morality is simply the attitude we adopt towards people whom we personally dislike.”

  It’s Us versus Them framed morally, and the importance of what Greene calls “the Tragedy of Commonsense Morality” is shown by the fact that most intergroup conflicts on our planet ultimately are cultural disagreements about whose “right” is righter.

  This is an intellectualized, bloodless way of framing the issue. Here’s a different way.

  Say I decide that it would be a good thing to have pictures here demonstrating cultural relativism, displaying an act that is commonsensical in one culture but deeply distressing in another. “I know,” I think, “I’ll get some pictures of a Southeast Asian dog-meat market; like me, most readers will likely resonate with dogs.” Good plan. On to Google Images, and the result is that I spend hours transfixed, unable to stop, torturing myself with picture after picture of dogs being carted off to market, dogs being butchered, cooked, and sold, pictures of humans going about their day’s work in a market, indifferent to a crate stuffed to the top with suffering dogs.

  I imagine the fear those dogs feel, how they are hot, thirsty, in pain. I think, “What if these dogs had come to trust humans?” I think of their fear and confusion. I think, “What if one of the dogs whom I’ve loved had to experience that? What if this happened to a dog my children loved?” And with my heart racing, I realize that I hate these people, hate every last one of them and despise their culture.

  And it takes a locomotive’s worth of effort for me to admit that I can’t justify that hatred and contempt, that mine is a mere moral intuition, that there are things that I do that would evoke the same response in some distant person whose humanity and morality are certainly no less than mine, and that but for the randomness of where I happen to have been born, I could have readily had their views instead.

  The thing that makes the tragedy of commonsense morality so tragic is the intensity with which you just know that They are deeply wrong.

  In general, our morally tinged cultural institutions—religion, nationalism, ethnic pride, team spirit—bias us toward our best behaviors when we are single shepherds facing a potential tragedy of the commons. They make us less selfish in Me versus Us situations. But they send us hurtling toward our worst behaviors when confronting Thems and their different moralities.

  The dual-process nature of moral decision making gives some insights into how to avert these two very different types of tragedies.

  In the context of Me versus Us, our moral intuitions are shared, and emphasizing them hums with the prosociality of our Us-ness. This was shown in a study by Greene, David Rand of Yale, and colleagues, where subjects played a one-shot public-goods game that modeled the tragedy of the commons.34 Subjects were given differing lengths of time to decide how much money they would contribute to a common pot (versus keeping it for themselves, to everyone else’s detriment). And the less time subjects had to decide, the more cooperative they were. Ditto if you primed subjects to value intuition (by having them relate a time when intuition led them to a good decision or when careful reasoning did the opposite)—more cooperation. Conversely, instruct subjects to “carefully consider” their decision, or prime them to value reflection over intuition, and they’d be more selfish. The more time to think, the more time to do a version of “Yes, we all agree that cooperation is a good thing . . . but here is why I should be exempt this time”—what the authors called “calculated greed.”
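
  To make the commons structure of that game concrete, here is a minimal sketch in Python (my own illustration; the endowment, multiplier, and group size are hypothetical, not the parameters of the actual study):

```python
# Minimal sketch of a one-shot public-goods game (hypothetical parameters,
# not the actual Rand/Greene protocol). Each player starts with an endowment,
# chooses a contribution, and the pooled pot is multiplied and split evenly.

def payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Each player keeps whatever they did not contribute, plus an
    equal share of the multiplied common pot."""
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    return [endowment - c + share for c in contributions]

print(payoffs([10, 10, 10, 10]))  # all cooperate: [20.0, 20.0, 20.0, 20.0]
print(payoffs([0, 10, 10, 10]))   # one free rider: [25.0, 15.0, 15.0, 15.0]
```

  Because each contributed dollar returns only multiplier/n (here 0.5) to the contributor, keeping the money is always individually better, even though universal cooperation leaves everyone richest; that tension is what makes the game a model of the commons.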

  What would happen if subjects played the game with someone screamingly different, as different a human as you could find by the subject’s own standards of comfort and familiarity? While the study hasn’t been done (and would obviously be hard to do), you’d predict that fast, intuitive decisions would overwhelmingly be in the direction of easy, unconflicted selfishness, with “Them! Them!” xenophobia alarms ringing and automatic beliefs of “Don’t trust Them!” instantly triggered.

  When facing Me-versus-Us moral dilemmas of resisting selfishness, our rapid intuitions are good, honed by evolutionary selection for cooperation in a sea of green-beard markers.35 And in such settings, regulating and formalizing the prosociality (i.e., moving it from the realm of intuition to that of cogitation) can even be counterproductive, a point emphasized by Samuel Bowles.*

  In contrast, when doing moral decision making during Us-versus-Them scenarios, keep intuitions as far away as possible. Instead, think, reason, and question; be deeply pragmatic and strategically utilitarian; take their perspective, try to think what they think, try to feel what they feel. Take a deep breath, and then do it all again.*

  Veracity and Mendacity

  The question rang out, clear and insistent, a question that could not be ignored or evaded. Chris swallowed once, tried for a voice that was calm and steady, and answered, “No, absolutely not.” It was a bald-faced lie.

  Is this a good thing or bad thing? Well, it depends on what the question was: (a) “When the CEO gave you the summary, were you aware that the numbers had been manipulated to hide the third-quarter losses?” asked the prosecutor. (b) “Is this a toy you already have?” asked Grandma tentatively. (c) “What did the doctor say? Is it fatal?” (d) “Does this outfit make me look ____ ?” (e) “Did you eat the brownies that were for tonight?” (f) “Harrison, are you harboring the runaway slave named Jack?” (g) “Something’s not adding up. Are you lying about being at work late last night?” (h) “OMG, did you just cut one?”

  Nothing better typifies the extent to which the meanings of our behaviors are context dependent. Same untruth, same concentration on controlling your facial expression, same attempt to make just the right amount of eye contact. And depending on the circumstance, this could be us at our best or worst. On the converse side of context dependency, sometimes being honest is the harder thing—telling an unpleasant truth about another person activates the medial PFC (along with the insula).*36

  Given these complexities, it is no surprise that the biology of honesty and duplicity is very muddy.

  As we saw in chapter 10, the very nature of competitive evolutionary games selects for both deception and vigilance against it. We even saw protoversions of both in social yeast. Dogs attempt to deceive one another, with marginal success—when a dog is terrified, fear pheromones emanate from his anal scent glands, and it’s not great if the guy you’re facing off against knows you’re scared. A dog can’t consciously choose to be deceptive by not synthesizing and secreting those pheromones. But he can try to squelch their dissemination by putting a lid on those glands, by putting his tail between his legs—“I’m not scared, no siree,” squeaked Sparky.

  No surprise, nonhuman primate duplicity takes things to a whole other level.37 If there is a good piece of food and a higher-ranking animal nearby, capuchins will give predator alarm calls to distract the other individual; if it is a lower-ranking animal, no need; just take the food. Similarly, if a low-ranking capuchin knows where food has been hidden and there is a dominant animal around, he will move away from the hiding place; if it’s a subordinate animal, no problem. The same is seen in spider monkeys and macaques. And other primates don’t just carry out “tactical concealment” about food. When a male gelada baboon mates with a female, he typically gives a “copulation call.” Unless he is with a female who has snuck away from her nearby consortship male. In which case he doesn’t make a sound. And, of course, all of these examples pale in comparison with what politico chimps can be up to. Reflecting deception as a task requiring lots of social expertise, across primate species, a larger neocortex predicts higher rates of deception, independent of group size.*

  That’s impressive. But it is highly unlikely that there is conscious strategizing on the part of these primates. Or that they feel bad or even morally soiled about being deceptive. Or that they actually believe their lies. For those things we need humans.

  The human capacity for deception is enormous. We have the most complex innervation of facial muscles and use massive numbers of motor neurons to control them—no other species can be poker-faced. And we have language, that extraordinary means of manipulating the distance between a message and its meaning.

  Humans also excel at lying because our cognitive skills allow us to do something beyond the means of any perfidious gelada baboon—we can finesse the truth.

  A cool study shows our propensity for this. To simplify: A subject would roll a die, with different results yielding different monetary rewards. The rolls were made in private, with the subject reporting the outcome—an opportunity to cheat.

  Given chance and enough rolls, if everyone were honest, each number would be reported about one sixth of the time. If everyone always lied for maximal gain, every reported roll would have been the highest-paying number.

  There was lots of lying. Subjects were over 2,500 college students from twenty-three countries, and higher rates of corruption, tax evasion, and political fraud in a subject’s country predicted higher rates of lying. This is no surprise, after chapter 9’s demonstration that high rates of rule violations in a community decrease social capital, which then fuels individual antisocial behavior.

  What was most interesting was that across all the cultures, lying was of a particular type. Subjects actually rolled a die twice, and only the first roll counted (the second, they were told, tested whether the die was “working properly”). The lying showed a pattern that, based on prior work, could be explained by only one thing—people rarely made up a high-paying number. Instead they simply reported the higher roll of the two.
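
  That pattern leaves a statistical fingerprint. Here is a rough sketch in Python (my own illustration, not the study’s actual analysis, and it assumes for simplicity that higher faces pay more) of how “report the higher of two rolls” differs from honest reporting:

```python
# Rough sketch (not the study's analysis): distribution of reported numbers
# under honest reporting versus "report the higher of two rolls."
from fractions import Fraction

# Honest: each face of a fair die is reported 1/6 of the time.
honest = {k: Fraction(1, 6) for k in range(1, 7)}

# Higher of two rolls: P(max = k) = (2k - 1) / 36, skewed toward high faces.
higher_of_two = {k: Fraction(2 * k - 1, 36) for k in range(1, 7)}

for k in range(1, 7):
    print(f"{k}: honest {float(honest[k]):.3f}, "
          f"higher-of-two {float(higher_of_two[k]):.3f}")
```

  Under that strategy the top face gets reported 11/36 of the time (about 31 percent) rather than one sixth, yet far short of the 100 percent that always claiming the best number would produce; it is that intermediate skew that betrays the “justified” lying.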

  You can practically hear the rationalizing. “Darn, my first roll was a 1 [a bad outcome], my second a 4 [better]. Hey, rolls are random; it could just as readily have been 4 as a 1, so . . . let’s just say I rolled a 4. That’s not really cheating.”

  In other words, lying most often included rationalizing that made it feel less dishonest—not going whole hog for that filthy lucre, so that your actions feel like only slightly malodorous untruthiness.

  When we lie, regions involved in Theory of Mind naturally activate, particularly in circumstances of strategic social deception. Moreover, the dlPFC and related frontal regions are central to a neural circuit of deception. And then insight grinds to a halt.38

  Back to the theme introduced in chapter 2 of the frontal cortex, and the dlPFC in particular, getting you to do the harder thing when it’s the right thing to do. And in our value-free sense of “right,” you’d expect the dlPFC to activate when you’re struggling to do (a) the morally right thing, which is to avoid the temptation to lie, as well as (b) the strategically right thing, namely, once having decided to lie, doing it effectively. It can be hard to deceive effectively, having to think strategically, carefully remember what lie you’re actually saying, and create a false affect (“Your Majesty, I bring terrible, sad news about your son, the heir to the throne [yeah, we ambushed him—high fives!]”).* Thus activation of the dlPFC will reflect both the struggle to resist temptation and the executive effort to wallow effectively in the temptation, once you’ve lost that struggle. “Don’t do it” + “if you’re going to do it, do it right.”

  This confusion arises in neuroimaging studies of compulsive liars.*39 What might one expect? These are people who habitually fail to resist the temptation of lying; I bet they have atrophy of something frontocortical. These are people who habitually lie and are good at it (and typically have high verbal IQs); I bet they have expansion of something frontocortical. And the studies bear out both predictions—compulsive liars have increased amounts of white matter (i.e., the axonal cables connecting neurons) in the frontal cortex, but lesser amounts of gray matter (i.e., the cell bodies of the neurons). It’s not possible to know if there’s causality in these neuroimaging/behavior correlates. All one can conclude is that frontocortical regions like the dlPFC show multiple and varied versions of “doing the harder thing.”

  You can dissociate the frontal task of resisting temptation from the frontal task of lying effectively by taking morality out of the equation.40 This is done in studies where people are told to lie. (For example, subjects are given a series of pictures; later they are shown an array of pictures, some of which are identical to ones in their possession, and asked, “Is this a picture you have?” A signal from the computer indicates whether the subject should answer honestly or lie.) In this sort of scenario, lying is most consistently associated with activation of the dlPFC (along with the nearby and related ventrolateral PFC). This is a picture of the dlPFC going about the difficult task of lying effectively, minus worrying about the fate of its neuronal soul.

  The studies tend to show activation of the anterior cingulate cortex (ACC) as well. As introduced in chapter 2, the ACC responds to circumstances of conflicting choices. This occurs for conflict in an emotional sense, as well as in a cognitive sense (e.g., having to choose between two answers when both seem to work). In the lying studies the ACC isn’t activating because of moral conflict about lying, since subjects were instructed to lie. Instead, it’s monitoring the conflict between reality and what you’ve been instructed to report, and this gums up the works slightly; people show minutely longer response times during lying trials than during honest ones.

  This delay is useful in polygraph tests (i.e., lie detectors). In the classic form, the test detected arousal of the sympathetic nervous system, on the logic that someone who is lying is anxious about getting caught. The trouble is that you’d get the same anxious arousal if you’re telling the truth but your life’s over if that fallible machine says otherwise. Moreover, sociopaths are undetectable, since they don’t get anxiously aroused when lying. Plus subjects can take countermeasures to manipulate their sympathetic nervous system. As a result, this use of polygraphs is no longer admissible in courts. Contemporary polygraph techniques instead home in on that slight delay, on the physiological indices of that ACC conflict—not the moral one, since some miscreant may have no moral misgivings, but the cognitive conflict—“Yeah, I robbed the store, but no, wait, I have to say that I didn’t.” Unless you thoroughly believe your lie, there’s likely to be that slight delay, reflecting the ACC-ish cognitive conflict between reality and your claim.

  Thus, activation of the ACC, dlPFC, and nearby frontal regions is associated with lying on command.41 At this point we have our usual issue of causality—is activation of, say, the dlPFC a cause, a consequence, or a mere correlate of lying? To answer this, transcranial direct-current stimulation has been used to inactivate the dlPFC in people during instructed-lying tasks. Result? Subjects were slower and less successful in lying—implying a causal role for the dlPFC. And to remind us of how complicated this issue is, people with damage to the dlPFC are less likely to take honesty into account when honesty and self-interest are pitted against each other in an economic game. So this most eggheady, cognitive part of the PFC is central to both resisting lying and, once having decided to lie, doing it well.

  This book’s focus is not really how good a liar someone is. It’s whether we lie, whether we do the harder thing and resist the temptation to deceive. For more understanding of that, we turn to a pair of thoroughly cool neuroimaging studies where subjects who lied did so not because they were instructed to but because they were dirty rotten cheaters.

  The first was carried out by the Swiss scientists Thomas Baumgartner, Ernst Fehr (whose work has been noted previously), and colleagues.42 Subjects played an economic trust game where, in each round, you could be cooperative or selfish. Beforehand a subject would tell the other player what their strategy would be (always/sometimes/never cooperate). In other words, they made a promise.

  Some subjects who promised to always cooperate broke their promise at least once. At such times there was activation of the dlPFC, the ACC, and, of course, the amygdala.*43

  A pattern of brain activation before each round’s decision predicted breaking of a promise. Fascinatingly, along with predictable activation of the ACC, there’d be activation of the insula. Does the scoundrel think, “I’m disgusted with myself, but I’m going to break my promise”? Or is it “I don’t like this guy because of X; in fact, he’s kind of disgusting; I owe him nothing; I’m breaking my promise”? While it’s impossible to tell, given our tendency to rationalize our own transgressions, I’d bet it’s the latter.

 
