The Elephant in the Brain: Hidden Motives in Everyday Life

by Kevin Simler and Robin Hanson


  Of course, these aren’t mutually exclusive. Any particular act of self-deception might serve multiple purposes at once. When the mother of an alleged murderer is convinced that her son is innocent, she’s playing Loyalist to her son and Cheerleader to the jury. The prizefighter who is grossly overconfident about his odds of winning is playing both Cheerleader (to his fans, teammates, and other supporters) and Madman (to his opponent).

  MODULARITY

  The benefit of self-deception is that it can, in some scenarios, help us mislead others. But what about its costs?

  As we’ve mentioned, the main cost is that it leads to suboptimal decision-making. Like the general who erases the mountain range on the map, then leads the army to a dead end, self-deceivers run the risk of acting on false or missing information.

  Luckily, however, we don’t have to bear the full brunt of our own deceptions. Typically, at least part of our brain continues to know the truth. In other words, our saving grace is inconsistency.

  “To understand most important ideas in psychology,” says social psychologist Jonathan Haidt in The Happiness Hypothesis, “you need to understand how the mind is divided into parts that sometimes conflict.” He goes on:

  We assume that there is one person in each body, but in some ways we are each more like a committee whose members have been thrown together to do a job, but who often find themselves working at cross purposes.35

  There are dozens of schemes for how to divide up the mind. The Bible identifies the head and the heart. Freud gives us the id, ego, and superego. Iain McGilchrist differentiates the analytical left brain from the holistic right brain,36 while Douglas Kenrick gives us seven “subselves”: Night Watchman, Compulsive Hypochondriac, Team Player, Go-Getter, Swinging Single, Good Spouse, and Nurturing Parent.37 Meanwhile, the next generation is growing up on Pixar’s Inside Out, which portrays the mind as a committee of five different emotional personalities.

  None of these schemes is unequivocally better or more accurate than the others. They’re just different ways of slicing up the same complex system—the reality of which is even more fragmented than the “committee” metaphor suggests. Psychologists call this modularity. Instead of a single monolithic process or small committee, modern psychologists see the brain as a patchwork of hundreds or thousands of different parts or “modules,” each responsible for a slightly different information-processing task. Some modules take care of low-level tasks like detecting edges in the visual field or flexing a muscle. Others are responsible for medium-sized operations like walking and conjugating verbs. Still higher-level modules (which are themselves composed of many lower-level modules) are responsible for things like detecting cheaters38 and managing our social impressions.

  The point is that there are many different systems in the brain, each connected to other systems but also partially isolated from each other. The artificial intelligence researcher Marvin Minsky famously described this arrangement as the “society of mind.”39 And like a society, there are different ways to carve it up for different purposes. Just as America can be broken down in terms of political factions (liberals vs. conservatives), geography (urban vs. rural, coastal vs. heartland), or generations (Baby Boomers, Gen Xers, Millennials), the mind can also be carved up in many different ways.

  And crucially, as Haidt stressed, the different parts don’t always agree. A fact might be known to one system and yet be completely concealed or cut off from other systems. Or different systems might contain mutually inconsistent models of the world.

  This is illustrated rather dramatically by the rare but well-documented condition known as blindsight, which typically follows from some kind of brain damage, like a stroke to the visual cortex. Just like people who are conventionally blind, blindsighted patients swear they can’t see. But when presented with flashcards and forced to guess what’s on the card, they do better than chance. Clearly some parts of their brains are registering visual information, even if the parts responsible for conscious awareness are kept in the dark.40

  What this means for self-deception is that it’s possible for our brains to maintain a relatively accurate set of beliefs in systems tasked with evaluating potential actions, while keeping those accurate beliefs hidden from the systems (like consciousness) involved in managing social impressions. In other words, we can act on information that isn’t available to our verbal, conscious egos. And conversely, we can believe something with our conscious egos without necessarily making that information available to the systems charged with coordinating our behavior.

  No matter how fervently a person believes in Heaven, for example, she’s still going to be afraid of death. This is because the deepest, oldest parts of her brain—those charged with self-preservation—haven’t the slightest idea about the afterlife. Nor should they. Self-preservation systems have no business dealing with abstract concepts. They should run on autopilot and be extremely difficult to override (as the difficulty of committing suicide attests41). This sort of division of mental labor is simply good mind design. As psychologists Douglas Kenrick and Vladas Griskevicius put it, “Although we’re aware of some of the surface motives for our actions, the deep-seated evolutionary motives often remain inaccessible, buried behind the scenes in the subconscious workings of our brains’ ancient mechanisms.”42

  Thus the very architecture of our brains makes it possible for us to behave hypocritically—to believe one set of things while acting on another. We can know and remain ignorant, as long as it’s in separate parts of the brain.43

  SELF-DISCRETION

  Self-discretion is perhaps the most important and subtle mind game that we play with ourselves in the service of manipulating others. This is our mental habit of giving less psychological prominence to potentially damaging information. It differs from the most blatant forms of self-deception, in which we actively lie to ourselves (and believe our own lies). It also differs from strategic ignorance, in which we try our best not to learn potentially dangerous information.

  Picture the mind as a society of little modules, systems, and subselves chattering away among themselves. This chatter is largely what constitutes our inner mental life, both conscious and unconscious. Self-discretion, then, consists of discretion among different brain parts. When part of the brain has to process a sensitive piece of information—wanting to get the upper hand in a particular interaction, for example—it doesn’t necessarily make a big conscious fuss about it. Instead, we might just feel vaguely uneasy until we’ve gained the upper hand, whereupon we’ll feel comfortable ending the conversation. At no point does the motive “Get the upper hand” rise to full conscious attention, but the same result is accomplished discreetly.

  Information is sensitive in part because it can threaten our self-image and therefore our social image. So the rest of the brain conspires—whispers—to keep such information from becoming too prominent, especially in consciousness. In this sense, the Freuds were right: the conscious ego needs to be protected. Not because we’re fragile, but to keep damaging information from leaking out of our brain and into the minds of our associates.

  Self-discretion can be very subtle. When we push a thought “deep down” or to the “back of our minds,” it’s a way of being discreet with potentially damaging information. When we spend more time and attention dwelling on positive, self-flattering information, and less time and attention dwelling on shameful information, that’s self-discretion.

  Think about that time you wrote an amazing article for the school paper, or gave that killer wedding speech. Did you feel a flush of pride? That’s your brain telling you, “This information is good for us! Let’s keep it prominent, front and center.” Dwell on it, bask in its warm glow. Reward those neural pathways in the hope of resurfacing those proud memories whenever they’re relevant.

  Now think about the time you mistreated your significant other, or when you were caught stealing as a child, or when you botched a big presentation at work. Feel the pang of shame? That’s your brain telling you not to dwell on that particular information. Flinch away, hide from it, pretend it’s not there. Punish those neural pathways, so the information stays as discreet as possible.44

  GETTING OUR BEARINGS

  In summary, our minds are built to sabotage information in order to come out ahead in social games. When big parts of our minds are unaware of how we try to violate social norms, it’s more difficult for others to detect and prosecute those violations. This also makes it harder for us to calculate optimal behaviors, but overall, the trade-off is worth it.

  Of all the things we might be self-deceived about, the most important are our own motives. It’s this special form of self-deception that we turn to in the next chapter.

  6

  Counterfeit Reasons

  “Reason is . . . the slave of the passions, and can never pretend to any other office than to serve and obey them.”—David Hume1

  “A man always has two reasons for doing anything: a good reason and the real reason.”—J. P. Morgan2

  Let’s briefly take stock of the argument we’ve been making so far. In Chapter 2, we saw how humans (and all other species for that matter) are locked in the game of natural selection, which often rewards selfish, competitive behavior. In Chapter 3, we looked at social norms and saw how they constrain our selfish impulses, but also how norms can be fragile and hard to enforce. In Chapter 4, we looked at the many and subtle ways that humans try to cheat by exploiting the fragility of norm enforcement, largely by being discreet about bad behavior. In Chapter 5, we took a closer look at the most subtle and intriguing of all these norm-evasion techniques: self-deception. “We deceive ourselves,” as Robert Trivers says, “the better to deceive others”—in particular, to make it harder for others to catch and prosecute us for behaving badly.

  Together, these instincts and predispositions make up the elephant in the brain. They’re the facts about ourselves, our behaviors, and our minds that we’re uncomfortable acknowledging and confronting directly. It’s not that we’re entirely or irredeemably selfish and self-deceived—just that we’re often rewarded for acting on selfish impulses, but less so for acknowledging them, and that our brains respond predictably to those incentives.

  In this chapter, we turn our attention to one particular type of self-deception: the fact that we’re strategically ignorant about our own motives. In other words, we don’t always know the “whys” behind our own behavior. But as we’ll see, we certainly pretend to know.

  “I WANTED TO GO GET A COKE”

  In the 1960s and early 1970s, neuroscientists Roger Sperry and Michael Gazzaniga conducted some of the most profound research in the history of psychology—a series of experiments that would launch Gazzaniga into an illustrious career as the “father” of cognitive neuroscience,3 and for which Sperry would eventually win the Nobel Prize in 1981.

  In terms of method, the experiments were fairly conventional: an image was flashed, some questions were asked, that sort of thing. What distinguished these experiments were their subjects. These were patients who had previously, for medical reasons, undergone a corpus callosotomy—a surgical severing of the nerves that connect the left and right hemispheres of the brain. Hence the nickname for these subjects: split-brain patients.

  Until Sperry and Gazzaniga’s experiments, no one had noticed anything particularly strange about split-brain patients. They walked around leading seemingly normal lives, and neither their doctors nor their loved ones—nor the patients themselves—suspected that much was amiss.

  But things were amiss, in a rather peculiar way, as Sperry and Gazzaniga were about to find out.

  In order to understand their research, it helps to be familiar with two basic facts about the brain. The first is that each hemisphere processes signals from the opposite side of the body. So the left hemisphere controls the right side of the body (the right arm, leg, hand, and everything else), while the right hemisphere controls the left side of the body. This is also true for signals from the ears—the left hemisphere processes sound from the right ear, and vice versa. With the eyes it’s a bit more complicated, but the upshot is that when a patient is looking straight ahead, everything to the right—in the right half of the visual field—is processed by the left hemisphere, and everything to the left is processed by the right hemisphere.4

  The second key fact is that, after a brain is split by a callosotomy, the two hemispheres can no longer share information with each other. In a normal (whole) brain, information flows smoothly back and forth between the hemispheres, but in a split-brain, each hemisphere becomes an island unto itself—almost like two separate people within a single skull.5

  Now, what Sperry and Gazzaniga did, in a variety of different experimental setups, was ask the right hemisphere to do something, but then ask the left hemisphere to explain it.

  In one setup, they flashed a split-brain patient two different pictures at the same time, one to each hemisphere. The left hemisphere, for example, saw a picture of a chicken while the right hemisphere saw a picture of a snowy field. The researchers then asked the patient to reach out with his left hand and point to a word that best matched the picture he had seen. Since the right hemisphere had seen the picture of the snowy field, the left hand pointed to a shovel—because a shovel goes nicely with snow.

  No surprises here. But then the researchers asked the patient to explain why he had chosen the shovel. Explanations, and speech generally, are functions of the left hemisphere, and thus the researchers were putting the left hemisphere in an awkward position. The right hemisphere alone had seen the snowy field, and it was the right hemisphere’s unilateral decision to point to the shovel. The left hemisphere, meanwhile, had been left completely out of the loop, but was being asked to justify a decision it took no part in and wasn’t privy to.

  From the point of view of the left hemisphere, the only legitimate answer would have been, “I don’t know.” But that’s not the answer it gave. Instead, the left hemisphere said it had chosen the shovel because shovels are used for “cleaning out the chicken coop.” In other words, the left hemisphere, lacking a real reason to give, made up a reason on the spot. It pretended that it had acted on its own—that it had chosen the shovel because of the chicken picture. And it delivered this answer casually and matter-of-factly, fully expecting to be believed, because it had no idea it was making up a story. The left hemisphere, says Gazzaniga, “did not offer its suggestion in a guessing vein but rather as a statement of fact.”6

  In another setup, Sperry and Gazzaniga asked a patient—by way of his right hemisphere (left ear)—to stand up and walk toward the door. Once the patient was out of his chair, they then asked him, out loud, what he was doing, which required a response from his left hemisphere. Again this put the left hemisphere in an awkward position.

  Now, we know why the patient got out of his chair—because the researchers asked him to, via his right hemisphere. The patient’s left hemisphere, however, had no way of knowing this. But instead of saying, “I don’t know why I stood up,” which would have been the only honest answer, it made up a reason and fobbed it off as the truth:

  “I wanted to go get a Coke.”

  RATIONALIZATION

  What these studies demonstrate is just how effortlessly the brain can rationalize its behavior. Rationalization, sometimes known to neuroscientists as confabulation, is the production of fabricated stories without any conscious intention to deceive. They’re not lies, exactly, but neither are they the honest truth.

  Humans rationalize about all sorts of things: beliefs, memories, statements of “fact” about the outside world. But few things seem as easy for us to rationalize as our own motives. When we make up stories about things outside our minds, we open ourselves up to fact-checking. People can argue with us: “Actually, that’s not what happened.” But when we make up stories about our own motives, it’s much harder for others to question us—outside of a psychology lab, at least. And as we saw in Chapter 3, we have strong incentives to portray our motives in a flattering light, especially when they’re the subject of norm enforcement.

  Rationalization is a kind of epistemic forgery, if you will. When others ask us to give reasons for our behavior, they’re asking about our true, underlying motives. So when we rationalize or confabulate, we’re handing out counterfeit reasons (see Box 5). We’re presenting them as an honest account of our mental machinations, when in fact they’re made up from scratch.

  Box 5: “Motives” and “Reasons”

  When we use the term “motives,” we’re referring to the underlying causes of our behavior, whether we’re conscious of them or not. “Reasons” are the verbal explanations we give to account for our behavior. Reasons can be true, false, or somewhere in between (e.g., cherry-picked).

  Even more dramatic examples of rationalization can be elicited from patients suffering from disability denial,7 a rare disorder that occasionally results from a right-hemisphere stroke. In a typical case, the stroke will leave the patient’s left arm paralyzed, but—here’s the weird part—the patient will completely deny that anything is wrong with his arm, and will manufacture all sorts of strange (counterfeit) excuses for why it’s just sitting there, limp and lifeless. The neuroscientist V. S. Ramachandran recalls some of the conceptual gymnastics his patients have undertaken in this situation:

 
