THE NEUROPHYSIOLOGY OF THOUGHT SUPPRESSION
One particular kind of self-deception—consciously mediated efforts at suppressing true information from consciousness—has been studied by neurophysiologists in a most revealing way. The resulting data are striking in our context: different sections of the brain appear to have been co-opted in evolution to suppress the activity of other sections to create self-deceptive thinking.
Consider the active conscious suppression of memory. In real life, we actively attempt to suppress our thoughts: I won’t think about this today; please, God, keep this woman from my mind, and so on. In the laboratory, individuals are instructed to forget an arbitrary set of symbols they have just learned. The effect of such efforts is highly variable, measured by how much is remembered a month later when people attempt to recall the symbols. This variation turns out to be associated with variation in the underlying neurophysiology. The more highly the dorsolateral prefrontal cortex (DLPFC) is activated during directed forgetting, the more it suppresses ongoing activity in the hippocampus (where memories are typically stored) and the less is remembered a month later. The DLPFC is otherwise often involved in overcoming cognitive obstacles and in planning and regulating motor activity, including suppressing unwanted responses. One is tempted to imagine that this area of the brain was co-opted for the new function of suppressing memories because it was often involved in affecting other brain areas, in particular, suppressing behavior. There is a physical component to this—I know it well. When I experience an unwanted thought and act to suppress it, I often experience an involuntary twitch in one or both of my arms, as if trying to push something down and out of sight.
THE IRONY OF TRYING TO SUPPRESS ONE’S THOUGHTS
The neurophysiological work employed meaningless strings of letters or numbers during short periods of memorization followed by short periods of attempted forgetting, with results measured a month later. But another factor operates if we try to suppress something meaningful. One might easily suppose that a conscious decision to suppress a thought (don’t think of a white bear) could easily be achieved, each recurrence of the thought suppressed more deeply so that soon enough the thought itself fails to recur. But this is not what happens. The mind seems to resist suppression, and under some conditions we do precisely what we are trying to suppress. For example, we may blurt out the very truth we are trying to hide from others, as if involuntarily or contra-voluntarily. The suppressed thought often comes back to consciousness, sometimes at the rate of once per minute, and often for days. As with the neurophysiology of thought suppression, some people are better at it than others, and some try harder. But few people are completely successful.
Two processes are thought to work simultaneously. On the one hand, there is an effort to consciously suppress the undesired thought, initially and whenever it reappears. On the other hand, an unconscious process searches for the prohibited thought, as if looking for errors, that is, for thoughts that need additional suppression. This process is itself subject to errors, especially when we are under cognitive load. When one is distracted or overburdened mentally, the unconscious search for the thought is not combined with suppression of it, so that the suppressed thought may burst forth more often than expected.
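This two-part dynamic can be pictured with a toy simulation, shown below. The sketch is purely illustrative: the minute-by-minute structure, the detection and suppression probabilities, and the function name are my own assumptions, not parameters from the research; the point is only that intrusions multiply when cognitive load disables suppression while the unconscious monitor keeps turning up the forbidden thought.

```python
import random

def simulate_suppression(minutes=60, cognitive_load=False, seed=1):
    """Toy model of thought suppression (illustrative only; all numbers assumed).

    Each minute an unconscious "monitor" may surface the forbidden thought;
    a conscious "operator" then tries to suppress it. Under cognitive load
    the operator usually fails, so more intrusions reach consciousness.
    """
    random.seed(seed)
    p_monitor_finds_thought = 0.5                            # assumed detection rate
    p_operator_suppresses = 0.1 if cognitive_load else 0.9   # assumed suppression rates
    intrusions = 0
    for _ in range(minutes):
        if random.random() < p_monitor_finds_thought:        # monitor surfaces the thought
            if random.random() >= p_operator_suppresses:     # operator fails to suppress it
                intrusions += 1                               # thought bursts into consciousness
    return intrusions

print("Intrusions without load:", simulate_suppression(cognitive_load=False))
print("Intrusions under load:  ", simulate_suppression(cognitive_load=True))
```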
IMPROVING DECEPTION THROUGH NEURAL INHIBITION
The first great advances in neurophysiology came from the ability to measure ongoing brain activity in space and time, first crudely through EEG and then more precisely through fMRI and PET scans. Now a recent method (as we saw in Chapter 1) has taken the opposite approach and selectively knocked out brain activity in particular parts of the brain to see the effects. This was achieved by applying external electrical stimulation on the scalp to inhibit brain activity directly underneath. For example, stimulation can be applied to a brain area involved in deception (at the anterior prefrontal cortex, aPFC) while a person chooses whether to lie in response to a series of questions designed to determine whether she was involved in the mock crime of stealing money from a room. Although in general we expect any artificially induced effect on life—for example, rapping a person hard on his or her knee—to be negative much more often than positive, this intervention was clearly positive where deception was concerned. At least three key components were altered in an advantageous direction. Reaction time while lying was decreased under inhibition, as was physiological arousal. So people were quicker and more relaxed. The electrical inhibition also appeared to reduce the moral conflict during lying. That is, people felt less guilt under inhibition, and the less guilt they felt, the quicker their response times. In addition, people with this area knocked out lied more frequently on relevant questions and less on irrelevant ones, thus more finely tuning their lying.
This is a very striking result. Artificially suppressing mental activity improves performance. This provides an analogy to self-deception, because the suppression of mental activity can come externally, via electrical stimulation applied to the scalp, or internally, via neuronal suppression emanating from elsewhere in the brain—via self-deception in service of deceit. The only thing we do not know is whether the external inhibition also knocked out conscious awareness of aspects of the deception, as we might well expect.
Incidentally, two recent studies in China suggest that the brains of those regarded as pathological liars show more white matter in the areas of the brain believed to be involved in deception. “White matter” refers not to the neurons themselves but to the supporting glial cells that insulate the neurons’ long, thin axonal extensions. We know from work on jugglers that the more they practice, the more white matter shows up in the “juggling center” of their brains, so this correlation with lying may result from repeated practice.
UNCONSCIOUS SELF-RECOGNITION SHOWS SELF-DECEPTION
The classic experimental work demonstrating self-deception took place some thirty years ago and involved (largely unconscious) verbal denial or projection of one’s own voice. In a brilliant series of experiments, true and false information was shown to be simultaneously stored within an individual, but with a strong bias toward the true information being hidden in the unconscious mind and the false in the conscious. In turn, people’s tendency to deny (or project) their voices could be affected by making them feel worse or better about themselves, respectively. Thus, one could argue that the self-deception was ultimately directed toward others.
The experiment was based on a simple fact of human biology. We are physiologically aroused by the sound of a human voice but even more so by the sound of our own voice (for example, as played from a tape recorder). We are unconscious of these effects. Thus one can play a game of self-recognition, in which people are asked whether a voice is their own (conscious self-recognition) while at the same time recording (via higher arousal) whether unconscious self-recognition has been achieved.
Here is how it worked. People were asked to read the same paragraph from a book. These recordings were chopped into two-, four-, six-, twelve-, and twenty-four-second segments, and a master tape was created consisting of a mixture of these segments of their own and other voices (matched for age and sex). Meantime, each individual was hooked up to a machine measuring his or her galvanic skin response (GSR), a measure of arousal that is normally twice as great for hearing one’s own voice as hearing someone else’s. People were asked to press a button to indicate that they thought the recording was of themselves and another button to indicate how sure they were.
Several interesting facts were discovered. Some people denied their own voices some of the time; this was the only kind of mistake they made, and they seemed to be unconscious of making it (when interviewed later, only one was aware of having made this mistake). And yet the skin had it correct—that is, it showed the large increase in GSR expected upon hearing one’s own voice. By contrast, another set of people heard themselves talking when they were not—they projected their voice, and this was the only error they made. Although half were aware later that they had sometimes made this mistake, the skin once again had it correct. Here unconscious self-recognition proved superior to conscious recognition. There were two other categories: those who never made mistakes and those who made both kinds, sometimes fooling even their skin, but for simplicity we neglect these two categories (about which nothing more is known, in any case).
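To make the logic of the design concrete, here is a minimal sketch in Python. The trial records, the GSR cutoff, and the function name are hypothetical, not taken from the original study; the sketch only shows how a conscious claim (“that is my voice”) can be scored against the skin’s verdict to yield the denial and projection categories just described.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    is_own_voice: bool   # ground truth: the segment really is the subject's voice
    claimed_own: bool    # button press: subject says "that's me"
    gsr: float           # galvanic skin response recorded for this segment

def classify_trial(trial: Trial, baseline_gsr: float) -> str:
    """Score one trial against the skin's unconscious verdict (hypothetical cutoff)."""
    # Assume a GSR well above baseline marks unconscious self-recognition,
    # echoing the roughly twofold rise reported for hearing one's own voice.
    skin_says_own = trial.gsr > 1.5 * baseline_gsr
    if trial.is_own_voice and not trial.claimed_own and skin_says_own:
        return "denial"       # skin recognizes self, consciousness does not
    if not trial.is_own_voice and trial.claimed_own:
        return "projection"   # hears self where self is absent
    if trial.claimed_own == trial.is_own_voice:
        return "correct"
    return "other"

# Hypothetical example: own voice, consciously denied, but GSR roughly doubles.
print(classify_trial(Trial(is_own_voice=True, claimed_own=False, gsr=2.1), baseline_gsr=1.0))  # -> denial
```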
It is well known that making people feel bad about themselves leads to less self-involvement (e.g., looking in the mirror). In the above experiment, people made to feel bad by a poor score on a pseudo-exam just taken (in fact, with grades randomly assigned) started to deny their voices. Made to feel good by a good score, they started to hear themselves talking when they were not. It was as if self-presentation was expanding under success and contracting in response to failure.
Another interesting feature—never analyzed statistically—was that deniers also showed the highest levels of arousal to all stimuli. It was as if they were primed to respond quickly, to deny the reality, and get it out of sight. By contrast, inventing reality (projecting) seems a more relaxed enterprise, with more relaxed arousal levels typical of those who make no mistakes. Perhaps reality that needs to be denied is more threatening than is the absence of reality one wishes to construct. Also, denial can be dealt with quickly, with low cognitive load, but requires an aroused state for quick detection and deletion.
There is a parallel in the way in which the brain responds to familiar faces. Some people have damage to a specific part of their brain that inhibits their ability to recognize familiar faces consciously. When asked to choose familiar over unfamiliar faces or match names with faces, the individual performs at chance levels. He or she nonetheless recognizes familiar faces unconsciously, as shown through changes in brain activity and skin conductance. When asked to state which face he or she trusts more, choice is above chance in the expected direction. Thus, there is some access to unconscious knowledge, but not much.
Can we study this in other animals? Some birds show the human pattern exactly. In playback experiments, they show greater physiological arousal to hearing their own species’ song (compared to that of others) but a stronger response still to their own voices. These birds could easily be trained to peck at a button when they recognized their own voice (this would be analogous to verbal self-recognition), while measures of physiological arousal would reveal something closer to unconscious self-recognition (GSR in humans). When birds are made to lose fights, do they start to avoid pecking at their own voices (denial), and when made to win fights, do they show the opposite effect?
CAN ONE HALF OF THE BRAIN HIDE FROM THE OTHER?
Our left and right brains, an ancient vertebrate symmetry with important effects on daily life, are connected by the corpus callosum. The two sides partly receive information independently (left ear, right brain) and also act independently (left brain runs right hand). I have often noticed that my right brain may not actively engage in a search unless the left brain makes the goal explicit by saying it out loud. That is, I will be searching for an object in the visual world or in my pockets, including left pocket, and I will not find it until I say the word out loud (“lighter”), then suddenly I spot it in my left visual field or feel it in my left pocket (this is a consequence of the brain being cross-wired—left-side information goes primarily to the right brain, which in turn controls movements by the left side). This happens, I believe, because the information I am searching for is not shared freely across the corpus callosum between the two sides of the brain but is apprehended by the right brain only when it hears the name of what is being searched for. Then suddenly the left visual field and left tactile side—under control of the right brain—are open to inspection.
Does this curious fact have anything to do with deceit and self-deception? I believe it does, because when I want to hide something from myself—for example, keys just lifted unconsciously from another person—they are promptly stored in my left pocket, where they will be slow to be discovered even when I am consciously searching for them. Likewise, I have noticed that “inadvertent” touching of women (that is, unconscious prior to the action) occurs exclusively with my left hand and comes as a surprise to my dominant left brain, which controls the right side of my body. In effect, the left brain, the linguistic side, is associated with consciousness; the right side (left hand) is less conscious.
This is supported by evidence that processes of denial—and subsequent rationalization—appear to reside preferentially in the left brain and are inhibited by the right brain. People with paralysis on the right side of the body (due to a stroke in the left brain) never or very rarely deny their condition. But a certain small percentage of those with left-side paralysis deny their stroke (anosognosia) and when confronted with strong counterevidence (film of their inability to move their left arm), they indulge in a remarkable series of rationalizations denying the cause of their paralysis (due to arthritis, not feeling very mobile today, overexercise). This is especially common and strong in individuals with large lesions to the right central side of the brain, and it is consistent with other evidence that the right brain is more emotionally honest and the left actively engaged in self-promotion. Normally people show a shorter response time to threatening words, but those with anosognosia show a longer time, demonstrating that they implicitly repress information regarding their own condition.
IMPOSED SELF-DECEPTION
So far we have spoken of self-deception evolving in the service of the actor, hiding deception and promoting an illusory self. Now consider the effects of others on us. We are highly sensitive to others, and to their opinions, desires, and actions. More to the point, they can manipulate and dominate us. This can result in self-deception being imposed on us by others (with varying degrees of force). Extreme examples are instructive. A captive may come to identify with his or her captor, an abused wife may take on the worldview of her abuser, and molested children may blame themselves for the transgressions against them. These are cases of imposed self-deception, and if they serve any function for the victim (by no means certain), they probably do so by reducing conflict with the dominant individual. At least this is often the theory of the participants themselves. An abused wife may be deeply frightened and may rationalize acquiescence as the path least likely to provoke additional severe assaults—this is most effective if actually believed.
The situations need not be nearly as extreme. Consider birds. In many small species, the male starts out dominant—he has the territory into which the female settles. And he can displace her from preferred feeding sites. But as time goes on, his dominance drops, and when she reaches the stage of egg-laying, there is a reversal: she now displaces him from preferred sites. The presumption is that the risk of extra-pair paternity and the growing importance of female parental investment shift the dominance toward her. The very same thing may often be true in human relationships.
This finding caught my attention many years ago because it appeared to capture exactly so many of my own relationships with women, one after the other—I was initially dominant but thoroughly subordinate at the end. It was only later that I noticed that the ruling system of self-deception had changed accordingly—from mine to hers. Initially, discussions were all biased in my favor, but I hardly noticed—wasn’t that the way it should be? Then came a short time when we may have spoken as equals, followed by rapid descent into her system of self-deception—I would apologize to her for what were, in fact, her failings.
Sex, for example, is an attributional nightmare—who is causing what effect on whom?—so sexual dysfunction on either or both sides can easily be seen as caused by the other person. Whether manipulated by guilt or fear of losing the relationship, you may now be practicing self-deception on behalf of someone else, not yourself—a most unenviable position.
IMPLICIT VERSUS EXPLICIT SELF-ESTEEM
Let us consider another example of imposed self-deception, one with deeper social implications. It is possible to measure something called a person’s explicit preference as well as an implicit one. The explicit measure simply asks people to state their preferences directly—for example, for so-called black people over white (to use the degraded language of the United States), where the actor is one or the other. The implicit measure is more subtle. It asks people to push a right-hand button for “white” names (Chip, Brad, Walter) or “good” words (“joy,” “peace,” “wonderful,” “happy”) and left for “black” names (Tyrone, Malik, Jamal) or “bad” words (“agony,” “nasty,” “war,” “death”)—and then reverses everything, white or bad, black or good. We now look at latencies—how long does it take an individual to respond when he or she must punch white or good versus white or bad—and assume that shorter latencies (quicker responses) mean the terms are, by implication, more strongly associated in the brain, hence the term “implicit association test” (IAT). Invented only in 1998, it has now generated an enormous literature, including (unusual for the social sciences) actual improvements in methodology. Several websites harvest enormous volumes of IAT data over the Internet (for example, at Harvard, Yale, and the University of Washington), and these studies have produced some striking findings.
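As a rough illustration of the latency logic, here is a minimal sketch using hypothetical reaction times in milliseconds. The scoring used in the published literature involves further steps (trial filtering, error penalties), but the underlying inference is the same: the faster pairing is taken to be the more strongly associated one.

```python
from statistics import mean, stdev

def iat_effect(congruent_ms, incongruent_ms):
    """Crude IAT-style score from two blocks of reaction times (milliseconds).

    congruent_ms:   latencies for one key pairing (e.g., white+good / black+bad)
    incongruent_ms: latencies for the reversed pairing
    A positive score means the reversed pairing was slower, i.e. the first
    pairing is more strongly associated for this subject.
    """
    difference = mean(incongruent_ms) - mean(congruent_ms)
    pooled_sd = stdev(congruent_ms + incongruent_ms)   # simple normalization
    return difference / pooled_sd

# Hypothetical latencies for one subject
congruent = [620, 580, 640, 600, 610]      # shared-key pairing
incongruent = [750, 720, 790, 760, 740]    # pairing reversed
print(round(iat_effect(congruent, incongruent), 2))    # larger -> stronger implicit association
```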
For example, black and white people are similar in their explicit tendency to value self over other, blacks indeed somewhat more strongly so. But when it comes to the implicit measures, whites respond even more strongly in their own favor than they do explicitly, while blacks—on average—prefer white over black, not by a huge margin but, nevertheless, they prefer other to self. This is most unexpected from an evolutionary perspective, where self is the beginning (if not end) of self-interest. To find an organism valuing (unrelated) other people more than self on an implicit measure using generic good terms, such as “pleasure” and “friend,” versus bad, such as “terrible” and “awful,” is to find an organism not obviously oriented toward its own self-interest.