The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life


by Robert Trivers


  ANIMALS MAY BE CONSCIOUS OF DECEPTION

  Naturally one must be careful in imputing particular kinds of consciousness to other species, but some situations strongly suggest that animals are conscious of ongoing deception in some detail. Ravens, for example, have evolved a set of elaborate behaviors surrounding their tendency to cache (that is, bury and hide) food for future consumption, which can be enjoyed by another bird who happens to view the caching. Accordingly, ravens who are about to hide food seem very sensitive to just this possibility. They distance themselves from others and often cache behind a structure that obstructs others’ view. They regularly interrupt caching to look around. At any evidence they are being observed, they will usually retrieve the cached food and wait to rebury it somewhere else, preferably while not under observation. If they do cache food, they will often return within a minute or two. The watchers, in turn, stay at a safe distance, often hiding behind a tree or other object. They stop looking if the other stops caching and wait a minute or more after the bird has left before going for the cache. Hand-reared ravens can follow the human gaze by repositioning themselves to see around an obstacle. This suggests the possibility that ravens can project the sight of another individual into the distance. Likewise, when jays are caching in others’ presence, they maximize their distance from others and cache in the shade and in a confusing pattern, moving caches frequently. Experimental work shows that they remember who has watched them cache in the past and when being observed by such individuals are more likely to re-cache than when they are being watched by a newcomer—another example of intelligence evolving in the context of deception.

  In the presence of other squirrels, gray squirrels cache farther apart, build false caches, and cache with their backs turned to the other squirrels; no such responses are shown to crows who may be watching. Turning one’s back often shows up in other mammals, as well. A chimpanzee male displaying an erection to a female may turn his back when a more dominant male arrives, until his erection has subsided. Children as young as sixteen months will turn their backs to conceal the object in hand or what they are doing. I personally find it very hard, in the presence of a woman to whom I am close, to receive a phone call from another woman with whom I may have, or only wish to have, a relationship, without turning my back to pursue the conversation. This occurs even though there is nothing visual to hide and the act of turning gives me away. Perhaps this is a case of reducing cognitive dissonance—and cognitive load—by not having to watch one woman watch you while you pretend not to talk to another woman.

  In ravens, the pilferers avoid searching for known caches when in the presence of those who cache but will go immediately to the caches in the presence of a noncaching bird (that is unlikely to defend). In addition, they actively search away from the cache in the presence of the cacher, as if hiding their intentions. In one experiment, when ravens were introduced into an area where food was hidden, a subordinate male quickly developed the ability to find food, which the most dominant quickly learned to parasitize. This in turn led the subordinate to first search in areas where no food was present, to lure the dominant away, at which point the subordinate moved quickly to the food itself.

  Mantis shrimps are hard-shelled and their claws dangerous for seven weeks out of eight. On the eighth week, they are molting, and their body and claws are soft; they are unable to attack others and are vulnerable to attack by them. When encountered at this time, they greatly increase their rate of claw threats, sometimes combined with insincere lunges at the opponent. About half the time, this scares off their opponent. The other half, the soft-shelled shrimp runs for its life. The week before a mantis shrimp becomes soft-shelled, it increases its rate of claw threats but also increases the rate at which these threats are followed by actual attack, as if signaling that threats will be backed up by aggressive action just before the time when they will not.

  In fiddler crabs, the male typically has a large claw used to fight and threaten other males and to court females. Should he lose this claw, he regenerates one very similar in appearance but less effective than the original. The size of the first claw does indeed correlate (independent of body size) with claw strength as well as ability to resist being pulled from one’s burrow, but the size of the replacement claw does not, and males can’t distinguish between the two kinds of claws in an opponent.

  In primates, hiding information from others may take very active forms. For example, in both chimpanzees and gorillas, individuals will cover their faces in an apparent attempt to hide a facial expression. Gorillas in zoos have been seen to cover “play faces” (facial expressions meant to invite play) with one or both hands, and these covered faces are less likely to elicit play than uncovered play faces. Of course, a play face hidden in this fashion is hardly undetectable and may easily become a secondary signal. Chimpanzees will hide objects behind their backs that they are about to throw. They will also throw an object to one side of a tree to frighten another chimp into moving to the opposite side, where his opponent awaits him.

  DECEPTION AS AN EVOLUTIONARY GAME

  An important part of understanding deception is to understand it mathematically as an evolutionary game, with multiple players pursuing multiple strategies with various degrees of conscious and unconscious deception (in a fine-grained mixture). Contrast this with the problem of cooperation. Cooperation has been well modeled as a simple prisoner’s dilemma. Cooperation by both parties benefits each, while mutual defection hurts both, but each is better off if he defects while the other cooperates. Cheating is favored in single encounters, but cooperation may emerge much of the time if players are permitted to respond to their partner’s previous moves. This theoretical space is well explored.

  The simplest application of game theory to deception would be to treat it as a classical prisoner’s dilemma. Two individuals can tell each other the truth (both cooperate), lie (both defect), or one of each. But this cannot work. One problem is that a critical new variable becomes important—who believes whom? If you lie and I believe you, I suffer. If you lie and I disbelieve you, you are likely to suffer. By contrast, in the prisoner’s dilemma, each individual knows after each reciprocal play how the other played (cooperate or defect), and a simple reciprocal rule can operate under the humblest of conditions—cooperate initially, then do what your partner did on the previous move (tit for tat). But with deception, there is no obvious reciprocal logic. If you lie to me, this does not mean my best strategy is to lie back to you—it usually means that my best strategy is to distance myself from you or punish you.
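  To make the contrast concrete, here is a minimal sketch of the iterated prisoner's dilemma with the tit-for-tat rule just described. The payoff numbers (3 for mutual cooperation, 5 and 0 for unilateral defection, 1 for mutual defection) are standard textbook values rather than figures from this book, and the Python framing is mine.

```python
# Minimal sketch of the iterated prisoner's dilemma with tit-for-tat.
# Payoffs are standard textbook values (temptation > reward > punishment > sucker),
# chosen for illustration; they are not taken from the book.
PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, partner defects (sucker's payoff)
    ("D", "C"): 5,  # I defect against a cooperator (temptation)
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(partner_history):
    """Cooperate on the first move, then copy the partner's previous move."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []   # each player's record of the partner's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (9, 14): the defector gains only on the first round
```

  The structural point is that each player directly observes the partner's last move, so a simple reciprocal rule can do its work; with deception there is no equally observable quantity corresponding to "who believes whom," which is exactly the difficulty raised above.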

  The most creative suggestion I have heard to mathematically model deception is to adapt the ultimatum game (UG) to this problem. In the UG, a person proposes a split of, say, $100 (provided by the experimenter): $80 to self, $20 to the responder. The responder, in turn, can accept the split, in which case the money is split accordingly, or the responder can reject the offer, in which case neither party gets any money. Often the game is played as a one-shot anonymous encounter. That is, individuals play only once with people they do not know and with whom they will not interact in the future. In this situation, the game measures an individual’s sense of injustice—at what level of offer are you sufficiently offended to turn it down even though you thereby lose money? In many cultures, the 80/20 split is the break-even point at which one-half of the population turns down the offer as too unfair.

  Now imagine a modified UG in which there are two possible pots (say, $100 and $400) and both players know this. One pot is then randomly assigned to the proposer. Imagine the proposer offers you $40, which could represent 40 percent of a $100 pot (in which case you should accept) or 10 percent of a $400 pot (most people would reject). The proposer is permitted to lie and tell you that the pot is the smaller of the two when in fact it is the larger. You can trust the proposer or not, but the key is that you are permitted to pay to find out the truth from a (disinterested) third party. This measures the value you place on reducing your uncertainty regarding the proposer’s honesty.

  If you then discover that the proposer lied, you should have a moral (or, at least, moralistic) motive to reject the offer, and the other way around for the truth—all compared to uncertainty, or not paying to find out. Note that from a purely economic point of view, there is no benefit in finding out the truth, since it costs money and may lead to an (otherwise) unnecessary loss of whatever is offered. The question can then be posed: How much would a responder be prepared to pay to reduce the uncertainty and go for a possibly inconvenient truth? Note that the game can be played in real life with varying degrees of anonymity and also multiple times, as in the iterated prisoner’s dilemma. As ability to discriminate develops, the other person will benefit more from your honesty (quickly seen as such) and suffer less from deception (spotted and discarded).
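  As a rough sketch of how this modified game could be simulated, the snippet below follows the example above (a $100 or $400 pot, a $40 offer, and a lie about which pot is in play). The responder's rejection threshold and the fee charged by the third party are illustrative assumptions of mine, not values given in the text.

```python
# Sketch of the modified ultimatum game described above: two possible pots,
# a proposer who may lie about which one is in play, and a responder who can
# pay a disinterested third party to learn the truth. The rejection threshold
# and the verification fee are illustrative assumptions, not values from the text.
REJECT_BELOW_FRACTION = 0.25   # assumed: reject any offer under 25% of the believed pot
VERIFICATION_FEE = 5           # assumed: cost of asking the third party

def responder_payoff(offer, claimed_pot, true_pot, pay_to_verify):
    """Responder's monetary outcome for one round."""
    fee = VERIFICATION_FEE if pay_to_verify else 0
    believed_pot = true_pot if pay_to_verify else claimed_pot
    if offer / believed_pot < REJECT_BELOW_FRACTION:
        return -fee            # reject: neither side gets the pot, but the fee is still spent
    return offer - fee         # accept: keep the offer, minus any fee paid

# The proposer actually holds the $400 pot, claims it is the $100 pot, and offers $40.
true_pot, claimed_pot, offer = 400, 100, 40

print(responder_payoff(offer, claimed_pot, true_pot, pay_to_verify=False))  # 40
print(responder_payoff(offer, claimed_pot, true_pot, pay_to_verify=True))   # -5
```

  Run this way, verification is a pure economic loss (the responder ends up with -5 rather than 40), which is why the amount people are nonetheless willing to pay measures something other than money: the value placed on catching a lie.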

  When we add self-deception, the game quickly becomes very complicated. One can imagine actors who are:

  • Stone-cold honest (cost: information given away, naive regarding deception by others).

  • Consciously dishonest to a high degree but with low self-deception (cost: higher cognitive cost and higher cost when detected).

  • Dishonest with high self-deception (more superficially convincing at lower immediate cognitive cost but suffering later deficits and acting more often in the service of others).

  And so on.

  A DEEPER THEORY OF DECEPTION

  Those talented at the mathematics of simple games or studying them via computer simulation might find it rewarding to define a set of people along the lines just mentioned, and then assign variable quantitative effects to explore their combined evolutionary trajectory. Perhaps results will be trivial and trajectories will depend completely on the relative quantitative effects assigned to each strategy, but it is much more likely that deeper connections will emerge, seen only when the coevolutionary struggle is formulated explicitly. The general point is, of course, that there are multiple actors in this game, kept in some kind of frequency-dependent equilibrium that itself may change over time. We choose to play different roles in different situations, presumably according to the expected payoffs. Of course it is better to begin with very simple games and only add complexity as we learn more about the underlying dynamics.
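  For anyone who wants to try the kind of simulation suggested here, a minimal replicator-dynamics sketch follows, using the three actor types listed above. The payoff matrix is entirely hypothetical, chosen only for illustration; as noted above, the trajectory depends on the numbers assigned.

```python
import numpy as np

# Replicator dynamics for three hypothetical types drawn from the list above:
# 0 = stone-cold honest, 1 = consciously dishonest, 2 = dishonest with high self-deception.
# Entry A[i, j] is the payoff to type i when it meets type j. The numbers are
# illustrative assumptions only; nothing here is derived from the book.
A = np.array([
    [3.0, 1.0, 1.5],   # honest: does well among the honest, exploited by both deceiver types
    [4.0, 2.0, 2.5],   # conscious deceiver: exploits the honest, pays cognitive and detection costs
    [3.5, 2.5, 2.0],   # self-deceiver: convincing at low immediate cost, pays later costs
])

x = np.array([0.8, 0.1, 0.1])        # initial frequencies of the three types

for _ in range(2000):
    fitness = A @ x                  # expected payoff of each type against the current population
    x = x * fitness / (x @ fitness)  # discrete replicator update
    x = x / x.sum()                  # renormalize to guard against rounding drift

print(np.round(x, 3))  # with these payoffs: honest ~0, the two deceptive types ~0.5 each
```

  With these particular numbers, the honest type is driven out and the two deceptive types settle at equal frequencies, a frequency-dependent mix of the sort described above; different payoffs give different equilibria, which is the point of exploring the space.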

  It stands to reason that if our theory of self-deception rests on a theory of deception, advances in the latter will be especially valuable. I have known this for thirty years but have not been able to think of anything myself that is original regarding the deeper logic of deception, nor have I seen much progress elsewhere. Yes, signals in male/female courtship interactions may evolve toward costlier ones that are more difficult to fake (for example, antler size, physical strength, and bodily symmetry), but there is always room for deception, and many systems do not obey this simple rule regarding cost.

  CHAPTER 3

  Neurophysiology and Levels of Imposed Self-Deception

  Although study of the neurophysiology of deceit and self-deception is just beginning, there are already some interesting findings. Evidence suggests a greatly diminished role for the conscious mind in guiding human behavior. Contrary to our imagination, the conscious mind seems to lag behind the unconscious in both action and perception—it is much more observer of action than initiator. The precise details of the neurobiology of active thought suppression suggest that one part of the brain has been co-opted in evolution to suppress another part, a very interesting development if true. At the same time, evidence from social psychology makes it clear that trying to suppress thoughts sometimes produces a rebound effect, in which the thought recurs more often than before. Other work shows that suppressing neural activity in an area of the brain related to lying appears to improve lying, as if the less conscious the more successful.

  There is something called induced self-deception, in which the self-deceived person acts not for the benefit of self but for someone who is inducing the self-deception. This can be a parent, partner, kin group, society, or whatever, and it is an extremely important factor in human life. You are still practicing self-deception but not for your own benefit. Among other things, it means that we need to be on guard to avoid this fate—not defensively, via self-deception, but via greater consciousness.

  Finally, we have treated self-deception as part of an offensive strategy, but is this really true? Consider the opposite—and conventional—view, that self-deception serves a purely defensive function, for example, protecting our degree of happiness in the face of reality. An extreme form is the notion that we would not get out of bed in the morning if we knew how bad things were—we levitate ourselves out via self-deception. This makes no coherent sense as a general truth, but in practicing self-deception, we may sometimes genuinely fool ourselves for personal benefit (absent any effect on others). Placebo effects and hypnosis provide unusual examples, in that they show direct health benefits from self-deception, although this typically requires a third party, either hypnotist or doctor-model. And people can almost certainly induce positive immune effects with the help of personal self-deception, as we shall see in Chapter 6.

  THE NEUROPHYSIOLOGY OF CONSCIOUS KNOWLEDGE

  Because we live inside our conscious minds, it is often easy to imagine that decisions arise in consciousness and are carried out by orders emanating from that system. We decide, “Hell, let’s throw this ball,” and we then initiate the signals to throw the ball, shortly after which the ball is thrown. But detailed study of the neurophysiology of action shows otherwise. More than twenty years ago, it was first shown that an impulse to act begins in the brain region involved in motor preparation about six-tenths of a second before consciousness of the intention, after which there is a further delay of as much as half a second before the action is taken. In other words, when we form the conscious intention to throw the ball, areas of the brain involved in throwing have already been activated more than half a second earlier.

  Much more recent work, from 2008, gives a more dramatic picture of preconscious neural activity. The original work involved the supplementary motor area, a neural region involved in late motor planning. An important distinction is whether preparatory neural activity is related to a particular decision (throw the ball) or just activation in general (do something). A novel experiment settled the matter. While seeing a series of letters flash in front of him or her, each a half-second apart, an individual is asked to hit one of two buttons (with left or right index finger) whenever he or she feels like it and to remember which letter was seen when the conscious choice was made. After this, the subject chooses which of four letters was the one he or she saw when consciously deciding to press the button. This serves roughly to demarcate when conscious knowledge of the decision is made, since each letter is visible for only half a second and conscious knowledge of intention occurs about one second before the action itself.

  What about prior unconscious intention? Computer software can search through fMRI images (showing blood flow associated with neural activity) taken in various parts of the brain during intervals prior to action. Most strikingly, a full seven seconds before consciousness of impending action, activity occurs in the lateral and medial prefrontal cortex, quite some distance from the supplementary motor area and the motor neurons themselves. Given the slowness of the fMRI response, it is estimated that fully ten seconds before consciousness of intent, the neural signals begin that will later give rise to the consciousness and then the behavior itself. This work also helps explain earlier findings that people develop anticipatory skin conductance responses to risky decisions well before they consciously realize that such decisions are risky.

  One point is well worth emphasizing. From the time a person becomes conscious of the intent to do something (throw a ball), he or she has about a second to abort the action, and this can occur up to one hundred milliseconds before action (one-tenth of a second). These effects can themselves operate below consciousness—that is, subliminal effects operating at two hundred milliseconds before action can affect the chance of action. In that sense, the proof of a long chain of unconscious neural activity before conscious intention is formed (after which there is about a one-second delay before action) does not obviate the concept of free will, at least in the sense of being able to abort bad ideas and also being able to learn, both consciously and unconsciously, from past experience.

  On the flip side, it is now clear that consciousness requires some time for perception to occur. Put another way, a neural signal travels from the toe to the brain in about twenty milliseconds but takes twenty-five times as long, a full five hundred milliseconds (half a second), to register in consciousness. Once again, consciousness lags reality, and by a large amount, leaving plenty of time for unconscious biases to affect what enters consciousness.

  In short, the best evidence shows that our unconscious mind is ahead of our conscious mind in preparing for decisions, that consciousness occurs relatively late in the process (after about ten seconds), and that there is ample time for the decision to be aborted after consciousness (one second). In addition, incoming information requires about half a second to enter consciousness, so that the conscious mind seems more like a post-hoc evaluator and commentator upon—including rationalizing—our behavior, rather than the initiator of the behavior. Chris Rock, the comedian, says that when you meet him for the first time (conscious mind and all), you are not really meeting him—you are only meeting his representative.

 
