Another syndrome that results in a catastrophic failure of a specific ability, rather than a graceful degradation of cognition in general, is characterized by the delusional belief that familiar people, often the patient’s parents, are impostors.27 This rare condition is known as Capgras syndrome. An individual with Capgras may acknowledge that his mother looks very much like his mother, but insist that she is actually someone pretending to be his mother. Some patients with Capgras may even maintain that the person in the mirror is an impostor. In some cases patients attack the alleged impostor, as they understandably become distressed over why someone is impersonating a family member and seek to find out where their loved ones really are.28
How is it that an organ widely known for its resilience and graceful degradation sometimes undergoes epic failures such as alien hand or Capgras syndrome? One reason is that many of the computations the brain performs are modular in nature—that is, different parts of the brain specialize in performing different types of computations.
In the late 1700s, Franz Joseph Gall, a preeminent neuroanatomist, proposed that the cortex was modular—a collection of different organs each dedicated to a specific task. Gall was also the father of the “science” of phrenology. He argued that one area of the cortex was responsible for love, another for pride, others for religion, the perception of time, wit, and so on, and that the size of these areas was directly proportional to how much of the given personality trait someone possessed. Gall further maintained that it was possible to tell how big each cortical area was based on bumps on the skull. Together, these fanciful assumptions provided a convenient means to determine the true nature of people by palpating their skulls. A big protrusion on the back of the skull, and you will be a loving parent; a large bump behind the ear, and you are secretive and cunning. People consulted with phrenologists to obtain insights into the psychological profile of others and of themselves, and to determine whether couples would be compatible. The lack of any scientific foundation, together with the subjectivity of confirming whether someone was secretive or witty, made phrenology a highly profitable field for charlatans and quacks.29
In a sense Gall was right—the brain is modular. But he made a mistake that often plagues scientists. He assumed that the categories we devise to describe things are something more than that. While love, pride, secretiveness, and wit may be distinct and important personality traits, there is no real reason to assume that each has its own little brain module. We may describe a car as having style, but nobody attributes this quality to any single component of the car. The tendency to assume that the categories we use to describe human behavior at the psychological level reveal something about how the brain is structured is still with us today, among those who believe that complex personality traits such as intelligence, “novelty-seeking,” or spirituality can be attributed to single genes or localized to a single brain area.
The division of labor in the brain is best understood in the light of evolution and the manner in which the brain performs computations. We have already seen that different parts of the brain are dedicated to processing sounds and touch. While there is not a single area responsible for language, specific areas do subserve different aspects of language, such as speech comprehension and production. It is even the case that different parts of the visual system are preferentially devoted to recognizing places or faces. Similarly, there are areas that play an important, if less tangible, role in human personality. This was famously illustrated by the case of Phineas Gage. After accidentally having a rod 1 meter long and 3 centimeters thick blasted through his skull, Phineas went from being the type of person you would enjoy hanging out with to the rude, unreliable, disrespectful type most of us would go out of our way to avoid.30 Phineas Gage’s lesion affected part of the ventromedial prefrontal cortex, an area important for inhibiting socially inappropriate behaviors, among other things.
The brain’s modularity underlies the symptoms of many neurological syndromes, including the aphasias, loss of motor control, and body neglect that can emerge after strokes. The causes of alien hand syndrome and Capgras syndrome are more mysterious, but they are probably attributable to the loss of specialized subsystems in the brain. Alien hand syndrome might be the consequence of broken communication channels between the “executive” areas of the frontal cortex responsible for deciding what to do and the motor areas responsible for actually getting the job done (that is, translating goals into actual movements of the hand).31 Capgras has been suggested to be a consequence of damage to the areas that link facial recognition with emotional significance. Imagine running into someone who looks identical to a dead family member. Your reaction may be one of bewilderment, but it is unlikely that you will embrace him and have a positive emotional reaction toward this person. You recognize the face but the emotional impact of that face is not uploaded. In Capgras patients, the recognition of a parent’s face, in the absence of any feelings of love or familiarity, might reasonably lead a patient to conclude that the individual is an impostor.32
So the modules of the brain do not correspond to tidy well-defined traits like intelligence, spirituality, courage, or creativity. Most personality traits and decisions are complex multidimensional phenomena that require the integrative effort of many different areas, each of which may play an important but elusive role. We should not think of the brain’s modules as resembling the unique and nontransferable parts of a car, but rather the members of a soccer team: each player’s performance depends to a large extent on the other players’, and if one team member is lost, the others can take over with varying degrees of effectiveness.
The brain’s remarkable ability to learn, adapt, and reorganize has a flipside: in response to trauma, neural plasticity can be responsible for disorders including phantom limbs and tinnitus.33 It is not particularly surprising that brain bugs surface in response to trauma, because our neural operating system was probably never tested or “debugged” under these conditions. Cortical plasticity evolved primarily as a powerful mechanism that allowed the brain to adapt to, and shape, the world around it, not as a mechanism to cope with trauma or injury. In a red-in-tooth-and-claw world, any serious injury pretty much guaranteed that an individual would no longer be playing in the gene pool. Thus, relatively little selective pressure would have ever been placed on removing the glitches that arose from the interaction between brain plasticity and serious trauma to the body or brain.
The cockpit of an airplane has indicators and gauges about flap and landing gear positions, engine temperature, fuel level, structural integrity, and so on. Thanks to these sensors the main cockpit computer “knows” the position of the landing gear, but it does not feel the landing gear. The human body has sensors distributed throughout, which provide information to the brain regarding limb position, external temperature, fuel levels, structural integrity, and the like. What is exceptional about the brain as a computational device is that evolution has not only ensured that the brain has access to the information from our peripheral devices, but that it endowed us with conscious awareness of these devices. As you lie awake in the dark your brain does not simply verbally report the position of your left arm; it goes all out and generates a sense of ownership by projecting the feeling of your arm into the extracranial world. A glitch in this sophisticated charade is that under some circumstances—as a result of the brain’s own plasticity mechanisms gone awry—the brain can end up projecting the sensation of an arm into points in space where an arm no longer resides. This may simply be the price to be paid for body awareness—one of the most useful and extraordinary illusions the brain bestows upon us.
4
Temporal Distortions
Time is an illusion, lunch time doubly so.
—Douglas Adams
I decided that blackjack would be the ideal gambling endeavor on my first trip to Las Vegas. Surely, even I could grasp the basics of a game that consists of being dealt one card at a time in the hopes that they will add up to 21. After I received my first two cards, my job was to decide whether I should “stick” (take no more cards) or “hit” (request another) and risk “busting” (exceeding 21). My opponent was the dealer, and I was assured that her strategy was written in stone: the dealer would continue to take cards until the sum was 17 or more, at which point she would stick. In other words, the dealer played like a robot obeying a simple program—no free will required. To avoid having to actually memorize the optimal strategies, I decided to also play as a robot and use the same set of rules as the dealer. Naively it seemed to me that if I adopted the same strategy I should have a 50 percent chance of winning.
This of course was not the case. As everybody knows, the house always has the advantage, but where was it? Fortunately, Las Vegas is a city where people are eager to pass on gambling advice, so I asked around. The taxi driver assured me the dealers had the advantage because they got to see your cards, but you did not get to see theirs. An off-duty dealer informed me that it was because I had to decide whether to take a card before the dealer. But the strategy of sticking at 17 does not require looking at any cards other than your own, so who sees whose cards or who sees them first is irrelevant. Further inquiries led to a number of fascinating, albeit incorrect, answers.
When I asked for a third card and my hand added up to more than 21, the dealer immediately made it abundantly clear that the hand was over for me by picking up my cards and chips and proceeding to play with the other players at the table. When no one else wanted another card, the dealer revealed her cards and their sum, at which point I realized that she had busted. Since I had also busted, we had in effect tied, yet my chips were already gone. If we had both ended up with a total of 18, it would indeed have been a tie, and I would have gotten my chips back. The casino’s advantage is simply that the patron loses what would otherwise be a tie when both hands add up to more than 21.1 But why couldn’t I, or the others I spoke to, readily see this?
The reason is that the casino’s advantage was carefully hidden in a place (or, rather, a time) we did not think to look: in the future. Note that the dealer took my cards away immediately after I busted. At this point my brain said game over. Indeed, I could have left the table at this point without ever bothering to find out if the dealer had also busted. One of the golden rules etched into our brains is that cause comes before effect. So my brain didn’t bother looking for the cause of my loss (the house’s advantage) in the events that happened after I had stopped playing. By cleverly tapping into a mental blind spot about cause and effect, casinos play down their advantage and perpetuate the illusion of fairness.
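To get a feel for how large this hidden advantage actually is, here is a minimal simulation sketch, my own illustration rather than anything from the casino or from the account above. It assumes a heavily simplified game: an infinite deck, aces always counted as 11, no splits, doubles, or blackjack bonuses, and both the player and the dealer mechanically hitting until reaching 17.

import random

# Simplified blackjack sketch: the player mirrors the dealer's fixed rule
# (hit until reaching 17). Assumptions, not from the original account:
# infinite deck, aces always count as 11, no splits, doubles, or bonuses.

CARD_VALUES = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # 10/J/Q/K count as 10; ace as 11

def play_hand(rng):
    """Return +1 if the player wins, -1 if the house wins, 0 for a push."""
    def draw_until_17():
        total = 0
        while total < 17:
            total += rng.choice(CARD_VALUES)
        return total

    player = draw_until_17()
    if player > 21:      # the player busts first and the hand ends immediately,
        return -1        # even if the dealer would also have busted
    dealer = draw_until_17()
    if dealer > 21:
        return +1
    if player > dealer:
        return +1
    if player < dealer:
        return -1
    return 0             # equal totals of 21 or less are a push

def house_edge(n_hands=200_000, seed=1):
    rng = random.Random(seed)
    return -sum(play_hand(rng) for _ in range(n_hands)) / n_hands

if __name__ == "__main__":
    print(f"Estimated house edge: {house_edge():.1%}")

Because both sides follow the identical rule, every outcome is symmetric except one: when both hands go over 21, the player has already lost rather than pushing. The estimated edge therefore comes out equal to the probability that both hands bust, which, under these crude assumptions, is on the order of ten percent rather than the fraction of a percent that a skilled player faces in the real game.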
DELAY BLINDNESS
One does not need to learn that cause precedes effect; it is hardwired into the brain. If a rat fortuitously presses a lever and some food falls from the heavens, it naturally comes to repeat the movements it made before the miraculous event occurred, not those that came after. Two of the most basic and ubiquitous forms of learning, classical and operant conditioning, allow animals to capture the gist of cause and effect. The Russian physiologist Ivan Pavlov was the first to carefully study classical conditioning in his famous experiments. He demonstrated that dogs begin to salivate in response to a bell (the conditioned stimulus) if, in the past, the bell’s ringing consistently preceded the presentation of meat powder (the unconditioned stimulus). From the perspective of the dog, classical conditioning can be considered a quick-and-dirty cause-and-effect detector—although in practice, as far as the dog is concerned, whether the bell actually causes the appearance of the meat is irrelevant; what matters is that it predicts snack time.
Dogs are far from the only animals that learn to salivate in response to a conditioned stimulus. I learned this the hard way. Every day for a few weeks, my officemate at the University of California, San Francisco, shared one of her chewable vitamin C tablets with me (which are well suited to induce salivation due to their sourness)—out of kindness, or in the name of science? The bottle made a distinctive rattling sound every time she retrieved it from her drawer. After a few weeks, I noticed that sometimes, out of the blue, I found my mouth overflowing with saliva. Before presenting this newly found condition to a doctor, I realized that my officemate was sometimes getting a dose for herself and not giving me one. Totally unconsciously, my brain processed the conditioned stimulus (the rattling) and produced the conditioned response (salivation).
On the flipside, when Pavlov and subsequent investigators presented the sound of the bell shortly after giving the dogs the meat, the dogs did not salivate in response to the bell.2 Why should they? If anything, in that case, the meat was “causing” the bell; there is little reason to salivate in response to a bell, particularly if you are in the process of wolfing down a meal.
As in most cases of classical conditioning, the interval between the conditioned and unconditioned stimuli is short—a few seconds or less. The neural circuits responsible for classical conditioning make “assumptions” not only about the order of the stimuli but also about the appropriate delay between them. In nature, when one event causes (or is correlated with) another, the time between them is generally short, so evolution has programmed the nervous system in such a way that classical conditioning requires close temporal proximity between the conditioned and unconditioned stimuli. If Pavlov had rung the bell one hour before presenting the meat, there is no chance the dog would ever have learned to associate the bell with the meat, even though the ability of the bell to predict the occurrence of the meat would have been exactly the same.
The importance of the delay between stimuli has been carefully studied using another example of classical conditioning, called eyeblink conditioning.3 In humans, this form of associative learning typically involves watching a silent movie while wearing some specially adapted “glasses” that can blow a puff of air into the eye, reflexively causing people to blink (this method is a significant improvement over the old one in which blinking was elicited by slapping volunteers in the face with a wooden paddle). If an auditory tone is presented before each air puff, people unconsciously start blinking to the tone before the onset of the air puff.4 If the onset of the tone precedes the air puff by a half second, robust learning takes place; however, if the delay between the “cause” and “effect” is more than a few seconds, little or no classical conditioning occurs. The maximal intervals between the conditioned and unconditioned stimuli that can still result in learning are highly dependent on the animal and the stimuli involved, but if the delays are long enough, learning never occurs.
The difficulty that animals have in detecting the relationship between events that are separated by longer periods of time is also evident in operant conditioning, in which animals learn to perform an action to receive a reward. In a typical operant conditioning experiment, rats learn to press a bar to receive a pellet of food. Again, the delay between the action (cause) and the reward (effect) is critical. If the food is delivered immediately after a rat presses the lever, the rat readily learns; however, if the delay is 5 minutes, the rat does not learn the cause-and-effect relationship.5 In both cases the facts remain the same: pressing the lever results in the delivery of food. But because of the delay, animals cannot figure out the relationship.
This “delay blindness” is not limited to simple forms of associative learning, such as classical and operant conditioning. It is a general property of the nervous system that applies to many forms of learning. If a light goes on and off every time we press a button, we have no trouble establishing the causal relationship between our action and the effect. If, however, the delay is a mere five seconds—perhaps it is a slow fluorescent light—the relationship is a bit harder to detect, particularly if in our impatience we press the button multiple times.
In a hotel in Italy I found myself wondering what a little cord in the shower was for. After pulling on it a couple of times produced no observable effects, I assumed it no longer had a function or was broken. Thirty seconds later, the phone rang; only then did I realize the mysterious cord was for placing an emergency call in case you fell in the shower. But if the delay between pulling the cord and receiving a call had been five minutes, there is little doubt I would not even have remembered tinkering with the cord, much less figured out the relationship between pulling the cord and the phone ringing.
The delays between cause and effect that the brain picks up on are not absolute, but tuned to the nature of the problem at hand. We expect short intervals between seeing something fall and hearing it crash, and longer intervals between taking aspirin and our headache improving. But across the board it is vastly more difficult to detect relationships between events separated by hours, days, or years. If I take a drug for the first time, and 15 minutes later I have a seizure, I’ll have no problem suspecting that the drug was the cause. If, on the other hand, the seizure occurs one month later, there is a much lower chance I’ll establish the connection. Consider that the delay between smoking cigarettes and developing lung cancer can be decades. If cigarettes caused lung cancer within one week of one’s first cigarette, the tobacco industry would never have managed to develop into a mammoth multibillion-dollar global business.
Why is it so much harder to detect the relationship between events separated by days or months? Of course, as a general rule, the more time between two events, the more complicated and less direct the nature of the relationship. Additionally, however, our neural hardware was simply not designed to capture the relationship between two events if there is a long delay between them. The primordial forms of associative learning (classical and operant conditioning) are generally useless over time scales of hours,6 much less days, months, or years. Learning the relationship between planting seeds and growing a supply of corn, or between having sex and becoming pregnant, requires connecting dots that are many months apart. These forms of learning require cognitive abilities that far exceed those of all animals except humans. But even for us, understanding the relationship between events separated in time is a challenge. Consequently, we often fail to properly balance the short- and long-term consequences of our actions.