Even though Marilyn vos Savant gave the correct answer in her column, she was inundated with letters, including about a thousand from people with PhDs (many in mathematics and statistics), chastising her for propagating innumeracy. The debate received a surprising amount of national attention and a front-page article in The New York Times.22
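Readers who want to convince themselves that vos Savant’s critics were wrong can simulate the game directly. The short Python sketch below (an illustration of mine, not anything from the column) plays the Monty Hall game many times and tallies how often staying and switching win; switching wins about two-thirds of the time.

    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            doors = [0, 1, 2]
            car = random.choice(doors)    # the prize is placed at random
            pick = random.choice(doors)   # the contestant's initial choice
            # Monty opens a door that hides no car and is not the pick
            opened = random.choice([d for d in doors if d not in (pick, car)])
            if switch:
                pick = next(d for d in doors if d not in (pick, opened))
            wins += (pick == car)
        return wins / trials

    print("stay:  ", play(switch=False))  # roughly 0.33
    print("switch:", play(switch=True))   # roughly 0.67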
Probability seems to be in a class by itself when it comes to mental blind spots and cognitive biases. Our intuitions simply seem to run perpendicular to the laws of probability theory. This may be in part because the assumptions on which probability theory is built are rarely met in natural settings.23 Consider the gambler’s fallacy: our intuition tells us that if the roulette wheel landed on red for the last five consecutive plays, we might want to place a bet on black, since it’s “due.” But then again, we did not evolve to optimize bets in casinos. Steven Pinker points out that “in any world but a casino, the gambler’s fallacy is rarely a fallacy. Indeed, calling our intuitive predictions fallacious because they fail on gambling devices is backwards. A gambling device is, by definition, a machine designed to defeat our intuitive predictions. It is like calling our hands badly designed because they make it hard to get out of handcuffs.”24
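Pinker’s point is easy to verify by brute force. In the sketch below (a minimal simulation of mine, ignoring the green zero for simplicity), a fair wheel shows black just as often after five consecutive reds as it does overall; the streak carries no information.

    import random

    def simulate(spins=1_000_000):
        outcomes = [random.choice("RB") for _ in range(spins)]
        # Collect the outcomes that immediately follow five reds in a row
        after_streak = [outcomes[i] for i in range(5, spins)
                        if outcomes[i-5:i] == list("RRRRR")]
        print("P(black) overall:  ", outcomes.count("B") / spins)
        print("P(black | 5 reds): ", after_streak.count("B") / len(after_streak))

    simulate()  # both numbers hover around 0.50: the wheel is never "due"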
Determining the probability that the roulette wheel will turn up black or red requires spinning it many times. Similarly, calculating the probability that a coin will land heads-up requires flipping it many times. But there is an additional implicit assumption: the properties of the roulette wheel or coin will not change over time. Coins don’t adapt or learn; they satisfy the condition of stationarity. We can safely assume that the “behavior” of the coin will be the same tomorrow as it is today. But in the natural world things are always changing. If my enemy shot 10 arrows at me and all were way off target, I’d be ill-advised to assume that the next 10 arrows will be equally harmless. Nature changes, and people and animals learn; the assumptions that are valid today are not necessarily valid tomorrow. Furthermore, in many ecologically realistic conditions we are not interested in the probability something will happen; what we care about is whether or not it will happen this one time. Will I survive if I swim across a crocodile-infested river? Will I survive if I’m bitten by the snake that just crossed my path? There are many things we don’t want to try again and again simply to establish a realistic estimate of the probability.
Perhaps one of the most famous examples of probability biases comes from cases in which people are asked to estimate or calculate an unknown probability based on other known probabilities. In one of the many different versions of these studies, the German cognitive psychologist Gerd Gigerenzer presented the following pieces of information to 160 gynecologists:25
1. The probability that a woman has breast cancer is 1%.
2. If a woman has breast cancer, the probability is 90% that she will have a positive mammogram.
3. If a woman does not have breast cancer, there is a 9% chance she will have a positive mammogram (the false-positive rate).
Next he asked: if a woman has tested positive, what is the likelihood she actually has breast cancer? This is not an academic scenario; one can easily understand why it is important for both physicians and patients to grasp the answer to this question. Gigerenzer gave the doctors four possible options, labeled A through D, to choose from.
Only 20 percent of the physicians chose the correct option, C (10%); 14 percent chose option A, 47 percent chose option B, and 19 percent chose option D. So more than half of physicians assumed that there was more than an 80 percent chance that the patient had breast cancer. Gigerenzer points out the undue anxiety that would result from patients’ false belief that their chances of having breast cancer were so high.
Where does the correct answer come from? In a sample of 1000, the great majority of women (990) do not have breast cancer; but because of the 9 percent false-positive rate (which is quite high for a medical test), 89 of these 990 cancer-free women (9 percent of 990) will nevertheless have a positive mammogram. That’s a lot of positive tests, particularly because only 10 women (1 percent of 1000) would be expected to have the disease, and of these, 9 to have a positive mammogram. Therefore, there will be a total of 98 positive tests, of which only 9 would truthfully indicate the disease—close to 10 percent. Gigerenzer went on to show that when he expressed the entire scenario in a more naturalistic manner, accuracy improved dramatically. For example, when the conditions were presented in terms of frequencies (statement 1 was reworded to read, “10 out of a population of 1000 women would be expected to have breast cancer”), most physicians (87 percent) chose the correct answer. In other words, the format used to present a problem is of fundamental importance. Gigerenzer argues that our awkward relationship with probability theory is not necessarily rooted in poor reasoning skills, but in the fact that probabilities are not often encountered in ecologically realistic settings and thus do not represent a natural input format for the brain. Nevertheless, the fact remains: we are inept at making probability judgments.
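The arithmetic can also be written out explicitly. The sketch below (my illustration, using the numbers from the scenario) computes the answer twice, once with Bayes’ rule and once with Gigerenzer’s natural frequencies, and both come out near 10 percent.

    # Quantities given in the scenario
    p_cancer = 0.01           # 1 percent of women have breast cancer
    p_pos_if_cancer = 0.90    # 90 percent of cancers yield a positive mammogram
    p_pos_if_healthy = 0.09   # 9 percent false-positive rate

    # Bayes' rule: P(cancer | positive test)
    p_pos = p_cancer * p_pos_if_cancer + (1 - p_cancer) * p_pos_if_healthy
    print(p_cancer * p_pos_if_cancer / p_pos)             # ~0.092

    # The same answer as natural frequencies in a sample of 1000 women
    true_pos = 1000 * p_cancer * p_pos_if_cancer          # 9 women
    false_pos = 1000 * (1 - p_cancer) * p_pos_if_healthy  # ~89 women
    print(true_pos / (true_pos + false_pos))              # ~0.092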
NEUROSCIENCE OF BIASES
Now that we have sampled from a selection of cognitive biases, we can ask the question psychologists and economists have been asking for decades, and philosophers have been pondering for centuries: Are humans predominantly rational or irrational beings? In this oversimplified form, the question makes as much sense as asking whether humans are violent or peaceful creatures. We are both violent and peaceful, and rational and irrational. But why? How is it that on the one hand we have made it to the moon and back, cracked atoms, and uncoiled the mysteries of life itself; yet, on the other hand, we allow our decisions to be influenced by arbitrary and irrelevant factors and are inherently ill-equipped to decide whether we should switch doors in a game show? One answer to this paradox is that many of the tasks the brain performs are not accomplished by a single dedicated system but by the interplay between multiple systems—so our decisions are the beneficiaries, and victims, of the brain’s internal committee work.
Find the oddball (unique) item in each of the panels in Figure 6.2 as quickly as possible.
Figure 6.2 Serial and parallel search.
Most people are able to spot the oddball much more quickly in the left than in the right panel. Why would this be, given that the panel on the right is simply the one on the left rotated by 90 degrees? The symbols on the left take the familiar shape of the numbers 2 and 5. You have had a lifetime of experience with these symbols, but primarily in the upright position. This experience has led to neurons in your visual system specializing in “2” and “5” detection, accounting for an automatic and rapid ability to spot the standout. The task on the right, however, relies on attention and an effortful search among the less familiar symbols.26
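In laboratory versions of this task, the two strategies leave different fingerprints: pop-out search times stay nearly flat as distractors are added, while effortful search times grow with the size of the display. The toy model below (a cartoon of that behavioral result, with made-up numbers, not a model of real neurons) makes the difference concrete.

    import random

    def serial_search(items, target):
        # Inspect the items one at a time, in random order, until the target appears
        order = random.sample(range(len(items)), len(items))
        for steps, i in enumerate(order, start=1):
            if items[i] == target:
                return steps        # cost grows with the number of items
        return len(items)

    def parallel_search(items, target):
        return 1                    # dedicated detectors check every location at once

    for n in (4, 16, 64):
        display = ["5"] * (n - 1) + ["2"]   # one oddball "2" among upright "5"s
        random.shuffle(display)
        avg = sum(serial_search(display, "2") for _ in range(1000)) / 1000
        print(n, "items -> serial ~", round(avg, 1),
              "steps; parallel ~", parallel_search(display, "2"))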
You have two strategies, or systems, at your disposal to find objects in a visual scene: an automatic one, referred to as a parallel search; and a conscious strategy, referred to as a serial search. Loosely speaking, there are also two independent yet interacting systems responsible for the decisions we make. These systems have been called the automatic (or associative) and the reflective (or rule-based) systems.27 The automatic system is related to what we think of as our intuition, and it is unconscious, rapid, associative, and effortless. It is very sensitive to context and emotions, eager to jump to conclusions, and possesses a number of biases and preconceived assumptions. But the automatic system is precisely the one we need to understand what the people around us are saying and what their intentions are. It allows us to quickly decide if it is most prudent to stop or proceed through a yellow light. In his book Blink, Malcolm Gladwell examined the wisdom and folly of the automatic system, and the fact that training can make it the keystone of expert judgments.28 Through extensive experience, art dealers, coaches, soldiers, and doctors learn to quickly evaluate situations overflowing with information and arrive at an effective assessment.
In contrast to the automatic system, the reflective system is slow and effortful, and it requires conscious thought. It can adapt quickly to mistakes, and it is flexible and deliberative. This is the system we want to engage when we are problem solving, such as when we are trying to decide which mortgage plan is best. It is the system Semmelweis used to figure out why there were so many more deaths in the First Obstetric Clinic. The reflective system ultimately grasps why we should switch doors when Monty Hall gives us the opportunity.
What do cows drink? Any initial urge to blurt out “milk” is a consequence of the automatic system, which associates cows with milk. But if you resisted that initial urge, the reflective system offered “water.” Here is another: A plastic baseball bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the bat cost?29 Most of us almost reflexively want to blurt out $1. Presumably our automatic system captures the fact that $1 + $0.10 matches the total of $1.10, but totally ignores the stipulation that the bat costs $1 more than the ball. The reflective system must come to the rescue and point out that $0.05 + $1.05 also sums to the correct total and satisfies the condition that the bat costs $1 more than the ball.
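Written out as simple algebra (my notation, not the puzzle’s), the trap is obvious:

    ball + bat = 1.10, and bat = ball + 1.00
    so ball + (ball + 1.00) = 1.10
    so 2 × ball = 0.10, giving ball = $0.05 and bat = $1.05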
We should not envision the automatic and reflective systems as distinct nonoverlapping parts of the brain, like two chips in a computer. Nevertheless, evolutionarily older parts of the brain are key players in the automatic system, and cortical areas that have recently undergone more dramatic expansion are likely pivotal to the reflective system.
The automatic system is the source of many of our cognitive biases. Does this mean the automatic system is inherently flawed, a failure of evolutionary design? No. First, the bugs in our automatic system are not a reflection of the fact that it was poorly designed, but once again, of the fact that it was designed for a time and place very different from the world we now inhabit. In this light we are “ecologically rational”—we generally make good and near-optimal decisions in evolutionarily realistic contexts.30 Second, sometimes a powerful feature is also a bug. For instance, word processors and texting devices have “autocorrect” and “autocomplete” features, which can correct misspelled words or complete the first couple of typed letters with the most likely word. But it is inevitable that the wrong words will be inserted from time to time, and our messages will be garbled if we are not on our toes. By analogy, some cognitive biases are simply the flip side of some of the brain’s most important features.
Cognitive biases have been intensely studied, and their implications vigorously debated, yet little is known about their actual causes at the level of our neural hardware. Brain-imaging studies have looked for the areas in the brain that are preferentially activated during framing or loss aversion effects.31 At best, these studies reveal the parts of the brain that may be involved in cognitive biases, not their underlying causes. Understanding how and why the brain makes good or bad decisions remains a long way off, yet the little we have learned about the basic architecture of the brain offers some clues. For instance, the similarity between some cognitive biases and priming suggests that they are a direct consequence of the associative architecture of the brain in general, and of the automatic system in particular.32
We have discussed two principles about how the brain files information about the world. First, knowledge is stored as connections between nodes (groups of neurons) that represent related concepts. Second, once a node is activated its activity “spreads” to those it connects to, increasing the likelihood they will be activated. So asking someone if she likes sushi before asking her to name a country increases the likelihood she will think of Japan: once the “sushi” node has been activated, it boosts activity in the “Japan” node. We also saw that merely exposing people to certain words can influence their behavior. People who completed word puzzles with a high proportion of “polite” words waited longer before interrupting an ongoing phone conversation than those completing puzzles with “rude” words. Somehow the words representing the concepts of politeness and rudeness weaseled their way from our semantic networks into the areas of the brain that actually control how polite or rude we are (behavioral priming). In another study people were asked to think of words related to being angry (that is, words that might be associated with being “hotheaded”), which resulted in higher guesstimates of the temperatures of foreign cities.33
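As a cartoon of how this might work, the toy network below (invented for illustration; it is not a model from the studies cited) stores concepts as nodes and associations as weighted links. Activating one node nudges its neighbors above their resting level, which is the essence of spreading activation.

    # Association strengths in a toy semantic network (values are made up)
    links = {
        "sushi": {"Japan": 0.8, "fish": 0.6},
        "Japan": {"sushi": 0.8, "Tokyo": 0.7},
    }

    activation = {node: 0.0 for node in ("sushi", "Japan", "fish", "Tokyo")}

    def activate(node, amount=1.0, spread=0.5):
        activation[node] += amount
        # A fraction of the activity spreads to every associated node
        for neighbor, weight in links.get(node, {}).items():
            activation[neighbor] += amount * weight * spread

    activate("sushi")
    print(activation)  # "Japan" is now partially active, so it comes to mind more easily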
To understand the relationship between behavioral priming and framing let’s consider a hypothetical framing experiment in which we give people $50, and then offer them two possible options:
(A) You can KEEP 49 percent of your money.
(B) You can LOSE 49 percent of your money.
You of course will pick option B (losing 49 percent means keeping 51 percent), but for argument’s sake, let’s suppose the automatic system is tempted to blurt out “let’s keep the 49 percent” until the reflective system steps in and vetoes option A. Within our semantic networks the word keep has developed associations with related concepts (save, hold, have), which by and large can be said to be emotionally positive—a “good thing.” In contrast, the word lose is generally linked with concepts (gone, defeat, fail) that would be related to negative emotions—a “bad thing.” The connections from the neurons that represent the “keep” and “lose” nodes in our semantic networks must directly or indirectly extend past our semantic network circuits to the brain centers responsible for controlling our emotions and behavior. Because option A has the word keep in it, it will tickle the circuits responsible for “good things”; the net result is that our automatic system will be nudged toward option A.
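In the same toy-network spirit (again, the numbers are invented), one can caricature the automatic system’s snap judgment as nothing more than summing the emotional valence attached to the words of each option, blind to what the percentages actually imply.

    # Made-up valence values attached to words in a toy semantic network
    valence = {"keep": +0.8, "save": +0.6, "lose": -0.8, "fail": -0.7, "money": +0.2}

    def gut_reaction(option):
        # The automatic system's verdict: sum word valences, ignore the arithmetic
        return sum(valence.get(word, 0.0) for word in option.lower().split())

    print(gut_reaction("you can keep 49 percent of your money"))  # positive
    print(gut_reaction("you can lose 49 percent of your money"))  # negative

The reflective system, of course, overrides the gut score here, because losing 49 percent leaves you with more money than keeping 49 percent does.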
Studies have demonstrated the connection between the semantic networks and the circuits responsible for emotions and actions by flashing a word that carries a positive or negative connotation on a computer screen for a mere 17 milliseconds—too quick to be consciously registered. A second later, the experimenters showed the volunteers a painting and asked them to rate how much they liked it. Paintings preceded by positive words (great, vital, lively) were rated higher than the paintings preceded by negative words (brutal, cruel, angry).34 Again, the internal representations of words are contaminating the computations taking place in other parts of the brain.
Let’s take a closer look at the anchoring bias. You may have noted that the informal experiment in which thinking of Brad Pitt’s age resulted in lowballing Joe Biden’s age is similar to a priming study, except that here a number primes the numbers close to it. Some instances of anchoring may be a form of numerical priming.35 The notion is that just as thinking of “sushi” might bias the likelihood of thinking of “Japan,” thinking of “45” makes it more likely to think of “60” than “70” when estimating Joe Biden’s age.
As we have seen, studies have shown that some neurons respond selectively to pictures of Jennifer Aniston or Bill Clinton, and we can think of these neurons as members of the “Jennifer Aniston” and “Bill Clinton” nodes. But how are numbers represented in the brain? Scientists have also recorded from neurons that respond selectively to numbers or, more accurately, to quantities (the number of items in a display). Surprisingly, these experiments were performed in monkeys. The neuroscientists Andreas Nieder and Earl Miller trained monkeys to look at a display with a certain number of dots in it, ranging from 1 to 30. One second later the monkeys viewed another picture with either the same or a different number of dots,36 and the monkeys were getting paid (in the form of juice) to decide if the number of dots was the same or different between the first and second images. They held a lever in their hands, and they had to release it if the numbers in both displays were a match, and continue to hold the lever if the quantities were different. With a lot of training the monkeys managed to perform the task fairly accurately. As represented in Figure 6.3, when a display with eight dots was followed by one with four dots they judged this to be a match only 10 percent of the time, whereas when an eight-item display was followed by another with eight items, the monkeys judged it a match 90 percent of the time. No one is suggesting that the monkeys are counting the number of dots (the images were only shown for a half second); rather, they are performing a numerical approximation (automatically estimating the number of items without actually counting them). When the experimenters recorded from individual neurons in the prefrontal cortex they found that some neurons were “tuned” to the number of items in the display. For example, one neuron might respond strongly when the monkey was viewing a display with four items, but significantly less when there were one or five items in the display (Figure 6.3). In general the tuning curves were fairly “broad,” meaning that a neuron that responds maximally to 8 items would also spike in response to 12 items, and conversely a neuron that responded maximally to 12 items would also respond to 8 items, albeit less vigorously. Therefore, the numbers 8 and 12 would be represented by different but overlapping populations of neurons, much in the same way that the written numbers 32,768 and 32,704 share some of the same digits.
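The broad tuning Nieder and Miller observed can be sketched with an idealized Gaussian tuning curve on a logarithmic scale (the functional form and parameter values below are illustrative, not fitted to their data). Two quantities are represented by overlapping populations to the extent that their tuning curves overlap.

    import math

    def response(preferred, quantity, width=0.25):
        # Idealized tuning curve: firing falls off with distance on a log scale
        return math.exp(-((math.log(quantity) - math.log(preferred)) ** 2)
                        / (2 * width ** 2))

    # A neuron tuned to 8 items still responds to 12, and vice versa
    print(response(preferred=8, quantity=8))    # 1.0 (maximal firing)
    print(response(preferred=8, quantity=12))   # weaker, but well above zero
    print(response(preferred=12, quantity=8))   # likewise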
In numerical priming the activity produced by one number “spreads” to others. We have seen in Chapter 1 that we are not sure what this spread of activity corresponds to in terms of neurons. One hypothesis is that it is a fading echo, the decaying activity levels of a neuron after the stimulus has disappeared. A not-mutually-exclusive hypothesis is that priming may occur as a result of the overlap in the representation of related concepts. Here, it is not that activity from the neurons representing “sushi” spreads to those representing “Japan,” but that some of the neurons participate in both representations, in the same manner that in the monkey experiments the same neurons participate in the representation of 8 and of 12. Let’s say you are illegally forging the numbers in a document: substituting the number 9990 for 9900 is much easier than substituting 10207, because there is more digit overlap between 9990 and 9900. Similarly, in the anchoring bias, numbers may prime similar numbers because of the overlap in the neural code used to represent them. Many of the neurons representing the number 45 will also participate in the representation of 60 and 66, but the overlap between 45 and 60 will be greater than that between 45 and 66. Assuming that recently activated neurons are more likely to be reactivated, we can see that if the “unbiased” estimate of Biden’s age was 66, this value would be “pulled down” by increased activity in the neurons that were activated by 45 when subjects were first asked Pitt’s age.
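Under the same idealization, the pull of an anchor can be made explicit by measuring how much the population codes for two numbers share (again, a toy calculation with invented parameters rather than a fitted model): the code for 45 shares more neurons with 60 than with 66, so residual activity from the anchor favors the lower estimate.

    import math

    def response(preferred, quantity, width=0.25):
        # Same idealized log-scale tuning curve as in the previous sketch
        return math.exp(-((math.log(quantity) - math.log(preferred)) ** 2)
                        / (2 * width ** 2))

    def overlap(a, b):
        # Sum, across neurons preferring 1..100 items, of the shared response to a and b
        return sum(min(response(p, a), response(p, b)) for p in range(1, 101))

    print(overlap(45, 60))  # larger overlap: 60 sits closer to the anchor of 45
    print(overlap(45, 66))  # smaller overlap: 66 is further away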