The Rational Animal: How Evolution Made Us Smarter Than We Think
If a woman tests positive, what are the chances that she has breast cancer?
The correct answer is roughly 10 percent. Given the odds above, if a woman tests positive, there is about a 10 percent chance that she actually has breast cancer. (If you do the math, you’ll see that nine out of one hundred women who don’t have breast cancer will be false positives, and a little less than one in one hundred will be a true positive. So just about one in ten who test positive, or 10 percent, actually has breast cancer.)
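For readers who want to check the arithmetic, here is a minimal sketch of the calculation, assuming the figures implied above (and spelled out as natural frequencies later in this chapter): roughly 1 percent of women have breast cancer, about 90 percent of women with cancer test positive, and about 9 percent of women without cancer also test positive.

```python
# A rough check of the breast cancer arithmetic, using the assumed figures:
# 1 percent base rate, 90 percent true-positive rate, 9 percent false-positive rate.
prevalence = 0.01           # share of women who have breast cancer
sensitivity = 0.90          # share of women with cancer who test positive
false_positive_rate = 0.09  # share of women without cancer who test positive

true_positives = prevalence * sensitivity                   # about 0.9 per 100 women
false_positives = (1 - prevalence) * false_positive_rate    # about 8.9 per 100 women

chance_cancer_if_positive = true_positives / (true_positives + false_positives)
print(round(chance_cancer_if_positive, 2))  # 0.09, i.e. roughly 10 percent
```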
But of the doctors who were asked this question, only 21 percent got it right. This should already be pretty disturbing, but the situation is much more disconcerting than that. For starters, these doctors were all gynecologists. These are the people who actually do mammography screening! The doctors could have simply recalled what they should already know about false positives, but that didn’t happen. Even more troubling is the range of responses. Almost half of the doctors said the likelihood of having breast cancer was 90 percent! And one in five doctors said the likelihood was only 1 percent! But here is the final kicker. The question was multiple choice—there were only four options (90 percent, 81 percent, 10 percent, and 1 percent). This means that monkeys picking answers at random would likely have done better than the doctors: random guessing is right 25 percent of the time, while only 21 percent of the doctors got it right.
The literature on judgment and decision making is overflowing with these kinds of shocking studies. It is tempting to present such findings as prime evidence of the stupidity and inability of humankind. Errors are in fact being made—gross errors by the people who should know best. But before condemning humanity as hopelessly deficient, let’s take a step back and think about the situation. The people making these errors are doctors. They’ve been full-time students receiving formal education from the age of five to thirty. And it’s not just the schooling. You have to be pretty smart and motivated to get into medical school in the first place, not to mention finish it and pass all your exams. It hardly makes sense to relegate this group to the category of stupid people.
From the evolutionary psychologist’s perspective, it’s unlikely that the brain evolved to be dumb. Instead, the problem might be not with the test takers but with the test makers. The breast cancer problem is asking us a question on a frequency our brains don’t receive. And it’s pretty important that we adjust the antenna.
COMMUNICATING ON OUR NATURAL FREQUENCY
In the modern world, we are awash in numerically expressed statistical information. You may have spent enough years in math classes to cognitively understand that a 0.07 probability and a 7 percent likelihood are the same thing, but many of us will still furrow our brows and squint our eyes when digesting a statement about a 0.07 probability. Probabilities and likelihood estimates are a common way to present statistical information, but they are also an evolutionarily recent invention. Mathematical probabilities were invented in Europe in the mid-1600s. And thanks to this statistical renaissance, we now have a really smart way to present numbers—so smart that probabilities often outsmart even us.
Gerd Gigerenzer, a decision scientist at the Max Planck Institute, is not a fan of mathematical probabilities or likelihood estimates. He has long realized that trying to understand probabilities and likelihoods is the evolutionary equivalent of writing as compared to talking—an unnatural and difficult variant of something that’s easy in another format. Hence, statistics presented in probability format can lead to a lot of problems. Just as even well-educated writers have problems spelling words like “dumbbell,” “embarrass,” and “misspell,” smart doctors can have problems figuring out the likelihood that you have breast cancer if your mammogram comes back positive.
Instead of presenting information as conditional probabilities or likelihood estimates, Gigerenzer has demonstrated that people are much better at computing statistical information if it’s presented in terms of natural frequencies. “Natural frequencies represent the way ancestral humans encoded information,” Gigerenzer explains. Whereas probabilities are like writing, natural frequencies are like talking.
Let’s take a boat upriver to the Shiwiar village. Imagine that the village chief wants to catch dinner today, and he’s trying to decide whether it would be worthwhile to go hunting in the nearby red canyon. For the Shiwiar, as for most of our ancestors, the only database available to make any kind of calculation consists of their own observations and those communicated by a handful of close others. When the chief is trying to determine whether it’s wise to go hunting in the red canyon, he can consider what happened the last twenty times people went hunting there. The chief observes natural frequencies—five out of the last twenty hunts in the red canyon were successful. He doesn’t think in terms of probabilities, though. Neither did our ancestors, who did not observe probabilities in their natural environment. As a consequence, our brains do not process probabilities (“0.25 probability of success”) in the same way as they do natural frequencies (“5 out of 20 were successful”). Years of formal math training have taught most of us that these two statistical statements mean the same thing, but just as decades of writing practice haven’t made spell-checkers obsolete, the training never makes the probability format feel natural.
Gigerenzer has found dramatic improvements in both novices and experts when hard questions are asked in terms of natural frequencies rather than probabilities. Take the probability-laced breast cancer question asked earlier—the one that dumbfounded our panel of doctors. Here is the same exact information translated into natural frequencies:
• Ten out of every one thousand women have breast cancer.
• Of these ten women with breast cancer, nine test positive.
• Of the 990 women without breast cancer, about 89 also test positive.
If a woman tests positive, what are the chances that she has breast cancer?
When Gigerenzer asked doctors this question, the difference was remarkable. Whereas only 21 percent of doctors answered correctly when the question was presented in terms of probabilities, 87 percent answered correctly when it was presented in terms of natural frequencies. One question is hard; the other is easy—even though, to a mathematician, both are asking the same exact thing.
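Here, as a point of comparison, is what the same calculation looks like once the numbers are in natural-frequency form; the figures are simply those from the bullet points above.

```python
# In natural frequencies the problem collapses to a single division:
# of the 9 + 89 = 98 women (out of 1,000) who test positive, 9 have cancer.
women_with_cancer_who_test_positive = 9
women_without_cancer_who_test_positive = 89

print(women_with_cancer_who_test_positive /
      (women_with_cancer_who_test_positive + women_without_cancer_who_test_positive))
# ~0.09, again roughly 10 percent
```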
And remember the Linda problem mentioned earlier? It suffers from the same problem: it asks people a simple question in terms of complex probabilities. Below is the Linda problem translated into natural frequencies:
Researchers polled one hundred women with the following features. They are on average thirty-one years old, single, outspoken, and very bright. They majored in philosophy. As students, they were deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Which is larger?
A. The number of women out of that one hundred that might be bank tellers.
B. The number of women out of that one hundred that might be bank tellers and active in the feminist movement.
Whereas only about 10 percent of people answer the Linda problem correctly when it is asked in probability format as presented earlier, almost 100 percent get the right answer when it’s presented in natural frequency format. Mathematically, both versions ask the exact same question. But the first version is confusing and leads to errors, whereas the second one is surprisingly easy.
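For the mathematically inclined, here is a toy sketch of why the frequency framing makes the answer obvious: every woman counted in option B is also counted in option A, so B can never be the larger number. The population below is invented at random purely for illustration.

```python
import random

# Invent one hundred hypothetical women; the attributes and probabilities
# here are made up for illustration and carry no empirical weight.
women = [
    {"bank_teller": random.random() < 0.1, "feminist": random.random() < 0.6}
    for _ in range(100)
]

option_a = sum(w["bank_teller"] for w in women)                    # bank tellers
option_b = sum(w["bank_teller"] and w["feminist"] for w in women)  # tellers who are also feminists

# Option B counts a subset of the women counted in option A,
# so it can never come out larger, however the population looks.
assert option_b <= option_a
print(option_a, option_b)
```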
TAPPING THE ANCESTRAL WISDOM OF OUR DIFFERENT SUBSELVES
To tap into the innate intelligence of the human brain, we need to understand how the mind expects to take in information. Because our brains are designed to receive information in the way our ancestors would have received it, people will be much better problem solvers when problems are presented in ancestral formats—such as presenting math problems by using natural frequencies (five out of one hundred) rather than probabilities (0.05).
We should also expect that people will be superb problem solvers when it comes to ancestral challenges—solving the types of evolutionary problems faced by our subselves. And because each of our subselves specializes in different types of problems, we should be able to improve our reasoning abilities by making complex problems relevant to our different subselves. In the remainder of the chapter, we look at two cases that unlock the wisdom of our inner team player—the affiliation subself.
DETECTING CHEATERS
Cognitive psychologists have developed some particularly difficult problems to test people’s abilities in deciphering what’s known as conditional logic. A classic problem of this sort is known as the Wason Task:
Figure 5.2 shows four cards. Each card has a number on one side and a letter on the other. Which card(s) should you turn over in order to test whether the following rule is true: If a card has an even number on one side, then it must have a consonant on the other side?
Figure 5.2. The Wason Task
The correct answer is that you need to turn over two cards: the card with the “8” and the card with the “A.” The “8” might hide a vowel, and the “A” might hide an even number; either would break the rule. You don’t need to turn over any other card. Whatever letter is behind the “3,” the rule is safe, because it says nothing about odd numbers. Likewise, whatever number is behind the “B,” the rule cannot be broken, because “B” is already a consonant.
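If it helps to see that reasoning laid out mechanically, here is a small sketch of it in code; it is only an illustration of the logic, not anything from the original studies.

```python
# A card is worth turning only if whatever is hidden on its other side
# could possibly break the rule "if even number, then consonant".
VOWELS = set("AEIOU")

def must_turn(visible: str) -> bool:
    if visible.isdigit():
        # A visible even number might hide a vowel, which would break the rule.
        return int(visible) % 2 == 0
    # A visible vowel might hide an even number, which would break the rule.
    # A visible consonant can never break it, whatever number is behind it.
    return visible.upper() in VOWELS

print([card for card in ["8", "3", "A", "B"] if must_turn(card)])  # ['8', 'A']
```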
Don’t feel too bad if you didn’t get the right answer. People are not very good at these kinds of problems. Only about 10 percent of college students get the right answer. The Shiwiar of the Amazon got it right 0 percent of the time. We can take comfort in knowing that the payoff from many years of formal education is a boost in those test scores—to 10 percent. And in case you’re wondering, paying over $40,000 per year for a Harvard education may improve scores even further—all the way up to 12 percent.
The Wason card problem is difficult for most people, in the same way that learning to write is difficult. Unless you took advanced conditional reasoning in college, it’s not that easy to derive the right answer—it’s like asking an illiterate person to write an essay. Even philosophy professors and mathematicians tend to get the answer wrong (in fact, we wrote the wrong answer when writing this book and had to go back and correct it two separate times).
But what if there were a way to make this problem less like doing ballet and more like walking? Leda Cosmides, an evolutionary psychologist at the University of California, Santa Barbara, has figured out how to do just that. Although solving abstract logic problems is evolutionarily novel and therefore difficult, Cosmides suspected that humans have been solving all sorts of complex logical problems for hundreds of thousands of years. As it turns out, solving one of those ancestral problems requires the same exact complex logic as solving the Wason card problem.
This ancestral problem is a specialty of our affiliation subself, which is a master manager of life in social groups. Here’s why: Living in social groups brings many advantages. With ten heads working on a problem, the odds of discovering a solution rise dramatically. But anyone who has ever worked on a group project knows that there’s also a downside to working in groups—some people end up taking the credit without doing their part. When our ancestors were living in small groups and facing frequent dangers of starvation, it was critical to figure out which people were taking more than they were giving. Having a couple of group members who ate their share of the food but didn’t do their share of the hunting and gathering could mean the difference between starvation and survival. Our ancestors needed to be good at detecting social parasites—the cheaters in our midst.
Leda Cosmides realized that the logical reasoning people use to detect cheaters involves the exact same logical reasoning needed to solve the Wason card problem. Let’s revisit the card problem, except this time we’ll state the problem in a way that allows the affiliation subself to process this information in the way our ancestors did:
Figure 5.3 shows four cards. Each card has a person’s age on one side and the beverage he or she is drinking on the other. Which card(s) do you need to turn over in order to test whether the following rule is true: If a person is drinking alcohol, he or she must be over eighteen?
Figure 5.3. The Cheater Detection version of the Wason Task
When presented with this translated version of the problem, most people instantly get the right answer: turn over the card with “16” and the card with “beer,” while leaving the other two cards unturned. It’s pointless to turn over the card with “21” because we know that this person isn’t a cheater. It’s similarly useless to turn over the card of the person who’s drinking a Coke because that person did not receive the benefit that could come from cheating.
The complex logic required to solve the cheater problem is mathematically identical to the complex logic needed to solve the Wason card problem we gave you earlier. Cosmides tried dozens of versions of the problem, always finding the same results. Regardless of whether the question was about familiar things like the drinking age or unfamiliar things like the right to eat cassava roots, if the problem involved detecting a cheater, most people became brilliant logicians.
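To make the “mathematically identical” claim concrete, the same kind of check can be restated for the drinking-age version above; only the surface content changes, not the logic.

```python
# The same test, reworded for the drinking-age rule:
# "if a person is drinking alcohol, he or she must be over eighteen".
def must_turn_drinking(visible: str) -> bool:
    if visible.isdigit():
        # Someone not over eighteen might be hiding an alcoholic drink.
        return int(visible) <= 18
    # Someone drinking beer might be hiding an under-age number;
    # a Coke drinker cannot be cheating, whatever age is behind the card.
    return visible.lower() == "beer"

print([card for card in ["16", "21", "beer", "Coke"] if must_turn_drinking(card)])  # ['16', 'beer']
```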
Larry Sugiyama, the anthropologist we met at the beginning of the chapter, had the Shiwiar try to solve this problem. Whereas the Shiwiar solved the original Wason card task 0 percent of the time, they solved the evolutionarily translated version 83 percent of the time. This was in fact one point better than Harvard students, so in an intellectual Olympics, the unschooled Shiwiar would have beaten the well-educated Cambridge team on the natural version of this problem.
Cosmides and Sugiyama demonstrated something extremely important. It’s not the case that people are incapable of doing complex logic. Instead, most academic problems are written in such a way that they never engage the sophisticated talents of our subselves. It’s like asking a car mechanic to solve the problem of lifting a car not by using a jack but by demonstrating the answer in terms of mathematical vectors and energy exchange.
One of us just tried the cheater detection problem on our seven-year-old son, who has yet to be educated in multiplication, much less conditional probability. He had a difficult time understanding the Wason card task, wanting to turn over every card. But when the problem was framed in terms of people who either had or had not paid the fee to play a special Lego Universe computer game, he nailed the answer easily. Whereas the original card task is tough, like writing an essay about family relations in the modern world, the evolutionarily translated version is like chatting with a neighbor about how your kids are doing. Writing is hard, and talking is easy, even when you are attempting to communicate the same exact thing.
THE LARGE NUMBERS PARADOX
Imagine you learn that there was a plane crash, and all two hundred people aboard perished. You wouldn’t be human if you didn’t feel some sadness and grief in response to this news. Now imagine instead that it was a larger plane, and the crash resulted in six hundred fatalities. How would you feel?
Most people would again feel grief and sadness, but they wouldn’t feel three times as much grief and sadness. In fact, people experience about the same level of emotion in both situations—and sometimes they experience less emotion when more people perish.
This is known as the large numbers paradox. You can find it all around you. Many Americans, for example, are outraged when they learn that the US military presence in Iraq and Afghanistan in the first decade of the twenty-first century cost taxpayers over $1 billion. But those people wouldn’t feel much more outrage if they were told that the endeavor cost over $1 trillion—even though the latter amount is over a thousand times greater and, in fact, closer to the actual cost. It’s mathematically equivalent to the difference between the clerk at the corner store charging you $4 versus $4,000 for the same sandwich. Yet when government expenses are multiplied a thousand times, people don’t get any angrier!
To understand this paradox we need to navigate through the jungle back to the world of the Shiwiar. The Shiwiar live in small villages of about fifty to one hundred people. Each villager knows most of these people, many of whom are relatives or close friends. The range of fifty to one hundred is important because it appears again and again around the globe and across time. Modern hunter-gatherers, from Africa to South America to Oceania, live in bands of about fifty to one hundred people. If you were the first explorer to contact a tribe of people who had never seen an outsider, smart money says you’d find about fifty to one hundred people in that tribe. Archaeological evidence suggests that if you traveled back in time one hundred thousand years to visit your great, great, great . . . great-grandmother, you’d likely find her living in a nomadic band of fifty to one hundred people. Today many of us live in cities of millions. Yet our social networks—the people we interact with—still include about fifty to one hundred people.
If you went up to a Shiwiar chief and told him that six hundred people might perish, he’d probably scratch his head and say, “Hunh? What does this mean?” Hunter-gatherer societies tend to have very few words for numbers and amounts. You might find “one,” “two,” “a few,” “tribe size.” Were we to start discoursing about 200, 600, 1 million, 1 billion, the Shiwiar’s eyes would likely glaze over (just as ours do at the distinction between a billion and a trillion).
Making reasoned decisions involving calculations with large numbers is an evolutionarily novel concept—it’s like writing, not talking. While it’s perhaps amusing to think about how the Shiwiar might respond to large numbers, keep in mind that your brain is pretty much evolutionarily identical to a Shiwiar’s brain, which is in turn similar to the brains of the common ancestors of all human beings, who migrated out of Africa approximately fifty thousand years ago. We are all modern cavemen. Many of us have spent long years in math classes, and we cognitively comprehend that we need to add three zeros to turn a thousand into a million. Yet, as the paradoxical plane crash and taxpayer examples illustrate, our brains get a bit numb when numbers get big. What’s a light-year again, or what’s 10¹² nanometers? To our brains, the very big number is a fuzzy concept devoid of evolutionary relevance. This really matters if we want to understand people’s erroneous and irrational decisions. Asking the average person to reason out logical questions laced with large numbers is a bit like asking him or her to perform Swan Lake at the Metropolitan Opera House.