* * *
Amos had agreed to spend the 1970–71 academic year at Stanford University, and so he and Danny, who remained in Israel, were apart. They used the year to collect data. The data consisted entirely of answers to curious questions that they had devised. Their questions were first posed to high school students in Israel—Danny sent out twenty or so Hebrew University graduate students in taxis to scour the entire country for unsuspecting Israeli children. (“We were running out of kids in Jerusalem.”) The graduate students gave each kid two to four of what must have seemed to them totally bizarre questions, and a couple of minutes to answer each of them. “We had multiple questionnaires,” said Danny, “because no one child could do the whole thing.”
Consider the following question:
All families of six children in a city were surveyed. In 72 families the exact order of births of boys and girls was G B G B B G.
What is your estimate of the number of families surveyed in which the exact order of births was B G B B B B?
That is, in this hypothetical city, if there were 72 families with 6 children born in the following order—girl, boy, girl, boy, boy, girl—how many families with 6 children do you imagine have the birth order boy, girl, boy, boy, boy, boy? Who knows what Israeli high school students made of the strange question, but fifteen hundred of them supplied answers to it. Amos posed other, equally weird, questions to college students at the University of Michigan and Stanford University. For example:
On each round of a game, 20 marbles are distributed at random among five children: Alan, Ben, Carl, Dan, and Ed. Consider the following distributions:
        I            II
Alan    4    Alan    4
Ben     4    Ben     4
Carl    5    Carl    4
Dan     4    Dan     4
Ed      3    Ed      4
In many rounds of the game, will there be more results of type I or type II?
They were trying to determine how people judged—or, rather, misjudged—the odds of any situation when the odds were hard, or impossible, to know. All the questions had right answers and wrong answers. The answers that their subjects supplied could be compared to the right answer, and their errors investigated for patterns. “The general idea was: What do people do?” said Danny. “What actually is going on when people judge probability? It’s a very abstract concept. They must be doing something.”
Amos and Danny didn’t have much doubt that a lot of people would get the questions they had dreamed up wrong—because Danny and Amos had gotten them, or versions of them, wrong. More precisely, Danny made the mistakes, noticed that he made the mistakes, and theorized about why he had made the mistakes, and Amos became so engrossed by both Danny’s mistakes and his perceptions of those mistakes that he at least pretended to have been tempted to make the same ones. “We kicked it around, and our focus became our intuitions,” said Danny. “We thought that errors we did not make ourselves were not interesting.” If they both committed the same mental errors, or were tempted to commit them, they assumed—rightly, as it turned out—that most other people would commit them, too. The questions they had spent the year cooking up for the students in Israel and the United States were not so much experiments as they were little dramas: Here, look, this is what the uncertain human mind actually does.
At a very young age, Amos had recognized a distinction within the class of people who insisted on making their lives complicated. Amos had a gift for avoiding what he called “overcomplicated” people. But every now and then he ran into a person, usually a woman, whose complications genuinely interested him. In high school he’d become entranced with the future poet Dahlia Ravikovitch: His intimate friendship with her had startled their peers. His relationship with Danny had the same effect. An old friend of Amos’s would later recall, “Amos would say, ‘People are not so complicated. Relationships between people are complicated.’ And then he would pause, and say: ‘Except for Danny.’” But there was something about Danny that caused Amos to let down his guard and turned Amos, when he was alone with Danny, into a different character. “Amos almost suspended disbelief when we were working together,” said Danny. “He didn’t do that much for other people. And that was the engine of the collaboration.”
In August 1971 Amos returned to Eugene with his wife and children and a mental pile of data, and moved into a house on a cliff overlooking the town. He’d rented it from an Oregon Research Institute psychologist on leave. “The thermostat was set on 85,” said Barbara. “There were picture windows, with no curtain. They had left a mountain of laundry, none of it clothes.” Their landlords, they soon learned, were nudists. (Welcome to Eugene! Don’t look down!) A few weeks later Danny followed with his own wife and children, and an even bigger mental pile of data, and moved into a house with something even more unsettling—to Danny—than a nudist: a lawn. Danny couldn’t picture himself doing yard work any more than anyone else could picture him doing it. Still, he was unusually optimistic. “My memories of Eugene are all of bright sunshine,” he later said, even though he had come from a land where the sun shined all the time, and, on more than half the days he spent in Eugene, the skies were more cloudy than blue.
Anyway, he spent most of his time indoors, talking to Amos. They installed themselves in an office in the former Unitarian church, and continued the conversation they’d started in Jerusalem. “I had the sense, ‘My life has changed,’” said Danny. “We were quicker in understanding each other than we were in understanding ourselves. The way the creative process works is that you first say something, and later, sometimes years later, you understand what you said. And in our case it was foreshortened. I would say something and Amos would understand it. When one of us would say something that was off the wall, the other would search for the virtue in it. We would finish each other’s sentences and frequently did. But we also kept surprising each other. It still gives me goose bumps.” For the first time in their careers, they had something like a staff at their disposal. Papers got typed by someone else; subjects for their experiments got recruited by someone else; money for research got raised by someone else. All they had to do was talk to each other.
They had some ideas about the mechanisms in the human mind that produced error. They set out looking for the interesting mistakes—or biases—that such mechanisms would make. A pattern emerged: Danny would arrive early each morning and analyze the answers that Oregon college students had given to their questions of the day before. (Danny didn’t believe in waiting around: He’d later admonish graduate students who failed to analyze data within a day of getting it, saying, “It’s a bad sign for your research career.”) Amos would turn up around noon and the two of them would walk down to a fish and chips place no one else could stand, eat lunch, and then return and talk the rest of the day. “They had a certain style of working,” recalls Paul Slovic, “which is they just talked to each other for hour after hour after hour.”
The Oregon researchers noticed, as the Hebrew University professors had noticed, that whatever Amos and Danny were talking about must be funny, as they spent half their time laughing. They bounced back and forth between Hebrew and English and broke each other up in both. They happened to be in Eugene, Oregon, surrounded by joggers and nudists and hippies and forests of Ponderosa pine, but they could just as well have been in Mongolia. “I don’t think either of them was attached to physical location,” said Slovic. “It didn’t matter where they were. All that mattered were the ideas.” Everyone also noticed the intense privacy of their conversation. Before they had arrived in Eugene, Amos had made some faint noises about including Paul Slovic in the collaboration, but once Danny arrived it became clear to Slovic that he didn’t belong. “We weren’t a threesome together much,” he said. “They didn’t want anyone else in the room.”
In a funny way, they didn’t even want themselves in the room. They wanted to be the people they became when they were with each other. Work, for Amos, had always been play: If it wasn’t fun, he simply didn’t see the point in doing it. Work now became play for Danny, too. This was new. Danny was like a kid with the world’s best toy closet who is so paralyzed by indecision that he never gets around to enjoying his possessions but instead just stands there worrying himself to death over whether to grab his Super Soaker or take his electric scooter out for a spin. Amos rooted around in Danny’s mind and said, “Screw it, we’re going to play with all of this stuff.” There would be times, later in their relationship, when Danny would go into a deep funk—a depression, almost—and walk around saying, “I’m out of ideas.” Even that Amos found funny. Their mutual friend Avishai Margalit recalled, “When he heard that Danny was saying, ‘I’m finished, I’m out of ideas,’ Amos laughed and said, ‘Danny has more ideas in one minute than a hundred people have in a hundred years.’” When they sat down to write they nearly merged, physically, into a single form, in a way that the few people who happened to catch a glimpse of them found odd. “They wrote together sitting right next to each other at the typewriter,” recalls Michigan psychologist Richard Nisbett. “I cannot imagine. It would be like having someone else brush my teeth for me.” The way Danny put it was, “We were sharing a mind.”
Their first paper—which they still half-thought of as a joke played on the academic world—had shown that people faced with a problem that had a statistically correct answer did not think like statisticians. Even statisticians did not think like statisticians. “Belief in the Law of Small Numbers” had raised an obvious next question: If people did not use statistical reasoning, even when faced with a problem that could be solved with statistical reasoning, what kind of reasoning did they use? If they did not think, in life’s many chancy situations, like a card counter at a blackjack table, how did they think? Their next paper offered a partial answer to the question. It was called . . . well, Amos had this thing about titles. He refused to start a paper until he had decided what it would be called. He believed the title forced you to come to grips with what your paper was about.
And yet the titles that he and Danny put on their papers were inscrutable. They had to play, at least in the beginning, by the rules of the academic game, and in that game it wasn’t quite respectable to be easily understood. Their first attempt to describe how people formed judgments they titled “Subjective Probability: A Judgment of Representativeness.”‡ Subjective probability—a person might just make out what that meant. Subjective probability meant: the odds you assign to any given situation when you are more or less guessing. Look outside the window at midnight and see your teenage son weaving his way toward your front door, and say to yourself, “There’s a 75 percent chance he’s been drinking”—that’s subjective probability. But “A Judgment of Representativeness”: What the hell was that? “Subjective probabilities play an important role in our lives,” they began. “The decisions we make, the conclusions we reach, and the explanations we offer are usually based on our judgments of the likelihood of uncertain events such as success in a new job, the outcome of an election, or the state of a market.” In these and many other uncertain situations, the mind did not naturally calculate the correct odds. So what did it do?
The answer they now offered: It replaced the laws of chance with rules of thumb. These rules of thumb Danny and Amos called “heuristics.” And the first heuristic they wanted to explore they called “representativeness.”
When people make judgments, they argued, they compare whatever they are judging to some model in their minds. How much do those clouds resemble my mental model of an approaching storm? How closely does this ulcer resemble my mental model of a malignant cancer? Does Jeremy Lin match my mental picture of a future NBA player? Does that belligerent German political leader resemble my idea of a man capable of orchestrating genocide? The world’s not just a stage. It’s a casino, and our lives are games of chance. And when people calculate the odds in any life situation, they are often making judgments about similarity—or (strange new word!) representativeness. You have some notion of a parent population: “storm clouds” or “gastric ulcers” or “genocidal dictators” or “NBA players.” You compare the specific case to the parent population.
Amos and Danny left unaddressed the question of how exactly people formed mental models in the first place, and how they made judgments of similarity. Instead, they said, let’s focus on cases where the mental model that people have in their heads is fairly obvious. The more similar the specific case is to the notion in your head, the more likely you are to believe that the case belongs to the larger group. “Our thesis,” they wrote, “is that, in many situations, an event A is judged to be more probable than an event B whenever A appears more representative than B.” The more the basketball player resembles your mental model of an NBA player, the more likely you will think him to be an NBA player.
They had a hunch that people, when they formed judgments, weren’t just making random mistakes—that they were doing something systematically wrong. The weird questions they put to Israeli and American students were designed to tease out the pattern in human error. The problem was subtle. The rule of thumb they had called representativeness wasn’t always wrong. If the mind’s approach to uncertainty was occasionally misleading, it was because it was often so useful. Much of the time, the person who can become a good NBA player matches up pretty well with the mental model of “good NBA player.” But sometimes a person does not—and in the systematic errors they led people to make, you could glimpse the nature of these rules of thumb.
For instance, in families with six children, the birth order B G B B B B was about as likely as G B G B B G. But Israeli kids—like pretty much everyone else on the planet, it would emerge—naturally seemed to believe that G B G B B G was a more likely birth sequence. Why? “The sequence with five boys and one girl fails to reflect the proportion of boys and girls in the population,” they explained. It was less representative. What is more, if you asked the same Israeli kids to choose the more likely birth order in families with six children—B B B G G G or G B B G B G—they overwhelmingly opted for the latter. But the two birth orders are equally likely. So why did people almost universally believe that one was far more likely than the other? Because, said Danny and Amos, people thought of birth order as a random process, and the second sequence looks more “random” than the first.
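The claim that the two birth orders are equally likely is easy to verify. A short Python sketch, assuming (as the question implicitly does) that each birth is independently a boy or a girl with probability one half:

```python
from itertools import product

# Enumerate all 2**6 = 64 possible six-birth sequences. Under the
# equal-probability assumption, each specific sequence is equally likely.
all_seqs = ["".join(s) for s in product("BG", repeat=6)]

assert len(all_seqs) == 64
# Each exact order appears exactly once among the 64 possibilities,
# so P(G B G B B G) = P(B G B B B B) = 1/64.
assert all_seqs.count("GBGBBG") == 1
assert all_seqs.count("BGBBBB") == 1
```

The normative answer to the survey question is therefore 72: if 72 families showed the order G B G B B G, about 72 should show B G B B B B as well.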
The natural next question: When does our rule-of-thumb approach to calculating the odds lead to serious miscalculation? One answer was: Whenever people are asked to evaluate anything with a random component to it. It wasn’t enough that the uncertain event being judged resembled the parent population, wrote Danny and Amos. “The event should also reflect the properties of the uncertain process by which it is generated.” That is, if a process is random, its outcome should appear random. They didn’t explain how people’s mental model of “randomness” was formed in the first place. Instead they said, Let’s look at judgments that involve randomness, because we psychologists can all pretty much agree on people’s mental model of it.
Londoners in the Second World War thought that German bombs were targeted, because some parts of the city were hit repeatedly while others were not hit at all. (Statisticians later showed that the distribution was exactly what you would expect from random bombing.) People find it a remarkable coincidence when two students in the same classroom share a birthday, when in fact there is a better than even chance, in any group of twenty-three people, that two of its members will have been born on the same day. We have a kind of stereotype of “randomness” that differs from true randomness. Our stereotype of randomness lacks the clusters and patterns that occur in true random sequences. If you pass out twenty marbles randomly to five boys, they are actually more likely to each receive four marbles (column II), than they are to receive the combination in column I, and yet American college students insisted that the unequal distribution in column I was more likely than the equal one in column II. Why? Because column II “appears too lawful to be the result of a random process. . . . ”
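Both quantitative claims in this passage check out arithmetically. A quick Python sketch, assuming each marble lands on one of the five children independently and uniformly, and (for the birthday claim) 365 equally likely birthdays:

```python
from math import factorial

def multinomial_prob(counts, p_each=0.2):
    """Probability that the marbles produce exactly these counts for the
    named children (Alan, Ben, Carl, Dan, Ed, in order), each of 20
    marbles landing on one of five children independently at random."""
    n = sum(counts)
    ways = factorial(n)
    for c in counts:
        ways //= factorial(c)
    return ways * p_each ** n

p_I = multinomial_prob([4, 4, 5, 4, 3])    # column I
p_II = multinomial_prob([4, 4, 4, 4, 4])   # column II
# The "too lawful" equal split is in fact 1.25 times more likely.
assert abs(p_II / p_I - 1.25) < 1e-9

# Birthday claim: among 23 people, the chance that at least two share
# a birthday exceeds one half (it is about 0.507).
p_all_distinct = 1.0
for k in range(23):
    p_all_distinct *= (365 - k) / 365
assert 1 - p_all_distinct > 0.5
```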
A suggestion arose from Danny and Amos’s paper: If our minds can be misled by our false stereotype of something as measurable as randomness, how much might they be misled by other, vaguer stereotypes?
The average heights of adult males and females in the U.S. are, respectively, 5 ft. 10 in. and 5 ft. 4 in. Both distributions are approximately normal with a standard deviation of about 2.5 in.§
An investigator has selected one population by chance and has drawn from it a random sample.
What do you think the odds are that he has selected the male population if
1. The sample consists of a single person whose height is 5 ft. 10 in.?
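For part 1, the stated normal models pin down the normative answer. A sketch of the Bayesian calculation, assuming (per the question) male heights distributed N(70 in., 2.5 in.), female heights N(64 in., 2.5 in.), and even prior odds because the population was selected by chance:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2.0 * pi))

# With 1:1 prior odds, the posterior odds equal the likelihood ratio
# at the observed height.
h = 70.0  # one person measuring 5 ft 10 in.
odds_male = normal_pdf(h, 70.0, 2.5) / normal_pdf(h, 64.0, 2.5)
# roughly 18 : 1 in favor of the male population
```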
The Undoing Project Page 17