
The Undoing Project


by Michael Lewis


  In the early 1960s, Treisman had picked up where the work of fellow Brits Colin Cherry and Donald Broadbent had left off. Cherry, a cognitive scientist, had identified what became known as the “cocktail party effect.” The cocktail party effect was the ability of people to filter a lot of noise for the sounds they wished to hear—as they did when they listened to someone at a cocktail party. It was in those days a practical problem because of the design of air traffic control towers. In the early control towers, the voices of all the pilots who needed guidance were broadcast through loudspeakers. Air traffic controllers had to filter the voices to identify the relevant airplane. It was just assumed that they could ignore the voices that they needed to ignore in order to focus on the voice that required their attention.

  Together with another British colleague, Neville Moray, Treisman set out to see just how selectively people listened when they listened selectively. “Nobody had done or was doing any research in the field of selective listening,” she wrote in her memoir, “so we had it more or less to ourselves.” She and Moray had put people in headphones attached to a two-channel tape recorder and piped two different passages of prose simultaneously into separate ears. Treisman asked the subjects to repeat back to her, as they listened, one of the passages. Afterward, she asked them what they had picked up from the passage they had supposedly ignored. It turned out that they hadn’t entirely ignored it. Some words and phrases got through to the mind, even if they hadn’t been invited. For instance, if their name was in the passage that they were assigned to ignore, people would often hear it.

  This surprised Treisman, along with the few other people then paying attention to attention. “I thought at the time that attention was a complete filtering,” said Treisman, “but it turns out that some kind of monitoring goes on. The question I had was, how do we do this? When, and how, does the content get through?” In her Harvard talk, Treisman proposed that people possessed, not an on-off switch that enabled them to pay attention to whatever they intended to pay attention to, but a more subtle mechanism that selectively weakened, rather than entirely blocked, background noise. That background noise might get through was, of course, not the happiest news for passengers in airplanes circling the control tower. But it was interesting.

  Anne Treisman was on a flying visit to Harvard, where the demand to hear what she had to say was so great that her talk had to be moved to a big public lecture hall off campus. Danny left the talk filled with new enthusiasm. He asked to be deputized to look after Treisman and her traveling party—which included her mother, her husband, and their two small children. He gave them a tour of Harvard. “He was very eager to impress,” said Treisman, “and so I let myself be impressed.” It would be years before Danny and Anne left their marriages and married each other, but it took no time at all for Danny to engage Treisman’s ideas.

  In the fall of 1967 Danny had gotten over his feelings of being slighted and returned to Hebrew University, with the promise of tenure and an entirely new research program. It was now possible, with double-channel tape recorders, to measure how well people divided their attention, or switched their attention from one thing to another. It stood to reason that some people might be better at it than others, and that the ability might offer an advantage in certain lines of work. With this in mind Danny went to England, at the invitation of the Cambridge Applied Psychology Unit, to test professional soccer players. He thought that there might be a difference in the attention-switching abilities of players in the first (premier) league and players in the fourth league. He took the train from Cambridge to Arsenal—home to a top-division soccer team—with his heavy dual-track tape recorder beside him. He put the headphones on the players and tested their ability to switch from the message playing in one ear to the message playing in the other, and found . . . nothing. Or, at least, no obvious difference between them and the players in the lower-ranked league. A talent for playing soccer didn’t require any special ability to switch attention.

  “Then I thought, this could be critical in pilots,” he recalled. He knew, from working with flight instructors, that the cadets training to fly fighter jets sometimes failed because they either couldn’t divide their attention between tasks or were slow to pick up on seemingly unimportant but actually critical background signals. He returned to Israel and tested cadets who were training to fly jets for the Air Force. This time he found what he was looking for: The successful fighter pilots were better able to switch attention than the unsuccessful ones, and both were better at it than Israeli bus drivers. Eventually one of Danny’s students discovered that you could predict, from how efficiently they switched channels, which Israeli bus drivers were more likely to have accidents.

  There was a relentlessness in the way Danny’s mind moved from insight to application. Psychologists, especially the ones who became university professors, weren’t exactly known for being useful. The demands of being an Israeli had forced Danny to find a talent in himself he might otherwise never have spotted. His high school friend Ariel Ginsburg thought that the Israeli army had made Danny more practical: The creation of a new interview system, and its effect on an entire army, had been intoxicating. The most popular class Danny taught at Hebrew University was a graduate seminar he called Applications of Psychology. Each week he brought in some real-world problem and told the students to use what they knew from psychology to address it. Some of the problems came from Danny’s many attempts to make psychology useful to Israel. After terrorists started placing bombs in city trash cans—and one in the Hebrew University cafeteria in March 1969 that wounded twenty-nine students—Danny asked: What does psychology tell you that might be useful to the government, which is trying to minimize the public’s panic? (Before they could arrive at an answer, the government removed the trash cans.)

  Israelis in the 1960s lived with constant change. Immigrants who had come from city life were channeled onto collective farms. The farms themselves underwent fairly constant technological upheaval. Danny designed a course to train the people who trained the farmers. “Reforms always create winners and losers,” Danny explained, “and the losers will always fight harder than the winners.” How did you get the losers to accept change? The prevailing strategy on the Israeli farms—which wasn’t working very well—was to bully or argue with the people who needed to change. The psychologist Kurt Lewin had suggested persuasively that, rather than selling people on some change, you were better off identifying the reasons for their resistance, and addressing those. Imagine a plank held in place by a spring on either side of it, Danny told the students. How do you move it? Well, you can increase the force on one side of the plank. Or you can reduce the force on the other side. “In one case the overall tension is reduced,” he said, “and in the other it is increased.” And that was a sort of proof that there was an advantage in reducing the tension. “It’s a key idea,” said Danny. “Making it easy to change.”

  Danny was also training Air Force flight instructors to train fighter pilots. (But only on the ground: The one time they took him up in a plane he vomited into his oxygen mask.) How did you get fighter pilots to memorize a series of instructions? “We started making a long list,” recalled Zur Shapira. “Danny says no. He tells us about ‘The Magical Number Seven.’” “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” was a paper, written by Harvard psychologist George Miller, which showed that people had the ability to hold in their short-term memory seven items, more or less. Any attempt to get them to hold more was futile. Miller half-jokingly suggested that the seven deadly sins, the seven seas, the seven days of the week, the seven primary colors, the seven wonders of the world, and several other famous sevens had their origins in this mental truth.

  At any rate, the most effective way to teach people longer strings of information was to feed the information into their minds in smaller chunks. To this, Shapira recalled, Danny added his own twist. “He says you only tell them a few things—and get them to sing it.” Danny loved the idea of the “action song.” In his statistics classes he had actually asked his students to sing the formulas. “He forced you to engage with problems,” said Baruch Fischhoff, a student who became a professor at Carnegie Mellon University, “even if they were complicated problems without simple solutions. He made you feel you could do something useful with this science.”

  A lot of the problems Danny threw at his students felt like pure whim. He asked them to design a currency so that it was hard to counterfeit. Was it better for bills of different denominations to resemble each other, as they did in the United States, thus leading anyone accepting them to examine them closely; or should they have a wide variety of colors and shapes so that they were harder to copy? He asked them how they would design a workplace to make it more efficient. (And of course they must be familiar with the psychological research showing that some wall colors led workers to be more productive than others.) Some of Danny’s problems were so abstruse and strange that the students’ first response was, Um, we’ll need to go to the library and get back to you on that. “When we said that,” recalled Zur Shapira, “Danny responded—mildly upset—by saying, ‘You have completed a three-year program in psychology. You are by definition professionals. Don’t hide behind research. Use your knowledge to come up with a plan.’”

  But what were you supposed to say when Danny brought in a copy of a doctor’s prescription from the twelfth century, sloppily written, in a language you didn’t know a word of, and asked you to decode it? “Someone once said that education was knowing what to do when you don’t know,” said one of his students. “Danny took that idea and ran with it.” One day Danny brought in a stack of those games in which the object is to guide a small metal ball through a wooden maze. The assignment he gave his students: Teach someone how to teach someone else how to play the game. “It would never occur to anyone that you could teach this,” recalled one of the students. “The trick was to break it down into the component skills—learning how to hold your hand steady, learning how to tilt slightly to the right, and so on—then teach them separately and then, once you’d taught them all, put them together.” The guy at the store who sold the games to Danny found the whole idea of it hysterical. But to Danny, useful advice, however obvious, was better than no advice at all. He asked his students to figure out what advice they would give to an Egyptologist who was having difficulty deciphering a hieroglyph. “He tells us that the guy is going slower and slower and getting more and more stuck,” recalled Daniela Gordon, a student who became a researcher in the Israeli army. “Then Danny asks, ‘What should he do?’ No one could think of anything. And Danny says, ‘He should take a nap!’”

  Danny’s students left every class with a sense that there was really no end to the problems in this world. Danny found problems where none seemed to exist; it was as if he structured the world around him so that it might be understood chiefly as a problem. To each new class the students arrived wondering what problem he might bring for them to solve. Then one day he brought them Amos Tversky.

  5

  THE COLLISION

  Danny and Amos had been at the University of Michigan at the same time for six months, but their paths seldom crossed; their minds, never. Danny had been in one building, studying people’s pupils, and Amos had been in another, devising mathematical approaches to similarity, measurement, and decision making. “We had not had much to do with each other,” said Danny. The dozen or so graduate students in Danny’s seminar at Hebrew University were all surprised when, in the spring of 1969, Amos turned up. Danny never had guests: The seminar was his show. Amos was about as far removed from the real-world problems in Applications of Psychology as a psychologist could be. Plus, the two men didn’t seem to mix. “It was the graduate students’ perception that Danny and Amos had some sort of rivalry,” said one of the students in the seminar. “They were clearly the stars of the department who somehow or other hadn’t gotten in sync.”

  Before he left for North Carolina, Amnon Rapoport had felt that he and Amos disturbed Danny in some way that was hard to pin down. “We thought he was afraid of us or something,” said Amnon. “Suspicious of us.” For his part, Danny said he’d simply been curious about Amos Tversky. “I think I wanted a chance to know him better,” he said.

  Danny invited Amos to come to his seminar to talk about whatever he wanted to talk about. He was a little surprised that Amos didn’t talk about his own work—but then Amos’s work was so abstract and theoretical that he probably decided it had no place in the seminar. Those who stopped to think about it found it odd that Amos’s work betrayed so little interest in the real world, when Amos was so intimately and endlessly engaged with that world, and that, conversely, Danny’s work was consumed by real-world problems, even as he kept other people at a distance.

  Amos was now what people referred to, a bit confusingly, as a “mathematical psychologist.” Nonmathematical psychologists, like Danny, quietly viewed much of mathematical psychology as a series of pointless exercises conducted by people who were using their ability to do math as camouflage for how little of psychological interest they had to say. Mathematical psychologists, for their part, tended to view nonmathematical psychologists as simply too stupid to understand the importance of what they were saying. Amos was then at work with a team of mathematically gifted American academics on what would become a three-volume, molasses-dense, axiom-filled textbook called Foundations of Measurement—more than a thousand pages of arguments and proofs of how to measure stuff. On the one hand, it was a wildly impressive display of pure thought; on the other, the whole enterprise had a tree-fell-in-the-woods quality to it. How important could the sound it made be, if no one was able to hear it?

  Instead of his own work, Amos talked to Danny’s students about the cutting-edge research being done in Ward Edwards’s lab at the University of Michigan. Edwards and his students were still engaged in what they considered to be an original line of inquiry. The specific study Amos described was about how people, in their decision making, responded to new information. As Amos told it, the psychologists had brought people in and presented them with two book bags filled with poker chips. Each bag contained both red poker chips and white poker chips. In one of the bags, 75 percent of the chips were white and 25 percent were red; in the other bag, 75 percent of the chips were red and 25 percent were white. The subject picked one of the bags at random and, without glancing inside the bag, began to pull chips out of it, one at a time. After extracting each chip, he’d give the psychologists his best guess of the odds that the bag he was holding was filled with mostly red, or mostly white, chips.

  The beauty of the experiment was that there was a correct answer to the question: What is the probability that I am holding the bag of mostly red chips? It was provided by a statistical formula called Bayes’s theorem (after Thomas Bayes, who, strangely, left the formula for others to discover in his papers after his death, in 1761). Bayes’s rule allowed you to calculate the true odds, after each new chip was pulled from it, that the book bag in question was the one with majority white, or majority red, chips. Before any chips had been withdrawn, those odds were 50:50—the bag in your hands was equally likely to be either majority red or majority white. But how did the odds shift after each new chip was revealed?

  That depended, in a big way, on the so-called base rate: the percentage of red versus white chips in the bag. (These percentages were presumed to be known.) If you know that one bag contains 99 percent red chips and the other, 99 percent white chips, the color of the first chip drawn from the bag tells you a lot more than if you know that each bag contains only 51 percent red or white. But how much more does it tell you? Plug the base rate into Bayes’s formula and you get an answer. In the case of two bags known to be 75 percent-25 percent majority red or white, the odds that you are holding the bag containing mostly red chips rise by three times every time you draw a red chip, and are divided by three every time you draw a white chip. If the first chip you draw is red, there is a 3:1 (or 75 percent) chance that the bag you are holding is majority red. If the second chip you draw is also red, the odds rise to 9:1, or 90 percent. If the third chip you draw is white, they fall back to 3:1. And so on.

  The bigger the base rate—the known ratio of red to white chips—the faster the odds shift around. If the first three chips you draw are red, from a bag in which 75 percent of the chips are known to be either red or white, there’s a 27:1, or slightly greater than 96 percent, chance you are holding the bag filled with mostly red chips.
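  (For readers who want to check the arithmetic, here is a minimal Python sketch of the odds-form Bayes update the passage walks through, assuming the 75 percent/25 percent bags. The function name and the “R”/“W” chip encoding are illustrative, not part of the original experiment.)

    from fractions import Fraction

    def chance_majority_red(draws, p_red=Fraction(3, 4)):
        # Odds-form Bayes update for the two-bag experiment.
        # draws: the chips pulled so far, e.g. "RRW"; p_red: the share of
        # red chips in the majority-red bag (the other bag mirrors it).
        odds = Fraction(1, 1)  # 50:50 before any chip is drawn
        likelihood_ratio = p_red / (1 - p_red)  # 3 for the 75/25 bags
        for chip in draws:
            odds = odds * likelihood_ratio if chip == "R" else odds / likelihood_ratio
        return odds / (odds + 1)  # convert odds back into a probability

    # Matches the passage: R -> 75%, RR -> 90%, RRW -> back to 75%,
    # and RRR -> 27:1, slightly greater than 96%.
    for sequence in ["R", "RR", "RRW", "RRR"]:
        print(sequence, float(chance_majority_red(sequence)))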

  The innocent subjects who pulled the poker chips out of the book bags weren’t expected to know Bayes’s rule. The experiment would have been ruined if they had. Their job was to guess the odds, so that the psychologists could compare those guesses with the correct answer. From their guesses, the psychologists hoped to get a sense of just how closely whatever was going on in people’s minds resembled a statistical calculation when those minds were presented with new information. Were human beings good intuitive statisticians? When they didn’t know the formula, did they still behave as if they did?

  At the time, the experiments felt radical and exciting. In the minds of the psychologists, the results spoke to all sorts of real-world problems: How do investors respond to earnings reports, or patients to diagnoses, or political strategists to polls, or coaches to a new score? A woman in her twenties who receives from a single test a diagnosis of breast cancer is many times more likely to have been misdiagnosed than is a woman in her forties who receives the same diagnosis. (The base rates are different: Women in their twenties are far less likely to have breast cancer.) Does she sense her own odds? If so, how clearly? Life is filled with games of chance: How well do people play them? How accurately do they assess new information? How do people leap from evidence to a judgment about the state of the world? How aware are they of base rates? Do they allow what just happened to alter, accurately, their sense of the odds of what will happen next?
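  (A minimal sketch with invented numbers makes the base-rate point concrete. The prevalence and test-accuracy figures below are assumptions for illustration, not clinical data from the book; any numbers with the same shape tell the same story.)

    # All figures are hypothetical, chosen only to illustrate Bayes's rule.
    def p_cancer_given_positive(prevalence, sensitivity=0.9, false_positive_rate=0.1):
        # P(disease | positive test) = true positives / all positives
        true_positives = prevalence * sensitivity
        false_positives = (1 - prevalence) * false_positive_rate
        return true_positives / (true_positives + false_positives)

    # Same test, different assumed base rates by age group:
    for group, prevalence in [("twenties", 0.0004), ("forties", 0.015)]:
        print(group, round(p_cancer_given_positive(prevalence), 3))
    # twenties -> about 0.004; forties -> about 0.121: the same positive
    # result is far more likely to be a false alarm in the younger woman.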

 
