Rationality: From AI to Zombies


by Eliezer Yudkowsky


  Until then, he had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative. Someone presents you with a package of strange ideas, one of which is that universes replicate. Then they present support for the assertion that universes replicate. But this is not support for the package, though it is all told as one story.

  You have to disentangle the details. You have to hold up every one independently, and ask, “How do we know this detail?” Someone sketches out a picture of humanity’s descent into nanotechnological warfare, where China refuses to abide by an international control agreement, followed by an arms race . . . Wait a minute—how do you know it will be China? Is that a crystal ball in your pocket or are you just happy to be a futurist? Where are all these details coming from? Where did that specific detail come from?

  For it is written:

  If you can lighten your burden you must do so.

  There is no straw that lacks the power to break your back.

  *

  1. William S. Gilbert and Arthur Sullivan, The Mikado, Opera, 1885.

  2. Tversky and Kahneman, “Extensional Versus Intuitive Reasoning.”

  3. Amos Tversky and Daniel Kahneman, “Judgments of and by Representativeness,” in Judgment Under Uncertainty: Heuristics and Biases, ed. Daniel Kahneman, Paul Slovic, and Amos Tversky (New York: Cambridge University Press, 1982), 84–98.

  7

  Planning Fallacy

  The Denver International Airport opened 16 months late, at a cost overrun of $2 billion. (I’ve also seen $3.1 billion asserted.) The Eurofighter Typhoon, a joint defense project of several European countries, was delivered 54 months late at a cost of $19 billion instead of $7 billion. The Sydney Opera House may be the most legendary construction overrun of all time, originally estimated to be completed in 1963 for $7 million, and finally completed in 1973 for $102 million.1

  Are these isolated disasters brought to our attention by selective availability? Are they symptoms of bureaucracy or government incentive failures? Yes, very probably. But there’s also a corresponding cognitive bias, replicated in experiments with individual planners.

  Buehler et al. asked their students for estimates of when they (the students) thought they would complete their personal academic projects.2 Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?

  13% of subjects finished their project by the time they had assigned a 50% probability level;

  19% finished by the time assigned a 75% probability level;

  and only 45% (less than half!) finished by the time of their 99% probability level.

  As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”3
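  To see what calibration would have looked like: well-calibrated students would finish by their 50% date about half the time, by their 75% date about three-quarters of the time, and by their 99% date almost always. Here is a minimal sketch of that check, using invented predictions and completion times rather than the study’s data:

```python
# Calibration check for completion-time forecasts (illustrative data only).
# Each record: (days predicted at 50%/75%/99% confidence, actual days taken).
projects = [
    ((10, 14, 21), 25),
    ((7, 9, 15), 6),
    ((20, 25, 40), 38),
    ((5, 8, 12), 30),
]

for level, idx in (("50%", 0), ("75%", 1), ("99%", 2)):
    hits = sum(actual <= preds[idx] for preds, actual in projects)
    print(f"Finished by their {level} estimate: {hits}/{len(projects)}")
    # A calibrated forecaster would hit roughly the stated fraction;
    # Buehler et al. observed 13%, 19%, and 45% instead.
```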

  More generally, this phenomenon is known as the “planning fallacy.” The planning fallacy is that people think they can plan, ha ha.

  A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that

  Asking subjects for their predictions based on realistic “best guess” scenarios; and

  Asking subjects for their hoped-for “best case” scenarios . . .

  . . . produced indistinguishable results.4

  When people are asked for a “realistic” scenario, they envision everything going exactly as planned, with no unexpected delays or unforeseen catastrophes—the same vision as their “best case.”

  Reality, it turns out, usually delivers results somewhat worse than the “worst case.”

  Unlike most cognitive biases, we know a good debiasing heuristic for the planning fallacy. It won’t work for messes on the scale of the Denver International Airport, but it’ll work for a lot of personal planning, and even some small-scale organizational stuff. Just use an “outside view” instead of an “inside view.”

  People tend to generate their predictions by thinking about the particular, unique features of the task at hand, and constructing a scenario for how they intend to complete the task—which is just what we usually think of as planning. When you want to get something done, you have to plan out where, when, how; figure out how much time and how much resource is required; visualize the steps from beginning to successful conclusion. All this is the “inside view,” and it doesn’t take into account unexpected delays and unforeseen catastrophes. As we saw before, asking people to visualize the “worst case” still isn’t enough to counteract their optimism—they don’t visualize enough Murphyness.

  The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past. This is counterintuitive, since the inside view has so much more detail—there’s a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.

  But experiment has shown that the more detailed subjects’ visualization, the more optimistic (and less accurate) they become. Buehler et al. asked an experimental group of subjects to describe highly specific plans for their Christmas shopping—where, when, and how.5 On average, this group expected to finish shopping more than a week before Christmas. Another group was simply asked when they expected to finish their Christmas shopping, with an average response of four days. Both groups finished an average of three days before Christmas.

  Likewise, Buehler et al., reporting on a cross-cultural study, found that Japanese students expected to finish their essays ten days before deadline. They actually finished one day before deadline. Asked when they had previously completed similar tasks, they responded, “one day before deadline.”6 This is the power of the outside view over the inside view.

  A similar finding is that experienced outsiders, who know less of the details, but who have relevant memory to draw upon, are often much less optimistic and much more accurate than the actual planners and implementers.

  So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.

  You’ll get back an answer that sounds hideously long, and clearly reflects no understanding of the special reasons why this particular task will take less time. This answer is true. Deal with it.
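  As a minimal sketch of the contrast (with invented numbers, not from any study): the inside view adds up the steps of the plan, while the outside view simply consults the record of broadly similar past projects.

```python
import statistics

# Inside view: add up the durations of the steps as planned (no Murphy).
planned_steps_days = {"design": 3, "build": 10, "test": 4}
inside_view = sum(planned_steps_days.values())  # 17 days

# Outside view: ignore the plan's details; look at how long broadly
# similar past projects actually took (invented reference class).
past_similar_projects_days = [30, 24, 45, 28, 60, 33]
outside_view = statistics.median(past_similar_projects_days)  # 31.5 days

print(f"Inside-view estimate:  {inside_view} days")
print(f"Outside-view estimate: {outside_view} days")
# The outside-view answer sounds hideously long. That is the point.
```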

  *

  1. Roger Buehler, Dale Griffin, and Michael Ross, “Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions,” in Gilovich, Griffin, and Kahneman, Heuristics and Biases, 250–270.

  2. Roger Buehler, Dale Griffin, and Michael Ross, “Exploring the ‘Planning Fallacy’: Why People Underestimate Their Task Completion Times,” Journal of Personality and Social Psychology 67, no. 3 (1994): 366–381, doi:10.1037/0022-3514.67.3.366; Roger Buehler, Dale Griffin, and Michael Ross, “It’s About Time: Optimistic Predictions in Work and Love,” European Review of Social Psychology 6, no. 1 (1995): 1–32, doi:10.1080/14792779343000112.

  3. Buehler, Griffin, and Ross, “Inside the Planning Fallacy.”

  4. Ian R. Newby-Clark et al., “People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times,” Journal of Experimental Psychology: Applied 6, no. 3 (2000): 171–182, doi:10.1037/1076-898X.6.3.171.

  5. Buehler, Griffin, and Ross, “Inside the Planning Fallacy.”

  6. Ibid.

  8

  Illusion of Transparency: Why No One Understands You

  In hindsight bias, people who know the outcome of a situation believe the outcome should have been easy to predict in advance. Knowing the outcome, we reinterpret the situation in light of that outcome. Even when warned, we can’t de-interpret to empathize with someone who doesn’t know what we know.

  Closely related is the illusion of transparency: We always know what we mean by our words, and so we expect others to know it too. Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant. It’s hard to empathize with someone who must interpret blindly, guided only by the words.

  June recommends a restaurant to Mark; Mark dines there and discovers (a) unimpressive food and mediocre service or (b) delicious food and impeccable service. Then Mark leaves the following message on June’s answering machine: “June, I just finished dinner at the restaurant you recommended, and I must say, it was marvelous, just marvelous.” Keysar presented a group of subjects with scenario (a), and 59% thought that Mark’s message was sarcastic and that June would perceive the sarcasm.1 Among other subjects, told scenario (b), only 3% thought that June would perceive Mark’s message as sarcastic. Keysar and Barr seem to indicate that an actual voice message was played back to the subjects.2 Keysar showed that if subjects were told that the restaurant was horrible but that Mark wanted to conceal his response, they believed June would not perceive sarcasm in the (same) message:3

  They were just as likely to predict that she would perceive sarcasm when he attempted to conceal his negative experience as when he had a positive experience and was truly sincere. So participants took Mark’s communicative intention as transparent. It was as if they assumed that June would perceive whatever intention Mark wanted her to perceive.4

  “The goose hangs high” is an archaic English idiom that has passed out of use in modern language. Keysar and Bly told one group of subjects that “the goose hangs high” meant that the future looks good; another group of subjects learned that “the goose hangs high” meant the future looks gloomy.5 Subjects were then asked which of these two meanings an uninformed listener would be more likely to attribute to the idiom. Each group thought that listeners would perceive the meaning presented as “standard.”

  (Other idioms tested included “come the uncle over someone,” “to go by the board,” and “to lay out in lavender.” Ah, English, such a lovely language.)

  Keysar and Henly tested the calibration of speakers: Would speakers underestimate, overestimate, or correctly estimate how often listeners understood them?6 Speakers were given ambiguous sentences (“The man is chasing a woman on a bicycle.”) and disambiguating pictures (a man running after a cycling woman). They were then asked to utter the sentences in front of addressees, and to estimate how many addressees had understood the intended meaning. Speakers thought that they were understood in 72% of cases and were actually understood in 61% of cases. When addressees did not understand, speakers thought they did in 46% of cases; when addressees did understand, speakers thought they did not in only 12% of cases.

  Additional subjects who overheard the explanation showed no such bias, expecting listeners to understand in only 56% of cases.
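  As a quick sanity check on how those figures fit together (the decomposition is mine, not Keysar and Henly’s): weighting the two conditional rates by how often addressees actually did or did not understand recovers the overall 72%.

```python
# Consistency check on the reported percentages (my decomposition, not the paper's).
p_understood = 0.61                  # addressees actually understood
p_think_given_understood = 1 - 0.12  # speakers thought so when they were understood
p_think_given_not = 0.46             # speakers thought so when they were not understood

p_think_understood = (p_understood * p_think_given_understood
                      + (1 - p_understood) * p_think_given_not)
print(f"Implied overall rate: {p_think_understood:.0%}")  # ~72%, matching the text
```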

  As Keysar and Barr note, two days before Germany’s attack on Poland, Chamberlain sent a letter intended to make it clear that Britain would fight if any invasion occurred.7 The letter, phrased in polite diplomatese, was heard by Hitler as conciliatory—and the tanks rolled.

  Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.

  *

  1. Boaz Keysar, “The Illusory Transparency of Intention: Linguistic Perspective Taking in Text,” Cognitive Psychology 26, no. 2 (1994): 165–208, doi:10.1006/cogp.1994.1006.

  2. Keysar and Barr, “Self-Anchoring in Conversation.”

  3. Boaz Keysar, “Language Users as Problem Solvers: Just What Ambiguity Problem Do They Solve?,” in Social and Cognitive Approaches to Interpersonal Communication, ed. Susan R. Fussell and Roger J. Kreuz (Mahwah, NJ: Lawrence Erlbaum Associates, 1998), 175–200.

  4. Keysar and Barr, “Self-Anchoring in Conversation.”

  5. Boaz Keysar and Bridget Bly, “Intuitions of the Transparency of Idioms: Can One Keep a Secret by Spilling the Beans?,” Journal of Memory and Language 34, no. 1 (1995): 89–109, doi:10.1006/jmla.1995.1005.

  6. Boaz Keysar and Anne S. Henly, “Speakers’ Overestimation of Their Effectiveness,” Psychological Science 13, no. 3 (2002): 207–212, doi:10.1111/1467-9280.00439.

  7. Keysar and Barr, “Self-Anchoring in Conversation.”

  9

  Expecting Short Inferential Distances

  Homo sapiens’s environment of evolutionary adaptedness (a.k.a. EEA or “ancestral environment”) consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.

  In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period.

  In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.

  In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.

  In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot. You’re not likely to think, “Hey, maybe this person has well-supported background knowledge that no one in my band has even heard of,” because it was a reliable invariant of the ancestral environment that this didn’t happen.

  Conversely, if you say something blatantly obvious and the other person doesn’t see it, they’re the idiot, or they’re being deliberately obstinate to annoy you.

  And to top it off, if someone says something with no obvious support and expects you to believe it—acting all indignant when you don’t—then they must be crazy.

  Combined with the illusion of transparency and self-anchoring, I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.

  A biologist, speaking to a physicist, can justify evolution by saying it is the simplest explanation. But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase “simplest explanation” with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones. To someone else, “But it’s the simplest explanation!” may sound like an interesting but hardly knockdown argument; it doesn’t feel like all that powerful a tool for comprehending office politics or fixing a broken car. Obviously the biologist is infatuated with their own ideas, too arrogant to be open to alternative explanations which sound just as plausible. (If it sounds plausible to me, it should sound plausible to any sane member of my band.)

  And from the biologist’s perspective, they can understand how evolution might sound a little odd at first—but when someone rejects evolution even after the biologist explains that it’s the simplest explanation, well, it’s clear that nonscientists are just idiots and there’s no point in talking to them.

  A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.

  If at any point you make a statement without obvious justification in arguments you’ve previously supported, the audience just thinks you’re crazy.

  This also happens when you allow yourself to be seen visibly attaching greater weight to an argument than is justified in the eyes of the audience at that time. For example, talking as if you think “simpler explanation” is a knockdown argument for evolution (which it is), rather than a sorta-interesting idea (which it sounds like to someone who hasn’t been raised to revere Occam’s Razor).

  Oh, and you’d better not drop any hints that you think you’re working a dozen inferential steps away from what the audience knows, or that you think you have special background knowledge not available to them. The audience doesn’t know anything about an evolutionary-psychological argument for a cognitive bias to underestimate inferential distances leading to traffic jams in communication. They’ll just think you’re condescending.

  And if you think you can explain the concept of “systematically underestimated inferential distances” briefly, in just a few words, I’ve got some sad news for you . . .

  *

  10

  The Lens That Sees Its Own Flaws

  Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.

 
