Rationality: From AI to Zombies


by Eliezer Yudkowsky


  Stylistically, the essays in this book run the gamut from “lively textbook” to “compendium of thoughtful vignettes” to “riotous manifesto,” and the content is correspondingly varied. Rationality: From AI to Zombies collects hundreds of Yudkowsky’s blog posts into twenty-six “sequences,” chapter-like series of thematically linked posts. The sequences in turn are grouped into six books, covering the following topics:

  Book 1—Map and Territory. What is a belief, and what makes some beliefs work better than others? These four sequences explain the Bayesian notions of rationality, belief, and evidence. A running theme: the things we call “explanations” or “theories” may not always function like maps for navigating the world. As a result, we risk mixing up our mental maps with the other objects in our toolbox.

  Book 2—How to Actually Change Your Mind. This truth thing seems pretty handy. Why, then, do we keep jumping to conclusions, digging our heels in, and recapitulating the same mistakes? Why are we so bad at acquiring accurate beliefs, and how can we do better? These seven sequences discuss motivated reasoning and confirmation bias, with a special focus on hard-to-spot species of self-deception and the trap of “using arguments as soldiers.”

  Book 3—The Machine in the Ghost. Why haven’t we evolved to be more rational? Even taking into account resource constraints, it seems like we could be getting a lot more epistemic bang for our evidential buck. To get a realistic picture of how and why our minds execute their biological functions, we need to crack open the hood and see how evolution works, and how our brains work, with more precision. These three sequences illustrate how even philosophers and scientists can be led astray when they rely on intuitive, non-technical evolutionary or psychological accounts. By locating our minds within a larger space of goal-directed systems, we can identify some of the peculiarities of human reasoning and appreciate how such systems can “lose their purpose.”

  Book 4—Mere Reality. What kind of world do we live in? What is our place in that world? Building on the previous sequences’ examples of how evolutionary and cognitive models work, these six sequences explore the nature of mind and the character of physical law. In addition to applying and generalizing past lessons on scientific mysteries and parsimony, these essays raise new questions about the role science should play in individual rationality.

  Book 5—Mere Goodness. What makes something valuable—morally, or aesthetically, or prudentially? These three sequences ask how we can justify, revise, and naturalize our values and desires. The aim will be to find a way to understand our goals without compromising our efforts to actually achieve them. Here the biggest challenge is knowing when to trust your messy, complicated case-by-case impulses about what’s right and wrong, and when to replace them with simple exceptionless principles.

  Book 6—Becoming Stronger. How can individuals and communities put all this into practice? These three sequences begin with an autobiographical account of Yudkowsky’s own biggest philosophical blunders, with advice on how he thinks others might do better. The book closes with recommendations for developing evidence-based applied rationality curricula, and for forming groups and institutions to support interested students, educators, researchers, and friends.

  The sequences are also supplemented with “interludes,” essays taken from Yudkowsky’s personal website, http://www.yudkowsky.net. These tie in to the sequences in various ways; e.g., The Twelve Virtues of Rationality poetically summarizes many of the lessons of Rationality: From AI to Zombies, and is often quoted in other essays.

  Clicking the asterisk at the bottom of an essay will take you to the original version of it on Less Wrong (where you can leave comments) or on Yudkowsky’s website. You can also find a glossary for Rationality: From AI to Zombies terminology online, at http://wiki.lesswrong.com/wiki/RAZ_Glossary.

  Map and Territory

  This, the first book, begins with a sequence on cognitive bias: “Predictably Wrong.” The rest of the book won’t stick to just this topic; bad habits and bad ideas matter, even when they arise from our minds’ contents as opposed to our minds’ structure. Thus evolved and invented errors will both be on display in subsequent sequences, beginning with a discussion in “Fake Beliefs” of ways that one’s expectations can come apart from one’s professed beliefs.

  An account of irrationality would also be incomplete if it provided no theory about how rationality works—or if its “theory” only consisted of vague truisms, with no precise explanatory mechanism. The “Noticing Confusion” sequence asks why it’s useful to base one’s behavior on “rational” expectations, and what it feels like to do so.

  “Mysterious Answers” next asks whether science resolves these problems for us. Scientists base their models on repeatable experiments, not speculation or hearsay. And science has an excellent track record compared to anecdote, religion, and . . . pretty much everything else. Do we still need to worry about “fake” beliefs, confirmation bias, hindsight bias, and the like when we’re working with a community of people who want to explain phenomena, not just tell appealing stories?

  This is then followed by The Simple Truth, a stand-alone allegory on the nature of knowledge and belief.

  It is cognitive bias, however, that provides the clearest and most direct glimpse into the stuff of our psychology, into the shape of our heuristics and the logic of our limitations. It is with bias that we will begin.

  There is a passage in the Zhuangzi, a proto-Daoist philosophical text, that says: “The fish trap exists because of the fish; once you’ve gotten the fish, you can forget the trap.”20

  I invite you to explore this book in that spirit. Use it like you’d use a fish trap, ever mindful of the purpose you have for it. Carry with you what you can use, so long as it continues to have use; discard the rest. And may your purpose serve you well.

  Acknowledgments

  I am stupendously grateful to Nate Soares, Elizabeth Tarleton, Paul Crowley, Brienne Strohl, Adam Freese, Helen Toner, and dozens of volunteers for proofreading portions of this book.

  Special and sincere thanks to Alex Vermeer, who steered this book to completion, and Tsvi Benson-Tilsen, who combed through the entire book to ensure its readability and consistency.

  *

  1. The idea of personal bias, media bias, etc. resembles statistical bias in that it’s an error. Other ways of generalizing the idea of “bias” focus instead on its association with nonrandomness. In machine learning, for example, an inductive bias is just the set of assumptions a learner uses to derive predictions from a data set. Here, the learner is “biased” in the sense that it’s pointed in a specific direction; but since that direction might be truth, it isn’t a bad thing for an agent to have an inductive bias. It’s valuable and necessary. This distinguishes inductive “bias” quite clearly from the other kinds of bias.
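  As a rough, hypothetical illustration of how an inductive bias points a learner in a particular direction, consider two learners fitting the same noisy data under different assumptions; the specific models and numbers below are illustrative only, not part of the original footnote.

    # Hypothetical illustration: the same data, two different inductive biases.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = 2 * x + rng.normal(0, 0.05, size=x.shape)  # the underlying truth is linear

    # Learner A assumes "the relationship is a straight line" (a strong bias
    # that happens to point toward the truth here).
    linear = np.polynomial.Polynomial.fit(x, y, deg=1)

    # Learner B assumes only "some degree-7 polynomial" (a weaker bias,
    # freer to chase noise in the data).
    flexible = np.polynomial.Polynomial.fit(x, y, deg=7)

    x_new = 1.5  # a point outside the observed range
    print("linear learner predicts:  ", linear(x_new))
    print("flexible learner predicts:", flexible(x_new))

  Both learners are "biased" in the sense of being pointed in some direction; what matters is whether that direction happens to be toward the truth.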

  2. A sad coincidence: Leonard Nimoy, the actor who played Spock, passed away just a few days before the release of this book. Though we cite his character as a classic example of fake “Hollywood rationality,” we mean no disrespect to Nimoy’s memory.

  3. Timothy D. Wilson et al., “Introspecting About Reasons Can Reduce Post-choice Satisfaction,” Personality and Social Psychology Bulletin 19 (1993): 331–339.

  4. Jamin Brett Halberstadt and Gary M. Levine, “Effects of Reasons Analysis on the Accuracy of Predicting Basketball Games,” Journal of Applied Social Psychology 29, no. 3 (1999): 517–530.

  5. Keith E. Stanovich and Richard F. West, “Individual Differences in Reasoning: Implications for the Rationality Debate?,” Behavioral and Brain Sciences 23, no. 5 (2000): 645–665, http://journals.cambridge.org/abstract_S0140525X00003435.

  6. Timothy D. Wilson, David B. Centerbar, and Nancy Brekke, “Mental Contamination and the Debiasing Problem,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002).

  7. Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315, doi:10.1037/0033-295X.90.4.293.

  8. Richards J. Heuer, Psychology of Intelligence Analysis (Center for the Study of Intelligence, Central Intelligence Agency, 1999).

  9. Wayne Weiten, Psychology: Themes and Variations, Briefer Version, Eighth Edition (Cengage Learning, 2010).

  10. Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (1998): 175.

  11. Probability neglect is another cognitive bias. In the months and years following the September 11 attacks, many people chose to drive long distances rather than fly. Hijacking wasn’t likely, but it now felt like it was on the table; the mere possibility of hijacking hugely impacted decisions. By relying on black-and-white reasoning (cars and planes are either “safe” or “unsafe,” full stop), people actually put themselves in much more danger. Where they should have weighed the probability of dying on a cross-country car trip against the probability of dying on a cross-country flight—the former is hundreds of times more likely—they instead relied on their general feeling of worry and anxiety (the affect heuristic). We can see the same pattern of behavior in children who, hearing arguments for and against the safety of seat belts, hop back and forth between thinking seat belts are a completely good idea and thinking they are a completely bad one, instead of trying to compare the strengths of the pro and con considerations.21
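  To make the skipped comparison concrete, here is a rough sketch of the calculation; the per-mile fatality rates are hypothetical placeholders rather than cited statistics, and only the structure of the comparison is the point.

    # Hypothetical numbers, for illustration only: the point is comparing
    # expected risks rather than sorting options into "safe" and "unsafe."
    cross_country_miles = 2_500

    deaths_per_mile_driving = 1e-8   # assumed order of magnitude, not a cited figure
    deaths_per_mile_flying = 1e-10   # assumed order of magnitude, not a cited figure

    p_driving = deaths_per_mile_driving * cross_country_miles
    p_flying = deaths_per_mile_flying * cross_country_miles

    print(f"driving: ~{p_driving:.1e}   flying: ~{p_flying:.1e}")
    print(f"driving is roughly {p_driving / p_flying:.0f} times as risky")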

  Some more examples of biases are: the peak/end rule (evaluating remembered events based on their most intense moment, and how they ended); anchoring (basing decisions on recently encountered information, even when it’s irrelevant)22 and self-anchoring (using yourself as a model for others’ likely characteristics, without giving enough thought to ways you’re atypical);23 and status quo bias (excessively favoring what’s normal and expected over what’s new and different).24

  12. Katherine Hansen et al., “People Claim Objectivity After Knowingly Using Biased Strategies,” Personality and Social Psychology Bulletin 40, no. 6 (2014): 691–699.

  13. Similarly, Pronin writes of gender bias blindness:

  In one study, participants considered a male and a female candidate for a police-chief job and then assessed whether being “streetwise” or “formally educated” was more important for the job. The result was that participants favored whichever background they were told the male candidate possessed (e.g., if told he was “streetwise,” they viewed that as more important). Participants were completely blind to this gender bias; indeed, the more objective they believed they had been, the more bias they actually showed.25

  Even when we know about biases, Pronin notes, we remain “naive realists” about our own beliefs. We reliably fall back into treating our beliefs as distortion-free representations of how things actually are.26

  14. In a survey of 76 people waiting in airports, individuals rated themselves much less susceptible to cognitive biases on average than a typical person in the airport. In particular, people think of themselves as unusually unbiased when the bias is socially undesirable or has difficult-to-notice consequences.27 Other studies find that people with personal ties to an issue see those ties as enhancing their insight and objectivity; but when they see other people exhibiting the same ties, they infer that those people are overly attached and biased.

  15. Joyce Ehrlinger, Thomas Gilovich, and Lee Ross, “Peering Into the Bias Blind Spot: People’s Assessments of Bias in Themselves and Others,” Personality and Social Psychology Bulletin 31, no. 5 (2005): 680–692.

  16. Richard F. West, Russell J. Meserve, and Keith E. Stanovich, “Cognitive Sophistication Does Not Attenuate the Bias Blind Spot,” Journal of Personality and Social Psychology 103, no. 3 (2012): 506.

  17. . . . Not to be confused with people who think they’re unusually intelligent, thoughtful, etc. because of the illusory superiority bias.

  18. Michael J. Liersch and Craig R. M. McKenzie, “Duration Neglect by Numbers and Its Elimination by Graphs,” Organizational Behavior and Human Decision Processes 108, no. 2 (2009): 303–314.

  19. Sebastian Serfas, Cognitive Biases in the Capital Investment Context: Theoretical Considerations and Empirical Experiments on Violations of Normative Rationality (Springer, 2010).

  20. Zhuangzi and Burton Watson, The Complete Works of Zhuangzi (Columbia University Press, 1968).

  21. Cass R. Sunstein, “Probability Neglect: Emotions, Worst Cases, and Law,” Yale Law Journal (2002): 61–107.

  22. Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions (HarperCollins, 2008).

  23. Boaz Keysar and Dale J. Barr, “Self-Anchoring in Conversation: Why Language Users Do Not Do What They ‘Should,’” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 150–166, doi:10.2277/0521796792.

  24. Scott Eidelman and Christian S. Crandall, “Bias in Favor of the Status Quo,” Social and Personality Psychology Compass 6, no. 3 (2012): 270–281.

  25. Eric Luis Uhlmann and Geoffrey L. Cohen, “‘I think it, therefore it’s true’: Effects of Self-perceived Objectivity on Hiring Discrimination,” Organizational Behavior and Human Decision Processes 104, no. 2 (2007): 207–223.

  26. Emily Pronin, “How We See Ourselves and How We See Others,” Science 320 (2008): 1177–1180, http://psych.princeton.edu/psychology/research/pronin/pubs/2008%20Self%20and%20Other.pdf.

  27. Emily Pronin, Daniel Y. Lin, and Lee Ross, “The Bias Blind Spot: Perceptions of Bias in Self versus Others,” Personality and Social Psychology Bulletin 28, no. 3 (2002): 369–381.

  Book I

  Map and Territory

  A. Predictably Wrong

  1. What Do I Mean By “Rationality”?

  2. Feeling Rational

  3. Why Truth? And . . .

  4. . . . What’s a Bias, Again?

  5. Availability

  6. Burdensome Details

  7. Planning Fallacy

  8. Illusion of Transparency: Why No One Understands You

  9. Expecting Short Inferential Distances

  10. The Lens That Sees Its Own Flaws

  B. Fake Beliefs

  11. Making Beliefs Pay Rent (in Anticipated Experiences)

  12. A Fable of Science and Politics

  13. Belief in Belief

  14. Bayesian Judo

  15. Pretending to be Wise

  16. Religion’s Claim to be Non-Disprovable

  17. Professing and Cheering

  18. Belief as Attire

  19. Applause Lights

  C. Noticing Confusion

  20. Focus Your Uncertainty

  21. What Is Evidence?

  22. Scientific Evidence, Legal Evidence, Rational Evidence

  23. How Much Evidence Does It Take?

  24. Einstein’s Arrogance

  25. Occam’s Razor

  26. Your Strength as a Rationalist

  27. Absence of Evidence Is Evidence of Absence

  28. Conservation of Expected Evidence

  29. Hindsight Devalues Science

  D. Mysterious Answers

  30. Fake Explanations

  31. Guessing the Teacher’s Password

  32. Science as Attire

  33. Fake Causality

  34. Semantic Stopsigns

  35. Mysterious Answers to Mysterious Questions

  36. The Futility of Emergence

  37. Say Not “Complexity”

  38. Positive Bias: Look into the Dark

  39. Lawful Uncertainty

  40. My Wild and Reckless Youth

  41. Failing to Learn from History

  42. Making History Available

  43. Explain/Worship/Ignore?

  44. “Science” as Curiosity-Stopper

  45. Truly Part of You

  Interlude: The Simple Truth

  Part A

  Predictably Wrong

  1

What Do I Mean By “Rationality”?

  I mean:

  Epistemic rationality: systematically improving the accuracy of your beliefs.

  Instrumental rationality: systematically achieving your values.

  When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.

  This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.

  Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

  So rationality is about forming true beliefs and making winning decisions.

  Pursuing “truth” here doesn’t mean dismissing uncertain or indirect evidence. Looking at the room around you and building a mental map of it isn’t different, in principle, from believing that the Earth has a molten core, or that Julius Caesar was bald. Those questions, being distant from you in space and time, might seem more airy and abstract than questions about your bookcase. Yet there are facts of the matter about the state of the Earth’s core in 2015 CE and about the state of Caesar’s head in 50 BCE. These facts may have real effects upon you even if you never find a way to meet Caesar or the core face-to-face.

 
