
Rationality: From AI to Zombies


by Eliezer Yudkowsky


  You don’t even ask whether the incident reflects poorly on God, so there’s no need to quickly blurt out “The ways of God are mysterious!” or “We’re not wise enough to question God’s decisions!” or “Murdering babies is okay when God does it!” That part of the question is just-not-thought-about.

  The reason that educated religious people stay religious, I suspect, is that when they doubt, they are subconsciously very careful to attack their own beliefs only at the strongest points—places where they know they can defend. Moreover, places where rehearsing the standard defense will feel strengthening.

  It probably feels really good, for example, to rehearse one’s prescripted defense for “Doesn’t Science say that the universe is just meaningless atoms bopping around?,” because it confirms the meaning of the universe and how it flows from God, etc. Much more comfortable to think about than an illiterate Egyptian mother wailing over the crib of her slaughtered son. Anyone who spontaneously thinks about the latter, when questioning their faith in Judaism, is really questioning it, and is probably not going to stay Jewish much longer.

  My point here is not just to beat up on Orthodox Judaism. I’m sure that there’s some reply or other for the Slaying of the Firstborn, and probably a dozen of them. My point is that, when it comes to spontaneous self-questioning, one is much more likely to spontaneously self-attack strong points with comforting replies to rehearse, than to spontaneously self-attack the weakest, most vulnerable points. Similarly, one is likely to stop at the first reply and be comforted, rather than further criticizing the reply. A better title than “Avoiding Your Belief’s Real Weak Points” would be “Not Spontaneously Thinking About Your Belief’s Most Painful Weaknesses.”

  More than anything, the grip of religion is sustained by people just-not-thinking-about the real weak points of their religion. I don’t think this is a matter of training, but a matter of instinct. People don’t think about the real weak points of their beliefs for the same reason they don’t touch an oven’s red-hot burners; it’s painful.

  To do better: When you’re doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don’t rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind. Punch yourself in the solar plexus. Stick a knife in your heart, and wiggle to widen the hole. In the face of the pain, rehearse only this:

  What is true is already so.

  Owning up to it doesn’t make it worse.

  Not being open about it doesn’t make it go away.

  And because it’s true, it is what is there to be interacted with.

  Anything untrue isn’t there to be lived.

  People can stand what is true,

  for they are already enduring it.

  —Eugene Gendlin¹

  (Hat tip to Stephen Omohundro.)

  *

  1. Eugene T. Gendlin, Focusing (Bantam Books, 1982).

  75

  Motivated Stopping and Motivated Continuation

  While I disagree with some views of the Fast and Frugal crowd—in my opinion they make a few too many lemons into lemonade—it also seems to me that they tend to develop the most psychologically realistic models of any school of decision theory. Most experiments present the subjects with options, and the subject chooses an option, and that’s the experimental result. The frugalists realized that in real life, you have to generate your options, and they studied how subjects did that.

  Likewise, although many experiments present evidence on a silver platter, in real life you have to gather evidence, which may be costly, and at some point decide that you have enough evidence to stop and choose. When you’re buying a house, you don’t get exactly ten houses to choose from, and you aren’t led on a guided tour of all of them before you’re allowed to decide anything. You look at one house, and another, and compare them to each other; you adjust your aspirations—reconsider how much you really need to be close to your workplace and how much you’re really willing to pay; you decide which house to look at next; and at some point you decide that you’ve seen enough houses, and choose.

  Gilovich’s distinction between motivated skepticism and motivated credulity highlights how conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.

  I suggest that an analogous bias in psychologically realistic search is motivated stopping and motivated continuation: when we have a hidden motive for choosing the “best” current option, we have a hidden motive to stop, and choose, and reject consideration of any more options. When we have a hidden motive to reject the current best option, we have a hidden motive to suspend judgment pending additional evidence, to generate more options—to find something, anything, to do instead of coming to a conclusion.

  A major historical scandal in statistics was R. A. Fisher, an eminent founder of the field, insisting that no causal link had been established between smoking and lung cancer. “Correlation is not causation,” he testified to Congress. Perhaps smokers had a gene which both predisposed them to smoke and predisposed them to lung cancer.

  Or maybe Fisher’s being employed as a consultant for tobacco firms gave him a hidden motive to decide that the evidence already gathered was insufficient to come to a conclusion, and it was better to keep looking. Fisher was also a smoker himself, and died of colon cancer in 1962.

  (Ad hominem note: Fisher was a frequentist. Bayesians are more reasonable about inferring probable causality.)

  Like many other forms of motivated skepticism, motivated continuation can try to disguise itself as virtuous rationality. Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later. (Apparent contradiction resolved as follows: Spending one hour discussing the problem, with your mind carefully cleared of all conclusions, is different from waiting ten years on another $20 million study.)

  As for motivated stopping, it appears in every place a third alternative is feared, and wherever you have an argument whose obvious counterargument you would rather not see, and in other places as well. It appears when you pursue a course of action that makes you feel good just for acting, and so you’d rather not investigate how well your plan really worked, for fear of destroying the warm glow of moral satisfaction you paid good money to purchase. It appears wherever your beliefs and anticipations get out of sync, so you have a reason to fear any new evidence gathered.

  The moral is that the decision to terminate a search procedure (temporarily or permanently) is, like the search procedure itself, subject to bias and hidden motives. You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there’s a lot of fast cheap evidence you haven’t gathered yet—there are websites you could visit, there are counter-counter arguments you could consider, or you haven’t closed your eyes for five minutes by the clock trying to think of a better option. You should suspect motivated continuation when some evidence is leaning in a way you don’t like, but you decide that more evidence is needed—expensive evidence that you know you can’t gather anytime soon, as opposed to something you’re going to look up on Google in thirty minutes—before you’ll have to do anything uncomfortable.
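The bias described above has a well-known statistical counterpart: optional stopping. The following toy simulation (my own illustration, not from the text) shows how a motivated stopping rule distorts conclusions even when every individual piece of evidence is unbiased. An agent flips a fair coin, counting each flip as evidence for or against its preferred conclusion; one version examines a fixed amount of evidence, while the other stops the moment the running total happens to favor what it wants to believe.

```python
import random

random.seed(0)

def run_trial(motivated, max_samples=100):
    """Gather coin-flip 'evidence'. Each flip is +1 (for the preferred
    conclusion) or -1 (against), drawn from a fair coin, so the evidence
    stream itself is genuinely 50/50."""
    score = 0
    for _ in range(max_samples):
        score += 1 if random.random() < 0.5 else -1
        # Motivated stopping: quit the moment the running total
        # favors the preferred conclusion.
        if motivated and score > 0:
            break
    return score > 0  # did we "conclude" in favor of the preferred belief?

trials = 10_000
fair = sum(run_trial(motivated=False) for _ in range(trials)) / trials
biased = sum(run_trial(motivated=True) for _ in range(trials)) / trials
print(f"fixed-sample conclusion rate: {fair:.2f}")
print(f"motivated-stopping rate:      {biased:.2f}")
```

With a fixed sample, the preferred conclusion wins only about half the time, as it should; with motivated stopping, the same fair evidence "confirms" the preferred conclusion in the large majority of runs, because the agent terminates its search exactly when the noise happens to lean its way.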

  *

  76

  Fake Justification

  Many Christians who’ve stopped really believing now insist that they revere the Bible as a source of ethical advice. The standard atheist reply is given by Sam Harris: “You and I both know that it would take us five minutes to produce a book that offers a more coherent and compassionate morality than the Bible does.” Similarly, one may try to insist that the Bible is valuable as a literary work. Then why not revere Lord of the Rings, a vastly superior literary work? And despite the standard criticisms of Tolkien’s morality, Lord of the Rings is at least superior to the Bible as a source of ethics. So why don’t people wear little rings around their neck, instead of crosses? Even Harry Potter is superior to the Bible, both as a work of literary art and as moral philosophy. If I really wanted to be cruel, I would compare the Bible to Jacqueline Carey’s Kushiel series.

  “How can you justify buying a $1 million gem-studded laptop,” you ask your friend, “when so many people have no laptops at all?” And your friend says, “But think of the employment that this will provide—to the laptop maker, the laptop maker’s advertising agency—and then they’ll buy meals and haircuts—it will stimulate the economy and eventually many people will get their own laptops.” But it would be even more efficient to buy 5,000 One Laptop Per Child laptops, thus providing employment to the OLPC manufacturers and giving out laptops directly.

  I’ve touched before on the failure to look for third alternatives. But this is not really motivated stopping. Calling it “motivated stopping” would imply that there was a search carried out in the first place.

  In The Bottom Line, I observed that only the real determinants of our beliefs can ever influence our real-world accuracy, and only the real determinants of our actions can influence our effectiveness in achieving our goals. Someone who buys a million-dollar laptop was really thinking, “Ooh, shiny,” and that was the one true causal history of their decision to buy a laptop. No amount of “justification” can change this, unless the justification is a genuine, newly running search process that can change the conclusion. Really change the conclusion. Most criticism carried out from a sense of duty is more of a token inspection than anything else. Free elections in a one-party country.

  To genuinely justify the Bible as a lauding-object by reference to its literary quality, you would have to somehow perform a neutral reading through candidate books until you found the book of highest literary quality. Renown is one reasonable criterion for generating candidates, so I suppose you could legitimately end up reading Shakespeare, the Bible, and Gödel, Escher, Bach. (Otherwise it would be quite a coincidence to find the Bible as a candidate, among a million other books.) The real difficulty is in that “neutral reading” part. Easy enough if you’re not a Christian, but if you are . . .

  But of course nothing like this happened. No search ever occurred. Writing the justification of “literary quality” above the bottom line of “revere the Bible” is a historical misrepresentation of how the bottom line really got there, like selling cat milk as cow milk. That is just not where the bottom line really came from. That is just not what originally happened to produce that conclusion.

  If you genuinely subject your conclusion to a criticism that can potentially de-conclude it—if the criticism genuinely has that power—then that does modify “the real algorithm behind” your conclusion. It changes the entanglement of your conclusion over possible worlds. But people overestimate, by far, how likely they really are to change their minds.

  With all those open minds out there, you’d think there’d be more belief-updating.

  Let me guess: Yes, you admit that you originally decided you wanted to buy a million-dollar laptop by thinking, “Ooh, shiny.” Yes, you concede that this isn’t a decision process consonant with your stated goals. But since then, you’ve decided that you really ought to spend your money in such fashion as to provide laptops to as many laptopless wretches as possible. And yet you just couldn’t find any more efficient way to do this than buying a million-dollar diamond-studded laptop—because, hey, you’re giving money to a laptop store and stimulating the economy! Can’t beat that!

  My friend, I am damned suspicious of this amazing coincidence. I am damned suspicious that the best answer under this lovely, rational, altruistic criterion X, is also the idea that just happened to originally pop out of the unrelated indefensible process Y. If you don’t think that rolling dice would have been likely to produce the correct answer, then how likely is it to pop out of any other irrational cognition?

  It’s improbable that you used mistaken reasoning, yet made no mistakes.

  *

  77

  Is That Your True Rejection?

  It happens every now and then, that the one encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.

  If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!”

  And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who disbelieved, saying, “Come back when you have a PhD.”

  Now there are good and bad reasons to get a PhD, but this is one of the bad ones.

  There are many reasons why someone actually has an adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis matches against “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth.”

  So immediately, at the speed of perception, the idea is rejected. If, afterward, someone says “Why not?,” this launches a search for justification. But this search will not necessarily hit on the true reason—by “true reason” I mean not the best reason that could be offered, but rather, whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.

  Instead, the search for justification hits on the justifying-sounding fact, “This speaker does not have a PhD.”

  But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?

  And more to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.

  They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.”

  But do people actually believe arbitrary professors at Harvard who say weird things? Of course not. (But if I were a professor at Harvard, it would in fact be easier to get media attention. Reporters initially disinclined to believe me—who would probably be equally disinclined to believe a random PhD-bearer—would still report on me, because it would be news that a Harvard professor believes such a weird thing.)

  If you are saying things that sound wrong to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and the hearer is a stranger, unfamiliar with you personally and with the subject matter of your field; then I suspect that the point at which the average person will actually start to grant credence overriding their initial impression, purely because of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as “beyond the mundane.”

  This is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, “Where are the technical details?” or “Come back when you have a PhD!” And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And Nanosystems is a great book. But did the same people who said, “Come back when you have a PhD,” actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.

  It has similarly been a general rule with the Machine Intelligence Research Institute that, whatever it is we’re supposed to do to be more credible, when we actually do it, nothing much changes. “Do you do any sort of code development? I’m not interested in supporting an organization that doesn’t develop code” → OpenCog → nothing changes. “Eliezer Yudkowsky lacks academic credentials” → Professor Ben Goertzel installed as Director of Research → nothing changes. The one thing that actually has seemed to raise credibility, is famous people associating with the organization, like Peter Thiel funding us, or Ray Kurzweil on the Board.

  This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection, may not make the real difference; and you should ponder that carefully before spending huge efforts. If the venture capitalist says “If only your sales were growing a little faster!,” or if the potential customer says “It seems good, but you don’t have feature X,” that may not be the true rejection. Fixing it may, or may not, change anything.

  And it would also be something to keep in mind during disagreements. Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.

 
