Rationality: From AI to Zombies

by Eliezer Yudkowsky

  A single fleeting image can be enough to prime associated words for recognition. Don’t think it takes anything more to set confirmation bias in motion. All it takes is that one quick flash, and the bottom line is already decided, for we change our minds less often than we think . . .

  *


  89

  Do We Believe Everything We’re Told?

  Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively “busy” by asking them to keep a lookout for “5” in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors. Most of the experiments seemed to bear out the idea that cognitive busyness increased anchoring, and more generally contamination.

  Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging: Do we believe everything we’re told?

  One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

  Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y’know, logical and intuitive. But Gilbert saw a way of testing Descartes’s and Spinoza’s hypotheses experimentally.

  If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements. If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.

  Gilbert, Krull, and Malone bore out this result, showing that, among subjects presented with novel statements labeled TRUE or FALSE, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted), but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted).1

  A much more dramatic illustration was produced in followup experiments by Gilbert, Tafarodi, and Malone.2 Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false. Some reports contained false statements that exacerbated the severity of the crime, other reports contained false statements that extenuated (excused) the crime. Some subjects also had to pay attention to strings of digits, looking for a “5,” while reading the crime reports—this being the distraction task to create cognitive busyness. Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.

  Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the “exacerbating” condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime. Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime. This nearly twofold difference was, as you might suspect, statistically significant.

  Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number “5.” Thus, they could devote more attention to “unbelieving” statements labeled false. These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused.

  Gilbert, Tafarodi, and Malone’s paper was entitled “You Can’t Not Believe Everything You Read.”

  This suggests—to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.

  PS: According to an unverified rumor I just made up, people will be less skeptical of this essay because of the distracting color changes.

  *

  1. Daniel T. Gilbert, Douglas S. Krull, and Patrick S. Malone, “Unbelieving the Unbelievable: Some Problems in the Rejection of False Information,” Journal of Personality and Social Psychology 59 (4 1990): 601–613, doi:10.1037/0022-3514.59.4.601.

  2. Gilbert, Tafarodi, and Malone, “You Can’t Not Believe Everything You Read.”

  90

  Cached Thoughts

  One of the single greatest puzzles about the human brain is how the damn thing works at all when most neurons fire 10–20 times per second, or 200Hz tops. In neurology, the “hundred-step rule” is that any postulated operation has to complete in at most 100 sequential steps—you can be as parallel as you like, but you can’t postulate more than 100 (preferably fewer) neural spikes one after the other. (The arithmetic: if a perceptual judgment completes in around half a second, a 200Hz neuron has time for only about a hundred spikes in sequence.)

  Can you imagine having to program using 100Hz CPUs, no matter how many of them you had? You’d also need a hundred billion processors just to get anything done in realtime.

  If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. And it’s a very neural idiom—recognition, association, completing the pattern.
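  To make the caching idiom concrete, here is a minimal sketch in Python (nothing neurons literally run; the function names are invented for illustration). The first call pays the full cost of computing and stores the answer; the repeat call is a cheap lookup:

      import functools
      import time

      def expensive_op(n):
          """Stand-in for a costly computation we'd rather not redo."""
          time.sleep(0.1)  # simulate the cost of recomputing from scratch
          return n * n

      @functools.lru_cache(maxsize=None)
      def cached_op(n):
          """Same computation, but results are stored and looked up on repeat calls."""
          return expensive_op(n)

      cached_op(12)  # first call: computes the slow way, then caches the result
      cached_op(12)  # second call: answered from the cache, no recomputation

  That is the whole trade: the cached answer comes back fast, whether or not it is still the right answer.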

  It’s a good guess that the actual majority of human cognition consists of cache lookups.

  This thought does tend to go through my mind at certain times.

  There was a wonderfully illustrative story which I thought I had bookmarked, but couldn’t re-find: it was the story of a man whose know-it-all neighbor had once claimed in passing that the best way to remove a chimney from your house was to knock out the fireplace, wait for the bricks to drop down one level, knock out those bricks, and repeat until the chimney was gone. Years later, when the man wanted to remove his own chimney, this cached thought was lurking, waiting to pounce . . .

  As the man noted afterward—you can guess it didn’t go well—his neighbor was not particularly knowledgeable in these matters, not a trusted source. If he’d questioned the idea, he probably would have realized it was a poor one. Some cache hits we’d be better off recomputing. But the brain completes the pattern automatically—and if you don’t consciously realize the pattern needs correction, you’ll be left with a completed pattern.

  I suspect that if the thought had occurred to the man himself—if he’d personally had this bright idea for how to remove a chimney—he would have examined the idea more critically. But if someone else has already thought an idea through, you can save on computing power by caching their conclusion—right?

  In modern civilization particularly, no one can think fast enough to think their own thoughts. If I’d been abandoned in the woods as an infant, raised by wolves or silent robots, I would scarcely be recognizable as human. No one can think fast enough to recapitulate the wisdom of a hunter-gatherer tribe in one lifetime, starting from scratch. As for the wisdom of a literate civilization, forget it.

  But the flip side of this is that I continually see people who aspire to critical thinking, repeating back cached thoughts which were not invented by critical thinkers.

  A good example is the skeptic who concedes, “Well, you can’t prove or disprove a religion by factual evidence.” As I have pointed out elsewhere, this is simply false as probability theory. And it is also simply false relative to the real psychology of religion—a few centuries ago, saying this would have gotten you burned at the stake. A mother whose daughter has cancer prays, “God, please heal my daughter,” not, “Dear God, I know that religions are not allowed to have any falsifiable consequences, which means that you can’t possibly heal my daughter, so . . . well, basically, I’m praying to make myself feel better, instead of doing something that could actually help my daughter.”

  But people read “You can’t prove or disprove a religion by factual evidence,” and then, the next time they see a piece of evidence disproving a religion, their brain completes the pattern. Even some atheists repeat this absurdity without hesitation. If they’d thought of the idea themselves, rather than hearing it from someone else, they would have been more skeptical.

  Death. Complete the pattern: “Death gives meaning to life.”

  It’s frustrating, talking to good and decent folk—people who would never in a thousand years spontaneously think of wiping out the human species—raising the topic of existential risk, and hearing them say, “Well, maybe the human species doesn’t deserve to survive.” They would never in a thousand years shoot their own child, who is a part of the human species, but the brain completes the pattern.

  What patterns are being completed, inside your mind, that you never chose to be there?

  Rationality. Complete the pattern: “Love isn’t rational.”

  If this idea had suddenly occurred to you personally, as an entirely new thought, how would you examine it critically? I know what I would say, but what would you? It can be hard to see with fresh eyes. Try to keep your mind from completing the pattern in the standard, unsurprising, already-known way. It may be that there is no better answer than the standard one, but you can’t think about the answer until you can stop your brain from filling in the answer automatically.

  Now that you’ve read this, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!

  *

  91

  The “Outside the Box” Box

  Whenever someone exhorts you to “think outside the box,” they usually, for your convenience, point out exactly where “outside the box” is located. Isn’t it funny how nonconformists all dress the same . . .

  In Artificial Intelligence, everyone outside the field has a cached result for brilliant new revolutionary AI idea—neural networks, which work just like the human brain! New AI idea. Complete the pattern: “Logical AIs, despite all the big promises, have failed to provide real intelligence for decades—what we need are neural networks!”

  This cached thought has been around for three decades. Still no general intelligence. But, somehow, everyone outside the field knows that neural networks are the Dominant-Paradigm-Overthrowing New Idea, ever since backpropagation was invented in the 1970s. Talk about your aging hippies.

  Nonconformist images, by their nature, permit no departure from the norm. If you don’t wear black, how will people know you’re a tortured artist? How will people recognize uniqueness if you don’t fit the standard pattern for what uniqueness is supposed to look like? How will anyone recognize you’ve got a revolutionary AI concept, if it’s not about neural networks?

  Another example of the same trope is “subversive” literature, all of which sounds the same, backed up by a tiny defiant league of rebels who control the entire English Department. As Anonymous asks on Scott Aaronson’s blog:

  Has any of the subversive literature you’ve read caused you to modify any of your political views?

  Or as Lizard observes:

  Revolution has already been televised. Revolution has been merchandised. Revolution is a commodity, a packaged lifestyle, available at your local mall. $19.95 gets you the black mask, the spray can, the “Crush the Fascists” protest sign, and access to your blog where you can write about the police brutality you suffered when you chained yourself to a fire hydrant. Capitalism has learned how to sell anti-capitalism.

  Many in Silicon Valley have observed that the vast majority of venture capitalists at any given time are all chasing the same Revolutionary Innovation, and it’s the Revolutionary Innovation that IPO’d six months ago. This is an especially crushing observation in venture capital, because there’s a direct economic motive to not follow the herd—either someone else is also developing the product, or someone else is bidding too much for the startup. Steve Jurvetson once told me that at Draper Fisher Jurvetson, only two partners need to agree in order to fund any startup up to $1.5 million. And if all the partners agree that something sounds like a good idea, they won’t do it. If only grant committees were this sane.

  The problem with originality is that you actually have to think in order to attain it, instead of letting your brain complete the pattern. There is no conveniently labeled “Outside the Box” to which you can immediately run off. There’s an almost Zen-like quality to it—like the way you can’t teach satori in words because satori is the experience of words failing you. The more you try to follow the Zen Master’s instructions in words, the further you are from attaining an empty mind.

  There is a reason, I think, why people do not attain novelty by striving for it. Properties like truth or good design are independent of novelty: 2 + 2 = 4, yes, really, even though this is what everyone else thinks too. People who strive to discover truth or to invent good designs may, in the course of time, attain creativity. Not every change is an improvement, but every improvement is a change.

  The one who says “I want to build an original mousetrap!” and not “I want to build an optimal mousetrap!” nearly always wishes to be perceived as original. “Originality” in this sense is inherently social, because it can only be determined by comparison to other people. So their brain simply completes the standard pattern for what is perceived as “original,” and their friends nod in agreement and say it is subversive.

  Business books always tell you, for your convenience, where your cheese has been moved to. Otherwise the readers would be left standing around saying, “Where is this ‘Outside the Box’ I’m supposed to go?”

  Actually thinking, like satori, is a wordless act of mind.

  The eminent philosophers of Monty Python said it best of all in Life of Brian:1

  “You’ve got to think for yourselves! You’re all individuals!”

  “Yes, we’re all individuals!”

  “You’re all different!”

  “Yes, we’re all different!”

  “You’ve all got to work it out for yourselves!”

  “Yes, we’ve got to work it out for ourselves!”

  *

  1. Graham Chapman et al., Monty Python’s The Life of Brian (of Nazareth) (Eyre Methuen, 1979).

  92

  Original Seeing

  Since Robert Pirsig put this very well, I’ll just copy down what he said. I don’t know if this story is based on reality or not, but either way, it’s true.1

  He’d been having trouble with students who had nothing to say. At first he thought it was laziness but later it became apparent that it wasn’t. They just couldn’t think of anything to say.

  One of them, a girl with strong-lensed glasses, wanted to write a five-hundred-word essay about the United States. He was used to the sinking feeling that comes from statements like this, and suggested without disparagement that she narrow it down to just Bozeman.

  When the paper came due she didn’t have it and was quite upset. She had tried and tried but she just couldn’t think of anything to say.

  It just stumped him. Now he couldn’t think of anything to say. A silence occurred, and then a peculiar answer: “Narrow it down to the main street of Bozeman.” It was a stroke of insight.

  She nodded dutifully and went out. But just before her next class she came back in real distress, tears this time, distress that had obviously been there for a long time. She still couldn’t think of anything to say, and couldn’t understand why, if she couldn’t think of anything about all of Bozeman, she should be able to think of something about just one street.

  He was furious. “You’re not looking!” he said. A memory came back of his own dismissal from the University for having too much to say. For every fact there is an infinity of hypotheses. The more you look the more you see. She really wasn’t looking and yet somehow didn’t understand this.

  He told her angrily, “Narrow it down to the front of one building on the main street of Bozeman. The Opera House. Start with the upper left-hand brick.”

  Her eyes, behind the thick-lensed glasses, opened wide.

  She came in the next class with a puzzled look and handed him a five-thousand-word essay on the front of the Opera House on the main street of Bozeman, Montana. “I sat in the hamburger stand across the street,” she said, “and started writing about the first brick, and the second brick, and then by the third brick it all started to come and I couldn’t stop. They thought I was crazy, and they kept kidding me, but here it all is. I don’t understand it.”

 
