Rationality: From AI to Zombies


by Eliezer Yudkowsky


  “Bayesianism” is often contrasted with “frequentism.” Some frequentists criticize Bayesians for treating probabilities as subjective states of belief, rather than as objective frequencies of events. Kruschke and Yudkowsky have replied that frequentism is even more “subjective” than Bayesianism, because frequentism’s probability assignments depend on the intentions of the experimenter.10
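  To make the dependence on experimenter intentions concrete, consider a standard optional-stopping illustration (the numbers here are a common textbook example, not taken from the text): a coin comes up heads 9 times and tails 3 times. Whether a frequentist test rejects the fair-coin hypothesis at the 0.05 level depends on whether the experimenter intended to flip exactly 12 times, or to flip until the third tail appeared. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import binom, nbinom

# Observed data: 9 heads and 3 tails in 12 flips of a coin.
# Null hypothesis: the coin is fair (theta = 0.5).

# Intention 1: "flip exactly N = 12 times" (binomial sampling).
# One-tailed p-value: probability of 9 or more heads in 12 flips.
p_fixed_n = binom.sf(8, 12, 0.5)   # ~0.073 -> not significant at 0.05

# Intention 2: "flip until the 3rd tail appears" (negative binomial).
# One-tailed p-value: probability of 9 or more heads occurring before
# the 3rd tail, i.e., of needing 12 or more flips.
p_fixed_z = nbinom.sf(8, 3, 0.5)   # ~0.033 -> significant at 0.05

print(p_fixed_n, p_fixed_z)
```

  The data are identical in both cases; only the experimenter's private stopping intention differs, yet the two p-values land on opposite sides of the conventional 0.05 threshold.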

  Importantly, this philosophical disagreement shouldn’t be conflated with the distinction between Bayesian and frequentist data analysis methods, which can both be useful when employed correctly. Bayesian statistical tools have become cheaper to use since the 1980s, and their informativeness, intuitiveness, and generality have come to be more widely appreciated, resulting in “Bayesian revolutions” in many sciences. However, traditional frequentist methods remain more popular, and in some contexts they are still clearly superior to Bayesian approaches. Kruschke’s Doing Bayesian Data Analysis is a fun and accessible introduction to the topic.11
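  By contrast, a Bayesian analysis of the same coin data obeys the likelihood principle: both stopping rules yield likelihoods proportional to theta^9 (1 - theta)^3, so the posterior is identical either way. A minimal sketch with a uniform Beta(1, 1) prior (again an illustrative example, not an excerpt from Kruschke's book):

```python
from scipy.stats import beta

# 9 heads and 3 tails observed. With a uniform Beta(1, 1) prior on
# theta, the posterior is Beta(1 + 9, 1 + 3) under *either* stopping
# rule, since both likelihoods are proportional to theta^9 (1-theta)^3.
posterior = beta(1 + 9, 1 + 3)

print(posterior.mean())          # posterior mean, ~0.714
print(posterior.interval(0.95))  # central 95% credible interval
```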

  In light of evidence that training in statistics—and some other fields, such as psychology—improves reasoning skills outside the classroom, statistical literacy is directly relevant to the project of overcoming bias. (Classes in formal logic and informal fallacies have not proven similarly useful.)12,13

  An Art in its Infancy

  We conclude with three sequences on individual and collective self-improvement. “Yudkowsky’s Coming of Age” provides a last in-depth illustration of the dynamics of irrational belief, this time spotlighting the author’s own intellectual history. “Challenging the Difficult” asks what it takes to solve a truly difficult problem—including demands that go beyond epistemic rationality. Finally, “The Craft and the Community” discusses rationality groups and group rationality, raising the questions:

  Can rationality be learned and taught?

  If so, how much improvement is possible?

  How can we be confident we’re seeing a real effect in a rationality intervention, and picking out the right cause?

  What community norms would make this process of bettering ourselves easier?

  Can we effectively collaborate on large-scale problems without sacrificing our freedom of thought and conduct?

  Above all: What’s missing? What should be in the next generation of rationality primers—the ones that replace this text, improve on its style, test its prescriptions, supplement its content, and branch out in altogether new directions?

  Though Yudkowsky was moved to write these essays by his own philosophical mistakes and professional difficulties in AI theory, the resultant material has proven useful to a much wider audience. The original blog posts inspired the growth of Less Wrong, a community of intellectuals and life hackers with shared interests in cognitive science, computer science, and philosophy. Yudkowsky and other writers on Less Wrong have helped seed the effective altruism movement, a vibrant and audacious effort to identify the highest-impact humanitarian charities and causes. These writings also sparked the establishment of the Center for Applied Rationality, a nonprofit organization that attempts to translate results from the science of rationality into usable techniques for self-improvement.

  I don’t know what’s next—what other unconventional projects or ideas might draw inspiration from these pages. We certainly face no shortage of global challenges, and the art of applied rationality is a new and half-formed thing. There are not many rationalists, and there are many things left undone.

  But wherever you’re headed next, reader—may you serve your purpose well.

  *

  1. Jonathan Baron, Thinking and Deciding (Cambridge University Press, 2007).

  2. Keith J. Holyoak and Robert G. Morrison, The Oxford Handbook of Thinking and Reasoning (Oxford University Press, 2013).

  3. Bourget and Chalmers, “What Do Philosophers Believe?”

  4. Holt, “Thinking Inside the Boxes.”

  5. Gary L. Drescher, Good and Real: Demystifying Paradoxes from Physics to Ethics (Cambridge, MA: MIT Press, 2006).

  6. William Talbott, “Bayesian Epistemology,” in The Stanford Encyclopedia of Philosophy, Fall 2013, ed. Edward N. Zalta.

  7. Jaynes, Probability Theory.

  8. Marcus Hutter, Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability (Berlin: Springer, 2005), doi:10.1007/b138233.

  9. Richard Feldman, “Naturalized Epistemology,” in The Stanford Encyclopedia of Philosophy, Summer 2012, ed. Edward N. Zalta.

  10. John K. Kruschke, “What to Believe: Bayesian Methods for Data Analysis,” Trends in Cognitive Sciences 14, no. 7 (2010): 293–300.

  11. John K. Kruschke, Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan (Academic Press, 2014).

  12. Geoffrey T. Fong, David H. Krantz, and Richard E. Nisbett, “The Effects of Statistical Training on Thinking about Everyday Problems,” Cognitive Psychology 18, no. 3 (1986): 253–292, doi:10.1016/0010-0285(86)90001-0.

  13. Paul J. H. Schoemaker, “The Role of Statistical Knowledge in Gambling Decisions: Moment vs. Risk Dimension Approaches,” Organizational Behavior and Human Performance 24, no. 1 (1979): 1–17.

  Part X

  Yudkowsky’s Coming of Age

  292

  My Childhood Death Spiral

  My parents always used to downplay the value of intelligence. And play up the value of—effort, as recommended by the latest research? No, not effort. Experience. A nicely unattainable hammer with which to smack down a bright young child, to be sure. That was what my parents told me when I questioned the Jewish religion, for example. I tried laying out an argument, and I was told something along the lines of: “Logic has limits; you’ll understand when you’re older that experience is the important thing, and then you’ll see the truth of Judaism.” I didn’t try again. I made one attempt to question Judaism in school, got slapped down, didn’t try again. I’ve never been a slow learner.

  Whenever my parents were doing something ill-advised, it was always, “We know better because we have more experience. You’ll understand when you’re older: maturity and wisdom are more important than intelligence.”

  If this was an attempt to focus the young Eliezer on intelligence über alles, it was the most wildly successful example of reverse psychology I’ve ever heard of.

  But my parents aren’t that cunning, and the results weren’t exactly positive.

  For a long time, I thought that the moral of this story was that experience was no match for sheer raw native intelligence. It wasn’t until a lot later, in my twenties, that I looked back and realized that I couldn’t possibly have been more intelligent than my parents before puberty, with my brain not even fully developed. At age eleven, when I was already nearly a full-blown atheist, I could not have defeated my parents in any fair contest of mind. My SAT scores were high for an 11-year-old, but they wouldn’t have beaten my parents’ SAT scores in full adulthood. In a fair fight, my parents’ intelligence and experience could have stomped any prepubescent child flat. It was dysrationalia that did them in; they used their intelligence only to defeat itself.

  But that understanding came much later, when my intelligence had processed and distilled many more years of experience.

  The moral I derived when I was young was that anyone who downplayed the value of intelligence didn’t understand intelligence at all. My own intelligence had affected every aspect of my life and mind and personality; that was massively obvious, seen at a backward glance. “Intelligence has nothing to do with wisdom or being a good person”—oh, and does self-awareness have nothing to do with wisdom, or being a good person? Modeling yourself takes intelligence. For one thing, it takes enough intelligence to learn evolutionary psychology.

  We are the cards we are dealt, and intelligence is the unfairest of all those cards. More unfair than wealth or health or home country, unfairer than your happiness set-point. People have difficulty accepting that life can be that unfair; it’s not a happy thought. “Intelligence isn’t as important as X” is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead. It’s a temptation, both to those dealt poor cards, and to those dealt good ones. Just as downplaying the importance of money is a temptation both to the poor and to the rich.

  But the young Eliezer was a transhumanist. Giving away IQ points was going to take more work than if I’d just been born with extra money. But it was a fixable problem, to be faced up to squarely, and fixed. Even if it took my whole life. “The strong exist to serve the weak,” wrote the young Eliezer, “and can only discharge that duty by making others equally strong.” I was annoyed with the Randian and Nietzschean trends in science fiction, and as you may have grasped, the young Eliezer had a tendency to take things too far in the other direction. No one exists only to serve. But I tried, and I don’t regret that. If you call that teenage folly, it’s rare to see adult wisdom doing better.

  Everyone needed more intelligence. Including me, I was careful to pronounce. Far be it from me to declare a new world order with myself on top—that was what a stereotyped science fiction villain would do, or worse, a typical teenager, and I would never have allowed myself to be so clichéd. No, everyone needed to be smarter. We were all in the same boat: a fine, uplifting thought.

  Eliezer1995 had read his science fiction. He had morals, and ethics, and could see the more obvious traps. No screeds on Homo novis for him. No line drawn between himself and others. No elaborate philosophy to put himself at the top of the heap. It was too obvious a failure mode. Yes, he was very careful to call himself stupid too, and never claim moral superiority. Well, and I don’t see it so differently now, though I no longer make such a dramatic production out of my ethics. (Or maybe it would be more accurate to say that I’m tougher about when I allow myself a moment of self-congratulation.)

  I say all this to emphasize that Eliezer1995 wasn’t so undignified as to fail in any obvious way.

  And then Eliezer1996 encountered the concept of intelligence explosion. Was it a thunderbolt of revelation? Did I jump out of my chair and shout “Eurisko!”? Nah. I wasn’t that much of a drama queen. It was just massively obvious in retrospect that smarter-than-human intelligence was going to change the future more fundamentally than any mere material science. And I knew at once that this was what I would be doing with the rest of my life, creating the intelligence explosion. Not nanotechnology like I’d thought when I was eleven years old; nanotech would only be a tool brought forth of intelligence. Why, intelligence was even more powerful, an even greater blessing, than I’d realized before.

  Was this a happy death spiral? As it turned out later, yes: that is, it led to the adoption even of false happy beliefs about intelligence. Perhaps you could draw the line at the point where I started believing that surely the lightspeed limit would be no barrier to superintelligence.

  (How my views on intelligence have changed since then . . . let’s see: When I think of poor hands dealt to humans, these days, I think first of death and old age. Everyone’s got to have some intelligence level or other, and the important thing from a fun-theoretic perspective is that it ought to increase over time, not decrease like now. Isn’t that a clever way of feeling better? But I don’t work so hard now at downplaying my own intelligence, because that’s just another way of calling attention to it. I’m smart for a human, if the topic should arise, and how I feel about that is my own business.

  The part about intelligence being the lever that lifts worlds is the same. Except that intelligence has become less mysterious unto me, so that I now more clearly see intelligence as something embedded within physics. Superintelligences may go FTL if it happens to be permitted by the true physical laws, and if not, then not. It’s not unthinkable, but I wouldn’t bet on it.)

  But the real wrong turn came later, at the point where someone said, “Hey, how do you know that superintelligence will be moral? Intelligence has nothing to do with being a good person, you know—that’s what we call wisdom, young prodigy.”

  And lo, it seemed obvious to the young Eliezer that this was mere denial. Certainly, his own painstakingly constructed code of ethics had been put together using his intelligence and resting on his intelligence as a base. Any fool could see that intelligence had a great deal to do with ethics, morality, and wisdom; just try explaining the Prisoner’s Dilemma to a chimpanzee, right?

  Surely, then, superintelligence would necessarily imply supermorality.

  Thus is it said: “Parents do all the things they tell their children not to do, which is how they know not to do them.”

  *

  293

  My Best and Worst Mistake

  Last chapter I covered the young Eliezer’s affective death spiral around something that he called “intelligence.” Eliezer1996, or even Eliezer1999 for that matter, would have refused to try to put a mathematical definition on it—consciously, deliberately refused. Indeed, he would have been loath to put any definition on “intelligence” at all.

  Why? Because there’s a standard bait-and-switch problem in AI, wherein you define “intelligence” to mean something like “logical reasoning” or “the ability to withdraw conclusions when they are no longer appropriate,” and then you build a cheap theorem-prover or an ad-hoc nonmonotonic reasoner, and then say, “Lo, I have implemented intelligence!” People came up with poor definitions of intelligence—focusing on correlates rather than cores—and then they chased the surface definition they had written down, forgetting about, you know, actual intelligence. It’s not like Eliezer1996 was out to build a career in Artificial Intelligence. He just wanted a mind that would actually be able to build nanotechnology. So he wasn’t tempted to redefine intelligence for the sake of puffing up a paper.

  Looking back, it seems to me that quite a lot of my mistakes can be defined in terms of being pushed too far in the other direction by seeing someone else’s stupidity. Having seen attempts to define “intelligence” abused so often, I refused to define it at all. What if I said that intelligence was X, and it wasn’t really X? I knew in an intuitive sense what I was looking for—something powerful enough to take stars apart for raw material—and I didn’t want to fall into the trap of being distracted from that by definitions.

  Similarly, having seen so many AI projects brought down by physics envy—trying to stick with simple and elegant math, and being constrained to toy systems as a result—I generalized that any math simple enough to be formalized in a neat equation was probably not going to work for, you know, real intelligence. “Except for Bayes’s Theorem,” Eliezer2000 added; which, depending on your viewpoint, either mitigates the totality of his offense, or shows that he should have suspected the entire generalization instead of trying to add a single exception.

  If you’re wondering why Eliezer2000 thought such a thing—disbelieved in a math of intelligence—well, it’s hard for me to remember this far back. It certainly wasn’t that I ever disliked math. If I had to point out a root cause, it would be reading too few, too popular, and the wrong Artificial Intelligence books.

  But then I didn’t think the answers were going to come from Artificial Intelligence; I had mostly written it off as a sick, dead field. So it’s no wonder that I spent too little time investigating it. I believed in the cliché about Artificial Intelligence overpromising. You can fit that into the pattern of “too far in the opposite direction”—the field hadn’t delivered on its promises, so I was ready to write it off. As a result, I didn’t investigate hard enough to find the math that wasn’t fake.

  My youthful disbelief in a mathematics of general intelligence was simultaneously one of my all-time worst mistakes, and one of my all-time best mistakes.

  Because I disbelieved that there could be any simple answers to intelligence, I went and I read up on cognitive psychology, functional neuroanatomy, computational neuroanatomy, evolutionary psychology, evolutionary biology, and more than one branch of Artificial Intelligence. When I had what seemed like simple bright ideas, I didn’t stop there, or rush off to try and implement them, because I knew that even if they were true, even if they were necessary, they wouldn’t be sufficient: intelligence wasn’t supposed to be simple, it wasn’t supposed to have an answer that fit on a T-shirt. It was supposed to be a big puzzle with lots of pieces; and when you found one piece, you didn’t run off holding it high in triumph, you kept on looking. Try to build a mind with a single missing piece, and it might be that nothing interesting would happen.

  I was wrong in thinking that Artificial Intelligence, the academic field, was a desolate wasteland; and even wronger in thinking that there couldn’t be math of intelligence. But I don’t regret studying e.g. functional neuroanatomy, even though I now think that an Artificial Intelligence should look nothing like a human brain. Studying neuroanatomy meant that I went in with the idea that if you broke up a mind into pieces, the pieces were things like “visual cortex” and “cerebellum”—rather than “stock-market trading module” or “commonsense reasoning module,” which is a standard wrong road in AI.

  Studying fields like functional neuroanatomy and cognitive psychology gave me a very different idea of what minds had to look like than you would get from just reading AI books—even good AI books.

  When you blank out all the wrong conclusions and wrong justifications, and just ask what that belief led the young Eliezer to actually do . . .

  Then the belief that Artificial Intelligence was sick and that the real answer would have to come from healthier fields outside led him to study lots of cognitive sciences;

  The belief that AI couldn’t have simple answers led him to not stop prematurely on one brilliant idea, and to accumulate lots of information;

 
