
Rationality: From AI to Zombies


by Eliezer Yudkowsky


  Solomonoff induction, taken literally, would create countably infinitely many sentient beings, trapped inside the computations. All possible computable sentient beings, in fact. Which scarcely seems ethical. So let us be glad this is only a formalism.

  But my point is that the “theoretical limit on how much information you can extract from sensory data” is far above what I have depicted as the triumph of a civilization of physicists and cryptographers.

  It certainly is not anything like a human looking at an apple falling down, and thinking, “Dur, I wonder why that happened?”

  People seem to make a leap from “This is ‘bounded’” to “The bound must be a reasonable-looking quantity on the scale I’m used to.” The power output of a supernova is “bounded,” but I wouldn’t advise trying to shield yourself from one with a flame-retardant Nomex jumpsuit.

  No one—not even a Bayesian superintelligence—will ever come remotely close to making efficient use of their sensory information . . .

  . . . is what I would like to say, but I don’t trust my ability to set limits on the abilities of Bayesian superintelligences.

  (Though I’d bet money on it, if there were some way to judge the bet. Just not at very extreme odds.)

  The story continues:

  Millennia later, frame after frame, it has become clear that some of the objects in the depiction are extending tentacles to move around other objects, and carefully configuring other tentacles to make particular signs. They’re trying to teach us to say “rock.”

  It seems the senders of the message have vastly underestimated our intelligence. From which we might guess that the aliens themselves are not all that bright. And these awkward children can shift the luminosity of our stars? That much power and that much stupidity seems like a dangerous combination.

  Our evolutionary psychologists begin extrapolating possible courses of evolution that could produce such aliens. A strong case is made for them having evolved asexually, with occasional exchanges of genetic material and brain content; this seems like the most plausible route whereby creatures that stupid could still manage to build a technological civilization. Their Einsteins may be our undergrads, but they could still collect enough scientific data to get the job done eventually, in tens of their millennia perhaps.

  The inferred physics of the 3+2 universe is not fully known, at this point; but it seems sure to allow for computers far more powerful than our quantum ones. We are reasonably certain that our own universe is running as a simulation on such a computer. Humanity decides not to probe for bugs in the simulation; we wouldn’t want to shut ourselves down accidentally.

  Our evolutionary psychologists begin to guess at the aliens’ psychology, and plan out how we could persuade them to let us out of the box. It’s not difficult in an absolute sense—they aren’t very bright—but we’ve got to be very careful . . .

  We’ve got to pretend to be stupid, too; we don’t want them to catch on to their mistake.

  It’s not until a million years later, though, that they get around to telling us how to signal back.

  At this point, most of the human species is in cryonic suspension, at liquid helium temperatures, beneath radiation shielding. Every time we try to build an AI, or a nanotechnological device, it melts down. So humanity waits, and sleeps. Earth is run by a skeleton crew of nine supergeniuses. Clones, known to work well together, under the supervision of certain computer safeguards.

  An additional hundred million human beings are born into that skeleton crew, and age, and enter cryonic suspension, before they get a chance to slowly begin to implement plans made eons ago . . .

  From the aliens’ perspective, it took us thirty of their minute-equivalents to oh-so-innocently learn about their psychology, oh-so-carefully persuade them to give us Internet access, followed by five minutes to innocently discover their network protocols, then some trivial cracking whose only difficulty was an innocent-looking disguise. We read a tiny handful of physics papers (bit by slow bit) from their equivalent of arXiv, learning far more from their experiments than they had. (Earth’s skeleton team spawned an extra twenty Einsteins that generation.)

  Then we cracked their equivalent of the protein folding problem over a century or so, and did some simulated engineering in their simulated physics. We sent messages (steganographically encoded until our cracked servers decoded them) to labs that did their equivalent of DNA sequencing and protein synthesis. We found some unsuspecting schmuck, and gave it a plausible story and the equivalent of a million dollars of cracked computational monopoly money, and told it to mix together some vials it got in the mail. Protein-equivalents that self-assembled into the first-stage nanomachines, that built the second-stage nanomachines, that built the third-stage nanomachines . . . and then we could finally begin to do things at a reasonable speed.

  Three of their days, all told, since they began speaking to us. Half a billion years, for us.
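The story's numbers imply a staggering ratio of subjective speeds. A quick back-of-the-envelope check (illustrative only; the figures are the story's, not a real calculation):

```python
# Subjective-speed ratio implied by the story:
# half a billion years pass for the simulated humans
# while three days pass for the aliens.
human_years = 0.5e9
alien_days = 3
ratio = (human_years * 365.25) / alien_days  # human days elapsed per alien day
print(f"{ratio:.2e}")  # roughly 6e10 subjective days per alien day
```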

  They never suspected a thing. They weren’t very smart, you see, even before taking into account their slower rate of time. Their primitive equivalents of rationalists went around saying things like, “There’s a bound to how much information you can extract from sensory data.” And they never quite realized what it meant, that we were smarter than them, and thought faster.

  *

  254

  My Childhood Role Model

  When I lecture on the intelligence explosion, I often draw a graph of the “scale of intelligence” as it appears in everyday life:

  But this is a rather parochial view of intelligence. Sure, in everyday life, we only deal socially with other humans—only other humans are partners in the great game—and so we only meet the minds of intelligences ranging from village idiot to Einstein. But to talk about Artificial Intelligence or the theoretical optima of rationality, what we really need is this intelligence scale:

  For us humans, it seems that the scale of intelligence runs from “village idiot” at the bottom to “Einstein” at the top. Yet the distance from “village idiot” to “Einstein” is tiny, in the space of brain designs. Einstein and the village idiot both have a prefrontal cortex, a hippocampus, a cerebellum . . .

  Maybe Einstein has some minor genetic differences from the village idiot, engine tweaks. But the brain-design-distance between Einstein and the village idiot is nothing remotely like the brain-design-distance between the village idiot and a chimpanzee. A chimp couldn’t tell the difference between Einstein and the village idiot, and our descendants may not see much of a difference either.

  Carl Shulman has observed that some academics who talk about transhumanism seem to use the following scale of intelligence:

  Douglas Hofstadter actually said something like this, at the 2006 Singularity Summit. He looked at my diagram showing the “village idiot” next to “Einstein,” and said, “That seems wrong to me; I think Einstein should be way off on the right.”

  I was speechless. Especially because this was Douglas Hofstadter, one of my childhood heroes. It revealed a cultural gap that I had never imagined existed.

  See, for me, what you would find toward the right side of the scale was a Jupiter Brain. Einstein did not literally have a brain the size of a planet.

  On the right side of the scale, you would find Deep Thought—Douglas Adams’s original version, thank you, not the chess player. The computer so intelligent that even before its stupendous data banks were connected, when it was switched on for the first time, it started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to shut it off.

  Toward the right side of the scale, you would find the Elders of Arisia, galactic overminds, Matrioshka brains, and the better class of God. At the extreme right end of the scale, Old One and the Blight.

  Not frickin’ Einstein.

  I’m sure Einstein was very smart for a human. I’m sure a General Systems Vehicle would think that was very cute of him.

  I call this a “cultural gap” because I was introduced to the concept of a Jupiter Brain at the age of twelve.

  Now all of this, of course, is the logical fallacy of generalization from fictional evidence.

  But it is an example of why—logical fallacy or not—I suspect that reading science fiction does have a helpful effect on futurism. Sometimes the alternative to a fictional acquaintance with worlds outside your own is to have a mindset that is absolutely stuck in one era: A world where humans exist, and have always existed, and always will exist.

  The universe is 13.7 billion years old, people! Homo sapiens sapiens have only been around for a hundred thousand years or thereabouts!

  Then again, I have met some people who never read science fiction, but who do seem able to imagine outside their own world. And there are science fiction fans who don’t get it. I wish I knew what “it” was, so I could bottle it.

  In the previous essay, I wanted to talk about the efficient use of evidence, i.e., Einstein was cute for a human but in an absolute sense he was around as efficient as the US Department of Defense.

  So I had to talk about a civilization that included thousands of Einsteins, thinking for decades. Because if I’d just depicted a Bayesian superintelligence in a box, looking at a webcam, people would think: “But . . . how does it know how to interpret a 2D picture?” They wouldn’t put themselves in the shoes of the mere machine, even if it was called a “Bayesian superintelligence”; they wouldn’t apply even their own creativity to the problem of what you could extract from looking at a grid of bits.

  It would just be a ghost in a box, that happened to be called a “Bayesian superintelligence.” The ghost hasn’t been told anything about how to interpret the input of a webcam; so, in their mental model, the ghost does not know.

  As for whether it’s realistic to suppose that one Bayesian superintelligence can “do all that” . . . i.e., the stuff that occurred to me on first sitting down to the problem, writing out the story as I went along . . .

  Well, let me put it this way: Remember how Jeffreyssai pointed out that if the experience of having an important insight doesn’t take more than 5 minutes, this theoretically gives you time for 5,760 insights per month? Assuming you sleep 8 hours a day and have no important insights while sleeping, that is.
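The arithmetic behind Jeffreyssai's figure checks out, assuming 5 minutes per insight, 16 waking hours a day, and a 30-day month:

```python
# Back-of-the-envelope check of the 5,760-insights-per-month figure.
waking_hours_per_day = 24 - 8                 # assuming 8 hours of sleep
minutes_per_day = waking_hours_per_day * 60   # 960 waking minutes
insights_per_day = minutes_per_day // 5       # 192 five-minute insights
insights_per_month = insights_per_day * 30    # over a 30-day month
print(insights_per_month)  # → 5760
```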

  Now humans cannot use themselves this efficiently. But humans are not adapted for the task of scientific research. Humans are adapted to chase deer across the savanna, throw spears into them, cook them, and then—this is probably the part that takes most of the brains—cleverly argue that they deserve to receive a larger share of the meat.

  It’s amazing that Albert Einstein managed to repurpose a brain like that for the task of doing physics. This deserves applause. It deserves more than applause, it deserves a place in the Guinness Book of Records. Like successfully building the fastest car ever to be made entirely out of Jello.

  How poorly did the blind idiot god (evolution) really design the human brain?

  This is something that can only be grasped through much study of cognitive science, until the full horror begins to dawn upon you.

  All the biases we have discussed here should at least be a hint.

  Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.

  No more than Einstein made efficient use of his sensory data, did his brain make efficient use of his neurons’ firing.

  Of course, I have certain ulterior motives in saying all this. But let it also be understood that, years ago, when I set out to be a rationalist, the impossible unattainable ideal of intelligence that inspired me was never Einstein.

  Carl Schurz said:

  Ideals are like stars. You will not succeed in touching them with your hands. But, like the seafaring man on the desert of waters, you choose them as your guides and following them you will reach your destiny.

  So now you’ve caught a glimpse of one of my great childhood role models—my dream of an AI. Only the dream, of course, the reality not being available. I reached up to that dream, once upon a time.

  And this helped me to some degree, and harmed me to some degree.

  For some ideals are like dreams: they come from within us, not from outside. Mentor of Arisia proceeded from E. E. “Doc” Smith’s imagination, not from any real thing. If you imagine what a Bayesian superintelligence would say, it is only your own mind talking. Not like a star, that you can follow from outside. You have to guess where your ideals are, and if you guess wrong, you go astray.

  But do not limit your ideals to mere stars, to mere humans who actually existed, especially if they were born more than fifty years before you and are dead. Each succeeding generation has a chance to do better. To let your ideals be composed only of humans, especially dead ones, is to limit yourself to what has already been accomplished. You will ask yourself, “Do I dare to do this thing, which Einstein could not do? Is this not lèse majesté?” Well, if Einstein had sat around asking himself, “Am I allowed to do better than Newton?” he would not have gotten where he did. This is the problem with following stars; at best, it gets you to the star.

  Your era supports you more than you realize, in unconscious assumptions, in subtly improved technology of mind. Einstein was a nice fellow, but he talked a deal of nonsense about an impersonal God, which shows you how well he understood the art of careful thinking at a higher level of abstraction than his own field. It may seem less like sacrilege to think that, if you have at least one imaginary galactic supermind to compare with Einstein, so that he is not the far right end of your intelligence scale.

  If you only try to do what seems humanly possible, you will ask too little of yourself. When you imagine reaching up to some higher and inconvenient goal, all the convenient reasons why it is “not possible” leap readily to mind.

  The most important role models are dreams: they come from within ourselves. To dream of anything less than what you conceive to be perfection is to draw on less than the full power of the part of yourself that dreams.

  *

  255

  Einstein’s Superpowers

  There is a widespread tendency to talk (and think) as if Einstein, Newton, and similar historical figures had superpowers—something magical, something sacred, something beyond the mundane. (Remember, there are many more ways to worship a thing than lighting candles around its altar.)

  Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour’s The End of Time cured me of it.1

  Barbour laid out the history of anti-epiphenomenal physics and Mach’s Principle; he described the historical controversies that predated Mach—all this that stood behind Einstein and was known to Einstein, when Einstein tackled his problem . . .

  And maybe I’m just imagining things—reading too much of myself into Barbour’s book—but I thought I heard Barbour very quietly shouting, coded between the polite lines:

  What Einstein did isn’t magic, people! If you all just looked at how he actually did it, instead of falling to your knees and worshiping him, maybe then you’d be able to do it too!

  (Barbour did not actually say this. It does not appear in the book text. It is not a Julian Barbour quote and should not be attributed to him. Thank you.)

  Maybe I’m mistaken, or extrapolating too far . . . but I kinda suspect that Barbour once tried to explain to people how you move further along Einstein’s direction to get timeless physics; and they sniffed scornfully and said, “Oh, you think you’re Einstein, do you?”

  John Baez’s Crackpot Index, item 18:

  10 points for each favorable comparison of yourself to Einstein, or claim that special or general relativity are fundamentally misguided (without good evidence).

  Item 30:

  30 points for suggesting that Einstein, in his later years, was groping his way towards the ideas you now advocate.

  Barbour never bothers to compare himself to Einstein, of course; nor does he ever appeal to Einstein in support of timeless physics. I mention these items on the Crackpot Index by way of showing how many people compare themselves to Einstein, and what society generally thinks of them.

  The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.

  But it is just the other side of the same coin, to think that Einstein is sacred, and the crackpot is not sacred, therefore they have committed blasphemy in comparing themselves to Einstein.

  Suppose a bright young physicist says, “I admire Einstein’s work, but personally, I hope to do better.” If someone is shocked and says, “What! You haven’t accomplished anything remotely like what Einstein did; what makes you think you’re smarter than him?” then they are the other side of the crackpot’s coin.

  The underlying problem is conflating social status and research potential.

  Einstein has extremely high social status: because of his record of accomplishments; because of how he did it; and because he’s the physicist whose name even the general public remembers, who brought honor to science itself.

  And we tend to mix up fame with other quantities, and we tend to attribute people’s behavior to dispositions rather than situations.

  So there’s this tendency to think that Einstein, even before he was famous, already had an inherent disposition to be Einstein—a potential as rare as his fame and as magical as his deeds. So that if you claim to have the potential to do what Einstein did, it is just the same as claiming Einstein’s rank, rising far above your assigned status in the tribe.

 
