Rationality: From AI to Zombies


by Eliezer Yudkowsky


  This application of the Anti-Zombie Principle is weaker. But it’s also much more general. And, in terms of sheer common sense, correct.

  The reductionist and the substance dualist actually have two different versions of the above statement. The reductionist furthermore says, “Whatever makes me talk about consciousness, it seems likely that the important parts take place on a much higher functional level than atomic nuclei. Someone who understood consciousness could abstract away from individual neurons firing, and talk about high-level cognitive architectures, and still describe how my mind produces thoughts like ‘I think therefore I am.’ So nudging things around by the diameter of a nucleon shouldn’t affect my consciousness (except maybe with very small probability, or by a very tiny amount, or not until after a significant delay).”

  The substance dualist furthermore says, “Whatever makes me talk about consciousness, it’s got to be something beyond the computational physics we know, which means that it might very well involve quantum effects. But still, my consciousness doesn’t flicker on and off whenever someone sneezes a kilometer away. If it did, I would notice. It would be like skipping a few seconds, or coming out of a general anesthetic, or sometimes saying, ‘I don’t think therefore I’m not.’ So since it’s a physical fact that thermal vibrations don’t disturb the stuff of my awareness, I don’t expect flipping the switch to disturb it either.”

  Either way, you shouldn’t expect your sense of awareness to vanish when someone says the word “Abracadabra,” even if that does have some infinitesimal physical effect on your brain—

  But hold on! If you hear someone say the word “Abracadabra,” that has a very noticeable effect on your brain—so large, even your brain can notice it. It may alter your internal narrative; you may think, “Why did that person just say ‘Abracadabra’?”

  Well, but still you expect to go on talking about consciousness in almost exactly the same way afterward, for almost exactly the same reasons.

  And again, it’s not that “consciousness” is being equated to “that which makes you talk about consciousness.” It’s just that consciousness, among other things, makes you talk about consciousness. So anything that makes your consciousness go out like a light should make you stop talking about consciousness.

  If we do something to you, where you don’t see how it could possibly change your internal narrative—the little voice in your head that sometimes says things like “I think therefore I am,” whose words you can choose to say aloud—then it shouldn’t make you cease to be conscious.

  And this is true even if the internal narrative is just “pretty much the same,” and the causes of it are also pretty much the same; among the causes that are pretty much the same is whatever you mean by “consciousness.”

  If you’re wondering where all this is going, and why it’s important to go to such tremendous lengths to ponder such an obvious-seeming Generalized Anti-Zombie Principle, then consider the following debate:

  ALBERT: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”

  BERNICE: “That’s killing me! There wouldn’t be a conscious being there anymore.”

  CHARLES: “Well, there’d still be a conscious being there, but it wouldn’t be me.”

  SIR ROGER PENROSE: “The thought experiment you propose is impossible. You can’t duplicate the behavior of neurons without tapping into quantum gravity. That said, there’s not much point in me taking further part in this conversation.” (Wanders away.)

  ALBERT: “Suppose that the replacement is carried out one neuron at a time, and the swap occurs so fast that it doesn’t make any difference to global processing.”

  BERNICE: “How could that possibly be the case?”

  ALBERT: “The little robot swims up to the neuron, surrounds it, scans it, learns to duplicate it, and then suddenly takes over the behavior, between one spike and the next. In fact, the imitation is so good that your outward behavior is just the same as it would be if the brain were left undisturbed. Maybe not exactly the same, but the causal impact is much less than thermal noise at 310 Kelvin.”

  CHARLES: “So what?”

  ALBERT: “So don’t your beliefs violate the Generalized Anti-Zombie Principle? Whatever just happened, it didn’t change your internal narrative! You’ll go around talking about consciousness for exactly the same reason as before.”

  BERNICE: “Those little robots are a Zombie Master. They’ll make me talk about consciousness even though I’m not conscious. The Zombie World is possible if you allow there to be an added, extra, experimentally detectable Zombie Master—which those robots are.”

  CHARLES: “Oh, that’s not right, Bernice. The little robots aren’t plotting how to fake consciousness, or processing a corpus of text from human amateurs. They’re doing the same thing neurons do, just in silicon instead of carbon.”

  ALBERT: “Wait, didn’t you just agree with me?”

  CHARLES: “I never said the new person wouldn’t be conscious. I said it wouldn’t be me.”

  ALBERT: “Well, obviously the Anti-Zombie Principle generalizes to say that this operation hasn’t disturbed the true cause of your talking about this me thing.”

  CHARLES: “Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn’t mean it’s the same cause that was originally there.”

  ALBERT: “But I wouldn’t even have to tell you about the robot operation. You wouldn’t notice. If you think, going on introspective evidence, that you are in an important sense ‘the same person’ that you were five minutes ago, and I do something to you that doesn’t change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn’t the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?”

  BERNICE: “Not if you replace me with a Zombie Master. Then there’s no one there to notice.”

  CHARLES: “Introspection isn’t perfect. Lots of stuff goes on inside my brain that I don’t notice.”

  ALBERT: “You’re postulating epiphenomenal facts about consciousness and identity!”

  BERNICE: “No I’m not! I can experimentally detect the difference between neurons and robots.”

  CHARLES: “No I’m not! I can experimentally detect the moment when the old me is replaced by a new person.”

  ALBERT: “Yeah, and I can detect the switch flipping! You’re detecting something that doesn’t make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you’ll talk just the same way afterward.”

  BERNICE: “That’s because of your robotic Zombie Master!”

  CHARLES: “Just because two people talk about ‘personal identity’ for similar reasons doesn’t make them the same person.”

  I think the Generalized Anti-Zombie Principle supports Albert’s position, but the reasons shall have to wait for future essays. I need other prerequisites, and besides, this essay is already too long.

  But you see the importance of the question, “How far can you generalize the Anti-Zombie Argument and have it still be valid?”

  The makeup of future galactic civilizations may be determined by the answer . . .

  *

  1. René Descartes, Discours de la Méthode, vol. 45 (Librairie des Bibliophiles, 1887).


  GAZP vs. GLUT

  In “The Unimagined Preposterousness of Zombies,” Daniel Dennett says:1

  To date, several philosophers have told me that they plan to accept my challenge to offer a non-question-begging defense of zombies, but the only one I have seen so far involves postulating a “logically possible” but fantastic being—a descendent of Ned Block’s Giant Lookup Table fantasy . . .

  A Giant Lookup Table, in programmer’s parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you’re going to reuse the function a lot and it doesn’t have many possible inputs; or when clock cycles are cheap while you’re initializing, but very expensive while executing.
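
  For concreteness, here is a minimal sketch of the two approaches just described, in Python; the 1-to-100 multiplication and the 10,000-entry table come from the paragraph above, while the function names are invented for illustration.

    # Compute the product directly every time the function is called ...
    def multiply(a, b):
        return a * b

    # ... or precompute a Giant Lookup Table with 10,000 entries and two indices.
    GLUT = [[a * b for b in range(1, 101)] for a in range(1, 101)]

    def multiply_glut(a, b):
        return GLUT[a - 1][b - 1]   # pure retrieval, no arithmetic at call time

    assert multiply(37, 64) == multiply_glut(37, 64) == 2368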

  Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 × 10^585 entries.
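
  The arithmetic behind that figure is easy to check: twenty remarks of ten words each, every word drawn from an 850-word vocabulary, gives 850^(10 × 20) possible conversations. A quick sketch of the check in Python:

    # 20 remarks x 10 words per remark, each word one of 850 Basic English words.
    entries = 850 ** (10 * 20)                    # exact integer arithmetic

    digits = len(str(entries))                    # 586 digits: on the order of 10^585
    mantissa = int(str(entries)[:2]) / 10         # leading digits, about 7.6
    print(f"roughly {mantissa} x 10^{digits - 1}")    # roughly 7.6 x 10^585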

  Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But “in principle,” as philosophers are fond of saying, it could be done.

  The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human. (In fact, a GLUT can’t really run on the same physics as a human; it’s too large to fit in our universe. For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)

  But is the GLUT a zombie at all? That is, does it behave exactly like a human without being conscious?

  The GLUT-ed body’s tongue talks about consciousness. Its fingers write philosophy papers. In every way, so long as you don’t peer inside the skull, the GLUT seems just like a human . . . which certainly seems like a valid example of a zombie: it behaves just like a human, but there’s no one home.

  Unless the GLUT is conscious, in which case it wouldn’t be a valid example.

  I can’t recall ever seeing anyone claim that a GLUT is conscious. (Admittedly my reading in this area is not up to professional grade; feel free to correct me.) Even people who are accused of being (gasp!) functionalists don’t claim that GLUTs can be conscious.

  GLUTs are the reductio ad absurdum to anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.

  So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?

  At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.

  In the interior of the GLUT, there’s merely a very simple computer program that looks up inputs and retrieves outputs. Even talking about a “simple computer program” is overshooting the mark, in a case like this. A GLUT is more like ROM than a CPU. We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that’s all the GLUT does.
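
  A minimal sketch, in Python, of everything that interior amounts to; the two table entries are invented placeholders, since a real GLUT would need an entry for every possible input history:

    # The GLUT's entire "program": look up the conversation so far, return the
    # stored reply. No beliefs about beliefs, no self-model -- pure retrieval.
    # (These two entries are placeholders; a real GLUT would need one entry for
    # every possible input history.)
    GLUT = {
        ("Are you conscious?",): "Of course I am; I can introspect on it.",
        ("Are you conscious?", "Prove it."): "I think therefore I am.",
    }

    def glut_reply(history):
        return GLUT[history]

    print(glut_reply(("Are you conscious?", "Prove it.")))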

  A spokesperson from People for the Ethical Treatment of Zombies replies: “Oh, that’s what all the anti-mechanists say, isn’t it? That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels? If ion channels can be conscious, why not levers and balls rolling into bins?”

  “The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling . . . Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”

  “I don’t know about that,” says the PETZ spokesperson, “all I know is that this so-called zombie writes philosophical papers about consciousness. Where do these philosophy papers come from, if not from consciousness?”

  Good question! Let us ponder it deeply.

  There’s a game in physics called Follow-The-Energy. Richard Feynman’s father played it with young Richard:

  It was the kind of thing my father would have talked about: “What makes it go? Everything goes because the Sun is shining.” And then we would have fun discussing it:

  “No, the toy goes because the spring is wound up,” I would say. “How did the spring get wound up?” he would ask.

  “I wound it up.”

  “And how did you get moving?”

  “From eating.”

  “And food grows only because the Sun is shining. So it’s because the Sun is shining that all these things are moving.” That would get the concept across that motion is simply the transformation of the Sun’s power.2

  When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn’t make much sense. You can never change the total amount of energy, so in what sense are you using it?

  So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.

  Rationalists learn a game called Follow-The-Improbability, the grownup version of “How Do You Know?” The rule of the rationalist’s game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it. (This game has amazingly similar rules to Follow-The-Negentropy.)
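
  To make that bookkeeping concrete, here is a toy sketch in Python; the prior odds and likelihood ratio are invented numbers, chosen only to show improbability and evidence being measured in the same currency of bits:

    from math import log2

    # Follow-The-Improbability as bookkeeping: the bits of improbability in the
    # prior must be paid for by an equal number of bits of evidence.
    prior_odds = 1 / 1_000_000           # the belief starts out ~20 bits improbable
    likelihood_ratio = 1_000_000         # evidence a million times likelier if true

    posterior_odds = prior_odds * likelihood_ratio      # Bayes' rule, odds form

    print(f"improbability to pay for: {-log2(prior_odds):.1f} bits")      # ~19.9
    print(f"evidence supplied:        {log2(likelihood_ratio):.1f} bits") # ~19.9
    print(f"posterior odds:           {posterior_odds:g} : 1")            # about 1 : 1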

  Whenever someone violates the rules of the rationalist’s game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.

  The one comes to you and says: “I believe with firm and abiding faith that there’s an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can’t prove that this is impossible.” But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously. So either the one can point to evidence, or the belief won’t turn out to be true. “But you can’t prove it’s impossible for my mind to spontaneously generate a belief that happens to be correct!” No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.

  In Follow-The-Improbability, it’s highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses. Why aren’t you giving equal air time to a decillion other equally plausible hypotheses? You need sufficient evidence to find the “chocolate cake in the asteroid belt” hypothesis in the hypothesis space—otherwise there’s no reason to give it more air time than a trillion other candidates like “There’s a wooden dresser in the asteroid belt” or “The Flying Spaghetti Monster threw up on my sneakers.”

  In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it’s not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.

  A philosopher says, “This zombie’s skull contains a Giant Lookup Table of all the inputs and outputs for some human’s brain.” This is a very large improbability. So you ask, “How did this improbable event occur? Where did the GLUT come from?”

  Now this is not standard philosophical procedure for thought experiments. In standard philosophical procedure, you are allowed to postulate things like “Suppose you were riding a beam of light . . .” without worrying about physical possibility, let alone mere improbability. But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”

  The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs. But damn the ethics, this is for philosophy.)

  In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no zombie, any more than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

  “All right,” says the philosopher, “the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human.”

  How, exactly, did you randomly generate the GLUT?

  “We used a true randomness source—a quantum device.”

  But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0. Do it 4 times, create 16 (sets of) universes.

  So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification. Where did the improbability come from?
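
  As a toy sketch of that bookkeeping in Python (the table size below is an invented and absurdly generous underestimate):

    from math import log10

    # Suppose, absurdly generously, that a brain-matching GLUT could be written
    # down in only 10^20 bits (a made-up figure; any realistic one is far larger).
    # Each bit doubles the bin of possible tables, so a blind reach into the bin
    # finds the matching one with probability 1 in 2^(10^20).
    glut_bits = 10 ** 20

    improbability_bits = glut_bits                  # -log2(probability of a match)
    decimal_exponent = glut_bits * log10(2)         # same figure as a power of ten

    print(f"{improbability_bits:.0e} bits of improbability, "
          f"about 1 in 10^({decimal_exponent:.1e}), with no evidence to pay for it")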

 
