The Singularity Is Near: When Humans Transcend Biology


by Ray Kurzweil


  “We know that brains cause consciousness with specific biological mechanisms.”35

  So who is being the reductionist here? Searle apparently expects that we can measure the subjectivity of another entity as readily as we measure the oxygen output of photosynthesis.

  Searle writes that I “frequently cite IBM’s Deep Blue as evidence of superior intelligence in the computer.” Of course, the opposite is the case: I cite Deep Blue not to belabor the issue of chess but rather to examine the clear contrast it illustrates between the human and contemporary machine approaches to the game. As I pointed out earlier, however, the pattern-recognition ability of chess programs is increasing, so chess machines are beginning to combine the analytical strength of traditional machine intelligence with more humanlike pattern recognition. The human paradigm (of self-organizing chaotic processes) offers profound advantages: we can recognize and respond to extremely subtle patterns. But we can build machines with the same abilities. That, indeed, has been my own area of technical interest.
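
  To make the contrast concrete, here is a minimal sketch of the two ingredients just mentioned: brute-force game-tree search (the traditional analytical strength of chess machines) steered by an evaluation function standing in for pattern recognition. This is my own illustration, not anything from Deep Blue; the toy “game” and evaluator are placeholders.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Plain game-tree search: look `depth` plies ahead and score the
    frontier positions with `evaluate`."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)  # heuristic judgment at the search frontier
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in options]
    return max(scores) if maximizing else min(scores)

# Toy usage: each move adds 1 or 2 to a counter, and the "evaluation"
# is just a stub that prefers larger counters.
print(minimax(0, depth=3, maximizing=True,
              moves=lambda s: [1, 2],
              apply_move=lambda s, m: s + m,
              evaluate=lambda s: s))  # prints 5
```

  In a real engine the evaluation function is where pattern knowledge lives; the better it judges positions, the less brute-force search is needed, which is the hybrid direction described above.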

  Searle is best known for his Chinese Room analogy and has presented various formulations of it over twenty years. One of the more complete descriptions of it appears in his 1992 book, The Rediscovery of the Mind:

  I believe the best-known argument against strong AI was my Chinese room argument . . . that showed that a system could instantiate a program so as to give a perfect simulation of some human cognitive capacity, such as the capacity to understand Chinese, even though that system had no understanding of Chinese whatever. Simply imagine that someone who understands no Chinese is locked in a room with a lot of Chinese symbols and a computer program for answering questions in Chinese. The input to the system consists in Chinese symbols in the form of questions; the output of the system consists in Chinese symbols in answer to the questions. We might suppose that the program is so good that the answers to the questions are indistinguishable from those of a native Chinese speaker. But all the same, neither the person inside nor any other part of the system literally understands Chinese; and because the programmed computer has nothing that this system does not have, the programmed computer, qua computer, does not understand Chinese either. Because the program is purely formal or syntactical and because minds have mental or semantic contents, any attempt to produce a mind purely with computer programs leaves out the essential features of the mind.36

  Searle’s descriptions illustrate a failure to evaluate the essence of either brain processes or the nonbiological processes that could replicate them. He starts with the assumption that the “man” in the room doesn’t understand anything because, after all, “he is just a computer,” thereby illuminating his own bias. Not surprisingly Searle then concludes that the computer (as implemented by the man) doesn’t understand. Searle combines this tautology with a basic contradiction: the computer doesn’t understand Chinese, yet (according to Searle) can convincingly answer questions in Chinese. But if an entity—biological or otherwise—really doesn’t understand human language, it will quickly be unmasked by a competent interlocutor. In addition, for the program to respond convincingly, it would have to be as complex as a human brain. The observers would be long dead while the man in the room spends millions of years following a program many millions of pages long.

  Most important, the man is acting only as the central processing unit, a small part of a system. While the man may not see it, the understanding is distributed across the entire pattern of the program itself and the billions of notes he would have to make to follow the program. I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and interneuronal connections. Searle fails to account for the significance of distributed patterns of information and their emergent properties.

  A failure to see that computing processes are capable of being—just like the human brain—chaotic, unpredictable, messy, tentative, and emergent is behind much of the criticism of the prospect of intelligent machines that we hear from Searle and other essentially materialist philosophers. Inevitably Searle comes back to a criticism of “symbolic” computing: that orderly sequential symbolic processes cannot re-create true thinking. I think that’s correct (depending, of course, on what level we are modeling an intelligent process), but the manipulation of symbols (in the sense that Searle implies) is not the only way to build machines, or computers.

  So-called computers (and part of the problem is the word “computer,” because machines can do more than “compute”) are not limited to symbolic processing. Nonbiological entities can also use the emergent self-organizing paradigm, which is a trend well under way and one that will become even more important over the next several decades. Computers do not have to use only 0 and 1, nor do they have to be all digital. Even if a computer is all digital, digital algorithms can simulate analog processes to any degree of precision (or lack of precision). Machines can be massively parallel. And machines can use chaotic emergent techniques just as the brain does.
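
  One of these claims is easy to demonstrate. The following minimal sketch (my example; the dynamics and parameters are arbitrary) simulates an “analog” process, the continuous decay dv/dt = -v/tau, with digital time steps, and shows the error shrinking as the step size does:

```python
import math

def leaky_integrator(v0, tau, t_end, dt):
    """Digitally simulate the analog dynamics dv/dt = -v / tau
    with forward-Euler steps of size dt."""
    v = v0
    for _ in range(int(round(t_end / dt))):
        v += (-v / tau) * dt  # one discrete step of a continuous process
    return v

exact = math.exp(-1.0 / 0.5)  # closed-form analog solution at t = 1
for dt in (0.1, 0.01, 0.001, 0.0001):
    approx = leaky_integrator(v0=1.0, tau=0.5, t_end=1.0, dt=dt)
    print(f"dt={dt:<7} digital={approx:.6f} error={abs(approx - exact):.2e}")
```

  Each tenfold reduction in the step size buys roughly a tenfold reduction in error, which is the sense in which a digital machine can match an analog one to any chosen precision.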

  The primary computing techniques that we have used in pattern-recognition systems do not use symbol manipulation but rather self-organizing methods such as those described in chapter 5 (neural nets, Markov models, genetic algorithms, and more complex paradigms based on brain reverse engineering). A machine that could really do what Searle describes in the Chinese Room argument would not merely be manipulating language symbols, because that approach doesn’t work. This is at the heart of the philosophical sleight of hand underlying the Chinese Room. The nature of computing is not limited to manipulating logical symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities.
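
  To give a flavor of what “self-organizing” means here, the sketch below (mine, and far simpler than the chapter 5 methods it gestures at) trains a one-layer perceptron on a toy pattern task. Its competence ends up distributed across numeric weights; nowhere is there an explicit symbolic rule.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (input_vector, label) pairs, label in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Learning is a local numeric weight adjustment, not a
            # symbolic rule: the "knowledge" ends up spread across w.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy pattern task: is the left half of a 4-"pixel" input brighter
# than the right half?
data = [([1, 1, 0, 0], 1), ([0, 0, 1, 1], 0),
        ([1, 0, 0, 0], 1), ([0, 1, 0, 0], 1), ([0, 0, 0, 1], 0)]
w, b = train_perceptron(data)
for x, y in data:
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "label:", y, "prediction:", pred)
```

  The trained weights are just numbers, yet together they embody the pattern “left half brighter than right”: a small-scale instance of competence residing in a distributed pattern rather than in symbols.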

  Searle’s adherents appear to believe that the Chinese Room argument demonstrates that machines (that is, nonbiological entities) can never truly understand anything of significance, such as Chinese. First, it is important to recognize that for this system—the person and the computer—to, as Searle puts it, “give a perfect simulation of some human cognitive capacity, such as the capacity to understand Chinese,” and to convincingly answer questions in Chinese, it must essentially pass a Chinese Turing test. Keep in mind that we are not talking about answering questions from a fixed list of stock questions (because that’s a trivial task) but answering any unanticipated question or sequence of questions from a knowledgeable human interrogator.

  Now, the human in the Chinese Room has little or no significance. He is just feeding things into the computer and mechanically transmitting its output (or, alternatively, just following the rules in the program). And neither the computer nor the human needs to be in a room. Interpreting Searle’s description to imply that the man himself is implementing the program does not change anything other than to make the system far slower than real time and extremely error prone. Both the human and the room are irrelevant. The only thing that is significant is the computer (either an electronic computer or the computer comprising the man following the program).

  For the computer to really perform this “perfect simulation,” it would indeed have to understand Chinese. According to the very premise it has “the capacity to understand Chinese,” so it is then entirely contradictory to say that “the programmed computer . . . does not understand Chinese.”

  A computer and computer program as we know them today could not successfully perform the described task. So if we are to understand the computer to be like today’s computers, then it cannot fulfill the premise. The only way that it could do so would be if it had the depth and complexity of a human. Turing’s brilliant insight in proposing his test was that convincingly answering any possible sequence of questions from an intelligent human questioner in a human language really probes all of human intelligence. A computer that is capable of accomplishing this—a computer that will exist a few decades from now—will need to be of human complexity or greater and will indeed understand Chinese in a deep way, because otherwise it would never be convincing in its claim to do so.

  Merely stating, then, that the computer “does not literally understand Chinese” does not make sense, for it contradicts the entire premise of the argument. To claim that the computer is not conscious is not a compelling contention, either. To be consistent with some of Searle’s other statements, we have to conclude that we really don’t know if it is conscious or not. With regard to relatively simple machines, including today’s computers, while we can’t state for certain that these entities are not conscious, their behavior, including their inner workings, doesn’t give us that impression. But that will not be true for a computer that can really do what is needed in the Chinese Room. Such a machine will at least seem conscious, even if we cannot say definitively whether it is or not. But just declaring that it is obvious that the computer (or the entire system of the computer, person, and room) is not conscious is far from a compelling argument.

  In the quote above Searle states that “the program is purely formal or syntactical.” But as I pointed out earlier, that is a bad assumption, based on Searle’s failure to account for the requirements of such a technology. This assumption is behind much of Searle’s criticism of AI. A program that is purely formal or syntactical will not be able to understand Chinese, and it won’t “give a perfect simulation of some human cognitive capacity.”

  But again, we don’t have to build our machines that way. We can build them in the same fashion that nature built the human brain: using chaotic emergent methods that are massively parallel. Furthermore, there is nothing inherent in the concept of a machine that restricts its expertise to the level of syntax alone and prevents it from mastering semantics. Indeed, if the machine inherent in Searle’s conception of the Chinese Room had not mastered semantics, it would not be able to convincingly answer questions in Chinese and thus would contradict Searle’s own premise.

  In chapter 4 I discussed the ongoing effort to reverse engineer the human brain and to apply these methods to computing platforms of sufficient power. So, just as with a human brain, if we teach a computer Chinese, it will understand Chinese. This may seem to be an obvious statement, but it is one with which Searle takes issue. To use his own terminology, I am not talking about a simulation per se but rather a duplication of the causal powers of the massive neuron cluster that constitutes the brain, at least those causal powers salient and relevant to thinking.

  Will such a copy be conscious? I don’t think the Chinese Room tells us anything about this question.

  It is also important to point out that Searle’s Chinese Room argument can be applied to the human brain itself. Although it is clearly not his intent, his line of reasoning implies that the human brain has no understanding. He writes: “The computer . . . succeeds by manipulating formal symbols. The symbols themselves are quite meaningless: they have only the meaning we have attached to them. The computer knows nothing of this, it just shuffles the symbols.” Searle acknowledges that biological neurons are machines, so if we simply substitute the phrase “human brain” for “computer” and “neurotransmitter concentrations and related mechanisms” for “formal symbols,” we get:

  The [human brain] . . . succeeds by manipulating [neurotransmitter concentrations and related mechanisms]. The [neurotransmitter concentrations and related mechanisms] themselves are quite meaningless: they have only the meaning we have attached to them. The [human brain] knows nothing of this, it just shuffles the [neurotransmitter concentrations and related mechanisms].

  Of course, neurotransmitter concentrations and other neural details (for example, interneuronal connection and neurotransmitter patterns) have no meaning in and of themselves. The meaning and understanding that emerge in the human brain are exactly that: an emergent property of its complex patterns of activity. The same is true for machines. Although “shuffling symbols” does not have meaning in and of itself, the emergent patterns have the same potential role in nonbiological systems as they do in biological systems such as the brain. Hans Moravec has written, “Searle is looking for understanding in the wrong places. . . . [He] seemingly cannot accept that real meaning can exist in mere patterns.”37

  Let’s address a second version of the Chinese Room. In this conception the room includes no computer, and no man simulating a computer; instead it is filled with people manipulating slips of paper with Chinese symbols on them—essentially, a lot of people simulating a computer. This system would convincingly answer questions in Chinese, but none of the participants would know Chinese, nor could we say that the whole system really knows Chinese—at least not in a conscious way. Searle then essentially ridicules the idea that this “system” could be conscious. What are we to consider conscious, he asks: the slips of paper? The room?

  One of the problems with this version of the Chinese Room argument is that it does not come remotely close to really solving the specific problem of answering questions in Chinese. Instead it is really a description of a machinelike process that uses the equivalent of a table lookup, with perhaps some straightforward logical manipulations, to answer questions. It would be able to answer a limited number of canned questions, but if it were to answer any arbitrary question that it might be asked, it would really have to understand Chinese in the same way that a Chinese-speaking person does. Again, it is essentially being asked to pass a Chinese Turing test, and as such, would have to be as clever, and about as complex, as a human brain. Straightforward table lookup algorithms are simply not going to achieve that.
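
  The limitation is easy to make concrete. A table-lookup responder (a minimal sketch with hypothetical canned entries) handles only what its authors anticipated:

```python
# Hypothetical canned question table; a system of this kind is no
# deeper than this dictionary.
CANNED_ANSWERS = {
    "what is your name?": "You may call me the Room.",
    "how are you?": "I am fine, thank you.",
}

def lookup_room(question: str) -> str:
    # Pure table lookup: string matching, no model of meaning.
    return CANNED_ANSWERS.get(question.lower().strip(), "???")

print(lookup_room("How are you?"))                       # canned: works
print(lookup_room("How have you been feeling lately?"))  # novel: "???"
```

  Any interrogator who departs from the script exposes the table at once, and the space of possible question sequences is unbounded, so no finite table of canned answers can pass the test.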

  If we want to re-create a brain that understands Chinese using people as little cogs in the re-creation, we would really need billions of people simulating the processes in a human brain (essentially the people would be simulating a computer, which would be simulating human brain methods). This would require a rather large room, indeed. And even if extremely efficiently organized, this system would run many thousands of times slower than the Chinese-speaking brain it is attempting to re-create.
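
  As a rough back-of-envelope check (my arithmetic, not the book’s; it assumes the order-of-magnitude figures Kurzweil uses elsewhere, about 10^14 interneuronal connections, and a population of some 10^10 participants):

```latex
% Illustrative scale arithmetic (assumed figures, not from this passage):
\frac{10^{14}\ \text{connections}}{10^{10}\ \text{people}}
  \approx 10^{4}\ \text{connections per person}
```

  If a synapse updates on the order of a hundred times per second while a person passing slips of paper manages roughly one operation per second, and each person must serialize some 10^4 connections, the slowdown is a factor of 10^4 to 10^6 or more; “many thousands of times slower” is, if anything, conservative.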

  Now, it’s true that none of these billions of people would need to know anything about Chinese, and none of them would necessarily know what is going on in this elaborate system. But that’s equally true of the neural connections in a real human brain. None of the hundred trillion connections in my brain knows anything about this book I am writing, nor do any of them know English, nor any of the other things that I know. None of them is conscious of this chapter, nor of any of the things I am conscious of. Probably none of them is conscious at all. But the entire system of them—that is, Ray Kurzweil—is conscious. At least I’m claiming that I’m conscious (and so far, these claims have not been challenged).

  So if we scale up Searle’s Chinese Room to be the rather massive “room” it needs to be, who’s to say that the entire system of billions of people simulating a brain that knows Chinese isn’t conscious? Certainly it would be correct to say that such a system knows Chinese. And we can’t say that it is not conscious any more than we can say that about any other brain process. We can’t know the subjective experience of another entity (and in at least some of Searle’s other writings, he appears to acknowledge this limitation). And this massive multibillion-person “room” is an entity. And perhaps it is conscious. Searle is just declaring ipso facto that it isn’t conscious and that this conclusion is obvious. It may seem that way when you call it a room and talk about a limited number of people manipulating a small number of slips of paper. But as I said, such a system doesn’t remotely work.

  Another key to the philosophical confusion implicit in the Chinese Room argument is specifically related to the complexity and scale of the system. Searle says that whereas he cannot prove that his typewriter or tape recorder is not conscious, he feels it is obvious that they are not. Why is this so obvious? At least one reason is because a typewriter and a tape recorder are relatively simple entities.

  But the existence or absence of consciousness is not so obvious in a system that is as complex as the human brain—indeed, one that may be a direct copy of the organization and “causal powers” of a real human brain. If such a “system” acts human and knows Chinese in a human way, is it conscious? Now the answer is no longer so obvious. What Searle is saying in the Chinese Room argument is that we take a simple “machine” and then consider how absurd it is to consider such a simple machine to be conscious. The fallacy has everything to do with the scale and complexity of the system. Complexity alone does not necessarily give us consciousness, but the Chinese Room tells us nothing about whether or not such a system is conscious.

  Kurzweil’s Chinese Room. I have my own conception of the Chinese Room—call it Ray Kurzweil’s Chinese Room.

  In my thought experiment there is a human in a room. The room has decorations from the Ming dynasty, including a pedestal on which sits a mechanical typewriter. The typewriter has been modified so that its keys are marked with Chinese symbols instead of English letters. And the mechanical linkages have been cleverly altered so that when the human types in a question in Chinese, the typewriter does not type the question but instead types the answer to it. The person in the room receives questions in Chinese characters, dutifully presses the corresponding keys, and passes the typewriter’s answer outside the room.

  So here we have a room with a human in it who appears from the outside to know Chinese yet clearly does not. And clearly the typewriter does not know Chinese, either. It is just an ordinary typewriter with its mechanical linkages modified. So despite the fact that the man in the room can answer questions in Chinese, who or what can we say truly knows Chinese? The decorations?

  Now, you might have some objections to my Chinese Room.

  You might point out that the decorations don’t seem to have any significance.

  Yes, that’s true. Neither does the pedestal. The same can be said for the human and for the room.

 
