
Rationalist Spirituality


by Bernardo Kastrup


  Haikonen talks of his “cognitive architecture” as a “conscious” machine because, like many others, he seems to implicitly assume that a potential for subjective experience is a property of all matter, a position usually called “panpsychism”. For the purposes of this book, it is not very important whether this assumption is correct, so we will remain agnostic about it.

  Let us now return to our original question, namely: how can we reconcile the strong intuition we get that the “Chinese Room” cannot possibly understand Chinese, with the objective, measurable fact that it does possess the intelligence required to hold a conversation in Chinese? The logical answer is that understanding is an object in consciousness that is correlated with symbol associations in the brain, but cannot exist outside of consciousness. The Chinese Room argument makes a separation between the entity that contains the model of reality, with corresponding symbol association rules (the manual), and the entity that possesses consciousness (the clerk). That separation makes the jump from symbol associations to true understanding impossible, for the manual itself is not conscious. Even though the conscious clerk is performing the symbol associations himself, one at a time, following the rules in the manual, he does not have an internalized model in his own brain that could lead to understanding.
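  To make the purely mechanical character of the room concrete, here is a minimal sketch in Python, my own illustration rather than anything from Searle or the book; the sample symbols and canned replies are hypothetical placeholders standing in for the manual’s vastly larger rule set.

```python
# A minimal sketch of the Chinese Room as pure rule-following; my own
# illustration, not from the book. The "manual" is reduced to a lookup
# table, and the sample symbols and rules are hypothetical placeholders.

MANUAL = {
    "你好吗？": "我很好，谢谢。",      # hypothetical rule: greeting -> canned reply
    "你叫什么名字？": "我没有名字。",  # hypothetical rule
}

def clerk(incoming: str) -> str:
    """Mechanically apply the manual's rules; no step involves meaning."""
    return MANUAL.get(incoming, "对不起。")  # default output symbols

# The room emits a plausible Chinese reply, yet nothing in this program
# connects the symbols to anything in the world:
print(clerk("你好吗？"))  # -> 我很好，谢谢。
```

  However large such a table grew, the procedure applying it would remain pure symbol shuffling, which is exactly the intuition the argument trades on.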

  The strong intuition that we get from the Chinese Room argument has nothing to do with intelligence, but with consciousness. Searle is appealing to that sense of insight and understanding that we have as humans. Insight and understanding are correlates in consciousness of certain intelligent processes taking place in the physical brain. The latter consist simply of symbol associations performed according to the rules imposed by the structure and electrochemistry of the neural networks in the brain, akin to the clerk following the rules in a manual. However, in the Chinese Room, the model is in the manual while consciousness is in the clerk, so the symbol associations can never translate into a conscious insight of understanding. The Chinese Room argument shows that, when separating the entity with assumed consciousness (the clerk) from the unconscious intelligent model (the manual), true understanding of the model cannot occur. This clearly highlights our strong intuition that understanding only exists in consciousness, not in intelligence. Symbol associations reflect intelligence, but not understanding.

  On the other hand, when the symbol associations occur in the physical brain, they lead to the conscious “feeling” of insight and understanding because, as we inferred earlier, the brain is the transceiver of consciousness. This is what we have as humans that an extremely intelligent but unconscious computer would not have. As we argued earlier, this is not just a matter of material complexity, but of a fundamental property of nature (consciousness) for which we have an explanatory gap in science today.

  If the brain is a transceiver for the interaction between consciousness and the known material aspects of reality, then the symbol associations taking place in the physical brain are responsible for constructing the “messages” that are “transmitted” to consciousness. Searle’s Chinese Room argument appeals to our intuition that understanding, as an object in consciousness, cannot happen if that “transmission” does not take place. In the Chinese Room, the “transmission” never happens because intelligence and consciousness are separated from each other as properties of different entities.

  Critics of Searle have argued against the validity of the “Chinese Room” argument by pointing out that the clerk is just a part of a system comprising the clerk himself and the manual. The critics argue that it is the whole system that “understands” Chinese, not the clerk alone. Searle counters this argument in a straightforward way: imagine that the clerk has now memorized the entire manual, with all of its symbol manipulation rules. This way, you can now forget about the manual and only consider the clerk. He has the entire model in his brain. But the clerk still just follows memorized rules. Does the clerk now truly understand Chinese?

  Think about it for a moment. The clerk memorized the manual, but he is still just blindly following rules for associating symbols whose meaning he has no idea of. So the clerk still does not understand Chinese at all. That seems pretty evident, and Searle’s original argument ends here.

  But if you have been reading attentively, you will have noticed that I just put myself in an apparently difficult position here. My original argument was that intelligence and consciousness were separated from each other as properties of different entities, so there could be no understanding. However, the symbol association rules that originally were in the manual now are in the brain of the entity that has consciousness (the clerk). There is no longer any separation between rules and consciousness. So how can there still be no understanding? The onus is on me to explain that without contradicting my earlier argument. Here is the explanation: there is still no understanding because a crucial element of a true model of reality is still missing from the clerk’s head. It is subtle, but glaringly obvious once you see it. Bear with me.

  Let us go back to Haikonen’s idea of the brain as a correlation-finding and association-performing engine of perceptual symbols. One of the key features of his cognitive architecture is the ability to associate symbols of different modalities that signify the same thing. For instance, the mental image of a tree, with its trunk, branches, and leaves, is a mental symbol that corresponds directly to an external entity; it is a mirror-image in our heads of an entity of objective reality. The mental image of the English word “tree”, with its four letters, is also a symbol that signifies that same thing; but indirectly: the mental image of the word “tree” evokes in our brains the mental image of a tree, which, in turn, corresponds to a real tree “out there”. The word “tree” would be meaningless if not for its evocation of mental images of trees. Similarly, the sound we get when we pronounce the English word “tree” is another kind of indirect symbol representing the same entity of objective reality. For emphasis: the mental image of the tree is a symbol that directly mirrors external reality in the brain, while the symbols associated with the word “tree” (written or spoken) are mental labels that refer indirectly to external reality. The brain learns the correlation between these direct and indirect symbols: “sound ‘tree’” – “written word ‘tree’” – “mental image of a tree”. This way, when we hear the sound “tree”, or see the written word “tree”, the image of a real tree pops into our heads through learned association. This is how we understand language. Without this grounding of all indirect language-related symbols to perceptual symbols corresponding directly to entities of external reality, one could not possibly understand language. Without it, any language would feel to you like a foreign language that you never learned.
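  As a rough illustration of this cross-modal grounding, consider the following sketch, my own simplification rather than Haikonen’s actual architecture; the Concept class and its fields are hypothetical stand-ins for learned neural correlations.

```python
# A rough sketch of cross-modal symbol grounding; my own illustration,
# not Haikonen's actual architecture. Several indirect symbols (labels)
# are linked to one direct perceptual symbol standing in for an external
# entity; evoking any learned label retrieves the grounded percept.

from dataclasses import dataclass, field

@dataclass
class Concept:
    percept: str                              # direct symbol (e.g. a stored tree image)
    labels: set = field(default_factory=set)  # indirect symbols (words, sounds)

    def associate(self, label: str) -> None:
        """Learn a correlation between an indirect symbol and the percept."""
        self.labels.add(label)

    def evoke(self, label: str):
        """A learned label evokes the percept; an unknown label evokes nothing."""
        return self.percept if label in self.labels else None

tree = Concept(percept="<mental image of a tree>")
tree.associate("written word 'tree'")
tree.associate("spoken sound 'tree'")

print(tree.evoke("written word 'tree'"))   # -> <mental image of a tree>
print(tree.evoke("written word 'arbre'")')  # see note below
```

  Note that in the last line an unlearned label evokes nothing, just as an unlearned foreign word does; that is the sense in which ungrounded symbols stay meaningless.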

  When we imagine the clerk memorizing the entire manual for Chinese symbol manipulations, we are leaving out all the associations between the indirect language symbols (Chinese characters) and the direct perceptual symbols (mental images, sounds, flavors, aromas, feelings, etc.) that ground them to entities of external reality. In fact, it is only the Chinese man outside the room who, upon receiving the written answer from the room, can perform the necessary associations between Chinese characters and entities of external reality. Therefore, even when the clerk memorizes the entire manual, he still does not have the complete model of reality, with its corresponding symbol associations, in his head. We can then conclude that my original argument still holds: while the clerk has now internalized parts of the model of reality (the manual), he still does not have a crucial part of the model in his brain (the grounding of language symbols to external entities of reality). This way, the “transmission” to consciousness is incomplete in a very fundamental way, and there can be no understanding.

  Now let us extend the thought experiment a bit ourselves. If the clerk, having internalized the entire manual, were also to learn the associations between each Chinese character and the entity of external reality it refers to, then I guess we would be safe in saying that he would indeed understand Chinese. In fact, that would be the very definition of learning a new language: the manual would give him the grammatical and syntactic rules of the Chinese language, as well as the predetermined content of the answers he has to produce, while the grounding of the Chinese characters to entities of external reality would give him the semantics. But notice this: the key reason why we feel comfortable with this conclusion is that we assume the clerk to be a conscious entity like ourselves, thereby fulfilling the most important intuitive requirement for the ability to understand. So the “room” now understands Chinese because the clerk, a conscious human, understands Chinese himself.

  Now imagine that there is no human clerk in the Chinese Room, but only a supercomputer programmed with all the Chinese symbol manipulation rules originally in the clerk’s manual and equipped with Internet access. This way, the supercomputer would have sensory inputs in the form of images downloaded from the Internet. Assume too that we would further program the supercomputer with all symbol association rules necessary for linking each Chinese character to corresponding digital image files downloaded from the Internet. For instance, a digital photograph of a tree would be linked with the Chinese character for “tree”. Would the supercomputer now truly understand Chinese? Could mere software links between digital symbols and digital images be the crucial difference that confers understanding, even though there is no subjective experience of those symbols and images?

  I know that, in appealing to your intuition with the questions above, I am doing more hand-waving than logical argumentation. However, it is my contention that the very notion of “understanding” resides eminently in conscious experience. My use of the modified “Chinese Room” argument above aims at highlighting precisely that. If such contention is correct, there is no alternative but to argue about the notion of understanding in the subjective framework where the notion itself exists. I thus submit that the supercomputer would still have no true understanding, despite the software links between Chinese characters and digital images, so long as the supercomputer is not conscious.

  When one considers the inner workings of the brain, one is looking at the processes of intelligence. In the absence of consciousness, intelligence consists purely of “mechanical” symbol associations, grounded in external reality or not, like what the clerk in the Chinese Room does with the help of his gigantic manual. Symbol associations are just the neural correlates of objects in consciousness, but are not conscious experiences in or by themselves. Searle’s Chinese Room argument, with the extensions we discussed above, helps us gain a strong intuition about the difference between those two things: whenever we separate crucial symbol associations from an assumed conscious entity, our intuition tells us that understanding of those symbol associations is no longer possible.

  As a final note, I want to make sure I do not misrepresent Searle’s points of view here. Searle does not believe that consciousness (or “intentionality”, which is the technical term he actually uses) emanates from yet unknown aspects of reality. He does not believe the brain to be a transceiver for immaterial consciousness. In fact, he believes that consciousness is a property of the structure and electrochemistry of the brain, therefore being generated by the brain itself. He does not believe electronic computers can manifest consciousness because computers today do not replicate, but merely simulate, that structure and electrochemistry. According to Searle, it is as-yet-unknown “causal powers” of the structure and electrochemistry of the brain that allow consciousness, and therefore understanding, to exist. Although Searle does not identify what those “causal powers” are, here I associate them with whatever features of the brain allow for the interaction of immaterial consciousness with the material world. In other words, for me Searle’s “causal powers of intentionality” are the specific structures and electrochemical properties of the brain that allow it to work as a kind of consciousness transceiver, whatever those specific structures and electrochemical properties may be. According to Stapp’s theory, it is the quantum mechanical nature of the movement of calcium ions in nerve terminals that makes up such “causal powers of intentionality”.

  Whatever the origin or cause of consciousness, Searle’s arguments clearly highlight the importance of there being consciousness for the ability of an entity to have true understanding. This is the point where Searle’s arguments fit into the thought-line of this book.

  Let us go back to our inference that the brain is a transceiver for consciousness in the known, material reality. Consciousness only has access to the symbol associations taking place inside the brain, not to objective reality itself. However, since the brain builds indirect mental models of external reality that operate through those symbol associations, consciousness can have indirect access to reality. The structure and electrochemistry of the brain frame the perception of whatever the external reality might be, before presenting it to consciousness. Therefore, disturbances or damage to the way the brain physically operates immediately affect and modulate our conscious perception of the world, even though consciousness, as inferred, does not arise from the brain itself.

  Chapter 8

  The beginnings of a theory of purpose

  We now have to revisit a question we left open in an earlier chapter: why would nature impose on itself limitations analogous to the ones faced by interplanetary explorers operating robotic vehicles from a distance? If consciousness is such a primary ground of meaning, why would nature choose to trap consciousness within the narrow confines of brains? It does not seem to make any sense. Yet, we have arrived at this crossroads by following a coherent and rational line of thought. Therefore, the question is certainly deserving of careful consideration.

  One could argue that consciousness is simply on its way to expansion and enrichment. The path to expansion may entail that, at the current stage of universal evolution, consciousness happens to be limited to the capabilities of the current physical brain structures and associated models of reality. However, the brain itself can be expected to continue to evolve and improve over generations, thereby easing the limitations imposed on consciousness. The models of reality that brains are capable of building can become increasingly more comprehensive, sophisticated, and accurate, thereby giving consciousness access to more and more elements and laws of nature, as mirrored in those mental models. In this context, although the conscious experience of nature remains always indirect, operating through nature’s reflection on mental models, the current limitations of consciousness are seen simply as a natural stage in its path to enrichment. It can be inferred that, at some point in the universe’s evolution, such limitations will gradually erode through material betterment, and consciousness will expand to yet unknown depth and scope.

  At first sight, the hypothesis above may sound entirely consistent with what we have articulated so far. It seems to correspond perfectly to the natural process of enrichment that we have inferred earlier to take place in nature, and to give it its meaning. However, more careful analysis shows us that the hypothesis above is, in fact, inconsistent with the line of argumentation we have constructed thus far. The hypothesis is based on the subtle assumption that consciousness is fundamentally circumscribed by the known material reality, its depth and scope being a consequence of the evolution of material structures (for instance, brains). In other words, it is assumed that the reach of consciousness fundamentally depends on structures of the known material reality. Only then does it make sense to infer that consciousness is enriched as such structures of material reality themselves evolve.

  However, earlier we have argued the exact opposite of this assumption: that consciousness has primacy over the known material reality. We have also argued that it is material reality that, in a way, is a consequence of consciousness, not the other way around. To support this position, we have used two arguments: first, the fact that we do not have direct access to objective reality and that all we believe to exist are, in fact, objects in our own consciousness; and second, the fact that Wigner’s interpretation of quantum mechanics places observation in consciousness as a precondition for the physical existence of material reality. Therefore, the hypothesis that consciousness expands merely as a consequence of brain evolution is not logically consistent with our articulation so far.

  So we are still left with our original question: if consciousness is such a primary ground of meaning, as inferred in previous chapters, why would nature choose to trap consciousness within the narrow confines of physical brains? We have seen above that whatever the answer to this question may be, it cannot entail that the enrichment of consciousness is circumscribed or paced by the evolution of structures in the known material reality. What other hypothesis are we left with?

  The only avenue left is that the very imposition of limitations on consciousness through material structures is the vehicle for its expansion. Now, that sounds totally illogical at first. It sounds like saying that you can lose weight by eating more, or something similarly contradictory. But there is a surprising way in which this makes sense. In fact, there is a way in which this may explain your existence right now, including the fact that you are reading this book. To gain insight into it, however, we need to briefly touch upon what science calls “information theory”.

  Engineer and mathematician Claude Shannon, the founder of information theory, published a highly influential scientific paper in 1948, the concepts of which underlie all electronic communications today.1 Every phone call you make, every page you download from the Internet, every show you watch on television has been made possible by the theoretical framework outlined by Shannon. His key insight in the paper was to find a way to quantify what we call “information”.

  Shannon succeeded in quantifying information by using the framework of a system where a transmitter selects one among a set of possible messages and sends it to a receiver. You could think of the transmitter and receiver as you and a friend of yours talking on the phone. Suppose you called your friend to ask how he is doing. There is an enormous number of messages he could then select to transmit to you in reply. Namely, he could say that he is doing “well”, or “terrible”, or “better”, or even that he is “unsure”. According to Shannon, the more possible messages your friend can pick from as an answer to transmit to you, the more information there will be in that one single answer. To understand Shannon’s insight, imagine that you call your friend merely to know if he is at home. There are then only two possible messages that can be transmitted: yes (for instance, if he picks up the phone himself) or no (for instance, if nobody picks up the phone). There is then less information in the message, whether it is “yes” or “no”, since the number of possible messages was restricted to only two. As Shannon put it himself, “[…] the number of [possible] messages […] can be regarded as a measure of the information produced when one message is chosen.”2
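  In modern textbook notation, Shannon’s measure for a choice among equally likely messages can be written down directly; the formulas below are standard information theory rather than the book’s own wording, added here for illustration:

```latex
% Information in one message chosen from N equally likely alternatives:
I = \log_2 N \quad \text{bits}

% The yes/no phone call has N = 2 possible messages:
I = \log_2 2 = 1 \ \text{bit}

% A reply chosen from, say, 32 equally likely answers carries more:
I = \log_2 32 = 5 \ \text{bits}

% For messages with unequal probabilities p_i, the average information
% per message (Shannon's entropy) generalizes this:
H = -\sum_i p_i \log_2 p_i
```

  The two-message phone call thus carries exactly one bit, while the open-ended “how are you doing?” carries more, matching the intuition in the paragraph above.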

 
