
Rationalist Spirituality


by Bernardo Kastrup


  Philosopher John Searle once proposed a thought experiment that has become notorious and extremely influential in academic circles. It is called the “Chinese Room” argument,1 and it has been used to highlight an intuition that no computer can ever truly understand anything.

  The thought experiment goes like this. A clerk who only speaks English is locked up in a room without windows. Through a small slot in the wall of the room, a Chinese person can pass him questions written in Chinese. The Chinese person has no idea who or what is inside the room. He just passes his questions on paper through the small slot. Inside the room, our English-speaking friend receives the paper filled with Chinese symbols. He has no idea what those symbols mean, but he has a huge manual, written in English, on how to process Chinese symbols so as to generate answers in Chinese. His job is this: given the Chinese symbols on the paper received from outside, he must follow the rules in the manual and generate another sequence of Chinese symbols to send back outside as a reply to the question originally received. The Chinese person receives the reply from the room and, lo and behold, finds a perfectly reasonable and intelligible answer, in Chinese, to the question he had originally asked. He very reasonably assumes, then, that whatever or whoever is inside the room understands Chinese.

  However, the English-speaking clerk in the room has no idea what the question was, or the answer, for that matter. All he did was blindly follow rules for manipulating symbols. The rules could go like this: for such-and-such groups of Chinese symbols coming in, write such-and-such groups of Chinese symbols in your reply. The rules would then have to cover every possible group, and combination of groups, of symbols that could occur. Naturally, there would be countless rules of potentially enormous complexity. But we can imagine that the manual is big enough and that the English-speaking clerk inside the room has enough time, enough blank sheets of paper, pencils, and filing cabinets to do the necessary bookkeeping.
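  To make the clerk’s predicament concrete, here is a minimal sketch in Python of the rule book as a bare lookup table. The two entries are placeholders invented for illustration; Searle’s point is precisely that a real rule book would need vastly many such rules, none of which require understanding to apply.

    # A toy sketch of the Chinese Room: the clerk blindly maps incoming
    # symbol strings to outgoing ones via a rule book (a lookup table).
    # The entries are invented placeholders, not real conversational rules.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
        "今天天气好吗？": "今天天气很好。",  # "Is the weather good?" -> "Yes, it is."
    }

    def clerk(question: str) -> str:
        """Follow the rules mechanically; no understanding is involved."""
        return RULE_BOOK.get(question, "对不起。")  # fallback reply: "Sorry."

    print(clerk("你好吗？"))  # the "room" answers correctly, understanding nothing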

  Now Searle asks the following question: even though from the outside we may believe that the “room” understands Chinese, since it answers questions in Chinese correctly every time, can a clerk blindly following a set of rules really be said to understand Chinese? Our intuition screams at us: of course not. As Searle so colorfully put it, “[…] such symbol manipulation by itself couldn’t be sufficient for understanding Chinese in any literal sense because the man could write ‘squoggle squoggle’ after ‘squiggle squiggle’ without understanding anything in Chinese.”2 Analogously, says Searle, computers can never truly be said to understand anything, since all they do is manipulate symbols according to pre-programmed rules, very much like the English-speaking clerk inside the room. Searle believes that understanding is the result of unique properties, or “causal powers”, of the brain that no computer simulation can reproduce.

  This intuition is incredibly powerful, and yet the majority of academics today have concluded that the argument actually says nothing about intelligence, or about the possibility of computers becoming intelligent. The manual the clerk follows is like a computer program, or software. If, as postulated in the thought experiment, that software were elaborate, complex, and complete enough to generate a correct reply in Chinese every time, then it would indeed be intelligent. After all, unlike consciousness, intelligence is an objective property that can be objectively measured. It is just that the required level of complexity of the software would be so enormous that it would not fit our normal mental picture of a rule book used by a clerk inside a room. So how can we reconcile the strong intuition we get that the “room” cannot possibly understand Chinese with the objective fact that it does possess the intelligence required to hold a conversation in Chinese?

  To continue our exploration, we need a clearer definition of intelligence. Although there are many variations of the definition in the academic world, most of them capture the same fundamental aspects. An entity is said to be intelligent when it is capable of building internal, mental models of reality by means of which it can interpret past and current events, as well as anticipate future events, with some degree of accuracy and speed.

  In less formal wording, if you have valid mental explanations for things that happened, and valid mental predictions for things that might happen, then you possess a degree of intelligence. You are intelligent if you can, for instance, correctly explain why your bank balance is low this week; and you are even more intelligent if you can explain why the world economy nearly collapsed during the 2007-2009 financial crisis. You are intelligent if you can, for instance, predict that your steak will cook if you put it over a hot grill; and you are even more intelligent if you can predict how global warming will play itself out. The more complete, elaborate, and accurate these explanations and predictions are, the more intelligent you will be. The faster you can come up with these predictions and explanations, the more intelligent you will be.
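  To make this definition concrete, here is a deliberately minimal sketch in Python. The “reality” is an invented series of weekly bank balances, and the “model” is a simple linear trend that both explains the past observations and anticipates the next one; real mental models are, of course, incomparably richer.

    # Intelligence as explanation plus prediction, in miniature: a model
    # (a linear trend) that accounts for past events and anticipates a
    # future one. The balance figures are invented for illustration.
    past_balances = [500.0, 420.0, 340.0, 260.0]  # observed weekly balances

    # "Explanation": the model captures the regularity in past events.
    weekly_change = (past_balances[-1] - past_balances[0]) / (len(past_balances) - 1)
    print(f"Explanation: the balance changes by {weekly_change:.0f} per week")

    # "Prediction": the same model anticipates a future event.
    next_balance = past_balances[-1] + weekly_change
    print(f"Prediction: next week's balance will be about {next_balance:.0f}")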

  Our brain structures, together with the signals captured by our sensory organs, define an envelope of possible mental symbol manipulations for constructing those explanations and predictions. These mental symbols, or simply “symbols”, are the neural representations of the things we see, smell, touch, taste, or hear. They are also the neural representations of things we have seen, smelled, touched, tasted, or heard in the past and whose memories we still hold in our brains. For instance, if you look out your window right now and see a tree, the image of the tree is a symbol manipulated inside your brain, which represents the tree “out there”. The word “tree” and the sound of its pronunciation are also symbols that may be evoked in your brain, through association, as a consequence of manipulating the image of the tree. Even the Chinese character for “tree” can be one such symbol in your brain. Similarly, the memory of the smell of a pine tree that you may have cut last Christmas is also a symbol, manipulated in your brain at the moment you recall it. For clarity, all of these symbols consist merely of electrochemical signals circulating across your neurons; they may represent things like trees, sounds, and smells “out there”, but they are nothing more than measurable, material neural signals inside your head. The manipulations of such symbols take the form of neural signal processing in the brain, through which we compute valid explanations and predictions about things and events. For instance, you may say: “I could smell the pine tree so intensely last year because I exposed its insides when I cut it down”; or: “If I prune that tree in the garden later today, I will probably experience a strong woody aroma again”. Producing such explanations and predictions through symbol manipulations is the role of intelligence. Once the envelope of possible symbol manipulations is defined as a quantum wave function in the brain, Stapp’s theory tells us that consciousness comes into play and chooses one out of the possibilities within the envelope. This defines what we actually perceive as objects in consciousness.

  Naturally, we often also need to decide on an action. The perception that emerges out of choosing one of the possibilities circumscribed by the symbol manipulations in the brain guides our choice of action. For instance, if a car is speeding towards you and, through symbol manipulations of what you are seeing now and have seen in the past, you predict that you are going to be run over, you should perhaps consider getting out of the way. The choice of which action to take, still according to Stapp’s model, can only be made when consciousness again collapses the wave function that arises in the brain after it has been primed by the prediction that you are going to be run over.

  It is important that we differentiate between the “mechanical” symbol manipulations performed by our neurons and the insight, understanding, and other objects in consciousness that we get along with such symbol manipulations. Intelligence entails setting up our physical brain structures, in the form of connections between neurons, to construct accurate models of reality. In regular states, consciousness only has access to the symbol manipulations entailed by those models, and can only causally affect material reality through those models. Let us discuss this idea of “models” in a little more detail.

  Some video games are familiar examples of computer models. For instance, flight simulators are computer models of real aircraft and of atmospheric conditions. An accurate flight simulator is such that the behavior of the virtual aircraft in the virtual world of the computer simulation corresponds to the behavior of the real aircraft in the real world. This correspondence should be one-to-one, that is, each aspect of the simulated aircraft’s behavior should correspond accurately to an aspect of the real aircraft’s behavior. In technical jargon, we say that there is an “isomorphism”, that is, a correspondence of form between the model used in the simulation and the real thing.

  Although the models used in computer games aim simply at providing a marginally accurate simulation of reality for entertainment purposes, accurate models serve a much more practical and important purpose: they enable us to explain and predict reality without having to do the real thing. For instance, an accurate computer model of a tall skyscraper enables engineers to predict its stability and its ability to withstand high winds. Engineers can make these predictions before they actually begin building the skyscraper so that, if the design turns out to be unstable, they can adjust it rather than discover the errors only after the wrongly-designed building collapses. Engineers can also build computer models of structures that have already failed in reality, so as to explain why those failures occurred.
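  As a toy illustration of this kind of engineering model, the following Python sketch treats a building’s sway as a damped oscillator pushed by a constant wind. All the numbers are invented and real structural models are vastly more detailed, but the point survives the simplification: the simulation predicts the design’s behavior before anything is built.

    # A toy "skyscraper" model: sway as a damped oscillator under wind load.
    # All parameters are invented, in arbitrary units, for illustration only.
    stiffness, damping, mass = 4.0, 0.5, 1.0
    wind_force = 1.0                # constant wind load
    x, v, dt = 0.0, 0.0, 0.01       # displacement, velocity, time step

    for _ in range(10_000):         # simple Euler integration over 100 "seconds"
        a = (wind_force - stiffness * x - damping * v) / mass
        v += a * dt
        x += v * dt

    # If the predicted sway settles at a small value, the (toy) design holds.
    print(f"Predicted steady sway: {x:.2f} units")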

  Models are mirrors of reality. They comprise internal elements and laws that are isomorphic to elements and laws of reality. The more accurate the isomorphism, the more accurate the model’s explanations and predictions will be. The symbol manipulations in our brains are themselves models of reality inside our heads. And, as it turns out, all we are conscious of are these internal models of reality, these mirrors of reality inside our heads, not reality itself.

  We are constantly deriving explanations of events we have observed and making predictions about future events. Each of these explanations and predictions consists simply of neural signals (symbols) circulating in our brains, signals that are assumed to correspond to entities in external reality in an isomorphic manner. This is what intelligence does, and it is a core function of our brains. The more complete and accurate our mental models of reality are, the more intelligent we are. In other words, the more of the elements and laws of nature we can correctly mirror in corresponding neural signals in our brains, the more intelligent we are.

  Renowned neuroscientist Henry Markram and his team at the École Polytechnique Fédérale de Lausanne, in Switzerland, have been working on building computer versions of the structure and dynamics of mammalian brains.3 Their simulations capture the idiosyncrasies of individual neurons, which has given them unique and detailed insights into how the brain actually functions. With their work, they are addressing the idea discussed above: that the brain generates a model of the universe around us, our conscious experiences being defined by that model. In Markram’s words: “[the] theory is that the brain creates, builds, a version of the universe and projects this version of the universe, like a bubble, all around us.”4 Markram uses this idea to interpret, for instance, the way most anesthetics actually work: they do not send us into a deep sleep or block our perceptual receptors, as most people believe. Instead, they work by injecting “noise” into the brain, disrupting the symbol manipulations entailed by the brain’s model of the universe. This cripples our ability to consciously register anything coherently, because our consciousness is confined to that now-disrupted model. Markram says that “99% of what you see is not what comes in through the eyes; it is what you infer”5 by means of the model of the universe inside your brain. This is a remarkable assertion that illustrates the extent to which our perceived “objective reality” is actually determined by the mental models in our heads, and how hopelessly confined to those models our regular consciousness seems to be.

  Returning to the analogy with robot explorers on Mars, the models responsible for the symbol manipulations in our brains perform a lot of processing on symbols captured by our senses prior to their exposure to consciousness. This is analogous to the computations the Martian robotic explorers performed, prior to transmission, on data sent to mission control on Earth. We may be conscious of some of the raw data captured by our senses, but we are mostly aware of the explanations and predictions derived from that data by our mental models of reality. In a way, consciousness is “trapped” inside our mental models, having no direct access to reality, but only to the symbol manipulations in the brain. If the models are not entirely accurate, then inaccurate perceptions arise in consciousness. And since our mental models are never complete, in the sense that they never capture the whole of nature, our consciousness is never aware of the whole of nature. We are like video gamers who spend their entire lives in a flight simulator, having never even seen a real aircraft, let alone flown one.

  The question now is: how does the brain build these indirect models of reality? How do these models come to incorporate the correct manipulations of symbols? The brain is so enormously complex that it is difficult to answer these questions solely through analysis of a real brain. A complementary approach is an engineering-oriented one: instead of only analyzing the brain, we can also try to synthesize something like the brain and see if it works in similar ways. If it does, we will have our answers, since we will know exactly how we built it. Though there are many valid attempts in both academia and industry today to engineer a brain-like electronic system, I will discuss only one, which I consider to be particularly insightful for our purposes: Pentti Haikonen’s “cognitive architecture”.6

  Haikonen has done advanced artificial intelligence research at Nokia Research Center in Finland. His goal has been to design cognitive computer systems that behave in ways analogous to humans, so they can better interact with humans and do things that, today, only humans can do. His greatest insight has been that the human brain is but a correlation-finding and association-performing engine. All the brain does is try to find correlations between mental symbols of perception and capture these correlations in symbol associations performed by neurons. In his artificial “brain”, these associations are performed by artificial associative neurons. All symbols in Haikonen’s artificial brain architecture are ultimately linked, perhaps through a long series of associations, to perceptual signals from sensory mechanisms. This grounds all symbol associations in perceived things and events of the external world, which gives those associations their semantic value. In this framework, the explanations derived by the brain are just chains of symbol associations linking two past events, and its predictions are just extrapolations of such chains.

  Let us look at some examples to understand this well. Suppose you see someone smile with satisfaction after having taken a bite of a chocolate cake. Your brain instantly conjures up an explanation for that: the person smiled because the cake tastes good. This explanation is the result of symbol associations the brain has been trained to perform over time, while finding naturally-occurring correlations between symbols of perception. Finding and capturing these correlations in Haikonen’s associative artificial neurons is analogous to what we call “learning”. For instance, in the past you may have eaten several slices of chocolate cake, all of which tasted very good. This is a naturally-occurring correlation between chocolate cake and delicious flavor. From the repetition of this experience, the brain learned over time to associate the perceptual symbol “chocolate cake” with the perceptual symbol “delicious flavor”. This association captures the learning that those two symbols tend to occur either together or as a consequence of one another. Your brain may also have learned a correlation, and encoded an association in your neurons, between the symbols “delicious flavor” and “smile”. This way, a sequence of two symbol associations leads from “chocolate cake” to “smile” via “delicious flavor”.
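  The following sketch in Python captures the bare idea only, not Haikonen’s actual associative neurons: repeated co-occurrence of perceptual symbols strengthens a learned association between them. The symbols and episodes are, of course, invented.

    # Learning as correlation-finding, in miniature: each time two symbols
    # are "perceived" together, the association between them is strengthened.
    from collections import defaultdict

    associations = defaultdict(int)  # (symbol_a, symbol_b) -> co-occurrence count

    def observe(*symbols):
        """Perceive symbols together and strengthen their pairwise associations."""
        for a in symbols:
            for b in symbols:
                if a != b:
                    associations[(a, b)] += 1

    # Repeated experiences: cake co-occurs with flavor, flavor with smiling.
    for _ in range(5):
        observe("chocolate cake", "delicious flavor")
        observe("delicious flavor", "smile")

    print(associations[("chocolate cake", "delicious flavor")])  # 5: learned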

  Once the symbol associations are in place, they serve as a model to explain observed events, as well as to predict future events. This way, when you see somebody smile after eating chocolate cake, your brain matches those observed symbols to the chain of associations “chocolate cake” – “delicious flavor” – “smile”. You then infer that the person smiled because the chocolate cake tasted good. Similarly, if you are at a restaurant and the waiter places before you a dessert plate containing a slice of chocolate cake, you will predict that you will have the experience of “delicious flavor” the moment you eat it. That prediction arises because your brain has already encoded (that is, learned), from previous observations, an association between “chocolate cake” and “delicious flavor”.
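  Continuing the sketch, and again only as a toy illustration, an “explanation” becomes a chain of associations linking two observed symbols, and a “prediction” simply follows the strongest association from the current symbol. The weights below stand in for the co-occurrence counts learned above.

    # Explanation as a chain of associations; prediction as extrapolation.
    # The weighted links stand in for previously learned co-occurrence counts.
    associations = {
        "chocolate cake": {"delicious flavor": 5},
        "delicious flavor": {"smile": 5},
    }

    def explain(start, end, path=()):
        """Find a chain of associations linking two observed symbols."""
        if start == end:
            return path + (end,)
        for nxt in associations.get(start, {}):   # toy data; no cycles here
            chain = explain(nxt, end, path + (start,))
            if chain:
                return chain
        return None

    def predict(symbol):
        """Anticipate the symbol most strongly associated with this one."""
        followers = associations.get(symbol, {})
        return max(followers, key=followers.get) if followers else None

    print(explain("chocolate cake", "smile"))  # cake -> flavor -> smile
    print(predict("chocolate cake"))           # 'delicious flavor'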

  It may be difficult to accept that our sophisticated human intelligence can be boiled down to detecting correlations and establishing associations between mental symbols. This is precisely why Haikonen’s work is so insightful: he is able to explain myriad brain functions, in great detail, purely on the basis of symbol associations.

  At the moment you were born, your brain was likely a nearly blank slate (except for whatever instinctive responses may have been genetically encoded in it). It had no built-in models. Initially, it received from the sensory organs a flood of symbols, which it manipulated in relatively random, chaotic, incoherent ways. Over time, through learning, your brain started realizing that different symbols tended to occur together, or in sequence. The observation of these correlations led to physical modifications in the structure of your brain, which slowly coalesced into mental models of reality. How this can physically take place in the brain has been explained and modeled mathematically by Randall O’Reilly and Yuko Munakata,7 amongst others. If two symbols have occurred in succession many times in your past, then, when your brain perceives the first of them, the associations encoded in it will predict that the second may be about to follow. Ultimately, though, there are only learned associations between symbols, no understanding as such. This is striking, but quite logical, as we will soon see.
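  As a schematic nod to this kind of learning, and emphatically not O’Reilly and Munakata’s actual mathematical model, here is a Hebbian-style update in Python: when two symbols are repeatedly active together, the connection between them strengthens until the first can evoke an expectation of the second. The lightning-and-thunder pair is an invented example.

    # Hebbian-style learning in miniature: symbols that occur together
    # strengthen their connection ("fire together, wire together").
    weights = {("lightning", "thunder"): 0.0}  # invented symbol pair
    learning_rate = 0.2

    def hebbian_update(pair, pre_active=1.0, post_active=1.0):
        """Strengthen a connection when both symbols are active together."""
        weights[pair] += learning_rate * pre_active * post_active

    for _ in range(10):  # repeated paired experiences
        hebbian_update(("lightning", "thunder"))

    # A strong enough connection lets the first symbol predict the second.
    if weights[("lightning", "thunder")] > 1.0:
        print("Perceived 'lightning': predicting 'thunder' next")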

 
