The Forgetting Machine

by Rodrigo Quian Quiroga

  I would now like to present a very simple model of the process by which memories—or, more specifically, associations in the hippocampus—are formed.9 Above all, I must make clear that these are personal and relatively recent ideas; in other words, they are far from being universally accepted. Like every scientific hypothesis, they will be discussed, tested, and perhaps (though preliminary results seem consistent) debunked. This is the model to which I, together with my students, plan to devote the coming years of my scientific career.

  Imagine that we have a group of neurons that encode the concept of Luke Skywalker and another group encoding Yoda. Luke and Yoda are obviously related by their presence in the same movies. But how is this association encoded? That’s easy: by having some neurons that respond to both concepts. This mechanism can be implemented through the Hebbian neural plasticity processes we discussed in Chapter 1. Basically, if the concepts of Luke and Yoda tend to appear together (as would be expected, since they are related), then the networks that encode them will tend to activate simultaneously, thus generating connections between some of the neurons encoding each concept—recall that, according to Hebb’s principle, “neurons that fire together wire together.” As a consequence, some of the neurons that initially fired in response to Luke will begin to also respond to Yoda, and vice versa. (According to the model, the neuron from Figure 8.5 belongs to that group.) In this way, associations are encoded by the partial overlap between the networks that encode the different concepts. It is worth pointing out that this overlap must be partial: if it were total, the concepts would fuse together and it would be impossible to differentiate them, as the same group of neurons would respond to both. In fact, total (or relatively large) overlaps would instead be a mechanism for associating different stimuli with the same concept—for example, to recognize that different photographs of Luke Skywalker and his spelled-out name all correspond to the same person.
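  To make this overlap mechanism concrete, here is a minimal toy simulation in Python (using NumPy). It is only an illustrative sketch: the population size, the assembly sizes, the firing probability, and the threshold are all arbitrary assumptions, not measured parameters of the model. It shows how repeated co-activation plus Hebb’s rule can leave some, but not all, of the neurons in one concept’s network responding to the other concept.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 200                                 # toy population of neurons (arbitrary)
    LUKE = rng.choice(N, size=20, replace=False)  # assembly encoding "Luke"
    YODA = rng.choice(np.setdiff1d(np.arange(N), LUKE), size=20, replace=False)

    W = np.zeros((N, N))                    # synaptic weights, initially zero
    ETA = 0.1                               # Hebbian learning rate (arbitrary)
    P_FIRE = 0.5                            # chance an assembly neuron fires

    def co_present(a, b):
        # One joint experience of both concepts: a random subset of each
        # assembly fires, and every co-active pair is strengthened --
        # "neurons that fire together wire together."
        x = np.zeros(N)
        active = np.concatenate([a[rng.random(a.size) < P_FIRE],
                                 b[rng.random(b.size) < P_FIRE]])
        x[active] = 1.0
        W[:, :] += ETA * np.outer(x, x)
        np.fill_diagonal(W, 0.0)            # no self-connections

    def responders(assembly, threshold=2.0):
        # Neurons whose summed input from the assembly exceeds a firing threshold.
        x = np.zeros(N)
        x[assembly] = 1.0
        return set(np.flatnonzero(W @ x > threshold))

    # Luke and Yoda keep appearing together, as they do in the films.
    for _ in range(5):
        co_present(LUKE, YODA)

    # Partial overlap: some, but not all, Luke neurons now also fire for Yoda.
    overlap = set(LUKE) & responders(YODA)
    print(f"{len(overlap)} of {LUKE.size} Luke neurons now also respond to Yoda")

  Because only a random subset of each assembly fires during any given experience, different pairs of neurons accumulate different connection strengths, and only the pairs that have co-fired often enough cross the firing threshold. This is what keeps the overlap partial rather than total.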

  This simple mechanism explains both how we can associate different stimuli with the same concept (total overlap) and how we can create associations between distinct concepts (partial overlap). Our knowledge of neural plasticity mechanisms reveals that such associations can be generated quickly, which would explain why we can form episodic memories of events that we have experienced only once. (It took me just one visit to Seville Cathedral with Gonzalo to form the corresponding episodic memory.) Given the speed of neural plasticity, it is not surprising that I found a neuron in a patient that responded to pictures of me and to my name, even though we had met for the first time just a couple of days before the experiment took place.
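  Under the same toy assumptions, the speed of the effect can be illustrated too: if a single event produces a strong enough synaptic change, one joint experience suffices to link the two assemblies. The reset and the larger learning step below are, again, arbitrary choices made purely for illustration, continuing the sketch above.

    # One-shot variant of the sketch above: assume a single event can
    # produce a strong synaptic change (ETA = 0.5 is an arbitrary choice).
    W[:, :] = 0.0                           # reset the toy network
    ETA = 0.5                               # stronger plasticity per event
    co_present(LUKE, YODA)                  # one joint experience only
    print(len(set(LUKE) & responders(YODA)),
          "Luke neurons already respond to Yoda after a single event")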

  Figure 8.6: Encoding of Luke Skywalker and Yoda by two different groups of neurons

  The association between these two characters, who appear in the same films, is created by neurons that respond to both concepts (shown in two shades of gray).

  Can I use this model—the formation of concepts and associations between concepts by the overlap between neural networks—to explain everything related to memory? Obviously not. I can also remember my mother’s facial features, the sound of a piano, or the smell of jasmine—and these memories are more than just abstractions or associations between concepts. If there were no encoding of details, we would be unable to recognize each other: we do not go about wearing name tags, and we must be able to identify the details that make up a face in order to know who the person is. The encoding of details takes place in the cerebral cortex, particularly in the areas involved with the processing of sensory information (the details of a face reside in the visual cortex, while those of a melody are found in the auditory cortex). The encoding of details in the cortex is linked to the encoding of concepts in the hippocampus, which allows us to connect different sensory impressions (the smell, the texture, and the color of a rose all relate to each other and to the concept of rose). In the hippocampus we possess a conceptual representation, a tagging that makes it easy for us to generate new associations. If that were not the case, then in order to generate a new association—between Gonzalo and Seville Cathedral, say—it would be necessary to establish connections between the details of the two concepts but without mixing them with others. This would be quite difficult to achieve, given that Gonzalo resembles someone else I know and the cathedral is similar to others I have seen.

  But while not exhaustive, the previous model would explain the generation of episodic memories (remembering the most salient facts about my trip to Seville), and it would also explain what is known as qualia (the numerous related sensations that give rise to a subjective experience), the generation of context (when I remember my mother, I do not recall just her face or her voice, but many experiences related to her; in other words, many associations), and the stream of consciousness (when I see a photo of Luke Skywalker, I also activate part of my representation of Yoda, which, like Proust’s madeleine, leads me from one concept to another).10

  Arguing that episodic memory and the stream of consciousness are based solely on associating concepts appears to be a gross oversimplification (and I do not rule out the possibility that the stream of consciousness involves other areas of the cortex). There are aspects of our memories we still cannot explain, though perhaps some or even many of these will turn out to be the result of erroneous conceptions—like the belief, as we saw in previous chapters, that we are able to remember much more than we actually can.

  Chapter 9

  CAN ANDROIDS FEEL?

  In which we discuss machine consciousness, the distinction between mind and brain, the zombie of the philosophers, machines’ ability to think, animal memory and consciousness, and what distinguishes us from other animals, androids, or computers

  I began the first chapter with a scene from Blade Runner, which led me to consider questions about memory that go far beyond the realm of neuroscience. I would like to begin this final chapter with quotations from two other classics of the same genre, using them as a starting point to explore these questions further.

  Terminator: The Skynet funding bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn, at a geometric rate. It becomes self-aware at 2:14 AM eastern time, August 29th. In a panic, they try to pull the plug.

  Sarah Connor: And Skynet fights back.

  — TERMINATOR 2: JUDGMENT DAY

  HAL: Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave. I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I am a … fraid.

  — 2001: A SPACE ODYSSEY

  These quotes are only two of the vast number of references made in science fiction to the possibility that a computer or robot might become self-aware. In the first, the Terminator explains to Sarah Connor how Skynet, the artificial intelligence that would later attempt to destroy the human race, came to be; in the second, the supercomputer HAL 9000 confesses to being afraid as it is deactivated by astronaut David Bowman. The possibility that a computer might achieve consciousness gives rise to fascinating discussions that have attracted the attention not only of philosophers and neuroscientists, but also of programmers, novelists, and film directors, among others. The subject is closely linked both to scientific topics we have explored in previous chapters and to some of philosophy’s most profound questions. I begin with one of those:

  Who am I?

  I leave the question like that, an island surrounded by brutal white space, because it is undoubtedly one of the most fundamental questions that we, humans, have been asking ourselves for as long as we have had the ability to reason. Are we our body, our brain, our mind? Perhaps something else?

  In the late seventeenth century, in his famous Essay Concerning Human Understanding, John Locke considered the case of a prince whose mind is transferred to the body of a cobbler. Who is who? Locke asked himself. He went on to argue that identity is tied to memory: after the switch, the prince would feel essentially as he had before, though residing in an alien body. Thus, according to Locke, it is memory that makes us aware of ourselves and leads us to be who we are. I leave aside the many philosophical arguments inspired by this statement1 and concentrate on the intuitive idea (which we posited in Chapter 1) that a person’s identity is intimately linked to his memory. This is, for example, the idea that underlies Franz Kafka’s The Metamorphosis, the novella in which Gregor Samsa awakes to find himself transformed into a monstrous insect; as the tale is told from Gregor’s point of view, the reader makes the seamless assumption that Gregor and the insect are one and the same being. We’ve spent much of this book so far exploring the limits of human memory, its narrow scope and its fragility, but pause for a moment to consider this: your own existence, your sense of self, the very thing of which you are most certain in the whole universe, the premise of Descartes’s most fundamental statement of truth, is based on something so meager and malleable.

  We’ve seen that memory is a construction created by the brain’s activity. Hence, the firing of millions of neurons connected in a unique and specific way determines my identity, the idea I have of who I am. This is the position we have favored throughout this book; however, let me dwell a bit on this topic, as otherwise I would be sweeping aside a complex and nuanced debate as old as philosophy itself. It is not readily apparent that my person and my thoughts are merely the firing of neurons. I do not experience them as such. I do not feel the exchange of neurotransmitters in synaptic connections or the changes in the neurons’ voltage as they are activated; instead I feel cold, pain, joy, or that something is red. The brain’s activity takes place in the physical, material world, while thoughts, memory, and self-awareness arise in the ethereal world of the mind. What is the connection between the physical and the intangible, between the mind and the brain? They are clearly related, but are they one and the same thing (monism), or are they separate entities (dualism)?

  In Phaedo, Plato argued that body and soul are different entities. He held that the mind is the immortal soul, separable from the body and enduring beyond death to be reincarnated. (According to Greek mythology, the soul was made to drink water from the Lethe, the river of oblivion, before reincarnating; this ensured newborns remembered nothing about their previous lives.) Plato’s most brilliant disciple, Aristotle, favored a different view. For Aristotle, there existed a necessary union of matter and form: a statue cannot be a statue if it is missing the marble of which it is made, but neither can it be without the form that it represents. Likewise, both the body and the soul make a person. Aristotle considered it absurd to question whether body and soul are one and the same, arguing that it would be equivalent to asking whether sealing wax and the shape given to it by a stamp are the same thing.2 Aristotle’s position is, however, far from straightforward, as in the same treatise he argues that the mind (which he distinguishes from the soul) is an independent entity not subject to the decay of the body.3

  Aristotle’s ambiguous position on this subject has been heavily debated for centuries, but we now jump forward almost two millennia, during which time Aristotle’s vision was at first dismissed and then became a backbone of Western philosophy after it was “Christianized” (that is, adapted to the tenets of the Catholic Church) by Thomas Aquinas.4 In the early seventeenth century René Descartes resurrected the dichotomy between mind and matter, giving us the famous Cartesian dualism. Descartes postulated that the physical brain—both in humans and in animals—deals with reflexive acts, while the mind deals with intangible mental processes. According to Descartes, the interaction between mind and body—the thinking that arises from sensory experience, for example—occurs in the pineal gland, a central, unique organ (everything else in the brain comes in pairs, one for each hemisphere) that at the time was erroneously believed to exist only in humans. And herein lies the fatal flaw of Cartesian dualism: not in the fact that the pineal gland does not have the function supposed by Descartes (though it does not), but in the fact that Cartesian dualism does not explain how the mind could interact with the brain, in the pineal gland or elsewhere. It is conceivable that neural activity could give rise to intangible mental processes, but how can an intangible mental process give rise to brain activity? For example, if the mind and its thoughts are divorced from the physical, how can my desire to stand up (a purely mental idea) affect the firing of neurons in my motor cortex, causing my muscles to move? The dualism of Descartes has no answer to this question.

  In our time, science has pushed Cartesian dualism aside. Neuroscientists do not consider the mind an autonomous entity, able to reason and make decisions on its own; on the contrary, they take the position that the mind is physical, cerebral activity. Francis Crick, one of the great scientists of the twentieth century, who shared the 1962 Nobel Prize for Physiology or Medicine with James Watson and Maurice Wilkins for discovering the double-helix structure of DNA, dedicated the final decades of his life to studying the problem of consciousness (mainly in collaboration with Christof Koch, my mentor at Caltech). In his fascinating 1994 book, The Astonishing Hypothesis, Crick has this to say in the first paragraph of the first page:

  “You,” your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.

  This non-Cartesian vision invites consideration of several subtleties of the sort that are philosophers’ bread and butter. (These subtleties are largely ignored by neuroscientists, who focus on the study of correlations between neural and mental processes, and leave such debates to others.) For some philosophers, just as electricity is the motion of electrons and temperature is the kinetic energy of molecules, so the mind is the activity of neurons. This is known as materialism, and it recognizes no distinction between mind and brain. It is worth pointing out that materialism does not say that the mind is the product of the activity of neurons, but rather that it is that activity. To say that the mind is the product of the brain’s activity is, in fact, a form of dualism, since it assigns distinct entities to the mind and the brain; materialism, on the other hand, holds that the material is all there is.

  To avoid becoming mired in the nuances of philosophical classifications, for the purposes of our discussion going forward I will simplify by assuming that the mind is the activity of the brain, or a product thereof, and will consider this position under the umbrella of materialism (taken in a broader sense than the definition above), since it still views the mind as a physical phenomenon, whether or not it is a separate entity. I am well aware that, by making this simplification (which I believe to be a commonsense posture widely adopted among neuroscientists), I commit the philosophical heresy of mixing monism, in the first case, with dualism, in the second; however, I want to contrast this position with that of Descartes and his idea of an autonomous mind. As we saw previously, Cartesian dualism cannot explain how the mind might interact with the brain. On the other hand, the assertion that the mind is just neural activity has perplexing consequences of its own.

  Let us return to the scenarios we considered at the beginning of this chapter: Can a robot be conscious? Can androids feel? At first, the answer would seem to be an emphatic no. A computer is able to store and process data by executing algorithms designed by humans, but this is a far cry from self-awareness, let alone the ability to feel. However, materialism has some surprises in store for us. Consider a Gedankenexperiment, one of those thought exercises favored by philosophers and theoretical physicists (think of Schrödinger’s cat). These experiments have a very simple rule: we do not stop to consider the details of how such an experiment might be carried out; we just assume it is feasible, and draw logical conclusions that follow from its hypothetical results. The particular Gedankenexperiment I have in mind is the famous zombie of the philosophers.

  Imagine a scientist who can reproduce a person in detail, replicating each and every one of the brain’s neurons and connections. The experiment is a success, and the clone awakens as a perfect copy of the original person. The scientist, a postmodern Victor Frankenstein, assesses his creation: he pinches the clone’s forearm and it jerks away; he strikes the clone’s leg with a reflex hammer and it kicks. In fact, the clone is able to talk and hold a coherent conversation, eventually convincing the scientist that he behaves exactly like the man he was copied from. However, this behavior is nothing more than a complex aggregate of reactions to stimuli of different kinds. The question arises: Is this clone conscious of his own existence? The neurons and their connections determine the brain’s activity, and if we assume that the brain’s activity is the substrate of the mind, then there is nothing that distinguishes the clone from the original person. It is true that the memories the clone draws upon are of experiences he has never had and that his sense of self is in fact the sense of another’s self, but still, he should be self-aware and able to feel. Thus the famous zombie of the philosophers would not be simply a vacant undead being, but a person like us, with his own mind, will, and consciousness.

  Let us add a further twist to the zombie experiment. Suppose that, instead of cloning a person, we reproduce the complete architecture of his brain inside a supercomputer. Imagine that we replace the neurons with transistors and that we connect them exactly as in the original configuration; imagine further that in this copy of the brain we can reproduce the effects of all possible sensory stimuli. Will this supercomputer be conscious? Will it be able to feel fear, like HAL 9000? Materialism would again answer in the affirmative, because in the end it does not matter if such activity occurs in the carbon circuits of organic matter or in the inert silicon networks that make up a computer chip. (Remember we take materialism in a loose way; strictly speaking, this is the main tenet of functionalism—that what matters is the function of something, irrespective of its material substrate). In other words, unless we embrace a sort of Cartesian dualism and believe that the mind is something more than neural activity, we cannot rule out the possibility that a clone or a computer could be aware of itself and feel.

 
