The Forgetting Machine


by Rodrigo Quian Quiroga


  Let us leave the Gedankenexperiment aside and move on to the real world. There is no mad scientist able to replicate each and every connection in the brain, but artificial intelligence is quite real, and today’s computers increasingly blur and challenge the distinction between people and machines. Science advances at a frenzied rate, and what once appeared impossible—that a computer could vanquish a chess grandmaster—in fact occurred at the end of the twentieth century, when Deep Blue beat Garry Kasparov. Today’s robots can run, jump, reproduce human gestures, and even give the impression of having a personality, just like HAL 9000. Will machines, then, eventually be able to feel, or to be self-aware? And now we arrive at a dilemma: How would we test this? How can we know if a robot is able to feel?

  In Blade Runner, Harrison Ford interrogates potential androids with a battery of personal questions as he monitors their vital signs and eye reflexes with a “Voight-Kampff machine.” In our time, it is not far-fetched to think that an android might be able to ape the reactions of a person, no matter how complex; the question is simply one of technology (for example, current Geminoid and Actroid androids can already reproduce human gestures almost perfectly). Much more difficult would be sustaining an unpredictable human interaction, or a coherent conversation. In other words, while the emotional reaction of an android may appear identical to that of a human, the android would have trouble knowing when and how to deploy that reaction. This is precisely the basis of the Turing test, conceived by British mathematician Alan Turing in 1950.5 Turing proposed that asking whether a machine is able to think is analogous to asking whether it can replicate human behavior. Avoiding technicalities related to the appearance and voice of an android, in a Turing test the examiner types questions on a keyboard and receives the answers—from a person and a computer, each in a separate room—on a monitor. If after these chats the examiner is unable to distinguish the computer from the person, then the computer has passed the test.
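The protocol just described can be sketched as a short simulation. Everything concrete below (the respondents’ replies, the examiner’s strategy, the function names) is invented for illustration; it shows only the shape of the test, not any real implementation:

```python
import random

def run_turing_test(examiner_guess, questions, human_answers, machine_answers):
    """Run one imitation-game session: collect answers from two hidden
    respondents, then ask the examiner to guess which room holds the machine."""
    rooms = {"A": human_answers, "B": machine_answers}
    if random.random() < 0.5:                       # hide who is in which room
        rooms = {"A": machine_answers, "B": human_answers}
    transcripts = {room: [answer(q) for q in questions]
                   for room, answer in rooms.items()}
    guess = examiner_guess(questions, transcripts)  # examiner names "A" or "B"
    machine_room = "A" if rooms["A"] is machine_answers else "B"
    return guess != machine_room                    # True: the machine passed

# Toy respondents: the machine gives a canned reply, the "human" improvises.
human = lambda q: "Let me think about " + q
machine = lambda q: "I am not sure how to answer that."
# An examiner who looks for the canned phrase in room A's transcript.
examiner = lambda qs, ts: "A" if "not sure" in ts["A"][0] else "B"
passed = run_turing_test(examiner, ["What is irony?"], human, machine)
print(passed)  # this examiner always spots the canned reply, so False
```

A machine passes only against examiners who cannot find a question that separates it from the human, which is exactly why, as discussed below, the examiner’s skill matters as much as the machine’s.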

  In theory, the Turing test seems to make sense, since we can imagine any number of questions, or sequences of questions, that could reveal that we are interacting with a computer. In practice, however, the validity of the results is debatable, as they may depend not just on the complexity of the computer’s underlying algorithms but also on the ability of the examiners to pose the right questions and to draw correct conclusions from the answers (for instance, one computer was programmed to fool the examiners by mimicking common “human” typing errors). A more substantial critique of the Turing test comes from philosopher John Searle. He argued that the test is fundamentally unable to determine whether a machine can think, and to make his point he proposed a Gedankenexperiment that is now among the most discussed by contemporary philosophers: the Chinese room.

  Imagine a person, who does not speak Chinese, locked in a room with an enormous manual on how to manipulate Chinese symbols. Someone outside the room provides the person with cards containing questions in Chinese; the person does not understand the questions but by following the instructions in the manual is able to produce sensible answers. Searle’s conclusion is that the person would appear to understand Chinese, despite not knowing a single word, and pass the Turing test. The Chinese room argument not only seems to lay bare the limitations inherent in attempts to evaluate whether a machine is able to think, it is also offered to refute the possibility that a machine is able to think at all, because, according to Searle, machines can only obey rules without understanding their content. These conclusions, as appealing as they may sound, have been heavily debated among philosophers.6 A main criticism, known as the systems reply, is to consider what would happen if the person in the Chinese room were able to internalize the whole process and remember by heart all the rules involved. Would we still say that this person doesn’t understand Chinese? Does it make any difference if he resorts to an external manual or has memorized its content? Searle’s argument triggers a fascinating discussion about what it means to understand or have thoughts. If we claim that after having internalized the manual’s rules, the person is still not able to understand, what leads us to think that? What’s the difference between this person and someone who understands Chinese? In other words, how can we tell that the people around us are thoughtful, conscious beings and not sophisticated robots simply executing commands? Implicit in Searle’s argument is a suggestion of Cartesian dualism—which we neuroscientists try to leave aside—but instead of the immaterial autonomous mind, we now refer to the no-less mysterious notion of understanding. 

  In my view, thought and understanding involve the ability to generalize and to react in novel situations. This possibility is ruled out by construction in the Chinese room argument, since the manual includes all possible questions and answers. But if we relax this premise, we could say that the person in the Chinese room understands Chinese if he is able to correctly answer questions that are not in the manual, inferring the answers from other rules. Extrapolating to machines, we could argue that a machine shows some level of thought and understanding if it displays general intelligence—that is, if it is able to learn by inference to perform functions it was not programmed to perform. This is certainly the most difficult challenge facing artificial intelligence.
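As a toy illustration of this distinction, compare an agent that only looks answers up in a “manual” with one that infers them from a general rule. The questions and rules below are invented, with arithmetic standing in for Chinese:

```python
# The "manual": a fixed table mapping each listed question to an answer.
manual = {
    "2+2": "4",
    "3+3": "6",
}

def room_occupant(question):
    """Follows the manual blindly and fails on anything not listed."""
    return manual.get(question, "???")

def generalizing_agent(question):
    """Infers the answer from a general rule (here, actually adding),
    so it can also handle questions that never appear in the manual."""
    a, b = question.split("+")
    return str(int(a) + int(b))

print(room_occupant("2+2"))         # in the manual -> "4"
print(room_occupant("17+25"))       # not in the manual -> "???"
print(generalizing_agent("17+25"))  # inferred from the rule -> "42"
```

The lookup agent is indistinguishable from the generalizing one so long as the examiner stays within the manual; the difference only shows on questions the manual never anticipated.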

  So far we have discussed clones, philosophical zombies, and computers that emulate the workings of the brain. Now we turn to less hypothetical subjects: other animals. Can animals think? Do they have memories like we do, and can they use them to be aware of their own existence?

  The Florida scrub jay (Aphelocoma coerulescens) is a bird in the crow family that stores acorns, seeds, and so on during the summer for use when winter comes. These birds tend to steal food from one another, and for that reason must hide it in scattered places to avoid having their entire stash discovered. The astounding fact is that they remember not just tens or even hundreds of hiding places, but thousands distributed throughout many square miles around their nests. What’s more, a series of clever experiments carried out by Nicky Clayton’s group at Cambridge University established that these birds remember when they hid the food, realizing that, for example, after a few days a peanut is still tasty, but a worm not so much; whether they were being watched when they hid it, returning later to move the food in case the witness should try to steal it; and even planning for the future, by hiding food where they know they will be able to retrieve it later, and not in places that will be hard to reach.7

  The scrub jay may be a sort of memory champion in the animal kingdom, but many other species have at least some memory capacity. We have all had experiences with cats and dogs, who can clearly remember other animals, people, and events—for instance, that it was the vet who administered a painful injection.

  In general, animal memory has been studied mainly in monkeys and rodents, using selective brain injury, drugs, genetic manipulations, or neural recordings of different areas of the brain. In monkeys, one of the classic memory experiments is known as delayed match-to-sample, in which the subject is shown an object, and later, when the same object is shown alongside another, the subject has to choose the one that was shown initially, in order to receive a reward. (A variation, called delayed non-match-to-sample, has the animal choose the new object.) Experiments of this kind allow scientists to evaluate an animal’s ability to remember objects. A substantial number of scientific papers document the neural activity of animals while they perform this experiment, which was also widely used to test an animal model aiming to reproduce the kind of amnesia suffered by patient H.M. (whose case we discussed earlier) by performing a similar surgery in monkeys.8
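The trial structure described above can be sketched as a short simulation; the object names, the reward scheme, and the “perfect memory” subject are all invented for illustration:

```python
import random

def run_dmts_trial(subject_choice, objects, match=True, rng=random):
    """One delayed match-to-sample trial: show a sample object, impose a
    delay, then present the sample next to a distractor for a choice."""
    sample, distractor = rng.sample(objects, 2)   # pick two distinct objects
    # ... delay period: the subject must hold the sample in memory ...
    pair = [sample, distractor]
    rng.shuffle(pair)                             # randomize left/right position
    choice = subject_choice(sample, pair)         # subject picks from the pair
    target = sample if match else distractor      # non-match variant flips rule
    return choice == target                       # True -> reward

# A subject with perfect memory always chooses the remembered sample.
perfect_memory = lambda sample, pair: sample
objects = ["banana", "ball", "cup", "block"]
correct = sum(run_dmts_trial(perfect_memory, objects) for _ in range(100))
print(correct)  # a perfect subject earns the reward on every trial: 100
```

In a real experiment the delay length is the key variable: stretching it probes how long the sample can be held in memory, which is what makes the task sensitive to the kind of amnesia seen in H.M.-style lesions.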

  In rodents, the most common experiments evaluate spatial memory. This is in part because it is evolutionarily crucial for rodents to be aware of and remember their surroundings (for example, to know how to escape if a predator appears), and in part because of the discovery of place cells (neurons that encode specific places) by John O’Keefe’s group in the 1970s. This discovery earned O’Keefe, along with Edvard and May-Britt Moser, the Nobel Prize in Physiology or Medicine in 2014. Following the discovery of these cells, a great number of studies have used electrophysiological recordings, surgical lesions, drugs, and genetic manipulations to elucidate how rodents generate memories of their surroundings.9 Curiously, there is a close analogy between place cells and the concept neurons we described in the previous chapter. In particular, both kinds of neurons are located in the hippocampus, and their firing patterns have similar characteristics.10 Now, how does a neuron that responds to a specific place relate to one that responds to Jennifer Aniston? The answer is that, ultimately, a place is also a concept—it is as crucial for a rat to remember its surroundings as it is for us to recognize each other. It is possible that place cells and concept cells have the same type of memory-related function, and that the only difference between them comes down to the types of things that different species tend to remember. This is not to say that there are no spatial representations in humans (or concept representations—like that of a cat—in rats). In fact, spatial representations provide context for our memories—for example, we may remember exactly where we were when we engaged in an interesting conversation with somebody.
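For readers who like a concrete picture: place cells are commonly modeled with Gaussian tuning, a firing rate that peaks at the cell’s preferred location and falls off with distance. A minimal sketch, in which the field width and peak rate are arbitrary illustrative values rather than measured ones:

```python
import math

def place_cell_rate(position, field_center, field_width=0.2, peak_rate=20.0):
    """Firing rate (spikes/s) of a model place cell: maximal when the animal
    is at the field center, decaying with squared distance (Gaussian tuning)."""
    d2 = (position - field_center) ** 2
    return peak_rate * math.exp(-d2 / (2 * field_width ** 2))

# A cell whose place field is centered at 0.5 m along a 1 m linear track:
for x in [0.0, 0.5, 1.0]:
    print(f"{x:.1f} m -> {place_cell_rate(x, 0.5):.1f} spikes/s")
```

The cell fires vigorously near 0.5 m and is nearly silent at the ends of the track; a concept cell can be pictured the same way, with “distance to Jennifer Aniston” in an abstract space of concepts replacing distance along the track.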

  It is thus clear that memory capacity is not exclusively human. We’ve also discussed how identity is linked to memory. But are animals aware of themselves based on their recollection of past experiences? And, again, how might we test whether this is so? After all, we do not have a common language that would allow us to ask them questions, making a Turing-like test impossible. As it happens, however, a very simple experiment devised in 1970 by American psychologist Gordon Gallup, Jr. provides irrefutable evidence of animal self-awareness. Observing the behavior of chimpanzees in front of a mirror, Gallup noticed that, after gaining familiarity with the reflective surface, the animals showed signs of recognizing themselves: they grimaced, checked out parts of their bodies that they could not see directly (for example, picking bits of food from between their teeth), etc. Based on these observations, Gallup designed the following test: Once a chimpanzee was familiar with its reflection, he proceeded to put the animal briefly to sleep (so it would not know what he was doing) before coloring parts of its eyebrows and ears with red dye. After waking up, the chimpanzees behaved normally, unaware that anything had changed, but when they were brought in front of the mirror again, they would repeatedly touch the colored parts. This simple procedure is now known as the mirror test, a test only a few animals pass, among them the higher primates (chimpanzees, gorillas, and orangutans), dolphins, and elephants.11 The test was also administered to babies (by coloring areas of their faces with rouge), showing that humans begin to recognize themselves between eighteen months and two years of age.

  I remember once seeing my dog bark at himself in front of the mirror, perhaps mistaking his reflection for another dog. In fact, there are many other animals that cannot identify themselves in a mirror: chicks peep constantly if alone but calm down if they are surrounded by other chicks . . . or in front of a mirror. Hens eat more if they are with other hens or in front of a mirror; pigeons lay fewer eggs if they are isolated than if they are with other pigeons or in front of a mirror; some birds peck aggressively at their reflections in windows. In general, while passing the mirror test proves beyond doubt that an animal recognizes itself, failing it does not disprove self-awareness. An animal might not react to a mark on its reflection for any number of reasons. It may not have a keen sense of sight, or it may notice the mark but have (or display) no interest in it. It is undeniable that the higher primates have self-awareness, and it is likely that dogs, cats, and various other animals have it too—despite not passing the mirror test. After all, they do have memory, which, as in the case of the higher primates (like us), may give rise to their feeling of being; and anybody who has had a dog or a cat has no doubt that they have personalities and are aware of their own existence. But what about fish, or insects? Perhaps, instead of defining consciousness as something that is either there or not, and attempting to distinguish between conscious and nonconscious animals, we should accept that consciousness may appear in different degrees and in different forms throughout the animal kingdom: whereas we humans ask ourselves questions about our being, our origin, and whether there is a hereafter, less-developed animals have the narrower scope of discovering the best way to relate to their peers and their environment, and the most primitive beings are confined to the instinctive struggle for survival.

  The degree of consciousness and the richness of memory in an animal species thus depend upon what that species has evolved to be. Despite the lack of a fundamental difference between our brains and those of the higher primates, the truth is that there is a gigantic evolutionary leap between them and us. Chimpanzees have developed strategies to hunt in groups, share food, and even make and use tools, but they do not ask themselves about their brain capacity, whether Earth is the center of the universe, or about the validity of the law of gravity or Pythagoras’s theorem. What causes this tremendous difference between humans and all other animals? What is the secret of our astounding and unique capacity for thought?

  There is one obvious faculty that is uniquely human: our use of language. Other animals communicate, and may even have their own systems of signs, but human language is unique in its complexity and in the ability it gives us to refer to the past or to hypothetical futures. Our language enables us to communicate and interact far more profoundly than any other species can; it allows us to share our memories and pass on our knowledge. A mother chimp can teach her young what to do and what to avoid when a given situation arises, but she cannot tell them about her past experiences, her successes and failures; a young chimp will learn to behave in a particular way in order to survive, but will likely not understand why.

  There is another, especially relevant consequence of our use of language. In previous chapters we saw the importance of abstraction. Words are no more and no less than abstractions of reality. When I say “dog,” I do not refer to my childhood pet or my neighbor’s; it doesn’t matter if the dog is shaggy, big, small, ornery, a good hunter, or if it is white with dark spots on its back. When I say “dog,” I brush aside all those details and refer to whatever it is that defines the animal. I am far from being the first person to make this argument. British philosopher John Stuart Mill wrote this in the mid-nineteenth century:

  Even if there were a name for every individual object, we should require general names as much as we now do. Without them we could not express the result of a single comparison, nor record any one of the uniformities existing in nature … It is only by means of general names that we can convey any information, predicate any attribute, even of an individual, much more of a class.

  —JOHN STUART MILL. A SYSTEM OF LOGIC, RATIOCINATIVE AND INDUCTIVE. LONDON: LONGMAN, [1868] 1970, 436.

  In a remarkable (though not very well-known) passage, the great Jorge Luis Borges has this to say:

  The world of appearance is a jumble of shuffled sensations . . . Language is an effective ordering of the world’s enigmatic abundance. In other words, we attribute nouns to reality. We touch a round shape, we see a little lump of light the color of dawn, a tingling elates our mouth, and we lie to ourselves and say that these three disparate things are but one and that it is called an orange. The moon itself is a fiction. Apart from astronomical facts, upon which we will not dwell here, there is no resemblance whatsoever between the yellow circle now clearly rising above the Recoleta and the thin pink sliver that I saw above the Plaza de Mayo a few nights ago. Every noun is an abbreviation. Instead of enumerating cold, sharp, hurtful, unbreakable, shiny, pointy, we say dagger; instead of the sun receding and the shadows approaching, we say dusk.

  —JORGE LUIS BORGES, “BLATHER FOR VERSES,” FROM THE SIZE OF MY HOPE, 1926.

  Language helps us form concepts and solidify the abstractions represented by each noun, adjective, or verb that we use, not only to communicate with others but also to sort out our own thoughts. Language allows us to order our experience and reflect on it, to give form and meaning to what we feel and perceive, and to explain ourselves to ourselves. Imagine trying to immerse yourself in your deepest thoughts without using words; imagine trying to understand how the brain encodes memories without resorting to words like neuron, memory, or brain, using instead only the specific images conjured by your thinking. Russian psychologist Alexander Luria (whom we introduced in Chapter 5) argued that the use of words underlies the shift, during maturation, from concrete thinking, based on graphic images, to logical thinking in terms of concepts. His mentor, Lev Vygotsky, saw words as functional tools that support concept formation—the transition from concrete to abstract thought.12 Similarly, philosopher Daniel Dennett argues that words are labels we attach to experienced circumstances, which then become the objects of our brain’s machinery—prototypes of concepts that we can manipulate in our thoughts.13

  We have already argued that memory, like thought in general, is based upon forming associations, and it is precisely language that establishes relations between concepts, for example when I say this is a guard dog, two is greater than one, or I went out for dinner with my brother. In the previous chapter we saw that Jennifer Aniston neurons (or concept neurons) play a crucial role in the encoding of these concepts. We also saw that repetition helps reinforce memories, and that the ability to write, articulate, or simply think in terms of words provides critical support for the consolidation of concepts and the relations between them. The degree of abstraction afforded by the use of language may well be what lets us discard countless details that we can then fill in by inference.14 This is indeed the quintessence of our intelligence and creativity, what allows us to base our thought upon ideas and concepts much more advanced than those accessible to other animals.

 
