The Forgetting Machine

by Rodrigo Quian Quiroga


  In the late nineteenth century, John Langdon Down, a British psychiatrist known for the syndrome that now carries his name, reported on several cases similar to those of Funes and Shereshevskii. For example, he described the case of a boy who learned by heart Gibbon’s Decline and Fall of the Roman Empire without understanding the content of what he recited. Down called this “verbal adhesion”: memory without comprehension. Perhaps the most famous of these so-called savants (after the French word for “sage”) is Kim Peek, who became a celebrity when his life story was the basis for the movie Rain Man. Kim Peek’s memory, like Shereshevskii’s, was apparently unlimited and was tested time and time again in exhibitions. He knew the zip codes and area codes of thousands of US towns, as well as the names of their local television stations and nearby highways; he had an unlimited capacity to recall historical facts from the last two millennia; he could name all the British monarchs in the correct order and tell the date of any baseball game; he could answer any question on American and world history, the lives of world leaders, geography, movies, actors and actresses, music—he could identify every piece of music he had ever heard and tell its date of composition, name its composer, and give the composer’s dates of birth and death—sports, literature, stories from the Bible, and so on.18 However, like Shereshevskii, Kim Peek had a very limited capacity for reasoning. It was estimated that he had memorized the content of several thousand books, but he did not read fiction, or indeed any book that required imagination or faculties beyond raw memory. Instead, he read only books that described facts without ambiguity or room for interpretation.

  We’ve seen that the brain selects and processes relatively little of the information available to it, and does so in a redundant way aimed not at scrupulous reproduction but at the extraction of meaning. In this sense, the mind of a savant more closely mirrors the behavior of a computer than that of an ordinary brain. Like a computer, the brain of the savant does not filter information but simply records every detail literally, without constructing meaning and thus, ultimately, without being able to understand.

  Chapter 6

  COULD WE BECOME MORE INTELLIGENT?

  In which we discuss how much of our brains we use; the value (if any) of training our memory; the impact of digital gadgets, the internet, and the information bombardment to which we are nowadays exposed; the differences between memorization and comprehension; as well as creativity and the (misguided) use of memory in the educational system

  The title of this chapter would sound daring even for a self-help book. However, the purpose of this book is not to give advice on how to use the brain, but rather to describe some aspects of its workings—in particular the way memory functions. Why, then, choose such a bold title for this chapter? Because I believe it is worthwhile to analyze—better yet, debunk—some of the myths that abound in self-help literature, myths often used to support techniques for “training the brain” that, in my personal opinion, fall very short of what they claim to achieve. As I write this, I am aware that I may seem to be contradicting myself. After all, I just spent nearly a full chapter extolling the wonders of the method of loci, an artificial technique to aid memory. I believe, however, that the historical and scientific analysis of that method clarifies some fundamental principles of the way memory works and, moreover, illustrates the importance of memory from antiquity to the present day. In antiquity, this interest centered on oratory and the ability to enshrine information in a world where opportunities for documentation were scarce. Today, we ask ourselves about the proper role of memory in education, about the consequences of outsourcing our memories to sundry gadgets, and, above all, about how the internet is affecting our brains.

  We often hear that we use just 10 percent of our brain. A natural response to this is to wonder whether we could become smarter by learning to use a greater fraction. This is the premise of Lucy, the film by Luc Besson in which Scarlett Johansson’s character learns to use an ever-larger fraction of her mental capacity until her brainpower is such that she develops telepathic abilities. Lucy and its thoroughly unscientific premise aside, I would like to reframe the question raised in this chapter’s title into one more pragmatic and specific: Can we train our memory to make use of more neurons? And will increasing the number of neurons we use make us more intelligent?

  Let us take this step by step. First of all, it is not true that we use only a tiny fraction of our brain. We use all of it, though not all the time. In other words, while only a fraction of our neurons are active at any given instant,1 nearly all of our neurons are active at some point, when their assigned functionality is required. If we used our whole brain, firing up all neurons simultaneously, not only would we need to gulp tablespoon after tablespoon of sugar to provide the glucose necessary for such a high level of neuronal activity,2 but the specific functions of the different neurons would become jumbled. Thus, activating all of our neurons at the same time would do nothing to improve our intelligence. In fact, many epileptic seizures are characterized by generalized neuronal activation. There is still much left to learn about epilepsy,3 but some of the basic mechanisms are well understood. In particular, epileptic seizures tend to begin with the development of pathologic activity in a specific area of the brain, known as the epileptic focus. The abnormal activity of these neurons spills over into neighboring areas and, eventually, to the rest (or at least a significant portion) of the brain. When the seizure takes over, neurons fire frenziedly, and EEG recordings show sharp increases in amplitude that resemble seismographic readings during earthquakes. At that point, far more than 10 percent of the brain’s neurons are active, but instead of acquiring Lucy’s supernatural powers, the brain’s owner loses consciousness, in most cases remembering nothing afterward.

  Having established that using more of the brain at once is not the path to mental superiority, we can ask ourselves if it is nevertheless worth the trouble to try to remember more. On the one hand, as we studied the cases of Shereshevskii, Funes, and the savants in the previous chapter, we saw that remembering too much can lead to significant mental handicap. On the other hand, those of us without Shereshevskii’s synesthesia, Funes’s head injury, or the unusual minds of savants may be able to stop short of their surfeit of remembering, and train our memories to our benefit. How many times have we been frustrated by the inability to recall a certain word? How often do we go to the kitchen to retrieve something and, when we get there, find ourselves unable to remember what it was we needed?

  Unlike savants or Shereshevskii, “memory champions” are normal people who dedicate many hours a day to exercising their memory. Dominic O’Brien—an eight-time world memory champion4 who in 2002 managed to remember the order of cards in fifty-four shuffled decks—says that his mnemonic training allows him, among other things, to recall the names of 100 new people at a party, remember appointments without a calendar, or recall the content of a speech without notes.5 Now, considering the many hours that it took him to acquire these skills, are they worth it? I don’t mean to minimize the accomplishment of memorizing fifty-four shuffled decks, or the achievement of Akira Haraguchi, a Japanese engineer and therapist who managed to memorize 100,000 digits of π (this is not a typo: one hundred thousand digits of π). Neither do I care to judge the choice of memory champions to devote such effort to achieving these feats; after all, people are free to do whatever they want with their time, and a professional mnemonist could argue that remembering shuffled decks of cards is no less absurd than watching twenty-two men run behind a soccer ball. There is nothing wrong with dedicating hours to practicing the method of loci and reveling in the ability it gives us to remember; it may even be useful as a tool for concentration. However, I would like to comment on the usefulness of these techniques when applied to everyday life—above all, to highlight the fact that not only do they not make us more intelligent, they do not even enhance our memory in general. While it is beneficial to keep our brains active (just as it is to eat healthy food or keep physically fit), training in a specific memory-enhancement method is no better for the brain than reading a book, learning a language, or playing chess. Note that I have said “training in a specific method”: despite what most people think,6 mnemonic exercises improve performance only at these exercises—in other words, they enable whatever specific memory you have trained your brain to retrieve, but such improvements do not transfer to other tasks or to our overall memory abilities.

  The benefits claimed above by Dominic O’Brien, a living legend among mnemonists, lead me to make two specific remarks. First, to be able to remember the names of 100 people at a party, you must make the effort to do so. In other words, while other guests are enjoying the party and talking to others, the mnemonist must spend time focusing on remembering names. And this is the problem: in ordinary daily life, it is neither easy nor interesting to apply these techniques.7 No matter how many hours we spend training in mnemonics, we will still forget what it was that we wanted from the kitchen and will keep fumbling for that pesky word.8 The only way to avoid such situations would be to make an explicit and continual effort to remember everything—a frustrating, if not downright impossible, enterprise. Second, it is not clear that there is any advantage to not using, say, a calendar or a grocery list. If a physician has an office assistant, he does not need to remember his appointments; he may delegate that task to his assistant and focus that effort instead on caring for his patients. In the same way, if I can delegate the management of information to modern-day gadgets, why not do it? What sense does it make to commit to memory the dates and times of a bunch of meetings if I can just enter them on my calendar or into my computer?

  The problem is that no matter how good my memory might be, the business of remembering names, appointments, or telephone numbers still requires effort, a use of resources that might be better spent on other tasks. And if I have all my upcoming meetings fluttering about in my head, I will be less able to concentrate on more important things. Consider an example:

  I want to finish this chapter by discussing the internet and the educational system . . . speaking of which, tomorrow I have an appointment with the university’s vice-chancellor to discuss funding for my research center . . . oh, and the day after tomorrow I’m supposed to meet a colleague who wants to discuss something else (what was it?), and then on Friday I’m meeting a new student . . . in this chapter it is essential to analyze the use of memory in school and especially the use of the internet . . . I’m also supposed to see someone else tomorrow after I meet with the vice-chancellor . . . who is it? Am I confusing that meeting with the ones I have next week?

  It may sound exaggerated, but this is in fact how our brain functions when we multitask, thinking about and remembering many different things at the same time. Of course we can—and do—remember appointments, write books, and perform many other tasks in parallel. It is true as well that, after training, the effort required to remember appointments and such may become smaller. However, regardless of how small it becomes, this effort will still take up resources we could otherwise put to use elsewhere. Moreover, training may enable us to memorize appointments more easily, but unless we review what we’ve memorized regularly (and nothing is more exasperating than continually checking the clock to ensure we aren’t late for a meeting), we will sooner or later forget it. It is a good thing to exercise the brain,9 but there are other, perhaps more useful ways to do so than by training it to remember numbers, dates, names, or lists of words.

  The discussion about how we might improve our memory leads me to one of the most important tools of our age, and a subject of much controversy: the internet. You’ve likely asked yourself more than once how this new technology is affecting our brains and, in particular, our memories. This, surprisingly, is a question that Plato already considered long ago—not about the internet, obviously, but about writing. In Phaedrus, Plato recounts a dialogue between Socrates and Phaedrus by the banks of the river Ilissus, telling the story of how King Thamus of Egypt rejected the gift of letters presented to him by the god Theuth:

  But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

  —PLATO, PHAEDRUS

  (TRANSLATED BY BENJAMIN JOWETT)

  Plato was clearly concerned about how writing might eventually affect memory. We need only replace “letters” with “the internet” in the previous quotation to see where Plato would stand on a topic much debated in the twenty-first century. But while we may not have Plato’s brilliant mind, we do have an advantage he did not: nearly two and a half millennia of additional experience showing the value of writing and, consequently, counseling against too-hasty judgment of the internet.

  We might also compare the emergence of the internet to the revolution spawned by Gutenberg’s development of the printing press in the fifteenth century. The internet and the gadgets that have sprung up around it put a seemingly limitless source of information at one’s fingertips. The printing press permitted the dissemination of books that before that time had been confined to a handful of libraries. Before Gutenberg, “ordering” a book implied the long and expensive process of having it copied by hand. After Gutenberg, books began to appear in personal collections. Today nobody worries that the convenience of having an extensive home library might make us less intelligent.

  Why, then, would I bother to remember names, facts, and dates I can find almost immediately on the internet? Using one’s memory would seem to be as obsolete as using a slide rule when there is a calculator at hand. However, there is a crucial distinction: the internet does not replace our memory; it complements it. The calculator rendered the slide rule completely obsolete and made it pointless to teach its use. If we have a scientific calculator, we gain nothing by learning to use a slide rule. But memory is quite different. A Google search may be much more comprehensive, accurate, and sometimes even faster than going through our memory storage. However, the internet does not process the content it delivers to us as we do; the understanding must still be provided by the user.

  In the previous chapter, as I discussed Peter of Ravenna, I was not much concerned with exactly when his Phoenix was published, but I did remark that it appeared the year before Columbus’s discovery of the New World, since this fact places everything in context. A computer does not perform this kind of reasoning, which is based on extracting meaning from information and establishing connections with other information based on that meaning. When I first learn the date of a fact, I need to process it to place the fact in context; after that, I may have no need to remember the date at all. My interest lies in remembering the context and the connections that I developed—after all, the precise dates are only a mouse click away. Unlike the process of using a slide rule, this process of placing information in context and establishing associations is vital: it is the key to thought.

  In an earlier book,10 I discussed the information bombardment to which we are subjected by way of text messages, email, Twitter, WhatsApp, Facebook, etc. In fact, it is estimated that we are exposed daily to information equivalent to that contained in 174 newspapers—five times as much as in the 1980s.11 We are constantly connected to this information as we carry our cell phones everywhere we go; we have even developed a newfangled cyberaddiction that compels us to check each new message as soon as we receive it. How long can we wait without looking at the latest email, even though we’re almost sure it’s not important? How long can we be without our cell phone, or bear the knowledge that its battery is dead before searching for an outlet to charge it?

  Herein lies the danger of the internet: it is endless. With more information available than we can possibly consume, it is tempting to go from page to page, spending just a few seconds on each one and not taking the time necessary to process what we find there. We replace comprehension with superficial reading. The internet and our twenty-first-century gadgets are powerful tools, but we must be careful to maintain control over them and resist the impulse to succumb to the frenzied rhythm that they impose.12 Let us borrow an analogy from the world of visual media. A music video might move from shot to shot at a hectic pace, constantly changing angles, because, after all, it typically has little to say and is more about creating visual impressions to accompany a song. On the other hand, a film by Andrei Tarkovsky has a slow cadence, giving the spectator enough time to absorb a deeper message. It engages the imagination in a more sustained way, and leaves us thinking.

  Though we have not really defined what intelligence is (a far from trivial task), by now it is clear that it is very different from memory capacity. Still, whether or not we mean to, we tend to associate memory with intelligence.13 A person who remembers historical events, philosophical arguments, and works of literature will generally be considered intelligent. This is, however, an erroneous notion, perhaps stemming from the fact that intelligent people tend to be intellectually curious and thus more likely to study (and remember) such things.

 
