How We Learn


by Benedict Carey


  To the extent that it’s possible to locate a memory in the brain, that’s where it resides: in neighborhoods along the neocortex primarily, not at any single address.

  That the brain can find this thing and bring it to life so fast—instantaneously, for most of us, complete with emotion, and layers of detail—defies easy explanation. No one knows how that happens. And it’s this instant access that creates what to me is the brain’s grandest illusion: that memories are “filed away” like video scenes that can be opened with a neural click, and snapped closed again.

  The truth is stranger—and far more useful.

  • • •

  The risk of peering too closely inside the brain is that you can lose track of what’s on the outside—i.e., the person. Not some generic human, either, but a real one. Someone who drinks milk straight from the carton, forgets friends’ birthdays, and who can’t find the house keys, never mind calculate the surface area of a pyramid.

  Let’s take a moment to review. The close-up of the brain has provided a glimpse of what cells do to form a memory. They fire together during an experience. Then they stabilize as a network through the hippocampus. Finally, they consolidate along the neocortex in a shifting array that preserves the basic plot points. Nonetheless, to grasp what people do to retrieve a memory—to remember—requires stepping back for a wide shot. We’ve zoomed in, à la Google Maps, to see cells at street level; it’s time to zoom out and have a look at the larger organism: at people whose perceptions reveal the secrets of memory retrieval.

  The people in question are, again, epilepsy patients (to whom brain science owes debts without end).

  In some epilepsy cases, the flares of brain activity spread like a chemical fire, sweeping across wide stretches of the brain and causing the kind of full-body, blackout seizures that struck H.M. as a young man. Those seizures are so hard to live with, and often so resistant to drug treatment, that people consider brain surgery. No one has the same procedure H.M. underwent, of course, but there are other options. One of those is called split brain surgery. The surgeon severs the connections between the left and right hemispheres of the brain, so the storms of activity are confined to one side.

  This quiets the seizures, all right. But at what cost? With the brain’s left and right halves unable to “talk” to each other at all, split brain surgery surely must cause serious damage, drastically altering someone’s personality, or at least their perceptions. Yet it doesn’t. The changes are so subtle, in fact, that the first studies of these so-called split brain patients in the 1950s found no differences in thinking or perception at all. No slip in IQ; no deficits in analytical thinking.

  The changes had to be there—the brain was effectively cut in half—but it would take some very clever experiments to reveal them.

  In the early 1960s, a trio of scientists at the California Institute of Technology finally did so, by devising a way to flash pictures to one hemisphere at a time. Bingo. When split brain patients saw a picture of a fork with only their right hemisphere, they couldn’t say what it was. They couldn’t name it. Due to the severed connection, their left hemisphere, where language is centered, received no information from the right side. And the right hemisphere—which “saw” the fork—had no language to name it.

  And here was the kicker: The right hemisphere could direct the hand it controls to draw the fork.

  The Caltech trio didn’t stop there. In a series of experiments with these patients, the group showed that the right hemisphere could also identify objects by touch, correctly selecting a mug or a pair of scissors by feel after seeing the image of one.

  The implications were clear. The left hemisphere was the intellectual, the wordsmith, and it could be severed from the right without any significant loss of IQ. The right side was the artist, the visual-spatial expert. The two worked together, like copilots.

  This work percolated into the common language and fast, as shorthand for types of skills and types of people: “He’s a right brain guy, she’s more left brain.” It felt right, too: Our aesthetic sensibility, open and sensual, must come from a different place than cool logic.

  What does any of this have to do with memory?

  It took another quarter century to find out. And it wouldn’t happen until scientists posed a more fundamental question: Why don’t we feel two-brained, if we have these two copilots?

  “That was the question, ultimately,” said Michael Gazzaniga, who coauthored the Caltech studies with Roger Sperry and Joseph Bogen in the 1960s. “Why, if we have these separate systems, is it that the brain has a sense of unity?”

  That question hung over the field, unanswered, for decades. The deeper that scientists probed, the more confounding the mystery seemed to be. The left brain/right brain differences revealed a clear, and fascinating, division of labor. Yet scientists kept finding other, more intricate, divisions. The brain has thousands, perhaps millions, of specialized modules, each performing a specialized task—one calculates a change in light, for instance, another parses a voice tone, a third detects changes in facial expression. The more experiments that scientists did, the more specialization they found, and all of these mini-programs run at the same time, often across both hemispheres. That is, the brain sustains a sense of unity not only in the presence of its left and right copilots. It does so amid a cacophony of competing voices coming from all quarters, the neural equivalent of open outcry at the Chicago Board of Trade.

  How?

  The split brain surgery would again provide an answer.

  In the early 1980s, Dr. Gazzaniga performed more of his signature experiments with split brain patients—this time with an added twist. In one, for example, he flashed a patient two pictures: The man’s left hemisphere saw a chicken foot, and his right saw a snow scene. (Remember, the left is where language skills are centered, and the right is holistic, sensual; it has no words for what it sees.) Dr. Gazzaniga then had the man choose related images for each picture from an array visible to both hemispheres, say, a fork, a shovel, a chicken, and a toothbrush. The man chose a chicken to go with the foot, and a shovel to go with the snow. So far, so good.

  Then Dr. Gazzaniga asked him why he chose those items—and got a surprise. The man had a ready answer for one choice: The chicken goes with the foot. His left hemisphere had seen the foot. It had words to describe it and a good rationale for connecting it to the chicken.

  Yet his left brain had not seen the picture of the snow, only the shovel. He had chosen the shovel on instinct but had no conscious explanation for doing so. Now, asked to explain the connection, he searched his left brain for the symbolic representation of the snow and found nothing. Looking down at the picture of the shovel, the man said, “And you need a shovel to clean out the chicken shed.”

  The left hemisphere was just throwing out an explanation based on what it could see: the shovel. “It was just making up any old BS,” Gazzaniga told me, laughing at the memory of the experiment. “Making up a story.”

  In subsequent studies he and others showed that the pattern was consistent. The left hemisphere takes whatever information it gets and tells a tale to conscious awareness. It does this continually in daily life, and we’ve all caught it in the act—overhearing our name being whispered, for example, and filling in the blanks with assumptions about what people are gossiping about.

  The brain’s cacophony of voices feels coherent because some module or network is providing a running narration. “It only took me twenty-five years to ask the right question to figure it out,” Gazzaniga said, “which was why? Why did you pick the shovel?”

  All we know about this module is that it resides somewhere in the left hemisphere. No one has any idea how it works, or how it strings together so much information so fast. It does have a name. Gazzaniga decided to call our left brain narrating system “the interpreter.”

  This is our director, in the film crew metaphor. The one who makes sense of each scene, seeking patterns and inserting judgments based on the material; the one who fits loose facts into a larger whole to understand a subject. Not only makes sense but makes up a story, as Gazzaniga put it—creating meaning, narrative, cause and effect.

  It’s more than an interpreter. It’s a story maker.

  This module is vital to forming a memory in the first place. It’s busy answering the question “What just happened?” in the moment, and those judgments are encoded through the hippocampus. That’s only part of the job, however. It also answers the questions “What happened yesterday?” “What did I make for dinner last night?” And, for global religions class, “What were the four founding truths of Buddhism, again?”

  Here, too, it gathers the available evidence, only this time it gets the sensory or factual cues from inside the brain, not from outside. Think. To recall the Buddha’s truths, start with just one, or a fragment of one. Anguish. The Buddha talked about anguish. He said anguish was … to be understood. That’s right, that’s truth number one. The second truth had to do with meditation, with not acting, with letting go. Let go of anguish? That’s it; or close. Another truth brings to mind a nature trail, a monk padding along in robes—the path. Walking the path? Follow the path?

  So it goes. Each time we run the tape back, a new detail seems to emerge: The smell of smoke in the kitchen; the phone call from a telemarketer. The feeling of calmness when reading “let go of anguish”—no, it was let go of the sources of anguish. Not walk the path, but cultivate the path. These details seem “new” in part because the brain absorbs a lot more information in the moment than we’re consciously aware of, and those perceptions can surface during remembering. That is to say: The brain does not store facts, ideas, and experiences like a computer does, as a file that is clicked open, always displaying the identical image. It embeds them in networks of perceptions, facts, and thoughts, slightly different combinations of which bubble up each time. And that just-retrieved memory does not overwrite the previous one but intertwines and overlaps with it. Nothing is completely lost, but the memory trace is altered, for good.

  As scientists put it, using our memories changes our memories.

  After all the discussion of neurons and cell networks; after Lashley’s rats and H.M.; after the hippocampus, split brain patients, and the story maker, this seems elementary, even mundane.

  It’s not.

  * * *

  * Self-serving is right.

  Chapter Two

  The Power of Forgetting

  A New Theory of Learning

  Memory contests are misleading spectacles, especially in the final rounds.

  At that point, there are only a handful of people left onstage and their faces reflect all varieties of exhaustion, terror, and concentration. The stakes are high, they’ve come a long way already, and any mistake can end it all. In a particularly tough-to-watch scene from the documentary Spellbound, about the Scripps National Spelling Bee, one twelve-year-old trips over the word “opsimath.” He appears to be familiar with the word, he’s digging deep, there’s a moment when he seems to have it—but then he inserts an “o” where it doesn’t belong.

  Clang!

  A bell rings—meaning: wrong answer—and the boy’s eyes bulge in stunned disbelief. A gasp sweeps through the crowd, followed by clapping, consolation applause for effort. He slinks offstage, numb. Variations of this scene repeat, as other well-prepped contestants miss a word. They slump at the microphone, or blink without seeing, before being bathed in the same lukewarm applause. In contrast, those who move to the next round seem confident, locked in. The winner smiles when she hears her final word—“logorrhea”—and nails it.

  These competitions tend to leave us with two impressions. One is that the contestants, and especially the winners, must be extra-human. How on earth are they doing that? Their brains must be not only bigger and faster but also different from the standard-issue version (i.e., ours). Maybe they even have “photographic” memories.

  Not so. Yes, it’s true that some people are born with genetic advantages, in memory capacity and processing speed (though no one has yet identified an “intelligence gene” or knows with any certainty how one would function). It’s true, too, that these kinds of contests tend to draw from the higher end of the spectrum, from people who take a nerdy interest in stockpiling facts. Still, a brain is a brain is a brain, and the healthy ones all work the same way. With enough preparation and devotion, each is capable of seemingly wizardlike feats of memory. And photographic memories, as far as scientists can tell, don’t exist, at least not in the way that we imagine.

  The other impression is more insidious, because it reinforces a common, self-defeating assumption: To forget is to fail. This appears self-evident. The world is so full of absentmindedness, tuned-out teenagers, misplaced keys, and fear of creeping dementia that forgetting feels dysfunctional, or ominous. If learning is building up skills and knowledge, then forgetting is losing some of what was gained. It seems like the enemy of learning.

  It’s not. The truth is nearly the opposite.

  Of course it can be a disaster to space out on a daughter’s birthday, to forget which trail leads back to the cabin, or to draw a blank at test time. Yet there are large upsides to forgetting, too. One is that it is nature’s most sophisticated spam filter. It’s what allows the brain to focus, enabling sought-after facts to pop to mind.

  One way to dramatize this would be to parade all those spelling prodigies back onstage again for another kind of competition, a fast-paced tournament of the obvious. Quick: Name the last book you read. The last movie you saw. The local drugstore. The secretary of state. The World Series champions. And then faster still: your Gmail password, your sister’s middle name, the vice president of the United States.

  In this hypothetical contest, each of those highly concentrated minds would be drawing a lot of blanks. Why? Not due to mere absentmindedness or preoccupation. No, these kids are alert and highly focused. So focused, in fact, that they’re blocking out trivial information.

  Think about it: To hold so many obscure words in mind and keep the spellings straight, the brain must apply a filter. To say it another way, the brain must suppress—forget—competing information, so that “apathetic” doesn’t leak into “apothecary,” or “penumbra” into “penultimate,” and keep any distracting trivia from bubbling to the surface, whether song lyrics, book titles, or names of movie actors.

  We engage in this kind of focused forgetting all the time, without giving it much thought. To lock in a new computer password, for example, we must block the old one from coming to mind; to absorb a new language, we must hold off the corresponding words in our native tongue. When thoroughly immersed in a topic or novel or computation, it’s natural to blank on even common nouns—“could you pass me the whatyoucallit, the thing you eat with?”

  Fork.

  As the nineteenth-century American psychologist William James observed, “If we remembered everything, we should on most occasions be as ill off as if we remembered nothing.”

  The study of forgetting has, in the past few decades, forced a fundamental reconsideration of how learning works. In a way, it has also altered what the words “remember” and “forget” mean. “The relationship between learning and forgetting is not so simple and in certain important respects is quite the opposite of what people assume,” Robert Bjork, a psychologist at the University of California, Los Angeles, told me. “We assume it’s all bad, a failure of the system. But more often, forgetting is a friend to learning.”

  The “losers” in memory competitions, this research suggests, stumble not because they remember too little. They have studied tens, perhaps hundreds of thousands of words, and often they are familiar with the word they ultimately misspell. In many cases, they stumble because they remember too much. If recollecting is just that—a re-collection of perceptions, facts, and ideas scattered in intertwining neural networks in the dark storm of the brain—then forgetting acts to block the background noise, the static, so that the right signals stand out. The sharpness of the one depends on the strength of the other.

  Another large upside of forgetting has nothing to do with its active filtering property. Normal forgetting—that passive decay we so often bemoan—is also helpful for subsequent learning. I think of this as the muscle-building property of forgetting: Some “breakdown” must occur for us to strengthen learning when we revisit the material. Without a little forgetting, you get no benefit from further study. It is what allows learning to build, like an exercised muscle.

  This system is far from perfect. We have instantaneous and flawless recall of many isolated facts, it’s true: Seoul is the capital of South Korea, 3 is the square root of 9, and J. K. Rowling is the author of the Harry Potter books. Yet no complex memory comes back exactly the same way twice, in part because the forgetting filter blocks some relevant details along with many irrelevant ones. Features that previously were blocked or forgotten often reemerge. This drift in memory is perhaps most obvious when it comes to the sort of childhood tales we all tell and embellish. The time we borrowed the family car at age fourteen; the time we got lost on the metro the first time we visited the city. After rolling out those yarns enough times, it can be tough to tell what’s true and what’s not.

  The point is not that memory is nothing more than a pile of loose facts and a catalog of tall tales. It’s that retrieving any memory alters its accessibility, and often its content.

 
