
How We Learn


by Benedict Carey


  To flesh out this dimension and appreciate its importance, let’s compare baseball stars to an equally exotic group of competitors, known more for their intellectual prowess than their ability to hit line drives: chess players. On a good day, a chess grand master can defeat the world’s most advanced supercomputer, and this is no small thing. Every second, the computer can consider more than 200 million possible moves, and draw on a vast array of strategies developed by leading scientists and players. By contrast, a human player—even a grand master—considers about four move sequences per turn in any depth, playing out the likely series of parries and countermoves to follow. That’s four per turn, not per second. Depending on the amount of time allotted for each turn, the computer might search one billion more possibilities than its human opponent. And still, the grand master often wins. How?

  The answer is not obvious. In a series of studies in the 1960s, Adriaan de Groot, a Dutch psychologist who was himself a chess master, compared masters to novices and found no differences in the number of moves considered; in the depth of each search, the series of countermoves played out mentally; or in the way players thought about the pieces (for instance, seeing the rook primarily as an attacking piece in some positions, and as a defensive one in others). If anything, the masters searched fewer moves than the novices. But they could do one thing the novices could not: memorize a chess position after seeing the board for less than five seconds. One look, and they could reconstruct the arrangement of the pieces precisely, as if they’d taken a mental snapshot.

  In a follow-up study, a pair of researchers at Carnegie Mellon University—William G. Chase and Herbert A. Simon—showed that this skill had nothing to do with the capacity of the masters’ memory. Their short-term recall of things like numbers was no better than anyone else’s. Yet they saw the chessboard in more meaningful chunks than the novices did.* “The superior performance of stronger players derives from the ability of those players to encode the position into larger perceptual chunks, each consisting of a familiar configuration of pieces,” Chase and Simon concluded.

  Grand masters have a good eye, too, just like baseball players, and they’re no more able to describe it. (If they could, it would quickly be programmed into the computer, and machines would rule the game.) It’s clear, though, that both ballplayers and grand masters are doing more than merely seeing or doing some rough analysis. Their eyes, and the visual systems in their brains, are extracting the most meaningful set of clues from a vast visual tapestry, and doing so instantaneously. I think of this ability in terms of infrared photography: You see hot spots of information, live information, and everything else is dark. All experts—in arts, sciences, IT, mechanics, baseball, chess, what have you—eventually develop this kind of infrared lens to some extent. Like chess and baseball prodigies, they do it through career-long experience, making mistakes, building intuition. The rest of us, however, don’t have a lifetime to invest in Chemistry 101 or music class. We’ll take the good eye—but need to do it on the cheap, quick and dirty.

  • • •

  When I was a kid, everyone’s notebooks and textbooks, every margin of every sheet of lined paper in sight, were covered with doodles: graffiti letters, caricatures, signatures, band logos, mazes, 3-D cubes. Everyone doodled, sometimes all class long, and the most common doodle of all was the squiggle:

  Those squiggles have a snowflake quality; they all look the same and yet each has its own identity when you think about it. Not that many people have. The common squiggle is less interesting than any nonsense syllable, which at least contains meaningful letters. It’s virtually invisible, and in the late 1940s one young researcher recognized that quality as special. In some moment of playful or deep thinking, she decided that the humble squiggle was just the right tool to test a big idea.

  Eleanor Gibson came of age as a researcher in the middle of the twentieth century, during what some call the stimulus-response, or S-R, era of psychology. Psychologists at the time were under the influence of behaviorism, which viewed learning as a pairing of a stimulus and response: the ringing of a bell before mealtime and salivation, in Ivan Pavlov’s famous experiment. Their theories were rooted in work with animals, and included so-called operant conditioning, which rewarded a correct behavior (navigating a maze) with a treat (a piece of cheese) and discouraged mistakes with mild electrical shocks. This S-R conception of learning viewed the sights, sounds, and smells streaming through the senses as not particularly meaningful on their own. The brain provided that meaning by seeing connections. Most of us learn early in life, for instance, that making eye contact brings social approval, and screaming less so. We learn that when the family dog barks one way, it’s registering excitement; another way, it senses danger. In the S-R world, learning was a matter of making those associations—between senses and behaviors, causes and effects.

  Gibson was not a member of the S-R fraternity. After graduating from Smith College in 1931, she entered graduate studies at Yale University hoping to work under the legendary primatologist Robert Yerkes. Yerkes refused. “He wanted no women in his lab and made it extremely clear to me that I wasn’t wanted there,” Gibson said years later. She eventually found a place with Clark Hull, an influential behaviorist known for his work with rats in mazes, where she sharpened her grasp of experimental methods—and became convinced that there wasn’t much more left to learn about conditioned reflexes. Hull and his contemporaries had done some landmark experiments, but the S-R paradigm itself limited the types of questions a researcher could ask. If you were studying only stimuli and responses, that’s all you’d see. The field, Gibson believed, was completely overlooking something fundamental: discrimination. How the brain learns to detect minute differences in sights, sounds, or textures. Before linking different names to distinct people, for example, children have to be able to distinguish between the sounds of those names, between Ron and Don, Fluffy and Scruffy. That’s one of the first steps we take in making sense of the world. In hindsight, this seems an obvious point. Yet it took years for her to get anyone to listen.

  In 1948, her husband—himself a prominent psychologist at Smith—got an offer from Cornell University, and the couple moved to Ithaca, New York. Gibson soon got the opportunity to study learning in young children, and that’s when she saw that her gut feeling about discrimination learning was correct. In some of her early studies at Cornell, she found that children between the ages of three and seven could learn to distinguish standard letters—like a “D” or a “V”—from misshapen ones, like:

  These kids had no idea what the letters represented; they weren’t making associations between a stimulus and response. Still, they quickly developed a knack for detecting subtle differences in the figures they studied. And it was this work that led to the now classic doodle experiment, which Gibson conducted with her husband in 1949. The Gibsons called the doodles “nonsense scribbles,” and the purpose of the study was to test how quickly people could discriminate between similar ones. They brought thirty-two adults and children into their lab, one at a time, and showed each a single doodle on a flashcard:

  The study had the feel of a card trick. After displaying the “target” doodle for five seconds, the experimenters slipped it into a deck of thirty-four similar flashcards. “Some of the items in the pack are exact replicas, tell me which ones,” they said, and then began showing each card, one at a time, for three seconds. In fact, the deck contained four exact replicas, and thirty near-replicas:

  The skill the Gibsons were measuring is the same one we use to learn a new alphabet, at any age, whether Chinese characters, chemistry shorthand, or music notation. To read even a simple melody, you have to be able to distinguish an A from a B-flat on the clef. Mandarin is chicken scratch until you can discriminate between hundreds of similar figures. We’ve all made these distinctions expertly, most obviously when learning letters in our native tongue as young children. After that happens and we begin reading words and sentences—after we begin “chunking,” in the same way the chess masters do—we forget how hard it was to learn all those letters in the first place, never mind linking them to their corresponding sounds and blending them together into words and ideas.

  In their doodle experiment, the Gibsons gave the participants no feedback, no “you-got-its” or “try-agains.” They were interested purely in whether the eye was learning. And so it was. The adults in the experiment needed about three times through, on average, to score perfectly, identifying all four of the exact replicas without making a single error. The older children, between nine and eleven years old, needed five (to get close to perfect); the younger ones, between six and eight years old, needed seven. These people weren’t making S-R associations, the way psychologists assumed most learning happened. Nor were their brains—as the English philosopher John Locke famously argued in the seventeenth century—empty vessels, passively accumulating sensations. No, their brains came equipped with evolved modules to make important, subtle discriminations, and to put those differing symbols into categories.

  “Let us consider the possibility of rejecting Locke’s assumption altogether,” the Gibsons wrote. “Perhaps all knowledge comes through the senses in an even simpler way than John Locke was able to conceive—by way of variations, shadings, and subtleties of energy.”

  That is, the brain doesn’t solely learn to perceive by picking up on tiny differences in what it sees, hears, smells, or feels. In this experiment and a series of subsequent ones—with mice, cats, children, and adults—Gibson showed that it also perceives to learn. It takes the differences it has detected between similar-looking notes or letters or figures, and uses those to help decipher new, previously unseen material. Once you’ve got middle-C nailed on the treble clef, you use it as a benchmark for nearby notes; when you nail the A an octave higher, you use that to read its neighbors; and so on. This “discrimination learning” builds on itself, the brain hoarding the benchmarks and signatures it eventually uses to read larger and larger chunks of information.

  In 1969, Eleanor Gibson published Principles of Perceptual Learning and Development, a book that brought together all her work and established a new branch of psychology: perceptual learning. Perceptual learning, she wrote, “is not a passive absorption, but an active process, in the sense that exploring and searching for perception itself is active. We do not just see, we look; we do not just hear, we listen. Perceptual learning is self-regulated, in the sense that modification occurs without the necessity of external reinforcement. It is stimulus oriented, with the goal of extracting and reducing the information in stimulation. Discovery of distinctive features and structure in the world is fundamental in the achievement of this goal.”

  This quote is so packed with information that we need to stop and read closely to catch it all.

  Perceptual learning is active. Our eyes (or ears, or other senses) are searching for the right clues. Automatically, no external reinforcement or help required. We have to pay attention, of course, but we don’t need to turn it on or tune it in. It’s self-correcting—it tunes itself. The system works to find the most critical perceptual signatures and filter out the rest. Baseball players see only the flares of motion that are relevant to judging a pitch’s trajectory—nothing else. The masters in Chase and Simon’s chess study considered fewer moves than the novices, because they’d developed such a good eye that it instantly pared down their choices, making it easier to find the most effective parry. And these are just visual examples. Gibson’s conception of perceptual learning applied to all the senses: hearing, smell, taste, and feel, as well as vision.

  Only in the past decade or so have scientists begun to exploit Gibson’s findings—for the benefit of the rest of us.

  • • •

  The flying conditions above Martha’s Vineyard can change on a dime. Even when clouds are sparse, a haze often settles over the island that, after nightfall, can disorient an inexperienced pilot. That’s apparently what happened just after 9:40 P.M. on July 16, 1999, when John Kennedy Jr. crashed his Piper Saratoga into the ocean seven miles offshore, killing himself, his wife, and her sister. “There was no horizon and no light,” said another pilot who’d flown over the island that night. “I turned left toward the Vineyard to see if it was visible but could see no lights of any kind nor any evidence of the island. I thought the island might have suffered a power failure.” The official investigation into the crash found that Kennedy had fifty-five hours of experience flying at night, and that he didn’t have an instrument rating at all. In pilot’s language, that means he was still learning and not yet certified to fly in zero visibility, using only the plane’s instrument panel as a guide.

  The instruments on small aircraft traditionally include six main dials. One tracks altitude, another speed through the air. A third, the directional gyro, is like a compass; a fourth measures vertical speed (climb or descent). Two others depict a miniature airplane and show the plane’s banking and its rate of turn through space, respectively (newer models have five dials, with no separate banking dial).

  Learning to read any one of them is easy, even if you’ve never seen an instrument panel before. It’s harder, however, to read them all in one sweep and to make the right call on what they mean collectively. Are you descending? Are you level? This is tricky for amateur pilots to do on a clear day, never mind in zero visibility. Add in communicating with the tower via radio, reading aviation charts, checking fuel levels, preparing landing gear, and other vital tasks—it’s a multitasking adventure you don’t want to have, not without a lot of training.

  This point was not lost on Philip Kellman, a cognitive scientist at Bryn Mawr College, when he was learning to fly in the 1980s. As he moved through his training, studying for aviation tests—practicing on instrument simulators, logging air time with instructors—it struck him that flying was mostly about perception and action. Reflexes. Once in the air, his instructors could see patterns that he could not. “Coming in for landing, an instructor may say to the student, ‘You’re too high!’ ” Kellman, who’s now at UCLA, told me. “The instructor is actually seeing an angle between the aircraft and the intended landing point, which is formed by the flight path and the ground. The student can’t see this at all. In many perceptual situations like this one, the novice is essentially blind to patterns that the expert has come to see at a glance.”

  That glance took into account all of the instruments at once, as well as the view out the windshield. To hone that ability, it took hundreds of hours of flying time, and Kellman saw that the skill was not as straightforward as it seemed on the ground. Sometimes a dial would stick, or swing back and forth, creating a confusing picture. Were you level, as one dial indicated, or in a banking turn, like another suggested? Here’s how Kellman describes the experience of learning to read all this data at once with an instructor: “While flying in the clouds, the trainee in the left seat struggles as each gauge seems to have a mind of its own. One by one, he laboriously fixates on each one. After a few seconds on one gauge, he comprehends how it has strayed and corrects, perhaps with a jerk guaranteed to set up the next fluctuation. Yawning, the instructor in the right seat looks over at the panel and sees at a glance that the student has wandered off of the assigned altitude by two hundred feet but at least has not yet turned the plane upside down.”

  Kellman is an expert in visual perception. This was his territory. He began to wonder if there was a quicker way for students to at least get a feel for the instrument panel before trying to do everything at once at a thousand feet. If you developed a gut instinct for the panel, then the experience in the air might not be so stressful. You’d know what the instruments were saying and could concentrate on other things, like communicating with the tower. The training shortcut Kellman developed is what he calls a perceptual learning module, or PLM. It’s a computer program that gives instrument panel lessons—a videogame, basically, but with a specific purpose. The student sees a display of the six dials and has to decide quickly what those dials are saying collectively. There are seven choices: “Straight & Level,” “Straight Climb,” “Descending Turn,” “Level Turn,” “Climbing Turn,” “Straight Descent,” and the worrisome “Instrument Conflict,” when one dial is stuck.
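  To make the shape of such a drill concrete, here is a minimal sketch (in Python) of a quiz loop in the same spirit: present a panel, offer the seven choices, give immediate right-or-wrong feedback, and move on to the next trial. The panel descriptions, feedback wording, and session length are invented for illustration; this is not Kellman’s actual module.

```python
# Minimal sketch of a perceptual-learning-style drill, loosely modeled on the
# PLM described above. Panel states and wording are hypothetical placeholders.
import random

# The seven collective readings named in the text.
CHOICES = [
    "Straight & Level", "Straight Climb", "Descending Turn", "Level Turn",
    "Climbing Turn", "Straight Descent", "Instrument Conflict",
]

# Each trial pairs a (made-up) verbal description of the six dials with its correct reading.
TRIALS = [
    ("wings level, altimeter steady, vertical speed zero", "Straight & Level"),
    ("wings level, altimeter rising, vertical speed positive", "Straight Climb"),
    ("banked left, altimeter falling, vertical speed negative", "Descending Turn"),
    ("banked right, altimeter steady, vertical speed zero", "Level Turn"),
    ("attitude indicator says level, but the turn coordinator shows a turn", "Instrument Conflict"),
]

def run_session(n_presentations=24):
    """Show n_presentations panels; give immediate feedback, then move on."""
    correct = 0
    for i in range(n_presentations):
        panel, answer = random.choice(TRIALS)
        print(f"\nPanel {i + 1}: {panel}")
        for j, label in enumerate(CHOICES, start=1):
            print(f"  {j}. {label}")
        pick = CHOICES[int(input("Your read (1-7): ")) - 1]
        if pick == answer:
            correct += 1
            print("chime: correct")                  # reward signal
        else:
            print(f"burp: the answer was {answer}")  # show the right call and continue
    print(f"\nSession score: {correct}/{n_presentations}")

if __name__ == "__main__":
    run_session()
```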

  In a 1994 test run of the module, he and Mary K. Kaiser of the NASA Ames Research Center brought in ten beginners with zero training and four pilots with flying experience ranging from 500 to 2,500 hours. Each participant received a brief introduction to the instruments, and then the training began: nine sessions, twenty-four presentations on the same module, with short breaks in between. The participants saw, on the screen, an instrument panel, below which were the seven choices. If the participant chose the wrong answer—which novices tend to do at the beginning—the screen burped and provided the right one. The correct answer elicited a chime. Then the next screen popped up: another set of dials, with the same set of seven choices.

  After one hour, even the experienced pilots had improved, becoming faster and more accurate in their reading. The novices’ scores took off: After one hour, they could read the panels as well as pilots with an average of one thousand flying hours. They’d built the same reading skill, at least on the ground, in 1/1,000th of the time. Kellman and Kaiser performed a similar experiment with a module designed to improve visual navigation using aviation charts—and achieved similar results. “A striking outcome of both PLMs is that naïve subjects after training performed as accurately and reliably faster than pilots before training,” they wrote. “The large improvements attained after modest amounts of training in these aviation PLMs suggest that the approach has promise for accelerating the acquisition of skills in aviation and other training contexts.”

 
