The Tale of the Dueling Neurosurgeons


by Sam Kean


  But the patches that processed the center, he discovered, were vastly larger in area than the patches that covered the periphery. In fact, it wasn’t even close. Scientists now know that the focal center of the eye, the fovea, takes up just one ten-thousandth of the retina’s surface area, yet it gobbles up a full one-tenth of the PVC’s processing power. Similarly, around half of the PVC’s 250 million neurons help us process the central 1 percent of our visual field. Inouye’s half-blind patients helped him see this special magnification for the first time in history.

  Unfortunately for Inouye, other scientists got credit for his discoveries. During World War I, two English doctors, ignorant of his work, duplicated his experiments on the visual cortex with their own brain-damaged troops. They obtained the same results, but these doctors had the cultural advantage of being European. What’s more, in his major paper on vision, Inouye used a convoluted Cartesian graph to plot the relationship between the eyes and the primary visual cortex. It was precise, but it left readers cross-eyed themselves. The Englishmen meanwhile used a simple map, something scientists could grasp at a glance. When this intuitive diagram was published in textbooks worldwide, Inouye slipped into obscurity. Blindness can be a generational affliction, too.

  The next major discovery in vision neuroscience took place far from the battlefield. In 1958 a pair of young neuroscientists at the Johns Hopkins University, one Canadian and one Swedish, began investigating neurons in the visual cortex. In particular, David Hubel and Torsten Wiesel wanted to know what sights or shapes got these neurons excited—what made them fire? They had a good hunch, based on other scientists’ work. Signals from the eyes actually make a quick layover in the thalamus, in the center of the brain, before reaching the visual cortex. And other scientists had shown that thalamic neurons respond strongly to black-and-white spots. So Hubel and Wiesel decided to take the obvious next step and investigate how neurons in the visual cortex responded to spots.

  When shown their new lab, a grimy basement with no windows, Hubel and Wiesel rejoiced. No windows meant no stray light—perfect for vision work. They were less enthusiastic about the equipment they inherited. Their experiments involved, à la A Clockwork Orange, strapping down an anesthetized cat into a harness, immobilizing its eyes, and forcing it to stare at spots projected onto a bedsheet. But because the harness they inherited was horizontal, the kitty had to lie on its back, staring straight up toward the ceiling. Therefore the duo had to flip their slide projector toward the ceiling, too, then drape a sheet over the pipes up there “like a circus tent,” Hubel remembered. Insects and dust rained down, and to see the screen, the duo had to stare upward themselves, straining their necks.

  And this was just the setup—actually studying the neurons proved no easier. By 1958 scientists had built microelectrodes sensitive enough to monitor a single neuron inside the brain; some researchers had already examined hundreds of individual cells this way. (This head start intimidated Hubel and Wiesel, who felt like amateurs. So they “catapulted [themselves] to respectability,” as they said, by starting the count in their experiments at number 3,000. Whenever people visited the lab, they made sure to announce what number they were on.)

  Each electrode had fine platinum wires that slid into the cat’s primary visual cortex. Hubel and Wiesel wired the electrode’s other end to a speaker, which clicked whenever a neuron fired in response to a spot. Or at least it should have clicked. The first experiments proved dreadful, taking nine hours each—their necks were killing them—and running into the wee hours. Wiesel would start blathering in Swedish around 3 a.m., and Hubel almost nodded off and crashed one night while driving home. Worse, the neurons they monitored would not fire. They tried white spots. They tried black spots. They tried polka dots. “We tried everything short of standing on our heads,” Hubel recalled—including cheesecake shots from glamour magazines. But the stubborn stupid neurons refused to click.

  Week after maddening week passed, until September 1958. During the fifth hour of work one night, starting with cell 3009, they dropped yet another slide with yet another dot into the projector. According to different accounts, the slide either jammed or went in crooked, at an angle. Regardless, something finally happened: one neuron “went off like a machine gun,” Hubel said—rat-a-tat-tat-tat-tat-tat. It soon fell silent again, but after an hour of desperate fiddling, they realized what was going on. The neuron didn’t give a fig about the dot; it was firing in response to the slide itself—specifically, to the sharp shadow that formed on the screen as the edge of the slide dropped into place. This neuron dug lines.

  More hours of fiddling followed, and the duo quickly realized how lucky they’d been. Only lines within about ten degrees of one orientation set this neuron off. Had they dropped the slide in any less crookedly, the cell would have continued to give them the silent treatment. What’s more, other neurons, in follow-up experiments, proved equally picky, firing only for lines like \ or /. It took many more years of work, and many more cats, to firm everything up, but Hubel and Wiesel had already gotten a peek at the first law of vision: neurons in the primary visual cortex like lines, but different neurons like different lines, raked at different angles.

  The next step involved looking a little wider and determining the geographical patterns of these line-loving neurons. Did all of the neurons that liked a given angle cluster together, or was their distribution random? The former, it turned out. Again, neuroscientists knew by about 1900 that neurons are arranged in columns, like bits of stubble on the brain’s surface. And Hubel and Wiesel found that all the neurons within one column had similar taste: they all preferred one specific line orientation, like \. What’s more, if Hubel and Wiesel shifted their platinum wire a smidge, about two-thousandths of an inch, to another column, all of that column’s cells might respond to |, a line ten or so degrees different. Successive, tiny steps into new “orientation columns” revealed neurons that fired only for lines raked a few degrees further each time. In sum, the optimal orientation shifted smoothly from column to column, like a minute hand creeping around a clock.

  The geographical patterns didn’t stop there, though. Further digging revealed that, just as cells worked together in columns, columns worked together in larger clusters, like a bundle of drinking straws. Each bundle had enough orientation columns to cover all 180 degrees of possible lines, from — to | and back to —. Each bundle also responded best to one eye, right or left. Hubel and Wiesel soon realized that one left-eye bundle plus one right-eye bundle—a “hypercolumn”—could detect any line with any orientation within one pixel of the visual field. Once again, this took years of work to firm up, but it turns out that no matter what lovely shape our eyes lock onto—the swirl of a nautilus shell, the curve of a hip—the brain determinedly breaks that form down into tiny line segments.

  Eventually Hubel and Wiesel relieved their neckaches and got their apparatus turned the right way around, so that the clockwork kitties stared straight forward, toward a proper screen. And the discoveries just kept coming. Beyond simple line-detecting neurons, Hubel and Wiesel also discovered neurons that loved to track motion. Some of these neurons got all excited for up/down motion, others buzzed for left/right movement, and still others for diagonal action. And it turned out that these motion-detecting neurons outnumbered the simple line-detecting neurons. They outnumbered them by a lot, actually. This hinted at something that no one had ever suspected—that the brain tracks moving things more easily than still things. We have a built-in bias toward detecting action.

  Why? Because it’s probably more critical for animals to spot moving things (predators, prey, falling trees) than static things, which can wait. In fact, our vision is so biased toward movement that we don’t technically see stationary objects at all. To see something stationary, our brains have to scribble our eyes very subtly over its surface. Experiments have even proved that if you artificially stabilize an image on the retina with a combination of special contact lenses and microelectronics, the image will vanish.
  With these elements in place—Inouye’s map of the visual cortex, plus knowledge of line detectors and motion detectors—scientists could finally describe the basics of animal vision. The most important point is that each hypercolumn can detect all possible movements for all possible lines within one visual pixel. (Hypercolumns also contain structures, called blobs, that detect color.) Overall, then, each one-millimeter-wide hypercolumn effectively functions as a tiny, autonomous eye, a setup reminiscent of the compound eyes of insects. The advantage of this pixelated system, besides acuity, is that we can store the instructions to create a hypercolumn just once in our DNA, then hit the repeat button over and over to cover the whole visual field.*

  Some observers claimed that science learned more about vision during Hubel and Wiesel’s two decades of collaboration than in the previous two centuries, and the duo shared a much-deserved Nobel Prize in 1981. But despite their importance, they took vision science only so far. Their hypercolumns broke the world down quite effectively into constituent lines and motion, but the world contains more than wriggling stick figures. Actually recognizing things, and summoning memories and emotions about them, requires more processing, in areas of the brain beyond the primary visual cortex.

  Fittingly, the next advance in vision neuroscience—the “two streams” theory—appeared in 1982, just after Hubel and Wiesel won the Nobel. All five senses have primary processing areas in the brain, to break sensations down into constituent parts. All five senses also have so-called association areas, which analyze the parts and extract more sophisticated information. It just so happens with sight that, after the primary visual cortex gets a rough handle on something’s shape and motion, the data get split into two streams for further processing. The how/where stream determines where something is located and how fast it’s moving. This stream flows from the occipital lobes into the parietal lobes; it eventually pings the brain’s movement centers, thereby allowing us to grab onto (or dodge) whatever we’re tracking. The what stream determines what something is. It flows off into the temporal lobes, and taps into the memories and emotions that make a jumble of sensations snap into recognition.

  No one knows for sure how that snap takes place, but one good guess involves circuits of neurons firing in sync. At the beginning of the what stream, neurons are rather indiscriminate: they might fire for any horizontal line or any splash of red. But those early neurons feed their data into circuits farther upstream, and those upstream circuits are more picky. They might fire only for lines that are red and horizontal, for example. Still farther upstream, circuits might fire only for red horizontal lines with a metallic glint, and so on. Meanwhile, other neurons (working in parallel) will fire for clear glass lines at a certain angle or black rubber circles. Finally, when all these neurons throb at once, your brain remembers the pattern—red metal, glass, rubber—and says, aha, a Corvette.* The brain also integrates, over a few tenths of a second, the Corvette’s sound and texture and smell, to further aid in recognition. Overall, then, the process of recognition is smeared out among different parts of the brain, not localized in one spot. (Important note.*)

  In everyday life, of course, we don’t bother distinguishing between seeing a car (primary visual cortex), recognizing a car (what stream), and locating a car in space (how/where stream). We just look. And even inside the brain, the streams aren’t independent: there’s plenty of feedback and crosstalk, to make sure you reach for the right thing at the right time. Nevertheless, those steps are independent enough that the brain can stumble over any one of them, with disastrous results.

  If the primary visual cortex suffers damage, people lose basic perceptual skills, a problem that becomes obvious when they draw things. If they sketch a smiley face, the eyes might end up outside the head. Tires might appear on top of cars. Some people can’t even close a triangle or cross an X. This is the most devastating type of visual damage.

  Damage to the how/where stream hinders the ability to locate objects in space: people whiff when they grab at things and constantly run into furniture. Even more dramatic, consider a fortysomething woman in Switzerland who suffered a parietal lobe stroke in 1978. All sense of movement disappeared for her, and life became a series of Polaroid snapshots, one every five seconds or so. While pouring tea, she saw the liquid freeze in midair like a winter waterfall. Next thing she knew, her cup overflowed. When crossing the street, she could see the cars fine, even read their license plates. But one moment they’d be far away, and the next they’d almost clip her. During conversations, people talked without moving their lips—everyone was a ventriloquist—and crowded rooms left her nauseous, because people appeared and reappeared around her like specters. She could still track movement through touch or sound, but all sense of visual motion vanished.

  Finally, if the what stream malfunctions, people can pinpoint where objects are but can no longer tell one object from another. They cannot find a pen again if they put it down on a cluttered desk, and they’re hopeless in parking lots at the mall. Bizarrely, though, they can still perceive surface details just fine. Ask them to copy a picture of a horse, a diamond ring, or a Gothic cathedral, and they’ll render it immaculately—all without recognizing it. Some people can even draw objects from memory, but if shown their own drawings later, nothing registers. In general, these people retain their perceptual skills, since the primary visual cortex works, but the details never snap into recognition and identity eludes them.

  Sometimes damage to the what stream is more selective, and rather than all objects, people fail to recognize only a narrow class of things. Many of these so-called category deficits arise after attacks of the herpes virus, the same bug that causes cold sores. Herpes means “creeping,” and although it’s normally harmless, the virus does sometimes go rogue and migrate up the olfactory nerves to the brain, where it ravages the temporal lobes. When this happens, neurons begin to fire in panic, and victims complain of funny smells and sounds; as more tissue dies, they suffer headaches, stiff necks, and seizures. Many fall into comas and die. Those patients who wake up again often have sharply focused brain damage, as sharply focused as if a Russian bullet had pierced them. And if just the right spot gets nicked, they might display a correspondingly sharp mental deficit. Most commonly, people lose the ability to recognize animals. Inanimate objects they recognize fine—strollers, tents, briefcases, umbrellas. But when shown any animal, even cats or dogs, they stare, mystified, as if looking at beasts dragged back from an alien zoo.

  Loads of similar cases exist, some of which beggar belief. Contra the cases above, some herpes victims can recognize living things just fine, but not tools or man-made objects: cash registers become “harmonicas,” mirrors become “chandeliers,” darts fairy-tale transform into “feather dusters.” (Scarily, one man with so-called object blindness continued to drive. He couldn’t tell cars from buses from bicycles, but because his how/where stream still worked, he could detect motion, and simply steered clear of anything coming at him.) To get even more specific, some brain-damaged people can recognize objects and animals but not food. Others blank out only on certain categories of food, such as fruits and vegetables, while still others can name cuts of meat but not the animals they came from. “Color amnesiacs” cannot remember where lemons fit into the rainbow, nor whether blood and roses are of similar hues. One woman struggled with questions about, no kidding, the color of green beans and oranges.

  Usually these “mind-blind” folks can identify things through another sense: let them touch a toothbrush or sniff an avocado, and it all comes back. Not always, however. One woman who couldn’t recognize animals by sight also couldn’t recognize animal sounds, even though she could identify inanimate objects via sound. She had difficulties with spatial dimensions, too, but again only with animals. She knew that tomatoes are bigger than peas, but couldn’t remember whether goats are taller than raccoons. Along those lines, when scientists sketched out objects that looked like patent-office rejects (e.g., water pitchers with frying-pan handles), she spotted them as fakes. But when they drew polar bears with horse heads and other chimeras, she had no idea whether such things existed. For some reason, as soon as an animal was involved, her mind gummed up.

  These pure category deficits, while rare, imply something important about the evolution of the human mind. Our ancestors spent a lot of time thinking about animals, whether furry, feathered, or scaly. The reason is obvious. We’re animals ourselves, and the ability to recognize and pigeonhole our fellow creatures (as food, predators, companions, beasts of burden) gave our ancestors a big boost in the wild. Eventually, we probably developed specialized neural circuitry that took responsibility for analyzing animals, and when those circuits crap out, the entire category can slip clean out of people’s minds. Our ancestors exploited fruits and vegetables, too, as well as small, tool-like objects. Probably not coincidentally, these are the two other categories of things that commonly disappear from people’s mental repertoire. Our brains are natural taxonomists: we cannot help but recognize certain things as special. But the danger of specialized circuitry is that if the circuits go kaput, an entire class of things can go extinct mentally.

  The way we catalogue the world teaches us something else about mind-brain evolution. I hesitate to even invoke the m-word, since it’s such a contentious term. But after reading about fruit deficits and animal deficits and color deficits, it seems pretty clear that our brains do have modules on some level—semi-independent “organs” that do a specific mental task, and that can be wiped out without damaging the rest of the brain. Some neuroscientists go so far as to declare the entire brain a Rube Goldberg machine of modules that evolved independently, for different mental tasks, and that nature stuck together with gum and rubber bands. That “massive modularity” pushes things too far for some scientists: they see the mind-brain as a general problem solver, not a collection of specialized components. But most neuroscientists agree that, whether you call them modules or not, our minds do use specialized circuits for certain tasks, such as recognizing animals, recognizing edible plants, and recognizing faces.

 
