Ignorance


by Stuart Firestein


  I have to admit that I do not have the mathematical sophistication to follow most of the theoretical propositions being produced by these computational neurobiologists, as they have come to be called. But I do think they have particularly strong access to ignorance. This is at least partly because, like modern physicists, they can use mathematics to frame questions that are hard, maybe impossible, to ask linguistically. With mathematics as your language you don’t end up with paradoxical-sounding descriptions like the Emo Philips quip that opened this case history. When it comes to asking about the brain, theoreticians have the further advantage of not being hampered by the technical limitations that are often part of experimentation, and not being constrained by earlier findings—findings that were often the result not so much of having a good question as of having certain technical capabilities that produced certain kinds of data (see my earlier remarks about voltage spikes in the nervous system in chapter 2). Of course, there are technical advances in math as well—new propositions, new techniques, added calculating power, and their application to nervous systems has occasionally been successful—but still the theoretician is not as constrained as the experimentalist.

  Abbott is very thoughtful about choosing “precisely where along the frontier of ignorance I want to work.” When he talks to my class, it’s interesting to listen to him wrestle with this problem. He makes the point that for a theorist all this freedom is a challenge. Without the technical constraints it’s much more difficult to define a limit. “I want to claim that I want to discover the roots of consciousness, but I happen to believe that if I did say that, I wouldn’t get anywhere.” “You know that if you are too risky in your research you’ll get nothing done. Or you can play it safe and reap rewards for doing essentially the same thing over and over again,” but that’s really not getting much done and “you have to force yourself not to do that.” You have to find the “stuff that pushes the edges for you,” and to do that you have to be honest and say, “What can I personally tackle?” “Also you have to know the times you live in. Is there enough information for me to make progress here? When do you yourself say you’re not going to be able to solve this?” “So you have to introspect and that’s the good part. But you have to guess too, and you could guess wrong. There are no guarantees.” I hope you can see the struggle here and recognize that this struggle goes on in parallel with the actual work that he does, and it is a constant struggle, back and forth, always trying to locate that sweet spot of ignorance. Let’s see it in action.

  Abbott has some “simple” questions. “This morning when I opened the refrigerator I noticed that there wasn’t much orange juice left. This evening on the way home from work I remembered to stop at the grocery store to pick up some OJ. What happened in my brain that put that thought in there and retrieved it at the right time, 10 hours later?” The simplicity of this question, like that of understanding how we walk, can be deceiving. Appropriately, Abbott calls this the “memory of the commonplace,” the vast number of things we remember that are neither exceptional nor practiced, but that nonetheless occupy memory space in our brains. This all seems so unremarkable, so easy to ignore, that we are tempted to dismiss it as not being that important. It is so woven into our daily lives; we do it all day, every day, and even overnight. How easy it would be to overlook this pathetically obvious kind of neural activity as being too common to be significant. Yet when we think very hard about this question, it actually becomes harder to understand. In brain work especially, this is a clue that you may be on to something.

  What is it about the orange juice example that is so tantalizing? For one thing, there is a delay. You think of it once in the morning and then not again until 10 hours or more later, after a day during which your brain did a lot of other things—if you’re a theoretical neuroscientist like Larry Abbott it did a lot of very complicated things. Yet there it is, sometimes tripped off by a cue of some sort: the grocery store comes into view, you see an ad for apples, the radio reports on Jews in some West Bank settlement, or any of a zillion things that may be strongly or weakly related to orange juice. And often there is no discernible, that is, conscious, cue at all; it just seems to be there. How does the brain keep that not-so-special memory alive for so long?

  There is a similar kind of memory, called recognition memory, that is noteworthy because we seem to have a remarkably huge capacity for it. There are now famous experiments in which subjects have been shown as many as 10,000 images in a relatively short time—just flashes of each one. Then, when shown a second group of images and asked to identify which ones they had seen in the previous group, they were able to respond with an almost unbelievable accuracy of over 90%. If it weren’t for the truly astonishing numbers involved, this wouldn’t seem like taxing brain work—just watching some pictures flash by and recalling whether you’ve seen them before. You don’t have to make a list of the scenes you’ve seen (that would be very hard); you don’t have to describe the scenes (that would also be hard); you just have to identify them as familiar. But 10,000 of them, with better than 90% accuracy!

  These and a few other similar considerations began to nag at Abbott. Were we missing something here? Because it didn’t seem like our ideas about memories formed from synapses switching on and off could really describe this level of dynamic memory capacity—thousands of memories a day, some fleeting, some longer lasting, many not even consciously recorded, most soon forgotten, at least in their details. So you see that by asking the right question (and one that could have been asked 50 years ago just as easily), a window is cracked open ever so slightly. I almost hesitate to use the window-opening metaphor here because it suggests a beam of radiant light coming into the darkened room, when actually it is almost exactly the opposite—an apparently well-lit room is suddenly darkened by the stealthy entrance of an unimagined ignorance just outside the room. What we thought we knew so well can’t be the whole story; it may not be any of the real story.

  Around the early 2000s Stefano Fusi, another computational neuroscientist with a physics background, was working independently on this problem, and he and Abbott each came to a result that appeared to be catastrophic for our understanding of how memory works. Since 2005 they have joined forces and worked together, and Stefano was also a guest in my course on ignorance, although some years after Larry Abbott. The rest of this history interweaves the work of both.

  The catastrophic nature of the problem was that we simply didn’t have enough synapses to remember the things we do, all those commonplace memories, if all the current models for how memories persist in our minds were true. And the difference wasn’t a little bit, a decimal point or two of adjustment or a little tweak. It was, well, catastrophic. All the accepted models of memory formation relied on the notion that memories were composed of some number of synapses, the connections between the cells in our brains, whose strength had been modified, and that at a later time this network of active synapses could be accessed and would be perceived as a memory. The assumption was that, like computers, the more switches (synapses in the brain, transistors in the computer) you had, the more you could remember. This is called scalability. In a scalable system the process by which something is accomplished remains the same, and if you want more of it, you just add more hardware (e.g., switches). The human brain has about 100 trillion synapses (a number represented as 10¹⁴, that is, a 1 followed by 14 zeros). So even if a memory required a hundred synapses (10²), there was enough hardware for 10¹² memories—about a trillion memories, which certainly seems like more than enough.

  But Fusi and Abbott (along with another neurophysicist hybrid named David Amit, who was Fusi’s original mentor) had stumbled upon a dismaying problem with one of the assumptions of the accepted models. Without going into the technical details, they found that the number of memories in a wet, warm biological brain does not scale with the number of switches (synapses), as is the case with the transistors of a cold, hard computing machine. Instead, it scales only with the logarithm of the number of synapses. The logarithm is the number that tells you how many times to multiply by 10—in other words, the number of memories in a brain with 10¹⁴ synapses maxed out at 14, not a trillion. 14! That’s what they mean by a catastrophe. (To be entirely accurate here, the capacity actually scales with something called the natural logarithm, but that value only turns out to be around 32, not really much of an improvement.)
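To keep the numbers straight, here is the same back-of-the-envelope arithmetic in a few lines of code (purely illustrative; the published calculations are more careful about what counts as a synapse and a memory):

```python
import math

synapses = 10**14        # roughly 100 trillion synapses in a human brain
per_memory = 10**2       # the text's "even if a memory required a hundred synapses"

print(synapses // per_memory)      # 10**12: the computer-style, "scalable" estimate, about a trillion memories
print(math.log10(synapses))        # 14.0: the capacity if it grows only with the base-10 logarithm
print(round(math.log(synapses)))   # 32: with the natural logarithm, as in the actual result
```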

  When Fusi first came upon this result, he was a young scientist, and a physicist no less, and couldn’t get the result published in any major journal. An accidental meeting and conversation with Larry Abbott revealed that Abbott had also stumbled upon a similar result but was suspicious of it, given the consequences. They began working together, and the result only became more and more robust. Still, this discrepancy, shall we call it, was being largely ignored by the greater neuroscience community until finally Larry was asked to give a plenary lecture at the Society for Neuroscience annual meeting in 2004 and presented his and Fusi’s findings to a large captive audience.

  The crucial thing about this finding is that it changes almost everything about the way we think the brain stores memories. Although Fusi and Abbott show this by performing careful calculations, you can get an intuitive appreciation for the problem—a system that learns quickly, as our brain does in commonplace mode, will also forget quickly. This is because new memories are constantly being formed that write over the old ones, so nothing lasts for long while the brain is active. Pretty quickly the synapses used in one memory begin getting incorporated into others, and then there is less and less of the original memory left, and eventually it is no longer recognizable, that is, we have forgotten it. This also tells us that forgetting is due to the degradation of memory caused not by time, as most of us are wont to think, but by ongoing activity. Forgetfulness, especially the kind that so worries people, like forgetting where you just put your keys or why you walked into a room, is not due so much to age as to overworking the poor contraption. New memories crowd out older ones, even on the time scale of minutes.
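If it helps to see that overwriting intuition run, here is a toy sketch (my own illustration, not Fusi and Abbott’s actual model): synapses are treated as binary switches, and each new memory rewrites a small, randomly chosen fraction of them. The network size and that fraction are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_synapses = 10_000        # illustrative network size
frac_touched = 0.01        # illustrative fraction of synapses each new memory rewrites

synapses = rng.choice([-1, 1], size=n_synapses)   # binary "switch" states
first_memory = synapses.copy()                    # snapshot right after storing memory #1

for t in range(1, 501):
    touched = rng.random(n_synapses) < frac_touched            # synapses recruited by the new memory
    synapses[touched] = rng.choice([-1, 1], size=touched.sum())
    if t % 100 == 0:
        overlap = np.mean(synapses == first_memory)             # how much of memory #1 survives
        print(f"after {t:>3} new memories, overlap with the first: {overlap:.2f}")

# The overlap decays toward chance (0.50): ongoing storage, not the passage
# of time, is what erases the old trace.
```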

  Of course, we have long-lasting memories, but that requires that the system making the memories does so only very slowly, too slowly to explain our normal experience of recognizing vast numbers of familiar things. Practiced or exceptional memory may work this way—long-term learning clearly requires slower processes and so it is more difficult. There is not only more than one kind of memory, but there must be more than one kind of mechanism for remembering.

  Well, this story has a happy ending, at least for a book on ignorance, because the catastrophe remains unresolved. As Fusi says, “We were studying a problem, not a solution.” Of course, there are some hypotheses, but they are mostly of the sort that we have to look at things we never looked at before, because the established principles do not contain the solution. The solution is in a new dark room with new black cats scurrying about—or not.

  John Krakauer is a brash, young neuroscientist whose cultured demeanor and English accent barely cloak an unswerving scientific toughness. Krakauer is a medical doctor who slipped into research during his residency and has never left it. Using perhaps the polar opposite of the theoretical approach, he has nonetheless come to a similar conclusion: that motor systems are the improbable key to understanding how the brain works. Two of his mantras are that “Plants don’t have nervous systems, because they don’t go anywhere” and “The reason to exist is to act.” He now manages both a clinical practice and a research program. His quick, and often racy, sense of humor belies his deep thinking about what it is the brain is up to. Krakauer asks my class a simple question: “What muscle contracts first on pressing an elevator button?” This shouldn’t be difficult. We all think back, running through a movie in our heads, our brains, of ourselves reaching up to push the button in an elevator. We’ve all done this hundreds, thousands of times, but the answer still shocks us: “The gastroc muscles,” he says, “in your leg (this is one of the two long muscles of your calf) on the same side as the arm that you will lift to push the button. And if you didn’t do this, tighten this muscle just slightly in anticipation of your fairly heavy arm (approximately 8 ½ lb!) being lifted like an extended lever, you would topple over.” What is so shocking about this is that our brain has worked out the problem, not a trivial one in engineering terms by the way, made these anticipatory postural adjustments in muscles that are nowhere near the arm being lifted, and we have no access to the process. Imagine how much more complex it gets if you are carrying groceries in the other arm, but still you don’t experience it as difficult.

  This particular example reminds me of a pointed question asked by the philosopher Ludwig Wittgenstein. “When you raise your arm, your arm goes up. But what is left over if you subtract the fact that your arm goes up from the fact that you raise your arm?” There is the sense that in that tiny bit that’s left over, for which we haven’t got an exact name—intention or thought or decision—is a very important answer.

  An even deeper question, which will at first sound silly (which is how deep questions often masquerade), is how come we all reach for the elevator button at about the same speed? Krakauer demonstrates this by reaching for his cup of “double shot, extra hot, skimmed latte” and asking why we all choose the same speed and motion when reaching for a cup of coffee. No one goes too slow or too fast, and if someone did, you would think something was wrong with that person—mentally or physically. These striking regularities appear in most simple actions. Virtually everyone makes a fairly straight-line approach to the cup—not from above or below or a myriad other ways of reaching for the cup. Indeed, this is an enduring question in the neurobiology of action—what is called the “degrees of freedom problem”: given a nearly infinite number of ways to reach for the cup, why do we all choose the same one? “Why isn’t there infinite procrastination as the brain tries to determine which of the many possible ways it will use?” “No one knows why,” says young Dr. Krakauer. No one knows why.

  Of course, there are all sorts of factors one could theorize about—maximizing efficiency, speed versus error, energy costs—and there are equations and graphs from dozens of experiments describing the effects of these and other factors on movement choice, but none yet fully explain this exquisite mystery. The medical doctor Krakauer points out, however, that these are important elements in designing rehabilitation programs for stroke patients or other victims of movement pathologies. What should we actually be working on to improve lost movement and coordination? What are the critical aspects of making movements that are lost due to pathology or injury?

  Parkinson’s patients are a particularly interesting example. The disease is marked outwardly by slowed movements, from walking to reaching for a cup of coffee. But, as Krakauer dryly notes, “They never get run over.” If the problem were some sort of “execution deficiency”—poor muscle control or damaged communication between nerves and muscles—they would be likely to suffer many more accidental injuries than they do. Something seems amiss here, and this prompted a series of experiments with Parkinson’s patients that pointed in a completely different direction from muscular dysfunction. This was a case where a simple question (why do Parkinson’s patients move slowly?) was at odds with the observed conditions (they don’t have accidents), and where getting the right question, not just more observations, was the key to understanding what was going on.

  It’s worth looking into this a bit because not only is the answer to this specific conundrum unexpected, it leads improbably to an understanding of the development of skill, for example, in sport, and of a new perspective on the whole of cognition. This is a wonderful example of how a little bit of the right ignorance can lead to undreamed-of insights in seemingly unrelated areas.

  The answer, or at least the partial answer, is that Parkinson’s patients, those in the earlier, less debilitating stages of the disease, believe that they are moving at the correct speed; they are simply wrong about that. If you yell “Fire!” in a room full of such Parkinson’s patients, they will beat you to the door. They can move just fine; they “choose” to go slowly. But why have they made this choice? You might think you could just ask them, but that won’t work. They are unable to self-report on this. While they recognize they are moving slower than other people, and may even be embarrassed by it, they don’t really know why they are moving more slowly than they could. This is an example of why the brain is so poor an instrument for understanding how it works—at least through introspection. You can think about it all you want, and you will never get access to what your brain is doing computationally at any given moment. You only have access to a result, a behavior or a perception, that could have been reached in numerous indistinguishable ways. By the way, you are no more able to self-report why you have chosen the speed at which you walk or grasp for objects than a Parkinson’s patient.

  So what are the possible reasons that Parkinson’s patients adopt the speed they do? It could be that they have miscalculated the speed-accuracy tradeoff—that is, how fast you can go without making serious errors, like falling down or knocking over the coffee. And, indeed, if you were to watch someone trying to navigate across an icy pond in slippery shoes, his or her gait and posture would look very Parkinsonian, due to the person’s calculation of the risk of falling over.

  Another possibility is that they have miscalculated the energy cost. They have an increased reluctance to move faster because some implicit calculation of the energy cost has gone awry. Think about yourself reaching for a glass of water on the table—you could do it much faster than you typically do and still not knock it over or spill it. Why do we “choose” to go more slowly than we are capable of? It could very well be that to beings from another planet we appear to move painfully slowly. So have we decided in some unconscious way that it’s just not worth the effort to get to the glass any sooner?

 
