One could get called away anytime for another opportunity to examine and evaluate interesting patients. Our success at Cornell was hugely dependent on the residents. We became a resource for them and they for us. As doctors roamed the hospital doing their chores, patient after patient was directed toward us. A beeper would go off, and there would be news from Payne Whitney, Cornell’s psychiatric hospital, adjacent to New York Hospital, about a relatively young patient with Korsakoff’s syndrome. This syndrome manifests with memory loss, confabulation, and apathy, a result of thiamine deficiency, and it is usually seen in malnutrition resulting from chronic alcoholism or weight disorders. Volpe would grab me, and over we would go to witness the confused man who had no idea where he was but was about to be utterly repaired by an IV injection of thiamine right in front of our eyes. Minutes later, a call back to the main wards: A woman with acute cognitive dimming was in need of assessment. From a scientific point of view, the neurologic wards are the most fascinating place on earth.
FROM SLEEPING RABBITS TO REAL PEOPLE
One of the most gripping procedures to watch was that of radiologists trying to determine which hemisphere in a patient is responsible for language and speech. Before neurosurgeons would operate in the regions near the language areas, they wanted to locate the language areas. Hemispheric variation was always a possibility, and, properly, they wanted to be sure. The radiologic procedure they used, done for purely medical reasons, was an opportunity for the neuropsychologists to learn a thing or two about the dynamics of interhemispheric processes. The procedure required the radiologist to thread a catheter through the femoral artery in the leg, up past the heart all the way to the neck, and into the internal carotid artery, which feeds the brain. They would then inject sodium amytal, an anesthetic that put half of the patient’s brain to sleep for approximately two minutes. After that, the doctors would withdraw the catheter a bit and rethread it up to the opposite carotid artery to test the other half brain. All of this was done in full view of the patient, with the radiologist guiding the catheter using a fluoroscope. Watching half of a human brain go to sleep is the eeriest experience I’ve had. It certainly trumped my earlier rabbit work.
What makes it a draining experience as well is seeing that a person’s conscious state can be directly manipulated in such a dramatic way and always at some risk. In general terms, the patient is usually asked to hold both hands high in the air. As the anesthetic takes hold in one hemisphere, the contralateral hand falls limp. In the hemisphere responsible for language and speech, those functions are severely disrupted, yielding either total silence or gibberish. This is all especially dramatic because one knows the other half brain is awake, watching.
We were trying to answer a fairly exotic question. When the right hemisphere was home alone, so to speak, with the dominant left hemisphere asleep, could we teach it anything? Further, could it then communicate its knowledge to the left hemisphere after it had been awakened from the anesthesia? If memories were established in the right hemisphere when the dominant language system was asleep, could the left hemisphere language system, after it awoke, have access to the information that had been encoded while it was snoozing? In our experiment, we discovered the answer: no. At the same time, if the patient was asked to simply point to an answer on a card I held up, the right hemisphere (presumably) seemed to do just fine at remembering the encoded information. The information was in there, but it was stored out of reach of the language system in the opposite hemisphere.
NEW TECHNOLOGIES: CAN THE BLIND SEE?
It was such vibrant, fulfilling work. Yet nothing could match what we were doing across the street in our labs, where hard-core experimental science was pounding forward. Jeff had established Festinger’s eye tracker, enabling unique split-brain experiments. In our prior studies, as I mentioned, we had sent information to one hemisphere or another by simply asking the patient to fixate a point on a screen and then quickly flashing the information either to the left or right of fixation. It had to be quickly flashed, because if it was left up on the screen for more than 150 milliseconds, the patient could move his or her eyes, thereby allowing each hemisphere to see what was being projected. The eye tracker changed all of that, ensuring that the image always remained in contact with the intended hemisphere. This meant we could show visual stimuli for longer periods of time. We could even show movies to the silent right hemisphere. Would the content of the movies affect the talking left hemisphere?
Soon two spectacular new patients arrived to capitalize on our technological advancement. Case J.W. was part of the Dartmouth series. His callosum had been sectioned in two stages, and he would prove to be extraordinarily interesting in every scientific and personal way. In addition, Case V.P. came to us from Ohio. She was part of another surgical series, headed by Dr. Mark Rayport, and she became exceptionally interesting as well. Throughout the remaining pages of this book, these two cases will be prominent. Overall, between the wards at Cornell and our growing group of split-brain patients, every day’s work was like fishing in a stocked pond. Every time the experimental hook went in, up came another insight. It’s no wonder we worked all the time.
In our early days at Cornell, Jeff had found the tracker to be a powerful aid to our routine use of the tachistoscope, and he applied it to patients without split brains. He’d become interested in a phenomenon called “blindsight,” cleverly named by the distinguished Oxford psychologist Larry Weiskrantz.2 Just as the name implies, it is a syndrome in which people who have lesions in their primary visual cortex can respond to visual information, even though they deny its presence. This isn’t like the “extinguished stimuli” that LeDoux, Volpe, and I explored when we first started at Cornell. Those patients could see information if nothing competed with it in the opposite visual field. With blindsight, however, the patient simply cannot see the object but can nonetheless point to it, pick it up, or react to it in some way. The many visual scientists studying this, led by Weiskrantz, believed the remaining capacity was due to intact secondary visual pathways kicking in and picking up the slack somehow.
The patients who had been written up in the scientific literature had not had the advantage of being studied with a fancy eye tracker. Only the tracker could ensure that a stimulus was placed in the visual field where the experimenter hoped it was and remained fixed there over a period of time. In other words, without the eye tracker, there was room for error in interpreting why there was remaining visual function. Once a region of blindness had been identified as having been caused by a central brain lesion, it behooved the experimenter to make sure that all stimuli were presented within the blind region and that none fell into any intact parts of the visual field that remained. That could only be achieved with an eye tracker, which Jeff had. All he needed was a patient to study. Sure enough, it wasn’t long until one showed up at Cornell.
Jeff first studied a thirty-four-year-old woman who had undergone surgery to clip an aneurysm in the right half of her brain. The aneurysm was in her right occipital lobe, so the surgery was expected to cause blindness in part of the patient’s vision. Sure enough, after surgery, the patient had a dense left homonymous hemianopia—she couldn’t see to the left of a point she was looking at. She was given an MRI, which revealed an occipital lesion that clearly spared both secondary visual regions and the superior colliculus, the main midbrain candidate for residual vision associated with blindsight. These intact areas should have been able to support many of the blindsight phenomena commonly reported.
But the patient had no blindsight. Jeff studied her for months and got nothing. He wrote up the work and published it in one of the finest scientific journals.3 It met with deafening silence. Blindsight was too big an idea to be shot down by one experiment, even a great, beautifully executed one. Jeff said, “Great, Mike: I come to your lab to learn some new tricks, and you know what I discover? Blind people are blind. That kind of brilliance ought to get me a job at Harvard.” In fact, the broader claims about the nature of blindsight remain a topic for debate. Jeff soon moved on to more alluring questions.
Cornell had become something of a magnet in those days. On many fronts the work was taking hold in the scientific literature, and New York, well, was New York. Who didn’t want to be in New York? We caught the eye, for example, of the spectacularly creative Stephen Kosslyn and his student Martha Farah at Harvard. They met Jeff, and all were off on a scientific hunt for the brain basis underlying visual imagery, the processes that allow us to imagine and visualize objects and events in our mind’s eye. Kosslyn, still in his thirties, was the world’s authority on this fascinating question. It was logical to want to know how mental imagery might be affected by split-brain surgery. Jeff was pressed into duty.
The story was complex and involved all kinds of discrete, detailed experiments. The studies came at a time when the notion of modularity was emerging as a conceptual framework for viewing cognitive mechanisms. Within a modular framework, complex mental processes, such as visual imagery, could not be thought of as monolithic, involving just one part of the brain. Instead, complex cognitive skills were now seen to be the end result of several interacting modules, which together produced what seemed to be a unitary cognitive event. This is easy to say but hard to provide evidence for, and Steve, Jeff, and Martha did just that. They saw that split-brain patients handled imagery differently in each hemisphere, thereby suggesting that each hemisphere had different modules available to process the identical stimulus.4 Believe me, this is all you want to know about it.
New York is a place that draws people into its magic. One day, a letter arrived from Toronto: A young Italian scientist from Bologna was wondering if we had room for her in our lab. We did, and Elisabetta Làdavas, to whom no word short of vibrant does justice, moved south into our lab and our hearts. Like all the Italian scientists I have known, she has a work ethic that is dazzling and a lust for life that leaves everyone around her breathless. Fascinated by the problem of visual attention (like everyone else I seemed to have surrounded myself with), Elisabetta had a unique approach. Everybody wanted to know how visual attention was distributed across a scene. So, for instance, if vision were viewed as a TV screen, was there more attention on the right side of the screen than on the left? Was there more attention on the top part of the screen than on the bottom? As Elisabetta worked on this question with teams of scientists, she always added her own twist. How is visual attention distributed if you look at a TV by bending down and looking through your legs at the screen, so that left becomes right and vice versa? I’ll never forget the look of astonishment on Jeff’s face when she proposed this; months of experiments ensued. To this day she remains one of our closest friends and has become a distinguished scientist, successfully breaking through the rather male-dominated Italian academic culture.
GEORGE A. MILLER AND THE BIRTH OF COGNITIVE NEUROSCIENCE
New York offered so many things, not the least of which was the talent at Rockefeller University and, in particular, George Miller (Figure 27). I had just arrived at Cornell and was seeking companionship with someone well versed in psychology. Right next door was Miller, one of the few giants in the history of psychology, so I called to ask if I might come over sometime. He said sure, and suggested we have lunch. I had no idea this would lead to our developing the field of cognitive neuroscience.
FIGURE 27. George Miller visiting us at our weekend home in Shoreham, Long Island, New York.
(Courtesy of the author)
Both Miller and his office intimidated me. Not only did the office contain more books and journals than entire psychology departments, but it looked as if most of them had been read. As he stood up to greet me, I was surprised to see that he was as tall as me, which is to say, way over six feet. With little ado, we went upstairs to the Rockefeller Faculty Club—home to great minds and mediocre food. We took our trays of soup and sandwiches and sat down. As we tiptoed around various subjects, he occasionally interjected hospitable questions, like “Would you like a beer?” I said, “No thanks.” Awhile later, he asked, “Would you like a cigarette?” I declined. A little later, he asked, “Would you like dessert?” Again, I declined. My thought was to keep things in the realm of professional simplicity. He looked at me in obvious exasperation and wondering, no doubt, if I indulged in anything at all, finally asking me, “Do you fuck?” I was silent for a moment, and then burst out laughing. Then I had dessert.
The ice had been officially broken, and I realized that George’s reputation as a formidable mind had gotten the better of me. Characterizations of first-rate thinkers tend to take on a life of their own, with the result that neophytes like me begin to think these great personages would rather have a beer with an old friend than meet someone new and challenging. George put all that to rest with one hilarious crack, and within weeks we were good friends. Although I learned in the years to come that his reputation for unceremoniously dismissing faulty arguments was well deserved, I also learned that his comments, whether positive or negative, were inevitably constructive with regard to good science.
Pierre S. DuPont made a wonderful observation to the French National Assembly some two hundred years ago. “It is necessary,” he said, “to be gracious as to intentions; one should believe them good, and apparently they are; but we do not have to be gracious at all to inconsistent logic or to absurd reasoning. Bad logicians have committed more involuntary crimes than bad men have done intentionally.”5 That sentiment is the essence of George. He rarely talked about the personal dimensions of a given advocate, but simply observed whether their reasoning was valid. When introduced to a body of information, his enormously quantitative and logical mind began a kind of digestion process, the outcome of which would be either encouraging or damning for the topic at hand. The product of this natural capacity was a rare scientist who could break from the conventional mode of thinking in a field and form a clear image of how things should be done. Again and again, George ventured into uncharted territory and produced classic papers that were harbingers of the vast activity that was to follow in each area of inquiry. Although his roots were in psychophysics, his main intellectual concern had always been the psychology of language.
In his earliest work, around 1950, he examined the perception of language, borrowing a host of technical tools from engineering. These included information theory,* which provided a rigor that had been previously unattainable in the psychological study of language.6 In what was to become his signature style, he first drew a host of colleagues and students into the study of language perception. After establishing the importance of meaning and redundancy, he then followed that lead by shifting his interest and attention to language comprehension.
At about this time, Noam Chomsky released Syntactic Structures,7 and George was quick to see its implications for the psychological modeling of comprehension. He immersed himself in Chomsky’s writings. He and Chomsky spent six weeks together with their families in one house, during a summer course at Stanford University in 1957. George described, in a brief autobiography, what a daunting experience that was for him; given the caliber of George’s own mind, that statement gives a clue as to how much of a genius Chomsky truly is. George’s work during the next few years, exploring the relationship between transformational grammar* and comprehension, placed the field of psycholinguistics on a sound footing.8
George, who died in 2012 at age ninety-two, spent his life tugging back the curtain that obscures the secrets of language, and in doing so, he not only led the field of psycholinguistics, but restructured the field of psychology. Through the study of language, he learned and taught the rest of the psychological community that when describing behavior one cannot ignore the processing that mediates stimulus and response. Meaning, structure, strategic thinking, and reasoning are too large a part of even the simplest perception to be ignored. George and a few other seminal figures, such as Festinger, Premack, and Sperry, are responsible for changing the face of psychology: transforming it from a science of behavior to a science of mentation. Nevertheless, what has fascinated me over the years is that George, a highly rational person, did not approach his new endeavors with much forethought. Like most great scientists, he became interested in some phenomenon or other and then simply jumped in to try to illuminate the problem. As a story develops, either a new insight is gained, or the idea is a bust.
My own years with him were filled with another enterprise: launching yet another field, which has come to be known as cognitive neuroscience, the study of how the brain creates the mind. It was born of rather intense interactions based primarily at the Rockefeller University bar. For about three years, George and I met there regularly after work and talked about our fields. He always had a deep interest in biology and assumed that much of psychology eventually would be an arm of neurobiology. A major problem with the then-current state of affairs was that neurobiologists, almost without exception, assumed that they could talk about cognitive matters with the same expertise with which they could talk about, for instance, cellular physiology. This is the equivalent of a textile expert talking as knowledgeably about high fashion as she does about the pros and cons of polyester. It was unmitigated arrogance, and it drove many serious psychologists away from the brain sciences, but not George.
We started exchanging stories, mine about episodes in the clinic and his about new experimental strategies. I would tell him about patients with high verbal IQs who lacked a grammar school child’s ability to solve simple problems. He would tell me that psychologists did not yet have anything resembling a theory of intelligence or mind. He urged the continued collection of dissociations in cognition seen in the clinic, in the hope that a theory would emerge from these seemingly bizarre and scattered observations.