In the early 1960s, some aspects of biology also appeared to be deceptively simple. Watson and Crick had made their breakthrough discovery about DNA and its role in heredity.2 By today’s standards of molecular mechanisms, the model of how it worked was simple. Genes produced proteins, which then carried out bodily functions. Boom, boom, boom and you had a full mechanism. It became known as the “central dogma.” Information flowed in one direction, from DNA out to proteins that then instructed the body. With all that we know today, however, there is serious disagreement on how to even define what a gene is, let alone how many different interactions there are between molecules that are thought to be in some causal chain of action. To complicate matters even more, it is now known that information flows in both directions: what is getting built is, in turn, influencing how it is getting built. The molecular aspects of life reflect a complex system laced with feedback loops and multiple interactions—nothing is linear and simple.
Modern brain science started out being discussed in simple linear terms. Neuron A went to neuron B, which then went to neuron C. Information was passed along a path and was somehow gradually transformed from sensory exposure into action, having been shaped by external reinforcements. Today such a simple characterization of how the brain works would be risible. The interactions of the brain’s circuitry are as complex as those of the molecules that make it up. Getting a hold on how it works is almost paralyzing in its difficulty. Good thing we didn’t realize this at the time, or no one would have tackled the job.
As I look back on those early days, it may have been good for human split-brain research to begin coming of age in the hands of the simplest of researchers: me. I didn’t know anything. I was simply trying to figure it out using my own vocabulary and my own simple logic. That is all I had, along with bundles of energy. Ironically, the same was true for Sperry, the most sophisticated neuroscientist of the era. He had never worked in the human arena and so we held hands as we plowed forward.
In some sense, of course, we all realized split-brain patients were neurologic patients, and neurology was a well-formed field with lots of vocabulary. Joe was our guide in the minefield of jargon. Bedside examination of a patient with a stroke or a degenerative disease was well established and described. The rich history of early neurologists had taught us a great deal about which part of the brain managed what cognitive functions. The nineteenth-century giants in the field, Paul Broca and John Hughlings Jackson, and their twentieth-century counterparts, such as the neurosurgeon Wilder Penfield and still more recently Norman Geschwind, all played major roles in developing the medical perspective on how the brain is organized.
I can still remember the day when Joe came over to Caltech from White Memorial Hospital to give us a lab talk. He described some of our early findings, using the classic terminology of neurology. Although it wasn’t gobbledygook, it sounded like that to me, and I remember saying so to Joe and Sperry. Joe was a very open fellow and always progressive. He simply said to me, “Well, go do better,” and Sperry nodded in agreement. Over the ensuing years we did, establishing in our first four papers3 a scientific vocabulary for capturing what was going on in humans who had the two halves of their brains separated.
ORIGIN OF SPLIT-BRAIN RESEARCH
Split-brain research in animals has a rich history. This all occurred before my time in the lab, and it is easy to imagine that there are many versions of the story. The most straightforward begins with Ronald Myers working on an M.D./Ph.D. degree at the University of Chicago in the mid-1950s. His project was to learn how to cut the optic chiasm down the midline in a cat—a formidable assignment. The chiasm was seemingly inaccessible. Located at the base of the brain, it is where some of the nerves from the left eye and right eye cross, allowing information from both eyes to project to each half brain. If he could successfully cut the chiasm, it would mean visual information coursing up from the right eye would stay lateralized—that is, it would go only to the right half brain—and information coursing up from the left eye would go only to the left brain. The surgery would have eliminated the normal information mixing at the base of the brain.
If such a surgery could be done, then it would mean that one could begin to test how information from one eye came together inside the brain with information from the other. All of this was driven by the working hypothesis, then unproven, that the neural structure integrating the information was the corpus callosum, the huge nerve tract that interconnects the two half brains. There were those, such as Karl Lashley, mentioned earlier, who thought that the corpus callosum was merely a structural element that supported the two hemispheres. The experiment Myers designed was meant first to teach a visual problem to one eye of a chiasm-sectioned cat and then to test the other eye. If the information was integrated, then the idea was to test again after cutting the callosum to see if the integration stopped. The prediction was that it would. That would be huge.
Myers worked on the procedure and finally perfected what was, at first, an extraordinarily difficult technique. After much practice it became quite straightforward even though it doesn’t sound at all easy. His original description is telling:
The optic chiasma was transected in the mid-sagittal plane through a transbuccal [through the mouth] approach. In this procedure the soft palate was incised from its attachment to the hard palate anteriorly to within a half-centimeter of its free margin posteriorly. The cut edges were retracted with catgut sutures creating a diamond-shaped opening. A flap of nasal mucosa was next reflected from the sphenoid bone, and, with a dental burr, an oval fenestra 1 by 5 mm. was made in the bone immediately anterior to the spheno-presphenoidal suture. Through this opening in the bone the dura was carefully exposed and incised, thus revealing the underlying optic chiasma. The chiasma was then sectioned with a fine steel blade, under close visual control through a binocular dissecting microscope. A small piece of tantalum foil was inserted between the cut halves of the chiasma so that post mortem verification of the completeness of section would be possible by gross inspection.
After section of the chiasma, the opening in the bone was filled with Gelfoam soaked in blood to form a barrier between the nasopharynx and cranial cavity. The flap of mucosa was replaced over the Gelfoam and the incised soft palate reapposed with catgut sutures.4
Got it? Myers was set to perform his experiment. He found that in the chiasm-sectioned cat the information was integrated, and just as he predicted, after the callosum was sectioned, the integration stopped. This procedure, along with the finding that the corpus callosum transferred information between the two hemispheres, launched a thousand ships. With both surgeries, now each hemisphere could be directly given visual information and the opposite hemisphere could be tested for its knowledge of the information.
With Myers’s chiasm surgery breakthrough in hand and the logical next step to cut the callosum, interest was developing in what first seemed like a relatively obscure, yet confounding finding. Akelaitis’s patients at the University of Rochester appeared to have no major behavioral or cognitive changes following callosum surgery. As a consequence of this work and Lashley’s stance, most people thought that when it came to humans, little would come of this careful new animal work of Myers and Sperry.
Of course, one of the beauties of science is that it marches on. As the split-brain story developed and became rich and influential in science, people wanted to know where the idea came from. Was it Myers? Sperry? Both? Others? Did it just slowly happen as information accrued over time? After all, it wasn’t until years after Myers did his work that the whole preparation was dubbed “split-brain” by Sperry,5 the consummate wordsmith.
One account of its origins came from a well-known psychologist, Clifford T. Morgan, who had moved from Wisconsin to Santa Barbara in the early sixties. He had been an instructor at Harvard in the early forties and no doubt had known Sperry, as they both were associated with Lashley. Morgan was keenly interested in epilepsy and also became a celebrated textbook writer.
His first book, Physiological Psychology, published in 1943, was credited for bringing order to the field by systematizing its many facets.6 Morgan went on to a distinguished career, started his own publishing company, his own journals, and his own society. Perhaps he was the model for my own subsequent entrepreneurial efforts to start a journal and a scientific society.
I later met Morgan at his office when I arrived for my first stint at the University of California, Santa Barbara (UCSB) in 1966. He was a warm and generous man who seemed to live in order to hear Dixieland jazz at a local spot, the Timbers, on Sunday nights. In fact, he was so generous that on the spur of the moment one day, he lent me five thousand dollars to help me buy my first house! Just like that he wrote out a check at his desk with the simple command, “Pay it back when you can,” and handed it to me. That simple gesture kick-started my domestic life and had a big impact on me. Years later, following his example, I was able to do the same for two of my young research associates.
It turns out that the idea for the split brain was spelled out in the second edition of his book in 1950, coauthored with University of Pennsylvania psychologist Eliot Stellar.7 It was stated with no fanfare and made to sound as if it were part of the culture at the time, whereas in fact everybody wondered what the callosum did, and everybody wondered how information was communicated between the hemispheres. Does it remind you of what was going on in the field of genetics? After all, everybody knew there was inheritance and everybody knew there was DNA before Watson and Crick put it together. Maybe major advances simply accrue. At the same time, and very importantly in my view, somebody has to go out and do something to prove or disprove the talk, not just go on and on about it. There was no question in my mind Myers and Sperry had gotten their hands dirty and transformed the talk into findings.
I met Myers years later at a conference where I was presenting the human split-brain work and he was presenting some of his anatomical work carried out on chimpanzees.8 I was eager to get to know him because I fully understood his crucial role in the history and development of split-brain research. As a scientist he certainly had earned the respect of his peers, and the field of brain science was indebted to him.
That doesn’t mean he was Mr. Nice Guy. After my talk, he went into some kind of rant about how the “odd human case” didn’t mean much of anything and that it was sort of a bizarre consequence of prior epilepsy, etcetera. I was stunned and rather speechless. But the lightbulb was slowly turning on. Turf is king, and I was on his turf, even though I was following up his work in another species, and, by that time, our human studies had been peer reviewed in several refereed journals. I was getting another lesson in the difference between scientists and science. I was also wondering if it was inevitable that all contributors to intellectual property turned out this way. Was there any difference between an artist, a scientist, a bricklayer? Would I also turn out that way? Note to self . . .
DR. SPERRY
Roger Sperry was a true giant in the field. When I arrived at Caltech, he had recently recovered from a relapse of tuberculosis. His wife, Norma, coordinated the flow of information to him from the lab, while he rested and recovered at the sanatorium. At that time, he was involved in at least three major scientific projects. His foundational work in neurobiology, which was revealing that animals were not randomly wired and then shaped by experience,9 was going strong. He also proposed the bold hypothesis that a chemoaffinity process was in play—a process that guided neurons to grow to a specific destination during development. He had outlined that idea at a conference a few years before, and it served as the basis for Caltech hiring him into a professorship.
Sperry had taken on another issue. There was something called psychophysical isomorphism.10 This was the idea that, for example, if one saw a “triangle” in the real world, there was a corresponding electrical pattern in the visual brain areas that matched the real-world picture. To test this idea, he inserted little mica plates into the cortex of cats. The mica served as an insulator so that any electrical field potential in the brain, should it exist, would be highly disrupted by the many intervening insulators, thereby preventing the animal from performing a visual perceptual task. Many variations of this experiment were carried out. All the results supported Sperry’s belief that the notion of psychophysical isomorphism (parallelism) should be abandoned. It has been.
On top of all this, of course, was the exploding research on split-brain animals. Sperry had an army of postdoctoral research assistants working mostly on cats and monkeys. The lab was going full tilt on a variety of issues that dealt mainly with the question, Would an animal with its corpus callosum sectioned show transfer of information between the two hemispheres when a perceptual problem was trained to only one hemisphere?
Any one of these thrusts of research would have been enough to keep most labs busy and noticed in the larger scientific community. Sperry had a style that let things happen. He didn’t tell us how to do science. He watched, he kibitzed, he surely guided in ways we didn’t fully understand at the time. When he saw something of interest, he knew how to bring it out and enhance it. Put differently, he had a nose for the important versus the routine.
More generally, those of us who have spent a life in science running large labs wonder how it all keeps going. It most assuredly does not happen by the lab director issuing new directives on a daily basis. Labs can go for years with yeoman-grade science taking place. There can be dry periods, dull periods, nonfunded periods. Occasionally, however, something—sometimes it’s serendipity, sometimes it’s an actual hypothesized experiment—comes along and works out. Instantly, all the mundane days dissolve into glee and excitement.
I can remember George Miller, the distinguished psychologist, saying to me, “Everybody wants to think science moves forward one clean hypothesis at a time. It moves forward, but usually by stumbling onto something that was unintended.” Yet we then quickly tell a story about how we logically proceeded to our findings, which keeps the myth going. Science is great, but scientists are human and prone to storytelling just like everyone else.
Nonetheless, keeping an overall lab narrative going is crucial to keeping the research focused and on track. Young scientists come and go. They make their contribution to part of the story, and in return, they receive the support of the lab director throughout their career. That is the standard arrangement. Students usually continue working on an aspect of the problem they contributed to, and in the long run, that is how major research thrusts develop. Even pedestrian lines of research can grow like this.
The successful labs keep an edge by having really smart students and postdocs. Of course, smarts aren’t the only ingredient to success. Everybody is smart, but some students are also energetic and practical. A further combination of hard-to-predict characteristics and luck—which is what I had when I hopped into this lab’s ongoing dynamics—leads to a successful career in science.
Back in my undergraduate summer at Caltech, my meeting with Sperry in his Kerckhoff Hall office was the first of many experiences of meeting “the man.” His scientific reputation was, as I said, exceptional. From neurodevelopment to animal psychobiology, he was the intellectual leader of his time.
People really do have two realities—the everyday person and the “metro” person, or the private self and the public self, as it’s commonly phrased. The public self is your job, your reputation, the model the world builds about you and expects of you. It is usually not you. Let’s face it: If Keith Richards lived the life we all think he lives, he would be dead.
We can sometimes come to be ruled by the metro self. We live to feed it and do what it tells us to do. This thing that isn’t the real you is now running your life, making demands on you. Meanwhile, the real you is trying to get the kids to school, root the gophers out of the rose bed, see your friends for a drink, and talk about whatever. In my life, twenty years of lunches with Leon Festinger, the distinguished social psychologist, demonstrated that someone who had a large metro self could also be exceptionally personal and not let that self intrude or take over his private life. Many people pull this off.
I have always been amused by my many colleagues who claimed they knew Roger Sperry. They knew the metro Sperry. I can say with a fair degree of confidence that nobody knew him like I knew him, both his everyday self and his legendary metro side.
DISCOVERY AND CREDIT
With that glorious day of testing Case W.J., which revealed that sectioning the callosum in humans had an effect in line with the preceding animal work, the fifty-year program of study on human split-brain patients began. It was luck that I was there. Sperry let me flourish, as did Bogen. Others in the lab, who were interested in the results, let me remain in charge of the project. It was a time of good fortune.
Our first report was a brief communication to the Proceedings of the National Academy of Sciences. Sperry had recently been elected to the academy, and members in those days had a fast-track way of publishing. We worked like crazy through the winter and spring and got the paper off in August 1962 for an October publication date. The paper, largely free of medical jargon, was an amazing case history, a succinct summary of all we had done on W.J.11 The idea that disconnecting the hemispheres of the human brain caused major effects had new life. The era of human split-brain research was born.
At the same time, another story was brewing, one that would begin to teach me about the competitive nature of scientists. Norman Geschwind (Figure 10), a young neurologist, and Edith Kaplan, an equally young neuropsychologist, were working at the Boston Veterans Administration Hospital. They reported a case of a patient, P.K., who suffered from a glioblastoma multiforme, a tumor that had invaded his left hemisphere. During surgery, presumably to debulk the tumor, he sustained an infarction* of the anterior cerebral artery.12 As Antonio Damasio recounted years later in Geschwind’s obituary, “The anterior section of the callosum as well as the medial aspect of the right frontal lobe were destroyed” and “resulted in a severe disturbance of writing, naming and praxic control of his left hand.”13 In short, a natural lesion resulting from a stroke, as opposed to a surgical section of the corpus callosum, had revealed a disconnection effect.† They first reported their finding at a December 14, 1961, meeting of the Boston Society of Neurology and Psychiatry.14 Geschwind and Kaplan had very cleverly interpreted a messy tumor case as a callosum lesion case and carried out some simple tests that suggested they were right. After the patient died a few months later, their diagnosis was confirmed at autopsy. In the spring, another posting of the tumor case was made. In the newsy Random Reports section of the May 1962 issue of the New England Journal of Medicine was an entry made by Geschwind about these astounding observations. Great care was taken to note that the report came out of the December 14, 1961, meeting. The finding was the buzz in Boston.