The River of Consciousness


by Oliver Sacks


  *9 An alternative explanation, Crick and Koch suggest (personal communication), is that the blurring and persistence of snapshots is due to their reaching short-term memory (or a short-term visual memory buffer) and slowly decaying there.

  Scotoma: Forgetting and Neglect in Science

  We may look at the history of ideas backwards or forwards: we can retrace the earlier stages, the intimations, and the anticipations of what we think now; or we can concentrate on the evolution, the effects and influences of what we once thought. Either way, we may imagine that history will be revealed as a continuum, an advance, an opening like Darwin’s tree of life. What one often finds, however, is very far from a majestic unfolding, and very far from being a continuum in any sense.

  I began to realize how elusive scientific history can be when I became involved with my first love, chemistry. I vividly remember, as a boy, reading a history of chemistry and learning that what we now call oxygen had been all but discovered in the 1670s by John Mayow, a century before Scheele and Priestley identified it. Mayow, through careful experimentation, showed that approximately one-fifth of the air we breathe consists of a substance necessary to both combustion and respiration (he called it “spiritus nitro-aereus”). And yet Mayow’s prescient work, widely read in his time, was somehow forgotten and obscured by the competing phlogiston theory, which prevailed for another century until Lavoisier finally disproved it in the 1780s. Mayow had died a hundred years earlier, at the age of thirty-nine. “Had he lived but a little longer,” the author of this history, F. P. Armitage, wrote, “it can scarcely be doubted that he would have forestalled the revolutionary work of Lavoisier, and stifled the theory of phlogiston at its birth.” Was this a romantic exaltation of John Mayow, a romantic misreading of the structure of the scientific enterprise, or could the history of chemistry have been wholly different, as Armitage suggested?*1

  Such forgetting or neglect of history is not uncommon in science; I saw it for myself when I was a young neurologist just starting work in a headache clinic. My job was to make a diagnosis—migraine, tension headache, whatever—and prescribe a treatment. But I could never confine myself to this, nor could many of the patients I saw. They would often tell me, or I would observe, other phenomena: sometimes distressing, sometimes intriguing, but not strictly part of the medical picture—not needed, at least to make a diagnosis.

  Often a classical visual migraine is preceded by a so-called aura, in which the patient may see brightly scintillating zigzags slowly traversing the field of vision. These are well described and understood. But more rarely, patients would tell me of complex geometrical patterns that appeared in place of, or in addition to, the zigzags: lattices, whorls, funnels, and webs, constantly shifting, gyrating, and modulating. When I searched the current literature, I could find no mention of these. Puzzled, I decided to look at nineteenth-century accounts, which tend to be much fuller, much more vivid, much richer in description, than modern ones.

  My first discovery was in the rare-book section of our college library (everything written before 1900 counted as “rare”)—an extraordinary book on migraine written in the 1860s by a Victorian physician, Edward Liveing. It had a wonderful, lengthy title, On Megrim, Sick-Headache, and Some Allied Disorders: A Contribution to the Pathology of Nerve-Storms, and it was a grand, meandering sort of book, clearly written in an age far more leisurely, less rigidly constrained, than ours. It touched briefly on the complex geometrical patterns many of my patients had described, and it referred me to an 1858 paper, “On Sensorial Vision,” by John Frederick Herschel, an eminent astronomer. I felt I had struck pay dirt at last. Herschel gave meticulous, elaborate descriptions of exactly the phenomena my patients had described; he had experienced them himself, and he ventured some deep speculations about their possible nature and origin. He thought they might represent “a sort of kaleidoscopic power” in the sensorium, a primitive, pre-personal generating power in the mind, the earliest stages, even precursors, of perception.

  I could find no adequate description of these “geometrical spectra,” as Herschel called them, in the entire century between his observations and my own—and yet it was clear to me that perhaps one person in twenty affected with visual migraine experienced them on occasion. How had these phenomena—startling, highly characteristic, unmistakable hallucinatory patterns—evaded notice for so long?

  In the first place, someone must make an observation and report it. In 1858, the same year that Herschel reported his “spectra,” Guillaume Duchenne, a French neurologist, published a detailed description of a boy with what we now call muscular dystrophy, followed a year later by a report on thirteen more cases. His observations rapidly entered the mainstream of clinical neurology, and the dystrophy was at once recognized as a disorder of great importance. Physicians started “seeing” the dystrophy everywhere, and within a few years scores of further cases had been published in the medical literature. The disorder had always existed, ubiquitous and unmistakable, but very few physicians had reported on it before Duchenne.*2

  Herschel’s paper on hallucinatory patterns, by contrast, sank without a trace. Perhaps this was because he was not a physician making medical observations but simply an independent observer of great curiosity. Though he suspected that his observations had scientific importance—that such phenomena could lead to deep insights about the brain—their medical importance was not his focus. His paper was published not in a medical journal but in a general scientific one. Because migraine was usually defined as a “medical” condition, Herschel’s descriptions were not seen as relevant, and after a brief mention in Liveing’s book they were forgotten or ignored by the medical profession. In a sense, Herschel’s observations were premature; if they were to point to new scientific ideas about the mind and brain, there was no way of making such a connection in the 1850s—the necessary concepts only emerged more than a century later with the development of chaos theory in the 1970s and 1980s.

  According to chaos theory, although it is impossible to predict the individual behavior of each element in a complex dynamic system (for instance, the individual neurons or neuronal groups in the primary visual cortex), patterns can be discerned at a higher level by using mathematical models and computer analyses. There are “universal behaviors” which represent the ways such dynamic, nonlinear systems self-organize. These tend to take the form of complex reiterative patterns in space and time—indeed the very sorts of networks, whorls, spirals, and webs that one sees in the geometrical hallucinations of migraine.
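
  A minimal sketch of such self-organization, assuming the Gray-Scott reaction-diffusion model as a stand-in for such systems (with illustrative parameter values only), shows how purely local, nonlinear interactions between two diffusing quantities settle into the sorts of spots, whorls, and webs evoked above:

    # Gray-Scott reaction-diffusion: a toy nonlinear dynamic system whose
    # local interactions self-organize into spots, stripes, and whorls.
    # All parameter values here are illustrative.
    import numpy as np

    n = 200                                    # grid size
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065    # diffusion, feed, and kill rates
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # a small central perturbation seeds the pattern
    U[n//2-10:n//2+10, n//2-10:n//2+10] = 0.50
    V[n//2-10:n//2+10, n//2-10:n//2+10] = 0.25

    def laplacian(Z):
        # discrete Laplacian with periodic boundaries
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(10000):                     # iterate the local update rule
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V

    # V now holds a self-organized spatial pattern
    # (e.g., view it with matplotlib.pyplot.imshow(V))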

  Such chaotic, self-organizing behaviors have now been recognized in a vast range of natural systems, from the eccentric motions of Pluto to the striking patterns that appear in the course of certain chemical reactions to the multiplication of slime molds or the vagaries of weather. With this, a hitherto insignificant or unregarded phenomenon like the geometrical patterns of migraine aura suddenly assumes a new importance. It shows us, in the form of a hallucinatory display, not only an elemental activity of the cerebral cortex but an entire self-organizing system, a universal behavior, at work.*3

  With migraine, I had to go back to an earlier, forgotten medical literature—a literature that most of my colleagues saw as superseded or obsolete. I found myself in a similar position with Tourette’s. My interest in this syndrome had been kindled in 1969, when I was able to “awaken” a number of postencephalitic patients with L-dopa and saw how many of them rapidly swung from motionless, trancelike states through a tantalizingly brief “normality” and then to the opposite extreme—violently hyperkinetic, tic-ridden states very similar to the half-mythical “Tourette’s syndrome.” I say “half-mythical” because no one in the 1960s spoke much about Tourette’s; it was considered extremely rare and possibly factitious. I had only vaguely heard of it.

  Indeed, in 1969, when I started to think about it, as my own patients were becoming palpably tourettic, I had difficulty finding any current references, and once again had to go back to the literature of the previous century: to Gilles de la Tourette’s original papers in 1885 and 1886 and to the dozen or so reports that followed them. It was an era of superb, mostly French, descriptions of the varieties of tic behavior, which culminated in the book Les tics et leur traitement published in 1902 by Henri Meige and E. Feindel. Yet between 1907, when their book was translated into English, and 1970, the syndrome itself seemed almost to have disappeared.

  Why? One must wonder whether this neglect was not caused by the growing pressures at the beginning of the new century to try to explain scientific phenomena, following a time when it was enough simply to describe them. And Tourette’s was peculiarly difficult to explain. In its most complex forms it could express itself not only as convulsive movements and noises but as tics, compulsions, obsessions, and tendencies to make jokes and puns, to play with boundaries, and to engage in social provocations and elaborate fantasies. Though there were attempts to explain the syndrome in psychoanalytical terms, these, while casting light on some of the phenomena, were impotent to explain others; there were clearly organic components as well. In 1960, the finding that a drug, haloperidol, which counters the effects of dopamine, could extinguish many of the phenomena of Tourette’s generated a much more tractable hypothesis—that Tourette’s was essentially a chemical disease, caused by an excess of (or an excessive sensitivity to) the neurotransmitter dopamine.

  With this comfortable, reductive explanation to hand, the syndrome suddenly sprang into prominence again and indeed seemed to multiply its incidence a thousandfold. (It is currently considered to affect one person in a hundred.) There is now a very intensive investigation of Tourette’s syndrome, but it is an investigation largely confined to molecular and genetic aspects. And while these may explain some of the overall excitability of Tourette’s, they do little to illuminate the particular forms of the tourettic disposition to engage in comedy, fantasy, mimicry, mockery, dream, exhibition, provocation, and play. While we have moved from an era of pure description to one of active investigation and explanation, Tourette’s itself has been fragmented in the process and is no longer seen as a whole.

  This sort of fragmentation is perhaps typical of a certain stage in science—the stage that follows pure description. But the fragments must somehow, sometime, be gathered together and presented once more as a coherent whole. This requires an understanding of determinants at every level, from the neurophysiological to the psychological to the sociological—and of their continuous and intricate interaction.*4

  In 1974, after I had spent fifteen years as a physician making observations on patients’ neurological conditions, I had a neuropsychological experience of my own. I had severely injured the nerves and muscles of my left leg while climbing in a remote part of Norway; I needed surgery to repair the muscle tendons and time to allow the healing of nerves. During the two-week period after surgery, while my leg was immobilized in a cast, bereft of movement and sensation, it ceased to feel like a part of me. It seemed to have become a lifeless object, not real, not mine, inconceivably alien. But when I tried to communicate this feeling to my surgeon, he said, “Sacks, you’re unique. I’ve never heard of anything like this from a patient before.”

  I found this absurd. How could I be “unique”? There must be other cases, I thought, even if my surgeon had not heard of them. As soon as I was mobile enough, I started to talk to my fellow patients, and many of them, I found, had similar experiences of “alien” limbs. Some had found this so uncanny and fearful that they had tried to put it out of their minds; others had worried about it secretly but not tried to describe it to others.

  After I left the hospital, I went to the library, determined to seek out the literature on the subject. For three years I found nothing. Then I came across an account by Silas Weir Mitchell, an American neurologist working at a Philadelphia hospital for amputees during the Civil War. He described, very fully and carefully, the phantom limbs (or “sensory ghosts,” as he called them) that amputees experienced in place of their lost limbs. He also wrote of “negative phantoms,” the subjective annihilation and alienation of limbs following severe injury and surgery. He was so struck by these phenomena that he wrote a special circular on the matter, which was distributed by the surgeon general’s office in 1864.

  Weir Mitchell’s observations aroused brief interest but then disappeared. More than fifty years passed before the syndrome was rediscovered as thousands of new cases of neurological trauma were treated during the First World War. In 1917, the French neurologist Joseph Babinski (with Jules Froment) published a monograph in which, apparently ignorant of Weir Mitchell’s report, he described the syndrome I had experienced with my own leg injury. Babinski’s observations, like Weir Mitchell’s, sank without a trace. (When, in 1975, I finally came upon Babinski’s book in our library, I found I was the first person to have borrowed it since 1918.) During the Second World War, the syndrome was fully and richly described for a third time by two Soviet psychologists, Aleksei N. Leont’ev and Alexander Zaporozhets—again in ignorance of their predecessors. Yet though their book, Rehabilitation of Hand Function, was translated into English in 1960, their observations completely failed to enter the consciousness of either neurologists or rehabilitation specialists.*5

  The work of Weir Mitchell and Babinski, of Leont’ev and Zaporozhets, seemed to have fallen into a historical or cultural scotoma, a “memory hole,” as Orwell would say.

  As I pieced together this extraordinary, even bizarre story, I felt more sympathy with my surgeon’s saying that he had never heard of anything like my symptoms before. The syndrome is not that uncommon: it occurs whenever there is a significant loss of proprioception and other sensory feedback through immobility or nerve damage. But why is it so difficult to put this on record, to give the syndrome its due place in our neurological knowledge and consciousness?

  As used by neurologists, the term “scotoma” (from the Greek for “darkness”) denotes a disconnection or hiatus in perception, essentially a gap in consciousness produced by a neurological lesion. (Such lesions may be at any level, from the peripheral nerves, as in my own case, to the sensory cortex of the brain.) It is extremely difficult for a patient with such a scotoma to communicate what is happening. He himself scotomizes the experience because the affected limb is no longer part of his internal body image. Such a scotoma is literally unimaginable unless one is actually experiencing it. This is why I suggest, only half jocularly, that people read A Leg to Stand On while under spinal anesthesia, so that they will know in their own persons what I am describing.

  Let us turn from this uncanny realm of alien limbs to a more positive phenomenon (but still a strangely neglected and scotomized one)—that of acquired cerebral achromatopsia or total color blindness following a cerebral injury or lesion. (This is a completely different condition from common color blindness, which is caused by a deficiency of one or more color receptors in the retina.) I choose this example because I have explored it in some detail, after I learned of it quite by accident, when a patient with the condition wrote to me.*6

  When I looked into the history of achromatopsia, I again encountered a remarkable gap or anachronism. Acquired cerebral achromatopsia—and even more dramatically, hemi-achromatopsia, the loss of color perception in only one half of the visual field, coming on suddenly as a consequence of a stroke—had been described in exemplary fashion in 1888 by a Swiss ophthalmologist, Louis Verrey. When his patient subsequently died and came to autopsy, Verrey was able to delineate the exact area of the visual cortex that had been damaged by her stroke. Here, he predicted, “the center for chromatic sense will be found.” Within a few years of Verrey’s report, there were other careful reports of similar problems with color perception and the lesions that caused them. Achromatopsia and its neural basis seemed firmly established. But then, strangely, the literature fell silent—not a single full case report was published for another seventy-five years.

  This story has been discussed with great scholarship and acumen by both Antonio Damasio and Semir Zeki.*7 Zeki remarks that Verrey’s findings aroused resistance the moment they were published and sees their virtual denial and dismissal as springing from a deep and perhaps unconscious philosophical attitude—the then prevailing belief in the seamlessness of vision.

  The notion that we are given the visual world as a datum, an image, complete with color, form, movement, and depth, is a natural and intuitive one, seemingly supported by Newtonian optics and Lockean sensationalism. The invention of the camera lucida, and later of photography, seemed to exemplify such a mechanical model of perception. Why should the brain behave any differently? Color, it was obvious, was an integral part of the visual image and not to be dissociated from it. The ideas of an isolated loss of color perception or of a center for chromatic sensation in the brain were thought self-evident nonsense. Verrey had to be wrong; such absurd notions had to be dismissed out of hand. So they were, and achromatopsia “disappeared.”

  There were, of course, other factors at work as well. Damasio has described how, in 1919, when Gordon Holmes published his findings on two hundred cases of war injuries to the visual cortex, he stated summarily that none of these showed isolated deficiencies in color perception. Holmes was a man of formidable authority and power in the neurological world, and his empirically based antagonism to the notion of a color center in the brain, reiterated with increasing force for over thirty years, was a major factor in preventing other neurologists from recognizing the syndrome.

 
