The Youngest Science


by Lewis Thomas

We thought it useful, given so powerful an example of natural immunity already in existence in animals, to see whether we could obtain even stronger antibacterial sera by immunizing the rabbits. We injected animals with suspensions of heat-killed meningococci, and collected sera at weekly intervals. These samples were set up as in the initial experiments, adding various numbers of live bacteria to the serum specimens and determining how many were killed, and how quickly. Within the next few days we encountered our paradox: the sera from the immunized rabbits, which had been capable of killing a million meningococci in a few hours, had now lost this property. There were potent and specific antibodies in these sera, as we could show in other kinds of tests—agglutination, precipitation, and complement fixation tests. But, with the appearance of a specific antibody, the bactericidal activity vanished.

  Moreover, something of the same sort could be shown in the whole rabbit, in vivo. When we injected live bacteria into the bloodstream of our immunized animals, and then measured the survival of the bacteria by serial blood cultures, we were surprised to learn that the blood cultures were still positive twenty-four hours later in the more intensively immunized rabbits, in contrast to the unimmunized animals, in which all of the meningococci had disappeared within ten to fifteen minutes.

  By this time it was late April of 1941 and I was in a hurry. The problem had turned into something fascinating, involving both paradox and surprise. I knew I was expected back in New York the next January to become a neurologist, so I worked as fast as I could. What I had run into was an antique immunologic phenomenon called the “prozone,” in which an excess of antibody turns off the immune reaction unless the serum is sufficiently diluted. However, the difference in my laboratory—what was new—was that it worked in vivo: an immunized animal could lose, as the result of being immunized, its own natural defense. This might, I thought, have useful implications for susceptibility in certain human infections beyond meningitis—typhoid fever and brucellosis, for example—and I wanted to get on with it.

  However, as it turned out, I never got to finish the problem or even answer the principal questions. Nor did I ever get back to the Neurological Institute. The Rockefeller Institute was put on notice in late 1941, then mobilized as a naval medical research unit; I was assigned to it as lieutenant, and received orders to turn up in New York, in uniform, by the end of March 1942. John Dingle and I reluctantly agreed to bring the still-inconclusive problem of the in vivo prozone to a premature end and write the work up; to this day I’ve never been able to return, full-time, to the problem. It still hangs there in my mind, and I don’t believe any other laboratory has ever settled it.

  * * *

  I think I would still pick neurology as the most fascinating of all fields in medicine. It is now beginning to move into problems originally staked out by psychiatry, and the contributions from neuropharmacology have already begun to transform the discipline. The most enthralling of these is endorphin, a simple peptide secreted within the brain with the particular function of attaching specifically to the surface of cells responsible for the awareness of pain, at the same sites to which morphine and heroin habitually become attached.

  These things are interesting for all kinds of reasons, some of them urgently important. Now that the chemical structures are known, it may be possible to design new classes of drugs for pain, perhaps without the side effects and addicting properties of morphine. It is also conceivable that new insights can be gained into the mechanism of addiction itself, and perhaps new ways will be found to cope with at least the purely medical aspects of one of this century’s most appalling social problems. Perhaps, as well, when we have learned enough about endorphin, and gotten used to the idea that such a thing exists in our brains, we may take a different attitude toward addiction. I wonder what would happen if pharmacologic science were to produce a “natural” drug, as natural, say, as endorphin, possessing the subjectively pleasurable properties of heroin, but without addiction. Would it be allowed, or would we pass laws to forbid it?

  But the most interesting question of all is why does such a substance exist? What is the biological purpose of endorphin? Is its antipain function what it is really designed to accomplish, or is this a more or less incidental side effect, a biological accident, with some other still-unguessed-at role in the regulation of messages in the brain?

  If it is, in fact, a built-in mechanism for the alleviation of pain, how did it get there past all the selective tests of evolution? Why should it have survival value for a species, or for an individual animal? For this is what you would have to find, unless its existence is to make no sense in the context of modern biology. We take it for granted that every major inherited trait possessed uniformly by any species is there because of natural selection. This is as solid and inflexible a rule as any in science.

  It would be a different problem if only we humans made endorphin in our brains. Perhaps you could make the case that for a species as intelligent and at the same time as interdependent and watchful of each other as ours, it might be useful to install a device of this kind to guard against intolerable pain, or to ease the individual through what might otherwise be an agonizing process of dying. Without it, living in our kind of intimacy, at our close quarters, might be too difficult for us, and we might separate from one another, each trying life on his own, and the species would then, of course, collapse.

  But why should mice have the same equipment, and every other vertebrate thus far studied?

  And why, of all creatures, earthworms? For it has just been discovered that the primitive nervous system of annelids is richly endowed with the same kind of endorphin receptors, and it can be assumed that the worm possesses the same system for pain relief as exists in our own brains. I am glad to learn of this. Earthworms do have sensory equipment, I know. They withdraw quickly when touched, even when blown upon. Without protection against overwhelming pain, the day-to-day life of a worm, being stepped on, snatched by birds, ground under plows, washed away in streams, would be hellish indeed.

  Perhaps this is simply a piece of extraordinary good luck on the part of nature. Maybe something slipped up somewhere early in evolution, and all of us were endowed with something ineffable, free for the having, carrying no particular value for competition. The genes were simply handed down, species after elaborate species, to restrain the suffering of living and dying, by pure chance. I have to doubt this, as an earnest believer in the details of evolution.

  Yet there it is, a biologically universal act of mercy. I cannot explain it, except to say that I would have put it in had I been around at the very beginning, sitting as a member of a planning committee, say, and charged with the responsibility for organizing for the future a closed ecosystem crowded with an infinite variety of life on this planet. No such system could possibly operate without pain, and pain receptors would have to be planned in detail for all sentient forms of life, plainly for their own protection and the avoidance of danger. But not limitless pain; this would have the effect of turbulence, unhinging the whole system in an agony even before it got under way. And not, I should think, the awareness of dying. I would have cast a vote for a modulator of pain, finely enough adjusted to assure its usefulness, but set with a governor of some sort, to make sure it never could get out of hand. In this sense, endorphin may have developed in our brains not for its selective value to our species, or any species, or any individuals within species, but for the survival and perpetuation of the whole biosphere, or as it is sometimes called, the System.

  No one can predict how the endorphin story will turn out in the end, for it is only at its beginning. At the present stage, it might go anywhere, mean anything. Conceivably, chemical messengers of this class, small peptide molecules, may even be involved in disorders of the brain, including schizophrenia.

  This state of affairs tells a central truth about research. Making guesses at what might lie ahead, when the new facts have arrived, is the workaday business of science, but it is never the precise, sure-footed enterprise that it sometimes claims credit for being. Accurate prediction is the accepted measure of successful research, the ultimate reward for the investigator, and also for his sponsors. Convention has it that prediction comes in two sequential epiphanies: first, the scientist predicts that his experiment will turn out the way he predicts; and then, the work done, he predicts what the experiment says about future experiments, his or someone else’s. It has the sound of an intellectually flawless acrobatic act. The mind stands still for a moment, leaps out into midair at precisely the millisecond when a trapeze from the other side is hanging at the extremity of its arc, zips down, out, and up again, lets go and flies into a triple somersault, then catches a second trapeze timed for that moment and floats to a platform amid deafening applause. There is no margin for error. Success depends not so much on the eye or the grasp, certainly not on the imagination, only on the predictable certainty of the release of the bars to be caught. Clockwork.

  It doesn’t actually work this way, and if scientists thought it did, nothing would get done; there would be only a mound of bone-shattered scholars being carried off on stretchers.

  In real life, research is dependent on the human capacity for making predictions that are wrong, and on the even more human gift for bouncing back to try again. This is the way the work goes. The predictions, especially the really important ones that turn out, from time to time, to be correct, are pure guesses. Error is the mode.

  We all know this in our bones, whether engaged in science or in the ordinary business of life. More often than not, our firmest predictions are chancy, based on what we imagine to be probability rather than certainty, and we become used to blundering very early in life. Indeed, the universal experience, mandated in the development of every young child, of stumbling, dropping things, saying the words wrong, spilling oatmeal, and sticking one’s thumb in one’s eye is part of the preparation for adult living. A successful child is one who has learned so thoroughly about his own fallibility that he can never forget it, all the rest of his life.

  In research, the usefulness of error is that it leads to more research, and this is what the word tells us. To err doesn’t really mean getting things wrong; its etymology derives from the Indo-European root ers, signifying simply “to be in motion”; it comes into Latin as errare, meaning “to wander,” but the same root emerges in Old Norse as ras, rushing about looking for something, from which we get the English word race. In order to get anything right, we are obliged first to get a great many things wrong.

  The technical term stochastic is another word filled with the same lesson. We use it today to signify absolute randomness, and certain computers are programmed to turn out strings of stochastic variables in order that biomathematicians can arrange the appropriate controls for experiments involving large numbers of numbers. Stochastic is the jargon term for pure chance.

  But it started out, as happens so often in language, with precisely the opposite meaning. The original Greek root was stokhos, meaning a brick column used as a target; from this the root words meaning “to take aim” were derived.

  We like to think that we take aim and hit targets by taking advantage of a human gift for accuracy and precision. But there is this secret, embedded in the language itself: we become accurate only by trial and error, we tend to wander about, searching for targets. It is being in motion, at random (from a root meaning running, by the way), that permits us to get things done.

  The immunologic system works in this way. When you inject a foreign antigen, horse serum protein, say, into a rabbit, a few lymphocytes are able to recognize this particular protein. They promptly begin to manufacture specific antibodies against the horse serum protein, and other cells of the same line begin dividing rapidly so that small factories for this kind of antibody production (and only this kind) are set up in the lymph nodes. The animal is now sensitized or immune, and stays that way indefinitely. When this phenomenon was first revealed, it was thought that the lymphocytes confronted by the horse serum molecules were somehow taught what to do by the encounter. Each cell, naïve to begin with, was instructed by the presence of the antigen, then learned how to make exactly the right antibody needed to lock precisely on to the foreign protein.

  This notion, the “instructive” theory, reasonable as it sounds, turned out to be wrong. It has now been replaced by what is called the “clonal selection” theory of immune response, supported by an immense body of solid research. According to this theory, lymphocytes are born knowing what to look for, and the individual cells, each with its individual kind of genetically determined receptor, roam the blood and tissues looking for the specific antigens which match the available specific receptors. When a lymphocyte meets its matching antigen, it promptly enlarges and begins dividing into identical progeny, all possessing the same receptors, and the result is a clone of identical cells all prepared to synthesize just the particular antibody needed, now and in the future. It is a tissue of cellular memory.

  Among the billions of lymphocytes made available in a young animal are individual cells capable of recognizing the molecular configuration of almost anything in nature, including totally new, synthetic compounds never before seen in nature. The populations of such knowledgeable cells, and the extent of their collective repertoire, are vastly increased as the animal matures, probably as the result of somatic mutations or rearrangements of genes occurring from time to time in the stem cells which give rise to lymphocytes. The system works, and works with astonishing efficiency, because of the high mobility of the recognizing cells, their large numbers, and their capacity to amplify the antibody production quickly by replicating just the informed cells that are needed for the occasion.
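  The selection-and-amplification logic described here lends itself to a small illustration in code. The Python sketch below is a toy of my own devising, not anything from the original account; the repertoire size, the cell count, and the size of the clone are invented assumptions, and a single integer stands in for the molecular shape a receptor recognizes.

# Toy sketch of clonal selection: lymphocytes are "born knowing what to look
# for" (each carries one fixed receptor shape), an antigen selects only the
# few cells whose receptors happen to match, and those cells are cloned into
# a population of identical antibody producers. All numbers are illustrative.
import random

random.seed(1)

RECEPTOR_SHAPES = range(1000)   # stand-in for the repertoire of possible specificities

# Each lymphocyte is assigned a receptor shape at random, before any antigen appears.
lymphocytes = [random.choice(RECEPTOR_SHAPES) for _ in range(10_000)]

def encounter(antigen_shape, cells):
    """Return the matching cells and the clone they produce on meeting the antigen."""
    matching = [c for c in cells if c == antigen_shape]   # only a small minority recognize it
    clone = matching * 500                                # matching cells divide into identical progeny
    return matching, clone

antigen = 42   # hypothetical foreign protein, e.g. the horse serum antigen above
matching, clone = encounter(antigen, lymphocytes)
print(f"{len(matching)} of {len(lymphocytes)} cells recognized the antigen")
print(f"clone of {len(clone)} identical antibody-producing cells established")

  Run as written, only a handful of the ten thousand cells recognize the antigen; for all the rest, as the next paragraph says, the encounter is wasted motion, which is the point of the caricature.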

  It is eminently efficient, but from the point of view of any individual lymphocyte it must look like nothing but one mistake after another. When the horse serum protein appears, it is not recognizable to any but a small minority of the cell population; for all the rest it is a waste of time, motion, and effort. Also, there are risks all around, chances of making major blunders, endangering the whole organism. Flawed lymphocytes can turn up with an inability to distinguish between self and nonself, and replication of these can bring down the entire structure with the devastating diseases of autoimmunity. Blind spots can exist, or gaps in recognition analogous to color blindness, so that certain strains of animals are genetically unable to recognize the foreignness of certain bacteria and viruses.

  Nevertheless, on balance the immune system works very well, so well indeed that the neurobiologists are currently entertaining (and being entertained by) the same selection theory to explain how the brain works. It is postulated that the thinking units equivalent to lymphocytes are the tiny columns of packed neurones which make up most of the substance of the cerebral cortex. These clusters are the receptors, prepared in advance for confrontation with this or that sensory stimulus, or this or that particular idea. For all the things we will ever see in the universe, including things not yet thought of, the human brain possesses one or another prepared, aware, knowledgeable cluster of connected neurones, as ready to lock on to that one idea as a frog’s brain is for the movement of a fly. The recognition is amplified by synaptic alterations within the column of cells and among the other groups with which the column is connected, and memory is installed.

  Statistically, the probability that any theory like this one, very early in its development, will turn out to be correct is of course vanishingly small, even with the speculative backing of an analogous mechanism in the immune system. The great thing about it, right or wrong, is that it is already causing ripples of interest and excitement, and other investigators are starting to plan experiments, cooking up ideas, their minds wandering, their receptors displayed at full attention, waiting for the right idea to come along. Neurology and immunology may be on the verge of converging.

  * * *

  I wrote a couple of essays a few years back on computers, in which I had a few things to say in opposition to the idea that machines could be made with what the computer people themselves call Artificial Intelligence; they always use capital letters for this technology, and refer to it in their technical papers as AI. I was not fond of the idea and said so, and proceeded to point out the necessity for error in the working of the human mind, which I thought made it different from the computer. In response, I received a great deal of mail, most of it gently remonstrative, but friendly, the worst kind of mail to get on days when things aren’t going well anyway, pointing out to me in the simplest language how wrong indeed I was. Computers do proceed, of course, by the method of trial and error. The whole technology is based on this, can work in no other way.

  One of the things I have always disliked about computers is that they are personally humiliating. They do resemble, despite my wish for it to be otherwise, the operations of the human mind. There are differences, but the Artificial Intelligence people, with their vast and clever computers, have come far enough along to make it clear that the machines behave like thinking machines. If they are right, the thing to worry about is not that they will ultimately be making electronic minds superior to ours but that already ours are so inferior to theirs, mine anyway. I have never heard of a computer, even a simple one, as dedicated to the deliberate process of forgetting information, losing it, restoring it out of context and in misleading forms, or generating such a condition of diffuse, inaccurate confusion as occurs every day in the average human brain. We are already so outclassed as to live in constant embarrassment.

  I have been inputting, as they say, one bit of hard data after another into my brain all my life, some of it thruputting and outputting from the other ear, but a great deal held and stored somewhere, or so I am assured, but I possess no reliable device, anywhere in my circuitry, for retrieving it when needed. If I wish for the simplest of things, someone’s name for example, I cannot send in a straightforward demand with any sure hope of getting back the right name. I am often required to think about something else, something quite unrelated, and wait, hanging around in the mind’s lobby, picking up books and laying them down, pacing around, and then, if it is a lucky day, out pops the name. No computer could be designed by any engineer to function, or malfunction, in this way.
