Shufflebrain
When a bacillus is not responding to a chemical stimulus, it tumbles randomly through the medium. (Its flagella crank randomly.) In the presence of a stimulus, though, the bacterium checks the tumbling action and swims in an orderly fashion. What would happen, Koshland wondered, if he tricked them? What if he placed the organisms in a chemical stimulus (no gradient, though) and then quickly diluted the medium? If the bacteria indeed analyze head-to-tail, they should go right on tumbling, because there'd be no gradient, just a diluted broth. But if bacteria remember a past concentration, diluting the medium should fool them into thinking they were in a gradient. Then they'd stop tumbling.
"The latter was precisely what occurred," Koshland wrote.[1] The bacterial relied on memory of the past concentration to program their behavior in the present.
There was more. Koshland called attention to another feature of bacterial behavior. He pointed out that in responding to a chemical stimulus--in checking the tumbling action-- "the bacterium has thus reduced a complex problem in three-dimensional migration to a very simple on-off device."[2]
When a human being simplifies a complicated task, we speak about intelligence. Thus bacteria show evidence of rudimentary minds. And we can use hologramic theory to account for it.
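To see the logic of that on-off device in the barest terms, here is a minimal sketch in Python. It is my own caricature, not Koshland's protocol; the function name and the tolerance value are invented for illustration.

```python
# A minimal sketch of the temporal comparison: the cell weighs the
# concentration it senses now against a short-lived "memory" of the recent
# past. A difference reads as a gradient (check the tumbling and swim);
# no difference reads as a uniform medium (keep tumbling).

def tumbles(current: float, remembered: float, tolerance: float = 0.05) -> bool:
    """Tumble randomly unless the apparent gradient exceeds the tolerance."""
    return abs(current - remembered) <= tolerance

memory = 1.0                  # concentration stored a moment ago
print(tumbles(1.0, memory))   # True: uniform medium, random tumbling goes on
print(tumbles(0.5, memory))   # False: sudden dilution mimics a gradient,
                              # so the cell checks its tumbling
```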
***
Adler and Tso discovered that attractants induce counterclockwise rotation in E. coli's flagella; repellents crank the appendages clockwise. In terms of hologramic theory in its simplest form, the two opposite reactions are 180 degrees (or pi radians) out of phase. By shifting from random locomotion to movement toward or away from a stimulus, the organism would be shifting from random phase variations in its flagella to harmonic motion--from cacophony to a melody, if they were tooting horns instead of churning their appendages.
Adler and Tso identified the bacterium's sensory apparatus. Like the biochemical motor, it also turned out to be a protein. A search eventually turned up strains of E. coli that genetically lack the protein for a specific attractant. (Specific genes direct the synthesis of specific proteins.)
At the time they published their article, Adler and Tso had not isolated the memory protein (if a unique one truly exists). But the absence of that information doesn't prevent our using hologramic theory to explain the observations: phase spectra must be transformed from the coordinates of the sensory proteins through those of contractile proteins to the flagella and into the wave motion of the propelling cell. Amplitudes can be handled as local constants. The chemical stimulus, in principle, acts on the bacterium's perceptual mechanism much as the reconstruction beam decodes an optical hologram. As tensors in a continuum, the phase values encoded in the sensory protein must be transformed to the coordinate system representing locomotion. The same message passes from sensory to motor mechanisms, and through whatever associates the two. Recall that tensors define the coordinates, not the other way around. Thus, in terms of information, the locomotion of the organism is a transformation of the reaction between the sensory protein and the chemical stimulus, plus or minus the effects of local constants. Absolute amplitudes and noise, products of local constants, would come from such things as the viscosity of the fluid (e.g., thick pus versus sweat), the age and health of the organisms, the nutritional quality of the medium (better to grow on the unwashed hands of a fast-food hamburger flipper than on the just-scrubbed fingernails of a surgical nurse), or whatever else the phase spectrum cannot encode. As for the storage of whole codes in small physical spaces, remember that phase has no prescribed size in the absolute sense. A single molecule can contain a whole message.
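For readers who want the tensor claim in symbols, here is one rough way to write it. The notation is my own, assuming the phase values behave as components of a covariant tensor; nothing in it comes from Adler and Tso.

```latex
% Phase values \phi_j carried from sensory coordinates x to locomotor
% coordinates x' transform covariantly:
\[
  \phi'_i \;=\; \frac{\partial x^j}{\partial x'^i}\,\phi_j ,
\]
% and the contraction with any contravariant vector v^j is invariant:
\[
  \phi'_i\, v'^i \;=\; \phi_j\, v^j .
\]
% The same message thus survives the change of coordinate system, while
% amplitudes enter only as local constants that scale, but do not
% re-encode, the phase spectrum.
```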
***
Evidence of memory in single-celled animals dates back at least to 1911, to experiments of the protozoologists L. M. Day and M. Bentley on paramecia.[3] Day and Bentley put a paramecium into a snug capillary tube--one whose diameter was less than the animal's length. The paramecium swam down to the opposite end of the tube, where it attempted to turn around. In the cramped lumen, the little fellow twisted, curled, ducked, bobbed... but somehow managed by accident to get faced in the opposite direction. What did it do? It immediately swam to the other end and got itself stuck again. And again it twisted, curled, ducked... and managed to get turned around only by pure luck. Then, after a while, Day and Bentley began to notice something. The animal was taking less and less time to complete the course. It was becoming more and more efficient at the tricky turn-around maneuver. Eventually, it learned to execute the move on the first attempt.
Day and Bentley's observations didn't fit the conventional wisdom of their day, nor the criteria for learning among some schools of thought in our own times. Their little paramecia had taught themselves the trick, which in some circles doesn't count as learning. But in the 1950s an animal behaviorist named Beatrice Gelber conditioned paramecia by the same basic approach Pavlov had taken when he used a whiff of meat to make a dog drool at the ringing of a bell.
Gelber prepared a pâté of her animals' favorite bacteria (a single paramecium may devour as many as 5 million bacilli in a single day[4]); then she smeared some of it on the end of a sterile platinum wire. She dipped the wire into a paramecium culture. Immediately her animals swarmed around the wire, which was not exactly startling news. In a few seconds, she withdrew the wire, counted off a few more seconds and dipped it in again. Same results! But on the third trial, Gelber presented the animals with a bare, sterilized wire instead of bacteria. No response! Not at first, anyway. But after thirty trials--two offers of bacteria, one of sterile wire--Gelber's paramecia were swarming around the platinum tip, whether it proffered bacterial pâté or not.[5]
Naturally, Gelber had her critics, those who dismiss the idea that a single cell can behave at all, let alone remember anything. I must admit, a mind isn't easy to fathom in life on such a reduced scale. Yet I've sat entranced at my stereoscopic microscope for hours on end watching protozoa of all sorts swim around in the water with my salamanders. I've often wondered if Gelber's critics had ever set aside their dogmas and doctrines long enough to observe for themselves the truly remarkable capabilities of one-celled animals. Let me recount something I witnessed one Saturday afternoon many years ago.
On occasion, a fungal growth infects a salamander larva's gills. To save the salamander, I remove the growth mechanically. On the Saturday in question, I discovered one such fungal jungle teeming with an assortment of protozoa. What were those beasts? I wondered. Instead of depositing the teased-off mass on the sleeve of my lab coat, I transferred it to a glass slide for inspection under the much greater magnification of the compound phase microscope.[6]
Several different species of protozoa were working the vine-like hyphae of the fungus. I was soon captivated by the behavior of a species I couldn't identify. Its members were moving up and down the hyphae at a brisk pace. At the distal end of a strand, an animal's momentum would carry it out into the surrounding fluid. It would then turn and swim back to its "own" hypha, even when another one was closer. Something spatial or chemical, or both, must be attracting these critters, I thought almost out loud. Just as I was thinking the thought, one animal attracted my attention. It had just taken a wide elliptical course into the fluid; but along the return arc of the excursion, another hypha lay directly in its path. And my little hero landed on it. The animal paused, as though something didn't seem quite right. Meanwhile its sibs were busily working the old territory. After a few tentative pokes at the foreign strand, my animal moved away. But now it landed on a third hypha, shoved off after a brief inspection and landed on still another hypha. Soon it was hopelessly lost on the far side of the microscopic jungle.
But then something happened. Just as I was anticipating its departure, the protozoan hesitated, gave the current hypha a few sniffs and began slowly working up and down the shaft. After maybe five or six trips back and forth along the strand, my animal increased its speed. Within a few minutes, it was working the new hypha as it had worked the old one when it first attracted my attention. I couldn't escape the thought that the little creature had forgotten its old home and had learned the cues peculiar to its new one.
Had I conducted carefully controlled experiments, I might have discovered a purely instinctive basis for all I saw that Saturday. Maybe Gelber's or Day and Bentley's observations can be explained by something other than learning, per se. But, instinctive or learned, the behavior of protozoa--or bacteria--doesn't fit into the same class of phenomena as the action-reaction of a rubber band. Organized information exists in the interval between what they sense and how they respond. We employ identical criteria in linking behavior to a human mind.
***
But higher organisms require a "real" brain in order to learn, don't they? If posing such a question seems ridiculous, consider an observation a physiologist named G. A. Horridge made some years ago on decapitated roaches and locusts.
In some invertebrates, including insects, collections of neurons--ganglia--provide the body with direct innervation, as do the spinal cord and brainstem among vertebrates. Horridge wondered if ganglion cells could learn without the benefit of the insect's brain. To test the question, he devised an experiment that has since become famous enough to bear his name: "The Horridge preparation."
In the Horridge preparation, the body of a beheaded insect is skewered into an electrical circuit. A wire is attached to a foot. Then the preparation is suspended directly above a salt solution. If the leg relaxes and gravity pulls down the foot, the wire dips into the salt solution, closing the electrical circuit and--zappo!--a jolt is delivered to the ganglion cells within the headless body. In time, the ganglion cells learn to avoid the shock by raising the leg high enough to keep the wire out of the salt solution.
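The closed loop is easy to caricature in a few lines of Python. The sketch below is a toy, not a model of Horridge's actual apparatus; the threshold, learning rate and variable names are invented for illustration.

```python
# A toy simulation of the closed loop: leg height is the output; a shock
# arrives whenever the foot wire dips into the saline, and each shock
# nudges the "ganglion" toward tonic flexion.
import random

random.seed(0)
hold_bias = 0.0        # what the ganglion "learns": a bias toward flexion
THRESHOLD = 0.3        # the wire clears the saline above this leg height
LEARNING_RATE = 0.05

for trial in range(200):
    leg_height = hold_bias + random.uniform(-0.1, 0.1)  # posture wanders
    if leg_height < THRESHOLD:       # wire touches saline: circuit closes, zap!
        hold_bias += LEARNING_RATE   # the shock reinforces flexion

print(f"learned flexion bias: {hold_bias:.2f}")  # ends up holding the leg up
```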
An electrophysiologist named Graham Hoyle went on to perfect and refine the Horridge preparation. Working with pithed crabs, he used computers to control the stimuli; he made direct electrophysiological recordings from specific ganglion cells; and, because he could accurately control the stimuli and record the responses, Hoyle was able to teach the cells to alter their frequency of firing, which is a very sophisticated trick. How well did the pithed crabs learn? According to Hoyle, "debrained animals learned better than intact ones."[7]
I'm not suggesting that we replace the university with the guillotine. Indeed, later in this chapter we'll bring the brain back into our story. But first-rate evidence of mind exists in some very hard-to-swallow places. Brain (in the sense of what we house inside our crania) is not a sine qua non of mind.
Aha, but does the behavior of beheaded bugs really have any counterpart in creatures like us?
***
In London, in 1881, the leading holist of the day, F. L. Goltz, arrived from Strasbourg for a public showdown at the International Medical Congress with his arch-rival, the Englishman David Ferrier, who'd gained renown for investigating functional localization within the cerebral cortex.
At the Congress, Ferrier presented monkeys with particular paralyses following specific ablations of what came to be known as the motor cortex. Ferrier's experiments were so dramatically elegant that he won the confrontation. But not before Goltz had presented his "Hund ohne Hirn" (dog without a brain)--an animal that could stand up even though its cerebrum had been amputated.[8]
The decerebrated mammal has been a standard laboratory exercise in physiology courses ever since. A few years ago, a team of investigators, seeking to find out if the mammalian midbrain could be taught to avoid irritation of the cornea, used the blink reflex to demonstrate that "decerebrate cats could learn the conditioned response."[9]
Hologramic theory does not predict that microbes, beheaded bugs or decerebrated dogs and cats necessarily perceive, remember and behave. Experiments furnish the underlying evidence. Some of that evidence, particularly with bacteria, has been far more rigorously gathered than any we might cite in support of memory in rats or monkeys or human beings. But the relative nature of the phase code explains how an organism only 2 micrometers long--or a thousand times smaller than that, if need be--can house complete sets of instructions. Transformations within the continuum give us a theory of how biochemical and physiological mechanisms quite different from those in intact brains and bodies of vertebrates may nevertheless carry out the same overall informational activities.
Yet hologramic theory does not force us to abandon everything else we know.[10] Instead, hologramic theory gives new meaning to old evidence; it allows us to reassemble the original facts, return to where our quest began, and, with T. S. Eliot, "know the place for the first time."
In the last chapter, I pointed out that two universes developed according to Riemann's plan would obey a single unifying principle, curvature, and yet could differ totally if the two varied with respect to dimension. Thus the hologramic continua of both a salamander and a human being depend on the phase code and the tensor transformations therein. But our worlds are far different from theirs by virtue of dimension. Now let's take this statement out of the abstract.
***
It's no great surprise to anyone that a monkey quickly learns to sit patiently in front of a display panel and win peanut after peanut by choosing, say, a triangle instead of a square. By doing essentially the same thing, rats and pigeons follow the same trend Edward Thorndike first called attention to in the 1890s. Even a goldfish, when presented with an apparatus much like a gum machine, soon learns that bumping its snout against a green button will earn it a juicy tubifex worm while a red button brings forth nothing. Do choice-learning experiments mean that the evolution of intelligence is like arithmetic: add enough bacteria and we'd end up with a fish or an ape? In the 1950s a man named Bitterman began to wonder if it was all really that simple. Something didn't smell right to him. Bitterman decided to add a new wrinkle to the choice experiments.
Bitterman began by training various species to perform choice tasks. His animals had to discriminate, say, A from B. He trained goldfish, turtles, pigeons, rats and monkeys to associate A with a reward and B with receiving nothing.
Then Bitterman played a dirty trick. He switched the reward button. Chaos broke out in the laboratory. Even the monkey became confused. But as time went by, the monkey began to realize that now B got you the peanut, not A. Then the rat began to get the idea. And the pigeon too! Meanwhile over in the aquarium, the goldfish was still hopelessly hammering away at the old choice. Unlike the other members of the menagerie, the fish could not kick its old habit and learn a new one.
What about his turtle? It was the most interesting of all Bitterman's subjects. Confronted with a choice involving spatial discrimination, the turtle quickly and easily made the reversal. But when the task involved visual recognition, the turtle was as bad as the goldfish; it couldn't make the switch. It was as though the turtle's behavior lay somewhere between that of the fish and the bird. Turtles, of course, are reptiles. During vertebrate evolution, reptiles appeared after fishes (and amphibians) but before birds and mammals.
Now an interesting thing takes place in the evolution of the vertebrate brain. In reptiles, the cerebrum begins to acquire a cortex on its external surface. Was the cerebral cortex at the root of his results? Bitterman decided to find out by scraping the cortex off the rat's cerebrum. Believe it or not, these rats successfully reversed the habit when given a spatial problem. But they failed when the choice involved visual discrimination. Bitterman's rats acted like turtles!
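A toy model makes the contrast vivid. In the Python sketch below (my own construction, assuming a simple reward-updating rule; nothing here is Bitterman's analysis), a learner that keeps revising its estimates survives the reversal, while one that locks in its first habit does not.

```python
# Two learners face the same "dirty trick": reward moves from choice A to
# choice B halfway through. The flexible learner keeps updating its estimate
# of each choice from reward feedback; the habit-bound learner stops updating
# after the original training. Rates and trial counts are invented numbers.
import random

def final_preference(flexible: bool, trials: int = 400) -> str:
    value = {"A": 0.0, "B": 0.0}
    for t in range(trials):
        rewarded = "A" if t < trials // 2 else "B"   # the reversal
        if random.random() < 0.1:                    # occasional exploration
            choice = random.choice(["A", "B"])
        else:                                        # otherwise, exploit
            choice = max(value, key=value.get)
        reward = 1.0 if choice == rewarded else 0.0
        if flexible or t < trials // 2:              # habit learner freezes
            value[choice] += 0.2 * (reward - value[choice])
    return max(value, key=value.get)

random.seed(0)
print(final_preference(flexible=True))    # "B": kicks the old habit
print(final_preference(flexible=False))   # "A": stuck, like the goldfish
```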
***
Bitterman's experiments illustrate that with the evolution of the cerebral cortex something had emerged in the vertebrate character that had not existed before. Simple arithmetic won't take us from bacterium to human being.
As embryos, each of us re-enacts evolution, conspicuously in appearance but subtly in behavior as well. At first we're more like a slime mold than an animal. Up to about the fourth intrauterine month, we're quite salamander-like. We develop a primate cerebrum between the fourth and sixth month. When the process fails, we see in a tragic way how essential the human cerebral cortex is to the "Human Condition," in the Barry N. Schwartz sense of the term.
Mesencephalia is one of the several terms applied to an infant whose cerebrum fails to differentiate a cortex.[11] A mesencephalic infant sometimes lives for a few months. Any of us (with a twist of an umbilical cord, or if mom took too long a swig on a gin bottle) might have suffered the same fate. Like its luckier kin, the mesencephalic child will cry when jabbed with a diaper pin; will suckle a proffered breast; will sit up; and it can see, hear, yawn, smile and coo. It is a living organism. But human though its genes and chromosomes and legal rights may be, it never develops a human personality, no matter how long it lives. It remains in the evolutionary bedrock out of which the dimensions of the human mind emerge. It stands pat at the stage where the human embryo looked and acted like the creature who crawls from the pond.
Yet there's no particular moment in development when we can scientifically delineate human from nonhuman: no specific minute, hour or day through which we can draw a neat draftsman's line. Development is a continuous process. The embryo doesn't arrive at the reptilian stage, disassemble itself and construct a mammal, de novo. In embryonic development, what's new harmoniously integrates with what's already there to move up from one ontogenetic step to the next.[12] The embryo's summations demand Riemann's nonlinear rule: curvature!
What is arithmetic, anyway? What do we mean by addition and subtraction? At the minimum, we need discrete sets. The sets must equal each other--or be reducible to equal sets by means of constants--and their relationships must be linear. The correct adjective for describing a consecutive array of linear sets is contiguous (not continuous), meaning that the successive members touch without merging into and becoming a part of each other--just the opposite of Riemann's test of continuity. This may seem utterly ridiculous. But if the sets in 1 + 1 + 1 surrender parts of themselves during addition, their sum will be something less than 3. We literally perform addition and subtraction with our fingers, abacus beads, nursery blocks and digital computers--any device that uses discrete, discontinuous magnitudes.
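The point about merging sets can be checked directly. In the Python sketch below (the particular sets are arbitrary examples), discrete sets add linearly, but sets whose members merge yield a whole smaller than the sum of its parts.

```python
# Addition behaves only when the sets stay discrete (contiguous).

a, b, c = {1, 2}, {3, 4}, {5, 6}     # contiguous: no shared members
print(len(a) + len(b) + len(c))      # 6
print(len(a | b | c))                # 6 -- the union agrees with addition

x, y, z = {1, 2}, {2, 3}, {3, 4}     # merging: successive sets share members
print(len(x) + len(y) + len(z))      # 6
print(len(x | y | z))                # 4 -- the whole is less than the sum,
                                     # just as the passage warns
```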