Asimov's New Guide to Science


by Isaac Asimov


  Those who take pleasure in the sensations they experience when under the influence of a hallucinogen refer to this as mind expansion—apparently indicating that they sense, or think they sense, more of the universe than they would under ordinary conditions. But then, so do drunks once they bring themselves to the stage of delirium tremens. The comparison is not as unkind as it may seem, for investigations have shown that a small dose of LSD, in some cases, can produce many of the symptoms of schizophrenia!

  What can all this mean? Well, serotonin (which is structurally like the amino acid tryptophan) can be broken down by means of an enzyme called amine oxidase, which occurs in brain cells. Suppose that this enzyme is taken out of action by a competitive substance with a structure like serotonin’s—lysergic acid, for example. With the breakdown enzyme removed, serotonin will accumulate in the brain cells, and its level may rise too high. This will upset the serotonin balance in the brain and may bring on the schizophrenic state.
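
  The chain of reasoning can be caricatured in a few lines of code. The sketch below is a toy model only, written in Python: serotonin is produced at a steady rate and destroyed by amine oxidase at a rate proportional to its level, while a competitive inhibitor (lysergic acid, in the argument above) simply puts some fraction of the enzyme out of action. All the rate constants are invented for illustration, not physiological measurements.

```python
# Toy model: serotonin under first-order enzymatic breakdown.
# All constants are invented for illustration, not physiological data.

def simulate(production=1.0, breakdown_rate=0.5, inhibition=0.0,
             steps=2000, dt=0.1):
    """Return the final serotonin level.

    inhibition -- fraction of amine oxidase blocked by a competitor
    (0.0 = enzyme fully active; 0.9 = nine-tenths of it occupied).
    """
    level = production / breakdown_rate          # normal steady state
    effective_rate = breakdown_rate * (1.0 - inhibition)
    for _ in range(steps):
        level += dt * (production - effective_rate * level)
    return level

print(f"enzyme free:    {simulate(inhibition=0.0):.2f}")   # 2.00
print(f"enzyme blocked: {simulate(inhibition=0.9):.2f}")   # about 20.00
```

  With nine-tenths of the enzyme blocked, the level settles roughly tenfold higher, which is just the sort of rise the argument invokes.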

  Is it possible that schizophrenia arises from some naturally induced upset of this sort? The manner in which a tendency to schizophrenia is inherited certainly makes it appear that some metabolic disorder (one, moreover, that is gene-controlled) is involved. In 1962, it was found that with a certain course of treatment, the urine of schizophrenics often contained a substance absent from the urine of nonschizophrenics. The substance eventually turned out to be a chemical called dimethoxyphenylethylamine, with a structure that lies somewhere between adrenalin and mescaline. In other words, certain schizophrenics seem, through some metabolic error, to form their own hallucinogens and to be, in effect, on a permanent drug-high.

  Not everyone reacts identically to a given dose of one drug or another. Obviously, however, it is dangerous to play with the chemical mechanism of the brain. To become a mental cripple is a price surely too high for any amount of “mind-expanding” fun. Nevertheless, the reaction of society to drug use—particularly to that of marijuana, which has not yet been definitely shown to be as harmful as other hallucinogens—tends to be overstrenuous. Many of those who inveigh against the use of drugs of one sort or another are themselves thoroughly addicted to the use of alcohol or tobacco, both of which, in the mass, are responsible for much harm both to the individual and to society. Hypocrisy of this sort tends to decrease the credibility of much of the antidrug movement.

  MEMORY

  Neurochemistry also offers a hope for understanding that elusive mental property known as memory. There are, it seems, two varieties of memory: short-term and long-term. If you look up a phone number, it is not difficult to remember it until you have dialed; it is then automatically forgotten and, in all probability, will never be recalled again. A telephone number you use frequently, however, enters the long-term memory category. Even after a lapse of months, you can dredge it up.

  Yet even of what we would consider long-term memory items, much is lost. We forget a great deal and even, alas, forget much of vital importance (as every student facing an examination is woefully aware). But is it truly forgotten? Has it really vanished, or is it simply so well stored that it is difficult to recall—buried, so to speak, under too many extraneous items?

  The tapping of such hidden memories has become an almost literal tap. The American-born surgeon Wilder Graves Penfield, at McGill University in Montreal, while operating on a patient’s brain, accidentally touched a particular spot that caused the patient to hear music. The same thing happened over and over again: the patient could be made to relive an experience in full, while remaining quite conscious of the present. Proper stimulation can apparently reel off memories with great accuracy. The area involved is called the interpretative cortex. It may be that the accidental tapping of this portion of the cortex gives rise to the phenomenon of déjà vu (the feeling that something has happened before) and to other experiences sometimes attributed to extrasensory perception.

  But if memory is so detailed, how can the brain find room for it all? It is estimated that, in a lifetime, a brain can store 1,000,000,000,000,000 (a million billion) units of information. To store so much, the units of storage must be of molecular size. There would be room for nothing more.

  Suspicion is currently falling on ribonucleic acid (RNA), in which the nerve cell, surprisingly enough, is richer than almost any other type of cell in the body. This is surprising because RNA is involved in the synthesis of protein (see chapter 13) and is therefore usually found in particularly high quantity in tissues that produce large amounts of protein, either because they are actively growing or because they are producing copious quantities of protein-rich secretions. The nerve cell falls into neither classification.

  A Swedish neurologist, Holger Hydén, developed techniques that could separate single cells from the brain and then analyze them for RNA content. He took to subjecting rats to conditions where they were forced to learn new skills—balancing on a wire for long periods of time, for instance. By 1959, he had discovered that the brain cells of rats forced to learn had an RNA content as much as 12 percent higher than that of the brain cells of rats allowed to go their normal way.

  The RNA molecule is so very large and complex that, if each unit of stored memory is marked off by an RNA molecule of distinctive pattern, we need not worry about capacity. So many different RNA patterns are available that even a number such as a million billion is insignificant in comparison.
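
  The arithmetic behind that assertion is easily checked. An RNA chain is a string drawn from four different nucleotide bases, so a chain n units long can take on 4^n distinct patterns. The short Python calculation below, using the million-billion figure quoted earlier, shows that a chain of only 25 units already suffices; actual RNA molecules run to hundreds or thousands of units.

```python
# How long must an RNA chain be to offer a million billion (10^15)
# distinct patterns? Each position holds one of four bases (A, G, C, U).

MEMORY_UNITS = 10**15        # the lifetime estimate quoted above

n = 1
while 4**n < MEMORY_UNITS:
    n += 1
print(f"a chain of {n} bases gives 4^{n} = {4**n:.3e} patterns")  # n = 25
```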

  But ought one to consider RNA by itself? RNA molecules are formed according to the pattern of DNA molecules in the chromosomes. Is it that each of us carries a vast supply of potential memories—a memory bank, so to speak—in the DNA molecules we were born with, called upon and activated by actual events with appropriate modifications?

  And is RNA the end? The chief function of RNA is to form specific protein molecules. Is it the protein, rather than the RNA, that is truly related to the memory function?

  One way of testing this hypothesis is to make use of a drug called puromycin, which interferes with protein formation by way of RNA. The American man-and-wife team Louis Barkhouse Flexner and Josepha Barbara Flexner conditioned mice to solve a maze, then immediately injected puromycin. The mice forgot what they had learned. The RNA molecule was still there, but the key protein molecule could not be formed. Using puromycin, the Flexners showed that while short-term memory could be erased in this way in mice, long-term memory could not. The proteins for the latter had presumably already been formed.

  And yet it may be that memory is more subtle and is not to be fully explained on the simple molecular level. There are indications that patterns of neural activity may be involved, too. Much remains to be done.

  Automatons

  It is only very recently, however, that the full resources of science have been turned upon the effort to analyze the functioning of living tissues and organs, in order that the manner in which they perform—worked out hit-and-miss over billions of years of evolution—might be imitated in man-made machines. This study is called bionics, a term—suggested by “biological electronics” but much broader in scope—coined by the American engineer Jack Steele in 1960.

  As one example of what bionics might do, consider the structure of dolphin skin. Dolphins swim at speeds that would require 2.6 horsepower if the water about them were as turbulent as it would be about a vessel of the same size. For some reason, water flows past the dolphin without turbulence, and therefore little power is consumed overcoming water resistance. Apparently this happens because of the nature of dolphin skin. If we can reproduce that effect in vessel walls, the speed of an ocean liner could be increased and its fuel consumption decreased—simultaneously.

  Then, too, the American biophysicist Jerome Lettvin studied the frog’s retina in detail by inserting tiny platinum electrodes into its optic nerve. It turned out that the retina did not merely transmit a melange of light and dark dots to the brain and leave it to the brain to do all the interpretation. Rather, there were five different types of cells in the retina, each designed for a particular job. One cell reacted to edges—that is, to sudden changes in the nature of illumination, as at the edge of a tree marked off against the sky. A second reacted to dark curved objects (the insects eaten by the frog). A third reacted to anything moving rapidly (a dangerous creature that might better be avoided). A fourth reacted to dimming light; and a fifth, to the watery blue of a pond. In other words, the retinal message went to the brain already analyzed to a considerable degree. If man-made sensors made use of the tricks of the frog’s retina, they could be made far more sensitive and versatile than they now are.
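
  How much a sensor gains from such pre-analysis is easy to demonstrate. The sketch below is a deliberately crude caricature in Python, not Lettvin’s physiology: the “retina” is a short row of brightness numbers, and each “cell” is a small function that reports only its own feature (an edge, a dimming, or rapid movement).

```python
# Sketch of a retina that pre-analyzes its input, after the frog's example.
# The "image" is a list of brightness values (0 = dark, 1 = bright);
# two successive frames let a detector notice change over time.

def edge_detector(frame, threshold=0.5):
    """Fire at positions where brightness jumps sharply (an edge)."""
    return [i for i in range(1, len(frame))
            if abs(frame[i] - frame[i - 1]) > threshold]

def dimming_detector(prev, curr):
    """Fire at positions that got darker between frames."""
    return [i for i, (a, b) in enumerate(zip(prev, curr)) if b < a]

def movement_detector(prev, curr, threshold=0.5):
    """Fire if anything changed a lot anywhere: something moved fast."""
    return any(abs(a - b) > threshold for a, b in zip(prev, curr))

sky_and_tree = [1.0, 1.0, 1.0, 0.1, 0.1]      # bright sky, dark trunk
next_frame   = [1.0, 1.0, 0.1, 0.1, 0.1]      # the dark region advanced

print("edges at:", edge_detector(sky_and_tree))                   # [3]
print("dimming at:", dimming_detector(sky_and_tree, next_frame))  # [2]
print("movement:", movement_detector(sky_and_tree, next_frame))   # True
```

  The brain, in this caricature, receives only the detectors’ reports rather than the raw dots, which is precisely the economy Lettvin found in the frog.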

  If, however, we are to build a machine that will imitate some living device, the most attractive possibility is the imitation of that unique device that interests us most profoundly—the human brain.

  The human mind is not a “mere” machine; it is safe enough to say that. On the other hand, even the human mind, which is certainly the most complex object or phenomenon we know of, has certain aspects that remind us of machines. And the resemblances can be important.

  Thus, if we analyze what it is that makes a human mind different from other minds (to say nothing of different from mindless objects), one thought that might strike us is that, more than any other object, living or nonliving, the human mind is a self-regulating system. It is capable of controlling not only itself but also its environment. It copes with changes in the environment, not by yielding but by reacting according to its own desires and standards. Let us see how close a machine can come to this ability.

  About the simplest form of self-regulating mechanical device is the controlled valve. Crude versions were devised as early as 50 A.D. by Hero of Alexandria, who used one in a device to dispense liquid automatically. A very elementary version of a safety valve is exemplified in a pressure cooker invented by Denis Papin in 1679. To keep the lid on against the steam pressure, he placed a weight on it, but he used a weight light enough so that the lid could fly off before the pressure rose to the point where the pot would explode.

  The present-day household pressure cooker or steam boiler has more sophisticated devices for this purpose (such as a plug that will melt when the temperature gets too high); but the principle is the same.
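
  The physics Papin exploited comes down to a single comparison: the lid lifts as soon as the steam’s push on its underside exceeds the weight holding it down. A back-of-the-envelope sketch, with invented numbers, makes the threshold explicit.

```python
# Papin's weighted lid as a threshold: steam force versus lid weight.
# The weight and lid area below are invented, illustrative numbers.

G = 9.81                      # gravitational acceleration, m/s^2

def pop_off_pressure(weight_kg, lid_area_m2):
    """Gauge pressure (Pa) at which the weighted lid starts to lift."""
    return weight_kg * G / lid_area_m2

# A 2 kg weight resting on a lid of 50 cm^2 (0.005 m^2):
limit = pop_off_pressure(2.0, 0.005)
print(f"lid lifts about {limit / 1000:.1f} kPa above atmospheric")  # ~3.9 kPa
```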

  FEEDBACK

  Of course, this is a “one shot” sort of regulation. But it is easy to think of examples of continuous regulation. A primitive type was a device patented in 1745 by an Englishman, Edmund Lee, to keep a windmill facing squarely to the wind. He devised a fantail with small vanes that caught the wind whenever the wind shifted direction; the turning of these vanes operated a set of gears that rotated the windmill itself so that its main vanes were again head on to the wind in the new quarter. In that position, the fantail vanes remained motionless; they turned only when the windmill was not facing the wind.

  But the archetype of modern mechanical self-regulators is the governor invented by James Watt for his steam engine (figure 17.4). To keep the steam output of his engine steady, Watt conceived a device consisting of a vertical shaft with two weights attached to it laterally by hinged rods, allowing the weights to move up and down. The pressure of the steam whirled the shaft. When the steam pressure rose, the shaft whirled faster, and the centrifugal force drove the weights upward. In moving up, they partly closed a valve, choking off the flow of steam. As the steam pressure fell, the shaft whirled less rapidly, gravity pulled the weights down, and the valve opened. Thus, the governor kept the shaft speed, and hence the power delivered, at a uniform level. Each departure from that level set in train a series of events that corrected the deviation. This is called feedback: the error itself continually sends back information and serves as the measure of the correction required.

  Figure 17.4. Watt’s governor.
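
  Restated in modern terms, the governor is a negative-feedback loop, and its whole logic fits in a few lines of code. The sketch below is illustrative only; the gain and the speeds are invented numbers, not engineering data.

```python
# Watt's governor as negative feedback: the error is its own correction.
# Gain and speeds are invented numbers.

TARGET_SPEED = 100.0      # desired shaft speed (arbitrary units)
GAIN = 0.5                # how strongly the rising weights throttle the valve

speed = 60.0              # engine just started, running too slowly
for step in range(12):
    error = speed - TARGET_SPEED      # positive when running too fast
    speed += -GAIN * error            # fast: weights rise, valve closes;
                                      # slow: weights fall, valve opens
    print(f"step {step:2d}: speed = {speed:6.2f}")
# The speed converges on 100; every departure generates its own correction.
```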

  A very familiar example of a feedback device is the thermostat, first used in crude form by the Dutch inventor Cornelis Drebbel in the early seventeenth century. A more sophisticated version, still used today, was invented in principle by a Scottish chemist named Andrew Ure in 1830. Its essential component consists of two strips of different metals laid against each other and soldered together. Since the two metals expand and contract at different rates with changes in temperature, the strip bends. The thermostat is set, say, at 70° F. When the room temperature falls below that, the bimetallic strip bends in such a fashion as to make a contact that closes an electric circuit and turns on the heating system. When the temperature rises above 70° F, the strip bends back enough to break the contact. Thus, the heater regulates its own operation through feedback.
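
  In the same terms, the thermostat is about the simplest controller there is: it switches the heater fully on or fully off according to a single comparison with the set point. A minimal sketch, again with invented heating and cooling rates:

```python
# A thermostat as on/off feedback. Heating and cooling rates are invented.

SET_POINT = 70.0            # degrees Fahrenheit

def minute_step(temp):
    heater_on = temp < SET_POINT    # the bending strip makes or breaks contact
    temp += 1.5 if heater_on else -0.8
    return temp, heater_on

temp = 65.0
for minute in range(10):
    temp, heater_on = minute_step(temp)
    state = "on" if heater_on else "off"
    print(f"minute {minute}: {temp:5.1f} F, heater {state}")
```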

  It is feedback that similarly controls the workings of the human body. To take one example of many, the glucose level in the blood is controlled by the insulin-producing pancreas, just as the temperature of a house is controlled by the heater. And just as the working of the heater is regulated by the departure of the temperature from the norm, so the secretion of insulin is regulated by the departure of the glucose concentration from the norm. A too-high glucose level turns on the insulin, just as a too-low temperature turns on the heater. Likewise, just as a thermostat can be turned up to a higher temperature, so an internal change in the body, such as the secretion of adrenalin, can raise the operation of the human body to a new norm, so to speak.

  Self-regulation by living organisms to maintain a constant norm was named homeostasis by the American physiologist Walter Bradford Cannon, who was a leader in investigation of the phenomenon in the first decades of the twentieth century.

  The feedback process in living systems is essentially the same as in machines and ordinarily is not given a special name. The term biofeedback is reserved, as a matter of convenience, for cases in which voluntary control of autonomic nerve functions is sought; the distinction is an artificial one.

  Most systems, living and nonliving, lag a little in their response to feedback. For instance, after a heater has been turned off, it continues for a time to emit its residual heat; conversely, when it is turned on, it takes a little time to heat up. Therefore, the room temperature does not hold at 70° F but oscillates around that level; it is always overshooting the mark on one side or the other. This phenomenon, called hunting, was first studied in the 1830s by George Airy, the Astronomer Royal of England, in connection with devices he had designed to turn telescopes automatically with the motion of the earth.
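
  Lag is what turns the crisp switching of the last sketch into oscillation. In the toy model below, the heater’s output dies away over several minutes instead of stopping the instant the contact breaks (the lag fraction and the rates are invented numbers); the temperature then hunts about the set point instead of holding it.

```python
# Hunting: feedback plus lag overshoots on both sides of the set point.
# Illustrative constants; 'output' decays instead of stopping instantly.

SET_POINT = 70.0
LAG = 0.7                  # fraction of heater output carried into the next minute

temp, output = 65.0, 0.0
for minute in range(20):
    demand = 2.0 if temp < SET_POINT else 0.0   # thermostat's on/off signal
    output = LAG * output + (1 - LAG) * demand  # residual heat: output lags demand
    temp += output - 0.8                        # heating minus steady loss
    print(f"minute {minute:2d}: {temp:5.1f} F")
# The temperature swings above and below 70, never holding it exactly.
```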

  Hunting is characteristic of most living processes, from control of the glucose level in the blood to conscious behavior. When you reach to pick up an object, the motion of your hand is not a single movement but a series of movements continually adjusted in both speed and direction, with the muscles correcting departures from the proper line of motion, those departures being judged by the eye. The corrections are so automatic that you are not aware of them. But watch an infant, not yet practiced in visual feedback, try to pick up something: the child overshoots and undershoots because the muscular corrections are not precise enough. And victims of nerve damage that interferes with the ability to utilize visual feedback go into pathetic oscillations, or wild hunting, whenever they attempt a coordinated muscular movement.

  The normal, practiced hand goes smoothly to its target and stops at the right moment because the control center looks ahead and makes corrections in advance. Thus, when you drive a car around a corner you begin to release the steering wheel before you have completed the turn, so that the wheels will be straight by the time you have rounded the corner. In other words, the correction is applied in time to avoid overshooting the mark to any significant degree.

  It is the chief role of the cerebellum, evidently, to take care of this adjustment of motion by feedback. It looks into the future and predicts the position of the arm a few instants ahead, organizing motion accordingly. It keeps the large muscles of the torso in constantly varying tensions to keep you in balance and upright if you are standing. It is hard work to stand and “do nothing”; we all know how tiring just standing can be.

  Now this principle can be applied to a machine. Matters can be arranged so that, as the system approaches the desired condition, the shrinking margin between its actual state and the desired state will automatically shut off the corrective force before it overshoots. In 1868, a French engineer, Leon Farcot, used this principle to invent an automatic control for a steam-operated ship’s rudder. As the rudder approached the desired position, his device automatically closed down the steam valve; by the time the rudder reached the specified position, the steam pressure had been shut off. When the rudder moved away from this position, its motion opened the appropriate valve so that it was pushed back. Farcot called his device a servomechanism, and in a sense it ushered in the era of automation (a term introduced in 1951 by the American engineer John Diebold).
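
  Farcot’s principle translates directly into code: drive the rudder at a rate proportional to the remaining error, and the push dies away of its own accord as the target is reached. A minimal sketch, with illustrative numbers:

```python
# A servomechanism in miniature: steam pushes the rudder at a rate
# proportional to the remaining error, so the push vanishes on arrival.
# Gain, angles, and step count are illustrative numbers.

def servo(angle, target, gain=0.4, steps=15):
    for step in range(steps):
        error = target - angle
        angle += gain * error          # valve opens in proportion to error
        print(f"step {step:2d}: rudder at {angle:6.2f} degrees")
    return angle

servo(angle=0.0, target=20.0)
# The rudder closes smoothly on 20 degrees with no overshoot,
# because the correcting force shrinks as the error shrinks.
```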

  EARLY AUTOMATION

  The invention of mechanical devices that imitated human foresight and judgment, no matter how crudely, was enough to set off the imagination of some into considering the possibility of some device that could imitate human actions more or less completely—an automaton. Myths and legends are full of them.

  To translate the accomplishments of gods and magicians into those of mere men required the gradual development of clocks during the Middle Ages. As clocks advanced in complexity, clockwork (the use of intricately related wheels that cause a device to perform certain motions in the right order and at appropriate times) made it possible to consider the manufacture of objects that mimic the actions associated with life more closely than ever.

 
