
A Step Farther Out


by Jerry Pournelle


  It was an impressive sight. Nowadays, though, I can carry on my belt a TI 59—which is an order of magnitude more powerful than was ILLIAC, and a very great deal more reliable.

  Every year since the 1950's the information storable in a given calculator chip volume has doubled; and that trend shows no sign of slowing. So, although the human brain remains, despite all the micro-chip technology, the most efficient data-storage system ever built, electronics is catching up. And the brain is nowhere near as reliable as are the computers of today. Our brain does, though, have the capability for packing a lot of data into a small space and retrieving it quickly.

  The brain has another characteristic that's very useful: the information doesn't seem to be stored in any specific place. Karl Lashley, after 30 years of work trying to find the engram—the exact site of any particular memory—gave up. All our memories seem to be stored all over our brains.

  That is: Lashley, and now others, train specific reflexes and memory patterns into experimental animals, then extirpate portions of their brains. Take out a chunk here, or a chunk there: surely you'll get the place where the memory is stored if you keep trying, won't you?

  No. Short of killing the animal, the memory remains, even when up to 90% of the cortical matter has been removed. Lashley once whimsically told a conference that he'd just demonstrated that learning isn't possible.

  The experiment has been duplicated a number of times, and the evidence of human subjects who've had brain damage as a result of accidents confirms it: our various memories are stored, not in one specific place, but in a lot of places; literally, all over our cortices. That's got to be a clue to how the brain works.

  A second characteristic of the brain is that it's fast. Consider visual stimulation as an example. You see an unexpected object. You generally don't have to stop to think what it is: a hammer, a saucer, a pretty girl, the Top Sergeant, an ice cream cone, a saber-toothed tiger about to spring, or whatever; you just know, and know very quickly.

  Yet the brain had to take the impulses from the light pattern on the retina and do something with them. What? Introspection hints that a number of trial-and-error operations were conducted: "test" patterns were compared with the stimulus object, until there was a close correspondence, and then the "aha!" signal was sent. If, somehow, the "aha!" is sent up for the wrong test pattern, it takes conscious effort to get rid of that and "see" the stimulus as it should be seen.

  We're still trying to teach computers to recognize a small number of very precisely drawn patterns, yet yesterday I met a man I hadn't seen for ten years and didn't know well then, and recognized him instantly. Dogs and cats do automatically what we sweat blood to teach computers. If only we could figure out how the brain does it. . .

  A number of neuro-scientists think they've found the proper approach at last. It's only a theory, and it may be all wrong, but there is now a lot of evidence that the human brain works like a hologram. Even if that isn't how our internal computer works, a holographic computer could, at least in theory, store information as compactly and retrieve it as rapidly as the human brain, and thus make possible the self-contained robots dear to science fiction.

  * * *

  The first time Dr. David Goodman proposed the holographic brain model to me, I thought he'd lost his mind. Holograms I understood: you take a laser beam and shine part of it onto a photographic plate, while letting the rest fall on an object and be reflected off the object onto the film. The result is a messy interference pattern on the film that, when illuminated with coherent light of the proper frequency, will reproduce an image of the object. Marvelous and all that, but there aren't any laser beams in our heads. It didn't make sense.

  Well, of course it does make sense. There's no certainty that holography is the actual mechanism for memory storage in human beings, but we can show a mechanism the brain might use to do it that way. First, though, let's look at some of the characteristics of holograms.

  They've been around a long time, to begin with, and they don't need lasers. Lasers are merely a rather convenient (if you're rich enough to afford them) source of very coherent light. If you don't have a laser, a monochromatic filter will do the job nicely, or you can use a slit, or both.

  A coherent light beam differs from ordinary light in the same way that a platoon of soldiers marching in step differs from a mob running onto the field after the football game. The light is all the same frequency (marching in step) and going in the same direction (parallel rays). Using any source of coherent light to make a hologram of a single point gives you a familiar enough thing: a Fresnel lens, which looks like a mess of concentric circles. Holography was around as "lensless photography" back before WW II.
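  Here, purely as illustration, is a minimal numerical sketch of that single-point case: a plane reference wave interfering with light spreading from one point gives exactly the concentric-ring zone-plate pattern just described. The wavelength, distances, and grid size are invented values for the example, not anything from the column.

```python
import numpy as np

# Toy interference calculation: plane reference wave plus a spherical
# wave from a single point source, recorded on a flat "plate".
wavelength = 0.5e-6                  # 500 nm green light (illustrative)
k = 2 * np.pi / wavelength           # wavenumber
z = 0.1                              # point source 10 cm behind the plate

# sample a 2 mm x 2 mm plate
x = np.linspace(-1e-3, 1e-3, 512)
X, Y = np.meshgrid(x, x)

reference = np.ones_like(X, dtype=complex)   # "marching in step": a plane wave
r = np.sqrt(X**2 + Y**2 + z**2)              # path length from the point source
object_wave = np.exp(1j * k * r)             # spherical wave (constant amplitude, for simplicity)

# what the film records is the intensity of the summed fields:
# concentric bright and dark rings, i.e. a Fresnel zone plate
plate = np.abs(reference + object_wave) ** 2
```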

  As soon as you have several points, the neat appearance vanishes, of course. A hologram of something complicated, such as several chessmen or a group of toy soldiers, is just a smeared film with strange patterns on it.

  Incidentally, you can buy holograms from Edmund Scientific or a number of other sources, and they're fascinating things. I've even seen one of a watch with a magnifying glass in front of it. Because the whole image, from many viewpoints, is stored in the hologram, you can move your head around until you see the watch through the image of the magnifying glass—and then you can read the time. Otherwise the watch numerals are too small to see.

  Hmm. Our mental images have the property of viewpoint changes; we can recall them from a number of different angles.

  Another interesting property of holograms is that any significant part of the photographic plate contains the whole picture. If you want to give a friend a copy of your hologram, simply snip it in half; then you've both got one. He can do the same thing, of course, and so can the guy he gave his to. Eventually, when it gets small enough, the images become fuzzy; acuity and detail have been lost, but the whole image is still there.
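  A toy numerical analogue of that snipping, for what it's worth: in a Fourier-transform hologram the plate records the scene's spatial frequencies, so throwing away part of the record blurs the reconstruction without removing any object from it. The scene, its size, and where the cut falls are all made up for the example.

```python
import numpy as np

# toy "scene": a few bright points standing in for the toy soldiers
scene = np.zeros((256, 256))
scene[60, 60] = scene[100, 180] = scene[200, 90] = 1.0

# a Fourier hologram records the scene's spatial-frequency content
plate = np.fft.fftshift(np.fft.fft2(scene))

# "snip the plate in half" and give a friend the other piece
half_plate = np.zeros_like(plate)
half_plate[:, :128] = plate[:, :128]

# reconstruct from the fragment: every point of the scene is still there,
# just smeared out; detail is lost, but nothing is missing
reconstruction = np.abs(np.fft.ifft2(np.fft.ifftshift(half_plate)))
for row, col in [(60, 60), (100, 180), (200, 90)]:
    print(reconstruction[row, col] > reconstruction.mean())   # True for all three points
```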

  That sounds suspiciously like the results Lashley got with his brain experiments, and also like reports from soldiers with severe brain tissue losses: fuzzy memories, but all of them still there. (I'll come back to that point and deal with aphasias and the like in a moment.)

  Holograms can also be used as recognition filters. Let us take a hologram of the word "Truth" for example, and view a page of print through it. Because the hologram is blurry, we can't read the text: BUT, if the word "truth" is on that page, whether it's standing alone or embedded in a longer word, you will see a very bright spot of light at the point where the word will be found when you remove the filter.

  The printed word can be quite different from the one used to make the hologram, by the way. Different type fonts can be employed, and the letters can be different sizes. The spot of light won't be as bright or as sharp if the hologram was made from a type font different from the image examined, but it will still be there, because it's the pattern that's important.
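  In modern terms this is a matched filter, and the same trick is easy to try digitally: correlating a page with the word's pattern puts a sharp peak wherever the word occurs. The toy "page," the word pattern, and all the sizes below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "page": background clutter with one copy of the "word" pasted in
word = (rng.random((8, 20)) > 0.5).astype(float)   # stand-in for the word's ink pattern
page = 0.2 * rng.random((128, 128))
page[40:48, 60:80] += word                          # the word sits at row 40, column 60

# an optical matched filter acts like cross-correlation: multiply the page's
# spectrum by the conjugate spectrum of the word, then transform back
W = np.fft.fft2(word - word.mean(), s=page.shape)   # zero-mean template, zero-padded
P = np.fft.fft2(page)
correlation = np.fft.ifft2(P * np.conj(W)).real

# the "bright spot of light" is the correlation peak; it lands at row 40, column 60
print(np.unravel_index(correlation.argmax(), correlation.shape))
```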

  The Post Office is working on mail-sorting through use of this technique. Computers can be taught to recognize patterns this way. The police find it interesting too: you can set up a gadget to watch the freeways and scream when it sees a 1964 Buick, but ignore everything else; or examine license plates for a particular number.

  There's another possibility. Cataracts are cloudy lenses. If you could just manage to make a hologram of the cataracted lens, you could, at least in theory, give the sufferer a pair of glasses that would compensate for his cataracts. That technique isn't in the very near future, but it looks promising.

  You'll have noticed that this property of holograms sounds a bit like the brain's pattern-search when confronted with an unfamiliar object. A large number of test patterns can be examined "through" a hologram of the stimulus object, and one will stand out.

  Brain physiologists have found another property of the brain that's shared with a holographic computer. The brain appears to perform a Fourier transform on data presented to it; and a hologram can be transmitted as a Fourier-transform message.

  A Fourier transform is a mathematical operation that takes a complex wave form, pattern, signal, or what have you, and breaks it down into a somewhat longer, but precisely structured, signal of simpler frequencies. If you have a very squiggly line, for example, it can be turned into a string of numbers and transmitted that way, then be reconstructed exactly. The brain appears to make this kind of transformation of data.
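  A quick sketch of that round trip, with an invented squiggle: the transform turns the waveform into a string of coefficients, one per simple frequency, and the inverse transform rebuilds the original exactly.

```python
import numpy as np

# a "very squiggly line": 256 samples of an arbitrary, made-up waveform
t = np.linspace(0, 1, 256, endpoint=False)
squiggle = (np.sin(2 * np.pi * 3 * t)
            + 0.4 * np.sin(2 * np.pi * 17 * t + 1.0)
            + 0.1 * np.random.default_rng(1).standard_normal(256))

# the Fourier transform turns it into a string of numbers,
# one complex amplitude per simple frequency
coefficients = np.fft.fft(squiggle)

# ...which could be transmitted and then reconstructed exactly
rebuilt = np.fft.ifft(coefficients).real
print(np.allclose(rebuilt, squiggle))   # True
```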

  Once a message (or image, or memory) is in Fourier format, it's easy to compare it systematically with other messages, because it is patterned into a string of information; you have only to go through those whose first term is the same as your unknown's, ignoring all the millions of others, and then find those with similar second terms, etc., until you've located either the proper matching stored item or one very close to it. If our memories are stored either in Fourier format or in a manner easily converted to that, we've a mechanism for the remarkable ability we have to recognize objects so swiftly.
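  Here is one naive way to mimic that term-by-term search, purely as illustration: store each "memory" as its string of Fourier coefficient magnitudes, then discard candidates one coefficient at a time until a single match remains. The library of signals, the names, and the tolerance are all invented; nothing here is claimed about how the brain actually indexes memories.

```python
import numpy as np

def closest_match(unknown, library, tolerance=2.0):
    """Narrow the library term by term: at each step keep only the stored
    items whose next Fourier coefficient is close to the unknown's."""
    u = np.abs(np.fft.rfft(unknown))
    candidates = {name: np.abs(np.fft.rfft(sig)) for name, sig in library.items()}
    for k in range(len(u)):
        survivors = {name: c for name, c in candidates.items()
                     if abs(c[k] - u[k]) <= tolerance}
        if not survivors:          # nothing matches this closely; stop narrowing
            break
        candidates = survivors
        if len(candidates) == 1:
            break
    # fall back on the overall closest coefficient string among the survivors
    return min(candidates, key=lambda n: np.sum((candidates[n] - u) ** 2))

# toy library of stored "memories" and a noisy new stimulus to recognize
t = np.linspace(0, 1, 128, endpoint=False)
library = {
    "hammer": np.sin(2 * np.pi * 3 * t),
    "saucer": np.sin(2 * np.pi * 7 * t),
    "tiger":  np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t),
}
stimulus = library["tiger"] + 0.05 * np.random.default_rng(2).standard_normal(128)
print(closest_match(stimulus, library))   # prints "tiger"
```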

  * * *

  So. It would be convenient if the brain could manufacture holograms; but can it, and does it?

  It can: that is, we can show a mechanism it could use to do it. Whether it does or not isn't known, but there don't appear to be any experiments that absolutely rule out the theory.

  There are rhythmic pulses in the brain that radiate from a small area: it's a bit like watching ripples from a stone thrown into a pond. Waves or ripples of neurons firing at precise frequencies spread through the cerebrum. These, of course, correspond to the "laser" or coherent light source of a hologram. Beat them against incoming impulses and you get an electrical/neuron-firing analog of a hologram.

  Just as you can store thousands of holograms on a single photographic plate by using different frequencies of coherent light for each one, so could the brain store millions of billions of bits of information by using a number of different frequencies and sources of "coherent" neuron impulses.
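  As a one-dimensional cartoon of that multiplexing idea, and no more than that, here are two "memories" recorded against different reference frequencies on the same "plate," with one of them recovered by beating the record against its own reference again. The frequencies, signals, and filtering are arbitrary choices for the example; this is not a claim about neurons or photographic emulsions.

```python
import numpy as np

fs = 10_000                                 # samples per second
t = np.arange(0, 1, 1 / fs)
memory_a = np.sin(2 * np.pi * 5 * t)        # two slow "messages" to store
memory_b = np.sign(np.sin(2 * np.pi * 3 * t))

# "record" each against a different reference frequency, then superimpose
plate = (memory_a * np.cos(2 * np.pi * 1000 * t)
         + memory_b * np.cos(2 * np.pi * 2500 * t))

# "replay" one memory by beating the record against its own reference
# and keeping only the slow, low-frequency part of the product
product = plate * np.cos(2 * np.pi * 1000 * t)
spectrum = np.fft.rfft(product)
spectrum[50:] = 0                           # crude low-pass: keep components below 50 Hz
recovered_a = 2 * np.fft.irfft(spectrum, n=len(t))
print(np.allclose(recovered_a, memory_a, atol=0.05))   # True, within the tolerance
```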

  That model also makes something else a bit less puzzling: selective loss of memory. Older people often retain very sharp memories for long-past events, while losing the ability to remember more recent things; perhaps they're losing the ability to come up with new coherent reference standards. Some amnesiacs recall nearly everything in great detail, yet can't remember specific blocks of their life: the loss or scrambling of certain "reference standards" would tend to cause en bloc memory losses without affecting other memories at all.

  Aphasias are often caused by specific brain-structure damage. I have met a man who can write anything he likes, including all his early memories; but he can't talk. A brain injury caused him to "forget" how. It's terribly frustrating, of course. It's also hard to explain, but if the brain uses holographic codes for information storage, then the encoder/decoder must survive for that information to be recovered. A sufficiently selective injury might well destroy one decoder while leaving another intact.

  In other words, the model fits a great deal of known data. Farther than that no one can go. The brain could use holograms.

  Not very long ago, Ted Sturgeon, A. E. van Vogt, and I were invited to speak to the Los Angeles Cryonics Society. That's the outfit that arranges to have people quick-frozen and stored at the temperature of liquid nitrogen in the hopes that someday they can be revived in a time when technology is sufficiently advanced to be able to cure whatever it was that killed them to begin with.

  I chose to give my talk on the holographic brain model. The implications weren't very encouraging for the Cryonics Society.

  If the brain uses holographic computer methods, then the information storage is probably dynamic, not static; and even if a frozen man could be revived, since the electrical impulses would have been stopped, he'd have no memories, and thus no personality. If the holographic brain model is a true picture, it's goodbye to that particular form of immortality.

  On the other hand, whether our own brains use holograms or not, holographic computers almost undoubtedly will work, and the holographic information storage technique offers us a way to construct those independent robots that figure so large in science fiction stories. Either way, it looks as if the big brains may be coming before the turn of the century.

  * * *

  The above was written in 1974. Surprisingly, it needed no revision, except to foreshadow what follows: Since 1974, there have been some exciting developments, most of which came to light at the 1976 meeting of the American Association for the Advancement of Science. They were reported in my column "Science and Man's Future: Prognosis Magnificent!", from which the following has been derived.

  * * *

  Studies of how we think—and of how machines might do so—continue. Take biofeedback. The results are uncanny, and they're just beginning. Barbara Brown, the Veterans Administration Hospital physiologist whose book NEW MIND, NEW BODY began much of the current interest in biofeedback, is now convinced that there's nothing the eastern yogis can do that you can't teach yourself in weeks to months. Think about that for a moment: heart rate, breathing, relaxation, muscle tension, glandular responses—every one of them subject to your own will. Dr. Brown is convinced of it.

  The results are pouring in, and not just from her VA hospital in Sepulveda, either. Ulcers cured, neuroses conquered, irrational fears and hatreds brought under conscious control—all without mysticism. When I put it to Dr. Brown that there was already far more objective evidence for the validity of the new psycho-physiological theories than there ever has been for Freudian psychoanalysis, she enthusiastically agreed.

  One does want to be careful. There are many charlatans in the biofeedback business; some sell equipment, others claim to be "teachers." The field is just too new to have many standards, in either equipment or personnel, and the potential buyer should be wary. However: there is definite evidence, hard data, to indicate that you can, with patience (but far less than yoga demands), learn to control many allergies, indigestion, shyness, fear of crowds, stage fright, and muscular spasmodic pain; and that's got to be good news.

  After I left the 1976 AAAS meeting in Boston I wandered the streets of New York between editorial appointments. On the streets and avenues around Times Square I found an amazing sight. (No, not that; after all, I live not far from Hollywood and thus am rather hard to shock.)

  Every store window was filled with calculators. Not merely "four function" glorified arithmetic machines, but real calculators with scientific powers-of-ten notation, trig, logs, statistical functions, and the rest. Programmable calculators for under $300. (Since 1976 the price of programmables has plummeted: you can get a good one with all scientific functions for $50 now, while the equivalent of my SR-50 sells for $12.95 in discount houses. JEP)

  Presumably there's a market for the machines, which means that we may, in a few years, have a large population of people who really do use numbers in their everyday lives. That could have a profound impact on our society. Might we even hope for some rational decision-making?

  John R. McCarthy of the Stanford University Artificial Intelligence Laboratories certainly hopes so. McCarthy is sometimes called "the western Marvin Minsky." He foresees home computer systems in the next decade. OK, that's not surprising; they're available now. (Since that was written, the home computer market has boomed beyond anyone's prediction; in less than two years home computers have become well-nigh ubiquitous, and everyone knows someone who has one or is getting one. I even have one; I'm writing this on it. JEP) McCarthy envisions something a great deal more significant, though: information utilities.

  There is no technological reason why every reader could not, right now, have access to all the computing power he or she needs. Not wants—what's needed is more than what's wanted, simply because most people don't realize just what these gadgets can do. Start with the simple things like financial records, with the machine reminding you of bills to be paid and asking if you want to pay them—then doing it if so instructed. At the end of the year it flawlessly and painlessly computes your income tax for you.

  Well, so what? We can live without all that, and we might worry a bit about privacy if we didn't have physical control over the data records and such. Science fiction stories have for years assumed computer controlled houses, with temperatures, cooking, menus, grocery orders, etc., all taken care of by electronics; but we can live without it.

  Still, it would be convenient. (More than I knew when I wrote that; I don't see how I could get along without my computer, which does much of that, now that I'm used to it. JEP)

  But what of publishing? McCarthy sees the end of the publishing business as we know it. If you want to publish a book, you type it into the computer terminal in your home; edit the text to suit yourself; and for a small fee put the resulting book into the central information utility data banks.

 
