
Borderlands of Science

by Charles Sheffield


  History records examples of people with prodigious memories. Mozart, at thirteen, went to the Sistine Chapel in Rome to hear a famous Miserere by Allegri, then wrote out the whole work. The mathematician, Gauss, did not need to look up values in logarithm tables, because he knew those tables by heart. And Thomas Babington, Lord Macaulay, seemed to have read so much and remembered it so exactly that one of his exasperated colleagues, Lord Melbourne, said, "I wish I was as cocksure of anything as Tom Macaulay is of everything."

  To the rest of us, hard-pressed to remember our own sister's phone number, such monster memories seem almost inhuman. My bet, however, is that even these people would, if asked, complain of their poor memories and emphasize what they forgot. And each of us, without ever thinking about it, has enormous amounts of learned information stored away in our brain.

  I say "learned information," because some of what we know is hard-wired, and we call that instinct. We don't learn to suck, to crawl, or to walk by committing actions to memory, and we normally reserve the word "memorize" to things that we learn about the world through observation and experience. I am going to stick with this distinction between instinct and memory, though sometimes the borderline becomes blurred. We don't remember learning to talk, but we accept that it relies on memory because others tell us we did (though there is good evidence that the ability to acquire language is hard-wired). And most of us would not say that riding a bicycle depends on memory, although clearly this is a learned and not an inborn activity.

  I want to concentrate on factual information that is definitely learned, stored, and recalled, and ask two simple questions: Where is it stored, and how is it stored?

  The easy part first: information is stored in the brain. But when we ask where in the brain, and ask for the form of storage, we run at once into problems. The tempting answer, that a piece of data is stored in a single definite location, as it would be in a computer, proves to be wrong. Although many people believe that the brain ultimately operates like a computer—a "computer made of meat"—in this case the analogy is more misleading than helpful.

  Much of what we know about memory comes from the study of unfortunate individuals with brains damaged by accident or disease. This is hardly surprising, since volunteers for brain experiments are hard to come by (as Woody Allen remarked, "Not my brain. It's my second favorite organ."). Studies of abnormal brains can be misleading, but they show unambiguously that a human memory does not sit in a single defined place. Rather, each memory seems to be stored in a distributed form, scattered somehow in bits and pieces at many different physical locations. Although ultimately the information must be stored in the brain's neurons (we know of nowhere else that it could be stored), we do not yet understand the mechanism. Some unknown process hears the question, "Who delivered the Gettysburg address?", goes off into the interior of the brain, finds and assembles information, and returns the answer (or occasionally, and frustratingly, fails to return the answer): "Abraham Lincoln."

  And it does the job fast. The brain contains a hundred billion neurons, but the whole process, from hearing the question to retrieving and speaking the answer, takes only a fraction of a second.

  We may not be Mozart, but each of us possesses an incredible ability to store and recall information. And are we impressed by this? Not at all. Instead of being pleased by such a colossal capability, we are like the celebrated Mr. X, always complaining about his sieve-like memory.

  I would give Mr. X's name, but at the moment I cannot quite recall it.

  A.17. In defense of Chicken Little. Chicken Little wasn't completely wrong. Some of the sky does fall, some of the time. When a grit-sized particle traveling at many miles a second streaks into the Earth's atmosphere and burns up from friction with the air before reaching the ground, we call it a shooting star or a meteor. Some of us make a wish on it. We think of meteors as harmless and beautiful, especially when they come in large groups and provide spectacular displays such as the Leonid and Perseid meteor showers.

  Meteors, however, have big brothers. These exist in all sizes from pebbles to basketballs to space-traveling mountains. If the speeding rock is large enough, it can remain intact all the way to the ground and it is then known as a meteorite. The reality of meteorites was denied for a long time—Thomas Jefferson said, "I could more easily believe that two Yankee professors would lie than that stones would fall from heaven"—but today the evidence is beyond dispute.

  If one of these falling rocks is big enough, its great speed gives it a vast amount of energy, all of which is released on impact with the Earth. Even a modest-sized meteorite, twenty meters across, can do as much damage as a one-megaton hydrogen bomb. This sounds alarming, so let us ask three questions: How many rocks this size or larger are flying around in orbits that could bring them into collision with the Earth? How often can impact by a rock of any particular size be expected? And how does damage done vary with the size of the meteorite?
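  (To see roughly where the one-megaton figure comes from, here is a minimal back-of-the-envelope sketch in Python. The density and impact speed used are typical assumed values, not numbers given in the text.)

    # Kinetic energy of a 20-meter stony body at a typical impact speed,
    # expressed in megatons of TNT. Assumed values: density 3,000 kg/m^3,
    # speed 20 km/s; one megaton of TNT is about 4.2e15 joules.
    import math

    radius_m = 10.0          # a body twenty meters across
    density_kg_m3 = 3000.0   # assumed stony-asteroid density
    speed_m_s = 20_000.0     # assumed impact speed, 20 km/s

    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    mass_kg = density_kg_m3 * volume_m3
    energy_j = 0.5 * mass_kg * speed_m_s ** 2
    megatons = energy_j / 4.2e15

    print(f"{energy_j:.2e} joules, about {megatons:.1f} megatons of TNT")
    # Comes out near 0.6 megatons -- the same order as a one-megaton H-bomb.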

  Direct evidence of past impacts with Earth is available only for large meteorites. For small ones, natural weathering by wind, air, and water erases the evidence in a few years or centuries. However, we know that a meteorite, maybe two hundred meters across, hit a remote region of Siberia called Tunguska, on June 30, 1908. It flattened a thousand square kilometers of forest and put enough dust into the atmosphere to provide colorful sunsets half a continent away. About 20,000 years ago, a much bigger impact created Meteor Crater in Arizona, more than a kilometer across. And 65 million years ago, a monster meteorite, maybe ten kilometers across, struck in the Gulf of Mexico. It caused global effects on weather, and is believed to have led to the demise of the dinosaurs and the largest land reptiles.

  The danger of impact is real, and beyond argument. But is it big enough for us to worry about? After all, sixty-five million years is an awfully long time. How do we make an estimate of impact frequency?

  The answer may seem odd: we look at the Moon. The Moon is close to us in space, and hit by roughly the same meteorite mix. However, the Moon is airless, waterless, and almost unchanging, so the history of impacts there can be discovered by counting craters of different sizes. Combining this with other evidence about the general size of objects in orbits likely to collide with Earth, we can calculate numbers for frequency and energy release. They are not totally accurate, but they are probably off by no more than a factor of three or four.

  I will summarize the results by size of body, and translate that to the equivalent energy released as number of megatons of H-bombs. About once a century, a "small" space boulder about five meters across will hit us and produce a matching "small" energy equal to that released by the Hiroshima atomic bomb. It will probably burn up in the atmosphere and never reach the ground, but the energy release will be no less. Once every two thousand years, on average, we will get hit by a twenty-meter boulder, with effects a little bigger than a one-megaton H-bomb. Every two million years, a five-hundred-meter giant will arrive, delivering as much energy as a full-scale nuclear war.
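  (The figures quoted above can be gathered into one small table. The sketch below simply tabulates the text's own round numbers; no new data is added.)

    # Size of impacting body versus average interval between impacts and
    # rough energy equivalent, using only the round numbers quoted above.
    impacts = [
        # diameter (m), average interval (years), rough energy equivalent
        (5,      100,        "Hiroshima-class atomic bomb"),
        (20,     2_000,      "about one megaton of TNT"),
        (500,    2_000_000,  "a full-scale nuclear war"),
    ]

    for diameter_m, interval_yr, energy in impacts:
        print(f"{diameter_m:>4} m body: roughly every {interval_yr:,} years -> {energy}")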

  I found these numbers disturbing, so a few years ago I sent them to the late Gene Shoemaker, an expert on the bombardment of Earth by rocks from space. He replied, not reassuringly, that he thought my numbers were in the right ballpark, but too optimistic. We will be hit rather more often than I have said.

  Even if I were exactly right, that leaves plenty of room for worry. Being hit "on average" every 100,000 years is all very well, but that's just a statistical statement. A big impact could happen any time. If one did, we would have no way to predict it, or—despite what recent movies would have you believe—prevent it.

  A.18. Language problems and the Theory of Everything. A couple of weeks ago I received a letter in Spanish. I don't know Spanish. I was staring at the text, trying and failing to make sense of it by using my primitive French, when my teenage daughter wandered by. She picked up my letter and cockily gave me a quick translation.

  I was both pleased and annoyed (aren't I supposed to know more than my children?), but the experience started me thinking: about language, and the importance of the right language if you want to do science in general, and physics in particular.

  Of course, every science has its own special vocabulary, but so does every other subject you care to mention. Partly it's for convenience, although sometimes I suspect it's a form of job security. Phrases like "stillicide rights," "otitis mycotica," and "demultiplexer" all have perfectly good English equivalents, but they also serve to sort out the insiders from the outsiders.

  One subject, though, is more like an entire language than a special vocabulary, and we lack good English equivalents for almost all its significant statements. I am referring to mathematics; and, like it or not, modern physics depends so heavily on mathematics that non-mathematical versions of the subject mean very little. To work in physics today, you have to know the language of mathematics, and the appropriate math vocabulary and methods must already exist.

  On the face of it, you might think this would make physics an impossibly difficult subject. What happens if you are studying some aspect of the universe, and the piece of mathematical language that you need for its description has not yet been invented? In that case you will be out of luck. But oddly—almost uncannily—throughout history, the mathematics had already been discovered before it was needed in physics.

  For example, in the seventeenth century Kepler wanted to show that planets revolved around the Sun not in perfect circles, but in other more complex geometrical figures. No problem. The Greeks, fifteen hundred years earlier, had proved hundreds of results about conic sections, including everything Kepler needed to know about the ellipses in which planets move. Two hundred years later, Maxwell wanted to translate Michael Faraday's experiments into a formal theory. The necessary mathematics, of partial differential equations, was sitting there waiting for him. And, to give one more example, when Einstein's theory of general relativity needed a precise way to describe the properties of curved space, the right mathematics had been created by Riemann and others and was already in the textbooks.

  Of course, there can be no guarantee that the mathematical tools and language you want will be there when you need them. And that brings me to the central point of this column. One of the hottest subjects in physics today is the "Theory Of Everything," or TOE. The "Everything" promised here is highly limited. It won't tell you how a flower grows, or explain the IRS tax codes. But a TOE, if successful, will pull together all the known basic forces of physics into one integrated set of equations.

  Now for the tricky bit. The most promising efforts to create a TOE involve something known as string theory, and they call for a description of space and time far more complicated than the height-width-length-time we find adequate for most purposes. The associated mathematics is fiendishly difficult, and is not just sitting in the reference books waiting to be applied. New tools are being created, by the same people doing the physics, and it is quite likely that these will prove inadequate. The answers may just have to wait until, ten or fifty years from now, the right mathematical language has been developed and can be applied.

  It's one of my minor personal nightmares. Mathematics, more than almost any other subject, is a game played best by the young. Suppose that, five or fifteen years from now, we have a TOE that explains everything from quarks to quasars in a single consistent set of equations. It will, almost certainly, require for its understanding some new mathematical language. By that time I may just be too old or set in my ways ever to learn what's needed.

  It's a dismal prospect. You wait your whole life for something, and then when it finally comes along you find you can't understand it.

  A.19. Fellow travelers. My mother grew up in a household with nine children and little money. Not much was wasted. Drop a piece of food on the floor and you picked it up, dusted it off, and ate it. This doesn't seem to have done my mother much harm, since she is still around at ninety-seven. Her philosophy toward food and life can be summed up in her comment, "You eat a peck of dirt before you die."

  Contrast this with the television claim I heard a couple of weeks ago: "Use this product regularly, and you will rid yourself and your house completely of germs and pests."

  The term "pest" was not described. It probably didn't include your children's friends. But whatever the definition, the advertisers are kidding themselves and the public by making such extravagant claims. Your house, and you yourself, are swarming with small organisms, whose entry to either place was not invited but whose banishing is a total impossibility.

  I have nothing against cleanliness, and certainly no one wants to encourage the presence in your home of the micro-organisms that cause cholera, malaria, bubonic plague, and other infectious diseases. Such dangers are, however, very much in the minority. Fatal diseases are also the failures among the household invaders. What's the point of invading a country, if the invasion makes the land uninhabitable? In our case, that amounts to the organism infecting and killing its host. Successful invaders don't kill you, or even make you sick. The most successful ones become so important to you that you could not live without them.

  Biologists distinguish three types of relationship between living organisms. When one organism does nothing but harm to its host, that's called parasitism. In our case, this includes things like ringworm, pinworms, athlete's foot, ticks, and fleas. All these have become rarer in today's civilized nations, but most parents with children in elementary school have heard the dread words "head lice," and have probably dealt with at least one encounter.

  Parasites we can do without. They include everything from the influenza virus, far too small to see, to the tapeworm that can grow to twenty feet and more inside your small intestine.

  Much more common, however, are the creatures that live on and in us and do neither harm nor good. This type of relationship is known to biologists as commensalism. We provide a comfortable home to tiny mites that live in our eyelashes, to others that dine upon cast-off skin fragments, and to a wide variety of bacteria. We are unaware of their presence, and we would have great difficulty ridding ourselves of them. It might even be a bad idea, since we can't be sure that they do not serve some useful function.

  And then there is symbiosis, where we and our fellow-traveling organisms are positively good for each other. What would happen if you could rid yourself of all organisms that do not possess the human genetic code?

  The answer is simple. You would die, instantly. In every cell of your body are tiny objects called mitochondria. They are responsible for all energy generation, and they are absolutely essential to your continued existence. But they have their own genetic material and they reproduce independently of normal cell reproduction. They are believed to be bacteria, once separate organisms, that long ago entered a symbiotic relationship with humans (and also with every other animal on earth).

  If the absence of mitochondria didn't kill you in a heartbeat, you would still die in days. We depend on symbiotic bacteria to help digest our food. Without them, the digestive system would not function and we would starve to death.

  "We are not alone." More and more, we realize the truth of that statement. We are covered on the outside and riddled on the inside by hundreds of different kinds of living organisms, and we do not yet understand the way that we all relate to each other. For each, we have to ask, is this parasitism, commensalism, or symbiosis?

  Sometimes, the answers are surprising. Twenty years ago, gastric ulcers were blamed on diet or stress. Today, we know that the main cause is the presence in the stomach of a particular bacterium known as Helicobacter pylori. Another organism, Chlamydia, is a suspect for coronary disease and hardening of the arteries. A variety of auto-immune diseases may be related to bacterial action.

  All these facts encourage a new approach for biologists and physicians: The best way to study humans is not as some pure and isolated life form; rather, each of us should be regarded as a "superorganism." The life-cycles and reproductive patterns of us and all our fellow travelers should be treated as one big interacting system.

  Disgusting, to be lumped in with fleas and mites and digestive bacteria, as a single composite object? I don't think so. In a way it's a comforting thought. We are not alone, and we never will be.

  A.20. How do we know what we know? At the moment there is a huge argument going on about the cause of AIDS. Most people in this country—but by no means all—believe that the disease is caused by a virus known as HIV, the Human Immunodeficiency Virus. In Africa, however, heads of governments have flatly stated that they don't accept this. They blame a variety of other factors, from diet to climate to genetic disposition.

  The available scientific evidence ought to be the same for everyone. So how can there be such vast differences in what people believe?

  Part of the reason is what we might call the "Clever Hans" effect. Clever Hans was a horse who lived in Germany early in the twentieth century, and he seemed to be smarter than many of the humans around him. He could answer arithmetic problems by tapping out the correct answers with a fore-hoof, and give yes or no answers to other questions—Is London the capital of France?—by shaking or nodding his head, just like a human.

  His owner, a respected Berliner named Wilhelm von Osten, was as astonished as anyone by Clever Hans' abilities. There seemed no way that he would commit fraud, particularly since Clever Hans could often provide correct answers when von Osten was out of the room, or even in a different town. The Prussian Academy of Sciences sent an investigating committee, and they too were at first amazed by the horse's powers. True, there were inconsistencies in the level of performance, but those could often be explained away.

  Finally, almost reluctantly, the truth was discovered. Clever Hans could not do arithmetic, and did not know geography and history. He was responding to the body language of the audience. Most observers, including members of the investigating committee, wanted Hans to get the right answers. So they would instinctively tense at the question, and relax when Hans gave the right answer. The body movements were very subtle, but not too subtle for Hans. He really was clever—clever at reading non-verbal cues from the humans around him.

 
