by Isaac Asimov
Finally, 100 million years after the first invasion of the land, there came a new invasion by creatures that could afford to be bulky despite gravity because they had a bracing of bone within. The new colonizers from the sea were bony fishes belonging to the subclass Crossopterygii (“fringed fins”). Some of their fellow members had migrated to the uncrowded sea deeps, including the coelacanth, which biologists discovered in 1938 to be still in existence (much to their astonishment).
The fishy invasion of land began as a result of competition for oxygen in brackish stretches of fresh water. With oxygen available in unlimited quantities in the atmosphere, those fish survived best that could most effectively gulp air when the oxygen content of the water fell below the survival point. Devices for storing such gulped air had survival value, and fish developed pouches in their alimentary canals in which swallowed air could be kept. In some cases, these pouches developed into simple lungs. Descendants of these early fish include the lungfishes, a few species of which still exist in Africa and Australia. They live in stagnant water where ordinary fishes would suffocate and can even survive summer droughts when their habitat dries up. Even fish that live in the sea, where the oxygen supply is no problem, show signs of their descent from these early lunged creatures, for they still possess air-filled pouches, used not for respiration but for buoyancy.
Some of the lung-possessing fishes, however, carried the matter to the logical extreme and began living, for shorter or longer stretches, out of the water altogether. These crossopterygian species with the strongest fins could do so most successfully for, in the absence of water buoyancy, they had to prop themselves up against the pull of gravity. By the end of the Devonian age, some of the primitive-lunged crossopterygians found themselves standing on the dry land, propped up shakily on four stubby legs.
After the Devonian came the Carboniferous (“coal-bearing”) age, so named by Lyell because it was the period of the vast, swampy forests that, some 300 million years ago, represented what was perhaps the lushest vegetation in earth’s history; eventually, they were buried and became this planet’s copious coal beds. This was the age of the amphibians; the crossopterygians’ descendants by then were spending their entire adult lives on land. Next came the Permian age (named for a district in the Urals, for the study of which Murchison made the long trip from England). The first reptiles now made their appearance. They ushered in the Mesozoic era, in which reptiles were to dominate the earth so thoroughly that it has become known as the age of the reptiles.
The Mesozoic is divided into three ages—the Triassic (so called because its rocks were first found in three distinct strata), the Jurassic (from the Jura mountains in France), and the Cretaceous (“chalk-forming”). In the Triassic arose the dinosaurs (Greek for “terrible lizards”). The dinosaurs reached their peak form in the Cretaceous, when Tyrannosaurus rex thundered over the land—the largest carnivorous land animal in the history of our planet.
It was during the Jurassic that the earliest mammals and birds developed, each from a separate group of reptiles. For millions of years, these creatures remained inconspicuous and unsuccessful. With the end of the Cretaceous, however, all the dinosaurs vanished in a relatively short period. So did other large reptiles that are not classified with the dinosaurs—ichthyosaurs, plesiosaurs, and the pterosaurs. (The first two were sea reptiles; the third, winged reptiles.) In addition, certain groups of invertebrates, such as the ammonites (related to the still-living chambered nautilus), died out—as did many smaller organisms, down to many types of microscopic organisms in the sea.
According to some estimates, as many as 75 percent of all species then living died during what is sometimes called the Great Dying at the end of the Cretaceous. Even among the 25 percent of species that survived, there may have been great individual carnage, and it would not be surprising if 95 percent of all individual organisms died. Something happened that nearly sterilized the earth—but what?
In 1979, the American geologist Walter Alvarez headed a team trying to measure ancient sedimentation rates by testing for the concentration of certain metals along the length of a core taken from rocks in central Italy. One of the metals being tested for, by neutron-activation techniques, happened to be iridium; and, somewhat to his astonishment, Alvarez found a concentration of iridium, in a single narrow band, 25 times as high as the concentrations immediately below or above.
Where could the iridium have come from? Could the sedimentation rate have been unusually high at that point? Or could it have come from some unusually rich iridium source? Meteorites are richer in iridium and certain other metals than the earth’s crust is, and that section of the core was rich in the other metals as well. Alvarez suspected that a meteorite had fallen, but there was no sign of any ancient crater in the region.
Later investigations, however, showed that the iridium-rich layer occurred in widely separated places on Earth and always in rocks of the same age. It began to look as though a huge meteorite could have fallen, hurling enormous quantities of material, including the entire vaporized body itself, into the upper atmosphere, from which it slowly settled out over the whole earth.
At what time did this happen? The rock from which the iridium-rich material was taken was 65 million years old—precisely the end of the Cretaceous. Many geologists and paleontologists (but not all, by any means) began to look with favor on the suggestion that the dinosaurs and the other organisms that seemed to have come to a sudden end, during the Great Dying at the close of the Cretaceous, had died as a result of the catastrophic impact with the earth of an object perhaps as much as 10 kilometers in diameter—either a small asteroid or the core of a comet.
There may well have been periodic collisions of this sort, each of which may have produced a Great Dying. The one at the end of the Cretaceous is merely the most spectacular of the recent ones and therefore the easiest to document in detail. And, of course, similar events may take place in the future unless humanity’s developing space capability eventually makes it possible to destroy threatening objects while they are still in space and before they strike. Indeed, it now appears that Great Dyings may take place regularly every 28 million years. In 1984, it was speculated that the sun has a small, dim companion star and that its approach to perihelion every 28 million years disrupts the Oort cloud of comets (see chapter 3), sending millions of them into the inner solar system. A few are bound to strike the Earth.
Such an impact devastates areas in the vicinity at once, but the planetary effect is more the result of the vast quantity of dust lofted into the stratosphere, dust that would produce a long, frigid night over the whole world and put a temporary end to photosynthesis.
In 1983, the astronomer Carl Sagan and the biologist Paul Ehrlich pointed out that, in the event of a nuclear war, the explosion of as little as 10 percent of the present-day armory of nuclear weapons would send enough matter into the stratosphere to initiate an artificial wintry night that might last long enough to put human life on Earth into serious jeopardy—another Great Dying we certainly cannot afford.
But, in any case, the death of the dominant reptiles at the end of the Cretaceous, whatever the cause, meant that the Cenozoic era that followed became the age of mammals. It brought in the world we know.
BIOCHEMICAL CHANGES
The unity of present life is demonstrated in part by the fact that all organisms are composed of proteins built from the same amino acids. The same kind of evidence has recently established our unity with the past as well. The new science of paleobiochemistry (the biochemistry of ancient forms of life) was opened in the late 1950s, when it was shown that certain 300-million-year-old fossils contained remnants of proteins consisting of precisely the same amino acids that make up proteins today—glycine, alanine, valine, leucine, glutamic acid, and aspartic acid. Not one of the ancient amino acids differed from present ones. In addition, traces of carbohydrates, cellulose, fats, and porphyrins were located, with (again) nothing that would be unknown or unexpected today.
From our knowledge of biochemistry we can deduce some of the biochemical changes that may have played a part in the evolution of animals.
Let us take the excretion of nitrogenous wastes. Apparently, the simplest way to get rid of nitrogen is to excrete it in the form of the small ammonia molecule (NH3), which can easily pass through cell membranes into the blood. Ammonia happens to be extremely poisonous; if its concentration in the blood exceeds one part in a million, the organism will die. For a sea animal, this is no great problem; it can discharge the ammonia into the ocean continuously through its gills. For a land animal, however, ammonia excretion is out of the question. To discharge ammonia as quickly as it is formed would require such an excretion of urine that the animal would quickly be dehydrated and die. Therefore a land organism must produce its nitrogenous wastes in a less toxic form than ammonia. The answer is urea. This substance can be carried in the blood in concentrations up to one part in a thousand without serious danger.
Now fish eliminate nitrogenous wastes as ammonia, and so do tadpoles. But when a tadpole matures to a frog, it begins to eliminate nitrogenous wastes as urea. This change in the chemistry of the organism is every bit as crucial for the changeover from life in the water to life on land as is the visible change from gills to lungs.
Such a biochemical change must have taken place when the crossopterygians invaded the land and became amphibians. Thus, there is every reason to believe that biochemical evolution played as great a part in the development of organisms as morphological evolution (that is, changes in form and structure).
Another biochemical change was necessary before the great step from amphibian to reptile could be taken. If the embryo in a reptile’s egg excreted urea, it would build up to toxic concentrations in the limited quantity of water within the egg. The change that took care of this problem was the formation of uric acid instead of urea. Uric acid (a purine molecule resembling the adenine and guanine that occur in nucleic acids) is insoluble in water; it is therefore precipitated in the form of small granules and thus cannot enter the cells.
In adult life, reptiles continue eliminating nitrogenous wastes as uric acid. They have no urine in the liquid sense. Instead, the uric acid is eliminated as a semisolid mass through the same body opening that serves for the elimination of feces. This single body opening is called the cloaca (Latin for “sewer”).
Birds and egg-laying mammals, which lay eggs of the reptilian type, preserve the uric-acid mechanism and the cloaca. In fact, the egg-laying mammals are often called monotremes (from Greek words meaning “one hole”).
Placental mammals, on the other hand, can easily wash away the embryo’s nitrogenous wastes, for the embryo is connected, indirectly, to the mother’s circulatory system. Mammalian embryos, therefore, manage well with urea. It is transferred to the mother’s bloodstream and passes out through the mother’s kidneys.
An adult mammal has to excrete substantial amounts of urine to get rid of its urea. Hence, there are two separate openings: an anus to eliminate the indigestible solid residues of food and a urethral opening for the liquid urine.
The account just given of nitrogen excretion demonstrates that, although life is basically a unity, there are systematic minor variations from species to species. Furthermore, these variations seem to be greater as the species considered are farther removed from each other in the evolutionary sense.
Consider, for instance, that antibodies can be built up in an animal’s blood against some foreign protein or proteins, such as those in human blood. Such antisera, if isolated, will react strongly with human blood, coagulating it, but will not react in this fashion with the blood of other species. (This is the basis of the tests indicating whether bloodstains are of human origin, which sometimes lend drama to murder investigations.) Interestingly, antisera that react strongly with human blood will react weakly with chimpanzee blood, while antisera that react strongly with chicken blood will react weakly with duck blood, and so on. Antibody specificity thus can be used to indicate close relationships among life forms.
Such tests indicate, not surprisingly, the presence of minor differences in the complex protein molecule—differences small enough in closely related species to allow some overlapping in antiserum reactions.
When biochemists developed techniques for determining the precise amino-acid structure of proteins, in the 1950s, this method of arranging species according to protein structure was vastly sharpened.
In 1965, even more detailed studies were reported on the hemoglobin molecules of various types of primates, including humans. Of the two kinds of peptide chains in hemoglobin, one, the alpha chain, varied little from primate to primate; the other, the beta chain, varied considerably. Between a particular primate and the human species, there were only six differences in the amino acids of the alpha chain, but twenty-three in those of the beta chain. Judging by differences in the hemoglobin molecules, it is believed that human beings diverged from the other apes about 75 million years ago, or just about the time the ancestral horses and donkeys diverged.
Still broader distinctions can be made by comparing molecules of cytochrome c, an iron-containing protein molecule made up of about 105 amino acids and found in the cells of every oxygen-breathing species—plant, animal, or bacterial. Through analysis of the cytochrome-c molecules from different species, it was found that the molecules in humans differed from those of the rhesus monkey in only one amino acid in the entire chain. Between the cytochrome c of a human being and that of a kangaroo, there were ten differences in amino acids; between those of a human and a tuna, twenty-one differences; between those of a human and a yeast cell, some forty differences.
With the aid of computer analysis, biochemists have estimated that it takes, on the average, some 7,000,000 years for a change in one amino-acid residue to establish itself, so that estimates can be made of the time in the past when one type of organism diverged from another. It was about 2,500,000,000 years ago, judging from cytochrome-c analysis, that higher organisms diverged from bacteria (that is, it was about that long ago that a living creature was last alive that might be considered a common ancestor of all eukaryotes). Similarly, it was about 1,500,000,000 years ago that plants and animals had a common ancestor, and 1,000,000,000 years ago that insects and vertebrates had a common ancestor. We must understand, then, that evolutionary theory stands not on fossils alone but is supported by a wide variety of geological, biological, and biochemical detail.
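The reasoning above amounts to simple arithmetic: count the positions at which two aligned protein chains differ, then multiply by the average time it takes one change to establish itself. A minimal sketch of that calculation follows; the sequences are invented toy strings standing in for a short stretch of protein (not real cytochrome-c data), and a genuine estimate must also correct for repeated changes at a single position, which is why the naive product understates the oldest divergences.

```python
# Sketch of the "molecular clock" arithmetic described above.
# Assumes aligned chains of equal length and a simple linear clock.

YEARS_PER_CHANGE = 7_000_000  # average time for one residue change to establish


def count_differences(chain_a: str, chain_b: str) -> int:
    """Count positions where two equal-length amino-acid chains differ."""
    if len(chain_a) != len(chain_b):
        raise ValueError("chains must be aligned to equal length")
    return sum(1 for a, b in zip(chain_a, chain_b) if a != b)


def naive_divergence_years(chain_a: str, chain_b: str) -> int:
    """Crude divergence estimate: differences times years per change."""
    return count_differences(chain_a, chain_b) * YEARS_PER_CHANGE


# Invented single-letter sequences; the second differs in one position,
# as human and rhesus cytochrome c do over their whole chains.
human = "GDVEKGKKIFIMKCSQCHTVEK"
monkey = "GDVEKGKKIFIMKCSQCHTVEG"

print(count_differences(human, monkey))       # 1
print(naive_divergence_years(human, monkey))  # 7000000
```

One difference between the toy chains yields a naive estimate of 7,000,000 years; forty differences, as between human and yeast cytochrome c, would yield 280,000,000 years by this crude rule, far short of the 2,500,000,000-year figure in the text precisely because many positions have changed more than once.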
RATE OF EVOLUTION
If mutations in the DNA chain, leading to changes in amino-acid pattern, were established by random factors only, it might be supposed that evolution would continue at an approximately constant rate. Yet there are occasions when evolution seems to progress more rapidly than at others—when there is a sudden flowering of new species—as described in Gould’s notion of punctuated equilibrium, mentioned earlier. It may be that the rate of mutation is greater at some periods in Earth’s history than at others, and these more frequent mutations may establish an extraordinary number of new species or render unviable an extraordinary number of old ones. (Or else some of the new species may prove more efficient than the old and compete them to death.)
One environmental factor that encourages the production of mutations is energetic radiation, and Earth is constantly bombarded by energetic radiation from all directions at all times. The atmosphere absorbs most of it, but even the atmosphere is helpless to ward off cosmic radiation. Can it be that cosmic radiation is greater at some period than at others?
A difference can be postulated in each of two different ways. Cosmic radiation is diverted to some extent by Earth’s magnetic field. However, the magnetic field varies in intensity, and there are periods, at varying intervals, when it sinks to zero intensity. Bruce Heezen suggested in 1966 that these periods when the magnetic field, in the process of reversal, goes through a time of zero intensity may also be periods when unusual amounts of cosmic radiation reach the surface of the earth, bringing about a jump in mutation rate. This is a sobering thought in view of the fact that the earth seems to be heading toward such a period of zero intensity.
Then, too, what about the occurrence of supernovas in earth’s vicinity—close enough to the solar system, that is, to produce a distinct increase in the intensity of bombardment by cosmic rays of the earth’s surface? Some astronomers have speculated on that possibility.
The Descent of Man
James Ussher, a seventeenth-century Irish archbishop, dated the creation of man (a term commonly used for human beings of both sexes until the rise of the women’s movement in the 1960s) precisely in the year 4004 B.C.
Before Darwin, few people dared to question the Biblical interpretation of early human history. The earliest reasonably definite date to which the events recorded in the Bible can be referred is the reign of Saul, the first king of Israel, who is believed to have become king about 1025 B.C. Bishop Ussher and other biblical scholars who worked back from that date through the chronology of the Bible came to the conclusion that human beings and the universe could not be more than a few thousand years old.