The First Americans


by James Adovasio


  With only such techniques available to them, by the time World War II ended, about all archaeologists could say was that Clovis Man had evidently abided here and there across the United States around the end of the Pleistocene, maybe 10,000 to 20,000 years ago, but more likely 10,000, and had been succeeded by Folsom Man and then other Paleo-Indians and later Archaic Indians and so on, all in a vaguely perceived and imperfectly defined sequential chronology.

  After Folsom Man, by the way, none of these lithic cultures got a “Man” moniker. So-called Eden points were found in Wyoming north of the first Clovis finds, and Yuma points had been found to their west. Soon yet other points were discovered, including unfluted Folsoms called Midlands, eastern varieties called Dalton and Cumberland, and many other types, all clearly old but of very uncertain or inexact vintage. In any event, with other similar but not identical points, and some that were not so similar but still old, showing up all over the country, things were getting complicated. This condition was, of course, exacerbated by the fact that none of these ancient items could be assigned a “real” age.

  Relative age was another matter, and in some cases stratigraphy could establish which type of artifact was older or younger than another. The basic principles of stratigraphy had been known, if not widely practiced, since 1668, when Niels Stensen (a.k.a. Nicolaus Steno) had articulated his stratigraphic “rules.” The first rule is the law of superposition, which simply says that in an undisturbed geological situation, the younger layers and any materials in them are superposed or lie above the older layers and their contents. Put most simply, the deeper layers are older than the shallower ones. But the world, and therefore stratigraphy, is more complicated than that in many areas.

  In practice, establishing stratigraphic relationships is a very difficult and notoriously tricky process, especially for many modern archaeologists, who are remarkably ill trained to comprehend or perform this critical chore. Whole textbooks have been devoted to the subject of stratigraphy, and many careers have been spent sorting out the most important ways to establish stratigraphic relationships and avoid the many and often subtle pitfalls of interpreting complex stratigraphic sequences. Using an old and shopworn but still useful analogy, strata are like chapters in a book; the stratigraphy is the story told by those chapters.

  To give you an idea of how complicated reading the story told by the strata is, imagine you find a projectile point in what had once been a hole or pit. Obviously, it had to have entered the pit after it was dug. Similarly, the pit had been dug into a stratum, which had to have been there before the pit could be dug. If there are other pits, say, four pits all intersecting one another, establishing the sequence in which they were dug and the sequence of their contents is going to be difficult, to put it mildly. Few archaeologists today are trained to do this kind of thing properly.

  Even today, when such instruments as lasers and electron microscopes capable of showing what an atom looks like have been brought to bear on the matters of stratigraphy and dating, people can still find reason to disagree. Certainly I learned this (to my dismay) as the years went by and the grousers about Meadowcroft groused on. Archaeology, which in the United States has been essentially a subdiscipline of anthropology since the time of Franz Boas, is thought of at least by some scholars as a social science—which means almost automatically soft and imprecise, like sociology and economics, as opposed to the “hard” sciences such as physics and chemistry and molecular biology. In these fields, you can frame an experiment to test a hypothesis—say, about light waves—and prove or disprove the hypothesis. Another researcher can create the same original conditions and perform the same experiment, and he or she should get the same result. But archaeology, however soft a science it is (and theoreticians have spent lifetimes fretting about this), was soon to benefit from the hardest of all sciences.

  Another means of telling the relative time of things found in the ground came along around the time of World War II: the fluorine method. Bones that are buried in the ground take up fluorine from the groundwater over time, so a collection of bones found in one place that were buried at about the same time should have the same concentration of fluorine in them. On the other hand, if they have gross differences in fluorine content, they were not buried at the same time. The method is of little use in determining the age of bones in different sites since the amount of groundwater and the amount of fluorine it contains can vary widely over fairly small areas, but it came in handy in solving a major problem in early human evolution.

  In 1912, a paleontologist named Charles Dawson found in a gravel bed near Lewes, England, the remains of extinct Ice Age animals, chert artifacts, and the skull and jawbone of what appeared from association to be an early Pleistocene humanoid. Called Eoanthropus dawsonii, this “dawn man” also seemed to be a different strain altogether from the Homo erectus remains that were roughly contemporary with it. By putting a wholly unexpected branch on the human genealogical tree, this anomaly threw what the British call a “spanner” into the human evolutionary works until the 1950s, when J. S. Weiner and others subjected the skull and the jaw to the fluorine method. It showed that the two skeletal items were not of the same age and that both were much younger than the legitimately early Pleistocene animal remains. It soon became known that this creature, called Piltdown Man, was a hoax; in fact, it was a fairly recent human skull and the jawbone of an orangutan, cleverly doctored to look old and planted in the gravel with the animal fossils and artifacts so that the hapless Dawson would be sure to come across them. Just who had perpetrated the hoax and for what reason remains unclear, but speculation about who this hoaxer was is as spirited as that about Jack the Ripper. There has even been speculation that Jack and the Piltdown hoaxer were the same man, none other than Arthur Conan Doyle. Of course, Conan Doyle has since been “cleared” of this charge, but the accusation made the entire matter all the more titillating.

  Informed speculation continued to be the primary means of assigning an age to any prehistoric material until about this same time, the 1940s and 1950s, when physics revolutionized archaeology and other fields that probed the prehistoric past. Long before, it had become clear that various elements, such as uranium, potassium, and carbon, come in various forms called isotopes. Isotopes of a given element all have very much the same properties, differing only in having different numbers of neutrons in their nuclei. And many isotopes are radioactive, emitting energetic particles into the surroundings at an average rate peculiar to that isotope. As each atom of a particular radioactive isotope loses particles, it turns into something else—either a different element or another so-called daughter version of itself. And this brings up the notion of a half-life, a concept that is often misunderstood.

  A RADIOCARBON PRIMER

  The chemical basis of most life on earth is the element carbon, and each living thing consumes a bit of carbon all the time it is alive. Plants get it from the carbon dioxide in the atmosphere, herbivorous animals get it from plants and the atmosphere, carnivores from the herbivores and the atmosphere. A certain small percentage of all this carbon is a radioactive isotope of carbon called carbon-14. The rest of it is called carbon-12. (There is no need to protest over the presence of radioactive carbon in all of us; it is simply part of nature, not a nefarious plot by the government or the nuclear power industry.) All of us living things take in carbon—both carbon-14 and carbon-12 (which is basically inert)—until we die. Then the carbon-14 present in our tissues decays, transforming itself into another element, nitrogen-14. It takes 5,730 years for half the carbon-14 in, say, a dead geranium to turn into nitrogen-14. In another 5,730 years, half of that remaining carbon-14 will have become nitrogen-14. And so on. All this came to be known in 1947 through the work of a chemist, Willard Libby, who was rewarded with a Nobel Prize in 1960.

  So suppose you took a bit of a scraping from a wooden tool used by a potential ancestor you found in the family rockshelter and measured how much carbon-14 was still in it compared to nitrogen-14. If it was half and half, you could confidently say that about 5,730 years had passed.
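
  To make that arithmetic concrete, here is a minimal sketch in Python (the function name and the sample fractions are mine, purely for illustration) that turns a measured fraction of surviving carbon-14 into an age, using nothing but the 5,730-year half-life just described:

    import math

    HALF_LIFE_C14 = 5730.0  # years, the half-life of carbon-14

    def age_from_fraction(fraction_c14_remaining):
        # Solve fraction = (1/2) ** (t / half-life) for t.
        return HALF_LIFE_C14 * math.log(1.0 / fraction_c14_remaining, 2)

    print(age_from_fraction(0.5))    # half the carbon-14 left   -> 5,730 years
    print(age_from_fraction(0.25))   # a quarter of it left      -> 11,460 years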

  In the early days of radiocarbon dating, one needed fairly large samples to get a measurement, and even in such a case the measuring techniques were comparatively rough. So a radiocarbon date was given in years B.C. or in years before the present (B.P.) plus or minus a smaller number of years. A radiocarbon date would appear in a report like this: 12,000 ± 120 years B.P. (before present, which is, by long established convention, actually A.D. 1950).

  This of course meant anywhere between 11,880 and 12,120 years ago, a difference of 240 years, which is still in a sense relative, but less relative than mere guesswork or even informed speculation. The number of years plus or minus is a statistical measure of the confidence one can place in the date—and that confidence depends on several things, but chiefly the size of the sample being analyzed. Bigger is better.
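
  To spell out that bookkeeping (a trivial sketch, with a function name of my own choosing):

    def radiocarbon_range(years_bp, plus_minus):
        # A reported date such as 12,000 +/- 120 B.P. brackets an interval
        # of 2 x 120 = 240 years around the central estimate.
        return years_bp - plus_minus, years_bp + plus_minus

    print(radiocarbon_range(12000, 120))   # -> (11880, 12120) years before A.D. 1950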

  Carbon-14 dating has become so important a feature in studies of early humans and other prehistoric events and phenomena that it is best we go a bit further in explaining some of the technicalities involved. For despite the application of high-tech methods and ever-increasing confidence in the technique, serious scholars still can—and do—argue about the results. And of course people like fundamentalist Creationists or some Native Americans will point to the complexities and occasional ambiguities involved and say the entire technique is flawed and therefore meaningless, thus preserving their belief that the Genesis story is verbatim history or that humans arose in North America and spread to the rest of the world.

  Carbon-14 dating is possible thanks to cosmic processes. In brief, the constant rain of cosmic radiation into the earth's atmosphere causes the isotope of nitrogen, nitrogen-14, to turn into carbon-14. An assumption here was that the amount of carbon-14 thus produced in the atmosphere has remained constant over the past 50,000 to 100,000 years. Carbon-14 reacts with oxygen in the atmosphere and becomes carbon dioxide. The assumptions in this case were that C-14 is as likely to be oxidized into carbon dioxide as is nonradioactive carbon (mostly carbon-12) and that all this takes place fast enough to make the ratio of carbon-14 to carbon-12 the same everywhere. Because most organisms—plants and animals—do not discriminate between the two forms of carbon present in the overall carbon budget they take in from their surround, it was assumed that the plant or animal would have the same ratio of the two carbons as is present in the atmosphere. Once the plant or animal dies and stops absorbing carbon, the radioactive carbon begins to decrease, half of it turning back into nitrogen-14 after 5,730 years. One can measure the amount of carbon-14 by counting its atomic emissions over a brief period of time with equipment similar in essence to a Geiger counter.
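
  Written out as equations (my notation, not the author's), the decay just described follows the standard exponential law, where N_0 is the carbon-14 a specimen started with and N(t) is what remains after t years:

    \[
    N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/5730} = N_0\, e^{-\lambda t},
    \qquad
    \lambda = \frac{\ln 2}{5730\ \text{years}} \approx 1.21 \times 10^{-4}\ \text{per year},
    \qquad
    t = \frac{5730}{\ln 2}\,\ln\!\frac{N_0}{N(t)}.
    \]

  The last expression is the one a dating laboratory effectively inverts: measure how much carbon-14 remains, and the elapsed time falls out.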

  After radiocarbon dating was first tried out, some of the assumptions noted above proved to be not quite perfect. First of all, the industrial age has pumped a whole lot of very old carbon into the atmosphere from the burning of fossil fuels—fuels consisting of little other than plants that died several hundred million years ago and were buried, where time and pressure and other forces turned them into coal, oil, and natural gas. Because the dead plants have long since lost all their carbon-14, when burned they add huge amounts of carbon-14-deficient carbon to the atmosphere. Happily, adjustments for this can be made by using tree-ring dating and other dating systems to figure out the ratio of C-14 to C-12 before the Industrial Revolution. Similarly, by comparing the carbon ratios with calendrical methods of dating, one can make adjustments for such monkey wrenches in the works as changes in the earth's magnetic field (which affect the amount of cosmic radiation striking the atmosphere and thus the amount of carbon-14 available to be soaked up by plants).

  Another complexity that must be considered is that some creatures, such as certain mollusks and water plants, take up “dead” carbon from old rocks, chiefly limestone, as well as from the atmosphere, and thus appear older than they are. Also, carbon can enter a dead creature after death from such contaminants as humic acids in the soil. These considerations put a premium on the preparation of the specimen to be tested.

  Even with all of the needed precautions taken, radiocarbon dates tend to underestimate the calendrical age of specimens, and the amount of underestimation tends to grow with time. Something that is 22,000 calendrical years old will test out to be only 18,500 radiocarbon years—or 3,500 years “too young.” At 33,000 calendrical years, the radiocarbon years can be as much as 4,300 years “too young,” or 28,700. The practical limit for radiocarbon dating is about 45,000 years, and in fact, beyond 22,000 (calendrical) years, the entire method gets considerably more relative. But for specimens that are 22,000 years old or less, there are straightforward gauges for relating the radiocarbon years to calendar years. When encountering prehistoric dates in the scientific literature or the press, it is important for obvious reasons to know if the figures are given in raw radiocarbon years or calibrated years, that is, adjusted to calendrical years.
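
  One crude way to picture the calibration step is to interpolate between known pairs of radiocarbon and calendrical ages, such as the two quoted above. The sketch below is mine and purely illustrative; real work relies on published calibration curves built from tree rings and other records, with thousands of points and their own uncertainties, not a three-point table:

    # Illustrative (radiocarbon years B.P., calendrical years B.P.) pairs,
    # taken from the examples in the text plus an anchor at the present.
    CALIBRATION_POINTS = [(0, 0), (18500, 22000), (28700, 33000)]

    def calibrate(radiocarbon_age):
        # Simple linear interpolation between neighboring points.
        for (rc_lo, cal_lo), (rc_hi, cal_hi) in zip(CALIBRATION_POINTS, CALIBRATION_POINTS[1:]):
            if rc_lo <= radiocarbon_age <= rc_hi:
                slope = (cal_hi - cal_lo) / (rc_hi - rc_lo)
                return cal_lo + slope * (radiocarbon_age - rc_lo)
        raise ValueError("outside this illustrative table")

    print(calibrate(18500))   # -> 22000.0
    print(calibrate(11000))   # -> roughly 13,000 calendrical years (illustration only)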

  More recently, a far greater accuracy has been achieved in radiocarbon dating thanks to a device called an accelerator mass spectrometer (AMS). Previously, the way to measure the amount of carbon-14 in a sample compared to its decay product, nitrogen-14, was to count the sample's emissions over some reasonable period of time. This counted only the amount of radiocarbon that decayed in that interval, and from this one could do the math and determine how much was present. In AMS dating, which amounts to putting the sample in a kind of linear accelerator like the ones used to smash atoms, one counts all the atoms of carbon-14 present, not just the ones that decay. This permits far greater accuracy; theoretically, it should provide accurate dates up to 100,000 years ago. In actuality, achieving accurate dates of such great age is not practical even with AMS dating. The reason is that any sample that is 35,000 years old will have only 2 percent of its original carbon-14 left. Even a tiny bit of a recent contaminant—say a 1 percent increment of modern coal dust in the sample—is enough to skew the radiocarbon date seriously. So AMS dating has proven a boon to archaeologists not so much in extending the process back in time as in making it possible to date ever-smaller samples with ever-increasing accuracy. In earlier days, the way to increase one's confidence in a date (and reduce the number of years plus or minus that followed the date) was to use a bigger sample. Now this is no longer much of a consideration—as long as there is enough money in one's research grant to afford the more expensive AMS technique.
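
  The contamination problem is easy to demonstrate with a little arithmetic (a sketch of my own, reusing the 5,730-year half-life; the 1 percent figure is the one given above):

    import math

    HALF_LIFE_C14 = 5730.0  # years

    def fraction_remaining(age_years):
        return 0.5 ** (age_years / HALF_LIFE_C14)

    def apparent_age(fraction_c14):
        return HALF_LIFE_C14 * math.log(1.0 / fraction_c14, 2)

    true_fraction = fraction_remaining(35000)      # about 1.5 percent of the original carbon-14
    contaminated = 0.99 * true_fraction + 0.01     # mix in 1 percent modern carbon
    print(apparent_age(contaminated))              # -> roughly 30,700 years, several thousand years too young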

  There are other, similar means of dating materials. The potassium/argon method, wherein naturally occurring potassium-40 decays into argon-40, is especially useful for dating the minerals within rocks. The half-life of potassium-40 is a whopping 1.35 billion years, so in theory and practice it can be used to date the very origin of the planet Earth, approximately 4.6 billion years ago. On the other hand, it is not much good at dating rocks that are any younger than a few hundred thousand years old, which leaves a large gap of time where most isotopic dating techniques do not function very well. In certain circumstances the gap can be filled by dating inorganic carbonates such as limestone and the stalactites and stalagmites of caves, in which case isotopes of uranium that decay into various “daughter” isotopes at given rates are used. But this isotopic technique is of less value for dating once-living things represented by shells or fossil bones, since these objects absorb uranium unevenly from their burial surround, thereby skewing the date.

  Yet other techniques have been developed, in particular thermoluminescence. This takes advantage of the fact that certain crystalline substances imprison extra electrons in the interstices of the crystals (called “traps”) when the substance is subjected to irradiation from naturally occurring elements, normally uranium in the ground. When the substance is heated in the laboratory, the trapped electrons are released in the form of light. In other words, they glow. The intensity of the glow is directly proportional to the number of trapped electrons. If you know the intensity of irradiation from the ground (which is essentially constant, given the long half-life of the substances, such as uranium, that generate it), you can then estimate how long ago the irradiation began, thus dating the object. This has proven especially useful in dating pottery sherds and some thermally altered or heated chert artifacts, particularly those that are between a few thousand and a few hundred thousand years old. Thermoluminescence has been used, for example, to show that humans were present in Australia 40,000 or more years ago—which would be an iffy call for radiocarbon dating.

  Soon enough, by the late 1950s and early 1960s, carbon-14 dating had established an interval during which Clovis Man had flourished in North America. His tools had been found chiefly south of the Canadian border all the way into northern Mexico and extending from the eastern foothills of the Rocky Mountains all the way to the east coast with several glaring and still intriguing gaps. Other, similar tools had been discovered as far south as the southern tip of South America. By 1967, it was thought that this amazingly expansionary Clovis interval had lasted from 11,500 uncalibrated radiocarbon years B.P. to 11,000 B.P., a mere half millennium. This was an amazingly short period for such widespread exploits. And radiocarbon dates were showing that this was the very same half millennium in which more than sixty species of large mammals were wiped out in the New World. This was the first circumstantial evidence that the first Americans were the heedless destroyers of the continent's most dramatic wildlife.

  THE GREAT EXTINCTION

 
