Coming of Age in the Milky Way

by Timothy Ferris


  Thermodynamics had advanced a long way by the time Darwin came on the scene. Thanks in large measure to its important practical applications in the design of steam engines, the study of heat attracted some of the most intrepid intellects of the nineteenth century—men of the stature of Lord Kelvin, Hermann von Helmholtz, Rudolf Clausius, and Ludwig Boltzmann. But when all this brainpower was brought to bear upon the question of geochronology, the verdict was bad news for Darwin and the uniformitarian geologists.

  The titans of physics chose to focus less on the earth than on that suitably grander and more luminous body, the sun. Helmholtz was helpful: An able philosopher as well as a scientist, he was amused to read that the late Immanuel Kant (with whom he disagreed over just about everything) had thought that the sun was “a flaming body, and not a mass of molten and glowing matter.”36 This Helmholtz the physicist knew to be wrong; were the sun simply burning like a giant campfire, it would have run out of fuel in but a thousand years. Casting about for an alternative solar energy-source, Helmholtz hit upon gravitational contraction: The material of the sun, he reasoned, settles in toward the center, releasing gravitational potential energy in the form of heat. This, the most efficient solar energy-production mechanism that could be envisioned by nineteenth-century physics, yielded an age for the sun of some twenty to forty million years—a lot longer than the chronology of Buffon or the Bible, though still not enough to satisfy the Darwinians.

  The question of the age of the sun then was taken up by Lord Kelvin, an imposing figure by any intellectual standard. Born in Belfast in 1824, Kelvin (né William Thomson) was admitted to the University of Glasgow at the age of ten, had published his first paper in mathematics before he was seventeen, and was named professor of natural philosophy at Glasgow at age twenty-two. An adept musician and an expert navigator as well as a distinguished mathematician and physicist and inventor, Kelvin was a hard man with whom to differ. Moreover, his forte was heat: The Kelvin scale of absolute temperature is named after him, and he was instrumental in identifying the first law of thermodynamics (that energy is conserved in all interactions, meaning that no machine can produce more energy than it consumes) and the second law (that some energy must always be lost in the process). When Kelvin declared the verdict of thermodynamics as to the question of the age of the sun, few mortals, and fewer biologists, could expect both to differ with him and to prevail.

  Kelvin calculated that the sun, releasing heat by virtue of gravitational contraction, could not have been shining for more than five hundred million years. This was a disaster for Darwin. “I take the sun much to heart,” he wrote to Lyell in 1868. “I have not as yet been able to digest the fundamental notion of the shortened age of the sun and earth,” he wrote to Wallace three years later.37 Huxley the bulldog dutifully debated Kelvin on geochronology, at a meeting of the Geological Society of London, but Kelvin was no Bishop Wilberforce and Huxley got nowhere. Clearly either Darwin’s theory or Kelvin’s calculations were wrong. Darwin died not knowing which.

  To their credit, both Darwin and Kelvin allowed that something important might be missing from their considerations. As Darwin put it, pleading his case in a late edition of the Origin, “We are confessedly ignorant; nor do we know how ignorant we are.”38 Kelvin, for his part, admitted that his assessments of the age of the sun depended upon the accuracy of Helmholtz’s hypothesis that solar energy came from the alleged contraction of the sun. He remarked, in one of the most pregnant parenthetical phrases in the history of physics, that “(I do not say there may not be laws which we have not discovered.)”39

  It was in conceding that their views might be incomplete that both men proved most prophetic. What they lacked was an understanding of two of the fundamental forces of nature, known corporately as nuclear energy. It is the decay of radioactive material—via the weak nuclear force—that has kept the earth warm for nearly five billion years. It is nuclear fusion—which also involves the strong force—that has powered the sun for as long, and that promises to keep it shining for another five billion years. With the discovery of nuclear energy the time-scale debate was resolved in Darwin’s favor, the doors to nuclear physics swung open, and the world lost its innocence.

  The nuclear age may be said to have dawned on November 8, 1895, in a laboratory at the University of Würzburg, at the hands of the physicist Wilhelm Conrad Röntgen. Röntgen was experimenting with electricity in a semi-vacuum tube. The laboratory was dark. He noticed that a screen across the room, coated with barium platinocyanide, glowed in the dark whenever he turned on the power to the tube, as if light from the tube were reaching the screen. But ordinary light could not be responsible: The tube was enclosed in black cardboard and no light could escape it. Puzzled, Röntgen placed his hand between the tube and the screen and was startled to see the bones in his hand exposed, as if the flesh had become translucent. Röntgen had detected “X rays”—high-energy photons generated by electron transitions at the inner shells of atoms.*

  Among the scores of physicists who took notice of Röntgen’s detection of X rays was Henri Becquerel, a third-generation student of phosphorescence who shared with his father and grandfather a fascination with anything that glowed in the dark. Becquerel’s discovery, like Röntgen’s, was accidental, though both illustrated the validity of Louis Pasteur’s dictum that chance favors the prepared mind. Between experiments in his laboratory in Paris, Becquerel stored some photographic plates wrapped in black paper in a drawer. A piece of uranium happened to be sitting on top of them. When Becquerel developed the plates several days later, he found that they had been imprinted, in total darkness, with an image of the lump of uranium. He had detected radioactivity, the emission of subatomic particles by unstable atoms like those of uranium—which, Becquerel noted in announcing his results in 1896, was particularly radioactive. His work helped initiate a path of research that would lead, eventually, to Einstein’s realization that every atom is a bundle of energy.

  At McGill University in Montreal, the energetic experimentalist Ernest Rutherford, a great bear of a man whose roaring voice sent his assistants and their laboratory glassware trembling, found that radioactive materials can produce surprisingly large amounts of energy. A lump of radium, Rutherford established, generates enough heat to melt its weight in ice every hour, and can continue to do so for a thousand years or more. Other radioactive elements last even longer; some keep ticking away at an almost undiminished rate for billions of years.

  This, then, was the answer to Kelvin, and one that spelled deliverance for the late Charles Darwin: The earth stays warm because it is heated by radioactive elements in the rocks and molten core of the globe. As Rutherford wrote:

  The discovery of the radioactive elements, which in their disintegration liberate enormous amounts of energy, thus increases the possible limit of the duration of life on this planet, and allows the time claimed by the geologist and biologist for the process of evolution.40

  Understandably pleased with this conclusion, the young Rutherford rose to address a meeting of the Royal Institution, only to find himself confronted by the one scientist in the world his paper could most deeply offend:

  I came into the room, which was half dark, and presently spotted Lord Kelvin in the audience and realized that I was in for trouble at the last part of my speech dealing with the age of the earth, where my views conflicted with his. To my relief, Kelvin fell fast asleep, but as I came to the important point, I saw the old bird sit up, open an eye and cock a baleful glance at me! Then a sudden inspiration came, and I said Lord Kelvin had limited the age of the earth, provided no new source [of energy] was discovered. That prophetic utterance refers to what we are now considering tonight, radium! Behold! the old boy beamed upon me.41

  Radioactive materials not only testified to the antiquity of the earth, but provided a way of measuring it as well. Rutherford’s biographer A. S. Eve recounts an exchange that signaled this new insight:

  About this time Rutherford, walking in the Campus with a small black rock in his hand, met the Professor of Geology. “Adams,” he said, “how old is the earth supposed to be?” The answer was that various methods lead to an estimate of one hundred million years. “I know” said Rutherford quietly, “that this piece of pitchblende is seven hundred million years old.”42

  What Rutherford had done was to determine the rate at which the radioactive radium and uranium in the rock gave off what he called alpha particles, which are the nuclei of helium atoms, and then to measure the amount of helium in the rock. The result, seven hundred million years, constituted a reasonably reliable estimate of how long the radioactive materials had been in there, emitting helium.
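
  The logic of Rutherford’s estimate can be put in a line of arithmetic: if helium is produced at a steady, measurable rate and none leaks out of the rock, the age is simply the helium accumulated divided by the rate of production. The Python sketch below uses purely illustrative numbers, not Rutherford’s actual measurements, to show the shape of the calculation.

```python
def helium_accumulation_age(helium_atoms, helium_atoms_per_year):
    """Rough age of a rock from its trapped helium.

    Assumes a constant production rate (alpha particles are helium
    nuclei) and that no helium has escaped the rock, the same
    simplifying assumptions behind Rutherford's estimate.
    """
    return helium_atoms / helium_atoms_per_year

# Illustrative numbers only: a rock holding 7e17 helium atoms whose
# radium and uranium emit 1e9 alpha particles per year implies an age
# of 7e8 years, the order of Rutherford's seven hundred million.
print(helium_accumulation_age(7e17, 1e9))  # 700000000.0
```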

  Rutherford had taken a first step toward the science of radiometric dating. Every radioactive substance has a characteristic half-life, during which time half of the atoms in any given sample of that element will decay into atoms of another element. By comparing the abundance of the original (or “parent”) isotope with that of the decay product (or “daughter”), it is possible to age-date the stone or arrowhead or bone that contains the parent and daughter isotopes.

  Carbon-14 is especially useful in this regard, since every living thing on Earth contains carbon. The half-life of carbon-14 is 5,570 years, meaning that after 5,570 years half of the carbon-14 atoms in any given sample will have decayed into atoms of nitrogen-14. If we examine, say, the remains of a Navaho campfire and find that half the carbon-14 in the charred remains of the burnt logs has decayed into nitrogen-14, we can conclude that the fire was built 5,570 years ago. If three quarters of the carbon-14 has turned to nitrogen-14, then the logs are twice as old—11,140 years—and so forth. After about five half-lives the amount of remaining parent isotope generally has become too scanty to be measured reliably, but geologists have recourse to other, more long-lived radioactive elements. Uranium-238, for one, has a half-life of over 4 billion years, while the half-life of rubidium-87 is a methuselian 47 billion years.
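
  The same arithmetic works for any parent-and-daughter pair. Here is a minimal Python sketch, using the 5,570-year half-life quoted above (the function and variable names are merely illustrative), that turns a surviving fraction of parent isotope into an age:

```python
import math

def radiometric_age(parent_fraction_remaining, half_life_years):
    """Age implied by the fraction of the parent isotope still present.

    Solves parent_fraction = (1/2) ** (age / half_life) for the age.
    """
    return half_life_years * math.log(1.0 / parent_fraction_remaining, 2)

C14_HALF_LIFE = 5570  # years, the value used in the text

# Half the carbon-14 remains: one half-life, 5,570 years.
print(radiometric_age(0.5, C14_HALF_LIFE))   # 5570.0
# A quarter remains (three quarters decayed): two half-lives, 11,140 years.
print(radiometric_age(0.25, C14_HALF_LIFE))  # 11140.0
```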

  In practice, radiometric dating is a subtle process, fraught with potential error. First one has to ascertain when the clock started. In the case of carbon-14, this is usually when the living tissue that contained it died. Carbon-14 is constantly being produced by the collision of high-energy subatomic particles from space with atoms in the earth’s upper atmosphere. Living plants and animals ingest carbon-14, along with other forms of carbon, only so long as they live. The scientist who comes along years later to age-date their remains is, therefore, reading a clock that started when the host died. The reliability of the process depends upon the assumption that the amount of ambient carbon-14 in the environment at the time was roughly the same as it is today. If not—if, for instance, a storm of subatomic particles from space happened to increase the amount of carbon-14 around thousands of years ago—then the radiometric date will be less accurate. In the case of inorganic materials, one may be dealing with radioactive atoms older than the earth itself; their clocks may have started with the explosion of a star that died when the sun was but a gleam in a nebular eye. But if such intricacies complicate the process of radiometric age-dating they also hint at the extraordinary range of its potential applications, in fields ranging from geology and geophysics to astrophysics and cosmology.

  The process of radiometrically age-dating geological strata got under way only ten years after the discovery of radioactivity itself, when the young British geologist Arthur Holmes, in his book The Age of the Earth, correlated the ages of uranium-bearing igneous rocks with those of adjacent fossil-bearing sedimentary strata. By the 1920s it was becoming generally accepted by geologists, physicists, and astronomers that the earth is billions of years old and that radiometric dating presents a reliable way of measuring its age. Since then, ancient rocks in southwestern Greenland have been radiometrically age-dated at 3.7 billion years, meaning that the crust of the earth can be no younger than that. Presumably the planet is older still, having taken time to cool from a molten ball and form a crust. Moon rocks collected by the Apollo astronauts are found to be nearly 4.6 billion years old, about the same age as meteorites—chunks of rock that once were adrift in space and since have been swept up by the earth in its orbit around the sun. It is upon this basis that scientists generally declare the solar system to be some 5 billion years old, a finding that fits well with the conclusions of astrophysicists that the sun is a normal star about halfway through a 10-billion-year lifetime.

  When nuclear fission, the production of energy by splitting nuclei, was detailed by the German chemists Otto Hahn and Fritz Strassmann in 1938, and nuclear fusion, which releases energy by combining nuclei, was identified by the American physicist Hans Bethe the following year, humankind could at last behold the mechanism that powers the sun and the other stars. In the general flush of triumph, few paid attention to the dismaying possibility that such overwhelming power might be set loose with violent intent on the little earth. Einstein, for one, assumed that it would be impossible to make a fission bomb; he compared the problem of inducing a chain reaction to trying to shoot birds at night in a place where there are very few birds. He lived to learn that he was wrong. The first fission (or “atomic”) bomb was detonated in New Mexico on July 16, 1945, and two more were dropped on the cities of Hiroshima and Nagasaki a few weeks later. The first fusion (or “hydrogen”) bomb, so powerful that it employed a fission weapon as but its detonator, was exploded in the Marshall Islands on November 1, 1952.

  A few pessimists had been able to peer ahead into the gloom of the nuclear future, though their words went largely unheeded at the time. Pierre Curie had warned of the potential hazards of nuclear weapons as early as 1903. “It is conceivable that radium in criminal hands may become very dangerous,” said Curie, accepting the Nobel Prize.* “… Explosives of great power have allowed men to do some admirable works. They are also a terrible means of destruction in the hands of the great criminals who lead nations to war.”43 Arthur Stanley Eddington, guessing that the release of nuclear energy was what powered the stars, wrote in 1919 that “it seems to bring a little nearer to fulfillment our dream of controlling this latent power for the well-being of the human race—or for its suicide.”44 These and many later admonitions notwithstanding, the industrialized nations set about building bombs just as rapidly as they could, and by the late 1980s there were over fifty thousand nuclear weapons in a world that had grown older if little wiser. Studies indicated that the detonation of as few as 1 percent of these warheads would reduce the combatant societies to “medieval” levels, and that climatic effects following a not much larger exchange could lead to global famine and the potential extinction of the human species. The studies were widely publicized, but years passed and the strategic arsenals were not reduced.

  It was through the efforts of the bomb builders that Darwin’s century-old theory of the origin of coral atolls was at last confirmed. Soon after World War II, geologists using tough new drilling bits bored nearly a mile down into the coral of Eniwetok Atoll and came up with volcanic rock, just as Darwin had predicted. The geologists’ mission, however, had nothing to do with evolution. Their purpose was to determine the structure and strength of the atoll before destroying it, in a test of the first hydrogen bomb. When the bomb was detonated, its fireball vaporized the island on which it had been placed, tore a crater in the ocean floor more than a mile across, and sent a cloud of freshly minted radioactive atoms wafting across the paradisiacal islands downwind. President Truman in his final State of the Union message declared that “the war of the future would be one in which Man could extinguish millions of lives at one blow, wipe out the cultural achievements of the past, and destroy the very structure of civilization.

  “Such a war is not a possible policy for rational men,” Truman added.45 Nonetheless, each of the next five presidents who succeeded him in office found it advisable to threaten the Soviets with the use of nuclear weapons. As the British physicist P. M. S. Blackett observed, “Once a nation pledges its safety to an absolute weapon, it becomes emotionally essential to believe in an absolute enemy.”46

  Einstein, sad-eyed student of human tragedy, closed the circle of evolution, thermodynamics, and nuclear fusion in a single sentence. “Man,” he said, “grows cold faster than the planet he inhabits.”47

  *Though Darwin, echoing Newton, characterized much of his research as purely inductive—“I worked on true Baconian principles,” he said of his account of evolution, “and without any theory collected facts on a wholesale scale”—this has always been a difficult claim to justify scrupulously, and Darwin formulated his theory of coral atoll formation while still in South America, before he ever laid eyes on a real atoll.

  *The rise in animal breeding was spurred on by the growing industrialization of England, which brought working people in from the country, where they could keep a few barnyard animals of their own, to the cities, where they were fed from ever larger herds bred to maximize profits. More generally, the advent of Darwinism itself might be said to have been fostered by a certain distancing of human beings from the creatures they studied; it was only once people stopped cohabiting with animals that they began to entertain the idea that they were the animals’ relations.

  *Malthus, incidentally, appears to have been inspired in part by reading Darwin’s grandfather Erasmus. It’s a small world, or was so in Victorian England.

 
