Perry saw nothing wrong with 4 billion years for the Earth’s age, fairly close to today’s determination of about 4.5 billion years.
Perry’s work created the first crack in Kelvin’s seemingly unshakable calculations, by challenging the postulates that Kelvin made concerning the Earth’s solidity and homogeneity. There was, however, another crucial hypothesis in Kelvin’s estimate of the age of the Earth: that there were no unknown internal or external energy sources that could compensate for the heat losses. Events toward the end of the nineteenth century demolished this premise too.
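To see concretely what Perry was attacking, it helps to sketch the calculation itself (with round numbers of the sort Kelvin used; this is an illustrative reconstruction, not his exact figures). For a solid, homogeneous half-space that starts at a uniform temperature $T_0$ and cools by conduction through its surface, the surface temperature gradient decreases with time, so measuring today’s gradient yields the age:

\[
\left.\frac{\partial T}{\partial z}\right|_{z=0}=\frac{T_{0}}{\sqrt{\pi\kappa t}}
\qquad\Longrightarrow\qquad
t=\frac{T_{0}^{2}}{\pi\kappa\,(\partial T/\partial z)^{2}}.
\]

With an initial temperature of roughly 4,000 degrees Celsius, a thermal diffusivity $\kappa\approx 10^{-6}$ square meters per second, and a measured gradient of about 1 degree Fahrenheit per 50 feet (roughly 0.04 degrees Celsius per meter), the formula gives an age of order one hundred million years. Everything, however, hinges on heat moving only by conduction through a rigid, uniform solid; Perry’s point was precisely that it need not.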
Radioactivity
In the spring of 1896, the French physicist Henri Becquerel discovered that the decay of unstable atomic nuclei is accompanied by spontaneous emission of particles and radiation. The phenomenon became known as radioactivity. Seven years later, physicists Pierre Curie and Albert Laborde reported that the decay of radium salts provided a previously unknown source of heat. It took the amateur astronomer William E. Wilson less than four months from Curie and Laborde’s announcement to come up with the speculation that this property of radium “may possibly afford a clue to the source of energy in the sun and stars.” Wilson estimated that just “3.6 grams of radium per cubic meter of the sun’s volume would supply the entire output.” While Wilson’s extremely short note to Nature received relatively little attention from the scientific community, the potential implications of an unanticipated source of energy did not escape George Darwin. This mathematical physicist, who ceaselessly looked for ways to free geology from the straitjacket imposed by Kelvin’s chronology, declared emphatically in September 1903: “The amount of energy available [in radioactive materials] is so great as to render it impossible to say how long the sun’s heat has already existed, or how long it will last in the future.” The Irish physicist and geologist John Joly embraced this pronouncement enthusiastically and immediately applied it to the problem of the age of the Earth. In a letter to Nature published on October 1, Joly pointed out that “a source of supply of heat [the radioactive minerals] in every element of material” would be equivalent to an increased transfer of heat from the Earth’s interior. This was precisely what Perry had shown was needed in order to increase the age estimates. Put differently, in Kelvin’s scenario, the Earth was merely losing heat from its original reservoir. The discovery of a new source of internal heat seemed to undermine the entire basis for this scheme.
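A quick check shows that Wilson’s figure was of the right order (the modern values below are mine, not his). The Sun radiates about $3.8\times10^{26}$ watts from a volume of about $1.4\times10^{27}$ cubic meters:

\[
\frac{L_{\odot}}{V_{\odot}}\approx\frac{3.8\times10^{26}\,\mathrm{W}}{1.4\times10^{27}\,\mathrm{m^{3}}}\approx 0.3\,\mathrm{W\,m^{-3}},
\]

and Curie and Laborde’s measurement, roughly 100 calories per gram per hour, corresponds to about 0.1 watt per gram of radium. A few grams per cubic meter, Wilson’s 3.6, would therefore indeed supply heat at roughly the Sun’s rate.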
One of the key figures in the ensuing frantic research on radioactivity was the young New Zealand–born physicist Ernest Rutherford, who later became known as the “father of nuclear physics.” At the time, Rutherford was working at McGill University in Montreal (he later moved to the United Kingdom), where he concluded on the basis of scores of experiments that the atoms of all of the radioactive elements contained enormous amounts of latent energy that could be released as heat. One journal welcomed the announcement by Rutherford that the Earth would survive much longer than Kelvin had estimated with the headline: “DOOMSDAY POSTPONED.”
For his part, Kelvin showed great interest in the discoveries concerning radium and radioactivity, but he remained unconvinced that these would alter his age estimates. Refusing to admit, at least initially, that the source of energy of the radioactive elements could come from within, he wrote, “I venture to suggest that somehow ethereal waves may supply the energy to the radium while it is giving out heat to the ponderable matter around it.” In other words, Kelvin proposed that the atoms simply collect energy from the ether (which was supposed to permeate all space), only to release it back upon their decay. In 1904, however, with considerable intellectual courage, he abandoned this idea at the British Association meeting, although he never published a retraction in print. Unfortunately, for some unclear reason, he again lost touch with the rest of the physics community in 1906 when he rejected the notion that radioactive decay transmuted one element into another, even though Rutherford and others had accumulated solid experimental evidence for this phenomenon. At one point during this period, Rutherford’s onetime collaborator Frederick Soddy lost his patience. In an acerbic exchange with Kelvin in the pages of the London Times, he declared disrespectfully, “It would be a pity if the public were misled into supposing that those who have not worked with radioactive bodies [alluding to Kelvin] are as entitled to as weighty an opinion as those who have.” Even before that altercation, in a book he had published in 1904, Soddy did not hesitate to assert firmly that “the limitations with respect to the past and future history of the universe have been enormously extended.”
Rutherford was a little more generous. Many years later, he told and retold an anecdote related to a lecture on radioactivity that he had given in 1904 at the Royal Institution:
I came into the room, which was half dark, and presently spotted Lord Kelvin in the audience and realized that I was in for trouble at the last part of the speech dealing with the age of the Earth, where my views conflicted with his. To my relief he fell fast asleep but as I came to the important point, I saw the old bird sit up, open an eye and cock a baleful glance at me! Then sudden inspiration came, and I said Lord Kelvin had limited the age of the Earth, provided no new source of heat was discovered. That prophetic utterance refers to what we are now considering tonight, radium! Behold! The old boy beamed at me.
Eventually, radiometric dating became one of the most reliable techniques for determining the ages of minerals, rocks, and other geological features, including the Earth itself. Generally, a radioactive element decays into another element at a rate characterized by its half-life: the period of time it takes for half of any initial amount of the radioactive material to decay. If the daughter element is itself radioactive, the decay series continues until it reaches a stable element. By measuring and comparing the relative abundances of naturally occurring radioactive isotopes and all of their decay products, and coupling those data with the known half-lives, geologists have been able to determine the Earth’s age to high precision. Rutherford was one of the pioneers of this technique, as the following story documents: Rutherford was walking on campus with a small black rock in his hand when he met his Canadian geologist colleague Frank Dawson Adams. “Adams,” he asked, “how old is the earth supposed to be?” Adams answered that several methods had given an estimate of one hundred million years. Rutherford then commented quietly, “I know that this piece of pitchblende [a mineral that is a source of uranium] is seven hundred million years old.”
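The arithmetic behind Rutherford’s confident statement is simple, at least in idealized form. If a parent isotope with half-life $t_{1/2}$ decays into a stable daughter that stays trapped in the mineral, and the mineral contained no daughter atoms when it formed, then the present daughter-to-parent ratio $D/P$ fixes the age (a simplified sketch; real dating methods must correct for initial daughter abundances and for losses):

\[
P(t)=P_{0}\left(\tfrac{1}{2}\right)^{t/t_{1/2}},\qquad
\frac{D}{P}=2^{\,t/t_{1/2}}-1
\qquad\Longrightarrow\qquad
t=t_{1/2}\,\log_{2}\!\left(1+\frac{D}{P}\right).
\]

For uranium-238, with $t_{1/2}\approx 4.5$ billion years, a measured ratio of about 0.11 would correspond to an age of roughly 700 million years, the kind of number Rutherford quoted for his pitchblende (his own earliest ages in fact relied on the helium that accumulates from alpha decay).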
Most if not all descriptions of the age-of-the-Earth controversy would have you believe that Kelvin’s dramatically wrong age estimate was a direct consequence of the fact that he ignored radioactivity. If this were the whole truth, Kelvin’s error would not have qualified as a blunder in my book, since Kelvin could not have considered a previously undiscovered source of energy. However, it is actually mistaken to attribute the erroneous age determination entirely to radioactivity. It is true that radioactive decays within the entire volume of the Earth’s mantle (down to a depth of about 1,800 miles) produce heat at a rate roughly equal to half the rate of heat flow through the planet. But not all of this heat can be tapped readily. A careful examination of the problem reveals that, given Kelvin’s assumptions, even had he included radioactive heating, he really should have considered only the heat generated inside the Earth’s outer 60-mile-deep skin. The reason is that Kelvin showed that only heat from such depths could be effectively mined by conduction in about one hundred million years. Geologists Philip England, Peter Molnar, and Frank Richter demonstrated in 2007 that when this fact is taken into account, the inclusion of radioactive heat deposition would not have altered Kelvin’s estimate for the age of the Earth in any significant way. Kelvin’s most serious blunder was not in being unaware of radioactivity (even though, once it was discovered, ignoring it was certainly not justified), but in initially ignoring and later objecting to the possibility raised by Perry of convection within the Earth’s mantle. This was the true source of the unacceptably low age estimate.
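The 60-mile figure follows from a standard order-of-magnitude estimate (generic textbook values here, not England, Molnar, and Richter’s exact numbers). In a time $t$, heat diffuses by conduction through a layer of characteristic thickness

\[
\ell\sim\sqrt{\kappa t}\approx\sqrt{\left(10^{-6}\,\mathrm{m^{2}\,s^{-1}}\right)\left(3\times10^{15}\,\mathrm{s}\right)}\approx 5\times10^{4}\,\mathrm{m},
\]

that is, some 50 to 100 kilometers in one hundred million years, allowing for the prefactor of order unity. Radioactive heat released deeper than that could not have reached the surface by conduction within Kelvin’s time frame, so it could not have changed the surface gradient on which his calculation rested.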
How could a man of such intellectual powers as Kelvin be so sure that he was right even when he was dead wrong? Like all humans, Kelvin still had to use the hardware between his ears—his brain—and the brain has limitations, even when it belongs to a genius.
On the Feeling of Knowing
Since we can neither interview Kelvin nor image areas of his functioning brain, we will never know for sure the precise reasons for his misguided stubbornness. We do know, of course, that people who have spent much of their working lives defending certain propositions do not like to admit that they were wrong. But shouldn’t Kelvin, the great scientist that he was, have been different? Isn’t changing one’s theories based on new experimental evidence part of what science is all about? Fortunately, modern psychology and neuroscience are beginning to shed some light on what has been termed the “feeling of knowing,” which almost certainly shaped some of Kelvin’s thinking.
I should first note that in his approach to science and his crusade for knowledge, Kelvin was more akin to an engineer than to a philosopher. An effective mathematical physicist on the one hand and a gifted experimentalist on the other, he always sought a premise with which he could calculate or measure something rather than an opportunity to contemplate different possibilities. At the most basic level, therefore, Kelvin’s blunder was a consequence of his belief that he could always determine what was probable, without realizing the ever-present danger of overlooking some possibilities.
At a somewhat deeper stratum, Kelvin’s blunder probably stemmed from a well-recognized psychological trait: The more committed we are to a certain opinion, the less likely we are to relinquish it, even if confronted with massive contradictory evidence. (Does the phrase “weapons of mass destruction” ring a bell?) The theory of cognitive dissonance, originally developed by psychologist Leon Festinger, deals precisely with those feelings of discomfort that people experience when presented with information that is inconsistent with their beliefs. Multiple studies show that to relieve cognitive dissonance, in many cases, instead of acknowledging an error in judgment, people tend to reformulate their views in a new way that justifies their old opinions.
The messianic stream within the Jewish Hasidic movement known as Chabad provided an excellent, if esoteric, example of this process of reorientation. The belief that the Chabad leader Rabbi Menachem Mendel Schneerson was the Jewish Messiah picked up momentum during the decade preceding the rabbi’s death in 1994. After the rabbi suffered a stroke in 1992, many faithful followers in the Chabad movement were convinced that he would not die but would “emerge” as the Messiah. Faced with the shock of his eventual death, however, dozens of these followers changed their attitudes and argued (even during the funeral) that his death was, in fact, a required part of the process of his returning as the Messiah.
An experiment conducted in 1955 by psychologist Jack Brehm, then at the University of Minnesota, demonstrated a different manifestation of cognitive dissonance. In that study, 225 female sophomore students (the classic subjects of experiments in psychology) were first asked to rank eight manufactured articles as to their desirability, on a scale of 1.0 (“not at all desirable”) to 8.0 (“extremely desirable”). In the second stage, the students were allowed to choose as a take-home gift one of two articles presented to them from the original eight. A second round of rating all eight items then followed. The study showed that in the second round, the students tended to increase their ratings for the article they had chosen and to lower them for the rejected item. These and other similar findings support the idea that our minds attempt to reduce the dissonance between the cognition “I chose item number three” and the cognition “But item number seven also has some attractive features.” Put differently, things seem better after we choose them, a conclusion corroborated further by neuroimaging studies that show enhanced activity in the caudate nucleus, a region of the brain implicated in “feeling good.”
Kelvin’s case appears to fit the cognitive dissonance theory like a glove. After having repeated the arguments about the age of the Earth for more than three decades, Kelvin was not likely to change his opinion just because someone suggested the possibility of convection. Note that Perry was not able to prove that convection was taking place, nor was he even able to show that convection was probable. By the time radioactivity appeared on the scene another decade later, Kelvin was probably even less inclined to publish a concession of defeat. Instead, he preferred to engage in an elaborate scheme of experiments and explanations intended to demonstrate that his old estimates still held true.
Why is it so difficult to let go of opinions, even in the face of contradictory evidence that any independent observer would regard as convincing? The answer can perhaps be found in the way the reward circuitry of the brain operates. As early as the 1950s, researchers James Olds and Peter Milner of McGill University identified pleasure centers in the brains of rats. Rats were found to press the lever that activated electrodes placed at these pleasure-inducing locations more than six thousand times per hour! The potency of this pleasure-producing stimulation was illustrated dramatically in the mid-1960s, when experiments showed that when forced to choose between obtaining food and water or the rewarding pleasure stimulation, rats suffered self-imposed starvation.
Neuroscientists of the past two decades have developed sophisticated imaging techniques that allow them to see in detail which parts of the human brain light up in response to pleasing tastes, music, sex, or winning at gambling. The most commonly used techniques are positron-emission tomography (PET) scans, in which radioactive tracers are injected and then followed in the brain, and functional MRI (fMRI), which monitors the flow of blood to active neurons. Studies showed that an important part of the reward circuitry is a collection of nerve cells that originate near the base of the brain (in an area known as the ventral tegmental area, or VTA) and connect to the nucleus accumbens—an area beneath the frontal cortex. The VTA neurons communicate with the nucleus accumbens neurons by dispatching a particular neurotransmitter called dopamine. Other brain areas provide the emotional content, relate the experience to memories, and trigger responses. The hippocampus, for instance, effectively “takes notes,” while the amygdala “grades” the pleasure involved.
So how does all of this relate to intellectual endeavors? To embark on, and persist in, some relatively long-term thought process, the brain needs at least some promise of pleasure along the way. Whether it is the Nobel Prize, the envy of neighbors, a salary raise, or the mere satisfaction of completing a Sudoku puzzle labeled “evil,” the nucleus accumbens of our brain needs some dose of reward to keep going. However, if the brain derives frequent rewards over an extended period of time, then just as in the case of those self-starving rats, or of people who are addicted to drugs, the neural pathways connecting the mental activity to the feeling of accomplishment gradually adapt. Drug addicts need more of the drug to get the same effect. For intellectual activities, this may result in an enhanced need to be right all the time and, concomitantly, in an increasing difficulty in admitting errors. Neuroscientist and author Robert Burton has suggested specifically that the insistence upon being right might have physiological similarities to other addictions. If true, then Kelvin would no doubt match the profile of someone addicted to the sensation of being certain. Almost half a century of what he surely regarded as victorious battles with the geologists would have strengthened his convictions to the point where those neural links could not be dissolved. Irrespective, however, of whether the sensation of being certain is addictive or not, fMRI studies have shown that what is known as motivated reasoning—when the brain converges on judgments that maximize the positive feelings associated with attaining one’s motives—is not associated with the neural activity linked to cold reasoning tasks. In other words, motivated reasoning is regulated by emotions, not by dispassionate analysis, and its goal is to minimize threats to the self. It is not inconceivable that late in life, Kelvin’s “emotional mind” occasionally swamped his “rational mind.”
You may recall that earlier I referred to Kelvin’s calculation of the age of the Sun. I do not consider his estimate to be a blunder. How is that possible? After all, his estimate of less than one hundred million years was wrong by as much as his value for the age of the Earth.
Fusion
In an article on the age of the Earth written in 1893, three years before the discovery of radioactivity, the American geologist Clarence King wrote, “The concordance of results between the ages of the sun and earth certainly strengthens the physical case and throws the burden of proof upon those who hold to the vaguely vast age derived from sedimentary geology.” King’s point was well taken. As long as the age of the Sun was estimated to be only a few tens of millions of years, any age estimates based on sedimentation would have been constrained, since for sedimentation to occur, the Earth had to be warmed by the Sun.
Recall that Kelvin’s calculation of the age of the Sun relied entirely on the release of gravitational energy in the form of heat as the Sun contracts. This idea—that gravitational energy could be the source of the Sun’s power—originated with the Scottish physicist John James Waterston as early as 1845. Ignored initially, the hypothesis was revived by Hermann von Helmholtz in 1854 and then enthusiastically endorsed and popularized by Kelvin. With the discovery of radioactivity, many assumed that the radioactive release of heat would turn out to be the real source of the Sun’s power. This, however, proved to be incorrect. Even under the wild assumption that the Sun is composed largely of uranium and its radioactive decay products, the power generated would not have matched the observed solar luminosity (as long as one did not invoke chain reactions, which were unknown in Kelvin’s time). Kelvin’s estimate of the age of the Sun had served to strengthen his objection to revising his calculation of the age of the Earth—as long as the problem of the age of the Sun existed, the discrepancy with the geological guesstimates could not be resolved fully. The answer to the question of the Sun’s age came only a few decades later. In August 1920, the astrophysicist Arthur Eddington suggested that the fusion of hydrogen nuclei to form helium might provide the energy source of the Sun. Building on this concept, the physicists Hans Bethe and Carl Friedrich von Weizsäcker analyzed a variety of nuclear reactions in the late 1930s to explore the viability of this hypothesis. Finally, in the 1940s, the astrophysicist Fred Hoyle (whose groundbreaking work we shall investigate in chapter 8) proposed that fusion reactions in stellar cores could synthesize the elements between carbon and iron. As I noted in the previous chapter, Kelvin was therefore right when he declared in 1862: “As for the future, we may say, with equal certainty, that inhabitants of the earth can not continue to enjoy the light and heat [of the Sun] essential to their life for many million years longer unless sources now unknown to us are prepared in the great storehouse of creation [emphasis added].” The solution to the problem of the age of the Sun required no less than the combined genius of Einstein, who showed that mass could be converted into energy, and the leading astrophysicists of the twentieth century, who identified the nuclear fusion reactions that could lead to such a conversion.
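Two rough estimates with modern solar values show both the scale of Kelvin’s problem and the scale of the fusion solution (a sketch, not the historical calculations). Gravitational contraction can release only about the Sun’s gravitational binding energy, which at the present luminosity would last for the so-called Kelvin-Helmholtz time; hydrogen fusion, by contrast, liberates about 0.7 percent of the rest-mass energy of the hydrogen it burns, roughly a tenth of the Sun’s mass over its lifetime:

\[
t_{\mathrm{KH}}\sim\frac{GM_{\odot}^{2}}{R_{\odot}L_{\odot}}\approx 3\times10^{7}\ \mathrm{years},
\qquad
t_{\mathrm{nuc}}\sim\frac{0.007\times 0.1\,M_{\odot}c^{2}}{L_{\odot}}\approx 10^{10}\ \mathrm{years}.
\]

The first number is essentially Kelvin’s tens of millions of years; the second comfortably accommodates a 4.5-billion-year-old Earth.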