To make matters even stranger, it turned out that this new, supposedly elementary particle, a fundamental constituent of all matter, wasn’t even stable. For if you take a neutron and isolate it within a box, it will, on average, decay within a paltry ten minutes or so!
How can it be that a particle that comprises the better part of every element but hydrogen can be so ephemeral, and yet continue to dominate the mass of everything we can see? A miracle of Einstein’s famous relativistic connection between mass and energy saves the day, and as a result makes our lives possible.
For it works out that a neutron is only very slightly heavier than a proton—less than one part in a hundred heavier. When neutron decay was first observed, the decay products included protons and electrons. Originally, in fact, Chadwick thought that a neutron might be a compound object, consisting of a tightly bound proton and electron. However, relativity makes this impossible, because when particles are bound to one another it takes energy to tear them apart. But, adding energy to something, according to the precepts of relativity, makes it heavier. Thus, a bound state of a proton and an electron would weigh slightly less than would the proton and electron if they were separated.
If this were the case, and a neutron were such a bound state and thus lighter than the sum of the proton plus electron masses, it would be energetically impossible for it to spontaneously decay into a free proton and an electron. The observation of neutron decay therefore implied that the mass of the neutron had to be larger than this sum, and subsequent careful measurements showed this to be the case, if just barely. However, by the same reasoning as given above, when a neutron itself is bound in a nucleus, by forces that were then unknown, its mass would be less than the mass of a free, unbound neutron. It turns out, remarkably, that its mass changes by just enough so that it can no longer decay into a proton plus electron when it is inside a nucleus. Thus, neutrons inside nuclei are stable. As a result, complex nuclei can be stable, and we can exist. Getting back to the neutron itself, if it were not a bound state of a proton and an electron, how could it decay into these products? All previous observations of natural radioactivity involved the disintegration of heavy complex nuclei into smaller nuclear components. Was the neutron therefore elementary, or wasn’t it? And what new force could be responsible for converting neutrons into other particles? Suddenly the strange new world of elementary particle physics became even stranger, if such a thing was possible.
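The mass bookkeeping behind this argument can be made concrete. A minimal sketch, using modern measured rest-mass energies (standard reference figures, not values quoted in the text): the neutron outweighs the proton plus electron by a hair, but the excess is positive, so free-neutron decay is energetically allowed.

```python
# Neutron decay energetics, using standard reference rest-mass energies
# (assumed modern values, not quoted in the text), in MeV.
NEUTRON_MEV = 939.565
PROTON_MEV = 938.272
ELECTRON_MEV = 0.511

# Energy released when a free neutron decays into a proton and an electron:
# positive, so the decay can proceed spontaneously.
q_value = NEUTRON_MEV - PROTON_MEV - ELECTRON_MEV
print(q_value)  # ~0.78 MeV

# The neutron-proton mass difference really is "less than one part in a hundred."
fraction = (NEUTRON_MEV - PROTON_MEV) / NEUTRON_MEV
print(fraction)  # ~0.0014
```

Binding a neutron inside a nucleus lowers its effective mass-energy by more than this tiny margin, which is why the same arithmetic comes out negative there and the bound neutron is stable.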
And if this wasn’t bad enough, the decay of the neutron produced yet another puzzle. If a neutron spontaneously decayed into a proton and an electron, the law of conservation of energy tells us that the proton and electron would each be emitted with a fixed amount of energy, so that the total energy after the decay would equal the energy available from the rest mass of the neutron. However, when the decay of the neutron was observed, it turned out that the electrons that were emitted were measured to have not a fixed energy, but a variable energy, ranging over a continuum from zero energy of motion (i.e., an electron at rest) to carrying off the total energy available associated with the mass difference between the initial neutron and the emitted proton.
If energy was to be conserved in this strange new subatomic world, there was only one solution: Another particle—one that would be invisible to the detectors—had to be emitted in the neutron decay. In this case, this mystery particle and the electron could share the total available energy, with the mystery particle carrying off whatever energy might not be carried off by the electron. The problem with this explanation, however, was that the mass difference between the neutron and the sum of the masses of the proton and electron is very, very small. This means that this hypothetical particle had to be very nearly massless. Moreover, in order to have escaped detection, the particle had to have no charge, and have essentially no other significant interactions with normal matter! The Italian physicist Enrico Fermi called this proposed particle a “neutrino,” which, in Italian, means “little neutron.” It took another twenty years or so for the neutrino to finally be detected, and in the interim the subatomic particle menagerie had expanded even further. The neutrino was simply the first of the novel, exotic, and alien forms of elementary particles that appeared to exist in nature, associated with seemingly new forces. It also appeared to play no part in the stuff that makes us up, or in anything we see around us. Moreover, as we shall see, the nature of some of these new forces defied our very notions of how a commonsense universe should behave. Coming to grips with the mysterious plethora of new particles and forces would occupy much of the rest of the century and would ultimately lead to speculations that even these particles and forces may reflect only the very edge of reality.
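The neutrino hypothesis rescues energy conservation precisely because a three-way split is continuous: whatever the invisible particle carries off, the electron gets the rest. A minimal sketch of this bookkeeping, using the modern measured value of the released energy (an assumed figure, not from the text) and neglecting the tiny proton recoil:

```python
# Energy sharing in three-body neutron decay: the electron's energy is not
# fixed but sweeps a continuum from 0 up to the total released energy Q,
# depending on how much the neutrino takes. Q below is the modern measured
# value (~0.78 MeV), an assumed reference figure.
Q_MEV = 0.782

def electron_energy(neutrino_energy_mev):
    """Kinetic energy left for the electron once the neutrino takes its
    share (proton recoil neglected for simplicity)."""
    assert 0.0 <= neutrino_energy_mev <= Q_MEV
    return Q_MEV - neutrino_energy_mev

# Any split is allowed, which is exactly what the observed continuous
# electron spectrum demands.
samples = [electron_energy(e) for e in (0.0, Q_MEV / 2, Q_MEV)]
print(samples)  # from the full Q down to an electron at rest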
One final observational development, which actually occurred before the other two I have described thus far, contributed to the intellectual excitement of the post-1930 world. Strictly speaking, it occurred in 1929, but it was in the 1930s that it was fully confirmed and that its utterly revolutionary implications began to be fully appreciated by the scientific community. This was the discovery by Edwin Hubble that the universe we live in is not, on its largest scales, eternal and unchanging. A fascinating character, Hubble was sufficiently accomplished to have garnered lasting recognition even if he had not been an expert at self-promotion. A former high school athlete, Rhodes scholar, lawyer, and high school Spanish teacher, Hubble returned to his first love, science, when he was twenty-four. A decade later, following a stint as a major in World War I, Hubble moved to the Mount Wilson Observatory to use the new hundred-inch telescope that had just been completed there. In 1924 he made his first great discovery, which ultimately changed our picture of the universe as much as anything that had ever been seen before. Observing faint variable stars in the Andromeda nebula, as it was then called, he established that these objects existed at a distance of over one million light-years away, more than three times farther away than the most distant objects known to exist within our own galaxy. Before this time the conventional wisdom—established by the influential American astronomer Harlow Shapley, who was the first to determine the size of the Milky Way—held that our galaxy was in essence an island universe, containing all there is to see. Suddenly Hubble’s discovery challenged this picture. The Andromeda nebula turned out to be a neighboring galaxy of comparable size to our own, and just one of what is now understood to be more than four hundred billion galaxies in the observable universe.
Could the universe be infinite in all directions, full of galaxies as far as the eye could see and beyond? Hubble proceeded over the next five years to attempt to classify the nature of distant galaxies, and in 1929 arrived at an unexpected conclusion that made his previous startling discovery pale by comparison. In that year he reported evidence that distant galaxies are, on average, moving away from us and that, moreover, their speed is proportional to their distance: Those twice as far away are moving away twice as fast!
One’s first reaction upon hearing this is to conclude that we are therefore the center of the universe. Needless to say (as my wife reminds me daily), this is not the case. What it does imply, however, is that the space between galaxies is actually uniformly expanding in all directions. Put more simply, the universe is expanding. (To prove this to yourself, draw a square grid of dots on a piece of paper, with the dots regularly spaced. Then draw a grid with the same number of dots but with a larger uniform spacing between them. Then, if you overlay one grid over the other, placing one of the dots in the second grid right over the corresponding dot in the first grid, you will see that from the vantage point of that dot, it looks like all the other dots are moving away from it, with those twice as far away shifting by twice the amount. This works no matter which dot you do this with.)
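The dot-grid demonstration above can also be checked numerically. A minimal sketch (the grid size and expansion factor are arbitrary illustrative choices): rescale a grid of dots uniformly, overlay the two grids at any chosen dot, and every other dot appears to recede by an amount proportional to its distance.

```python
# The expanding-grid thought experiment: dots on a uniform grid, then the
# same grid with larger spacing. From ANY dot's vantage point, every other
# dot recedes in proportion to its distance, which is Hubble's law.

def expand(dots, factor):
    """Uniformly rescale all dot positions (the 'larger spacing' grid)."""
    return [(x * factor, y * factor) for (x, y) in dots]

def apparent_shifts(old, new, anchor_index):
    """Overlay the grids so one chosen dot coincides with its old position,
    then report (original distance, apparent shift) for every dot."""
    ax, ay = old[anchor_index]
    nax, nay = new[anchor_index]
    shifts = []
    for (ox, oy), (nx, ny) in zip(old, new):
        dx0, dy0 = ox - ax, oy - ay          # relative position before
        dx1, dy1 = nx - nax, ny - nay        # relative position after
        dist = (dx0**2 + dy0**2) ** 0.5
        shift = ((dx1 - dx0)**2 + (dy1 - dy0)**2) ** 0.5
        shifts.append((dist, shift))
    return shifts

# A small 3x3 grid with unit spacing, then the spacing doubled.
dots = [(x, y) for x in range(3) for y in range(3)]
new_dots = expand(dots, 2.0)

# From the corner dot's vantage point: dots twice as far away shift twice
# as much (here, doubling the spacing makes each shift equal the distance).
for dist, shift in apparent_shifts(dots, new_dots, 0):
    if dist > 0:
        assert abs(shift - dist) < 1e-9
```

Running the same check with any other dot as the anchor gives the identical result, which is the point of the parenthetical: uniform expansion has no center.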
An expanding universe is in fact precisely one of the two possibilities allowed by Einstein’s general relativity. Indeed, a frustration that Einstein first encountered after developing his theory and attempting to apply it to the nature of the universe as a whole was that it did not allow for a static universe unless that universe was devoid of matter. He tried to get around this problem by introducing an extra ad hoc element into his equations—called the “cosmological term”—which he thought could allow for a static solution with matter. The effect of the cosmological term was to add a small repulsive force throughout space that Einstein thought could counteract gravity on large scales, holding distant objects apart. Unfortunately, however, he blundered. His static solution with a cosmological constant was not stable. Had Einstein had the courage of his convictions in 1916, he might have predicted either an expanding universe or a collapsing one, because these are the only two options allowed by general relativity. Once Hubble had discovered our cosmic expansion, Einstein was overjoyed and even went to visit him at Mount Wilson in 1931 so he could look through the famous telescope himself. George Gamow, physicist and author, later said Einstein confided to him that he thought his introduction of a cosmological term into his equations was his “biggest blunder.” As we shall later see, being willing to discard this term immediately after it seemed unnecessary may have been an even bigger blunder. In any case, Hubble’s discovery of cosmic expansion changed everything about the way we think of “universal” history. If the universe was now expanding, it was once smaller. Assuming the expansion has been continuous, following its history backward meant that ultimately all objects in our visible universe would have been located at a single point at a finite time in the past. This implied, first of all, that our universe had a beginning. Indeed, when Hubble initially used his measured expansion rate to determine the age of the universe, he found an upper limit of two billion years. This was embarrassing, because the earth was, and is, known to be older than that, except by school boards in Ohio, Georgia, and Kansas perhaps.
Fortunately, Hubble’s original measurement was actually off by almost a factor of ten, establishing a now noble tradition in cosmology. With current and thankfully more precise measurements of its expansion history, we now know that the age of the universe is about fourteen billion years.
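The connection between the expansion rate and the age of the universe can be sketched with a one-line estimate: for steady expansion, the age is roughly one over the Hubble constant. The numerical values below are standard reference figures (an assumed modern value near 70 km/s/Mpc, and Hubble's original value near 500 km/s/Mpc), not numbers quoted in the text:

```python
# Rough age of the universe from the Hubble constant H0: for expansion at
# a constant rate, t ~ 1/H0. Unit conversions use standard figures.

KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def hubble_time_gyr(h0_km_s_mpc):
    """1/H0 expressed in billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC
    return 1.0 / h0_per_second / SECONDS_PER_YEAR / 1e9

print(hubble_time_gyr(70))   # modern-style value: ~14 billion years
print(hubble_time_gyr(500))  # Hubble's original estimate: ~2 billion years
```

The same formula run with Hubble's original, badly overestimated expansion rate reproduces his embarrassing two-billion-year upper limit, showing how a factor-of-several error in the measured rate translates directly into the inferred age.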
But a finite age for the universe was not the only startling implication of the observed Hubble expansion. As we continue to move back in time, the size of the region occupied by the presently observable universe decreases. Originally, macroscopic bodies such as stars and galaxies would have been crowded together in a volume smaller than the size of an atom. In this case, the physics that would have governed the earliest moments of what has now become known as the big bang would involve processes acting on the smallest scales. On these scales the strange laws of quantum mechanics reign supreme, at least as far as we know. But, as we peer back to the very beginning itself, when all the matter in the observable universe existed together at virtually a single point, the very nature of space itself, and possibly even time as well, may have been dramatically different. Perhaps the entire universe as we know it emerged from behind the looking glass, from another dimension of sight and sound. Suddenly, faced with a possible singularity at the beginning of time, truth was stranger than fiction.
While the past remains a compelling subject, the future is usually of more practical interest. And a currently expanding universe could have one of three possible futures: Either the expansion continues unabated, or it slows down but never quite stops, or it stops and the universe recollapses. Determining which of these fates awaits the cosmos, by determining the magnitude of each of the terms in Einstein’s equations for an expanding universe, became one of the principal items of business for cosmology for the rest of the twentieth century. In the 1990s we thought we finally had the answer down pat. But the universe, as it has a way of doing, surprised us. As we shall see, it turns out that empty space—not matter, and not radiation—holds the key to our future. Thus, just as trying to understand our cosmic beginnings has forced us to ponder the ultimate nature of space and time, our very future may depend upon whether there is much more to empty space than meets the eye.
These revolutions in our picture of the universe at fundamental scales, from the existence of antimatter and virtual particles, to the apparent population explosion of particles and forces, and ultimately to the dynamic nature of space itself, completely transformed the landscape of physics and affected the very questions about nature that physicists might ask. Happily, many of the confusions raised by these unexpected discoveries have been resolved, as we shall see. But not all of them have been, and in the process other puzzles have arisen that have made the preliminary thrusts of physics at the beginning of the twenty-first century bear an odd resemblance to the philosophical speculations that so inspired Poincaré, Wells, Picasso, and others at the beginning of the previous century.
CHAPTER 10
CURIOUSER AND CURIOUSER . . .
After a storm comes a calm.
—Fourteenth-century proverb
The 1950s are remembered by many as a period of relative peace and stability, at least compared to the World War and subsequent recovery that had occupied the previous decade, and the tumultuous era that was yet to come. Memories, of course, can be deceiving, and I suspect that the families of the many thousands of Korean and U.S. soldiers killed in the Korean War, and of those who lost their lives or became trapped in Communist Hungary in 1956, may think otherwise. Whatever one’s assessment of the political situation of the time, in physics it was a period of growing but exciting confusion as the implications of the remarkable discoveries of the 1930s became manifest. Part of this excitement was generated by the availability of gargantuan tools that were part of the emergence of “big science” in the late 1940s, following the mammoth Manhattan Project that led to the development of the atomic bomb and an immediate, and gruesome, end to World War II. During this period the unprecedented power of atomic weapons raised scientists up on a pedestal. While general scientific education in the United States did not become a priority until the crisis-like reaction following the Soviet Union’s launching of the Sputnik satellite in 1957, the public began to appreciate the possibly dramatic impact of what would otherwise be considered rather esoteric physical phenomena. The newfound knowledge of the inner workings of atomic nuclei had manifested itself in the devastation wrought by nuclear weapons. But almost as if to balance the scales, physicists also invented the transistor and, with it, solid-state electronics, exploiting the strange laws of quantum mechanics to positively revolutionize our daily lives in almost every way. Today it is hard to imagine going for even an hour without depending at some time on transistors and the technology that has been developed around them.
Even biology was benefiting from knowledge on atomic scales. X-ray crystallography was enabling scientists to piece together the atomic structure of many materials, and in April 1953, Watson and Crick discovered the remarkable double-helix structure of DNA and, with it, the very basis of life itself. Or, as they put it in the concluding sentence of the paper announcing their results, in one of the most celebrated understatements in the history of science: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”
The potential for the future seemed endless, limited only by our imagination. And imagination was in no short supply. But at the same time, on fundamental scales at least, nature seemed to be outpacing our ability to keep up.
The onslaught had begun slowly, as early as 1937, when once again cosmic rays produced a surprise. Recall that Carl Anderson had discovered the existence of the positron in cosmic rays in 1932 by using a cloud chamber. Shortly thereafter, in England, Patrick Blackett and his young Italian colleague Giuseppe Occhialini set out, in Blackett’s charming terms, “to devise a method of making cosmic rays take their own photographs.” They hooked up electronic sensors above and below a cloud chamber, which produced signals when cosmic rays passed through them. These signals were transmitted to the device that controlled the expansion of the vapor in the cloud chamber, causing the tracks to be visible. In this way, instead of expanding the cloud chamber
at random, as had been done previously, and catching a cosmic ray in, on average, one out of fifty such expansions, they caught a cosmic ray in each expansion. Using this technique, physicists could study cosmic ray properties more comprehensively, and within a few years it was observed that cosmic rays appeared to be more penetrating than one would expect based on theoretical estimates for the energy loss by electrons propagating through matter. It was natural for some—particularly experimenters, perhaps—to question whether the new quantum theory predictions of energy loss rates were, in fact, correct. Ultimately, however, the problem was demonstrated to lie elsewhere when, in 1937, two different teams of researchers (one of which included Anderson) demonstrated unambiguously that the cosmic rays being observed were not electrons, but new elementary particles, almost two hundred times heavier than the electron, and about ten times lighter than the proton and neutron. The world of elementary particles was becoming even more crowded.
Theorists, not to be outdone, pointed out that in fact one of their clan had earlier “predicted” such a particle. The soon to be famous (and infamous) U.S. physicist J. Robert Oppenheimer and his colleague Robert Serber explained that in a little-known Japanese journal, in 1935, the physicist Hideki Yukawa had proposed, by analogy to the force of electromagnetism—which operates by the exchange of electromagnetic radiation (which quantum mechanics implied could also be represented by particles, i.e., photons) between charged objects—that the strong force that must bind neutrons and protons together inside of nuclei might also operate by particle exchange. Because the nuclear force is very short range, operating over only nuclear distances, Yukawa used the Heisenberg uncertainty principle to argue that the particles responsible for transmitting this force would have to be heavy, about two hundred times the mass of the electron.
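Yukawa's uncertainty-principle argument can be sketched in a few lines: a force carried by exchanging a particle of mass m has a range of roughly hbar/(mc), so a force confined to nuclear distances implies a heavy carrier. The range used below (about 1.4 femtometres) is an assumed illustrative figure for nuclear scales, not a number from the text:

```python
# Yukawa's order-of-magnitude estimate: a force of range r must be carried
# by a particle with mass-energy m*c^2 ~ (hbar*c) / r. Constants are
# standard reference values.

HBAR_C_MEV_FM = 197.3      # hbar * c in MeV * femtometres
ELECTRON_MASS_MEV = 0.511  # electron rest-mass energy in MeV

def exchange_mass_mev(range_fm):
    """Mass-energy of the exchanged particle for a force of the given range."""
    return HBAR_C_MEV_FM / range_fm

m = exchange_mass_mev(1.4)           # ~141 MeV for a nuclear-scale range
print(m / ELECTRON_MASS_MEV)         # a couple hundred electron masses
```

The answer lands in the range of a few hundred electron masses, matching in order of magnitude both Yukawa's prediction and the mass of the new cosmic-ray particle described above, which is exactly why theorists seized on the connection.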
Hiding in the Mirror: The Quest for Alternate Realities, From Plato to String Theory (By Way of Alice in Wonderland, Einstein, and the Twilight Zone)