by David Toomey
THE STANDARD MODEL OF PARTICLE PHYSICS
Physicists’ best understanding of these laws is expressed in the “Standard Model” of particles and forces. The Standard Model states that everything in the universe is made from twelve fundamental particles—six types of quarks (which compose protons and neutrons) and six types of leptons (the most familiar of which are electrons and neutrinos). The model further states that these particles are governed by three fundamental forces: the “electromagnetic force,” the “strong nuclear force” (or simply the “strong force”), and the “weak force.” The model does not have a place for gravity, nor does it explain many other features of our universe; nonetheless, it is regarded as successful in that it explained a variety of experimental results and predicted the existence of several subatomic particles years before their actual discovery.
Let’s take a moment to review the Standard Model’s details. Two types of quarks go into making protons and neutrons, and protons and neutrons go into making atomic nuclei. Just as protons and neutrons have particular masses, so do the quarks that compose them. The quarks (and their masses) are governed in various ways by the fundamental forces. The strong force holds quarks together to make protons and neutrons, the weak force causes a type of radioactive decay, and the electromagnetic force governs electricity, magnetism, and electromagnetic waves such as light.
What puzzled Carter and others about the masses of quarks and the strengths of the forces is that all seem finely tuned to a universe that allows life. Consider the quarks. Adjust their masses ever so slightly—so that, for instance, neutrons become 2 percent heavier than protons—and you couldn’t have stable oxygen or carbon. No oxygen and no carbon would mean no life. Adjust the masses of quarks so that protons are much heavier than neutrons, and you couldn’t have stable hydrogen. No hydrogen would mean no life. For the fundamental forces the situation is much the same. Changes to the strength of any of them, even small changes, would make a universe inhospitable. If the electromagnetic force were a bit stronger, atoms would not share electrons and chemistry would be impossible; if it were a bit weaker, atoms could not hold on to electrons, the universe would be populated only by loose subatomic particles, and chemistry (again) would be impossible. If the strong nuclear force were stronger, stars would turn all their hydrogen into helium and then iron, and we’d have a universe without hydrogen. If it were weaker, complex atomic nuclei couldn’t form, and we’d have a universe without carbon. If the weak force were stronger, atomic nuclei would decay before heavy elements could form; if it were weaker, then (as with a stronger strong force), all hydrogen would be turned into helium.
The strangeness doesn’t end there. The universe we know has features not explained by the Standard Model—features that physicists and cosmologists call “cosmic parameters”—and these also have values that are quite specific and seem finely tuned to allow life. They also seem, for lack of a better word, arbitrary. Take, for instance, the strength of gravity. It has a specific value expressed by the gravitational constant: G = 6.67 × 10⁻¹¹ cubic meters per kilogram per second squared. This number, one can’t help noting, is so lacking in elegance that it might have been generated randomly. If you guessed that it’s not the prediction of a theory, you’d be right; the number representing the strength of gravity had to be found by direct experiment. But for all its coarse appearance, that number is precise, and for that we should be thankful. Were gravity much stronger, cosmic expansion would have slowed, halted, and reversed itself, and the universe would have collapsed almost before it had a chance to start. Were it much weaker, the material created in the big bang would have continued to expand so quickly that it would have dissipated, and we’d have a universe of very diffuse particles.
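For reference, and not something spelled out in the passage itself: G is the proportionality constant in Newton’s law of universal gravitation, which gives the attractive force F between two masses m₁ and m₂ separated by a distance r.

```latex
F = \frac{G\,m_1 m_2}{r^2},
\qquad
G \approx 6.67 \times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}
```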
Naturally, this question arises: Why should the values of the particles and forces be what they are? To this Brandon Carter had an answer. If there are a great number of universes, and laws vary from universe to universe, there may be no reason, and to look for one would be pointless. It is to be expected that we find ourselves in a universe with laws conducive to our existence; obviously it couldn’t be otherwise. If anything explained these laws, Carter said, it was what statisticians call a “selection effect”—something that, on this largest and most fundamental of questions, scientists were failing to take into account.
Carter presented the idea in 1973 at a conference commemorating Copernicus. It was a rather pointed selection effect on his part, in that it ran counter to the Copernican-inspired principle of mediocrity and, rather more immediately, counter to an extension of the principle proposed by Fred Hoyle (the same Fred Hoyle who some years earlier had imagined a sentient interstellar cloud). As you may recall, Hoyle was a proponent of the “steady state” theory of cosmology, which postulated that the expanding universe was never in a state of higher density, that there was no big bang, and that matter is constantly being created out of empty space. Now Hoyle was suggesting that the principle of mediocrity should apply not only to space, but to time. In other words, just as we assume that the universe is the same everywhere in space, we should assume that it is the same throughout time; and just as we should expect to find ourselves at no special place in the universe’s space, we should expect to find ourselves at no special place in its history. In Carter’s view, this was one expectation too far, particularly in light of recent findings concerning the nature of the very early universe, which, all evidence suggested, was a very different place. Carter was suggesting that perhaps we were someplace special after all. He called his idea the “anthropic principle,” expressed formally (and in his words) as, “What we can expect to observe must be restricted by the conditions necessary for our presence as observers.”4
In a very short time the anthropic principle generated hundreds of papers, several books, and several versions of the principle, the most audacious being the “participatory” form, posited in 1986 by theoretical physicists John D. Barrow and Frank Tipler, which asserted that the laws of physics and the universe are destined to produce observers of those laws and that universe. In other words, it suggested that life and intelligence, in some indeterminate manner, brought the universe into being.
By comparison with Barrow and Tipler’s rendering, Carter’s original version was tame, but it was controversial nonetheless, and remains so to this day, for two reasons. Recall that the Standard Model is regarded as incomplete and provisional. Many theoretical physicists believe that the laws of physics can be explained—and someday will be explained—as consequences of a single underlying principle, a synthesis of general relativity and quantum mechanics called quantum gravity. In something of an overstatement, it’s also called a “theory of everything”*—although it probably won’t explain, for instance, why fools fall in love or why there’s always water in my basement. Nonetheless, many have devoted their careers to the attempt to discover that theory, and have regarded talk of anthropic rationales and alternate universes, with its implication that their work was a waste of time, as waving the flag of surrender. There is a related reason. Although Carter’s anthropic principle constrained conditions to those we observe, it did not explain those conditions, and there was a danger that constraining would be confused with explaining. There was a danger, in other words, that cosmologists would come to believe they had explained a natural phenomenon like the strength of gravity merely because they had shown it to be compatible with life.
THE INFLATIONARY MULTIVERSE
In the mid-1990s, astronomers and cosmologists were amassing evidence to support a set of hypotheses that yield a model of the “inflationary” multiverse. The model implied that shortly after the big bang, the universe underwent a very rapid expansion, with some parts of space inflating like quickly rising bread even as they produced spaces within them (like pockets of air within that bread) that, once formed, stretched far less rapidly. The resulting picture is of a fantastically enormous expanse of ever-inflating space, doubling in size every 10⁻³⁴ second or faster, and all the while producing ever-greater numbers of pockets of space. Each of the pockets, it should be said, is its own universe. And one of them is ours. For reasons explained by mathematics rather outside our interests here, while finite when viewed from the outside, each universe (ours included) would be infinite when seen from within.5
The inflationary multiverse, many think, may be the stage on which Carter’s anthropic principle is realized. Cosmologists believe that as the very young universe expanded and cooled to a more stable state—more precisely, a “metastable vacuum state”—things gelled. It was a bit like a game of spin the bottle. As long as the bottle spins, it is unstable. It is stable—or at least more stable—only when it stops spinning. But exactly where it stops spinning is determined by many factors: an unevenness of the floor, air turbulence, and so forth—all the products of chance. Likewise, the universe’s initial conditions—the densities and motions of matter—were products of chance, created early in its history by quantum fluctuations. Because the fluctuations were perfectly random, they were able to produce all possible metastable vacuum states. Consequently, so this model suggests, for each pocket universe, with its own big bang and its own expanding and cooling, the metastable vacuum state would be different. Therefore, physical constants like ratios of the masses of subatomic particles might be different, strengths of fundamental forces might be different, and perhaps most strangely, dimensionality might be different, and it might differ in any number of ways—including the total number of dimensions, the parts of the total number that are compacted and unseen, and the geometry and topology of each.
Exactly how many metastable vacuum states (and pocket universes) might there be? No one really knows, but string theory—a leading contender for the theory of everything—allows an estimate. One prediction of string theory is that what we termed the traditional universe resides within a kind of substrate of space that physicists call a “brane.” More or less as a page in a closed book is near other pages yet slightly offset from them, our brane is near other branes (with their attendant universes), yet slightly offset from them. In a hypothetical string theory “landscape,” there are as many as 10⁵⁰⁰ different metastable vacuum states, meaning 10⁵⁰⁰ various sets of constants, particles, and dimensionalities, and 10⁵⁰⁰ directions for the spinning bottle, when it stops spinning, to point.
The environments of most other pocket universes are likely to be unfriendly to life, and you probably wouldn’t want to visit one. You couldn’t even if you did want to. Unlike the observable universes within the traditional universe, pocket universes are truly separate, and growing more separate by the second. They are driven apart with a speed proportional to the ever-growing space between them, such that two pocket universes sufficiently distant from each other are moving away from each other faster than the speed of light. If you started from one pocket universe, you could never reach the other, no matter how fast you traveled and no matter how much time you had. For these reasons you might think that we must abandon all hope of learning anything about them, and you might be right. But many physicists have reasons to suspect otherwise.
First, there is a precedent. The space beneath the event horizons of black holes is a place we will likely never see or visit. Nonetheless, theoretical physicists routinely use Einstein’s theory of general relativity to describe the nature of that space, and they do so with some confidence. Second, evidence that physical constants have changed in our own universe—even a little bit—would show that they can change, and lend support to ideas of universes where they might be very different. (Intriguingly, there is evidence, albeit controversial evidence, of such a change. In 2001 a team of physicists reported observations of spectral lines produced by very distant quasars suggesting that 6 billion years ago, the electromagnetic force [specifically, the fine-structure constant characterizing the strength of the electromagnetic interaction] was slightly weaker.)6 Finally, there is the possibility of experiment. If, sometime in the future, the hypotheses of string theory can be tested experimentally, then its prediction of pocket universes will gain authority.
There is, however, a way that the existence of a multiverse might be tested now: by predicting what scientists call a probability distribution. The thinking is as follows. Suppose the speculation of the previous pages is indeed the case—in other words, that our universe is one of an incredibly large set of universes, each with different physical constants, strengths for the fundamental forces, and dimensionalities. There is no reason to suppose that our universe is the only one that allows life; it is rather more likely that it is one among a whole subset of universes that allow life. If it is, then context changes everything. The only violinist in a high school concert orchestra who matriculates at Juilliard and suddenly finds herself in a roomful of violinists will learn that she is nothing special. Likewise, within a subset of universes that allow life, we would learn that ours is nothing special. In fact, within the subset of Juilliard violinists and life-allowing universes, the principle of mediocrity returns in full force. If our violinist is merely typical of violinists matriculating at Juilliard, odds are she won’t be chosen to play first chair. Likewise, if our universe is merely typical of universes that allow life, odds are it is not among the few universes whose conditions for life are optimal. Rather, odds are it is among the many that meet those conditions with the slimmest of margins. An illustration may help.
Imagine a dartboard affixed to a wall, and if you like, imagine the wall extending a great distance in all directions. Suppose that the bull’s-eye in the dartboard represents the optimal values of the constants, force strengths, and dimensionality for a universe that allows life. There are several rings around the bull’s-eye, and they are of equal width, meaning that the larger the ring is, the greater is its surface area. Suppose that the smallest ring represents values slightly less optimal for a universe that allows life, the ring around it represents values less optimal still, and so on to the largest, outermost ring, which represents values that just barely meet the conditions necessary for a universe to allow life. All places on the wall off the dartboard, of course, represent values that fail to meet those conditions.
Now suppose that we throw darts at the wall, that they strike in a perfectly random manner, and that every place a dart strikes, a universe is created. Soon darts cover the wall and the dartboard evenly. The bull’s-eye, having the smallest surface area, contains the fewest darts, the ring around it somewhat more, the ring around it still more, and so on. Because the outermost ring presents the most surface area, we may expect that there is a greater chance that a dart will hit it. Upon examination, we find that indeed, this outer ring contains more darts than any other ring and, of course, a great many more darts than the bull’s-eye.
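To make the geometry concrete, the short sketch below (in Python; the radii, ring count, and number of darts are arbitrary choices made for illustration, not anything specified in the text) throws darts at uniformly random spots on the wall and tallies where they land.

```python
import random

# Illustrative numbers only: a dartboard of radius 5 made of a bull's-eye
# plus four rings of equal width, hanging on a much larger wall.
NUM_RINGS = 5           # bull's-eye (ring 0) plus four surrounding rings
WALL_HALF_WIDTH = 20.0  # the wall extends well beyond the dartboard
NUM_DARTS = 1_000_000   # each dart is one "universe"

counts = [0] * NUM_RINGS
off_board = 0

for _ in range(NUM_DARTS):
    # A dart strikes a uniformly random point on the wall.
    x = random.uniform(-WALL_HALF_WIDTH, WALL_HALF_WIDTH)
    y = random.uniform(-WALL_HALF_WIDTH, WALL_HALF_WIDTH)
    r = (x * x + y * y) ** 0.5
    if r < NUM_RINGS:        # the dart hit the dartboard
        counts[int(r)] += 1  # ring index = whole-number part of the radius
    else:
        off_board += 1       # values that fail to allow life at all

for ring, n in enumerate(counts):
    label = "bull's-eye" if ring == 0 else f"ring {ring}"
    print(f"{label:>10}: {n:>8} darts")
print(f"{'off board':>10}: {off_board:>8} darts")
```

With equal-width rings, the ring areas stand in the ratio 1 : 3 : 5 : 7 : 9 from the bull’s-eye outward, so the tallies grow in roughly that proportion, and the outermost ring collects the most darts by a wide margin.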
If our universe is typical of those with the conditions necessary to allow life, there is a greater chance that it is represented in the outermost ring than in any other. In other words, in our universe the conditions necessary to allow life would be far from optimum; in fact, the values of the constants, masses of fundamental particles, force strengths, and dimensionality of our universe would meet the necessary conditions just barely. So, we are now in a position to ask, do the values meet the conditions just barely? In the case of many values, the answer is yes—and in one case, a decided yes.
THE COSMOLOGICAL CONSTANT
Space many of us think of as empty is actually teeming with virtual particles that generate a repulsive force—the dark energy that drives galaxies apart at an accelerating rate. The number representing the value for the strength of that force is called the cosmological constant. For the first part of the twentieth century, no one knew what that number was. Then, in the 1970s, theoretical physicists calculated how much dark energy resides in a given volume of space, and they predicted a number for the constant. Their prediction surprised them: the number represented so much energy that galaxies would never have had a chance to form, and they assumed they had made a mistake. But upon careful review they concluded that there was no mistake—and yet the galaxies, obviously, had formed. Astrophysicists and cosmologists suspected that something was neutralizing the dark energy, and they expected it was neutralizing it perfectly, as a –1 neutralizes a +1. Such perfect cancellations were not unknown in physics. Far from it: they were a feature of the symmetries of the universe.
In the 1990s, astronomers were able to measure dark energy directly and found that the actual cosmological constant was different from the value predicted, and so was what physicists call “technically unnatural.” In fact, it was much, much smaller than the predicted value—120 powers of ten less than that value. Yet it was incredibly precise. Tip the scale one one-hundredth of a decimal place in one direction and the universe would expand far too rapidly for galaxies to form. Tip it one one-hundredth of a decimal place in the other direction and the universe would collapse a fraction of a second after it appeared. If that precision had represented a perfect cancellation of the dark energy, it might not have been particularly remarkable. Instead, it turned out to be a very near but not quite perfect cancellation. This balance between the universe’s expanding and contracting forces was slightly asymmetrical, and strangely, all evidence is that this slight asymmetry makes the universe possible.
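Written as a single ratio (the symbol ρ_Λ below is our own shorthand for the energy density associated with the cosmological constant; it does not appear in the text), the mismatch between prediction and measurement is roughly:

```latex
\frac{\rho_\Lambda^{\,\text{observed}}}{\rho_\Lambda^{\,\text{predicted}}} \sim 10^{-120}
```

This is simply the “120 powers of ten” of the paragraph above written as one number.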
Considered outside the realm of science, this tiny asymmetry might seem to be evidence of intelligent design, although, one can’t help but think, by a designer who had conceived an altogether elegant table and, upon making it, found he needed to slip a twice-folded paper napkin under one leg to keep it level. It has been called the most disturbing example of cosmic fine-tuning, a “put-up job,” and a big fix.7 Considered within the realm of science, it might be a fluke, and for you and me, a very lucky one. Or, it might be evidence for a multiverse.