Suppose some constant of nature, call it X, varies from one region of the universe to another. In some of the regions observers are disallowed, while in others observers can exist and the value of X will be measured. Suppose further that some Statistical Bureau of the Universe collected and published the results of these measurements. The distribution of values measured by different observers would most likely have the shape of a bell curve, similar to the one in Figure 14.1. We could then discard 2.5 percent at both ends of the distribution and predict the value of X at a 95 percent confidence level.
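Here is a minimal sketch, in Python, of the trimming procedure just described. The data are simulated, since no real Statistical Bureau of the Universe exists; the bell curve's center and width are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the Bureau's data: values of X reported by many observers,
# drawn here from a bell curve with arbitrary center and width.
measured_X = rng.normal(loc=5.0, scale=1.2, size=100_000)

# Discard 2.5 percent at both ends of the distribution.
low, high = np.percentile(measured_X, [2.5, 97.5])

print(f"95 percent of observers measure X between {low:.2f} and {high:.2f}")
```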
Figure 14.2. An observer randomly picked in the universe. The values of the constants measured by this observer can be predicted from a statistical distribution.
What would be the meaning of such a prediction? If we randomly picked observers in the universe, their observed values of X would be in the predicted interval 95 percent of the time. Unfortunately, we cannot test this kind of prediction, because all regions with different values of X are beyond our horizon. We can only measure X in our local region. What we can do, though, is to think of ourselves as having been randomly picked. We are just one out of a multitude of civilizations scattered throughout the universe. We have no reason to believe a priori that the value of X in our region is very rare, or otherwise very special compared with the values measured by other observers. Hence, we can predict, at a 95 percent confidence level, that our measurements will yield a value in the specified range. The assumption of being unexceptional is crucial in this approach; I called it “the principle of mediocrity.”
Some of my colleagues objected to this name. They suggested “the principle of democracy” instead. Of course, nobody wants to be mediocre, but the objection itself betrays a nostalgia for the times when humans were at the center of the world. It is tempting to believe that we are special, but in cosmology, time and again, the assumption of being mediocre has proved to be a very fruitful hypothesis.
The same kind of reasoning can be applied to predicting the height of people. Imagine for a moment that you don’t know your own height. Then you can use statistical data for your country and gender to predict it. If, for example, you are an adult man living in the United States and have no reason to think that you are unusually tall or short, you can expect, at 95 percent confidence, to be between 1.63 and 1.90 meters tall.
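As a quick numerical check of that range, assume for illustration that adult male heights in the United States follow a bell curve with a mean of about 1.76 meters and a standard deviation of about 0.07 meters (rough figures chosen for the sketch, not the data behind the quoted interval):

```python
from scipy.stats import norm

mean_height = 1.76   # meters, assumed for illustration
std_height = 0.07    # meters, assumed for illustration

# Central 95 percent interval of the assumed bell curve.
low, high = norm.interval(0.95, loc=mean_height, scale=std_height)
print(f"95 percent prediction interval: {low:.2f} m to {high:.2f} m")
```

With these rough inputs the interval comes out near 1.62 to 1.90 meters, close to the range quoted above.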
I later learned that similar ideas had been suggested earlier by the philosopher John Leslie and, independently, by the Princeton astrophysicist Richard Gott. The main interest of these authors was in predicting the longevity of the human race. They argued that humanity is not likely to last much longer than it has already existed, since otherwise we would find ourselves to be born surprisingly early in its history. This is what’s called the “doomsday argument.” It dates back to Brandon Carter, the inventor of the anthropic principle, who presented the argument in a 1983 lecture, but never in print (it appears that Carter already had enough controversy on his hands).1 Gott also used a similar argument to predict the fall of the Berlin Wall and the lifetime of the British journal Nature, where he published his first paper on this topic. (The latter prediction, that Nature will go out of print by the year 6800, is yet to be verified.)
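Gott's version of the argument can be stated in one line: if the moment of observation falls at a random point within an object's total lifetime, then with 95 percent confidence the object's future duration lies between 1/39 and 39 times its past duration. The sketch below applies this to the journal Nature, taking 1869 as its founding year and 1993, the year of Gott's paper, as the moment of observation; the precise final year depends on these assumed dates.

```python
def gott_interval(past_duration, confidence=0.95):
    """Bounds on future duration, given only the past duration (Gott's rule)."""
    tail = (1.0 - confidence) / 2.0              # 0.025 in each tail
    lower = past_duration * tail / (1.0 - tail)  # past / 39 at 95 percent
    upper = past_duration * (1.0 - tail) / tail  # past * 39 at 95 percent
    return lower, upper

past = 1993 - 1869   # years Nature had existed by 1993 (assumed dates)
low, high = gott_interval(past)
print(f"Future lifetime: between {low:.1f} and {high:.0f} more years")
print(f"Nature should cease publication by about the year {1993 + high:.0f}")
```

With these dates the upper limit lands around the year 6829, in line with the figure of 6800 quoted above.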
If we have a statistical distribution for the constants of nature measured by all the observers in the universe, we can use the principle of mediocrity for making predictions at a specified confidence level. But where are we going to get the distribution? In lieu of the data from the Statistical Bureau of the Universe, we will have to derive it from theoretical calculations.
The statistical distribution cannot be found without a theory describing the multiverse with variable constants. At present, our best candidate for such a theory is the theory of eternal inflation. As we discussed in the preceding chapter, quantum processes in the inflating spacetime spawn a multitude of domains with all possible values of the constants. We can try to calculate the distribution for the constants from the theory of eternal inflation, and then—perhaps!—we could check the results against the experimental data. This opens an exciting possibility that eternal inflation can, after all, be subjected to observational tests. Of course, I felt this opportunity was not to be missed.
COUNTING OBSERVERS
Consider a large volume of space, so large that it includes regions with all possible values of the constants. Some of these regions are densely populated with intelligent observers. Other regions, less favorable to life, are greater in volume, but more sparsely populated. Most of the volume will be occupied by huge barren domains, where observers cannot exist.
The number of observers who will measure certain values of the constants is determined by two factors: the volume of those regions where the constants have the specified values (in cubic light-years, for example), and the number of observers per cubic light-year. The volume factor can be calculated from the theory of inflation, combined with a particle physics model for variable constants (like the scalar field model for the cosmological constant).2 But the second factor, the population density of observers, is much more problematic. We know very little about the origin of life, let alone intelligence. How, then, can we hope to calculate the number of observers?
What comes to the rescue is that some of the constants do not directly affect the physics and chemistry of life. Examples are the cosmological constant, the neutrino mass, and the parameter, usually denoted by Q, that characterizes the magnitude of primordial density perturbations. Variation of such life-neutral constants may influence the formation of galaxies, but not the chances for life to evolve within a given galaxy. In contrast, constants such as the electron mass or Newton’s gravitational constant have a direct impact on life processes. Our ignorance about life and intelligence can be factored out if we focus on those regions where the life-altering constants have the same values as in our neighborhood and only the life-neutral constants are different. All galaxies in such regions will have about the same number of observers, so the density of observers will simply be proportional to the density of galaxies.3
Thus, the strategy is to restrict the analysis to life-neutral constants. The problem then reduces to the calculation of how many galaxies will form per given volume of space—a well-studied astrophysical problem. The result of this calculation, together with the volume factor derived from the theory of inflation, will yield the statistical distribution we are looking for.
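In symbols, the probability of measuring a given value of a life-neutral constant X is proportional to the volume of regions with that value times the number of galaxies formed per unit volume there. The toy sketch below only illustrates how the two factors combine; the flat volume prior and the falling galaxy-formation efficiency are invented placeholders, not results of the inflationary or astrophysical calculations described in the text.

```python
import numpy as np

# Grid of values of a life-neutral constant X (arbitrary units).
X = np.linspace(0.0, 10.0, 2001)

# Placeholder inputs, invented for illustration:
volume_prior = np.ones_like(X)            # volume fraction per unit X (flat here)
galaxies_per_volume = np.exp(-X / 2.0)    # galaxy-formation efficiency vs. X

# Observer-weighted distribution: volume factor times density of galaxies,
# since the number of observers is taken to track the number of galaxies.
weight = volume_prior * galaxies_per_volume
prob = weight / np.trapz(weight, X)

# Read off the central 95 percent range from the cumulative distribution.
dx = X[1] - X[0]
cdf = np.cumsum(prob) * dx
low = X[np.searchsorted(cdf, 0.025)]
high = X[np.searchsorted(cdf, 0.975)]
print(f"A typical observer measures X between {low:.2f} and {high:.2f}")
```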
CONVERGING ON THE COSMOLOGICAL CONSTANT
As I was thinking about observers in remote domains with different constants of nature, it was hard to believe that the equations I scribbled in my notepad had much to do with reality. But having gone this far, I bravely pressed ahead: I wanted to see if the principle of mediocrity could shed some new light on the cosmological constant problem.
The first step was already taken by Steven Weinberg. He studied how the cosmological constant affects galaxy formation and found the anthropic bound on the constant—the value above which the vacuum energy would dominate the universe too soon for any galaxies to form. Moreover, as I already mentioned, Weinberg realized that there was a prediction implicit in his analysis. If you pick a random value between zero and the anthropic bound, this value is not likely to be much smaller than the bound, for the same reason that the first man you meet is not likely to be a dwarf. Weinberg argued, therefore, that the cosmological constant in our part of the universe should be comparable to the anthropic bound.aw
The argument sounded convincing, but I had my reservations. In regions where the cosmological constant is comparable to the anthropic bound, galaxy formation is barely possible and the density of observers is very low. Most observers are to be found in regions teeming with galaxies, where the cosmological constant is well below the bound—small enough to dominate the universe only after the process of galaxy formation is more or less complete. The principle of mediocrity says that we are most likely to find ourselves among these observers.
I made a rough estimate, which suggested that the cosmological constant measured by a typical observer should not be much greater than ten times the average density of matter. A much smaller value is also improbable—like the chance of meeting a dwarf. I published this analysis in 1995, predicting that we should measure a value of about ten times the matter density in our local region.4 More detailed calculations, also based on the principle of mediocrity, were later performed by the Oxford astrophysicist George Efstathiou5 and by Steven Weinberg, who was now joined by his University of Texas colleagues Hugo Martel and Paul Shapiro. They arrived at similar conclusions.
I was very excited about this newly discovered possibility of turning anthropic arguments into testable predictions. But very few people shared my enthusiasm. One of the leading superstring theorists, Joseph Polchinski, once said that he would quit physics if a nonzero cosmological constant were discovered.ax Polchinski realized that the only explanation for a small cosmological constant would be the anthropic one, and he just could not stand the thought. My talks about anthropic predictions were sometimes followed by an embarrassed silence. After one of my seminars, a prominent Princeton cosmologist rose from his seat and said, “Anyone who wants to work on the anthropic principle—should.” The tone of his remark left little doubt that he believed all such people would be wasting their time.
SUPERNOVAE TO THE RESCUE
As I already mentioned in earlier chapters, it came as a complete shock to most physicists when evidence for a nonzero cosmological constant was first announced. The evidence was based on the study of distant supernova explosions of a special kind—type Ia supernovae.
These gigantic explosions are believed to occur in binary stellar systems, consisting of an active star and a white dwarf—a compact remnant of a star that ran out of its nuclear fuel. A solitary white dwarf will slowly fade away, but if it has a companion, it may end its life with fireworks. Some of the gas ejected from the companion star could be captured by the white dwarf, so the mass of the dwarf would steadily grow. There is, however, a maximum mass that a white dwarf can have—the Chandrasekhar limit—beyond which gravity causes it to collapse, igniting a tremendous thermonuclear explosion. This is what we see as a type Ia supernova.
A supernova appears as a brilliant spot in the sky and, at the peak of its brightness, can be as luminous as 4 billion suns. In a galaxy like ours, a type Ia supernova explodes about once every 300 years. So, in order to find a few dozen such explosions, astronomers had to monitor thousands of galaxies over a period of several years. But the effort was worth it. Type Ia supernovae come very close to realizing astronomers’ long-standing dream of finding a standard candle—a class of astronomical objects that have exactly the same power. Distances to standard candles can be determined from their apparent brightness—in the same way as we could determine the distance to a 100-watt lightbulb from how bright it appears. Without such magic objects, distance determination is notoriously difficult in astronomy.
Type Ia supernovae have nearly the same power because the exploding white dwarfs have the same mass, equal to the Chandrasekhar limit.6 Knowing the power, we can find the distance to the supernova, and once we know the distance, it is easy to find the time of the explosion—by just counting back the time it took light to traverse that distance. In addition, the reddening, or Doppler shift, of the light can be used to find how fast the universe was expanding at that time.7 Thus, by analyzing the light from distant supernovae, we can uncover the history of cosmic expansion.
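The chain of inference can be put into a few lines: the inverse-square law converts the known peak power and the measured apparent brightness into a distance, and dividing by the speed of light gives the light-travel time. In the sketch below the measured flux is a made-up number, and the final step ignores the expansion of the universe, which a real analysis must take into account.

```python
import math

L_SUN = 3.8e26                 # watts, luminosity of the Sun
C = 3.0e8                      # meters per second, speed of light
LIGHT_YEAR = 9.46e15           # meters in one light-year
SEC_PER_YEAR = 3.15e7

# Standard-candle assumption: every type Ia supernova peaks at roughly
# the same power, here taken as 4 billion suns.
L_supernova = 4e9 * L_SUN

# Hypothetical measured apparent brightness (flux) of a distant supernova.
flux = 1.0e-16                 # watts per square meter, made-up value

# Inverse-square law: flux = L / (4 pi d^2), so d = sqrt(L / (4 pi flux)).
distance = math.sqrt(L_supernova / (4.0 * math.pi * flux))

# Naive light-travel time, ignoring cosmic expansion.
travel_time_years = distance / C / SEC_PER_YEAR

print(f"Distance: {distance / LIGHT_YEAR:.2e} light-years")
print(f"The light left the supernova roughly {travel_time_years:.2e} years ago")
```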
This technique was perfected by two competing groups of astronomers, one called the Supernova Cosmology Project and the other the High-Redshift Supernova Search Team. The two groups raced to determine the rate at which cosmic expansion was slowed down by gravity. But this was not what they found. In the winter of 1998, the High-Redshift team announced they had convincing evidence that instead of slowing down, the expansion of the universe had been speeding up for the last 5 billion years or so. It took some courage to come out with this claim, since an accelerated expansion was a telltale sign of a cosmological constant. When asked how he felt about this development, one of the leaders of the team, Brian Schmidt, said that his reaction was “somewhere between amazement and horror.”8
A few months later, the Supernova Cosmology Project team announced very similar conclusions. As the leader of the team, Saul Perlmutter, put it, the results of the two groups were “in violent agreement.”
The discovery sent shock waves through the physics community. Some people simply refused to believe the result. Slava Mukhanovay offered me a bet that the evidence for a cosmological constant would soon evaporate. The bet was for a bottle of Bordeaux. When Mukhanov eventually produced the wine, we enjoyed it together; apparently, the presence of the cosmological constant did not affect the bouquet.
There were also suggestions that the brightness of a supernova could be affected by factors other than the distance. For example, if light from a supernova were scattered by dust particles in intergalactic space, the supernova would look dimmer, and we would be fooled into thinking that it was farther away. These doubts were dispelled a few years later, when Adam Riess of the Space Telescope Science Institute in Baltimore analyzed the most distant supernova known at that time, SN 1997ff. If the dimming were due to obscuration by dust, the effect would only increase with the distance. But this supernova was brighter, not dimmer, than it would be in a “coasting” universe that neither accelerates nor decelerates. The explanation was that it exploded at 3 billion years A.B., during the epoch when the vacuum energy was still subdominant and the accelerated expansion had not yet begun.
As the evidence for cosmic acceleration was getting stronger, cosmologists were quick to realize that from certain points of view, the return of the cosmological constant was not such a bad thing. First, as we discussed in Chapter 9, it provided the missing mass density to make the total density of the universe equal to the critical density. And second, it resolved the nagging cosmic age discrepancy. The age of the universe calculated without a cosmological constant turns out to be smaller than the age of the oldest stars. Now, if the cosmic expansion accelerates, then it was slower in the past, so it took the universe longer to expand to its present size.az The cosmological constant, therefore, makes the universe older, and the age discrepancy is removed.9
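The age argument can be made quantitative with the standard formulas for a spatially flat universe. The sketch below uses illustrative round numbers (a Hubble constant of 70 kilometers per second per megaparsec, 30 percent matter and 70 percent vacuum energy), not values taken from this book.

```python
import math

H0 = 70.0                       # km/s/Mpc, assumed for illustration
KM_PER_MPC = 3.086e19
SEC_PER_GYR = 3.156e16
hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR   # 1/H0 in billions of years

omega_m, omega_vac = 0.3, 0.7   # assumed matter and vacuum energy fractions

# Flat, matter-only universe: age = (2/3) / H0.
age_no_lambda = (2.0 / 3.0) * hubble_time_gyr

# Flat universe with matter plus a cosmological constant:
# age = (2 / (3 H0 sqrt(omega_vac))) * asinh(sqrt(omega_vac / omega_m)).
age_with_lambda = (2.0 / (3.0 * math.sqrt(omega_vac))) \
    * math.asinh(math.sqrt(omega_vac / omega_m)) * hubble_time_gyr

print(f"Age without a cosmological constant: {age_no_lambda:.1f} billion years")
print(f"Age with a cosmological constant:    {age_with_lambda:.1f} billion years")
```

With these inputs the matter-only age comes out near 9 billion years, while the cosmological constant stretches it to about 13.5 billion years.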
Thus, only a few years after cosmic acceleration was discovered, it was hard to see how we could ever live without it. The debate now shifted to understanding what it actually meant.
EXPLAINING THE COINCIDENCE
The observed value of the vacuum energy density, about three times the average matter density, was in the ballpark of values predicted three years earlier from the principle of mediocrity. Normally, physicists regard a successful prediction as strong evidence for the theory. But in this case they were not in a hurry to give anthropic arguments any credit. In the years following the discovery, there was a tremendous effort by many physicists to explain the accelerated expansion without invoking anthropic reasoning. The most popular of these attempts was the quintessence model, developed by Paul Steinhardt and his collaborators.10
The idea of quintessence is that the vacuum energy is not a constant, but is gradually decreasing with the expansion of the universe. It is so small now because the universe is so old. More specifically, quintessence is a scalar field whose energy landscape looks as if it were designed for downhill skiing (Figure 14.3). The field is assumed to start high up the hill in the early universe, but by now it has rolled down to low elevations—which means low energy densities of the vacuum.
The problem with this model is that it does not resolve the coincidence puzzle: why the present energy density of the vacuum happens to be comparable to the matter density (see Chapter 12). The shape of the energy hill can be adjusted for this to happen, but that would amount to simply fitting the data, instead of explaining it.11
Figure 14.3. Quintessence energy landscape.
On the other hand, the anthropic approach naturally resolves the puzzle. According to the principle of mediocrity, most observers live in regions where the cosmological constant caught up with the density of matter at about the epoch of galaxy formation. The assembly of giant spiral galaxies like ours was completed in the relatively recent cosmological past, at several billion years A.B.12 Since then, the density of matter has fallen below that of the vacuum, but not by much (by a factor of 3 or so in our region).13
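A quick consistency check, assuming the vacuum energy is about three times the matter density today: since matter dilutes as the cube of the expansion factor while the vacuum density stays constant, the two densities were equal when the universe was smaller by the cube root of today's ratio.

```python
vacuum_to_matter_today = 3.0   # present-day ratio, roughly as quoted in the text

# Matter dilutes as 1/a^3; the vacuum density stays constant.
shrink_factor = vacuum_to_matter_today ** (1.0 / 3.0)
redshift_of_equality = shrink_factor - 1.0

print(f"Vacuum energy overtook matter at redshift {redshift_of_equality:.2f}")
# A redshift of roughly 0.44 corresponds to a few billion years ago, after
# giant galaxies like ours had largely finished assembling.
```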
Despite numerous attempts, no other plausible explanations for the coincidence have been suggested. Gradually, the collective psyche of physicists was getting used to the thought that the anthropic picture might be here to stay.
PROS AND CONS
The reluctance of many physicists to embrace the anthropic explanation is easy to understand. The standard of accuracy in physics is very high; you might even say unlimited. A striking example is the calculation of the magnetic moment of the electron. An electron can be pictured as a tiny magnet. Its magnetic moment, characterizing the strength of the magnet, was first calculated by Paul Dirac in the 1930s. The result agreed very well with experiments, but physicists soon realized that there was a small correction to Dirac’s value, due to quantum fluctuations of the vacuum. What followed was a race between particle theorists doing more and more accurate calculations and experimentalists measuring the magnetic moment with higher and higher precision. The most recent measured value of the correction factor is 1.001159652188, with some uncertainty in the last digit. The theoretical calculation is even more accurate. Remarkably, the two agree up to the eleventh decimal place. In fact, a failure to agree at this level would be a cause for alarm, since any disagreement, even in the eleventh decimal place, would indicate some gap in our understanding of the electron.
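The first and largest of those quantum corrections, computed by Julian Schwinger, multiplies Dirac's value by 1 + α/2π, where α ≈ 1/137 is the fine-structure constant. The one-line check below reproduces the leading digits of the measured factor; the remaining digits come from much harder higher-order calculations.

```python
import math

alpha = 1.0 / 137.036           # fine-structure constant (approximate)

# Leading-order correction to the electron's magnetic moment:
# Dirac's value is multiplied by 1 + alpha / (2 * pi).
correction_factor = 1.0 + alpha / (2.0 * math.pi)

print(f"Leading-order factor: {correction_factor:.9f}")
print("Measured factor:      1.001159652188")
```

The leading-order factor, about 1.001161, already agrees with the measured value to within a couple of parts in a million.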