Many Worlds in One: The Search for Other Universes

by Alex Vilenkin


  You think you might be able to live with that? But wait; let us not turn the knob just yet. The effect of the change at earlier stages of cosmic evolution could be much more devastating. As we discussed in Chapter 4, heavy elements, such as carbon, oxygen, and iron, were forged in stellar interiors and then dispersed in supernova explosions. These elements are essential for the formation of planets and living creatures. Without supernovae, they would remain buried inside stars and the only elements available would be the lightest ones, formed in the big bang: hydrogen, helium, and deuterium, with a trace of lithium—not the kind of universe you would like to live in.

  Gravity is by far the weakest of the four fundamental forces. Its effects are important only in the presence of huge aggregates of matter, like galaxies or stars. In fact, it is the weakness of gravity that makes the stars so massive: the mass has to be large enough to squeeze the hot gas to the high density needed for nuclear reactions. If we were to make gravity stronger, the stars would not be so large and would burn out faster. A millionfold increase in the strength of gravity would make stellar masses a billion times smaller.ap The mass of a typical star would then be less than the present mass of the Moon, and its lifetime would be about 10,000 years (compared to 10 billion years for the Sun). This time interval is hardly long enough for even the simplest bacteria to evolve. A much smaller enhancement of gravity may in fact be sufficient to depopulate the universe. A hundredfold increase, for example, would reduce stellar lifetimes well below the few billion years that it took for intelligent life to evolve on Earth.
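  As a rough back-of-the-envelope illustration (my own sketch, not a calculation from the book), one can use the standard estimate that the characteristic stellar mass scales as the inverse three-halves power of the gravitational constant, so multiplying the strength of gravity by a million divides stellar masses by roughly a billion:

    # Illustrative sketch, assuming the standard scaling M_star ~ (hbar*c / (G*m_p**2))**1.5 * m_p,
    # i.e. the characteristic stellar mass falls as G**-1.5 when gravity is made stronger.

    def stellar_mass_reduction(gravity_boost: float) -> float:
        """Factor by which the characteristic stellar mass shrinks
        when the gravitational constant is multiplied by `gravity_boost`."""
        return gravity_boost ** 1.5

    print(stellar_mass_reduction(1e6))  # ~1e9: a millionfold boost makes stars a billion times lighter
    print(stellar_mass_reduction(1e2))  # a hundredfold boost makes stars about a thousand times lighter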

  These and many other examples show that our presence in the universe depends on a precarious balance between different tendencies—a balance that would be destroyed if the constants of nature were to deviate significantly from their present values.2 What are we to make of this fine-tuning of the constants? Is it a sign of a Creator who carefully adjusted the constants so that life and intelligence would be possible? Perhaps. But there is also a completely different explanation.

  THE ANTHROPIC PRINCIPLE

  The alternative view is based on a very different image of the Creator. Instead of meticulously designing the universe, he botches one sloppy job after another, putting out a huge number of universes with different and totally random values of the constants. Most of these universes are as exciting as the neutron world, but once in a while, by pure chance, a finely tuned universe fit for life will be created.

  Given this worldview, let us ask ourselves: What kind of universe can we expect to live in? Most of the universes will be dreary and unsuitable for life, but there will be nobody there to complain about that. All intelligent beings will find themselves in the rare bio-friendly universes and will marvel at the miraculous conspiracy of the constants that made their existence possible. This line of reasoning is known as the anthropic principle. The name was coined in 1974 by Cambridge astrophysicist Brandon Carter,aq who offered the following formulation of the principle: “[W]hat we can expect to observe must be restricted by the conditions necessary for our presence as observers.”3

  The anthropic principle is a selection criterion. It assumes the existence of some distant domains where the constants of nature are different. These domains may be located in some remote parts of our own universe, or they could belong to other, completely disconnected spacetimes. A collection of domains with a wide variety of properties is called a multiverse—the term introduced by Carter’s former classmate Martin Rees, now Britain’s Astronomer Royal. Later in this book we shall encounter three types of multiverse ensembles. The first consists of a multitude of regions all belonging to the same universe. The second type is made up of separate, disconnected universes.ar And the third type is a combination of the two: it consists of multiple universes, each of which has a variety of different regions. If a multiverse of any type really exists, then it is not surprising that the constants of nature are fine-tuned for life. On the contrary, they are guaranteed to be fine-tuned.

  Anthropic reasoning can also be applied to variations of observable properties in time, rather than in space. One of the earliest applications was by Robert Dicke, who used the anthropic approach to explain the present age of the universe. Dicke argued that life can form only after heavy elements are synthesized in stellar interiors. This takes a few billion years. The elements are then dispersed in supernova explosions, and we have to allow a few more billion years for the second generation of stars and their planetary systems to form in the aftermath of the explosions and for biological evolution to occur. The first observers could not, therefore, appear much earlier than 10 billion years A.B. (after the big bang). We should also keep in mind that a star like our Sun exhausts its nuclear energy in about 10 billion years and that the galactic supply of gas for new star formation is also depleted on a similar time scale. At 100 billion years A.B. there will be very few Sun-like stars left in the visible universe.4 If we assume that life will perish with the death of stars, we are left with a window between, say, 5 and 100 billion years A.B. when observers can exist.as Not surprisingly, the present age of the universe falls within this window.5

  Dicke’s use of the anthropic principle to constrain our location in time was uncontroversial. But Brandon Carter, Martin Rees, and a few other physicists attempted to go beyond that, using anthropic reasoning to explain the fine-tuning of the fundamental constants. And that’s where the controversy began.

  WHAT DOES THE ANTHROPIC PRINCIPLE HAVE IN COMMON WITH PORNOGRAPHY?

  As formulated by Carter, the anthropic principle is trivially true. The constants of nature and our location in spacetime should not preclude the existence of observers. For otherwise our theories would be logically inconsistent. When interpreted in this sense, as a simple consistency requirement, the anthropic principle is, of course, uncontroversial, although not very useful. But any attempt to use it as an explanation for the fine-tuning of the universe evoked an adverse and unusually temperamental response from the physics community.

  There were in fact some good reasons for that. In order to explain the fine-tuning, one has to postulate the existence of a multiverse, consisting of remote domains where the constants of nature are different. The problem is, however, that there is not one iota of evidence to support this hypothesis. Even worse, it does not seem possible to ever confirm or disprove it. The philosopher Karl Popper has argued that any statement that cannot be falsified cannot be scientific. This criterion, which has been generally adopted by physicists, seems to imply that anthropic explanations of the fine-tuning are not scientific. Another, related criticism was that the anthropic principle can only be used to explain what we already know. It never predicts anything, and thus cannot be tested.

  It did not help that the whole subject of the anthropic principle had been obscured by murky and confusing interpretations.at On top of that, many different formulations of the principle appeared in the literature (the philosopher Nick Bostrom, who wrote a book on the subject,6 counted more than thirty). The situation is well summarized by a quote from Mark Twain: “The researches of many commentators have already thrown much darkness on this subject, and it is probable that, if they continue, we shall soon know nothing at all about it.”7 The term “anthropic” was itself a source of confusion, as it seems to refer to human beings, rather than to intelligent observers in general.

  But the main reason why the response to anthropic explanations was so emotional was probably the feeling of betrayal. Ever since Einstein, physicists have believed that the day would come when all the constants of nature could be calculated from some all-encompassing Theory of Everything. Resorting to anthropic arguments was viewed as a capitulation and evoked reactions ranging from annoyance to outright hostility. Some well-known physicists went so far as to say that anthropic ideas were “dangerous”8 and that they were “corrupting science.”9 Only in extreme cases, when all other possibilities have been exhausted, might one be excused for mentioning the “A-word,” and sometimes not even then. The Nobel Prize winner Steven Weinberg once said that a physicist talking about the anthropic principle “runs the same kind of risk as a cleric talking about pornography. No matter how much you say you are against it, some people will think you are a little too interested.”

  THE COSMOLOGICAL CONSTANT

  If there has ever been a problem calling for measures of last resort, it is the cosmological constant problem. Different contributions to the vacuum energy density conspire to cancel one another with an accuracy of one part in 10¹²⁰. This is the most notorious and perplexing case of fine-tuning in physics. Andrei Linde was one of the first brave souls to apply anthropic reasoning to this problem. He was not satisfied with the vague talk about “other universes” and suggested a specific model of how the cosmological constant could be variable and what could make it change from one place to another.

  Linde used an idea that had worked for him before. Remember the little ball rolling down in the energy landscape? The ball represented a scalar field; and its elevation, the energy density of the field. As the field rolled downhill, its energy drove the inflationary expansion of the universe.

  The feature Linde took from this model of inflation was that different elevations in the landscape correspond to different energy densities. He assumed the existence of another scalar field with an energy landscape of its own. To avoid confusion with the field responsible for inflation, we shall call the latter field “the inflaton”—its usual name in the physics literature. In our neighborhood, the inflaton has already rolled to the bottom of its energy hill. (This happened 14 billion years ago, at the end of inflation.) To prevent his new field from rolling downhill too quickly, Linde had to require that the slope should be exceedingly gentle, much more so than in the model of inflation. Any slope, no matter how small, will eventually cause the field to roll down. With smaller slopes it will take longer “to get the ball rolling.” Linde assumed the slope to be so small that the field would not move much in the 14 billion years that elapsed since the big bang. But if the slope extends to a great length in both directions, the energy density can reach very large positive or negative values. (See Figure 13.2.)

  Figure 13.2. A scalar field on a very gentle slope of the energy landscape.

  The full energy density of the vacuum—the cosmological constant—is obtained by adding the energy density of the scalar field to the vacuum energy densities of fermions and bosons calculated from particle physics. Even if there are no miraculous cancellations and the particle physics contribution is huge, there will be a spot on the slope where the scalar field contribution has an equal magnitude and opposite sign, so the total vacuum energy density is zero. The scalar field is presumably very close to that spot in our part of the universe.

  If the scalar field were to vary from one part of the universe to another, the cosmological “constant” would also be variable, and that is all one needs to apply the anthropic principle. But what could cause the scalar field to vary? Linde had a good answer to this one as well!

  Prior to the big bang, in the course of eternal inflation, the field experienced random quantum kicks. As before, we can represent the behavior of the field by that of a party of random walkers (see Chapter 8). The slope of the hill is too small to be of any importance in this case, so the walkers step left and right with nearly equal probability. Even if they start at the same place, the walkers will gradually drift apart and, given enough time, will spread along the entire slope. (Recall, there is no shortage of time in eternal inflation.) Since the walkers represent scalar field values in different regions of space, we conclude that quantum processes during inflation necessarily generate a distribution of regions with all possible values of the field—and therefore all possible values of the cosmological constant.
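  A minimal numerical sketch of this random-walk picture (my own toy illustration, not Linde's actual calculation; all numbers are arbitrary) shows the walkers drifting apart even though every one of them starts at the same field value:

    import random

    # Toy model: each "walker" is the scalar-field value in one inflating region.
    # With a negligible slope, quantum kicks move the field up or down with equal
    # probability, so the spread of values grows like the square root of the number of steps.

    def simulate(n_regions=10000, n_steps=1000, kick=1.0, seed=1):
        rng = random.Random(seed)
        fields = [0.0] * n_regions           # every region starts at the same field value
        for _ in range(n_steps):
            fields = [phi + kick * rng.choice((-1.0, 1.0)) for phi in fields]
        return fields

    values = simulate()
    rms_spread = (sum(v * v for v in values) / len(values)) ** 0.5
    print(f"rms spread after 1000 steps: about {rms_spread:.0f} kicks")   # close to sqrt(1000), roughly 32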

  While the walkers are wandering on the slope, the distances between the regions they represent are being stretched by the exponential inflationary expansion. As a result, the spatial variation of the vacuum energy density is extremely small.au You would have to travel googols of miles before you noticed the slightest change.

  Linde’s model can be extended to include more scalar fields and make other constants of nature variable.av And if the fundamental particle physics allows the constants to vary, then quantum processes during eternal inflation inescapably generate vast regions of space with all possible values of the constants. Eternal inflation thus provides a natural arena for applications of the anthropic principle.

  Now that we have an ensemble of regions with different values of the cosmological constant, what value should we expect to observe? In regions where the mass density of the vacuum is greater than the density of water (1 gram per cubic centimeter), stars would be torn apart by repulsive gravity. It turns out, however, that a much smaller vacuum density would do enough damage to make observers impossible. This was shown by Steven Weinberg, in a paper that later became a classic of anthropic reasoning.

  Figure 13.3. Steven Weinberg. (Photo by Frank Curry, Studio Penumbra)

  As the universe expands, the density of matter is diluted, and inevitably there comes a time when it drops below that of the vacuum. Weinberg found that once this happens, matter can no longer clump into galaxies; instead it is dispersed by the repulsive vacuum gravity. The larger the cosmological constant is, the earlier is the time of vacuum dominance. And regions where it dominates before any galaxies have had a chance to form will have no cosmologists to worry about the cosmological constant problem.

  The effect of a negative cosmological constant is even more devastating. In this case, the vacuum gravity is attractive, and vacuum domination leads to a rapid contraction and collapse of the corresponding regions. The anthropic principle requires that the collapse should not occur before galaxies and observers have had time to evolve.

  According to Weinberg’s analysis, the largest mass density of the vacuum that still allows some galaxies to form is about the mass of a few hundred hydrogen atoms per cubic meter—10²⁷ times smaller than the density of water. This was a great improvement over the googols of tons per cubic centimeter suggested by particle physicists’ calculations.
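  The quoted ratio is easy to check with back-of-the-envelope arithmetic (my own check, not a calculation reproduced from the book; “a few hundred” is taken here to mean 300 atoms):

    # Rough arithmetic behind the "10^27 times smaller than water" figure.
    M_HYDROGEN = 1.67e-27      # mass of a hydrogen atom, in kilograms
    RHO_WATER = 1.0e3          # density of water, in kilograms per cubic meter

    atoms_per_m3 = 300                             # "a few hundred" hydrogen atoms per cubic meter
    rho_vacuum_bound = atoms_per_m3 * M_HYDROGEN   # roughly 5e-25 kg per cubic meter

    print(f"vacuum bound: {rho_vacuum_bound:.1e} kg/m^3")
    print(f"water is {RHO_WATER / rho_vacuum_bound:.1e} times denser")   # roughly 2e27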

  If indeed the smallness of the cosmological constant is due to anthropic selection, then, small though it is, the constant does not have to be exactly zero. In fact, there seems to be no reason why it should be much smaller than required by the anthropic principle. In the late 1980s, the observational accuracy was already reaching the level necessary to detect such values of the constant, and Weinberg made a prediction that it would soon show up in astronomical observations. Indeed, almost a decade later the first hints of a nonzero cosmological constant appeared in supernova data.

  14

  Mediocrity Raised to a Principle

  I consider myself an average man, except for the fact that I consider myself an average man.

  —MICHEL DE MONTAIGNE

  THE BELL CURVE

  The most scathing criticism raised against the anthropic principle is that it does not yield any testable predictions. All it says is that we can observe only those values of the constants that allow observers to exist. This can hardly be regarded as a prediction, since it is guaranteed to be true. The question is, Can we do any better? Is it possible to extract some nontrivial predictions from anthropic arguments?

  If the quantity I am going to measure can take a range of values, determined largely by chance, then I cannot predict the result of the measurement with certainty. But I can still try to make a statistical prediction. Suppose, for example, I want to predict the height of the first man I am going to see when I walk out into the street. According to the Guinness Book of Records, the tallest man in medical history was the American Robert Pershing Wadlow, whose height was 2.72 meters (8 feet 11 inches). The shortest adult man, the Indian Gul Mohammed, was just 56 centimeters tall (about 22 inches). If I want to play it really safe, I should predict that the first man I see will be somewhere between these two extremes. Barring the possibility of breaking the Guinness records, this prediction is guaranteed to be correct.

  To make a more meaningful prediction, I could consult the statistical data on the height of men in the United States. The height distribution follows a bell curve, shown in Figure 14.1, with a median value at 1.77 meters (about 5 feet 9½ inches). (That is, 50 percent of men are shorter and 50 percent are taller than this value.) The first man I meet is not likely to be a giant or a dwarf, so I expect his height to be in the mid-range of the distribution. To make the prediction more quantitative, I can assume that he will not be among the tallest 2.5 percent or shortest 2.5 percent of men in the United States. The remaining 95 percent have heights between 1.63 meters (5 feet 4 inches) and 1.90 meters (6 feet 3 inches). If I predict that the man I meet will be within this range of heights and then perform the experiment a large number of times, I can expect to be right 95 percent of the time. This is known as a prediction at 95 percent confidence level.

  In order to make a 99 percent confidence level prediction, I would have to discard 0.5 percent at both ends of the distribution. As the confidence level is increased, my chances of being wrong get smaller, but the predicted range of heights gets wider and the prediction less interesting.
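  The ranges quoted above can be reproduced with a short calculation. This is a sketch under the assumption that heights follow a normal distribution with mean 1.77 meters; the standard deviation of 7 centimeters is an illustrative value chosen to match the quoted 95 percent range, not a figure from the book:

    from statistics import NormalDist

    # Assumed, illustrative parameters: mean height 1.77 m (from the text),
    # standard deviation 0.07 m (chosen to reproduce the quoted 1.63-1.90 m range).
    heights = NormalDist(mu=1.77, sigma=0.07)

    for confidence in (0.95, 0.99):
        tail = (1.0 - confidence) / 2.0                       # probability discarded from each tail
        low, high = heights.inv_cdf(tail), heights.inv_cdf(1.0 - tail)
        print(f"{confidence:.0%} prediction: {low:.2f} m to {high:.2f} m")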

  Figure 14.1. Height distribution of men in the United States. The number of men whose height is within a given interval is proportional to the area under the corresponding portion of the curve. The shaded “tails” of the bell curve mark 2.5 percent at low and high ends of the distribution. The range between the marked areas is predicted at 95 percent confidence level.

  Can a similar technique be applied to make predictions for the constants of nature? I was trying to find the answer to this question in the summer of 1994, when I visited my friend Thibault Damour at the Institut des Hautes Études Scientifiques in France. The institute is located in a small village, Bures-sur-Yvette, a thirty-minute train ride from Paris. I love the French countryside and, despite the calories, French food and wine. The famous Russian physicist Lev Landau used to say that a single alcoholic drink was enough to kill his inspiration for a week. Luckily, this has not been my experience. In the evenings, with my spirits up after a very enjoyable dinner, I would take a walk in the meadows along the little river Yvette, and my thoughts would gradually return to the problem of anthropic predictions.

 
