
Brilliant Blunders: From Darwin to Einstein - Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe

by Mario Livio


  Type Ia supernovae are very rare, occurring only about once per century in a given galaxy. Consequently, each team had to examine thousands of galaxies to collect a sample of a few dozen supernovae. The astronomers determined the distances to these supernovae and their host galaxies, and the recession velocities of the latter. With these data at hand, they compared their results with the predictions of a linear Hubble’s law. If the expansion of the universe were indeed slowing, as everyone expected, they should have found that galaxies that are, say, two billion light-years away appear brighter than anticipated, since they would be somewhat closer than where uniform expansion would predict. Instead, Riess, Schmidt, Perlmutter, and their colleagues found that the distant galaxies appeared dimmer than expected, indicating that they had reached a larger distance. A precise analysis showed that the results imply a cosmic acceleration for the past six billion years or so. Perlmutter, Schmidt, and Riess shared the 2011 Nobel Prize in physics for their dramatic discovery.
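
  A rough sense of the effect can be had from a few lines of arithmetic. The sketch below is not the discovery teams’ actual analysis; it simply integrates the standard distance-redshift relation for a flat universe, once for matter alone (a decelerating universe) and once for an assumed 30/70 matter-to-vacuum split, with an assumed Hubble constant of 70 kilometers per second per megaparsec, and shows that a supernova at a given redshift lies farther away, and so looks fainter, in the accelerating case.

```python
# A minimal sketch (not the discovery teams' actual pipeline) of why supernovae
# look dimmer in an accelerating universe: at a fixed redshift, the luminosity
# distance is larger when a cosmological constant dominates.
# Assumed, illustrative parameters: a flat universe, H0 = 70 km/s/Mpc.
import numpy as np

C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant in km/s/Mpc

def luminosity_distance(z, omega_m, omega_lambda, steps=100_000):
    """Luminosity distance in Mpc for a flat universe with matter and vacuum energy."""
    # Comoving distance: (c/H0) * integral of dz' / E(z'), done with a midpoint sum.
    edges = np.linspace(0.0, z, steps + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    dz = edges[1] - edges[0]
    e_of_z = np.sqrt(omega_m * (1.0 + mids) ** 3 + omega_lambda)
    comoving = (C_KM_S / H0) * np.sum(dz / e_of_z)
    return (1.0 + z) * comoving

z = 0.5  # a representative redshift for the 1998 supernova samples
d_decel = luminosity_distance(z, omega_m=1.0, omega_lambda=0.0)  # matter only, decelerating
d_accel = luminosity_distance(z, omega_m=0.3, omega_lambda=0.7)  # vacuum energy dominates today

# The dimming, expressed as a difference in astronomical magnitudes.
delta_mag = 5.0 * np.log10(d_accel / d_decel)
print(f"luminosity distance, decelerating universe: {d_decel:7.1f} Mpc")
print(f"luminosity distance, accelerating universe: {d_accel:7.1f} Mpc")
print(f"the same supernova looks about {delta_mag:.2f} magnitudes dimmer")
```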

  Since the initial discovery in 1998, more pieces of this puzzle have emerged, all corroborating the fact that some new form of a smoothly distributed energy is producing a repulsive gravity that is pushing the universe to accelerate. First, the sample of supernovae has increased significantly and now covers a wide range of distances, putting the findings on a much firmer basis. Second, Riess and his collaborators have shown by subsequent observations that an earlier epoch of deceleration preceded the current six-billion-year-long accelerating phase in the cosmic evolution. A beautifully compelling picture emerges: When the universe was smaller and much denser, gravity had the upper hand and was slowing the expansion. Recall, however, that the cosmological constant, as its name implies, does not dilute; the energy density of the vacuum is constant. The densities of matter and radiation, on the other hand, were enormously high in the very early universe, but they have decreased continuously as the universe expanded. Once the energy density of matter dropped below that of the vacuum (about six billion years ago), acceleration ensued.
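
  The dilution argument can be made concrete with a little arithmetic. The short sketch below assumes the roughly 27 percent/73 percent matter-to-dark-energy split quoted in the next paragraph (purely as illustrative numbers), scales the matter density as the inverse cube of the size of the universe while holding the vacuum energy fixed, and finds when the two densities crossed.

```python
# Why acceleration began only "recently": matter dilutes as the universe grows,
# while the vacuum energy does not. Using the roughly 27%/73% matter/dark-energy
# split quoted in the text (illustrative values), find the size of the universe
# at which the vacuum energy density overtook that of matter.
OMEGA_MATTER = 0.27   # matter fraction today (ordinary + dark)
OMEGA_VACUUM = 0.73   # vacuum-energy ("dark energy") fraction today

def matter_density(a):
    """Matter density (in units of today's critical density) at scale factor a."""
    return OMEGA_MATTER / a ** 3   # dilutes with volume as the universe expands

def vacuum_density(a):
    """Vacuum energy density: a true cosmological constant never dilutes."""
    return OMEGA_VACUUM

# Show the scaling at a few sizes of the universe (a = 1 today).
for a in (0.25, 0.5, 1.0):
    print(f"a = {a:4.2f}: matter {matter_density(a):6.2f}, vacuum {vacuum_density(a):4.2f}")

# Equality of the two densities: OMEGA_MATTER / a^3 = OMEGA_VACUUM
a_equality = (OMEGA_MATTER / OMEGA_VACUUM) ** (1.0 / 3.0)
z_equality = 1.0 / a_equality - 1.0
print(f"vacuum energy overtook matter when the universe was {a_equality:.2f} "
      f"of its present size (redshift z of about {z_equality:.2f})")
# Note: the actual onset of acceleration comes somewhat before this simple
# density equality, because pressure also enters Einstein's equations.
```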

  The most convincing evidence for the accelerating universe came from combining detailed observations of the fluctuations in the cosmic microwave background by the Wilkinson Microwave Anisotropy Probe (WMAP) with those of supernovae, and supplementing those observations with separate measurements of the current expansion rate (the Hubble constant). Putting all of the observational constraints together, astronomers were able to determine precisely the current contribution of the putative vacuum energy to the total cosmic energy budget. The observations revealed that matter (ordinary and dark together) contributes only about 27 percent of the universe’s energy density, while “dark energy”—the name given to the smooth component that is consistent with being the vacuum energy—contributes about 73 percent. In other words, Einstein’s diehard cosmological constant, or something very much like its contemporary “flavor”—the energy of empty space—is currently the dominant energy form in the universe!

  To be clear, the measured value of the energy density associated with the cosmological constant is still some 53 to 123 orders of magnitude smaller than what naïve calculations of the energy of the vacuum produce, but the fact that it is definitely not zero has frustrated much wishful thinking on the part of many theoretical physicists. Recall that given the incredible discordance between any reasonable value for the cosmological constant—one that the universe could accommodate without bursting at the seams—and the theoretical expectations, physicists were anticipating that some yet-undiscovered symmetry would lead to the complete cancellation of the cosmological constant. That is, they hoped that the different contributions of the various zero-point energies, as large as they might be individually, would come in pairs of opposite signs so that the net result would be zero.

  Some of these expectations were hung on concepts such as supersymmetry: particle physicists predict that every particle we know and love, such as electrons and quarks (the constituents of protons and neutrons), should have yet-to-be-found supersymmetric partners that have the same charges (for example, electrical and nuclear), but spins that differ by half a quantum mechanical unit. For instance, the electron has a spin of 1/2, and its “shadow” supersymmetric partner is supposed to have a spin of 0. If all superpartners were also to have the same mass as their known partners, then the theory predicts that the contribution of each such pair would indeed cancel out. Unfortunately, we know that the superpartners of the electron, the quark, and the elusive neutrino cannot have the same mass, respectively, as the electron, quark, and neutrino, or they would have been discovered already. When this fact is taken into account, the total contribution to the vacuum energy is larger than the observed one by some 53 orders of magnitude.

  One might still have hoped that another, yet-unthought-of symmetry would produce the desired cancellation. However, the breakthrough measurement of the cosmic acceleration has shown that this is not very likely. The exceedingly small but nonzero value of the cosmological constant has convinced many theorists that it is hopeless to seek an explanation relying on symmetry arguments. After all, how can you reduce a number to 0.00000000000000000000000000000000000000000000000000001 of its original value without canceling it out altogether? This remedy seems to require a level of fine-tuning that most physicists are unwilling to accept. It would have been much easier, in principle, to imagine a hypothetical scenario that would make the vacuum energy precisely zero than one that would set it to the observed minuscule value.

  So, is there a way out? In desperation, some physicists have taken to relying on one of the most controversial concepts in the history of science—anthropic reasoning—a line of thought in which the mere existence of human observers is assumed to be part of the explanation. Einstein himself had nothing to do with this development, but it was the cosmological constant—Einstein’s brainchild or “blunder”—that has convinced quite a few of today’s leading theorists to consider this condition seriously. Here is a concise explanation of what the fuss is all about.

  Anthropic Reasoning

  Almost everybody would agree that the question “Does extraterrestrial intelligent life exist?” is one of the most intriguing questions in science today. That this is a reasonable question to ask stems from an important truth: The properties of our universe, and the laws governing it, have allowed complex life to emerge. Obviously, the precise biological peculiarities of humans depend crucially on the Earth’s properties and its history, but some basic requirements would seem necessary for any form of intelligent life to materialize. For instance, galaxies composed of stars, and planets orbiting at least some of those stars, appear to be reasonably generic. Similarly, nucleosynthesis in stellar interiors had to forge the building blocks of life: atoms such as carbon, oxygen, and iron. The universe also had to provide for a sufficiently hospitable environment—for a long enough time—that these atoms could combine and form the complex molecules of life, enabling primitive life to evolve to its “intelligent” phase.

  In principle, one could imagine “counterfactual” universes that are not conducive to the appearance of complexity. For instance, consider a universe harboring the same laws of nature as ours, and the same values of all the “constants of nature” but one. That is, the strengths of the gravitational, electromagnetic, and nuclear forces are identical to those in our universe, as are the ratios of the masses of all the elementary particles. However, the value of one parameter—the cosmological constant—is a thousand times higher in this hypothetical universe. In such a universe, the repulsive force associated with the cosmological constant would have resulted in such a rapid expansion that no galaxies could have ever formed.

  As we have seen, the question we have inherited from Einstein was this: Why should there be a cosmological constant at all? Modern physics transformed that question into: Why should empty space exert a repulsive force? However, owing to the discovery of accelerating expansion, we now ask: Why is the cosmological constant (or the force exerted by the vacuum) so small? In 1987, in the wake of all the previous failed attempts to put a cap on the energy of empty space, physicist Steven Weinberg came up with a bold “What if?” question. What if the cosmological constant is not truly fundamental—explicable within the framework of a “theory of everything”—but accidental? That is, imagine that there exists a vast ensemble of universes—a “multiverse”—and that the cosmological constant may assume different values in different universes. Some universes, such as the counterfactual one we discussed with a thousandfold larger lambda, would not have developed complexity and life. We humans find ourselves in one of those universes that are “biophilic.” In such a case, no grand unified theory of the basic forces would fix the value of the cosmological constant. Rather, the value would be determined by the simple requirement that it should fall within the range that would allow humans to evolve. In a universe with too large a cosmological constant, there would be no one to ask the question about its value. Physicist Brandon Carter, who first presented this type of argument in the 1970s, dubbed it the “anthropic principle.” The attempts to delineate the “pro-life” domains are accordingly described as anthropic reasoning. Under what conditions can we even attempt to apply this type of reasoning to explain the value of the cosmological constant?

  In order to make any sense at all, anthropic reasoning has to rely on three basic assumptions:

  1. Observations are subject to a “selection bias”—a filtering of physical reality—if only by the fact that they are carried out by humans.

  2. Some of the nominal “constants of nature” are accidental rather than fundamental.

  3. Our universe is but one member of a gigantic ensemble of universes.

  Let me examine very briefly each one of these points and attempt to assess its viability.

  Statisticians always dread selection biases. These are distortions of the results, introduced either by the data-collecting tools or by the method of data accumulation. Here are a few simple examples to demonstrate the effect. Imagine that you want to test an investment strategy by examining the performance of a large group of stocks against twenty years’ worth of data. You might be tempted to include in the study only stocks for which you have complete information over the entire twenty-year period. However, eliminating stocks that stopped trading during this period would produce biased results, since these were precisely the stocks that did not survive the market.
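
  A toy simulation makes the distortion easy to see. The numbers in the sketch below are invented purely for illustration: each imaginary stock receives random yearly returns, stocks whose value collapses stop trading, and the average outcome is then computed twice, once over all stocks and once over the survivors alone.

```python
# A toy Monte Carlo of survivorship bias (all numbers invented for illustration):
# averaging twenty-year outcomes only over stocks that kept trading overstates
# how well a typical stock actually did.
import random

random.seed(1)
N_STOCKS, N_YEARS, DELISTING_FLOOR = 10_000, 20, 0.2  # assumed toy parameters

final_values_all, final_values_survivors = [], []
for _ in range(N_STOCKS):
    value, still_trading = 1.0, True
    for _ in range(N_YEARS):
        yearly_return = random.gauss(0.06, 0.25)      # noisy, made-up yearly return
        value = max(0.0, value * (1.0 + yearly_return))
        if value < DELISTING_FLOOR:                   # the stock stops trading
            still_trading = False
            break
    final_values_all.append(value)                    # losers are kept in this list
    if still_trading:
        final_values_survivors.append(value)          # ...but not in this one

mean = lambda xs: sum(xs) / len(xs)
print(f"mean final value over all stocks:        {mean(final_values_all):.2f}")
print(f"mean final value over survivors only:    {mean(final_values_survivors):.2f}")
print(f"fraction of stocks that stopped trading: {1 - len(final_values_survivors) / N_STOCKS:.0%}")
```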

  During World War II, the Jewish Austro-Hungarian mathematician Abraham Wald demonstrated a remarkable understanding of selection bias. Wald was asked to examine data on the location of enemy fire hits on bodies of returning aircraft, to recommend which parts of the airplanes should be reinforced to improve survivability. To his superiors’ amazement, Wald recommended adding armor to the locations that showed no damage. His unique insight was that the bullet holes that he saw in surviving aircraft indicated places where an airplane could be hit and still endure. He therefore concluded that the planes that had been shot down were probably hit precisely in those places where the persevering planes were lucky enough not to have been hit.

  Astronomers are very familiar with the Malmquist bias (named after the Swedish astronomer Gunnar Malmquist, who greatly elaborated upon it in the 1920s). When astronomers survey stars or galaxies, their telescopes are sensitive only down to a certain brightness. However, objects that are intrinsically more luminous can be observed to greater distances. This will create a false trend of increasing average intrinsic brightness with distance, simply because the fainter objects will not be seen.
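
  The same effect can be demonstrated with a toy survey. In the sketch below (all numbers invented for illustration), sources with a spread of intrinsic luminosities are scattered over a range of distances, only those above an assumed flux limit are “detected,” and the average luminosity of the detected objects then appears to climb with distance, exactly the false trend described above.

```python
# A toy survey illustrating the Malmquist bias (all numbers invented):
# sources with a spread of intrinsic luminosities are placed at random
# distances, only those above a flux limit are "detected," and the average
# luminosity of the detected sample appears to climb with distance.
import math
import random

random.seed(7)
N_SOURCES = 200_000
FLUX_LIMIT = 1e-4        # assumed survey sensitivity, arbitrary units

detected = {}            # distance bin (width 20) -> detected luminosities
for _ in range(N_SOURCES):
    distance = random.uniform(1.0, 100.0)           # arbitrary distance units
    luminosity = random.lognormvariate(0.0, 1.0)    # spread of intrinsic brightness
    flux = luminosity / (4.0 * math.pi * distance ** 2)
    if flux >= FLUX_LIMIT:                          # only these make it into the survey
        detected.setdefault(int(distance // 20), []).append(luminosity)

for dist_bin in sorted(detected):
    lums = detected[dist_bin]
    print(f"distance {20 * dist_bin:3d}-{20 * (dist_bin + 1):3d}: "
          f"{len(lums):6d} detected, mean luminosity {sum(lums) / len(lums):6.2f}")
```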

  Brandon Carter pointed out that we shouldn’t take the Copernican principle—the fact that we are nothing special in the cosmos—too far. He reminded astronomers that humans are the ones who make observations of the universe; consequently, they should not be too surprised to discover that the properties of the cosmos are consistent with human existence. For instance, we could not discover that our universe contains no carbon, since we are carbon-based life-forms. Initially, most researchers took Carter’s anthropic reasoning to be nothing more than a trivially obvious statement. Over the past couple of decades, however, the anthropic principle has gained some popularity. Today quite a few leading theorists accept the fact that in the context of a multiverse, anthropic reasoning can lead to a natural explanation for the otherwise perplexing value of the cosmological constant. To recapitulate the argument, if lambda were much larger (as some probabilistic considerations seem to require), then the cosmic acceleration would have overwhelmed gravity before galaxies had a chance to form. The fact that we find ourselves here in the Milky Way galaxy necessarily biases our observations to low values of the cosmological constant in our universe.

  But how reasonable is the assumption that some physical constants are “accidental”? A historical example can help clarify the concept. In 1597 the great German astronomer Johannes Kepler published a treatise known as Mysterium Cosmographicum (The Cosmic Mystery). In this book, Kepler thought that he had found the solution to two bewildering cosmic enigmas: Why were there precisely six planets in the solar system (only six were known at this time) and what determined the sizes of the planetary orbits? Even in Kepler’s time, his answers to these riddles were borderline crazy. He constructed a model for the solar system by embedding the five regular solids known as the Platonic solids (tetrahedron, cube, octahedron, dodecahedron, and icosahedron) inside each other. Together with an outer sphere corresponding to the fixed stars, the solids determined precisely six spacings, which to Kepler “explained” the number of the planets. By choosing a particular order for which solid to embed in which, Kepler was able to achieve approximately the correct relative sizes for the orbits in the solar system. However, the main problem with Kepler’s model was not in its geometrical details—after all, Kepler used the mathematics that he knew to explain existing observations. The key failure was that Kepler did not realize that neither the number of planets nor the sizes of their orbits were fundamental quantities—ones that can be explained from first principles. While the laws of physics indeed govern the general process of planet formation from a protoplanetary disk of gas and dust, the particular environment of any young stellar object determines the end result.

  We now know that there are billions of extrasolar planets in the Milky Way, and each planetary system is different in terms of its members and orbital properties. Both the number of the planets and the dimensions of their circuits are accidental, as is, for instance, the precise shape of any individual snowflake.

  There is one particular quantity in the solar system that has been crucial for our existence: the distance between the Earth and the Sun. The Earth is in the Sun’s habitable zone—the narrow circumstellar band that allows for liquid water to exist on the planet’s surface. At much closer distances, water evaporates, and at much larger ones, it freezes. Water was essential for life to emerge on Earth, since molecules could combine easily in the young Earth’s “soup” and could form long chains while being sheltered from harmful ultraviolet radiation. Kepler was obsessed with the idea of finding a first-principles explanation for the Earth-Sun distance, but this obsession was misguided. There was nothing to prevent the Earth (in principle) from forming at a different distance. But had that distance been significantly larger or smaller, there would have been no Kepler to wonder about it. Among the billions of solar systems in the Milky Way galaxy, many probably do not harbor life, since they don’t have the right planet in the habitable zone around the host star. Even though the laws of physics did determine the orbit of the Earth, there is no deeper explanation for its radius other than the fact that had it been very different, we wouldn’t be here.

  This brings us to the last necessary ingredient of anthropic reasoning: For the explanation of the value of the cosmological constant in terms of an accidental quantity in a multiverse to hold any water, there must be a multiverse. Is there? We don’t know, but that has never stopped smart physicists from speculating. What we do know is that in one theoretical scenario known as “eternal inflation,” the dramatic stretching of space-time can produce an infinite and everlasting multiverse. This multiverse is supposed to continually generate inflating regions, which evolve into separate “pocket universes.” The big bang from which our own “pocket universe” came into existence is just one event in a much grander scheme of an exponentially expanding substratum. Some versions of “string theory” (now sometimes called “M-theory”) also allow for a huge variety of universes (more than 10^500!), each potentially characterized by different values of physical constants. If this speculative scenario is correct, then what we have traditionally called “the universe” could indeed be just one piece of space-time in a vast cosmic landscape.

  One should not get the impression that all (or even most) physicists believe that the solution to the puzzle of the energy of empty space will come from anthropic reasoning. The mere mention of the “multiverse” and “anthropics” tends to raise the blood pressure of some physicists. There are two main reasons for this adverse reaction. First, as already mentioned in chapter 9, ever since the seminal work of philosopher of science Karl Popper, for a scientific theory to be worthy of its name, it has to be falsifiable by experiments or observations. This requirement has become the foundation of the “scientific method.” An assumption about the existence of an ensemble of potentially unobservable universes appears, at first glance at least, to be in conflict with this prerequisite and therefore in the realm of metaphysics rather than physics.

  Note, however, that the boundary between what we define as observable and what is not is unclear. Consider, for instance, the “particle horizon”: that surface around us from which radiation emitted at the big bang is just reaching us. In the Einstein–de Sitter model—the model for a homogeneous, isotropic, constant curvature universe, with no cosmological constant—the cosmic expansion decelerates, and one could safely expect that all the objects currently lying beyond the horizon will eventually become observable in the distant future. But since 1998, we know that we don’t live in an Einstein–de Sitter cosmos: our universe is accelerating. In this universe any object now beyond the horizon will stay beyond the horizon forever. Moreover, if the accelerating expansion continues, as anticipated from a cosmological constant, even galaxies that we can now see will become invisible to us! As their recession speed approaches the speed of light, their radiation will stretch (redshift) to the point where its wavelength will exceed the size of the universe. (There is no limit on how fast space-time can stretch, since no mass is really moving.)

  So even our own accelerating universe contains objects that neither we nor future generations of astronomers will ever be able to observe. Yet we would not consider such objects as belonging to metaphysics. What could then give us confidence in potentially unobservable universes? The answer is a natural extension of the scientific method: We can believe in their existence if they are predicted by a theory that gains credibility because it is corroborated in other ways. We believe in the properties of black holes because their existence is predicted by general relativity—a theory that has been tested in numerous experiments. The rules should be a straightforward extrapolation of Popper’s ideas: If a theory makes testable and falsifiable predictions in the observable parts of the universe, we should be prepared to accept its predictions in those parts of the universe (or multiverse) that are not accessible to direct observations.

 
