From Eternity to Here: The Quest for the Ultimate Theory of Time


by Sean M. Carroll


  The prediction that we live in a multiverse is, as far as we can tell, untestable. (Although, who knows? Scientists have come up with remarkably clever ideas before.) But that misses the point. The multiverse is part of a larger, more comprehensive structure. The question should be not “How can we test whether there is a multiverse?” but “How can we test the theories that predict the multiverse should exist?” Right now we don’t know how to use those theories to make a falsifiable prediction. But there’s no reason to think that we can’t, in principle, do so. It will require a lot more work on the part of theoretical physicists to develop these ideas to the point where we can say what, if any, the testable predictions might be. One might be impatient that those predictions aren’t laid out before them straightforwardly right from the start—but that’s a personal preference, not a principled philosophical stance. Sometimes it takes time for a promising scientific idea to be nurtured and developed to the point where we can judge it fairly.

  THE SEARCH FOR MEANING IN A PREPOSTEROUS UNIVERSE

  Throughout history, human beings have (quite naturally) tended to consider the universe in human-being-centric terms. That might mean something as literal as putting ourselves at the geographical center of the universe—an assumption that took some effort to completely overcome. Ever since the heliocentric model of the Solar System gained widespread acceptance, scientists have held up the Copernican Principle—“we do not occupy a favored place in the universe”—as a caution against treating ourselves as something special.

  But at a deeper level, our anthropocentrism manifests itself as a conviction that human beings somehow matter to the universe. This feeling is at the core of much of the resistance in some quarters to accepting Darwin’s theory of natural selection as the right explanation for the evolution of life on Earth. The urge to think that we matter can take the form of a straightforward belief that we (or some subset of us) are God’s chosen people, or something as vague as an insistence that all this marvelous world around us must be more than just an accident.

  Different people have different definitions of the word God, or different notions of what the nominal purpose of human life might be. God can become such an abstract and transcendental concept that the methods of science have nothing to say about the matter. If God is identified with Nature, or the laws of physics, or our feeling of awe when contemplating the universe, the question of whether or not such a concept provides a useful way of thinking about the world is beyond the scope of empirical inquiry.

  There is a very different tradition, however, that seeks evidence for God in the workings of the physical universe. This is the approach of natural theology, which stretches from before Aristotle, through William Paley’s watchmaker analogy, up to the present day.300 It used to be that the best evidence in favor of the argument from design came from living organisms, but Darwin provided an elegant mechanism to explain what had previously seemed inexplicable. In response, some adherents to this philosophy have shifted their focus to a different seemingly inexplicable thing: from the origin of life to the origin of the cosmos.

  The Big Bang model, with its singular beginning, seems to offer encouragement to those who would look for the finger of God in the creation of the universe. (Georges Lemaître, the Belgian priest who developed the Big Bang model, refused to enlist it for any theological purposes: “As far as I can see, such a theory remains entirely outside of any metaphysical or religious question.”301) In Newtonian spacetime, there wasn’t even any such thing as the creation of the universe, at least not as an event happening at a particular time; time and space persisted forever. The introduction of a particular beginning to spacetime, especially one that apparently defies easy understanding, creates a temptation to put the responsibility for explaining what went on into the hands of God. Sure, the reasoning goes, you can find dynamical laws that govern the evolution of the universe from moment to moment, but explaining the creation of the universe itself requires an appeal to something outside the universe.

  Hopefully, one of the implicit lessons of this book has been that it’s not a good idea to bet against the ability of science to explain anything whatsoever about the operation of the natural world, including its beginning. The Big Bang represented a point past which our understanding didn’t stretch, back when it was first studied in the 1920s—and it continues to do so today. We don’t know exactly what happened 14 billion years ago, but there’s no reason whatsoever to doubt that we will eventually figure it out. Scientists are tackling the problem from a variety of angles. The rate at which scientific understanding advances is notoriously hard to predict, but it’s not hard to predict that it will be advancing.

  Where does that leave us? Giordano Bruno argued for a homogeneous universe with an infinite number of stars and planets. Avicenna and Galileo, with the conservation of momentum, undermined the need for a Prime Mover to explain the persistence of motion. Darwin explained the development of species as an undirected process of descent with random modifications, chosen by natural selection. Modern cosmology speculates that our observable universe could be only one of an infinite number of universes within a grand ensemble multiverse. The more we understand about the world, the smaller and more peripheral to its operation we seem to be.302

  That’s okay. We find ourselves, not as a central player in the life of the cosmos, but as a tiny epiphenomenon, flourishing for a brief moment as we ride a wave of increasing entropy from the Big Bang to the quiet emptiness of the future universe. Purpose and meaning are not to be found in the laws of nature, or in the plans of any external agent who made things that way; it is our job to create them. One of those purposes—among many—stems from our urge to explain the world around us the best we can. If our lives are brief and undirected, at least we can take pride in our mutual courage as we struggle to understand things much greater than ourselves.

  NEXT STEPS

  It’s surprisingly hard to think clearly about time. We’re all familiar with it, but the problem might be that we’re too familiar. We’re so used to the arrow of time that it’s hard to conceptualize time without the arrow. We are led, unprotesting, to temporal chauvinism, prejudicing explanations of our current state in terms of the past over those in terms of the future. Even highly trained professional cosmologists are not immune.

  Despite all the ink that has been spilled and all the noise generated by discussions about the nature of time, I would argue that it’s been discussed too little, rather than too much. But people seem to be catching on. The intertwined subjects of time, entropy, information, and complexity bring together an astonishing variety of intellectual disciplines: physics, mathematics, biology, psychology, computer science, the arts. It’s about time that we took time seriously, and faced its challenges head-on.

  Within physics, that’s starting to happen. For much of the twentieth century, the field of cosmology was a bit of a backwater; there were many ideas, and little data to distinguish between them. An era of precision cosmology, driven by large-scale surveys enabled by new technologies, has changed all that; unanticipated wonders have been revealed, from the acceleration of the universe to the snapshot of early times provided by the cosmic microwave background.303 Now it is time for ideas to catch up with the reality. We have interesting suggestions from inflation, from quantum cosmology, and from string theory as to how the universe might have begun and what might have come before. Our task is to develop these promising ideas into honest theories that can be compared with experiment and reconciled with the rest of physics.

  Predicting the future isn’t easy. (Curse the absence of a low-entropy future boundary condition!) But the pieces are assembled for science to take dramatic steps toward answering the ancient questions we have about the past and the future. It’s time we understood our place within eternity.

  APPENDIX: MATH

  Lloyd: You mean, not good like one out of a hundred?

  Mary: I’d say more like one out of a million.

  [pause]


  Lloyd: So you’re telling me there’s a chance.

  —Jim Carrey and Lauren Holly, Dumb and Dumber

  In the main text I bravely included a handful of equations—a couple by Einstein, and a few expressions for entropy in different contexts. An equation is a powerful, talismanic object, conveying a tremendous amount of information in an extraordinarily compact notation. It can be very useful to look at an equation and understand its implications as a rigorous expression of some feature of the natural world.

  But, let’s face it—equations can be scary. This appendix is a very quick introduction to exponentials and logarithms, the key mathematical ideas used in describing entropy at a quantitative level. Nothing here is truly necessary for comprehending the rest of the book; just bravely keep going whenever the word logarithm appears in the main text.

  EXPONENTIALS

  These two operations—exponentials and logarithms—are exactly as easy or difficult to understand as each other. Indeed, they are opposites; one operation undoes the other one. If we start with a number, take its exponential, and then take the logarithm of the result, we get back the original number we started with. Nevertheless, we tend to come across exponentials more often in our everyday lives, so they seem a bit less intimidating. Let’s start there.

  Exponentials just take one number, called the base, and raise it to the power of another number. By which we simply mean: Multiply the base by itself, a number of times given by the power. The base is written as an ordinary number, and the power is written as a superscript. Some simple examples:

  2^2 = 2 • 2 = 4,

  2^5 = 2 • 2 • 2 • 2 • 2 = 32,

  4^3 = 4 • 4 • 4 = 64.

  (We use a dot to stand for multiplication, rather than the × symbol, because that’s too easy to confuse with the letter x.) One of the most convenient cases is where we take the base to be 10; in that case, the power simply becomes the number of zeroes to the right of the one.

  10^1 = 10,

  10^2 = 100,

  10^9 = 1,000,000,000,

  10^21 = 1,000,000,000,000,000,000,000.

  That’s the idea of exponentiation. When we speak more specifically about the exponential function, what we have in mind is fixing a particular base and letting the power to which we raise it be a variable quantity. If we denote the base by a and the power by x, we have

  a^x = a • a • a • a • a • a ... • a, x times.

  This definition, unfortunately, can give you the impression that the exponential function makes sense only when the power x is a positive integer. How can you multiply a number by itself minus-two times, or 3.7 times? Here you will have to have faith that the magic of mathematics allows us to define the exponential for any value of x. The result is a smooth function that is very small when x is a negative number, and rises very rapidly when x becomes positive, as shown in Figure 88.

  Figure 88: The exponential function 10^x. Note that it goes up very fast, so that it becomes impractical to plot it for large values of x.

  There are a couple of things to keep in mind about the exponential function. The exponential of 0 is always equal to 1, for any base, and the exponential of 1 is equal to the base itself. When the base is 10, we have:

  10^0 = 1,

  10^1 = 10.

  If we take the exponential of a negative number, it’s just the reciprocal of the exponential of the corresponding positive number:

  10^-1 = 1/10^1 = 0.1,

  10^-3 = 1/10^3 = 0.001.
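These identities are easy to check numerically; a quick sketch using Python’s built-in ** operator, including one non-integer power of the kind mentioned earlier (the value 3.7 is just an arbitrary illustration):

```python
# Negative powers are reciprocals of the corresponding positive powers.
print(10 ** -1)    # 0.1
print(10 ** -3)    # 0.001

# The exponential is also defined for non-integer powers:
# 10^3.7 lies between 10^3 = 1,000 and 10^4 = 10,000.
print(10 ** 3.7)
```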

  These facts are specific examples of a more general set of properties obeyed by the exponential function. One of these properties is of paramount importance: If we multiply two numbers that are the same base raised to different powers, that’s equal to what we would get by adding the two powers and raising the base to that result. That is:

  10^x • 10^y = 10^(x+y).

  Said the other way around, the exponential of a sum is the product of the two exponentials.304
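This property is easy to verify numerically; a minimal sketch in Python:

```python
import math

# The product of two powers of the same base equals the base raised
# to the sum of the powers: 10^x · 10^y = 10^(x+y).
x, y = 3, 4
assert 10 ** x * 10 ** y == 10 ** (x + y)   # 1,000 • 10,000 = 10,000,000

# The same property holds for non-integer powers,
# up to floating-point rounding.
x, y = 1.5, 2.25
assert math.isclose(10 ** x * 10 ** y, 10 ** (x + y))
print("both identities hold")
```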

  BIG NUMBERS

  It’s not hard to see why the exponential function is useful: The numbers we are dealing with are sometimes very large indeed, and the exponential takes a medium-sized number and creates a very big number from it. As we discuss in Chapter Thirteen, the number of distinct states needed to describe possible configurations of our comoving patch of universe is approximately

  10^(10^120)

  That number is just so enormously, unimaginably huge that it would be hard to know how to even begin describing it if we didn’t have recourse to exponentiation.
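Indeed, the only practical way to handle such a number is through its logarithm; a brief Python illustration:

```python
import math

# 10^(10^120) has 10^120 + 1 digits -- far too many to ever write out
# (there are "only" about 10^88 particles in the observable universe).
# But its logarithm is a perfectly ordinary number:
log_of_state_count = 10.0 ** 120        # the base-10 logarithm of 10^(10^120)
print(math.log10(log_of_state_count))   # 120.0 -- the double exponential tamed
```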

  Let’s consider some other big numbers to appreciate just how giant this one is. One billion is 10^9, while one trillion is 10^12; these have become all too familiar terms in discussions of economics and government spending. The number of particles within our observable universe is about 10^88, which was also the entropy at early times. Now that we have black holes, the entropy of the observable universe is something like 10^101, whereas it conceivably could have been as high as 10^120. (That same 10^120 is also the ratio of the predicted vacuum energy density to the observed density.)

  For comparison’s sake, the entropy of a macroscopic object like a cup of coffee is about 10^25. That’s related to Avogadro’s Number, 6.02 • 10^23, which is approximately the number of atoms in a gram of hydrogen. The number of grains of sand in all the Earth’s beaches is about 10^20. The number of stars in a typical galaxy is about 10^11, and the number of galaxies in the observable universe is also about 10^11, so the number of stars in the observable universe is about 10^22—a bit larger than the number of grains of sand on Earth.

  The basic units that physicists use are time, length, and mass, or combinations thereof. The shortest interesting time is the Planck time, about 10^-43 seconds. Inflation is conjectured to have lasted for about 10^-30 seconds or less, although that number is extremely uncertain. The universe created helium out of protons and neutrons about 100 seconds after the Big Bang, and it became transparent at the time of recombination, 380,000 years (10^13 seconds) after that. (One year is about 3 • 10^7 seconds.) The observable universe now is 14 billion years old, about 4 • 10^17 seconds. In another 10^100 years or so, all the black holes will have mostly evaporated away, leaving a cold and empty universe.
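As a quick sanity check on the arithmetic above (using the rounded values from the text, so only rough agreement is expected):

```python
SECONDS_PER_YEAR = 3e7     # one year is about 3 • 10^7 seconds
AGE_IN_YEARS = 14e9        # the universe is about 14 billion years old

age_in_seconds = AGE_IN_YEARS * SECONDS_PER_YEAR
print(age_in_seconds)      # about 4 • 10^17 seconds, as quoted above
```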

  The shortest length is the Planck length, about 10^-33 centimeters. The size of a proton is about 10^-13 centimeters, and the size of a human being is about 10^2 centimeters. (That’s a pretty short human being, but we’re only being very rough here.) The distance from the Earth to the Sun is about 10^13 centimeters; the distance to the nearest star is about 10^18 centimeters, and the size of the observable universe is about 10^28 centimeters.

  The Planck mass is about 10^-5 grams—that would be extraordinarily heavy for a single particle, but isn’t all that much by macroscopic standards. The lightest particles that have more than zero mass are the neutrinos; we don’t know for sure what their masses are, but the lightest seem to be about 10^-36 grams. A proton is about 10^-24 grams, and a human being is about 10^5 grams. The Sun is about 10^33 grams, a galaxy is about 10^45 grams, and the mass within the observable universe is about 10^56 grams.

  LOGARITHMS

  The logarithm function is the easiest thing in the world: It undoes the exponential function. That is, if we have some number that can be expressed in the form 10^x—and every positive number can be—then the logarithm of that number is simply

  log(10^x) = x.

  What could be simpler than that? Likewise, the exponential undoes the logarithm:

  10^log(x) = x.

  Another way of thinking about it is: If a number is a perfect power of 10 (like 10, 100, 1,000, etc.), the logarithm is simply the number of zeroes to the right of the initial 1:

  log(10) = 1,

  log(100) = 2,

  log(1,000) = 3.

  But just as for the exponential, the logarithm is actually a smooth function, as shown in Figure 89. The logarithm of 2.5 is about 0.3979, the logarithm of 25 is about 1.3979, the logarithm of 250 is about 2.3979, and so on. The only restriction is that we can’t take the logarithm of a negative number; that makes sense, because the logarithm inverts the exponential function, and we can never get a negative number by exponentiating. Roughly speaking, for large numbers the logarithm is simply “the number of digits in the number.”

  Figure 89: The logarithm function log(x). It is not defined for negative values of x, and as x approaches zero from the right the logarithm goes to minus infinity.
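The quoted values are easy to reproduce with Python’s math.log10 (the base-10 logarithm used throughout this appendix), along with the digit-counting rule of thumb:

```python
import math

# The logarithm is smooth: multiplying the argument by 10 adds exactly 1.
print(round(math.log10(2.5), 4))   # 0.3979
print(round(math.log10(25), 4))    # 1.3979
print(round(math.log10(250), 4))   # 2.3979

# Roughly, the logarithm counts digits: for a positive integer n,
# floor(log10(n)) + 1 is the number of digits in n.
n = 123456789
print(math.floor(math.log10(n)) + 1, len(str(n)))   # 9 9
```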

  Just like the exponential of a sum is the product of exponentials, the logarithm has a corresponding property: The logarithm of a product is the sum of logarithms. That is:

  log(x • y) = log(x) + log(y).

  It’s this lovely property that makes logarithms so useful in the study of entropy. As we discuss in Chapter Eight, a physical property of entropy is that the entropy of two systems combined together is equal to the sum of the entropies of the two individual systems. But you get the number of possible states of the combined systems by multiplying the numbers of states of the two individual systems. So Boltzmann concluded that the entropy should be the logarithm of the number of states, not the number of states itself. In Chapter Nine we tell a similar story for information: Shannon wanted a measure of information for which the total information carried in two independent messages was the sum of the individual informations in each message, so he realized he also had to take the logarithm.
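Boltzmann’s reasoning can be sketched in a few lines; the state counts below are made-up toy values, not physical ones:

```python
import math

# Toy state counts for two independent systems.
states_A = 10 ** 6
states_B = 10 ** 9

# Combining independent systems multiplies their state counts...
states_combined = states_A * states_B

# ...but entropy defined as the logarithm of the state count simply adds,
# which is the additivity Boltzmann (and, for information, Shannon) wanted.
S_A = math.log10(states_A)
S_B = math.log10(states_B)
S_combined = math.log10(states_combined)
print(S_A, S_B, S_combined)   # 6.0 9.0 15.0
```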

 
