From Eternity to Here: The Quest for the Ultimate Theory of Time

by Sean M. Carroll


  The problem is that, when we are first introduced to the concept of “momentum,” it is typically defined as the mass times the velocity. But somewhere along the line, as you move into more esoteric realms of classical mechanics, that idea ceases to be a definition and becomes something that you can derive from the underlying theory. In other words, we start conceiving of the essence of momentum as “some vector (magnitude and direction) defined at each point along the path of the particle,” and then derive equations of motion that insist the momentum will be equal to the mass times the velocity. (This is known as the Hamiltonian approach to dynamics.) That’s the way we are thinking in our discussion of time reversal. The momentum is an independent quantity, part of the state of the system; it is equal to the mass times the velocity only when the laws of physics are being obeyed.
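
  To make the Hamiltonian viewpoint concrete, here is a minimal illustrative sketch in Python (the function name and the simple Euler integrator are my own choices, not anything from the text): position and momentum are stored as independent pieces of the state, and "velocity equals momentum divided by mass" shows up only because the equations of motion enforce it.

```python
def hamilton_step(q, p, m, dVdq, dt):
    """One small Euler step of Hamilton's equations for H = p**2/(2m) + V(q):
    dq/dt = dH/dp = p/m,  dp/dt = -dH/dq = -dV/dq."""
    q_new = q + (p / m) * dt      # velocity equals p/m only by virtue of this law
    p_new = p - dVdq(q) * dt
    return q_new, p_new

# Free particle (V = 0): momentum stays constant, position advances at p/m.
q, p, m = 0.0, 2.0, 1.0
for _ in range(1000):
    q, p = hamilton_step(q, p, m, lambda q: 0.0, dt=0.001)
print(q, p)   # q is about 2.0 after one unit of time; p is unchanged at 2.0
```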

  116 David Albert (2000) has put forward a radically different take on all this. He suggests that we should define a “state” to be just the positions of particles, not the positions and momenta (which he would call the “dynamical condition”). He justifies this by arguing that states should be logically independent at each moment of time—knowing the state at one moment should not, by itself, tell you anything about the state at other moments, which it clearly does under our definition; indeed, that was the entire point of including the momenta. But by redefining things in this way, Albert is able to live with the most straightforward definition of time-reversal invariance: “A sequence of states played backward in time still obeys the same laws of physics,” without resorting to any arbitrary-sounding transformations along the way. The price he pays is that, although Newtonian mechanics is time-reversal invariant under this definition, almost no other theory is, including classical electromagnetism. Which Albert admits; he claims that the conventional understanding that electromagnetism is invariant under time reversal, handed down from Maxwell to modern textbooks, is simply wrong. As one might expect, this stance invited a fusillade of denunciations; see, for example, Earman (2002), Arntzenius (2004), or Malament (2004).

  Most physicists would say that it just doesn’t matter. There’s no such thing as the one true meaning of time-reversal invariance, which is out there in the world waiting for us to capture its essence. There are only various concepts, which we may or may not find useful in thinking about how the world works. Nobody disagrees on how electrons move in the presence of a magnetic field; they just disagree on the words to use when describing that situation. Physicists tend to express bafflement that philosophers care so much about the words. Philosophers, for their part, tend to express exasperation that physicists can use words all the time without knowing what they actually mean.

  117 Elementary particles come in the form of “matter particles,” called “fermions,” and “force particles,” called “bosons.” The known bosons include the photon carrying electromagnetism, the gluons carrying the strong nuclear force, and the W and Z bosons carrying the weak nuclear force. The known fermions fall neatly into two types: six different kinds of “quarks,” which feel the strong force and get bound into composite particles like protons and neutrons, and six different kinds of “leptons,” which do not feel the strong force and fly around freely. These two groups of six are further divided into collections of three particles each; there are three quarks with electric charge +2/3 (the up, charm, and top quarks), three quarks with electric charge -1/3 (the down, strange, and bottom quarks), three leptons with electric charge -1 (the electron, the muon, and the tau), and three leptons with zero charge (the electron neutrino, the muon neutrino, and the tau neutrino). To add to the confusion, every type of quark and lepton has a corresponding antiparticle with the opposite electric charge; there is an anti-up-quark with charge -2/3, and so on.

  All of which allows us to be a little more specific about the decay of the neutron (two down quarks and one up): it actually creates a proton (two up quarks and one down), an electron, and an electron antineutrino. It’s important that it’s an antineutrino, because that way the net number of leptons doesn’t change; the electron counts as one lepton, but the antineutrino counts as minus one lepton, so they cancel each other out. Physicists have never observed a process in which the net number of leptons or the net number of quarks changes, although they suspect that such processes must exist. After all, there seem to be a lot more quarks than antiquarks in the real world. (We don’t know the net number of leptons very well, since it’s very hard to detect most neutrinos in the universe, and there could be a lot of antineutrinos out there.)

  118 “Easiest” means “lowest in mass,” because it takes more energy to make higher-mass particles, and when you do make them they tend to decay more quickly. The lightest two kinds of quarks are the up (charge +2/3) and the down (charge -1/3), but combining an up with an anti-down does not give a neutral particle, so we have to look at higher-mass quarks. The next lightest is the strange quark, with charge -1/3; combining a strange quark with an anti-down (or a down with an anti-strange) does give a neutral particle, the neutral kaon.
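
  As a quick sanity check on the charge bookkeeping in notes 117 and 118, here is a small illustrative Python sketch; the charge assignments are the standard ones quoted above, and the code simply redoes the arithmetic.

```python
from fractions import Fraction
from itertools import product

# Electric charges of the six quark flavors (antiquarks carry the opposite sign).
charge = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

# Which quark/antiquark pairs are electrically neutral?
for q, anti_q in product(charge, repeat=2):
    if charge[q] - charge[anti_q] == 0:
        print(f"{q} + anti-{anti_q} is neutral")

# up + anti-down has charge 2/3 + 1/3 = +1 (not neutral), while
# strange + anti-down has charge -1/3 + 1/3 = 0 (a neutral kaon).
```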

  119 Angelopoulos et al. (1998). A related experiment, measuring time-reversal violation by neutral kaons in a slightly different way, was carried out by the KTeV collaboration at Fermilab, outside Chicago (Alavi-Harati et al. 2000).

  120 Quoted in Maglich (1973). The original papers were Lee and Yang (1956) and Wu et al. (1957). As Wu had suspected, other physicists were able to reproduce the result very rapidly; in fact, another group at Columbia performed a quick confirmation experiment, the results of which were published back-to-back with the Wu et al. paper (Garwin, Lederman, and Weinrich, 1957).

  121 Christenson et al. (1964). Within the Standard Model of particle physics, there is an established method to account for CP violation, developed by Makoto Kobayashi and Toshihide Maskawa (1973), who generalized an idea due to Nicola Cabibbo. Kobayashi and Maskawa were awarded the Nobel Prize in 2008.

  122 We’re making a couple of assumptions here: namely, that the laws are time-translation invariant (not changing from moment to moment), and that they are deterministic (the future can be predicted with absolute confidence, rather than simply with some probability). If either of these fails to be true, the definition of whether a particular set of laws is time-reversal invariant becomes a bit more subtle.

  8. ENTROPY AND DISORDER

  123 Almost the same example is discussed by Wheeler (1994), who attributes it to Paul Ehrenfest. In what Wheeler calls “Ehrenfest’s Urn,” exactly one particle switches sides at every step, rather than every particle having a small chance of switching sides.
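
  For readers who want to play with the difference between the two versions, here is a minimal simulation sketch (the function names and step counts are my own choices): in Ehrenfest’s Urn exactly one randomly chosen molecule switches sides per step, while in the version used in the text every molecule has a small independent chance of switching.

```python
import random

def ehrenfest_step(n_right, n_total):
    """Ehrenfest's Urn: pick one molecule uniformly at random and move it to
    the other side. With probability n_right / n_total the chosen molecule is
    on the right, so the right-hand count drops by one; otherwise it rises."""
    if random.random() < n_right / n_total:
        return n_right - 1
    return n_right + 1

def text_version_step(n_right, n_total, p_switch=0.01):
    """The variant in the text: every molecule independently has a small
    chance p_switch of hopping to the other side during one step."""
    right_to_left = sum(random.random() < p_switch for _ in range(n_right))
    left_to_right = sum(random.random() < p_switch for _ in range(n_total - n_right))
    return n_right - right_to_left + left_to_right

n_total, n_right = 2000, 0   # start with every molecule on the left
for step in range(10000):
    n_right = ehrenfest_step(n_right, n_total)
print("right-hand count after 10,000 Ehrenfest steps:", n_right)  # hovers near 1,000
```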

  124 When we have 2 molecules on the right, the first one could be any of the 2,000, and the second could be any of the remaining 1,999. So you might guess there are 1,999 × 2,000 = 3,998,000 different ways this could happen. But that’s overcounting a bit, because the two molecules on the right certainly don’t come in any particular order. (Saying “molecules 723 and 1,198 are on the right” is exactly the same statement as “molecules 1,198 and 723 are on the right.”) So we divide by two to get the right answer: There are 1,999,000 different ways we can have 2 molecules on the right and 1,998 on the left. When we have 3 molecules on the right, we take 1,998 × 1,999 × 2,000 and divide by 3 × 2 different orderings. You can see the pattern; for 4 particles, we would divide 1,997 × 1,998 × 1,999 × 2,000 by 4 × 3 × 2, and so on. These numbers have a name—“binomial coefficients”—and they represent the number of ways we can choose a certain set of objects out of a larger set.
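
  A quick way to check this counting is Python’s built-in binomial-coefficient function (math.comb is part of the standard library; the numbers below are just the examples from this note).

```python
from math import comb

n = 2000  # total number of molecules in the box

# Number of ways to have exactly k molecules on the right-hand side:
# the binomial coefficient "n choose k".
print(comb(n, 2))   # 1,999,000        = 1,999 x 2,000 / 2
print(comb(n, 3))   # 1,331,334,000    = 1,998 x 1,999 x 2,000 / (3 x 2)
print(comb(n, 4))   #                  = 1,997 x 1,998 x 1,999 x 2,000 / (4 x 3 x 2)
```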

  125 We are assuming the logarithm is “base 10,” although any other base can be used. The “logarithm base 2” of 8 = 2³ is 3; the logarithm base 2 of 2,048 = 2¹¹ is 11. See Appendix for fascinating details.
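
  The same arithmetic, checked with the standard math module:

```python
import math

print(math.log2(8))       # 3.0   (8 = 2**3)
print(math.log2(2048))    # 11.0  (2,048 = 2**11)
print(math.log10(1000))   # 3.0   (the base-10 logarithm used in the text)
```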

  126 The numerical value of k is about 3.2 × 10⁻¹⁶ ergs per Kelvin; an erg is a measure of energy, while Kelvin of course measures temperature. (That’s not the value you will find in most references; this is because we are using base-10 logarithms, while the formula is more often written using natural logarithms.) When we say “temperature measures the average energy of moving molecules in a substance,” what we mean is “the average energy per degree of freedom is one-half times the temperature times Boltzmann’s constant.”
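
  The conversion mentioned in the parenthetical can be checked directly; the sketch below starts from the conventional (natural-logarithm) value of Boltzmann’s constant, which is a standard number, and multiplies by ln 10.

```python
import math

# Conventional Boltzmann constant (natural-log convention), in erg per Kelvin.
k_natural_log = 1.380649e-16

# The text uses base-10 logarithms in S = k log W, so its k absorbs a factor of ln(10).
k_base_10 = k_natural_log * math.log(10)
print(f"{k_base_10:.2e} erg/K")   # about 3.2e-16, matching the value quoted above
```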

  127 The actual history of physics is so much messier than the beauty of the underlying concepts. Boltzmann came up with the idea of “S = k log W,” but those are not the symbols he would have used. His equation was put into that form by Max Planck, who suggested that it be engraved on Boltzmann’s tomb; it was Planck who first introduced what we now call “Boltzmann’s constant.” To make things worse, the equation on the tomb is not what is usually called “Boltzmann’s equation”—that’s a different equation discovered by Boltzmann, governing the evolution of a distribution of a large number of particles through the space of states.

  128 One requirement of making sense of this definition is that we actually know how to count the different kinds of microstates, so we can quantify how many of them belong to various macrostates. That sounds easy enough when the microstates form a discrete set (like distributions of particles in one half of a box or the other half) but becomes trickier when the space of states is continuous (like real molecules with specific positions and momenta, or almost any other realistic situation). Fortunately, within the two major frameworks for dynamics—classical mechanics and quantum mechanics—there is a perfectly well-defined “measure” on the space of states, which allows us to calculate the quantity W, at least in principle. In some particular examples, our understanding of the space of states might get a little murky, in which case we need to be careful.

  129 Feynman (1964), 119-20.

  130 I know what you’re thinking. “I don’t know about you, but when I dry myself off, most of the water goes onto the towel; it’s not fifty-fifty.” That’s true, but the reason is that the fiber structure of a nice fluffy towel provides many more places for the water to be than your smooth skin does. That’s also why your hair doesn’t dry as efficiently, and why you can’t dry yourself very well with pieces of paper.

  131 At least in certain circumstances, but not always. Imagine we had a box of gas, where every molecule on the left side was “yellow” and every molecule on the right was “green,” although they were otherwise identical. The entropy of that arrangement would be pretty low and would tend to go up dramatically if we allowed the two colors to mix. But we couldn’t get any useful work out of it.

  132 The ubiquity of friction and noise in the real world is, of course, due to the Second Law. When two billiard balls smack into each other, there are only a very small number of ways that all the molecules in each ball could respond precisely so as to bounce off each other without disturbing the outside world in any way; there are a much larger number of ways that those molecules can interact gently with the air around them to create the noise of the two balls colliding. All of the guises of dissipation in our everyday lives—friction, air resistance, noise, and so on—are manifestations of the tendency of entropy to increase.

  133 Thought of yet another way: The next time you are tempted to play the Powerball lottery, where you pick five numbers between 1 and 59 and hope that they come up in a random drawing, pick the numbers “1, 2, 3, 4, 5.” That sequence is precisely as likely as any other “random-looking” sequence. (Of course, a nationwide outcry would ensue if you won, as people would suspect that someone had rigged the drawing. So you’d probably never collect, even if you got lucky.)
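
  The arithmetic behind “precisely as likely as any other” is straightforward (this ignores the separate Powerball number, as the note does; math.comb is standard Python).

```python
from math import comb

# Number of ways to choose 5 distinct numbers out of 59 (order doesn't matter).
total = comb(59, 5)
print(total)        # 5,006,386

# The chance of any *specific* set of five numbers coming up is the same,
# whether it "looks random" or is simply {1, 2, 3, 4, 5}.
print(1 / total)    # about 2.0e-7 for {1, 2, 3, 4, 5} and for any other set
```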

  134 Strictly speaking, since there are an infinite number of possible positions and an infinite number of possible momenta for each particle, the number of microstates per macrostate is also infinite. But the possible positions and momenta for a particle on the left side of the box can be put into one-to-one correspondence with the possible positions and momenta on the right side; even though both are infinite, they’re “the same infinity.” So it’s perfectly legitimate to say that there are an equal number of possible states per particle on each side of the box. What we’re really doing is counting “the volume of the space of states” corresponding to a particular macrostate.

  135 To expand on that a little bit, at the risk of getting hopelessly abstract: As an alternative to averaging within a small region of space, we could imagine averaging over a small region in momentum space. That is, we could talk about the average position of particles with a certain value of momentum, rather than vice versa. But that’s kind of crazy; that information simply isn’t accessible via macroscopic observation. That’s because, in the real world, particles tend to interact (bump into one another) when they are nearby in space, but nothing special happens when two distant particles have the same momentum. Two particles that are close to each other in position can interact, no matter what their relative velocities are, but the converse is not true. (Two particles that are separated by a few light years aren’t going to interact noticeably, no matter what their momentum is.) So the laws of physics pick out “measuring average properties within a small region of space” as a sensible thing to do.

  136 A related argument has been given by mathematician Norbert Wiener in Cybernetics (1961), 34.

  137 There is a loophole. Instead of starting with a system that had delicately tuned initial conditions for which the entropy would decrease, and then letting it interact with the outside world, we could just ask the following question: “Given that this system will go about interacting with the outside world, what state do I need to put it in right now so that its entropy will decrease in the future?” That kind of future boundary condition is not inconceivable, but it’s a little different than what we have in mind here. In that case, what we have is not some autonomous system with a naturally reversed arrow of time, but a conspiracy among every particle in the universe to permit some subsystem to decrease in entropy. That subsystem would not look like the time-reverse of an ordinary object in the universe; it would look like the rest of the world was conspiring to nudge it into a low-entropy state.

  138 Note the caveat “at room temperature.” At a sufficiently high temperature, the velocity of the individual molecules is so high that the water doesn’t stick to the oil, and once again a fully mixed configuration has the highest entropy. (At that temperature the mixture will be vapor.) In the messy real world, statistical mechanics is complicated and should be left to professionals.

  139 Here is the formula: For each possible microstate x, let p(x) be the probability that the system is in that microstate. The entropy is then the sum over all possible microstates x of the quantity -k p(x) log p(x), where k is Boltzmann’s constant.
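
  A minimal sketch of that formula in Python (the function name and the example probabilities are mine, and k is set to 1 for simplicity); note that when all W microstates are equally probable the formula reduces to Boltzmann’s S = k log W.

```python
import math

def entropy(probs, k=1.0):
    """S = -k * sum over x of p(x) * log10(p(x)), using the base-10
    logarithm to match the convention adopted in the book."""
    return -k * sum(p * math.log10(p) for p in probs if p > 0)

# If all W microstates are equally likely, p(x) = 1/W and S = k log W.
W = 1000
uniform = [1.0 / W] * W
print(entropy(uniform))   # 3.0, i.e. log10(1000)
print(math.log10(W))      # 3.0, the same answer

# A lopsided distribution over the same microstates has lower entropy.
print(entropy([0.9] + [0.1 / (W - 1)] * (W - 1)))   # about 0.44
```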

  140 Boltzmann actually calculated a quantity H, which is essentially the difference between the maximum entropy and the actual entropy, thus the name of the theorem. But that name was attached to the theorem only later on, and in fact Boltzmann himself didn’t even use the letter H; he called it E, which is even more confusing. Boltzmann’s original paper on the H-Theorem was 1872; an updated version, taking into account some of the criticisms by Loschmidt and others, was 1877. We aren’t coming close to doing justice to the fascinating historical development of these ideas; for various different points of view, see von Baeyer (1998), Lindley (2001), and Cercignani (1998); at a more technical level, see Uffink (2004) and Brush (2003). Any Yale graduates, in particular, will lament the short shrift given to the contributions of Gibbs; see Rukeyser (1942) to redress the balance.

  141 Note that Loschmidt is not saying that there are equal numbers of increasing-entropy and decreasing-entropy evolutions that start with the same initial conditions. When we consider time reversal, we switch the initial conditions with the final conditions; all Loschmidt is pointing out is that there are equal numbers of increasing-entropy and decreasing-entropy evolutions overall, when we consider every possible initial condition. If we confine our attention to the set of low-entropy initial conditions, we can successfully argue that entropy will usually increase; but note that we have sneaked in time asymmetry by starting with low-entropy initial conditions rather than final ones.

  142 Albert (2000); see also (among many examples) Price (2004). Although I have presented the need for a Past Hypothesis as (hopefully) perfectly obvious, its status is not uncontroversial. For a dash of skepticism, see Callender (2004) or Earman (2006).

  143 Readers who have studied some statistical mechanics may wonder why they don’t recall actually doing this. The answer is simply that it doesn’t matter, as long as we are trying to make predictions about the future. If we use statistical mechanics to predict the future behavior of a system, the predictions we get based on the Principle of Indifference plus the Past Hypothesis are indistinguishable from those we would get from the Principle of Indifference alone. As long as there is no assumption of any special future boundary condition, all is well.

  9. INFORMATION AND LIFE

  144 Quoted in Tribus and McIrvine (1971).

  145 Proust (2004), 47.

  146 We are, however, learning more and more all the time. See Schacter, Addis, and Buckner (2007) for a recent review of advances in neuroscience that have revealed how the way actual brains reconstruct memories is surprisingly similar to the way they go about imagining the future.

  147 Albert (2000).

  148 Rowling (2005).

  149 Callender (2004). In Callender’s version, it’s not that you die; it’s that the universe ends, but I didn’t want to get confused with Big Crunch scenarios. But really, it would be nice to see more thought experiments in which the future boundary condition was “you fall in love” or “you win the lottery.”

  150 Davis (1985, 11) writes: “I will lay out four rules, but each is really only a special application of the great principle of causal order: after cannot cause before . . . there is no way to change the past . . . one-way arrows flow with time.”

  151 There are a number of references that go into the story of Maxwell’s Demon in greater detail than we will here. Leff and Rex (2003) collect a number of the original papers. Von Baeyer (1998) uses the Demon as a theme to trace the history of thermodynamics; Seife (2006) gives an excellent introduction to information theory and its role in unraveling this puzzle. Bennett and Landauer themselves wrote about their work in Scientific American (Bennett and Landauer, 1985; Bennett, 1987).

 
