From Eternity to Here: The Quest for the Ultimate Theory of Time

by Sean M. Carroll


  From the point of view of atoms, this all makes sense. Consider the classic example of two objects at different temperatures in contact with each other: an ice cube in a glass of warm water, discussed at the end of the previous chapter. Both the ice cube and the liquid are made of precisely the same kind of molecules, namely H2O. The only difference is that the ice is at a much lower temperature. Temperature, as we have discussed, measures the average energy of motion in the molecules of a substance. So while the molecules of the liquid water are moving relatively quickly, the molecules in the ice are moving slowly.

  But that kind of condition—one set of molecules moving quickly, another moving slowly—isn’t all that different, conceptually, from two sets of molecules confined to different sides of a box. In either case, there is a broad-brush limitation on how we can rearrange things. If we had just a glass of nothing but water at a constant temperature, we could exchange the molecules in one part of the glass with molecules in some other part, and there would be no macroscopic way to tell the difference. But when we have an ice cube, we can’t simply exchange the molecules in the cube for some water molecules elsewhere in the glass—the ice cube would move, and we would certainly notice that even from our everyday macroscopic perspective. The division of the water molecules into “liquid” and “ice” puts a serious constraint on the number of rearrangements we can do, so that configuration has a low entropy. As the temperature of the water molecules that started out as ice equilibrates with that of the rest of the glass, the entropy goes up. Clausius’s rule that temperatures tend to even themselves out, rather than heat spontaneously flowing from cold to hot, is precisely equivalent to the statement that the entropy as defined by Boltzmann never decreases in a closed system.

  None of this means that it’s impossible to cool things down, of course. But in everyday life, where most things around us are at similar temperatures, it takes a bit more ingenuity than heating them up. A refrigerator is a more complicated machine than a stove. (Refrigerators work on the same basic principle as the piston in Figure 44, expanding a gas to extract energy and cool it off.) When Grant Achatz, chef of Chicago’s Alinea restaurant, wanted a device that would rapidly freeze food in the same way a frying pan rapidly heats food up, he had to team with culinary technologist Philip Preston to create their own. The result is the “anti-griddle,” a microwave-oven-sized machine with a metallic top that attains a temperature of -34 degrees Celsius. Hot purees and sauces, poured on the anti-griddle, rapidly freeze on the bottom while remaining soft on the top. We have understood the basics of thermodynamics for a long time now, but we’re still inventing new ways to put them to good use.

  DON’T SWEAT THE DETAILS

  You’re out one Friday night playing pool with your friends. We’re talking about real-world pool now, not “physicist pool” where we can ignore friction and noise.132 One of your pals has just made an impressive break, and the balls have scattered thoroughly across the table. As they come to a stop and you’re contemplating your next shot, a stranger walks by and exclaims, “Wow! That’s incredible!”

  Somewhat confused, you ask what is so incredible about it. “Look at these balls at those exact positions on the table! What are the chances that you’d be able to put all the balls in precisely those spots? You’d never be able to repeat that in a million years!”

  The mysterious stranger is a bit crazy—probably driven slightly mad by reading too many philosophical tracts on the foundations of statistical mechanics. But she does have a point. With several balls on the table, any particular configuration of them is extremely unlikely. Think of it this way: If you hit the cue ball into a bunch of randomly placed balls, which rattled around before coming to rest in a perfect arrangement as if they had just been racked, you’d be astonished. But that particular arrangement (all balls perfectly arrayed in the starting position) is no more or less unusual than any other precise arrangement of the balls.133 What right do we have to single out certain configurations of the billiard balls as “astonishing” or “unlikely,” while others seem “unremarkable” or “random”?

  This example pinpoints a question at the heart of Boltzmann’s definition of entropy and the associated understanding of the Second Law of Thermodynamics: Who decides when two specific microscopic states of a system look the same from our macroscopic point of view?

  Boltzmann’s formula for entropy hinges on the idea of the quantity W, which we defined as “the number of ways we can rearrange the microscopic constituents of a system without changing its macroscopic appearance.” In the last chapter we defined the “state” of a physical system to be a complete specification of all the information required to uniquely evolve it in time; in classical mechanics, it would be the position and momentum of every single constituent particle. Now that we are considering statistical mechanics, it’s useful to use the term microstate to refer to the precise state of a system, in contrast with the macrostate, which specifies only those features that are macroscopically observable. Then the shorthand definition of W is “the number of microstates corresponding to a particular macrostate.”
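  For reference, Boltzmann’s formula relating the two quantities can be written compactly as

  S = k log W,

where S is the entropy, k is Boltzmann’s constant, and the logarithm is what turns the astronomically large number W into a manageable quantity.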

  For the box of gas separated in two by a divider, the microstate at any one time is the position and momentum of every single molecule in the box. But all we were keeping track of was how many molecules were on the left, and how many were on the right. Implicitly, every division of the molecules into a certain number on the left and a certain number on the right defined a “macrostate” for the box. And our calculation of W simply counted the number of microstates per macrostate.134
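  To make the counting concrete, here is a minimal sketch of that calculation (in Python; this is not from the book, and the 2,000-molecule numbers and function name are purely illustrative). It assumes the only macroscopic information we keep is how many molecules sit in the left half of the box:

```python
from math import comb, log

def entropy_of_macrostate(n_total, n_left):
    """Entropy (in units where Boltzmann's constant k = 1) of the macrostate
    "n_left of the n_total molecules are in the left half of the box."
    W counts the microstates compatible with that macrostate: the number of
    ways to choose which particular molecules end up on the left."""
    W = comb(n_total, n_left)   # microstates per macrostate
    return log(W)               # Boltzmann: S = k log W, with k set to 1

# An evenly divided box has vastly more microstates (and hence higher entropy)
# than a lopsided one in which every molecule is crowded onto the left side.
print(entropy_of_macrostate(2000, 1000))  # roughly 1382
print(entropy_of_macrostate(2000, 2000))  # 0.0, since W = 1
```

The only point of the sketch is that once a coarse-graining has been chosen, W—and therefore the entropy—is a pure counting exercise.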

  The choice to just keep track of how many molecules were in each half of the box seemed so innocent at the time. But we could imagine keeping track of much more. Indeed, when we deal with the atmosphere in an actual room, we keep track of a lot more than simply how many molecules are on each side of the room. We might, for example, keep track of the temperature, and density, and pressure of the atmosphere at every point, or at least at some finite number of places. If there were more than one kind of gas in the atmosphere, we might separately keep track of the density and so on for each different kind of gas. That’s still enormously less information than the position and momentum of every molecule in the room, but the choice of which information to “keep” as a macroscopically measurable quantity and which information to “forget” as an irrelevant part of the microstate doesn’t seem to be particularly well defined.

  The process of dividing up the space of microstates of some particular physical system (gas in a box, a glass of water, the universe) into sets that we label “macroscopically indistinguishable” is known as coarse-graining. It’s a little bit of black magic that plays a crucial role in the way we think about entropy. In Figure 45 we’ve portrayed how coarse-graining works; it simply divides up the space of all states of a system into regions (macrostates) that are indistinguishable by macroscopic observations. Every point within one of those regions corresponds to a different microstate, and the entropy associated with a given microstate is proportional to the logarithm of the area (or really volume, as it’s a very high-dimensional space) of the macrostate to which it belongs. This kind of figure makes it especially clear why entropy tends to go up: Starting from a state with low entropy, corresponding to a very tiny part of the space of states, it’s only to be expected that an ordinary system will tend to evolve to states that are located in one of the large-volume, high-entropy regions.

  Figure 45 is not to scale; in a real example, the low-entropy macrostates would be much smaller compared to the high-entropy macrostates. As we saw with the divided-box example, the number of microstates corresponding to high-entropy macrostates is enormously larger than the number associated with low-entropy macrostates. Starting with low entropy, it’s certainly no surprise that a system should wander into the roomier high-entropy parts of the space of states; but starting with high entropy, a typical system can wander for a very long time without ever bumping into a low-entropy condition. That’s what equilibrium is like; it’s not that the microstate is truly static, but that it never leaves the high-entropy macrostate it’s in.

  Figure 45: The process of coarse-graining consists of dividing up the space of all possible microstates into regions considered to be macroscopically indistinguishable, which are called macrostates. Each macrostate has an associated entropy, proportional to the logarithm of the volume it takes up in the space of states. The size of the low-entropy regions is exaggerated for clarity; in reality, they are fantastically smaller than the high-entropy regions.

  This whole business should strike you as just a little bit funny. Two microstates belong to the same macrostate when they are macroscopically indistinguishable. But that’s just a fancy way of saying, “when we can’t tell the difference between them on the basis of macroscopic observations.” It’s the appearance of “we” in that statement that should make you nervous. Why should our powers of observation be involved in any way at all? We like to think of entropy as a feature of the world, not as a feature of our ability to perceive the world. Two glasses of water are in the same macrostate if they have the same temperature throughout the glass, even if the exact distribution of positions and momenta of the water molecules is different, because we can’t directly measure all of that information. But what if we ran across a race of superobservant aliens who could peer into a glass of liquid and observe the position and momentum of every molecule? Would such a race think that there was no such thing as entropy?

  There are several different answers to these questions, none of which is found satisfactory by everyone working in the field of statistical mechanics. (If any of them were, you would need only that one answer.) Let’s look at two of them.

  The first answer is, it really doesn’t matter. That is, it might matter a lot to you how you bundle up microstates into macrostates for the purposes of the particular physical situation in front of you, but it ultimately doesn’t matter if all we want to do is argue for the validity of something like the Second Law. From Figure 45, it’s clear why the Second Law should hold: There is a lot more room corresponding to high-entropy states than to low-entropy ones, so if we start in the latter it is natural to wander into the former. But that will hold true no matter how we actually do the coarse-graining. The Second Law is robust; it depends on the definition of entropy as the logarithm of a volume within the space of states, but not on the precise way in which we choose that volume. Nevertheless, in practice we do make certain choices and not others, so this transparent attempt to avoid the issue is not completely satisfying.

  The second answer is that the choice of how to coarse-grain is not completely arbitrary and socially constructed, even if some amount of human choice does come into the matter. The fact is, we coarse-grain in ways that seem physically natural, not just chosen at whim. For example, when we keep track of the temperature and pressure in a glass of water, what we’re really doing is throwing away all information that we could measure only by looking through a microscope. We’re looking at average properties within relatively small regions of space because that’s what our senses are actually able to do. Once we choose to do that, we are left with a fairly well-defined set of macroscopically observable quantities.

  Averaging within small regions of space isn’t a procedure that we hit upon randomly, nor is it a peculiarity of our human senses as opposed to the senses of a hypothetical alien; it’s a very natural thing, given how the laws of physics work.135 When I look at cups of coffee and distinguish between cases where a teaspoon of milk has just been added and ones where the milk has become thoroughly mixed, I’m not pulling a random coarse-graining of the states of the coffee out of my hat; that’s how the coffee looks to me, immediately and phenomenologically. So even though in principle our choice of how to coarse-grain microstates into macrostates seems absolutely arbitrary, in practice Nature hands us a very sensible way to do it.

  RUNNING ENTROPY BACKWARD

  A remarkable consequence of Boltzmann’s statistical definition of entropy is that the Second Law is not absolute—it just describes behavior that is overwhelmingly likely. If we start with a medium-entropy macrostate, almost all microstates within it will evolve toward higher entropy in the future, but a small number will actually evolve toward lower entropy.

  It’s easy to construct an explicit example. Consider a box of gas, in which the gas molecules all happened to be bunched together in the middle of the box in a low-entropy configuration. If we just let it evolve, the molecules will move around, colliding with one another and with the walls of the box, and ending up (with overwhelming probability) in a much higher-entropy configuration.

  Now consider a particular microstate of the above box of gas at some moment after it has become high-entropy. From there, construct a new state by keeping all of the molecules at exactly the same positions, but precisely reversing all of the velocities. The resulting state still has a high entropy—it’s contained within the same macrostate as we started with. (If someone suddenly reversed the direction of motion of every single molecule of air around you, you’d never notice; on average there are equal numbers moving in every direction.) Starting in this state, the motion of the molecules will exactly retrace the path that they took from the previous low-entropy state. To an external observer, it will look as if the entropy is spontaneously decreasing. The fraction of high-entropy states that have this peculiar property is astronomically small, but they certainly exist.

  Figure 46: On the top row, ordinary evolution of molecules in a box from a low-entropy initial state to a high-entropy final state. At the bottom, we carefully reverse the momentum of every particle in the final state from the top, to obtain a time-reversed evolution in which entropy decreases.
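  The construction can be mimicked in a toy numerical model. The following sketch (Python; not from the book, with the particle count, box size, and time step chosen arbitrarily) evolves non-interacting molecules bouncing elastically off the walls of a one-dimensional box, then flips every velocity and evolves again, recovering the original bunched-up configuration. Collisions between molecules are ignored here; only wall bounces are modeled, which is enough to illustrate the reversal:

```python
import numpy as np

rng = np.random.default_rng(0)
L, dt, steps = 1.0, 0.01, 500   # box size, time step, number of steps (illustrative)

# Low-entropy initial microstate: 100 molecules bunched near the middle of the
# box, with random velocities.
x0 = 0.5 + 0.05 * rng.standard_normal(100)
v0 = rng.standard_normal(100)

def step(x, v):
    """Free flight for one time step, with elastic reflection off the two walls."""
    x = x + v * dt
    right = x > L
    x[right] = 2 * L - x[right]
    v[right] *= -1
    left = x < 0
    x[left] = -x[left]
    v[left] *= -1
    return x, v

# Forward evolution: the molecules spread out and fill the box (entropy goes up).
x, v = x0.copy(), v0.copy()
for _ in range(steps):
    x, v = step(x, v)

# Reverse every velocity and evolve again: the motion retraces itself, and the
# molecules re-bunch into the original low-entropy configuration (entropy goes down).
v = -v
for _ in range(steps):
    x, v = step(x, v)

print(np.allclose(x, x0))  # True, up to floating-point round-off
```

Run as written, the final check prints True: the reversed evolution lands back on the initial bunched-up state, just as in Figure 46.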

  We could even imagine an entire universe that was like that, if we believe that the fundamental laws are reversible. Take our universe today: It is described by some particular microstate, which we don’t know, although we know something about the macrostate to which it belongs. Now simply reverse the momentum of every single particle in the universe and, moreover, do whatever extra transformations (changing particles to antiparticles, for example) are needed to maintain the integrity of time reversal. Then let it go. What we would see would be an evolution toward the “future” in which the universe collapsed, stars and planets unformed, and entropy generally decreased all around; it would just be the history of our actual universe played backward in time.

  However—the thought experiment of an entire universe with a reversed arrow of time is much less interesting than that of some subsystem of the universe with a reversed arrow. The reason is simple: Nobody would ever notice.

  In Chapter One we asked what it would be like if time passed more quickly or more slowly. The crucial question there was: Compared to what? The idea that “time suddenly moves more quickly for everyone in the world” isn’t operationally meaningful; we measure time by synchronized repetition, and as long as clocks of all sorts (including biological clocks and the clocks defined by subatomic processes) remain properly synchronized, there’s no way you could tell that the “rate of time” was in any way different. It’s only if some particular clock speeds up or slows down compared to other clocks that the concept makes any sense.

  Exactly the same problem is attached to the idea of “time running backward.” When we visualize time going backward, we might imagine some part of the universe running in reverse, like an ice cube spontaneously forming out of a cool glass of water. But if the whole thing ran in reverse, it would be precisely the same as it appears now. It would be no different than running the universe forward in time, but choosing some perverse time coordinate that ran in the opposite direction.

  The arrow of time isn’t a consequence of the fact that “entropy increases to the future”; it’s a consequence of the fact that “entropy is very different in one direction of time than the other.” If there were some other part of the universe, which didn’t interact with us in any way, where entropy decreased toward what we now call the future, the people living in that reversed-time world wouldn’t notice anything out of the ordinary. They would experience an ordinary arrow of time and claim that entropy was lower in their past (the time of which they have memories) and grew to the future. The difference is that what they mean by “the future” is what we call “the past,” and vice versa. The direction of the time coordinate on the universe is completely arbitrary, set by convention; it has no external meaning. The convention we happen to prefer is that “time” increases in the direction that entropy increases. The important thing is that entropy increases in the same temporal direction for everyone within the observable universe, so that they can agree on the direction of the arrow of time.

  Of course, everything changes if two people (or other subsets of the physical universe) who can actually communicate and interact with each other disagree on the direction of the arrow of time. Is it possible for my arrow of time to point in a different direction than yours?

  THE DECONSTRUCTION OF BENJAMIN BUTTON

  We opened Chapter Two with a few examples of incompatible arrows of time in literature—stories featuring some person or thing that seemed to experience time backward. The homunculus narrator of Time’s Arrow remembered the future but not the past; the White Queen experienced pain just before she pricked her finger; and the protagonist of F. Scott Fitzgerald’s “The Curious Case of Benjamin Button” grew physically younger as time passed, although his memories and experiences accumulated in the normal way. We now have the tools to explain why none of those things happen in the real world.

 
