From Eternity to Here: The Quest for the Ultimate Theory of Time

by Sean M. Carroll


  The setup is simple: the same kind of box of gas divided into two sides that we’re very familiar with by now. But instead of a small opening that randomly lets molecules pass back and forth, there’s a small opening with a very tiny door—one that can be opened and closed without exerting a noticeable amount of energy. At the door sits a Demon, who monitors all of the molecules on either side of the box. If a fast-moving molecule approaches from the right, the Demon lets it through to the left side of the box; if a slow-moving molecule approaches from the left, the Demon lets it through to the right. But if a slow-moving molecule approaches from the right, or a fast-moving one from the left, the Demon shuts the door so they stay on the side they’re on.
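
  The Demon’s decision rule is simple enough to capture in a toy simulation. The sketch below is purely illustrative and not from the book: molecule speeds stand in for temperature, “fast” means faster than the median speed of the whole gas, and at each step a randomly chosen molecule is assumed to approach the door.

```python
# Toy Monte Carlo sketch of the Demon's sorting rule (illustrative assumptions only).
import random

random.seed(0)

N = 2000
# Both sides start with the same speed distribution (equal "temperature").
left = [random.expovariate(1.0) for _ in range(N)]
right = [random.expovariate(1.0) for _ in range(N)]
threshold = sorted(left + right)[N]  # median speed of the whole gas

def mean(xs):
    return sum(xs) / len(xs)

for _ in range(100_000):
    side = random.choice(("left", "right"))      # a molecule approaches the door
    if side == "right" and right:
        i = random.randrange(len(right))
        if right[i] > threshold:                 # fast molecule from the right:
            left.append(right.pop(i))            # the Demon opens the door
    elif side == "left" and left:
        i = random.randrange(len(left))
        if left[i] < threshold:                  # slow molecule from the left:
            right.append(left.pop(i))            # the Demon opens the door
    # every other molecule finds the door closed

print(f"average speed, left side:  {mean(left):.2f}")   # climbs well above 1.0
print(f"average speed, right side: {mean(right):.2f}")  # falls well below 1.0
```

  Nothing in the loop spends energy on the molecules themselves; the sorting is done entirely by the Demon’s decisions, which is exactly what makes the example so troubling.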

  It’s clear what will happen: Gradually, and without any energy being exerted, the high-energy molecules will accumulate on the left, and the low-energy ones on the right. If the temperatures on both sides of the box started out equal, they will gradually diverge—the left will get hotter, and the right will get cooler. But that’s in direct violation of Clausius’s formulation of the Second Law. What’s going on?

  If we started in a high-entropy state, with the gas at equal temperature throughout the box, and we evolve reliably (for any beginning state, not just some finely tuned ones) into a lower-entropy state, then a large number of initial states have all been funneled into a smaller number of final states. But that simply can’t happen, if the dynamical laws are information conserving and reversible. There’s no room for all of those initial states to be squeezed into the smaller number of final states. So clearly there has to be a compensating increase in entropy somewhere, if the entropy in the gas goes down. And there’s only one place that entropy could go: into the Demon.

  Figure 49: By letting high-energy molecules move from the right half of the box to the left, and slow-moving molecules move from the left to the right, Maxwell’s Demon lets heat flow from a cold system to a hotter one, in apparent violation of the Second Law.

  The question is, how does that work? It doesn’t look like the Demon increased in entropy; at the start of the experiment it’s sitting there peacefully, waiting for the right molecules to come along, and at the end of the experiment it’s still sitting there, just as peacefully. The embarrassing fact is that it took a long time—more than a century—for scientists to really figure out the right way to think about this problem. Hungarian-American physicist Leó Szilárd and French physicist Léon Brillouin—both of whom were pioneers in applying the new science of quantum mechanics to problems of practical interest—helped pinpoint the crucial relationship between the information gathered by the Demon and its entropy. But it wasn’t until the contributions of two different physicist/computer scientists who worked for IBM, Rolf Landauer in 1961 and Charles Bennett in 1982, that it finally became clear why exactly the Demon’s entropy must always increase in accordance with the Second Law.151

  RECORDING AND ERASING

  Many attempts to understand Maxwell’s Demon concentrated on the means by which it measured the velocities of the molecules zooming around its vicinity. One of the big conceptual leaps of Landauer and Bennett was to focus on the means by which the Demon recorded that information. After all, the Demon has to remember—even if just for a microsecond—which molecules to let by, and which to keep on their original sides. Indeed, if the Demon simply knew from the start which molecules had which velocities, it wouldn’t have to do any measurements at all; so the crux of the problem can’t be in the measurement process.

  So we have to equip the Demon with some way to record the velocities of all the molecules—perhaps it carries around a notepad, which for convenience we can imagine has just enough room to record all of the relevant information. (Nothing changes if we consider larger or smaller pads, as long as the pad is not infinitely big.) That means that the state of the notepad must be included when we calculate the entropy of the combined gas/Demon system. In particular, the notepad must start out blank, in order to be ready to record the velocities of the molecules.

  But a blank notepad is, of course, nothing other than a low-entropy past boundary condition. It’s just the Maxwell’s Demon version of the Past Hypothesis, sneaked in under another guise. If that’s the case, the entropy of the combined gas/Demon system clearly wasn’t as high as it could have been. The Demon doesn’t lower the entropy of the combined system; it simply transfers the entropy from the state of the gas to the state of the notepad.

  You might be suspicious of this argument. After all, you might think, can’t the Demon just erase the notepad when all is said and done? And wouldn’t that return the notepad to its original state, while the gas went down in entropy?

  This is the crucial insight of Landauer and Bennett: No, you can’t just erase the notepad. At least, you can’t erase information if you are part of a closed system operating under reversible dynamical laws. When phrased that way, the result is pretty believable: If you were able to erase the information entirely, how would you ever be able to reverse the evolution to its previous state? If erasure is possible, either the fundamental laws are irreversible—in which case it’s not at all surprising that the Demon can lower the entropy—or you’re not really in a closed system. The act of erasing information necessarily transfers entropy to the outside world. (In the case of real-world erasing of actual pencil markings, this entropy comes mostly in the form of heat, dust, and tiny flecks of rubber.)
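
  Landauer’s principle makes the entropy cost of erasure quantitative: wiping out one bit of information must dump at least kT ln 2 of heat into the surroundings, raising their entropy by at least k ln 2. The figures below are a back-of-the-envelope sketch using standard constants and an assumed room temperature, not numbers taken from the book.

```python
# Landauer bound: minimum thermodynamic cost of erasing one bit (rough sketch).
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0            # assumed room temperature, K

heat_per_bit = k_B * T * math.log(2)   # minimum heat dumped into the environment, J
entropy_per_bit = k_B * math.log(2)    # minimum entropy transferred, J/K

print(f"minimum heat to erase one bit at {T:.0f} K: {heat_per_bit:.2e} J")
print(f"minimum entropy handed to the environment:  {entropy_per_bit:.2e} J/K")
```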

  So you have two choices. Either the Demon starts with a blank low-entropy notepad, in a demonic version of the Past Hypothesis, and simply transfers entropy from the gas to the notepad; or the Demon needs to erase information from the notepad, in which case entropy gets transferred to the outside world. In either case, the Second Law is safe. But along the way, we’ve opened the door to the fascinating connection between information and entropy.

  INFORMATION IS PHYSICAL

  Even though we’ve tossed around the word information a lot in discussing dynamical laws of physics—reversible laws conserve information—the concept still seems a bit abstract compared to the messy world of energy and heat and entropy. One of the lessons of Maxwell’s Demon is that this is an illusion: Information is physical. More concretely, possessing information allows us to extract useful work from a system in ways that would have otherwise been impossible.

  Leó Szilárd showed this explicitly in a simplified model of Maxwell’s Demon. Imagine that our box of gas contained just a single molecule; the “temperature” would just be the energy of that one gas molecule. If that’s all we know, there’s no way to use that molecule to do useful work; the molecule just rattles around like a pebble in a can. But now imagine that we have a single bit of information: whether the molecule is on the left side of the box or the right. With that, plus some clever thought-experiment-level manipulation, we can use the molecule to do work. All we have to do is quickly insert a piston into the other half of the box. The molecule will bump into it, pushing the piston, and we can use the external motion to do something useful, like turn a flywheel.152
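
  In symbols, that single bit of “left or right” is worth kT ln 2 of work: with the piston in place, the one-molecule gas expands isothermally from half the box to the whole box. The sketch below simply evaluates that standard ideal-gas result at an assumed room temperature; the specific numbers are illustrative, not the book’s.

```python
# Work extractable in Szilard's one-molecule engine (illustrative numbers).
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0            # assumed room temperature, K

# Isothermal expansion of a one-molecule ideal gas from V/2 to V:
#   W = k_B * T * ln(V_final / V_initial) = k_B * T * ln 2
work = k_B * T * math.log(2)
print(f"work extracted from one bit of position information: {work:.2e} J")
```

  Not coincidentally, this is exactly the Landauer cost of erasing one bit, which is why the Demon can never come out ahead.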

  Note the crucial role played by information in Szilárd’s setup. If we didn’t know which half of the box the molecule was in, we wouldn’t know where to insert the piston. If we inserted it randomly, half the time it would be pushed out and half the time it would be pulled in; on average, we wouldn’t be getting any useful work at all. The information in our possession allowed us to extract energy from what appeared to be a system at maximal entropy.

  To be clear: In the final analysis, none of these thought experiments are letting us violate the Second Law. Rather, they provide ways that we could appear to violate the Second Law, if we didn’t properly account for the crucial role played by information. The information collected and processed by the Demon must somehow be accounted for in any consistent story of entropy.

  The concrete relationship between entropy and information was developed in the 1940s by Claude Shannon, an engineer/mathematician working for Bell Labs.153 Shannon was interested in finding efficient and reliable ways of sending signals across noisy channels. He had the idea that some messages carry more effective information than others, simply because the message is more “surprising” or unexpected. If I tell you that the Sun is going to rise in the East tomorrow morning, I’m not actually conveying much information, because you already expected that was going to happen. But if I tell you the peak temperature tomorrow is going to be exactly 25 degrees Celsius, my message contains more information, because without the message you wouldn’t have known precisely what temperature to expect.

  Shannon figured out how to formalize this intuitive idea of the effective information content of a message. Imagine that we consider the set of all possible messages we could receive of a certain type. (This should remind you of the “space of states” we considered when talking about physical systems rather than messages.) For example, if we are being told the outcome of a coin flip, there are only two possible messages: “heads” or “tails.” Before we get the message, either alternative is equally likely; after we get the message, we have learned precisely one bit of information.

  If, on the other hand, we are being told what the high temperature will be tomorrow, there are a large number of possible messages: say, any integer between -273 and plus infinity, representing the temperature in degrees Celsius. (Minus 273 degrees Celsius is absolute zero.) But not all of those are equally likely. If it’s summer in Los Angeles, temperatures of 27 or 28 degrees Celsius are fairly common, while temperatures of -13 or +4,324 degrees Celsius are comparatively rare. Learning that the temperature tomorrow would be one of those unlikely numbers would convey a great deal of information indeed (presumably related to some global catastrophe).

  Roughly speaking, then, the information content of a message goes up as the probability of a given message taking that form goes down. But Shannon wanted to be a little bit more precise than that. In particular, he wanted it to be the case that if we receive two messages that are completely independent of each other, the total information we get is equal to the sum of the information contained in each individual message. (Recall that, when Boltzmann was inventing his entropy formula, one of the properties he wanted to reproduce was that the entropy of a combined system was the sum of the entropies of the individual systems.) After some playing around, Shannon figured out that the right thing to do was to take the logarithm of the probability of receiving a given message. His final result is this: The “self-information” contained in a message is equal to minus the logarithm of the probability that the message would take that particular form.
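
  A minimal sketch of that definition, with made-up probabilities for illustration: the self-information of a message is minus the base-2 logarithm of its probability (giving an answer in bits), and independent messages simply add.

```python
# Shannon self-information: I(message) = -log2(probability of that message), in bits.
import math

def self_information(p):
    """Bits of information carried by a message that occurs with probability p."""
    return -math.log2(p)

print(self_information(0.5))        # a fair coin flip: 1.0 bit
print(self_information(1 / 365))    # a much less likely message: about 8.5 bits

# Two independent coin flips carry the sum of the individual informations:
print(self_information(0.5 * 0.5))  # 2.0 bits = 1.0 + 1.0
```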

  If many of these words sound familiar, it’s not an accident. Boltzmann associated the entropy with the logarithm of the number of microstates in a certain macrostate. But given the Principle of Indifference, the number of microstates in a macrostate is clearly proportional to the probability of picking one of them randomly in the entire space of states. A low-entropy state is like a surprising, information-filled message, while knowing that you’re in a high-entropy state doesn’t tell you much at all. When all is said and done, if we think of the “message” as a specification of which macrostate a system is in, the relationship between entropy and information is very simple: The information is the difference between the maximum possible entropy and the actual entropy of the macrostate.154
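
  Putting Boltzmann’s and Shannon’s formulas side by side makes the connection explicit. The notation below is mine, with entropy measured in the same logarithmic units as information; by the Principle of Indifference, the probability of finding the system in a given macrostate is proportional to that macrostate’s number of microstates W.

```latex
S = \log W, \qquad
p(\text{macrostate}) = \frac{W}{W_{\text{total}}}, \qquad
I = -\log p = \log W_{\text{total}} - \log W \approx S_{\max} - S
```

  The last step uses the fact that, for a macroscopic system, the highest-entropy macrostate contains the overwhelming majority of all microstates, so the logarithm of the total number of states is essentially the maximum entropy.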

  DOES LIFE MAKE SENSE?

  It should come as no surprise that these ideas connecting entropy and information come into play when we start thinking about the relationship between thermodynamics and life. Not that this relationship is very straightforward; although there certainly is a close connection, scientists haven’t even yet agreed on what “life” really means, much less understood all its workings. This is an active research area, one that has seen an upsurge in recent interest, drawing together insights from biology, physics, chemistry, mathematics, computer science, and complexity studies.155

  Without yet addressing the question of how “life” should be defined, we can ask what sounds like a subsequent question: Does life make thermodynamic sense? The answer, before you get too excited, is “yes.” But the opposite has been claimed—not by any respectable scientists, but by creationists looking to discredit Darwinian natural selection as the correct explanation for the evolution of life on Earth. One of their arguments relies on a misunderstanding of the Second Law, which they read as “entropy always increases,” and then interpret as a universal tendency toward decay and disorder in all natural processes. Whatever life is, it’s pretty clear that life is complicated and orderly—how, then, can it be reconciled with the natural tendency toward disorder?

  There is, of course, no contradiction whatsoever. The creationist argument would equally well imply that refrigerators are impossible, so it’s clearly not correct. The Second Law doesn’t say that entropy always increases. It says that entropy always increases (or stays constant) in a closed system, one that doesn’t interact noticeably with the external world. It’s pretty obvious that life is not like that; living organisms interact very strongly with the external world. They are the quintessential examples of open systems. And that is pretty much that; we can wash our hands of the issue and get on with our lives.

  But there’s a more sophisticated version of the creationist argument, which is not quite as silly—although it’s still wrong—and it’s illuminating to see exactly how it fails. The more sophisticated argument is quantitative: Sure, living beings are open systems, so in principle they can decrease entropy somewhere as long as it increases somewhere else. But how do you know that the increase in entropy in the outside world is really enough to account for the low entropy of living beings?

  As I mentioned back in Chapter Two, the Earth and its biosphere are systems that are very far away from thermal equilibrium. In equilibrium, the temperature is the same everywhere, whereas when we look up we see a very hot Sun in an otherwise very cold sky. There is plenty of room for entropy to increase, and that’s exactly what’s happening. But it’s instructive to run the numbers.156

  Figure 50: We receive energy from the Sun in a concentrated, low-entropy form, and radiate it back to the universe in a diffuse, high-entropy form. For every 1 high-energy photon we receive, the Earth radiates about 20 low-energy photons.

  The energy budget of the Earth, considered as a single system, is pretty simple. We get energy from the Sun via radiation; we lose the same amount of energy to empty space, also via radiation. (Not exactly the same; processes such as nuclear decays also heat up the Earth and leak energy into space, and the rate at which energy is radiated is not strictly constant. Still, it’s an excellent approximation.) But while the amount is the same, there is a big difference in the quality of the energy we get and the energy we give back. Remember back in the pre-Boltzmann days, entropy was understood as a measurement of the uselessness of a certain amount of energy; low-entropy forms of energy could be put to useful work, such as powering an engine or grinding flour, while high-entropy forms of energy just sat there.

  The energy we get from the Sun is of a low-entropy, useful form, while the energy we radiate back out into space has a much higher entropy. The temperature of the Sun is about 20 times the average temperature of the Earth. For radiation, the temperature is essentially a measure of the average energy of the photons of which it is made, so the Earth needs to radiate about 20 low-energy (long-wavelength, infrared) photons for every 1 high-energy (short-wavelength, visible) photon it receives. It turns out, after a bit of math, that 20 times as many photons directly translates into 20 times the entropy. The Earth emits the same amount of energy as it receives, but with 20 times higher entropy.
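
  As a rough check on that factor of 20, here is the arithmetic with round, standard values; the particular temperatures are my assumptions rather than figures quoted from the book.

```python
# Rough arithmetic behind the "factor of about 20" (round, assumed values).
T_sun = 6000.0    # K, approximate surface temperature of the Sun
T_earth = 300.0   # K, approximate average temperature of the Earth

ratio = T_sun / T_earth   # photon-energy ratio tracks the temperature ratio
print(f"each solar photon carries roughly {ratio:.0f}x the energy of an outgoing infrared photon")
# Energy out equals energy in, so the Earth must emit about 20 photons for every
# one it absorbs, and about 20 times as many photons means about 20 times the entropy.
```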

  The hard part is figuring out just what we mean when we say that the life forms here on Earth are “low-entropy.” How exactly do we do the coarse-graining? It is possible to come up with reasonable answers to that question, but it’s complicated. Fortunately, there is a dramatic shortcut we can take. Consider the entire biomass of the Earth—all of the molecules that are found in living organisms of any type. We can easily calculate the maximum entropy that collection of molecules could have, if it were in thermal equilibrium; plugging in the numbers (the biomass is 10^15 kilograms; the temperature of the Earth is 255 Kelvin), we find that its maximum entropy is 10^44. And we can compare that to the minimum entropy it could possibly have—if it were in an exactly unique state, the entropy would be precisely zero.

  So the largest conceivable change in entropy that would be required to take a completely disordered collection of molecules the size of our biomass and turn them into absolutely any configuration at all—including the actual ecosystem we currently have—is 10^44. If the evolution of life is consistent with the Second Law, it must be the case that the Earth has generated more entropy over the course of life’s evolution by converting high-energy photons into low-energy ones than it has decreased entropy by creating life. The number 10^44 is certainly an overly generous estimate—we don’t have to generate nearly that much entropy, but if we can generate that much, the Second Law is in good shape.
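
  As a hedged cross-check, with my own back-of-the-envelope inputs rather than a calculation from the book, one can ask how quickly the conversion of sunlight into infrared radiation generates entropy compared to that budget of 10^44.

```python
# Order-of-magnitude check: entropy generated per year by converting sunlight
# into infrared radiation (assumed standard values, not the book's calculation).
k_B = 1.380649e-23               # Boltzmann's constant, J/K
P = 1.7e17                       # W, rough solar power intercepted by the Earth
T_sun, T_earth = 6000.0, 255.0   # K, approximate temperatures

# Entropy carried by thermal radiation is of order energy/temperature
# (the exact blackbody factor of 4/3 is irrelevant at this level of precision).
dS_per_second = P / T_earth - P / T_sun          # net entropy produced, J/K per second
seconds_per_year = 3.15e7
dS_per_year = dS_per_second * seconds_per_year / k_B   # in units of k_B

print(f"entropy generated per year: roughly {dS_per_year:.0e} (in units of k_B)")
```

  The answer comes out around 10^45 per year, so the entire 10^44 budget for the biosphere is covered in a fraction of a single year, never mind the billions of years over which life has actually evolved.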

 
