by Lee Smolin
Things are very different for large, macroscopic bodies. In the world we live in, the future is very different from the past, which is exactly what is captured in the law stating that entropy increases into the future. Because this seemed to contradict the fact that in Newton’s theory the future and the past are reversible, many physicists refused to believe that matter is made of atoms until the first few decades of the twentieth century, when conclusive experimental proof was obtained for their existence.
The ideas that temperature is a measure of energy in random motion and that entropy is a measure of information underlie what is called the statistical formulation of thermodynamics. According to this view, ordinary matter is made out of enormous numbers of atoms. This means that one has to reason statistically about the behaviour of ordinary matter. According to the founders of statistical mechanics, as the idea was called, one could explain the apparent paradox about the direction of time by deriving the laws of thermodynamics from Newton’s laws. The paradox was resolved by understanding that the laws of thermodynamics are not absolute: they describe what is most likely to happen, but there will always be a small probability of the laws being violated.
In particular, the laws assert that most of the time a large collection of atoms will evolve in such a way as to reach a more random - meaning more disorganized - state. This is just because the randomness of the interactions tends to wash out any organization or order that is initially present. But this need not happen; it is just what is most likely to happen. A system which is very carefully prepared, or which incorporates structures that preserve a memory of what has happened to it - such as a complex molecule like DNA - can be seen to evolve from a less ordered to a more ordered state.
The argument here is rather subtle, and it took several decades for most physicists to be convinced. The originator of the idea that entropy had to do with information and probability, Ludwig Boltzmann, committed suicide in 1906, before most physicists had accepted his arguments. (Whether or not his depression had anything to do with the failure of his colleagues to appreciate his reasoning, Boltzmann’s suicide had at least one far-reaching consequence: it convinced a young physics student named Ludwig Wittgenstein to give up physics and go to England to study engineering and philosophy.) In fact, the arguments that finally convinced most physicists of the existence of atoms had been published just the year before by the then patent office clerk Albert Einstein (‘Same Einstein’, as my physics teacher used to say). This argument had to do with the fact that the statistical point of view allowed the laws of thermodynamics to be violated from time to time. What Boltzmann had found was that the laws of thermodynamics would be exactly true only for systems that contained an infinite number of atoms. Of course, the number of atoms in a given system, such as the water in a glass, is very large, but it is not infinite. Einstein realized that for systems containing a finite number of atoms the laws of thermodynamics would be violated from time to time. Since the number of atoms in the glass is large, these effects are small, but in some circumstances they may still be observed. By making use of this fact Einstein was able to discover observable manifestations of the motions of atoms. Some of these had to do with the fact that a grain of pollen, observed through a microscope, will dance around randomly because it is being jiggled by atoms colliding with it. As each atom has a finite size, and carries a finite amount of energy, the jiggles that result when they collide with the grain of pollen can be seen, even if the atoms themselves are far too small to be seen.
The success of these arguments persuaded Einstein and a few others, such as his friend Paul Ehrenfest, to apply the same reasoning to light. According to the theory published by James Clerk Maxwell in 1865, light consisted of waves travelling through the electromagnetic field, each wave carrying a certain amount of energy. Einstein and Ehrenfest wondered whether they could use Boltzmann’s ideas to describe the properties of light on the inside of an oven.
Light is produced when the atoms in the walls of the oven heat up and jiggle around. Could the light so produced be said to be hot? Could it have an entropy and a temperature? What they found was profoundly puzzling to them and to everyone else at the time. They found that horrible inconsistencies would arise unless the light were in a sense also to consist of atoms. Each atom of light, or quantum as they called it, had to carry a unit of energy related to the frequency of the light. This was the birth of quantum theory.
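The relation alluded to here, that each quantum of light carries an amount of energy fixed by the light’s frequency, can be written in a single line. This is the standard formula, with h Planck’s constant and ν the frequency:

```latex
% Energy carried by one quantum of light of frequency \nu.
E = h\nu
```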
I shall tell no more of this story, for it is indeed a very twisted one. Some of the results that Einstein and Ehrenfest employed in their reasoning had been found by Max Planck, who had studied the problem of hot radiation five years earlier. It was in this work that the famous Planck’s constant first appeared. But Planck was one of those physicists who believed neither in atoms nor in Boltzmann’s work, so his understanding of his own results was confused and, in part, contradictory. He even managed to invent a convoluted argument that assured him that photons did not exist. For this reason the birth of quantum physics is more properly attributed to Einstein and Ehrenfest.
The moral of this story is that it was an attempt to understand the laws of thermodynamics that prompted two crucial steps in our understanding of atomic physics. These were the arguments that convinced physicists of the existence of atoms, and the arguments by which the existence of the photon was first uncovered. It was no coincidence that both these steps were taken by the same young Einstein in the same year.
We can now turn back to quantum gravity, and in particular to quantum black holes. For what we have seen in the last few chapters is that black holes are systems which may be described by the laws of thermodynamics. They have a temperature and an entropy, and they obey an extension of the law of increase of entropy. This allows us to raise several questions. What does the temperature of a black hole actually measure? What does the entropy of a black hole really describe? And, most importantly, why is the entropy of a black hole proportional to the area of its horizon?
The search for the meaning of temperature and entropy of matter led to the discovery of atoms. The search for the meaning of the temperature and entropy of radiation led to the discovery of quanta. In just the same way, the search for the meaning of the temperature and entropy of a black hole is now leading to the discovery of the atomic structure of space and time.
Consider a black hole interacting with a gas of atoms and photons. The black hole can swallow an atom or a photon. When it does so, the entropy of the region outside the black hole decreases because the entropy is a measure of information about that region, and if there are fewer atoms or photons there is less to know about the gas. To compensate, the entropy of the black hole must increase, otherwise the law that entropy can never decrease would be violated. As the entropy of the black hole is proportional to the area of its horizon, the result must be that the horizon expands a little.
And indeed, this is what happens. The process can also go the other way: the horizon can shrink a little, which means that the entropy of the black hole will decrease. To compensate, the entropy outside the black hole must increase. To accomplish this, photons must be created just outside the black hole - photons that comprise the radiation that Hawking predicted should be emitted by a black hole. The photons are hot, so they can carry the entropy that must be created to compensate for the fact that the horizon shrinks.
What is happening is that, to preserve the law that entropy does not decrease, a balance is being struck between, on the one hand, the entropy of atoms and photons outside the black hole and, on the other, the entropy of the black hole itself. But notice that two very different things are being balanced. The entropy outside the black hole we understand in terms of the idea that matter is made out of atoms; it has to do with missing information. The entropy of the black hole itself seems to have nothing to do with either atoms or information. It is a measure of a quantity which has to do with the geometry of space and time: it is proportional to the area of the black hole’s event horizon.
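The balance described here is usually written as a single inequality, known as the generalized second law. In the standard form below, S_outside is the ordinary entropy of the matter and radiation outside the horizon, A is the area of the horizon, l_P is the Planck length, and entropy is measured in units of Boltzmann’s constant:

```latex
% Generalized second law: the ordinary entropy outside the horizon, plus the
% horizon area in Planck units (playing the role of the black hole's entropy),
% never decreases.
\delta\!\left( S_{\mathrm{outside}} + \frac{A}{4\,l_P^{2}} \right) \;\ge\; 0
```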
There is something incomplete about a law which asserts a balance or an exchange between two very dissimilar things. It is as though we had two kinds of currency, the first of which was exchangeable for something concrete such as gold, while the other had no worth beyond the paper it was printed on. Suppose we were allowed to mix the two kinds of money freely in our bank accounts. Such an economy would be based on a contradiction, and could not survive for long. (In fact, communist governments experimented with two kinds of currency, one convertible into other currencies and one not, and discovered that the system is unstable in the absence of all sorts of complicated and artificial restrictions on the use of the two kinds of money.) Similarly, a law of physics that allows information to be converted into geometry, and vice versa, but gives no account of why the two are interchangeable, should not survive for long. There must be something deeper and simpler at the root of the equivalence.
This raises two profound questions:
• Is there an atomic structure to the geometry of space and time, so that the entropy of the black hole could be understood in exactly the same way that the entropy of matter is understood: as a measure of information about the motion of the atoms?
• When we understand the atomic structure of geometry, will it be obvious why the area of a horizon is proportional to the amount of information it hides?
These questions have motivated a great deal of research since the mid-1970s. In the next few chapters I shall explain why there is a growing consensus among physicists that the answer to both questions must be ‘yes’.
Both loop quantum gravity and string theory assert that there is an atomic structure to space. In the next two chapters we shall see that loop quantum gravity in fact gives a rather detailed picture of that atomic structure. The picture of the atomic structure one gets from string theory is presently incomplete but, as we shall see in Chapter 11, it is still impossible in string theory to avoid the conclusion that there must be an atomic structure to space and time. In Chapter 13 we shall discover that both pictures of the atomic structure of space can be used to explain the entropy and temperature of black holes.
But even without these detailed pictures there is a very general argument, based simply on what we have learned in the last few chapters, that leads to the conclusion that space must have an atomic structure. This argument rests on the simple fact that horizons have entropy. In previous chapters we have seen that this is common to both the horizons of black holes and the horizon experienced by an accelerated observer. In each case there is a hidden region in which information can be trapped, outside the reach of external observers. Since entropy is a measure of missing information, it is reasonable that in these cases there is an entropy associated with the horizon, which is the boundary of the hidden region. But what was most remarkable was that the amount of missing information measured by the entropy had a very simple form. It was simply equal to one-quarter of the area of the horizon, in Planck units.
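Written out with the physical constants restored, this statement is the standard Bekenstein-Hawking formula, where k_B is Boltzmann’s constant and l_P is the Planck length built from Planck’s constant, Newton’s constant and the speed of light:

```latex
% Bekenstein-Hawking entropy: one-quarter of the horizon area A,
% measured in units of the Planck area l_P^2.
S_{\mathrm{BH}} \;=\; \frac{k_B\,A}{4\,l_P^{2}},
\qquad
l_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-33}\ \mathrm{cm}.
```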
The fact that the amount of missing information depends on the area of the boundary of the trapped region is a very important clue. It becomes even more significant if we put this dependence together with the fact that spacetime can be understood to be structured by processes which transmit information from the past to the future, as we saw in Chapter 4. If a surface can be seen as a kind of channel through which information flows from one region of space to another, then the area of the surface is a measure of its capacity to transmit information. This is very suggestive.
It is also strange that the amount of trapped information is proportional to the area of the boundary. It would seem more natural for the amount of information that can be trapped in a region to be proportional to its volume, not to the area of its boundary. Yet no matter what is on the other side of the boundary, trapped in the hidden region, that region can contain the answer to only a finite number of yes/no questions per unit area of its boundary. This seems to be saying that a black hole, whose horizon has a finite area, can hold only a finite amount of information.
If this is the right interpretation of the results I described in the last chapter, it suffices to tell us that the world must be discrete, since whether a given volume of space is behind a horizon or not depends on the motion of an observer. For any volume of space we may want to consider, we can find an observer who accelerates away from it in such a way that the region becomes part of that observer’s hidden region. This tells us that in that volume there could be no more information than the limit we are discussing, which is a finite amount per unit area of the boundary. If this is right, then no region can contain more than a finite amount of information. If the world really were continuous, then every volume of space would contain an infinite amount of information. In a continuous world it takes an infinite amount of information to specify the position of even one electron. This is because the position is given by a real number, and most real numbers require an infinite number of digits to describe them: their decimal expansions go on without end.
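To make the contrast concrete, here is a small back-of-envelope sketch of my own, not an argument from the text: if positions inside a 1 cm region could only be specified to Planck-length resolution, a single coordinate would take only about a hundred bits to pin down, whereas an exact real-valued position needs infinitely many digits. The only inputs are the standard value of the Planck length and an arbitrarily chosen 1 cm region.

```python
import math

# How many bits does it take to single out one Planck-sized cell along a
# 1 cm line? (Assumed inputs: standard Planck length, a 1 cm region.)
PLANCK_LENGTH_CM = 1.616e-33
REGION_CM = 1.0

distinct_cells = REGION_CM / PLANCK_LENGTH_CM   # number of distinguishable positions
bits_needed = math.log2(distinct_cells)         # bits to label one of them

print(f"about {bits_needed:.0f} bits per coordinate")  # roughly 109 bits - finite
# An exact real number, by contrast, would need an infinite number of digits.
```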
In practice, the greatest amount of information that may be stored behind a horizon is huge - 10^66 bits of information per square centimetre. No actual experiment so far comes close to probing this limit. But if we want to describe nature on the Planck scale, we shall certainly run into this limitation, as it allows us to talk about only one bit of information for every four Planck areas. After all, if the limit were one bit of information per square centimetre rather than per Planck area, it would be quite hard to see anything, because our eyes would then be able to respond to at most one photon at a time.
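The figure quoted above can be checked with a line or two of arithmetic. The sketch below assumes one bit for every four Planck areas and the standard value of the Planck length; with those inputs it gives roughly 10^65 bits, the same order-of-magnitude ballpark as the figure quoted above, with choices of prefactor and rounding accounting for the difference.

```python
# Back-of-envelope estimate of the information bound on a horizon:
# one bit per four Planck areas (assumed convention), for 1 cm^2 of horizon.
PLANCK_LENGTH_CM = 1.616e-33           # standard value of the Planck length
PLANCK_AREA_CM2 = PLANCK_LENGTH_CM**2  # one Planck area, in cm^2

bits_per_cm2 = 1.0 / (4.0 * PLANCK_AREA_CM2)

print(f"{bits_per_cm2:.1e} bits per square centimetre")  # about 1e65
```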
Many of the important principles in twentieth-century physics are expressed as limitations on what we can know. Einstein’s principle of relativity (which was an extension of a principle of Galileo’s) says that we cannot do any experiment that would distinguish being at rest from moving at a constant velocity. Heisenberg’s uncertainty principle tells us that we cannot know both the position and momentum of a particle to arbitrary accuracy. This new limitation tells us there is an absolute bound to the information available to us about what is contained on the other side of a horizon. It is known as Bekenstein’s bound, as it was discussed in papers Jacob Bekenstein wrote in the 1970s shortly after he discovered the entropy of black holes.
It is curious that, even though everyone who has worked on quantum gravity has been aware of this result, few seem to have taken it seriously in the twenty years following the publication of Bekenstein’s papers. Although the arguments he used were simple, Jacob Bekenstein was far ahead of his time. The idea that there is an absolute limit to information, one which requires each region of space to contain at most a certain finite amount of it, was just too shocking for us to assimilate at the time. There is no way to reconcile this with the view that space is continuous, for that view implies that each finite volume can contain an infinite amount of information. Before Bekenstein’s bound could be taken seriously, people had to discover other, independent reasons why space should have a discrete, atomic structure. To do this we had to learn to do physics at the scale of the smallest possible things.
CHAPTER 9
HOW TO COUNT SPACE
The first approach to quantum gravity that yielded a detailed description of the atomic structure of space and spacetime was loop quantum gravity. The theory offers more than a picture: it makes precise predictions about what would be observed were it possible to probe the geometry of space at distances as short as the Planck scale.
According to loop quantum gravity, space is made of discrete atoms, each of which carries a very tiny unit of volume. In contrast to ordinary geometry, a given region cannot have just any volume: its volume must be one of a discrete set of numbers. This is just what quantum theory does with other quantities: it restricts a quantity that is continuous according to Newtonian physics to a discrete set of values. This is what happens to the energy of an electron in an atom, and to the value of the electric charge. As a result, we say that the volume of space is predicted to be quantized.
One consequence of this is that there is a smallest possible volume. This minimum volume is minuscule - about 10^99 of these smallest volumes would fit into a thimble. If you tried to halve a region of this volume, the result would not be two regions each with half that volume. Instead, the process would create two new regions which together would have more volume than you started with. We describe this by saying that the attempt to measure a unit of volume smaller than the minimal size alters the geometry of the space in a way that allows more volume to be created.
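The thimble figure can be checked in the same back-of-envelope spirit. The sketch below treats the minimal volume as roughly one cubic Planck length and a thimble as roughly one cubic centimetre; both are assumptions made only for this estimate, and they land within a factor of a few of the 10^99 quoted above, which is as close as such a rough count can be expected to come.

```python
# Rough count of minimal-volume cells in a thimble, taking the minimal
# volume to be of order one cubic Planck length (an assumption for this
# estimate) and a thimble to hold about one cubic centimetre.
PLANCK_LENGTH_CM = 1.616e-33
PLANCK_VOLUME_CM3 = PLANCK_LENGTH_CM**3   # about 4e-99 cm^3
THIMBLE_CM3 = 1.0

print(f"{THIMBLE_CM3 / PLANCK_VOLUME_CM3:.1e} cells per thimble")  # about 2e98
```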
Volume is not the only quantity which is quantized in loop quantum gravity. Any region of space is surrounded by a boundary which, being a surface, will have an area, measured for example in square centimetres. In classical geometry a surface can have any area. In contrast, loop quantum gravity predicts that there is a smallest possible area. As with volume, the theory limits the possible areas a surface can have to a discrete set of values. In both cases the jumps between possible values are very small, of the order of the square and the cube of the Planck length respectively. This is why we have the illusion that space is continuous.
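For readers curious what such a prediction looks like when written down, a form of the loop quantum gravity area spectrum often quoted in the technical literature is shown below. The half-integer labels j_i and the constant γ (the Immirzi parameter) belong to the detailed picture described in the next chapters; the formula is included here only as an illustration of what ‘a discrete set of allowed areas’ means.

```latex
% A commonly quoted form of the loop quantum gravity area spectrum: a surface
% crossed by spin-network edges carrying half-integer labels j_i has an area
% given by a discrete sum, in units of the Planck area.
A \;=\; 8\pi\gamma\, l_P^{2} \sum_i \sqrt{j_i\,(j_i + 1)},
\qquad j_i \in \left\{\tfrac{1}{2},\, 1,\, \tfrac{3}{2},\, \dots\right\}.
```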
These predictions could be confirmed or refuted by measurements of the geometry of things made on the Planck scale. The problem is that because the Planck scale is so small, it is not easy to make these measurements - but it is not impossible, as I shall describe in due course.
In this chapter and the next I shall tell the story of how loop quantum gravity developed from a few simple ideas into a detailed picture of space and time on the shortest possible scales. The style of these chapters will be rather more narrative than the others, as I can describe from personal experience some of the episodes in the development of the theory. I do this mainly to illustrate the complicated and unexpected ways in which a scientific idea can develop. This can only be communicated by telling stories, but I must emphasize that there are many stories. My guess is that the inventors of string theory have better stories, with more human drama. I must also stress that I do not intend these chapters to be a complete history of loop quantum gravity. I am sure that each of the people who worked on the theory would tell the story in a different way. The story I tell is sketchy and leaves out many episodes and steps in the theory’s development. Worse, it leaves out many of the people who at one time or another have contributed something important to the theory.