Einstein's Unfinished Revolution

by Lee Smolin


  In a recent paper, I showed that the hypothesis of maximal variety leads to the Schrödinger equation, and hence to quantum mechanics. This happens because there turns out to be a mathematical similarity between the variety and Bohm’s quantum force. As a result, Bohm’s quantum force acts to increase the variety of a system. It does so by making the neighborhoods of all the different particles as different from each other as possible.

  In this approach the probabilities in quantum mechanics refer to an ensemble that really exists, the ensemble of all systems with similar views. This is a real ensemble, in that the elements are not located in our imagination; they are, each and every one, a part of the natural world. This is in accord with the principles of causal completeness and reciprocity.

  This was the basis of a relational hidden variable theory I proposed, which I called the real ensemble formulation of quantum mechanics. In it, I derived the Schrödinger formulation of quantum mechanics from a principle that maximizes the variety present in real ensembles of systems with similar views of the universe.

  On the technical side, this theory borrows from the many interacting classical universes theory I described in the last chapter, except that the ensemble of similar systems does not come from other universes parallel to our own; instead, its members are similar systems far away in distant regions of our own single universe.

  In this theory, the phenomena of quantum physics arise from a continual interplay between the similar systems that make up an ensemble. The partners of an atom in my glass of water are spread through the universe. The indeterminism and uncertainties of quantum physics arise from the fact that we cannot control or observe those different systems. In this picture, an atom is quantum because it has many nearly identical copies of itself, spread through the universe.

  An atom with its neighborhood has many copies because it is close to the smallest possible scale. It is simple to describe, as it has few degrees of freedom. In a big universe it will have many near copies.

  Large, macroscopic systems such as cats, machines, or ourselves have, by contrast, a vast complexity, which takes a great deal of information to describe. Even in a very big universe, such systems have no close or exact copies. Hence, cats and machines and you and I are not part of any ensemble. We are singletons, with nothing similar enough to interact with through the nonlocal interactions. Hence we do not experience quantum randomness. This is a solution to the measurement problem.

  This theory is new, and, as is the case with any new theory, it is most likely wrong. One good thing about it is that it will most likely be possible to test it against experiment. It is based on the idea that systems with a great many copies in the universe behave according to quantum mechanics, because they are continually randomized by nonlocal interactions with their copies.

  I argued that large complex systems have no copies, and hence are not subject to quantum randomness. But can we produce microscopic systems, made from a small number of atoms, which also have no copies anywhere in the universe? Such systems would not obey quantum mechanics, in spite of being microscopic.

  We have the capability to do just that using the tools of quantum information theory. Indeed, a sufficiently large quantum computer should be able to produce states involving enough entangled qubits that they are very unlikely to have any natural copies anywhere in the observable universe. This suggests that the real ensemble theory can be falsified by making a large quantum computer that works exactly as predicted by quantum mechanics.

  Science progresses when we invent falsifiable theories, even if the result is that they get falsified. It is when theorists invent non-falsifiable theories that science gets stuck.

  And what about systems with small numbers of copies? These behave neither quantum mechanically nor deterministically. They will have to exhibit behavior of a new kind which is neither classical nor quantum. This will give us further opportunities to test this new theory.*

  THE PRINCIPLE OF PRECEDENCE

  The real ensemble theory depends on a system being able to recognize and interact with other systems which are similar to it, in the sense that they have a similar view of the universe of relations, no matter where they are in the universe. According to this hypothesis, similarity or difference of views is more fundamental than space; space emerges to describe the rough order created by similarity of views. Two systems may interact if their views are similar enough. Often that reflects their being nearby in space and time, but not always, and it is the latter cases that underlie quantum phenomena.

  What happens if we apply this viewpoint to systems at different times? Might a system interact with systems in the past that have similar views? If this is possible, we can use the influence of the past on the present to find a new understanding of what the laws of nature are. This leads to a novel idea, which I call the principle of precedence.13

  To explain it in simple terms, it helps to use operational terminology, in which a quantum process is defined by three steps. The first is its preparation, which picks the initial state. Next we have an evolution, during which it changes in time according to Rule 1. At the end we have a measurement, which is governed by Rule 2. We have several choices about what we measure, but whichever we choose, several different outcomes are possible. Quantum mechanics predicts that the probabilities for these different outcomes will depend on the preparation, the evolution, and the choice of what we measure. If we know the forces acting on the system during the evolution, we can use Rules 1 and 2 to predict the probabilities of the different outcomes.
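  The three operational steps can be sketched for a single two-state system (a qubit), using plain complex amplitudes. This is only an illustration; the names prepare, evolve, and measure are hypothetical, and the particular evolution chosen (a rotation of the amplitudes) is just one simple instance of Rule 1.

```python
import cmath
import random

def prepare():
    """Preparation: pick the initial state, here 'up' with certainty."""
    return [1 + 0j, 0 + 0j]

def evolve(state, theta):
    """Rule 1: deterministic, reversible evolution -- here a rotation
    of the two amplitudes by an angle theta."""
    a, b = state
    c, s = cmath.cos(theta), cmath.sin(theta)
    return [c * a - s * b, s * a + c * b]

def measure(state):
    """Rule 2: the probability of each outcome is the squared
    magnitude of its amplitude (the Born rule)."""
    p_up = abs(state[0]) ** 2
    return "up" if random.random() < p_up else "down"

# Prepare 'up', rotate by 45 degrees, then measure:
# each outcome now occurs with probability 1/2.
state = evolve(prepare(), cmath.pi / 4)
outcome = measure(state)
```

  Changing the preparation, the angle of the evolution, or what is measured changes the probabilities, just as the text describes.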

  It is common to believe that, once the environment of the system is fixed, Rule 1 evolves the system in time as dictated by the fundamental laws. These laws are presumed not to change in time. As a consequence, we can say the following. For every quantum system we study in the present, defined by a specific preparation, evolution, and measurement, there will be a collection of similar systems in the past. These are similar in the sense that they had the same preparation, evolution, and measurement as our present system. Now, the fact that the laws don’t change implies that the probabilities for different outcomes also don’t change.

  As a result we can say that

  The probabilities for different outcomes to result in the present experiment are the same as if we picked random outcomes from the collection of past similar instances.*

  We can call this the law of precedents.

  Now I would like to make a simple but radical proposal. The law of precedents is usually understood to be a consequence of the existence of unchanging laws. But actually, this law of precedents is all we need of law. We can posit that there is no law except the law of precedents. Instead of the above, we posit that

  The probabilities for different outcomes to result in the present experiment are arrived at by picking random outcomes from the collection of past similar instances.

  By this I postulate that a physical system has access to the outcomes of systems with similar preparations, evolutions, and measurements in its past (we call these “similar systems,” for short). Our hypothesis is then

  A physical system, when faced with a choice of outcomes of a measurement, will pick a random outcome from the collection of similar systems in the past.

  This law of precedents guarantees that most of the time, the present will resemble the past, in that the probabilities for the various possible outcomes of the same experiment will be unchanged.
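  The hypothesis can be sketched as a simple sampling rule. This is only an illustration with hypothetical names; in particular, how a system "recognizes" similar past systems, and what happens when there is no precedent at all, are exactly what the theory leaves open.

```python
import random
from collections import defaultdict

# Past "similar systems" are keyed by their description:
# a (preparation, evolution, measurement) triple.
past_outcomes = defaultdict(list)

def record(description, outcome):
    """Store the outcome of a completed measurement."""
    past_outcomes[description].append(outcome)

def measure(description):
    """The law of precedents: pick a random outcome from the
    collection of past similar instances."""
    precedents = past_outcomes[description]
    if not precedents:
        raise ValueError("no precedent: the theory leaves this case open")
    return random.choice(precedents)

# Seed precedents for a spin measurement with 50/50 outcomes.
for _ in range(1000):
    record(("spin-1/2", "free", "z-axis"), random.choice(["up", "down"]))

# A new measurement reproduces the precedent frequencies,
# so the present resembles the past.
outcome = measure(("spin-1/2", "free", "z-axis"))
```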

  If this is right, the appearance that atoms are governed by unchanging laws is an illusion created by the fact that the universe is old enough and big enough that there is ample precedent for most situations an atom will find itself in.

  But what if there are no precedents? What if we prepare a quantum state which has never so far existed in the history of the universe? If we make a measurement of it, how will we determine its outcome, if there are no past similar instances to refer to?

  I don’t know the answer to this question.

  This could be and, I hope, will be a question for experimental physics. The standard belief in a timeless fundamental law has no problem making a prediction, by applying the known law to the new situation. If the experiments always confirm that answer, we can deduce that the principle of precedence is wrong. However, if precedence is the key to lawfulness, then the response to a novel situation, a novel quantum state, will be novel.

  After many repetitions precedence builds up, and there will no longer be surprises. The transition, though, from novelty to precedence should be open to experimental investigation.

  The site for such investigations is again likely to be laboratories where experimentalists are preparing entangled states of several atoms. Such states will at some point soon be complex enough that it would be safe to deduce they have no precedents in the history of the universe. So very soon it ought to become possible to test the principle of precedence experimentally, and perhaps discover the process by which precedence builds up.

  FIFTEEN

  A Causal Theory of Views

  Each of us theorists has his or her commitments: the guesses about nature you are willing to bet your career on. Personally, I am a realist, a relationalist, and, indeed, a temporal relationalist. I believe that quantum mechanics is incomplete and aim to construct a realist theory according to the principles of temporal relationalism, which can stand as a simultaneous completion of quantum mechanics and general relativity. I have hopes that this theory will not only resolve the puzzles in the foundations of quantum theory, but will lead to the discovery of the right quantum theory of gravity, as well as address mysteries in cosmology and particle physics coming from the universe’s apparent freedom to choose both laws and initial conditions.

  In this closing chapter I’d like to describe one path we might take to reach this goal, and then tell you about some very recent work that brings us a few steps along this path.

  This is a theory of nads, of the sort I’ve been describing, with two additional ideas. First, we take seriously Leibniz’s idea that what is real in a purely relational description of the world is the views that each nad has of the rest of the universe. The views don’t represent what is real; they are what is real. This means that the views themselves are the dynamical degrees of freedom, the protagonists of our story. This indeed brings our nads closer to what Leibniz called monads (although there are still some differences).

  But to what exactly do the nads correspond in the world we are familiar with, and of what do their views consist?

  If we want a correspondence with general relativity, it is natural to presume that the nads are events. In relativity theory, events are things that happen at a single place and time. They are fundamental to general relativity’s picture of the world. You can think of them as moments when something changes at one place: for example, two particles colliding make an event. A world made of events is a world in which “to become” is more fundamental than “to be.”

  If the nads are events, what do the relations between them describe? The short answer is causation. Events cause other events.

  Each event is woven into the history of the universe through relations with the other events, which express which events might be a cause of which. These causal relations chart the history of processes of change.

  We can extract how these relations work from general relativity. Given that causes can propagate only at the speed of light or less, we say that an event B is in the causal past of another event, A, if a physical cause could have traveled at the speed of light or less from B to A. If this relation holds, then conditions at B might have contributed to causing conditions at A.

  Under the same condition we also say that A is in the causal future of B.

  Given any two events, A and B, general relativity usually requires that exactly one of the following three things be true. Either A is in the causal future of B, or B is in the causal future of A, or they are causally unrelated because no signal traveling at the speed of light or less could have passed between them. This rules out closed causal loops, in which A is in both the causal future and the causal past of B. Exotic histories with closed causal loops are fun to speculate about, but they raise puzzles and paradoxes. I see no reason to presume closed causal loops are part of nature, especially as I want to presume that causation is fundamental, and fundamentally irreversible.*

  FIGURE 14. A set of discrete events, connected by causal links.

  If we say what the causal relations are between every pair of events, we are describing the universe in terms of its causal structure.

  According to general relativity, spacetime consists of a continuous infinity of events. Instead, I follow some of the pioneers of quantum gravity, who hypothesize the nads to be a discrete set of fundamental events. Discrete means they can be counted, whether the count is finite or infinite. We will also require that even if their total numbers are infinite, there is a finite number within any finite volume of space and finite interval of time. This greatly simplifies things.

  At a minimum, we will want to ascribe causal relations to the nads. These work just like causal relations in general relativity. Given any two nads, A and B, either A is in the causal future of B, or B is in the causal future of A, or they are causally unrelated. A set of nads together with their causal relations is a model of what a discrete or quantum spacetime might be like.

  Since the nads are a discrete set, their causal relations are discrete as well. We can count backward and forward in discrete causal steps. Each nad has its immediate causal past, which consists of those nads one step back from it into the past.

  It is then natural to think of nads in terms of a metaphor of parentage. Nad C might have had two parents, A and B; then we can think of C as the event defined by the meeting of two causes, one from A and one from B. Tracing the ancestry of C back through A and B to their parents, and beyond, gives us a network of causes stretching deep into the past. C in turn might have two progeny, D and E, which it influences.

  At this point, we have in front of us a possibility of breathtaking simplicity. We can suppose that the events which make up the history of the world have fundamentally only these causal relations. All other entities and all other properties in nature are to be derived from a large but discrete set of events whose only property is which causes which. This radical suggestion was made by Rafael Sorkin,1 and developed in close collaboration with a group of friends and enthusiasts. They call it the causal set theory.

  A causal set is simply a discrete set on which there are defined only causal relations, satisfying the condition that an event is never its own cause. One also requires that given any two events A and B, only a finite number of events are in both the causal future of B and the causal past of A.
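  As a rough illustration, a small causal set can be modeled as a directed graph of events with no cycles. The class below is a hypothetical sketch, not causal set theory's actual formalism: it stores only immediate causal links (the "parentage" relations) and recovers the rest of the order by following them.

```python
class CausalSet:
    """A discrete set of events whose only properties are
    their causal relations."""

    def __init__(self, events, links):
        self.events = set(events)
        self.children = {e: set() for e in self.events}
        for a, b in links:          # a is an immediate cause of b
            self.children[a].add(b)

    def causal_future(self, a):
        """All events reachable from a by following causal links."""
        future, frontier = set(), [a]
        while frontier:
            e = frontier.pop()
            for c in self.children[e]:
                if c not in future:
                    future.add(c)
                    frontier.append(c)
        return future

    def precedes(self, a, b):
        """True if a is in the causal past of b."""
        return b in self.causal_future(a)

    def interval(self, b, a):
        """Events causally between b and a; the theory requires
        this set to be finite."""
        return {e for e in self.events
                if self.precedes(b, e) and self.precedes(e, a)}

    def is_acyclic(self):
        """An event is never its own cause."""
        return all(e not in self.causal_future(e) for e in self.events)

# The parentage example from the text: A and B are the parents
# of C, which in turn has two progeny, D and E.
cs = CausalSet("ABCDE", [("A", "C"), ("B", "C"), ("C", "D"), ("C", "E")])
```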

  I admire the ambition and radical purity of causal set theory. It is a completely relational description of spacetime, in which each event is defined completely in terms of its place in the network of causal relations.

  One very good feature is that the geometry of a spacetime can, to a good approximation, be captured by a causal set. This is done by a method analogous to how polls of our political views are taken. Rather than ask everyone’s views, those of a small, randomly chosen sample are queried. Similarly, one can pick out a random sample of events in a spacetime and record their causal relations with each other. One loses a lot of information, but if one picks an event per some fixed volume of space and unit of time, one gets a representation of the causal relations which is accurate down to that scale.
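  The polling analogy can be made concrete, under the simplifying assumption of a flat spacetime with one space and one time dimension, in units where the speed of light is 1. The functions sprinkle and precedes below are hypothetical names for illustration: sprinkle scatters a random sample of events, and precedes records the causal relation between a pair of them.

```python
import random

def sprinkle(n, extent=10.0, seed=0):
    """Pick n random events, as (t, x) coordinates, in a square
    region of a flat 1+1-dimensional spacetime."""
    rng = random.Random(seed)
    return [(rng.uniform(0, extent), rng.uniform(0, extent))
            for _ in range(n)]

def precedes(e1, e2):
    """e1 is in the causal past of e2 if a signal at light speed
    (c = 1) or less could travel from e1 to e2."""
    t1, x1 = e1
    t2, x2 = e2
    return t2 > t1 and abs(x2 - x1) <= (t2 - t1)

# Sample 100 events and record their causal relations with each
# other; this is the causal set that "polls" the spacetime.
events = sprinkle(100)
relations = [(i, j)
             for i, e1 in enumerate(events)
             for j, e2 in enumerate(events)
             if precedes(e1, e2)]
```

  Sampling roughly one event per unit of volume and time, as here, gives a causal set that captures the spacetime's geometry down to that scale.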

  However, Sorkin and his collaborators hypothesize that the reverse is also the case. They believe the history of the universe is, at its most fundamental, a discrete causal set, from which emerges, on a sufficiently large scale, the illusion of a continuous spacetime. Just as a liquid appears to us continuous but is actually made up of discrete atoms, the events of the causal set would constitute the atoms of spacetime.

  One great success of the causal set theory is that it predicted the rough value of the cosmological constant. Sorkin derived this prediction before the cosmological constant was measured.2 It was the only approach to quantum gravity to do so.

  The causal set hypothesis is one of several competing hypotheses concerning the properties of spacetime atoms. Compared to the others, such as spin foam models, it enjoys the great advantage of its utter simplicity, in that the only properties of events are their causal relations. This greatly narrows down the possible forms that a fundamental law of spacetime atoms could take.

  This radical simplicity is also behind a very formidable obstacle that this approach faces, which is called the inverse problem. As I said earlier, given a continuous spacetime, we can easily sample its events to find a causal set. But the reverse is almost never the case. In the world of possible causal sets, almost none provide an approximate description of a spacetime, with three dimensions of space. This makes it seem as if there is more to a spacetime than a rough description of a network of causal relations.

  Quantum gravity, or the problem of understanding spacetime within quantum theory, has certainly proved to be a formidable challenge. It helps to put the challenge of discovering the atoms of spacetime in perspective by comparing it with the history of the hypothesis that matter is made of atoms.

  In the case of matter, the challenges facing atomists in the nineteenth and early twentieth centuries were twofold. First, they needed to discover the fundamental laws that govern the atoms. Second, they had to deduce from those fundamental laws the rough properties we perceive matter to have. They had to understand how the illusions of solids, liquids, and gases arise as consequences of the more fundamental atomic laws. Theorists of quantum gravity face the same two challenges.

 
