
Einstein's Unfinished Revolution


by Lee Smolin


  Such a theory could not have Rule 2 among its postulates, because that would contradict realism. So we would have to build our theory on Rule 1 alone. This is also a modification of the theory, but it is one shared with pilot wave theory, so perhaps it’s a change worth exploring. Such a theory has no obvious reference to experiment, and no apparent notion of uncertainty or probability, because Rule 1 is deterministic and makes no reference to either. Can we possibly make such a theory work and stay consistent with realism?

  One way to accomplish this would be to derive Rule 2 from a theory that doesn’t postulate it. The collapse of the wave function would happen only in certain special circumstances, such as when an atom interacts with a large, human-size measuring instrument. To do this we have to explain how uncertainty and probability can arise in a world described by a theory that contains neither.

  The project to make sense of quantum mechanics based solely on Rule 1, and in a way that is consistent with realism, has a long history. It was initiated in 1957 by a PhD student of John Wheeler’s named Hugh Everett III, and so can be called Everettian quantum mechanics. But it is most often referred to as the Many Worlds Interpretation of quantum mechanics, because some have argued, not without controversy, that it implies that the world we experience is just one of a vast number of parallel universes.

  Everett’s proposal was presented in his PhD thesis of 1957, and was published the same year.1 It was unusually short for a PhD thesis, yet it was to have, after a while, a big impact.

  Everett, as many have done, left academic science just after his PhD to begin a career in the defense industry, so his thesis was his only contribution to physics. And it took many years before it was widely read. But, apart from de Broglie’s thesis, I can think of no other PhD thesis which was to have, over the long term, such a disruptive or revolutionary (you choose) effect on the foundations of physics.

  * * *

  —

  ONE OF EVERETT’S IDEAS is certainly correct and useful. If there is no Rule 2, wave functions don’t collapse, so we have to describe what happens in a measurement using only Rule 1. As we saw in our discussion of Schrödinger’s cat at the end of chapter 4, interactions, including measurements, lead to correlated states. The example we discussed was

  IN BETWEEN = (EXCITED AND NO AND ALIVE) OR (GROUND AND YES AND DEAD)

  The OR signifies a superposition of different possible situations, in each one of which the atom, Geiger counter, and cat are all correlated. Given that they are in a superposition of states, observables such as the aliveness of the cat have no definite value. But Everett noticed that, nonetheless, we can read this superposed state as giving us two contingent statements about the state of the combined system after the measurement. These contingent statements are

  If the atom is in the excited state, then the counter will read NO and the cat will be alive.

  and

  If the atom is in the ground state, then the counter will read YES and the cat will be dead.

  These tell us that the atom, the counter, and the cat have become correlated by the photon’s possible passage through the detector.*

  The superposed state doesn’t tell us which outcome will be observed, but it tells us that the outcome expresses a correlation between the state of the atom and the states of the counter and cat.
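  In standard quantum notation the same state can be written out explicitly. This is only a sketch; the equal weights of $1/\sqrt{2}$ are an illustrative assumption, since nothing in the story so far fixes them:

  $$|\text{IN BETWEEN}\rangle = \frac{1}{\sqrt{2}}\Big(|\text{EXCITED}\rangle\,|\text{NO}\rangle\,|\text{ALIVE}\rangle + |\text{GROUND}\rangle\,|\text{YES}\rangle\,|\text{DEAD}\rangle\Big)$$

  Each of the two terms encodes one of the contingent statements: reading across a single term ties together the states of the atom, the counter, and the cat.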

  This much of Everett’s thesis is unimpeachable. It is generally true that interactions between two quantum systems set up correlations between the states of the two systems, and these correlations can be read as sets of contingent statements. This is a consequence of Rule 1, applied to interactions.

  But notice what this doesn’t do. It doesn’t tell us which outcome will be observed. Contingent statements may be useful as they give us definite information about the system. But they do not give us complete information. A theory that gave us only contingent statements could not be enough for a realist.

  So Everett went further. To make the theory with only Rule 1 realist, he proposed to change our conception of reality. Everett suggested that a state which consists of a superposition of states of detectors describes a reality in which both outcomes happen. In this enlarged reality, both contingent statements will be true. That is, Everett asserted that a full description of reality is the superposition of the two states. Part (but only part) of what that implies is that the following statement is true:

  The atom is in the excited state, the counter reads NO, and the cat is alive, and the atom is in the ground state, the counter reads YES, and the cat is dead.

  This would seem to be blatantly false. In the world we live in, the cat experiences only one outcome. This is why in chapter 3 we described the superposition as characterizing an “or.” Either she experiences that she is alive, or she is dead and experiences nothing. In our world, it is one or the other.

  Everett proposed that the world we experience is only a part of the full reality. In the enlarged world which, he proposed, makes up that full reality, versions of ourselves exist that experience every possible outcome of every quantum experiment.

  In other words, the “or” of ordinary experience becomes, in quantum mechanics, an “and.” We say “the cat is alive or the cat is dead” because the two states are mutually exclusive. But in this formulation, it can nonetheless be true that “the cat is alive and the cat is dead.”

  The idea is that each time an experiment is performed which could have different outcomes, the universe splits into different, parallel worlds, one for each of the possible outcomes. We split as well, along with the world. The experiment creates an additional version of ourselves for each of the possible outcomes. Each version of ourselves lives from then on in a world described consistently by one of the contingent statements we can read off the combined state.

  In contrast with pilot wave theory, Everettian quantum mechanics has no particles, so nothing distinguishes the different branches from each other.* We then are invited to regard all branches as equally real, and work out the consequences. So if Everett is right, I am at this moment in Toronto, and I am in London, and indeed simultaneously in myriad places my life might have taken me, including the ocean floor off Peggy’s Cove.

  These branches are sometimes called worlds. You can see why Everett’s proposal has come to be called the Many Worlds Interpretation of quantum mechanics.

  For this to work, each version of an observer must have no way to communicate with the others; the branches must be autonomous.

  * * *

  —

  WHAT I HAVE DESCRIBED so far was Everett’s initial version of the Many Worlds Interpretation. On examination, it turned out to be a bit naive, as it ran into several big problems.

  The first problem with Everett’s formulation is that he suggested that the branching happens when a measurement is made. But this makes measurements appear to be special, whereas it is a basic tenet of realism that measurements are ordinary interactions to be treated like any others.

  Indeed, Rule 1 draws no distinction between experiments and any other interactions. So, if you are a realist,* you must insist that what happens for a measurement must happen more generally. The key thing that causes a splitting is an interaction, which produces correlations between the systems that interacted. These correlations can, as we saw, be expressed as contingent statements describing different possible outcomes of that interaction.

  To avoid making experiments special, the universe must split each and every time there is an interaction which has more than one possible outcome. But this is happening literally all the time—all that is required is for two atoms to collide with each other, and that is happening myriad times a second just in the air in this room.

  Moreover, the interaction that causes the splitting can happen anywhere in the universe. So while you are reading this sentence you are splitting a vast number of times, into a vast number of versions of yourself.*
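  A rough way to see the scale (the numbers here are purely illustrative): if each interaction with two possible outcomes doubles the number of branches, then $N$ such interactions produce $2^N$ of them, and

  $$2^{300} = (2^{10})^{30} \approx (10^{3})^{30} = 10^{90},$$

  which already exceeds the roughly $10^{80}$ atoms in the observable universe. And three hundred splittings is nothing next to the number of atomic collisions occurring every second in the air of a single room.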

  This is a lot to ask someone to believe, all in the name of realism. No wonder it took some time for Everett’s ideas to catch on.

  A second problem is that if the branching is to replace Rule 2, then it must be irreversible, to reproduce the basic fact that we observers experience every experiment to have a definite outcome. Indeed, the action of Rule 2, which the branching is supposed to replace, is irreversible. But the branching is supposed by Everett to be a consequence only of Rule 1, which is reversible.

  A third big problem with giving up Rule 2 has to do with probabilities—or rather, their absence.

  Experiments measure probabilities for different outcomes to occur, and comparing these to the predictions of the theory is an important part of testing quantum mechanics. But notice something important: Rule 1 doesn’t speak of probabilities. All reference to probabilities in quantum mechanics comes from Rule 2, which gave us a formula for how probable each possible outcome is. That formula, as we noted before, is called Born’s rule, and it relates probabilities to the square of the wave function. This is the only part of quantum theory that refers to probabilities, and it is part of Rule 2. If we eliminate Rule 2 from quantum theory, we have nothing left in the theory that speaks of probabilities.
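  In symbols, Born’s rule is the standard textbook statement that if $\psi_i$ is the amplitude (the value of the wave function) attached to outcome $i$, then the probability of that outcome is

  $$P(i) = |\psi_i|^2.$$

  Nothing in this formula is special to the present discussion; the point is simply that it belongs to Rule 2, and it goes when Rule 2 goes.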

  As a result, Everett’s version of quantum mechanics tells us only that every possible outcome occurs. Not with some probability, but with certainty.

  That is, for every possible outcome of an experiment, the Many Worlds Interpretation asserts there is a branch in which it occurs. There is no sense in which some branches are more probable than other branches. All Rule 1 can assert is that with certainty, all branches will exist. So we seem to have lost an important part of quantum mechanics—that part which predicts the probabilities that different outcomes occur.

  Everett was not dumb; he was aware of this issue, and he attempted to address it. In his thesis, he offered a way to predict probabilities using only Rule 1. To accomplish this, he suggested a way to derive the relation between probabilities and squares of the wave function—a relation which Rule 2 postulates—directly from Rule 1 alone.

  At first many were impressed by this result. I know I certainly was when I first read Everett’s paper. But it turns out something was concealed in his derivation. Like many erroneous proofs, the argument assumed what was to be proved. The relation between the square of the wave function and probability was snuck into a seemingly innocuous step, which assumed that branches with small wave functions* have small probabilities.* But that was tantamount to assuming a relation between the size of the wave functions and probabilities, and so the proof proved less than was first claimed for it.

  Everett’s proof did establish one important thing: that if one wanted to introduce quantities called probabilities, it would be consistent to assume they follow Born’s rule. But it did not prove that it was necessary to introduce probabilities, nor did it prove that those probabilities must be related to the size of the wave function.
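  In modern notation, the content of Everett’s result can be compressed as follows; this is a sketch, not Everett’s own wording. Write the wave function as a sum over branches,

  $$|\Psi\rangle = \sum_i c_i\,|\text{branch}_i\rangle.$$

  Everett showed that it is consistent to assign each branch the weight

  $$\mu(i) = |c_i|^2,$$

  which is just Born’s rule. But the step that assumed what was to be proved is the requirement that branches with small $|c_i|$ carry small weight; granting that, the squares follow, while nothing in Rule 1 alone forces us to interpret $\mu(i)$ as a probability.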

  Yet another problem with Everett’s original formulation of the Many Worlds Interpretation was that splitting the quantum state into branches is ambiguous. As I explained, each branch is defined by some quantity having a definite value. There is one branch in which the atom is excited and the cat is alive and another branch in which the atom is in the ground state and the cat is dead. But why these and not some other quantities? Ground and excited are states of different energy. But there are other incompatible quantities that we might use instead to define a split. There will be some superposition of ground and excited states that corresponds to the electron being on the left side of the atom and a different superposition that corresponds to the electron being on the right side.

  Let’s call these states left and right. Why not split with respect to these? These would lead to states of the cat which were superpositions of alive and dead. She no longer would experience a world where there are definite outcomes to experiments. But Rule 1 doesn’t care whether she experiences definite outcomes or does not. We call this the preferred splitting problem.
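  For illustration, such a pair of states might look like the following, where the equal weights are an assumption chosen for simplicity:

  $$|\text{LEFT}\rangle = \frac{1}{\sqrt{2}}\big(|\text{GROUND}\rangle + |\text{EXCITED}\rangle\big), \qquad |\text{RIGHT}\rangle = \frac{1}{\sqrt{2}}\big(|\text{GROUND}\rangle - |\text{EXCITED}\rangle\big).$$

  Rewriting the combined state in terms of LEFT and RIGHT yields branches in each of which the cat is in a superposition of alive and dead, and Rule 1 gives no reason to prefer one decomposition over the other.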

  At first there seems to be an obvious answer to the preferred splitting problem: we must split the wave function so that the different branches describe situations in which macroscopic observers like the cat see definite outcomes.

  But this is tantamount to reintroducing Rule 2, because it gives what macroscopic observers see a special role. You have not solved the mystery of why macroscopic observers see definite outcomes. And by giving observers a special role, you give up on achieving a realist interpretation, which must be based on hypotheses about what is real in the absence of observers.

  ELEVEN

  Critical Realism

  There is a near consensus among the people who have examined the original version of the Many Worlds Interpretation, the version put forward by Everett and championed by Wheeler and DeWitt, that it fails as a realist approach to quantum theory. Either you make measurement special and give up on realism, or you face the big issues I raised. The most important of these are the preferred splitting problem and the question of where the theory contains the probabilities, and the related uncertainties, that experimentalists measure.

  So, can the project of giving a realist version of quantum theory, based only on the wave function evolving strictly according to Rule 1, be saved?

  In recent years some rather radical solutions have been offered to the two big puzzles—the preferred splitting problem and the question of where the probabilities come from. The preferred splitting problem is widely thought to have been solved by an idea called decoherence, which I will explain shortly. Ideas about the origin of probabilities mostly originated from a group of deep thinkers at Oxford, centered in its philosophy department. The new approach to probabilities was formulated by David Deutsch, and it has been extensively studied and developed by his Oxford colleagues.1

  Oxford has had a very smart group of philosophers of physics, and several of them have focused on making sense of Everett’s ideas. They have included Hilary Greaves, Wayne Myrvold, Simon Saunders, and David Wallace.* Together with Deutsch and a few others, they have put forward what has sometimes been called the Oxford interpretation of quantum mechanics.2 These proposals and the arguments offered in their support are both ingenious and subtle, but so are the objections made by several physicists and philosophers. Given the very high level of careful thought that has gone into these developments, I think it is fitting to call this an episode of critical realism.

  After many spirited and elaborate arguments, the project of making sense of a realist quantum theory based only on Rule 1 is still in progress. The issues are surprisingly intricate and elusive, and there is as yet no general agreement among experts as to what has been achieved. To make it even more complicated, the proponents disagree among themselves, so that among the five or six main initiators of this view, several different versions are defended, which differ in subtle but important ways from each other. Consequently, I can present only a rough introduction to the key ideas and issues behind this new “Oxford interpretation.”

  * * *

  —

  THE IDEA OF DECOHERENCE starts with the observation that a macroscopic system, such as a detector or an observer, is never isolated. Instead, it lives in constant interaction with its environment. The environment is made up of a vast number of atoms all moving about unpredictably hither and thither, and this introduces a big dose of randomness into the system. This random element affects the motions of the atoms which make up the detector. This, roughly speaking, leads the detector to lose its delicate quantum properties and behave as if it were described by the laws of classical physics.

  Consider what an observer can learn by looking at a detector. The observer is also a big object made of vast numbers of atoms, all in contact with a random environment. If we look at the detailed small-scale behavior of the atoms making up the detector and the observers, we will see chaos, as the picture will be dominated by the random motions of the individual atoms, both our atoms and those in the detector. To see any kind of coherent behavior we have to look at bulk, large-scale motions of relatively large pieces of the detector. These require averaging over the motions of myriad atoms. What emerges are bulk quantities which track macroscopic properties such as the color of a pixel or the position of a dial. Only these behave reliably and predictably.

  Indeed, these bulk quantities behave as if the laws of Newtonian physics are true. It is only when we focus on these bulk, large-scale quantities that we can perceive something irreversible to have happened, such as the recording of an image, in which each pixel comprises a vast number of atoms. And according to this picture, it is only when something irreversible has happened that we can say that a measurement has taken place.

  Decoherence is the name we give to the process by means of which irreversible changes emerge by averaging out the random chaos of the atomic realm. Decoherence is a very important feature of quantum theory, for it is why the bulk properties of large-scale objects, such as the rough motions of soccer balls, swing bridges, rocket ships, planets, and so forth, appear to have well-defined values, which obey the laws of Newtonian physics.

  The word “decoherence” refers to the fact that such bulk objects appear to have lost their wave properties, and so they behave as if they are simply made of particles. According to quantum mechanics, everything, including cats, soccer balls, and planets, has wave as well as particle properties. But for these bulk objects, the wave properties have been so randomized by their interactions with their chaotic environment that they cannot be accessed in any experiment, so the wave half of the wave-particle duality has been rendered mute, and the objects behave like ordinary particles.
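  In the standard textbook treatment, offered here as a sketch under simplifying assumptions rather than anything specific to Everett, this loss of wave properties shows up as the decay of the interference terms in the system’s density matrix. For a single two-state system with amplitudes $a$ and $b$, coupled to an environment with a characteristic decoherence time $\tau$ (the exponential form is itself an idealization), the state evolves roughly as

  $$\rho(t) = \begin{pmatrix} |a|^2 & a b^{*}\, e^{-t/\tau} \\ a^{*} b\, e^{-t/\tau} & |b|^2 \end{pmatrix}.$$

  The off-diagonal entries carry the interference, or wave, properties; as they decay away, the system becomes indistinguishable in any feasible experiment from a simple classical alternative between the two states. For macroscopic objects $\tau$ is fantastically short, which is why soccer balls and planets never show their wave side.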

 
