Einstein's Unfinished Revolution

by Lee Smolin


  As a result, if one could follow the trajectories of the individual quantum particles, one could see that entangled particles are influencing each other nonlocally (i.e., at a distance). Because we normally measure only average positions and average motions, this incessant nonlocal influence is washed out by the randomness of the quantum motions. But it is there explicitly in the way the wave function guides the particles, and one can contemplate experiments which might be able to observe it.

  The alert reader may be hearing alarm bells going off. This nonlocal communication of forces over a distance requires us to objectively speak of events that are distant from each other, but are nonetheless simultaneous. Such an instantaneous effect at a distance directly contradicts special relativity, which tells us that there is no absolute notion of simultaneity for distant events. This is indeed a problem, and as a result there is a tension between special relativity and pilot wave theory.

  In particular, the guidance equation, which is the source of the nonlocal forces, is inconsistent with relativity. It requires for its definition a preferred frame of reference, which defines an absolute notion of simultaneity. In practice, the conflict is blunted because the randomness of quantum physics implies that, so long as one stays in quantum equilibrium,* one cannot directly observe the nonlocal correlations in an experiment. Nor can we send information faster than light. If we don’t look too closely at what is happening in individual systems, pilot wave theory maintains an uneasy coexistence with relativity. But then again, the whole point of pilot wave theory is that it enables us to look more closely.
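The nonlocality the author describes can be seen directly in the standard de Broglie–Bohm guidance equation, given here in its textbook form (not quoted from this chapter):

```latex
% Standard de Broglie--Bohm guidance equation (textbook form).
% Psi is the wave function of all N particles; q_k is particle k's position.
\frac{d\mathbf{q}_k}{dt}
  = \frac{\hbar}{m_k}\,
    \operatorname{Im}\!\left(
      \frac{\nabla_k \Psi}{\Psi}
    \right)\Bigg|_{\mathbf{q}_1(t),\,\dots,\,\mathbf{q}_N(t)}
```

Because the right-hand side is evaluated at the positions of all N particles at the same instant, the velocity of any one particle depends on where every other particle is at that moment. That is exactly why the equation needs a preferred notion of simultaneity.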

  At the present time there is work in progress aimed at extending pilot wave theory to relativistic field theory, so we cannot give a definitive picture as to how this tension between relativity and pilot wave theory resolves.2

  WAVE-FUNCTION COLLAPSE

  The spontaneous collapse hypothesis also serves us well as a realist description of the quantum world in terms of beables. According to this picture, there are no particles—only waves—but those waves occasionally interrupt their smooth flow to suddenly collapse into particle-like concentrations. From there, the wave flows and spreads out again. Because the wave has this peculiar behavior, it mimics particles when needed, and thus is the only beable.

  The collapse models also solve the measurement problem, because the collapse of the wave function is posited to be a real phenomenon. For atomic systems this is rare. But the rate of collapse grows rapidly with the size and complexity of the system, so there is no chance for superpositions and entanglements to survive for macroscopic systems. Superpositions and entanglements are destroyed by the collapses, and so are limited to the atomic domain. This solves the measurement problem, because the wave functions of the measuring instruments are always collapsed somewhere definite. It also gets rid of the ghost branches.
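For a sense of the numbers, the original Ghirardi–Rimini–Weber (GRW) model uses the following standard ballpark figures (they are not taken from this chapter):

```latex
% GRW collapse rates: each particle collapses at a tiny rate lambda_1,
% but the rates add across the N particles of a system.
\lambda_1 \sim 10^{-16}\ \mathrm{s}^{-1},
\qquad
\lambda_N \approx N\,\lambda_1
```

A single atom therefore waits on the order of a hundred million years between collapses, while a macroscopic apparatus with roughly $10^{23}$ particles collapses within about $10^{-7}$ seconds, which is why superpositions never survive at the macroscopic scale in these models.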

  The pilot wave theory and spontaneous collapse models are not just two different interpretations of quantum mechanics. They are distinct theories, which each make some predictions that differ from those of quantum mechanics. Yet when it comes to the behavior of atoms and molecules, they agree with each other, and with conventional quantum mechanics, to much better precision than the experiments can detect. So, up until this point they cannot be distinguished experimentally from each other or from quantum mechanics. Pilot wave theory, however, predicts that superposition and entanglement are universal and should be in principle detectable in any system, no matter how large or complex. This is challenging to test experimentally, because one has to fight the tendency for a system of many particles to decohere, as the many interactions with the system’s environment randomize the phases* of the wave function. In principle it can be done, and, indeed, experimentalists are continually expanding the domain of quantum phenomena.

  But if the wave function undergoes spontaneous collapse, as soon as that happens the game is up. If spontaneous collapse is right, no experimentalist will ever be able to superpose two wave functions of a large, complex system.

  Another difference between spontaneous collapse and pilot wave theory lies in their attitude toward time. The laws of pilot wave theory are reversible in time, just like the laws of Newtonian dynamics. Spontaneous collapse is irreversible, like the laws of thermodynamics.

  The theories of wave-function collapse have some of the same drawbacks as pilot wave theory. In particular, the collapse is instantaneous, but takes place everywhere at once, creating a severe conflict with relativity. As with pilot wave theory, the precise law requires a preferred frame of reference to be specified and therefore contradicts relativity theory. And, as in that case, there is some work that indicates that the conflict can be managed, so that in the domain where the theory agrees with quantum mechanics, the violations of relativity theory are very small.

  Another drawback of some collapse models is the fact, already mentioned, that energy is not conserved. Still another is that this defect can be minimized by tuning a free parameter. To my understanding, the ability to tune parameters to ensure agreement with an experiment is a weakness, as it suggests the theory is contrived to hide an essential tension in its construction.

  Indeed, collapse models come in several versions, and there is some freedom to modify them and tune new parameters. That is why they are called models, while pilot wave theory, having no freedom to adjust anything, is a theory.

  Among the various issues we have discussed, it is impressive that all the hidden variable theories which have been proposed conflict with special relativity. The reason is simple. If one wants a complete description of individual processes, that description must, because of the experimental tests of Bell’s restriction, be nonlocal, and that requires a preferred simultaneity. Averaging over individual cases gives one probabilities, and since these agree with the probabilities predicted by quantum mechanics, there is no manifest contradiction with special relativity, because information cannot be sent faster than light. But for a realist the conflict is nonetheless present because reality is made of individual cases. We see this clearly in pilot wave theory and in spontaneous collapse models.

  Nor can one escape this dilemma by giving up the ambition of going beyond quantum mechanics, for the conflict is present in quantum mechanics itself. When the wave function collapses following Rule 2, it does so everywhere at once.

  No problem in physics has given me more pain, and kept me up more nights, than this conflict between commonsense realism applied to the atomic domain and the principles of special relativity.

  To my mind, the most important reason to be skeptical about both pilot wave theory and collapse models is that they make little contact with the other big questions in physics, such as quantum gravity and unification.

  At minimum, both approaches provide proof of concept that we can be realists about quantum physics. But neither has the ring of truth. There is more work to do to discover a realist completion of quantum mechanics that avoids the pitfalls of the existing theories while offering solutions to the other key questions in physics, and so gives us a platform on which to rebuild physics.

  * * *

  —

  THERE HAVE BEEN SOME new proposals of realist quantum theories, none of which are, to my mind, completely convincing either. But they contain some intriguing ideas.

  RETROCAUSALITY

  A recent realist approach to quantum mechanics is retrocausality, which supposes that causal effects can go backward as well as forward in time. Usually the effect follows the cause, but, the proponents of this view argue, sometimes the effect precedes the cause. By zigzagging backward and forward in time, a chain of causations can appear nonlocal, as we see in figure 10. The trick is easy. If we can go backward in time at light speed, and then forward, we can end up at an event simultaneous with, but far from, where we started. So in a theory with causation running toward both the future and the past, we can aim to explain nonlocality and entanglement.
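The zigzag can be checked with simple light-cone arithmetic. In this illustration (mine, not the book's) events are written as $(t, x)$ and units are chosen so that $c = 1$:

```latex
% Zigzag through spacetime: back along one light ray, forward along another.
(0,\,0)
  \;\xrightarrow{\ \text{backward at light speed}\ }\;
(-T,\,T)
  \;\xrightarrow{\ \text{forward at light speed}\ }\;
(0,\,2T)
```

The starting and ending events both sit at $t = 0$, so they are simultaneous, yet they are separated by a distance $2T$. No signal ever exceeds light speed along either leg, but the net effect looks like instantaneous action at a distance.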

  This kind of approach has been advocated by Yakir Aharonov3 and colleagues. Another version, called the transactional interpretation, has been proposed by John Cramer and Ruth Kastner.4 Huw Price has published an argument that any time-symmetric version of quantum mechanics must rely on retrocausality.5

  FIGURE 10. RETROCAUSALITY The two atoms travel to the future, one to the left and one to the right. But a causal influence can travel from the location marked atom B back to the point in the past from which the atoms originated, and then forward to the point at atom A. Thus the effect at atom A appears to be simultaneous with its cause at atom B.

  APPROACHES BASED ON HISTORIES

  An ancient idea holds that what is fundamentally real is not things, but processes; not states, but transitions. This bold idea underlies several approaches to quantum physics. They arise from a discovery Richard Feynman made while he was still a PhD student. Feynman formulated an alternative way of expressing quantum mechanics that eschews the description of nature in which quantum states change continuously in time. Instead, we compute the probability for the system to make a transition between an earlier configuration and a later configuration. We do this by considering all the possible histories that might have taken the system between the two configurations. The theory assigns to each history a quantum phase,* and to find the wave function for the transition, we add up these phases for all the possible histories. Then we take the square to get the probability, as in Born’s rule.
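The recipe just described is Feynman's sum over histories, which in its standard form (not quoted from the text) reads:

```latex
% Feynman's sum over histories: each history h contributes a pure phase
% fixed by its action S[h]; the amplitude is squared to give a probability.
A(a \to b) = \sum_{\text{histories } h} e^{\,i S[h]/\hbar},
\qquad
P(a \to b) = \bigl|\, A(a \to b) \,\bigr|^2
```

The phases are summed before squaring, as in Born's rule, and that ordering is what lets distinct histories interfere with one another.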

  As Feynman proposed it, this is just a scheme to calculate probabilities in quantum mechanics. But Rafael Sorkin proposes that this is the basis of a realist quantum theory, in which the beables are histories. The catch (you should know by now there always is a catch) is that one has to use a nonstandard quantum logic to talk about what is real about those histories.6

  A very different use is made of histories by Murray Gell-Mann and James Hartle,7 who maintain that the reality we experience is just one of many equally consistent and equally real histories. The idea is that if different histories decohere, they can’t be superposed; thus they can be thought of as alternative histories. Gell-Mann and Hartle, along with Robert Griffiths and Roland Omnès, formulated this idea as the consistent histories approach to quantum mechanics.8 A key result of this approach was that a history obeying Newton’s laws of classical physics would be part of a family that would decohere. These decoherent histories could be treated as if they were alternative real histories. However, the converse was shown not to be the case by Fay Dowker and Adrian Kent, who demonstrated that there are many classes of histories that decohere which are not related to Newtonian physics.9

  None of these history-based theories satisfy my desire to have a naively realist description of the world. I have nothing against a realism in which what is real is processes rather than states, happen-ables rather than beables. But in the approaches I’ve just mentioned, you end up computing not what happened, but only the probabilities for what happened. And the histories posited by the theory and the probabilities we observe are always related by Born’s rule, which suggests that those histories represent possibilities and not actualities.

  MANY INTERACTING CLASSICAL WORLDS

  Here is another contemporary realist formulation of quantum physics.10 Assume that our world is classical, but it is just one of a very large number of classical worlds, which exist simultaneously. These worlds are similar to each other, in that they have the same numbers and kinds of particles. But they differ as to the positions and trajectories of the particles.

  All these worlds obey Newton’s laws, with a single change, which is that, in addition to the usual forces between the particles in a single world, there is a new kind of force, which involves an interaction between the particles in the different worlds.

  When you throw a ball, it responds to the force from your arm as well as the gravitational attraction of the Earth. At the same time, a large number of similar copies of you, each in their own world, throws a ball. Each of these balls has a slightly different starting point and trajectory. The different balls reach out to each other from their separate worlds and interact with each other. These new, inter-world forces are tiny, but the result is that each ball is jiggled a bit as it travels. You only observe the ball in your universe, so you can’t account in detail for the jiggles. Thus there appears to be a random fluctuation which slightly disturbs the flight of your ball. The result is that you have to introduce a random, probabilistic element into any predictions you may make of your ball’s motion. This probabilistic description is quantum mechanics.

  This is called the many interacting worlds theory. To make it work out in detail, you have to choose the forces between the worlds very carefully. To get quantum mechanics to emerge, that force must be unlike any force we know about. It has to involve triplets of worlds, so there is a jiggle on your ball which depends on where two other balls are, each in their own worlds.
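The jiggling of the thrown ball can be sketched in a toy simulation. This is purely illustrative: the pairwise repulsion below is my stand-in, not the actual Hall–Deckert–Wiseman force law, which, as the text says, is derived from the quantum potential and involves triplets of worlds.

```python
import random

def throw_ball(n_worlds=20, steps=100, dt=0.01, eps=1e-6, soft=1e-2, seed=1):
    """Toy one-dimensional sketch of many interacting worlds.

    Each 'world' holds a copy of the same ball, thrown with a slightly
    different starting height and speed.  Besides ordinary gravity, every
    ball feels a tiny repulsion from its counterparts in the other worlds.
    NOTE: this pairwise force is an illustrative stand-in, not the actual
    many-interacting-worlds force law.
    """
    rng = random.Random(seed)
    x = [rng.gauss(1.0, 0.01) for _ in range(n_worlds)]        # heights
    v = [3.0 + rng.gauss(0.0, 0.01) for _ in range(n_worlds)]  # throw speeds
    for _ in range(steps):
        forces = []
        for i in range(n_worlds):
            f = -9.8  # the ordinary, in-world force (gravity)
            for j in range(n_worlds):
                if j != i:
                    d = x[i] - x[j]
                    # tiny, softened repulsion from world j's ball
                    f += eps * d / (d * d + soft) ** 2
            forces.append(f)
        # semi-implicit Euler step: forces use the old positions
        for i in range(n_worlds):
            v[i] += forces[i] * dt
            x[i] += v[i] * dt
    return x

# With the inter-world force switched off, trajectories differ only through
# their initial conditions; switching it on adds a small extra jiggle.
plain = throw_ball(eps=0.0)
jiggled = throw_ball(eps=1e-6)
```

Because you can observe only the ball in your own world, the accumulated effect of those tiny inter-world pushes looks like an unexplained random disturbance, which is the role quantum randomness plays in this picture.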

  One great advantage of this formulation is that it’s been extremely useful as a basis for detailed and highly accurate computer calculations of the chemistry of molecules.11

  I am not going to suggest we take this as a serious proposal about nature. But it serves as another example of a realist version of quantum physics.

  SUPERDETERMINISM

  Not everyone working on quantum foundations accepts the conclusion of Bell’s theorem that locality is violated in nature. There are several loopholes, most of which have been ruled out by experiment. One loophole which is not as easy to rule out is based on an idea called superdeterminism. Recall Aspect and colleagues’ experiment disproving Bell locality, which we talked about in chapter 4. Two observers, distant from each other, each choose a direction along which the polarization of the photon on their side will be measured. The proof that locality is violated relies on an assumption that these two choices are made independently.

  But, strictly speaking, the two events in which these choices are made are both in the causal future of some events in their past. We just have to go back in time far enough until we find events whose causal futures include both events when the choices of which polarization to measure were made. So we could include such events in the past, whose causal future includes the whole experiment, as necessary parts of the experiment. You could then imagine that the angles chosen on each side were both specified by someone very carefully setting up the initial conditions in the past of both. The philosophy of superdeterminism asserts that the universe evolves deterministically so that all such correlations were fixed long ago, in the big bang.

  Several physicists have proposed that if we assume that the initial state of the universe was chosen extremely delicately (by whatever agency can be recognized as setting the initial conditions), all the entangled pairs that would ever be measured could be determined to be set up in such a way as to mimic the results that are usually taken as confirming nonlocality. Those results then should be read as confirmations of superdeterminism rather than nonlocality. One is then free to propose a local hidden variable to explain quantum mechanics. Proposals like this have been made by Gerard ’t Hooft,12 among others.

  Gerard ’t Hooft is a truly great scientist, who in his twenties was singlehandedly responsible for a good portion of the key results that went into the construction of the standard model. I was very fortunate to take a course from him in graduate school, and I’ve always looked up to him personally. For many years he has been claiming to have constructed a deterministic and local hidden variable theory based on a cellular automaton, which is a model of a computer. If I understand correctly, it works for special cases; but he claims a more general validity based on an appeal to superdeterminism. But, details aside, between nonlocality and superdeterminism I am willing to bet that pursuing the former will bring us closer to the truth. I say this with some regret, as there are few theorists of his generation whom I admire more than Gerard ’t Hooft.

  GOING BEYOND PILOT WAVE THEORY AND COLLAPSE MODELS

  The conclusion I come to is that none of the proposals for a realist quantum theory that I’ve presented so far are entirely compelling. Some are captivating, but none have either experimental support or the kind of elegance or completeness that can, for a time, substitute for that decisive experiment. So if you want to join Einstein, de Broglie, Schrödinger, Bohm, and Bell, and go beyond the statistical description of quantum theory to a description of beables that will tell us what exactly is happening in each individual quantum process, stay with us, for we are not yet done.

  Are there lessons to take with us as we move beyond pilot wave theory and collapse models? Indeed there are. The most important lesson we can learn from the successes of the collapse models and pilot wave theory is that the wave function captures an element of physical reality. Let’s see how this conclusion comes about.

  The pilot wave theory asserts that everything in the universe has a dual existence—as a particle and as a wave. This solves the measurement problem because it keeps the particle. And it does so in a way that incorporates superposition, entanglement, and all their weird consequences because it keeps the wave. But is it right? I argued that impressive as it is, it has severe drawbacks. This brings us to our next option: to go beyond pilot wave theory to invent a new theory of beables.

  Pilot wave theory succeeds because it posits that both particles and waves are real. But is this really necessary? Might there be a theory that accomplishes what pilot wave theory does which doesn’t require the doubled ontology? This would also resolve the issue of the lack of reciprocity in the theory.

 
