Quantum Reality


by Jim Baggott


  Note that in the de Broglie–Bohm interpretation, the wavefunction itself is very much part of the reality being described—it is not simply a convenient way of summarizing statistical behaviour derived from some kind of underlying, hidden reality. Consequently, it is not ruled out by the PBR no-go theorem. There is a statistical flavour, but this relates not to the wavefunction but to the (presumably random) spread of initial positions and velocities of the physically real particles that are guided by it. These initial conditions determine which paths the particles will subsequently follow. A statistical distribution of initial conditions gives rise to a statistical distribution over the available paths.
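
  For readers who would like to see the machinery, the dynamics can be stated compactly in the standard textbook notation (a conventional sketch, with symbols—ψ for the wavefunction, ρ for the distribution of initial positions—not used elsewhere in this book): the particle’s velocity is dictated by the wavefunction through the ‘guidance equation’, and the statistics enter only through the assumed spread of starting points,

\[
\frac{d\mathbf{x}}{dt} \;=\; \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi(\mathbf{x},t)}{\psi(\mathbf{x},t)}\right),
\qquad
\rho(\mathbf{x},0) \;=\; |\psi(\mathbf{x},0)|^{2}.
\]

If the initial positions are distributed according to |ψ|² (the so-called quantum equilibrium assumption), the Schrödinger evolution of ψ guarantees that they remain so distributed at all later times, which is why the theory reproduces the statistical predictions of quantum mechanics.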

  Bohm’s interest in the theory waned, but was resurrected in 1978 by the enthusiasm of Basil Hiley, his colleague at Birkbeck College in London, and the work of two young researchers, Chris Dewdney and Chris Philippidis. Seeing can sometimes be believing, and when Dewdney used de Broglie–Bohm theory to compute the hypothetical trajectories of electrons in a two-slit experiment (see Figure 15), the resulting picture provoked gasps of astonishment. In these simulations, each electron passes through one or other of the two slits, follows one of the predetermined paths, guided by the quantum potential, and is detected as a bright spot on the screen. As more electrons pass through the apparatus, the variation in their initial conditions means that they individually pass through different slits and follow different paths. The end result is a pattern of spots on the screen which reflects the grouping of the various paths—what we interpret as a two-slit interference pattern.

  Figure 15 Particle trajectories in a two-slit experiment as predicted by de Broglie–Bohm theory.
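
  The flavour of such a calculation can be captured in a few lines of code. The following is a minimal sketch (a toy illustration, not Philippidis and Dewdney’s actual computation): it models only the transverse coordinate of each electron as a superposition of two spreading Gaussian wave packets, one per slit, in units where ħ = m = 1, with purely illustrative slit positions and packet widths.

import numpy as np

def packet(x, t, x0, sigma=1.0):
    """Freely spreading Gaussian wave packet initially centred at x0 (hbar = m = 1)."""
    st = sigma * (1.0 + 1j * t / (2.0 * sigma**2))
    return (2.0 * np.pi * st**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4.0 * sigma * st))

def psi(x, t, d=5.0):
    # Superposition of packets emerging from two 'slits' at x = -d and x = +d
    return packet(x, t, -d) + packet(x, t, +d)

def velocity(x, t, eps=1e-6):
    # de Broglie-Bohm guidance equation, v = Im(dpsi/dx / psi), by finite differences
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2.0 * eps)
    return np.imag(dpsi / psi(x, t))

# A handful of initial positions clustered around each slit (a faithful calculation
# would sample them from |psi(x, 0)|^2)
x = np.concatenate([np.linspace(-7.0, -3.0, 8), np.linspace(3.0, 7.0, 8)])
dt, steps = 0.01, 2000
trajectories = [x.copy()]
for n in range(steps):
    x = x + dt * velocity(x, n * dt)     # simple Euler integration
    trajectories.append(x.copy())

# Plotting the columns of np.array(trajectories) against time shows the paths
# bunching into bands reminiscent of the fringes in Figure 15.

A faithful treatment would sample the initial positions from |ψ|² and use a more careful integrator, but even this crude version shows the trajectories bending into the bands that make up the interference pattern.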

  We can get some sense for how this might work from some recent, rather fascinating, experiments. John Bush and his colleagues at the Massachusetts Institute of Technology have reported on experiments in which a small oil droplet is bounced off the surface of a liquid.1 Each bounce creates a set of ripples in the liquid which overlap and interfere, and the resulting interference pattern guides the subsequent motion of the droplet. As the ripples pass through two adjacent openings in a barrier they produce the familiar two-slit interference pattern (though there are now some doubts about this).2 The ‘walking’ droplet then passes through one opening or the other, guided by the ripples to its destination. Although this is an entirely classical experiment, it is argued that these motions mimic the kinds of behaviours we might anticipate from de Broglie–Bohm theory, in which the oil droplet is replaced by a real quantum particle.

  But such classical experiments cannot mimic the quantum potential which, true to its nature, exhibits some very peculiar behaviours. To explore these, let’s return once more to our favourite quantum system consisting of particles prepared in a superposition of spin states ↑ and ↓. We pass these particles one at a time between the poles of a magnet. In de Broglie–Bohm theory, the quantum potential is split by the presence of the magnetic field into two equal but non-overlapping parts. One of these guides the particles upwards, and particles following this path will produce an ↑ result. The other guides the particles downwards, giving a ↓ result. The result we get depends on the initial conditions for each particle, but if a particle follows the upwards path, the quantum potential for the downward path doesn’t disappear. Instead it persists as an ‘empty wave’.
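
  In the conventional notation (again a sketch, not the formalism used in this book), the state entering the magnet can be written as

\[
\Psi(\mathbf{x},t) \;=\; a\,\phi_{\uparrow}(\mathbf{x},t)\,|\!\uparrow\rangle \;+\; b\,\phi_{\downarrow}(\mathbf{x},t)\,|\!\downarrow\rangle ,
\]

where the field steers the packet φ↑ upwards and φ↓ downwards until the two no longer overlap. The particle’s actual position ends up inside one packet or the other; the unoccupied packet is the ‘empty wave’.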

  We have to assume that the particle following the upwards trajectory between the poles of the magnet interacts with some kind of detection device. Although this is likely to be of classical dimensions, we once again acknowledge that it is composed of quantum entities, and the first stages of the interaction between the particle and the detector will be quantum in nature. What follows is decoherence, but not in the sense of a convenient mathematical device designed to eliminate the interference terms from a pre-probability, as in the decoherent histories interpretation. In de Broglie–Bohm theory, the quantum potential is a real wave or field, and so decoherence is required to be a real physical mechanism. We’ll examine the details later in this chapter.

  In de Broglie–Bohm theory, the wavefunction doesn’t physically collapse from all over space. The ‘empty waves’ persist, but these are no longer relevant to the measurement or the subsequent change in the state of our knowledge of the system. And the irreversible process of amplification from quantum to complex classical scales means that the measurement outcomes are decided long before we get to the scale of Schrödinger’s cat.

  This is all very fine, but once again such causal explanations come with a price tag, this one associated with the proliferation of empty waves. Of course, if empirical evidence for the existence of empty waves could be found, this would provide tremendous support for the theory. But, alas, we gain knowledge of the wavefunction only through measurements on the particles that are guided by it and, by definition, the empty waves are not associated with any particles—they’re empty.

  We’re not quite done yet. Let’s return again to the EPR experiment involving entangled atoms—A and B—formed in opposite spin states ↑ and ↓. Conservation of angular momentum demands that the spins must be opposed, such that if both magnets are orientated in the same direction (0°—see Figure 13), then the possible measurement outcomes are A↑B↓ and A↓B↑.
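
  In conventional notation such a pair is described by a single entangled wavefunction, for example the ‘singlet’ combination (a standard textbook form, not taken from this book):

\[
|\Psi\rangle_{AB} \;=\; \tfrac{1}{\sqrt{2}}\Bigl(|\!\uparrow\rangle_{A}\,|\!\downarrow\rangle_{B} \;-\; |\!\downarrow\rangle_{A}\,|\!\uparrow\rangle_{B}\Bigr),
\]

in which neither atom possesses a definite spin of its own; only the perfect anti-correlation between the two measurement outcomes is fixed.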

  Suppose atom A moves off to the left and passes through the poles of a magnet orientated at 0°. We record an ↑ result. According to de Broglie–Bohm theory, this act of measurement performed on atom A instantaneously changes the quantum potential, and a kind of non-local quantum ‘torque’ is exerted on atom B. This action sets up the spin of atom B so that it will be detected in a ↓ state. Theorist Peter Holland writes that3

  The act of measurement on [A] polarizes [B] (in the direction of the analyzing field acting on [A]) and in any subsequent measurement on [B], the results will come out in the way predicted by quantum mechanics.

  We see immediately that in this interpretation, non-locality really does imply action at a distance: ‘the “spooky action-at-a-distance” is embraced as a fact of life from the outset. In this way any possibility of conflict with the empirical content of quantum mechanics is avoided.’4

  We watch as the boulder of Sisyphus rolls down the hill. It reaches the bottom, where its motion is no longer influenced by the slope (the classical potential energy). If there’s nothing in the way, and we can assume there’s no friction to slow it down, then the boulder will trundle across the valley floor in a straight line at a constant speed according to Newton’s first law of motion. But, although the valley is flat, in de Broglie–Bohm theory the quantum potential doesn’t necessarily disappear here. The motion of the quantum boulder is still subject to some very spooky non-local influences, and Newton’s first law no longer applies. Suppose its entangled partner passes through a measuring device positioned over in the next valley, causing ripples through the quantum potential. Imagine watching the first boulder rolling along the valley floor, when for no apparent reason it is hit by some invisible quantum torque which changes both its speed and direction.

  De Broglie–Bohm theory restores causality and determinism. It eliminates the need to invoke a collapse of the wavefunction, but at the cost of accepting non-local spooky action at a distance. Make no mistake, this most definitely contradicts the spirit of Einstein’s special theory of relativity, although advocates argue that the faster-than-light transfer of information implied by the theory cannot be extracted in any experiment that is also consistent with quantum mechanics. Particles A and B do indeed ‘signal’ each other over vast distances, by exerting effects on each other through the quantum potential, but we can do nothing useful with this. In this sense, de Broglie–Bohm theory and special relativity can peacefully—though rather uneasily—coexist.

  Not surprisingly, Einstein was not enamoured of the approach Bohm had taken. In a letter to Born he wrote5

  Have you noticed that Bohm believes (as de Broglie did, by the way, 25 years ago) that he is able to interpret the quantum theory in deterministic terms? That way seems too cheap to me.

  Today the de Broglie–Bohm theory retains a small but dedicated following within the communities of concerned physicists and philosophers, but it remains firmly outside the mainstream of quantum physics and features in few textbooks on the subject. It is in all respects equivalent to quantum mechanics and yet it allows a profoundly different interpretation of events occurring at the quantum level, one which is much more in tune with our more intuitive metaphysical preconceptions about the way reality ought to work.

  It has been argued that the reason we teach the standard quantum mechanical formalism and not de Broglie–Bohm theory is one of historical contingency.6 Who can say what might have happened if de Broglie had not been dissuaded in 1927, and the grip of the Copenhagen orthodoxy had been less firm? Could some kind of pilot wave theory have become the standard, default interpretation? This is a potentially disturbing argument for anyone with an idealistic view of how science progresses. Disturbing because the choice between equivalent, competing rival interpretations for one of the most important foundational theories of physics might have been driven simply by the order in which things happened rather than more compelling arguments based on notions of truth or explanatory power.

  In 1982, Bell wrote7

  Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice?

  Frankly, I’m not so sure. Science through the ages has made a habit of stripping irrelevant or unnecessary metaphysical elements from theories that can be shown to work perfectly well without them. Examples include Ptolemy’s epicycles, phlogiston, the ether, and caloric, the substance once thought to be responsible for the phenomenon of heat. Even if something like de Broglie–Bohm theory had become established as the preferred interpretation in 1927, I suspect it is really too cumbersome to have survived in this form. Those utilitarian physicists less concerned about causality and determinism, and less obsessed with interpretation and meaning, would have quickly dispensed with the theory’s superfluous elements in the interests of more efficient calculation.

  If we are inclined to agree with Einstein and abandon this approach as ‘too cheap’, then we must acknowledge that we’re running out of options. This means abandoning all kinds of hidden variable theories, whether local, crypto non-local, or purely non-local. If we still want to continue to insist on a realist interpretation of the wavefunction, then we have to accept that we’re now in a bit of a bind.

  We have no choice but to attempt to pick away at one of the more stubborn conundrums of quantum mechanics and hope to at least resolve this by adding a further physical ingredient to the formalism. Even modest success might then provide some clues as to how we might resolve some of the others.

  We look again at the formalism and try to identify some vulnerability, some point of attack. And once again, the rather obvious place to focus our attention is the process of quantum measurement, the collapse of the wavefunction, and the ‘shifty split’ this implies between the quantum and classical worlds. Bell perfectly encapsulated the uneasiness we feel in any realistic interpretation of quantum measurement in an article published in 1990:8

  What exactly qualifies some physical systems to play the role of ‘measurer’? Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system … with a PhD?

  So let’s look a little more closely at quantum measurement. Consider a quantum system consisting of a large number of identical particles (such as photons or electrons). Physicists call such a collection an ensemble. Think of the particles acting together ‘in concert’, all playing from the same score just like an ensemble of musicians.

  We prepare the particles so that they’re all represented by a single total wavefunction which we can write as a superposition of ↑ and ↓, as before. These particles are still in a single quantum state: it just happens that this is a state represented by a superposition. Such an ensemble is then said to be in a pure state. We pass the particles through a measuring device, and the total wavefunction collapses randomly into a sequence of measurement outcomes. After all the particles have passed through the device (and we assume they haven’t been destroyed in the process) we would expect to have a 50:50 mixture of particles that are individually in the ↑ or ↓ state. The measurement has transformed the ensemble from a pure state into a mixture. Von Neumann showed that such a transformation is associated with an increase in the entropy of the system.
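
  The arithmetic behind von Neumann’s observation is easy to check numerically. The short sketch below (an illustration in Python, using the standard formula S = −Tr(ρ ln ρ) rather than anything specific to this book) gives zero entropy for the pure superposition and ln 2 ≈ 0.693 for the 50:50 mixture.

import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of the density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # discard zero eigenvalues (0 ln 0 = 0)
    return float(-np.sum(evals * np.log(evals)))

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Pure state: every particle in the same equal superposition of up and down
superposition = (up + down) / np.sqrt(2)
rho_pure = np.outer(superposition, superposition)

# Mixture: half the particles up, half down, with no phase relationship between them
rho_mixed = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

print(von_neumann_entropy(rho_pure))    # ~0.0
print(von_neumann_entropy(rho_mixed))   # ~0.693 = ln 2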

  Entropy is a thermodynamic quantity that we tend to interpret rather crudely as the amount of ‘disorder’ in a system. For example, as a block of ice melts, it transforms into a more disordered, liquid form. As liquid water is heated to steam, it transforms into an even more disordered, gaseous form. The measured entropy of water increases as it transforms from solid to liquid to gas.

  The second law of thermodynamics claims that, in a spontaneous change, entropy always increases. If we take a substance—such as air—contained in a closed system, prevented from exchanging energy with the outside world, then the entropy of the air will increase spontaneously and inexorably to a maximum as it reaches internal equilibrium. It seems intuitively obvious that the oxygen and nitrogen molecules and trace atoms of inert gas that make up the air will not all huddle together in one corner of the room in which I’m writing these words. Instead, the air spreads out to give a reasonably uniform pressure (thank goodness). This is the state with maximum entropy.

  The second law ties entropy to the ‘arrow of time’, the experience that despite being able to move freely in three spatial dimensions—forward–back, left–right, up–down—we appear obliged to follow time in only one direction—forwards. Suppose we watch a video taken during a recent cocktail party. We watch as a smashed cocktail glass spontaneously reassembles itself from the wooden floor, refills with Singapore sling, and flies upwards through the air to return to a guest’s fingers. We would quickly conclude that this reversal of the second law of thermodynamics signals that the video is playing backwards in time.

  It would seem that quantum measurement, transforming a pure state into a mixture, is closely associated with entropy, and hence with the second law. From here it’s a short step to the arrow of time and the notion of irreversibility. Just as that smashed cocktail glass ain’t going to reassemble itself anytime soon, so we wouldn’t expect a mixture of ↑ and ↓ quantum states to reassemble spontaneously into a superposition.

  Bohr recognized the importance of the ‘irreversible act’ of measurement linking the quantum and classical worlds. Some years later, Wheeler wrote about an ‘irreversible act of amplification’. The rather obvious truth is that we gain information about the quantum world only when we can amplify elementary quantum events and turn them into perceptible signals, such as the deflection of a gauge pointer. Perhaps a logical place to seek a physical collapse of the total wavefunction is right here, during this act. Schrödinger’s cat is then spared the discomfort of being both dead and alive, because the irreversible act of amplification associated with registering a radioactive emission by the Geiger counter has already settled the matter, one way or the other.

  The physicist Dieter Zeh was the first to note that the interaction of a wavefunction with a measuring apparatus and its ‘environment’ leads to rapid, irreversible decoupling of the components in a superposition in such a way that interference terms are suppressed. This is decoherence, which we first encountered in Chapter 6, but once again it is required to operate in a real physical sense, since we now want to apply it to a real wavefunction.

  Let’s stop and think about how this is supposed to work. As we’ve seen, the first step in a measurement interaction leads to the entanglement of whatever it is that is doing the interacting within the total wavefunction. This is followed by another interaction, then another, and another. We could suppose that this sequence continues until the classical measuring device is completely entangled within the total wavefunction, leading to the possibility of superpositions of, and interference between, classical-sized objects—such as a gauge pointing in two different directions at once, or a cat that is at once both alive and dead. Now, this would imply that the entanglement of the classical measuring device with the quantum system is entirely coherent.

  But, of course, with some exceptions which I’ll describe below, we never see superpositions of classical objects. The argument goes that as the sequence of interactions becomes more and more complex, the coherence required to maintain the integrity of the interference terms is quickly lost. The interference terms are essentially diluted away, and the wavefunction settles randomly on one outcome or the other. This is to all intents and purposes irreversible.

  In this scenario the wavefunction doesn’t ‘collapse’ instantaneously, as such. The time required for this kind of physical decoherence to take effect is obviously related to the size of the system under study and the number of particles in the device and the environment with which it interacts. The smaller the ‘decoherence time’, the faster the wavefunction loses coherence and evolves, together with the device and its environment, into what we recognize as a classical system.
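
  In the density-matrix language usually used to describe this (a simplified model in which exponential decay is assumed, with τ_d standing for the decoherence time), the effect of the environment is to suppress the off-diagonal ‘interference’ terms:

\[
\rho(t) \;=\;
\begin{pmatrix}
|a|^{2} & a\,b^{*}\,e^{-t/\tau_{d}} \\
a^{*}\,b\,e^{-t/\tau_{d}} & |b|^{2}
\end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix}
|a|^{2} & 0 \\
0 & |b|^{2}
\end{pmatrix}
\quad \text{for } t \gg \tau_{d},
\]

where a and b are the amplitudes of the ↑ and ↓ components. Once the exponential factors have died away, what remains is indistinguishable, for all practical purposes, from a classical mixture.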

  Let’s put this into some kind of perspective. A large molecule with a radius of about a millionth (10⁻⁶) of a centimetre moving through the air is estimated to have a decoherence time on the order of a millionth of a trillionth of a trillionth (10⁻³⁰) of a second.9 This means that the molecule becomes ‘classical’ within an unimaginably short time. If we remove the air and observe the molecule in a laboratory vacuum, we reduce the number of possible interactions and so we might increase the estimated decoherence time to 10⁻¹⁷ seconds, which is getting large enough to be imaginable (and potentially measurable). Placing the molecule in intergalactic space, where it is exposed only to the photons which constitute the cosmic microwave background radiation, increases the estimated decoherence time to 10¹² seconds, meaning that the molecule may persist as a quantum system (for instance, in a superposition state) for a little under 32,000 years.

 
