Quantum Reality

by Jim Baggott


  What we’ve discovered is that quantum mechanics violates Bell’s inequality. It predicts that the extent of the correlation between atoms A and B can sometimes be greater, sometimes less, than any local hidden variable theory can allow.

  This is such an important result that it’s worth taking some time to recap, to ensure we understand how we got here. EPR sought to expose the incompleteness of quantum mechanics in a thought experiment involving a pair of entangled particles. If we adopt a realist interpretation of the wavefunction, and we assume that the particles are locally real and measurements on one can’t in any way influence the outcomes of measurements on the other, then something is surely missing. Bohm and Aharonov adapted this experiment and showed how it might provide a practical test. Bell went further, introducing a whole new level of deviousness and devising Bell’s inequality.

  Here, indeed, is a direct test: quantum mechanics versus local hidden variables. Which is right? Is Bell’s inequality violated in practice? This is more than enough reason to get back on board the Ship of Science and set sail for Empirical Reality.

  Bell wrote his paper in 1964 but, due to a mix-up, it wasn’t published until 1966.12 It took about another ten years for experimental science to develop the degree of sophistication needed to begin to produce some definitive answers.

  Although this kind of experimentation continues to this day, perhaps the most famous of these tests was reported in the early 1980s by Alain Aspect and his colleagues at the University of Paris. These were not based on entangled atoms and magnets. Instead they made use of pairs of entangled photons produced in a ‘cascade’ emission from excited calcium atoms.

  Like electrons, photons also possess spin angular momentum, but there’s a big difference. Photons are ‘force particles’. They carry the electromagnetic force and are called bosons (named for Satyendra Nath Bose), and have a spin quantum number of 1. Because photons travel at the speed of light, there are only two spin orientations which we associate with left-circularly (⭯) and right-circularly (⭮) polarized light, as judged from the perspective of the source of the light. Now, the outermost electrons in a calcium atom sit in a spherical orbit with their spins paired and zero angular momentum. So, when one of these absorbs a photon and is excited to a higher-energy orbit, it picks up a quantum of angular momentum from the photon. This can’t go into the electron’s spin, since this is fixed. It goes instead into the electron’s orbital motion, pushing it into an orbit with a different shape, from a sphere to a dumbbell—look back at Figure 6c.

  But if we now hit the excited calcium atom with another photon, we can excite the electron left behind in the spherical orbit also into the dumbbell-shaped orbit. There are now three possible quantum states, depending on how the spin and orbital angular momenta of the electrons combine together. In one of these the angular momentum cancels to zero.

  Although this state is very unstable, the calcium atom can’t simply emit a photon and return to the lowest-energy spherical orbit. This would involve a transition with no change in angular momentum, and there’s simply no photon for that. I suspect you can see where this is going. Instead, the atom emits two photons in rapid succession. One of the photons has a wavelength corresponding to green (we’ll call this photon A) and the other is blue (photon B). As there can be no net change in angular momentum, and angular momentum must be conserved, the photons must be emitted with opposite states of circular polarization.

  The photons are entangled.

  The advantage of using photon polarization rather than the spin of electrons or atoms is that we can measure the polarization of light in the laboratory quite easily using polarizing analysers, such as calcite crystals.13 We don’t need to use unwieldy magnets.

  One small issue. Polarizing analysers don’t measure the circular polarization states of photons; they measure horizontal (↔) or vertical (↕) polarization.* But that’s okay. A left- or right-circularly polarized photon incident on a linear polarizing analyser orientated vertically has a 50% probability of being transmitted. Likewise for an analyser orientated horizontally. And we know well enough by now that a total wavefunction expressed in a basis of left- and right-circular polarization states can be readily changed to a basis of horizontal and vertical polarization states.
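
  As a minimal sketch of that change of basis, written in standard bra–ket notation rather than anything used elsewhere in this book (sign and phase conventions vary between textbooks), the circular polarization states can be expressed as superpositions of the linear ones:

  $|L\rangle = \tfrac{1}{\sqrt{2}}\big(|H\rangle + i\,|V\rangle\big), \qquad |R\rangle = \tfrac{1}{\sqrt{2}}\big(|H\rangle - i\,|V\rangle\big)$

  so that $|\langle H|L\rangle|^2 = |\langle V|L\rangle|^2 = \tfrac{1}{2}$, which is just the 50% transmission probability described above.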

  Just as in Bell’s devious experiment with magnets, the analysers used to measure the polarization states of both photons A and B were mounted on platforms that could be rotated relative to one another. This experiment with entangled photons is entirely equivalent to Bell’s.

  One other important point. The detectors for each photon were placed 13 metres apart, on opposite sides of the laboratory. It would take about 40 billionths of a second for any kind of signal travelling at the speed of light to cross this distance. But the experiment was set up to detect pairs of photons A and B arriving within a window of just 20 billionths of a second. In other words, any spooky quantum influences passing between the photons—allowing measurements on one to affect the other—would need to travel faster than the speed of light.
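
  As a quick back-of-envelope check of those numbers, using the round figure of $3 \times 10^{8}$ metres per second for the speed of light:

  $t = \frac{d}{c} \approx \frac{13\ \text{m}}{3.0 \times 10^{8}\ \text{m s}^{-1}} \approx 4.3 \times 10^{-8}\ \text{s} \approx 43\ \text{ns}$

  which is roughly twice the 20-nanosecond detection window, so no signal travelling at or below the speed of light could connect the two measurements.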

  We’re now firmly on the shores of Empirical Reality, and we must acknowledge that the real world can be rather uncooperative. Polarizing analysers ‘leak’, so they don’t provide 100% accuracy. Not all the photons emitted can be ‘gathered’ and channelled into their respective detectors, and the detectors themselves can be quite inefficient, recording only a fraction of the photons that are actually incident on them. Stray photons in the wrong place at the wrong time can lead to miscounting the number of pairs detected.

  Some of these very practical deficiencies can be compensated for by extending the experiment to a fourth arrangement of the analysers, and writing Bell’s inequality slightly differently. For the particular set of arrangements that Aspect and his colleagues studied, Bell’s inequality places a limit for local hidden variables of less than or equal to 2. Quantum mechanics predicts a maximum of 2 times the square root of 2, or 2.828. Aspect and his colleagues obtained the result 2.697, with an experimental error of ±0.015, a clear violation of Bell’s inequality.14
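
  For readers who want to see the conventional statement of this ‘four-arrangement’ version (the CHSH form of Bell’s inequality, written here in standard notation rather than the book’s), the quantity tested combines the correlations measured at two settings $a, a'$ for one analyser and two settings $b, b'$ for the other:

  $S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \leq 2$

  for any local hidden variable theory, whereas quantum mechanics allows $|S|$ to reach $2\sqrt{2} \approx 2.828$ for suitably chosen settings. The figures quoted above are values of this quantity $S$.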

  These results are really quite shocking. They confirm that if we want to interpret the wavefunction realistically, the photons appear to remain mysteriously bound to one another, sharing a single wavefunction, until the moment a measurement is made on one or the other. At this moment the wavefunction collapses and the photons are ‘localized’ in polarization states that are correlated to an extent that simply cannot be accounted for in any theory based on local hidden variables. Measuring the polarization of photon A does seem to affect the result that will be obtained for photon B, and vice versa, even though the photons are so far apart that any communication between them would have to travel faster than the speed of light.

  Of course, this was just the beginning. For those physicists with deeply held realist convictions, there just had to be something else going on. More questions were asked: What if the hidden variables are somehow influenced by the way the experiment is set up? This was just the first in a series of ‘loopholes’, invoked in attempts to argue that these results didn’t necessarily rule out all the local hidden variable theories that could possibly be conceived.

  Aspect himself had anticipated this first loophole, and performed further experiments to close it off. The experimental arrangement was modified to include devices which could randomly switch the paths of the photons, directing each of them towards analysers orientated at different angles. This prevented the photons from ‘knowing’ in advance along which path they would be travelling, and hence through which analyser they would eventually pass. This was equivalent to changing the relative orientations of the two analysers while the photons were in flight. It made no difference. Bell’s inequality was still violated.15

  The problem can’t be made to go away simply by increasing the distance between the source of the entangled particles and the detectors. Experiments have been performed with detectors located in Bellevue and Bernex, two small Swiss villages outside Geneva almost 11 kilometres apart.16 Subsequent experiments placed detectors in La Palma and Tenerife in the Canary Islands, 144 kilometres apart. Bell’s inequality was still violated.17

  Okay, but what if the hidden variables are still somehow sensitive even to random choices in the experimental setup, simply because these choices are made on the same timescale? In experiments reported in 2018, the settings were determined by the colours of photons detected from quasars, the active nuclei of distant galaxies. The random choice of settings was therefore already made nearly eight billion years before the experiment was performed, as this is how long it took for the trigger photons to reach the Earth. Bell’s inequality was still violated.18

  There are other loopholes, and these too have been closed off in experiments involving both entangled photons and ions (electrically charged atoms). Experiments involving entangled triplets of photons performed in 2000 ruled out all manner of locally realistic hidden variable theories without recourse to Bell’s inequality.19

  If we want to adopt a realistic interpretation, then it seems we must accept that this reality is non-local or, at the very least, it violates local causality.

  But can we still meet reality halfway? In these experiments, we assume that the properties of the entangled particles are governed by some, possibly very complex, set of hidden variables. These possess unique values that predetermine the quantum states of the particles and their subsequent interactions with the measuring devices. We further assume that the particles are formed with a statistical distribution of these variables determined only by the physics and not by the way the experiment is set up.

  Local hidden variable theories are characterized by two further assumptions. In the first, we assume (as did EPR) that the outcome of the measurement on particle A can in no way affect the outcome of the measurement on B, and vice versa. In the second, we assume that the setting of the device we use to make the measurement on A can in no way affect the outcome of the measurement on B, and vice versa.
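
  These two assumptions are often summarized in a single factorization condition on the measurement probabilities (a standard way of putting it, not drawn from the book): for hidden variables $\lambda$ and settings $a$ and $b$,

  $p(A, B \mid a, b, \lambda) = p(A \mid a, \lambda)\, p(B \mid b, \lambda)$

  where the fact that each factor depends only on its own setting, and not on the other particle’s outcome or setting, encodes both assumptions at once. Bell’s inequality follows from this factorization.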

  The experimental violation of Bell’s inequality shows that one or other (or both) of these assumptions is invalid. But, of course, these experiments don’t tell us which.

  In a paper published in 2003, Nobel laureate Anthony Leggett chose to drop the setting assumption. This admits that the behaviour of the particles and the outcomes of subsequent measurements are influenced by the way the measuring devices are set up. This is still all very spooky and highly counterintuitive:20

  nothing in our experience of physics indicates that the orientation of distant [measuring devices] is either more or less likely to affect the outcome of an experiment than, say, the position of the keys in the experimenter’s pocket or the time shown by the clock on the wall.

  By keeping the outcome assumption, we define a class of non-local hidden variable theories in which the individual particles possess defined properties before the act of measurement. What is actually measured will of course depend on the settings, and changing these settings will somehow affect the behaviour of distant particles (hence, ‘non-local’). Leggett referred to this broad class of theories as ‘crypto’ non-local hidden variable theories. They represent a kind of halfway house between strictly local and completely non-local.

  He went on to show that dropping the setting assumption is in itself still insufficient to reproduce all the results of quantum mechanics. Just as Bell had done in 1964, he derived an inequality that is valid for all classes of crypto non-local hidden variable theories but which is predicted to be violated by quantum mechanics. At stake then was the rather simple question of whether quantum particles have the properties we assign to them before the act of measurement. Put another way, here was an opportunity to test whether quantum particles have what we might want to consider as ‘real’ properties before they are measured.

  The results of experiments designed to test Leggett’s inequality were reported in 2007 and, once again, the answer is pretty unequivocal. For a specific arrangement of the settings in these experiments, Leggett’s inequality demands a result which is less than or equal to 3.779. Quantum mechanics predicts 3.879, a violation of less than 3%. The experimental result was 3.8521, with an error of ±0.0227. Leggett’s inequality was violated.21 Several variations of experiments to test Leggett’s inequality have been performed more recently. All confirm this general result.
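
  (For the record, that margin is simply $(3.879 - 3.779)/3.779 \approx 2.6\%$, the amount by which the quantum prediction exceeds the crypto non-local bound.)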

  It would seem that there is no grand conspiracy of nature that can be devised which will preserve locality. In 2011, Matthew Pusey, Jonathan Barrett, and Terry Rudolph at Imperial College in London published another ‘no-go’ theorem.22 In essence, this says that any kind of hidden variable extension in which the wavefunction is interpreted purely statistically cannot reproduce all the predictions of quantum mechanics.

  The ‘PBR theorem’, as it is called, sparked some confusion and a lot of debate when it was first published.23 It was positioned as a theorem which rules out all manner of interpretations in which the wavefunction represents ‘knowledge’ in favour of interpretations in which the wavefunction is considered to be real. But ‘knowledge’ here is qualified as knowledge derived from the statistics of whatever it is that is assumed to underlie the physics and which is further assumed to be objectively real. Whilst it rules in favour of realist interpretations that are not based on statistics, it does not rule out the kinds of anti-realist interpretations which we considered in Chapters 5 and 6.

  We should note in passing that whilst these experiments have all but ruled out local and crypto non-local hidden variable theories, they also underline quite powerfully how realistic interpretations have provided compelling reasons for the experimentalists to roll up their sleeves and get involved. In this case, the search for theoretical insight and understanding, in the spirit of Proposition #4 (see the Appendix), has encouraged some truly wonderful experimental innovations. The relatively new scientific disciplines of quantum information, quantum computing, and quantum cryptography have derived in part from efforts to resolve these foundational questions and to explore the curious phenomenon of entanglement, even though the search for meaningful answers has so far proved fruitless.

  But we must now confront the conclusion from all the experimental tests of Bell’s and of Leggett’s inequalities. In any realistic interpretation in which the wavefunction is assumed to represent the real physical state of a quantum system, the wavefunction must be non-local or it must violate local causality.

  Okay, so let’s see what that means.

  * Actually, modulus-…. okay—you’ve got it now, so I’ll stop.

  * Think about it like this. Make a Möbius band by taking a length of tape, twisting it once and joining the ends together so the band is continuous and seamless. What you have is a ring of tape with only one ‘side’ (it doesn’t have distinct outside and inside surfaces). Now picture yourself walking along this band. You’ll find that, to get back to where you start, you need to walk twice around the ring.

  * This is known as a Stern–Gerlach apparatus, named for physicists Otto Stern and Walter Gerlach, who demonstrated the effect in 1922 using silver atoms. A beam of silver atoms passed between the poles of a magnet splits into two equal halves—one half is bent upwards towards the north pole, the other downwards towards the south pole—consistent with a random (50:50) alignment of the spins of the atoms’ outermost electron, ↑ and ↓.

  * Polaroid sunglasses reduce glare by filtering out horizontally polarized light.

  8

  Quantum Mechanics is Incomplete

  So We Need to Add Some Other Things

  Pilot Waves, Quantum Potentials, and Physical Collapse Mechanisms

  Einstein was not alone in searching for ways to reintroduce causality and determinism in a realistic interpretation of quantum mechanics. De Broglie was looking, too, and at the fifth Solvay Conference in Brussels in 1927 he presented his own ‘double solution’ theory, involving both pilot waves and ‘probability waves’. But if de Broglie had been hoping for support from Einstein at the conference, he was disappointed. Other than suggesting that de Broglie was searching in the right direction, Einstein remained impassive.

  De Broglie’s theory quickly evolved into a more familiar pilot wave theory, in which the wavefunction guides the paths of physically real particles. But further discussions (most notably with Pauli) raised doubts in his own mind about its validity and, by early 1928, he had all but abandoned it. He did not include it in a course on wave mechanics he taught at the Faculté des Sciences in Paris later that year. In fact, de Broglie became a convert to the Copenhagen orthodoxy.

  Bohm’s encounter with Einstein in 1951 encouraged him to look again and think more deeply. As I explained in Chapter 4, the Copenhagen interpretation is formally embedded in the standard quantum formalism through the structure of its axioms, and especially Axiom #1 (for a reminder, see the Appendix). Rather than accept this at face value, Bohm decided to explore the possibility that other descriptions and hence other interpretations are conceivable in principle.

  He began by reworking Schrödinger’s wave equation, assuming the existence of a real particle following a real path through space, its motion tied to the wave through the imposition of a ‘guidance condition’ which determines its velocity. What this means is that the motion of the particle is governed by the classical potential energy of the system—the steepness of the hill in my earlier Sisyphus analogy—and a second, so-called quantum potential. The latter is firmly non-classical and non-local and is alone responsible for the introduction of quantum effects in what would otherwise be an entirely classical description.
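
  In outline, and in standard notation rather than anything reproduced from the book: writing the wavefunction in polar form, $\psi = R\,e^{iS/\hbar}$, and substituting into Schrödinger’s equation splits it into two real equations. One of them looks like a classical equation of motion in which the particle feels the classical potential plus an extra term,

  $Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R}$

  the quantum potential, while the guidance condition fixes the particle’s velocity as $v = \nabla S/m$. All of the distinctively quantum behaviour is carried by $Q$.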

  Soon afterwards, Bohm realized that he had rediscovered de Broglie’s pilot wave theory, and the approach is now known variously as de Broglie–Bohm theory or Bohmian mechanics. It was Bell’s interest in this theory that led him to devise his theorem and his inequality. He wanted to know whether, in any hidden variable interpretation, non-locality is inevitable (it is).

 
