In subsequent experiments the physicists modified their arrangement to include devices which could switch the paths of the photons, directing each of them towards two differently orientated polarizing filters. This prevented the photons from ‘knowing’ in advance along which path they would be travelling, and hence through which filter they would eventually pass. This was equivalent to changing the relative orientations of the two polarizing filters while the photons were in flight.
The physicists obtained the result 2.404±0.080, once again in clear violation of the generalized form of Bell’s inequality.
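To see the size of that violation, it helps to spell out the arithmetic. The generalized form of Bell’s inequality used in these experiments (the CHSH inequality) caps a particular combination of correlations, S, measured for two filter settings on each side, at 2 for any local hidden variable theory, while quantum theory allows values up to 2√2 ≈ 2.828. A sketch of the numbers, using the quoted result:

\[
S = \left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2 \quad \text{(local hidden variables)}
\]
\[
\frac{2.404 - 2}{0.080} \approx 5
\]

The measured value falls short of the quantum maximum (real polarizers and detectors are imperfect), but it exceeds the local-realistic bound by roughly five times the experimental error.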
Similar experiments were carried out in August 1998 by a research group from the University of Geneva. They measured pairs of entangled photons using detectors positioned in Bellevue and Bernex, two small Swiss villages outside Geneva almost 11 kilometres apart. The results showed a clear violation of Bell’s inequality. This suggests that any spooky action-at-a-distance would need to propagate from one detector to another with a speed at least twenty thousand times that of light.
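That speed estimate follows from simple arithmetic: if the two measurements are simultaneous to within some timing uncertainty Δt, any influence crossing the distance d between the detectors must travel at v ≥ d/Δt. As an illustrative sketch only (the actual timing window is not quoted here), the factor of twenty thousand implies:

\[
v \ge \frac{d}{\Delta t}, \qquad v \ge 2\times10^{4}\,c \;\Rightarrow\; \Delta t \lesssim \frac{1.1\times10^{4}\ \text{m}}{2\times10^{4}\times 3\times10^{8}\ \text{m/s}} \approx 2\ \text{ns}
\]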
Quantum entanglement has opened up intriguing possibilities in quantum information processing, cryptography and teleportation (‘Beam me up, Scotty’ for photons). Such possibilities are based inherently on the kind of non-local spookiness required to breach Bell’s inequality and which Einstein had hoped to avoid. In May 2012, a team of physicists from various institutes in Austria, Canada, Germany and Norway, led by Austrian Anton Zeilinger, reported successful teleportation of photons from La Palma in the Canary Islands to Tenerife, 143 kilometres distant.18
There is no escaping the conclusion. Reality at the quantum level is decidedly non-local.
Testing non-local hidden variable theories
But the reality advocated by the proponents of hidden variable theories does not have to be a local reality. The influences of the hidden variables could be non-local. This would still leave us with an action-at-a-distance that is somewhat spooky, but at least it would get rid of the collapse of the wavefunction and the inherent quantum ‘chanciness’ that this implies.
Like Bell, Anthony Leggett was also rather distrustful of the Copenhagen interpretation of quantum theory. He understood that local hidden variable theories (of the kind that Bell had considered) are constrained by two important assumptions. In the first, we assume that whatever result we get for photon A, this can in no way affect the result of any simultaneous or subsequent measurement on the distant photon B, and vice versa.
The second assumption is rather subtle. We assume that however we set up the apparatus to make the measurement on photon A, this can in no way affect the result we get for photon B, and vice versa. Remember, in response to the challenge posed by Einstein, Podolsky and Rosen, Bohr argued that the properties and behaviour of photon B are defined by the way we set up the measurement on photon A.
We know from the experimental tests of Bell’s inequality that one or other or both of these assumptions must be wrong and that something has to give. But the experiments do not tell us which of them is invalid. Leggett wondered what would happen if, instead of abandoning both assumptions, we keep the ‘result’ assumption but relax the ‘set-up’ assumption.
In essence, relaxing the set-up assumption means that the behaviour of the photons and the results of measurements can be influenced by the way we set up our measuring devices, just as Bohr had argued. This is still pretty weird. It requires some kind of curious, unspecified non-local influence to be exerted by the choices we make in a possibly very distant laboratory.
In the context of our coin analogy, keeping the result assumption means that the result we get for coin B cannot depend on the result we get for coin A. We’re still assuming that the faces of both coins are fixed at the moment they split apart. However, relaxing the set-up assumption means that the result we get for coin B can be influenced by how we look at coin A to see what result we got.
We can be reasonably confident that Einstein wouldn’t have liked it.
By keeping the result assumption, Leggett defined a class of what he called ‘crypto’ non-local hidden variable theories. The most important thing to note about this class of theories is that the individual quantum particles are assumed to possess defined properties before we measure them. What we actually measure will, of course, depend on the way we set up our measuring devices, and changing these will affect the properties and behaviour of distant particles.
Here’s the bottom line. This comes down to the rather simple question of whether or not quantum particles have the properties we assign to them before the act of measurement.
Leggett found that keeping the result assumption but relaxing the set-up assumption is still insufficient to reproduce all the predictions of quantum theory. Just as Bell had done in 1964, Leggett now derived a relatively simple inequality that could provide a direct test.
Experiments designed to test Leggett’s inequality were performed in 2006 by physicists at the University of Vienna and the Institute for Quantum Optics and Quantum Information. The greatest difference between the predictions of quantum theory and those of this whole class of crypto non-local theories arises for a specific arrangement of the polarizing filters.* For this arrangement, the class of non-local hidden variable theories predicts a value for the Leggett inequality of 3.779. Quantum theory predicts 3.879, a difference of less than 3 per cent.
Nevertheless, the results were once again unequivocal. For the arrangement mentioned above, the experimental value was found to be 3.852±0.023, a violation of the Leggett inequality by more than three times the experimental error.19
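Unpacking those numbers with the quoted experimental error makes the point plain:

\[
\frac{3.852 - 3.779}{0.023} \approx 3.2, \qquad \frac{3.879 - 3.852}{0.023} \approx 1.2
\]

The measured value sits more than three standard deviations above the crypto non-local bound, yet comfortably within about one standard deviation of the quantum prediction.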
Things-as-they-are-measured
In his response to the challenge from Einstein, Podolsky and Rosen, Bohr appeared to accept that there could be no ‘mechanical disturbance’, no ‘ripple’ effect arising from the outcome of the measurement on photon A. It’s not clear from his writings if he believed that, despite the absence of a mechanical disturbance, both the result and set-up assumptions inherent in the presumption of local reality should be abandoned: ‘… even at this stage there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behaviour of the system’.20
However, the experimental tests of Leggett’s inequality demonstrate that we must indeed abandon both the result and the set-up assumptions. The properties and behaviour of the distant photon B are affected by both the setting we use to measure photon A and the result of that measurement. It seems that no matter how hard we try, we cannot avoid the collapse of the wavefunction.
What does this mean?
It means that in experimental quantum mechanics we have run right up against what was previously perceived to be a purely philosophical barrier. The experiments are telling us that we can know nothing of reality-in-itself.
We have to accept that the properties we ascribe to quantum particles like photons, such as energy, frequency, spin, polarization, position (‘here’ or ‘there’), are properties that have no meaning except in relation to a measuring device that allows them to be projected into our empirical reality of experience. We can no longer assume that the properties we measure necessarily reflect or represent the properties of the particles as they really are.
Perhaps even more disturbing is the conclusion that when we try to push further and ascribe to reality-in-itself properties that might help us to reconcile and understand our observations, we get it demonstrably wrong.
This is all strangely reminiscent of a famous philosophical conundrum. If a tree falls in the forest and there’s nobody around to hear, does it make a sound?
Philosophers have been teasing our intellects with such questions for centuries. Of course, the answer depends on how we choose to interpret the use of the word ‘sound’. If by sound we mean compressions and rarefactions in the air which result from the physical disturbances caused by the falling tree and which propagate through the air with audio frequencies, then we might not hesitate to answer in the affirmative.
Here the word ‘sound’ is used to describe a physical phenomenon — the wave disturbance carried by the air. But sound is also a human experience, the result of physical signals delivered by human sense organs which are synthesized in the mind as a form of perception.
As we have seen, sense perceptions can be described using chemical and physical principles, up until the point at which the perception becomes a mental experience. And the precise details of this process remain, at present, unfathomable.
The experiences of sound, colour, taste, smell and touch are all secondary qualities which exist only in our minds. We have no basis for our common-sense assumption that these secondary qualities reflect or represent reality as it really is. So, if we interpret the word ‘sound’ to mean a human experience rather than a physical phenomenon, then when there is nobody around, there is a sense in which the falling tree makes no sound at all.
What the experimental tests of Bell’s and Leggett’s inequalities tell us is much the same. We have no basis for our common-sense assumption that the properties of quantum particles such as photons reflect or represent reality as it really is.
Who would have thought that the theory of light would lead to such philosophy? It should now be apparent why the Reality Principle given in Chapter 1 is so structured.
But look back through this chapter. The conclusions of quantum theory may be utterly bizarre, but this is a theory founded on solid observational and experimental fact. It has been tested over and over again. Whether we like it or not, it is here to stay. It is ‘true’. It describes the properties and behaviour of light better than any theory that has gone before, and is an essential component of the authorized version of empirical reality.
* Or track 6 if you’re not familiar with the structure of a long-playing record.
* Think about the destructive forces unleashed when the energy contained in a tsunami strikes land.
* See http://pdg.lbl.gov. Click ‘Summary Tables’ and select the top entry ‘Gauge and Higgs Bosons (gamma, g, W, Z, …)’. The photon is here referred to as ‘gamma’. The rest mass is the mass that a photon would have if it could be (hypothetically) slowed down and stopped.
* Physicists call this changing the basis of the description.
* Actually, the probability is related to the modulus-square of the amplitude. Amplitudes can be positive or negative or ‘imaginary’ (i.e. they depend on i, the square root of −1), but, by definition, probabilities are always positive.
* This is only a high degree of certainty rather than absolute certainty as the polarizing film won’t be perfect. It will allow some photons that aren’t precisely vertically polarized to ‘leak’ through.
** I’m sure you want to know what the other two laws are. The first law says that when a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is almost certainly wrong. The second law says that the only way to discover the limits of the possible is to venture a little way beyond them into the impossible.
* This is a variation of the original Einstein-Podolsky-Rosen thought experiment, but it is entirely consistent with their approach.
* Strictly speaking, it’s not necessary to fix the polarization states at the moment the photons are produced. It’s enough that the hidden variables are so fixed and determine how the photons interact with the polarizing film, such that if photon A is measured to be vertically polarized, photon B is, too.
* Here ‘polarizing films’ is a shorthand for what was a complex bit of technical kit, including polarization analysers, photomultipliers, detectors and timing electronics that could identify when the photons belonged to the same pair.
** Actually, quantum theory predicts the value 2√2.
* Again, I’m paraphrasing. The experiments were a lot more complicated than this.
3
The Construction of Mass
Matter, Force and the Standard Model of Particle Physics
A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended its area of applicability.
Albert Einstein1
Perhaps we would be more comfortable if the behaviour described in Chapter 2, which poses such bizarre philosophical conundrums, was in some way restricted to photons, those ghostly white ambassadors of morning. Alas, when in 1923 de Broglie speculated that there might be a connection between the wavelength of a quantum wave particle and its momentum, he was thinking not of photons, but of electrons. And, I can now admit, the two-slit interference pattern shown in Figure 1 on page 37 was produced not with a faint beam of light, admitting one photon after another, but with a faint beam of electrons.
It comes as something of a shock to realize that pictures such as Figure 1 relate to particles that we’re more inclined to think of as tiny, but solid, bits of material substance. We then start to ask some really uncomfortable questions. If we thought it was spooky to lose sight of massless particles as they make their way (as waves) through a two-slit interference apparatus, then doing the same with electrons is surely downright embarrassing. As electrons pass — one after the other — through the apparatus, to be ‘constituted’ only when the wavefunction collapses, we have to ask ourselves: what happens to the electrons’ mass?
Okay, okay. It’s important to stay calm. Electrons are particles with mass, but this mass is nevertheless very, very small. The mass of an electron is about 0.9 thousandths of a billionth of a billionth of a billionth (9 × 10⁻³¹) of a kilogram. If we lost an electron, I guess we would hardly miss it. Perhaps we can still get away with the idea that, because of their small size, electrons are susceptible to phantom-like, non-local behaviour of the kind we associate with photons. Larger particles or more massive structures should surely be less susceptible.
But this won’t do. Quantum wave interference effects have been demonstrated with large molecules containing 60 and 70 carbon atoms. Superconducting quantum interference devices (SQUIDs, for short) have been used to demonstrate interference in objects of millimetre dimensions. These are dimensions you can see. These experiments involved combining SQUID states in which a billion electrons move clockwise around a small superconducting ring and another billion electrons move anticlockwise around the ring. In such a quantum superposition, in what direction are the electrons actually supposed to be moving?
It gets worse. In the standard model of particle physics, we learn that the property of mass of all the elementary particles — the particles that make up everything we are and everything we experience — is not an intrinsic or primary property of the stuff of material substance. It results from the interaction of quantum particles that would otherwise be massless with a mysterious energy field called the Higgs field which pervades the entire universe, like a modern-day ether. These interactions slow down the particles that interact with it, to an extent determined by the magnitude of their coupling to the field. We interpret this slowing down as inertia. And, ever since Galileo, we interpret inertia as a property of objects possessing mass.
We are forced to conclude that this interaction with the Higgs field, this slowing down, is actually what mass is.
We’d better take a closer look.
The forces of nature
When Einstein developed his theories of relativity and challenged Bohr over the interpretation of quantum theory in the 1920s, it was believed that there were just two forces of nature — electromagnetism and gravity. Early attempts to construct a unified theory capable in principle of describing all the elementary particles and their interactions therefore involved reconciling just these two forces in a single framework.
But two forces of nature aren’t enough to account for the properties of atoms as these came to be understood in the early 1930s.
In 1932, English physicist James Chadwick discovered the neutron, an electrically neutral particle which, together with the positively charged proton, forms the building blocks of all atomic nuclei. It was now understood that each chemical element listed in the periodic table consists of atoms. Each atom consists of a nucleus composed of varying numbers of protons and neutrons. Each element is characterized by the number of protons in the nuclei of its atoms. Hydrogen has one, helium two, lithium three, and so on to uranium, which has 92. It is possible to create elements heavier than uranium in a particle accelerator or a nuclear reactor, but they do not occur in nature.
This was all very well, but it posed something of a dilemma. We know that like charges repel each other, so how could all those positively charged protons be squeezed together and packed so tightly inside an atomic nucleus, and yet remain stable? Careful experimental studies revealed that the strengths of the interactions between protons and protons inside the nucleus are very similar in magnitude to those between protons and neutrons. None of this made any sense, unless the force governing these interactions is very different from electromagnetism. And very much stronger, able to overcome the force of electrostatic repulsion threatening to tear the nucleus apart.
This suggested the existence of another force, which became known as the strong nuclear force, binding protons and neutrons together in atomic nuclei.
This was not quite the end of the story. It had been known since the late nineteenth century that certain isotopes of certain elements — atoms with the same numbers of protons in their nuclei but different numbers of neutrons — are unstable. For example, the isotope caesium-137 contains 55 protons and 82 neutrons. It is radioactive, with a half-life (the time taken for half the radioactive caesium-137 to disintegrate) of about thirty years. The caesium-137 nuclei disintegrate spontaneously through one or more nuclear reactions.
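‘Half-life’ here has a precise quantitative meaning. With a half-life of about thirty years, the fraction of the original caesium-137 remaining after a time t is:

\[
N(t) = N_0 \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} \approx 30\ \text{years}
\]

So after sixty years a quarter of the original sample remains, after ninety years an eighth, and so on.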