We’ll begin by taking a look at the first alternative.
As we’ve seen, the philosopher Karl Popper shared some of the realist leanings of Einstein and Schrödinger, and it is clear from his writings on quantum mechanics that he stood in direct opposition to the Copenhagen interpretation, and in particular to Heisenberg’s positivism. As far as Popper was concerned, all this fuss about quantum paradoxes was the result of misconceiving the nature and role of probability.
To explain what he meant, Popper made extensive use of an analogy. Figure 11 shows an array of metal pins embedded in a wooden board. This is enclosed in a box with a transparent side, so that we can watch what happens when a small marble, selected so that it just fits between any two adjacent pins, is dropped into the grid from the top, as shown. On striking a pin, the marble may jump either to the left or to the right. The path followed by the marble is then determined by the sequence of random left/right jumps as it hits successive pins. We measure the position at the bottom of the grid at which the marble comes to rest.
Figure 11 Popper’s pin board.
Repeated measurements made with one marble (or with a ‘beam’ of identical marbles) allow us to determine the frequencies with which the individual marbles come to rest in specific channels at the bottom. As we make more and more measurements, these frequencies converge to a fixed pattern which we can interpret in terms of statistical probabilities. If successive marbles always enter the grid at precisely the same point and if the pins are identical, then we would expect a symmetrical distribution of probabilities, with a maximum around the centre, thinning out towards the extreme left and right. The shape of this distribution simply reflects the fact that there are many more left/right sequences with roughly equal numbers of left and right jumps than there are sequences in which the marble jumps predominantly to the left or predominantly to the right.
From this, we deduce that the probability for a single marble to appear in any of the channels at the bottom (E, say) will depend on the probabilities for each left-or-right jump in the sequence. Figure 11 shows the sequence left–left–right–left–right–right, which puts the marble in the E channel. The probability for a particular measurement outcome is therefore determined by the chain of probabilities in each and every step in the sequence of events that gives rise to it. If we call such a sequence of events a ‘history’, then we note that there’s more than one history in which the marble lands in the E channel. The sequences right–left–left–right–left–right and right–right–right–left–left–left will do just as well.
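To make this concrete, here is a minimal simulation sketch (not Popper’s own, and the details are assumptions: a board with six rows of pins and an unbiased 50:50 jump at every pin), which counts how often the marble ends up in each channel:

```python
import random
from collections import Counter

ROWS = 6  # assumed number of pin rows the marble strikes on the way down

def drop_marble():
    """One 'history': a sequence of random left/right jumps.

    The final channel is simply the number of rightward jumps, so the
    same channel can be reached by many different histories.
    """
    return sum(random.choice((0, 1)) for _ in range(ROWS))  # 0 = left, 1 = right

# Drop a 'beam' of marbles and record the frequency of each final channel.
counts = Counter(drop_marble() for _ in range(100_000))
total = sum(counts.values())
for channel in range(ROWS + 1):
    print(f"channel {channel}: {counts[channel] / total:.3f}")
```

The frequencies cluster around the middle channels for exactly the reason given above: far more left/right sequences lead there than to the extremes.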
Popper argued that we change the propensity for the system to produce a particular distribution of probabilities by simply tilting the board at an angle or by removing one of the pins. He wrote:3
[Removing one pin] will alter the probability for every single experiment with every single ball, whether or not the ball actually comes near the place from which we removed the pin.
So, here’s a thought. In conventional quantum mechanics, we introduce quantum randomness at the moment of interaction, or measurement. What if, instead, the quantum world is inherently probabilistic at all moments? What if, just like Popper’s pin board example, the probability for a specific measurement outcome reflects the chain of probabilities for events in each history that gives rise to it?
It’s a little easier to think about this in the context of a more obvious quantum example. So let’s return once again to our favourite quantum system (particle A), prepared in a superposition of ↑ and ↓ states. We connect our measuring device to a gauge with a pointer which moves to the left when A is measured to be in an ↑ state (A↑), and moves to the right when A is measured to be in a ↓ state (A↓).
In what follows, we will focus not on the wavefunction per se, but on a construction based on the wavefunction which—in terms of the pictograms we’ve considered so far—can be (crudely) represented like this:
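In more conventional notation (a rough sketch only, with Dirac brackets standing in for the pictograms and Ψ written for the total wavefunction), the same construction reads something like:

$$
|\Psi\rangle \;=\; \big(\,|A\!\uparrow\rangle\langle A\!\uparrow|\,\big)\,|\Psi\rangle \;+\; \big(\,|A\!\downarrow\rangle\langle A\!\downarrow|\,\big)\,|\Psi\rangle .
$$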
In this expression, I’ve applied the so-called projection operators, derived from the wavefunctions for A↑ and A↓, to the total wavefunction. Think of these operators as mathematical devices that allow us to ‘map’ the total wavefunction onto the ‘space’ defined by the basis functions A↑ and A↓.* If it helps, you can compare this process to that of projecting the features of the surface of the (near-spherical) Earth onto a flat, rectangular chart. The Mercator projection is the most familiar, but it buys the convenience of two dimensions at the cost of accuracy as we approach the poles, such that Greenland and Antarctica appear larger than they really are.
What happens in this quantum-mechanical projection is that the ‘front end’ of each projection operator combines with the total wavefunction to yield a number, which is simply related to the proportion of that wavefunction in the total. The ‘back end’ of each projection operator is the wavefunction itself. So, what we end up with is a simple sum. The total wavefunction is given by the proportion of A↑ multiplied by the wavefunction for A↑, plus the proportion of A↓ multiplied by the wavefunction for A↓. So far in our discussions we have assumed that these proportions are equal, and we will continue to assume this.
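Written out in the same sketched notation, with c↑ and c↓ standing for the numbers produced by the ‘front ends’ (whose squared magnitudes give the proportions, assumed equal here):

$$
|\Psi\rangle \;=\; c_{\uparrow}\,|A\!\uparrow\rangle \;+\; c_{\downarrow}\,|A\!\downarrow\rangle ,
\qquad
c_{\uparrow} = \langle A\!\uparrow|\Psi\rangle , \quad
c_{\downarrow} = \langle A\!\downarrow|\Psi\rangle , \quad
|c_{\uparrow}|^{2} = |c_{\downarrow}|^{2} = \tfrac{1}{2} .
$$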
I know that this looks like an unnecessary complication, but the projection operators take us a step closer to the actual properties (↑ and ↓) of the quantum system, and it can be argued that these are more meaningful than the wavefunctions themselves.
How should we think about the properties of the system (represented by its projection operators) as it evolves in time through a measurement process? To simplify this some more, we’ll consider just three key moments: the total (quantum plus measurement) system at some initial time shortly after preparation (we’ll denote this as time t0), the system at some later time just before the measurement takes place (t1), and the system after measurement (t2). What we get then is a measurement outcome that results from the sequence or ‘history’ of the quantum events.
Here’s the interesting thing. The history we associate with conventional quantum mechanics is not the only history compatible with what we see in the laboratory, just as there are different histories that will leave the marble in the E channel of Popper’s pin board.
In the consistent histories interpretation, first developed by physicist Robert Griffiths in 1984, these histories are organized into ‘families’ or what Griffiths prefers to call ‘frameworks’. For the measurement process we’re considering here we can devise at least three different frameworks:
In Framework #1, we begin at time t0 with an initial quantum superposition of the A↑ and A↓ states, with the measuring device (which we continue to depict as a gauge of some kind) in its ‘neutral’ or pre-measurement state. We suppose that as a result of some spontaneous process, by t1 the system has evolved into either A↑ or A↓, each entangled with the gauge in its neutral state. The measurement then happens at t2, when the gauge pointer moves either to the left or to the right, depending on which state is already present.
Framework #2 is closest to how the conventional quantum formalism encourages us to think about this process. In this family of histories, the initial superposition entangles with the gauge, only separating into distinct A↑ or A↓ states at t2, which is where we imagine or assume the ‘collapse’ to occur. There is no such collapse in Framework #3, in which the A↑ and A↓ states remain entangled with the gauge at t2, producing a macroscopic quantum superposition (also known, for obvious reasons, as a Schrödinger cat state).
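Very roughly, writing |N⟩, |L⟩, and |R⟩ for the gauge with its pointer in the neutral, left, and right positions (a sketch only, which assumes that in Frameworks #2 and #3 the entanglement with the gauge is already established by t1, and which omits normalization factors), the three families of histories run something like:

$$
\begin{aligned}
\#1:\quad & \big(|A\!\uparrow\rangle + |A\!\downarrow\rangle\big)|N\rangle \;\xrightarrow{\;t_1\;}\; |A\!\uparrow\rangle|N\rangle \ \text{or} \ |A\!\downarrow\rangle|N\rangle \;\xrightarrow{\;t_2\;}\; |A\!\uparrow\rangle|L\rangle \ \text{or} \ |A\!\downarrow\rangle|R\rangle \\
\#2:\quad & \big(|A\!\uparrow\rangle + |A\!\downarrow\rangle\big)|N\rangle \;\xrightarrow{\;t_1\;}\; |A\!\uparrow\rangle|L\rangle + |A\!\downarrow\rangle|R\rangle \;\xrightarrow{\;t_2\;}\; |A\!\uparrow\rangle|L\rangle \ \text{or} \ |A\!\downarrow\rangle|R\rangle \quad (\text{‘collapse’}) \\
\#3:\quad & \big(|A\!\uparrow\rangle + |A\!\downarrow\rangle\big)|N\rangle \;\xrightarrow{\;t_1\;}\; |A\!\uparrow\rangle|L\rangle + |A\!\downarrow\rangle|R\rangle \;\xrightarrow{\;t_2\;}\; |A\!\uparrow\rangle|L\rangle + |A\!\downarrow\rangle|R\rangle \quad (\text{cat state})
\end{aligned}
$$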
These different frameworks are internally consistent but mutually exclusive. We can assign probabilities to different histories within each framework using the Born rule, and this is what makes them consistent. But, as Griffiths explains: ‘In quantum mechanics it is often the case that various incompatible frameworks exist that might be employed to discuss a particular situation, and the physicist can use any one of them, or contemplate several of them.’4 Each provides a valid description of events, but they are distinct and they cannot be combined.
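For the record, in the standard formalism (a sketch of machinery not spelled out here), each history α within a framework is represented by a chain of projection operators of the kind introduced earlier, its probability is a Born-rule expression, and ‘consistency’ amounts to the (approximate) vanishing of the cross terms between distinct histories:

$$
C_{\alpha} \;=\; P_{\alpha_n}(t_n)\cdots P_{\alpha_1}(t_1),
\qquad
p(\alpha) \;=\; \mathrm{Tr}\!\big[\,C_{\alpha}\,\rho\,C_{\alpha}^{\dagger}\,\big],
\qquad
\mathrm{Tr}\!\big[\,C_{\alpha}\,\rho\,C_{\beta}^{\dagger}\,\big] \;\approx\; 0
\quad (\alpha \neq \beta),
$$

where ρ is the initial state of the total system and the projectors are written in the Heisenberg picture.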
At a stroke, this interpretation renders any debate about the boundary between the quantum and classical worlds—Bell’s ‘shifty split’—completely irrelevant. All frameworks are equally valid, and physicists can pick and choose the framework most appropriate to the problem they’re interested in. Of course, it’s difficult for us to resist the temptation to ask: But what is the ‘right’ framework? In the consistent histories interpretation, there isn’t one, just as there is no such thing as the ‘right’ wavefunction and no ‘preferred’ basis.
But doesn’t the change in physical state suggested by the events happening between t0 and t1 in Framework #1, and between t1 and t2 in Framework #2, still imply a collapse of some kind? No, it doesn’t:5
Another way to avoid these difficulties is to think of wave function collapse not as a physical effect produced by the measuring apparatus, but as a mathematical procedure for calculating statistical correlations…. That is, ‘collapse’ is something which takes place in the theorist’s notebook, rather than the experimentalist’s laboratory.
We know by now what this implies from our earlier discussion of the relational and information-theoretic interpretations. The wavefunctions (and hence the projection operators derived from them) in the consistent histories interpretation are not real. Griffiths treats the wavefunction as a purely mathematical construct, a pre-probability, which enables the calculation of quantum probabilities within each framework. From this we can conclude that the consistent histories interpretation is anti-realist. It involves a rejection of Proposition #3.
The consistent histories interpretation is most powerful when we consider different kinds of questions. Think back to the two-slit interference experiment with electrons. Now suppose that we use a weak source of low-energy photons in an attempt to discover which slit each electron passes through. The photons don’t throw the electron off course, but if they are scattered from one slit or the other, this signals which way the electron went. We allow the experiment to run, and as the bright spots on the phosphorescent screen accumulate, we anticipate the build-up of an interference pattern (Figure 4). In this way we reveal both particle-like, ‘Which way did it go?’, and wave-like interference behaviour at the same time.
Not so fast. In the consistent histories interpretation, it is straightforward to show that ‘which way’ and interference behaviours belong to different, incompatible frameworks. If we think of these alternatives as involving ‘particle histories’ (with ‘which way’ trajectories) or ‘wave histories’ (with interference effects), then the consistent histories interpretation is essentially a restatement of Bohr’s principle of complementarity couched in the language of probability. There simply is no framework in which both particle-like and wave-like properties can appear simultaneously. In this sense, consistent histories is not intended as an alternative, ‘but as a fully consistent and clear statement of basic quantum mechanics, “Copenhagen done right” ’.6
But there’s a problem. If we rely on the Born rule to determine the probabilities for different histories within each framework, then we must acknowledge an inescapable truth of the resulting algebra. Until it interacts with a measuring device, the square of the total wavefunction may contain ‘cross terms’ or ‘interference terms’:
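Writing the total wavefunction in the sketched notation used earlier, Ψ = c↑ψ↑ + c↓ψ↓, with ψ↑ and ψ↓ the wavefunctions for A↑ and A↓, its square has the form:

$$
|\Psi|^{2} \;=\; |c_{\uparrow}|^{2}\,|\psi_{\uparrow}|^{2} \;+\; |c_{\downarrow}|^{2}\,|\psi_{\downarrow}|^{2}
\;+\; \underbrace{\,c_{\uparrow}^{*}c_{\downarrow}\,\psi_{\uparrow}^{*}\psi_{\downarrow} \;+\; c_{\downarrow}^{*}c_{\uparrow}\,\psi_{\downarrow}^{*}\psi_{\uparrow}\,}_{\text{interference (cross) terms}} .
$$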
As the name implies, the interference terms are responsible for interference, of precisely the sort that gives rise to alternating bright and dark fringes in the two-slit experiment. In conventional quantum mechanics, the collapse of the wavefunction implies not only a random choice between the outcomes A↑ and A↓, but also the disappearance of the interference terms.
We can observe interference effects using light, or electrons, or (as we’ll see in Chapter 8) large molecules or small superconducting rings. But it goes without saying that we don’t observe interference of any kind in large, laboratory-sized objects, such as gauge pointers or cats. So we need to find a mechanism to account for this.
Bohr assumed the existence of a boundary between the quantum and the classical worlds, without ever being explicit about where this might be or how it might work. But we know that any classical measuring device must be composed of quantum entities, such as atoms and molecules. We therefore expect that the first stages of an interaction between a quantum system and a classical detector are likely to be quantum in nature. We can further expect that the sheer number of quantum states involved quickly mushrooms as the initial interaction is amplified and converted into a signal that a human experimenter can perceive—perhaps as a bright spot on a phosphorescent screen, or the changing direction of a gauge pointer.
In the example we considered earlier, the presence in the detector of particle A in an ↑ state triggers a cascade of ever more complex interactions, with each step in the sequence governed by a probability. Although each interaction taken individually is in principle reversible, the process is quickly overwhelmed by the ‘noise’ and the complexity in the environment and so appears irreversible. It’s just like a smashed cocktail glass on the floor, which doesn’t spontaneously reassemble no matter how long we wait, even though there’s nothing in the classical theory of statistical mechanics that says this can’t happen.
This ‘washing out’ of quantum interference effects as the measurement process grows in scale and complexity is called decoherence. In 1991, Murray Gell-Mann and James Hartle extended the consistency conditions of the consistent histories interpretation specifically to account for the suppression of interference terms through decoherence. The resulting interpretation is now more frequently referred to as decoherent histories.
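A common toy picture of this suppression (an illustrative sketch only, which assumes the cross terms decay exponentially with some characteristic decoherence time τ_D) writes the state as a density matrix whose off-diagonal elements are the interference terms:

$$
\rho(t) \;=\;
\begin{pmatrix}
|c_{\uparrow}|^{2} & c_{\uparrow}c_{\downarrow}^{*}\,e^{-t/\tau_{D}} \\
c_{\downarrow}c_{\uparrow}^{*}\,e^{-t/\tau_{D}} & |c_{\downarrow}|^{2}
\end{pmatrix}
\;\xrightarrow{\;t \gg \tau_{D}\;}\;
\begin{pmatrix}
|c_{\uparrow}|^{2} & 0 \\
0 & |c_{\downarrow}|^{2}
\end{pmatrix},
$$

leaving only the two ‘classical’ probabilities on the diagonal.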
We will meet decoherence again. But I’d like to note in passing that this is a mechanism for translating phenomena at the microscopic quantum scale to things we observe at our macroscopic classical scale, designed to eliminate all the strange quantum quirkiness along the way. Decoherence is deployed in a number of different interpretations, as we’ll see. In this particular instance, it serves as a rather general, and somewhat abstract, mathematical technique for ‘cleansing’ the probabilities of the interference terms. This is entirely consistent with the view that the wavefunction is a pre-probability, and so not physically real. Other interpretations, which take a more realist view of the wavefunction, treat decoherence as a real physical process.
One last point. Decoherence rids us of the interference terms. But it does not force the choice of measurement outcome (either A↑ or A↓)—which is still left to random chance. Einstein would not have been satisfied.
Gell-Mann and Hartle were motivated to search for an alternative to the Copenhagen interpretation because it appears to attach a special significance to the process of measurement. At the time, this sat rather uncomfortably with emerging theories of quantum cosmology—in which quantum mechanics is applied to the entire Universe—because, by definition, there can be nothing ‘outside’ the Universe to make measurements on it. The decoherent histories interpretation resolves this problem by making measurement no more significant than any other kind of quantum event.
Interest in the interpretation grew, promoted by a small but influential international group of physicists that included Griffiths, Roland Omnès, Gell-Mann, and Hartle.
But concerns grew, too.
In 1996, theorists Fay Dowker and Adrian Kent showed that serious problems arise when the frameworks are carried through to classical scales. Whilst the history of the world with which we are familiar may indeed be a consistent history, it is not the only one admitted by the interpretation.7 There is an infinite number of other histories, too. Because all the events within each history are probabilistic in nature, some of these histories include a familiar sequence of events but then abruptly change to an utterly unfamiliar sequence. There are histories that are classical now but which in the past were superpositions of other classical histories, suggesting that we have no basis on which to conclude that the discovery of dinosaur fossils today means that dinosaurs roamed the Earth a hundred million years ago.
Because there is no ‘right’ framework that emerges uniquely as a result of the exercise of some law of nature, the interpretation regards all possible frameworks as equally valid, and the choice then depends on the kinds of questions we ask. This appears to leave us with a significant context dependence, in which our ability to make sense of the physics seems to depend on our ability to ask the ‘right’ questions. Rather like the vast computer Deep Thought, built to answer the ultimate question of ‘life, the universe and everything’ in Douglas Adams’s Hitch-hiker’s Guide to the Galaxy, we are furnished with the answer,* but we can only hope to make sense of this if we can be more specific about the question.
Griffiths acknowledges that Dowker and Kent’s concerns are valid, but concludes that this is the price that must be paid. The decoherent histories interpretation is8
contrary to a deeply rooted faith or intuition, shared by philosophers, physicists, and the proverbial man in the street, that at any point in time there is one and only one state of the universe which is ‘true’, and with which every true statement about the world must be consistent. [This intuition] must be abandoned if the histories interpretation of quantum theory is on the right track.
Sometimes, in situations like this, I find it helpful to step back. Maybe I’ll make another cup of tea, stop thinking about quantum mechanics for a while, and hope that my nagging headache will go away.
In these moments of quiet reflection, my mind wanders (as it often does) to the state of my personal finances. Now, I regard myself as a rational person. When faced with a choice between two actions, I will tend to choose the action that maximizes some expected utility, such as my personal wealth. This seems straightforward, but the world is a complex and often unpredictable place, especially in the age of Trump and Brexit. I understood quite some time ago that buying weekly tickets for the national lottery does not make for a robust pension plan. But beyond such obvious realizations, how do any of us know which actions to take? Should I keep my money in my bank account or invest in government bonds or the stock market?