by Brian Cox
This process of calculating the clock (known in the jargon as the ‘amplitude’) for each Feynman diagram, adding all the clocks together and squaring the final clock to get a probability that the process will happen is the bread and butter of modern particle physics. But there is a fascinating issue hiding away beneath the surface of all that we have been saying – an issue that bothers some physicists a lot and others not at all.
The Quantum Measurement Problem
When we add the clocks corresponding to the different Feynman diagrams together, we are allowing for the orgy of quantum interference to happen. Just as for the case of the double-slit experiment, where we had to consider every possible route that the particle could take on its journey to the screen, we must consider every possible way that a pair of particles can get from their starting positions to their final positions. This allows us to compute the right answer because it allows for interference between the different diagrams. Only at the end of the process, when all of the clocks have been added together and all the interference is accounted for, should we square up the size of the final clock to calculate the probability that the process will happen. Simple. But look at Figure 10.2.
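The bookkeeping described above can be made concrete: a clock of a given size and hand position is just a complex number, and the 'add the clocks, then square' rule gives a different answer from squaring each clock on its own. A minimal sketch, with two invented clocks chosen purely for illustration (the lengths and angles correspond to no real Feynman diagram):

```python
import cmath
import math

# Two hypothetical 'clocks' (complex amplitudes), one per Feynman diagram.
# Lengths and hand angles here are invented for illustration only.
a1 = cmath.rect(0.6, 0.0)       # length 0.6, hand at twelve o'clock
a2 = cmath.rect(0.6, math.pi)   # length 0.6, hand at six o'clock

# Wrong: squaring each clock first ignores quantum interference.
no_interference = abs(a1) ** 2 + abs(a2) ** 2   # 0.72

# Right: add the clocks first, then square the size of the final clock.
with_interference = abs(a1 + a2) ** 2           # ~0.0: total destructive interference
```

Because the two hands point in opposite directions, the clocks cancel and the process almost never happens, exactly as in the dark fringes of the double-slit experiment.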
What happens if we attempt to identify what the electrons are doing as they hop to X and Y? The only way we can examine what is going on is to interact with the system according to the rules of the game. In QED, this means that we must stick to the electron–photon branching rule, because there is nothing else. So let’s interact with one of the photons that can be emitted from one or other of the electrons, by detecting it using our own personal photon detector: our eye. Notice that we are now asking a different question of the theory: ‘What is the chance of finding an electron at X, another at Y, and also a photon in my eye?’ We know what to do to get the answer – we should add together all of the clocks associated with the different diagrams that start out with two electrons and end up with an electron at X, another at Y, and also a photon ‘in my eye’. More precisely, we should talk about how the photon interacts with my eye. Although that might start out simply enough, it soon gets out of hand. For example, the photon will scatter off an electron sitting in an atom in my eye, and that will trigger a chain of events leading ultimately to my perception of the photon as I become consciously aware of a flash of light in my eye. So to describe fully what is happening involves specifying the positions of every particle in my brain as they respond to the arrival of the photon. We are sailing close to something called the quantum measurement problem.
Figure 10.2. A human eye taking a look at what is going on.
So far in the book we have described in some detail how to compute probabilities in quantum physics. By that, we mean that quantum theory allows us to calculate the chances of measuring some particular outcome if we conduct an experiment. There is no ambiguity in this process, provided we follow the rules of the game and stick to computing the probabilities of something happening. There is, however, something to feel uneasy about. Imagine an experimenter conducting an experiment for which there are only two outcomes, ‘yes’ and ‘no’. Now imagine actually doing the experiment. The experimenter will record either ‘yes’ or ‘no’, and obviously not both at the same time. So far, so good.
Now imagine some future measurement of something else (it doesn’t matter what) made by a second experimenter. Again, we’ll assume it is a simple experiment whose outcome is to make a ‘click’ or ‘no click’. The rules of quantum physics dictate that we must compute the probability that the second experiment goes ‘click’ by summing clocks associated with all of the possibilities that lead to this outcome. Now this may include the circumstance where the first experimenter measures ‘yes’ and the complementary case where they measure ‘no’. Only after summing over the two do we get the correct answer for the chances of measuring a ‘click’ in the second experiment. Is that really right? Do we really have to entertain the notion that, even after the outcome of some measurement, we should maintain the coherence of the world? Or is it the case that once we measure ‘yes’ or ‘no’ in the first experiment then the future is dependent only upon that measurement? For example, in our second experiment it would mean that if the first experimenter measures ‘yes’ then the probability that the second experiment goes ‘click’ should be computed not from a coherent sum over the ‘yes’ and ‘no’ possibilities but instead by considering only the ways in which the world can evolve from ‘first experimenter measures yes’ to ‘second experiment goes click’. This will clearly give a different answer from the case where we are to sum over both the ‘yes’ and ‘no’ outcomes and we need to know which is the right thing to do if we are to claim a full understanding.
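The distinction drawn in the paragraph above can be put in numbers. Below, two invented amplitudes stand for the two routes to a 'click' in the second experiment, one passing through 'first experimenter measures yes' and one through 'no'; the lengths and angles are purely illustrative:

```python
import cmath
import math

# Invented amplitudes for the two routes to a 'click' in the second experiment.
via_yes = cmath.rect(0.5, 0.0)              # route through the 'yes' outcome
via_no = cmath.rect(0.5, 2 * math.pi / 3)   # route through the 'no' outcome

# Quantum rules taken literally: keep both routes in coherent superposition,
# adding the clocks before squaring.
coherent = abs(via_yes + via_no) ** 2        # 0.25

# Alternative: the first measurement 'really happened', so the two routes
# contribute as separate probabilities with no interference.
incoherent = abs(via_yes) ** 2 + abs(via_no) ** 2   # 0.50
```

The two prescriptions disagree (0.25 versus 0.50 in this toy example), which is why the question of whether measurement breaks coherence is, in principle, a physical one and not merely philosophical.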
The way to check which is right is to determine whether there is anything at all special about the measurement process itself. Does it change the world and stop us from adding together quantum amplitudes or rather is measurement part of a vast complex web of possibilities that remain forever in coherent superposition? As human beings we might be tempted to think that measuring something now (‘yes’ or ‘no’ say) irrevocably changes the future and if that were true then no future measurement could occur via both the ‘yes’ and ‘no’ routes. But it is far from clear that this is the case because it seems that there is always a chance to find the Universe in a future state which can be arrived at via either the ‘yes’ or ‘no’ routes. For those states, the laws of quantum physics, taken literally, leave us with no option but to compute the probability of their manifestation by summing over both the ‘yes’ and ‘no’ routes. Weird though this may seem, it is no more weird than the summing over histories that we have been performing throughout this book. All that is happening is that we are taking the idea so seriously that we are prepared to do it even at the level of human beings and their actions. From this point of view there is no ‘measurement problem’. It is only when we insist that the act of measuring ‘yes’ or ‘no’ really changes the nature of things that we run into a problem, because it is then incumbent upon us to explain what it is that triggers the change and breaks the quantum coherence.
The approach to quantum mechanics that we have been discussing, which rejects the idea that Nature goes about choosing a particular version of reality every time someone (or something) ‘makes a measurement’, forms the basis of what is often referred to as the ‘many worlds’ interpretation. It is very appealing because it is the logical consequence of taking the laws that govern the behaviour of elementary particles seriously enough to use them to describe all phenomena. But the implications are striking, for we are to imagine that the Universe is really a coherent superposition of all of the possible things that can happen and the world as we perceive it (with its apparently concrete reality) arises only because we are fooled into thinking that coherence is lost every time we ‘measure’ something. In other words, my conscious perception of the world is fashioned because the alternative (potentially interfering) histories are highly unlikely to lead to the same ‘now’ and that means quantum interference is negligible.
If measurement is not really destroying quantum coherence then, in a sense, we live out our lives inside one giant Feynman diagram and our predisposition to think that definite things are happening is really a consequence of our crude perceptions of the world. It really is conceivable that, at some time in our future, something can happen to us which requires that, in the past, we did two mutually opposite things. Clearly, the effect is subtle because ‘getting the job’ and ‘not getting the job’ make a big difference to our lives and one cannot easily imagine a scenario where they lead to identical future Universes (remember, we should only add amplitudes that lead to identical outcomes). So in that case, getting and not getting the job do not interfere much with each other and our perception of the world is as if one thing has happened and not the other. However, things become more ambiguous the less dramatic the two alternative scenarios are and, as we have seen, for interactions involving small numbers of particles summing over the different possibilities is absolutely necessary. The large numbers of particles involved in everyday life mean that two substantially different configurations of atoms at some time (e.g. getting the job or not) are simply very unlikely to lead to significantly interfering contributions to some future scenario. In turn, that means we can go ahead and pretend that the world has changed irrevocably as a result of a measurement, even when nothing of the sort has actually happened.
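A rough illustration of why large numbers of particles wash out interference: in a toy model (not a real QED calculation) where the interference term between two alternative histories is proportional to the product of per-particle overlaps of their final states, the cross term dies off exponentially with particle number. The overlap factor of 0.9 below is an invented number, chosen only to make the point:

```python
# Toy model: suppose each of N particles contributes a modest per-particle
# overlap between the two alternative final states ('got the job' vs 'didn't').
per_particle_overlap = 0.9   # invented figure, purely illustrative


def interference_factor(n_particles: int) -> float:
    """Combined overlap of two N-particle configurations in this toy model."""
    return per_particle_overlap ** n_particles


for n in (1, 10, 100, 1000):
    print(f"{n:>5} particles -> interference factor {interference_factor(n):.3e}")
```

For a single particle the interference is substantial, but by a thousand particles the factor is smaller than 10⁻⁴⁰, and any everyday object contains vastly more particles than that. This is why we can safely pretend that one definite thing happened.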
But these musings are not of pressing importance when it comes to the serious business of computing the probability that something will happen when we actually carry out an experiment. For that, we know the rules and we can implement them without any problems. But that happy circumstance may change one day – for now it is the case that questions about how our past might influence the future through quantum interference simply haven’t been accessible to experiment. The extent to which meditations on the ‘true nature’ of the world (or worlds) described by quantum theory can detract from scientific progress is nicely encapsulated in the position taken by the ‘shut up and calculate’ school of physics, which deftly dismisses any attempt to talk about the reality of things.
Anti-matter
Back in this world, Figure 10.3 shows another way that two electrons can scatter off each other. One of the incoming electrons hops from A to X, whereupon it emits a photon. So far so good but now the electron heads backwards in time to Y where it absorbs another photon and thence it heads into the future, where it might be eventually detected at C. This diagram does not contravene our rules for hopping and branching, because the electron goes about emitting and absorbing photons as prescribed by the theory. It can happen according to the rules and, as the title of the book suggests, if it can happen, then it does. But such behaviour does appear to violate the rules of common sense, because we are entertaining the idea that electrons travel backwards in time. This would make for nice science fiction, but violating the law of cause and effect is no way to build a universe. It would also seem to place quantum theory in direct conflict with Einstein’s Theory of Special Relativity.
Figure 10.3. Anti-matter … or an electron travelling backwards in time.
Remarkably, this particular kind of time travel for subatomic particles is not forbidden, as Dirac realized in 1928. We can see a hint that all may not be quite as paradoxical as it seems if we reinterpret the goings-on in Figure 10.3 from our ‘forwards in time’ perspective. We are to track events from left to right in the figure. Let’s start at time T = 0, where there is a world of just two electrons located at A and B. We continue with a world containing just two electrons until time T1, whereupon the lower electron emits a photon; between times T1 and T2 the world now contains two electrons plus one photon. At time T2, the photon dies and is replaced by an electron (which will end up at C) and a second particle (which will end up at X). We hesitate to call the second particle an electron because it is ‘an electron travelling back in time’. The question is, what does an electron that is travelling back in time look like from the point of view of someone (like you) travelling forwards in time?
To answer this, let’s imagine shooting some video footage of an electron as it travels in the vicinity of a magnet, as illustrated in Figure 10.4. Providing that the electron isn’t travelling too fast,5 it will typically travel around in a circle. That electrons can be deflected by a magnet is, as we have said before, the basic idea behind the construction of old-fashioned CRT television sets and, more glamorously, particle accelerators, including the Large Hadron Collider. Now imagine that we take the video footage and play it backwards. This is what ‘an electron going backwards in time’ would look like from our ‘forwards in time’ perspective. We’d now see the ‘backwards in time electron’ circle in the opposite direction as the movie advances. From a physicist’s perspective, the backwards in time video will look exactly like a forwards in time video shot using a particle which is in every way identical to an electron except that the particle appears to carry positive electric charge. Now we have the answer to our question: electrons travelling backwards in time would appear, to us, as ‘electrons of positive charge’. Thus, if electrons do actually travel back in time then we expect to encounter them as ‘electrons of positive charge’.
Figure 10.4. An electron, circling near a magnet.
Such particles do exist and they are called ‘positrons’. They were introduced by Dirac in early 1931 to solve a problem with his quantum mechanical equation for the electron – namely that the equation appeared to predict the existence of particles with negative energy. Later, Dirac gave a wonderful insight into his way of thinking, and in particular his strong conviction in the correctness of his mathematics: ‘I was reconciled to the fact that the negative energy states could not be excluded from the mathematical theory, and so I thought, let us try to find a physical explanation for them.’
Just over a year later, and apparently unaware of Dirac’s prediction, Carl Anderson saw some strange tracks in his experimental apparatus while observing cosmic ray particles. His conclusion was that, ‘It seems necessary to call upon a positively charged particle having a mass comparable with that of an electron.’ Once again, this illustrates the wonderful power of mathematical reasoning. In order to make sense of a piece of mathematics, Dirac introduced the concept of a new particle – the positron – and a few months later it was found, produced in high-energy cosmic ray collisions. The positron is our first encounter with that staple of science fiction, anti-matter.
Armed with this interpretation of time-travelling electrons as positrons, we can finish off the job of explaining Figure 10.3. We are to say that when the photon reaches Y at time T2 it splits into an electron and a positron. Each heads forwards in time until time T3, when the positron from Y reaches X, whereupon it fuses with the original upper electron to produce a second photon. This photon propagates to time T4, when it gets absorbed by the lower electron.
This might all sound a little far fetched: anti-particles have emerged from our theory because we are permitting particles to travel backwards in time. Our hopping and branching rules allow particles to hop both forwards and backwards in time, and despite our possible prejudice that this must be disallowed, it turns out that we do not, indeed must not, prevent them from doing so. Quite ironically, it turns out that if we did not allow particles to hop back in time then we would have a violation of the law of cause and effect. This is odd, because it seems as if things ought to be the other way around.
That things work out just fine is not an accident and it hints at a deeper mathematical structure. In fact, you may have got the feeling on reading this chapter that the branching and hopping rules all seem rather arbitrary. Could we make up some new branching rules and tweak the hopping rules then explore the consequences? Well, if we did that we would almost certainly build a bad theory – one that would violate the law of cause and effect, for example. Quantum Field Theory (QFT) is the name for the deeper mathematical structure that underpins the hopping and branching rules and it is remarkable for being the only way to build a quantum theory of tiny particles that also respects the Theory of Special Relativity. Armed with the apparatus of QFT, the hopping and branching rules are fixed and we lose the freedom to choose. This is a very important result for those in pursuit of fundamental laws because using ‘symmetry’ to remove choice creates the impression that the Universe simply has to be ‘like this’ and that feels like progress in understanding. We used the word ‘symmetry’ here and it is appropriate, because Einstein’s theories can be viewed as imposing symmetry restrictions on the structure of space and time. Other ‘symmetries’ further constrain the hopping and branching rules, and we shall briefly encounter those in the next chapter.
Before leaving QED, we have a final loose end to tie up. If you recall, the opening talk of the Shelter Island meeting concerned the Lamb shift, an anomaly in the hydrogen spectrum that could not be explained by the quantum theory of Heisenberg and Schrödinger. Within a week of the meeting, Hans Bethe produced a first, approximate, calculation of the answer. Figure 10.5 illustrates the QED way to picture a hydrogen atom. The electromagnetic interaction that keeps the proton and the electron bound together can be represented by a series of Feynman diagrams of increasing complexity, just as we saw for the case of two electrons interacting together in Figure 10.1. We’ve sketched two of the simplest possible diagrams in Figure 10.5. Pre-QED, the calculations of the electron energy levels included only the top diagram in the figure, which captures the physics of an electron that is trapped within the potential well generated by the proton. But, as we’ve discovered, there are many other things that can happen during the interaction. The second diagram in Figure 10.5 shows the photon briefly fluctuating into an electron–positron pair, and this process must also be included in a calculation of the possible energy levels of the electron. This, and many other diagrams, enter the calculation as small corrections to the main result.6 Bethe correctly included the important effects from ‘one-loop’ diagrams, like that in the figure, and found that they slightly shift the energy levels and therefore the detail in the observed spectrum of light. His result was in accord with Lamb’s measurement. QED, in other words, forces us to imagine a hydrogen atom as a fizzing cacophony of subatomic particles popping in and out of existence. The Lamb shift was humankind’s first direct encounter with these ethereal quantum fluctuations.
Figure 10.5. The hydrogen atom.
It did not take long for two other Shelter Island attendees, Richard Feynman and Julian Schwinger, to pick up the baton and, within a couple of years, QED had been developed into the theory we know today – the prototypical quantum field theory and exemplar for the soon-to-be-discovered theories describing the weak and strong interactions. For their efforts, Feynman, Schwinger and the Japanese physicist Sin-Itiro Tomonaga received the 1965 Nobel Prize ‘for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles’. It is to those deep-ploughing consequences that we now turn.