
Farewell to Reality


by Jim Baggott


  But when we intervene to make a measurement, we are obliged to abandon these equations. The measurement itself is like a quantum jump. It is ‘discontinuous’. The properties of the measurement outcomes are not closely, smoothly and continuously connected to the initial wavefunction. The only connection between the initial wavefunction and the measurement outcomes is that the modulus squared of the amplitudes of the various components of the former can be used to determine the probabilities for the latter. The measurement is completely indeterministic. This is the ‘collapse of the wavefunction’.
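  To make that probability rule concrete, here is a minimal sketch in Python; the two-outcome superposition and its amplitude values are invented purely for illustration.

```python
import numpy as np

# Hypothetical two-component superposition: psi = c_up*|up> + c_down*|down>.
# The amplitudes below are made-up illustrative values.
amplitudes = np.array([1 / np.sqrt(3), np.sqrt(2 / 3) * 1j])  # c_up, c_down

# Born rule: the probability of each outcome is the modulus squared of its amplitude.
probabilities = np.abs(amplitudes) ** 2
print(probabilities)        # ~[0.333, 0.667]
print(probabilities.sum())  # 1.0 for a normalized wavefunction
```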

  Von Neumann was also very clear that this collapse or process of ‘projecting’ the wavefunction into its final measurement state is not inherent in the equations of quantum theory. It has to be postulated, which is a fancy way of saying that it has to be assumed. Oh, and by the way, there is no experimental or observational evidence for this collapse per se. We just know that we start with a wavefunction which can be expressed as a superposition of the different possible outcomes and we end up with one — and only one — outcome.

  There are, in general, three ways in which we can attempt to get around this assumption. We can try to eliminate it altogether by supplementing quantum theory with an elaborate scheme based on hidden variables. As the experiments described in Chapter 2 amply demonstrate, this scheme has to be very elaborate indeed. We know that hidden variables which reintroduce local reality — variables which establish localized properties and behaviours in an entangled quantum system, for example — are pretty convincingly ruled out by experiments that test Bell’s inequality. We also know that the experiments designed to test Leggett’s inequality tell us that ‘crypto’ non-local hidden variable theories in which we abandon the set-up assumption won’t work either.

  This leaves us with no choice but to embrace a full-blown non-local hidden variables theory.

  Pilot waves

  Such theories do exist, the best known being de Broglie-Bohm pilot wave theory, named for French theorist Louis de Broglie and American David Bohm. At great risk of oversimplifying, the de Broglie-Bohm theory assumes that the behaviour of completely localized (and therefore locally real) quantum particles is governed by a non-local field — the ‘pilot wave’ — which guides the particles along a path from their initial to their final states.

  The particles follow entirely predetermined paths, but the pilot wave is sensitive to the measurement apparatus and its environment. Change the nature of the measurement by changing the orientation of a polarizing filter or opening a second slit, and the pilot wave field changes instantaneously in response. The particles then follow paths dictated by the new pilot wave field.
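  In the textbook form of the theory, the particle velocity is read off from the phase of the pilot wave via the guidance equation v = (ħ/m) Im[(∂ψ/∂x)/ψ]. The one-dimensional sketch below, using an arbitrary Gaussian wave packet that is frozen in time for simplicity, is only meant to illustrate a particle being carried along a definite trajectory by the wave; it is not a faithful simulation of the theory.

```python
import numpy as np

hbar, m = 1.0, 1.0                       # work in arbitrary units
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Illustrative pilot wave: a Gaussian packet with momentum k0 (made-up parameters).
k0 = 2.0
psi = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)

# Guidance equation: v(x) = (hbar/m) * Im( dpsi/dx / psi )
dpsi = np.gradient(psi, dx)
velocity = (hbar / m) * np.imag(dpsi / psi)

# Step a single, fully localized particle along the velocity field (Euler method).
x_particle, dt = 0.5, 0.01
for _ in range(100):
    v_here = np.interp(x_particle, x, velocity)
    x_particle += v_here * dt

print(x_particle)   # drifts to ~2.5, carried along by the packet's momentum k0
```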

  The de Broglie-Bohm theory has attracted a small but dedicated group of advocates, but it is not regarded as mainstream physics. To all intents and purposes, we have simply traded the collapse assumption for a bunch of further assumptions. Yes, we have avoided the collapse assumption and regained determinism — the fates of quantum particles are determined entirely by the operation of classical cause-and-effect principles. But we have also gained a pilot wave field which remains responsible for all the ‘spooky’ action-at-a-distance. And the end result is a theory that, by definition, predicts precisely the same results as quantum theory itself.

  Einstein tended to dismiss this approach as ‘too cheap’.2

  Decoherence and the irreversible act of amplification

  The second approach is to find ways to supplement quantum theory with a mechanism that makes the collapse rather more explicit. In this approach we recognize a basic, unassailable fact about nature — the quantum world of atomic and subatomic particles and the classical world of experience are fundamentally different. At some point we must cross a threshold; we must cross the point at which all the quantum weirdness — the superpositions, the phantom-like ‘here’ and ‘there’ behaviour — disappears. In the process of being amplified to scales that we can directly perceive, the superpositions are eliminated and the phantoms banished, and we finish up with completely separate and non-interacting states of ‘here’ or ‘there’.

  Is it therefore possible to arrange it so that Schrödinger’s cat is never both alive and dead? Can we fix it so that the quantum superposition is collapsed and separated into non-interacting measurement outcomes long before it can be scaled up to cat-sized dimensions?

  The simple truth is that we gain information about the microscopic quantum world only when we can amplify elementary quantum events and turn them into perceptible macroscopic signals, such as the deflection of a pointer against a scale. We never (but never) see superpositions of pointers (or cats). It stands to reason that the process of amplification must kill off this kind of behaviour before it gets to perceptible levels.

  The physicist Dieter Zeh was one of the first to note that the interaction of a quantum wavefunction with a classical measuring apparatus and its environment will lead to rapid, irreversible decoupling or ‘dephasing’ of the components in a superposition, such that any interference terms are destroyed.

  Each state will now produce macroscopically correlated states: different images on the retina, different events in the brain, and different reactions of the observer. The different components represent two completely decoupled worlds. This decoupling describes exactly the [‘collapse of the wavefunction’]. As the ‘other’ component cannot be observed any more, it serves only to save the consistency of quantum theory.3

  But why would this happen? In the process of amplification, the various components of the wavefunction become strongly coupled to the innumerable quantum states of the measuring apparatus and its environment. This coupling selects components that we will eventually recognize as measurement outcomes, and suppresses the interference. The process is referred to as decoherence.

  We can think of decoherence as acting like a kind of quantum ‘friction’, but on a much faster timescale than classical friction. It recognizes that a wavefunction consisting of a superposition of different components is an extremely fragile thing.* Interactions with just a few photons or atoms can quickly result in a loss of phase coherence that we identify as a ‘collapse’. This is fast but it is not instantaneous.
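  A toy sketch of the idea, with a two-state system, equal weights and a decoherence time all chosen arbitrarily for illustration: environmental coupling exponentially suppresses the off-diagonal ‘interference’ terms of the density matrix, leaving only the outcome probabilities on the diagonal.

```python
import numpy as np

# Density matrix for an equal superposition (|here> + |there>)/sqrt(2).
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

tau = 1e-3          # hypothetical decoherence time, arbitrary units
for t in (0.0, 1e-3, 1e-2):
    damping = np.exp(-t / tau)      # off-diagonal terms decay as exp(-t/tau)
    rho_t = rho.copy()
    rho_t[0, 1] *= damping
    rho_t[1, 0] *= damping
    print(t, np.round(rho_t.real, 4))

# The diagonal entries (the outcome probabilities) are untouched; only the
# interference terms that make the state 'here AND there' are destroyed.
```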

  For example, it has been estimated that a large molecule with a radius of about a millionth (10⁻⁶) of a centimetre moving through the air has a ‘decoherence time’ of the order of 10⁻³⁰ seconds, meaning that the molecule is localized within an unimaginably short time and behaves to all intents and purposes as a classical object.4 If we remove the air and observe the molecule in a vacuum, the estimated decoherence time increases to one hundredth of a femtosecond (10⁻¹⁷ seconds), which is getting large enough to be at least imaginable. Placing the molecule in intergalactic space, where it is exposed only to interactions with the cosmic microwave background radiation, increases the estimated decoherence time to 10¹² seconds, meaning that a molecule formed in a quantum superposition state would remain in this state for a little under 32,000 years.

  In contrast, a dust particle with a radius of a thousandth of a centimetre — a thousand times larger than the molecule — has a decoherence time in intergalactic space of about a microsecond (10⁻⁶ seconds). So, even where the possibility of interactions with the environment is reduced to its lowest, the dust particle will behave largely as a classical object.
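  The 32,000-year figure quoted above for the molecule in intergalactic space is a one-line arithmetic check: convert 10¹² seconds into years.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds
print(1e12 / SECONDS_PER_YEAR)             # ~31,700 years: 'a little under 32,000'
```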

  The kinds of timescales over which decoherence is expected to occur for any meaningful example of a quantum system interacting with a classical measuring device suggest that it will be impossible to catch the system in the act of losing coherence. This all seems very reasonable, but we should remember that decoherence is an assumption: we have no direct observational evidence that it happens.

  But does this really solve the measurement problem?

  Decoherence eliminates the potentially embarrassing interference terms in a superposition. We are left with separate, non-interacting states that are statistical mixtures — different proportions of states that are ‘up’ or ‘down’, ‘here’ or ‘there’, ‘alive’ or ‘dead’. We lose all the curious juxtapositions of the different possible outcomes (blends of ‘up’ and ‘down’, etc.). But decoherence provides no explanation for why this specific measurement should give that specific outcome. As John Bell has argued:

  The idea that elimination of coherence, in one way or another, implies the replacement of ‘and’ by ‘or’, is a very common one among solvers of the ‘measurement problem’. It has always puzzled me.5

  This is sometimes referred to as the ‘problem of objectification’. Decoherence theory can eliminate all the superpositions and the interference, but we are still left to deal with quantum probability. We have no mechanism for determining which of the various outcomes — ‘up’/‘down’, ‘here’/‘there’, ‘alive’/‘dead’ — we are actually going to get in the next measurement.
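  The point can be made concretely with a short sketch (the 50:50 weights are illustrative): even after decoherence has reduced the state to a diagonal mixture, all the theory supplies is a probability distribution over outcomes. Picking one outcome is still an extra, unexplained step, crudely mimicked here by a random draw.

```python
import numpy as np

# Diagonal (decohered) density matrix: a 50:50 statistical mixture of 'up' and 'down'.
outcome_probabilities = {"up": 0.5, "down": 0.5}

# Nothing in the formalism selects the next result; a random draw stands in for
# whatever actually does the 'objectification'.
rng = np.random.default_rng()
result = rng.choice(list(outcome_probabilities),
                    p=list(outcome_probabilities.values()))
print(result)   # 'up' or 'down' -- but we could not have predicted which
```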

  There are other theories that seek to make the collapse of the wavefunction explicit. But, of course, they all typically involve the replacement of the collapse assumption with a bunch of other assumptions which similarly have no basis in observation or experiment.

  This leaves us with one last resort.

  Everett’s ‘relative state’ formulation of quantum theory

  The third approach is to turn the quantum measurement problem completely on its head. If there is nothing in the structure of quantum theory to suggest that the collapse of the wavefunction actually happens, then why not simply leave it out? Ah, I hear you cry. We did that already and it led us to non-local hidden variables.

  But there is another way of doing this that is astonishing in its simplicity and audacity. Let’s do away with the collapse assumption and put our trust purely in quantum theory’s deterministic equations. Let’s not add anything.

  Okay, I sense your confusion. If we don’t add anything, then how can we possibly get from a smoothly and continuously evolving superposition of measurement possibilities to just one — and only one — measurement outcome? Easy. We note that as observers in this universe we detect just one — and only one — outcome. We assume that at the moment of measurement the universe splits into two separate, non-interacting ‘branches’. In this branch of the universe you observe the result ‘up’, and you write this down in your laboratory notebook. But in another branch of the universe another you observes the result ‘down’.

  In one branch we run to fetch a bowl of milk for Schrödinger’s very much alive and kicking cat. In another branch we ponder what to do with this very dead cat we found in a box. All the different measurement possibilities inherent in the wavefunction are actually realized. But they’re realized in different branches of the universe.

  As Swedish-American theorist Max Tegmark explained in the BBC Horizon programme mentioned in the Preface:

  I’m here right now but there are many, many different Maxes in parallel universes doing completely different things. Some branched off from this universe very recently and might look exactly the same except they put on a different shirt. Other Maxes may have never moved to the US in the first place or never been born.6

  I have always found it really rather incredible that the sheer stubbornness of the measurement problem could lead us here, to Hugh Everett III’s ‘relative state’ formulation of quantum theory.

  Everett was one of John Wheeler’s graduate students at Princeton University. He began working on what was to become his ‘relative state’ theory in 1954, though it was to have a rather tortured birth. The theory was born, ‘after a slosh or two of sherry’,7 out of a complete rejection of the Copenhagen interpretation and its insistence on a boundary between the microscopic quantum world and the classical macroscopic world of measurement (the world of pointers and cats).

  The problem was that Wheeler revered Niels Bohr and regarded him as his mentor (as did many younger physicists who had passed through Bohr’s Institute for Theoretical Physics in Copenhagen in their formative years). Wheeler was excited by Everett’s work and encouraged him to submit it as a PhD thesis. But he insisted that Everett tone down his language, eliminating his anti-Copenhagen rhetoric and all talk of a ‘splitting’ or ‘branching’ universe.

  Everett was reluctant, but did as he was told. He was awarded his doctorate in 1957 and summarized his ideas in an article heavily influenced by Wheeler which was published later that year. Wheeler published a companion article in the same journal extolling the virtues of Everett’s approach.

  It made little difference. Bohr and his colleagues in Copenhagen accused Everett of using inappropriate language. Everett visited Bohr in 1959 in an attempt to move the debate forward, but neither man was prepared to change his position. By this time the disillusioned Everett had in any case left academia to join the Pentagon’s Weapons System Evaluation Group. He went on to become a multimillionaire.

  Despite Wheeler’s attempts to massage the Everett theory into some form of acceptability, there was no escaping the theory’s implications, nor its foundation in metaphysics. It seemed like the work of a crackpot. Wheeler ultimately came to reject it, declaring: ‘… its infinitely many unobservable worlds make a heavy load of metaphysical baggage’.8

  Many worlds

  And so Everett’s interpretation of quantum theory languished for a decade. But in the early 1970s it would be resurrected and given a whole new lease of life.

  The reason is perhaps not all that hard to understand. The Copenhagen interpretation’s insistence on the distinction between microscopic quantum system and classical macroscopic measurer or observer is fine in practice (if not in principle) when we’re trying to deal with routine ‘everyday’ measurements in the laboratory. But early theories of quantum cosmology, culminating in the development of the ΛCDM model, tell us that there was a time when the entire universe was a quantum system.

  When American theorist Bryce DeWitt, who had also been a student of Wheeler, developed an early version of a theory of quantum gravity featuring a ‘wavefunction of the universe’, this heralded the beginnings of a quantum cosmology. There was no escaping the inadequacy of the Copenhagen interpretation in dealing with such a wavefunction. If we assume that everything there is is ‘inside’ the universe, then there can be no measuring device or observer sitting outside whose purpose is to collapse the wavefunction of the universe and make it ‘real’.

  DeWitt, for one, was convinced that there could be no place for a special or privileged ‘observer’ in quantum theory. In 1973, together with his student Neill Graham, he popularized Everett’s approach as the ‘many worlds’ interpretation of quantum theory, publishing Everett’s original (unedited) Princeton PhD thesis in a book alongside a series of companion articles.

  In this context, the different ‘worlds’ are the different branches of the universe that split apart when a measurement is made. According to this interpretation, the observer is unaware that the universe has split. The observer records the single result ‘up’. She scratches her head and concludes that the wavefunction has somehow mysteriously collapsed. She is unaware of her parallel self, who is also scratching her head and concluding that the wavefunction has collapsed to give the single result ‘down’.

  If this were really happening, how come we remain unaware of it? Wouldn’t we retain some sense that the universe has split? The answer given by early proponents of the Everett formulation is that the laws of quantum theory simply do not allow us to make this kind of observation.

  In a footnote added to the proofs of his 1957 paper, Everett accepted the challenge that a universe that splits every time we make a quantum measurement appears to contradict common experience (and sense). However, he went on to note that when Copernicus first suggested that the earth revolves around the sun (and not the other way around), this view was initially criticized on the grounds that nobody had ever directly experienced the motion of the earth through space. Our inability to sense that the earth is moving was eventually explained by Galileo’s theory of inertia. Likewise, Everett argued, our inability to sense a splitting of the universe is explained by quantum physics.
  If the act of quantum measurement has no special place in the many worlds interpretation, then there is no reason to define measurement as being distinct from any process involving a quantum transition between initial and final states. Now, we would be safe to assume that there have been a great many quantum transitions since the big bang origin of the universe. Each transition will have therefore split the universe into as many worlds as there were contributions in all the different quantum superpositions. DeWitt estimated that there must by now be more than a googol (10¹⁰⁰) worlds.
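  The scale of DeWitt’s estimate is easy to appreciate with a little arithmetic; the two-way split per transition assumed below is a deliberate oversimplification, since real superpositions can have many components. Even so, fewer than 350 binary splittings already yield more than a googol of branches.

```python
import math

# Number of two-way splittings needed to exceed a googol (10**100) of branches.
n = math.ceil(100 / math.log10(2))
print(n, 2 ** n > 10 ** 100)   # 333, True
```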

  As Wheeler himself remarked, the many worlds interpretation is cheap on assumptions, but expensive with universes.

  As the different worlds in which the different outcomes are realized are completely disconnected from each other, there is a sense in which Everett’s formulation anticipated the emergence of decoherence theory. Indeed, variants of the many worlds interpretation that have been developed since DeWitt resurrected it have explicitly incorporated decoherence, avoiding the problem of objectification by assuming that different outcomes are realized in different worlds.

  The original Everett conception of a universe which splits into multiple copies has undergone a number of reinterpretations. Oxford theorist David Deutsch argued that it is wrong to think of the universe splitting with each quantum interaction, and proposed instead that there exist a possibly infinite number of parallel worlds or parallel universes, among which the different outcomes are somehow partitioned. These parallel universes have become collectively known as the ‘multiverse’.*

 
