
Quantum Reality


by Jim Baggott


  But this ‘neo-Everettian’ line of reasoning really just locks us into an endless debate about trading off different metaphysical preconceptions. If you personally prefer to stick with the simplicity of a real universal wavefunction whose motion is fully described by process 2, then you must decide whether you’re ready to accept a multiplicity of worlds, defined either as physically real or simply ‘effective’, for all practical purposes. You can avoid this by resorting to de Broglie–Bohm theory, but then, as we’ve seen, you need to reconcile yourself to non-local spooky action at a distance.

  Zeh introduced decoherence into the mix as a way of sharpening the relationship between quantum and classical systems. Bell talked about replacing many worlds with many particle ‘histories’. It’s therefore a relatively short step from this back to the decoherent histories interpretation which I presented in Chapter 6. In his popular book The Quark and the Jaguar, published in 1994, Murray Gell-Mann explained it like this:17

  We believe Everett’s work to be useful and important, but we believe that there is much more to be done. In some cases too, his choice of vocabulary and that of subsequent commentators on his work have created confusion. For example, his interpretation is often described in terms of ‘many worlds’, whereas we believe that ‘many alternative histories of the universe’ is what is really meant. Furthermore, the many worlds are described as being ‘all equally real’, whereas we believe it is less confusing to speak of ‘many histories, all treated alike by the theory except for their different probabilities’.

  This is fine, as far as it goes. But note that this change of perspective isn’t only about reinterpreting the words we use. The many-worlds interpretation is fundamentally realist—it assumes the existence of a real universal wavefunction and possibly a multiplicity of real worlds—whereas, as we saw in Chapter 6, decoherent histories is broadly anti-realist: ‘all equally real’ is diluted to ‘all treated alike by the theory’. You begin to get the sense that there’s no easy way out.

  And herein lies the rub. Many theoretical physicists and philosophers who advocate the many-worlds interpretation, or who claim to be ‘Everettians’ or ‘neo-Everettians’, don’t necessarily buy into a single interpretation or a single set of metaphysical preconceptions.* ‘After 50 years, there is no well-defined, generally agreed set of assumptions and postulates that together constitute “the Everett interpretation of quantum theory”,’ wrote Adrian Kent in 2010.18 The chances are that advocates of many worlds buy into their own, possibly individually unique, understanding of what this actually means for them. This is important for what follows, as my criticism is confined to those theorists and philosophers who have not only embraced their inner metaphysician, but who have decided to go all-in with the metaphysics.

  In May 1977, DeWitt and Wheeler invited Everett to participate in a conference organized at the University of Texas in Austin. The subjects of the conference were human consciousness and computer-generated artificial intelligence, likely reflecting Wheeler’s growing interest in the role of consciousness in helping define the laws of physics in a ‘participatory universe’. Everett gave a seminar not on many worlds, but on machine self-learning.

  During a lunchtime break at a beer-garden restaurant, DeWitt arranged for Everett to sit alongside one of Wheeler’s young graduate students. Over lunch the student probed Everett about his interpretation, and about how he himself preferred to think about it. Although Everett’s career had by now taken him far from academia and he was no longer immersed in questions about the interpretation of quantum mechanics, the student was very impressed. Everett was still very much in tune with the debate.

  The student’s name was David Deutsch.

  Deutsch would go on to develop his own singular version of the many-worlds interpretation. He argued that the notion of a universe ‘branching’ with each and every transition involving a quantum superposition couldn’t be right. The simple fact that interference is possible with a single particle tells us that reality consists of an infinity of parallel universes, which form what is now generally known as the multiverse.

  To follow Deutsch’s arguments let’s return to the description of two-slit interference involving electrons in Chapter 1, and especially Figure 4. Look again at Figure 4a, in which we see just a few scattered points of brightness each indicating that ‘an electron struck here’. In Chapter 1, I explained that interference effects with single electrons arguably demonstrate that each individual electron behaves as a wave—conceived of as a real wave or a ‘wave of information’ (whatever that means)—passing through both slits at once. But what if electrons really do maintain their integrity as real, localized particles, capable of passing through only one slit or the other? Deutsch argues that the only way to recover interference from this is to propose that each electron is accompanied by a host of ‘shadow’ or ‘ghost’ electrons, which pass through both slits and interfere with the path of the visible electron.

  Whilst these ‘shadow’ electrons clearly influence the path of the visible electron, they are themselves not detectable—they make no other tangible impression. One explanation for this is that the ‘shadow’ electrons do not exist in ‘our’ universe. Instead they inhabit ‘a huge number of parallel universes, each similar in composition to the tangible one, and each obeying the same laws of physics, but differing in that the particles are in different positions in different universes’.19 When we observe single-particle interference, what we see is not a quantum wave–particle interfering with itself, but rather a host of particles in parallel universes interfering with a particle in our own, tangible universe.

  In this interpretation, the ‘tangible’ universe is simply the one which you experience and with which you are familiar. It is not privileged or unique: there is no ‘master universe’. In fact, there is a multiplicity of ‘you’ in a multiplicity of universes, and each regards their universe as the tangible one. Because of the quantum nature of the reality on which these different universes are founded, some of these ‘you’s have had different experiences and have different histories or recollections of events. As Deutsch explained: ‘Many of those Davids are at this moment writing these very words. Some are putting it better. Others have gone for a cup of tea.’20

  This is quite a lot to swallow, particularly when we consider that we have absolutely no empirical evidence of the existence of all these parallel universes. But Deutsch argues that the multiverse interpretation is the only way we can explain the extraordinary potential for quantum computing.

  This is worth a short diversion.

  The processors that sit in every desktop computer, laptop, tablet, smartphone, smartwatch, and item of wearable technology perform their computations on strings of binary information called ‘bits’, consisting of ‘0’s and ‘1’s. A classical bit has the value 0 or 1: one or the other, never both. Classical bits can’t be added together in mysterious ways to make superpositions of 0 and 1. However, if we make bits out of quantum particles such as photons or electrons, then curious superpositions of 0 and 1 become perfectly possible. For example, we could assign the value ‘0’ to the spin state ↑ and the value ‘1’ to the spin state ↓. Such ‘quantum bits’ are referred to as ‘qubits’. Because we can form superpositions of qubits, the processing of quantum information works very differently from the processing of classical information.
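
  To make this concrete, here is a minimal sketch in Python (using numpy) of a single qubit encoded in the spin states just described; the particular amplitudes are simply an illustrative choice.

```python
import numpy as np

# Basis states: spin-up (value '0') and spin-down (value '1').
up = np.array([1, 0], dtype=complex)     # |0>
down = np.array([0, 1], dtype=complex)   # |1>

# A qubit can sit in a superposition a|0> + b|1>.  The amplitudes
# below are an arbitrary illustrative choice, not a special state.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
qubit = a * up + b * down

# The squared magnitudes of the amplitudes give the probabilities of
# finding '0' or '1' on measurement; they must sum to 1.
probs = np.abs(qubit) ** 2
print(probs, probs.sum())                # [0.5 0.5] 1.0
```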

  A system of classical bits can form only one ‘bit string’ at a time, such as 01001101. But in a system consisting of qubits we can form superpositions of all the different possible combinations. The physical state of the superposition is determined by the amplitudes of the wavefunctions of each qubit combination, subject to the restriction that the squares of these amplitudes sum to 1 (measurement can give one, and only one, bit string).
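
  As a rough numerical sketch of this, here is a small three-qubit register held in an equal superposition of all eight bit strings (the equal amplitudes are a simplifying assumption; in general they differ), with a single bit string drawn at random on ‘measurement’:

```python
import numpy as np

n = 3                                    # a small register of 3 qubits
dim = 2 ** n                             # 8 possible bit strings

# An equal superposition of all 2**n bit strings.  Any amplitudes whose
# squared magnitudes sum to 1 would do; equal ones keep things simple.
amplitudes = np.ones(dim, dtype=complex) / np.sqrt(dim)
assert np.isclose(np.sum(np.abs(amplitudes) ** 2), 1.0)

# Measurement yields one, and only one, bit string, with probability
# given by the squared magnitude of its amplitude.
probabilities = np.abs(amplitudes) ** 2
outcome = np.random.choice(dim, p=probabilities)
print(format(outcome, f'0{n}b'))         # e.g. '101'
```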

  And here’s where it gets very interesting. If we apply a computational process to a classical bit, then the value of that bit may change from one possibility to another. For example, a string with eight bits may change from 01001101 to 01001001. But applying a computational process to a qubit superposition changes all the components of the superposition simultaneously. An input superposition yields an output superposition.
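
  The contrast can be sketched in a few lines of Python; the particular bit flip and the unequal amplitudes below are chosen purely for illustration:

```python
import numpy as np

# Classical: one operation turns one bit string into another.
bits = list('01001101')
bits[5] = '0'                            # flip a single bit
print(''.join(bits))                     # '01001001'

# Quantum: the analogous operation, applied to a register held in
# superposition, acts on every component at once.  Here a NOT (X)
# gate is applied to the second qubit of a two-qubit register.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
gate = np.kron(I, X)                     # act on qubit 2 only

state = np.array([1, 2, 3, 4], dtype=complex)
state /= np.linalg.norm(state)           # amplitudes of 00, 01, 10, 11
new_state = gate @ state                 # all four components change together
print(np.round(new_state, 3))            # the 00<->01 and 10<->11 amplitudes have swapped
```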

  A computation performed on an input in a classical computer is achieved by a few transistors which turn on or off. A linear input gives us a linear output, and if we want to perform more computations within a certain time we need to pack more transistors into the processor. We’ve been fortunate to witness an extraordinary growth in computer power in the past 30 years.* The Intel 4004, introduced in 1971, held 2,300 transistors in a processor measuring 12 square millimetres. The Apple A12 Bionic, used in the iPhone XS, XS Max, and XR, which were launched in September 2018, packs about 7 billion transistors into a processor measuring 83 square millimetres. The record is currently on the order of 20 to 30 billion transistors, depending on the type of chip.

  But this is nothing compared with a quantum computer, which promises an exponential increase in the amount of computation that can be performed in the same time.

  The cryptographic systems used for most Internet-based communications and financial transactions are based on the simple fact that finding the prime factors of very large integers requires a vast amount of computing power.† For example, it has been estimated that a network of a million conventional computers would require over a million years to find the prime factors of a 250-digit number. Yet this feat could, in principle, be performed in a matter of minutes using a single quantum computer.‡
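
  To get a feel for why factoring is hard classically, here is a naive trial-division factorizer in Python. The toy modulus is illustrative only, and real cryptographic attacks and defences use far more sophisticated methods, but the basic point stands: the work grows rapidly with the size of the number.

```python
def prime_factors(n):
    """Naive trial division: fine for small n, hopeless for the
    hundreds-of-digits numbers used in real cryptography."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# A toy 'RSA-style' modulus: the product of two primes.
print(prime_factors(3233))               # [53, 61]
# For this method, doubling the number of digits roughly squares the
# work; at 250 digits the approach is utterly impractical.
```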

  Deutsch claims that this kind of enhancement in computing power is only possible by leveraging the existence of the multiverse. If using a quantum computer to factor a number requires 10⁵⁰⁰ or so times the computational resources that we see to be physically present, where, then, is the number factorized?21 Given that there are only about 10⁸⁰ atoms in the visible Universe, Deutsch argues that to complete a quantum computation we need to call on the resources available in a multitude of other parallel universes.
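
  Deutsch’s figure can be given a rough back-of-the-envelope reading: a register of n qubits is described by 2ⁿ amplitudes, so a register of a couple of thousand qubits already involves vastly more amplitudes than there are atoms in the visible Universe. A quick check of the arithmetic (the register sizes are illustrative):

```python
import math

# Number of qubits whose state vector holds roughly 10**500 amplitudes:
# solve 2**n = 10**500, i.e. n = 500 / log10(2).
n_500 = 500 / math.log10(2)
print(round(n_500))                      # about 1661 qubits

# By comparison, ~10**80 atoms in the visible Universe corresponds to
# a register of only about 266 qubits (2**266 is roughly 10**80).
n_80 = 80 / math.log10(2)
print(round(n_80))                       # about 266 qubits
```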

  Needless to say, a quantum computer with this kind of capability is not yet available. As I’ve already mentioned, systems prepared in a quantum superposition are extremely sensitive to environmental decoherence and, in a quantum computer, the superposition must be manipulated without destroying it. It is allowed to decohere only when the computation is complete. To date, quantum computing has been successfully demonstrated using only a relatively small number of qubits.* We may still be 20–30 years away from the kind of quantum computer that could threaten the encryption systems used today.

  Practical considerations notwithstanding, we need to address Deutsch’s principal assertion—that the existence of the multiverse is the only explanation for the enhancement of processing speed in a quantum computer.

  It should come as no real surprise to learn that there are more than a few problems with the many-worlds interpretation that carry through to Deutsch’s multiverse. First, there is a problem with the way that many worlds handles quantum probability, and for many years arguments have bounced back and forth over whether it is possible to derive the Born rule using this interpretation. Everett claimed to have solved this problem already in his dissertation, but not everybody is satisfied. There seems nothing to prevent an observer in one particular universe observing a sequence of measurement outcomes that do not conform to Born-rule expectations.

  A colourful example was provided by many-worlds enthusiast Max Tegmark (another former student of Wheeler’s). He proposed an experiment to test the many-worlds interpretation that is not for the faint of heart. Indeed, experimentalists of weak disposition should look away now.

  Instead of connecting our measuring device to a gauge, imagine that we connect it to a machine gun. This is set up so that when particle A is detected to be in a ↓ state, a bullet is loaded into the chamber of the machine gun and the gun fires. If particle A is detected to be in an ↑ state, no bullet is loaded and the gun instead just gives an audible ‘click’. We stand well back, and turn on our preparation device. This produces a steady stream of A particles in a superposition of ↑ and ↓ states. We satisfy ourselves that the apparatus fires bullets and gives audible clicks with equal frequency in an apparently random sequence.

  Now for the grisly bit.

  You stand with your head in front of the machine gun. (I’m afraid I’m not so convinced by arguments for the many-worlds interpretation that I’m prepared to risk my life in this way, and somebody has to do this experiment.) Of course, as an advocate of many worlds, you presume that all you will hear is a long series of audible clicks. You are aware that there are worlds in which your brains have been liberally distributed on the laboratory walls, but you are not particularly worried by this because there are other worlds where you are spared.

  By definition, if you are not dead, then your history is one in which you have heard only a long series of audible clicks. You can check that the apparatus is still working properly by moving your head to one side, at which point you will start to hear gunfire again. If, on the other hand, the many-worlds interpretation is wrong and the wavefunction simply represents coded information, or the collapse is a physically real phenomenon, then you might be lucky with the first few measurements but, make no mistake, you will soon be killed. Your continued existence (indeed, you appear to be miraculously invulnerable to an apparatus that really should kill you) would appear to be convincing evidence that the many-worlds interpretation is right.

  Apart from the obvious risk to your life, the problem with this experiment becomes apparent as soon as you try to publish a paper describing your findings to a sceptical physics community. There may be worlds in which all you recorded was a long series of audible clicks. There are, however, many other worlds where I was left with a very unpleasant mess and a lot of explaining to do. The possibility of entering one of these worlds when you repeat the experiment does not disappear, and you will find that you have a hard time convincing your peers that you are anything other than quite mad. Tegmark wrote:22

  Perhaps the greatest irony of quantum mechanics is that…. if once you feel ready to die, you repeatedly attempt quantum suicide: you will experimentally convince yourself that the [many-worlds interpretation] is correct, but you can never convince anyone else!

  I’d like to make a further point. A history in which all you’ve heard is a long series of audible clicks suggests a world in which the superposition only ever produces an A↑ result. This is much like tossing a fair coin and only ever getting ‘heads’, or rolling a die and only ever getting a six. Yet the Born rule insists that there should be a 50:50 probability of observing A↑ and A↓. How can the Born rule be recovered from this?
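
  The arithmetic behind this worry is simple: if the Born rule gives a 50:50 chance on each trial, the probability of hearing nothing but clicks shrinks exponentially with the number of trials. A short Python check (the trial counts are arbitrary):

```python
# Probability of n consecutive 'click' (A-up) outcomes under the
# Born rule, with a 50:50 chance on each independent trial.
for n in (10, 50, 100):
    print(n, 0.5 ** n)
# 10  -> ~1e-3
# 50  -> ~9e-16
# 100 -> ~8e-31
```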

  Philosopher David Wallace devotes three long chapters to probability and statistical inference in his book The Emergent Multiverse: Quantum Theory According to the Everett Interpretation, published in 2012. This is perhaps the most comprehensive summary of the Everett interpretation available, though readers should note that this is not a popular account, nor is it likely to be accessible to graduate students without some training in both physics and philosophy. Wallace seeks to avoid the connotations of ‘many worlds’ (the reader can almost hear Wheeler, whispering over his shoulder: ‘better words needed!’) but, like Everett, Wallace resorts to subjective, Bayesian decision theory, arguing that the components of the wavefunction are translated to different ‘weights’ for different branches. The observer then subjectively assigns ‘probabilities to the outcomes of future events in accordance with the Born Rule’.23

  But in Tegmark’s quantum suicide experiment, you only ever experience a long series of audible clicks, and this would surely lead you to conclude that in future events you’re only ever going to get the result A↑. Getting around this challenge requires some interesting mental gymnastics. If this is about the expectation of different subjective experiences, then we might be inclined to accept that death is not an experience. The experiment is flawed precisely because of the way it is set up. Wallace writes: ‘experiments which provide the evidential basis for quantum mechanics do not generally involve the death of the experimenter, far less of third parties such as the writer!’24

  I confess I’m not entirely convinced. And I’m not alone. Lev Vaidman—another many-worlds enthusiast—isn’t completely convinced, either.25

  Look back briefly at the discussion in Chapter 5. Suppose we prepare an ensemble of A particles in a superposition of spin states ↑ and ↓, but we measure the outcomes in the basis + and −. Given that there is a potentially large number of ways of expressing the superposition in terms of different basis states, and we are free to choose among them, how are the branches supposed to ‘know’ which basis corresponds to the outcomes that are to be observed? This is the problem of the ‘preferred basis’.
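
  The freedom being described here can be made explicit with a small change-of-basis calculation. In the sketch below (in Python, with illustrative amplitudes), the + and − states are taken to be the equal-weight superpositions of ↑ and ↓, and the very same state is simply rewritten in the new basis:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)             # spin-up
down = np.array([0, 1], dtype=complex)           # spin-down
plus = (up + down) / np.sqrt(2)                  # '+' state
minus = (up - down) / np.sqrt(2)                 # '-' state

# One and the same state, written first in the up/down basis...
state = np.sqrt(0.8) * up + np.sqrt(0.2) * down

# ...and then re-expressed in the +/- basis by projecting onto it.
amp_plus = np.vdot(plus, state)
amp_minus = np.vdot(minus, state)
print(np.abs(amp_plus) ** 2, np.abs(amp_minus) ** 2)   # 0.9 0.1
# Nothing physical has changed; only the basis in which we choose to
# describe (and to measure) the state is different.
```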

  As we’ve seen, the localization of the different measurement outcomes through decoherence arguably helps to get rid of (or, at least, minimize) the interference terms, but the wavefunction must still be presumed to be somehow steered towards the preferred basis in the process. In Wallace’s version of the Everett interpretation, the real wavefunction interacts with the classical measuring device and the environment, evolving naturally into a superposition of the measurement outcomes. The interference terms are dampened, and the eventual outcomes are realized in different branches. Wallace writes: ‘Decoherence is a dynamical process by which two components of a complex entity (the quantum state) come to evolve independently of one another, and it occurs owing to rather high-level, emergent consequences of the particular dynamics and the initial state of our universe.’26 So, decoherence naturally and smoothly connects the initial wavefunction with the preferred basis, determined by how the experiment is set up.
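
  A crude way to picture this ‘dampening’ is with a density matrix whose off-diagonal (interference) terms decay as the system entangles with its environment. The exponential decay law below is a standard but simplified model, and the rate is an arbitrary choice:

```python
import numpy as np

# Density matrix of an equal superposition of two measurement outcomes.
# The off-diagonal elements are the interference ('coherence') terms.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

def decohere(rho, gamma, t):
    """Suppress the off-diagonal terms by exp(-gamma * t)."""
    out = rho.copy()
    decay = np.exp(-gamma * t)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

for t in (0.0, 1.0, 5.0):
    print(t, np.round(decohere(rho, gamma=2.0, t=t), 4).real)
# As t grows the matrix approaches diag(0.5, 0.5): both outcomes remain,
# with their Born-rule weights, but the interference between them is gone.
```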

  This might sound quite plausible, but now let’s—at last—return to quantum computing. It turns out that there is more than one way to process qubits in a quantum computer. Instead of processing them in sequence, a ‘one-way’ quantum computer based on a highly entangled ‘cluster state’ proceeds by feeding forward the outcomes of irreversible measurements performed on single qubits. The (random) outcome from one step determines the basis to be applied for the next step, and the nature of the measurements and the order in which they are executed can be configured so that they compute a specific algorithm.* Such a computer was proposed in 2001 by Robert Raussendorf and Hans Briegel,27 and a practical demonstration of computations using a four-qubit cluster state was reported in 2005.28
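
  The flavour of this ‘measure and feed forward’ idea can be captured with the simplest two-qubit building block: entangle an input qubit with a helper qubit, measure the first in a rotated basis, and correct the second according to the (random) outcome. The Python sketch below uses conventions and an angle chosen for illustration; it is the basic primitive, not Raussendorf and Briegel’s full cluster-state scheme.

```python
import numpy as np

# An arbitrary input qubit a|0> + b|1> (illustrative amplitudes).
a, b = 0.6, 0.8j
psi = np.array([a, b], dtype=complex)

# Entangle it with a helper qubit prepared in |+>, using a CZ gate:
# this two-qubit entangled pair is the computational resource.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
state = CZ @ np.kron(psi, plus)

# Measure qubit 1 in the rotated basis (|0> +/- e^{i*theta}|1>)/sqrt(2).
theta = 0.7                                       # arbitrary angle
m0 = np.array([1,  np.exp(1j * theta)], dtype=complex) / np.sqrt(2)
m1 = np.array([1, -np.exp(1j * theta)], dtype=complex) / np.sqrt(2)

def leftover_on_qubit2(state, outcome_state):
    """Project qubit 1 onto the given outcome and return the
    (unnormalized) state left behind on qubit 2."""
    return outcome_state.conj() @ state.reshape(2, 2)

out0 = leftover_on_qubit2(state, m0)              # outcome s = 0
out1 = leftover_on_qubit2(state, m1)              # outcome s = 1

# Feed-forward: the random outcome s = 1 needs an X (bit-flip) correction.
X = np.array([[0, 1], [1, 0]], dtype=complex)
out1 = X @ out1

# Either way, qubit 2 now carries a definite operation applied to the
# input: a Hadamard following a z-rotation by -theta.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Rz = np.diag([1, np.exp(-1j * theta)])
target = H @ Rz @ psi
print(np.allclose(out0 / np.linalg.norm(out0), target))   # True
print(np.allclose(out1 / np.linalg.norm(out1), target))   # True
```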

 
