
Quantum Reality


by Jim Baggott


  So, in a cluster state computer, the basis for each measurement changes randomly from one step in the calculation to the next, and will differ from one qubit to the next. Note that whilst ‘measurement’ here is irreversible, it does not involve an act of amplification, in which we might invoke decoherence. Quite the contrary. The superpositions required for quantum computing must remain coherent—the maximum length of a computation is determined by the length of time that coherence can be maintained. In such a system, decoherence is unwanted ‘noise’.

  Even if it were possible for a preferred basis somehow to emerge during one step of the computation, this is not necessarily the basis needed for the next step in the sequence. Decoherence can’t help us here. Philosopher Michael Cuffaro writes: ‘Thus there is no way in which to characterise the cluster state computer as performing its computations in many worlds, for there is no way, in the context of the cluster state computer, to even define these worlds for the purposes of describing the computation as a whole.’29

  In this case, it’s doubtful that many worlds or the multiverse serve any useful purpose as a way of thinking about quantum computation. Cuffaro believes that those advocates who take the physical reality of the different worlds or branches rather less seriously should be broadly in agreement with his arguments.30 In other words, if the many different worlds simply represent a useful way of thinking about the problem, but are not assumed to be physically real, then this is no big deal.

  Wallace acknowledges that whilst what he calls ‘massive parallelism’ (i.e. the multiverse) might have been helpful historically as a way of thinking about quantum computation, it ‘has not been especially productive subsequently’. He continues: ‘Nor would the Everett interpretation particularly lead one to think otherwise: the massively parallel-classical-goings-on way to understand a quantum state is something that occurs only emergently, in the right circumstances, and there’s no reason it has to be available for inherently microscopic (i.e. not decoherent) systems.’31 But, in circumstances where decoherence is possible, as far as Wallace is concerned the many emergent worlds are real ‘in the same sense that Earth and Mars are real’.32

  Wallace is a philosopher, and his musings on the reality of the multiverse are largely confined to philosophy journals and books. But Deutsch is a scientist. Yet he too insists that we accept that these worlds really do exist: ‘It’s my opinion that the state of the arguments, and evidence, about other universes closely parallels that about dinosaurs. Namely: they’re real—get over it.’33

  I believe we’ve now crossed a threshold. I’ve claimed that it is impossible to do science of any kind without metaphysics. But when the metaphysics is completely overwhelming and the hope of any contact with Empirical Reality is abandoned—when the metaphysics is all there is—I would argue that such speculations are no longer scientific. Now perched on the very edge of Charybdis, the Ship of Science is caught in its powerful grip. We watch, dismayed, as it starts to slip into the maelstrom.

  In recent years, other varieties of multiverse theory have entered the public consciousness, derived from the cosmological theory of ‘eternal inflation’ and the so-called ‘cosmic landscape’ of superstring theory. These variants are very different: they were conceived for different reasons and purport to ‘explain’ different aspects of foundational physics and cosmology. But these variants provoke much the same line of argument. Thus, Martin Rees, Britain’s Astronomer Royal, declares that the cosmological multiverse is not metaphysics but exciting science, which ‘may be true’, and on which he’d bet his dog’s life.34

  Despite their different origin and explanatory purpose, some theorists have sought to conflate these different multiverse theories into a single structure. In his recent book Our Mathematical Universe, Tegmark organizes these different approaches into a nested hierarchy of four ‘levels’.35 The Level I multiverse comprises universes with different sets of initial Big Bang conditions and histories but the same fundamental laws of physics. This is the multiverse of eternal inflation. Level II is a multiverse in which universes have the same fundamental laws of physics but different effective laws (different physical constants, for example). We happen to live in a universe for which the laws and constants enable intelligent life to exist (this is the ‘fine-tuning’ problem). Level III is the multiverse of the many-worlds interpretation of quantum mechanics. Level IV is the multiverse of all possible mathematical structures corresponding to different fundamental laws of physics.

  I’ll leave you to decide what to make of this.

  The many-worlds interpretation and the different varieties of multiverse theory have attracted some high-profile advocates, such as Sean Carroll, Neil deGrasse Tyson, David Deutsch, Brian Greene, Alan Guth, Lawrence Krauss, Andrei Linde, Martin Rees, Leonard Susskind, Max Tegmark, Lev Vaidman, and David Wallace. Note once again that this list includes ‘neo-Everettians’ who do not necessarily interpret the multiverse realistically, but prefer to think about it as a useful conceptual device. Curiously, these tend to be philosophers: it is frequently the scientists who want to be so much more literal. Those raising their voices against this kind of approach—for all kinds of different reasons—include Paul Davies, George Ellis, David Gross, Sabine Hossenfelder, Roger Penrose, Carlo Rovelli, Joe Silk, Paul Steinhardt, Neil Turok, and Peter Woit. I’m inclined to agree with them.36

  Of course, academic scientists are free to choose what they want to believe and within reason they can publish and say what they like. But in their public pronouncements and publications, the highly speculative and controversial nature of multiverse theories is often overlooked, or simply ignored. The multiverse is cool. Put multiverse in the title or in the headlines of an article and it is more likely to capture attention, get reported in the mainstream media, and invite that all-important click, or purchase.

  Many worlds can also be positioned as a rather fashionable rejection of the Copenhagen interpretation, with its advocates (especially Everett and DeWitt) romanticized and portrayed as heroes, ‘sticking it to the man’, the ‘man’ in question being Bohr and Heisenberg, and their villainous orthodoxy.37 This is the picture that Adam Becker paints in his recent popular book What is Real? Becker argues that theories ‘need to give explanations, unify previously disparate concepts, and bear some relationship with the world around us’.38 But when all contact with Empirical Reality is lost and all we are left with is the metaphysics, who decides what constitutes ‘some relationship’?

  In his recent book on quantum mechanics, Lee Smolin calls this tendency ‘magical realism’,39 and I personally believe this is very dangerous territory. The temptation to fight dogma with yet more dogma can be hard to resist. When taken together with other speculative theories of foundational physics, the multiverse tempts us away from what many regard as rather old-fashioned notions of the scientific method; its advocates want to wean us off our obsession with empirical evidence and instead just embrace the ‘parsimony’ that comes with purely metaphysical explanations.

  At a time when the authority of science is increasingly questioned by those promoting a firmly anti-scientific agenda, this kind of thing can’t be good. As the noted Danish historian Helge Kragh concluded:40

  But, so it has been argued, intelligent design is hardly less testable than many multiverse theories. To dismiss intelligent design on the ground that it is untestable, and yet to accept the multiverse as an interesting scientific hypothesis, may come suspiciously close to applying double standards. As seen from the perspective of some creationists, and also by some non-creationists, their cause has received unintended methodological support from multiverse physics.

  Don’t get me wrong. I fully understand why those theorists and philosophers who prefer to adopt a realist perspective feel they have no choice but to accept the many-worlds interpretation. But, in the absence of evidence, personal preferences don’t translate into really existing physical things. For my part I’ll happily accept that many worlds were of enormous value as a way of thinking about quantum computation. But thinking about them doesn’t make them real. And, whilst alternative anti-realist interpretations may be less philosophically acceptable to some, it must be admitted that they just don’t drag quite so much metaphysical baggage around with them. The formalism itself remains passively neutral and inscrutable. It doesn’t care what we think it means.

  One last point. Unlike even the more outrageously speculative realist interpretations we’ve considered thus far, interpretations based on many worlds or the multiverse offer no real clues as to how we might gain any further empirical evidence one way or the other. This is, for me at least, where the multiverse theories really break down. Whatever we think they might be ‘explaining’ about the nature of quantum reality, we have to admit that there’s little or nothing practical to be gained from such explanations. They provide no basis for taking any kind of action according to Proposition #4. Even when predictions are claimed, they’re little different from the vague soothsayers’ tricks cited by Popper. Unsurprising, really, as this is surely what Einstein was warning us about in 1950, when he explained that the ‘passion for understanding’ leads to the ‘illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations—in short, by metaphysics’.41

  As I explained in Chapter 3, the philosopher James Ladyman suggests that we look to the institutions of science to demarcate between science and non-science, and so defend the integrity of science by excluding claims to objective knowledge based on pure metaphysics. But these institutions haven’t so far prevented the publication, in scientific journals, of research papers empty of empirical content, filled with speculative theorizing that offers little or no promise of ever making any kind of contact with Empirical Reality. Despite efforts by cosmologist George Ellis and astrophysicist Joe Silk to raise a red flag in 2014 and call on some of these institutions to ‘defend the integrity of physics’,42 little has changed. Ladyman seems resigned to this fate: ‘Widespread error about fundamentals among experts can and does happen,’ he tells me.43 He believes a correction will come in the long run, when a real scientific breakthrough is made.

  Until that happens, we have no choice but to watch in horror as the Ship of Science disappears into the maelstrom. All hands are lost.

  * Interestingly, in his Physics Today paper, DeWitt argued that von Neumann’s collapse postulate was part of the ‘conventional’ or ‘Copenhagen’ interpretation. And yet a formal theory of measurement was never part of the Copenhagen orthodoxy (Bohr’s biographer Abraham Pais found an entry in one of his old notebooks pertaining to a lecture delivered by Bohr in November 1954 which reads: ‘[Bohr] thinks that the notion “quantum theory of measurement” is wrongly put’ (see Abraham Pais, Niels Bohr’s Times, Oxford University Press, 1991, p. 435)). Indeed, why would an anti-realist interpretation which draws a line between quantum and classical worlds require a quantum theory of measurement? Whilst ‘conventional’ interpretation is fair, DeWitt misrepresents ‘Copenhagen’ and—to me at least—it appears that he is seeking to demonize it. More on this later in this chapter.

  * Just as there’s no single ‘Copenhagen interpretation’.

  * According to Moore’s law (named for Gordon Moore, one of the founders of Intel), the number of transistors in a computer processor doubles every 18 months to 2 years.

  † Factorization involves finding integers that are factors of a larger number. For example, the number 42 can be factored as 6 × 7. Prime factorization involves finding factors that are prime numbers, numbers that themselves can’t be factored any further. Thus, the prime factors of the number 42 are 2, 3, and 7.
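  The footnote’s example can be sketched as a few lines of Python using simple trial division (the function name and implementation are my own illustration, not anything from the book):

```python
def prime_factors(n: int) -> list[int]:
    """Return the prime factors of n by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        # Divide out each factor d as many times as it divides n
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(42))  # → [2, 3, 7]
```

  Trial division is hopelessly slow for the very large numbers used in cryptography, which is precisely why a quantum algorithm for factoring (Shor’s) attracts so much attention.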

  ‡ We need to be a bit careful here. Quantum computers offer an exponentially faster processing speed only for certain problems, such as factoring. For other problems—such as sorting—quantum computers offer no advantages at all. See Peter Shor’s interview with John Horgan: https://blogs.scientificamerican.com/cross-check/quantum-computing-for-english-majors/

  * The D-Wave 2000Q System is based on a quantum processor consisting of 2,000 superconducting qubits and purportedly costs $15 million. But there’s a caveat. Whilst it is generally understood that D-Wave machines are indeed based on quantum computing, many of the qubits are needed for error correction rather than calculation.

  * For example, a specific configuration of four qubits organized in a circuit can be used to compute a database search algorithm devised by Lov Grover in 1996.

  Epilogue

  I’ve Got a Very Bad Feeling about This

  Whatever you make of all this, I think you have to agree that quantum mechanics is an extraordinary theory. It forces us to confront difficult questions about what we think we’re doing when we develop a scientific representation of physical reality, and what we expect to get from such a representation. And it forces us to face some simple philosophical truths that were all too easy to ignore in classical mechanics.

  I hope I’ve done enough in this book to explain the nature of our dilemma. We can adopt an anti-realist interpretation in which all our conceptual problems vanish, but which obliges us to accept that we’ve reached the limit of our ability to access deeper truths about a reality of things-in-themselves. The anti-realist interpretations tell us that there’s nothing to see here. Of necessity, they offer no hints as to where we might look to gain some new insights or understanding. They are passive; mute witnesses to the inscrutability of nature.

  In contrast, the simpler and more palatable realist interpretations based on local or crypto non-local hidden variables offered plenty of hints and continue to motivate ever more exquisitely subtle experiments. Alas, the evidence is now quite overwhelming and all but the most stubborn of physicists accept that nature denies us this easy way out. If we prefer a realist interpretation, taking the wavefunction and all the conceptual problems this implies at face value, then we’re left with what I can only call a choice between unpalatable evils. We can choose de Broglie–Bohm theory, and accept non-local spooky action at a distance. We can choose to add a rather ad hoc spontaneous collapse mechanism, and hope for the best. We can choose to involve consciousness in the mix, conflating one seemingly intractable problem with another. Or we can choose Everett, many worlds, and the multiverse.

  In his recent book Einstein’s Unfinished Revolution, Smolin concludes that quantum mechanics must be incomplete, but that ‘realism, in any version, has a price we have to pay to get a new theory that makes complete sense and describes nature correctly and completely’.1 I leave you to decide which of the realist interpretations we’ve considered in this book might be worth paying the price (or which is the lesser evil). Smolin remains unconvinced by any and all of them, and has grown weary of arguing the ins and outs of existing approaches. He feels he has no choice but to ‘head down into the swamps’ in search of new ideas, knowing that ‘I will almost certainly fail, but I hope to send back reports to interest and inspire those few others who feel in their bones the cost of our ignorance, of giving up the search too soon.’2

  For many years, I have on balance preferred Einstein’s realism. I have championed Bell’s rejection of the Copenhagen orthodoxy (I still do). I trained as an experimentalist, and I’d argue that it’s really hard to do experiments of any kind—to intervene, in Hacking’s parlance—without a strong belief in the reality of the things you’re experimenting with. This is why I think it’s fundamentally important to unpack what it means to be a ‘realist’ based on the four propositions I’ve set out in Chapters 2 and 3. Few scientists will argue against Propositions #1 (objective reality) and #2 (entity realism). But, though my realist convictions are unshaken, the more I’ve thought about it, the more I’ve come to question Proposition #3, the presumption that the base concepts of a theory (such as the quantum wavefunction) necessarily represent real physical states. Over the years I’ve developed some real doubts.

  Like the great philosopher Han Solo, I’ve got a very bad feeling about this.
  I’ll leave you with just two reasons for my doubts. One derives simply from the way we routinely apply quantum mechanics. I explained that there is no such thing as the ‘right’ wavefunction and that, provided we follow the rules, we’re perfectly at liberty to express the wavefunction in whatever basis is most suited to our problem. In fact, when we stand back and look hard at what we’re doing when we use the quantum formalism, we become aware that, for the most part, we don’t make use of the wavefunctions at all. We use mathematical objects such as projection operators, projection amplitudes, and expectation values which derive from the wavefunctions and more closely connect with that body of knowledge we call quantum physics. To do this, we don’t need to discover what the wavefunction actually looks like. We just need to know how one quantum state relates to another, and we get this from our experience of the physics.

  Surely, this freedom and flexibility is at odds with any realistic interpretation of the wavefunction. Though I don’t necessarily favour Rovelli’s relational interpretation, I must admit that his claim that the wavefunction is just a way of ‘coding’ our experience has started to resonate very deeply.

  The second reason is that the results of experiments designed to test Bell’s and Leggett’s inequalities should cause any realist to stop and think. I’ve grown very wary of the enormous price we have to pay for any realist interpretation not yet ruled out by experiment, and I’ve grown especially wary of realist interpretations that don’t lend themselves to experimental test. Yes, I accept that some of these have been successful in motivating new experimental searches in the spirit of Proposition #4, although on a bad day I confess to some deep suspicions about how these experiments will turn out.

 
