Quantum Reality

by Jim Baggott


  In contrast, a dust particle with a radius of a thousandth of a centimetre—a thousand times larger than the molecule—has a decoherence time of a microsecond (10⁻⁶ seconds), even in intergalactic space. The dust particle behaves classically even here, where the likely number of interactions with the environment is at its very lowest.

  Clearly, for quantum entities such as photons, electrons, and individual atoms, the decoherence times will be longer in all the different environments considered. But when large numbers of interacting particles are involved, as in the interaction of the quantum systems with a classical measuring device and its environment, the decoherence time becomes extremely short and, for all practical purposes, the transition from quantum to classical behaviour can be assumed to be essentially instantaneous, and irreversible.
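
  Roughly speaking (this is the standard Joos–Zeh-style estimate, with the symbols chosen here purely for illustration rather than drawn from any particular calculation), the time over which interference between two locations separated by a distance Δx is washed out goes as

\[
\tau_{\mathrm{dec}} \;\sim\; \frac{1}{\Lambda\,(\Delta x)^{2}},
\]

where Λ is a 'localization rate' that grows with both the size of the object and the density of scattering particles in its environment. Because Λ becomes astronomically large for anything of macroscopic size in any realistic environment, the decoherence time plunges towards zero as soon as a measuring device and its surroundings become involved.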

  The kinds of timescales over which decoherence is expected to occur suggest that it will be very tricky—though perhaps not impossible—to catch a quantum system in the act of losing coherence in any conventional experiment. Recall, however, that this is one of those times where size really does matter. It is possible to find systems of intermediate scale, between quantum and classical, with decoherence times measured in microseconds to milliseconds. Decoherence has been observed directly in systems involving trapped beryllium ions, and highly excited rubidium atoms sitting inside cavities filled with microwave photons.10 If decoherence isn’t allowed to proceed too far, it is possible to reverse the process and watch as the initial wavefunction is recovered through recoherence.11

  Diffraction and interference effects have been demonstrated at these intermediate scales using closed-cage molecules consisting of 60 carbon atoms (called buckminsterfullerene) and 70 carbon atoms (fullerene-70), and more recently in large organic molecules with up to 430 atoms.12 In experiments performed on so-called superconducting quantum interference devices (SQUIDs), interference has been observed between states involving about a billion pairs of electrons travelling in opposite directions around a superconducting ring large enough to be visible to the naked eye.13 Such systems have been described as the laboratory equivalents of Schrödinger cat states.

  These experiments demonstrate that what we’ve so far thought of as the collapse of the wavefunction is not some philosophical or mathematical abstraction, but a real physical process that can be observed and quantified. They also demonstrate the lengths we have to go to in order to avoid exposure to an environment likely to induce rapid decoherence.

  We might then wonder how, if quantum coherence is really so fragile and difficult to maintain, it is possible for us routinely to observe interference effects requiring coherent superpositions of many photons. The answer is that the only interactions available to cause decoherence in an electromagnetic field involving large numbers of photons are photon–photon interactions. Such interactions do happen, but they are extremely weak. To a first approximation, photons do not interact with one another at all and so do not represent a significant source of decoherence in an intense electromagnetic field. Coherence survives, and interference at large scales can be readily observed.

  This is beginning to feel like real progress. A physical explanation for the collapse of the wavefunction must surely solve the problem of quantum measurement. And, if the collapse is a real physical thing, then this must mean just as surely that the wavefunction itself must be real. How could it not be? There has to be something physical to decohere.

  But it should be evident by now that in quantum mechanics it pays never to get too carried away. Alas, decoherence does not solve the measurement problem, as Bell argued:14

  The idea that elimination of coherence, in one way or another, implies the replacement of ‘and’ by ‘or’, is a very common one among solvers of the ‘measurement problem’. It has always puzzled me.

  Decoherence suppresses the interference terms by diluting them over the vast number of states in the measuring device and its environment. We can argue that the evolving measurement forces a ‘privileged basis’, one that necessarily accords with the way the measurement is set up and hence with our classical experience. We make measurements and record that a particle was either ↑ or ↓, because this is what the device is set up to measure and so these outcomes are ‘robust’ to the effects of decoherence. But as a physical mechanism, decoherence can’t force the choice between all the different measurement possibilities. That we randomly get either ↑ or ↓ remains essentially mysterious. Decoherence can’t account for quantum probability; it can’t convert ↑ and ↓ into ↑ or ↓.
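
  The point can be put in the usual textbook shorthand (a sketch in standard notation, not the author's own): for a superposition a|↑⟩ + b|↓⟩, the density operator carries interference terms,

\[
\rho \;=\; |a|^{2}\,|{\uparrow}\rangle\langle{\uparrow}| \;+\; |b|^{2}\,|{\downarrow}\rangle\langle{\downarrow}| \;+\; \big(a\,b^{*}\,|{\uparrow}\rangle\langle{\downarrow}| \;+\; a^{*}b\,|{\downarrow}\rangle\langle{\uparrow}|\big),
\]

and decoherence drives the bracketed terms towards zero. What is left is a mixture with weights |a|² and |b|². But a mixture is still only a statement of probabilities for ↑ and ↓; nothing in the mechanism says which of the two is actually realized in any single measurement.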

  The mathematical physicist Roger Penrose makes similar observations:15

  [Decoherence] does not help us to determine that the cat is actually either alive or dead…. we need more…. What we do not have is a thing which I call OR standing for Objective Reduction. It is an objective thing—either one thing or the other happens objectively. It is a missing theory. OR is a nice acronym because it also stands for ‘or’, and that is indeed what happens, one OR the other.

  The theoretician Roland Omnès has called this the problem of ‘objectification’. In the context of decoherence, it remains unsolved.

  Okay, so that’s a bit disappointing. But what about the implications of the experimental observation of decoherence for the reality of the wavefunction? Can we at least gain some solace from this?

  Alas, no. Decoherence is a critically important mechanism, but it is not in itself a distinct interpretation of quantum mechanics. In fact, as we have already seen in the case of the consistent or decoherent histories interpretation, it is a mechanism that is most usefully employed within an interpretation to nail down the ‘shifty split’ between the quantum and classical domains. As such, it is equally at home in both realist and anti-realist interpretations.

  Let me explain. We are sorely tempted to conclude that the physical process of decoherence implies a physically real quantum state. This is understandable. But we are still perfectly at liberty to suppose that decoherence applies to a system that we choose to represent in terms of a wavefunction that simply holds information about the quantum system. Instead of a real physical wavefunction, together with its interference terms, becoming diluted through the sequence of ever more complex interactions associated with a measurement, we follow the evolution of the information we presume it contains. As before, the forced choice between measurement possibilities is then just an unproblematic updating of our knowledge.

  You might think that information as a concept seems too artificial or abstract to have any real physical consequences, but the relational and information-theoretic interpretations both deal with real physics. It’s just that they don’t require a realistic interpretation of the wavefunction to do so. And, what’s more, we can just as easily base our arguments about the relation between quantum measurement and entropy on the information content of the wavefunction.

  In 1948, the mathematician and engineer Claude Shannon developed an early but very powerful form of information theory. Shannon worked at Bell Laboratories in New Jersey, the prestigious research establishment of American Telephone and Telegraph (AT&T) and Western Electric (it is now the research and development subsidiary of Alcatel-Lucent). Shannon was interested in the efficiency of information transfer via communications channels such as telegraphy. He found that ‘information’ as a concept can have what we would normally regard as physical properties. Most notably, information has entropy, which we know today as ‘Shannon entropy’. It is in many ways equivalent to the ‘von Neumann entropy’ derived from quantum mechanics.
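
  For the record, the two quantities look like this (these are the standard definitions rather than formulas quoted from Shannon or von Neumann): for a source producing messages with probabilities p_i, and for a quantum system with density operator ρ,

\[
H \;=\; -\sum_{i} p_{i}\,\log_{2} p_{i}
\qquad\text{and}\qquad
S \;=\; -\,\mathrm{Tr}\big(\rho\,\ln\rho\big).
\]

Apart from the choice of logarithm (bits versus nats), the two expressions have exactly the same structure, and the von Neumann entropy reduces to the Shannon form when ρ is written in a basis in which it is diagonal, with eigenvalues p_i.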

  In 1961, this kind of logic led IBM physicist Rolf Landauer to declare that ‘information is physical’. He was particularly interested in the processing of information in a computer. He concluded that when information is erased during a computation it is actually dumped into the environment surrounding the processor, adding to the entropy. This increase in entropy results in an increase in temperature: the environment surrounding the processor heats up. Anyone who has ever run a complex computation on their laptop computer will have noticed how, after a short while, the computer starts to get uncomfortably hot.

  Landauer’s famous statement requires some careful interpretation, but it’s enough for now to note the direct connection between the processing of information and physical quantities such as entropy and temperature. It seems that ‘information’ is not an abstract concept invented by the human mind. It can have real, physical consequences.
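
  Landauer's bound can be stated in a single line (the standard result, not a figure drawn from this book): erasing one bit of information at temperature T must dump at least

\[
\Delta Q \;\geq\; k_{B}\,T\,\ln 2
\]

of heat into the environment, which works out to roughly 3 × 10⁻²¹ joules per bit at room temperature. Real processors dissipate many orders of magnitude more than this minimum, but the principle establishes that the erasure of information carries an irreducible thermodynamic cost.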

  The bottom line is that observation of physical decoherence can’t be taken as evidence that the wavefunction represents the physically real state of a quantum system. Yes, there has to be something physical to decohere, but interpretations based on information will work just as well.

  Decoherence provides a physical basis for understanding the transition between the quantum and the classical worlds, but it does so by relying on conventional quantum mechanics extrapolated to complex, large-scale systems. In this sense, the only thing ‘added’ to the formalism is a complexity that is otherwise absent or just ignored. Given that there is now some well-established experimental evidence for decoherence, it might appear strange that the phenomenon is often overlooked in student textbooks.16 The simple truth is that decoherence does not remove the need for an interpretational framework and, once again, those physicists less concerned with interpretation and meaning tend not to fuss about the quantum-to-classical transition, because they really don’t need to.

  But there’s nothing to stop us going a little further. Whilst enjoying this short visit to the shores of Metaphysical Reality, why not supplement quantum mechanics with a completely new physical mechanism, one that avoids the ‘shifty split’ without running into the problem of objectification? This is what physicists Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber did in 1986. Their initial theory has subsequently been refined and extended by Philip Pearle and others, but for simplicity I will continue to refer to it here as GRW theory.17

  GRW chose to add a new mathematical term to the quantum formalism. The new term has the effect of subjecting the wavefunction (interpreted realistically) to random, spontaneous localizations, or jumps, or ‘hits’, if you prefer. Instead of being left alone, to glide gracefully through space on a course set by the Schrödinger equation according to Axiom #5, the wavefunction is poked every now and then with the quantum equivalent of an electric prod, which forces it to collapse and curl up on itself rather like a startled hedgehog. To get this to work properly, GRW found that they needed to introduce two new physical constants. The first of these refers to the ‘localization accuracy’, which determines the dimensions of the function that combines with the total wavefunction describing the superposition when it is ‘hit’. They fixed on a localization accuracy of about 10⁻⁵ centimetres.
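
  In outline (a sketch of the standard GRW prescription, with the symbols chosen here for illustration), a ‘hit’ on particle i multiplies the total wavefunction by a Gaussian of width d ≈ 10⁻⁵ centimetres centred on a point x₀:

\[
\psi(x_{1},\dots,x_{N}) \;\longrightarrow\; \frac{1}{\mathcal{N}}\;
e^{-(x_{i}-x_{0})^{2}/2d^{2}}\;\psi(x_{1},\dots,x_{N}),
\]

where 𝒩 restores normalization and the collapse centre x₀ is chosen at random, with a probability weighted by the wavefunction itself, so that the familiar Born-rule statistics are recovered.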

  The second of these new constants represents the mean frequency of spontaneous localizations. GRW set this to a value of 10⁻¹⁶ per second. This implies that the wavefunction is localized on average about once every few hundred million years. However, just like decoherence, this frequency is sensitively dependent on the number of interacting particles involved, such that a complex system localizes at an average frequency given by the number of particles times 10⁻¹⁶ per second.

  This means that the wavefunction of a quantum system consisting of individual or small numbers of particles never localizes: it continues to evolve in time according to the Schrödinger equation. With these choices for the constants, there is no practical difference between the GRW theory and conventional quantum mechanics, at least for quantum systems. However, any kind of classical measuring device will consist of trillions and trillions of particles, and the mean localization frequency increases such that the wavefunction is localized (collapsed) within a few billionths of a second: ‘[Schrödinger’s] cat is not both dead and alive for more than a split second.’18
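
  The arithmetic is easy to check (an order-of-magnitude estimate, not the authors' own worked example). With a rate per particle of λ = 10⁻¹⁶ per second, a single particle waits of order 1/λ = 10¹⁶ seconds, a few hundred million years, between hits; but a device containing of order 10²⁴ entangled particles localizes in a time

\[
\tau \;\sim\; \frac{1}{N\lambda} \;\approx\; \frac{1}{10^{24}\times 10^{-16}\ \mathrm{s^{-1}}} \;=\; 10^{-8}\ \mathrm{s},
\]

a few billionths of a second.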

  There are obvious parallels with the mechanics of decoherence, but the GRW theory has the added advantage of forcing the selection of a specific measurement outcome, though this is still very much a random process associated with the spontaneous localizations.

  There are versions of spontaneous collapse mechanisms that are variations on the GRW theme but which address issues of application, for example to quantum systems consisting of ensembles of identical particles, and which make the collapse mechanism continuous. We should accept that such mechanisms are bolted on to conventional quantum mechanics for no other reason than to satisfy our metaphysical preconceptions concerning the reality of the wavefunction (Proposition #3). But, although we might baulk at the rather ad hoc nature of this particular ‘work around’, we should also acknowledge that, once again, this realist extension has provoked sufficient curiosity to prompt a search for empirical evidence. In other words, GRW theory is an ‘active’ interpretation according to Proposition #4 (see the Appendix). Much of the experimental effort to detect decoherence in systems of intermediate scale serves as a crucible for potential evidence of spontaneous collapse mechanisms, too. Although no favourable evidence has yet been gathered, Ghirardi is optimistic: ‘fully discriminating tests seem not to be completely out of reach’.19 There’s more on this in the remaining part of this chapter.

  But introducing new physical constants is always less satisfactory than having the solution to the problem emerge naturally from the theory itself.

  Recall that in later life Einstein tended to assume that all these problems would be resolved in an elusive grand unified theory. Quantum mechanics is clearly not the end. It is not finished. It relies on the assumption of a background space and time not much different from Newton’s metaphysical absolutes, in contrast with Einstein’s general theory of relativity, in which space and time are emergent. This poses a vitally important question. Can a quantum theory of gravity save us?

  In general relativity, the action at a distance implied by the classical Newtonian force of gravity is replaced by a curved spacetime. The amount of curvature in a particular region of spacetime is related to the density of mass–energy present. Wheeler explained it this way: ‘Spacetime tells matter how to move; matter tells spacetime how to curve.’20
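
  In compact form (the standard statement of Einstein's field equations, included here only as a reminder of the structure), Wheeler's slogan reads

\[
G_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
\]

with the curvature of spacetime on the left and the density and flow of mass–energy on the right.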

  This logic led a number of physicists, starting with Feynman, to suggest that the structure of spacetime itself may have a role to play in quantum mechanics. Lajos Diósi (in 1987) and later Roger Penrose (in 1996) proposed another kind of spontaneous collapse mechanism, now commonly referred to as Diósi–Penrose theory.21 They argue that a superposition will begin to break down and eventually collapse into specific quantum states when it encounters a region of significant spacetime curvature. Unlike decoherence or the GRW theory, in which the number of particles is the key to the collapse, in Diósi–Penrose theory it is the density of mass–energy which is important, as this determines the extent of spacetime curvature around it. In his popular book The Emperor’s New Mind, Penrose wrote:22

  My own point of view is that as soon as a ‘significant’ amount of space-time curvature is introduced, the rules of quantum linear superposition must fail. It is here that the complex-amplitude superpositions of potentially alternative states become replaced by probability-weighted actual alternatives—and one of the alternatives indeed actually takes place.

  This is potentially quite neat. On a quantum scale, gravitational (spacetime curvature) effects are insignificant, leaving the wavefunction free to evolve according to the Schrödinger equation. But these effects become much more significant when the wavefunction encounters a classical measuring device. Of course, the quantum system is created in a laboratory which sits in Earth’s gravitational field, so Penrose suggests that it is the difference in spacetime curvature in the two situations which triggers the collapse.
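
  Penrose's estimate of the timescale involved can be written compactly (the standard Diósi–Penrose expression, offered here as a sketch rather than a quotation):

\[
\tau \;\sim\; \frac{\hbar}{E_{G}},
\]

where E_G is the gravitational self-energy of the difference between the mass distributions of the two superposed states. For a single particle E_G is so minute that τ is effectively infinite, and the Schrödinger equation reigns undisturbed; for anything approaching the mass distribution of a classical measuring device, τ shrinks to a tiny fraction of a second.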

  Diósi–Penrose theory explains how gravitationally induced decoherence or GRW-like spontaneous collapse mechanisms would work. But these arguments do not derive from a fully fledged quantum theory of gravity. The disappointment is that the current prime candidates for such a theory—loop quantum gravity (my personal favourite) and superstring theory—can’t yet offer us any further clarity. There may indeed be insights to be gained, such as Smolin’s suggestion that the non-local connections between quanta of space predicted by loop quantum gravity might explain non-locality in quantum mechanics.23 But it’s still too soon to be definitive about any of this. As far as I can tell, these theories of quantum gravity are still heavily dependent on quantum mechanics as a foundational theory, and in themselves do not require this particular foundation to be different than it is.*

  It seems we’re unlikely to get any answers directly from theory anytime soon. Despite this, we see yet again how proposals based on realistic interpretations serve to motivate the community of experimentalists. Any suggestion—sometimes no matter how tenuous—that reality might actually be different than we understand it to be in ways that can be probed in experiments is more than enough to pique their interest. It’s more than enough reason to climb back aboard the Ship of Science and sail to the shores of Empirical Reality.

  In his more recent book Fashion, Faith and Fantasy, Penrose summarizes ‘various currently active proposals’ to test these ideas.24 One such proposal, involving macroscopic quantum resonators (MAQRO) installed in an orbiting satellite, aims to test the predictions of quantum mechanics for superpositions of objects with more than a hundred million atoms.25 In these circumstances, it is hoped that the subtle differences between the predictions of quantum mechanics and the GRW and Diósi–Penrose theories might start to become accessible to experiment.

 
