Time Loops

by Eric Wargo


  You’ll be hard-pressed to find a book on ESP written since the late 1970s that doesn’t mention entanglement, along with its corollary, non-locality, as a possible explanation for the tendency of, say, twins to feel each other’s pains or a remote viewer’s ability to “see” a secret facility halfway around the world. Abuse of entanglement to explain ESP so angered some physicists that it nearly got the Parapsychological Association booted out of the American Association for the Advancement of Science. At the 1979 meeting of the AAAS, the eminent physicist John Wheeler thundered, “Let no one use the Einstein-Podolsky-Rosen experiment [the basis for the entanglement concept] to … postulate any so-called ‘quantum-interconnectedness’ between separate consciousnesses.” 1 He added that such a notion was “moonshine.”

  Physicists famously have mini-strokes when quantum concepts are used by parapsychologists to explain “woo” like psychic phenomena, and even when those concepts are borrowed by researchers in more mainstream fields like psychology or economics to help illuminate problems in those domains. (Uttering the words “quantum consciousness” is especially guaranteed to make a physicist keel over, or punch you in the face.) But physicists do not hold an exclusive trademark on the word quantum. Knowledge evolves by the spreading of metaphors and the (mis)application of new concepts to different, seemingly unrelated questions—a healthy epistemic ecosystem depends on cross-fertilization, play, and error. The great anthropologist Claude Lévi-Strauss called it bricolage, from the way handymen in France collect odds and ends from their past jobs and use them to solve new problems—a perfect metaphor for the ever-resourceful, ever-scavenging myth-making mind. 2 Often the result is moonshine … but nobody (not even John Wheeler) can predict what batch of today’s moonshine might, after being aged a few decades, turn out to be the basis for some exciting and powerful new paradigm.

  We may right now be in the midst of a monumental shift—if not an outright paradigm shift—in how to interpret the famously “spooky” (and wavy) behavior of matter in the quantum world. It is a shift that could end up validating some of the woo-peddlers and hand-wavers. “Separate consciousnesses” might not be entangled across space; but everything made of matter, including brains, may bear the traces of its entanglements in both directions across time.

  “I Was Told There’d Be Other Photons at This Party”

  The Alice-in-Wonderland behavior of matter on a fundamental level can be seen in quantum physics’ most famous object lesson, the double slit experiment. If you shoot a light beam through a barrier with a single slit, the photons will obediently pile up on a screen behind it (such as a photographic plate) like a vertical smudge. You will see a dense area in the center fading out gradually on both sides, very much as if the slit were a stencil held in front of the screen and you sprayed the photons through it like spray paint. This doesn’t tell us all that much about the nature of light, such as whether it consists of particles or waves. But if you shoot the beam through a barrier with two slits, you will not see two dark areas side by side, which you might expect if photons were like paint droplets or little solid bullets; instead you will see a rippling, zebra-stripe interference pattern on the screen, indicating that the photons passed through the slits like waves, cancelling each other where peaks and troughs coincided and amplifying each other where the wave peaks were in sync.
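  A minimal sketch of the stripes in standard textbook notation (the symbols here are the conventional ones, not Wargo’s): let \(\psi_1(x)\) and \(\psi_2(x)\) be the wave amplitudes arriving at a point \(x\) on the screen by way of the two slits. The brightness there is then

  \[ I(x) \,\propto\, |\psi_1(x) + \psi_2(x)|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right]. \]

  The first two terms are the boring spray-paint smudges; the cross term oscillates between reinforcement and cancellation as the two path lengths go in and out of step, and that is what paints the zebra stripes.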

  So far, so good. The discovery of this interference pattern in the early 19th century overthrew Isaac Newton’s older dogma that light consisted of little particles. If light passing through parallel slits could interfere the way sound waves or ripples on a pond do, its true nature had to be wavy.

  But wait. Where it gets weird—or indeed, “impossible,” from a classical physics point of view—is if you dial down the intensity of your expensive laboratory flashlight to such a low setting that it shoots just single photons, individual quanta of light, through the slits, one at a time—one every second, one every minute, or one every day. If you do that, at the end of your experiment you will still find an interference pattern on the screen, not a twin pair of smudges, as you might expect. 3 The same is true with electrons and other larger particles, even atoms and certain molecules—they interfere with something even when they are by themselves.

  How can this be? What are the individual photons interfering with so that they still land on the screen in the distinctive zebra-stripe interference pattern? How does each photon “know” there were other photons before it, and others coming later? Or are they really interfering with themselves in some weird way? It was enough to turn grown male scientists into teenage girls from Galilee.

  In 1935, the Danish physicist Niels Bohr described a variant of this experiment. What would happen, he wondered, if you attached some kind of detector to one of the slits so it was possible to determine which path each individual particle took on its way to the screen? According to his predictions, if you attempt to spy on the individual particles, they will suddenly change their behavior and pile up on the screen the way they would if they were just boring old bullets (or droplets of spray paint)—in two dark lines, with no interference pattern, regardless of whether they are shot through the slits individually or en masse. In the last few decades it has become possible to actually test Bohr’s idea, obtaining “which path” information in double-slit experiments that use beams of atoms and a non-interfering detector at one of the slits. Bohr’s prediction has been proved right. The particles somehow know what’s up and behave like little solid paint droplets in that case—even the ones flying through the slit that doesn’t have the detector attached to it. 4
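  The same notation sketches why spying kills the stripes (again using the conventional symbols from above, not the author’s): once a which-path detector is coupled to the slits, its own states get woven into the sum,

  \[ |\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\big(\,\psi_1\,|d_1\rangle + \psi_2\,|d_2\rangle\,\big), \]

  and if the detector states \(|d_1\rangle\) and \(|d_2\rangle\) are fully distinguishable (\(\langle d_1|d_2\rangle = 0\)), the interference cross term drops out, leaving only \(P(x) \propto |\psi_1(x)|^2 + |\psi_2(x)|^2\)—the two smudges, no stripes.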

  Even more mind-bogglingly, none other than John Wheeler showed in 1978 that if you change the parameters of the double-slit experiment while particles are already in mid-flight, they appear to change their nature retroactively —again, almost as if they “know” in advance what they are being asked to do. 5

  The central dogma of quantum physics is that there is no way to predict how any individual photon or electron or any other particle will behave in any situation—for instance, which path it will take through the slits of the double-slit experiment. The equations that make quantum physics the most powerful and precise scientific theory ever devised only apply to large numbers of particles. The unpredictability of a single particle’s behavior goes way deeper than the butterfly effect: Even knowing all the initial conditions out to the last decimal place would not be enough to enable an experimenter to predict how the particle will behave. Until you actually make a measurement of its position, or its momentum, or its spin, or any of the other variable states it might possess, you must assume those things only exist as a vague cloud of mathematical probabilities, what’s known as a wavefunction. It is not simply that the behavior of particles before measurement is unknown. It is that there is no “already existing” reality to these particles at all until they are observed.
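  In symbols (a minimal sketch, not the author’s notation): for the simplest case, a particle that could turn up “here” or “there,” the wavefunction is written as a weighted sum of both possibilities at once,

  \[ |\psi\rangle \;=\; \alpha\,|\text{here}\rangle + \beta\,|\text{there}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

  and on the Copenhagen reading the complex weights \(\alpha\) and \(\beta\) encode nothing but odds: a measurement finds the particle “here” with probability \(|\alpha|^2\) and “there” with probability \(|\beta|^2\) (the Born rule), while the formalism stays silent about where it “really” was beforehand.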

  This at least is the most famous interpretation of the evidence, called the Copenhagen Interpretation, which was arrived at by Bohr and his friend Werner Heisenberg in the mid-1920s. It radically redefined the whole business of physics: from describing a preexisting, stable reality “out there” to describing what happens in experiments only—because in some very real sense, there seemed to be nothing out there, outside of or prior to the act of observation … at least nothing you could ever put your hands on. According to one of quantum theory’s axioms, the “projection postulate,” it takes an observer (or at the very least a measuring device) to cause the mathematical cloud of probabilities, in their state of self-interfering superposition, to “collapse” into a single definite particle-like value.
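  The projection postulate itself is one more line in that same sketch: whichever outcome turns up, the whole cloud of possibilities is replaced by it,

  \[ \alpha\,|\text{here}\rangle + \beta\,|\text{there}\rangle \;\longrightarrow\; |\text{here}\rangle \quad \text{(with probability } |\alpha|^2\text{)}, \]

  and on the standard reading the replacement cannot be undone—the \(\beta\) branch is simply gone.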

  This need for an observer to be the midwife for reality has a further strange implication, which Bohr also discovered. Each of a particle’s knowable features (like its position) is only knowable when you sacrifice your ability to know about another, complementary feature (like momentum). It is not simply that measurement interferes with what is being measured—Heisenberg’s famous uncertainty principle—but that the tools needed to measure one kind of information are physically antithetical to the kinds of tools needed to measure the other kind. This is known as indeterminacy. 6 Thus, when the observer delivers the object into the world, her choice of how to perform her measurement really determines the form it takes: particle or wave. Her choice influences that particle’s destiny. John Wheeler, who studied under Bohr, underscored that observation not only brings the world into being but actually shapes it—an idea known as the “participatory universe.” 7
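  Heisenberg’s trade-off even has a compact quantitative form. Writing \(\Delta x\) and \(\Delta p\) for the statistical spreads in position and momentum, the uncertainty principle says

  \[ \Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}, \]

  where \(\hbar\) is the reduced Planck constant: squeeze your knowledge of where the particle is, and the inequality forces its momentum to blur in compensation—complementarity written as arithmetic.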

  Exactly what it is about observation or measurement that might cause the “collapse of the wavefunction,” and what really happens in that magic instant, has been widely debated. 8 One idea floated by John von Neumann in a 1932 textbook and now popular with the wider public (and especially writers on psi phenomena) is that consciousness itself is what collapses wavefunctions during observation. 9 A more modest, non-anthropocentric theory that has come to prevail in the last few decades within physics is known as decoherence: Fluctuations in the environment act as an “observer,” constantly converting matter’s waviness into something solid and definitive. 10 Whatever the case, most physicists hold the collapse of the wavefunction to be a real transformation in reality, and one that is irreversible—a one-way change.

  Einstein, who famously quipped that God doesn’t play dice, wasn’t happy with any of the arguments about uncertainty and wavefunctions collapsing and the complementarity of position and momentum and all the rest. He had trouble accepting that randomness could be somehow built into the structure of reality at a fundamental level, and especially that it would take an observer to “realize” matter’s definiteness. In many histories of quantum physics, Einstein comes across as the grumpy old man, shaking his cane at the young’uns, telling them to get off of his beautiful relativistic lawn. In fact, Einstein’s criticisms—even if they reflected an old-fashioned realism—were incisive, and his contributions not only helped hone the reasoning of his younger, more surrealistically minded peers like Bohr but also gave the field what has remained its most “magical” showpiece: the phenomenon of entanglement (more on which momentarily).

  Einstein was also far from alone in his distrust of the Copenhagen Interpretation. A minority of physicists over the years have also suspected that quantum formalisms obscure what is actually happening during measurement. Some have even argued that there has to be a hidden variable accounting for the apparent randomness and uncertainty of unmeasured physical systems. Recent developments are making it look like the truth may indeed lie somewhere between Einstein’s retrograde realism and the resignation to indeterminacy and uncertainty preached by Bohr and Heisenberg. Time seems to hold the answer.

  Bass-Ackwards

  A big problem with the idea of wavefunctions collapsing is the irreversibility of it. This time-asymmetry comports well with our understanding of thermodynamics and the one-way-ness of entropic physical processes we observe in daily life, and thus with our intuitive billiard-ball understanding of cause preceding effect. But it goes against most of the rules of physics and the equations describing electromagnetism, which leave open the possibility that systems could evolve in both directions in time. 11 Most physicists have just shrugged and thrown out the “backward” solutions to these equations. But it feels wrong somehow.

  In the early 1940s, Wheeler and his student Richard Feynman made a stab at reconciling the temporal symmetries suggested by most of physics’ laws with the seemingly asymmetric behavior of light—the way it ordinarily radiates outward from a source, dissipating through a medium according to thermodynamic principles of entropy, and thus seems to carry causation in a single direction (outward). According to their absorber theory, objects that receive (or absorb) radiant energy emit waves of their own, called “advanced waves,” which interact with the “retarded waves” from the radiation source and produce the apparently asymmetric behavior of radiation. The single direction of light-carried causality is not an a priori given, in their theory, but a product of the interaction of these advanced and retarded waves. 12 Wheeler and Feynman’s absorber theory does not presume that the destination of a photon has advanced knowledge that it is going to receive the photon; thus, even though it is a time-symmetric solution, it is not exactly retrocausal. But others have gotten more radical in their thinking.
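  In sketch form, the field acting on a charge in the absorber theory is the even-handed average of the two solutions the equations permit—half retarded, half advanced—summed over all the other charges (the absorbers):

  \[ F \;=\; \tfrac{1}{2}\big( F_{\text{ret}} + F_{\text{adv}} \big). \]

  The absorbers’ own advanced responses then conspire to cancel the leftover advanced piece, leaving what we actually observe: radiation that seems to flow only outward and only forward in time.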

  In the 1960s, an Israeli physicist named Yakir Aharonov basically agreed with Einstein about God not playing dice, and he proposed that the future is the hidden variable underlying quantum strangeness. Individual particles, such as those photons passing through the slits of the double-slit experiment, are actually influenced by what will happen to them next (i.e., when they hit the screen), not just by what happened to them a moment ago (when they were shot out of the expensive flashlight and passed through the slits). The randomness that seems to rule the quantum casino, Aharonov suggested, may really be the inherently unknowable influence of those particles’ future histories on their present behavior. Measurement thus becomes part of the particle’s “backstory”—precisely the part that always looked like randomness, or quantum uncertainty.

  Physicists like to give innocuously obscure names to their ideas, partly to keep the rest of the world from abusing those ideas in ESP books, like I’m doing here. The time-symmetric, retrocausal framework advanced by Aharonov and his colleagues is sometimes called the two-state vector formalism. It is not the only, or even the best-known, retrocausal solution, however.
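  In that formalism, a quantum system between two measurements is described not by one state but by a pair—one fixed by the past measurement and evolving forward, the other fixed by the future measurement and evolving backward—conventionally written \(\langle\phi|\;|\psi\rangle\). What a gentle (“weak”) measurement of some quantity \(A\) in between reveals is the so-called weak value,

  \[ A_w \;=\; \frac{\langle\phi|\,A\,|\psi\rangle}{\langle\phi|\psi\rangle}, \]

  a number that depends on both ends of the particle’s history—the prepared past \(|\psi\rangle\) and the post-selected future \(\langle\phi|\) alike.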

  In the 1980s, University of Washington physicist John G. Cramer argued for a transactional interpretation of quantum mechanics that resembles in many ways Wheeler and Feynman’s absorber theory but is actually retrocausal. Like Wheeler and Feynman, Cramer proposed that the wavefunction of a particle moving forward in time is just one of two relevant waves determining its behavior. The retarded wave in Cramer’s theory is complemented by a response wave that travels specifically from the particle’s destination, in temporal retrograde. In his theory, a measurement, or an interaction, amounts to a kind of “handshake agreement” between the forward-in-time and backward-in-time influences. 13 This handshake can extend across enormous lengths of time, if we consider what happens when we view the sky at night. As Cramer writes:

  When we stand in the dark and look at a star a hundred light years away, not only have the retarded waves from the star been traveling for a hundred years to reach our eyes, but the advanced waves generated by absorption processes within our eyes have reached a hundred years into the past, completing the transaction that permitted the star to shine in our direction. 14

  Cramer may not have been aware of it, but his poetic invocation of the spacetime greeting of the eye and a distant star, and the transactional process that would be involved in seeing, was actually a staple of medieval and early Renaissance optics. Before the ray theory of light emerged in the 1600s, it was believed that a visual image was formed when rays projecting out from the eye interacted with those coming into it. It goes to show that everything, even old physics, comes back in style if you wait long enough—and it is another reason not to laugh too hard, or with too much self-assurance, at hand-waving that seems absurd from one’s own limited historical or scientific standpoint.
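  The bookkeeping of Cramer’s handshake can also be put in one line (a sketch in the conventional notation): the “offer wave” \(\psi\) travels forward in time, the “confirmation wave” is its complex conjugate \(\psi^*\) traveling backward, and the probability that a given transaction completes is their product,

  \[ P \;=\; \psi\,\psi^{*} \;=\; |\psi|^2, \]

  which is exactly the Born rule—the squared-amplitude recipe the standard theory has to postulate—reread as the strength of an agreement between past and future.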

  In short: Cramer’s and Aharonov’s theories both imply a backward causal influence from the photon’s destination. The destination of the photon “already knows” it is going to receive the photon, and this is what enables it to behave with the appropriate politeness. Note that neither of these theories has anything to do with billiard balls moving in reverse, a mirror of causation in which particles somehow fly through spacetime and interact in temporal retrograde. That had been the idea at the basis of Gerald Feinberg’s hypothesized tachyons, particles that travel faster than light and thus backward in time. It inspired a lot of creative thinking about the possibilities of precognition and other forms of ESP in the early 1970s (and especially inspired the science-fiction writer Philip K. Dick), but we can now safely set aside that clunky and unworkable line of thinking as “vulgar retrocausation.” No trace of tachyons has turned up in any particle accelerator, and they don’t make sense anyway. What we are talking about here instead is an inflection of ordinary particles’ observable behavior by something ordinarily unobservable: measurements—that is, interactions—that lie ahead in those particles’ future histories. Nothing is “moving” backwards in time—and really, nothing is “moving” forwards in time either. A particle’s twists and turns as it stretches across time simply contain information about both its past and its future.

  A prominent advocate of time-symmetric, retrocausal solutions is Cambridge University analytic philosopher Huw Price. It is not the ostensible randomness and observer-dependence of the microworld that nags Price so much as entanglement, everybody’s favorite quantum quirk. Einstein and two of his younger colleagues, Boris Podolsky and Nathan Rosen (collectively known as “EPR”), originally predicted the existence of entangled states in a 1935 paper that was intended to show the incompleteness of quantum theory. Despite being predicted in the equations, entanglement seemed intrinsically impossible because it would require information to travel between entangled particles at a speed faster than light (i.e., instantaneously), violating light’s strict speed limit and thus violating unilinear causality. Yet in 1964, a CERN physicist named John Bell published a theorem showing how the EPR prediction could be put to a decisive experimental test, and subsequent experiments have supported the existence of entangled states. Since then, entanglement has gone from a curious special case to the state of almost everything in the universe—all particles that interact become entangled, at least to a degree. However, no one has ever been able to explain just how entanglement works—it is just one of many quantum bizarreries that physics students are instructed to take on faith. But Price thinks retrocausation (or what he also calls “advanced action”) holds the answer: The measurement that affects one of the two entangled particles (call it Alice) sends information back in time to the point when they interacted and became entangled in the first place. Thus, that future event in the life of Alice became part of the destiny of her partner Bob too—a kind of zig-zagging causal path cutting across Minkowski’s four-dimensional spacetime block. 15
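  The simplest entangled state—the one behind most of these experiments—fits in one line. For a pair of spin-1/2 particles in the “singlet” state (standard notation, with the subscripts marking the two particles),

  \[ |\Psi^-\rangle \;=\; \frac{1}{\sqrt{2}}\big(\, |\uparrow\rangle_A\,|\downarrow\rangle_B \;-\; |\downarrow\rangle_A\,|\uparrow\rangle_B \,\big), \]

  neither particle has a spin direction of its own, yet the instant one is found “up” the other is fixed as “down,” however far apart they are. Bell’s theorem showed that the correlations this state predicts are stronger than any local, pre-arranged story could manufacture—which is precisely the gap Price’s advanced action proposes to fill by letting the influence zig-zag through the particles’ shared past.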

 
