Time Loops

by Eric Wargo


  Information vs. Meaning

  Retrocausation is increasingly thinkable nowadays thanks to another, much bigger, slower, and less controversial paradigm shift that has been occurring across many physical sciences over the past few decades: a trend away from describing natural phenomena in terms of causes and effects to instead describing them in terms of information and its transformations. 38 Such a reframing not only frees us conceptually to understand the kinds of backward influences being detected in physics laboratories, but it also enables us to better consider what these influences may have to do with, or not do with, questions of meaning. It may be that we cannot understand matter on a fundamental level (let alone such notions as “sending signals back in time”) until we understand meaning and how it gets made, and more often not made, from the raw material of information. 39

  First, what does it mean to redescribe causation in terms of information? Every process in nature, every collision, every interaction of matter and energy, boils down to a transformation in some theoretically measurable state of the atoms and photons and electrons involved. Each interaction changes the states of those particles the same way energy flowing through a computer changes binary switches—it “flips bits.” Thus causation is, when it comes right down to it, synonymous with computation. 40 John Wheeler famously described the physical universe as the product of computation, “it from bit”; 41 and quantum computing pioneer Seth Lloyd argues we should really think of the whole universe as a giant computer, because you could theoretically use any lump of matter in the universe, or even the universe as a whole, to compute things. 42 You just need to measure the particles first, “program” them by interacting with them in some way, and measure them again to find out how they have changed; the latter is your “output.”
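  For readers who like to tinker, Lloyd’s measure-program-measure recipe can be caricatured in a few lines of Python (a toy illustration of mine, with a 16-bit register standing in for the lump of matter):

```python
import numpy as np

rng = np.random.default_rng(42)

# A "lump of matter," coarsely: a register of particle states, read as bits.
matter = rng.integers(0, 2, size=16)
print("input: ", matter)               # step 1: measure the particles

# Step 2: "program" them by interacting with them in some way.
# Physically, every interaction just flips bits; XOR captures that.
interaction = rng.integers(0, 2, size=16)
matter = matter ^ interaction

print("output:", matter)               # step 3: measure again; this is the output
```

Nothing here computes anything useful, of course—which is exactly the point of the next paragraph.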

  “The universe is a computer” is a bit of a misleading metaphor, however, for a couple of reasons. For one thing, it calls to mind a “user” who can derive meaning from all this computation and apply it to some problem. Mere computation, the transformation of information, is not enough for the making of meaning. You can theoretically quantify the information in a system, even the amount of information in the universe as a whole (the universe can store roughly 10^92 bits of information, in case you were wondering 43), or the amount of information processing in any subsystem of the universe—like a computer, a rock, a star, or a starfish—without necessarily paying attention to whether a given bit flip has any value or utility to someone in guiding behavior or communicating a message. Meaning, while often confused with information, really refers to the value of a piece of information to some agent (conscious or not) who can use it to convey a message or otherwise effect some change.

  Because meaning is relative to the needs of the agent holding or using the information, it cannot be measured in any absolute way. This is what the mathematician Claude Shannon realized in the middle of the 20th century when he turned the study of information into something rigorously scientific. 44 Shannon, who worked at Bell Labs during WWII and later taught at MIT, was interested in technical problems in cryptography, telecommunications, and computing, and for all these human purposes it was safe to assume that the information we are concerned with is meaningful (otherwise why would we care about it?). But it is hard to quantify the truth versus falsehood of a signal, and impossible to quantify its value to a recipient (or sender)—indeed, its value to different agents could differ. Formalizing his theory thus meant setting aside these “psychological factors.” It was an important move that later paved the way for thinking of causality on a fundamental (quantum) level in information terms. Yet lingering confusions about information and whether it must be meaningful have persisted and have even led to some of the most interesting byways in late-20th-century science. 45
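  Shannon’s measure is, in fact, deliberately blind to meaning. His famous formula counts the average information per symbol, H = −Σ p log₂ p, and a few lines of Python (my illustration of the standard formula, not anything from his papers) show that a sensible reply and the same letters shuffled into gibberish score identically:

```python
import math
import random
from collections import Counter

def shannon_entropy(msg):
    """Average information per symbol: H = -sum(p * log2(p)), in bits."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

message = "yes i will go to the store"
gibberish = "".join(random.sample(message, len(message)))  # same letters, no meaning

# Identical symbol statistics, identical "information"--meaning never enters.
print(shannon_entropy(message) == shannon_entropy(gibberish))  # True
```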

  The most colorful of these byways is described by David Kaiser in his 2011 book How the Hippies Saved Physics. 46 Because their funding came directly or indirectly from the military-industrial complex, whose overriding imperative was to technologically out-compete our global rivals, postwar American physicists fully internalized the Copenhagen ideology of not questioning quantum mysteries—“shut up and calculate” was the motto. But in the 1970s, especially in California, a younger generation undaunted by their elders’ warnings dreamed of using the spooky correlations of entangled states—correlations that John Bell’s famous theorem had shown no local mechanism could explain—to do really nifty tricks like sending messages faster than light. Since a measurement of one of a pair of entangled particles seems to determine what its partner does instantaneously, no matter how far apart they are, it naturally invites thoughts of instantaneous communication across space without some intervening transfer of energy. The out-of-the-box California physicists beat their heads against this problem for years, but by the early 1980s, it became apparent that there is no way to send a signal via entanglement alone. For one thing, if you force one of a pair of entangled particles into a certain state, the entanglement with the other particle will be broken, so it will not “send” information about its state to its twin. You are limited to performing measurements of a particle’s uncertain value, which compels it to make up its mind about the (previously uncertain) state it is in. In that case, you can be sure its entangled twin will make the same choice, but then some additional information channel needs to be available to let your distant partner know what measurement you performed and what result you got.
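  The no-signaling rub can be made concrete with a toy simulation (mine, written from quantum theory’s standard predictions for a maximally entangled pair). However the first particle is measured, the statistics on its distant twin’s side, taken alone, remain a featureless 50/50 coin:

```python
import numpy as np

rng = np.random.default_rng()
N = 100_000

# Pairs in the Bell state (|00> + |11>)/sqrt(2); the distant twin is always
# measured along Z. If the near particle is also measured along Z, the two
# outcomes agree perfectly, pair by pair...
near_z = rng.integers(0, 2, N)
twin_when_near_z = near_z.copy()

# ...and if the near particle is measured along X instead, the twin's Z
# outcome is an independent, unbiased coin flip.
twin_when_near_x = rng.integers(0, 2, N)

# Either way the twin's marginal statistics are the same featureless 50/50:
# nothing the near lab does shows up remotely without a classical channel.
print(twin_when_near_z.mean(), twin_when_near_x.mean())  # both ~0.5
```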

  The latter part of the problem has an analogy in basic semantics. For a piece of information to be meaningful, it needs to be reliably paired with another piece of information that gives it context or serves as its cipher. If I say “yes” to my wife, it can only be meaningless noise, a random word, unless my utterance was produced in the context of a question, like “Are you going to the store later?” Without knowing exactly how the physicist on Earth measured her particle, Alice, and what result she got, the change in Alice’s entangled partner Bob four light years away in that lab orbiting Alpha Centauri cannot be meaningful, even if it is information. The Earth physicist needs to send some slower-than-light signal to inform her distant colleague about her measurement and its outcome … which defeats the whole purpose of using entanglement to carry a message. 47

  This is also the problem with the metaphor of the universe as a computer. No matter how much computation the universe can perform, its outputs can be little more than out-of-context yesses and nos, addressed to no one in particular. If there is no “outside” to the system, there is nothing to compare it to and no one to give all those bit flips meaning. In fact, it is a lot like the supercomputer “Deep Thought” in Douglas Adams’s Hitchhiker’s Guide to the Galaxy: When, after millions of years of computation, it finally utters its output, “42,” no one knows what it means, because no one ever knew precisely what question the computer had been programmed to answer in the first place.

  We are now perhaps in a better position to understand how the behavior of atoms, photons, and subatomic particles could carry information about their future—tons of information—without any of it being meaningful to us, and why we would naturally (mis)construe it as randomness: It is noise to our ears, stuck as we are in the Now with no way of interpreting it. It is like the future constantly sending back strings of yesses and nos without us knowing the questions. We are only now realizing that there may indeed be words in all that noise—it’s not just gibberish. But how to decode them?

  Post-selection, used in the Rochester team’s beam-amplification experiment, is one principle that may allow backward-flowing influence from the future to assume a semblance of meaning in the present. If an experimenter could be confident that a certain behavior of a group of particles (such as amplification of a laser beam) at time point A was correlated with some known measurement (e.g., versus no measurement) at a pre-ordained time point B, information from the first measurement might be readable as a meaningful retrograde “signal” (although it would really not be a signal so much as a correlation between a present state and a future state). Something else that may be of use here is the discovery that the interference that goes along with measurement is not, as was once thought, a zero-sum, all-or-nothing proposition. It is possible to extract a net gain of meaningful information even when interfering. The new technique of weak measurement, for example, seems to let researchers obtain a margin of meaningful information without badly perturbing a quantum system, so long as they are willing to accept some degree of uncertainty or imprecision as a trade-off. In other words, the relationship between the amount of meaningful information it is possible to gain through measurement and the amount of interference the measurement produces is non-linear. 48 These principles may just make the impossible—detecting the future—possible.
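  A toy model in Python (the numbers are mine, but the setup is the standard way weak measurement gets explained) shows how post-selection can pull a faint, gentle record out of what otherwise looks like pure noise:

```python
import numpy as np

rng = np.random.default_rng()
N = 100_000

# Each particle carries a hidden spin of +1 or -1, to be revealed only later.
spin = rng.choice([-1, 1], N)

# Time point A: a "gentle" measurement. The coupling g is so weak that every
# individual reading is buried in noise--and the system is barely perturbed.
g = 0.1
weak_readings = g * spin + rng.normal(0.0, 1.0, N)

# Time point B: a strong measurement finally reveals each spin.
# Post-select the subset whose later measurement came out spin = +1.
post_selected = weak_readings[spin == 1]

print(weak_readings.mean())    # ~0.0: the raw record looks like pure noise
print(post_selected.mean())    # ~0.1: conditioned on B, A's record speaks
```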

  So, at the risk of giving mini-strokes to the few quantum physicists who may be slumming it reading a trashy book on ESP, here are a couple of admittedly hand-wavy possibilities for how a future detector might be created. The simplest method uses the same principle as John Howell’s beam-amplification experiment: measuring particles at time point A in a “gentle,” relatively non-interfering way so that a predictable subsequent stronger measurement of some of them at time point B—the post-selection phase—reliably shows a difference between the two groups. Tying some real-world outcome to the measurement of a previously weakly measured group of particles could create something like Asimov’s “endochronometer,” which used thiotimoline’s pre-dissolution in water to detect future events. Measuring the amplification of a laser, specifically, might not be your best choice of system to use, because you’d need some way of slowing down the speedy photons (or shooting them a long distance across space) for the setup to provide any useful temporal window between weak measurement and post-selected measurement. However, whatever system you use, you could in principle amplify such a device by daisy-chaining multiple future detectors together, just as Asimov did with his endochronometers to create a “telechronic battery.”

  Another possibility for a future detector is more complicated and uses one of several serendipitous fruits of the hippie physicists’ failed attempts to crack the nut of faster-than-light signaling: quantum teleportation. 49 Even if you can’t use entanglement by itself to send a signal faster than light, you can combine conventional signals with entanglement to “beam” information or even matter short distances across space. For teleportation to work, the entangled couple Alice and Bob need to both be back on Earth, near enough to each other that a conventional signal can be sent between the separate laboratories that house them. It is usually described more or less like this: The physicist who holds Bob in his possession can compare Bob to another, third particle, Chris—the object he wants to teleport—to determine the relative properties of the two particles (destroying the properties of Chris in the process, because measurement interferes). Once this is done, the physicist can phone up his colleague who has Alice and tell her how Bob and Chris compared. His colleague can then measure Alice (destroying Alice’s unique properties) and, using the knowledge relayed by her colleague, reconstruct information about Chris at her location, since she knows Alice was just like Bob. 50 It sounds like a crazy-complicated Rube Goldberg contraption, but it is cool because it is basically a transporter, like in Star Trek—destroying matter or information in one location and reconstituting it some distance away. Or, if you remember the 1982 Disney movie Tron, you can picture the memorable scene of video-gamer Kevin Flynn (Jeff Bridges) being digitized and disassembled by his office laser and reconstituted inside the virtual world of his favorite arcade game. First proposed by IBM physicist Charles Bennett and his colleagues in 1993, quantum teleportation has since been demonstrated experimentally many times, over distances of as much as 870 miles (the current record, as of this writing), using lasers to send one of a pair of entangled photons to a satellite in orbit. 51
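  For the quantitatively inclined, the whole protocol fits in a short Python simulation (a minimal sketch of the standard textbook recipe, with the particles named as above—qubit 0 is Chris, qubit 1 is Bob, qubit 2 is Alice; the function teleport is my own framing):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def lift(gate, target, n=3):
    """Embed a single-qubit gate into an n-qubit register (qubit 0 = leftmost)."""
    out = np.array([[1.0]], dtype=complex)
    for q in range(n):
        out = np.kron(out, gate if q == target else I2)
    return out

def cnot(control, target, n=3):
    """Permutation matrix for a CNOT on an n-qubit register."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - q) for q, b in enumerate(bits)), i] = 1
    return U

rng = np.random.default_rng()

def teleport(chris, correct=True):
    """Teleport the one-qubit state `chris` (qubit 0) onto Alice (qubit 2)."""
    # Bob (qubit 1) and Alice (qubit 2) start entangled: (|00> + |11>)/sqrt(2).
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(chris, bell)

    # "Compare Bob to Chris": a Bell measurement on qubits 0 and 1.
    state = lift(H, 0) @ cnot(0, 1) @ state
    probs = np.abs(state) ** 2
    outcome = rng.choice(8, p=probs / probs.sum())
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1   # the two classical bits

    # Collapse onto the measured branch (Chris's properties are now destroyed).
    keep = [i for i in range(8) if ((i >> 2) & 1, (i >> 1) & 1) == (m0, m1)]
    collapsed = np.zeros(8, dtype=complex)
    collapsed[keep] = state[keep]
    state = collapsed / np.linalg.norm(collapsed)

    # The "phone call": the two bits tell Alice's lab which fix-up to apply.
    if correct:
        if m1:
            state = lift(X, 2) @ state
        if m0:
            state = lift(Z, 2) @ state

    base = m0 * 4 + m1 * 2
    return m0, m1, state[[base, base + 1]]           # Alice's final amplitudes

chris = np.array([0.6, 0.8j])        # an arbitrary state to "beam" across town
_, _, alice = teleport(chris)
print(np.allclose(alice, chris))     # True: Alice now carries Chris's state
```

The two bits returned by the Bell measurement are the “phone call”: without them, Alice’s lab holds only an unreadable scramble.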

  Seth Lloyd and his colleagues developed and even tested a variant of this method that combines teleportation with—you guessed it—post-selection, to send information back in time instead of across space. 52 It involves breaking the entanglement of Alice and Bob and then re-entangling one of them (Bob, say) with Chris. In certain post-selected circumstances, information associated with Chris will be found to have been correlated with information associated with the divorced particle, Alice, prior to this procedure. 53 “It’s complicated,” as they say of relationships on Facebook, so don’t bother trying to wrap your head around it—but for a mental image, you could picture Jeff Bridges being digitized by the laser, same as before, and instead of materializing inside his slick 1982 Tron game, he would materialize several years earlier inside a mid-1970s Pong console. (In the prequel, Pong: Legacy, we’d find that Flynn had become ruler of a dull virtual table tennis scenario and gone mad from the boredom.) In their experimental test of this idea, Lloyd and colleagues successfully made a particle interact with itself in the past and, as predicted by Igor Novikov’s self-consistency principle, only in a non-paradoxical way—“no matter how hard the time-traveler tries, she finds her grandfather is a tough guy to kill.” 54 Before you get too excited, it was only a few billionths of a second into the past. Still, it was an important proof of principle. Lloyd considers this a way that information and potentially even matter could be teleported into the past, creating what physicists call a closed timelike curve—that is, a time loop. 55
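  The toy simulation above can at least gesture at the flavor of the trick (a loose caricature of the post-selection logic, not of Lloyd’s actual photon experiment): skip the corrections and the phone call entirely, and simply discard every run in which the Bell measurement did not happen to return (0, 0).

```python
# Reusing teleport() from the sketch above: no corrections, no phone call.
# Keep only the runs where the Bell measurement came out (0, 0); in that
# post-selected subset, Alice matched Chris without any signal being sent.
runs = [teleport(chris, correct=False) for _ in range(4000)]
kept = [alice for m0, m1, alice in runs if (m0, m1) == (0, 0)]
print(len(kept) / len(runs))                       # ~0.25: a quarter survive
print(all(np.allclose(a, chris) for a in kept))    # True on every survivor
```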

  If there is one technology that may really open the door to the future detectors of tomorrow, it could be quantum computing—the biggest, juiciest, most expensive fruit of those hippie physicists’ tireless attempts to figure out what science-fictional things could be done with entangled particles. A quantum computer is a supercooled matrix of entangled (or quantum-coherent 56) atoms or other particles, whose parameters such as “spin” are assigned values and used to encode and process information. These quantum bits or qubits play a role similar to the binary switches in a conventional microprocessor, except that each qubit can assume multiple values simultaneously (0 and 1, off and on) thanks to the magic of superposition. The system needs to be kept extremely cold to maintain the precious entanglement between qubits, which washes out very easily with any disturbance from the environment.
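  In the bare-bones mathematics (a toy illustration of the standard formalism, nothing vendor-specific), a qubit is just a pair of complex numbers whose squared magnitudes give the odds of reading out 0 or 1:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)   # a classical-looking bit: definitely 0
one = np.array([0, 1], dtype=complex)    # definitely 1

# Superposition: "0 and 1 at once," here weighted one-third vs. two-thirds.
qubit = np.sqrt(1 / 3) * zero + np.sqrt(2 / 3) * one
print(np.abs(qubit) ** 2)                # [0.333..., 0.666...]: readout odds

# Joining qubits multiplies the bookkeeping: n qubits take 2**n amplitudes,
# which is why a few hundred of them outrun any conceivable classical memory.
register = np.kron(qubit, np.kron(qubit, qubit))
print(register.size)                     # 8 amplitudes for just 3 qubits
```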

  The typical explanation for why a quantum computer ought to be vastly more powerful than a classical one is that it can simultaneously take multiple computational paths to the right answer to a problem. A somewhat more precise characterization, according to Scott Aaronson, is that quantum computers “choreograph a pattern of interference” in which wrong answers cancel each other out and only correct solutions remain. 57 Either way, it boils down to the fact that the “amplitudes” underlying quantum probabilities, as they are known in the field, interact constructively or destructively like waves to produce a single outcome. Like light rays finding the fastest path to their destination, a quantum computer finds the fastest path to the solution of the problem it was programmed to solve. But there is still a surprising amount of disagreement about how quantum computers actually work (and even whether they will work as expected 58), partly because there is still so much disagreement about how quantum mechanics works more generally.
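  The cancellation is easy to watch in miniature (a minimal sketch, using the Hadamard gate found in any quantum computing primer):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard gate

# Apply H twice to |0>. Two computational paths lead to the "wrong" answer |1>:
#   0 -> 0 -> 1, with amplitude H[1, 0] * H[0, 0] = +1/2
#   0 -> 1 -> 1, with amplitude H[1, 1] * H[1, 0] = -1/2
path_a = H[1, 0] * H[0, 0]
path_b = H[1, 1] * H[1, 0]
print(path_a + path_b)                 # 0.0: the wrong answers cancel exactly

# The full evolution confirms it: all the probability lands on |0>.
state = H @ H @ np.array([1, 0])
print(np.abs(state) ** 2)              # [1. 0.]
```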

  Remember that the standard (Copenhagen) view that particles are totally wavy until collapsing on measurement is not the only possible interpretation. Retrocausal frameworks suggest instead that particles’ behavior is determined by their subsequent interactions as well as their previous ones, and that it is our inability to know about those future interactions that makes particles’ pre-measurement behavior seem more wave-like than it really is. Also, remember what Olivier Costa de Beauregard, Huw Price, and Ken Wharton proposed about entangled particles: that their fates are really interwoven across time. Thus, the term “nonlocality” commonly used to characterize entanglement may be a bit misleading: Entangled states really seem to partake of a kind of eternity, the quintessence of the Minkowski glass block. If this is true (or at least, insofar as it might characterize what is happening in the expensive guts of a quantum computer), what may give a matrix of entangled qubits its extra computational oomph is not a kind of massive parallel processing but, rather, an ability to draw on its processing power over its history. Could it be that quantum computers really compute across time?

  Call an ambulance! That thud you just heard is an “actual real-life physicist” somewhere keeling over from a stroke as a result of a non-physicist being wrong (or too simplistic) about quantum computing in an ESP book. But there is mounting evidence that there is indeed something special, even “timeless,” about entangled states like those that a quantum computer relies on. Again, new research in quantum information theory is showing that the causal order in quantum computational operations can be indefinite—it is possible to scramble cause and effect in operations using entangled particles. 59 This is being proposed as a principle that could be used to radically accelerate quantum computation. 60 Whether a quantum computer could ever be used to “precognize” its own future states remains to be seen.
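  The canonical toy model of indefinite causal order is called the quantum switch, and it too fits in a few lines of Python (a bare-bones sketch of the textbook construction, not of any particular experiment). A control qubit in superposition decides whether operation A is applied before or after operation B, and interfering the control afterwards reveals, in a single shot, whether the order matters at all:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

def quantum_switch(A, B):
    """Run A and B on a target qubit in an order set by a superposed control."""
    P0 = np.diag([1, 0]).astype(complex)       # control |0>: A first, then B
    P1 = np.diag([0, 1]).astype(complex)       # control |1>: B first, then A
    S = np.kron(P0, B @ A) + np.kron(P1, A @ B)

    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # control: both orders at once
    psi = np.kron(plus, np.array([1, 0], dtype=complex))  # target starts in |0>
    out = np.kron(H, I2) @ S @ psi             # interfere the two orderings
    return np.linalg.norm(out[:2]) ** 2        # probability the control reads 0

print(quantum_switch(X, X))  # 1.0: order can't matter when A and B commute
print(quantum_switch(X, Z))  # 0.0: XZ = -ZX, and one query to each reveals it
```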

 
