by Jed Brody
Let’s review, once more, the creation of two entangled photons that are just as likely to be horizontally polarized as vertically polarized. Let’s place a horizontal polarizer in front of each photon. Half the time, both photons pass through the polarizers. Half the time, neither photon passes through.
Let’s name the photons A and B. Suppose Photon A reaches its polarizer before Photon B. If Photon A passes through the polarizer, then we know that Photon B is certain to pass through its horizontal polarizer when it gets there. But what really changed when Photon A passed through its polarizer? Did Photon A change? Did both photons change? Did neither change? Or do the changes take place only after the photons reach detectors, or after the detectors communicate the result to a circuit board?
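For concreteness, here is a minimal sketch of the statistics just described (a toy in Python; the shared coin flip is an illustrative device, not a claim about mechanism). Notice that these matched-angle statistics alone are easy to mimic with a shared preset outcome, which is part of why the question of what changes, and when, is so hard to settle.

```python
import random

# A toy run of the matched-polarizer statistics described above:
# each entangled pair shares one outcome, so both photons pass the
# horizontal polarizers or neither does, with 50/50 odds.
n_pairs = 10_000
both_pass = neither_pass = 0
for _ in range(n_pairs):
    pair_passes = random.random() < 0.5   # one shared 50/50 outcome per pair
    if pair_passes:
        both_pass += 1
    else:
        neither_pass += 1
print(both_pass / n_pairs, neither_pass / n_pairs)   # each ~0.5
```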
If I claim that the size of my right foot changes when I measure my left foot, we would expect to observe this directly: when I hold a ruler up to my left foot, we should be able to watch my right foot shrink or expand, or perhaps transform from fuzziness to solidity. Similarly, we want to observe Photon B, both before and after Photon A is measured, to see if anything changes. But then, the first observation of Photon B would be a measurement, which may affect the state of Photon A!
My claim is that both photons are transformed by the first observation of either photon. Thus this transformation can never be observed; we can’t perform any observation prior to the first observation. So we can never watch one particle change in response to the measurement of its twin. The innermost workings of nature remain forever out of reach. The quest for complete understanding is always an unscratchable itch. The only fact that’s (almost) certain is that local realism cannot account for measured results.
Local realism is defeated by violations of Bell inequalities, which is why local realism is the negative space of quantum physics: local realism is the excluded explanation. If we reject local realism, what’s left? Are the only remaining views of reality mystical? Does quantum mechanics, after all, say something mystical about the universe? We can no longer argue that physics is merely a set of formulas for predicting experimental outcomes, disjoint from philosophical considerations: Bell inequalities show that experiment has overruled a plausible philosophical assumption. There are many alternative assumptions, but none are especially plausible, and all have their partisans.
Indeed, there are many philosophical interpretations of quantum mechanics. I will not try to compile a complete list, or give equal attention to leading viewpoints, or even classify the viewpoints in a standard way. But I will consider four categories of responses to Bell inequalities:
1. Ferret out assumptions we didn’t know we were making.
2. Abandon both locality and realism.
3. Abandon locality to save realism.
4. Abandon realism to save locality.
Ferret Out Assumptions We Didn’t Know We Were Making
If we hope to cling to local realism, like a life preserver in a stormy sea, we need to identify another assumption that may be false. We then blame this other assumption for the incompatibility between experiment and the mathematical constraints that we derived. If this other assumption is to blame, then local realism may be innocent.
First, let’s think about an assumption that seems identical to realism: the unstated assumption of counterfactual definiteness.1 This is the assumption that even though each photon goes through a polarizer set to a single angle, we can specify what the photon would do if it went through a polarizer at a different angle. The assumption of realism—that the photon has properties that predetermine its response to any chosen polarizer angle—seems to require counterfactual definiteness. In a moment, we’ll distinguish realism from counterfactual definiteness.
In normal life, counterfactual definiteness seems reasonable. For example, I’m not jumping right now, so it’s counterfactual to discuss what would definitely happen if I were to jump. Yet I can say with confidence that if I jump, I will come back down. And if I drop my pen, it will fall. And if I clap, I will hear the sound. It seems obvious that all these statements are true—and that’s because counterfactual definiteness is so innocuous, we assume it all the time.
We already know that quantum particles defy our expectations in many ways. We might as well ask whether there’s something fundamentally forbidden about specifying what a particle would do in any situation other than the one it actually experiences. If we reject counterfactual definiteness, can we save realism?
The distinction between realism and counterfactual definiteness becomes clearer if we consider the viewpoint of superdeterminism.2 According to superdeterminism, there’s no free will. The entire universe is a Rube Goldberg device evolving inexorably along its predetermined course. Every future occurrence, down to the minutest detail, was predetermined at the moment of the Big Bang. Free will is an illusion, and if we believe in this illusion, it’s only because we were predestined to do so.
All of the Bell inequalities are derived under the assumption that the experimenters can freely choose polarizer angles, such as 0°, 30°, or 60°. Since the photons don’t “know” the angles that they’ll encounter, they have to be prepared (they have to have hidden properties) for all possible angles. In a superdetermined universe, the photons have a lot less preparation to do. Each photon needs a property only for the single polarizer angle that it’s certain to encounter. The assumption of realism is thus valid; the photon has its single property (causing it to be transmitted or blocked) all along, even before we measure it. Although this explanation preserves realism, it’s still very strange. Somehow each photon “knows” exactly which polarizer angle it will encounter. The two photons no longer have to collude with each other, but instead each has to collude with its own polarizer before it even gets there. We can preserve locality by arguing that the collusion took place during the Big Bang, when everything was scrunched into one locale.
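To make this concrete, here is a minimal sketch assuming the standard quantum prediction that the probability of a mismatch (one photon passes, the other is blocked) for polarizer angles α and β is sin²(α − β), and assuming, as in the experiments described earlier, that the two photons always behave identically at equal angles. The function names are mine, and it uses one common Bell-type inequality; the derivation given earlier in the book may take a different form.

```python
import math

def mismatch(a_deg, b_deg):
    """Quantum prediction: probability that exactly one photon of an
    entangled pair passes, with polarizers at angles a_deg and b_deg."""
    return math.sin(math.radians(a_deg - b_deg)) ** 2

# If each photon carries preset pass/block answers for all three
# angles (local realism), a mismatch between 0 and 60 degrees forces
# a mismatch between 0 and 30, or between 30 and 60, so:
#     mismatch(0, 60) <= mismatch(0, 30) + mismatch(30, 60)
lhs = mismatch(0, 60)                     # 0.75
rhs = mismatch(0, 30) + mismatch(30, 60)  # 0.25 + 0.25 = 0.50
print(lhs <= rhs)                         # False: the inequality is violated
```

Local realism caps the left side by the right side; the quantum prediction breaks the cap, and experiments side with quantum mechanics.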
In any case, counterfactual definiteness isn’t valid in a superdetermined world. We can’t specify what a photon would do in any measurement except the one it actually experiences because there’s never any possibility of anything else.
Another viewpoint that undermines counterfactual definiteness is the many-worlds interpretation of quantum mechanics.3 In this view, all possible outcomes of a measurement are real—in parallel universes! When the measurement is performed, the world splits—the photons are vertically polarized in one world, and horizontally polarized in the other. (I believe adherents of this interpretation prefer different terminology: The only reality is the sum of all possible outcomes. So reality itself isn’t splitting; there are just new branches within the single reality, and we’re conscious of only one of the branches.)
As unrealistic as it seems, the many-worlds interpretation is based on realism. Quantum mechanics represents the state of our entangled photons as a sum of two mutually exclusive outcomes, horizontal polarization and vertical polarization. This sum is considered the ultimate reality in the many-worlds interpretation. The measurement causes the terms in the sum to split off into separate worlds. The sum of all the worlds remains the single deep reality, but we perceive only the one world we inhabit. In a sense, when I measure a photon’s polarization, I’m not changing the photon, which always existed as a sum of vertical and horizontal polarization; I’m changing myself, splitting into someone who observes vertical polarization, and someone who observes horizontal polarization.
Now we want to explore whether counterfactual definiteness makes sense in the many-worlds interpretation. In a particular world, can we specify what a photon would do if the polarizer were set differently from how it’s actually set? Well, what would make the experimenter decide to set it differently? Is it random? Let’s imagine that a random quantum event determines the direction of the polarizer. For example, we might set up a light source to emit a single photon. Let’s call this photon the Decider. We arrange an experiment so that the Decider has a 50 percent chance of being vertically polarized, and a 50 percent chance of being horizontally polarized. Suppose the polarization of this photon sets the angle of one of the polarizers in our entanglement experiment.
But wait—the world split when the Decider photon was measured! We can’t talk about what the entangled photon would do if the polarizer were set differently because that occurs only in a different universe!
When describing the many-worlds interpretation, authors like to give the disclaimer, “This sounds like science fiction.” And yet a surprising number of physicists actually believe in it. It’s amazing that the same profession that gives us airplanes and computer chips also tells us that our entire universe may be a vanishingly tiny speck in an exploding infinity of parallel worlds.
Indeed, the many-worlds interpretation has its partisans because of how it resolves the measurement problem in quantum mechanics.4 The measurement problem is not specific to entangled particles. Even a single particle is in a fundamentally undecided, unknowable state prior to measurement. The measurement forces the particle to settle into a state with a more exact value of one property (its position, for example), while another property (its speed) unavoidably becomes more uncertain. Similarly, when measuring a photon’s polarization in a horizontal or vertical direction, we learn whether it’s horizontally or vertically polarized, but we lose any information we may have had about whether it was polarized in the 45° or −45° direction. The loss of information about one property, when measuring a different property, is Heisenberg’s famous uncertainty principle.
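A short worked example in standard notation may help (the kets and state labels are conventional textbook notation, not taken from this book). A diagonally polarized photon is an equal superposition of horizontal and vertical:

```latex
|45^\circ\rangle = \tfrac{1}{\sqrt{2}}\bigl(|H\rangle + |V\rangle\bigr),
\qquad
|{-45^\circ}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|H\rangle - |V\rangle\bigr)
```

Such a photon passes a horizontal polarizer with probability (1/√2)² = 1/2. If it does pass, its state afterward is |H⟩ = (1/√2)(|45°⟩ + |−45°⟩), which is again a 50/50 superposition in the diagonal basis: whatever we knew about its diagonal polarization is gone.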
But at exactly what point does the measurement take place? This is a key question, because measurement brings about such a fundamental transformation: the particle seemingly transmutes into something it wasn’t just before. Does the transmutation occur when the photon passes through the polarizer? Or when it reaches the detector? Or when the detector sends an electronic signal to a circuit board? Or when the circuit board transmits the message to a computer? Or when the computer displays 0 or 1? Or when a conscious observer sees the result on the computer screen? Some physicists have actually proposed that consciousness creates objectively real states. Before registering in someone’s consciousness, the photon is in a fundamentally undetermined and unknowable state—and so is everything it encounters on its way, in an avalanche of indeterminacy! In this view, the computer screen is in some unimaginable combination of showing both mutually exclusive outcomes before the conscious observer comes along.
Measurement is a problem because a measurement disrupts the smooth evolution of a quantum state. The fundamental equation of quantum physics does one thing very well: it specifies the probabilities of measurable outcomes, and how these probabilities change over time. As soon as a measurement is made, all the outcomes that weren’t measured get thrown out of the equation. This “throwing out” process is external to the equation and no one fully understands it—and it doesn’t happen at all in the many-worlds interpretation.5 In the many-worlds interpretation, no outcomes get thrown out because all possible outcomes coexist in parallel worlds.
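Here is a minimal two-state sketch of the contrast (in Python with NumPy; the rotation angle and the names are arbitrary choices of mine, standing in for the full equation):

```python
import numpy as np

# A toy photon polarization state in the basis [H, V].
state = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition

# Smooth evolution: a unitary (here, a rotation) changes the state
# continuously; no possibility is discarded.
theta = np.pi / 8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
state = U @ state

# Measurement: compute Born-rule probabilities, pick one outcome,
# and throw the other term away (the step external to the equation).
probs = np.abs(state) ** 2
outcome = np.random.choice(2, p=probs)      # 0 = H, 1 = V
state = np.zeros(2)
state[outcome] = 1.0                        # only the observed term survives
print(outcome, state)
```

In the many-worlds interpretation, the last step simply never happens; both terms persist, in different branches.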
Now we’ll think about a few more assumptions underlying the derivations of the Bell inequalities. One is the assumption of fair sampling. No detector is 100 percent efficient; every detector misses a large fraction of the photons that reach it. For example, suppose a detector responds to 20 percent of the photons that reach it. Our unstated assumption is that each photon arriving at the detector has the same 20 percent chance of detection; the system is not somehow rigged. The possibility that fair sampling fails is called the detection loophole. If we reject the fair-sampling assumption, we could make the (outrageous?) claim that the detector somehow favors photons that violate Bell inequalities, just to fool us; only the detected photons violate Bell inequalities, and if we replaced our detectors with ideal (100 percent efficient) detectors, the Bell inequalities would in fact be satisfied.
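A toy simulation can make the worry concrete. The hidden-variable rule and the conspiring detector below are inventions of mine for illustration, not a model anyone proposes; the point is only that an outcome-dependent detector can skew the detected statistics, while a fair one cannot:

```python
import random

def passes(theta_deg, lam):
    """Toy local hidden variable: each pair carries an angle lam, and a
    photon passes a polarizer at theta_deg iff theta_deg is within
    45 degrees of lam (mod 180)."""
    return abs((theta_deg - lam + 90) % 180 - 90) < 45

def detected_mismatch_rate(detect, n=200_000):
    mismatches = detected = 0
    for _ in range(n):
        lam = random.uniform(0, 180)
        a, b = passes(0, lam), passes(60, lam)   # polarizers at 0 and 60 deg
        if detect(a, b):                         # did this pair register?
            detected += 1
            mismatches += (a != b)
    return mismatches / detected

fair   = lambda a, b: random.random() < 0.04          # unbiased, ~4% of pairs
rigged = lambda a, b: random.random() < (0.3 if a != b else 0.1)

print(detected_mismatch_rate(fair))    # ~0.67, same as the full ensemble
print(detected_mismatch_rate(rigged))  # ~0.86, inflated by the biased detector
```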
We’ve also assumed that the photons don’t “know” in advance the angle of the polarizer they’ll encounter; this is why realism requires the photons to have preset outcomes for all possible polarizer angles. What if the photons can somehow sense in advance the angles of the polarizers? Then we can’t derive any Bell inequalities because the photons need to have just a single preset outcome. (We followed this line of reasoning in the discussion of superdeterminism.) We’re now imagining some form of signal from the polarizers to the photons, so that the photons are alerted to exactly which polarizer angles they’ll encounter.
To me, it’s much less spooky to think that the measurement of one photon affects the other, than to think that the photons somehow sense the angles of polarizers they haven’t arrived at yet. If we imagine that the photons may somehow receive information from the polarizers before they get there, we have what’s called the locality loophole. The idea is that if information travels (no faster than light) from the polarizer to the photons, then this information is available locally, at the photons’ original position. This is contrasted with the idea that the measurement of one photon instantly (faster than the speed of light) affects the other (nonlocal) photon.
From 1982 until 2015, various experiments closed either the detection loophole or the locality loophole. For example, in 1982, Alain Aspect led an experiment that effectively rotated the polarizers as the photons were traveling to them, so the photons couldn’t have any advance notice as to which polarizer angle they’d encounter.6 Other experiments used detectors with high efficiency to close the detection loophole. But prior to 2015, a true zealot could still insist that local realism was not to blame for the experimental violation of mathematical constraints. Finally, in 2015, both loopholes were closed simultaneously in a single experiment.7
To close the locality loophole, the polarizer angles have to be chosen unpredictably so that the photons can’t have any advance notice of what they’ll encounter. We can place a random number generator at each polarizer to choose the angle. But what if some unknown, common cause affects both the random number generators and the photons? Then the photons could have predetermined properties all along, while still violating Bell inequalities, because the unknown influence tampers with the random number generators. This loophole is called the freedom-of-choice loophole. It challenges our assumption that the choice of polarizer angles can be made freely, independent of the properties of incoming photons. Theoretical work has shown that even a minimal tampering influence would suffice to preserve local realism; we don’t need to go to the extreme of superdeterminism.8
How can we close the freedom-of-choice loophole? We need to rule out a tampering influence, which travels no faster than the speed of light. Some physicists have used light from distant stars to set the angle of polarizers, and the results were the same as always: Bell inequalities were violated.9 The starlight was emitted hundreds of years ago, and we assume that the stellar photons were unaltered during their long journey to Earth. If a tampering influence exists, it must have planned ahead by hundreds of years, before the starlight was emitted, just to produce a Bell inequality violation. This hypothetical, tampering influence is like a patient villain with an extremely perplexing goal.
In another experiment to close the freedom-of-choice loophole, about 100,000 people from around the world generated random numbers.10 The random numbers were used to set the polarizer angles (or equivalent analyzer settings) in tests of Bell inequalities. Participants generated random numbers by playing a video game online.11 The Bell inequalities were violated, as usual. We conclude that local realism was defeated: the entangled particles did not have definite properties prior to measurement, or if they did, the measurement of one particle affected the other. Alternatively, a superdeterministic power governed the seemingly random choices of 100,000 people so that their choices corresponded with properties that the entangled particles had prior to measurement. In either case, common sense cannot account for the results.
Let’s consider a final assumption that we’ve made all along, which also seems like common sense: the two entangled particles, when separated by an arbitrarily large distance, are in two different places, not a single place. How could this possibly be untrue? Well, what if the two entangled particles are connected by a wormhole, which is a shortcut through space and time (like Madeleine L’Engle’s “wrinkle in time”)? If both ends of a wormhole are effectively the same point, then no matter how far apart the entangled particles are, they occupy the same position! Leonard Susskind and Juan Maldacena advanced this idea in 2013.12 The succinct nickname for this conjecture is ER = EPR.
“EPR” refers to the paper Einstein coauthored with Boris Podolsky and Nathan Rosen in 1935, arguing that entanglement reveals a flaw in quantum mechanics: a more complete theory is needed to specify the exact outcome of any possible measurement. Less than two months after the EPR paper was published, Einstein and Rosen (ER) published a paper about (what we now call) wormholes.13 If ER = EPR, then entangled particles are even stranger than we thought, connected via invisible tunnels through space and time!
In fact, a favored view among physicists is that reality is a higher-dimensional space.14 Our ordinary ideas of space and time are inadequate to understand entanglement. To recognize our cognitive limitations, we can imagine a world with fewer dimensions than ours: a society constrained to exist in a flat, geometric plane.15 The two-dimensional people in this world have no concept of three-dimensional space because they have never experienced it.
Now imagine that a three-dimensional titan starts poking the tips of a fork through the two-dimensional world. The fork is poked at random moments through random locations. The two-dimensional people (quivering in terror) perceive the tines of the fork as four isolated, round blobs. They see no possible physical connections among the four blobs; they can completely encircle each blob with a string to prove that it’s isolated from the others. The four blobs always appear at almost the same time, however, and they disappear at almost the same time. Although the two-dimensional scientists can’t predict where or when the blobs will appear, the distance between adjacent blobs is always the same. (Perhaps the blobs expand slightly after they appear, and they shrink before they vanish, but the distance between the centers of the blobs is always the same.)