
The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family


by Peter Byrne


  EVERETT:

  I’ve got to admit that, that is right, and might very well have been totally instrumental in what happened.

  Winter, 1954

  By Christmas 1954, Everett was hard at work constructing mathematical proofs and logical arguments to show that the “outrageous implications” of quantum mechanics were true. Within the year, he claimed to have solved the measurement problem with his new theory of a universal wave function.

  In January 1955, Wheeler evaluated him for the National Science Foundation.

  [Everett is] highly original, originated an apparent paradox in the interpretation of the measurement problem in quantum theory and showed its remarkable difference from other paradoxes in the respect that it deals with amplification processes, encountered in typical measurements, contrary to all other paradoxes so far considered. In discussions of this paradox with graduate students and staff members here at Princeton, and with Niels Bohr, Everett brought to light new features of the problem that make it in and of itself an appropriate subject for an outstanding thesis when further developed. Everett has also done independent calculations on problems in general relativity in association with Misner, each stimulating and providing ideas for the other…. Everett did on his own an outstanding paper—or so I am told by Professor Tucker—on game theory. He really is an original man.17

  Wheeler was initially enthusiastic about the concept of a universal wave function because he wanted to apply it to a theory of quantum gravity. He was seriously troubled by its consequence: multiple universes. Nonetheless, he was to make a heroic effort to convince Bohr that Everett’s idea was useful. Not surprisingly, he failed to obtain Bohr’s approval, but he had already changed the history of physics by encouraging Everett to transcend the ontological restrictions imposed by the Copenhagen interpretation. In effect, Everett broke the embargo on talking about the measurement problem by treating the universe as fundamentally quantum mechanical and, therefore, whole.

  But before we can describe what it is that Everett did to make his strange theory plausible to Wheeler, we need to delve more deeply into how physicists and philosophers had dealt with the “outrageous implications” of quantum mechanics before Everett arrived on the scene with his bright idea.

  It will require learning the basics of quantum theory as Everett learned it, but in ordinary language and without using a single equation!

  10 More on the Measurement Problem

  Once we have granted that any physical theory is essentially only a model for the world of experience, we must renounce all hope of finding anything like ‘the correct theory.’ There is nothing which prevents any number of quite distinct models from being in correspondence with experience (i.e. all ‘correct’), and furthermore no way of ever verifying that any model is completely correct, simply because the totality of all experience is never accessible to us.

  Hugh Everett III, 19561

  Mysterious slits

  In a famous experiment in 1807, Thomas Young demonstrated the wave character of light. Shining a single light source through a pair of small holes cut into a plate onto a screen, Young saw a pattern of bright and dark fringes, rather than two bright spots. This “interference” pattern was explained by considering light to be composed of waves, not corpuscles, or tiny particles, as had been envisioned by Sir Isaac Newton.

  The fringes stood out because the peaks and troughs of light waves passing through the two holes overlapped or superposed like waves of water rippling out from the epicenters of two stones thrown into a pond. The dark fringes resulted from peaks and troughs combining to cancel each other and register as shadow between areas of brightness, while merging peaks registered as bright lines.

  Nowadays the experiment is performed with slits, not holes. The “two-slit” experiment is important in the history of quantum mechanics because shooting electrons (particles discovered in 1897) at two closely separated slits also produces an interference pattern. When you beam a lot of electrons at the slits, an interference pattern builds up, showing that the electron beam has wave properties. Complicating the issue, repeatedly shooting single electrons, one at a time, at the pair of slits also produces an interference pattern, indicating that each electron must have flown through both slits simultaneously and interfered with itself. Equally mysterious is the fact that when you repeatedly aim and shoot one electron through one slit—after blocking the other slit so that the electron cannot possibly go through it—the interference pattern disappears, as if the electron “senses” whether or not it is possible for it to go through both slits at once, or whether it is limited by nature to passing through only one.

  It seems as if the electron “behaves” sometimes like a wave, and sometimes like a particle, depending on how the experiment is set up. Of course, the electron is not an animal, so it does not behave. Rather, where it lands—or might land—depends on the environment in which it moves. Although the modern experimentalist has no way of knowing in advance exactly where a particular electron will land, using quantum mechanics he can confidently predict the probability for it to land at a certain spot. But until it lands and leaves an observable record, the electron, as a probability wave, is in a superposition of positions as it moves through the apparatus toward the screen, where one landing spot out of all those possible is selected by nature (or so the standard, non-Everettian interpretation goes).2
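  A worked sketch of what “interfering with itself” means, using the wave-function notation (ψ) that is introduced later in this chapter (the subscripts simply label the two slits and are illustrative, not notation from the text): the chance of an electron landing at a point x on the screen is

\[
P(x) \;=\; \bigl|\psi_1(x) + \psi_2(x)\bigr|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^{*}(x)\,\psi_2(x)\right].
\]

  The last term is the interference: it is negative at the dark fringes and positive at the bright ones. Block one slit and only a single squared term survives, so the fringes vanish, just as the experiment shows.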

  Richard Feynman, a master of scientific summary, remarked of the two-slit experiment:

  It is not our ignorance of the internal gears, of the internal complications, that makes nature appear to have probability in it. It seems to be somehow intrinsic. Some one has said it this way—‘Nature herself does not know which way the electron is going to go.’ A philosopher once said ‘It is necessary for the very existence of science that the same conditions always produce the same results.’ Well, they do not.

  Quantum dawn

  In the history of physics, 1905 is known as the Year of Miracles because Einstein published four revolutionary papers, including his theory of special relativity, “On the Electrodynamics of Moving Bodies.” A decade later, as the First World War raged, Einstein expanded his model of a relativistic, yet still-classical, still-deterministic universe to include gravity (the theory of general relativity). Abstract, counter-intuitive, and, for a few years, divorced from robust experimental proof, the basic notions of Einstein’s new theories could still be communicated by visual metaphors: by pictures of train passengers experiencing the same event differently when traveling at “relative” velocities; by moving clocks slowing down relative to each other; by space-time as a fabric molding around heavy bodies. Without being able to read the equations of special and general relativity, normal people could still appreciate Einstein’s discovery that there are boundaries circumscribed by the speed of light, that mass and energy interlock, that massive objects curve the continuum of space-time, that gravity is geometrical, and that the bodily motions induced by gravitational attraction or acceleration are indistinguishable. In short, Einstein’s relativity theories kept intact causality, determinism, and the common sense we use to predict the future.

  But Einstein wrote about something else besides relativity theory in 1905: he examined the “photoelectric effect.” He asked why shining light on metal causes electrons to zing off the surface. His answer was that as electrons trapped inside the metal substance absorb “quanta” of light energy, they gain energy in proportion to the frequency (the number of vibrations per second) of the light and may escape the metal. The energy came in quantifiable chunks.

  Einstein’s quantization of light waves used a new mathematical constant, “Planck’s constant,” signified by “h,” discovered in 1900 by Max Planck, a professor of physics at the University of Berlin. Planck’s constant (also called the “quantum of action”) is a fabulously small number: 33 zeros and a 6 to the right of the decimal when expressed in standard units. Planck unwittingly unleashed a new, non-classical physics by postulating that energy is radiated in discrete bundles, with his new constant h governing the proportion of energy to the frequency of its transmission as a radiating wave. Einstein postulated that not only does radiation exchange energy with matter in quantum units, it also exists as quantum units (later called “photons”). Importantly, this meant that systems composed of individual chunks of light energy could now be tracked statistically, i.e. probabilistically.
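  In symbols, the proportionality described here is compact (the kinetic-energy line for the escaping electron is standard textbook shorthand rather than notation from the text; W stands for the minimum energy needed to free an electron from the metal):

\[
E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \text{joule-seconds}, \qquad
E_{\text{kinetic}} = h\nu - W .
\]

  An electron escapes only if a single quantum carries at least W of energy, which is why the frequency of the light, rather than its sheer brightness, decides whether electrons zing off the surface.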

  Despite the excruciatingly high level of technical expertise a person needed to explore in detail the worlds opened up by relativity and quantum theory, the breakthrough fired up the popular imagination. When relativity was validated in 1919 by the observation that starlight is deflected by the Sun, the wild-haired, doe-eyed Einstein became a global personality—the cynosure of pure intelligence, blessed with a sense of social responsibility and good humor. He was awarded the Nobel Prize for Physics in 1921 for his work on the photoelectric effect, which, ironically, had set the theoretical stage for undermining the determinist physics that he cherished.

  The statistical element

  In a burst of group creativity during the 1920s, the founders of quantum mechanics discovered that electrons, as well as photons, are governed by the laws of probability. In fact, all atomic particles behave probabilistically—they simply do not occupy determinate positions in space and time.

  The mathematics underpinning quantum mechanics is incredibly precise. Theorists use it to predict outcomes, and experimentalists to test predictions. The ability to make these predictions is based on the collection of data: the key to determining the probability that an electron will show up at a particular place when measured is to recreate the same experiment over and over until thousands or millions or billions of measurements on identically “prepared” particles have been logged. Early experimenters performed their tests with small “cloud chambers”; we do it now by recording data about particle collisions inside house-sized drift chambers and town-sized particle accelerators.

  Quantum mechanics differs from classical physics in this: if you can exactly record and reproduce the initial conditions of a classical experiment (dropping a cannonball from a tower, say), then, every time you exactly repeat the set-up of that experiment, the results will be exactly the same. The same is not true in quantum physics: every time you repeat the experiment, tracking the trajectory of an electron, say, the results can be different, within a range of possible outcomes. Identical initial conditions do not necessarily produce identical results!

  Let’s stick to measuring position: after logging data on the relative frequencies of the different positions at which a series of identically prepared electrons are found over a large number of measurements, that data is translated into linear algebra, creating a statistical function that obeys wave mechanics: ψ.

  ψ represents a quantum state (which can be a superposition). To reiterate: ψ is manufactured from data collected by observing the relative frequencies with which certain positions are repeated in an experiment. ψ sums up experimental results, and the wave equation governing it enables us to make predictions. The Schrödinger equation can tell us the probability that we will find an electron at a certain position within a range of possible positions. For instance, we might learn that there is a 40 percent chance that the electron will appear at position X, and a 60 percent chance that it will be found at position Y. But we do not know whether we will find the electron at X or Y, only the probability that it will be at one or the other, within the range of positions already observed. In this case, the range, or “probability distribution,” includes only X and Y, but the range could include an infinite number of possible positions stretching from “here” to the end of the universe.3

  Let’s deepen the explanation. The wave function (or “probability amplitude”) is not the probability, only the seed of a probability. Remarkably (and inexplicably), multiplying a wave function attached to a specific position by itself, squaring it, reveals the probability that the next time we measure an identically prepared particle it will be found at that same position.4 This is the rule discovered by Max Born in 1926: Squaring the absolute value of ψ for a property of a particle tells all that we can know about that property. We do not know if the electron will be at place X, we only know the probability that it could be at that position, because we have collected enough data to determine the relative frequency with which identically situated electrons have appeared at that position in the past. And that is why quantum mechanics is viewed as statistical and indeterminist.
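  As a worked illustration of Born’s rule using the 40/60 example above (the numbers are purely illustrative, and the amplitudes are taken to be real for simplicity):

\[
\psi \;=\; \sqrt{0.4}\,\psi_X \;+\; \sqrt{0.6}\,\psi_Y, \qquad
P(X) = \bigl(\sqrt{0.4}\bigr)^2 = 0.4, \qquad P(Y) = \bigl(\sqrt{0.6}\bigr)^2 = 0.6 .
\]

  The two probabilities sum to one: the electron will certainly turn up somewhere in the distribution, but nothing in ψ says which of the two places it will be this time.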

  And, as the founders of quantum mechanics realized, to their great consternation, the micro world of the quantum departs from the macro world of our experience: Until the position of a particle is measured, it has no certain position. Its wave function describes all possible positions, but the precise future of the particle is indeterminable. However, every possible position within ψ is evolved by the Schrödinger equation deterministically, as if it will be at a certain place within a range of places (the probability distribution). And the wave equation does not distinguish between possible positions; each position is treated as equally real by the equation until the measurement interaction occurs and the particle is found at a particular place with a specific probability of reoccurring at that place.5 Our current understanding of nature offers us no explanation of why it is at that place. All it offers is a probability that if we repeat the experiment it will appear at that place again. Nor does nature tell us why we do not find the particle at all of the positions in the probability distribution encoded in ψ.
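  The deterministic evolution described here is governed by the Schrödinger equation, which in its general form reads (Ĥ, the Hamiltonian, encodes the system’s energy; ħ is Planck’s constant divided by 2π):

\[
i\hbar\,\frac{\partial \psi}{\partial t} \;=\; \hat{H}\,\psi .
\]

  Because the equation is linear, a superposition of possible positions at one moment evolves into a superposition at every later moment; nothing in the equation itself ever singles out one outcome.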

  Hence: the measurement problem.

  Indeterminism

  The German physicist Max Born, who introduced the postulate of squaring ψ to obtain a classical probability, wrote:

  One obtains the answer to the question, not ‘what is the state after the collision’ but ‘how probable is a given effect of the collision’ … Here the whole problem of determinism arises. From the point of view of our quantum mechanics there exists no quantity which in an individual case causally determines the effect of a collision…. I myself tend to give up determinism in the atomic world.6

  Born puzzled over how probability related to Schrödinger’s equation:

  The motion of particles follows probability laws but the probability itself propagates according to the law of causality.7

  In other words, as ψ evolved causally, deterministically, and linearly, each measurement of the particle yielded a single (and often different) result. This meant that probability was no longer a measure, as in classical physics, of the experimenter’s ignorance about a pre-existing condition; indeterminism was, somehow, embedded in Nature’s construction of the quantum condition.

  In 1927, Heisenberg wrote an epitaph for classical determinism:

  Even in principle we cannot know the present in all detail. For that reason everything observed is a selection from a plenitude of possibilities and a limitation on what is possible in the future. As the statistical character of quantum theory is so closely linked to the inexactness of all perceptions, one might be led to the presumption that behind the perceived statistical world there still hides a ‘real’ world in which causality holds. But such speculations seem to us, to say it explicitly, fruitless and senseless. Physics ought to describe only the correlation of observations. One can express the true state of affairs better in this way: … quantum mechanics establishes the final failure of causality.8

  Heisenberg’s comment was prompted by his discovery of the uncertainty principle, a natural law prohibiting the simultaneous, precise measurement of complementary quantum properties, such as position and momentum, or energy and time. Everett accounted for the uncertainty principle in his theory, but, contradicting Heisenberg, he viewed quantum mechanics as fundamentally causal and deterministic in the sense that every physically possible event occurs.
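  In its modern textbook form, the position–momentum version of the uncertainty relation reads (Δx and Δp are the statistical spreads in position and momentum over many identically prepared measurements; ħ is again Planck’s constant divided by 2π):

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2} .
\]

  Sharpening one spread forces the other to widen; no preparation of the particle can make both arbitrarily small at once.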

  Heisenberg made another philosophical comment that has caused much confusion and debate in the years since:

  I believe that one can fruitfully formulate the origin of the classical ‘orbit’ in this way: the ‘orbit’ comes into being only when we observe it.9

  Heisenberg, von Neumann, and to a large extent, Bohr, privileged the role of the observer by partitioning the observer from the object observed in a measurement interaction. But both the observer and the object are composed of interacting atomic particles, which are represented by interacting, overlapping, superposing, entangling wave functions. So how can they be separate? How could it be possible to stand outside the wave function when, according to the Schrödinger equation, the wave function of an object naturally expands to include the wave functions of all the objects with which it interacts, including the observer?

  Entanglement

  Entanglement is tied to superposition—and it is also inexplicable (albeit describable). Entanglement is the principle that a single ψ can describe the combined state of two or more separate particles. Entangled particles may be spatially separated, yet linked, correlated. Consider a pair of interacting electrons that are entangled (“prepared”) by a measuring device so that the “spin” of one particle must be “up” if the spin of its partner is “down.”10 After their spins are correlated, the particles fly off very fast and very far in opposite directions. The principle of superposition tells us that until we measure the spin of one of these particles, the composite wave function includes spin up and spin down for each particle. But when we do measure one of the particles, recording spin up, for instance, we then automatically know that its entangled partner is spin down, i.e. the second particle is no longer in a superposition of spin states even though we did not directly measure it (only its partner).
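  A minimal sketch of such a correlated pair in standard notation (the arrows label spin up and spin down for particles 1 and 2; this particular combination, the “singlet” state, is one standard way of preparing the correlation described above, not a formula from the text):

\[
\psi_{\text{pair}} \;=\; \frac{1}{\sqrt{2}}\bigl(\,\uparrow_1\,\downarrow_2 \;-\; \downarrow_1\,\uparrow_2\,\bigr).
\]

  Squaring either amplitude gives a probability of one half for each mixed outcome, while “both up” and “both down” never occur; measuring one particle as up therefore settles, instantly, that its distant partner is down.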

 
