
Einstein's Unfinished Revolution


by Lee Smolin


  But sometimes there is more than one way the system could decohere. A perfect example of this is Schrödinger’s cat. The cat could decohere as a live cat or it could decohere as a dead cat. What makes the difference is a quantum variable: if the atom was decayed the cat would decohere as dead; if the atom was excited the cat would decohere as living. So a detector is a kind of amplifier, with a filter that only allows it to register states where the atom is definitely either excited or decayed.

  The puzzle, you may recall, was: What happened to the cat while the atom existed in a superposition of excited and decayed? The answer is still the same—if you look at the quantum state microscopically: it is a correlated superposition of an excited atom with a live cat superposed with a decayed atom and a dead cat.

  But if you look only at bulk properties so that decoherence can do its work, the randomness turns the superposition into an almost irreversible change. Now there are two outcomes—the live cat and the dead cat—and both emerge! This, according to the decoherence story, is how the world splits in two.
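
  For readers who want to see this concretely, here is a minimal numerical sketch using a toy dephasing model of my own, not anything specific to this book: the correlated superposition is written as a two-by-two density matrix over the pair of outcomes (live, dead), and averaging over many random phases picked up from the environment washes out the off-diagonal terms that encode the superposition, leaving what looks like an ordinary mixture of the two outcomes.

```python
import numpy as np

# Toy dephasing sketch (an illustrative assumption, not the book's own model):
# the two basis states stand for the "live cat" and "dead cat" outcomes.
rng = np.random.default_rng(0)

psi = np.array([1.0, 1.0]) / np.sqrt(2)       # equal superposition of live and dead
rho_pure = np.outer(psi, psi.conj())          # density matrix of the pure superposition

n_env = 10_000                                # number of random environmental "kicks"
phases = rng.uniform(0.0, 2.0 * np.pi, n_env)

# Average the density matrix over the random relative phases that the
# environment imprints on the two branches.
rho_avg = np.zeros((2, 2), dtype=complex)
for phi in phases:
    branch = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    rho_avg += np.outer(branch, branch.conj())
rho_avg /= n_env

print("pure superposition:\n", np.round(rho_pure, 3))
print("after dephasing:\n", np.round(rho_avg, 3))
# The diagonal entries (the bulk probabilities of "live" and "dead") are
# untouched, while the off-diagonal coherences shrink toward zero: the
# superposition has effectively split into two separate outcomes.
```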

  The Oxford thinkers then claim that the branchings and the splittings of the wave function are defined by decoherence.* The split is made so that it separates different outcomes which have different values of macroscopic properties, such as the position of a dial.

  The main claim then is that only those subsystems which decohere can be counted on to have observers associated with them. As we are interested in what observers see, we should focus on these and throw the rest away. This opens up a route to deriving probabilities in which you compare only the likelihoods of what would be observed on branches that decohered.

  This introduces a notion of observers into the theory, which might be thought to weaken its claim to realism. However, this is a way of discovering a role for observers that arises from the dynamics of the theory, which is surely better than just postulating a special role for observers at the beginning. One might argue that probabilities are not intrinsic to the world, but are only aspects of observers’ beliefs about the world. Then such a description could be consistent with realism because there is an objective characterization of a property that distinguishes observers from other subsystems. Observers are subsystems that decohere.

  Decoherence solves the preferred splitting problem because decoherence takes place only with respect to certain observables. Often these are positions of large-scale objects.

  Before we go further, I should mention that there is, unfortunately, a problem with making decoherence a necessary part of the interpretation of the theory, which was pointed out a long time ago by my teacher Abner Shimony. This problem can be put very simply. Rule 1 is reversible in time, so every change a state undergoes under Rule 1 can be undone, and indeed will be undone if we wait long enough. But Rule 2 is irreversible, and the way it introduces probabilities for the outcomes of measurements makes sense only if measurements are irreversible and cannot be undone. Thus, Shimony argued, it is impossible that Rule 2 could be derived from Rule 1 alone.

  As I described it above, decoherence is an irreversible process in which coherence of states, needed to define superpositions, is lost to random processes in the environment of the measuring instrument. But how can decoherence arise in a theory based on Rule 1 alone, as all changes dictated by Rule 1 are reversible in time?

  The answer is that decoherence is always an approximate notion. Complete decoherence is impossible. Indeed, if we wait a very long time, decoherence will always be reversed, as the information needed to define superpositions seeps back into the system from the environment.

  This is due to a general theorem, called the quantum Poincaré recurrence theorem.3 Under certain conditions, which can be expected to hold for systems containing an atomic system plus a detector, there is a time within which the quantum state of the system will return arbitrarily close to its initial state. This time, called the Poincaré recurrence time, can be very large, but it is always finite. The conditions include that the spectrum of energies be discrete, which is certainly reasonable.*
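
  To see what the theorem means in miniature, here is a small numerical sketch, again a toy model of my own rather than anything from the book: one qubit whose phase coherence is scrambled by a handful of environment qubits. Because this toy system's energy spectrum is discrete, the coherence that decoheres away at early times comes back in full at a finite recurrence time.

```python
import numpy as np

# Toy recurrence sketch (an illustrative assumption, not the book's model):
# one "system" qubit dephasing-coupled to a few environment qubits.
n_env = 6
couplings = 1.0 * np.arange(1, n_env + 1)     # commensurate couplings -> exact recurrence

# Enumerate all basis states of the (1 + n_env)-qubit register; the most
# significant bit is the system qubit.  The Hamiltonian
#   H = sum_k g_k * sz_system * sz_k
# is diagonal in this basis, so evolution is just a phase per basis state.
dim = 2 ** (n_env + 1)
bits = (np.arange(dim)[:, None] >> np.arange(n_env, -1, -1)) & 1
sz = 1 - 2 * bits                             # bit 0 -> sz = +1, bit 1 -> sz = -1
energy = (sz[:, :1] * sz[:, 1:] * couplings).sum(axis=1)

psi0 = np.full(dim, 1.0 / np.sqrt(dim))       # every qubit starts in (|0> + |1>)/sqrt(2)

def coherence(t):
    """Magnitude of the system qubit's off-diagonal term after time t."""
    psi_t = np.exp(-1j * energy * t) * psi0
    psi_t = psi_t.reshape(2, -1)              # axis 0: system qubit, axis 1: environment
    rho = psi_t @ psi_t.conj().T              # reduced 2x2 density matrix
    return abs(rho[0, 1])

for t in (0.0, 0.5, 1.0, 2.0, np.pi):
    print(f"t = {t:5.3f}   |coherence| = {coherence(t):.3f}")
# The coherence starts at 0.5, drops to nearly zero (decoherence), and is
# fully restored at t = pi for these commensurate couplings: a miniature
# quantum Poincare recurrence.
```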

  Decoherence is a statistical process, similar to the random motion of atoms that leads to increases of entropy, bringing systems to equilibrium. These processes appear to be irreversible. But they are actually reversible, because every process governed by Rule 1 is reversible. This is true in both Newtonian and quantum physics; both have a recurrence time. In either case the second law of thermodynamics, according to which entropy probably increases, can hold only for times much shorter than the Poincaré recurrence time. If we wait long enough, we will see entropy go down as often as it goes up.

  Similarly, one might try to argue that over shorter times, there is a low probability for decoherence to reverse, giving way to recoherence.

  Now, as long as we are interested only in what happens over much shorter times than it takes to recohere, and we want only an approximate description of what goes on when atomic systems interact with large bodies, suitable for practical purposes, decoherence provides a useful approximate description of what happens during a measurement. Indeed, decoherence is a very useful concept when analyzing real quantum systems; for example, much of the design of a quantum computer goes into counteracting decoherence. But as a matter of principle, that description is incomplete, as it leaves out the processes that will recohere the state if we wait long enough.

  However, when the state recoheres, measurements based on decoherence are undone. Therefore, measurements as described by Rule 2 cannot be the result of decoherence, at least as decoherence is described in a theory based only on Rule 1.

  So it seems that decoherence cannot alone be the key to how probabilities appear in the Everett quantum theory, because it is based solely on Rule 1.

  * * *

  —

  THIS DISCUSSION MAKES it clear that the question of where probabilities come from is central to making sense of the Many Worlds Interpretation. The key to understanding the Oxford approach lies in understanding what a probability is. This question is far more difficult than it appears. We all have an intuitive idea of what it means to say the probability of a flipped coin landing on heads is 50 percent. People know the difference between what to expect when the forecast says the chance of rain tomorrow is 10 percent and when it says the chance is 90 percent. But when we look into what we actually mean when we talk of probabilities, we find the notion gets surprisingly slippery.

  Part of the reason probability is confusing is that there are at least three different kinds, or meanings, of probability.

  The simplest notion is that probability is a measure of our credence or belief that something will happen. When we say there is a 50 percent chance of heads on the next coin toss, that is not a statement about the coin; it is a description of our belief about the result of tossing the coin. These are called Bayesian probabilities.

  When we say the Bayesian probability for rain tomorrow is 0 percent, that is just a way of saying we believe it will not rain, and when we say that probability is 100 percent, that says we are sure it will. Probabilities between them, such as 20 percent, 50 percent, or 70 percent, refer to the strength of our belief that it will rain. In particular, when we say something has a 50 percent probability of happening, we are really confessing we have no idea whether it will happen.

  Bayesian probabilities are clearly subjective. They are best evaluated in terms of our behavior. The higher the probability for rain, the more likely it will be that we would bet on rain, or at least carry an umbrella.

  Many probabilities we deal with in ordinary life are best understood in this way, as betting odds. Certainly, probabilistic predictions about the stock market or the housing market are of this kind. Indeed, most of the time when we refer to the probability of some future event, we are making a subjective statement of belief, using Bayesian probabilities.
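
  A toy calculation (my own example, not the book's) makes the betting picture concrete: a credence about rain cashes out as a rule for when to carry the umbrella, obtained by comparing the certain nuisance of carrying it with the expected cost of getting soaked.

```python
# Toy decision rule (an illustrative example): a subjective probability
# cashes out as behavior, here the choice of whether to carry an umbrella.
def carry_umbrella(p_rain: float, nuisance_cost: float, soaked_cost: float) -> bool:
    """Carry the umbrella when the expected cost of going without it
    exceeds the certain nuisance of carrying it."""
    return p_rain * soaked_cost > nuisance_cost

for p in (0.1, 0.5, 0.9):
    print(p, carry_umbrella(p, nuisance_cost=1.0, soaked_cost=10.0))
# 0.1 -> False, 0.5 -> True, 0.9 -> True: the stronger the belief in rain,
# the more willing we are to "bet" on it by carrying the umbrella.
```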

  A second kind of probability comes into play when we keep records of the relevant events. If we toss a large number of coins and keep records of how often they come up heads, we can define the proportion of heads in that sequence of tosses to be a probability. These are called frequency probabilities.

  Batting averages and other sports statistics are frequency probabilities. A batting average, for example, gives the proportion of a player’s at bats in which he got a hit.

  Sometimes weather forecasts are of this kind. When the National Weather Service website tells us in the morning that there is a 70 percent probability that it will rain this afternoon, what they might be saying is that within their vast records, roughly 70 out of 100 days with conditions like those of this morning had rain in the afternoon.

  Of course, these probabilities are imprecise. The problem with these is that so long as the number of days observed is finite, the frequencies will vary. But the more days of which the weather service has records, the more reliable the forecast will be.

  If one flips a coin 100 times, one can ask how often one gets heads. The proportion of heads in those tosses is called the relative frequency of getting heads. The count will tend to be around 50 out of 100; we are not surprised if it often turns out to be 48 or 53.

  For any finite number of trials, then, the number of heads will rarely be exactly half. The key idea is that, were we able to do an infinite number of trials, the proportion of different outcomes would tend to some fixed values. This defines the relative frequency notion of probability.

  The problem with this is that in the real world, we only get a finite number of tries. As long as the number of trials is finite, there is a good chance that the number of heads will be different from exactly half the trials. A surprisingly hard question to answer is what it takes to show that a probabilistic prediction is wrong, given that we can only do a finite number of tests. Indeed, often all we can say is that our prediction is improbable. But for this to be meaningful we have to define what we mean by improbable. We cannot assume we know what improbable means as we are in the process of defining it.

  Suppose we toss a coin a million times and come up with 900,000 heads. It is possible that this is a rare fluke and our coin is normal. But we can conclude that it’s very probable—although not certain—that the coin is weighted.
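
  Both points can be checked numerically. The sketch below (my own example, with an arbitrary random seed) first simulates fair-coin tosses to show the relative frequency settling near one half as the number of trials grows, and then uses the binomial distribution to estimate just how improbable 900,000 heads in a million tosses would be if the coin really were fair.

```python
import math
import random

random.seed(1)  # arbitrary seed, for repeatability

# Relative frequency of heads for a simulated fair coin.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9,} tosses: {heads:>7,} heads, relative frequency {heads / n:.4f}")

def log10_prob_exact(k: int, n: int) -> float:
    """log10 of the probability of exactly k heads in n fair tosses,
    computed with log-gamma functions to avoid numerical underflow."""
    log_comb = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return (log_comb + n * math.log(0.5)) / math.log(10)

print("log10 P(exactly 900,000 heads | fair coin) ~",
      round(log10_prob_exact(900_000, 1_000_000)))
# The answer is on the order of -160,000: not strictly impossible, but so
# improbable that a weighted coin is by far the better explanation.
```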

  By definition, we choose our subjective probabilities. But we can ask that there be a relation between the subjective Bayesian probabilities we choose and objective frequencies taken from past records. So long as we have no more information, the best bet we can make is the one that follows the odds that are based on historical records. What we mean here by “best bet” is the choice that, most of the time, will serve our interests. In economic-speak we could say that this is the “most rational choice.”

  We might put this as follows:

  It is most rational, in a situation where you have limited knowledge, to choose to align your subjective betting odds with the frequencies observed in the historical record.

  This is a version of the “principal principle” of the philosopher David Lewis. This principle has at its root an assumption that, everything else being equal, the future will resemble the past. Or at least that, given incomplete information, it is rational to bet on the future resembling the past. This bet may sometimes put you on the wrong side of history, but it is still the safest bet you can make.*

  Now, suppose we ask a different question, which is to explain the frequencies observed in the records of a particular experiment. Suppose the frequency observed was close to 50 percent. It would be natural to try to explain that result by an application of the laws of physics to the particular experiment.

  Such an explanation might give reasons why heads would be as likely an outcome as tails. This would include the hypothesis that the coin was fair, as well as hypotheses about the tosses, how the coin behaves when it hits a surface, and so on. Our explanation might also refer to results from other experiments, which support our belief in the theory.

  Once we have such an explanation, we would use it to predict that a single toss has an equal chance to end up heads or tails. This prediction is a belief, and hence a subjective Bayesian probability. But it refers to the single toss. This toss need not be part of a large number of trials; hence no relative frequency is involved. It then makes sense to say that the particular coin has, in its context, a physical propensity for a single throw to end up heads 50 percent of the time.

  The propensity is an intrinsic property the coin has as a consequence of the laws of physics. It can be expressed as a probability, but it is not a belief. Rather, it justifies a belief. It is something in the world that we may have a belief about. Nor, as we said, is a propensity a frequency, for it is a property of the coin, which applies to each individual toss. Propensity would then seem to be a third kind of probability, different from either beliefs or frequencies.

  Note that unlike the other two kinds of probabilities, propensities are consequences of theories and hypotheses about nature. But they have distinct relations to the two other kinds of probabilities. We can have beliefs about propensities. Propensities in turn can explain relative frequencies and can justify beliefs.

  In ordinary quantum mechanics, probabilities arise from Rule 2, in particular the Born rule, which connects the probability of seeing a particle at some position to the square of the amplitude of the wave at that position. That probability is posited to be an intrinsic property of the quantum state; hence it is a propensity probability. Quantum mechanics asserts that there is no deeper explanation for that probability and the resulting uncertainty; it is an intrinsic property of the quantum state.
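
  In standard textbook notation (not Smolin's own symbols), the rule can be written out explicitly. For a particle with wave function ψ(x), and for a state expanded over a discrete set of possible outcomes, the Born rule reads

$$
P(x)\,dx = |\psi(x)|^2\,dx, \qquad\qquad P(i) = |c_i|^2 \quad\text{for}\quad |\Psi\rangle = \sum_i c_i\,|i\rangle, \quad \sum_i |c_i|^2 = 1.
$$

  Here P(x) dx is the probability of finding the particle within dx of the position x, and the c_i are the amplitudes of the individual branches of the superposition. The probabilities are fixed by the state itself, which is why they count as propensities.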

  When Everett dropped Rule 2, the result was a theory without any notion of probability, intrinsic or otherwise. As I described, he tried and failed to replace this with a frequency notion of probability.

  The dilemma proponents of the Everett formulation of quantum mechanics faced was that there are branches in which observers see that Born’s rule connecting magnitudes with frequencies holds, and there are other branches whose observers see that Born’s rule is violated. Let’s call these benevolent branches and malevolent branches. The latter may have smaller wave functions than the former, but one cannot use this to argue that the latter are any less probable, because to do so would be to impose on the theory the relation between size or magnitude of the wave function and probability. But that is exactly what proponents of Everett’s formulation are trying to derive from Rule 1; to assume it would be to sneak in Rule 2 by the back door.

  * * *

  —

  THE EVERETT THEORY is a hypothesis about the nature of reality. It posits that all that exists is a wave function evolving deterministically. From the imaginary perspective of a godlike observer outside the universe, there are no probabilities, because the theory is deterministic. All branches of the wave function exist; all are equally real.

  The Everett theory asserts that each of us leads many parallel lives, each defined by a branch that has decohered. The theory also tells us that each of these branches exists, with certainty. So if this theory is right, since there is no Rule 2, there are no objective probabilities at all. Let us call this Everett’s hypothesis.

  But we are not godlike; we are observers living inside the universe, and, according to the hypothesis, we are part of the world that the wave function describes. So that external description has no relevance for us or for the observations we make.

  We are then faced with a puzzle. Where in this world do we find the probabilities that ordinary quantum mechanics claims to predict, which are to be compared with frequencies counted by experimentalists? With no Rule 2, these probabilities are not part of the world as it would be in our absence. Frequencies are counts of definite outcomes, but such things are not unique or exclusive facts in Everettian quantum theory, because given any possible counting of outcomes of a repeated experiment, there are branches which have that count. There are branches in which those counts agree with the predictions of quantum mechanics (with Rule 2) and branches in which they don’t. We cannot say the former are more probable than the latter, because in Everettian quantum theory there are no objective probabilities. We cannot even say that the former are more numerous than the latter because in realistic cases there will be infinite numbers of each.

  You read this right: Everettian quantum mechanics predicts that an infinite number of observers will observe experimental results that disagree with the predictions of quantum mechanics! That is the fate of the infinite number of observers whose ill fortune takes them along malevolent branches. It is also the case that an infinite number of observers on benevolent branches see experimental results consistent with quantum theory’s predictions. But that is small consolation, because a benevolent branch can turn malevolent at any moment.

  What it seems we cannot say, in Everettian quantum mechanics, is that quantum theory predicts objective probabilities, which are inherent features of nature that exist in our absence. And, unless we find another way to introduce probabilities, we cannot say that the theory can be tested by doing the experiment and counting the different outcomes, because the failure of any such test can be dismissed by supposing that we are just on a malevolent branch—and those are not any less probable or any less numerous than the benevolent branches which confirm the probabilistic predictions of quantum mechanics.

  To address this situation, David Deutsch made an interesting proposal, which was to ask not whether the Everett theory is true or false, but how we, as observers inside the universe, should bet, were we to assume that it is true. In particular, the major thing we have to bet on, assuming the Everett story is true, is whether the branch we live on is benevolent or malevolent. Every other bet we might make depends on that single bet. If we are on a benevolent branch, then bets we place based on Born’s rule will pay out. If we aren’t so fortunate, then all bets are off, because literally, anything could happen.

 
