The World Philosophy Made

by Scott Soames


  A second experiment puts this apparatus on a floating platform that can be turned in any direction, e.g., facing north, south, east, or west. Because of the rotation of the earth and its revolution around the sun, these changes in orientation of the apparatus relative to the earth’s rotation, as well as changes in the velocity of the earth’s movement around the sun at different times, might—as far as one could know before putting it to the test—result in small changes in the speed of light passing through the slits of the apparatus. If so, the device would have to be recalibrated when its orientation in the laboratory is changed, or when it is used at different times of the year. Special relativity predicts this will never happen, and it doesn’t.20 One can also put the apparatus in motion in a straight line and determine whether the slits have to be adjusted for the light to pass through, depending on whether the light source moves with the apparatus or is stationary relative to it. As predicted by special relativity, they don’t.

  Einstein’s theory of general relativity, which emerged ten years after he developed special relativity, introduced a new way of understanding gravity, accompanied by a new relativistic conception of the structure of space-time. In the new structure, the shortest path between two points is a curved line, as it is on the surface of a sphere. In addition, parallel lines that intersect the same line at an angle of 90 degrees may, if extended far enough, intersect one another, just as the lines of longitude on the surface of the earth intersect one another at the poles, despite crossing the equator at right angles. Nevertheless, Einstein didn’t think of space-time as having the uniform geometry of the surface of a sphere. Rather, its curvature is variable. Unlike parallel lines on a sphere, parallel lines on a portion of space-time with negative (saddle-like) curvature can diverge from one another as they are extended. Space-time as a whole was not thought of as having a single type of curvature everywhere.

  If the geometry of space-time is, as Einstein says, variable, what determines, or at any rate influences, that variability? The answer is the distribution of matter in the universe. This is where gravity comes in. As we know, Newton thought of it as a force acting on bodies, proportional to their mass, which changes their trajectories through absolute space and time, pulling them closer together. By contrast, in developing general relativity, Einstein came to think of material bodies as curving space-time itself, in proportion to their mass. Since light always follows the geometry of space-time, the trajectory of light from a distant source will curve around a massive body in its path, appearing to an observer on the far side of that body to “bend” around it. This fact was experimentally confirmed in 1919 by Arthur Eddington, whose photographs of a solar eclipse demonstrated the bending effect. Though this was rightly hailed as vindicating general relativity, talk of “bending” shouldn’t be misunderstood. In following the locally curved path, light was doing what it always does: moving along the shortest physically possible path in space-time from one point to another.
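  To give a sense of the magnitude involved (a standard textbook figure, not taken from Soames’s text): general relativity predicts that a light ray passing a body of mass M at closest distance b is deflected by an angle

```latex
\alpha = \frac{4GM}{c^{2}b}
```

which, for light grazing the edge of the sun, comes to roughly 1.75 seconds of arc, the tiny shift in apparent star positions that Eddington’s eclipse photographs were designed to detect.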

  More recent observations have provided further confirmation of general relativity. One involves a quasar 8 billion light-years from the earth, with a galaxy 400 million light-years away from us intervening between us and the quasar. This galaxy, which is a massive collection of matter, bends the space-time through which the light passes, resulting in the appearance to us of a cluster of four images of the same quasar. In addition, the rotation of the earth has been shown to produce a disturbance in space-time around it, caused, according to general relativity, by the rotation of a massive body. This was observed as a change in the axes of gyroscopes aboard a satellite, Gravity Probe B (GP-B), launched in 2004. The principal investigator, Francis Everitt of Stanford University, explained the results this way:

  Imagine the earth as if it were immersed in honey. As the planet rotates, the honey around it would swirl, and it’s the same with space and time. GP-B confirmed two of the most profound predictions of Einstein’s universe, having far-reaching implications across astrophysics research.21

  One of the most fascinating discussions of the way in which gravity is understood in general relativity is Richard Feynman’s treatment of Galileo’s famous experiment showing that the speed with which bodies of varying masses (e.g., a bowling ball vs. a marble) fall when dropped from a tower is not affected by their mass; except for air resistance, they fall at the same rate and so land together.22 The reason for this in Newton’s framework is that although the gravitational force exerted on a body by other bodies is proportional to its mass, the body’s resistance to being accelerated (its inertia) is proportional, in exactly the same degree, to its mass. Hence, these two roles of mass cancel each other out, and, Newton explains, the marble and bowling ball hit the ground at the same time.
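  The cancellation can be made explicit with a line of Newtonian algebra (a standard reconstruction, not a quotation from Feynman or Soames). For a body of mass m falling toward the earth (mass M, at distance r from its center), the gravitational force on the body and the acceleration that force produces are

```latex
F = \frac{GMm}{r^{2}}, \qquad a = \frac{F}{m} = \frac{GM}{r^{2}}
```

Since m enters both the force and the body’s resistance to acceleration, it divides out, and the resulting acceleration is the same for the marble and the bowling ball.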

  By contrast, in general relativity the bowling ball and the marble are free of all forces after being dropped, and so follow straight (i.e., shortest) trajectories through space-time, much like twin B in the earlier example, who, remaining at rest, was subjected to no forces during A’s journey. In Galileo’s case, the tower and any clocks at the top or bottom of the tower continue to be moved by forces deriving from the earth’s movement—rather like twin A was moved away from twin B when she fired her rockets. With this in mind, suppose there are two synchronized clocks at the base of the tower. One, Clock A, remains there; the other, Clock B, is thrown upward precisely when Galileo drops the marble and bowling ball—thrown with just enough force so as to hit the ground the moment the marble and the bowling ball do. Just as twin A’s accelerated motion relative to twin B resulted in more time elapsing for B than A between A’s departure and return, so, Feynman observes, the account of gravity in general relativity predicts that the accelerated motion of Clock A (on the ground) throughout the flight of Clock B—which, once it leaves the thrower’s hand, is free of forces and undergoes no acceleration—results in more time elapsing for (and being recorded by) Clock B than Clock A. (The same would be true if clocks were dropped along with the bowling ball and marble.) This, it has been reported, has been confirmed.23
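  Feynman’s prediction about the two clocks can be checked numerically in the weak-field approximation. The sketch below is illustrative only: the launch speed and step size are assumed values, and the rate formula dτ/dt ≈ 1 + gh/c² − v²/(2c²) is the standard low-speed, weak-gravity expansion rather than anything given in the text.

```python
# Compare the proper time recorded by Clock A (held at the base of the tower)
# with Clock B (thrown straight up, landing when the dropped balls do).
g = 9.81            # gravitational acceleration near the earth, m/s^2
c = 299_792_458.0   # speed of light, m/s
v0 = 20.0           # assumed launch speed of Clock B, m/s
T = 2 * v0 / g      # time of flight until Clock B lands
dt = 1e-5           # integration step, s

gain = 0.0          # proper time of Clock B minus proper time of Clock A
t = 0.0
while t < T:
    h = v0 * t - 0.5 * g * t * t   # height of Clock B above the ground
    v = v0 - g * t                 # velocity of Clock B
    # Clock A sits at h = 0 with v = 0, so the difference in clock rates is:
    gain += (g * h / c**2 - v * v / (2 * c**2)) * dt
    t += dt

print(f"Clock B gains about {gain:.3e} s over Clock A")
print(f"Analytic weak-field value: {v0**3 / (3 * g * c**2):.3e} s")
```

The thrown, freely falling clock comes out ahead by a few femtoseconds, in agreement with the prediction that Clock B records more elapsed time than Clock A.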

  Up to now, in speaking of Newton, Einstein, and others, I have mostly been concerned with macroscopic objects and events, which came to be understood much better in the twentieth century than they ever had been before. The twentieth century also saw stunning developments in our understanding of the universe at the microscopic level, including atomic and subatomic states and particles. The work of Max Planck and Albert Einstein on radiation and light early in the century was followed by Niels Bohr’s theory of the atom, and the mathematical formalisms for describing the subatomic world developed by Max Born, Werner Heisenberg, Erwin Schrödinger, and Paul Dirac. The result was the spectacular success of quantum mechanics in precisely measuring and predicting subatomic events and processes. Paradoxically, this predictive success wasn’t matched by a comparable advance in our ability to connect the micro-level of reality with the macro-level, or to even understand what micro-level facts generate the well-attested micro-level predictions we are able to make. Simply put, quantum physics is telling us something about the universe, but we don’t yet know what it is.

  The point can be illustrated with a simple abstract example. Suppose we arrange for particles of type P to traverse a region of space by one or the other of two possible routes, A and B, from which the particles emerge, continue on, and end up in places C or D. Having the ability to test whether a particle takes route A or B, we wish to know which of the two possible destinations it will arrive at when it takes those routes. So we set up an unobtrusive measuring device that displays ‘A’ or ‘B’ depending on which route the particle takes. When we run experiments, we find that 50% of the particles that take route A end up at place C and 50% end up at D. The same is true for route B. So, we are confident that we can turn off the monitor, knowing that running more particles through the region of space will always result in 50% ending at C and 50% ending at D. Surprisingly, however, we are wrong. With the monitor off, 100% of the particles arrive at place C and none arrive at D!24

  How can that be? Surely, one is inclined to think, either there must be more than two routes or our measurements monitoring the routes are faulty. It turns out, however, that there aren’t more routes, and we aren’t mismeasuring the particles going through them; every measuring device produces the same results. Thus, we are forced to conclude that measuring the particles somehow changes the reality we are trying to measure. But how? There is no consensus about this among quantum physicists or philosophers of physics.

  There is, however, an accepted vocabulary for describing the situation and making probabilistic predictions. In quantum physics it is commonly said that certain properties of a particle don’t exist until you measure them—or, at any rate, that it doesn’t make sense to say that they have the properties, or fail to have them, until you measure them (at which point they definitely do or definitely don’t have the properties). It is not clear what, if anything, is said to exist (prior to measurement), but it certainly sounds as if there is a wave function associated with the particle (or other entity)—a kind of smear of energy that can be represented by a mathematical function that contains information encoded in positive or negative numerical values.25 Think of these numbers as measuring the amplitude—height (positive numbers) or trough (negative numbers)—of the wave. From these we calculate the probability that an entity has one or another property. The probability is the square of the amplitude of the wave, so, in standard cases, the probability calculated from an assignment of a positive number n is the same as that resulting from its negative counterpart −n. As we will see, however, in special cases assignments of n and −n to the same possible outcome cancel out before the final probability is calculated. This is crucial to predicting differences between the behaviors of measured vs. unmeasured particles.
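  In symbols (a standard statement of the rule the passage describes; in the full theory amplitudes are complex numbers and the probability is the squared magnitude, but real positive and negative values suffice for the examples here):

```latex
P = a^{2}, \qquad \text{so that} \qquad n^{2} = (-n)^{2}
```

A positive and a negative amplitude of equal size thus yield the same probability when squared separately, yet sum to zero, and hence to probability zero, if they are combined before squaring.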

  Proponents of the once standard Copenhagen interpretation of quantum physics sometimes seemed to want to say that an unmeasured particle (or other entity) has a certain probability of having a given property P, while refusing to say that it either does have P or doesn’t have P prior to being measured, and suggesting that both the claim that it does and the claim that it doesn’t are meaningless. This challenge to classical logic is troubling, in part because it isn’t clear what it would mean to say that the statement “the probability that x has P is so-and-so” is true (and hence meaningful), if the claim that x has P—to which you have assigned a probability—is either meaningless, or a claim that couldn’t possibly be true. It is one thing to assign a probability to a claim one doesn’t, and perhaps can’t, know to be true; it is another to assign a probability to a claim that one takes to be meaningless (or to a claim that one knows could not possibly be true).

  Things are improved if we modestly revise the above, perhaps incautious, characterization, by saying simply that the claim that an unmeasured particle (or other entity) has a certain probability of being measured as having property P is true—while continuing to take the claim that it has a certain probability of being P, or of not being P, to be meaningless. Although this terminological revision doesn’t remove the violation of classical logic, at least the claim it makes is not so obviously incoherent. Nor does the revision resolve the mystery of how mere measurement could bring it about that the wave function associated with a particle “collapses,” and the particle comes to have the seemingly independent property of being P, or of not being P.26

  With this in mind, let us return to our example of particles traversing different routes, A and B, to final positions C or D. When we measure the routes taken, we find that half the particles observed to travel through route A, and half those observed to travel through route B, end up at position C, while the rest end up at D. But when no measurement of the trajectories takes place, they all end up at C. Quantum physics has a way of accommodating this. Allowing wave functions associated with particles to take negative numbers among their values makes it possible to predict that certain possibilities will cancel each other out in a way that yields determinate results, even though in other cases no such canceling occurs and only probabilities can be predicted.

  When, in our example, there is no measurement of particles passing through routes A or B, the wave function assigned to the particle generates a determinate outcome—arrival at position C. This result is reached in roughly the following way. First, the numerical value 0.7071 (which when squared would give us the probability 0.5) is assigned to arriving at C via route A; the same number is assigned to arriving at D via A. Second, 0.7071 is assigned to arriving at C via route B (no surprise), but the value assigned to arriving at D via route B is −0.7071. Because we have positive and negative values assigned to the same outcome, namely arriving at D, the rules of quantum mechanics tell us to sum these values before determining the final probabilities. Since their sum is 0, and since 0 squared is 0, we generate the prediction that the probability of arriving at D is 0. This, together with the reinforcing values for arriving at C, results in the prediction that the probability of arriving at C is 100%.27
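  The arithmetic can be laid out explicitly. The sketch below is not from the text: it assigns an amplitude of 1/√2 (≈ 0.7071) to each leg of each path, so that the totals stay normalized, whereas the text’s presentation assigns a single 0.7071 per route-and-destination; the bookkeeping, adding amplitudes for unmeasured alternatives before squaring, is the same.

```python
import math

s = 1 / math.sqrt(2)  # ~0.7071, the amplitude assigned to each leg

amp_route = {"A": s, "B": s}                 # source -> route
amp_leg = {("A", "C"): s, ("A", "D"): s,     # route -> destination
           ("B", "C"): s, ("B", "D"): -s}    # note the one negative value

for dest in ("C", "D"):
    # With no measurement, amplitudes for the two routes are summed first...
    total = sum(amp_route[r] * amp_leg[(r, dest)] for r in ("A", "B"))
    # ...and only then squared to yield a probability.
    print(dest, round(total ** 2, 6))

# Prints: C 1.0 and D 0.0 -- every particle arrives at C, as observed.
```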

  What happens when we measure the particles moving through routes A and B? Since we know that measurement affects outcome, the relevant wave function is not associated simply with a particle; it is associated with the pair consisting of the particle and the measuring device, which, we may imagine, will be in one of two states, displaying ‘A’ or displaying ‘B’, immediately after measurement. As before, the wave function gives us numerical values for four states:

(i) the state consisting of the particle arriving at C after being correctly measured to follow path A has the value 0.7071, written “0.7071 (particle at C, device measures ‘A’)” for short;

(ii) the state consisting of the particle arriving at D after being correctly measured to follow path A also has the value 0.7071, written “0.7071 (particle at D, device measures ‘A’)” for short;

(iii) the state consisting of the particle arriving at C after being correctly measured to follow path B has the value 0.7071, written “0.7071 (particle at C, device measures ‘B’)” for short;

(iv) the state consisting of the particle arriving at D after being correctly measured to follow path B has the value −0.7071, written “−0.7071 (particle at D, device measures ‘B’)” for short.

  Note the two values involving arrival at D, one positive and one negative. Because the states assigned these amplitudes include different states of the measuring device, the states to which the positive and negative numbers are assigned are themselves different. Thus, the intrusion of measurement makes it impossible to sum or combine these values. This means that nothing sums to 0 and there is no cancellation, as there was when there was no measurement. As a result, a particle correctly measured as running through A has a probability of 50% of ending at C and a 50% probability of ending at D, and similarly for a particle correctly measured as running through B. This fits our observations: half the particles measured as running through A do end up at C and half end up at D; and half the particles measured as running through B do end up at C and half end up at D.
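  A companion sketch for the measured case (again an illustration under the same normalization assumption, not the text’s own calculation) shows why nothing cancels: the positive and negative amplitudes for arriving at D now attach to different joint states of particle and device, so each is squared separately.

```python
import math

s = 1 / math.sqrt(2)

# Amplitudes for joint states (destination, device display), normalized:
joint = {("C", "A"): s * s, ("D", "A"): s * s,
         ("C", "B"): s * s, ("D", "B"): -s * s}

# Distinct joint states cannot be summed; each is squared on its own.
probs = {state: amp ** 2 for state, amp in joint.items()}
for state, p in probs.items():
    print(state, round(p, 4))   # each joint outcome has probability 0.25

# Conditional on the device displaying 'A': 50% at C, 50% at D (same for 'B').
p_A = sum(p for (dest, disp), p in probs.items() if disp == "A")
print("P(C given 'A') =", probs[("C", "A")] / p_A)
```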

  Getting the mathematics to work out this way was an achievement, which, once systematized and mastered, allowed physicists to make incredibly precise and surprising predictions. But what reality is described by the assignment of probabilities to quantum states? How and why does measurement prevent the cancellation of possible outcomes in our example? What physical reality is represented by cancellation vs. non-cancellation? Suppose we think of it this way. States of the particle (in the unmeasured case) and of the particle-measuring device pair (when we measure the routes taken) are physical situations that cannot causally interact with one another. The particle, when unmeasured and left to its own devices, always arrives at position C; physical laws determine this result. But when measurement is introduced we are left with two equally probable possibilities—arriving at C and arriving at D.

  If we don’t say that measurement changes the laws of physics—as it would seem we shouldn’t—then we must say either that measurement introduces some real but previously unimagined element, or that measurement is somehow faulty. One possibility, espoused by the physicist David Bohm, is that some further hidden element, or variable, not caused by the measuring device, but somehow interacting with it, must be involved.28 A different idea, developed initially by Hugh Everett III in his 1957 doctoral dissertation in physics at Princeton, has, after decades of neglect, now begun to attract more attention.29

  Suppose, Everett imagined, that measurement (somehow) causes a single particle-plus-measuring device to split into a pair of such systems—particle p1 + measuring device 1 and particle p2 + measuring device 2. Suppose further that one of these particles reaches C, and is measured by its companion device as doing so, while the other reaches D, while being similarly measured. We don’t observe the latter because, despite being just like p1, p2 is causally isolated from p1, and so incapable of interacting with p1 in any way at all, including being observed by us to arrive at D when p1 arrives at C. From the moment of its creation, p2 is in a part of the universe inaccessible to us and our measuring device. The laws of physics determine that whenever a particle of type P is measured passing through routes A or B, a duplicate is created that will arrive at D when the original arrives at C.
