Quantum Reality


by Jim Baggott


  Le Verrier proposed the existence of another planet, closer to the Sun than Mercury, which became known as Vulcan. Astronomers searched for it in vain. Einstein was delighted to discover that his general theory of relativity predicts a further ‘relativistic’ contribution of 43 arc-seconds per century, due to the curvature of spacetime around the Sun in the vicinity of Mercury.† This discovery gave Einstein the strongest emotional experience of his life in science: ‘I was beside myself with joy and excitement for days.’7

  It seems from this story that a theory is only going to be abandoned when a demonstrably better theory is available to replace it. We could conclude from this that scientific theories are never falsified as such; they are just eventually shown to be inferior when compared with competing alternatives. Even then, demonstrably falsified theories can live on. We know that Newton’s laws of motion are inferior to quantum mechanics in the microscopic realm of molecules, atoms, and subatomic particles, and they break down when stuff of any size moves at or close to the speed of light. We know that Newton’s law of universal gravitation is inferior to Einstein’s general theory of relativity. And yet Newton’s laws remain perfectly satisfactory when applied to ‘everyday’ objects and situations, and physicists and engineers will happily make use of them, even though we know they’re ‘not true’.

  Problems like these were judged by philosophers of science to be insurmountable, and consequently Popper’s falsifiability criterion was abandoned (though, curiously, it still lives on in the minds of many practising scientists). Its demise led Paul Feyerabend—something of a Loki among philosophers of science—to reject the notion of the scientific method altogether and promote an anarchistic interpretation of scientific progress. In science, he argued, anything goes. He encouraged scientists8

  to step outside the circle and either to invent a new conceptual system, for example a new theory, that clashes with the most carefully established observational results and confounds the most plausible theoretical principles, or to import such a system from outside science, from religion, from mythology, from the ideas of incompetents, or the ramblings of madmen.

  According to Feyerabend, science progresses in an entirely subjective manner, and scientists should be afforded no special authority: in terms of its application of logic and reasoning, science is no different from any other form of rational inquiry. He argued that a demarcation criterion is all about putting science on a pedestal, and ultimately stifles progress as science becomes more ideological and dogmatic.

  In 1983, philosopher Larry Laudan declared that the demarcation problem is intractable, and therefore a pseudo-problem.* He argued that the real distinction is between knowledge that is reliable and that which is unreliable, irrespective of its provenance. Terms like ‘pseudoscience’ and ‘unscientific’ are ‘just hollow phrases which only do emotive work for us’.9

  Okay—time for some confessions. I don’t buy the idea that science is fundamentally anarchic, and that it has no rules. I can accept that there are no rules associated with the creative processes that take place along the shores of Metaphysical Reality. You prefer induction? Go for it. You think it’s better to deduce hypotheses and then test them? Great. Although I would personally draw the line at seeking new ideas from religion, mythology, incompetents, and madmen,* at the end of the day nobody cares overmuch how new theoretical concepts or structures are arrived at if they result in a theory that works.

  But, for me at least, there has to be a difference between science and pseudoscience, and between science and pure metaphysics. As evolutionary-biologist-turned-philosopher Massimo Pigliucci has argued, ‘it is high time that philosophers get their hands dirty and join the fray to make their own distinctive contributions to the all-important—and sometimes vital—distinction between sense and nonsense.’10

  So, if we can’t make use of falsifiability as a demarcation criterion, what do we use instead? I don’t think we have any real alternative but to adopt what I might call the empirical criterion. Demarcation is not some kind of binary yes-or-no, right-or-wrong, black-or-white judgement. We have to admit shades of grey. Popper himself (who was no slouch, by the way) was more than happy to accept this:11

  the criterion of demarcation cannot be an absolutely sharp one but will itself have degrees. There will be well-testable theories, hardly testable theories, and non-testable theories. Those which are non-testable are of no interest to empirical scientists. They may be described as metaphysical.

  Some scientists and philosophers have argued that ‘testability’ is to all intents and purposes equivalent to falsifiability, but I disagree. Testability implies only that the theory makes contact, or at the very least holds some promise of making contact, with empirical evidence. It makes absolutely no presumptions about what we might actually do in light of the evidence. If the evidence verifies the theory, that’s great—we celebrate and then start looking for another test. If the evidence fails to support the theory, then we might ponder for a while or tinker with the auxiliary assumptions. Either way, we have something to work with. This is science.

  Returning to my grand metaphor, a well-testable theory is one for which the passage back across the sea to Empirical Reality is relatively straightforward. A hardly testable theory is one for which the passage is for whatever reason more fraught. Some theories take time to develop properly, and may even be perceived to fail if subjected to tests before their concepts and limits of applicability are fully understood. Sometimes a theory will require an all-important piece of evidence which may take time to uncover. In a paper published in 1964, Peter Higgs proposed the mechanism that would be named for him, and the Higgs mechanism went on to become an essential ingredient in the standard model of particle physics, the currently accepted quantum description of all known elementary particles. But the mechanism wasn’t accepted as ‘true’ until the tell-tale Higgs boson was discovered at the Large Hadron Collider, nearly fifty years later.

  Make no mistake, if the theory fails to provide even the promise of passage across the sea—if it is trapped in the tidal forces of the whirlpool of Charybdis—then this is a non-testable theory. No matter how hard we try, we simply can’t bring it back to Empirical Reality. This implies that the theory makes no predictions, or makes predictions that are vague and endlessly adjustable, more typical of the soothsayer or the snake oil salesman. This is pure metaphysics, not science, and brings me to my second-favourite Einstein quote: ‘Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations—in short, by metaphysics.’12

  I want to be absolutely clear. I’ve argued that it is impossible to do science of any kind without involving metaphysics in some form. The scientists’ metaphysical preconceptions are essential, undeniable components in the construction of any scientific theory. But there must be some kind of connection with empirical evidence. There must be a tension between the ideas and the facts. The problem is not metaphysics per se but rather the nature and extent of the metaphysical content of a theory. Problems arise when the metaphysics is all there is.

  Of course, this is just my opinion. If we accept the need for a demarcation criterion, then we should probably ask who should be responsible for using it in making judgements. Philosophers Don Ross, James Ladyman, and David Spurrett argue that individuals (like me, or them) are not best placed to make such judgements, and we should instead rely on the institutions of modern science.13 These institutions impose norms and standards and provide sense-checks and error filters that should, in principle, exclude claims to objective knowledge derived from pure metaphysics. They do this simply by not funding research proposals that don’t meet the criteria, or by not publishing papers in recognized scientific journals.

  But, I would argue, even institutions are fallible and, like all communities, the scientific community can fall prey to groupthink.14 We happen to be living in a time characterized by a veritable cornucopia of metaphysical preconceptions coupled with a dearth of empirical facts. We are ideas-rich, but data-poor. As we will see in Chapter 10, I personally believe the demarcation line has been crossed by a few theorists, some with strong public profiles, and I’m not entirely alone in this belief. But, at least for now, the institutions of science appear to be paying no attention.

  So I encourage you to form your own opinions.

  An accepted scientific theory serves at least two purposes. If it is a theory expressed in the form of one or more mathematical equations, then these equations allow us to calculate what will happen given a specific set of circumstances or inputs. We plug some numbers in, crank the handle, and we get more numbers out. The outputs might represent predictions for observations or experiments that we can then design and carry out. Or they might be useful in making a forecast, designing a new electronic device, building a skyscraper, or planning a town’s electricity network. Used in this way, our principal concerns rest with the inputs and the outputs, and we might not need to think too much about what the theory actually says. Provided we can trust its accuracy and precision, we can quite happily use the theory as a ‘black box’, as an instrument.
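  To make the ‘black box’ idea concrete, here is a minimal sketch in Python (the scenario and every number are my own illustration, not the book’s): Newtonian mechanics is used purely as an instrument, numbers in and numbers out, with no interpretation required.

```python
# A toy illustration of using a theory as a 'black box': numbers in,
# numbers out, with no need to ask what 'force' or 'mass' really are.
# The scenario and all values are illustrative, not from the book.

import math

G = 9.81  # gravitational acceleration, m/s^2


def projectile_range(speed: float, angle_deg: float) -> float:
    """Horizontal range (m) of a projectile launched at `speed` (m/s),
    neglecting air resistance."""
    angle = math.radians(angle_deg)
    flight_time = 2 * speed * math.sin(angle) / G  # time until it lands
    return speed * math.cos(angle) * flight_time


# We simply trust the output, treating the theory as an instrument.
print(f"Range: {projectile_range(30.0, 45.0):.1f} m")  # ~91.7 m
```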

  The second purpose is concerned with how the theory should be interpreted. The equations are expressed using a collection of concepts represented by sometimes rather abstract symbols. These concepts and symbols may represent the properties and behaviours of invisible entities such as electrons, and the strengths of forces that act on them or that they produce or carry. Most likely, the equations are structured such that everything we’re interested in takes place within a three-dimensional space and in a time interval stretching from then until now, or from now until sometime in the future. The interpretation of these symbols then tells us something meaningful about the things we find in nature. This is no longer about our ability to use the theory; it is about how the theory informs our understanding of the world.

  You might think I’m labouring this point. After all, isn’t it rather obvious how a physical theory should be interpreted? What’s the big deal? If we’re prepared to accept the existence of objective reality (Realist Proposition #1), and the reality of invisible entities such as photons and electrons (Realist Proposition #2), it surely doesn’t require a great leap of imagination to accept:

  Realist Proposition #3: The base concepts appearing in scientific theories represent the real properties and behaviours of real physical things.

  By ‘base concepts’ I mean the familiar terms we use to describe the properties and behaviours of the objects of our theories. These are concepts such as mass, momentum, energy, spin, and electric charge, with events unfolding in space and time. It’s important to distinguish these from other, more abstract mathematical constructions that scientists use to manipulate the base concepts in order to perform certain types of calculations. For example, in classical mechanics it is possible to represent the complex motions of a large collection of objects more conveniently as the motion of a single point in something called configuration space or phase space. Nobody is suggesting that such abstractions should be interpreted realistically, just the base concepts that underpin them. We’ll soon see that arguments about the interpretation of quantum mechanics essentially hinge on the interpretation of the wavefunction. Is the wavefunction a base concept, with real properties and real behaviours? Or is it an abstract construction?
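  For readers who like to see the abstraction spelled out, here is a minimal sketch (with invented particle positions) of the configuration-space idea mentioned above: the state of several objects in ordinary three-dimensional space is repackaged as a single point in a higher-dimensional space.

```python
# A minimal sketch of the abstraction described above: the positions of
# N particles in three dimensions are packed into one point in a
# 3N-dimensional configuration space. The numbers are illustrative.

import numpy as np

# Positions of four particles in ordinary 3D space (coordinates made up)
positions = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.5, 0.2],
    [2.0, 1.0, 0.4],
    [3.0, 1.5, 0.6],
])

# One point in a 12-dimensional configuration space now represents the
# complete configuration of all four particles at once.
configuration_point = positions.flatten()
print(configuration_point.shape)  # (12,)
```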

  Proposition #3 appears straightforward, but then I feel obliged to point out that the conjunction of everyday experience and familiarity with classical mechanics has blinded us to the difference between the physical world and the ways in which we choose to represent it.

  Let’s use Newton’s second law of motion as a case in point. We know this law today by the rather simple equation:

  force = mass × acceleration
  This says that accelerated motion will result if we apply a force (or an ‘action’ of some kind) on an object with a certain mass. Now, whilst it is certainly true to say that the notion of mechanical force still has much relevance today, as I explained in Chapter 1 the attentions of eighteenth- and nineteenth-century physicists switched from force to energy as the more fundamental concept. My foot connects with a stone, this action impressing a force on the stone. But a better way of thinking about this is to see the action as transferring energy to the stone.
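  As a rough numerical sketch of this shift in perspective (all values invented for illustration), the same kick can be described in the force picture, via Newton’s second law, or in the energy picture, via the work done on the stone:

```python
# The same kick, described two ways. Illustrative numbers only.

mass = 0.5        # kg, the stone
force = 10.0      # N, applied by the foot
distance = 0.1    # m, over which the foot stays in contact

# Force picture (Newton's second law): acceleration while in contact
acceleration = force / mass            # a = F/m = 20 m/s^2

# Energy picture: the same action seen as a transfer of energy.
# Work done W = F * d becomes the stone's kinetic energy (1/2 m v^2).
work = force * distance                # 1.0 J transferred to the stone
speed = (2 * work / mass) ** 0.5       # v = sqrt(2W/m) = 2.0 m/s

print(acceleration, work, speed)
```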

  The term ‘energy’ was first introduced in the early nineteenth century and it gradually became clear that this is a conserved quantity—energy can be neither created nor destroyed and is simply moved around a physical system, shifting from one object to another or converting from one form to another. It was realized that kinetic energy—the energy associated with motion—is not in itself conserved. Physicists recognized that a system might also possess potential energy by virtue of its physical characteristics and situation. Once this was understood it became possible to formulate a law of conservation of the total energy—kinetic plus potential—and this was achieved largely through the efforts of physicists concerned with the principles of thermodynamics.
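  In standard notation (my gloss, not an equation quoted from the book), the conservation law reads:

```latex
% Conservation of total energy: kinetic plus potential stays constant
% as the system evolves (standard physics notation, my summary).
E_{\text{total}} = \underbrace{\tfrac{1}{2}mv^{2}}_{\text{kinetic}}
                 + \underbrace{V(x)}_{\text{potential}},
\qquad
\frac{dE_{\text{total}}}{dt} = 0 .
```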

  Today, we replace Newton’s force with the rate of change of potential energy in space. Think about it this way. Sisyphus, the king of Ephyra, is condemned for all eternity to push an enormous boulder to the top of a hill, only for it to roll back to the bottom (Figure 8). As he pushes the boulder upwards, he expends kinetic energy on the way, determined by the mass of the boulder and the speed with which he rolls or carries it. If we neglect any losses due to friction, all this kinetic energy is transferred into potential energy, held by the boulder, perched at the top. This potential energy is represented by the way the hill slopes downwards. As the boulder rolls back down the slope, the potential energy is converted back into the kinetic energy of motion. For a given mass, the steeper the slope (the greater the force), the greater the resulting acceleration.

  Figure 8 The myth of Sisyphus (painting by Titian, 1549) demonstrates the relationship between kinetic and potential energy.
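  A short numerical sketch of the story (hill height, boulder mass, and slope angles all invented for illustration) captures both halves of the argument: the stored potential energy fixes the boulder’s speed at the bottom, and the steepness of the slope fixes its acceleration.

```python
# Sisyphus's boulder, in numbers (all values illustrative).
# Frictionless slope: potential energy at the top converts back into
# kinetic energy at the bottom, and the steeper the slope, the greater
# the acceleration.

import math

G = 9.81          # m/s^2
mass = 500.0      # kg, the boulder
height = 20.0     # m, height of the hill

# Energy stored at the top, released on the way down
potential_energy = mass * G * height            # ~98,100 J
speed_at_bottom = math.sqrt(2 * G * height)     # ~19.8 m/s; mass cancels

# Acceleration down a frictionless incline depends on its steepness
for angle_deg in (10, 30, 60):
    a = G * math.sin(math.radians(angle_deg))
    print(f"{angle_deg:2d} degrees: a = {a:.2f} m/s^2")
```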

  With this in mind, why would we hesitate, even for an instant, to accept Realist Proposition #3? Sisyphus is a figure from Greek mythology, but there are many real hills, and many real boulders, and we don’t doubt what will happen as the boulder rolls down. Acceleration is something we’ve experienced many thousands of times—there is no doubting its reality.

  But force = mass × acceleration is an equation of a classical scientific theory, and we must remember that it is impossible to do science without metaphysics, without assuming some things for which we can’t contrive any evidence. And, lest we forget, remember that to apply this equation we must also do physics in a box.

  The first thing we can acknowledge is that the slope of the hill represents the rate of change of potential energy in space. Acceleration is the rate of change of velocity with time. And, for that matter, velocity itself is the rate of change of the boulder’s position in space with time. That Newton’s second law requires space and time should come as no real surprise—it’s about motion, after all.
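  Put into standard calculus notation (my summary, using the usual symbols rather than anything quoted from the book), this chain of ‘rates of change’ reads:

```latex
% Velocity, acceleration, and force as rates of change:
v = \frac{dx}{dt}, \qquad
a = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}, \qquad
F = -\frac{dV}{dx}
% Combining these with F = ma gives the second law in 'energy' form:
m\,\frac{d^{2}x}{dt^{2}} = -\frac{dV}{dx}
```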

  But, as I explained in Chapter 2, Newton’s absolute space and time are entirely metaphysical. Despite superficial appearances, we only ever perceive objects to be moving towards or away from each other, changing their relative positions. This is relative motion, occurring in a space and time that are in principle defined only by the relationships between the objects themselves. Newton’s arch-rival, the philosopher Gottfried Wilhelm Leibniz, argued: ‘the fiction of a finite material universe, the whole of which moves about in an infinite empty space, cannot be admitted. It is altogether unreasonable and impracticable.’15

  Newton understood very well what he was getting himself into. So why, then, did he insist on a system of absolute space and time? Because by adopting this metaphysical preconception he found that he could formulate some very highly successful laws of motion. Success breeds a certain degree of comfort, and a willingness to suspend disbelief in the grand but sometimes rather questionable foundations on which theoretical descriptions are constructed.

  Then there’s the question of Newton’s definition of mass. Here it is: ‘The quantity of matter is the measure of the same, arising from its density and bulk conjunctly…. It is this that I mean hereafter everywhere under the name body or mass.’16 If we interpret Newton’s use of the term ‘bulk’ to mean volume, then the mass of an object is simply its density (say in grams per cubic centimetre) multiplied by its volume (in cubic centimetres). It doesn’t take long to figure out that this definition is entirely circular, as Mach pointed out many years later: ‘As we can only define density as the mass of a unit of volume, the circle is manifest.’17

  I don’t want to alarm you unduly, but no matter how real the concept of mass might seem, the truth is that we’ve never really understood it. Einstein messed things up considerably with his famous equation E = mc², which is more deeply meaningful when written as m = E/c²: ‘The mass of a body is a measure of its energy content.’18 In the standard model of particle physics, elementary particles such as electrons are assumed to ‘gain mass’ through their interactions with the Higgs field (this is the Higgs mechanism). The masses of protons and neutrons (and hence the masses of all the atoms in your body) are actually derived in large part from the energy of the colour force (carried by gluons) that binds the up and down quarks inside them.19
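  To give m = E/c² some numerical feel, here is a back-of-envelope illustration using textbook constants (the scenarios are mine, not the book’s):

```python
# Einstein's m = E/c^2 in numbers (illustrative calculation with
# standard textbook values, not figures from the book).

C = 2.998e8  # speed of light, m/s

# One kilowatt-hour of energy, expressed as mass
energy_joules = 3.6e6                 # 1 kWh in joules
mass_kg = energy_joules / C**2        # ~4e-11 kg: everyday energies
print(f"{mass_kg:.2e} kg")            # carry almost no mass at all

# Conversely, a 1 kg mass is a measure of an enormous energy content
print(f"{1.0 * C**2:.2e} J")          # ~9e16 joules
```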

  See what I mean? If an equation as simple and familiar as force = mass × acceleration is rife with conceptual problems and difficulties of interpretation, why would we assume that we can understand anything at all?

  Once again we should remember that we don’t actually have to understand it completely in order to use it. We know that it works (within the limits of its applicability) and we can certainly use it to calculate lots of really useful things. But the second law of motion is much more than a black box. There is a sense in which it provides genuine understanding, even though we might be unsure about the meaning of some of its principal concepts.

 
