Quantum Reality


by Jim Baggott


  † I’ll be making frequent references to these propositions throughout the rest of the book. To save you the trouble of looking back here to remind you what they are, I’ve collected them all together in a handy appendix at the back of this book.

  * See http://www.rigb.org/blog/2013/march/faraday-appointment

  * Although philosophers of science will have some things to say about the design and building of the ship, how it might be captained, its instruments and maritime charts, and the nature of the journey back and forth.

  3

  Sailing on the Sea of Representation

  How Scientific Theories Work (and Sometimes Don’t)

  We all tend to use the word ‘theory’ rather loosely. I have a theory about the reasons why British citizens voted by a narrow margin to leave the European Union in the referendum that was held in June 2016. I also have a theory about Donald Trump’s outrageous success in the race to become the 45th President of the United States later that year. We can all agree that no matter how well reasoned they might be, these are ‘just theories’.

  But successful scientific theories are much more than this. They appear to tell us something deeply meaningful about how nature works. Theories such as Newton’s system of mechanics, Darwin’s theory of evolution, Einstein’s special and general theories of relativity, and, of course, quantum mechanics are broadly accepted as contingently ‘true’ representations of reality and form the foundations on which we seek to understand how the Universe came to be and how we come to find ourselves here, able to theorize about it. Much of what we take for granted in our complex, Western scientific-technical culture depends on the reliable application of a number of scientific theories. We have good reasons to believe in them.

  In a recent New York Times online article, cell biologist Kenneth R. Miller explained that a scientific theory ‘doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.’1

  This is all well and good, but how does that happen? Where does a scientific theory come from and how is it shaped through the confrontation of metaphysical preconceptions with the empirical facts? How does it gain acceptance and trust in the scientific community? And, most importantly for our purposes here, how should we interpret what the theory represents?

  In other words, what exactly is involved in sailing the Ship of Science across the Sea of Representation?

  Your first instinct might be to reach for a fairly conventional understanding of what’s generally referred to as the ‘scientific method’. You might think that scientists start out by gathering lots of empirical data, and then they look for patterns. The patterns might suggest the possibility that there is a cause-and-effect relationship or even a law of nature sitting beneath the surface which guides or governs what we observe.

  Drawing on our metaphor, the scientists arm themselves with a cargo of empirical data and set sail for the shores of Metaphysical Reality. Here they collect those preconceptions about reality that are relevant to the data they want to understand, perhaps involving familiar concepts, such as space and time, and familiar (but invisible) entities such as photons or electrons, and what we already think we know about their behaviours and their properties.

  If there are patterns in the physical data, the scientists will typically use mathematics to assemble their preconceptions into a formal theoretical structure, involving space and time, mass and energy, and maybe further properties such as charge, spin, flavour, or colour. To be worthy of consideration, the new structure must provide the scientists with the connections they need. The theory will say that when we do this, we get that pattern, and this is consistent with what is observed. The scientists then go further. Trusting the veracity of the structure, they figure out that when instead we choose to do that, then something we’ve never before observed (or thought to look for) should happen. They sail back across the Sea to the shores of Empirical Reality to make more observations or do some more experiments. When this something is indeed observed to happen, the theory gains greater credibility and acceptance.

  Credit for this version of the scientific method, based on the principle of induction, is usually given to Francis Bacon who, after an illustrious career (and public disgrace), died of pneumonia caught whilst trying to preserve a chicken by stuffing it with snow. Now, we might feel pretty comfortable with this version of the scientific method, which remained unquestioned for several hundred years, but we should probably acknowledge that science doesn’t actually work this way.

  If the eternal, immutable laws of nature are to be built through inductive inference, substantiated or suitably modified as a result of experiment or observation, then, philosophers of the early twentieth century argued, we must accept that these laws can never be certain.

  Suppose we make a long series of observations on ravens all around the world. We observe that all ravens are black. We make use of induction to generalize this pattern into a ‘law of black ravens’.* This would lead us to predict that the next raven observed (and any number of ravens that might be observed in the future) should also be black.

  But then we have to admit that no matter how many observations we make, there would always remain the possibility of observing an exception, a raven of a different colour, contradicting the law and necessitating its abandonment or revision. The probability of finding a non-black raven might be considered vanishingly small, but it could never be regarded as zero, and so we could never be certain that the law of black ravens would hold for all future observations. This is a conclusion reinforced by the experience of European explorers—who might well have formulated a similar ‘law of white swans’—until they observed black swans (Cygnus atratus) on the lakes and rivers of Australia and New Zealand.

  The philosopher Bertrand Russell put it this way: ‘The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.’2

  Karl Popper argued that these problems are insurmountable, and so rejected induction altogether as a basis on which to build a scientific method. In two key works, The Logic of Scientific Discovery (first published in German in 1934) and Conjectures and Refutations (published in 1963), he argued that science instead proceeds through the invention of creative hypotheses, from which empirical consequences are then deduced. Conscious of some outstanding problems or gaps in our understanding of nature, scientists start out not with lots of empirical data, but by paying a visit to the shores of Metaphysical Reality. They conjure some bold (and, sometimes, outrageous) ideas for how nature might be, and then deduce the consequences on the voyage back across the Sea of Representation. The result is a formal theory.

  What then happens when we reach the shores of Empirical Reality? This is easy: the theory is tested. It is exposed to the hard, brutal, and unforgiving facts. If the theory is not falsified by the data, by even just a single instance (observation of a single black swan will falsify the ‘law of white swans’), then Popper argued that it remains a relevant and useful scientific theory.

  We can go a bit further than Popper and suggest that if the test is based on existing data, on empirical facts we already know, then the theory will be tentatively accepted if it provides a better explanation of these facts than any available alternative. Better still, if the theory makes predictions that can be tested, and which are then upheld by data from new observations or experiments, then the theory is likely to be more widely embraced.*

  I honestly doubt that many practising scientists would want to take issue with any of this. I could provide you with many, many examples from the history of science which show that this is, more or less, how it has worked out. This isn’t a book about history, however, so I’ll restrict myself to just one very pertinent example.

  We saw in Chapter 1 that the real quantum revolutionary was Einstein, who published his light-quantum hypothesis in his ‘miracle year’ of 1905. It’s important to note that Einstein didn’t arrive at this hypothesis by induction from any available data. He simply perceived a problem with the way that science was coming to understand matter—in terms of discrete atoms and molecules—and electromagnetic radiation, which was understood exclusively in terms of continuous waves. The prevailing scientific description of matter and light didn’t fit with Einstein’s metaphysical preconceptions about how nature ought to be. Whilst on his visit to the shores of Metaphysical Reality, he conjectured that Planck’s conclusions should be interpreted as though light itself is composed of discrete quanta.

  Einstein then sailed across the sea, representing his light-quanta in a theory that predicted some consequences for the photoelectric effect. The rest, as they say, is history.
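
  To make that prediction concrete (the equation isn’t quoted here, but this is the standard form of Einstein’s photoelectric relation): if each light-quantum carries energy hν, where h is Planck’s constant and ν is the frequency of the light, then the maximum kinetic energy of an electron ejected from a metal surface should be

$$E_{\text{max}} = h\nu - \phi,$$

  where φ is the metal’s ‘work function’, the minimum energy needed to free an electron from that particular surface. The relation is linear in frequency, with a slope fixed by h: a sharp, quantitative prediction that experiments could confirm or falsify.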

  The light-quantum hypothesis passed the test, but as we’ve seen, it still remained controversial (scientists can be very stubborn). Just a few years after the experiments on the photoelectric effect, Arthur Compton and Peter Debye showed that light could be ‘bounced’ off electrons, with a predictable change in the frequency (and hence the energy) of the light. These experiments demonstrated that light does indeed consist of particles moving like small projectiles. Gradually, light-quanta became less controversial and more acceptable.
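
  The predicted change is worth spelling out (the formula isn’t given in the text, but it is the standard Compton result): for light scattered from an electron through an angle θ, the wavelength increases by

$$\Delta\lambda = \frac{h}{m_e c}\,(1 - \cos\theta),$$

  where h/mₑc ≈ 2.43 × 10⁻¹² metres is the Compton wavelength of the electron. A longer wavelength means a lower frequency and hence a lower energy, just what we would expect if a particle-like light-quantum surrenders some of its energy to the recoiling electron.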

  Popper’s take on science is known as the ‘hypothetico-deductive’ method. This is a bit of a clumsy term, but essentially it means that scientists draw on all their metaphysical preconceptions to hypothesize (or ‘make a guess’) about how nature works, and then deduce a formal theory which tells us what we might expect to find in empirical observations or experiments. Science then proceeds through the confrontation between theory and the facts, as the ship sails between the shores. This is not a one-way journey—the ship makes many journeys back and forth, and those relevant metaphysical preconceptions that survive become tightly and inextricably bound into the theory (and, as we’ve seen, into the empirical observations, too). In this way the relevant metaphysics becomes ‘naturalized’ or ‘habitual’, justified through the success of the resulting theory.3

  It’s worth mentioning in passing that the preconceptions, data, and indeed the ship itself are conditioned by their historical and cultural contexts, or perspectives. There are passages in Newton’s Mathematical Principles of Natural Philosophy, published in 1687, that refer to God’s role in keeping the stars apart, a metaphysical preconception that would be unusual in today’s science. ‘Journeys are always perspectival,’ contemporary philosopher Michela Massimi told me in a discussion based on my metaphor, ‘we sail our ship using the only instruments (compass and whatever else) that our current technology, theories, and experimental resources afford. So any back and forth between the shores of Empirical Reality and metaphysical posits is guided and channelled by who we are, and most importantly, by our scientific history.’4 Compare a seventeenth-century tall ship in full sail with a modern ocean liner.

  There is in principle no constraint on the nature of the hypotheses that scientists might come up with during their frequent visits to the shores of Metaphysical Reality. How, you might then ask, is science any different from any other kind of wild speculation? If I propose that some mysterious force of nature governs our daily lives depending on our birth signs, surely we would all agree that this is not science? What if I propose that similia similibus curentur—like cures like—and that when diluted by a factor of 10¹² (or 10⁶⁰), the substances that cause human diseases provide effective cures for them? Is this science? Or is it snake oil? What if I reach for the ultimate metaphysical preconception and theorize that God is the intelligent cause of all life on planet Earth, and all that we erroneously claim to be the result of evolution by natural selection is actually God’s grand design?

  Your instinct might be to dismiss astrology, homeopathy, and intelligent design as pseudoscience, at best. But why? After all, they involve hypotheses based on metaphysics, from which certain theoretical principles are deduced, and they arguably make predictions which can be subjected to empirical test. We can see immediately that, given the fundamental role of metaphysics in scientific theorizing, if we are to draw a line between theories that we regard to be scientific and pseudoscience or pure metaphysics, then we need something more. We need a demarcation criterion.

  The logical positivists proposed to use ‘verification’ to serve this purpose. If a theory is capable in principle of being verified through observational or experimental tests, then it can be considered to be scientific. But the principle of induction was also central to the positivists’ programme and, in rejecting induction, Popper had no alternative but to reject verification as well. Logically, if induction gives no guarantees about the uniformity of nature (as Russell’s chicken can attest), then the continued verification of theories gives none either. Theories tend to be verified until, one day, they’re not.

  As we saw earlier, Popper argued that what distinguishes a scientific theory from pseudoscience and pure metaphysics is the potential for it to be falsified on exposure to the empirical data. In other words, a theory is scientific if it has the potential to be proved wrong.

  This shift is rather subtle, but it is very effective. Astrology makes predictions, but these are intentionally general, and wide open to interpretation. Popper wrote: ‘It is a typical soothsayers’ trick to predict things so vaguely that the predictions can hardly fail: that they become irrefutable.’5 If, when confronted with contrary and potentially falsifying evidence, the astrologer can simply reinterpret the prediction, then this is not scientific. We can find many ways to criticize the premises of homeopathy and dismiss it as pseudoscience, as it has little or no foundation in our current understanding of Western, evidence-based medicine—as a theory it doesn’t stand up to scrutiny. But even if we take it at face value we should admit that it fails all the tests—there is no evidence from clinical trials for the effectiveness of homeopathic remedies beyond a placebo effect. Those who stubbornly argue for its efficacy are not doing science.

  And, no matter how much we might want to believe that God designed all life on Earth, we must accept that intelligent design makes no testable predictions of its own. It is simply a conceptual alternative to evolution as the cause of life’s incredible complexity. Intelligent design cannot be falsified, just as nobody can prove the existence or non-existence of God. Intelligent design is not a scientific theory: it is simply overwhelmed by its metaphysical content.

  Alas, this is still not the whole story. This was perhaps always going to be a little too good to be true. The lessons from history teach us that science is profoundly messier than a simple demarcation criterion can admit. Science is, after all, a fundamentally human endeavour, and humans can be rather unpredictable things. Although there are many examples of falsified or failed scientific theories through history, science doesn’t progress through an endless process of falsification. To take one example: when Newton’s classical mechanics and theory of universal gravitation were used to predict the orbit of Uranus, a planet newly discovered in 1781, the predictions were found to be wrong. But this was not taken as a sign that the structures of classical mechanics and gravitation had failed.

  Remember that it’s actually impossible to do science without metaphysics, without some things we’re obliged to accept at face value without proof. Scientific theories are constructed from abstract mathematical concepts, such as point-particles or gravitating bodies treated as though all their mass is concentrated at their centres. If we think about how Newton’s laws are actually applied to practical situations, such as the calculation of planetary orbits, then we are forced to admit that no application is possible without a whole series of so-called auxiliary assumptions or hypotheses.

  Some of these assumptions are stated, but most are implied. Obviously, if we apply Newton’s mechanics to planets in the Solar System then, among other things, we assume our knowledge of the Solar System is complete and there is no interference from the rest of the Universe. In his recent book Time Reborn, contemporary theorist Lee Smolin wrote: ‘The method of restricting attention to a small part of the universe has enabled the success of physics from the time of Galileo. I call it doing physics in a box.’6

  One of the consequences of doing physics in a box is that when predictions are falsified by the empirical evidence, it’s never clear why. It might be that the theory is false, but it could simply be that one or more of the auxiliary assumptions is invalid. The evidence doesn’t tell us which. This is the Duhem–Quine thesis, named for physicist and philosopher Pierre Duhem and philosopher Willard Van Orman Quine.

  And, indeed, the problem with the orbit of Uranus was traced to one of the auxiliary assumptions. It was solved simply by making the box a little bigger. John Couch Adams and Urbain Le Verrier independently proposed that there was an as-yet unobserved eighth planet in the Solar System that was perturbing the orbit of Uranus. In 1846 Johann Galle discovered the new planet, subsequently called Neptune, less than one degree from its predicted position.

  Emboldened by his success, in 1859 Le Verrier attempted to use the same logic to solve another astronomical problem. The planetary orbits are not exact ellipses. If they were, each planet’s point of closest approach to the Sun (called the perihelion) would be fixed, the planet always passing through the same point in each and every orbit. But astronomical observations had shown that with each orbit the perihelion shifts slightly, or precesses. It was understood that much of the observed precession is caused by the cumulative gravitational pull of all the other planets in the Solar System, effects which can be predicted using Newton’s gravitation.

  But, for the planet Mercury, lying closest to the Sun, this ‘Newtonian precession’ is predicted to be 532 arc-seconds per century.* The observed precession is rather more, about 575 arc-seconds per century, a difference of 43 arc-seconds. Though small, this difference accumulates and is equivalent to one ‘extra’ orbit every three million years or so.
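
  As a quick check on that last figure (my arithmetic, not quoted from the text): a complete orbit corresponds to 360 degrees, or 360 × 3,600 = 1,296,000 arc-seconds, so an anomalous advance of 43 arc-seconds per century accumulates to one full ‘extra’ orbit in

$$\frac{1{,}296{,}000}{43} \approx 30{,}000 \text{ centuries} \approx 3 \text{ million years},$$

  consistent with the estimate quoted above.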

 
