Farewell to Reality
‘What is real?’ asked the character Morpheus in the 1999 Hollywood blockbuster movie The Matrix. ‘How do you define real? If you’re talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.’2
These days we tend not to look for profundity in a Hollywood movie,* but it’s worth pausing for a moment to reflect on this observation. I want to persuade you that reality is like liquid mercury: no matter how hard you try, you can never nail it down. I propose to explain why this is by reference to three ‘everyday’ things: a red rose, a bat and a dark cave.
So, imagine a red rose, lying on an expanse of pure white silk. We might regard the rose as a thing of beauty, its redness stark against the silk sheen of brilliant nothingness. What, then, creates this vision, this evocative image, this tantalizing reality? More specifically, what in reality creates this wonderful experience of the colour red?
That’s easy. We google ‘red rose pigment’ and discover that roses are red because their petals contain a subtle mixture of chemicals called anthocyanins, their colour enhanced if grown in soil of modest acidity. So, anthocyanins in the rose petals interact with sunlight, absorbing certain wavelengths of the light and reflecting predominantly red light into our eyes. We look at the petals and we see red. This all seems quite straightforward.
But hang on. What, precisely, is ‘red light’? Our instinct might be to give a scientific answer. Red light is electromagnetic radiation with wavelengths between about 620 and 750 billionths of a metre. It sits at the long-wavelength end of the visible spectrum, sandwiched between orange and the invisible infrared.
But light is, well, light. It consists of tiny particles of energy which we call photons. And no matter how hard we look, we will not find an inherent property of ‘redness’ in photons with this range of wavelengths. Aside from differences in wavelength, there is nothing in the physical properties of photons to distinguish red from green or any other colour.
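The point can be made concrete with a little arithmetic. In this sketch (the specific wavelengths are chosen purely for illustration), the only physical parameter distinguishing a ‘red’ photon from a ‘green’ one is its wavelength, or equivalently its energy, given by Planck’s relation E = hc/λ:

```python
# Illustrative sketch: the only physical difference between a 'red' and a
# 'green' photon is its wavelength (equivalently, its energy E = h*c/lambda).
H = 6.626e-34   # Planck's constant, in joule-seconds
C = 2.998e8     # speed of light, in metres per second

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron volts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # 1 eV = 1.602e-19 joules

for name, nm in [("red", 650), ("green", 530)]:
    print(f"{name:>5} photon, {nm} nm: {photon_energy_ev(nm):.2f} eV")
```

Nothing in these numbers contains ‘redness’. A 650-nanometre photon carries a little less energy than a 530-nanometre one, and that is the whole of the physical difference; the colours are labels we attach after the fact.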
We can keep going. We can trace the chemical and physical changes that result from the interactions of photons with cone cells in your retina all the way to the stimulation of your visual cortex at the back of your brain. Look all you like, but you will not find the experience of the colour red in any of this chemistry and physics. It is only when your conscious mind synthesizes the information processed by your visual cortex that you experience the sensation of a beautiful red rose. And this is the point that Morpheus was making.
We could invent some equivalent scenarios for all of our other human senses — taste, smell, touch and hearing. But we would come to much the same conclusion. What you take to be your reality is just electrical signals interpreted by your brain.
What is it like to be a bat?
Now, you might be ready to dismiss all this as just so much juvenile philosophizing. Of course we’re all reliant on the way our minds process the information delivered to us by our senses. But does it make any sense at all for the human mind to have evolved processes that represent reality differently from how it really is? Surely what we experience and the way we experience it must correspond to whatever it is that’s ‘out there’ in reality? Otherwise how could we survive?
To answer these questions, it helps to imagine what it might be like to be a bat.
What does the world look like — what passes for reality — from a bat’s point of view? We know that bats compensate for their poor night vision by using sophisticated sonar, or echolocation. Bats emit high-frequency sounds, most of them way above the threshold of human perception. These sound waves bounce off objects around them, forming echoes which they then detect.
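The geometry behind echolocation is simple enough to sketch, even if the bat’s processing of it is not. Assuming sound travelling through air at roughly 343 metres per second, the round-trip delay of an echo fixes the distance to the object:

```python
# Illustrative sketch of the arithmetic behind echolocation: the echo's
# round-trip delay gives the distance to the object. The pulse travels
# out and back, hence the division by two.
SPEED_OF_SOUND = 343.0  # metres per second, in air at about 20 C

def echo_distance(delay_s):
    """Distance to an object given the round-trip echo delay in seconds."""
    return SPEED_OF_SOUND * delay_s / 2.0

# An echo returning after 10 milliseconds puts the object about 1.7 m away.
print(f"{echo_distance(0.010):.1f} m")
```

The calculation is trivial; what we cannot sketch is what it is *like* for the bat to experience the answer.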
Human beings do not use echolocation to gather information about the world. We cannot possibly imagine what it’s like for a bat to be a bat because we lack the bat’s sensory apparatus, in much the same way that we cannot begin to describe colours to someone who has been blind from birth.
But the bat is a highly evolved mammal, successful in its own ecological niche. Just because I can’t understand what reality might be like for a bat doesn’t mean that the bat’s perceptions and experiences of that reality are any less legitimate than mine.
What this suggests is that evolutionary selection pressures lead to the development of a sensory apparatus that delivers a finely tuned representation of reality. All that matters is that this is a representation that lends a creature survival advantages. There is no evolutionary selection pressure to develop a mind to represent reality as it really is.
Plato’s allegory of the cave
So, what do we perceive if not reality as it really is? In The Republic, the ancient Greek philosopher Plato used an allegory to describe the situation we find ourselves in. This is his famous allegory of the cave.
Imagine you are a prisoner in a dark cave. You have been a prisoner all your life, shackled to a wall. You have never experienced the world outside the cave. You have never seen sunlight. In fact, you have no knowledge of a world outside your immediate environment and are not even aware that you are a prisoner, or that you are being held in a cave.
It is dark in the cave, but you can nevertheless see men and women passing along the wall in front of you, carrying all sorts of vessels, and statues and figures of animals. Some are talking. As far as you are concerned, the cave and the men and women you can see constitute your reality. This is all you have ever known.
Unknown to you, however, there is a fire constantly burning at the back of the cave, filling it with a dim light. The men and women you can see against the wall are in fact merely shadows cast by real people passing in front of the fire. The world you perceive is a world of crude appearances of objects which you have mistaken for the objects themselves.
Plato’s allegory was intended to show that whilst our reality is derived from ‘things-in-themselves’ — the real people that walk in front of the fire — we can only ever perceive ‘things-as-they-appear’ — the shadows they cast on the cave wall. We can never perceive reality for what it is; we can only ever perceive the shadows. ‘Esse est percipi’, declared the eighteenth-century Irish philosopher George Berkeley: to be is to be perceived.
These kinds of arguments appear to link our ability to gain knowledge of our external reality firmly with the workings of the human mind. A disconnect arises because of the apparent unbridgeable distance between the physical world of things and the ways in which our perception of this shapes our mental world of thoughts, images and ideas. This disconnect may arise because we lack a rigorous understanding of how the mind works. But knowing how the mind works wouldn’t change the simple fact that thoughts are very different from things.
Veiled reality
It doesn’t end here. Another disconnect, of a very different kind but no less profound, is that between the quantum world of atomic and subatomic dimensions and the classical world of everyday experience. What we will discover is that our anxiety over the relationship between reality and perception is extended to that between reality and measurement.
Irrespective of what thoughts we think and how we think them, we find that we can no longer assume that what we measure necessarily reflects reality as it really is. We discover that there is also a difference between ‘things-in-themselves’ and ‘things-as-they-are-measured’.
The contemporary physicist and philosopher Bernard d’Espagnat called it ‘veiled reality’, and commented that:
… we must conclude that physical realism is an ‘ideal’ from which we remain distant. Indeed, a comparison with conditions that ruled in the past suggests that we are a great deal more distant from it than our predecessors thought they were a century ago.3
At this point the pragmatists among us shrug their shoulders and declare: ‘So what?’ I can never be sure that the world as I perceive or measure it is really how the world is ‘in reality’, but this doesn’t stop me from making observations, doing experiments and forming theories about it. I can still establish facts about the shadows — the projections of reality into our world of perception and measurement — and I can compare these with similar facts derived by others. If these facts agree, then surely we have learned something about the nature of the reality that lies beneath the shadows. We can still determine that if we do this, then that will happen.
Just because I can’t perceive or measure reality as it really is doesn’t mean that reality has ceased to exist. As American science-fiction writer Philip K. Dick once observed: ‘Reality is that which, when you stop believing in it, doesn’t go away.’4
And this is indeed the bargain we make. Although we don’t always openly acknowledge it upfront, ‘reality-in-itself’ is a metaphysical concept. The reality that we attempt to study is inherently an empirical reality deduced from our studies of the shadows. It is the reality of observation, measurement and perception, of things-as-they-appear and of things-as-they-are-measured. As German physicist Werner Heisenberg once claimed: ‘… we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning’.5
But this isn’t enough, is it? We may have undermined our own confidence that there is anything we can ever know about reality-in-itself, but we must still have some rules. Whatever reality-in-itself is really like, we know that it must exist. What’s more, it must surely exist independently of perception or measurement. We expect that the shadows would continue to be cast whether or not there were any prisoners in the cave to observe them.
We might also agree that, whatever reality is, it does seem to be rational and predictable, within recognized limits. Reality appears to be logically consistent. The shadows that we perceive and measure are not completely independent of the things-in-themselves that cause them. Even though we can never have knowledge of the things-in-themselves, we can assume that the properties and behaviour of the shadows they cast are somehow determined by the things that cast them.
That feels better. It’s good to establish a few rules. But don’t look too closely. If you want some assurance that there are good, solid scientific reasons for believing in the existence of an independent reality, a reality that is logical and structured, for which our cause-and-effect assumptions are valid, then you’re likely to be disappointed. To repeat one last time, reality is a metaphysical concept — it lies beyond the grasp of science. When we adopt specific beliefs about reality, what we are actually doing is adopting a specific philosophical position.
If we accept the rules as outlined above, then we’re declaring ourselves as scientific realists. We’re in good company. Einstein was a realist, and when asked to justify this position he replied: ‘I have no better expression than the term “religious” for this trust in the rational character of reality and in its being accessible, to some extent, to human reason.’6
Now, it’s one thing to be confident about the existence of an independent reality, but it’s quite another to be confident about the existence of overtly theoretical entities that we might want to believe to exist in some shape or form within this reality. When we invoke entities that we can’t directly perceive, such as photons or electrons, we learn to appreciate that we can’t know anything of these entities as things-in-themselves. We may nevertheless choose to assume that they exist. I can find no better argument for such ‘entity realism’ than a famous quote from philosopher Ian Hacking’s book Representing and Intervening. In an early passage in this book, Hacking explains the details of a series of experiments designed to discover if it is possible to reveal the fractional electric charges characteristic of ‘free’ quarks.* The experiments involved studying the flow of electric charge across the surface of balls of superconducting niobium:
Now how does one alter the charge on the niobium ball? ‘Well, at that stage,’ said my friend, ‘we spray it with positrons to increase the charge or with electrons to decrease the charge.’ From that day forth I’ve been a scientific realist. So far as I’m concerned, if you can spray them then they are real.7
This brings us to our first principle.
The Reality Principle. Reality is a metaphysical concept, and as such it is beyond the reach of science. Reality consists of things-in-themselves of which we can never hope to gain knowledge. Instead, we have to content ourselves with knowledge of empirical reality, of things-as-they-appear or things-as-they-are-measured. Nevertheless, scientific realists assume that reality (and its entities) exists objectively and independently of perception or measurement. They believe that reality is rational, predictable and accessible to human reason.
Having established what we can and can’t know about reality, it’s time to turn our attention properly to science.
The scientific method
In 2009, Britain’s Science Council announced that after a year of deliberations, it had come up with a definition of science, perhaps the first such definition ever published: ‘Science is the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.’8
Given that any simple definition of science is likely to leave much more unsaid than it actually says, I don’t think this is a bad attempt. It all seems perfectly reasonable. There’s just the small matter of the ‘systematic methodology’, the cold, hard, inhuman, unemotional logic engine that is supposed to lie at the very heart of science. A logic that we might associate with Star Trek’s Spock.
The ‘scientific method’ has at least three components. The first concerns the processes or methodologies that scientists use to establish the hard facts about empirical reality. The second concerns methods that scientists use to create abstract theories to accommodate and explain these facts and make testable predictions. The third concerns the methods by which those theories are tested and accepted as true or rejected as false. Let’s take a look at each of these in turn.
Getting at the facts
The first component seems reasonably straightforward and should not detain us unduly. Scientists pride themselves on their detachment and rigour. They are constantly on the lookout for false positives, systematic errors, sample contamination, anything that might mislead them into reporting empirical facts about the world that are later shown to be wrong.
But scientists are human. They are often selective with their data, choosing to ignore inconvenient facts that don’t fit, through the application of a range of approaches that, depending on the circumstances, we might forgive as good judgement or condemn as downright fraud. They make mistakes. Sometimes, driven by greed or venal ambition, they might cheat or lie.
There is no equivalent of a Hippocratic oath for scientists, no verbal or written covenant to commit them to a system of ethics and work solely for the benefit of humankind. Nevertheless, ethical behaviour is deeply woven into the fabric of the scientist’s culture. And the emphasis on repetition, verification and critical analysis of scientific data means that any mistakes or wrongdoing will be quickly found out.
Here’s a relevant example from contemporary high-energy physics. The search for the Higgs boson at CERN’s Large Hadron Collider has involved the detection and analysis of the debris from trillions upon trillions of protons colliding with each other at energies of seven and, most recently, eight trillion electron volts.* If the Higgs boson exists, then one of the many ways in which it can decay involves the production of two high-energy photons, a process written as H → γγ, where H represents the Higgs boson and the Greek symbol γ (gamma) represents a photon.
About three thousand physicists have been involved in each of two detector collaborations searching for the Higgs, called ATLAS and CMS.** One of their tasks is to sift through the data and identify instances where the proton—proton collisions have resulted in the production of two high-energy photons. They narrow down the search by looking for photons emitted in specific directions with specific energies. Even so, finding the photons can’t be taken as evidence that they come from a Higgs boson, as theory predicts that there are many other ways in which such photons can be produced.
The physicists therefore have to use theory to calculate the ‘background’ events that contribute to the signal coming from the two photons. If this can be done reliably, and if any systematic errors in the detectors themselves can be estimated or eliminated, then any significant excess events can be taken as evidence for the Higgs.
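The logic of that last step can be sketched in a toy calculation. The event counts below are invented, and the real ATLAS and CMS analyses are vastly more elaborate, but the core idea is this: treat the predicted background count as subject to Poisson fluctuations, and measure any excess in units of the background’s own statistical spread:

```python
# Toy sketch (not the actual ATLAS/CMS analysis; the numbers are invented):
# is the observed photon-pair count significantly above the predicted
# background? For a Poisson-distributed background of mean B, the typical
# fluctuation is sqrt(B), so we quote the excess in those units ('sigma').
import math

def excess_significance(observed, expected_background):
    """Crude significance of an excess over a Poisson background,
    in standard deviations."""
    return (observed - expected_background) / math.sqrt(expected_background)

background = 10000.0   # two-photon events predicted from known processes
observed = 10250       # events actually counted in the data

print(f"excess: {excess_significance(observed, background):.1f} sigma")
```

A 2.5-sigma excess like the one in this made-up example would be intriguing but nowhere near conclusive; particle physicists conventionally demand five sigma before claiming a discovery.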
On 21 April 2011, an internal discussion note from within the ATLAS collaboration was leaked to a high-energy physics blogger. The note suggested that clear evidence for a Higgs boson had been found in the H → γγ decay channel, with a signal thirty times greater than predicted.
If this was true, it was fantastic, if puzzling, news. But it wasn’t true. The purpose of internal discussion notes such as this is to allow the exchange of data and analysis within the collaboration before a collective, considered view is made public. It was unfortunate that the note had been leaked. Within just a few weeks, ATLAS released an official update based on the analysis of twice as much collision data as the original note, work that no doubt demanded many more sleepless nights for those involved. There was no excess of events. No Higgs boson — yet.*
As ATLAS physicist Jon Butterworth subsequently explained:
Retaining a detached scientific approach is sometimes difficult. And if we can’t always keep clear heads ourselves, it’s not surprising people outside get excited too. This is why we have internal scrutiny, separate teams working on the same analysis, external peer review, repeat experiments, and so on.9
This was a rare example in which the public got to see the way science self-regulates, how it uses checks and balances in an attempt to ensure that it gets its facts right. Scientists don’t really like us looking over their shoulders in this way, as they fear that if we really knew what went on, this would somehow undermine their credibility and authority.
I take a different view. The knowledge that science can be profoundly messy on occasion simply makes it more human and accessible; more Kirk than Spock. Knowing what can go wrong helps us to appreciate that when it does go seriously wrong, this is usually an exception, rather than the rule.