Farewell to Reality
In the last four chapters we have had an opportunity to review the various approaches that theorists have taken in their attempts to resolve some of the more stubborn problems with the current authorized version of reality.
We have seen how attempts to resolve the quantum measurement problem have led to many worlds. We have seen how attempts to push beyond the standard model of particle physics and find rhyme or reason for its twenty experimentally determined parameters have led to SUSY and superstring/M-theory. We have seen how SUSY and various braneworld scenarios in M-theory suggest solutions for the hierarchy problem. In search of potential dark matter candidates we examined the lightest supersymmetric particles predicted by SUSY and the lightest Kaluza-Klein particles, thought to be projected from M-theory’s hidden dimensions. We saw how the multiverse resolves the problem of dark energy and the cosmological constant. In the cosmic landscape, ours is but an ordinary, if not rather mundane, universe among a vast multiplicity of universes.
Despite their speculative or metaphysical nature, in the context in which I’ve presented them so far these contemporary theories of physics still conform to the Copernican Principle. They do not assume an especially privileged role for us as human observers.
Perhaps you’ll then be surprised to learn that one approach to resolving the fine-tuning problem, an approach growing in importance and gaining support within the theoretical physics community, puts human beings firmly back into the equation. It is called the anthropic cosmological principle, where ‘anthropic’ means ‘pertaining to mankind or humans’.
I should say upfront that this is all rather controversial stuff. Many scientists have argued that the anthropic cosmological principle is neither anthropic nor a principle. Others have argued that it is either dangerous metaphysics or completely empty of insight. Either a false hypothesis, deserving of Einstein’s pity, or erroneous and sloppy thinking, deserving of a beating.
I believe that the anthropic cosmological principle is symptomatic of the malaise that has overtaken contemporary theoretical physics. We’d better take a closer look.
The carbon coincidence
In his Discourse on Method, first published in 1637, the French philosopher René Descartes tried to establish an approach to acquiring knowledge of the world by first eschewing all the information delivered to his mind by his senses. He had decided that his senses couldn’t be trusted. He cast around looking to fix on something about which he could be certain.
After some reflection, he decided that that something was his own mind. And, he reasoned, given that he possesses a mind, then in some form or another he must exist. Cogito ergo sum, he declared: I think therefore I am.
Human beings are carbon-based life forms that have evolved certain mental capacities. We are conscious and self-aware, and, like Descartes, we are able to reflect intelligently on the nature of the physical universe we find around us. It seems a statement of the blindingly obvious that, whatever it is and wherever it comes from, the physical universe supports the possibility that we could (and, indeed, do) exist. We exist therefore the universe must be just so. We might adapt Descartes’ famous saying thus: ego sum ergo est — I am therefore it is.
The problem is that as soon as we put human beings (or, at the very least, the possibility of cognitive biological entities) back into the equation in this way, we acquire a perspective that makes the universe look like an extraordinary conspiracy.
One of the most notable examples of a ‘coincidence’ of the kind that betrays conspiracy in the physical mechanics of the universe was identified by the physicist Fred Hoyle in the early 1950s. It concerns the process by which carbon nuclei are produced in the interiors of stars.
The primordial big bang universe (or the steady-state universe favoured at the time by Hoyle) contains only hydrogen and helium and trace amounts of slightly heavier elements such as lithium. In the early 1950s, the relative abundances of heavier elements were therefore something of a mystery. How are these elements formed?
Hoyle supposed that at the high temperatures and pressures that prevail in the centres of stars, the primordial hydrogen and helium would get further ‘cooked’. These light nuclei would fuse together in a series of reactions to form successively heavier nuclei, in a process now called stellar nucleosynthesis.
When two hydrogen nuclei fuse together to form a helium nucleus, two of the four protons transform into neutrons, and energy is released. This energy holds the star up against further gravitational collapse, and the star settles down into a period of relative stability.
As the supply of hydrogen becomes depleted, however, the energy released from such fusion reactions is no longer sufficient to resist the force of gravity. A star with enough mass will blow off its outer layers and its core will shrink. The temperature and pressure in the core will rise, eventually triggering fusion reactions involving helium nuclei.
But at this point we hit a snag. Fusing hydrogen nuclei (one proton) and helium nuclei (two protons and two neutrons) together to make lithium is not energetically possible. A lithium nucleus with three protons and two neutrons is unstable: it needs one or two more neutrons to stabilize it. Fusing two helium nuclei together to make beryllium is similarly impossible — a beryllium nucleus with four protons and four neutrons is likewise unstable. It needs another neutron.
In a state of rising panic, we skip over lithium and beryllium and look to the next element in the periodic table. What about carbon? A carbon nucleus has six protons and six neutrons. This would seem to require fusing together three helium nuclei. This is energetically possible, but the chances of getting three helium nuclei to come together in a simultaneous ‘three-body’ collision are extremely remote. It’s much more feasible to suppose that two helium nuclei first fuse to form an unstable beryllium nucleus, which then in turn fuses with another helium nucleus before it can fall apart. This sounds plausible on energy grounds but the odds don’t look good. The beryllium nucleus tends to fall apart rather too quickly.
Yet here we are, intelligent beings evolved from a rich carbon-based biochemistry. Given that we exist, carbon must somehow be formed in higher abundance, despite the seemingly poor odds.
Hoyle reasoned that the odds must somehow get tipped in favour of carbon formation. He therefore suggested that the carbon nucleus must possess an energetic state that helps greatly to enhance the rate of the reaction between the unstable beryllium nucleus and another helium nucleus, thereby producing carbon faster than the beryllium nucleus can disintegrate. Such an energetic state is called a ‘resonance’. Hoyle estimated that the carbon nucleus must have a resonance at an energy of around 7.7 MeV. It was subsequently discovered at 7.68 MeV. The reaction is called the triple-alpha process.*
This struck Hoyle as remarkable. If the carbon resonance were slightly higher or lower in energy, then carbon would not be formed in sufficient abundance in the interiors of stars. There would therefore be insufficient carbon in the debris flung from those stars that are ultimately destined to explode in spectacular supernovae. The second-generation star systems that formed from this debris would then hold planets with insufficient carbon to allow intelligent, carbon-based life forms to evolve.
Change the energy of the carbon resonance by the slightest amount, and we could not exist. Hoyle wrote:
Would you not say to yourself, ‘Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule.’ Of course you would … A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.2
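The arithmetic behind Hoyle’s prediction can be sketched as a back-of-envelope check. The nuclear mass excesses used below are assumed from standard modern data tables (they are not quoted in the text), but they show why a resonance at roughly 7.7 MeV is exactly where it is needed:

```python
# Rough energetics of the triple-alpha process. Mass excesses in MeV,
# assumed from standard nuclear data tables (not stated in the text).
MASS_EXCESS_HE4 = 2.4249   # helium-4
MASS_EXCESS_BE8 = 4.9416   # beryllium-8 (the unstable intermediate)
MASS_EXCESS_C12 = 0.0      # carbon-12 defines the zero of the scale

# Energy of the separated 8Be + 4He system, relative to the
# carbon-12 ground state:
threshold = MASS_EXCESS_BE8 + MASS_EXCESS_HE4 - MASS_EXCESS_C12
print(f"8Be + 4He threshold above 12C ground state: {threshold:.2f} MeV")

# Hoyle's resonance (measured at about 7.68 MeV) sits only slightly
# above this threshold, which is what makes the capture so efficient.
resonance = 7.68
print(f"Resonance minus threshold: {resonance - threshold:.2f} MeV")
```

Because the resonance lies just a fraction of an MeV above the combined energy of a beryllium-8 nucleus and a helium-4 nucleus, the fusion proceeds fast enough to beat the disintegration of the beryllium.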
The Goldilocks enigma
The carbon coincidence is just the beginning. It occurs because of a delicate balance between the strength of the strong force and the energetics of nuclear reactions involving protons. In Just Six Numbers, British astrophysicist Martin Rees identified a series of six dimensionless physical constants and combinations of constants that determine the nature and structure of the universe we inhabit. Change any one of these numbers by just 1 per cent and, Rees argued, the universe that resulted would be inhospitable to life. If the constants were not so fine-tuned, we could not exist to observe the universe and ponder on its remarkable cosmic coincidences.
These six numbers include ɛ, the fraction of the mass of the four protons that is released as energy when these fuse together to form a helium nucleus inside a star. ɛ determines the amount of energy released by a star like our own sun and the subsequent chain of nuclear reactions responsible for the production of other chemical elements. Like the carbon coincidence, it depends on the strength of the strong force. If too little energy is released, the planetary system orbiting the star remains cold and lifeless. Too much, and the planetary system is hot and lifeless.
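The value of ɛ follows directly from the masses involved. Here is a quick calculation using standard values for the proton and helium-4 nucleus masses (assumed from reference tables, not from the text):

```python
# Fraction of the rest mass of four protons released as energy when
# they fuse into a helium-4 nucleus (Rees's epsilon). Masses in kg,
# assumed from standard reference values.
M_PROTON = 1.67262e-27    # kg
M_HELIUM4 = 6.64466e-27   # kg (helium-4 nucleus, electrons excluded)

epsilon = (4 * M_PROTON - M_HELIUM4) / (4 * M_PROTON)
print(f"epsilon = {epsilon:.4f}")   # about 0.007, i.e. 0.7 per cent
```

Roughly 0.7 per cent of the mass is converted to energy, and it is this figure, set by the strength of the strong force, that Rees argues cannot stray far without sterilizing the universe.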
The set of numbers also includes N, the ratio of the strength of the electromagnetic force to the strength of the force of gravity. This is a very large number (about 10³⁶), and says that the mechanics of the atom are dominated by electromagnetic forces; gravity is irrelevant at this scale. But gravity is cumulative. Gather lots of atoms together and it adds up. N determines the relationship between the behaviour of matter at atomic and subatomic levels and matter at the levels of planets, stars, galaxies and clusters of galaxies. Make the force of gravity just a little bit larger in relation to the force of electromagnetism and the universe would be smaller and would evolve much faster. There would be no time for biology to develop. Make gravity a little weaker and there would be no stars.
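To see where a figure of order 10³⁶ comes from, one can compare the electric and gravitational forces between two protons. The constants below are standard SI values, assumed rather than quoted from the text:

```python
import math

# Ratio of the electric to the gravitational force between two protons.
# Standard SI constants, assumed from reference values.
E_CHARGE = 1.60218e-19    # elementary charge, C
EPSILON_0 = 8.85419e-12   # vacuum permittivity, F/m
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.67262e-27    # proton mass, kg

# Both forces fall off as 1/r^2, so the separation cancels in the ratio.
N = E_CHARGE**2 / (4 * math.pi * EPSILON_0 * G * M_PROTON**2)
print(f"N = {N:.2e}")   # roughly 1.2 x 10^36
```

The separation between the protons drops out because both forces obey an inverse-square law, which is why N is a pure, dimensionless number.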
The density parameter, Ω, is the ratio of the density of mass-energy in the universe to the critical value required for a flat universe. It depends on the balance between gravity and the rate of expansion of the universe. Too much mass-energy and the result is a closed universe which expands a little but then contracts rather quickly: too quickly for life to gain a foothold. Too little mass-energy and the result is an open universe which expands too quickly to support the evolution of galaxies.
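The critical density itself is set by the expansion rate, via the standard relation ρ_c = 3H²/8πG. A rough calculation, taking an assumed round value of 70 km/s/Mpc for the Hubble constant (not a figure given in the text), shows just how empty a flat universe is:

```python
import math

# Critical density of a flat universe: rho_c = 3 H^2 / (8 pi G).
# H0 = 70 km/s/Mpc is an assumed round value for illustration.
G = 6.67430e-11           # m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22      # metres per megaparsec
H0 = 70e3 / MPC_IN_M      # Hubble constant converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density = {rho_crit:.1e} kg/m^3")
# about 9e-27 kg/m^3, equivalent to a few hydrogen atoms per cubic metre
```

Ω is then just the measured mean density divided by this number; a value above 1 closes the universe, a value below 1 leaves it open.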
Likewise, the cosmological constant, Λ, seems fine-tuned for life. Although it was something of a shock to cosmologists in the late 1990s to discover that Λ isn’t precisely zero and that the expansion of the universe is accelerating, the value of the constant is still extremely small. Ridiculously small according to quantum theory. But if it was any larger, the universe would be open and there would be no stars, no galaxies and no life.
Of course, we can trace the large-scale structure of the visible universe (galaxies and clusters of galaxies) right back to the quantum fluctuations that prevailed during the inflationary epoch. These ripples are slight: about one part in 100,000. This variation is captured in the ratio Q, derived from the energy required to break up large galactic clusters or superclusters and the energy of the rest mass of such structures. If Q were smaller, there would be no large-scale structures. Make it larger and the universe would consist only of supermassive black holes.
Rees’ sixth number is the simplest of the set. It is D, the number of spatial dimensions (not including the ‘hidden’ dimensions demanded by superstring/M-theory). There are no one- or two-dimensional complex biological entities, simply because complex biology demands a minimum of three dimensions. And a value of D of 3 is the only number compatible with the inverse-square laws of gravity and electromagnetism.
The paranoia runs deep. Virtually everywhere we turn we’re faced with an apparently phenomenal fine-tuning of the universe’s physical constants and laws. If the weak force were stronger or weaker, then the primordial abundances of hydrogen and helium would have been very different, and subsequent stellar nucleosynthesis would not have produced the ingredients needed to sustain life. If the mass of the neutron wasn’t slightly larger than that of the proton … And so on and on.
In the fairy tale, Goldilocks finds that baby bear’s porridge is just the right temperature, that baby bear’s chair is just the right size and that baby bear’s bed has just the right softness. And what happens when we put human beings (or, at the very least, the possibility of biology) back into the picture? We find that the universe is not merely ‘just right’: it appears extraordinarily fine-tuned for life.
This is the Goldilocks enigma.
The weak anthropic principle
Although there have been many examples of ‘anthropic’ reasoning throughout history, the notion of an anthropic principle was first introduced by the Australian theorist Brandon Carter. Whilst studying for his PhD at Cambridge University in 1967, Carter had become absorbed by the challenge of understanding the origin of the numerical coincidences that seem to dominate cosmology and physics. Influenced by Wheeler at Princeton, he circulated lecture notes on the subject informally among colleagues, eventually publishing his ideas in 1974.
Carter intended the anthropic principle as a direct challenge to the Copernican Principle. He presented his arguments at an International Astronomical Union symposium in Cracow on 10–12 September 1973. The conference was dedicated to commemorating the five hundredth anniversary of Copernicus’ birth.
Whereas the Copernican Principle insists that intelligent life occupies no privileged position in the cosmos, the anthropic principle suggests that our status as conscious observers of our universe demands at least some form of privileged perspective.
Carter offered two versions. The first is the weak anthropic principle: ‘we must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers’.3
This seems like common sense, if a bit of a tautology. In essence, it is a statement that relates to the notion of observer self-selection. The universe we observe must, by definition, include observers like us capable of observing it. Our observations are necessarily biased by virtue of our own existence. We are (again, by definition) unable to observe a universe in which observers like us cannot exist.
In Carter’s definition, the word ‘privileged’ is used in a relatively mild sense. This is entirely compatible with the notion that intelligent observers are a perfectly natural phenomenon. No matter how improbably fine-tuned the physical universe appears, and no matter how unlikely the facts of biological evolution through natural selection, the bottom line is that we’re here.
Now, the key question is what — if anything — the weak anthropic principle has to say about the reason we’re here. One argument is that we’re here because, by happy accident or the operation of some complex natural physical mechanisms we have yet to fathom, the parameters of the universe just happen to be compatible with our existence.
In this argument, intelligent life remains a consequence of the nature and structure of our universe (it is therefore I am). But it leaves us in the rather unsatisfactory position of having no real explanation under present understanding as to precisely why the universe is the way it is.
Maybe you can already sense where this is leading. If we have no explanation for the fine-tuning of the universe, perhaps this is because the universe isn’t really fine-tuned after all. What if the universe we inhabit is but one of an infinite or near-infinite number of parallel universes in which all manner of different combinations of physical parameters are possible?
In most of these universes the parameters are incompatible with the existence of intelligent life. The operation of selection bias means that, no matter how atypical or improbable the parameters of the universe we find ourselves in, we shouldn’t be surprised to find that this is nevertheless precisely the universe we observe.
The anthropic multiverse
Carter used the weak anthropic principle to argue for the existence of what he called an ensemble of universes, meaning simply that there are many (maybe an infinite number), without specifying precisely what these were or where they might have come from. There are many different possibilities, some of which were discussed in Chapter 9. Readers interested in a more comprehensive review of the different multiverse theories should consult Brian Greene’s The Hidden Reality.
Of course, this is the reason why the anthropic principle has become so popular among theorists in recent years. It shouldn’t come as any real surprise that those theorists who favour the cosmic landscape of superstring/M-theory’s 10⁵⁰⁰ different ways of constructing a universe have rushed to embrace anthropic reasoning.
What I find quite remarkable is that the observer selection bias summarized in the anthropic principle is sometimes used as a kind of justification for landscape theories, as though it were an important piece of observational evidence in support of them! The logic runs: the fact that our universe seems highly improbable must mean that observer selection bias is operating in a multiverse of possibilities.
The relationship is mutual. The landscape is also used to lend credibility to the anthropic principle, as Susskind claims: ‘Whether we like it or not, this is the kind of behavior that gives credence to the anthropic principle.’4
Although this marriage between the landscape and anthropic logic might initially have been rather forced, it does seem to be a marriage made in heaven. In The Cosmic Landscape, Susskind writes:
Until very recently, the anthropic principle was considered by almost all physicists to be unscientific, religious, and generally a goofy misguided idea. According to physicists it was a creation of inebriated cosmologists, drunk on their own mystical ideas … But a stunning reversal of fortune has put string theorists in an embarrassing position: their own cherished theory is pushing them right into the waiting arms of the enemy … The result of the reversal is that many string theorists have switched sides.5