
Farewell to Reality


by Jim Baggott


  So, lots of ideas and lots of searching. But no evidence one way or another, yet.

  The catastrophe of the vacuum

  We saw in the last chapter that a series of astronomical observations are now lined up behind the existence of dark energy, manifested in the ΛCDM model of big bang cosmology as a cosmological constant contributing an energy density equivalent to Ω of 0.73.

  At first, this seems quite baffling. It requires ‘empty’ spacetime to possess energy (the eponymous ‘dark’ energy) which acts like a kind of antigravity, pushing space apart and accelerating the expansion of the universe.

  But wait. Haven’t we already come across something like this before? The operation of Heisenberg’s uncertainty principle on the vacuum of ‘empty’ space means that it is, in fact, filled with quantum fluctuations. And the existence of these fluctuations is demonstrated by the experiment which measured the Casimir force between two parallel plates.

  The existence of vacuum fluctuations means that there is what physicists call a non-zero vacuum expectation value (a kind of fancy average value) for the energy of the vacuum. This sounds perfect. Surely the cosmological constant reflects the basic quantum uncertainty of spacetime. Wouldn’t that be poetic?

  Not so fast.

  The size of the cosmological constant required by the ΛCDM model suggests a density of vacuum energy of the order of a millionth of a billionth (10⁻¹⁵) of a joule per cubic centimetre. Now, you may not be entirely familiar with the joule as a unit of energy, so let’s try to put this figure into some sort of perspective.

  Kellogg’s cornflakes contain about 1.6 million joules of energy per 100 grams.* A typical box contains 450 grams and has dimensions 40 × 30 × 5 centimetres. In an upright box the cornflakes tend to shake down and occupy only a proportion — let’s say 75 per cent — of the volume inside. From this information we can estimate the chemical energy density of a box of cornflakes at about 1,600 joules per cubic centimetre. This gives us a sense of the scale of energy densities in ‘everyday’ life.

  Although we don’t normally include it in our daily nutritional considerations, cornflakes also contain energy in the form of mass. We can use Einstein’s formula E = mc² to calculate that a 450 gram box is equivalent to about 40 million billion joules. This gives a mass-energy density of about 7 trillion (7 × 10¹²) joules per cubic centimetre. There’s not much to be gained by adding the chemical energy density to this figure to give a total.

  So, perhaps not altogether surprisingly, the energy density of the vacuum is about 10⁻²⁷ times the energy density of an everyday object like a box of cornflakes. Hey, it might not be completely empty, but it’s still a ‘vacuum’, after all.
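
  If you’d like to check the arithmetic for yourself, here is a minimal Python sketch reproducing the figures above. The inputs are simply the numbers quoted in the text, with the speed of light taken as 3 × 10⁸ metres per second:

```python
# Cornflake arithmetic: chemical and mass-energy densities of a 450 g box,
# using the figures quoted in the text.

joules_per_100g = 1.6e6              # chemical energy content of cornflakes
box_mass_g = 450
box_volume_cm3 = 40 * 30 * 5         # 6,000 cubic centimetres
fill_fraction = 0.75                 # cornflakes settle into ~75% of the box

chemical_energy = joules_per_100g * box_mass_g / 100           # ~7.2 million J
chemical_density = chemical_energy / (box_volume_cm3 * fill_fraction)
print(chemical_density)              # ~1,600 J per cubic centimetre

c = 3.0e8                            # speed of light, metres per second
mass_energy = (box_mass_g / 1000) * c**2                       # ~4e16 J
mass_energy_density = mass_energy / box_volume_cm3
print(mass_energy_density)           # ~7e12 J per cubic centimetre

vacuum_density = 1e-15               # the 'observed' dark energy density, J/cm^3
print(vacuum_density / mass_energy_density)    # of order 1e-28 to 1e-27
```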

  What does quantum theory predict? Well, the calculation is a little problematic. On the one hand, the uncertainty principle appears to impose some fairly rigid restrictions on what can’t happen in the quantum world. However, on the other hand, it is extraordinarily liberal regarding what can happen. And history has shown that when quantum theory says that something can happen in principle, then this something generally tends to happen in practice.

  The trouble with the uncertainty principle is that it doesn’t care about the size (in energy terms) of quantum fluctuations in the vacuum provided they happen on timescales consistent with the principle. It simply demands a trade-off between energy and time. So, a fluctuation of near-infinite energy is perfectly acceptable provided it happens in an infinitesimally short time.
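
  To make the trade-off a little more concrete, here is a short sketch based on the textbook energy-time relation ΔE Δt ≳ ℏ/2 (the factor of a half depends on convention):

```python
# Longest time a vacuum fluctuation of a given energy can persist,
# from the energy-time relation dE * dt >= hbar / 2.
hbar = 1.055e-34    # reduced Planck constant, joule-seconds

def max_lifetime(energy_joules):
    """Maximum lifetime allowed for a fluctuation of this energy."""
    return hbar / (2 * energy_joules)

print(max_lifetime(1.0))    # a one-joule fluctuation: ~5e-35 seconds
print(max_lifetime(1e9))    # a billion-joule fluctuation: ~5e-44 seconds
```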

  I think you can probably guess where this leads. Accumulating all the quantum fluctuations that are ‘allowed’ by the uncertainty principle results in an infinite vacuum energy density.

  This is vaguely reminiscent of the problem we encountered in quantum electrodynamics, in which an electron interacts with its own self-generated electromagnetic field, resulting in an infinite contribution to the electron mass. That problem was resolved using the technique of renormalization, effectively subtracting infinity from infinity to give a finite, renormalized result. Unfortunately, renormalizing the vacuum is a non-trivial problem.

  Theorists have made some headway by ‘regularizing’ the calculation. In essence, this involves applying an arbitrary cut-off, simply deleting from the equations all the terms relating to the highest-energy fluctuations. These occur with dimensions and within timescales where in any case the theorists are no longer confident about the validity of quantum theory. This is the Planck scale, the domain of gravity.

  Now it’s certainly true that regularizing the calculation in this way does improve things. The vacuum energy density is no longer predicted to be infinite (yay!). Instead, it’s predicted to have a value of the order of 100,000 googol* (10¹⁰⁵) joules per cubic centimetre. In case you’ve forgotten already, the ‘observed’ value is 10⁻¹⁵ joules per cubic centimetre, so the theoretical prediction is out by a staggering hundred billion billion googol (10¹²⁰).
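
  As a rough back-of-the-envelope check (a sketch only, not the regularized calculation itself), we can assume one Planck energy’s worth of fluctuation per Planck volume. Ignoring the numerical factors that a careful treatment keeps, this lands within a few orders of magnitude of the quoted figure, and the factor of 10¹²⁰ follows directly:

```python
import math

hbar = 1.055e-34    # reduced Planck constant, J s
G = 6.674e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)     # ~1.6e-35 metres
planck_energy = math.sqrt(hbar * c**5 / G)     # ~2e9 joules

# One Planck energy per Planck volume, converted to joules per cubic centimetre
naive_density = planck_energy / planck_length**3 * 1e-6
print(naive_density)                 # ~5e107, within a few orders of 1e105

predicted = 1e105                    # the regularized estimate quoted above
observed = 1e-15                     # the 'observed' value, J per cm^3
print(predicted / observed)          # 1e120 -- the notorious mismatch
```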

  That’s got to be the worst theoretical prediction in the history of science.

  It’s perhaps rather amusing to note that after all the wrangling over whether or not there is a cosmological constant, quantum theory is actually quite clear and unambiguous on this question. There definitely should be a cosmological constant — quantum theory demands it. But now we need to understand just how it can be so small.

  The long and winding road to quantum gravity

  Within a few short months of his final lecture on general relativity to the Prussian Academy of Sciences, Einstein was back at the Academy explaining that his new theory might need to be modified:

  Due to electron motion inside the atom, the latter should radiate gravitational, as well as electromagnetic energy, if only a negligible amount. Since nothing like this should happen in nature, the quantum theory should, it seems, modify not only Maxwell’s electrodynamics but also the new theory of gravitation.10

  In other words, Einstein was hinting that there should be a quantum theory of gravity.

  There are a number of different ways physicists can try to construct such a theory. They can try to impose quantum rules on general relativity in a process called ‘canonical quantization’. This is generally known as the canonical approach to quantum gravity. Einstein himself tended to dismiss this approach as ‘childish’.

  Alternatively, they can start with relativistic quantum field theory and try to make it generally covariant, meaning that the physical (quantum) laws described by the theory are independent of any arbitrary change in co-ordinate system, as demanded by general relativity. This is known as the covariant approach to quantum gravity. In a quantum field theory, gravity is described in much the same way as forces in the standard model of particle physics, in terms of the exchange of a force carrier — called the graviton — between gravitating objects.

  A third approach is to start over.

  Whichever approach we take, we run into a series of profound problems right at the outset. General relativity is about the motions of large-scale bodies such as planets, stars, solar systems, galaxies and the entire universe within a four-dimensional spacetime. In general relativity these motions are described by Einstein’s gravitational field equations. These are complex equations, because mass distorts the geometry of the spacetime around it, and that geometry in turn governs how the mass moves.

  But spacetime itself is contained entirely within the structure of general relativity — it is a fundamental variable of the theory. The theory itself constructs the framework within which mass moves and things happen. In this sense the theory is ‘background independent’: it does not presuppose the existence of a background framework to which the motions of large masses are to be referred.

  Quantum theory, in contrast, presumes precisely this. It is ‘background dependent’, requiring an almost classical Newtonian container of space and time within which the wavefunctions of quantum particles can evolve.

  We ran into problems thrown up by the uncertainty principle when we considered the energy of the vacuum. But we get even more headaches when we apply the uncertainty principle to spacetime itself. General relativity assumes that spacetime is certain: it is ‘here’ or ‘there’, curves this way or that way, at this or that rate. But the uncertainty principle hates certainty. It insists that we abandon this naive notion of smooth continuity and deal instead with a spacetime twisted and tortured and riddled with bumps, lumps and tunnels — ‘wormholes’ connecting one part of spacetime with another.

  The American physicist John Wheeler called it quantum or spacetime ‘foam’. A picturesque description, perhaps, but constructing a theory on it is like trying to build on a foundation of wet sand.

  There are further problems. Aside from having to confront the essentially chaotic nature of spacetime at the ‘Planck scale’, we also have to acknowledge that we’re now dealing with distances and volumes likely to catch us out in one of the most important assumptions of conventional quantum field theory — that of point particles.

  In the quantum field theories that comprise the standard model of particle physics, the elementary particles are treated as though they have no spatial extension.* All of the particle’s mass, charge and any other physical property it might be carrying are assumed to be concentrated to an infinitesimally small point. This would necessarily be true of elementary particles in any quantum field theory of gravity.

  Obviously, the assumption of point particles is much more likely to be valid when considering physics on scales much larger than the particles themselves. But as we start to think about physics at the dimensions of the Planck length — 1.6 hundredths of a billionth of a trillionth of a trillionth (1.6 × 10⁻³⁵) of a metre — we must begin to doubt its validity.

  Quantum theory and general relativity are two of the most venerated theories of physics, but, like two grumpy old men, they just don’t get along. Both are wonderfully productive in helping us to understand the large-scale structure of our universe and the small-scale structure of its elementary constituents. But they are volatile and seem destined to explode whenever one is shoehorned into the other.

  The physics of the very small and the physics of the very large are seemingly incompatible, even though the very large (the universe) was once very, very small. A straightforward resolution of the problem is not forthcoming. Quantum gravity lies far beyond the standard model of particle physics and general relativity, and so far beyond the current authorized version of reality.

  The fine-tuning problem

  When we use it to try to make sense of the world around us, science forces us to abandon our singularly human perspective. We’re obliged to take the blinkers off and adopt a little humility. Surely the grand spectacle of the cosmos was not designed just to appeal to our particularly human sense of beauty? Surely the universe did not evolve baryonic matter, gas, dust, stars, galaxies and clusters of galaxies just so that we could evolve to gaze up in awe at it?

  Of course, the history of science is littered with stories of the triumph of the rational, scientific approach over human mythology, superstition and prejudice. So, we do not inhabit the centre of the solar system, with the sun and planets revolving around the earth. The sun, in fact, is a rather unspectacular star, like many in our Milky Way galaxy of between 200 and 400 billion stars. The Milky Way is just one of about 200 billion galaxies in the observable universe. It makes no sense to imagine that this is all for our benefit.

  This is the Copernican Principle.

  What, then, should we expect some kind of ultimate theory of everything to tell us? I guess the assumption inherent in the scientific practice of the last few centuries is that such a theory of everything will explain why the universe and everything in it has to be the way it is. It will tell us that our very existence is an entirely natural consequence of the operation of a (hopefully) simple set of fundamental physical laws.

  We might imagine a set of equations into which we plug a number of perfectly logical (and inescapable) initial conditions, then press the ‘enter’ key and sit back and watch as a simulation of our own universe unfolds in front of our eyes. After a respectable period, we reach a point in the simulation where the conditions are right for life.

  We are not so naïve as to imagine that science will ever completely eliminate opportunities for speculation and mythologizing. There are some questions that science may never be able to answer, such as what (or who, or should that be Who?) pressed the ‘enter’ key to start the universe we happen to live in. But surely the purpose of science is to reduce such opportunities for myth to the barest minimum and replace them with the cold, hard workings of physical mechanism.

  Here we run into what might be considered the most difficult problem of all. The current authorized version of reality consists of a marvellous collection of physical laws, governed by a set of physical constants, applied to a set of elementary particles. Together these describe how the physical mechanism is meant to work. But they don’t tell us where the mechanism comes from or why it has to be the way it is.

  What’s more, there appears to be no leeway. If the physical laws didn’t have quite the form they do, if the physical constants had very slightly different values, or if the spectrum of elementary particles were marginally different, then the universe we observe could not exist.*

  This is the fine-tuning problem.

  We have already encountered some examples of fine-tuning, such as the mass-energy scales (and the relative strengths) of gravity and the weak and electromagnetic forces. More fine-tuning appears to be involved in the vacuum energy density, or the density of dark energy, responsible for the large-scale structure of the universe.

  In 1999, the British cosmologist Martin Rees published a book titled Just Six Numbers, in which he argued that our universe exists in its observed form because of the fine-tuning of six dimensionless physical constants.

  As I’ve already said a couple of times, physicists get a little twitchy when confronted with too many coincidences. Here, however, we’re confronted not so much with coincidences but rather with conspiracy on a grand scale.

  Now, we should note that some physicists have dismissed fine-tuning as a non-problem. The flatness and horizon problems in early big bang cosmology appeared to demand similarly fantastic fine-tuning but were eventually ‘explained’ by cosmic inflation. Isn’t it the case here that we’re mistaking ignorance for coincidence? In other words, the six numbers that Rees refers to are not fine-tuned at all: we’re just ignorant of the physics that actually governs them.

  But our continued inability to devise theories that allow us to deduce these physical constants from some logical set of ‘first principles’, and so explain why they have the values that they have, leaves us in a bit of a vacuum.

  It also leaves the door wide open.

  Clueless

  There are further problems that I could have chosen to include in this chapter, but I really think we have enough to be going on with. On reading about these problems it is, perhaps, easy to conclude that we hardly know anything at all. We might quickly forget that the current authorized version of reality actually explains an awful lot of what we can see and do in our physical world.

  I tend to look at it this way. Several centuries of enormously successful physical science have given us a version of reality unsurpassed in the entire history of intellectual endeavour. With a very few exceptions, it explains every observation we have ever made and every experiment we have ever devised.

  But the few exceptions happen to be very big ones. And there’s enough puzzle and mystery and more than enough of a sense of work-in-progress for us to be confident that this is not yet the final answer.

  I think that’s extremely exciting.

  Now we come to the crunch. We know the current version of reality can’t be right. There are some general but rather vague hints as to the directions we might take in search of solutions, but there is no flashing illuminated sign saying ‘this way to the answer to all the puzzles’. And there is no single observation, no one experimental result, that helps to point the way. We are virtually clueless.

  Seeking to resolve these problems necessarily leads us beyond the current version of reality, to grand unified theories, theories of everything and other, higher speculations. Without data to guide us, we have no choice but to be idea-led.

  Perhaps it is inevitable that we cross a threshold.

  * But it’s worth remembering that under certain circumstances it is possible to form superpositions of macroscopic dimensions, such as (for example) superpositions of a couple of billion electrons travelling in opposite directions around a superconducting ring with a diameter of over a hundred millionths of a metre. Okay, these aren’t cat-sized dimensions, but precisely where are we supposed to draw the line?

  * In principle, these fundamental physical constants simply ‘map’ the physics to our human, terrestrial (and arbitrary) standards of observation and measurement.

  ** Count them. There are three generations each consisting of two leptons and two flavours of quark which come in three different colours (making 24 in total), the anti-particles of all these (making 48), twelve force particles — a photon, the W⁺, W⁻ and Z⁰, and eight gluons (making 60) — and a Higgs boson (61).

  * The nutritional information on a box of cornflakes indicates an energy content of 1,604 kJ (thousand joules) per 100 grams. This is chemical energy, released when the cornflakes are combusted or digested.

  * A googol is 10¹⁰⁰. I had to look it up.

  * Hang on, I hear you cry. What about the uncertainty principle? Doesn’t the assumption of point-like properties for an elementary particle mean a consequent assumption of certainty in its location in space? Actually, no, it doesn’t. An elementary particle like an electron may be thought of as being ‘smeared’ out in space because the amplitude of its wavefunction is not fixed on a single point; it is extended. But it is not the particle itself that is smeared out. What is smeared out is the probability — calculated from the modulus-square of the wavefunction — of ‘finding’ the point-like electron within this space.

 
