Farewell to Reality


by Jim Baggott


  Type Ia supernovae are of interest because their relatively predictable luminosity and light curve (the evolution of their luminosity with time) mean they can be used as ‘standard candles’. In essence, finding a Type Ia supernova and determining its peak brightness provides a measure of the distance of its host galaxy.

  Galaxies that would otherwise be too dim to perceive at the edges of the visible universe are lit up briefly by the flare of a supernova explosion.
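
  To make the ‘standard candle’ idea concrete, here is a minimal sketch (a schematic illustration, not how the survey teams actually did it) of how a distance follows from comparing apparent and absolute brightness through the astronomers’ distance modulus. The assumed peak absolute magnitude of about −19.3 and the example apparent magnitude are illustrative numbers only; real analyses apply further corrections for light-curve shape and dust.

```python
def luminosity_distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative numbers only: a Type Ia supernova assumed to peak at absolute
# magnitude -19.3, observed with an apparent peak magnitude of 24.
d_pc = luminosity_distance_pc(24.0, -19.3)
print(f"{d_pc:.2e} parsecs, roughly {d_pc * 3.26e-9:.0f} billion light years")
```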

  In early 1998, two independent groups of astronomers reported the results of measurements of the redshifts and hence speeds of distant galaxies that had contained Type Ia supernovae. These were the Supernova Cosmology Project (SCP), based at the Lawrence Berkeley National Laboratory near San Francisco, California, headed by American astrophysicist Saul Perlmutter; and the High-z (meaning high-redshift) Supernova Search Team formed by Australian Brian Schmidt and American Nicholas Suntzeff at the Cerro Tololo Inter-American Observatory in Chile.

  The groups were rivals in the pursuit of data that would allow us to figure out the ultimate fate of the universe. Not surprisingly, their strongly competitive relationship had been fraught, characterized by bickering and disagreement. But in February 1998 they came to agree with each other — violently.

  It had always been assumed that, following the big bang and a period of rapid inflation, the universe must have settled into a phase of more gentle evolution, either continuing to expand at some steady rate or winding down, with the rate of expansion slowing. However, observations of Type Ia supernovae now suggested that the expansion of the universe is actually accelerating.

  ‘Our teams, especially in the US, were known for sort of squabbling a bit,’ Schmidt explained at a press conference some years later. ‘The accelerating universe was the first thing that our teams ever agreed on.’15

  Adam Riess, a member of the High-z team, subsequently found a very distant Type Ia supernova that had been serendipitously photographed by the Hubble Space Telescope during the commissioning of a sensitive infrared camera. It had a redshift consistent with a distance of 11 billion light years, but it was about twice as bright as it had any right to be. It was the first glimpse of a supernova that had been triggered in a period when the expansion of the universe had been decelerating.

  The result suggested that about five billion years ago, the expansion had ‘flipped’. As expected, gravity had slowed the rate of expansion of the post-big-bang universe, until it reached a point at which it had begun to accelerate again. And there was really only one thing that could do this.

  The cosmological constant was back.

  One supernova does not a summer make, but in 2002, astronauts from the Space Shuttle Columbia installed the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope. Riess, now heading the Higher-z Supernova Search Team, used the ACS to observe a further 23 distant Type Ia supernovae. The results were unambiguous. The accelerating expansion has since been confirmed by other astronomical observations and further measurements of the CMB radiation.

  There are other potential explanations, but a consensus has formed around a cosmological constant whose value corresponds to a dark energy density of about 0.73, expressed as a fraction of the total energy density of the universe. Its reintroduction into the gravitational field equations is thought to betray the existence of a form of vacuum energy — the energy of ‘empty’ spacetime — which acts to push spacetime apart, just as Einstein had originally intended.
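
  For reference, and in the standard notation rather than anything spelled out in the text, the field equations with the cosmological constant reinstated read

$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},$$

where a positive Λ behaves like a constant vacuum energy density with negative pressure, the kind of ingredient that can drive an accelerating expansion.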

  We have no idea what this energy is, and in a grand failure of imagination, it is called simply ‘dark energy’.

  The standard model of big bang cosmology

  The ΛCDM model is based on six parameters. Three of these are the density of dark energy, which is related to the size of the cosmological constant, Λ; the density of cold dark matter; and the density of baryonic matter, the stuff of gas clouds, stars, planets and us.*

  When the model is applied to the seven-year WMAP results, the agreement between theory and observation is remarkable. Figure 4 shows the ‘power spectrum’ derived from the temperature of the CMB radiation mapped by WMAP. This is a complex graph, but it is enough to know that the oscillations in this spectrum are due to the physics of the plasma that prevailed at the time of recombination. In essence, as gravity tried to pull the atomic nuclei in the plasma together, radiation pressure pushed them apart, and this competition resulted in so-called acoustic oscillations.
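
  For readers who want the convention behind Figure 4 (standard CMB practice, not something the text spells out): the temperature variations across the sky are expanded in spherical harmonics, and the power spectrum collects the variance at each angular scale ℓ,

$$\frac{\Delta T}{T}(\hat{n}) = \sum_{\ell,m} a_{\ell m}\,Y_{\ell m}(\hat{n}), \qquad C_{\ell} = \frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell} \left|a_{\ell m}\right|^{2},$$

with the quantity usually plotted being ℓ(ℓ+1)Cℓ/2π; small ℓ corresponds to large angular scales on the sky and large ℓ to fine detail.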

  Figure 4 The 7-year temperature power spectrum from WMAP. The curve is the ΛCDM model best fit to the 7-year WMAP data. NASA/WMAP Science Team. See D. Larson et al., The Astrophysical Journal Supplement Series, 192 (February 2011), 16.

  The positions of these oscillations and their damping with angular scale determine the parameters of the ΛCDM model with some precision. In Figure 4, the points indicate the WMAP power spectrum data with associated error bars, and the continuous line is the ‘best-fit’ prediction of the ΛCDM model obtained by adjusting the six parameters. In this case, the best fit suggests that dark energy accounts for about 73.8 per cent of the universe and dark matter 22.2 per cent.

  What we used to think of as ‘the universe’ accounts for just 4.5 per cent. This means that the evolution of the universe to date has been determined by the push-and-pull between the antigravity of dark energy and the gravity of (mostly) dark matter.
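
  Expressed in the conventional density parameters (the Ω symbols are standard in cosmology, though they are not used elsewhere in this chapter), these percentages correspond, for a spatially flat universe, to contributions that add up to approximately one:

$$\Omega_{\Lambda} \approx 0.738, \qquad \Omega_{\mathrm{cdm}} \approx 0.222, \qquad \Omega_{b} \approx 0.045, \qquad \Omega_{\Lambda} + \Omega_{\mathrm{cdm}} + \Omega_{b} \approx 1.$$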

  It seems that visible matter, of the kind we tend to care rather more about, has been largely carried along for the ride.

  * This sounds a bit like the Copernican Principle described in Chapter 1, but the Copernican Principle goes a lot further. The cosmological principle is in essence a kind of statistical assumption — our perspective from earth-bound or satellite-borne telescopes is representative of the entire universe. The Copernican Principle subsumes the cosmological principle, but goes on to insist that this perspective is due to the fact that the universe was not designed specifically with human beings in mind.

  * This is a Doppler shift, caused by the speed of motion of the light source, not to be confused with a gravitational shift due to the curvature of spacetime.

  * A megaparsec is equivalent to 3.26 million light-years (a light-year is the distance that light travels in a year), or 30.9 billion billion kilometres.
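
  The conversion is simple arithmetic: taking a light-year as roughly 9.46 × 10¹² kilometres,

$$1\ \mathrm{Mpc} \approx 3.26\times 10^{6}\ \mathrm{ly} \times 9.46\times 10^{12}\ \mathrm{km/ly} \approx 3.09\times 10^{19}\ \mathrm{km},$$

which is the 30.9 billion billion kilometres quoted above.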

  * Although some heavier elements were formed in the big bang, much of the synthesis responsible for the distribution of elements in the universe is now understood to take place in the interiors of stars and during cataclysmic stellar events, such as supernovae. Hoyle played a significant role in working out the mechanisms of stellar nucleosynthesis. There’s more on this in Chapter 11.

  * And, indeed, of several other, similar, predictions that Gamow, Alpher and Herman had published in the intervening years.

  * Pope Pius XII had pronounced in 1951 that the big bang model was consistent with the official doctrine of the Roman Catholic church.

  * More recent evaluation puts this energy somewhere in the region of 200,000 billion GeV.

  * Although ‘grand’ and ‘unified’, GUTs do not seek to include the force of gravity. Theories that do so are often referred to as Theories of Everything, or TOEs.

  ** Liquid water can be supercooled to temperatures up to 40 degrees below freezing.

  * It was originally planned to be launched in 1988 on a space shuttle mission, but the shuttles were grounded following the Challenger disaster on 28 January 1986.

  * The discrepancy was reduced by subsequent analysis, but it remains significant.

  * Of course, these acronyms are not coincidental. WIMP was coined first, apparently inspiring the subsequent development of MACHO.

  * Actually, the densities of dark matter and of baryonic matter are derived from the model parameters.

  6

  What’s Wrong with this Picture?

  Why the Authorized Version of Reality Can’t be Right

  The truth of a theory can never be proven, for one never knows if future experience will contradict its conclusions.

  Albert Einstein1

  The last four chapters have provided something of a whirlwind tour of our current understanding of light, matter and force, space, time and the universe. Inevitably, I’ve had to be a bit selective. It’s not been possible to explore all the subtleties of contemporary physics, and the version of its historical development that I’ve provided here has been necessarily ‘potted’.

  I would hope that as you read through the last four chapters, you remembered to refer back to the six principles that I outlined in Chapter 1. I’d like to think that the developments in physical theory in the last century amply demonstrate the essential correctness of these principles, if not in word then at least in spirit.

  Quantum theory really brings home the importance of the Reality Principle. The experimental tests of Bell’s and Leggett’s inequalities tell us fairly unequivocally that we can discover nothing about a reality consisting of things-in-themselves. We have to settle instead for an empirical reality of things-as-they-are-measured. This is no longer a matter for philosophical nit-picking. These experimental tests of quantum theory are respectfully suggesting that we learn to be more careful about how we think about physical reality.

  I haven’t been able to provide you with all the observational and experimental evidence that supports quantum theory, the standard model of particle physics, the special and general theories of relativity and the ΛCDM model of big bang cosmology. But there should be enough in here to verify the Fact Principle. It is simply not possible to make observations or perform experiments without reference to a supporting theory of some kind. Think of the search for the Higgs boson at CERN, the bending of starlight by large gravitating objects, or the analysis of the subtle temperature variations in the CMB radiation.

  We have also seen enough examples of theory development to conclude that the Theory Principle is essentially correct. Abstracting from facts to theories is highly complex, intuitive and not subject to simple, universal rules. In some cases, theories have been developed in response to glaring inconsistencies between observation, experiment and the prevailing methods of explanation. Such developments have been ‘data-led’, with observation or experiment causing widespread bafflement before a theoretical resolution could be found.

  Sometimes the theoretical resolution has been more baffling than the data, as Heisenberg himself could attest, wandering late at night in a nearby park after another long, arduous debate with Bohr in 1927. Could nature possibly be as absurd as it seemed?

  In other cases, theories have sprung almost entirely from intuition: they have been ‘idea-led’. Such intuition has often been applied where a problem has been barely recognized, born from a stubborn refusal to accept inadequate explanations. Recall Einstein’s light-quantum hypothesis. Remember Einstein sitting in his chair at the Patent Office, struck by the thought that if a man falls freely he will not feel his own weight.

  Ideas and theories that follow from intuition can clearly precede observation and experiment. The notion that there should exist a meson formed from a charm and anti-charm quark preceded the discovery of the J/Ψ in the November revolution of 1974. The Higgs mechanism of electro-weak symmetry-breaking preceded the discovery of weak neutral currents, the W and Z particles and (as seems likely) the Higgs boson.

  I’ve tried to ensure that my descriptions of the theoretical structures that make up the authorized version of reality have been liberally sprinkled with references to the observations and experiments that have provided critically important tests. Although there are inevitable grey areas, in general the theories that constitute the authorized version are regarded as testable, and have been rigorously tested to a large degree. Perhaps you are therefore ready to accept the Testability Principle.

  Then we come to the Veracity Principle. It might come as a bit of a shock to discover that scientific truth is transient. What is accepted as true today might not be true tomorrow. But look back at how the ‘truth’ of our universe has changed from Newton’s time, or even over the last thirty years. Or even within the short period in which the Higgs boson took a big step towards becoming a ‘real’ entity.

  Finally, there is the Copernican Principle. Nowhere in the authorized version will you find any reference to ‘us’ as special or privileged observers of the universe. As we currently understand it, the physics of this version of reality operates without intention and without passion. We are just passively carried along for the ride.

  Now, you might have got the impression from the last four chapters that the authorized version of reality is a triumph of the human intellect and, as such, pretty rock-solid, possibly destined to last for all time. The four theoretical structures that make up the authorized version undoubtedly represent the pinnacle of scientific achievement. We should be — and are — immensely proud of them.

  But these theories are riddled with problems, paradoxes, conundrums, contradictions and incompatibilities. In one sense, they don’t make sense at all.

  They are not the end. The purpose of this chapter is to explain why, despite appearances, the authorized version of reality can’t possibly be right.

  Some of these problems were hinted at in previous chapters, but here we will explore them in detail. It is important to understand where they come from and what they imply. The attempt to solve them without guidance from observation or experiment is what has led to the creation of fairy-tale physics.

  The paradox of Schrödinger’s cat

  Actually, the problem of quantum measurement is a perfect problem for these economically depressed times. This is because it is, in fact, three problems for the price of one: the problem of quantum probability, the collapse of the wavefunction and the ‘spooky’ action-at-a-distance that this implies. Bargain!

  Our discomfort begins with the interpretation of quantum probability. Quantum particles possess the property of phase which in our empirical world of experience scales up to give us wave-like behaviour. We identify the amplitude of the quantum wavefunction (or, more correctly, the modulus-square of the amplitude) as a measure of probability, and this allows us to make the connection with particles.2 Thus, an electron orbiting a proton in a hydrogen atom might have a spherically symmetric wavefunction, and the modulus-square of the amplitude at any point within the orbit gives the probability that the electron will be ‘found’ there.
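
  To make this concrete with a standard textbook example (one the book does not work through): the ground state of hydrogen has the spherically symmetric wavefunction

$$\psi_{100}(r) = \frac{1}{\sqrt{\pi a_{0}^{3}}}\,e^{-r/a_{0}},$$

and the probability of finding the electron in a thin spherical shell between r and r + dr is

$$P(r)\,\mathrm{d}r = \left|\psi_{100}(r)\right|^{2}\,4\pi r^{2}\,\mathrm{d}r,$$

where a₀ is the Bohr radius. This probability peaks at r = a₀, but the electron has some (small) probability of being found almost anywhere.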

  The trouble is that phases (waves) can be added or subtracted in ways that self-contained particles cannot. We can create superpositions of waves. Waves can interfere. Waves are extended, with amplitudes in many different places. The probabilities that connect us with particles are therefore subject to ‘spooky’ wave effects. We conclude that one particle can have probabilities for being in many different places (although thankfully it can’t have a unit or 100 per cent probability for being in more than one place at a time).

  Consider a quantum system on which we perform a measurement with two possible outcomes, say ‘up’ or ‘down’ for simplicity. The accepted approach is to form a wavefunction which is a superposition of both possible outcomes, including any interference terms. This represents the state of the system prior to measurement. The measurement then collapses the wavefunction and we get a result — ‘up’ or ‘down’ with a probability related to the modulus-squares of the amplitudes of the components in the superposition.
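
  A minimal numerical sketch of that recipe (illustrative only, assuming an equal-weight superposition) shows how the Born rule turns amplitudes into measurement statistics. It mimics only the statistics of collapse; it says nothing about how or why collapse happens.

```python
import numpy as np

# State |psi> = a|up> + b|down>, with complex amplitudes a and b.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)   # equal-weight superposition, |a|^2 + |b|^2 = 1
probs = [abs(a) ** 2, abs(b) ** 2]       # Born rule: modulus-squares of the amplitudes

# Each 'measurement' collapses the superposition to one outcome with these probabilities.
rng = np.random.default_rng(seed=1)
outcomes = rng.choice(["up", "down"], size=10_000, p=probs)
print({label: int(np.sum(outcomes == label)) for label in ("up", "down")})  # roughly 5,000 each
```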

  Just how is this meant to work?

  The collapse of the wavefunction is essential to our understanding of the relationship between the quantum world and our classical world of experience, yet it must be added to the theory as an ad hoc assumption. It also leaves us pondering. Precisely where in the causally connected chain of events from quantum system to human perception is the collapse supposed to occur?

  Inspired by some lively correspondence with Einstein through the summer of 1935, Austrian physicist Erwin Schrödinger was led to formulate one of the most famous paradoxes of quantum theory, designed to highlight the simple fact that the theory contains no prescription for precisely how the collapse of the wavefunction is meant to be applied or where it is meant to occur. This is, of course, the famous paradox of Schrödinger’s cat.

  He described the paradox in a letter to Einstein as follows:

  Contained in a steel chamber is a Geiger counter prepared with a tiny amount of uranium, so small that in the next hour it is just as probable to expect one atomic decay as none. An amplified relay provides that the first atomic decay shatters a small bottle of prussic acid. This and — cruelly — a cat is also trapped in the steel chamber. According to the [wave] function for the total system, after an hour, sit venia verbo, the living and dead cat are smeared out in equal measure.3

  Prior to actually measuring the disintegration in the Geiger counter, the wavefunction of the uranium atom is expressed as a superposition of the possible measurement outcomes, in this case a superposition of the wavefunctions of the intact atom and of the disintegrated atom. Our instinct might be to conclude that the wavefunction collapses when the Geiger counter triggers. But why? After all, there is nothing in the structure of quantum theory itself to indicate this.

  Why not simply assume that the wavefunction evolves into that of a superposition of the wavefunctions of the triggered and untriggered Geiger counter? And why not further assume that this evolves too, eventually to form a superposition of the wavefunctions of the live and dead cat? This is what Schrödinger meant when he wrote about the living and dead cat being ‘smeared out’ in equal measure.
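
  In the schematic notation physicists often use (not how Schrödinger himself wrote it), this chain of reasoning leads to an entangled superposition of the entire contents of the chamber,

$$|\Psi\rangle = \frac{1}{\sqrt{2}}\Big(|\text{atom undecayed}\rangle\,|\text{counter untriggered}\rangle\,|\text{cat alive}\rangle + |\text{atom decayed}\rangle\,|\text{counter triggered}\rangle\,|\text{cat dead}\rangle\Big),$$

with the two branches ‘smeared out’ in equal measure until something, somewhere, collapses the wavefunction.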

  We appear to be trapped in an infinite regress. We can perform a measurement on the cat by lifting the lid of the steel chamber and ascertaining its physical state. Do we suppose that, at that point, the wavefunction collapses and we record the observation that the cat is alive or dead as appropriate?

 
