
Cosmology: A Very Short Introduction


by Peter Coles


  First, there are the classical cosmological tests. The idea behind these tests is to use observations of very distant objects to measure the curvature of space, or the rate at which the expansion of the Universe is decelerating. The simplest of them involves comparing the ages of astronomical objects (particularly stars in globular cluster systems) with the age predicted by cosmological theory. I discussed this in Chapter 4; because the predicted age depends much more sensitively on the Hubble constant than it does on Ω (especially if the expansion of the Universe is not decelerating), and because the ages of old stars are not in any case known with great confidence, this test is not a powerful diagnostic of Ω at the moment. Other classical tests use the properties of very distant sources to probe directly the rate of deceleration or the spatial geometry of the Universe. Some of these techniques were pioneered by Hubble and developed into an art form by Sandage. They fell into some disrepute in the 1960s and 1970s, when it was realized that not only was the Universe at large expanding, but the objects within it were also evolving rapidly. Since one needs to probe very large distances to measure the very slight geometrical effects of spatial curvature, one is inevitably looking at astronomical objects as they were when their light started out on its journey to us. This can be a very long time ago indeed: light travel times exceeding 80 per cent of the age of the Universe are commonplace in cosmological observations. There is no guarantee that the distant objects being used have the same brightness or size as nearby ones, because these properties may change with time. Indeed, the classical cosmological tests are now largely used to study the evolution of such properties, rather than to test fundamental aspects of cosmology. There is, however, one important and recent exception. The use of supernova explosions as standard light sources has yielded spectacular results that seem to suggest the Universe is not decelerating at all. I’ll talk more about these at the end of the chapter.

  Next are arguments based on the theory of nucleosynthesis. As I explained in Chapter 5, the agreement between observed elemental abundances and the predictions of nuclear fusion calculations in the early Universe is one of the major pillars of evidence supporting the Big Bang theory. But this agreement only holds if the density of matter is very low indeed: no more than a few per cent of the critical density required to make space flat. This has been known for many years, and at first sight it seems to provide a very simple answer to all the questions I have posed. However, there is an important piece of small print attached to this argument. The ‘few per cent’ limit applies only to matter which can participate in nuclear reactions. The Universe could be filled with a background of sterile particles that were unable to influence the synthesis of the light elements. The kind of matter that involves itself in things nuclear is called baryonic matter, and is made up of two basic particles: protons and neutrons. Particle physicists have suggested that particles other than baryonic ones might have been produced in the seething cauldron of the early Universe. Some of these particles might have survived until now, and may make up at least part of the dark matter, so some of the constituents of the Universe may well be exotic non-baryonic particles. Ordinary matter, of which we are made, may be but a small contaminating stain on the vast bulk of cosmic material whose nature is yet to be determined. This adds another dimension to the Copernican Principle: not only are we no longer at the centre of the cosmos, we’re not even made from the same stuff as most of the Universe.

  The third category of evidence is based on astrophysical arguments. The difference between these arguments and the intrinsically cosmological measurements discussed above is that they look at individual objects rather than the properties of the space between them. In effect, one is trying to determine the density of the Universe by weighing its constituents one by one. For example, one can use the internal dynamics of galaxies to work out their masses, by assuming that the rotation of a galactic disk is maintained by gravity in much the same way as the motion of the Earth around the Sun is governed by the Sun’s gravity. It is possible to calculate the mass of the Sun from the velocity of the Earth in its orbit, and a similar calculation can be done for galaxies: the orbital speeds of the stars in a galaxy are determined by the total mass of the galaxy pulling on them. The principle can also be extended to clusters of galaxies, and to systems of even larger size. These investigations overwhelmingly point to the existence of much more matter in galaxies than one sees there in the form of stars like our Sun. This is the famous dark matter: matter we can’t see, but whose existence we infer from its gravitational effects.
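The calculation described above can be sketched in a few lines. For a circular orbit, the mass enclosed within the orbit follows from M = v²r/G; the galaxy figures below are illustrative round numbers of my own choosing, not values from the text:

```python
G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(v, r):
    """Mass (kg) needed to hold a body in a circular orbit of speed v (m/s)
    at radius r (m), from balancing gravity against centripetal acceleration."""
    return v**2 * r / G

# Earth around the Sun: v ~ 29.8 km/s at r ~ 1 AU gives the Sun's mass.
m_sun = enclosed_mass(2.98e4, 1.496e11)   # ~2e30 kg

# A star orbiting at ~220 km/s some 50,000 light years from a galaxy's centre
# (illustrative numbers) implies a mass of order 10^11 Suns inside its orbit,
# far more than the visible stars can account for.
light_year = 9.46e15  # metres
m_galaxy = enclosed_mass(2.2e5, 5e4 * light_year)
print(m_sun, m_galaxy / m_sun)
```

Flat rotation curves, where v stays roughly constant as r grows, make the enclosed mass rise linearly with radius; that is the dynamical signature of dark matter.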

  Rich clusters of galaxies – systems millions of light years across consisting of huge agglomerations of galaxies – also contain more matter than is associated with the individual galaxies in them. The exact amount of matter is unclear, but there is very strong evidence that there is enough matter in rich cluster systems to suggest that Ω is certainly as big as 0.1, and possibly even larger than 0.3. Tentative evidence from the dynamics of even larger structures – superclusters of clusters that are tens of millions of light years in size – suggests there

  17. The Coma cluster. This is an example of a rich cluster of galaxies. Aside from the odd star (such as the one to the right of the frame), the objects in this picture are all galaxies contained within a giant cluster. Such enormous clusters are fairly rare but contain phenomenal amounts of mass, up to 100,000,000,000,000 times the mass of the Sun.

  18. Coma in X-rays. As well as the many hundreds of galaxies seen in the previous picture, clusters such as Coma also contain very hot gas that can be seen in the X-radiation it emits. This picture was taken by the ROSAT satellite.

  may be even more dark matter lurking in the space between clusters. These dynamical arguments have more recently been tested against, and confirmed by, independent observations of the gravitational lensing produced by clusters, and by measurements of the properties of the very hot X-ray-emitting gas that pervades them. Intriguingly, the fraction of baryonic matter in clusters, compared to their total mass, seems much larger than the global value allowed by nucleosynthesis if there is a critical density of matter overall. This so-called baryon catastrophe means either that the overall density of matter is much lower than the critical value, or that some unknown process has concentrated baryonic matter in clusters.

  Finally, we have clues based on attempts to understand the origin of cosmological structure: how the considerable lumpiness and irregularity we observe can have developed within a Universe that is required to be largely smooth by the Cosmological Principle. How this is thought to happen in the Big Bang models is discussed in more detail in the next chapter. The basic principles are, I believe, relatively well understood. The details, however, turn out to be incredibly complicated and prone to all kinds of uncertainty and bias. Models can be, and have been, constructed which seem to fit all the available data with Ω very close to unity; others do the same with Ω much less than this. This may sound a bit depressing, but this kind of study probably holds the ultimate key to a successful determination of Ω. If more detailed measurements of the features in the microwave background can be made, then the properties of these features will tell us immediately what the density of matter must be. And, as a bonus, they will also determine the Hubble constant, bypassing all the tedious business of the cosmological distance ladder. We can only hope that the satellites planned to do this, MAP (NASA) and the Planck Surveyor (ESA), will fly successfully in the next few years. Recent balloon experiments have shown that this appears to be feasible, but I’ll leave further discussion to Chapter 7.

  19. Gravitational lensing. Rich clusters can be weighed by observing the distortion of light from background galaxies as it passes through the cluster. In this beautiful example of the cluster Abell 2218, light from background sources is focused into a complicated pattern of arcs as the cluster acts as a giant lens. These features reveal the amount of mass contained within the cluster.

  We can summarize the status of the evidence by saying that the vast majority of cosmologists probably accept that the value of Ω cannot be smaller than 0.2. Even this minimal value requires that most of the matter in the Universe is dark. It also means that at least some of it cannot be in the form of protons and neutrons (baryons), which is where most of the mass resides in the material familiar from everyday experience. In other words, there must be non-baryonic dark matter. Many cosmologists favour a value of Ω around 0.3, which seems to be consistent with most of the observational evidence. Some have claimed that the evidence supports a density close to the critical value, so that Ω may be very close to unity. This is partly because of the accumulating astronomical evidence for dark matter, but also because of the theoretical realization that non-baryonic matter might be produced at very high energies in the Big Bang.

  The cosmic tightrope

  The considerable controversy surrounding Ω is only partly caused by disagreements resulting from the difficulty of assessing the reliability and accuracy of (sometimes conflicting) observational evidence. The most vocal arguments in favour of a high value for Ω (i.e. close to unity) are based on theoretical, rather than observational, arguments. One might be inclined to dismiss such arguments as mere prejudice, but they have their roots in a deep mystery inherent to the standard Big Bang theory and which cosmologists take very seriously indeed.

  To understand the nature of this mystery, imagine you are standing outside a sealed room. The contents of the room are hidden from you, except for a small window covered by a little door. You are told that you can open the door at any time you wish, but only once, and only briefly. You are told that the room is bare, except for a tightrope suspended in the middle about two metres in the air, and a man who, at some indeterminate time in the past, began to walk the tightrope. You know also that if the man falls, he will stay on the floor until you open the door. If he doesn’t fall, he will continue walking the tightrope until you look in.

  What do you expect to see when you open the door? Whether you expect the man to be on the rope or on the ground depends on information you don’t have. If he is a circus artist, he might well be able to walk to and fro along the rope for hours on end without falling. If, on the other hand, he is not a specialist in this area (like most of us), his stay on the rope will be relatively brief. One thing, however, is obvious. If the man falls, it will take him a very short time to fall from the rope to the floor. You would be very surprised, therefore, if your peep through the window happened to catch the man in transit from rope to ground. It is reasonable, on the grounds of what you know about the situation, to expect the man to be either on the rope or on the ground when you look; if you were to see him in mid-tumble, you would conclude that something fishy was going on.

  This may not seem to have much to do with Ω, but the analogy becomes apparent with the realization that Ω does not have a constant value as time goes by. In the standard Friedmann models, Ω evolves, and does so in a very peculiar way. At times arbitrarily close to the Big Bang, these models are all described by a value of Ω arbitrarily close to unity. To put this another way, look at Figure 16. Regardless of the behaviour at late times, all three curves shown get closer and closer together near the beginning and, in particular, approach the ‘flat Universe’ line. As time passes, models with Ω just a little greater than unity in the early stages develop larger and larger values of Ω, far greater than unity when recollapse begins. Universes that start out with Ω just less than unity eventually expand much faster than the flat model, and end up with values of Ω very close to zero. In the latter case, which is probably the more relevant given the many indications that Ω is less than unity, the transition from Ω near unity to Ω near zero is very rapid.
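This behaviour can be made concrete with a minimal sketch. Assuming a matter-only Friedmann model, the density parameter at scale factor a (with a = 1 today) works out to Ω(a) = Ω₀ / (Ω₀ + (1 − Ω₀)a), which is unity when space is flat and otherwise drifts away from it as a grows:

```python
def omega(a, omega0=0.3):
    """Density parameter at scale factor a (a = 1 today) for a
    matter-only Friedmann model with present-day value omega0."""
    return omega0 / (omega0 + (1.0 - omega0) * a)

# Near the Big Bang (small a) Omega is pinned close to 1; long after
# the present epoch (large a) an open model's Omega plunges towards 0.
for a in (1e-6, 1e-3, 1.0, 1e3):
    print(f"a = {a:>8g}   Omega = {omega(a):.6f}")
```

For Ω₀ = 0.3 this prints values within a few parts in a million of unity at a = 10⁻⁶, dropping below 10⁻³ by a = 1000: the tightrope walker clings to the wire early on and then falls fast.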

  Now we can see the problem. If Ω is, say, 0.3, then in the very early stages of cosmic history it was very close to unity, but less than this value by a tiny amount. In fact, it really is a tiny amount indeed. At the Planck time, for example (i.e. 10⁻⁴³ seconds after the Big Bang), Ω had to differ from unity only in the 60th decimal place. As time went by, Ω hovered close to the critical value, only beginning to diverge rapidly in the relatively recent past; in the cosmologically near future it will be extremely close to zero. But now, it is as if we have caught the tightrope walker right in the middle of his fall. This seems very surprising, to put it mildly.
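The ‘60th decimal place’ figure can be checked as an order of magnitude. In a radiation-dominated model |Ω − 1| grows roughly in proportion to time, so scaling today’s order-unity deviation back to the Planck time gives (this is a crude sketch that ignores the later matter-dominated era, and uses round values for the two timescales):

```python
import math

t_planck = 5.4e-44   # Planck time, seconds
t_now = 4.3e17       # rough age of the Universe, seconds

# |Omega - 1| scales roughly linearly with t in the radiation era, so the
# deviation at the Planck time is smaller than today's by about t_planck/t_now.
deviation = t_planck / t_now
print(math.log10(deviation))   # about -61: tuned to ~60 decimal places
```

A more careful calculation changes the exponent by a little, but not the conclusion: the initial conditions must be tuned to dozens of decimal places.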

  This paradox has become known as the Cosmological Flatness Problem, and it arises from the incompleteness of the standard Big Bang theory. That it is such a big problem convinced many scientists that it needed a big solution. The only way that seemed likely to resolve the conundrum was that our Universe really had to be a professional circus artist, to stretch the metaphor to breaking point. Obviously, Ω is not close to zero, as we have strong evidence of a lower limit to its value around 20 per cent. This rules out the man-on-the-ground alternative. The argument then goes that Ω must be equal to unity very closely, and that something must have happened in primordial times to single out this value very accurately.

  Inflation and flatness

  The happening that did this is claimed to be cosmological inflation, a speculation, originally made by Alan Guth in 1981, about the very early stages of the Big Bang model. Inflation involves a curious change in the properties of matter at very high energies known as a phase transition.

  We have already come across an example of a phase transition. One occurs in the standard model about one-millionth of a second after the Big Bang, and it involves the interactions between quarks. At low temperatures, quarks are confined in hadrons, whereas at higher temperatures they form a quark–gluon plasma. In between, there is a phase transition. In many unified theories there can be further phase transitions at even higher temperatures, all marking changes in the form and properties of matter and energy in the Universe. Under certain circumstances a phase transition can be accompanied by the appearance of energy in empty space; this is called vacuum energy. If this happens, the Universe begins to expand much more rapidly than it does in the standard Friedmann models. This is cosmic inflation.

  Inflation has had a great impact on cosmological theory over the last twenty years. In this context, the most important thing about it is that the phase of extravagant expansion – which is very short-lived – actually reverses the way Ω would otherwise change with time. Ω is driven hard towards unity when inflation starts, rather than drifting away from it as it does in the cases described above. Inflation acts like a safety harness, pushing our tightrope walker back onto the wire whenever he looks like falling. An easy way of understanding how this happens is to exploit the connection I have established already between the value of Ω and the curvature of space. Remember that a flat space corresponds to a critical density, and therefore to a value of Ω equal to unity. If Ω differs from this magic value then space may be curved. If one takes a highly curved balloon and blows it up to an enormous size, say the size of the Earth, then its surface will appear flat. In inflationary cosmology, the balloon starts off a tiny fraction of a centimetre across and ends up larger than the entire observable Universe. If the theory of inflation is correct, then we should expect to be living in a Universe which is very flat indeed. On the other hand, even if Ω were to turn out to be very close to unity, that wouldn’t necessarily prove that inflation happened. Some other mechanism, perhaps associated with quantum gravitational phenomena, might have trained our Universe to walk the tightrope. These theoretical ideas are extremely important, but they cannot themselves decide the issue. Ultimately, whether theorists like it or not, we have to accept that cosmology has become an empirical science. We may have theoretical grounds for suspecting that Ω should be very close to unity, but observations must prevail in the end.

  The sting

  The question that emerges from all this is: if, as tentatively seems to be the case, Ω is significantly smaller than unity, do we have to abandon inflation? The answer is ‘not necessarily’. For one thing, some models of inflation have been constructed that can produce an open, negatively curved Universe. Many cosmologists don’t like these models, which do appear rather contrived. More importantly, there are now indications that the connection between Ω and the geometry of space may be less straightforward than has previously been thought. After many years in the wilderness, the classical cosmological tests I mentioned earlier have staged a dramatic comeback. Two international teams of astronomers have been studying the properties of a particular type of exploding star, the Type Ia supernova.

  A supernova explosion marks the dramatic endpoint of the life of a massive star. Supernovae are among the most spectacular phenomena known to astronomy. They are more than a billion times brighter than the Sun and can outshine an entire galaxy for several weeks. Supernovae have been observed throughout recorded history. A supernova observed and recorded in 1054 gave rise to the Crab Nebula, a cloud of dust and debris inside which lies a rapidly rotating star called a pulsar. The great Danish astronomer Tycho Brahe observed a supernova in 1572. The last such event to be seen in our galaxy was recorded in 1604 and was known as Kepler’s star. Although the average rate of these explosions in the Milky Way appears to be one or two every century or so, based on ancient records, none has been observed for nearly 400 years. In 1987, however, a supernova did explode in the Large Magellanic Cloud and was visible to the naked eye.

  There are two distinct kinds of supernova, labelled Type I and Type II. Spectroscopic measurements reveal the presence of hydrogen in Type II supernovae, but this is absent in the Type I versions. Type II supernovae are thought to originate directly from the explosions of massive stars, in which the core of the star collapses to a kind of dead relic while the outer shell is ejected into space. The final state of such an explosion would be a neutron star or a black hole. Type II supernovae may result from the collapse of stars of different masses, so there is considerable variation in their properties from one to another. The Type I supernovae are further subdivided into Types Ia, Ib and Ic, depending on details of the shape of their spectra. The Type Ia supernovae are of particular interest. These have very uniform peak luminosities, because they are thought to result from the same kind of explosion. The usual model for these events is that a white dwarf star gains mass by accretion from a companion. When the mass of the white dwarf exceeds a critical value called the Chandrasekhar mass (about 1.4 times the mass of the Sun), its outer parts explode while its central parts collapse. Since the mass involved in the explosion is always very close to this critical value, these objects are expected always to liberate the same amount of energy. This regularity means that Type Ia supernovae are very promising objects with which to test the curvature of space-time and the deceleration rate of the Universe.

 
