
Farewell to Reality


by Jim Baggott


  On the surface, it really seems as though we ought to be able to resolve this paradox with ease. But we can’t. There is obviously no evidence for peculiar superposition states of live-and-dead things or of ‘classical’ macroscopic objects of any description.* We can avoid the infinite regress if we treat the measuring instrument (in this case, the Geiger counter) as a classical object and argue that classical objects cannot form superpositions in the way that quantum objects can.

  But the questions remain: why should this be and how does it work? Perhaps more worryingly, if some kind of external ‘classical’ macroscopic measuring device is required, then precisely what was it that, in the early moments of the big bang when the universe was the size of a quantum object, collapsed the wavefunction of the universe? Is it necessary for us to invoke an ultimate ‘measurer-of-all-things’?

  Irish theorist John Bell called this seemingly arbitrary split between measured quantum object and classical perceiving subject ‘shifty’:

  What exactly qualifies some physical systems to play the role of ‘measurer’? Was the wavefunction of the world waiting to jump for thousands of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system … with a PhD?4

  When the wavefunction collapses, it does so instantaneously. That doesn’t seem like much of a problem if you say it quickly and move on to something else. But, of course, this is a big problem. In our universe nothing, but nothing, happens instantaneously across large distances. If I happen to make a measurement on one of a pair of entangled photons that has travelled halfway across the universe before reaching my detector, how does the photon on the other side of the universe discover what result I got? Surely this is completely at odds with Einstein’s special theory of relativity, which assumes that the speed of light cannot be exceeded?

  This isn’t a hypothetical scenario. Experiments have been performed on entangled photons which have allowed a lower limit to be placed on the speed with which the wavefunction had to collapse to give the results observed. This speed was estimated to be at least twenty thousand times faster than the speed of light.

  Many physicists have reconciled themselves to this kind of result by noting that, despite the apparent speed of the collapse process (if it really happens at all), it cannot be used to communicate information. No matter how hard we try, we cannot take advantage of this seeming example of an instantaneous physical effect to send messages of any kind. And this, they claim, allows quantum measurement peacefully (if rather uneasily) to co-exist with special relativity.

  Experiments designed to test the foundations of quantum theory, such as the tests of Bell’s and Leggett’s inequalities, have all served to confirm the essential correctness of the theory. But they also serve to deepen the mystery. Instead of being reassured, many physicists have become increasingly alarmed, as Bell himself noted in 1985 of the Aspect experiment:

  It is a very important experiment, and perhaps it marks the point where one should stop and think for a time, but I certainly hope it is not the end. I think that the probing of what quantum mechanics means must continue, and in fact it will continue, whether we agree or not that it is worth while, because many people are sufficiently fascinated and perturbed by this that it will go on.5

  The quantum measurement problem is extremely stubborn. We should be clear that any theoretical structure which attempts to resolve it takes us beyond quantum theory and therefore beyond the current authorized version of reality. Some of these attempts have led to some really bizarre interpretations of the physical world and, perhaps inevitably, to fairy-tale physics.

  No rhyme or reason

  In our rush to examine the theories that are supposed to help us make sense of the world, we didn’t really stop to think about what kind of ultimate, all-encompassing theory we might actually like to have. I appreciate that our personal preferences or desires have no influence on nature itself, and that, to a large extent, any kind of ultimate theory must be shaped to fit nature’s particular foibles. But there is nevertheless an important sense in which we seek a theory that we would find satisfying. And what is satisfaction but a consequence of fulfilling our personal preferences or desires?

  I don’t think many scientists (or non-scientists, for that matter) would dispute that a satisfying theory is one in which we are obliged to assume little or (even better) no a priori knowledge. The term a priori means in this context ‘independent of experience’. In other words, the ultimate theory would be one that encapsulates all the relevant laws of physics in a consistent framework, such that all we would need to do to calculate what happens in a physical system in a specific set of circumstances would be to define the circumstances appropriately and press the ‘enter’ key.

  To be fair, we would probably need to specify some fundamental physical constants — such as Planck’s constant, the charge on the electron, the speed of light, and so on — but that would be it. Everything else would flow purely from the physics.*

  As Einstein himself put it:

  It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few in number as possible without having to surrender the adequate representation of a single datum of experience.6

  If this is indeed the kind of vision we have in the backs of our minds, then the current standard model of particle physics, in the form QCD × QFD × QED, falls way short. It doesn’t accommodate gravity. It requires a collection of 61 ‘elementary’ particles.** And it is held together by a set of parameters that must be entered a posteriori (by reference to experience, meaning that they can’t be calculated and so must be measured). American physicist Leon Lederman summarized the situation in 1993:

  The idea is that twenty or so numbers must be specified in order to begin the universe. What are these numbers (or parameters, as they are called in the physics world)? Well, we need twelve numbers to specify the masses of the quarks and leptons. We need three numbers to specify the strengths of the forces … We need some numbers to show how one force relates to another. Then we need … a mass for the Higgs particle, and a few other handy items.7

  Now we believe that the elementary particles acquire their mass through interactions with the Higgs field. This seems to suggest that there might be a way of calculating the masses a priori. In truth, however, we know nothing about the strengths of these interactions, other than the fact that they must produce the masses we observe experimentally. The interaction strengths cannot be deduced from within the standard model. Instead of putting the masses of the elementary particles into the standard model ‘by hand’, we put in the strengths of the interactions of these particles with the Higgs field necessary to reproduce the particle masses.
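
  To put a little flesh on this, the standard-model relation (in the conventional normalization) ties each fermion mass to its Higgs coupling y and to the Higgs field’s vacuum value v of about 246 GeV:

  \[
  m = \frac{y\,v}{\sqrt{2}}, \qquad\text{so}\qquad y_{\text{top}} \approx \frac{\sqrt{2}\times 173\ \text{GeV}}{246\ \text{GeV}} \approx 1, \qquad y_{\text{electron}} \approx 3\times 10^{-6}.
  \]

  Each coupling y is simply dialled to reproduce the measured mass; nothing in the model tells us why the top quark’s coupling should be close to one while the electron’s is a few parts in a million.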

  What kind of fundamental theory of particle physics is it that can’t predict the masses of its constituent elementary particles? Answer: one that is not very satisfying.

  No clue can be gained from the masses themselves. These are summarized in Figure 5, based on mass data from the Particle Data Group. For convenience, I’ve expressed the masses as multiples of the proton mass. Thus, the electron mass is 0.00054 times the proton mass. The top quark is 184 times the proton mass, and so on.

  For a long time the various flavours of neutrino (electron, muon and tau) were thought to be massless, but evidence emerged in the late 1990s that neutrinos can change their flavour, in a process called ‘oscillation’. Neutrino oscillation is not possible unless the particles possess very small masses, currently too small to measure accurately. The Particle Data Group suggests an upper limit of just 2 electron volts. For different reasons it has also proved difficult to get a handle on the masses of the up and down quarks, and ranges are therefore quoted.

  So, can you spot the pattern? Of course, masses increase in each successive generation of particles: the tau is about 17 times heavier than the muon, which is about 209 times heavier than the electron. The top quark is about 134 times heavier than the charm quark, which is about 527 times heavier than the up quark. The bottom quark is about 41 times heavier than the strange quark, which is about 21 times heavier than the down quark. Three down quarks (charge -1) are about 29 times heavier than the electron (charge -1); three strange quarks are 2.9 times heavier than the muon and three bottom quarks are 7.1 times heavier than the tau.

  Figure 5 The masses of the standard model matter particles, measured relative to the proton mass (938.27 MeV). Data are taken from listings provided by the Particle Data Group: http://pdg.lbl.gov/index.html
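
  If you want to check this arithmetic for yourself, here is a minimal Python sketch using illustrative Particle Data Group central values. The ratios it prints come out close to, though not exactly equal to, the rounded figures quoted above; where the light quarks are involved they can differ by ten per cent or more, because those masses are only known within broad ranges.

# Approximate particle masses in MeV (illustrative PDG-style central values;
# the light-quark masses are uncertain and quoted by the PDG only as ranges).
masses_mev = {
    "electron": 0.511, "muon": 105.7, "tau": 1777.0,
    "up": 2.3, "charm": 1275.0, "top": 173000.0,
    "down": 4.8, "strange": 95.0, "bottom": 4180.0,
}
proton_mev = 938.27

# Express each mass as a multiple of the proton mass, as in Figure 5.
for name, mass in masses_mev.items():
    print(f"{name:9s} {mass / proton_mev:12.5f} proton masses")

# Ratios between successive generations, roughly as quoted in the text.
pairs = [("tau", "muon"), ("muon", "electron"), ("top", "charm"),
         ("charm", "up"), ("bottom", "strange"), ("strange", "down")]
for heavy, light in pairs:
    print(f"{heavy}/{light}: {masses_mev[heavy] / masses_mev[light]:.0f}")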

  You can keep looking at different combinations if you like, but there is no evidence for a pattern. No rhyme or reason.

  Nature is probably telling us that expecting a pattern betrays our hopeless naivety. But, having found a rather consistent and compelling pattern in three generations of matter particles, I don’t think it was really asking too much to expect a similar pattern in the particle masses. But there is none. And there are no clues.

  Trouble with the hierarchy

  The hierarchy in question here concerns the relative strengths and characteristic mass-energy scales of the fundamental forces. Specifically, in the context of particle physics it concerns the relationship between gravity and the weak force and electromagnetism: why is gravity so much weaker?

  The standard model is based on the idea that the weak force and electromagnetism were once indistinguishable components of a single electro-weak force. The distinction between them was forced by symmetry-breaking at the end of the electro-weak epoch. The conditions of this period are re-created in the proton-proton collisions in the LHC, at mass-energies around a trillion (10^12) electron volts.

  Playing the same kind of game, we suppose that the strong nuclear force and electro-weak force were once likewise indistinguishable components of a single electro-nuclear force. But to re-create the conditions characteristic of the grand unified epoch, we would need to reach mass-energies of the order of a trillion trillion (10^24) electron volts. Pushing even further, to conditions in which gravity merges with the electro-nuclear force to produce the single primordial force that dominated the early stages of the big bang, takes us to the Planck epoch, characterized by the Planck mass, about 10,000 trillion trillion (10^28) electron volts.8
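
  For reference, the Planck mass quoted here is just a combination of fundamental constants:

  \[
  m_{\text{P}} = \sqrt{\frac{\hbar c}{G}} \approx 2.2\times 10^{-8}\ \text{kg} \approx 1.2\times 10^{19}\ \text{GeV} \approx 1.2\times 10^{28}\ \text{eV}.
  \]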

  Why is this a problem, exactly? Well, imposing a distinction between nature’s forces as a result of breaking the prevailing symmetries is not in itself a problem. The problem is that before the symmetry is broken, all the particles involved are identical. After the symmetry is broken, the particles involved exhibit incredibly divergent masses — different by 15 orders of magnitude — consistent with the different mass-energy scales of the forces. It’s as though the antenatal ultrasound reveals two monozygotic (identical twin) embryos, and yet when the symmetry is broken the mother gives birth to Arnold Schwarzenegger and Danny DeVito.

  Breaking the symmetry to produce such divergent mass-energy scales appears to require an awful lot of fine-tuning. And physicists get a bit twitchy when confronted by too much coincidence.

  The chickens all come home to roost when we consider the mass of the electro-weak Higgs boson. The standard quantum-theoretical approach to calculating this mass involves computing so-called radiative corrections to the particle’s ‘bare’ mass, thereby renormalizing it. This is a calculation that has proved to be beyond the theorists, which is why nobody knew what the mass of the Higgs boson should be before the search for it began. However, it is possible to anticipate how the calculation would look in principle.

  The radiative corrections involve taking account of all the different processes that a Higgs particle can undergo as it moves from place to place. These include virtual processes, involving the production of other particles and their anti-particles for a short time before these recombine back to the Higgs. The Higgs boson is obliged to couple to other particles in direct proportion to their masses, so virtual processes involving heavy particles such as the top quark would be expected to make significant contributions to the calculated mass of the Higgs.

  To cut a long story short, the mass of the Higgs would be expected to mushroom as a result of these corrections. Calculations predict a mass for the Higgs that is as big as the Planck mass.
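
  Schematically, and ignoring numerical details, the largest of these corrections comes from virtual top quarks and grows with the square of the energy scale \Lambda up to which the standard model is trusted:

  \[
  \left|\delta m_H^2\right| \sim \frac{3 y_t^2}{8\pi^2}\,\Lambda^2 .
  \]

  With \Lambda pushed up towards the Planck scale, this one term exceeds the square of a Higgs mass of around 125 GeV by something like thirty orders of magnitude.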

  If this were really the case, then the universe would be a very different place, and neither you nor I would be around to puzzle over it. If, as seems very likely, the Higgs boson has a mass around 125 GeV, then something must be happening to cancel out the contributions from all those radiative corrections, and so fine-tuning the scale of the weak force.

  But whatever it is, it is not to be found in the standard model.

  A Prayer for Owen Meany

  In John Irving’s 1989 novel A Prayer for Owen Meany, the title character believes he is an instrument of God, and taunts his best friend John Wheelwright as they practise a basketball shot over and over again. As dusk settles over their New Hampshire playground in late November or early December 1964, first the basket and then a nearby statue of Mary Magdalene slowly disappear into the darkness.

  ‘YOU CAN’T SEE HER, BUT YOU KNOW SHE’S STILL THERE — RIGHT?’ asks Meany, in his high-pitched, childlike voice.

  Yes, John knows the statue is still there.

  ‘YOU HAVE NO DOUBT SHE’S THERE?’ Meany nags.

  ‘Of course I have no doubt,’ John replies, getting exasperated.

  ‘BUT YOU CAN’T SEE HER — YOU COULD BE WRONG.’

  ‘No, I’m not wrong — she’s there, I know she’s there,’ John yells.

  ‘YOU ABSOLUTELY KNOW SHE’S THERE — EVEN THOUGH YOU CAN’T SEE HER?’

  ‘Yes!’

  ‘WELL, NOW YOU KNOW HOW I FEEL ABOUT GOD,’ says Owen Meany. ‘I CAN’T SEE HIM, BUT I ABSOLUTELY KNOW HE IS THERE.’9

  This neatly summarizes our attitude to dark matter. We can’t see it but we absolutely know it is there.

  Dark matter is unknown to the standard model of particle physics and is therefore by definition a big problem. Explaining it is definitely going to require something beyond current theories — it is going to require ‘new physics’.

  Ordinary ‘baryonic’ matter — the stuff of everyday substances — reveals itself through its interaction with electromagnetic radiation. So, we require a form of matter that exerts a gravitational pull but is unaffected by the electromagnetic force. Several alternative varieties of dark matter have been postulated, but cold dark matter consisting of weakly interacting, slow-moving particles is thought to be most compatible with the visible structures of galaxies and galactic clusters.

  There are several candidates, each more exotic than the last. WIMPs (weakly interacting massive particles) interact only via gravity and the weak force and so have many of the properties of neutrinos, except for the simple fact that they must be much heavier.

  But WIMPs are not the only dark matter candidates. In 1977, Italian physicist Roberto Peccei and Australian-born Helen Quinn proposed a solution to a niggling problem in QCD, in which a hypothetical new symmetry (called the Peccei-Quinn symmetry) is spontaneously broken. Steven Weinberg and Frank Wilczek subsequently showed that this symmetry-breaking would produce a new low-mass, electrically neutral particle, which they called the axion.

  Although of low mass, axions can account for dark matter provided there are enough of them. Once again, calculations seemed to suggest that, if they exist, axions would have been produced in abundance in the early universe and would be prevalent today.

  The Axion Dark Matter Experiment (ADMX), at the University of Washington’s Center for Experimental Nuclear Physics and Astrophysics, is looking for evidence of dark matter axions interacting with the strong magnetic field generated by a superconducting magnet. In such interactions, the axions are predicted to decay into microwave photons, which can be detected.

  Experiments to date have served to exclude one kind of strongly interacting axion in the mass range 1.9-3.5 millionths of an electron volt. In the next ten years the collaboration hopes either to find or exclude a weakly interacting axion in the mass range 2-20 millionths of an electron volt.
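
  The ‘microwave’ label follows directly from this mass range: an axion of mass-energy m_a c^2 would convert to a photon of frequency

  \[
  \nu = \frac{m_a c^2}{h} \approx \frac{2\ \text{to}\ 20\ \mu\text{eV}}{4.14\times 10^{-15}\ \text{eV s}} \approx 0.5\ \text{to}\ 5\ \text{GHz},
  \]

  squarely in the microwave band.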

  The last dark matter candidate we will consider here is the primordial black hole. Most readers will already know that a black hole is formed when a large star collapses. Its mass becomes so concentrated that not even light can escape the pull of its gravity. Astronomers have inferred the existence of two types of black hole, those with a mass around ten times the mass of the sun (10 M☉) and super-massive black holes with masses between a million and ten billion M☉ that reside at the centres of galaxies.

  Primordial black holes are not formed this way. It is thought that they might have been created in the early moments of the big bang, when wild fluctuations in the density of matter might have tripped over the threshold for black hole formation. Unlike black holes formed by the collapse of stars, primordial black holes would be small, with masses similar to those of asteroids, or about a ten billionth of M☉. To all intents and purposes, they would behave like massive particles.

  Options for detecting them are limited, however. In 1974, Stephen Hawking published a paper suggesting that, contrary to prevailing opinion, large black holes might actually emit radiation as a result of quantum fluctuations at the black hole’s event horizon, the point of no return beyond which nothing — matter or light — can escape. This came to be known as Hawking radiation. Its emission causes the black hole to lose mass and eventually evaporate in a small explosion (at least, small by astronomical standards).
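
  Ignoring the details of the emission spectrum, the evaporation time implied by Hawking’s result grows as the cube of the hole’s mass,

  \[
  t_{\text{evap}} \sim \frac{5120\,\pi\,G^2 M^3}{\hbar c^4},
  \]

  which becomes comparable to the present age of the universe for a hole born with a mass of very roughly 10^11 to 10^12 kilograms. Only the lightest primordial black holes, then, would be reaching the explosive end of their lives today.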

  If primordial black holes were formed in the early universe, and if they emit Hawking radiation, then we may be able to identify them through telltale explosions in the dark matter halo surrounding our own Milky Way galaxy. One of the many tasks of the Fermi Gamma-ray Space Telescope, launched by NASA on 11 June 2008, is to look for exploding primordial black holes based on their expected ‘signature’ bursts of gamma rays.

 
