A Short History of Nearly Everything: Special Illustrated Edition

by Bill Bryson


  In the 1960s, in an attempt to bring just a little simplicity to matters, the Caltech physicist Murray Gell-Mann invented a new class of particles, essentially, in the words of Steven Weinberg, “to restore some economy to the multitude of hadrons”—a collective term used by physicists for protons, neutrons and other particles governed by the strong nuclear force. Gell-Mann’s theory was that all hadrons were made up of still smaller, even more fundamental particles. His colleague Richard Feynman wanted to call these new basic particles partons, as in Dolly, but was over-ruled. Instead they became known as quarks.

  Explaining the unexplainable: the American theoretical physicist and Nobel laureate Richard Feynman lecturing on the vastly complex theory of quarks, the still smaller particles from which hadrons (protons, neutrons and the other particles governed by the strong nuclear force) are built. (credit 11.6)

  Gell-Mann took the name from a line in Finnegans Wake: “Three quarks for Muster Mark!” (Discriminating physicists rhyme the word with storks, not larks, even though the latter is almost certainly the pronunciation Joyce had in mind.) The fundamental simplicity of quarks was not long-lived. As they became better understood it was necessary to introduce subdivisions. Although quarks are much too small to have colour or taste or any other physical characteristics we would recognize, they became clumped into six categories—up, down, strange, charm, top and bottom—which physicists oddly refer to as their “flavours,” and these are further divided into the colours red, green and blue. (One suspects that it was not altogether coincidental that these terms were first applied in California during the age of psychedelia.)

  Eventually out of all this emerged what is called the Standard Model, which is essentially a sort of parts kit for the subatomic world. The Standard Model consists of six quarks, six leptons, five known bosons and a postulated sixth, the Higgs boson (named for Peter Higgs, a physicist at the University of Edinburgh), plus three of the four physical forces: the strong and weak nuclear forces and electromagnetism.

  The arrangement essentially is that among the basic building blocks of matter are quarks; these are held together by particles called gluons; and together quarks and gluons form protons and neutrons, the stuff of the atom’s nucleus. The lepton family includes electrons and neutrinos. Quarks and leptons together are called fermions. Bosons (named for the Indian physicist S. N. Bose) are particles that produce and carry forces, and include photons and gluons. The Higgs boson may or may not actually exist; it was invented simply as a way of endowing particles with mass.

  It is all, as you can see, just a little unwieldy, but it is the simplest model that can explain all that happens in the world of particles. Most particle physicists feel, as Leon Lederman remarked in a 1985 television documentary, that the Standard Model lacks elegance and simplicity. “It is too complicated. It has too many arbitrary parameters,” Lederman said. “We don’t really see the creator twiddling twenty knobs to set twenty parameters to create the universe as we know it.” Physics is really nothing more than a search for ultimate simplicity, but so far all we have is a kind of elegant messiness—or as Lederman put it: “There is a deep feeling that the picture is not beautiful.”

  Three of the twentieth century’s greatest physicists. Left: Murray Gell-Mann (credit 11.7a) Middle: Satyendra Bose (credit 11.7b) Right: Leon Lederman (credit 11.7c)

  The Standard Model is not only ungainly but incomplete. For one thing, it has nothing at all to say about gravity. Search through the Standard Model as you will and you won’t find anything to explain why when you place a hat on a table it doesn’t float up to the ceiling. Nor, as we’ve just noted, can it explain mass. In order to give particles any mass at all we have to introduce the notional Higgs boson; whether it actually exists is a matter for twenty-first century physics. As Feynman cheerfully observed: “So we are stuck with a theory, and we do not know whether it is right or wrong, but we do know that it is a little wrong, or at least incomplete.”

  In an attempt to draw everything together, physicists have come up with something called superstring theory. This postulates that all those little things like quarks and leptons that we had previously thought of as particles are actually “strings”—vibrating strands of energy that oscillate in eleven dimensions, consisting of the three we know already plus time and seven other dimensions that are, well, unknowable to us. The strings are very tiny—tiny enough to pass for point particles.

  The Standard Model, the simplest scheme yet devised to convey the decidedly unsimple world of particles. The model constitutes “a kind of elegant messiness.” (credit 11.8)

  By introducing extra dimensions, superstring theory enables physicists to pull together quantum laws and gravitational ones into one comparatively tidy package; but it also means that anything scientists say about the theory begins to sound worryingly like the sort of thoughts that would make you edge away if conveyed to you by a stranger on a park bench. Here, for example, is the physicist Michio Kaku explaining the structure of the universe from a superstring perspective:

  The heterotic string consists of a closed string that has two types of vibrations, clockwise and counterclockwise, which are treated differently. The clockwise vibrations live in a ten-dimensional space. The counterclockwise live in a 26-dimensional space, of which 16 dimensions have been compactified. (We recall that in Kaluza’s original five-dimensional theory, the fifth dimension was compactified by being wrapped up into a circle.)

  And so it goes, for some 350 pages.

  String theory has further spawned something called M theory, which incorporates surfaces known as membranes—or simply branes to the hipper souls of the world of physics. This, I’m afraid, is the stop on the knowledge highway where most of us must get off. Here is a sentence from the New York Times, explaining this as simply as possible to a general audience:

  The ekpyrotic process begins far in the indefinite past with a pair of flat empty branes sitting parallel to each other in a warped five-dimensional space … The two branes, which form the walls of the fifth dimension, could have popped out of nothingness as a quantum fluctuation in the even more distant past and then drifted apart.

  No arguing with that. No understanding it either. Ekpyrotic, incidentally, comes from the Greek word for conflagration.

  Andrew Strominger and Cumrun Vafa of Harvard jocularly demonstrate string theory, which holds that quarks resemble strings of energy that exist in multiple dimensions, many of them beyond the comprehension of humans. (credit 11.9)

  Matters in physics have now reached such a pitch that, as Paul Davies noted in Nature, it is “almost impossible for the non-scientist to discriminate between the legitimately weird and the outright crackpot.” The question came interestingly to a head in the autumn of 2002 when two French physicists, twin brothers Igor and Grichka Bogdanov, produced a theory of ambitious density involving such concepts as “imaginary time” and the “Kubo–Martin–Schwinger condition,” and purporting to describe the nothingness that was the universe before the Big Bang—a period that was always assumed to be unknowable (since it predated the birth of physics and its properties).

  Almost at once the Bogdanov theory excited debate among physicists as to whether it was twaddle, a work of genius or a hoax. “Scientifically, it’s clearly more or less complete nonsense,” Columbia University physicist Peter Woit told the New York Times, “but these days that doesn’t much distinguish it from a lot of the rest of the literature.”

  Karl Popper, whom Steven Weinberg has called “the dean of modern philosophers of science,” once suggested that there may not in fact be an ultimate theory for physics—that, rather, every explanation may require a further explanation, producing “an infinite chain of more and more fundamental principles.” A rival possibility is that such knowledge may simply be beyond us. “So far, fortunately,” writes Weinberg in Dreams of a Final Theory, “we do not seem to be coming to the end of our intellectual resources.”

  Saul Steinberg (credit 11.10)

  Almost certainly this is an area that will see further developments of thought, and almost certainly again these thoughts will be beyond most of us. While physicists in the middle decades of the twentieth century were looking perplexedly into the world of the very small, astronomers were finding no less arresting an incompleteness of understanding in the universe at large.

  When we last met Edwin Hubble, he had determined that nearly all the galaxies in our field of view are flying away from us, and that the speed and distance of this retreat are neatly proportional: the further away the galaxy, the faster it is moving. Hubble realized that this could be expressed with a simple equation, H₀ = v/d (where H₀ is the constant, v is the recessional velocity of a flying galaxy and d its distance away from us). H₀ has been known ever since as the Hubble constant and the whole as Hubble’s Law. Using his formula, Hubble calculated that the universe was about two billion years old, which was a little awkward because even by the late 1920s it was increasingly evident that many things within the universe—including, probably, the Earth itself—were older than that. Refining this figure has been an ongoing preoccupation of cosmology.
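  For readers who want to see where the two-billion-year figure comes from, here is a minimal sketch of the arithmetic, assuming (as Hubble’s early measurements implied) a constant of roughly 500 kilometres per second per megaparsec, far larger than modern values. Running the expansion backwards, the time a galaxy needs to reach its present distance at its present speed is simply distance divided by velocity:

  \[
    t \;\approx\; \frac{d}{v} \;=\; \frac{1}{H_0}
    \;\approx\; \frac{3.09\times 10^{19}\ \text{km per Mpc}}{500\ \text{km/s per Mpc}}
    \;\approx\; 6\times 10^{16}\ \text{s}
    \;\approx\; 2\ \text{billion years}.
  \]

  A larger constant thus means a younger universe, which is why the later disputes over the constant’s value translate directly into disputes over the universe’s age.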

  Edwin Hubble photographed shortly before his death in 1953. By measuring the speed at which galaxies are receding, Hubble in his later years came up with a formula known as Hubble’s Law, suggesting that the universe was about two billion years old. The figure is universally agreed to be wrong, but by how much is still undecided. (credit 11.11)

  Almost the only thing constant about the Hubble constant has been the amount of disagreement over what value to give it. In 1956, astronomers discovered that Cepheid variables were more variable than they had thought; they came in two varieties, not one. This allowed them to rework their calculations and come up with a new age for the universe of between seven billion and twenty billion years—not terribly precise, but at least old enough, at last, to embrace the formation of the Earth.

  In the years that followed there erupted a dispute that would run and run, between Allan Sandage, heir to Hubble at Mount Wilson, and Gérard de Vaucouleurs, a French-born astronomer based at the University of Texas. Sandage, after years of careful calculations, arrived at a value for the Hubble constant of 50, giving the universe an age of twenty billion years. De Vaucouleurs was equally certain that the Hubble constant was 100.² This would mean that the universe was only half the size and age that Sandage believed—ten billion years. Matters took a further lurch into uncertainty when in 1994 a team from the Carnegie Observatories in California, using measures from the Hubble Space Telescope, suggested that the universe could be as little as eight billion years old—an age even they conceded was younger than some of the stars within the universe. In February 2003, a team from NASA and the Goddard Space Flight Center in Maryland, using a new, far-reaching type of satellite called the Wilkinson Microwave Anisotropy Probe, announced with some confidence that the age of the universe is 13.7 billion years, give or take a hundred million years or so. There matters rest, at least for the moment.

  The difficulty in making final determinations is that there are often acres of room for interpretation. Imagine standing in a field at night and trying to decide how far away two distant electric lights are. Using fairly straightforward tools of astronomy you can easily enough determine that the bulbs are of equal brightness and that one is, say, 50 per cent more distant than the other. But what you can’t be certain of is whether the nearer light is, let us say, a 58-watt bulb that is 37 metres away or a 61-watt bulb that is 36.5 metres away. On top of that you must make allowances for distortions caused by variations in the Earth’s atmosphere, by intergalactic dust, by contaminating light from foreground stars and many other factors. The upshot is that your computations are necessarily based on a series of nested assumptions, any of which could be a source of contention. There is also the problem that access to telescopes is always at a premium and historically measuring red shifts has been notably costly in telescope time. It could take all night to get a single exposure. In consequence, astronomers have sometimes been compelled (or willing) to base conclusions on notably scanty evidence. In cosmology, as the journalist Geoffrey Carr has suggested, we have “a mountain of theory built on a molehill of evidence.” Or as Martin Rees has put it: “Our present satisfaction [with our state of understanding] may reflect the paucity of the data rather than the excellence of the theory.”
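  The root of the ambiguity is the inverse-square law. All a telescope measures directly is apparent brightness, and intrinsic luminosity and distance trade off against one another. As a minimal sketch (the symbols here are generic, not tied to any particular instrument or survey), the brightness b of a source of luminosity L at distance d is

  \[
    b \;=\; \frac{L}{4\pi d^{2}},
  \]

  so any pair of luminosity and distance with the same ratio L/d² produces exactly the same reading, and only an independent handle on one of the two, such as a “standard candle” of known intrinsic luminosity, breaks the tie.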

  This uncertainty applies, incidentally, to relatively nearby things as much as to the distant edges of the universe. As Donald Goldsmith notes, when astronomers say that the galaxy M87 is 60 million light years away, what they really mean (“but do not often stress to the general public”) is that it is somewhere between 40 million and 90 million light years away—not quite the same thing. For the universe at large, matters are naturally magnified. For all the éclat surrounding the latest pronouncements, we remain a long way from unanimity.

  One interesting theory recently suggested is that the universe is not nearly as big as we thought; that when we peer into the distance some of the galaxies we see may simply be reflections, ghost images created by rebounded light.

  A necessarily fanciful artist’s rendering of “dark matter,” which is invisible to us and yet is believed to account for 90 per cent, or more, of all the matter in the universe. Dark matter was first theorized in the 1930s by Fritz Zwicky, but was widely dismissed during his lifetime. (credit 11.12)

  The fact is, there is a great deal, even at quite a fundamental level, that we don’t know—not least what the universe is made of. When scientists calculate the amount of matter needed to hold things together, they always come up desperately short. It appears that at least 90 per cent of the universe, and perhaps as much as 99 per cent, is composed of Fritz Zwicky’s “dark matter”—stuff that is by its nature invisible to us. It is slightly galling to think that we live in a universe that for the most part we can’t even see, but there you are. At least the names for the two main possible culprits are entertaining: they are said to be either WIMPs (for Weakly Interacting Massive Particles, which is to say specks of invisible matter left over from the Big Bang) or MACHOs (for MAssive Compact Halo Objects—really just another name for black holes, brown dwarfs and other very dim stars).

  Dark objects known as MACHOs surround the Milky Way galaxy in one theorized version of the depths of space. Such objects would consist of brown dwarf stars, neutron stars, black holes and possibly other light-shy objects, and together would account for the missing mass of the universe. (credit 11.13)

  Particle physicists have tended to favour the particle explanation of WIMPs, astrophysicists the stellar explanation of MACHOs. For a time MACHOs had the upper hand, but not nearly enough of them were detected, so sentiment swung back towards WIMPs—with the problem that no WIMP has ever been found. Because they are weakly interacting, they are (assuming they even exist) very hard to identify. Cosmic rays would cause too much interference, so scientists must go deep underground; one kilometre down, cosmic bombardments would be one-millionth of what they are on the surface. But even when all these candidates are added in, “two-thirds of the universe is still missing from the balance sheet,” as one commentator has put it. For the moment we might very well call them DUNNOS (for Dark Unknown Nonreflective Nondetectable Objects Somewhere).

  Recent evidence suggests not only that the galaxies of the universe are racing away from us, but that they are doing so at a rate that is accelerating. This is counter to all expectations. It appears that the universe may be filled not only with dark matter, but with dark energy. Scientists sometimes also call it vacuum energy or quintessence. Whatever it is, it seems to be driving an expansion that no-one can altogether account for. The theory is that empty space isn’t so empty at all—that there are particles of matter and antimatter popping into existence and popping out again—and that these are pushing the universe outwards at an accelerating rate. Improbably enough, the one thing that resolves all this is Einstein’s cosmological constant—the little piece of maths he dropped into the General Theory of Relativity to hold the universe static, and that he called “the biggest blunder of my life.” It now appears that he may have got things right after all.

  The upshot of all this is that we live in a universe whose age we can’t quite compute, surrounded by stars whose distances from us and each other we don’t altogether know, filled with matter we can’t identify, operating in conformance with physical laws whose properties we don’t truly understand.

  (credit 11.14)

  And on that rather unsettling note, let’s return to Planet Earth and consider something that we do understand—though by now you perhaps won’t be surprised to hear that we don’t understand it completely and what we do understand we haven’t understood for long.

  1 There are practical side-effects to all this costly effort. The World Wide Web is a CERN offshoot. It was invented by a CERN scientist, Tim Berners-Lee, in 1989.

  2 You are of course entitled to wonder what is meant exactly by “a constant of 50” or “a constant of 100.” The answer lies in astronomical units of measure. Except conversationally, astronomers don’t use light years. They use a distance called the parsec (a contraction of parallax and second), based on a universal measure called the stellar parallax and equivalent to 3.26 light years. Really big measures, like the size of the universe, are measured in megaparsecs: 1 megaparsec = 1 million parsecs. The constant is expressed in terms of kilometres per second per megaparsec. Thus when astronomers refer to a Hubble constant of 50, what they really mean is “50 kilometres per second per megaparsec.” For most of us that is of course an utterly meaningless measure; but then, with astronomical measures most distances are so huge as to be utterly meaningless.
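  To make the measure slightly less meaningless, here is a rough back-of-the-envelope conversion (assuming only the standard figures of about 3.09 × 10¹⁹ kilometres per megaparsec and about 3.16 × 10⁷ seconds per year):

  \[
    H_0 = 50\ \tfrac{\text{km/s}}{\text{Mpc}}
    \;\Rightarrow\;
    \frac{1}{H_0} \;\approx\; \frac{3.09\times 10^{19}}{50}\ \text{s}
    \;\approx\; 6.2\times 10^{17}\ \text{s}
    \;\approx\; 20\ \text{billion years},
  \]

  while a constant of 100 halves that to roughly ten billion years, which is precisely the gap between Sandage’s and de Vaucouleurs’s figures for the age of the universe.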

 
