The God Particle


by Leon Lederman


  Rubbia was given the honor of presenting his results to the CERN community, and, uncharacteristically, he was nervous; eight years of work had been invested. His talk was spectacular. He had all the goods and the showmanship to display them with passionate logic. Even the Rubbia-haters cheered. Europe had its Nobel Prize, duly given to Rubbia and Van der Meer in 1984.

  Some six months after the W success, the first evidence appeared for the existence of the neutral partner, the Z zero. With zero electric charge, it decays into, among many possibilities, an e+ and an e− (or a pair of muons, μ+ and μ−). Why? For those who fell asleep during the previous chapter: since the Z is neutral, the charges of its decay products must cancel each other out, so particles of opposite signs are logical decay products. Because both electron and muon pairs can be precisely measured, the Z0 is an easier particle to recognize than the W. The trouble is that the Z0 is heavier than the W, and fewer are made. Still, by late 1983, the Z0 was established by both UA-1 and UA-2. With the discovery of the W's and the Z0 and a determination that their masses are just what was predicted, the electroweak theory—which unified electromagnetism and the weak force—was solidly confirmed.

  TOPPING OFF THE STANDARD MODEL

  By 1992, tens of thousands of W's had been collected by UA-1 and UA-2, and by the new kid, CDF, at the Fermilab Tevatron. The mass of the W is now known to be about 79.31 GeV. Some two million Z0's were collected by CERN's "Z0 factory," LEP (Large Electron-Positron Storage Ring), an electron accelerator seventeen miles around. The Z0 mass is measured to be 91.175 GeV.

  Some accelerators became particle factories. The first factories—in Los Alamos, Vancouver, and Zurich—produced pions. Canada is now designing a kaon factory. Spain wants a tau-charm factory. There are three or four proposals for beauty or bottom factories, and the CERN Z0 factory is, in 1992, in full production. At SLAC a smaller Z0 project might more properly be called a loft, or perhaps a boutique.

  Why factories? The production process can be studied in great detail and, especially for the more massive particles, there are many decay modes. One wants samples of many thousands of events in each mode. In the case of the massive Z0, there are a huge number of modes, from which one learns much about the weak and electroweak forces. One also learns from what isn't there. For example, if the mass of the top quark is less than half that of the Z0, then we have (compulsory) Z0 → top + antitop. That is, a Z zero can decay, albeit rarely, into a meson, composed of a top quark lashed to an antitop quark. The Z0 is much more likely to decay into electron pairs or muon pairs or bottom-quark pairs, as mentioned. The success of the theory in accounting for these pairs encourages us to believe that the decay of Z0 into top/antitop is predictable. We say it is compulsory because of the totalitarian rule of physics. If we make enough Zs, according to the probabilities of quantum theory, we should see evidence of the top quark. Yet in the millions of Z0's produced at CERN, Fermilab, and elsewhere, we have never seen this particular decay. This tells us something important about the top quark. It must be heavier than half of the Z0 mass. That's why the Z0 can't produce it.
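
  The arithmetic behind that last statement takes one line. The decay Z0 → top + antitop is possible only if the two quarks' rest masses fit within the Z0's own mass; since the decay is never seen, they must not fit (using the 91.175 GeV Z0 mass quoted in this chapter):

\[
2\,m_{\text{top}} \;>\; m_{Z^0} \approx 91.2\ \text{GeV}
\quad\Longrightarrow\quad
m_{\text{top}} \;>\; 45.6\ \text{GeV}.
\]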

  WHAT ARE WE TALKING ABOUT?

  A very broad spectrum of hypothetical particles has been proposed by theorists following one trail or another toward unification. Usually the properties of these particles, except for the mass, are well specified by the model. Not seeing these "exotics" provides a lower limit for their mass, following the rule that the larger the mass the harder it is to produce.

  Some theory is involved here. Theorist Lee says: a p/p-bar collision will produce a hypothetical particle—call it the Lee-on—if there is enough energy in the collision. However, the probability or relative frequency of producing the Lee-on depends on its mass. The heavier it is, the less frequently it is produced. The theorist hastens to supply a graph relating the number of Lee-ons produced per day to the particle's mass. For example: mass = 20 GeV, 1,000 Lee-ons (mind-numbing); 30 GeV, 2 Lee-ons; 50 GeV, one thousandth of a Lee-on. In the last case one would have to run the equipment for 1,000 days to get one event, and experimenters usually insist on at least ten events since they have additional problems with efficiency and background. So after a given run, say of 150 days (a year's run), in which no events are found, one looks at the curve, follows it down to where, say, ten events should have been produced—corresponding to a mass of, say, 40 GeV for the Lee-on. A conservative estimate is that some five events could have been missed. So the curve tells us that if the mass were 40 GeV, we would have seen a weak signal of a few events. But we saw nothing. Conclusion: the mass is heavier than 40 GeV.
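
  To make that bookkeeping concrete, here is a minimal sketch in Python of how "we saw nothing" becomes a mass limit. Everything in it is illustration, not any experiment's real code: the rate-versus-mass curve is the hypothetical Lee-on curve from the paragraph above (1,000 per day at 20 GeV, 2 per day at 30 GeV, a thousandth per day at 50 GeV), interpolated on a log scale, and the run length and ten-event requirement are the ones quoted there.

```python
import math

# Hypothetical "Lee-on" rate-versus-mass curve: events per day at a given mass
# (GeV). The three anchor points are the ones quoted in the text; between them
# we interpolate on a log scale, since the rate falls off roughly exponentially.
ANCHORS = [(20.0, 1000.0), (30.0, 2.0), (50.0, 1e-3)]

def rate_per_day(mass_gev):
    """Interpolated Lee-on production rate (events per day) at a given mass."""
    for (m1, r1), (m2, r2) in zip(ANCHORS, ANCHORS[1:]):
        if m1 <= mass_gev <= m2:
            frac = (mass_gev - m1) / (m2 - m1)
            return math.exp(math.log(r1) + frac * (math.log(r2) - math.log(r1)))
    raise ValueError("mass outside the range of this sketch")

RUN_DAYS = 150        # "a year's run"
REQUIRED_EVENTS = 10  # experimenters insist on ~ten events, not one

# Scan upward in mass. The limit is the heaviest mass at which the run should
# still have delivered the required ten events; seeing nothing means the real
# particle, if it exists, is heavier than that.
mass = ANCHORS[0][0]
while rate_per_day(mass) * RUN_DAYS >= REQUIRED_EVENTS:
    mass += 0.1
print(f"No events seen: the Lee-on must be heavier than about {mass:.0f} GeV")
```

  Run as written, the scan lands near 40 GeV, the same ballpark as the example in the paragraph.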

  What next? If the Lee-on or the top quark or the Higgs is worth the game, one has a choice of three strategies. First, run longer, but this is a tough way to improve. Second, get more collisions per second; that is, raise the luminosity. Right on! That is exactly what Fermilab is doing in the 1990s, with the goal of improving the collision rate by about a hundredfold. As long as there is plenty of energy in the collision (1.8 TeV is plenty), raising the luminosity helps. The third strategy is to raise the energy of the machine, which increases the probability of producing all heavy particles. That's the Super Collider route.
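
  In the shorthand experimenters use (standard bookkeeping, not spelled out in the text), all three strategies act on the same little formula for the expected number of events:

\[
N_{\text{events}} \;=\; \sigma(m, E) \times \int \! \mathcal{L}\, dt,
\]

where the cross section σ grows with the collision energy E and falls steeply with the mass m of the particle you are after, and \(\mathcal{L}\) is the luminosity. Running longer stretches the time integral, Fermilab's upgrade raises \(\mathcal{L}\) a hundredfold, and the Super Collider route raises E and therefore σ.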

  With the discovery of the W and Z, we have identified six quarks, six leptons, and twelve gauge bosons (messenger particles). There is a bit more to the standard model that we have not yet fully confronted, but before we approach that mystery, we should beat on the model a bit. Writing it as three generations at least gives it a pattern. We note some other patterns, too. The higher generations are successively heavier, which means a lot in our cold world today but wouldn't have been very significant when the world was young and very hot. All the particles in the very young universe had enormous energies—billions and billions of TeV, so a little difference in rest mass between a bottom quark and an up quark wouldn't mean much. All quarks, leptons, and so on were once upon a time on an equal footing. For some reason She needed and loved them all. So we have to take them all seriously.

  The Z0 data at CERN suggest another conclusion: it is very unlikely that we have a fourth or fifth generation of particles. How is that for a conclusion? How could these scientists working in Switzerland, lured by the snow-capped mountains, deep, icy lakes, and magnificent restaurants, come to such a limiting conclusion?

  It's a neat argument. The Z0 has plenty of decay modes, and each mode, each possibility for decay, shortens its life a bit. If there are a lot of diseases, enemies, and hazards, human life is also shortened. But that is a sick analogy. Each opportunity to decay opens a channel or a route for the Z0 to shake this mortal coil. The sum total of all routes determines the lifetime. Let's note that not all Z0's have the same mass. Quantum theory tells us that if a particle is unstable—doesn't live forever—its mass must be somewhat indeterminate. The Heisenberg relations tell us how the lifetime affects the mass distribution: long lifetime, narrow width; short lifetime, broad width. In other words, the shorter the lifetime, the less determinate the mass and the broader the range of masses. The theorists can happily supply us a formula for the connection. The distribution width is easy to measure if you have a lot of Z0's and a hundred million Swiss francs to build a detector.

  The number of produced Z0's is zero if the sum of the e+ and the e− energies at the collision is substantially less than the average Z0 mass of 91.175 GeV. The operator raises the energy of the machine until a low yield of Z0's is recorded by each of the detectors. Increase the machine energy, and the yield increases. It is a repeat of the J/psi experiment at SLAC, but here the width is about 2.5 GeV; that is, one finds a peak yield at 91.175, which decreases to about half on either side, at 89.9 GeV and 92.4 GeV. (If you'll recall, the J/psi width was much narrower: about 0.05 MeV.) The bell-shaped curve gives us a width, which is in effect a lifetime. Every possible Z0 decay mode decreases its lifetime and increases the width by about 0.20 GeV.
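
  The formula the theorists supply is just the Heisenberg energy-time relation: the width Γ and the lifetime τ multiply to give Planck's constant ħ. Plugging in the 2.5 GeV width quoted above:

\[
\tau \;=\; \frac{\hbar}{\Gamma} \;\approx\; \frac{6.6\times 10^{-25}\ \text{GeV}\cdot\text{s}}{2.5\ \text{GeV}} \;\approx\; 2.6\times 10^{-25}\ \text{second},
\]

which is why nobody times a Z0 with a stopwatch; one measures the width instead.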

  What has this to do with a fourth generation? We note that each of the three generations has a low-mass (or zero-mass) neutrino. If there is a fourth generation with a low-mass neutrino, then the Z0 must include, as one of its decay modes, the neutrino νx and its antiparticle, of this new generation. This possibility would add 0.17 GeV to the width. So the width of the Z0 mass distribution was carefully studied. And it turned out to be exactly what the three-generation standard model had predicted. The data on the width of the Z0 excludes the existence of a low-mass fourth-generation neutrino. All four LEP experiments chimed in to agree that their data allowed only three neutrino pairs. A fourth generation with the same structure as the other three, including a low- or zero-mass neutrino, is excluded by the Z0 production data.
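
  The arithmetic is blunt. With three light neutrino species the width comes out at about 2.5 GeV, as measured; a fourth light neutrino would add its 0.17 GeV:

\[
\Gamma_{\text{4th gen.}} \;\approx\; 2.5 + 0.17 \;\approx\; 2.7\ \text{GeV},
\]

a shift far larger than the precision of the LEP measurements, and it simply is not there.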

  Incidentally, the same remarkable conclusion had been claimed by cosmologists years earlier. They based their conclusions on the way neutrons and protons combined to form the chemical elements during an early phase of the expansion and cooling of the universe after that humongous bang. The amount of hydrogen compared to the amount of helium depends (I won't explain) on how many neutrino species there are, and the data on abundances strongly suggested three species. So the LEP research is relevant to our understanding of the evolution of the universe.

  Well, here we are with an almost complete standard model. Only the top quark is missing. The tau neutrino is too, but that is not nearly so serious, as we have seen. Gravity must be postponed until the theorists understand it better and, of course, the Higgs is missing, the God Particle.

  SEARCH FOR TOP

  A NOVA TV program called "Race for the Top" was shown in 1990, when CERN's p-bar/p collider and Fermilab's CDF were both running. CDF had the advantage of three times higher energy, 1.8 TeV against CERN's 620 GeV. CERN, by cooling their copper coils a bit better, had succeeded in raising their beam energies from 270 GeV to 310 GeV, squeezing every bit of energy they could in order to be competitive. Still, a factor of three hurts. CERN's advantage was nine years of experience, software development, and know-how in data analysis. Also, they had redone the antiproton source, using some of Fermilab's ideas, and their collision rate was slightly better than ours. In 1989–90, the UA-1 detector was retired. Rubbia was now director general of CERN with an eye to the future of his laboratory, so UA-2 was given the task of finding top. An ancillary goal was to measure the mass of the W more precisely, for this was a crucial parameter of the standard model.

  At the time the NOVA program was put to bed, neither group had found any evidence for top. In fact, by the time the program aired, the "race" was over in that CERN was just about out of the picture. Each group had analyzed the absence of a signal in terms of top's unknown mass. As we have seen, not finding a particle tells you something about its mass. The theorists knew everything about the production of top and about certain decay channels—everything but the mass. The production probability depends critically on the unknown mass. Fermilab and CERN both set the same limits: the mass of the top quark was greater than 60 GeV.

  Fermilab's CDF continued to run, and slowly the machine energy began to pay off. By the time the collider run was over, CDF had run for eleven months and had seen more than 100 billion (10¹¹) collisions—but no top. The analysis gave a limit of 91 GeV for the mass, making the top at least eighteen times heavier than the bottom quark. This surprising result disturbed many theorists working on unified theories, especially in the electroweak pattern. In these models the top quark should be much lower in mass, and this led some theorists to view top with special interest. The mass concept is somehow tied in with Higgs. Is the heaviness of the top quark a special clue? Until we find top, measure its mass, and in general subject it to the experimental third degree, we won't know.

  The theorists went back to their calculations. The standard model was actually still intact. It could accommodate a top quark as heavy as 250 GeV, the theorists figured, but anything heavier would indicate a fundamental problem with the standard model. Experimenters were reinvigorated in their determination to pursue the top quark. But with top's mass greater than 91 GeV, CERN dropped out. The e+ e− machines are too low in energy and therefore useless; of the world's inventory, only Fermilab's Tevatron can make top. What is needed is at least five to fifty times the present number of collisions. This is the challenge for the 1990s.

  THE STANDARD MODEL IS A SHAKY PLATFORM

  I have a favorite slide that pictures a white-gowned deity, with halo, staring at a "Universe Machine." It has twenty levers, each one designed to be set at some number, and a plunger labeled "Push to create universe." (I got this idea from a sign a student put up on the bathroom hand drier: "Push to get a message from the dean.") The idea is that twenty or so numbers must be specified in order to begin the universe. What are these numbers (or parameters, as they are called in the physics world)? Well, we need twelve numbers to specify the masses of the quarks and leptons. We need three numbers to specify the strengths of the forces. (The fourth, gravity, really isn't a part of the standard model, at least not yet.) We need some numbers to show how one force relates to another. Then we need a number for how the CP-symmetry violation enters, and a mass for the Higgs particle, and a few other handy items.
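
  One conventional way of reaching roughly twenty (the bookkeeping of the mixing numbers is the usual one, not spelled out in the text):

\[
\underbrace{12}_{\text{quark and lepton masses}} \;+\; \underbrace{3}_{\text{force strengths}} \;+\; \underbrace{4}_{\text{mixing angles, incl. the CP-violation entry}} \;+\; \underbrace{1}_{\text{Higgs mass}} \;=\; 20.
\]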

  If we have these basic numbers, all other parameters are derived therefrom—for example, the 2 in the inverse-square law, the mass of the proton, the size of the hydrogen atom, the structure of H2O and the double helix (DNA), the freezing temperature of water, and the GNP of Albania in 1995. I wouldn't have any idea how to obtain most of the derived numbers, but we do have these enormous computers...

  The drive for simplicity leads us to be very sarcastic about having to specify twenty parameters. It's not the way any self-respecting God would organize a machine to create universes. One parameter—or two, maybe. An alternative way of saying this is that our experience with the natural world leads us to expect a more elegant organization. So this, as we have already complained, is the real problem with the standard model. Of course we still have an enormous amount of work to do to pinpoint these parameters accurately. The problem is the aesthetics—six quarks, six leptons, and twelve force-carrying gauge particles, and the quarks come in three colors, and then there are the antiparticles. And gravity waiting in the wings. Where is Thales now that we need him?

  Why is gravity left out? Because no one has yet succeeded in forcing gravity—the general theory of relativity—to conform to the quantum theory. The subject, quantum gravity, is one of the theoretical frontiers of the 1990s. In describing the universe in its present grand scale, we don't need quantum theory. But once upon a time the entire universe was no bigger than an atom; in fact, it was a good deal smaller. The extraordinarily weak force of gravity was enhanced by the enormous energy of the particles that made all the planets, stars, galaxies of billions of stars, all that mass compressed to a pinhead on a pinhead, a size tiny compared to an atom. The rules of quantum physics must apply here in this primal gravitational maelstrom, and we don't know how to do it! Among theorists the marriage of general relativity and quantum theory is the central problem of contemporary physics. Theoretical efforts along these lines are called "super gravity" or "supersymmetry" or "superstrings" or the "Theory of Everything" (TOE).

  Here we have exotic mathematics that curls the eyebrows of some of the best mathematicians in the world. They talk about ten dimensions: nine space and one time dimension. We live in four dimensions: three space dimensions (east-west, north-south, and up-down) and one time dimension. We can't possibly intuit more than three space dimensions. "No problem." The superfluous six dimensions have been "compactified," curled up to an unimaginably small size so as not to be evident in the world we know.

  Today's theorists have a bold objective: they're searching for a theory that describes a pristine simplicity in the intense heat of the very early universe, a theory with no parameters. Everything must emerge from the basic equation; all the parameters must come out of the theory. The trouble is, the only candidate theory has no connection with the world of observation—not yet anyway. It has a brief instant of applicability at the imaginary domain that the experts call the "Planck mass," a domain where all the particles in the universe have energies of 1,000 trillion times the energy of the Super Collider. The time interval of this greater glory lasted for a trillionth of a trillionth of a trillionth of a second. Shortly thereafter, the theory gets confused—too many possibilities, no clear road indicating that we the people and planets and galaxies are indeed a prediction.

  In the middle 1980s, TOE had a tremendous appeal for young physicists of the theoretical persuasion. In spite of the risk of long years of investment for small returns, they followed the leaders (like lemmings, some would say) to the Planck mass. We who stayed home at Fermilab and CERN received no postcards, no faxes. But disillusion began to set in. Some of the more stellar recruits to TOE quit, and pretty soon, buses began arriving back from the Planck mass with frustrated theorists looking for something real to calculate. The entire adventure is still not over, but it has slowed to a quieter pace, while the more traditional roads to unification are tried.

  These more popular roads toward a complete, overarching principle have groovy names: grand unification, constituent models, supersymmetry, Technicolor, to name a few. They all share one problem: there are no data! These theories made a rich stew of predictions. For example, supersymmetry (affectionately shortened to "Susy"), probably the most popular theory, if theorists voted (and they don't), predicts nothing less than a doubling of the number of particles. As I've explained, the quarks and leptons, collectively called fermions, all have one half unit of spin, whereas the messenger particles, collectively called bosons, all have one full unit of spin. In Susy this asymmetry is repaired by postulating a boson partner for every fermion and a fermion partner for every boson. The naming is terrific. The Susy partner of the electron is called "selectron," and the partners of all the leptons are collectively called "sleptons." The quark partners are "squarks." The spin-one-half partners of the spin-one bosons are given a suffix "ino" so that gluons are joined by "gluinos," photons couple with "photinos," and we have "winos" (partner of the W) and "zinos." Cute doesn't make a theory, but this one is popular.

 
