
Cracking the Particle Code of the Universe


by John W. Moffat


  In this initial attempt at an electroweak model, Weinberg considered only leptons such as the electron and muon, ignoring the quarks, to make his model easier to construct. He also incorporated into the scheme an angle that allowed him to rotate the Z boson into the photon and vice versa, so that a single neutral vector boson took on the guise of both a massless photon and a massive Z particle. For a given value of this angle, Weinberg was able to come up with an approximate prediction for the masses of the W and Z bosons, and to obtain a result that agreed with the experimental ratio of the neutral weak current to the charged weak current observed at CERN. Weinberg’s seminal paper on this topic17 provided the basics for what we now call the standard model of particle physics.
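
  To make this rotation concrete, it is usually written in terms of a “weak mixing angle” θW (a standard textbook sketch; the neutral fields B and W³ that get mixed are not introduced in the text above):

\[
A_\mu = \cos\theta_W\, B_\mu + \sin\theta_W\, W^3_\mu , \qquad
Z_\mu = -\sin\theta_W\, B_\mu + \cos\theta_W\, W^3_\mu ,
\]

where A is the massless photon field and Z is the massive neutral boson of the weak interactions.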

  A key idea of Weinberg’s was to make the Higgs field an isospin doublet expressed in complex numbers—meaning that, in the isospin space, the field has a charged and a neutral component. When this isospin doublet interacted with the gauge fields such as the W and the Z, and with the quarks and leptons, it allowed—through spontaneous symmetry breaking—the elementary particles to acquire their masses.
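
  In the standard notation (not spelled out in the text above), the Higgs doublet is written as

\[
\phi = \begin{pmatrix} \phi^{+} \\ \phi^{0} \end{pmatrix},
\]

with φ⁺ the electrically charged component and φ⁰ the neutral one, each of them a complex field.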

  Another important ingredient in developing the standard model was the choice of the potential energy for the scalar field, which was incorporated into the Lagrangian, or action principle. This potential was such that it had a nonzero value when the scalar Higgs field vanished. This value of the potential is what is called the false vacuum, which is unstable. Imagine a Mexican hat (Figure 5.2). A ball rolls down from the top of the hat, or the false vacuum, to a nonzero value of the Higgs field at the bottom of the hat, where the potential energy of the Higgs field is zero. This is the true vacuum. If you picture yourself sitting at the false vacuum, at the top of the Mexican hat, and then rotating the hat around the vertical axis, from your point of view the hat keeps its symmetric shape. On the other hand, if you picture yourself sitting at the bottom of the hat’s brim—at the energy ground state, where the scalar field has a constant nonzero value—then, when you look around at the rotating hat, it no longer retains its rotational symmetry. By moving from the false vacuum to the true vacuum, you have spontaneously broken the symmetry of the group associated with the Mexican hat potential.

  Physicists choose the form of the potential energy for the scalar field in a rather ad hoc way. To guarantee renormalizability of the theory, the self-coupling of the scalar field in the potential energy must be proportional to the fourth power of the scalar field, with an unknown coupling constant, lambda (λ), determining the strength of the interaction. If the power of the scalar field self-coupling is greater than four, then the theory is not renormalizable. The Mexican hat picture describes, in a simple, generic way, the Higgs mechanism of spontaneous symmetry breaking, provided that this special form of the potential energy for the self-coupling of the scalar field is chosen.

  Figure 5.2 Mexican hat potential with the ball initially at the false vacuum.

  SOURCE: Luis Álvarez-Gaumé and John Ellis, “Eyes on a Prize Particle,” Nature Physics (Nature Publishing Group, Dec. 21, 2010).
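
  For readers who want the formula behind the Mexican hat picture, the potential usually chosen for the Higgs field φ is the standard textbook form (not written out explicitly in the text above):

\[
V(\phi) = -\mu^2\, \phi^\dagger \phi + \lambda\, (\phi^\dagger \phi)^2 , \qquad \mu^2 > 0 , \ \lambda > 0 ,
\]

often with a constant added so that V vanishes at the bottom of the hat. The false vacuum is the hump at φ = 0; the true vacuum is the circular trough at \(\phi^\dagger\phi = \mu^2 / 2\lambda\); and λ is the quartic (fourth-power) self-coupling whose strength was mentioned above.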

  At the same time that Weinberg was working on incorporating spontaneous symmetry breaking into the electroweak gauge theory, Abdus Salam, at Imperial College and at his International Center for Theoretical Physics in Trieste, was working independently on the same problem using the same techniques. In a set of unpublished lectures to graduate students, given in the fall of 1967, Salam developed a model very similar to Weinberg’s. Unfortunately, there is no written record of these lectures. At a particle physics symposium held in Gothenburg in May 1968, sponsored by the Nobel Foundation, Salam gave a talk that he claimed was based on his Imperial College lectures of the previous fall. This talk, which developed a model involving only leptons, akin to Weinberg’s published model of the previous year, appeared in a Nobel Symposium monograph with a limited circulation.18 In his 1961 paper on weak interactions, Glashow had predicted that neutrinos and electrons could scatter off other particles without the exchange of charge, through a neutral weak current. This current was associated with a neutral vector boson, the Z. In 1967, Weinberg used Glashow’s theory to predict the mass of the neutral Z boson. He then predicted the parity violation of polarized electron scattering, which was observed at Stanford in 1979. In his talk at the Nobel Symposium, Salam did not give the relationship between the W and Z masses predicted by Weinberg, and therefore he could not determine the difference in strengths between the weak charged and neutral currents. Glashow, Weinberg, and Salam were awarded the Nobel Prize in 1979 for their work on developing electroweak unification.

  Eventually the Glashow–Weinberg–Salam model, as it came to be called, was extended to incorporate quarks, making it a more complete theory that began to take on the appearance of what we now know as the standard model of particle physics. This electroweak theory was incorporated into the QCD theory of Gell-Mann, Fritzsch, and Leutwyler, and thus the strong interactions involving the massless colored gluon force carriers of QCD, as well as the massive W and Z bosons and the massless photon of the electroweak theory, were all accounted for in one scheme.

  From this historical account, we can see that the idea of spontaneous symmetry breaking of the vacuum in gauge theories in particle physics came directly from the theory of superconductors. From the citations in the published papers by the “Group of Six” in 1964, it is not entirely clear how much they owed to pioneers of superconductor physics such as Ginzburg, Landau, and Anderson. However, Higgs did refer to Anderson’s paper in a subsequent paper published in 1966.19 This leads us to a fundamental question: Does nature, in its laws for particle physics and quantum fields, simply copy what happens in condensed-matter physics? The answer to this question, as we will see as this book continues, depends significantly on the discovery of the Higgs particle. Strictly speaking, however, the answer is no if we identify the scalar Higgs particle as a fundamental, elementary particle, for no such particle exists in the theory of superconductors. The basic entity that spontaneously breaks the symmetry of the superconductor is the electron condensate formed from the Cooper pair bound state, which is not an elementary particle. Therefore, it is not clear that carrying the ideas of the nonrelativistic physics of superconductors over to relativistic particle physics will result in the existence of a physical elementary scalar Higgs boson.

  Both Weinberg and Salam suggested that their electroweak theory was renormalizable, but they did not provide any proof of this. Martinus Veltman in Utrecht had a brilliant graduate student, Gerard ‘t Hooft, and he proposed that ‘t Hooft try to prove that the electroweak theory of Weinberg and Salam was renormalizable. One day, ‘t Hooft told Veltman that he had actually been able to prove that the theory was renormalizable. He had done it! Veltman was astonished and delighted, and at a conference in 1971 he announced how his graduate student, ‘t Hooft, had proved the renormalizability of the theory. Veltman had contributed significantly to the understanding of weak interactions, and he and ‘t Hooft collaborated to fill in many of the technical details needed to show that the claim of renormalizability was robust. Indeed, Veltman wrote a computer algebra program called Schoonschip, which could carry out the lengthy manipulations of the many Feynman diagrams needed to complete the proof. ‘t Hooft and Veltman were awarded the Nobel Prize for their contributions to electroweak theory in 1999.

  FINDING THE HIGGS BOSON

  By the 1970s and 1980s, accelerators began accumulating data that could validate the predictions of the standard model. In particular, confirmations of QCD—such as the verification of colored gluons—convinced most physicists that the standard model was here to stay. More data were accumulated for weak-interaction processes at high energies. The expected bottom quark was discovered at Fermilab in 1977, and the top quark at Fermilab’s Tevatron in 1995, completing the predicted three generations of quarks. By the late 1990s, all the basic elementary particles of the standard model had been shown to exist, except for the Higgs boson, and they all had spin ½ or spin 1. The standard model now included QCD and the electroweak theory, which in turn contained QED.

  Despite the successes of the standard model, confirmed by accelerator results, one essential building block of the whole edifice was missing. Where was the spin-0 Higgs particle? Unfortunately, in contrast to the W and Z boson masses, the mass of the Higgs boson is not predicted by the standard model, which means that, within certain theoretical limits, the experimentalists did not know at what energy to look for it. However, by 2000, when the LEP collider at CERN closed down, the existence of the Higgs boson had been excluded up to an energy of 114.4 GeV. Moreover, if the Higgs boson mass was less than 120 GeV but above 114.4 GeV, then the vacuum state associated with the spontaneous symmetry-breaking mechanism would be unstable. This means that, in the worst case, the universe would not last the roughly 14 billion years built into the standard model of cosmology. At the other extreme, if the Higgs mass was greater than 800 GeV, then the standard model would break down: it would no longer be possible to do precise calculations using perturbation theory, in which successive orders of the calculation remain under control. The theory would also violate unitarity, and we would start getting probabilities for scattering experiments that exceeded 100 percent, which is impossible. In 2000, this left the energy range from 114.4 to 800 GeV open to experimental detection of the Higgs particle, albeit with the potential instability problem below 120 GeV.
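
  The 800-GeV figure comes from estimates of where the scattering of W bosons would begin to violate unitarity. A commonly quoted version of that estimate (the Lee–Quigg–Thacker bound, written in terms of the Fermi constant G_F, which does not appear in the text above) is

\[
m_H \;\lesssim\; \left( \frac{8\pi\sqrt{2}}{3\, G_F} \right)^{1/2} \approx 1\ \text{TeV} ,
\]

with more refined, coupled-channel versions of the estimate pushing the number down toward the roughly 800 GeV quoted above.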

  The LHC began operating in 2009, after repairs. By March 2012, the CMS and ATLAS detectors at the LHC had excluded a Higgs particle between 130 GeV and 600 GeV at a confidence level of 95 percent. Fits to the precise electroweak data, carried out independently by groups at CERN, at the Tevatron, and by other theoretical groups, had restricted the Higgs particle to a mass between 97 GeV and 135 GeV. Indeed, the best fit to the precise electroweak data gave a Higgs mass of about 97 GeV, which had already been excluded by the LEP bound of 114.4 GeV. In statistical terms, however, masses within roughly one standard deviation of this best-fit value were still allowed, leaving open a possible Higgs mass between 114.4 GeV and 135 GeV. By March 2012, this was the small window of energy–mass left open in which to detect the Higgs particle, and by late 2012, experimentalists claimed to have detected their quarry at about 125 GeV.

  Much like the ill-conceived ether of the 19th century, the Higgs field is supposed to pervade all of spacetime at all times. This Higgs field is believed to interact with the elementary quarks, leptons, and the W and Z bosons, as well as with itself, but not with the photon or gluon. This interaction would produce masses for all the elementary particles except for the photon and gluon. It is as if the particles are swimming through a tank of water that resists their forward motion. The heaviest particles, such as the top quark, would feel the resistance more than the lighter ones, such as the electron and the lighter up and down quarks. The photon and gluon would swim through the water without any resistance at all, because they are massless. The Higgs field/ether would be produced by a phase transition (like steam turning to water when cooled) about 10⁻¹² seconds after the Big Bang. This hypothesis is basic to the standard-model electroweak theory.

  Despite the fact that, after almost 50 years, the elusive Higgs boson had not been detected conclusively up until March 2012, the idea that it really existed was so entrenched in the minds of particle physicists that almost the whole community expected it to be discovered soon. Why is this so?

  One reason is that the Higgs boson could explain the origin of the masses of the elementary particles. However, it falls short of predicting the specific experimental masses of the fermions, such as the leptons and quarks, for each of these masses has a coupling constant associated with it, which determines the strength of the interaction between the particle and the Higgs field. Physicists simply adjust these coupling constants to yield the experimental values of the particle masses; the masses are not predicted by the theory. On the other hand, Weinberg was able to predict the approximate masses of the W and Z bosons given the experimental value of the so-called weak angle, which allowed for the rotation of the photon into the neutral Z boson. The Higgs mechanism seemed, to particle physicists, to be the best game in town for explaining the origin of mass for the elementary particles. An even stronger reason for believing in the existence of the Higgs boson is its ability to produce a renormalizable, or finite, theory of weak interactions that does not violate unitarity.
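
  As a rough illustration of the kind of relation Weinberg could exploit (standard formulas with rounded modern numbers; the constant vacuum value v of the Higgs field and the coupling g are not defined in the text above):

\[
m_W = \tfrac{1}{2}\, g\, v , \qquad m_Z = \frac{m_W}{\cos\theta_W} , \qquad v \approx 246\ \text{GeV} ,
\]

so once the weak angle was measured (sin²θW ≈ 0.23), these relations pointed to a W mass of about 80 GeV and a Z mass of about 91 GeV, close to the values eventually measured at CERN in 1983.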

  However, there have been serious problems in believing in the existence of the Higgs boson. As Weinberg and Salam were developing the spontaneous symmetry-breaking mechanism, there was an ad hoc aspect to the scenario that led some theorists to question whether the model was correct. For one thing, they had put the masses and coupling constants of the quarks and leptons in by hand, rather than having the theory predict them. As we recall, another ad hoc feature of the theory was that the self-interaction of the scalar Higgs field was chosen to have a very specific form (the fourth power of the scalar field) so that it generated a renormalizable electroweak theory. Other choices that would seem equally plausible would not allow the theory to be renormalizable or to lead to finite calculations.

  Another technical issue is that, for a scalar spin-0 field like the Higgs field, the divergences occurring in the quantum calculation of the Higgs boson mass are quadratic in the energy cutoff, as opposed to the logarithmic divergences for the W and Z particles and the fermions. This leads to a serious fine-tuning problem that had to be confronted. When the quantum corrections to the Higgs boson mass were computed, as required when renormalizing the Higgs mass, they turned out not to be small “corrections” at all. Indeed, they were so enormous that they produced a critical fine-tuning disaster for estimates of the Higgs mass. This became known as the “Higgs mass hierarchy problem,” which we encountered in Chapter 4. It means that the calculation of the Higgs mass produces absurdly large results compared with what is expected to be experimentally valid.
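
  Schematically, and in the usual shorthand (Λ stands for the energy cutoff, coupling factors are omitted, and this is an order-of-magnitude sketch rather than a full calculation):

\[
\delta m_H^2 \sim \frac{\Lambda^2}{16\pi^2} \qquad \text{versus} \qquad \delta m_f \sim m_f \ln\frac{\Lambda}{m_f} ,
\]

so if Λ is pushed up toward the Planck scale of about 10¹⁹ GeV, the “correction” to the Higgs mass squared overshoots the desired (125 GeV)² by roughly 30 orders of magnitude unless the bare mass is tuned to cancel it almost exactly.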

  Another serious fine-tuning problem with the Higgs particle, which Martinus Veltman has emphasized for years, is that when one calculates the energy density of the vacuum in the presence of the Higgs ether, this vacuum density could be 56 orders of magnitude larger than what has been determined observationally in cosmology. It could even be as much as 122 orders of magnitude larger when the Planck mass associated with gravity is accounted for. Such a vacuum density would make it impossible for our universe to exist at all. This belongs to the category of what is called the cosmological constant fine-tuning problem. Other contributions to such a huge vacuum density arise elsewhere in particle physics, but the one associated with the Higgs boson has made the whole idea of the Higgs mechanism quite unattractive to some physicists.
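
  The arithmetic behind those two numbers is easy to sketch in round figures (order-of-magnitude estimates only):

\[
\rho_{\text{obs}} \sim 10^{-47}\ \text{GeV}^4 , \qquad
\rho_{\text{Higgs}} \sim (100\ \text{GeV})^4 \sim 10^{8}\ \text{GeV}^4 , \qquad
\rho_{\text{Planck}} \sim (10^{19}\ \text{GeV})^4 \sim 10^{76}\ \text{GeV}^4 ,
\]

giving ratios of roughly 10⁵⁵ and 10¹²³, which correspond, to within rounding, to the 56 and 122 orders of magnitude quoted above.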

  In summary, although the Higgs boson is essential for the electroweak theory to work, it creates serious problems. Therefore, some physicists, myself included, began to explore alternatives to both the Higgs mechanism and the conventional electroweak theory. Of course, if the evidence of something new at 125 GeV turns out definitely to be the Higgs boson, then physicists will have to roll up their sleeves and find solutions to the problems that the Higgs brings with it.

  6

  Data That Go Bump in the Night

  The annual Topical International Conference on Particle Physics, Cosmology, and Astrophysics is always known as the “Miami” conference, even though several years ago it moved from Miami to Ft. Lauderdale, Florida. My talk at “Miami 2011” was scheduled for 11:00 a.m. on December 15, 2011. Its title was “If the Higgs Particle Does Not Exist, Then What?” At the start of the session, the conference organizer, Thomas Curtright, put my PowerPoint presentation into the conference computer. As my title flashed on the large screen at the front of the auditorium, Thomas laughed and said, “Well, in view of Tuesday’s press conference at CERN, I guess your talk has crashed.”


  I frowned at him. “Don’t be so fast with the conclusions. I’ll explain in my talk why the experimental results are still inconclusive.”

  The auditorium in the Lago Mar resort where the conference was being held was filled with physicists. In the audience was Lars Brink, who is currently the only particle theorist on the Stockholm Nobel committee. Also in the audience was François Englert, one of the Group of Six who proposed the mechanism of spontaneous symmetry breaking that became such a centerpiece of the standard Glashow–Weinberg–Salam model of particle physics.

  It was the Higgs boson—named after Peter Higgs, who had predicted the existence of a particle within the symmetry-breaking mechanism—that had caused all the excitement at the CERN press conference on Tuesday, December 13. The experimentalists Fabiola Gianotti and Guido Tonelli, representing respectively the ATLAS and CMS detectors at the LHC, had presented their preliminary results for the new data searching for the Higgs particle. Since a summer conference in Grenoble, about three or four times more data had been collected by the two detectors.

  Remarkably, the CMS and ATLAS groups claimed that the Higgs boson mass had been excluded in the energy range from about 127–129 GeV up to 600 GeV when they combined all the decay channels. This left only a small window of about 12 GeV of energy where the Higgs particle could be hiding: the gap between the lower bound of 114.4 GeV on the Higgs mass obtained by the LEP2 experiments and the new LHC bound of about 127 GeV. At the press conference the week before, after the presentations of the new data, the director-general of CERN, Rolf-Dieter Heuer, claimed that tantalizing new evidence had been obtained from both detectors showing possible hints of a Higgs particle at 124 GeV in the CMS detector, and at about 126 GeV in ATLAS. However, he cautioned that these “bumps” could be statistical fluctuations and might disappear with increasing data and luminosity, or intensity, of the proton–proton collisions.

 
