To verify this theory, the LHC has been searching for what are called Kaluza–Klein particles, which are predicted when one generalizes Einstein’s field equations to five dimensions. The Kaluza–Klein particles are proposed to exist only in the fifth dimension. Visualize a city street and a car moving along it in our known four-dimensional spacetime. However, we can also imagine the car veering to the sides of the street, and this extra degree of freedom corresponds to having more than three spatial dimensions. So when we measure the momentum of the car moving along the street without veering to one side or the other, we are measuring the momentum of the car in our spacetime dimensions. If we measure the momentum of the car as it veers to the left or the right on the street, this would correspond to the car’s mass or momentum in the fifth dimension.
According to Kaluza–Klein theory, the known particles, such as the electron, would have partner particles with similar physical properties to the electron, but with a mass greater than the mass of the electron (and also greater than the muon and the tau, which are known, observed heavy electrons). The same can be said for the quarks. Are the heavier, known particles such as the muon and the tau Kaluza–Klein partners of the lighter electron? The answer is no. The electron is electrically charged and is surrounded by an electric field. The photon carries the electromagnetic force between electrons and it travels in the same spatial dimensions as the electrons. Therefore, if the electron has Kaluza–Klein partners, then the photon must also have them, and a photon should exist with a mass of about the muon mass. However, such a photon has never been detected. It follows that the muon is not the Kaluza–Klein partner of the electron.
For the Kaluza–Klein particles, if indeed they exist, their extra mass or momentum exists in the fifth dimension. This extra mass can be expressed as Planck's constant h divided by the product of the speed of light c and the length L associated with the extra dimension; that is, m = h/(cL). For the tiny values of L resulting from the compactification of the fifth dimension, the corresponding mass is greater than the measured mass of the electron, and the LHC should be able to detect it. The magnitude of L is the compactified length, which is a billionth of a meter or less. Thus, detecting a Kaluza–Klein particle with all the properties of, say, an electron but with a larger mass would verify the existence of an extra dimension. So far, no such Kaluza–Klein particles have been discovered at the LHC up to an energy of more than 2 TeV, corresponding to increasingly small compactified lengths. The exclusion of both superpartners and extra dimensions by the LHC has cast a cloud over the whole idea that supersymmetry and superstrings actually exist in nature.
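The mass scale set by the formula m = h/(cL) can be checked with a few lines of arithmetic. The sketch below is my own illustration, not from the text; the sample lengths are arbitrary, chosen only to show how the mass grows as the compactified length shrinks:

```python
import math  # not strictly needed here, but handy for extensions

# Rough check of the Kaluza-Klein mass scale m = h / (c * L) described above.
# SI constants; sample lengths are illustrative, not taken from the text.
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

def kk_mass_eV(L):
    """Rest-mass energy (in eV) of the lowest Kaluza-Klein mode
    for a compactified length L given in meters: E = m c^2 = h c / L."""
    return h * c / L / eV

for L in (1e-9, 1e-12, 1e-18):
    print(f"L = {L:.0e} m  ->  m c^2 = {kk_mass_eV(L):.3e} eV")
```

For a length of a billionth of a meter the mass scale comes out around a kiloelectronvolt; reaching the electron's mass (about 0.5 MeV) or the TeV energies probed at the LHC requires lengths many orders of magnitude smaller still, which is why the searches push toward ever smaller compactified lengths.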
NO DARK OR BLACK THINGS EITHER?
Another exotic prediction of supersymmetry is the existence of dark matter, which is required to explain the dynamics of galaxies, clusters of galaxies, and the standard model of cosmology, also known as the concordance cosmological model. There is much stronger gravity "out there" holding together galaxies and clusters of galaxies than is predicted by Einstein's and Newton's theories of gravity, which has made it necessary to postulate extra, invisible matter in the universe to create stronger gravity. This dark matter has so far only been inferred from gravitational phenomena in astrophysics and cosmology. It is based on the idea that Einstein's general relativity is universally correct and does not need any modification. Modified theories of gravity, including my own—called modified gravity, or MOG—have been published that do not require undetected dark matter and yet can explain the current observational data for galaxy and cluster dynamics and cosmology. However, the majority of physicists and cosmologists believe that dark matter exists and that Einstein's general relativity does not need any modification.
There has been a continuing effort to detect dark matter in underground experiments, at the LHC, and using astrophysical data. There are several candidates for dark-matter particles. The lightest, stable superpartner in supersymmetry, called the neutralino, is a popular candidate for the dark-matter particle. Actually, there are four neutralino particles, quantum mixtures of the superpartners of the neutral gauge and Higgs bosons, and the candidate for dark matter is the lightest stable member of this quartet. This hypothetical particle has so far not been detected at the LHC, up to a high energy of about 1 TeV.
This neutralino may be a kind of dark-matter particle called a WIMP (weakly interacting massive particle). WIMPs are expected to have a mass of between 2 GeV and 100 GeV, and are classified as belonging to the category of "cold dark matter," an important ingredient in the standard cosmology or concordance model, also called the Lambda-CDM model. Dark-matter detection experiments are being performed deep underground, to remove the background effects caused by cosmic rays coming in from outer space. Until now, the several large underground experiments and astrophysical observations have not succeeded in finding a WIMP.
Another candidate for dark matter is the axion particle. In contrast to WIMPs, the axion is a very light particle. It can perform the required role of producing dark-matter haloes in galaxies, which could explain the dynamics of galaxies, thereby keeping Newtonian and Einstein gravity unmodified. However, experiments over the past two or three decades have not been able to find axions either. All these experimental null results have so far dashed experimentalists' initial hopes of identifying the elusive dark-matter particles.
Theorists with exuberant imaginations dreamed up yet another possible candidate for detection at the LHC: the mini or micro black hole. The idea is that, just as ordinary black holes form when stars collapse under their own weight, the particles colliding at the LHC might concentrate enough energy and mass in a small enough region to collapse and form mini black holes. Stephen Hawking has gone so far as to suggest that such mini black holes could actually be hiding the Higgs boson. This led him, some years ago, to make a famous bet with Gordon Kane, a professor of theoretical physics at the University of Michigan, that the Higgs boson is either permanently hidden or does not exist. In the event that mini black holes could be produced at the LHC, some theorists have speculated that it might be possible to examine them and detect the famous "Hawking radiation" supposedly produced at the event horizons of black holes.
Some alarmists feared that if the LHC did produce such mini black holes, they would destroy the nearby city of Geneva and large parts of Switzerland and France, possibly even the whole planet. As we now know, after several years of the successful running of the LHC, Geneva has not been destroyed by mini black holes. In fact, the latest LHC results make it unlikely that mini black holes exist.
WHITHER PHYSICS IF THE LHC CONTINUES TO COME UP EMPTY-HANDED BEYOND THE STANDARD MODEL?
As of early 2013, increasing evidence suggests that the LHC may have discovered the Higgs boson. However, no new physics has been discovered beyond the particles and forces of the standard model. If this situation continues, it raises serious questions about the future of particle physics. The tax-paying public and the governments of the many countries involved in the world’s largest experiment will wonder how the theorists could have been so wrong for more than half a century. How could so much money have been spent searching for tiny physical phenomena such as supersymmetric particles and dark matter that do not appear to exist? Despite the possible discovery of the Higgs boson, which would confirm the conventional standard model, does this lack of new physics signal the end of particle physics?
As Guido Altarelli mused after my talk at CERN in 2008, can governments be persuaded to spend ever greater sums of money, amounting to many billions of dollars, on ever larger and higher energy accelerators than the LHC if they suspect that the new machines will also come up with nothing new beyond the Higgs boson? Of course, to put this in perspective, one should realize that the $9 billion spent on an accelerator would not run a contemporary war such as the Afghanistan war for more than five weeks. Rather than killing people, building and operating these large machines has practical and beneficial spinoffs for technology and for training scientists. Thus, even if the accelerators continued to find no new particles, they might still produce significant benefits for society. The World Wide Web, after all, was invented at CERN.
Not discovering any new physics beyond the Higgs boson at the LHC or future accelerators would have profound significance for the future of physics. Although many physicists consider that possible outcome a looming disaster, others, including myself, prefer to take a contrarian and positive attitude: not finding any new physics at the LHC beyond the standard model would be exciting! Back in 1887, eight years after Albert Einstein's birth, the ether, which was supposed to be the all-pervasive medium and carrier of electromagnetic waves, was not discovered by the experimentalists Michelson and Morley. That important null experiment, and the later ones that confirmed the original results, did not herald the end of physics. Indeed, in 1905, freed from the concept of the ether, Einstein, building on earlier work by Henri Poincaré and Hendrik Lorentz, revolutionized our understanding of space and time with his theory of special relativity. Also, discoveries by Einstein, Planck, de Broglie, and others ushered in the whole quantum revolution in physics.
We can anticipate a similar revolution in physics if we truly discover that there are no new particles beyond those already observed in the standard model. Indeed, many physicists consider the current standard model of particle physics—based on the observed quarks, leptons, W and Z bosons, the gluon, the photon, and now perhaps the Higgs boson—to be unsatisfactory. Why? Because in its basic form, it has 20 free parameters that have to be fitted to data without understanding their physical origins properly. For example, the interaction or “coupling” of the Higgs field to the quarks and leptons does not determine the specific values of their masses, so the coupling constants measuring the strength of the Higgs field to these fermion masses have to be fitted to their observed masses “by hand,” rather than by being predicted by the theory. Hence, the fermion masses are free parameters.
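The "fitted by hand" nature of the Higgs couplings can be made concrete with a small calculation. In the standard model a fermion's mass is m = yv/√2, where v ≈ 246 GeV is the vacuum value of the Higgs field and y is the Yukawa coupling; the theory does not predict y, so it is simply inferred from each measured mass. The sketch below is my own illustration (approximate masses, not figures from the text):

```python
# Illustration of the "fitted by hand" Higgs couplings described above.
# The standard model relates a fermion's mass to its Yukawa coupling y by
# m = y * v / sqrt(2), with v ~ 246 GeV the Higgs vacuum expectation value.
# Nothing in the theory predicts y; it is read off from the measured mass.
import math

v = 246.0  # Higgs vacuum expectation value, GeV (approximate)

masses_GeV = {          # approximate measured masses
    "electron": 0.000511,
    "muon": 0.1057,
    "tau": 1.777,
    "top quark": 173.0,
}

for name, m in masses_GeV.items():
    y = math.sqrt(2) * m / v   # coupling inferred from the mass, not predicted
    print(f"{name:10s}  m = {m:10.6f} GeV   Yukawa y = {y:.2e}")
```

The couplings span six orders of magnitude, from about 3 × 10⁻⁶ for the electron to roughly 1 for the top quark, and each one is a separate free parameter, which is exactly the sense in which the fermion masses are unexplained by the theory.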
Therefore some physicists, including myself, anticipate that there must be new physics beyond the standard model—beyond this mere roster of particles and forces—that could reveal a more unified picture of the particle forces, including gravity. Such a unified framework would then reduce the number of unknown parameters to a very few, even, possibly, zero. The question arises: Can we conceive of a theory that could unify the forces of nature and achieve a very economical form using only the known observed particles? Answering this question could constitute the next revolution in physics.
Discovering the Higgs boson will validate the standard model of particle physics, but it will leave us with serious problems, such as the very unnatural explanation for the Higgs mass itself, unless new physics beyond the standard model is discovered.
5
The Higgs Particle/Field and Weak Interactions
I joined the newly minted particle physics group at Imperial College London in 1959 as Abdus Salam’s first postdoctoral fellow, holding a Department of Scientific and Industrial Research fellowship that paid me £10 a week. Salam had recently been appointed professor of mathematical physics at Imperial College London. He had been brought in as the Cambridge University superstar who would rejuvenate theoretical physics at Imperial College by starting up a group in the very active field of particle physics.
When newly appointed research students to be supervised by Salam asked him what they should pursue as a research project, he would say, “Go and read Julian Schwinger’s paper on fundamental particle interactions. The scalar sigma particle may be a clue as to how particles get their mass.”1 Salam had in mind that a renormalizable quantum field theory, one that produces finite calculations of physical quantities, must have a gauge invariance, which implies that the force-carrying particles are massless, as is the case with the massless photon in QED. However, the weak interaction had to be mediated by a massive particle so that the weak force could be short range compared with the infinite-range force of classical electrodynamics mediated by a massless photon. The so-called sigma particle represented a scalar, spin-0 field and, through Julian Schwinger’s published papers in the 1950s, the ubiquitous scalar field of theoretical physics began to make its appearance. The fact that no experiment had ever detected an elementary spin-0 particle in high-energy accelerators did not deter the theorists from speculating about the consequences of such a particle in particle physics. The experimentally confirmed elementary fermions and bosons in the standard model all have spin ½ or spin 1, with the exception of the scalar spin-0 Higgs boson, which was unknown in 1959, and has possibly now been found.
When I first arrived at Imperial College, I made an appointment to see Salam in his large office, which had a rich Persian carpet spread across the floor. Salam sat behind an expansive wooden desk.
“Abdus, do you have any suggestions for what I should pursue in my research?” I asked.
He looked at me intensely with his brown eyes. “John, read Julian Schwinger’s paper on the sigma model. Maybe you can explain how particles get their mass.”
I sat in the comfortable chair in front of his desk and, averting my eyes, thought about this proposal.
“How particles get their mass?” I asked. “Why is this important?”
Salam grunted and said, “Of course it’s important. How are we going to explain weak interactions and beta decay if we don’t have gauge invariance in weak interactions, and we have to worry about the bothersome necessary mass of the W particle?”2
“So you are saying that this massive W, which has not been found experimentally, doesn’t lead to a renormalizable theory, and therefore the weak-interaction calculations become infinite and meaningless. The theory is, in fact, unrenormalizable?”
Salam replied, “Yes. That’s it. That’s the problem.”
“Are you suggesting that this sigma particle—this scalar particle—somehow resolves this problem of infinities? I don’t see how this can happen.”
Salam frowned and stated, “Well, I don’t understand how it happens either, but maybe there’s some clue here as to how to resolve the problem.”
The problem, of course, was how to give the W boson a mass and yet keep the calculations finite in the theory of weak interactions.
Not long after my arrival at Imperial College, Peter Higgs, an English theoretical physicist, joined the group as a postdoctoral fellow and shared my smartly furnished office near Salam’s office. I presumed at the time that he also asked Salam what he should work on as a research project, and I surmised that Salam had said, “Peter, go away and read Julian Schwinger’s paper on the sigma model. Maybe it’s got something to do with how the particles get their mass.”
In the meantime, I read Schwinger’s paper and wasn’t excited by it, and didn’t see how it could resolve the problem of getting rid of infinities in the weak-interaction calculations. I could see how the self-coupling of this scalar sigma particle, or the interaction of the particle with itself, could give mass to other particles—by the coupling of the scalar sigma field with the W boson—but it didn’t strike me as particularly convincing, and I moved on to other research.
BANISHING INFINITIES
The tremendous success of the theory of QED of photons and electrons achieved by Feynman, Tomonaga, and Schwinger in 1949/1950 owes much to their development of renormalizable quantum field theory. In this theory, the meaningless mathematical infinities that occur in the calculation of scattering amplitudes could be canceled out by combining the "bare" mass of the electron—that is, the mass of the noninteracting electron—with the effective self-mass caused by the interaction of the electron with its own electromagnetic field. The bare mass and the self-interaction mass of the electron are both infinite in the calculations, but these infinities cancel one another, resulting in the finite observed mass of the electron. Paradoxically, although the initial calculations of scattering amplitudes were infinite and meaningless, the final, resulting scattering amplitudes using renormalizable quantum field theory involving interactions of electrons and photons were finite and independent of any cutoff in the energy that was used in the calculations of these amplitudes. Physicists chose this cutoff of infinite energies arbitrarily to make the provisional calculations finite. However, the final calculation after renormalization has been executed must not depend physically on such an arbitrarily chosen cutoff energy. Freeman Dyson demonstrated that a renormalizable theory is, in fact, finally independent of the arbitrary cutoff energy in his famous papers published in 1949 and 1952.3 As we recall, however, Paul Dirac, who was one of the main protagonists in the invention of QED, never accepted this renormalization scheme.
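The cancellation just described can be mimicked with a toy calculation. This is purely an illustration of the bookkeeping, not QED itself: both the "bare" mass and the self-energy below depend on the arbitrary cutoff, but their sum, the observed mass, does not. The numbers and the logarithmic form are invented for the example:

```python
# Toy illustration of renormalization: the bare mass and the self-energy
# each depend on the arbitrary cutoff, but the observed (renormalized)
# mass does not.  The logarithmic growth loosely mimics QED's mild
# divergence; the coefficients are invented for illustration.
import math

M_OBSERVED = 0.511  # target "observed mass" in MeV, e.g. the electron

def self_energy(cutoff):
    # Grows without bound as the cutoff is raised, like the electron's
    # self-mass from its own electromagnetic field.
    return 0.01 * math.log(cutoff)

def bare_mass(cutoff):
    # The renormalization condition: the bare mass is chosen so that
    # bare mass + self-energy reproduces the observed mass.
    return M_OBSERVED - self_energy(cutoff)

for cutoff in (1e3, 1e6, 1e12):
    m = bare_mass(cutoff) + self_energy(cutoff)
    print(f"cutoff = {cutoff:.0e}: bare = {bare_mass(cutoff):+.4f}, "
          f"self-energy = {self_energy(cutoff):+.4f}, observed = {m:.4f}")
```

However large the cutoff is taken, the cutoff dependence cancels between the two pieces, which is the sense in which Dyson showed the final, renormalized answers are independent of the arbitrary cutoff energy.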
Particle physics as theorists practice it today relies heavily on the more modern developments of renormalization theory, which is also used in investigations in condensed-matter physics. These developments play an important role in the formulation of QED. The success of QED depends primarily on the photon being massless, which allows the theory to be renormalizable. This fact allows one to prove that the QED based on the quantization of Maxwell’s classical field equations is gauge invariant. The feature of gauge invariance is crucial for the physical consistency of the theory. For example, it leads directly to the conservation of the electric charge current of the electron and matter fields.
The gauge invariance of QED gives rise to a set of mathematical identities discovered by John Ward during the early 1950s, called the Ward identities. These identities guarantee that the probabilities of scattering of photons and electrons never add up to more than 100 percent. In the initial version of Yang and Mills’s nonabelian gauge theory extension of Maxwell’s equations, the charged vector field was massless, which guaranteed that the theory had a gauge invariance, and the theory could be made renormalizable. It also satisfied the so-called unitarity condition for the S-matrix, guaranteeing that the probability of the scattering of particles never exceeded 100 percent. However, we recall Pauli’s objection to the original Yang–Mills assumption that massless charged vector mesons exist in nature. As we know, such particles do not exist in nature.