
The Particle at the End of the Universe: How the Hunt for the Higgs Boson Leads Us to the Edge of a New World


by Sean Carroll


  When you use a remote control to turn on your TV, it looks like action at a distance, but it’s really not. You push the button and an electrical current starts to jiggle back and forth inside a circuit in the remote, creating a radio wave that propagates through the electromagnetic field to the TV and is absorbed by a similar gizmo. In the modern world, the electromagnetic field around us is made to do an enormous amount of work—illuminating our environment, sending signals to our cell phones and wireless computers, and microwaving our food. In every case it’s set up by moving charges that send ripples out through the field. All of which, by the way, was completely unanticipated by Hertz. When he was asked what his radio-wave-detecting device would ultimately be good for, he replied, “It’s of no use whatsoever.” Prodded to suggest some practical application, he said, “Nothing, I guess.” Something to keep in mind as we contemplate the eventual applications of basic research.

  Waves of gravity

  It wasn’t until after physicists understood the relationship between electromagnetism and light that they began to wonder whether a similar phenomenon should happen with gravity. It might seem like an academic question, since you need an object the size of a planet or moon to create a gravitational field big enough to measure. We’re not going to pick up the earth and shake it back and forth to make waves. But to the universe, that’s no problem at all. Our galaxy is full of binary stars—systems where two stars orbit around each other—presumably shaking the gravitational field as they go. Does that lead to rippling waves spreading in every direction?

  Interestingly, gravity as Newton or Laplace described it would not predict radiation of any kind. When a planet or star moves, the theory says that its gravitational pull changes instantaneously all across the universe. It’s not a propagating wave but an instant transformation everywhere.

  That’s just one way in which Newtonian gravity doesn’t seem to fit well with the changing framework of physics that developed over the course of the nineteenth century. Electromagnetism, and especially the central role played by the speed of light, was instrumental in inspiring Albert Einstein and others to develop the theory of special relativity in 1905. According to that theory, nothing can travel faster than light—not even hypothetical changes in the gravitational field. Something would have to give. After ten years of hard work, Einstein was able to construct a brand-new theory of gravity, known as “general relativity,” that replaced Newton’s entirely.

  Just like Laplace’s version of Newtonian gravity, Einstein’s general relativity describes gravity in terms of a field that is defined at every point in space. But Einstein’s field is a much more mathematically complicated and intimidating field than Laplace’s; rather than the gravitational potential, which is just a single number at each point, Einstein used something called the “metric tensor,” which can be thought of as a collection of ten independent numbers at every point. This mathematical complexity helps general relativity accrue its reputation as a very difficult theory to understand. But the basic idea is simple, if profound: The metric describes the curvature of spacetime itself. According to Einstein, gravity is a manifestation of the bending and stretching of the very fabric of space, the way we measure distances and times in the universe. When we say, “The gravitational field is zero,” we mean that spacetime is flat, and the Euclidean geometry we learned in high school is valid.
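  Where the figure of ten independent numbers comes from is easy to check: the metric tensor at each point is a symmetric 4 × 4 matrix (four spacetime dimensions), and a symmetric n × n matrix is fixed by its diagonal and upper triangle. A quick sketch, with the counting formula as the only assumption beyond the text:

```python
# A symmetric 4x4 matrix (like the metric tensor at one point of
# spacetime) is determined by its diagonal plus its upper triangle.
def independent_components(n):
    """Count the independent entries of a symmetric n x n matrix."""
    return n * (n + 1) // 2

# Spacetime has four dimensions (one time plus three space), so:
print(independent_components(4))  # 10
```

Laplace’s gravitational potential, by contrast, is a single number at each point, which is why Einstein’s field is so much richer.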

  One happy consequence of general relativity is that, just like with electromagnetism, ripples in the field describe waves traveling at the speed of light. And we have detected them, although not directly. In 1974, Russell Hulse and Joseph Taylor discovered a binary system in which both objects are neutron stars, rapidly spinning in a very tight orbit. General relativity predicts that such a system should lose energy by giving off gravitational waves, causing the orbital period to gradually decrease as the stars draw closer together. Hulse and Taylor were able to measure this change in the period, exactly as predicted by Einstein’s theory; in 1993, they were awarded the Nobel Prize in Physics for their efforts.

  That’s an indirect measurement of gravitational waves, rather than directly seeing their effects in a laboratory here on earth. We are certainly trying. There are a number of ongoing efforts to observe gravitational waves coming from astrophysical sources, typically by bouncing lasers off mirrors separated by several kilometers. As a gravitational wave passes by, it stretches spacetime, bringing the mirrors closer together and then farther apart. That can be detected by measuring tiny changes in the number of laser wavelengths between the two mirrors. In the United States, the Laser Interferometer Gravitational Wave Observatory (LIGO) consists of two separate facilities, one in Washington State and the other in Louisiana. They collaborate with the VIRGO observatory in Italy and GEO600 in Germany. None of these laboratories has yet detected gravitational waves—but scientists are very optimistic that recent upgrades will help them make a dramatic discovery. If and when they do, it will be vivid confirmation that gravity is communicated by a dynamic, vibrating field.

  Particles out of fields

  The realization that light is an electromagnetic wave flew in the face of Newton’s theory of light, which insisted that it was made of particles dubbed “corpuscles.” There were good arguments on both sides. On the one hand, light casts a sharp shadow, like you might expect from a spray of particles, rather than bending around corners, as our experience with water and sound waves might lead us to believe. On the other hand, light can form interference patterns when passing through narrow openings, as a wave would do. The electromagnetic synthesis seemed to clinch the issue in favor of waves.

  Conceptually, a field is the opposite of a particle. A particle has a specific location in space, while a field exists at every point in space. It’s defined by its magnitude, which is some particular number at every point, and maybe some other qualities like a direction. Quantum mechanics, which was born in 1900 and came to dominate the physics of the twentieth century, ultimately brought the two concepts together. Long story short: Everything is made of fields, but when we look at them closely we see particles.

  Imagine you are outside on a very dark night, watching a friend holding a candle walk away from you. The candle grows dimmer as the distance to your friend increases. Eventually it becomes so dim that you can’t see it at all. But, you might think, that’s due to the fact that our eyes are imperfect instruments. Perhaps if we had ideal vision, we would see the candle grow progressively dimmer but never quite go away entirely.

  Actually that’s not what would happen. With perfect eyes, we would see the candle grow dimmer for a while, but at some point a remarkable thing would happen. Rather than growing progressively more faint, the candlelight would begin to flicker on and off, with some fixed brightness while it was on. As your friend retreated, the off periods would lengthen with respect to the on periods; eventually the candle would be almost completely dark, save for very rare flashes of low-intensity light. Those flashes would be individual particles of light: photons. Physicist David Deutsch discusses this thought experiment in his book The Fabric of Reality, where he notes that frogs have better vision than humans, good enough to see individual photons.
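  The flickering can be made rough-and-ready quantitative. The numbers below—a candle emitting about 10^17 visible photons per second, a 4-millimeter pupil—are order-of-magnitude assumptions of mine, not figures from the text, but they show how quickly a smooth glow turns into a trickle of individual photons:

```python
import math

# Back-of-the-envelope: photon arrival rate at an eye watching a candle.
# All numbers are order-of-magnitude assumptions for the sketch.
PHOTONS_PER_SECOND = 1e17   # assumed visible-photon output of a candle
PUPIL_RADIUS_M = 2e-3       # assumed ~4 mm pupil diameter

def photons_per_second_at_eye(distance_m):
    """Photon arrival rate at the pupil, assuming isotropic emission."""
    pupil_area = math.pi * PUPIL_RADIUS_M ** 2
    sphere_area = 4 * math.pi * distance_m ** 2
    return PHOTONS_PER_SECOND * pupil_area / sphere_area

for d in (10, 1_000, 100_000):  # meters
    print(f"{d:>7} m: {photons_per_second_at_eye(d):.3g} photons/s")
```

Under these assumptions the rate falls from about a billion photons per second at ten meters to roughly ten per second at a hundred kilometers—at that point even a perfect eye would see flashes, not a steady glow.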

  The idea behind photons stretches back to Max Planck and Albert Einstein at the turn of the last century. Planck was thinking about the radiation that objects give off when they are heated. The wave theory of light predicted there should be much more radiation coming out with very short wavelengths, and therefore very high energies, than we actually observe. Planck suggested a brilliant and somewhat startling way out: that light came in discrete packets, or quanta (plural of “quantum”), and that a light quantum with some fixed wavelength would have a fixed energy. You need a good amount of energy to make just one quantum of short-wavelength light; Planck’s idea therefore helped explain why there is so much less radiation at short wavelengths than the wave theory predicted.

  This connection between energy and wavelength is a key concept in quantum mechanics and field theory. The wavelength is just the distance between two successive crests of a wave. When it’s short, the wave is all bunched together. It costs energy to do that, so we see why Planck’s packets of light have high energy when the corresponding wavelength is short, as in ultraviolet light or X-rays. Long wavelengths, like radio waves, imply individual light quanta with very low energies. Once quantum mechanics was invented, this relationship could even be extended to massive particles. High mass implies short wavelength, which means that a particle takes up less space. That’s why it’s the electrons, not the protons or neutrons, that define the size of an atom; they’re the lightest particle involved, so they have the longest wavelength, and therefore take up the most space. In a sense, it’s even why the LHC has to be so big. We’re trying to look at things that happen within very short distances, which means we need to use very small wavelengths, which means we need highly energetic particles, which means we need a giant accelerator to get them moving as fast as possible.
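  Planck’s rule can be written as E = hc/λ: the energy of a single quantum is Planck’s constant times the speed of light, divided by the wavelength. A short sketch comparing a radio photon to an X-ray photon (the two example wavelengths are my own illustrative choices):

```python
# Photon energy from wavelength via the Planck relation E = h*c / lambda.
H = 6.626e-34   # Planck's constant, joule-seconds
C = 2.998e8     # speed of light, meters per second

def photon_energy_joules(wavelength_m):
    return H * C / wavelength_m

# A 1-meter radio wave versus a 0.1-nanometer X-ray:
radio = photon_energy_joules(1.0)
xray = photon_energy_joules(1e-10)
print(f"radio photon: {radio:.3g} J")
print(f"x-ray photon: {xray:.3g} J")
print(f"ratio: {xray / radio:.3g}")
```

The X-ray quantum carries ten billion times the energy of the radio quantum, which is why short wavelengths are so expensive to produce—and why the LHC needs enormous energies to probe tiny distances.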

  Planck didn’t make the leap from quantized energies to literal particles of light. He thought of his idea as a sort of trick to get the right answer, not as a fundamental part of how reality works. That step was taken by Einstein, who was puzzling over something called the “photoelectric effect.” When you shine bright light on metal, you can shake electrons loose from the metal’s atoms. You might think that the number of electrons shaken free would depend on the intensity of the light, because more energy comes in when the light beam is more intense. But that’s not quite right; when the light has a long wavelength even a bright source doesn’t shake loose any electrons at all, while short-wavelength light is able to shake some loose even when it’s quite dim. Einstein realized that the photoelectric effect could be explained if we believed that light always came in individual quanta rather than as a smooth wave—not only when it was emitted by glowing bodies. “High intensity but long wavelength” implies a barrage of quanta, but each with an energy that is too small to disturb any electrons at all; “low intensity but short wavelength” means just a few quanta, but each with enough energy to do the job.
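  Einstein’s criterion boils down to a single comparison: an electron is ejected only if one photon, by itself, carries more energy than the “work function” binding the electron to the metal; intensity just sets how many photons arrive. A minimal sketch—the work function value of 2.3 electron-volts is an assumed example (roughly that of sodium), not a figure from the text:

```python
H = 6.626e-34    # Planck's constant, joule-seconds
C = 2.998e8      # speed of light, meters per second
EV = 1.602e-19   # joules per electron-volt

def ejects_electron(wavelength_m, work_function_ev):
    """Einstein's criterion: one photon alone must carry enough energy."""
    photon_energy_ev = H * C / wavelength_m / EV
    return photon_energy_ev > work_function_ev

W = 2.3  # assumed work function in eV (roughly that of sodium)
print(ejects_electron(700e-9, W))  # red light: False, however bright the beam
print(ejects_electron(300e-9, W))  # ultraviolet: True, even a dim beam works
```

A 700-nanometer photon carries only about 1.8 eV—below threshold, so no electrons come loose no matter how intense the light—while a 300-nanometer photon carries about 4.1 eV, enough for even a single quantum to do the job.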

  Neither Planck nor Einstein used the word “photon.” That was coined by Gilbert Lewis in the 1920s, and popularized by Arthur Compton. It was Compton who finally convinced people that light came in the form of particles, by showing that the light quanta had momentum as well as energy.

  Einstein’s paper on the photoelectric effect was the work for which he ultimately won the Nobel Prize. It was published in 1905, and Einstein had another paper in the very same issue of the journal where it appeared—his other paper was the one that formulated the special theory of relativity. That’s what it was like to be Einstein in 1905: You publish a groundbreaking paper that helps lay the foundations of quantum mechanics, and for which you later win the Nobel Prize, but it’s only the second-most important paper that you publish in that issue of the journal.

  Quantum implications

  Quantum mechanics sneaked up on physicists over the course of the early decades of the twentieth century. Starting with Planck and Einstein, people tried to make sense of the behavior of photons and atoms, and by the time they were done they had completely upended the reliable Newtonian view of the world. There have been many revolutions in physics, but two stand out far above the rest: when Newton put together his great vision of “classical” mechanics in the 1600s, and when a collection of brilliant scientists worked together to replace Newton’s theory with that of quantum mechanics.

  The major difference between the quantum world and the classical one lies in the relationship between what “really exists” and what we can actually observe. Of course any real-world measurement is subject to the imprecision of our measuring devices, but in classical mechanics we can at least imagine being more and more careful and bringing our measurements closer and closer to reality. Quantum mechanics denies us that possibility, even in principle. In the quantum world, what we can possibly see is only a small subset of what really exists.

  Here is a ham-fisted analogy to illustrate the point. Imagine you have a friend who is very photogenic, but you notice something unusual about pictures in which she appears—she is always precisely in profile, showing her left side or right side but never appearing from the front or back. When you see her from the side and then take a picture, the image is always correctly from that side. But when you see her from directly in front and then take a picture, half the time it comes out as her left profile and half the time as her right profile. (The terms of the analogy dictate that “taking a picture” is equivalent to “making a quantum observation.”) You can take a picture from one angle and then really quickly move around to take a picture from ninety degrees away—but you only ever capture her in profile. That’s the essence of quantum mechanics—our friend can really be in any orientation, but when we snap a photo we see only one of two possible angles. This is a good analogy for the “spin” of an electron in quantum mechanics, a property we only ever measure to be precisely clockwise or counterclockwise, no matter what axis we use to make the measurement.
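  The photograph analogy can be sketched as a little simulation. The quantum rule for a spin-one-half particle—that measuring along an axis at angle θ from the spin’s orientation gives “up” with probability cos²(θ/2)—is a standard result I’m assuming here, not something the text derives:

```python
import math
import random

def measure_spin(state_angle_deg, axis_angle_deg, rng):
    """Return +1 or -1: only two outcomes, whatever the underlying angle.

    Probability of +1 is cos^2(theta/2), where theta is the angle between
    the spin's orientation and the measurement axis (spin-1/2 rule).
    """
    theta = math.radians(state_angle_deg - axis_angle_deg)
    p_up = math.cos(theta / 2) ** 2
    return +1 if rng.random() < p_up else -1

rng = random.Random(0)
# State prepared "facing front": 90 degrees away from the profile axis.
results = [measure_spin(90, 0, rng) for _ in range(10_000)]
print(sum(r == +1 for r in results) / len(results))  # close to 0.5
```

Every snapshot comes out as one profile or the other—never anything in between—and a state prepared “from the front” splits fifty-fifty between the two, just like the friend in the analogy.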

  The same principle holds for other observable quantities. Consider the location of a particle. In classical mechanics, there is something called the “particle’s position,” and we can measure it. In quantum mechanics there is no such thing. Instead, there is something called the “wave function” of the particle, which is a set of numbers that reveal the probability of seeing the particle in any particular place when we look at it. There is no such thing as “where the particle is, really”—but when we look, we always see it in some particular place.

  When quantum mechanics gets applied to fields, we end up with “quantum field theory,” which is the basis for our modern explanations of reality at its most fundamental level. According to quantum field theory, when we observe a field carefully enough we see it resolve into individual particles—although the field itself is real. (The field actually has a wave function describing the probability of finding it with any particular value at each point in space.) Think of a TV set or computer monitor, which seems to display a smooth picture from a distance, but close up we find that it’s actually a collection of tiny pixels. On a quantum TV set there really is a smooth picture, but when we look closely at it we can only ever observe it as pixels.

  Quantum field theory is responsible for the phenomenon of virtual particles, including the partons (quarks and gluons) inside protons that are so crucial to what happens in LHC collisions. Just as we can never quite pin down a single particle to a definite position, we can never really pin a field down to a definite configuration. If we look at it closely enough, we see particles appearing and disappearing in empty space, depending on the local conditions. Virtual particles are a direct consequence of the uncertainty inherent in quantum measurement.

  Physics students for generations now have been confronted with the ominous-sounding question, “Is matter really made of particles or waves?” Often they get through years of education without quite grasping the answer. Here it is: Matter is really waves (quantum fields), but when we look at it carefully enough we see particles. If only our eyes were as sensitive as those of frogs, this might make more sense to us.

  Matter from fields

  So light is a wave, a set of propagating ripples in the electromagnetic field that pervades space. When we throw quantum mechanics into the mix, we end up with quantum field theory, which says that when we look closely at an electromagnetic field we see it as individual photons. The same logic works for gravity—it’s described by a field, and there are gravitational waves that move through space at the speed of light, and if we looked at such a wave carefully enough we would see it as a collection of massless particles called “gravitons.” Gravity is far too weak for us to imagine detecting individual gravitons, but the basic truth of quantum mechanics insists that they must be there. Likewise, the strong nuclear force is carried by a field that we observe as particles called “gluons,” and the weak nuclear force is a field carried by W and Z bosons.

  All well and good; once we get that forces arise from fields stretching through space, and that quantum mechanics makes fields look like particles, we have a pretty good grasp of how the forces of nature work. But what about the matter that those forces act upon? It’s one thing to think of gravity or magnetism as arising from a field, but something else entirely to think of atoms themselves as being associated with fields. If anything is truly a particle rather than a field, it’s one of those tiny electrons that orbits around atoms. Right?

  Wrong. Just like force-carrying particles, matter particles also arise from applying the rules of quantum mechanics to a field that fills space. As we’ve discussed, force-carrying particles are bosons, while matter particles are fermions. They correspond to different kinds of fields, but fields nevertheless.

  Bosons can pile on top of one another, while fermions take up space. Let’s think about this from the point of view of the fields of which those particles are vibrations. The difference between them comes down to a simple distinction: Boson fields can take on any value whatsoever, while each possible vibrational frequency of a fermion field is either “on” or “off,” once and for all. When a boson field like the electromagnetic field has a really large value, it corresponds to a large number of particles; when it’s a small but nonzero value, it’s just a few particles. Those possibilities aren’t open to fermion fields. There is either a particle there (in some particular state), or there isn’t. This crucial feature is known as the “Pauli exclusion principle”: No two fermion particles can be in the same state. To define the “state” of a particle we need to tell you where it is, what energy it has, and maybe some other features like how it is spinning. The Pauli exclusion principle basically says you can’t have two identical fermions doing exactly the same thing in exactly the same place.
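  The on/off distinction can be illustrated by counting configurations. With two identical particles and two available single-particle states (call them mode A and mode B—hypothetical labels for the sketch), bosons are allowed to double up but fermions are not:

```python
from itertools import combinations, combinations_with_replacement

# Count distinct two-particle configurations over a set of modes.
modes = ["mode_A", "mode_B"]

# Bosons: any number of particles may share a mode.
boson_states = list(combinations_with_replacement(modes, 2))
# Fermions: at most one particle per mode (Pauli exclusion).
fermion_states = list(combinations(modes, 2))

print(len(boson_states))    # 3: AA, AB, BB
print(len(fermion_states))  # 1: only AB -- doubling up is forbidden
```

Three configurations for bosons, only one for fermions: the “piled-up” states AA and BB simply do not exist for matter particles, which is the Pauli exclusion principle in miniature.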

 
