
The God Particle


by Leon Lederman


  The experiment goes as follows: we send 1,000 electrons toward the barrier. Geiger counters find that 550 penetrate the barrier and 450 are reflected, but in every case, it is an entire electron that is detected. The Schrödinger waves, when properly squared, give 550 and 450 as a statistical prediction. If we accept the Born interpretation, a single electron has a 55 percent probability of penetrating and a 45 percent chance of being reflected. Since a single electron never divides, Schrödinger's wave cannot be the electron. It can be only a probability.
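The statistics here can be mimicked in a few lines; this is a minimal sketch (the 0.55 probability comes from the text, everything else is illustrative) showing whole-electron detections that reproduce 550/450 only on average:

```python
import random

random.seed(1)  # fixed seed so the illustrative run is repeatable

# Each of 1,000 electrons is detected whole: transmitted with
# probability 0.55, reflected with probability 0.45, as in the
# Born reading of the squared Schrodinger wave.
N = 1000
transmitted = sum(1 for _ in range(N) if random.random() < 0.55)
reflected = N - transmitted
print(transmitted, reflected)  # near 550 and 450; never a fraction of an electron
```

Run after run, the split hovers around 550/450, but no single electron ever registers as a fraction in either counter.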

  Born, along with Heisenberg, was part of the Göttingen school, a group of some of the brightest physicists of the age whose professional and intellectual lives revolved around the University of Göttingen in Germany. Born's statistical interpretation of Schrödinger's psi came from the Göttingen school's conviction that electrons are particles. They make Geiger counters click. They leave sharp tracks in Wilson cloud chambers. They collide with other particles and bounce off. So here is Schrödinger's equation, which gives correct answers but describes electrons as waves. How can it be converted to a particle equation?

  Irony is a constant companion of history, and the idea that changed everything was given (again!) by Einstein in a speculative paper of 1911 on the relationship of photons to Maxwell's classical field equations. Einstein had suggested that the field quantities guided the photons to places of high probability. Born's resolution of the particle-wave conflict is simply this: the electron (and its friends) act like particles at least when they are being detected, but their distribution in space between measurements follows the wavelike probability patterns that emerge from the Schrödinger equation. In other words, the Schrödinger psi quantity describes the probable location of the electrons. And this probability can behave like a wave. Schrödinger did the hard part, crafting the equation that lies at the heart of the theory. But it was Born, inspired by Einstein's paper, who figured out what the equation was actually predicting. The irony is that it was Born's probability interpretation of the wave function that Einstein never accepted.

  WHAT THIS MEANS, OR THE PHYSICS OF CLOTH CUTTING

  The Born interpretation of the Schrödinger equation is the single most dramatic and major change in our world view since Newton. It is not surprising that Schrödinger found the idea totally unacceptable and regretted inventing an equation that would involve such foolishness. However, Bohr, Heisenberg, Sommerfeld, and others accepted it with little fuss because "probability was in the air." Born's paper made the eloquent assertion that the equation predicts only probability, but that the mathematical form of that probability evolves along perfectly predictable paths.

  In this new interpretation, the equation deals with probability waves, ψ, which predict what the electron is doing, what its energy is, where it will be, and so on. However, these predictions are in the form of probabilities. What "waves" about the electron is just these probability predictions. These wavelike solutions to the equation can pile up in one place to add up to high probability and cancel in other places to yield low probability. When one puts these predictions to the test, one in effect does the experiment a huge number of times. Indeed, in most of the trials, the electron ends up where the equation says the probability is high; only very rarely does it end up where the probability is low. There is quantitative agreement. What is shocking is that for two apparently identical experiments one can get two quite different results.
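The piling up and canceling can be sketched with complex amplitudes (the numbers here are invented for illustration; the probability is the squared magnitude of the summed amplitude):

```python
import cmath

# Two wavelike amplitudes reaching the same detector.  The probability
# is the squared magnitude of their SUM, so equal amplitudes can
# reinforce (high probability) or cancel outright (low probability).
a1 = 0.5 + 0j                                  # amplitude via one path
a2_in_phase = 0.5 * cmath.exp(0j)              # same phase as a1
a2_opposite = 0.5 * cmath.exp(1j * cmath.pi)   # opposite phase

p_high = abs(a1 + a2_in_phase) ** 2   # amplitudes pile up -> 1.0
p_low = abs(a1 + a2_opposite) ** 2    # amplitudes cancel -> ~0.0
print(p_high, p_low)
```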

  The Schrödinger equation with Born's probability interpretation of the wave function has been enormously successful. It is the key to understanding hydrogen and helium and, given a big enough computer, uranium. It was used to understand how two elements combine to make a molecule, putting chemistry on a far more scientific footing. It allows one to design electron microscopes and even proton microscopes. In the period 1930–1950 it was carried into the nucleus and was found to be as productive there as in the atom.

  The Schrödinger equation predicts with a high degree of accuracy, but, again, what it predicts is probability. What does that mean? Probability in physics is similar to probability in life. It's a billion-dollar business, as executives from insurance companies, clothing manufacturers, and a good fraction of the Fortune 500 industries will assure you. Actuaries can tell us that the average white American nonsmoking male born in, say, 1941, will live to be 76.4 years old. But they can't tell you diddly about your brother Sal, who was born that same year. For all they know, he could be run over by a truck tomorrow or die of an infected toenail in two years.

  In one of my classes at the University of Chicago, I play garment-center mogul for my students. Being a success in the rag trade is similar to making a career in particle physics. In either case, you need a strong grasp of probability and a working knowledge of tweed jackets. I ask the students to sing out their heights while I plot each student's height on a graph. I have two students at 4 foot 8 inches, one at 4 foot 10, four at 5 foot 2, and so on. One guy is 6 foot 6, way outside the others. (If Chicago only had a basketball team!) The average is 5 foot 7. After polling 166 students I have a nice, bell-shaped set of steps going up to 5 foot 7 and then stepping down toward the 6-foot-6 anomaly. Now I have a "distribution curve" of college freshman heights, and if I'm reasonably sure that choosing physics to fulfill the science requirement does not distort the curve, I have a representative sample of student heights at the University of Chicago. I can read percentages using the vertical scale; for example, I can figure out what percentage of students is between 5 foot 2 and 5 foot 4. With my graph I can also read that there is a 26 percent probability that the next student who shows up will be between 5 foot 4 and 5 foot 6, if this is something I want to know.
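Reading percentages off such a curve amounts to counting what fraction of the sample falls between two heights; in this sketch the 166 heights are invented stand-ins for the classroom poll, not actual data:

```python
import random

random.seed(0)  # repeatable invented sample

# 166 hypothetical heights in inches, drawn around the 5-foot-7 mean.
heights = [random.gauss(67, 3) for _ in range(166)]

def fraction_between(lo, hi, data):
    """Fraction of the sample in [lo, hi): the area read off the
    distribution curve between two heights."""
    return sum(lo <= h < hi for h in data) / len(data)

# e.g. the share of students between 5'4" (64 in) and 5'6" (66 in)
print(round(100 * fraction_between(64, 66, heights), 1), "percent")
```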

  Now I'm ready to make suits. If these students are my market (an unlikely prospect if I'm in the suit business), I can estimate what percentage of my suits should be size 36, 38, and so on. If I don't have a graph of heights, I have to guess, and if wrong, at the end of the season I have 137 size-46 suits left unsold (which I have to blame on my partner Jake, the schlemiel!).

  The Schrödinger equation, when solved for any situation involving atomic processes, generates a curve analogous to the distribution-of-student-heights curve. However, the shape may be quite different. If we want to know where the electron hangs out in the hydrogen atom—how far it is from the nucleus—we'll find some distribution that drops off sharply at about 10⁻⁸ centimeters, with about an 80 percent probability of finding the electron within the sphere of radius 10⁻⁸ centimeters. This is the ground state. If we excite the electron to the next energy level, we'll get a bell curve with a mean radius that's about four times as big. We can compute probability curves of other processes as well. Here we must clearly differentiate probability predictions from possibilities. The possible energy levels are very precisely known, but if we ask which energy state the electron will be found in, we calculate only a probability, which depends on the history of the system. If the electron has more than one choice as to which lower energy state to jump to, we can again predict probabilities; for example, an 82 percent probability of jumping to E₁, 9 percent into E₂, and so on. Democritus said it best when he proclaimed, "Everything existing in the universe is the fruit of chance and necessity." The various energy states are the necessities, the only conditions that are possible. But we can only predict the probabilities of the electron being in any of these possible states. That's a matter of chance.
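For the hydrogen ground state this distribution has a closed form; the sketch below evaluates the standard 1s radial density integral (the Bohr radius value is an assumed input) and lands in the ballpark of the rough 80 percent figure quoted above:

```python
import math

A0 = 0.529e-8  # Bohr radius in centimeters (assumed input)

def prob_within(r):
    """Probability that a 1s electron lies inside radius r:
    the closed-form integral of the radial density
    (4 / a0**3) * r**2 * exp(-2 * r / a0) from 0 to r."""
    x = 2 * r / A0
    return 1 - math.exp(-x) * (1 + x + x * x / 2)

# Chance of finding the electron within 1e-8 cm of the nucleus:
print(prob_within(1e-8))  # about 0.73, near the text's rough 80 percent
```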

  Concepts of probability are well known to actuarial experts today. But they were upsetting to physicists trained in classical physics in the early part of the century (and remain upsetting to many people today). Newton described a deterministic world. If you threw a rock, launched a rocket, or introduced a new planet to a solar system, you could predict where it would go with total certainty, at least in principle, as long as you knew the forces and the initial conditions. Quantum theory said no: initial conditions are inherently uncertain. You get only probabilities for predictions of whatever you want to measure: a particle's location, its energy, velocity, or whatever. The Born interpretation of Schrödinger was unsettling to physicists, who in the three centuries since Galileo and Newton had come to accept determinism as a way of life. Quantum theory threatened to transform them into high-level actuaries.

  A SURPRISE ON A MOUNTAINTOP

  In 1927 the English physicist Paul Dirac was trying to extend quantum theory, which at the time appeared to be at odds with Einstein's special theory of relativity. The two theories had already been introduced to each other by Sommerfeld. Dirac, intent on making the two theories happily compatible, supervised the marriage and its consummation. In doing so, he found an elegant new equation for the electron (curiously, we call it the Dirac equation). Out of this powerful equation comes the postdictum that electrons must have spin and must produce magnetism. Recall the g-factor from the beginning of the chapter. Dirac's calculations showed that the strength of the electron's magnetism as measured by g was 2.0. (It was much later that refinements led to the precise value given earlier.) More! Dirac (age twenty-four or so) found that in obtaining the electron-wave solution to his equation, there was another solution with bizarre implications. There had to be another particle with properties identical to those of the electron but with opposite electric charge. Mathematically, this is a simple concept. As every little kid knows, the square root of four is plus two, but it is also minus two because minus two times minus two is also four: 2 × 2 = 4, and −2 × −2 = 4. So there are two solutions. The square root of four is plus or minus two.

  The problem was that the symmetry implied by Dirac's equation meant that for every particle there must exist another particle with the same mass but opposite charge. So Dirac, a conservative gentleman who was so uncharismatic as to have generated legends, struggled with his negative solution and eventually predicted that nature must contain positive electrons as well as negative electrons. Someone coined the word antimatter. This antimatter should be all over the place, yet no one had ever spotted any.

  In 1932, a young Caltech physicist named Carl Anderson built a cloud chamber designed to register and photograph subatomic particles. A powerful magnet surrounded his apparatus to bend the path of the particles, giving a measure of their energy. Anderson bagged a bizarre new particle, or rather the track of one, in the cloud chamber. He called this strange new object a positron, because it was identical to an electron except that it had a positive charge instead of a negative charge. Anderson's publication made no reference to Dirac's theory, but the connection was soon made. He had found a new form of matter: the antiparticle that had popped out of the Dirac equation a few years earlier. The tracks were made by cosmic rays, radiation from particles that strike our atmosphere from the far reaches of our galaxy. Anderson, to get even better data, transported his apparatus from Pasadena to the top of a mountain in Colorado, where the air is thin and the cosmic rays are more intense.

  A front-page photograph of Anderson in the New York Times, announcing the discovery, was an inspiration to the young Lederman, his first exposure to the romantic adventure of schlepping equipment to the top of a high mountain to make important scientific measurements. Antimatter turned out to be a very big deal, inextricably involved in the lives of particle physicists, and I promise to say more about it in later chapters. Another quantum-theory success.

  UNCERTAINTY AND ALL THAT

  In 1927 Heisenberg invented his uncertainty relations, which put the cap on the great scientific revolution we call quantum theory. In truth, quantum theory wasn't wrapped up until the 1940s. Indeed, in its quantum field theory version, its evolution continues today, and the theory will not be complete until it is fully combined with gravitation. But for our purposes the uncertainty principle is a good place to end. Heisenberg's uncertainty relations are a mathematical consequence of the Schrödinger equation. They could also have been the logical postulates, or assumptions, of the new quantum mechanics. Since Heisenberg's ideas are crucial to understanding just how new the quantum world is, we need to dwell a bit here.

  Quantum designers insist that only measurements, dear to the hearts of experimenters, count. All we can ask of a theory is to predict the results of events that can be measured. This sounds like an obvious point, but forgetting it leads to the so-called paradoxes that popular writers without culture are fond of exploiting. And, I should add, it is in the theory of measurement that the quantum theory meets its past, present, and no doubt future critics.

  Heisenberg announced that our simultaneous knowledge of a particle's location and its motion is limited and that the combined uncertainty of these two properties must exceed ... nothing other than Planck's constant, h, which we first met in the formula E = hf. Our measurements of the particle's location and its motion (actually, its momentum) are reciprocally related to each other. The more we know about one, the less we know about the other. The Schrödinger equation gives us probabilities for these factors. If we devise an experiment that pinpoints the location of the electron—say it's at some coordinate with an extremely small uncertainty of position—the spread in the possible values of the momentum is correspondingly large according to Heisenberg's relation. The product of the two uncertainties (we can assign them numbers) is always greater than Planck's ubiquitous h. Heisenberg's relations dispose, once and for all, of the classical picture of orbits. The very concept of location or place is now less definite. Let's go back to Newton and to something we can visualize.

  Suppose we have a straight road on which a Hyundai is tooling along at some respectable speed. We decide that we are going to measure its location at some instant of time as it whizzes past us. We also want to know how fast it is going. In Newtonian physics, pinpointing the position and velocity of an object at a specific time allows one to predict precisely where it will be at any future time. However, when we assemble our rulers and clocks, our flashbulbs and cameras, we find that the more carefully we measure the position, the poorer our ability to measure the speed and vice versa. (Recall that the speed is the change of position divided by the time.) However, in classical physics we can continually improve on our accuracy in both quantities to arbitrary precision. We simply ask some government agency for more funds to build better equipment.

  In the atomic domain, by contrast, Heisenberg proposed a basic unknowability that cannot be reduced by any amount of equipment, ingenuity, or federal funding. He proposed that it is a fundamental property of nature that the product of the two uncertainties always exceeds Planck's constant. Strange as this may sound, there is a firm physical basis for this uncertainty in measurability of the microworld. For example, let's try to nail down the position of an electron. To do so, you must "see" it. That is, you have to bounce light, a beam of photons, off the electron. Okay, there! Now you see the electron. You know its location at a moment in time. But a photon glancing off the electron changes the electron's state of motion. One measurement undermines the other. In quantum mechanics, measurement inevitably produces change because you are dealing with atomic systems, and your measuring tools cannot be any smaller, gentler, or kinder. Atoms are one ten-billionth of a centimeter in radius and weigh a millionth of a billion-billionth of a gram, so it doesn't take much to influence them profoundly. By contrast, in a classical system, one can make sure that the act of measuring barely influences the system being measured. Suppose we want to measure water temperature. We don't change the temperature of a lake, say, by dipping a small thermometer into it. But dipping a fat thermometer into a thimble of water would be stupid since the thermometer would change the temperature of the water. In atomic systems, quantum theory says, we must include the measurement as part of the system.
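The relation can be put in numbers; this sketch uses the modern tight bound ħ/2 (the text's looser statement uses Planck's h itself as the scale), with textbook constants as assumed inputs:

```python
HBAR = 1.0545718e-34  # J*s: Planck's h divided by 2*pi
M_E = 9.109e-31       # electron mass in kg

def min_momentum_spread(delta_x):
    """Smallest momentum spread compatible with pinning the position
    down to delta_x (a Gaussian packet saturates dx * dp = hbar / 2)."""
    return HBAR / (2 * delta_x)

# Confine an electron to atomic size, about 1e-10 meters:
dp = min_momentum_spread(1e-10)
print(dp / M_E)  # implied velocity spread, roughly 6e5 meters per second
```

Squeezing delta_x tighter only drives the momentum spread up, which is why no amount of better equipment can beat the bound.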

  THE AGONY OF THE DOUBLE SLIT

  The most famous and most instructive example of the counterintuitive nature of quantum theory is the double-slit experiment. This experiment was first carried out by Thomas Young, a physician, in 1804 and was heralded as experimental proof of the wave nature of light. The experimenter aimed a beam of, say, yellow light at a wall in which he had cut two very fine parallel slits a very short distance apart. A distant screen caught the light that squirted through the slits. When Young covered one of the slits, a simple, bright, slightly broadened image of the other slit was projected on the screen. But when both slits were uncovered, the result was surprising. A careful examination of the light area on the screen revealed a series of equally spaced bright and dark fringes. Dark fringes are places where no light arrives.

  The fringes are proof, said Young, that light is a wave. Why? They are part of an interference pattern, which occurs when waves of any kind bump into each other. When two water waves, for example, collide crest to crest, they reinforce each other, creating a bigger wave. When they collide trough to crest, they cancel each other out. The wave flattens.

  Young's interpretation of the double-slit experiment was that at certain locations the wavelike disturbances from the two slits arrive on the screen in just the right phases to cancel each other out: a peak of the light wave from slit one arrives exactly at a trough of light from slit two. A dark fringe results. Such cancellations are quintessential indicators of wave interference. When two peaks or two troughs coincide at the screen, we get a bright fringe. The fringe pattern was accepted as proof that light was a wave phenomenon.
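Young's cancellation condition can be sketched numerically; the wavelength and slit spacing below are invented but plausible, and the formula is the standard cos² two-slit intensity pattern:

```python
import math

WAVELEN = 580e-9   # yellow light, meters (illustrative)
SLIT_SEP = 50e-6   # slit separation, meters (illustrative)

def intensity(theta):
    """Relative brightness at angle theta: the two slit amplitudes
    differ by the path-difference phase, and brightness goes as
    cos^2 of half that phase."""
    phase = 2 * math.pi * SLIT_SEP * math.sin(theta) / WAVELEN
    return math.cos(phase / 2) ** 2

print(intensity(0.0))  # peaks coincide: bright fringe, intensity 1.0
# Half-wavelength path difference: a trough meets a peak, a dark fringe.
print(intensity(math.asin(WAVELEN / (2 * SLIT_SEP))))
```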

 
