
Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality


by Anil Ananthaswamy


  Hertz, however, had seen only glimmers of this phenomenon. His receiver, which was intercepting invisible radio waves, worked better when it was illuminated by light, compared to when it was in darkness inside an enclosure. The radio waves had nothing to do with the light, yet something about the light was influencing the receiver. In a letter he wrote to his father in July 1887, Hertz was characteristically modest about his finding: “To be sure, it is a discovery, because it deals with a completely new and very puzzling phenomenon. I am of course less capable of judging whether it is a beautiful discovery, but of course it does please me to hear others call it that; it seems to me that only the future can tell whether it is important or unimportant.”

  It’s not surprising that what Hertz had observed could not be explained at the time. Physicists had yet to discover electrons, let alone understand the photoelectric effect in all its intricacies. Even as late as the early 1890s, our conception of reality was that atoms were the smallest constituents of the material world, but the structure of the atom was still unknown. The discovery of the electron and other important milestones lay on the path from Hertz to Einstein to quantum mechanics.

  Hertz, sadly, didn’t live to see any of them. He died on January 1, 1894. An obituary in the journal Nature recounted his last days: “A chronic, and painful, disease of the nose spread . . . and gradually led to blood poisoning. He was conscious to the last, and must have been aware that recovery was hopeless; but he bore his sufferings with the greatest patience and fortitude.” Hertz was only thirty-seven. His mentor, Hermann von Helmholtz (who would himself die later that year), wrote in the preface to Hertz’s monograph The Principles of Mechanics: “Heinrich Hertz seemed to be predestined to open up to mankind many of the secrets that nature had hitherto concealed from us; but all these hopes were frustrated by the malignant disease which . . . robbed us of this precious life and of the achievements which it promised.”

  —

  The secrets of nature that Hertz would surely have helped discover came thick and fast. The first one was the discovery of the electron, thanks to something called a cathode ray tube. The tube—essentially a sealed glass cylinder with electrodes on either end, and from which much of the air had been removed—was a scientific curiosity in the mid-nineteenth century. When a high voltage was applied across the electrodes, the tube would light up, and scientists reveled in showing these off to lay audiences. Soon, physicists discovered that pumping out more air, but not all of it, revealed something dramatic: rays seemed to emerge from the negative electrode (the cathode) and streak across to the positive electrode (the anode).

  Three years after Hertz’s death, the English physicist J. J. Thomson, using a series of elegant experiments, showed unequivocally that these rays were constituents of matter that were smaller than atoms, and their trajectories could be bent by an electric field in ways that proved the rays had negative charge. Thomson had discovered the electron. He, however, called them corpuscles. Thomson speculated these were literally bits of atoms. Not everyone agreed with his pronouncements. “At first there were very few who believed in the existence of these bodies smaller than atoms,” he would later say. “I was even told long afterwards by a distinguished physicist who had been present at my lecture at the Royal Institution that he thought I had been ‘pulling their legs.’”

  Such doubts aside, Thomson changed our conception of the atom forever.

  Meanwhile, after Hertz had made his initial discovery of the photoelectric effect, his assistant, Philipp Lenard, took up the cause. He was a fantastic experimentalist. His experiments clearly showed that ultraviolet light falling on metals produced the same kind of particles as seen in the cathode ray tubes: electrons. Crucially, the velocity of these electrons (and hence their energy) did not depend on the intensity of the incident light. Lenard, however, was a dodgy theorist and made a hash of trying to explain why.

  Enter Einstein. In 1905, Einstein wrote a paper on the photoelectric effect. In this paper, he referred to work by the German physicist Max Planck, who five years earlier had drawn first blood in the tussle between classical Newtonian physics and the soon-to-be-formulated quantum mechanics. Planck was trying to explain the behavior of certain types of objects called black bodies, which are idealized objects in thermal equilibrium that absorb all the radiation falling on them and radiate it back out. If the electromagnetic energy being emitted were infinitely divisible into smaller and smaller amounts, as it is in classical physics, making for a seamless continuum, then the theory’s predictions were at odds with the experimental data. Something was not quite right with classical notions of energy.

  To solve the puzzle, Planck argued that the spectrum of the black body electromagnetic radiation could be explained only if one thought of energy as coming in quanta, which are the smallest units of energy. Each unit is a quantum, and this quantum is a floor: for a given frequency of electromagnetic radiation, you cannot divide the energy into packets any smaller (the way you cannot divide a dollar into anything smaller than a cent). Using this assumption, Planck beautifully explained the observations. The idea of the quantum was born.
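
  To get a rough feel for the scale of these packets, here is a minimal numerical sketch of Planck’s relation, in which the quantum of energy at a given frequency is Planck’s constant times that frequency (the frequencies below are assumed, illustrative values, not data from any particular experiment):

```python
# Size of Planck's quantum of energy, E = h * nu, at a few illustrative frequencies.
h = 6.626e-34  # Planck's constant, in joule-seconds

frequencies = {
    "radio wave (~100 MHz)": 1e8,
    "visible light (~5e14 Hz)": 5e14,
    "ultraviolet (~1e15 Hz)": 1e15,
}

for name, nu in frequencies.items():
    quantum = h * nu  # the smallest indivisible packet of energy at this frequency
    print(f"{name}: one quantum = {quantum:.2e} J")
```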

  While Einstein did not fully embrace Planck’s ideas in his 1905 paper to explain the photoelectric effect, he would eventually do so. Einstein argued that since light is electromagnetic radiation, it too comes in quanta: the higher the frequency of the light, the higher the energy of each quantum. This relation is linear—doubling the frequency doubles the energy of the quantum. Einstein’s claim about light coming in quanta was crucial to understanding the photoelectric effect, in which light falling on a metal can sometimes dislodge an electron from an atom of the metal. For any given metal, said Einstein, an electron can be freed from the metal’s surface only if the incident quantum of light has a certain minimum amount of energy: anything less, and the electrons stay put. This explains why electrons never leave the metal surface if the incident light is below a threshold frequency: the quantum of energy is too low. And it does not matter if two quanta put together have the necessary amount of energy. The interaction between light and an atom of metal happens one quantum at a time. So, just pumping more and more quanta below the threshold frequency has no effect.

  With this theory, Einstein also predicted that the ejected electrons would get more energetic (or have greater velocities) as the frequency of the incident light increased. There is more energy in each quantum of light, and this imparts a stronger kick to the electron, causing it to fly out of the metal at a greater speed—a prediction that would soon be verified experimentally.
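
  A minimal sketch of that picture in code: a quantum of light frees an electron only if its energy exceeds the minimum needed to pull the electron out of the metal, and any excess shows up as the electron’s kinetic energy. The minimum energy and the frequencies below are assumed, illustrative values, not measurements of any specific metal.

```python
# Photoelectric effect in Einstein's picture: a quantum of light with energy
# h * nu frees an electron only if that energy exceeds the minimum W needed
# to pull the electron out of the metal; the excess becomes kinetic energy.
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # one electron-volt in joules

W = 2.3 * eV       # assumed minimum energy for an illustrative metal (~2.3 eV)
threshold = W / h  # below this frequency, no electrons are ejected

for nu in (4.0e14, 6.0e14, 8.0e14):  # assumed light frequencies, in hertz
    if nu < threshold:
        print(f"nu = {nu:.1e} Hz: below threshold, no electron ejected")
    else:
        K = h * nu - W  # kinetic energy of the ejected electron
        print(f"nu = {nu:.1e} Hz: electron ejected with {K / eV:.2f} eV of kinetic energy")
```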

  Einstein’s profound claim here was that light is made of small, indivisible particles, where the energy of each particle or quantum depends on the frequency or color of the light. The odd thing, of course, is that terms like frequency and wavelength refer to the wave nature of light, and yet these were getting tied to the idea of light as particles. A disturbing duality was beginning to raise its head. Things were getting confusing.

  Both Lenard and Einstein got Nobel Prizes for their work, Lenard in 1905 for his “work on cathode rays,” and Einstein in 1921, for explaining the photoelectric effect using Planck’s quantum hypothesis. Lenard, however, became deeply resentful of the accolades given to Einstein for theorizing about what Lenard regarded as his result. Lenard was an anti-Semite. In 1924, he became a member of Hitler’s National Socialist party. In front of his office at the Physics Institute in Heidelberg, there appeared a sign that read: “Entrance is forbidden to Jews and members of the so-called German Physical Society.” Lenard viciously attacked Einstein and his theories of relativity, with undisguised racism and anti-Semitism. “Einstein was the embodiment of all that Lenard detested. Where Lenard was a militaristic nationalist, Einstein was a pacifistic internationalist . . . Lenard decided that relativity was a ‘Jewish fraud’ and that anything important in the theory had been discovered already by ‘Aryans,’” Philip Ball wrote in Scientific American.

  In the midst of terrible social unrest and unhinged ideologies across Europe, the quantum revolution was set in motion.

  —

  As things stood in 1905, electrons were constituents of atoms (though it was still unclear whether that was the full story of the atom’s makeup). Then there were electromagnetic fields, described by Maxwell’s equations; light was clearly an electromagnetic wave. Yet light also came in quanta, and these quanta could be thought of as particles. Microscopic reality did not make a whole lot of sense.

  J. J. Thomson, meanwhile, had a question he wanted answered. What would happen when a few quanta of light went through a single slit (rather than two slits)? In 1909, a young scientist named Geoffrey Ingram Taylor started working with Thomson at his laboratory in Cambridge. Taylor decided to design an experiment to try and answer Thomson’s question. The answer resonates within quantum mechanics even today, and is particularly relevant for the story of the double-slit experiment.

  Picture a source of light that shines on an opaque sheet with a single slit. On the other side of the opaque sheet is a screen. Again, our naive expectation is that we’ll see a single strip of light on the screen. Instead, what appear are fringes, albeit in a different pattern than the one seen with the double slit. In the case of a single slit, the fringes can be explained by thinking of each point in the opening, or aperture, of the slit as the source of a new wave; these waves then interfere with one another, leading to what’s called a diffraction pattern. It’s another proof that light behaves like a wave. When there’s lots of light, the results are easy to explain: light is an electromagnetic wave, and so we should see fringes.
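
  In the wave picture, the fringe pattern behind a single slit can be computed directly. Here is a minimal sketch under the standard far-field approximation, with a slit width and wavelength chosen purely for illustration:

```python
import numpy as np

# Far-field intensity behind a single slit of width a, lit by light of
# wavelength lam: intensity is proportional to sinc^2(a * sin(theta) / lam),
# where numpy's sinc(x) = sin(pi*x) / (pi*x).  The values below are illustrative.
lam = 500e-9  # wavelength: 500 nm, an assumed value
a = 5e-6      # slit width: 5 micrometres, an assumed value

theta = np.linspace(-0.3, 0.3, 11)                 # viewing angles, in radians
intensity = np.sinc(a * np.sin(theta) / lam) ** 2  # bright and dark fringes

for t, i in zip(theta, intensity):
    print(f"theta = {t:+.2f} rad   relative intensity = {i:.3f}")
```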

  But given that light also comes in quanta, or particles, Thomson wanted to understand the single-slit phenomenon when the intensity of light falling on the single slit is turned way down, so that only a few quanta of light go through the slit at any one time. Now, if the screen on the far side is a photographic plate that records each quantum of light, then over time, would one see interference fringes? Thomson argued that there should be blurry fringes, because in order to get sharp fringes, numerous quanta should arrive simultaneously at the screen and interfere. Reducing the quanta reaching the screen at the same time to a trickle should reduce the amount of interference and hence the sharpness of the fringes, Thomson hypothesized.

  Taylor was in his twenties and starting out on his career as an experimental physicist. He chose this experiment as the subject of his first scientific paper, but oddly, he recalled years later, “I chose that project for reasons which, I fear, had nothing to do with its scientific merits.” Consequently, he performed the experiment in the children’s playroom of his parents’ home. To create a single slit, he stuck metal foil onto a piece of glass and, using a razor blade, etched a slit in the metal foil. For a source of light he used a gas flame. Between the flame and the slit, he placed many layers of darkened glass. Taylor calculated that the light falling on the single slit was so faint that it was equivalent to a candle burning a mile away. On the other side of the slit, Taylor placed a needle, whose shadow he captured on a photographic plate. The light—ostensibly just a few quanta at a time—passed through the slit and landed on the photographic plate. What would the plate record after weeks of exposure to the faint light?

  Taylor’s mind, meanwhile, was elsewhere. He was becoming an accomplished sailor. He set up his experiment so that he could get enough of an exposure on the photographic plate after six weeks. “I had, I think rather skillfully, arranged that this stage would be reached about the time when I hoped to start a month’s cruise in a little sailing yacht I had recently purchased,” he said. During the longest stretch of the experiment, in which the photographic plate was exposed for three months, Taylor reportedly went away sailing.

  After that three-month-long exposure, Taylor saw interference fringes—as sharp as if the photographic plate had been exposed to more intense light for a very short time. Thomson was proved wrong. Taylor never followed up on this negative result. If he had, he might have played an important role in the development of quantum mechanics—for his results were hinting at the odd behavior of photons. Instead of pursuing this any further, Taylor changed directions and went on to make seminal contributions to other fields of physics, particularly fluid mechanics.

  —

  Thomson, however, wasn’t done being a mentor. In the autumn of 1911, a young Danish scientist named Niels Bohr came to work with Thomson. Soon thereafter, Bohr moved to Manchester to study with New Zealand–born British physicist Ernest Rutherford, who was probing the structure of the atom. Rutherford’s work had established that the atom, besides having electrons, also has a positively charged nucleus. Calculations showed that much of the mass of the atom is in the nucleus. What emerged was a new picture of an atom: negatively charged electrons orbiting a positively charged nucleus, the way planets orbit the sun.

  Almost immediately, physicists realized that this model had serious shortcomings. Newton’s laws mandated that orbiting electrons had to be accelerating if they were to remain in their orbits without falling into the nucleus. And Maxwell’s equations showed that accelerating electrons should radiate electromagnetic energy, losing energy and eventually spiraling into the nucleus, making all atoms unstable. Of course, that’s not what happens in nature. The model was wrong.

  An interim solution came courtesy of the young Bohr. In 1913, Bohr proposed that the energy levels of electrons orbiting a nucleus did not change in a continuous manner, and also that there was a limit to the lowest energy level of an electron in an atom. Bohr was arguing that the orbits of the electrons and hence their energy levels were quantized. For any given nucleus, there’s an orbit with the lowest possible energy. This orbit would be stable, said Bohr. If an electron were in this lowest-energy orbit, it could not fall into the nucleus, because to do so, it’d have to occupy even smaller orbits with lower and lower energies. But Bohr’s model prohibited orbits with energies smaller than the smallest quantum of orbital energy. There was nowhere lower for the electron to fall. And apart from this stable, lowest-energy orbit, an atom has other orbits, which are also quantized: an electron cannot go from one orbit to another in a continuous fashion. It has to jump.

  To get a sense for how weird it must have been for physicists in the early twentieth century to understand Bohr’s ad hoc claims, imagine you are driving your car and want to go from 10 to 60 miles per hour. In an analogy to the way electrons behave in orbits, the car jumps from 10 mph to 60 mph in chunks of 10 mph, without going through any of the intermediate speeds. Moreover, no matter how hard you brake, you cannot slow the car down to below 10 mph, for that’s the smallest quantum of speed for your car.

  Bohr also argued that if an electron moves from a high-energy to a low-energy orbit, it does so by emitting radiation that carries away the difference in energy; and to jump to an orbit with higher energy, an electron has to absorb radiation with the requisite energy.
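
  For hydrogen, Bohr’s quantized orbits lead to the standard textbook energy levels of −13.6 eV divided by the square of the orbit number, and the radiation emitted in a jump carries away the energy difference, with a frequency fixed by the E = hν relation discussed below. A minimal sketch of that bookkeeping (the jumps chosen are illustrative):

```python
# Bohr's hydrogen atom: the allowed orbits have energies E_n = -13.6 eV / n^2.
# A jump from a higher orbit n2 down to a lower orbit n1 emits radiation that
# carries away the energy difference; its frequency follows from E = h * nu.
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # one electron-volt in joules

def level(n):
    """Energy of the n-th Bohr orbit of hydrogen, in joules."""
    return -13.6 * eV / n**2

for n2, n1 in [(2, 1), (3, 2), (3, 1)]:  # illustrative jumps
    dE = level(n2) - level(n1)  # energy released in the jump n2 -> n1
    nu = dE / h                 # frequency of the emitted radiation
    print(f"jump {n2} -> {n1}: {dE / eV:.2f} eV released, frequency {nu:.2e} Hz")
```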

  To prevent electrons from losing energy while orbiting the nucleus, which they would have to according to Maxwell’s theory, Bohr argued that the electrons existed in special “stationary” states, in which they did not radiate energy. The upshot of this somewhat arbitrary postulate was that another property of electrons, their angular momentum, was also quantized: it could have certain values and not others.

  It was all terribly confounding. Nonetheless, there were connections emerging between the work of Planck, Einstein, and Bohr. Planck had shown that the energy of electromagnetic radiation was quantized, where the smallest quantum of energy (E) was equal to a number called Planck’s constant (h) multiplied by the frequency of the radiation (ν), producing his famous equation E = hν. Einstein showed that light came in quanta, and the energy of each quantum, or photon, was also given by the same equation, E = hν (where ν refers to the frequency of the light).

  While Bohr had shown that the energy levels in atoms were quantized, it’d take him a decade or so more to accept that when electrons jumped energy levels, the radiation going in or coming out of the atom was in the form of quanta of light (Bohr initially insisted that the radiation was classical, wavelike).

  But when Bohr did accept Einstein’s idea of light quanta, he saw that the absorbed or emitted energy of the photons was given by, again, E = hν. (Bohr wasn’t the only big name resisting Einstein’s ideas. The notion of light being quantized was hard to stomach for physicists, given the success of Maxwell’s equations of electromagnetism in describing the wave nature of light. For instance, Planck, when he was enthusiastically recommending Einstein for a seat in the Prussian Academy of Sciences in 1913, slipped in this caveat about Einstein: “That he might sometimes have overshot the target in his speculations, as for example in his light quantum hypothesis, should not be counted against him too much.”)

  Still, the evidence for nature’s predilection for sometimes acting like waves and sometimes like particles continued to grow. In 1924, Louis de Broglie, in his PhD thesis, extended this relationship to particles of matter too, and provided a more intuitive way to envision why the orbits of electrons are quantized. Matter, said de Broglie, also exhibited the same wave-particle duality that Einstein had shown for light. So an electron could be thought of as both a wave and a particle. And atoms too. Nature, it seems, did not discriminate: everything had wavelike behavior and particle-like behavior.

  The idea helped make some sense of Bohr’s model of the atom. Now, instead of thinking of an electron as a particle orbiting the nucleus, de Broglie’s ideas let physicists think of the electron as a wave that circles the nucleus, the argument being that the only allowed orbits are those whose circumference fits a whole number of the electron’s wavelengths: one, two, three, four, and so on. Fractional wavelengths are not allowed.
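
  A minimal sketch of that argument, using standard textbook values for hydrogen: the electron’s de Broglie wavelength is Planck’s constant divided by its momentum, and the circumference of the n-th Bohr orbit turns out to hold exactly n such wavelengths.

```python
import math

# de Broglie: a particle with momentum p has wavelength lam = h / p.  For
# hydrogen, the n-th Bohr orbit has radius n^2 * a0 and orbital speed v1 / n,
# so its circumference should hold exactly n de Broglie wavelengths.
h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg
a0 = 5.292e-11   # Bohr radius, m
v1 = 2.188e6     # electron speed in the lowest Bohr orbit, m/s

for n in (1, 2, 3, 4):
    r = n**2 * a0                  # radius of the n-th orbit
    v = v1 / n                     # electron speed in that orbit
    lam = h / (m_e * v)            # de Broglie wavelength of the electron
    waves = 2 * math.pi * r / lam  # wavelengths that fit around the orbit
    print(f"n = {n}: {waves:.3f} wavelengths fit around the orbit")
```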

  It was clear by then that physics was undergoing a profound transformation. Physicists were beginning to explain previously inexplicable phenomena, using these ideas of quantized electromagnetic radiation, quantized electron orbits, and the like, at least for the simplest atom, that of hydrogen, which has one electron orbiting the nucleus. More complex atoms were not so easily tamed, even with these new concepts. Still, what was being explored was the very structure of reality—how atoms behave and how the electrons inside atoms interact with the outside world via radiation, or light. But the successes notwithstanding, the puzzles were also mounting.

 
