Richard Feynman

by John Gribbin


  By the time Feynman went to MIT, the structure of the atom, and the way it operated in accordance with both quantum mechanics and special relativity, were pretty well understood, except for some annoying details. The electron had been identified in the 1890s by the British physicist J. J. Thomson, the role of the proton was appreciated by the beginning of the 1920s, and the neutron was identified in 1932. This combination of particles was all that was needed to explain the structure of atoms. Each atom contains a nucleus that is a ball of positively charged protons and electrically neutral neutrons, held together (in spite of the tendency of the positive charge on the protons to make them repel one another) by a very short range force of attraction, called the strong nuclear force. Outside the nucleus, each atom ‘owns’ a cloud of electrons, with one negatively charged electron for each proton in the nucleus, held in place by the mutual attraction between the negative charge on the electrons and the overall positive charge on the nucleus. In addition, during the early 1930s physicists began to suspect the existence of another type of particle, dubbed the neutrino, which had never been detected directly but was required to balance the energy budget whenever a neutron transformed itself into a proton by spitting out an electron (a process known as beta decay). Beta decay involves a fourth kind of force (after gravity, electromagnetism and the strong force), dubbed the weak force, or weak interaction.

  Together with light, that’s all you need to explain the workings of the everyday world. But to anyone brought up on classical ideas (the kind of physics you get taught in school), there’s an obvious puzzle about this picture of the atom. Why don’t all the negatively charged electrons in the outer part of the atom get pulled into the nucleus by the attraction of all the positively charged protons? The world would be a far different place if they did, because the nucleus is typically about 100,000 times smaller than the electron cloud that surrounds it. The nucleus contains almost all of the mass of an atom (protons and neutrons have roughly the same mass, each about 2,000 times the mass of an electron), but the electrons are responsible for the atom’s relatively large size, and for the ‘face’ it shows to the world (that is, to other atoms). The reason they don’t fall into the nucleus is explained by the second revolution in 20th-century physics, the quantum revolution. Like the relativity revolution, this was also triggered by studies of the behaviour of light.

  At the end of the 19th century, the world seemed to be made up of two components. There were particles, like the newly discovered electrons, and there were waves, like the electromagnetic waves described by Maxwell’s equations. You can make waves in a bowl of water by jiggling your fingers about in the water, and you can make electromagnetic waves by jiggling an electrically charged particle to and fro. So it was pretty clear, even then, that light was produced by electrons jiggling about in some way inside atoms. Unfortunately, though, the best 19th-century theories predicted that this jiggling would produce a completely different spectrum of light from what we actually see.

  Figure 3. A wave. Two waves are in phase if they move in step so that the peaks reinforce one another. They are out of phase if the peaks of one wave exactly coincide with the troughs of the other wave, so that they cancel each other out. In-between states, with partial cancellation, are also possible.

  What the theorists had to do was to explain the way light would be emitted from an idealized source called a ‘black body’. This seemingly bizarre choice of name (if it is black, how can it radiate any light at all?) results from the fact that when such an object is cold, it absorbs all the light that falls on it, without reflecting any away. It treats all colours (each colour corresponds to a particular wavelength of light) the same. But if it is gradually heated up, it will first begin to radiate invisible infrared radiation, then it will start to glow red, then orange, yellow and blue at successively higher temperatures, until eventually it is white hot. You can tell the temperature of a black body precisely by measuring the wavelength of the light it is emitting. This light forms a continuous spectrum (the ‘black body curve’), with most energy radiated in a peak at the characteristic wavelength for that temperature (corresponding to red light, or blue light, or whatever) but some energy coming out in the form of electromagnetic waves with shorter wavelengths than this peak intensity, and some with longer wavelengths. The shape of the black body curve is like the outline of a smooth hill, and the peak itself shifts from longer wavelengths to shorter ones as the black body gets hotter.
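  To put rough numbers on this, the peak of the black body curve is governed by what is now called Wien’s displacement law (not quoted in the text above): the peak wavelength is a fixed constant divided by the temperature. The short sketch below, with a few round-number temperatures chosen purely for illustration, shows the peak sliding towards shorter wavelengths as the body heats up.

```python
# A minimal sketch of the temperature-wavelength relationship described above,
# using Wien's displacement law (lambda_peak = b / T), which the text does not
# quote explicitly but which follows from the shape of the black body curve.
WIEN_CONSTANT = 2.898e-3  # metre-kelvins (approximate)

def peak_wavelength_nm(temperature_kelvin):
    """Wavelength (in nanometres) at which a black body radiates most strongly."""
    return WIEN_CONSTANT / temperature_kelvin * 1e9

# Illustrative temperatures (round-number assumptions, not values from the text)
for label, temp in [("red-hot poker", 1000),
                    ("lamp filament", 3000),
                    ("surface of the Sun", 5800)]:
    print(f"{label} at {temp} K: peak near {peak_wavelength_nm(temp):.0f} nm")
# The output shows the peak sliding from the infrared towards visible
# wavelengths as the temperature rises, as described in the text.
```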

  But according to 19th-century physics, none of this should happen. If you try to treat the behaviour of electromagnetic waves in exactly the same way that you would treat vibrations of a guitar string, it turns out that it ought to be easier for an electromagnetic oscillator to radiate energy at shorter wavelengths, regardless of its temperature – so easy, in fact, that all of the energy put into a black body as heat should come pouring out as very short wavelength radiation, beyond the blue part of the spectrum, in the ultraviolet. This was known as the ‘ultraviolet catastrophe’, because the prediction certainly did not match up with the real world, where such things as red hot pokers (which behave in some ways very much like black bodies) were well known to the Victorians.

  The puzzle was resolved – up to a point – by the German physicist Max Planck, in the last decade of the 19th century. Planck, who lived from 1858 to 1947, spent years puzzling over the nature of black body radiation, and eventually (in 1900), as a result of a mixture of hard work, insight and luck, came up with a mathematical description of what was going on. Crucially, he was only able to find the right equation because he knew the answer he was looking for – the black body curve. If he had simply been trying to predict the nature of light radiated from a hot black body, he would never have produced the key new idea that did actually appear in his calculations.

  Planck’s new idea, or trick, was to assume that the electric oscillators inside atoms cannot emit any amount of radiation they like, but only lumps of a certain size, called quanta. In the same way, they would only be able to absorb individual quanta, not in-between amounts of energy. And in order to make Planck’s formula match the black body curve, the amount of energy in each quantum had to be determined by a new rule, relating the energy of the quantum involved to the frequency (f) of the radiation. Frequency is just the speed of light divided by the wavelength (so the shorter the wavelength, the higher the frequency), and Planck found that for electromagnetic radiation such as light

  E = hf

  where h is a new constant, now known as Planck’s constant.

  For very short wavelengths, f is very big, so the energy in each quantum is very big. For very long wavelengths, f is very small and the energy in each quantum is small. This explains the shape of the black body curve, and avoids the ultraviolet catastrophe. The total amount of energy being radiated at each part of the black body spectrum is made up of the contributions of all the quanta being radiated with the frequency (and wavelength) corresponding to that part of the spectrum. At long wavelengths, it is easy for atoms to radiate very many quanta, but each quantum has only a little energy, so only a little energy is radiated overall. At short wavelengths, each quantum radiated carries a lot of energy, but very few atoms are able to generate such high-energy quanta, so, again, only a little energy is radiated overall. But in the middle of the spectrum, where medium-sized quanta are radiated, there are many atoms which each contain enough energy to make these quanta, so the numbers add up to produce a lot of energy – the hill in the black body curve. And, naturally, the wavelength at which the peak energy is radiated shifts to shorter wavelengths as the black body gets hotter and more atoms are able to produce higher-energy (shorter wavelength) quanta.
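  A rough calculation makes this argument concrete. Each quantum carries E = hf, and since the frequency of light is the speed of light divided by the wavelength, short-wavelength quanta are far more ‘expensive’ than long-wavelength ones. Comparing that cost with the typical thermal energy available per atom (a simplification not spelled out in the text) shows why the radiated energy peaks in the middle of the spectrum. The figures below are standard constants plus an assumed temperature of 3,000 kelvin.

```python
# A rough numerical illustration of the argument above: each quantum of light
# carries E = h*f, and f = c/lambda, so short-wavelength quanta are "expensive".
# Comparing E with the typical thermal energy kT (a simplification not made in
# the text) shows why a black body at a few thousand kelvin radiates very
# little energy in the ultraviolet.
h = 6.626e-34   # Planck's constant, joule-seconds
c = 3.0e8       # speed of light, metres per second
k = 1.381e-23   # Boltzmann's constant, joules per kelvin

def quantum_energy_joules(wavelength_m):
    """Energy of one quantum (photon) of the given wavelength: E = h*f = h*c/lambda."""
    return h * c / wavelength_m

T = 3000.0  # an assumed black body temperature, in kelvin
for name, wavelength in [("infrared", 10e-6), ("red", 700e-9), ("ultraviolet", 200e-9)]:
    E = quantum_energy_joules(wavelength)
    ratio = E / (k * T)
    print(f"{name:11s} lambda={wavelength*1e9:7.0f} nm  E={E:.2e} J  E/kT={ratio:6.1f}")
# Long-wavelength quanta cost much less than kT (easy to make, but each carries
# little energy); ultraviolet quanta cost many times kT (almost never made),
# so the radiated energy peaks in between - the 'hill' in the black body curve.
```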

  Although physicists were pleased to have a black body formula that worked, at first this was regarded as no more than a mathematical trick, and there was no suggestion (least of all from Planck himself) that light could only exist in little lumps, the quanta. It took the genius of Albert Einstein to suggest, initially in 1905, that the quanta might be real entities, and that light could just as well be described as a stream of tiny particles as by a wave equation. Although Einstein’s interpretation of the quantum idea neatly solved an outstanding puzzle in physics (the way in which light shining on a metal surface releases electrons in the photoelectric effect), initially it met with a hostile reaction. One American researcher, Robert Millikan, was so annoyed by it that he spent ten years trying to prove Einstein was wrong, but succeeded only in convincing himself (and everybody else) that Einstein was right.

  After Millikan’s definitive experimental results were published (in 1916) it was only a matter of time before first Planck (in 1919) and then Einstein (in 1922, although it was actually the prize from 1921 held over for a year) received the Nobel Prize for these contributions. But the ‘particles of light’ were only given their modern name, photons, in 1926, by the American chemist Gilbert Lewis. By then, the Indian physicist Satyendra Bose had shown that the equation describing the black body curve (Planck’s equation) could actually be derived entirely by treating light as a ‘gas’ made up of these fundamental particles, without using the idea of electromagnetic waves at all.

  So, by the mid-1920s, there were two equally well-founded, accurate and useful ways of explaining the behaviour of light – either in terms of waves, or in terms of particles. But this was only half the story. We still haven’t explained why electrons don’t fall into the nucleus of an atom.

  The first step, producing a picture of the structure of the atom that is still the one often taught in schools, was taken by the Dane Niels Bohr, in the second decade of the 20th century. Bohr had been born in 1885 and lived until 1962. He completed his PhD studies in 1911 and a year later began a period of work in Manchester, where he stayed until 1916, working in the group headed by the New Zealand-born physicist Ernest Rutherford.

  Bohr’s model of the atom was like a miniature Solar System. The nucleus was in the middle and the electrons circled around the nucleus in orbits rather like the orbits of the planets around the Sun. According to classical theory, electrons moving in orbits like this would steadily radiate electromagnetic radiation away, losing energy and very quickly spiralling into the nucleus. But Bohr guessed that they could not do this because, extending Planck’s idea, they were only ‘allowed’ to radiate energy in distinct lumps, the quanta. So an electron could not spiral steadily inwards; instead it would have to jump from one stable orbit to another as it lost energy and moved inward – rather as if the planet Mars were suddenly to jump into the orbit now occupied by the Earth. But, Bohr said, the electrons could not all pile up in the innermost orbit (like all the planets in the Solar System suddenly jumping into the orbit of Mercury) because there was a limit on the number of electrons allowed in each orbit. If an inner orbit was full up, then any additional electrons belonging to that atom had to sit further out from the nucleus.

  The picture Bohr painted was based on a bizarre combination of classical ideas (orbits), the new quantum ideas, guesswork and new rules invoked to explain why all the electrons were not in the same orbit. But it had one great thing going for it – it explained the way in which bright and dark lines are produced in spectra.

  Most hot objects do not radiate light purely in the smooth, hill-shaped spectrum of a black body. If light from the Sun, say, is spread out using a prism to make a rainbow pattern, the spectrum is seen to be marked by sharp lines, some dark and some bright, at particular wavelengths (corresponding to particular colours). These individual lines are associated with particular kinds of atoms – for example, when sodium atoms are heated or energized electrically they produce two bright, yellow-orange lines in the spectrum, familiar today from the colour of certain street lamps. Bohr explained such lines as the result of electrons jumping from one orbit (one energy level) to another within the atoms. You can think of this as like jumping from one step to another on a staircase. A bright line is where identical electrons in many identical atoms (like the sodium atoms in street lights) have all jumped inward by the appropriate step, each releasing the same amount of electromagnetic energy, so that many quanta of light are emitted, all with the same frequency given by Planck’s formula E = hf. A dark line is where background energy has been absorbed by electrons making the appropriate jump up in energy, outward from one stable orbit into a more distant stable orbit (‘up a step’ on the staircase).
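  The staircase picture can be turned into numbers using the standard Bohr energy levels for hydrogen (the simplest atom, used here instead of sodium purely for illustration): the energy of the nth level is -13.6 electron-volts divided by n squared. A jump down from one level to another releases a quantum whose frequency, and hence wavelength, follows from Planck’s formula E = hf, as the sketch below shows.

```python
# A sketch of the 'staircase' picture above, using the standard Bohr energy
# levels for hydrogen, E_n = -13.6 eV / n**2 (hydrogen is chosen as the
# simplest example; the text's sodium lines arise the same way, but from a
# more complicated atom). A jump from one level down to another releases a
# quantum whose frequency follows from Planck's formula E = h*f.
h = 6.626e-34        # Planck's constant, joule-seconds
c = 3.0e8            # speed of light, metres per second
eV = 1.602e-19       # one electron-volt in joules

def bohr_level_eV(n):
    """Energy of the n-th Bohr orbit in hydrogen, in electron-volts."""
    return -13.6 / n**2

def line_wavelength_nm(n_upper, n_lower):
    """Wavelength of the bright line emitted when an electron drops a 'step'."""
    energy_joules = (bohr_level_eV(n_upper) - bohr_level_eV(n_lower)) * eV
    frequency = energy_joules / h          # Planck: E = h*f
    return c / frequency * 1e9

print(f"n=3 -> n=2: {line_wavelength_nm(3, 2):.0f} nm (red line in hydrogen's spectrum)")
print(f"n=4 -> n=2: {line_wavelength_nm(4, 2):.0f} nm (blue-green line)")
# Every hydrogen atom making the same jump emits a quantum of the same
# frequency, which is why the lines in the spectrum are sharp.
```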

  But why should only some orbits be stable, and others not? It was this puzzle that led the French physicist Louis de Broglie to make the next breakthrough in quantum theory, in the 1920s.

  De Broglie, who was born in 1892, only began serious scientific work after his military service during the First World War and completed his PhD in 1924, at the relatively ripe old age of 32 (he lived to an even riper old age, until 1982). De Broglie suggested that the way in which electrons could only occupy certain orbits around a nucleus was reminiscent of the way waves behaved, rather than particles. If you pluck an open violin string, for example, you can make waves on it in which there are exactly 1, or 2, or 3, or any whole number of half-wavelengths, corresponding to different notes (harmonics) ‘fitting in’ to the length of the string, by lightly touching the string at various points that are simple fractions (½, ⅓, ¼ and so on) of the length. But you can’t make a note corresponding to a wave with, say, 4.7 half-wavelengths filling the open string. In order to play that note you have to change the length of the string by pressing it hard with your finger against the neck of the violin. If electrons were really waves, said de Broglie, then each orbit in an atom might correspond to patterns in which a whole number of electron waves fitted around the orbit, making a so-called standing wave. The transition from one step on the energy level staircase to another would then correspond more to the transition from one harmonic to another than to a particle jumping from one orbit to another.
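  De Broglie’s condition can be checked with a short calculation. In the Bohr picture of hydrogen, each orbit has a standard radius and electron speed (values not quoted in the text, but well established), and the electron’s wavelength is Planck’s constant divided by its momentum. The sketch below confirms that exactly 1, 2, 3, … wavelengths fit around the first few orbits.

```python
# A minimal numerical check of de Broglie's idea as described above: in the
# Bohr picture, the circumference of the n-th orbit holds exactly n electron
# wavelengths. The orbit radii and speeds below are the standard Bohr-model
# values (not quoted in the text), and lambda = h / (m * v) is de Broglie's
# relation for the electron's wavelength.
import math

h = 6.626e-34        # Planck's constant, joule-seconds
m_e = 9.109e-31      # electron mass, kilograms
a0 = 5.292e-11       # Bohr radius, metres
v1 = 2.188e6         # electron speed in the innermost Bohr orbit, metres/second

for n in (1, 2, 3):
    radius = n**2 * a0                        # Bohr orbit radius
    speed = v1 / n                            # Bohr orbit speed
    wavelength = h / (m_e * speed)            # de Broglie wavelength
    waves_fitting = 2 * math.pi * radius / wavelength
    print(f"orbit n={n}: {waves_fitting:.3f} wavelengths fit around the circumference")
# Each orbit holds a whole number of waves (1, 2, 3, ...), which is why only
# those orbits are 'allowed' - a standing wave, like a harmonic on a string.
```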

  De Broglie’s suggestion was so revolutionary that his thesis supervisor, Paul Langevin, didn’t trust himself to decide on its merits, and sent a copy to Einstein, who responded that he thought the work was reliable. De Broglie got his PhD, and the scientific world had to come to terms with the fact that just as light, which they were used to thinking of as a wave, could also be described in terms of particles, so the electron, which they were used to thinking of as a particle, could also be described in terms of waves. In 1927, both an American team of physicists and George Thomson in England carried out experiments demonstrating the wave behaviour of electrons, scattering them from crystals. The wavelengths of electrons with a certain energy, measured in this way, exactly match de Broglie’s prediction – a wavelength equal to Planck’s constant divided by the electron’s momentum, the matter-wave counterpart of Planck’s formula E = hf. George Thomson, who thereby proved that electrons are waves, was the son of J. J. Thomson, who, a generation before, had first proved the existence of electrons as particles.
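  It is worth seeing why crystals were the right tool for the job. For an electron accelerated through a few tens of volts, the de Broglie wavelength comes out at a fraction of a nanometre, comparable to the spacing between atoms in a crystal, which therefore acts as a natural diffraction grating. The 54-volt figure below is the value usually quoted for the American experiment (carried out by Davisson and Germer), used here only as an illustration.

```python
# A short sketch of why crystals reveal the electron's wave nature: the
# de Broglie wavelength, lambda = h / p (with p = sqrt(2*m*E) for a slow,
# non-relativistic electron), is comparable to the spacing between atoms in
# a crystal. The 54-volt figure is the commonly quoted value from the
# American experiment (Davisson and Germer), used here only as an example.
import math

h = 6.626e-34        # Planck's constant, joule-seconds
m_e = 9.109e-31      # electron mass, kilograms
eV = 1.602e-19       # one electron-volt in joules

def de_broglie_wavelength_nm(kinetic_energy_eV):
    """Wavelength of an electron accelerated through the given voltage."""
    momentum = math.sqrt(2 * m_e * kinetic_energy_eV * eV)
    return h / momentum * 1e9

print(f"54 eV electron:  {de_broglie_wavelength_nm(54):.3f} nm")
print(f"150 eV electron: {de_broglie_wavelength_nm(150):.3f} nm")
# Both are a fraction of a nanometre - about the size of the gaps between
# atoms in a crystal, which is why a crystal acts as a diffraction grating
# for electrons.
```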

  The notion of ‘wave–particle duality’ became one of the key ingredients in the quantum theory that was developed in the mid-1920s, and which Richard Feynman studied as an undergraduate. In fact, the quantum theory was developed twice at that time, almost simultaneously, once using what was essentially a particle approach and once using what was essentially a wave approach. The leading light in the development of the particle version was Werner Heisenberg, the first major participant in the quantum game to have been born in the 20th century (on 5 December 1901, at Würzburg, in Germany). A variation on this theme (in many ways, more complete) was also developed independently by another young physicist, Paul Dirac, who was just a few months younger than Heisenberg, having been born at Bristol, in England, on 8 August 1902.

  Erwin Schrödinger, an Austrian physicist, was the odd one out among the pioneers of the new quantum theory: he had been born in 1887 and had obtained his doctorate back in 1910. He built on de Broglie’s ideas about electron waves, and came up with a version of quantum theory that was intended to do away with all the mysterious jumping of electrons from one level in an atom to another, deliberately harking back to the classical ideas of wave theory.

  It was Dirac who proved that all of these ideas were, in fact, equivalent to one another, and that even Schrödinger’s version did include this ‘quantum jumping’, among other things, in its equations. Schrödinger was disgusted, and famously commented of the theory he had helped to develop, ‘I don’t like it, and I wish I’d never had anything to do with it.’ Ironically, because most physicists learn about wave equations very early in their education, and feel comfortable with them, ever since quantum mechanics was established in the 1920s it is Schrödinger’s version that has been most widely used for tackling practical problems, like interpreting spectra.

  We don’t want to go over the whole story of the development of quantum theory in the 1920s here,4 and instead we’ll jump straight to the final picture, which can best be understood (as far as anything in quantum physics can be understood) in terms of an example which Feynman himself would, much later, call the ‘central mystery’ of quantum mechanics. It is the famous ‘experiment with two holes’.

  In this example you can imagine sending either a beam of light or a stream of electrons through two tiny holes in a screen – the experiment has actually been done with both, and everything we are going to discuss here has been proved by experiment. When waves travel through two holes in this way, the ripples fan out from each hole on the other side of the screen and combine to form what is called an interference pattern, exactly like the interference pattern you would see on the surface of a still pond if you dropped two pebbles into it at the same time. In the case of light, this basic experiment was one of the techniques used to prove, early in the 19th century, that light is a wave – a second screen placed beyond the one with the two holes will show a pattern of light and dark stripes, ‘interference fringes’, produced in this way (see Figure 4a).
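  For two very narrow holes, the standard result is that the brightness on the far screen rises and falls as the square of a cosine whose argument depends on the hole separation, the wavelength and the position on the screen. The sketch below, using illustrative numbers (green light, holes a millimetre apart, a screen two metres away – none of them from the text), prints a crude strip chart of the resulting bright and dark fringes.

```python
# A minimal sketch of the interference pattern described above, for two very
# narrow holes. The standard two-source result is that the brightness on the
# far screen varies as cos^2(pi * d * x / (lambda * L)) for small angles,
# where d is the hole separation and L the distance to the screen. The
# numbers below are illustrative assumptions, not values from the text.
import math

wavelength = 500e-9     # metres (green light)
d = 1e-3                # separation of the two holes, metres
L = 2.0                 # distance from the holes to the second screen, metres

def brightness(x):
    """Relative brightness at position x (metres) on the screen, from 0 to 1."""
    phase = math.pi * d * x / (wavelength * L)
    return math.cos(phase) ** 2

# Print a crude strip chart of the fringes across a few millimetres of screen.
for step in range(-12, 13):
    x = step * 0.25e-3   # sample every quarter of a millimetre
    bar = "#" * int(round(brightness(x) * 20))
    print(f"{x*1e3:6.2f} mm |{bar}")
# Bright stripes appear wherever the two ripples arrive in step (peaks
# reinforcing), dark stripes where they arrive out of step and cancel -
# the interference fringes of Figure 4a.
```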

 
