Figure 6.8. The Coulomb potential well around a proton. The well is deepest where the proton is located.
Being blunt, we might say that the way to do this is to ‘solve Schrödinger’s wave equation for the Coulomb potential well’, which is one way to implement the clock-hopping rules. The details are technical, even for something as simple as a hydrogen atom, but fortunately we do not really learn much more than we have appreciated already. For that reason, we shall jump straight to the answer, and Figure 6.9 shows some of the resulting standing waves for an electron in a hydrogen atom. What is shown is a map of the probability of finding the electron somewhere. The bright regions are where the electron is most likely to be. The real hydrogen atom is, of course, three-dimensional, and these pictures correspond to slices through the centre of the atom. The figure on the top left is the ground state wavefunction, and it tells us that the electron is, in this case, typically to be found around 1 × 10⁻¹⁰ m from the proton. The energies of the standing waves increase from the top left to the bottom right. The scale also changes by a factor of eight from the top left to the bottom right – in fact the bright region covering most of the top-left picture is approximately the same size as the small bright spots in the centre of the two pictures on the right. This means the electron is likely to be farther away from the proton when it is in the higher energy levels (and hence that it is more weakly bound to it). It is clear that these waves are not sine waves, which means they do not correspond to states of definite momentum. But, as we have been at pains to emphasize, they do correspond to states of definite energy.
Figure 6.9. Four of the lowest-energy quantum waves describing the electron in a hydrogen atom. The light regions are where the electron is most likely to be found and the proton is in the centre. The top-right and bottom-left pictures are zoomed out by a factor of 4 relative to the first, and the bottom-right picture is zoomed out by a factor of 8 relative to the first. The first picture is around 3 × 10⁻¹⁰ m across.
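The equations themselves are not needed for what follows, but for reference, the well sketched in Figure 6.8 and the energies of the standing waves labelled by n correspond to the standard textbook results, quoted here without derivation (they are not spelled out in the main text):

```latex
% Coulomb potential well around a proton of charge +e (Figure 6.8):
V(r) = -\frac{e^{2}}{4\pi\varepsilon_{0}\,r}

% Energies of the allowed standing waves in hydrogen, labelled n = 1, 2, 3, ...
% (they creep up towards zero, i.e. become more weakly bound, as n increases):
E_{n} = -\frac{13.6\ \text{eV}}{n^{2}}
```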
The distinctive shape of the standing waves is due to the shape of the well, and some features are worth discussing in a little more detail. The most obvious feature of the well around a proton is that it is spherically symmetric. This means that it looks the same no matter which angle you view it from. To picture this, think of a basketball with no markings on it: it’s a perfect sphere and it will look exactly the same no matter how you rotate it. Perhaps we might dare to think of an electron inside a hydrogen atom as if it were trapped inside a tiny basketball? This is certainly more plausible than saying the electron is trapped in a square well and, remarkably, there is a similarity. Figure 6.10 shows, on the left, two of the lowest-energy standing sound waves that can be produced within a basketball. Again we have taken a slice through the ball, and the shading runs from black to white as the air pressure within the ball increases. On the right are two possible electron standing waves in a hydrogen atom. The pictures are not identical, but they are very similar. So, it is not entirely stupid to imagine that the electron within a hydrogen atom is being trapped within something akin to a tiny basketball. This picture really serves to illustrate the wavelike behaviour of quantum particles, and it hopefully takes some of the mystery out of things: understanding the electron in a hydrogen atom is no more complicated than understanding how the air vibrates inside a basketball.
Figure 6.10. Two of the simplest standing sound waves inside a basketball (left) compared to the corresponding electron waves in a hydrogen atom (right). They are very similar. The top picture for hydrogen is a close-up of the central region of the bottom-left picture in Figure 6.9.
Before we leave the hydrogen atom, we would like to say a little more about the potential created by the proton and how it is that the electron can leap from a higher energy level to a lower one with the emission of a photon. We avoided any discussion of how the proton and the electron communicate with each other, quite legitimately, by introducing the idea of a potential. This simplification allowed us to understand the quantization of energy for trapped particles. But if we want a serious understanding of what’s going on, we should try to explain the underlying mechanism for trapping particles. In the case of a particle moving in an actual box, we might imagine some impenetrable wall that is presumably made up of atoms, and the particle is prevented from passing through the wall by interacting with the atoms within it. A proper understanding of ‘impenetrability’ comes from understanding how the particles interact with each other. Likewise, we said that the proton in a hydrogen atom ‘produces a potential’ in which the electron moves, and we said that the potential traps the electron in a manner analogous to the way a particle is trapped in a box. That too ducks the deeper issue, because clearly the electron interacts with the proton and it is that interaction which dictates how the electron is confined.
In Chapter 10 we’ll see that we need to supplement the quantum rules we’ve articulated so far with some new rules dealing with particle interactions. At the moment, we have very simple rules: particles hop around, carrying imaginary clocks which wind back by clearly specified amounts depending on the size of the hop. All hops are allowed, and so a particle can hop from A to B via an infinity of different routes. Each route delivers its own quantum clock to B and we must add up the clocks to determine a single resultant clock. That clock then tells us the chance of actually finding the particle at B. Adding interactions into the game turns out to be surprisingly simple. We supplement the hopping rules with a new rule, stating that a particle can emit or absorb another particle. If there was one particle before the interaction, then there can be two particles afterwards; if there were two particles before the interaction, then there can be one particle afterwards. Of course, if we are going to work out the maths then we need to be more precise about which particles can fuse together or split apart, and we need to say what happens to the clock that each particle carries when it interacts. This is the subject of Chapter 10, but the implications for atoms should be clear. If there is a rule saying that an electron can interact by emitting a photon, then we have the possibility that the electron in a hydrogen atom can spit out a photon, lose energy and drop down to a lower energy level. It could also absorb a photon, gain energy and leap up to a higher energy level.
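To make the clock-adding rule a little more concrete, here is a minimal sketch in Python of our own devising (it is not a calculation from the book): each clock is represented by a complex number of unit size whose angle records how far the clock has wound on a given route, and the winding angles below are invented purely for illustration.

```python
import cmath

# One invented winding angle (in radians) for each route from A to B.
# In a real calculation these would come from the clock-winding rule
# applied hop by hop; here they are placeholder numbers.
route_windings = [0.0, 0.4, 1.1, 2.8]

# Each route delivers a clock: a unit-length complex number whose angle
# is the winding. Add up the clocks to get the single resultant clock.
resultant = sum(cmath.exp(1j * angle) for angle in route_windings)

# The chance of finding the particle at B is governed by the squared
# length of the resultant clock hand (up to an overall normalization).
print(abs(resultant) ** 2)
```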
The existence of spectral lines indicates that this is what is happening, and this process is ordinarily heavily biased one way. In particular, the electron can spit out a photon and lose energy at any time, but the only way it can gain energy and jump up to a higher energy level is if there is a photon (or some other source of energy) available to collide with it. In a gas of hydrogen, such photons are typically few and far between, and an atom in an excited state is much more likely to emit a photon than absorb one. The net effect is that hydrogen atoms tend to de-excite, by which we mean that emission wins over absorption and, given time, the atom will make its way down to the n = 1 ground state. This is not always the case, because it is possible to arrange to continually excite atoms by feeding them energy in a controlled way. This is the basis of a technology that has become ubiquitous: the laser. The basic idea of a laser is to pump energy into atoms, excite them, and collect the photons that are produced when the electrons drop down in energy. Those photons are very useful for reading data with high precision from the surface of a CD or DVD: quantum mechanics affects our lives in myriad ways.
In this chapter, we have succeeded in explaining the origin of spectral lines using the simple idea of quantized energy levels. It would seem we have a way of thinking about atoms that works. But something is not quite right. We are missing one final piece of the jigsaw, without which we have no chance of explaining the structure of atoms heavier than hydrogen. More prosaically, we will also be unable to explain why we don’t fall through the floor, and that is problematic for our best theory of Nature. The insight we are looking for comes from the work of Austrian physicist Wolfgang Pauli.
7. The Universe in a Pin-head
(and Why We Don’t Fall Through the Floor)
That we do not fall through the floor is something of a mystery. To say the floor is ‘solid’ is not very helpful, not least because Rutherford discovered that atoms are almost entirely empty space. The situation is made even more puzzling because, as far as we can tell, the fundamental particles of Nature are of no size at all.
Dealing with particles ‘of no size’ sounds problematic, and perhaps impossible. But nothing we said in the previous chapters presupposed or required that particles have any physical extent. The notion of truly point-like objects need not be wrong, even if it flies in the face of common sense – if indeed the reader has any common sense left at this stage of a book on quantum theory. It is, of course, entirely possible that a future experiment, perhaps even the Large Hadron Collider, will reveal that electrons and quarks are not infinitesimal points, but for now this is not mandated by experiment and there is no place for ‘size’ in the fundamental equations of particle physics. That’s not to say that point particles don’t have their problems – the idea of a finite charge compressed into an infinitely small volume is a thorny one – but so far the theoretical pitfalls have been circumvented. Perhaps the outstanding problem in fundamental physics, the development of a quantum theory of gravity, hints at finite extent, but the evidence is just not there to force physicists to abandon the idea of elementary particles. To be emphatic: point-like particles are really of no size and to ask ‘What happens if I split an electron in half?’ makes no sense at all – there is no meaning to the idea of ‘half an electron’.
A pleasing bonus of working with elementary fragments of matter that have no size at all is that we don’t have any trouble with the idea that the entire visible Universe was once compressed into a volume the size of a grapefruit, or even a pin-head. Mind-boggling though that may seem – it’s hard enough to imagine compressing a mountain to the size of a pea, never mind a star, a galaxy, or the 350 billion large galaxies in the observable Universe – there is absolutely no reason why this shouldn’t be possible. Indeed, present-day theories of the origins of structure in the Universe deal directly with its properties when it was in such an astronomically dense state. Such theories, whilst outlandish, have a good deal of observational evidence in their favour. In the final chapter we will meet objects with densities, if not at the ‘Universe in a pin-head’ scale, then certainly in ‘mountain in a pea’ territory: white dwarves are objects with the mass of a star squashed to the size of the Earth, and neutron stars have similar masses condensed into perfect, city-sized spheres. These objects are not science fiction; astronomers have observed them and made high-precision measurements of them, and quantum theory will allow us to calculate their properties and compare them with the observational data. As a first step on the road to understanding white dwarves and neutron stars, we will need to address the more prosaic question with which we began this chapter: if the floor is largely empty space, why do we not fall through it?
This question has a long and venerable history, and the answer was not established until surprisingly recently, in 1967, in a paper by Freeman Dyson and Andrew Lenard. They embarked on the quest because a colleague had offered a bottle of vintage champagne to anyone who could prove that matter shouldn’t simply collapse in on itself. Dyson referred to the proof as extraordinarily complicated, difficult and opaque, but what they showed was that matter can only be stable if electrons obey something called the Pauli Exclusion Principle, one of the most fascinating facets of our quantum universe.
We shall begin with some numerology. We saw in the last chapter that the structure of the simplest atom, hydrogen, can be understood by searching for the allowed quantum waves that fit inside the proton’s potential well. This allowed us to understand, at least qualitatively, the distinctive spectrum of the light emitted from hydrogen atoms. If we had had the time, we could have calculated the energy levels in a hydrogen atom. Every undergraduate physics student performs this calculation at some stage in their studies and it works beautifully, agreeing with the experimental data. As far as the last chapter was concerned, the ‘particle in a box’ simplification was good enough because it contains all the key points that we wanted to highlight. However, there is a feature of the full calculation that we shall need, which comes about because the real hydrogen atom is extended in three dimensions. For our particle in a box example, we only considered one dimension and obtained a series of energy levels labelled by a single number that we called n. The lowest energy level was labelled n = 1, the next n = 2 and so on. When the calculation is extended to the full three-dimensional case it turns out, perhaps unsurprisingly, that three numbers are needed to characterize all of the allowed energy levels. These are traditionally labelled n, l and m, and they are referred to as quantum numbers (in this chapter, m is not to be confused with the mass of the particle). The quantum number n is the counterpart of the number n for a particle in a box. It takes on integer values (n = 1, 2, 3, etc.) and the particle energies tend to increase as n increases. The possible values of l and m turn out to be linked to n; l must be smaller than n and it can be zero, e.g. if n = 3 then l can be 0, 1 or 2. m can take on any value ranging from minus l to plus l in integer steps. So if l = 2 then m can be equal to −2, −1, 0, 1 or 2. We are not going to explain where those numbers come from, because it won’t add anything to our understanding. Suffice to say that the four waves in Figure 6.9 have (n,l) = (1,0), (2,0), (2,1) and (3,0) respectively (all have m = 0).1
As we have said, the quantum number n is the main number controlling the values of the allowed energies of the electrons. There is also a small dependence of the allowed energies upon the value of l but it only shows up in very precise measurements of the emitted light. Bohr didn’t consider it when he first calculated the energies of the spectral lines of hydrogen, and his original formula was expressed entirely in terms of n. There is absolutely no dependence of the electron energy upon m unless we put the hydrogen atom inside a magnetic field (in fact m is known as the ‘magnetic quantum number’), but this certainly doesn’t mean that it isn’t important. To see why, let’s get on with our bit of numerology.
If n = 1 then how many different energy levels are there? Applying the rules we stated above, l and m can both only be 0 if n = 1, and so there is just the one energy level.
Now let’s do it for n = 2: l can take on two values, 0 and 1. If l = 0, then m can only be 0, which gives one energy level. If l = 1, then m can be equal to −1, 0 or +1, which is 3 more energy levels, making 4 in total.
For n = 3, l can be 0, 1 or 2. As before, l = 0 and l = 1 give 1 and 3 levels respectively, and for l = 2, m can be equal to −2, −1, 0, +1 or +2, giving 5 more. So in total, there are 1 + 3 + 5 = 9 levels for n = 3. And so on.
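As a quick check of this counting, here is a short Python sketch of our own (not from the book) that tallies the allowed (l, m) combinations for each value of n using the rules stated above.

```python
def count_levels(n):
    # l runs from 0 to n - 1; for each l, m runs from -l to +l in
    # integer steps, which gives 2l + 1 possibilities.
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3):
    print(n, count_levels(n))  # prints 1, 4 and 9 (in general, n squared)
```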
Remember those numbers for the first three values of n: 1, 4 and 9. Now take a look at Figure 7.1, which shows the first four rows of the periodic table of the chemical elements, and count how many elements there are in each row. Divide that number by 2, and you’ll get 1, 4, 4 and 9. The significance of all this will soon be revealed.
Figure 7.1. The first four rows of the periodic table.
Credit for arranging the chemical elements in this way is usually given to the Russian chemist Dmitri Mendeleev, who presented it to the Russian Chemical Society on 6 March 1869, which was a good few years before anyone had worked out how to count the allowed energy levels in a hydrogen atom. Mendeleev arranged the elements in order of their atomic weights, which in modern language corresponds to the number of protons and neutrons inside the atomic nucleus, although of course he didn’t know that at the time either. The ordering of the elements actually corresponds to the number of protons inside the nucleus (the number of neutrons is irrelevant) but for the lighter elements this makes no difference, which is why Mendeleev got it right. He chose to arrange the elements in rows and columns because he noticed that certain elements had very similar chemical properties, even though they had different atomic weights; the vertical columns group together such elements – helium, neon, argon and krypton on the far right of the table are all unreactive gases. Mendeleev didn’t just get the pattern right, he also predicted the existence of new elements to fill gaps in his table: elements 31 and 32 (gallium and germanium) were discovered in 1875 and 1886. These discoveries confirmed that Mendeleev had uncovered something deep about the structure of atoms, but nobody knew what.
What is striking is that there are two elements in row one, eight in rows two and three and eighteen in row four, and those numbers are exactly twice the numbers we just worked out by counting the allowed energy levels in hydrogen. Why is this?
As we have already mentioned, the elements in the periodic table are ordered from left to right in a row by the number of protons in the nucleus, which is the same as the number of electrons they contain. Remember that all atoms are electrically neutral – the positive electric charges of the protons are exactly balanced by the negative charges of the electrons. There is clearly something interesting going on that relates the chemical properties of the elements to the allowed energies that the electrons can have when they orbit around a nucleus.
We can imagine building up heavier atoms from lighter ones by adding protons, neutrons and electrons one at a time, bearing in mind that whenever we add an extra proton into the nucleus we should add an extra electron into one of the energy levels. The exercise in numerology will generate the pattern we see in the periodic table if we simply assert that each energy level can contain two and only two electrons. Let’s see how this works.
Hydrogen has only one electron, so that would slot into the n = 1 level. Helium has two electrons, which would both fit into the n = 1 level. Now the n = 1 level is full up. We must add a third electron to make lithium, but it will have to go into the n = 2 level. The next seven electrons, corresponding to the next seven elements (beryllium, boron, carbon, nitrogen, oxygen, fluorine and neon), can also sit in levels with n = 2, because there are four such levels available, each able to hold two electrons, corresponding to l = 0, and to l = 1 with m = −1, 0 or +1. In that way we can account for all of the elements up to neon. With neon, the n = 2 levels are all full and we must move to n = 3, starting with sodium. The next eight electrons, one by one, start to fill up the n = 3 levels; first the electrons go into l = 0, and then into l = 1. That accounts for all the elements in the third row, up to argon. The fourth row of the table can be explained if we assume that it contains all of the remaining n = 3 electrons (i.e. the ten electrons with l = 2) and the n = 4 electrons with l = 0 and 1 (which makes eight electrons), making the magic number of eighteen electrons in total. We’ve sketched how the electrons fill up the energy levels for the heaviest element in our table, krypton (which has thirty-six electrons), in Figure 7.2.
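The bookkeeping of the last two paragraphs can be summarized in one more short Python sketch (again ours, with the sub-levels grouped row by row exactly as described above, and with two and only two electrons allowed per level).

```python
# (n, l) sub-levels grouped by row of the periodic table, following the
# filling described in the text: row four takes the leftover n = 3, l = 2
# levels together with the n = 4 levels having l = 0 and l = 1.
rows = [
    [(1, 0)],                  # row 1: hydrogen and helium
    [(2, 0), (2, 1)],          # row 2: lithium to neon
    [(3, 0), (3, 1)],          # row 3: sodium to argon
    [(4, 0), (3, 2), (4, 1)],  # row 4: potassium to krypton
]

for i, row in enumerate(rows, start=1):
    # Each (n, l) contains 2l + 1 levels (one per value of m), and each
    # level holds two electrons, so it contributes 2 * (2l + 1) elements.
    print(f"row {i}: {sum(2 * (2 * l + 1) for _, l in row)} elements")
    # prints 2, 8, 8 and 18 - twice the 1, 4, 4 and 9 counted earlier
```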