A traditional rendering of an atom showing electrons orbiting a nucleus as planets orbit a sun. The image was originally created in 1904 by a Japanese physicist named Hantaro Nagaoka, but in fact is not accurate. (credit 9.6)
Neutrons don’t influence an atom’s identity, but they do add to its mass. The number of neutrons is generally about the same as the number of protons, but they can vary up and down slightly. Add or subtract a neutron or two and you get an isotope. The terms you hear in reference to dating techniques in archaeology refer to isotopes—carbon-14, for instance, which is an atom of carbon with six protons and eight neutrons (the fourteen being the sum of the two).
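In the standard shorthand (not spelled out in the text), the mass number of an isotope is simply the proton count plus the neutron count, with carbon-14 as the worked case above:

```latex
% Mass number A = protons (Z) + neutrons (N)
A = Z + N, \qquad \text{carbon-14:}\quad 14 = 6 + 8
```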
Neutrons and protons occupy the atom’s nucleus. The nucleus of an atom is tiny—only one-millionth of a billionth of the full volume of the atom—but fantastically dense, since it contains virtually all the atom’s mass. As Cropper has put it, if an atom were expanded to the size of a cathedral, the nucleus would be only about the size of a fly—but a fly many thousands of times heavier than the cathedral. It was this spaciousness—this resounding, unexpected roominess—that had Rutherford scratching his head in 1910.
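The “one-millionth of a billionth” figure can be checked with rough, representative sizes (round numbers assumed here, not taken from the text): an atom is roughly $10^{-10}$ metres across and a nucleus roughly $10^{-15}$ metres, so the volumes scale as the cube of that ratio:

```latex
% Ratio of nuclear volume to atomic volume, using rough length scales
\frac{V_{\text{nucleus}}}{V_{\text{atom}}}
  \approx \left(\frac{10^{-15}\,\text{m}}{10^{-10}\,\text{m}}\right)^{3}
  = \left(10^{-5}\right)^{3}
  = 10^{-15}
  = 10^{-6} \times 10^{-9}
```

That is, one-millionth of one-billionth, which is where the figure in the text comes from.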
It is still a fairly astounding notion to consider that atoms are mostly empty space, and that the solidity we experience all around us is an illusion. When two objects come together in the real world—billiard balls are most often used for illustration—they don’t actually strike each other. “Rather,” as Timothy Ferris explains, “the negatively charged fields of the two balls repel each other…[W]ere it not for their electrical charges they could, like galaxies, pass right through each other unscathed.” When you sit in a chair, you are not actually sitting there, but levitating above it at a height of one angstrom (a hundred millionth of a centimetre), your electrons and its electrons implacably opposed to any closer intimacy.
The picture of an atom that nearly everybody has in mind is of an electron or two flying around a nucleus, like planets orbiting a sun. This image was created in 1904, based on little more than clever guesswork, by a Japanese physicist named Hantaro Nagaoka. It is completely wrong, but durable just the same. As Isaac Asimov liked to note, it inspired generations of science-fiction writers to create stories of worlds-within-worlds, in which atoms become tiny inhabited solar systems or our solar system turns out to be merely a mote in some much larger scheme. Even now CERN, the European Organization for Nuclear Research, uses Nagaoka’s image as a logo on its website. In fact, as physicists were soon to realize, electrons are not like orbiting planets at all, but more like the blades of a spinning fan, managing to fill every bit of space in their orbits simultaneously (but with the crucial difference that the blades of a fan only seem to be everywhere at once; electrons are).
Needless to say, very little of this was understood in 1910 or for many years afterwards. Rutherford’s finding presented some large and immediate problems, not least that no electron should be able to orbit a nucleus without crashing. Conventional electrodynamic theory demanded that a flying electron should run out of energy very quickly—in only an instant or so—and spiral into the nucleus, with disastrous consequences for both. There was also the problem of how protons, with their positive charges, could bundle together inside the nucleus without blowing themselves and the rest of the atom apart. Clearly, whatever was going on down there in the world of the very small was not governed by the laws that applied in the macro world where our expectations reside.
As physicists began to delve into this subatomic realm, they realized that it wasn’t merely different from anything we knew, but different from anything ever imagined. “Because atomic behaviour is so unlike ordinary experience,” Richard Feynman once observed, “it is very difficult to get used to and it appears peculiar and mysterious to everyone, both to the novice and to the experienced physicist.” When Feynman made that comment, physicists had had half a century to adjust to the strangeness of atomic behaviour. So think how it must have felt to Rutherford and his colleagues in the early 1910s when it was all brand new.
A computer graphic of an atom of helium, one of the commonest elements in the universe, showing a nucleus of two protons and two neutrons surrounded by an electron cloud. The atom’s electrons are able to create such a cloud through their weird ability to be “at once everywhere and nowhere.” (credit 9.7)
One of the people working with Rutherford was a mild and affable young Dane named Niels Bohr. In 1913, while puzzling over the structure of the atom, Bohr had an idea so exciting that he postponed his honeymoon to write what became a landmark paper.
Because physicists couldn’t see anything so small as an atom, they had to try to work out its structure from how it behaved when they did things to it, as Rutherford had done by firing alpha particles at foil. Sometimes, not surprisingly, the results of these experiments were puzzling. One puzzle that had been around for a long time was to do with spectrum readings of the wavelengths of hydrogen. These produced patterns showing that hydrogen atoms emitted energy at certain wavelengths but not others. It was rather as if someone under surveillance kept turning up at particular locations but was never observed travelling between them. No-one could understand why this should be.
It was while puzzling over this problem that Bohr was struck by a solution and dashed off his famous paper. Called “On the Constitution of Atoms and Molecules,” the paper explained how electrons could keep from falling into the nucleus by suggesting that they could occupy only certain well-defined orbits. According to the new theory, an electron moving between orbits would disappear from one and reappear instantaneously in another without visiting the space between. This idea—the famous “quantum leap”—is of course utterly strange, but it was too good not to be true. It not only kept electrons from spiralling catastrophically into the nucleus, it also explained hydrogen’s bewildering wavelengths. The electrons only appeared in certain orbits because they only existed in certain orbits. It was a dazzling insight and it won Bohr the 1922 Nobel Prize in physics, the year after Einstein received his.
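Bohr’s quantitative result, which the text only sketches, is that each allowed orbit in hydrogen has a definite energy, so a jump between two orbits can only release a photon of one definite energy, and hence one definite wavelength. In modern notation, and with the usual measured value of 13.6 electron volts assumed:

```latex
% Energy of the n-th allowed orbit in hydrogen (Bohr, 1913)
E_n = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots
% A jump from a higher orbit n_2 to a lower orbit n_1 emits a photon of energy
h\nu = E_{n_2} - E_{n_1}
     = 13.6\ \text{eV}\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right)
```

Only certain combinations of $n_1$ and $n_2$ exist, so only certain wavelengths appear, which is exactly the pattern the spectrum readings had shown.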
The Danish physicist Niels Bohr in 1926, four years after winning a Nobel Prize for working out the mysterious behaviour of electrons. (credit 9.8a)
Meanwhile the tireless Rutherford, now back at Cambridge having succeeded J. J. Thomson as head of the Cavendish Laboratory, came up with a model that explained why the nuclei didn’t blow up. He saw that the positive charge of the protons must be offset by some type of neutralizing particles, which he called neutrons. The idea was simple and appealing, but not easy to prove. Rutherford’s associate, James Chadwick, devoted eleven intensive years to hunting for neutrons before finally succeeding in 1932. He, too, was awarded a Nobel Prize in physics, in 1935. As Boorse and his colleagues point out in their history of the subject, the delay in discovery was probably a very good thing, as mastery of the neutron was essential to the development of the atomic bomb. (Because neutrons have no charge, they aren’t repelled by the electrical fields at the heart of an atom and thus could be fired like tiny torpedoes into an atomic nucleus, setting off the destructive process known as fission.) Had the neutron been isolated in the 1920s, they note, it is “very likely the atomic bomb would have been developed first in Europe, undoubtedly by the Germans.”
J. J. Thomson, Rutherford’s predecessor as director of the Cavendish Laboratory, in an undated photograph. (credit 9.8b)
James Chadwick’s neutron detector, the device he used to prove the existence of the elusive and long-sought particles in 1932. (credit 9.8c)
Left: James Chadwick, protégé of Ernest Rutherford, who spent eleven years searching devotedly for neutrons. In 1935 he was awarded the Nobel Prize in physics for their discovery. (credit 9.9a) Right: Prince Louis-Victor de Broglie, who suggested that an electron should be regarded as a wave and not as a particle, as this minimized anomalies that had long baffled scientists. (credit 9.9b)
As it was, the Europeans had their hands full trying to understand the strange behaviour of the electron. The principal problem they faced was that the electron sometimes behaved like a particle and sometimes like a wave. This impossible duality drove physicists nearly mad. For the next decade all across Europe they furiously thought and scribbled and offered competing hypotheses. In France, Prince Louis-Victor de Broglie, the scion of a ducal family, found that certain anomalies in the behaviour of electrons disappeared when one regarded them as waves. The observation excited the attention of the Austrian Erwin Schrödinger, who made some deft refinements and devised a handy system called wave mechanics. At almost the same time, the German physicist Werner Heisenberg came up with a competing theory called matrix mechanics. This was so mathematically complex that hardly anyone really understood it, including Heisenberg himself (“I do not even know what a matrix is,” Heisenberg despaired to a friend at one point), but it did seem to solve certain problems that Schrödinger’s waves failed to explain.
The upshot is that physics had two theories, based on conflicting premises, that produced the same results. It was an impossible situation.
Finally, in 1926, Heisenberg came up with a celebrated compromise, producing a new discipline that came to be known as quantum mechanics. At the heart of it was Heisenberg’s Uncertainty Principle, which states that the electron is a particle but a particle that can be described in terms of waves. The uncertainty around which the theory is built is that we can know the path an electron takes as it moves through space or we can know where it is at a given instant, but we cannot know both.3 Any attempt to measure one will unavoidably disturb the other. This isn’t a matter of simply needing more precise instruments; it is an immutable property of the universe.
What this means in practice is that you can never predict where an electron will be at any given moment. You can only list its probability of being there. In a sense, as Dennis Overbye has put it, an electron doesn’t exist until it is observed. Or, put slightly differently, until it is observed an electron must be regarded as being “at once everywhere and nowhere.”
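The standard quantitative form of the principle, which Bryson does not quote, says that the spread in a particle’s position and the spread in its momentum can never both be made arbitrarily small:

```latex
% Heisenberg's uncertainty relation (position spread times momentum spread)
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\qquad \hbar \approx 1.05 \times 10^{-34}\ \text{J·s}
```

Because $\hbar$ is so minute, the trade-off never shows up for billiard balls, but for an electron it dominates; all that can be calculated is a probability of finding the electron in a given place, which is what the cloud pictures in this chapter represent.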
If this seems confusing, you may take some comfort in knowing that it was confusing to physicists, too. Overbye notes: “Bohr once commented that a person who wasn’t outraged on first hearing about quantum theory didn’t understand what had been said.” Heisenberg, when asked how one could envision an atom, replied: “Don’t try.”
Left: Werner Heisenberg, whose Uncertainty Principle became the heart of the new discipline of quantum mechanics. (credit 9.10a) Right: Erwin Schrödinger, who published a series of papers in 1926 that founded the field of quantum wave mechanics. His famous hypothetical wave experiment linked quantum theory with philosophy by asserting that two possible outcomes of any situation will simultaneously exist until the actual outcome is observed. (credit 9.10b)
So the atom turned out to be quite unlike the image that most people had created. The electron doesn’t fly around the nucleus like a planet around its sun, but instead takes on the more amorphous aspect of a cloud. The “shell” of an atom isn’t some hard, shiny casing, as illustrations sometimes encourage us to suppose, but simply the outermost of these fuzzy electron clouds. The cloud itself is essentially just a zone of statistical probability marking the area beyond which the electron only very seldom strays. Thus an atom, if you could see it, would look more like a very fuzzy tennis ball than a hard-edged metallic sphere (but not much like either or, indeed, like anything you’ve ever seen; we are, after all, dealing here with a world very different from the one we see around us).
It seemed as if there was no end of strangeness. For the first time, as James Trefil has put it, scientists had encountered “an area of the universe that our brains just aren’t wired to understand.” Or, as Feynman expressed it, “things on a small scale behave nothing like things on a large scale.” As physicists delved deeper, they realized they had found a world not only where electrons could jump from one orbit to another without travelling across any intervening space, but where matter could pop into existence from nothing at all—“provided,” in the words of Alan Lightman of MIT, “it disappears again with sufficient haste.”
Perhaps the most arresting of quantum improbabilities is the idea, known to physicists as entanglement, that certain pairs of subatomic particles, even when separated by the most considerable distances, can each instantly “know” what the other is doing. Particles have a quality known as spin and, according to quantum theory, the moment you determine the spin of one particle, its sister particle, no matter how distant, will immediately begin spinning in the opposite direction and at the same rate.
It is as if, in the words of the science writer Lawrence Joseph, you had two identical pool balls, one in Ohio and the other in Fiji, and that the instant you sent one spinning the other would immediately spin in a contrary direction at precisely the same speed. Remarkably, the phenomenon was proved in 1997 when physicists at the University of Geneva sent photons seven miles in opposite directions and demonstrated that interfering with one provoked an instantaneous response in the other.
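A minimal sketch of what this anti-correlation looks like in standard quantum notation (the notation is an addition here, not Bryson’s): the two particles are created in a single shared state, and measuring one immediately fixes the outcome for the other:

```latex
% Two particles A and B prepared in the spin "singlet" state
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(\,|\uparrow\rangle_A|\downarrow\rangle_B
              \;-\; |\downarrow\rangle_A|\uparrow\rangle_B\,\bigr)
% If A is measured spin-up, B will be found spin-down (and vice versa),
% however far apart the two particles have travelled in the meantime.
```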
Things reached such a pitch that at one conference Bohr remarked of a new theory that the question was not whether it was crazy, but whether it was crazy enough. To illustrate the non-intuitive nature of the quantum world, Schrödinger offered a famous thought experiment in which a hypothetical cat was placed in a box with one atom of a radioactive substance attached to a vial of hydrocyanic acid. If the particle decayed within an hour, it would trigger a mechanism that would break the vial and poison the cat. If not, the cat would live. But we could not know which was the case, so there was no choice, scientifically, but to regard the cat as 100 per cent alive and 100 per cent dead at the same time. This means, as Stephen Hawking has observed with a touch of understandable excitement, that one cannot “predict future events exactly if one cannot even measure the present state of the universe precisely!”
Because of its oddities, many physicists disliked quantum theory, or at least certain aspects of it, and none more so than Einstein. This was more than a little ironic since it was he, in his annus mirabilis of 1905, who had so persuasively explained how photons of light could sometimes behave like particles and sometimes like waves—the notion at the very heart of the new physics. “Quantum theory is very worthy of regard,” he observed politely, but he really didn’t like it. “God doesn’t play dice,” he said.4
Einstein couldn’t bear the notion that God could create a universe in which some things were for ever unknowable. Moreover, the idea of action at a distance—that one particle could instantaneously influence another trillions of miles away—was a stark violation of the Special Theory of Relativity. Nothing could outrace the speed of light and yet here were physicists insisting that, somehow, at the subatomic level, information could. (No-one, incidentally, has ever explained how the particles achieve this feat. Scientists have dealt with this problem, according to the physicist Yakir Aharonov, “by not thinking about it.”)
Above all, there was the problem that quantum physics introduced a level of untidiness that hadn’t previously existed. Suddenly you needed two sets of laws to explain the behaviour of the universe—quantum theory for the world of the very small and relativity for the larger universe beyond. The gravity of relativity theory was brilliant at explaining why planets orbited suns or why galaxies tended to cluster, but turned out to have no influence at all at the particle level. To explain what kept atoms together, other forces were needed and in the 1930s two were discovered: the strong nuclear force and the weak nuclear force. The strong force binds the atomic nucleus together; it’s what allows protons to bed down together in the nucleus. The weak force engages in more miscellaneous tasks, mostly to do with controlling the rates of certain sorts of radioactive decay.
The weak nuclear force, despite its name, is ten billion billion billion times stronger than gravity, and the strong nuclear force is more powerful still—vastly so, in fact—but their influence extends to only the tiniest distances. The grip of the strong force reaches out only to about one-hundred-thousandth of the diameter of an atom. That’s why the nuclei of atoms are so compacted and dense, and why elements with big, crowded nuclei tend to be so unstable: the strong force just can’t hold on to all the protons.
The upshot of all this is that physics ended up with two bodies of laws—one for the world of the very small, one for the universe at large—leading quite separate lives. Einstein disliked that, too. He devoted the rest of his life to searching for a way to tie up these loose ends by finding a Grand Unified Theory, and always failed. From time to time he thought he had it, but it always unravelled on him in the end. As time passed he became increasingly marginalized and even a little pitied. Almost without exception, wrote Snow, “his colleagues thought, and still think, that he wasted the second half of his life.”
Elsewhere, however, real progress was being made. By the mid-1940s scientists had reached a point where they understood the atom at an extremely profound level—as they all too effectively demonstrated in August 1945 by exploding a pair of atomic bombs over Japan.