What this means in practice is that you can never predict where an electron will be at any given moment. You can only list its probability of being there. In a sense, as Dennis Overbye has put it, an electron doesn't exist until it is observed. Or, put slightly differently, until it is observed an electron must be regarded as being "at once everywhere and nowhere."
If this seems confusing, you may take some comfort in knowing that it was confusing to physicists, too. Overbye notes: "Bohr once commented that a person who wasn't outraged on first hearing about quantum theory didn't understand what had been said." Heisenberg, when asked how one could envision an atom, replied: "Don't try."
So the atom turned out to be quite unlike the image that most people had created. The electron doesn't fly around the nucleus like a planet around its sun, but instead takes on the more amorphous aspect of a cloud. The "shell" of an atom isn't some hard shiny casing, as illustrations sometimes encourage us to suppose, but simply the outermost of these fuzzy electron clouds. The cloud itself is essentially just a zone of statistical probability marking the area beyond which the electron only very seldom strays. Thus an atom, if you could see it, would look more like a very fuzzy tennis ball than a hard-edged metallic sphere (but not much like either or, indeed, like anything you've ever seen; we are, after all, dealing here with a world very different from the one we see around us).
It seemed as if there was no end of strangeness. For the first time, as James Trefil has put it, scientists had encountered "an area of the universe that our brains just aren't wired to understand." Or as Feynman expressed it, "things on a small scale behave nothing like things on a large scale." As physicists delved deeper, they realized they had found a world where not only could electrons jump from one orbit to another without traveling across any intervening space, but matter could pop into existence from nothing at all--"provided," in the words of Alan Lightman of MIT, "it disappears again with sufficient haste."
Perhaps the most arresting of quantum improbabilities is the idea, arising from Wolfgang Pauli's Exclusion Principle of 1925, that the subatomic particles in certain pairs, even when separated by the most considerable distances, can each instantly "know" what the other is doing. Particles have a quality known as spin and, according to quantum theory, the moment you determine the spin of one particle, its sister particle, no matter how far away, will immediately begin spinning in the opposite direction and at the same rate.
It is as if, in the words of the science writer Lawrence Joseph, you had two identical pool balls, one in Ohio and the other in Fiji, and the instant you sent one spinning the other would immediately spin in a contrary direction at precisely the same speed. Remarkably, the phenomenon was proved in 1997 when physicists at the University of Geneva sent photons seven miles in opposite directions and demonstrated that interfering with one provoked an instantaneous response in the other.
Things reached such a pitch that at one conference Bohr remarked of a new theory that the question was not whether it was crazy, but whether it was crazy enough. To illustrate the nonintuitive nature of the quantum world, Schrödinger offered a famous thought experiment in which a hypothetical cat was placed in a box with one atom of a radioactive substance attached to a vial of hydrocyanic acid. If the particle degraded within an hour, it would trigger a mechanism that would break the vial and poison the cat. If not, the cat would live. But we could not know which was the case, so there was no choice, scientifically, but to regard the cat as 100 percent alive and 100 percent dead at the same time. This means, as Stephen Hawking has observed with a touch of understandable excitement, that one cannot "predict future events exactly if one cannot even measure the present state of the universe precisely!"
Because of its oddities, many physicists disliked quantum theory, or at least certain aspects of it, and none more so than Einstein. This was more than a little ironic since it was he, in his annus mirabilis of 1905, who had so persuasively explained how photons of light could sometimes behave like particles and sometimes like waves--the notion at the very heart of the new physics. "Quantum theory is very worthy of regard," he observed politely, but he really didn't like it. "God doesn't play dice," he said.
Einstein couldn't bear the notion that God could create a universe in which some things were forever unknowable. Moreover, the idea of action at a distance--that one particle could instantaneously influence another trillions of miles away--was a stark violation of the special theory of relativity. This expressly decreed that nothing could outrace the speed of light and yet here were physicists insisting that, somehow, at the subatomic level, information could. (No one, incidentally, has ever explained how the particles achieve this feat. Scientists have dealt with this problem, according to the physicist Yakir Aharonov, "by not thinking about it.")
Above all, there was the problem that quantum physics introduced a level of untidiness that hadn't previously existed. Suddenly you needed two sets of laws to explain the behavior of the universe--quantum theory for the world of the very small and relativity for the larger universe beyond. The gravity of relativity theory was brilliant at explaining why planets orbited suns or why galaxies tended to cluster, but turned out to have no influence at all at the particle level. To explain what kept atoms together, other forces were needed, and in the 1930s two were discovered: the strong nuclear force and the weak nuclear force. The strong force binds atomic nuclei together; it's what allows protons to bed down together in the nucleus. The weak force engages in more miscellaneous tasks, mostly to do with controlling the rates of certain sorts of radioactive decay.
The weak nuclear force, despite its name, is ten billion billion billion times stronger than gravity, and the strong nuclear force is more powerful still--vastly so, in fact--but their influence extends to only the tiniest distances. The grip of the strong force reaches out only to about 1/100,000 of the diameter of an atom. That's why the nuclei of atoms are so compacted and dense and why elements with big, crowded nuclei tend to be so unstable: the strong force just can't hold on to all the protons.
The upshot of all this is that physics ended up with two bodies of laws--one for the world of the very small, one for the universe at large--leading quite separate lives. Einstein disliked that, too. He devoted the rest of his life to searching for a way to tie up these loose ends by finding a grand unified theory, and always failed. From time to time he thought he had it, but it always unraveled on him in the end. As time passed he became increasingly marginalized and even a little pitied. Almost without exception, wrote C. P. Snow, "his colleagues thought, and still think, that he wasted the second half of his life."
Elsewhere, however, real progress was being made. By the mid-1940s scientists had reached a point where they understood the atom at an extremely profound level--as they all too effectively demonstrated in August 1945 by exploding a pair of atomic bombs over Japan.
By this point physicists could be excused for thinking that they had just about conquered the atom. In fact, everything in particle physics was about to get a whole lot more complicated. But before we take up that slightly exhausting story, we must bring another strand of our history up to date by considering an important and salutary tale of avarice, deceit, bad science, several needless deaths, and the final determination of the age of the Earth.
10 GETTING THE LEAD OUT
IN THE LATE 1940s, a graduate student at the University of Chicago named Clair Patterson (who was, first name notwithstanding, an Iowa farm boy by origin) was using a new method of lead isotope measurement to try to get a definitive age for the Earth at last. Unfortunately all his samples came up contaminated--usually wildly so. Most contained something like two hundred times the levels of lead that would normally be expected to occur. Many years would pass before Patterson realized that the reason for this lay with a regrettable Ohio inventor named Thomas Midgley, Jr.
Midgley was an engineer by training, and the world would no doubt have been a safer place if he had stayed so. Instead, he developed an interest in the industrial applications of chemistry. In 1921, while working for the General Motors Research Corporation in Dayton, Ohio, he investigated a compound called tetraethyl lead (also known, confusingly, as lead tetraethyl), and discovered that it significantly reduced the juddering condition known as engine knock.
Even though lead was widely known to be dangerous, by the early years of the twentieth century it could be found in all manner of consumer products. Food came in cans sealed with lead solder. Water was often stored in lead-lined tanks. It was sprayed onto fruit as a pesticide in the form of lead arsenate. It even came as part of the packaging of toothpaste tubes. Hardly a product existed that didn't bring a little lead into consumers' lives. However, nothing gave it a greater and more lasting intimacy than its addition to gasoline.
Lead is a neurotoxin. Get too much of it and you can irreparably damage the brain and central nervous system. Among the many symptoms associated with overexposure are blindness, insomnia, kidney failure, hearing loss, cancer, palsies, and convulsions. In its most acute form it produces abrupt and terrifying hallucinations, disturbing to victims and onlookers alike, which generally then give way to coma and death. You really don't want to get too much lead into your system.
On the other hand, lead was easy to extract and work, and almost embarrassingly profitable to produce industrially--and tetraethyl lead did indubitably stop engines from knocking. So in 1923 three of America's largest corporations, General Motors, Du Pont, and Standard Oil of New Jersey, formed a joint enterprise called the Ethyl Gasoline Corporation (later shortened to simply Ethyl Corporation) with a view to making as much tetraethyl lead as the world was willing to buy, and that proved to be a very great deal. They called their additive "ethyl" because it sounded friendlier and less toxic than "lead" and introduced it for public consumption (in more ways than most people realized) on February 1, 1923.
Almost at once production workers began to exhibit the staggered gait and confused faculties that mark the recently poisoned. Also almost at once, the Ethyl Corporation embarked on a policy of calm but unyielding denial that would serve it well for decades. As Sharon Bertsch McGrayne notes in her absorbing history of industrial chemistry, Prometheans in the Lab, when employees at one plant developed irreversible delusions, a spokesman blandly informed reporters: "These men probably went insane because they worked too hard." Altogether at least fifteen workers died in the early days of production of leaded gasoline, and untold numbers of others became ill, often violently so; the exact numbers are unknown because the company nearly always managed to hush up news of embarrassing leakages, spills, and poisonings. At times, however, suppressing the news became impossible, most notably in 1924 when in a matter of days five production workers died and thirty-five more were turned into permanent staggering wrecks at a single ill-ventilated facility.
As rumors circulated about the dangers of the new product, ethyl's ebullient inventor, Thomas Midgley, decided to hold a demonstration for reporters to allay their concerns. As he chatted away about the company's commitment to safety, he poured tetraethyl lead over his hands, then held a beaker of it to his nose for sixty seconds, claiming all the while that he could repeat the procedure daily without harm. In fact, Midgley knew only too well the perils of lead poisoning: he had himself been made seriously ill from overexposure a few months earlier and now, except when reassuring journalists, never went near the stuff if he could help it.
Buoyed by the success of leaded gasoline, Midgley now turned to another technological problem of the age. Refrigerators in the 1920s were often appallingly risky because they used dangerous gases that sometimes leaked. One leak from a refrigerator at a hospital in Cleveland, Ohio, in 1929 killed more than a hundred people. Midgley set out to create a gas that was stable, nonflammable, noncorrosive, and safe to breathe. With an instinct for the regrettable that was almost uncanny, he invented chlorofluorocarbons, or CFCs.
Seldom has an industrial product been more swiftly or unfortunately embraced. CFCs went into production in the early 1930s and found a thousand applications in everything from car air conditioners to deodorant sprays before it was noticed, half a century later, that they were devouring the ozone in the stratosphere. As you will be aware, this was not a good thing.
Ozone is a form of oxygen in which each molecule bears three atoms of oxygen instead of two. It is a bit of a chemical oddity in that at ground level it is a pollutant, while way up in the stratosphere it is beneficial, since it soaks up dangerous ultraviolet radiation. Beneficial ozone is not terribly abundant, however. If it were distributed evenly throughout the stratosphere, it would form a layer just one eighth of an inch or so thick. That is why it is so easily disturbed, and why such disturbances don't take long to become critical.
Chlorofluorocarbons are also not very abundant--they constitute only about one part per billion of the atmosphere as a whole--but they are extravagantly destructive. One pound of CFCs can capture and annihilate seventy thousand pounds of atmospheric ozone. CFCs also hang around for a long time--about a century on average--wreaking havoc all the while. They are also great heat sponges. A single CFC molecule is about ten thousand times more efficient at exacerbating greenhouse effects than a molecule of carbon dioxide--and carbon dioxide is of course no slouch itself as a greenhouse gas. In short, chlorofluorocarbons may ultimately prove to be just about the worst invention of the twentieth century.
Midgley never knew this because he died long before anyone realized how destructive CFCs were. His death was itself memorably unusual. After becoming crippled with polio, Midgley invented a contraption involving a series of motorized pulleys that automatically raised or turned him in bed. In 1944, he became entangled in the cords as the machine went into action and was strangled.
If you were interested in finding out the ages of things, the University of Chicago in the 1940s was the place to be. Willard Libby was in the process of inventing radiocarbon dating, allowing scientists to get an accurate reading of the age of bones and other organic remains, something they had never been able to do before. Up to this time, the oldest reliable dates went back no further than the First Dynasty in Egypt from about 3000 B.C. No one could confidently say, for instance, when the last ice sheets had retreated or at what time in the past the Cro-Magnon people had decorated the caves of Lascaux in France.
Libby's idea was so useful that he would be awarded a Nobel Prize for it in 1960. It was based on the realization that all living things have within them an isotope of carbon called carbon-14, which begins to decay at a measurable rate the instant they die. Carbon-14 has a half-life--that is, the time it takes for half of any sample to disappear--of about 5,600 years, so by working out how much a given sample of carbon had decayed, Libby could get a good fix on the age of an object--though only up to a point. After eight half-lives, only 1/256 of the original radioactive carbon remains, which is too little to make a reliable measurement, so radiocarbon dating works only for objects up to forty thousand or so years old.
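The arithmetic behind that 1/256 figure is worth spelling out. Writing the starting amount of carbon-14 as N_0 and the half-life as T_1/2 (conventional shorthand here, not Libby's own notation), the surviving fraction after a time t follows the standard exponential-decay relation:

\[
\frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/T_{1/2}}, \qquad \left(\frac{1}{2}\right)^{8} = \frac{1}{256}
\]

Eight half-lives of roughly 5,600 years each thus bring a sample down to less than half a percent of its original carbon-14, which is where the practical ceiling of forty thousand or so years comes from.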
Curiously, just as the technique was becoming widespread, certain flaws within it became apparent. To begin with, it was discovered that one of the basic components of Libby's formula, known as the decay constant, was off by about 3 percent. By this time, however, thousands of measurements had been taken throughout the world. Rather than restate every one, scientists decided to keep the inaccurate constant. "Thus," Tim Flannery notes, "every raw radiocarbon date you read today is given as too young by around 3 percent." The problems didn't quite stop there. It was also quickly discovered that carbon-14 samples can be easily contaminated with carbon from other sources--a tiny scrap of vegetable matter, for instance, that has been collected with the sample and not noticed. For younger samples--those under twenty thousand years or so--slight contamination does not always matter so much, but for older samples it can be a serious problem because so few remaining atoms are being counted. In the first instance, to borrow from Flannery, it is like miscounting by a dollar when counting to a thousand; in the second it is more like miscounting by a dollar when you have only two dollars to count.
Libby's method was also based on the assumption that the amount of carbon-14 in the atmosphere, and the rate at which it has been absorbed by living things, has been consistent throughout history. In fact it hasn't been. We now know that the volume of atmospheric carbon-14 varies depending on how well or not Earth's magnetism is deflecting cosmic rays, and that that can vary significantly over time. This means that some carbon-14 dates are more dubious than others. This is particularly so with dates just around the time that people first came to the Americas, which is one of the reasons the matter is so perennially in dispute.
Finally, and perhaps a little unexpectedly, readings can be thrown out by seemingly unrelated external factors--such as the diets of those whose bones are being tested. One recent case involved the long-running debate over whether syphilis originated in the New World or the Old. Archeologists in Hull, in the north of England, found that monks in a monastery graveyard had suffered from syphilis, but the initial conclusion that the monks had done so before Columbus's voyage was cast into doubt by the realization that they had eaten a lot of fish, which could make their bones appear to be older than in fact they were. The monks may well have had syphilis, but how it got to them, and when, remain tantalizingly unresolved.
Because of the accumulated shortcomings of carbon-14, scientists devised other methods of dating ancient materials, among them thermoluminescence, which measures electrons trapped in clays, and electron spin resonance, which involves bombarding a sample with electromagnetic waves and measuring the vibrations of the electrons. But even the best of these could not date anything older than about 200,000 years, and they couldn't date inorganic materials like rocks at all, which is of course what you need if you wish to determine the age of your planet.