
Sam Kean


The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World from the Periodic Table of the Elements


  Unfortunately for science, Glaser later said, the beer experiments flopped. Nor did lab partners appreciate the stink of vaporized ale. Undaunted, Glaser refined his experiments, and his colleague Luis Alvarez—of dinosaur-killing-asteroid fame—eventually determined the most sensible liquid to use was in fact hydrogen. Liquid hydrogen boils at −423°F, so even minute amounts of heat will make a froth. As the simplest element, hydrogen also avoided the messy complications that other elements (or beer) might cause when particles collided. Glaser’s revamped “bubble chamber” provided so many insights so quickly that in 1960 he appeared among the fifteen “Men of the Year” in Time magazine with Linus Pauling, William Shockley, and Emilio Segrè. He also won the Nobel Prize at the disgustingly young age of thirty-three. Having moved on to Berkeley by then, he borrowed Edwin McMillan and Segrè’s white vest for the ceremony.

  Bubbles aren’t usually counted as an essential scientific tool. Despite—or maybe because of—their ubiquity in nature and the ease of producing them, they were dismissed as toys for centuries. But when physics emerged as the dominant science in the 1900s, physicists suddenly found a lot of work for these toys in probing the most basic structures in the universe. Now that biology is ascendant, biologists use bubbles to study the development of cells, the most complex structures in the universe. Bubbles have proved to be wonderful natural laboratories for experiments in all fields, and the recent history of science can be read in parallel with the study of these “spheres of splendor.”

  One element that readily forms bubbles—as well as foam, a state where bubbles overlap and lose their spherical shape—is calcium. Cells are to tissues what bubbles are to foams, and the best example of a foam structure in the body (besides saliva) is spongy bone. We usually think of foams as no sturdier than shaving cream, but when certain air-infused substances dry out or cool down, they harden and stiffen, like durable versions of bath suds. NASA actually uses special foams to protect space shuttles on reentry, and calcium-enriched bones are similarly strong yet light. What’s more, sculptors for millennia have carved tombstones and obelisks and false gods from pliable yet sturdy calcium rocks such as marble and limestone. These rocks form when tiny sea creatures die and their calcium-rich shells sink and pile up on the ocean floor. Like bones, shells have natural pores, but calcium’s chemistry enhances their supple strength. Most natural water, such as rainwater, is slightly acidic, while calcium’s minerals are slightly basic. When water leaks into calcium’s pores, the two react like a mini grade-school volcano to release small amounts of carbon dioxide, which softens up the rock. On a large and geological scale, reactions between rainwater and calcium form the huge cavities we know as caves.

  Beyond anatomy and art, calcium bubbles have shaped world economics and empires. The many calcium-rich coves along the southern coast of England aren’t natural, but originated as limestone quarries around 55 BC, when the limestone-loving Romans arrived. Scouts sent out by Julius Caesar spotted an attractive, cream-colored limestone near modern-day Beer, England, and began chipping it out to adorn Roman facades. English limestone from Beer later was used in building Buckingham Palace, the Tower of London, and Westminster Abbey, and all that missing stone left gaping caverns in the seaside cliffs. By 1800, a few local boys who’d grown up sailing ships and playing tag in the labyrinths decided to marry their childhood pastimes by becoming smugglers, using the calcium coves to conceal the French brandy, fiddles, tobacco, and silk they ran over from Normandy in fast cutters.

  The smugglers (or, as they styled themselves, free traders) thrived because of the hateful taxes the English government levied on French goods to spite Napoleon, and the scarcity of the taxed items created, inevitably, a demand bubble. Among many other things, the inability of His Majesty’s expensive coast guard to crack down on smuggling convinced Parliament to liberalize trade laws in the 1840s—which brought about real free trade, and with it the economic prosperity that allowed Great Britain to expand its never-darkening empire.

  Given all this history, you’d expect a long tradition of bubble science, but no. Notable minds like Benjamin Franklin (who discovered why oil calms frothy water) and Robert Boyle (who experimented on and even liked to taste the fresh, frothy urine in his chamber pot) did dabble in bubbles. And primitive physiologists sometimes did things such as bubbling gases into the blood of half-living, half-dissected dogs. But scientists mostly ignored bubbles themselves, their structure and form, and left the study of bubbles to fields that they scorned as intellectually inferior—what might be called “intuitive sciences.” Intuitive sciences aren’t pathological, merely fields such as horse breeding or gardening that investigate natural phenomena but that long relied more on hunches and almanacs than controlled experiments. The intuitive science that picked up bubbles research was cooking. Bakers and brewers had long used yeasts—primitive bubble-making machines—to leaven bread and carbonate beer. But eighteenth-century haute cuisine chefs in Europe learned to whip egg whites into vast, fluffy foams and began to experiment with the meringues, porous cheeses, whipped creams, and cappuccinos we love today.

  Still, chefs and chemists tended to distrust one another, chemists seeing cooks as undisciplined and unscientific, cooks seeing chemists as sterile killjoys. Only around 1900 did bubble science coalesce into a respectable field, though the men responsible, Ernest Rutherford and Lord Kelvin, had only dim ideas of what their work would lead to. Rutherford, in fact, was mostly interested in plumbing what at the time were the murky depths of the periodic table.

  Shortly after moving from New Zealand to Cambridge University in 1895, Rutherford devoted himself to radioactivity, the genetics or nanotechnology of the day. Natural vigorousness led Rutherford to experimental science, for he wasn’t exactly a clean-fingernails guy. Having grown up hunting quail and digging potatoes on a family farm, he recalled feeling like “an ass in a lion’s skin” among the robed dons of Cambridge. He wore a walrus mustache, toted radioactive samples around in his pockets, and smoked foul cigars and pipes. He was given to blurting out both weird euphemisms—perhaps his devout Christian wife discouraged him from swearing—and the bluest curses in the lab, because he couldn’t help damning his equipment to hell when it didn’t behave. Perhaps to make up for his cursing, he also sang, loudly and quite off-key, “Onward, Christian Soldiers” as he marched around his dim lab. Despite that ogre-like description, Rutherford’s outstanding scientific trait was elegance. Nobody was better, possibly in the history of science, at coaxing nature’s secrets out of physical apparatus. And there’s no better example than the elegance he used to solve the mystery of how one element can transform into another.

  After moving from Cambridge to Montreal, Rutherford grew interested in how radioactive substances contaminate the air around them with more radioactivity. To investigate this, Rutherford built on the work of Marie Curie, but the New Zealand hick proved cagier than his more celebrated female contemporary. According to Curie (among others), radioactive elements leaked a sort of gas of “pure radioactivity” that charged the air, just as lightbulbs flood the air with light. Rutherford suspected that “pure radioactivity” was actually an unknown gaseous element with its own radioactive properties. As a result, whereas Curie spent months boiling down thousands of pounds of black, bubbling pitchblende to get microscopic samples of radium and polonium, Rutherford sensed a shortcut and let nature work for him. He simply let an active sample decay in a closed container, then drew off bubbles of the gas into an inverted flask, which gave him all the radioactive material he needed. Rutherford and his collaborator, Frederick Soddy, quickly proved the radioactive bubbles were in fact a new element, radon. And because the sample beneath the beaker shrank in proportion as the radon sample grew in volume, they realized that one element actually mutated into another.

  Not only did Rutherford and Soddy find a new element, they discovered novel rules for jumping around on the periodic table. Elements could suddenly move laterally as they decayed and skip across spaces. This was thrilling but blasphemous. Science had finally discredited and excommunicated the chemical magicians who’d claimed to turn lead into gold, and here Rutherford and Soddy were opening the gate back up. When Soddy finally let himself believe what was happening and burst out, “Rutherford, this is transmutation!” Rutherford had a fit.

  “For Mike’s sake, Soddy,” he boomed. “Don’t call it transmutation. They’ll have our heads off as alchemists!”

  The radon sample soon midwifed even more startling science. Rutherford had arbitrarily named the little bits that flew off radioactive atoms alpha particles. (He also discovered beta particles.) Based on the weight differences between generations of decaying elements, Rutherford suspected that alphas were actually helium atoms breaking off and escaping like bubbles through a boiling liquid. If this was true, elements could do more than hop two spaces on the periodic table like pieces on a typical board game; if uranium emitted helium, elements were jumping from one side of the table to the other like a lucky (or disastrous) move in Snakes & Ladders.

  To test this idea, Rutherford had his physics department’s glassblowers blow two bulbs. One was soap-bubble thin, and he pumped radon into it. The other was thicker and wider, and it surrounded the first. The alpha particles had enough energy to tunnel through the first glass shell but not the second, so they became trapped in the vacuum cavity between them. After a few days, this wasn’t much of an experiment, since the trapped alpha particles were colorless and didn’t really do anything. But then Rutherford ran a battery current through the cavity. If you’ve ever traveled to Tokyo or New York, you know what happened. Like all noble gases, helium glows when excited by electricity, and Rutherford’s mystery particles began glowing helium’s characteristic green and yellow. Rutherford basically proved that alpha particles were escaped helium atoms with an early “neon” light. It was a perfect example of his elegance, and also his belief in dramatic science.

  With typical flair, Rutherford announced the alpha-helium connection during his acceptance speech for the 1908 Nobel Prize. (In addition to winning the prize himself, Rutherford mentored and hand-trained eleven future prizewinners, the last in 1978, more than four decades after Rutherford died. It was perhaps the most impressive feat of progeny since Genghis Khan fathered hundreds of children seven centuries earlier.) His findings intoxicated the Nobel audience. Nevertheless, the most immediate and practical application of Rutherford’s helium work probably escaped many in Stockholm. As a consummate experimentalist, however, Rutherford knew that truly great research didn’t just support or disprove a given theory, but fathered more experiments. In particular, the alpha-helium experiment allowed him to pick the scab off the old theological-scientific debate about the true age of the earth.

  The first semi-defensible guess for that age came in 1650, when Irish archbishop James Ussher worked backward from “data” such as the begats list in the Bible (“… and Serug lived thirty years, and begat Nahor… and Nahor lived nine and twenty years, and begat Terah,” etc.) and calculated that God had finally gotten around to creating the earth on October 23, 4004 BC. Ussher did the best he could with the available evidence, but within decades that date was proved laughably late by most every scientific field. Physicists could even pin precise numbers on their guesses by using the equations of thermodynamics. Just as hot coffee cools down in a freezer, physicists knew that the earth constantly loses heat to space, which is cold. By measuring the rate of lost heat and extrapolating backward to when every rock on earth was molten, they could estimate the earth’s date of origin. The premier scientist of the nineteenth century, William Thomson, known as Lord Kelvin, spent decades on this problem and in the late 1800s announced that the earth had been born twenty million years before.
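
  The arithmetic behind that extrapolation can be sketched in a few lines. In the simple conduction model Kelvin leaned on, a body that starts uniformly molten develops a surface temperature gradient that shrinks as the square root of elapsed time, so a measured gradient plus a guess at the starting temperature yields an age. The sketch below is only an illustration, with stand-in values (not Kelvin's own figures) for the starting temperature, the thermal diffusivity of rock, and the present-day gradient:

```python
import math

# Illustrative sketch of Kelvin's cooling-earth estimate (assumed stand-in
# values, not Kelvin's own figures). A body that starts uniformly molten at T0
# and cools by conduction develops a surface gradient G = T0 / sqrt(pi*kappa*t),
# so the elapsed time is t = T0**2 / (pi * kappa * G**2).

T0 = 3900.0            # assumed initial molten temperature, deg C
kappa = 1.2e-6         # assumed thermal diffusivity of rock, m^2 per second
G = 1.0 / 27.0         # assumed present-day gradient, deg C per meter (~1 deg F per 50 ft)

SECONDS_PER_YEAR = 3.156e7

t_seconds = T0**2 / (math.pi * kappa * G**2)
t_years = t_seconds / SECONDS_PER_YEAR

print(f"Kelvin-style age estimate: {t_years / 1e6:.0f} million years")
# Roughly 90 million years with these inputs; stingier assumptions about the
# starting temperature and gradient pull the figure down toward the twenty
# million Kelvin eventually settled on.
```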

  It was a triumph of human reasoning—and about as dead wrong as Ussher’s guess. By 1900, Rutherford among others recognized that however far physics had outpaced other sciences in prestige and glamour (Rutherford himself was fond of saying, “In science, there is only physics; all the rest is stamp collecting”—words he later had to eat when he won a Nobel Prize in Chemistry), in this case the physics didn’t feel right. Charles Darwin argued persuasively that humans could not have evolved from dumb bacteria in just twenty million years, and followers of Scottish geologist James Hutton argued that no mountains or canyons could have formed in so short a span. But no one could unravel Lord Kelvin’s formidable calculations until Rutherford started poking around in uranium rocks for bubbles of helium.

  Inside certain rocks, uranium atoms spit out alpha particles (which have two protons) and transmute into element ninety, thorium. Thorium then begets radium by spitting out another alpha particle. Radium begets radon with yet another, and radon begets polonium, and polonium begets stable lead. This chain of decay was well known. But in a stroke of genius akin to Glaser’s, Rutherford realized that those alpha particles, after being ejected, form small bubbles of helium inside rocks. The key insight was that helium never reacts with or is attracted to other elements. So unlike carbon dioxide in limestone, helium shouldn’t normally be inside rocks. Any helium that is inside rocks was therefore fathered by radioactive decay. Lots of helium inside a rock means that it’s old, while scant traces indicate it’s a youngster.
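
  In modern terms, the helium clock works out to a one-line formula. A uranium-238 atom sheds eight alpha particles on its way down to stable lead, so the ratio of trapped helium to surviving uranium fixes the elapsed time. The sketch below is only illustrative, using the currently accepted uranium-238 half-life rather than anything Rutherford had in hand:

```python
import math

# Illustrative sketch of the uranium-helium clock (assumed modern values, not
# Rutherford's). Each uranium-238 atom that decays all the way to stable lead
# sheds eight alpha particles, i.e. eight helium atoms, so the trapped helium
# per surviving uranium atom grows with age:
#     He / U = 8 * (exp(lambda * t) - 1)   =>   t = ln(1 + (He/U) / 8) / lambda

U238_HALF_LIFE = 4.47e9                        # years
DECAY_CONSTANT = math.log(2) / U238_HALF_LIFE  # per year

def helium_age(he_per_u: float) -> float:
    """Age in years implied by a measured helium-to-uranium atomic ratio."""
    return math.log(1.0 + he_per_u / 8.0) / DECAY_CONSTANT

# One helium atom per uranium atom implies roughly 760 million years, and only
# as a minimum, since any helium that leaked away makes the rock look younger.
print(f"{helium_age(1.0) / 1e6:.0f} million years")
```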

  Rutherford had thought about this process for a few years by 1904, when he was thirty-three and Kelvin was eighty. By that age, despite all that Kelvin had contributed to science, his mind had fogged. Gone were the days when he could put forward exciting new theories, like the one that all the elements on the periodic table were, at their deepest levels, twisted “knots of ether” of different shapes. Most detrimentally to his science, Kelvin never could incorporate the unsettling, even frightening science of radioactivity into his worldview. (That’s why Marie Curie once pulled him, too, into a closet to look at her glow-in-the-dark element—to instruct him.) In contrast, Rutherford realized that radioactivity in the earth’s crust would generate extra heat, which would bollix the old man’s theories about a simple heat loss into space.

  Excited to present his ideas, Rutherford arranged a lecture in Cambridge. But however dotty Kelvin got, he was still a force in scientific politics, and demolishing the old man’s proudest calculation could in turn jeopardize Rutherford’s career. Rutherford began the speech warily, but luckily, just after he started, Kelvin nodded off in the front row. Rutherford raced to get to his conclusions, but just as he began knocking the knees out from under Kelvin’s work, the old man sat up, refreshed and bright.

  Trapped onstage, Rutherford suddenly remembered a throwaway line he’d read in Kelvin’s work. It said, in typically couched scientific language, that Kelvin’s calculations about the earth’s age were correct unless someone discovered extra sources of heat inside the earth. Rutherford mentioned that qualification, pointed out that radioactivity might be that latent source, and with masterly spin ad-libbed that Kelvin had therefore predicted the discovery of radioactivity dozens of years earlier. What genius! The old man glanced around the audience, radiant. He thought that Rutherford was full of crap, but he wasn’t about to disregard the compliment.

  Rutherford lay low until Kelvin died, in 1907, then he soon proved the helium-uranium connection. And with no politics stopping him now—in fact, he became an eminent peer himself (and later ended up as scientific royalty, too, with a box on the periodic table, element 104, rutherfordium)—the eventual Lord Rutherford got some primordial uranium rock, eluted the helium from microscopic bubbles inside, and determined that the earth was at least 500 million years old—twenty-five times greater than Kelvin’s guess and the first calculation correct to within a factor of ten. Within years, geologists with more experience finessing rocks took over for Rutherford and determined that the helium pockets proved the earth to be at least two billion years old. This number was still 50 percent too low, but thanks to the tiny, inert bubbles inside radioactive rocks, human beings at last began to face the astounding age of the cosmos.
  After Rutherford, digging for small bubbles of elements inside rocks became standard work in geology. One especially fruitful approach uses zircon, a mineral that contains zirconium, the pawnshop heartbreaker and knockoff jewelry substitute.

  For chemical reasons, zircons are hardy—zirconium sits below titanium on the periodic table and makes convincing fake diamonds for a reason. Unlike soft rocks such as limestone, many zircons have survived since the early years of the earth, often as hard, poppy-seed grains inside larger rocks. Due to their unique chemistry, when zircon crystals formed way back when, they vacuumed up stray uranium and packed it into atomic bubbles inside themselves. At the same time, zircons had a distaste for lead and squeezed that element out (the opposite of what meteors do). Of course, that didn’t last long, since uranium decays into lead, but the zircons had trouble working the lead slivers out again. As a result, any lead inside lead-phobic zircons nowadays has to be a daughter product of uranium. The story should be familiar by now: after measuring the ratio of lead to uranium in zircons, it’s just a matter of graphing backward to year zero. Anytime you hear scientists announcing a record for the “world’s oldest rock”—probably in Australia or Greenland, where zircons have survived the longest—rest assured they used zircon-uranium bubbles to date it.
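
  The zircon version of the clock reduces to the same kind of one-line formula. The sketch below uses only the uranium-238 chain and assumed round numbers; real laboratories also track uranium-235 decaying to a different lead isotope as a built-in cross-check:

```python
import math

# Illustrative sketch of zircon uranium-lead dating, using only the U-238 chain
# (assumed values; real measurements also use U-235 -> Pb-207 as a cross-check).
# Zircon crystallizes with uranium but excludes lead, so any lead-206 inside it
# today is a daughter of uranium-238:
#     Pb206 / U238 = exp(lambda * t) - 1   =>   t = ln(1 + Pb/U) / lambda

U238_HALF_LIFE = 4.47e9                        # years
DECAY_CONSTANT = math.log(2) / U238_HALF_LIFE  # per year

def zircon_age(pb206_per_u238: float) -> float:
    """Age in years from the measured lead-206 to uranium-238 atomic ratio."""
    return math.log(1.0 + pb206_per_u238) / DECAY_CONSTANT

# A daughter-to-parent ratio near one corresponds to about 4.5 billion years,
# the ballpark of the oldest zircon grains reported from Australia.
print(f"{zircon_age(1.0) / 1e9:.2f} billion years")
```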

  Other fields adopted bubbles as a paradigm, too. Glaser began experimenting with his bubble chamber in the 1950s, and around that same time, theoretical physicists such as John Archibald Wheeler began speaking of the universe as foam on its fundamental level. On that scale, billions of trillions of times smaller than atoms, Wheeler dreamed that “the glassy smooth spacetime of the atomic and particle worlds gives way…. There would literally be no left and right, no before and after. Ordinary ideas of length would disappear. Ordinary ideas of time would evaporate. I can think of no better name than quantum foam for this state of affairs.” Some cosmologists today calculate that our entire universe burst into existence when a single submicronanobubble slipped free from that foam and began expanding at an exponential rate. It’s a handsome theory, actually, and explains a lot—except, unfortunately, why this might have happened.

 
