Ironically, Wheeler’s quantum foam traces its intellectual lineage to the ultimate physicist of the classical, everyday world, Lord Kelvin. Kelvin didn’t invent froth science—that was a blind Belgian with the fitting name (considering how little influence his work had) of Joseph Plateau. But Kelvin did popularize the science by saying things like he could spend a lifetime scrutinizing a single soap bubble. That was actually disingenuous, since according to his lab notebooks, Kelvin formulated the outline of his bubble work one lazy morning in bed, and he produced just one short paper on it. Still, there are wonderful stories of this white-bearded Victorian splashing around in basins of water and glycerin, with what looked like a miniature box spring on a ladle, to make colonies of interlocking bubbles. And squarish bubbles at that, reminiscent of the Peanuts character Rerun, since the box spring’s coils were shaped into rectangular prisms.
Plus, Kelvin’s work gathered momentum and inspired real science in future generations. Biologist D’Arcy Wentworth Thompson applied Kelvin’s theorems on bubble formation to cell development in his seminal 1917 book On Growth and Form, a book once called “the finest work of literature in all the annals of science that have been recorded in the English tongue,” and modern cell biology owes much to it. What’s more, recent biochemical research hints that bubbles were the efficient cause of life itself. The first complex organic molecules may have formed not in the turbulent ocean, as is commonly thought, but in water bubbles trapped in Arctic-like sheets of ice. Ice is quite pure: as water freezes, the growing crystals crowd dissolved “impurities,” such as organic molecules, into bubbles. The concentration and compression in those bubbles might have been high enough to fuse those molecules into self-replicating systems. Furthermore, recognizing a good trick, nature has plagiarized the bubble blueprint ever since. Regardless of where the first organic molecules formed, in ice or ocean, the first crude cells were certainly bubble-like structures that surrounded proteins or RNA or DNA and protected them from being washed away or eroded. Even today, four billion years later, cells still have a basic bubble design.
Kelvin’s work also inspired military science. During World War I, another lord, Lord Rayleigh, took on the urgent wartime problem of why submarine propellers were so prone to disintegrate and decay, even when the rest of the hull remained intact. It turned out that bubbles produced by the churning propellers turned around and attacked the metal blades like sugar attacks teeth, and with similarly corrosive results. Submarine science led to another breakthrough in bubble research as well—though at the time this finding seemed unpromising, even dodgy. Thanks to the memory of German U-boats, studying sonar—sound waves moving in water—was as trendy in the 1930s as radioactivity had been before. At least two research teams discovered that if they rocked a tank with jet engine–level noise, the bubbles that appeared would sometimes collapse and wink at them with a flash of blue or green light. (Think of biting wintergreen Life Savers in a dark closet.) More interested in blowing up submarines, scientists didn’t pursue so-called sonoluminescence, but for fifty years it hung on as a scientific parlor trick, passed down from generation to generation.
It might have remained just that if not for a colleague taunting Seth Putterman one day in the mid-1980s. Putterman worked at the University of California at Los Angeles in fluid dynamics, a fiendishly tricky field. In some sense, scientists know more about distant galaxies than about turbulent water gushing through sewer pipes. The colleague was teasing Putterman about this ignorance when he mentioned that Putterman’s ilk couldn’t even explain how sound waves can transmute bubbles into light. Putterman thought that sounded like an urban legend. But after looking up the scant research that existed on sonoluminescence, he chucked his previous work to study blinking bubbles full-time.*
For Putterman’s first, delightfully low-tech experiments, he set a beaker of water between two stereo speakers, which were cranked to dog-whistle frequencies. A heated toaster wire in the beaker kicked up bubbles, and sound waves trapped and levitated them in the water. Then came the fun part. Sound waves vary between barren, low-intensity troughs and crests of high intensity. The tiny, trapped bubbles responded to low pressure by swelling a thousand times, like a balloon filling a room. After the sound wave bottomed out, the high-pressure front tore in and crushed the bubble’s volume by half a million times, at forces 100 billion times greater than gravity. Not surprisingly, it’s that supernova crush that produces the eerie light. Most amazingly, despite being squished into a “singularity,” a term rarely used outside the study of black holes, the bubble stays intact. After the pressure lifts, the bubble billows out again, unpopped, as if nothing had happened. It’s then squished again and blinks again, with the process repeating thousands of times every second.
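A quick numbers check helps make those swings concrete. The sketch below is my own arithmetic, not Putterman’s, and it assumes the “thousand times” and “half a million times” figures refer to volume, so the radius scales as the cube root of each factor:

```python
# Back-of-the-envelope check on the bubble's size swings described above.
# Assumption (mine, not the book's): the swelling and crushing factors are
# volume ratios, so the radius scales as the cube root of each factor.

swell_volume_ratio = 1_000     # low-pressure trough: volume grows ~1,000x
crush_volume_ratio = 500_000   # high-pressure front: volume shrinks ~500,000x

swell_radius_ratio = swell_volume_ratio ** (1 / 3)   # ~10x wider
crush_radius_ratio = crush_volume_ratio ** (1 / 3)   # ~79x narrower

print(f"Radius grows about {swell_radius_ratio:.0f}x in the trough")
print(f"Radius shrinks about {crush_radius_ratio:.0f}x in the crush")
# Net effect: a bubble that balloons to ~10x its resting radius is squeezed
# to roughly an eighth of that resting radius, concentrating the sound
# energy enough to flash.
```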
Putterman soon bought more sophisticated equipment than his original garage-band setup, and upon doing so, he had a run-in with the periodic table. To help determine what exactly caused the bubbles to sparkle, he began trying different gases. He found that although bubbles of plain air produced nice crackles of blue and green, pure nitrogen or oxygen, which together make up 99 percent of air, wouldn’t luminesce, no matter what volume or shrillness he cranked the sound to. Perturbed, Putterman began pumping trace gases from air back into the bubbles until he found the elemental flint—argon.
That was odd, since argon is an inert gas. What’s more, the only other gases Putterman (and a growing cadre of bubble scientists) could get to work were argon’s heavier chemical cousins, krypton and especially xenon. In fact, when rocked with sonar, xenon and krypton flared up even brighter than argon, producing “stars in a jar” that sizzled at 35,000°F inside water—far hotter than the surface of the sun. Again, this was baffling. Xenon and krypton are often used in industry to smother fires or runaway reactions, and there was no reason to think those dull, inert gases could produce such intense bubbles.
Unless, that is, their inertness is a covert asset. Oxygen, carbon dioxide, and other atmospheric gases inside bubbles can use the incoming sonar energy to divide or react with one another. From the point of view of sonoluminescence, that’s energy squandered. Some scientists, though, think that inert gases under high pressure cannot help but soak up sonar energy. And with no way to dissipate that energy, collapsing bubbles of xenon or krypton have no choice but to concentrate it in their cores. If that’s the case, then the noble gases’ nonreactivity is the key to sonoluminescence. Whatever the reason, the link to sonoluminescence may rewrite what it means to be an inert gas.
Unfortunately, tempted by the prospect of harnessing that high energy, some scientists (including Putterman) have linked this fragile bubble science with desktop fusion, a cousin of that all-time favorite pathological science. (Because of the temperatures involved, it’s not cold fusion.) There has long been a vague, free-association link between bubbles and fusion, partly because Boris Deryagin, an influential Soviet scientist who studied the stability of foams, believed strongly in cold fusion. (Once, in an inconceivable experiment, the antithesis of one of Rutherford’s, Deryagin supposedly tried to induce cold fusion in water by firing a Kalashnikov rifle into it.)
The dubious link between sonoluminescence and fusion (sonofusion) was made explicit in 2002 when the journal Science ran a radioactively controversial paper on sonoluminescence-driven nuclear power. Unusually, Science also ran an editorial admitting that many senior scientists thought the paper flawed if not fraudulent; even Putterman recommended that the journal reject this one. Science printed it anyway (perhaps so everyone would have to buy a copy to see what all the fuss was about). The paper’s lead author was later hauled before the U.S. House of Representatives for faking data.
Thankfully, bubble science had a strong enough foundation* to survive that disgrace. Physicists interested in alternative energy now model superconductors with bubbles. Pathologists describe AIDS as a “foamy” virus, for the way infected cells swell before exploding. Entomologists know of insects that use bubbles like submersibles to breathe underwater, and ornithologists know that the metallic sheen of peacocks’ plumage comes from light tickling bubbles in the feathers. Most important, in 2008, in food science, students at Appalachian State University finally determined what makes Diet Coke explode when you drop Mentos into it. Bubbles. The grainy surface of Mentos candy acts as a net to snag small dissolved bubbles, which are stitched into large ones. Eventually, a few gigantic bubbles break off, rocket upward, and whoosh through the nozzle, spurting up to twenty magnificent feet. This discovery was undoubtedly the greatest moment in bubble science since Donald Glaser eyed his lager more than fifty years before and dreamed of subverting the periodic table.
18
Tools of Ridiculous Precision
Think of the most fussy science teacher you ever had. The one who docked your grade if the sixth decimal place in your answer was rounded incorrectly; who tucked in his periodic table T-shirt, corrected every student who said “weight” when he or she meant “mass,” and made everyone, including himself, wear goggles even while mixing sugar water. Now try to imagine someone whom your teacher would hate for being anal-retentive. That is the kind of person who works for a bureau of standards and measurement.
Most countries have a standards bureau, whose job it is to measure everything—from how long a second really is to how much mercury you can safely consume in bovine livers (very little, according to the U.S. National Institute of Standards and Technology, or NIST). To scientists who work at standards bureaus, measurement isn’t just a practice that makes science possible; it’s a science in itself. Progress in any number of fields, from post-Einsteinian cosmology to the astrobiological hunt for life on other planets, depends on our ability to make ever finer measurements based on ever smaller scraps of information.
For historical reasons (the French Enlightenment folk were fanatic measurers), the Bureau International des Poids et Mesures (BIPM) just outside Paris acts as the standards bureau’s standards bureau, making sure all the “franchises” stay in line. One of the more peculiar jobs of the BIPM is coddling the International Prototype Kilogram—the world’s official kilogram. It’s a two-inch-wide, 90 percent platinum cylinder that, by definition, has a mass of exactly 1.000000… kilogram (to as many decimal places as you like). I’d say that’s about two pounds, but I’d feel guilty about being inexact.
The two-inch-wide International Prototype Kilogram (center), made of platinum and iridium, spends all day every day beneath three nested bell jars inside a humidity- and temperature-controlled vault in Paris. Surrounding the Kilogram are six official copies, each under two bell jars. (Reproduced with permission of BIPM, which retains full international protected copyright)
Because the Kilogram is a physical object and therefore damageable, and because the definition of a kilogram ought to stay constant, the BIPM must make sure it never gets scratched, never attracts a speck of dust, never loses (the bureau hopes!) a single atom. For if any of that happened, its mass could spike to 1.000000… 1 kilograms or plummet to 0.999999… 9 kilograms, and the mere possibility induces ulcers in a national bureau of standards type. So, like phobic mothers, they constantly monitor the Kilogram’s temperature and the pressure around it to prevent microscopic bloating and contracting, stress that could slough off atoms. It’s also swaddled within three successively smaller bell jars to prevent humidity from condensing on the surface and leaving a nanoscale film. And the Kilogram is made from dense platinum (and iridium) to minimize the surface area exposed to unacceptably dirty air, the kind we breathe. Platinum also conducts electricity well, which cuts down on the buildup of “parasitic” static electricity (the BIPM’s word) that might zap stray atoms.
Finally, platinum’s toughness militates against the chance of a disastrous fingernail nick on the rare occasions when people actually lay a hand on the Kilogram. Other countries need their own official 1.000000… cylinder to avoid having to fly to Paris every time they want to measure something precisely, and since the Kilogram is the standard, each country’s knockoff has to be compared against it. The United States’ official kilogram, called K20 (i.e., the twentieth official copy), resides in a government building in exurban Maryland; it has been calibrated just once since 2000 and is due for another calibration, says Zeina Jabbour, group leader for the NIST mass and force team. But calibration is a multimonth process, and security regulations since 2001 have made flying K20 to Paris an absolute hassle. “We have to hand-carry the kilograms through the flight,” says Jabbour, “and it’s hard to get through security and customs with a slug of metal, and tell people they cannot touch it.” Even opening K20’s customized suitcase in a “dusty airport” could compromise it, she says, “and if somebody insists on touching it, that’s the end of the calibration.”
Usually, the BIPM uses one of six official copies of the Kilogram (each kept under two bell jars) to calibrate the knockoffs. But the official copies have to be measured against their own standard, so every few years scientists remove the Kilogram from its vault (using tongs and wearing latex gloves, of course, so as not to leave fingerprints—but not the powdery kind of gloves, because that would leave a residue—oh, and not holding it for too long, because the person’s body temperature could heat it up and ruin everything) and calibrate the calibrators.* Alarmingly, scientists noticed during calibrations in the 1990s that, even accounting for atoms that rub off when people touch it, in the past few decades the Kilogram had lost an additional mass equal to that of a fingerprint(!), half a microgram per year. No one knows why.
The failure—and it is that—to keep the Kilogram perfectly constant has renewed discussions about the ultimate dream of every scientist who obsesses over the cylinder: to make it obsolete. Science owes much of its progress since about 1600 to adopting, whenever possible, an objective, non-human-centered point of view about the universe. (This is called the Copernican principle, or less flatteringly the mediocrity principle.) The kilogram is one of seven “base units” of measurement that permeate all branches of science, and it’s no longer acceptable for any of those units to be based on a human artifact, especially if it’s mysteriously shrinking.
The goal with every unit, as England’s bureau of national standards cheekily puts it, is for one scientist to e-mail its definition to a colleague on another continent and for the colleague to be able to reproduce something with exactly those dimensions, based only on the description in the e-mail. You can’t e-mail the Kilogram, and no one has ever come up with a definition more reliable than that squat, shiny, pampered cylinder in Paris. (Scientists are drawing closer, but so far the best ideas are either too impossibly involved—such as counting trillions of trillions of atoms—or require measurements too precise for even the best instruments today.) The inability to solve the kilogram conundrum, to either stop it from shrinking or superannuate it, has become an increasing source of international worry and embarrassment (at least for us anal types).
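To get a feel for why atom counting is “too impossibly involved,” consider the numbers. This sketch is my own illustration; the silicon-28 figures are standard reference values, not from the text (the actual counting proposals used near-perfect spheres of silicon-28):

```python
# Rough scale of the atom-counting route to redefining the kilogram.
# Values below are standard references, not from the text.

AVOGADRO = 6.022e23          # atoms per mole
SI28_MOLAR_MASS_G = 27.977   # grams per mole of silicon-28

atoms_per_kg = (1000 / SI28_MOLAR_MASS_G) * AVOGADRO
print(f"Atoms in 1 kg of silicon-28: about {atoms_per_kg:.2e}")
# ~2.15e25 atoms, i.e., tens of trillions of trillions. A counting error of
# just one part in a hundred million corresponds to about ten micrograms,
# the same scale as the drift that worries the BIPM.
```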
The pain is all the more acute because the kilogram is the last base unit bound to human strictures. A platinum rod in Paris defined 1.000000… meter through much of the twentieth century, until scientists redefined it with a krypton atom in 1960, fixing it at 1,650,763.73 wavelengths of red-orange light from a krypton-86 atom. This distance is virtually identical to the length of the old rod, but it made the rod obsolete, since that many wavelengths of krypton light would stretch the same distance in any vacuum anywhere. (That’s an e-mailable definition.) Since then, measurement scientists (metrologists) have re-redefined a meter (about three feet) as the distance any light travels in a vacuum in 1/299,792,458 of a second.
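The two definitions can be cross-checked with a little arithmetic. In the sketch below (my own; the 605.78-nanometer wavelength of krypton-86’s red-orange line is a standard value that the text doesn’t quote), both routes land on the same meter:

```python
# Cross-checking the two meter definitions described above.

# 1960 definition: 1,650,763.73 wavelengths of krypton-86's red-orange line.
KR86_WAVELENGTH_M = 605.78e-9  # standard value, ~605.78 nm (not quoted in the text)
meter_from_krypton = 1_650_763.73 * KR86_WAVELENGTH_M
print(f"Krypton-86 definition: {meter_from_krypton:.6f} m")  # ~1.000000 m

# Current definition: the distance light travels in 1/299,792,458 of a second.
SPEED_OF_LIGHT_M_S = 299_792_458  # exact, by definition
meter_from_light = SPEED_OF_LIGHT_M_S * (1 / 299_792_458)
print(f"Light-speed definition: {meter_from_light:.6f} m")   # exactly 1 m
```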
Similarly, the official definition of one second used to be about 1/86,400 of one spin on Earth’s axis (i.e., the number of seconds in one day). But a few pesky facts made that an inconvenient standard. Most important, the length of a day is slowly increasing because of the sloshing of ocean tides, which drag and slow Earth’s rotation. To correct for this, metrologists slip in a “leap second” about every third year, usually when no one’s paying attention, at midnight on December 31. But leap seconds are an ugly, ad hoc solution. And rather than tie a supposedly universal unit of time to the transit of an unremarkable rock around a forgettable star, the U.S. standards bureau has developed cesium-based atomic clocks.
Atomic clocks run on the same leaping and crashing of excited electrons we’ve discussed before. But atomic clocks also exploit a subtler movement, the electrons’ “fine structure.” If the normal jump of an electron resembles a singer jumping an octave from G to G, fine structure resembles a jump from G to G-flat or G-sharp. Fine structure effects are most noticeable in magnetic fields, and they’re caused by things you can safely ignore unless you find yourself in a dense, high-level physics course—such as the magnetic interactions between electrons and protons or corrections due to Einstein’s relativity. The upshot is that after those fine adjustments,* each electron jumps either slightly lower (G-flat) or slightly higher (G-sharp) than expected.
The electron “decides” which jump to make based on its intrinsic spin, so one electron never hits the sharp and the flat on successive leaps. It hits one or the other every time. Inside atomic clocks, which look like tall, skinny pneumatic tubes, a magnet purges all the cesium atoms whose outer electrons jump to one level, call it G-flat. That leaves only atoms with G-sharp electrons, which are gathered into a chamber and excited by an intense microwave. This causes cesium electrons to pop (i.e., jump and crash) and emit photons of light. Each cycle of jumping up and down always takes the same (extremely short) amount of time, so the atomic clock can measure time simply by counting photons. Really, whether you purge the G-flat or the G-sharp doesn’t matter, but you have to purge one of them, because the two jumps take slightly different amounts of time, and at the scales metrologists work with, such imprecision is unacceptable.
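The payoff of all that purging is a perfectly repeatable tick to count. Here is a minimal sketch of the bookkeeping (mine, not the bureau’s; the 9,192,631,770-cycles-per-second figure is the official SI definition of the second, which the text doesn’t quote):

```python
# Minimal sketch of how cycle counting becomes timekeeping in a cesium clock.
# The SI second is defined as 9,192,631,770 cycles of the radiation from
# cesium-133's ground-state hyperfine transition (official value, not from the text).

CESIUM_CYCLES_PER_SECOND = 9_192_631_770  # exact, by definition

def elapsed_seconds(cycles_counted: int) -> float:
    """Convert a raw cycle count from the detector into elapsed time."""
    return cycles_counted / CESIUM_CYCLES_PER_SECOND

# One minute's worth of cycles, by construction:
print(elapsed_seconds(60 * CESIUM_CYCLES_PER_SECOND))  # 60.0
# Because every cesium atom is identical, any lab that counts the same
# number of cycles has measured exactly the same stretch of time. That is
# an e-mailable definition, unlike the Kilogram.
```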