  But the announcement didn’t stir up much excitement. The trio had actually discovered element sixty-one two years before but had sat on the results because they were too preoccupied with their work on uranium—their real work. The press gave the finding correspondingly tepid coverage. In the New York Times, the missing link shared a crowded headline with a dubious mining technique that promised a hundred uninterrupted years of oil. Time buried the news in its conference wrap-up and pooh-poohed the element as “not good for much.”* Then the scientists announced that they planned to name it promethium. Elements discovered earlier in the century had been given boastful or at least explanatory names, but promethium—after the Titan in Greek mythology who stole fire, gave it to humankind, and was tortured by having a vulture dine on his liver—evoked something stern and grim, even guilty.

  So what happened between Moseley’s time and the discovery of element sixty-one? Why had hunting for elements gone from work so important that a colleague had called Moseley’s death an irreparable crime to work worth barely a few lines of newsprint? Sure, promethium was useless, but scientists, of all people, cheer impractical discoveries, and the completion of the periodic table was epochal, the culmination of millions of man-hours. Nor had people simply gotten fatigued with seeking new elements—that pursuit caused sparring between American and Soviet scientists through much of the cold war. Instead, the nature and enormity of nuclear science had changed. People had seen things, and a mid-range element like promethium could no longer rouse them like the heavy elements plutonium and uranium, not to mention their famous offspring, the atomic bomb.

  One morning in 1939, a young physicist at the University of California at Berkeley settled into a pneumatic barber’s chair in the student union for a haircut. Who knows the topic of conversation that day—maybe that son of a bitch Hitler or whether the Yankees would win their fourth straight World Series. Regardless, Luis Alvarez (not yet famous for his dinosaur extinction theory) was chatting and leafing through the San Francisco Chronicle when he ran across a wire service item about experiments by Otto Hahn in Germany, on fission—the splitting of the uranium atom. Alvarez halted his barber “mid-snip,” as a friend recalled, tore off his smock, and sprinted up the road to his laboratory, where he scooped up a Geiger counter and made a beeline for some irradiated uranium. His hair still only half-cut, he summoned everyone within shouting distance to come see what Hahn had discovered.

  Beyond being amusing, Alvarez’s dash symbolizes the state of nuclear science at the time. Scientists had been making steady if slow progress in understanding how the cores of atoms work, little snippets of knowledge here and there—and then, with one discovery, they found themselves on a mad tear.

  Moseley had given atomic and nuclear science legitimate footing, and loads of talent had poured into those fields in the 1920s. Nevertheless, gains had proved more difficult than expected. Part of the confusion was, indirectly, Moseley’s fault. His work had proved that isotopes such as lead-204 and lead-206 could have the same net positive charge yet have different atomic weights. In a world that knew only about protons and electrons, this left scientists floundering with unwieldy ideas about positive protons in the nucleus that gobbled up negative electrons Pac-Man style.* In addition, to comprehend how subatomic particles behave, scientists had to devise a whole new mathematical tool, quantum mechanics, and it took years to figure out how to apply it to even simple, isolated hydrogen atoms.

  Meanwhile, scientists were also developing the related field of radioactivity, the study of how nuclei fall apart. Any old atom can shed or steal electrons, but luminaries such as Marie Curie and Ernest Rutherford realized that some rare elements could alter their nuclei, too, by blowing off atomic shrapnel. Rutherford especially helped classify all the shrapnel into just a few common types, which he named using the Greek alphabet, calling them alpha, beta, or gamma decay. Gamma decay is the simplest and deadliest—it occurs when the nucleus emits concentrated X-rays and is today the stuff of nuclear nightmares. The other types of radioactivity involve the conversion of one element to another, a tantalizing process in the 1920s. But each element goes radioactive in a characteristic way, so the deep, underlying features of alpha and beta decay baffled scientists, who were growing increasingly frustrated about the nature of isotopes as well. The Pac-Man model was failing, and a few daredevils suggested that the only way to deal with the proliferation of new isotopes was to scrap the periodic table.

  The giant collective forehead slap—the “Of course!” moment—took place in 1932, when James Chadwick, yet another of Rutherford’s students, discovered the neutral neutron, which adds weight without charge. Coupled with Moseley’s insights about the atomic number, atoms (at least lone, isolated atoms) suddenly made sense. The neutron meant that lead-204 and lead-206 could still both be lead—could still have the same positive nuclear charge and sit in the same box on the periodic table—even if they had different atomic weights. The nature of radioactivity suddenly made sense, too. Beta decay was understood as the conversion of neutrons to protons or vice versa—and it’s because the proton number changes that beta decay converts an atom into a different element. Alpha decay also converts elements and is the most dramatic change on a nuclear level—two neutrons and two protons are shorn away.

  Over the next few years, the neutron became more than a theoretical tool. For one thing, it supplied a fantastic way to probe atomic innards, because scientists could shoot a neutron at atoms without it being electrically repulsed, as charged projectiles were. Neutrons also helped scientists induce a new type of radioactivity. Elements, especially lighter elements, try to maintain a rough one-to-one ratio of neutrons to protons. If an atom has too many neutrons, it splits itself, releasing energy and excess neutrons in the process. If nearby atoms absorb those neutrons, they become unstable and spit out more neutrons, a cascade known as a chain reaction. A physicist named Leo Szilard dreamed up the idea of a nuclear chain reaction circa 1933 while standing at a London stoplight one morning. He patented it in 1934 and tried (but failed) to produce a chain reaction in a few light elements as early as 1936.

  But notice the dates here. Just as the basic understanding of electrons, protons, and neutrons fell into place, the old-world political order was disintegrating. By the time Alvarez read about uranium fission in his barber’s smock, Europe was doomed.

  The genteel old world of element hunting died at the same time. With their new model of atomic innards, scientists began to see that the few undiscovered elements on the periodic table were undiscovered because they were intrinsically unstable. Even if they had existed in abundance on the early earth, they had long since disintegrated. This conveniently explained the holes in the periodic table, but the work proved its own undoing. Probing unstable elements soon led scientists to stumble onto nuclear fission and neutron chain reactions. And as soon as they understood that atoms could be split—understood both the scientific and political implications of that fact—collecting new elements for display seemed like an amateur’s hobby, like the fusty, shoot-and-stuff biology of the 1800s compared with molecular biology today. Which is why, with a world war and the possibility of atomic bombs staring them in the face in 1939, no scientists bothered tracking promethium down until a decade later.

  No matter how keyed up scientists got about the possibility of fission bombs, however, a lot of work still separated the theory from the reality. It’s hard to remember today, but nuclear bombs were considered a long shot at best, especially by military experts. As usual, military leaders were eager to enlist scientists in World War II, and the scientists dutifully exacerbated the war’s gruesomeness through technology such as better steel. But the war would not have ended with two mushroom clouds if the U.S. government had merely demanded bigger, faster weapons now; it also took the political will to invest billions in a hitherto pure and impractical field: subatomic science. And even then, figuring out how to divide atoms in a controlled manner proved so far beyond the science of the day that the Manhattan Project had to adopt a whole new research strategy to succeed—the Monte Carlo method, which rewired people’s conceptions of what “doing science” meant.

  As noted, quantum mechanics worked fine for isolated atoms, and by 1940 scientists knew that absorbing a neutron made an atom queasy, causing it to explode and possibly release more neutrons. Following the path of one given neutron was easy, no harder than following a caroming billiard ball. But starting a chain reaction required keeping track of billions of billions of neutrons, all of them traveling at different speeds in every direction. This made hash of scientists’ built-for-one theoretical apparatus. At the same time, uranium and plutonium were expensive and dangerous, so detailed experimental work was out of the question.

  Yet Manhattan Project scientists had orders to figure out exactly how much plutonium and uranium they needed to create a bomb: too little and the bomb would fizzle out; too much and the bomb would blow up just fine, but at the cost of prolonging the war by months, since both elements were monstrously complicated to purify (or, in plutonium’s case, synthesize, then purify). So, just to get by, some pragmatic scientists decided to abandon both traditional approaches, theory and experiment, and pioneer a third path.

  To start, they picked a random speed for a neutron bouncing around in a pile of plutonium (or uranium). They also picked a random direction for it and more random numbers for other parameters, such as the amount of plutonium available, the chance the neutron would escape the plutonium before being absorbed, even the geometry of the plutonium pile. Note that selecting specific numbers meant that scientists were sacrificing the universality of each calculation, since the results applied to only a few neutrons in one of many designs. Theoretical scientists hate giving up universally applicable results, but they had no other choice.

  At this point, rooms full of young women with pencils (many of them scientists’ wives, who’d been hired to help out because they were crushingly bored in Los Alamos) would get a sheet with the random numbers and begin to calculate (sometimes without knowing what it all meant) how the neutron collided with a plutonium atom; whether it was gobbled up; how many new neutrons if any were released in the process; how many neutrons those in turn released; and so on. Each of the hundreds of women did one narrow calculation in an assembly line, and scientists aggregated the results. Historian George Dyson described this process as building bombs “numerically, neutron by neutron, nanosecond by nanosecond… [a method] of statistical approximation whereby a random sampling of events… is followed through a series of representative slices in time, answering the otherwise incalculable question of whether a configuration would go thermonuclear.”*

  Sometimes the theoretical pile did go nuclear, and this was counted as a success. When each calculation was finished, the women would start over with different numbers. Then do it again. And again. And yet again. Rosie the Riveter may have become the iconic symbol of empowered female employment during the war, but the Manhattan Project would have gone nowhere without these women hand-crunching long tables of data. They became known simply as “computers.”
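  To make the bookkeeping concrete, here is a minimal sketch in Python of the kind of neutron-by-neutron tally described above. Every number in it (the chances that a neutron escapes, gets captured, or triggers a fission, and the two or three fresh neutrons each fission releases) is invented for illustration; the real Los Alamos calculations rested on measured cross sections and painstaking geometry that are beside the point here. The sketch only shows the shape of the method: follow one starting neutron generation by generation, roll dice for each fate, and see whether the population dies out or runs away.

```python
import random

# All of these numbers are made up for illustration; they are not the
# measured quantities the Manhattan Project "computers" worked from.
P_ESCAPE = 0.30                  # neutron leaves the pile untouched
P_CAPTURE = 0.25                 # absorbed without causing fission
NEUTRONS_PER_FISSION = (2, 3)    # otherwise: fission, releasing 2 or 3 neutrons

def one_history(max_generations=100, runaway=1_000):
    """Follow a single starting neutron, generation by generation.
    Returns True if the population 'runs away' (our stand-in for the
    pile going critical), False if it fizzles out or stays tame."""
    population = 1
    for _ in range(max_generations):
        next_generation = 0
        for _ in range(population):
            fate = random.random()
            if fate < P_ESCAPE:
                continue                      # lost from the pile
            if fate < P_ESCAPE + P_CAPTURE:
                continue                      # captured, no fission
            next_generation += random.choice(NEUTRONS_PER_FISSION)
        population = next_generation
        if population == 0:
            return False                      # the chain died out
        if population >= runaway:
            return True                       # the chain keeps multiplying
    return False

trials = 5_000
criticals = sum(one_history() for _ in range(trials))
print(f"{criticals / trials:.1%} of simulated histories ran away")
```

Run enough trials and the fraction that run away settles down, which is exactly the kind of statistical answer, a probability rather than a certainty, that the method delivers.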

  But why was this approach so different? Basically, scientists equated each computation with an experiment and collected only virtual data for the plutonium and uranium bombs. They abandoned the meticulous and mutually corrective interplay of theory and lab work and adopted methods one historian described unflatteringly as “dislocated… a simulated reality that borrowed from both experimental and theoretical domains, fused these borrowings together, and used the resulting amalgam to stake out a netherland at once nowhere and everywhere on the usual methodological map.”*

  Of course, such calculations were only as good as scientists’ initial equations, but here they got lucky. Particles on the quantum level are governed by statistical laws, and quantum mechanics, for all its bizarre, counterintuitive features, is the single most accurate scientific theory ever devised. Plus, the sheer number of calculations scientists pushed through during the Manhattan Project gave them great confidence—confidence that was proved justified after the successful Trinity test in New Mexico in mid-1945. The swift and flawless detonation of a uranium bomb over Hiroshima and a plutonium bomb over Nagasaki a few days later also testified to the accuracy of this unconventional, calculation-based approach to science.

  After the isolated camaraderie of the Manhattan Project ended, scientists scattered back to their homes to reflect on what they’d done (some proudly, some not). Many gladly forgot about their time served in the calculation wards. Some, though, were riveted by what they’d learned, including one Stanislaw Ulam. Ulam, a Polish refugee who’d passed hours in New Mexico playing card games, was playing solitaire one day in 1946 when he began wondering about the odds of winning any randomly dealt hand. The one thing Ulam loved more than cards was futile calculation, so he began filling pages with probabilistic equations. The problem soon ballooned to such complexity that Ulam smartly gave up. He decided it was better to play a hundred hands and tabulate what percentage of the time he won. Easy enough.

  The neurons of most people, even most scientists, wouldn’t have made the connection, but in the middle of his century of solitaire, Ulam recognized that he was using the same basic approach as scientists had used in the bomb-building “experiments” in Los Alamos. (The connections are abstract, but the order and layout of the cards were like the random inputs, and the “calculation” was playing the hand.) Discussions soon followed with his calculation-loving friend John von Neumann, another European refugee and Manhattan Project veteran. Ulam and von Neumann realized just how powerful the method might be if they could universalize it and apply it to other situations with multitudes of random variables. In those situations, instead of trying to take into account every complication, every butterfly flapping its wings, they would simply define the problem, pick random inputs, and “plug and chug.” Unlike an experiment, the results were not certain. But with enough calculations, they could be pretty darn sure of the probabilities.
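  A minimal sketch makes the recipe plain. Real solitaire has too many rules to fit here, so the toy question below is my own stand-in: shuffle a deck and ask how often no card lands back in its starting position. What matters is the pattern Ulam and von Neumann had in mind: define the question, draw random inputs, play the hand, and tabulate.

```python
import random

def no_card_in_place(deck_size=52):
    """One 'hand': shuffle a deck and report whether no card
    ended up back in its original position."""
    deck = list(range(deck_size))
    random.shuffle(deck)
    return all(card != position for position, card in enumerate(deck))

hands = 100_000
wins = sum(no_card_in_place() for _ in range(hands))
print(f"Estimated probability: {wins / hands:.4f}")
```

This particular toy question happens to have a known exact answer (it hovers near 1/e, about 0.37), which is handy for checking the estimate, a luxury the genuinely incalculable problems, from solitaire odds to neutron cascades, did not offer.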

  In a stroke of serendipity, Ulam and von Neumann knew the American engineers developing the first electronic computers, such as the ENIAC in Philadelphia. The Manhattan Project “computers” had eventually employed a mechanical punch card system for calculations, but the tireless ENIAC showed more promise for the tedious iterations Ulam and von Neumann envisioned. Historically, the science of probability has its roots in aristocratic casinos, and it’s unclear where the nickname for Ulam and von Neumann’s approach came from. But Ulam liked to brag that he named it in memory of an uncle who often borrowed money to gamble on the “well-known generator of random integers (between zero and thirty-six) in the Mediterranean principality.”

  Regardless, Monte Carlo science caught on quickly. It cut down on expensive experiments, and the need for high-quality Monte Carlo simulators drove the early development of computers, pushing them to become faster and more efficient. Symbiotically, the advent of cheap computing meant that Monte Carlo–style experiments, simulations, and models began to take over branches of chemistry, astronomy, and physics, not to mention engineering and stock market analysis. Today, just two generations on, the Monte Carlo method (in various forms) so dominates some fields that many young scientists don’t realize how thoroughly they’ve departed from traditional theoretical or experimental science. Overall, an expedient, a temporary measure—tallying imaginary plutonium and uranium atoms like beads on an abacus to work out nuclear chain reactions—has become an irreplaceable feature of the scientific process. It not only conquered science; it settled down, assimilated, and intermarried with other methods.

  In 1949, however, that transformation lay in the future. In those early days, Ulam’s Monte Carlo method mostly pushed through the next generation of nuclear weapons. Von Neumann, Ulam, and their ilk would show up at the gymnasium-sized rooms where computers were set up and mysteriously ask if they could run a few programs, starting at 12:00 a.m. and running through the night. The weapons they developed during those dead hours were the “supers,” multistage devices a thousand times more powerful than standard A-bombs. Supers used plutonium and uranium to ignite stellar-style fusion in extra-heavy liquid hydrogen, a complicated process that never would have moved beyond secret military reports and into missile silos without digital computation. As historian George Dyson neatly summarized the technological history of that decade, “Computers led to bombs, and bombs led to computers.”

  After a great struggle to find the proper design for a super, scientists hit upon a dandy in 1952. The obliteration of an island in the Eniwetok atoll in the Pacific Ocean during a test of a super that year showed once again the ruthless brilliance of the Monte Carlo method. Nevertheless, bomb scientists already had something even worse than the supers in the pipeline.

  Atomic bombs can get you two ways. A madman who just wants lots of people dead and lots of buildings flattened can stick with a conventional, one-stage fission bomb. It’s easier to build, and the big flash-bang should satisfy his need for spectacle, as should aftereffects such as spontaneous tornadoes and the silhouettes of victims seared onto brick walls. But if the madman has patience and wants to do something insidious, if he wants to piss in every well and sow the ground with salt, he’ll detonate a cobalt-60 dirty bomb.

  Whereas conventional nuclear bombs kill with heat, dirty bombs kill with gamma radiation—malignant X-rays. Gamma rays result from frantic radioactive events, and in addition to burning people frightfully, they dig down into bone marrow and scramble the chromosomes in white blood cells. The cells either die outright, turn cancerous, or swell without constraint and, like humans with gigantism, end up deformed and unable to fight infections. All nuclear bombs release some radiation, but with dirty bombs, radiation is the whole point.

  Even endemic leukemia isn’t ambitious enough by some bombs’ standards. Another European refugee who worked on the Manhattan Project, Leo Szilard—the physicist who, to his regret, invented the idea of a self-sustaining nuclear chain reaction around 1933—calculated in 1950, as a wiser, more sober man, that sprinkling a tenth of an ounce of cobalt-60 on every square mile of earth would pollute it with enough gamma rays to wipe out the human race, a nuclear version of the cloud that helped kill the dinosaurs. His device consisted of a multistage warhead surrounded by a jacket of cobalt-59. A fission reaction in plutonium would kick off a fusion reaction in hydrogen, and once the reaction started, obviously, the cobalt jacket and everything else would be obliterated. But not before something happened on the atomic level. Down there, the cobalt atoms would absorb neutrons from the fission and fusion, a step called salting. The salting would convert stable cobalt-59 into unstable cobalt-60, which would then float down like ash.
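  For a sense of scale, a rough back-of-envelope tally (my own, assuming Earth’s total surface area of roughly 197 million square miles and about 28.35 grams to the ounce) turns Szilard’s tenth of an ounce per square mile into a global total of a few hundred metric tons of cobalt-60:

```python
# Back-of-envelope only; the figures below are standard approximations,
# not anything taken from Szilard's own calculation.
EARTH_SURFACE_SQ_MILES = 197_000_000   # land and ocean combined
OUNCES_PER_SQ_MILE = 0.1               # Szilard's sprinkling rate
GRAMS_PER_OUNCE = 28.35

total_grams = EARTH_SURFACE_SQ_MILES * OUNCES_PER_SQ_MILE * GRAMS_PER_OUNCE
print(f"{total_grams / 1e6:,.0f} metric tons of cobalt-60")   # roughly 560
```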

 
