Strange Glow


by Timothy J Jorgensen


  We will deal more with beta particles and other particulate radiations in later chapters, but for now it is sufficient to know that there are two subclasses of ionizing radiation—the electromagnetic short-wavelength radiations, like x-rays, and the particulate radiations, such as beta particles. Both have energies sufficient to generate ions by dislodging the orbital electrons of neighboring atoms, thereby producing comparable biological effects. Consequently, they are simply grouped together as ionizing radiation for the purposes of understanding radiation biology. All radioisotopes emit ionizing radiation.

  GAMMA RAYS

  As we have just seen for carbon-14, radioactive decay often involves release of a high-energy beta particle that is ejected from the atom. In the case of carbon-14, particulate radiation is the only form of ionizing radiation that is released. For some radioisotopes, however, the energy that must be dissipated by their decay is too great to be carried by the particle alone. In those cases, an electromagnetic wave is released concurrent with the particle. The wave typically leaves the nucleus simultaneously with the particle but moves in a direction independent of it. We call these electromagnetic waves gamma rays. It was the gamma rays from uranium that were exposing Becquerel’s film.14

  Such gamma rays are typically indistinguishable from x-rays, although their wavelengths tend to be shorter and their energies thus higher.15 (Remember, as we said in chapter 2, the shorter the wavelength, the higher the energy.) The only difference between a gamma ray and an x-ray is that gamma rays emanate from the atom’s nucleus while x-rays emanate from the atom’s electron orbitals. So a gamma ray can simply be thought of as an x-ray coming from an atomic nucleus. And because, under normal circumstances, nuclear decay is required for it to be produced, a gamma ray is exclusively associated with radioactivity.
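  The wavelength-energy relationship can be sketched in a few lines of Python using the Planck relation E = hc/λ. The specific wavelengths below are illustrative values chosen for this sketch, not figures from the text:

```python
# Photon energy from wavelength via the Planck relation E = h*c / wavelength.
H = 6.626e-34   # Planck constant, joule-seconds
C = 2.998e8     # speed of light, meters per second
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy, in electron-volts, of a photon with the given wavelength."""
    return H * C / wavelength_m / EV

# A representative x-ray wavelength (~0.1 nanometer) versus a representative
# gamma-ray wavelength (~0.001 nanometer): the shorter wavelength carries
# proportionally more energy.
xray_ev = photon_energy_ev(1e-10)   # roughly 12,400 eV (12.4 keV)
gamma_ev = photon_energy_ev(1e-12)  # roughly 1,240,000 eV (1.24 MeV)
```

  A wavelength one hundred times shorter yields a photon one hundred times more energetic, which is why gamma rays sit at the high-energy end of the electromagnetic spectrum.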

  HALF-LIFE

  The concept of half-life is a useful way to comprehend exactly how unstable a radioisotope is. As mentioned earlier, the stability of a radioisotope is an intrinsic property of the atomic nucleus that cannot be altered, so all radioisotopes have their own unique half-lives. A radioisotope with a long half-life is relatively stable, and a short half-life indicates instability.

  We’ll use carbon-14 again to illustrate. The half-life of carbon-14 is 5,730 years. That means if we had one gram (about a quarter of a teaspoon) of carbon-14, in 5,730 years we would have 0.5 gram of carbon-14 and 0.5 gram of nitrogen-14, which had been produced from the cumulative decay of carbon-14. In another 5,730 years, we would then have 0.25 gram of carbon-14 (half again of the remaining 0.5 gram), and a cumulative 0.75 gram of nitrogen-14. After 10 half-lives (57,300 years) there would only be trace quantities of carbon-14 left and nearly a full gram of the nitrogen-14. In contrast, during this entire time, one gram of stable carbon-12 would remain as one gram of carbon-12.
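  The halving arithmetic above can be expressed compactly: after t years, the surviving fraction is 0.5 raised to the power t divided by the half-life. A minimal Python sketch of the carbon-14 example:

```python
# Remaining mass of a radioisotope after a given elapsed time.
# Each half-life cuts the surviving amount in half, so the decay
# follows 0.5 ** (elapsed_time / half_life).

CARBON14_HALF_LIFE = 5730.0  # years, as given in the text

def remaining(initial_grams, years, half_life=CARBON14_HALF_LIFE):
    """Grams of the radioisotope still undecayed after `years`."""
    return initial_grams * 0.5 ** (years / half_life)

# The worked example from the text, starting with 1 gram of carbon-14:
after_one_half_life = remaining(1.0, 5730)    # 0.5 g left (0.5 g is now N-14)
after_two_half_lives = remaining(1.0, 11460)  # 0.25 g left
after_ten_half_lives = remaining(1.0, 57300)  # under 0.001 g: trace amounts
```

  Note that the decayed mass reappears as nitrogen-14, so the total stays at one gram throughout.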

  We can use knowledge of radioisotope half-lives in a number of practical ways. For example, knowledge of carbon-14’s half-life allows scientists to determine the age of ancient biological artifacts (e.g., wooden tools). Since the ratio of carbon-14 to carbon-12 in our environment is constant, all living things have the same ratio of carbon-14 to carbon-12. When a plant or animal dies, however, the exchange of carbon between the environment and its tissues stops, and the carbon in the tissue at the time of death remains trapped there forever. With time, the carbon-14 in the tissue decays away while the carbon-12 does not. So the ratio of carbon-14 to carbon-12 drops with time, and drops at a predictable rate due to the constancy of carbon-14’s half-life. By measuring the ratio of carbon-14 to carbon-12 in a biological artifact a scientist is able to calculate how long ago that plant or animal died. This method of determining the age of artifacts is called radiocarbon dating (or simply carbon dating) and has contributed greatly to advancements in archeology, anthropology, and paleontology.16
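  Radiocarbon dating simply runs this logic in reverse: given how far the carbon-14-to-carbon-12 ratio has fallen below that of living tissue, the elapsed time is the half-life multiplied by the number of halvings. A minimal sketch in Python:

```python
import math

CARBON14_HALF_LIFE = 5730.0  # years

def radiocarbon_age(ratio_fraction):
    """Years since death, given the sample's C-14/C-12 ratio expressed
    as a fraction of the ratio found in living tissue.

    Inverts the decay law: fraction = 0.5 ** (age / half_life),
    so age = half_life * log2(1 / fraction)."""
    return CARBON14_HALF_LIFE * math.log2(1.0 / ratio_fraction)

# A sample retaining half the living ratio died one half-life ago;
# a quarter of the living ratio means two half-lives.
age_half = radiocarbon_age(0.5)      # 5,730 years
age_quarter = radiocarbon_age(0.25)  # 11,460 years
```

  In practice the measured ratio carries uncertainty, so reported radiocarbon dates come with error ranges; the sketch shows only the core arithmetic.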

  The only thing that we need to remember about half-lives for health-related purposes is that all radioisotopes have their own unique half-lives, and the shorter they are, the more radioactive they are. Some highly radioactive elements have half-lives on the order of minutes or seconds and, therefore, do not survive long enough to have any significant impact on our environmental radiation exposure levels. In contrast, others have half-lives so long (e.g., tens of thousands of years) that they, too, contribute little radiation to the environment. But those with intermediate half-lives persist long enough to contribute to our environmental radiation burden. We will talk more about these environmentally significant radioisotopes later. But first, we should consider the stories of the greatest radioactivity hunters of all time, and what they discovered about radioactivity.

  THE FRENCH TRIFECTA: BECQUEREL AND THE CURIES

  As mentioned, Becquerel had to share his Nobel Prize in 1903 with two other French scientists who ended up being even more famous—Marie Curie (1867–1934) and Pierre Curie (1859–1906). This husband and wife scientific team contributed mightily to the characterization of radioactivity. In fact, they were the ones who introduced the term “radioactive,” and they ended up going far beyond Becquerel with their studies.

  The Curies also realized something that Becquerel had overlooked. They realized that uranium ore—the crude material that contained elemental uranium—had more radioactivity in it than could be accounted for by its uranium content alone. And they thought they knew the reason. They correctly surmised that uranium ore contained other radioactive elements even more radioactive than uranium.

  Starting with a couple of tons of a tarry substance called pitchblende, the major mineral component of uranium ore, the Curies ultimately purified just 0.1 gram (about one-third of an aspirin tablet) of radium. The whole process involved the use of a radioactivity meter that Pierre designed, and segregating the nonradioactive components from the radioactive ones through various chemical processes.

  The Curies ultimately showed that uranium ore actually contains at least three radioactive elements. In addition to the known uranium, there were also two previously unknown elements. One they called polonium, in honor of Marie’s native Poland,17 which was being subjugated by the Russian Empire at the time; they called the other radium, a name derived from the Latin word for “ray.”

  What the Curies accomplished was the result of a Herculean effort on their part. Unlike Roentgen and Becquerel, they didn’t expose a few photographic films and wait for their Nobel Prizes to arrive. The Curies earned their awards through hard physical labor. They purified new radioactive elements from a mountain of rock.18

  The most distinguishing thing about the scientific contribution made by the Curies, as opposed to Becquerel’s, was that the former had actually discovered two previously unknown elements, polonium and radium, that both had radioactive properties. Becquerel simply discovered that an already known element, uranium, was radioactive. Since all the elements Becquerel tested were from his stash of known fluorescent elements and compounds, it was impossible for him to discover a completely new element. The Curies, however, traced the radioactivity in raw ore to its source by purifying it away from the nonradioactive minerals, and ended up adding two new elements to the periodic table of elements.19 So theirs was both a physical and chemical achievement. While the physicists continued to work on the mechanisms of radioactive decay, the chemists now had a couple of new elements to study, with their own novel chemistries.

  What the chemists soon learned about radium was that it fit into the second column of the periodic table, the so-called alkaline earth metals. This meant that radium shared chemical properties with another element in that column, calcium, which happens to be the major constituent of bone. The implications of this for human health would prove to be immense, but at that time little attention was paid to it, not even by the Curies, who worked with high levels of radium on a daily basis,20 and thus had the most to lose. A premature death in a horse cart accident, in 1906, would spare Pierre the worst health consequences of his work. Marie, however, would keep working for nearly three more decades until the radiation got the best of her, as we shall see.

  CUTTING THE PIONEERS SOME SLACK

  The straightforward interpretation of the discoveries surrounding radiation and radioactivity, as explained here, is enriched by the benefit of modern hindsight. Although our current understanding of the nature of radioactive decay is enlightened by our knowledge of the structure of the nucleus, these early radiation pioneers had no such information through which to interpret their own findings and discoveries. This tormented them and forced them toward explanations that even they themselves knew were seriously lacking. For example, Becquerel clung for some time to the idea that radioactivity represented some long-lived fluorescence that released energy from much earlier exposure of the radioactive material to light. Marie Curie proposed that heavy elements (e.g., uranium, polonium, and radium) could absorb background levels of x-rays in our environment and release them later as radioactivity, akin to an “invisible fluorescence” process produced by x-rays rather than visible light. Even Crookes, the father of the Crookes tube, promoted his own theory in which radioactive elements extracted kinetic energy from air molecules and then released it all at once in a radioactive decay event. (This idea was particularly easy to kill since it was quickly shown that radioactive elements displayed the same level of radioactivity both in and out of a vacuum.) The issue that haunted all these scientists, and caused them to doubt what their own eyes had seen, was the embarrassing problem of explaining where the energy released by radioactive elements came from. They all well knew, or at least thought they knew, that energy could neither be created nor destroyed. It could only be moved around.21

  But one can’t be expected to interpret new discoveries in the context of later discoveries. So the pioneers should be forgiven if they really didn’t understand their own discoveries. Besides, one of the pioneers even publicly owned up to it. Marconi, in his acceptance speech for the 1909 Nobel Prize received for his work with radio waves, freely admitted, with some embarrassment, that he had no idea how he was able to transmit radio waves across the entire Atlantic Ocean. The fact that he had even tried was a testimony to his ignorance. Classical physics had predicted it should not have been possible because electromagnetic waves traveled in straight lines, so their transmission distance should have been limited to less than 100 miles for a 1,000-foot-tall radio tower,22 due to the curvature of Earth.23 He told his audience in humble understatement, “Many facts connected with the transmission of electric waves over great distances await explanation.”24 It seems that, despite his apparent scientific ignorance, Marconi’s greatest genius was that he did not take the scientific dogma of the moment too seriously.25 He understood better than most that all dogma is ephemeral.

  In Marconi’s case, it turned out that radio waves can actually skip around the globe by reflecting off an inner layer of the upper atmosphere.26 This reflective layer is a stratum of ionized gas, unknown in Marconi’s day, that was discovered later by Oliver Heaviside (1850–1925), an electrical engineer and physicist.27 Heaviside had come to the rescue of Marconi. Similarly, the radioactivity pioneers would soon have their own knight in shining armor who would help them make sense of all that they had found. In fact, he would be a real knight—Sir Ernest Rutherford—and he would use his sword to cut open the atomic nucleus and reveal its contents to all who wished to see. And many did.

  CHAPTER 4

  SPLITTING HAIRS: ATOMIC PARTICLES AND NUCLEAR FISSION

  Nothing exists except atoms and empty space; everything else is opinion.

  —Democritus, fifth century BC

  PLUM PUDDING AND THE CONSERVATION OF CHARGE

  In 1904, Joseph John (J. J.) Thomson (1856–1940) unveiled a model of the atom in which electrons were described as being negatively charged plums floating around in a pudding of positive charge. The British love their plum pudding, so the image of all physical matter being an assembly of little plum puddings, as proposed by this British scientist, appealed to both their senses and their national pride. But it was an image not easily swallowed by everyone, even within Britain, and it would soon be shown to be wrong.1

  Nevertheless, Thomson was no fool. He had actually discovered the electron, and in 1906 he would be awarded the Nobel Prize for his work in the electrical conductivity of gases.2 So, he had a mind to be reckoned with, and that was the way Thomson’s mind envisioned the structure of the atom. He visualized the atom as simply a little ball of goop with electrons floating inside. It was a model that had served him well and allowed him to make his discoveries. But, by the end of the decade, the pudding was getting stale. Other scientists soon appreciated that they had gotten all they could out of the plum pudding model. They began to search for a new and better model of the atom. By 1910, physicists were beginning to understand that, rather than being pudding, an atom was mostly just empty space. At its center was a very small, positively charged nucleus, and flying around that nucleus were even smaller negatively charged electrons.

  We have since learned that the nucleus is incredibly small, even when compared to the dimensions of the atom itself. Consider this: If an atom were the size of a major league baseball stadium, with its center at the pitcher’s mound, the nucleus would be the size of the baseball in the pitcher’s hand. And the atom’s outermost electrons, each the size of a grain of sand, would be randomly moving around from seat to seat somewhere in the upper decks. All the rest of the stadium would just be empty space.3

  THE IMPORTANCE OF BEING ERNEST

  Eventually scientists learned that the nucleus of an atom is made up of a mixture of protons and neutrons in varying numbers, but this understanding of the nucleus’s architecture was hard won. And it was won mostly through the efforts of Ernest Rutherford (1871–1937), one of Thomson’s former students.4

  Rutherford was born and raised on a farm in New Zealand. He was more comfortable hunting pigeons and digging potatoes on his family’s farm than hobnobbing with intellectuals.5 Nevertheless, he was brilliant, and his family struggled to provide him with a first-rate scientific education. But there were few opportunities for the expression of his brilliance in New Zealand, and eventually he found himself at Cambridge University in England, in the laboratory of J. J. Thomson. At Cambridge he encountered some prejudice and belittlement because of his provincial roots. But messing with this muscular farmer was not without risk. In a letter home complaining of disparaging treatment from graduate teaching assistants, he wrote, “I’d like to do a Maori war-dance on the chest of one, and will do that in the future, if things don’t mend.”6 Things mended.

  FIGURE 4.1. ERNEST RUTHERFORD. The brilliant young Rutherford would become the father of nuclear physics. Fascinated by Antoine Henri Becquerel’s discovery of radioactivity, he picked up where Becquerel left off, pioneering the use of particulate radiations to probe the structure of the atomic nucleus. He was even able to show that when an atom radioactively decays it changes from one element into another, something that scientists had previously thought impossible. (Source: Photograph courtesy of Professor John Campbell)

  Initially, Rutherford was fascinated by radio waves, just as Marconi was, and delighted in demonstrating the bell-ringing tricks of Édouard Branly to his friends and roommates. From half a mile away, he was able to ring a bell in his living room, to everyone’s astonishment.7 But when Becquerel discovered radioactivity in 1896, Rutherford’s interests turned to radioactivity.

  Rutherford decided to move to McGill University in Canada to start his professional academic career and to focus his research specifically on radioactivity. McGill was a good choice. John Cox, the physics professor and x-ray researcher who had performed the first diagnostic x-ray on gunshot victim Toulson Cunning, would be working in the same research group as Rutherford. McGill also had Frederick Soddy (1877–1956), a brilliant chemistry professor who was as interested in radioactivity as Rutherford was.8 He and Rutherford would soon become close research collaborators.

  It was Rutherford who discovered that all radioisotopes have distinct half-lives.9 He also determined that radioactive decay could involve the change of an atom from one element into another (e.g., C-14 to N-14; see chapter 3) in a process he called nuclear transmutation. Rutherford used the word with some trepidation because he was well aware that the term was previously associated with the discredited alchemists—the medieval practitioners who sought to transmute lead into gold.10 Yet, that is exactly what was happening with radioactive decay. One element was transmuting into another!

  All of this work with radioactivity earned Rutherford the Nobel Prize in Chemistry in 1908. Nevertheless, his best work was yet to come. He would go on to describe the structure of the atom’s nucleus and propose a new model of the atom, now known as the Rutherford model, which would replace plum pudding and survive in substance, if not detail, to the present day. And although he never was awarded another Nobel Prize himself, he would mentor other scientists who would earn their own Nobel Prizes, in no small part due to his guidance.11

  In 1919, Rutherford succeeded his old professor, J. J. Thomson, as head of the Cavendish Laboratory, which was essentially the physics department of Cambridge University. First opened as a physics teaching laboratory in 1874, the Cavendish became the intellectual home of some of the greatest physics minds of all time. Even James Clerk Maxwell, author of the equations that had predicted the existence of radio waves, had also once headed the Cavendish.12 The Cavendish scientists took a special interest in anything to do with radiation and radioactivity, and much of what we know about radiation today can trace its roots to research first done at that laboratory.13

  The first known radioisotopes (uranium, polonium, and radium) all emitted a relatively large type of particle of unknown nature. Rutherford discovered that these large particles were essentially the same as a helium atom’s nucleus (i.e., a helium atom devoid of its electrons) traveling at very high speed. When they ultimately slowed to a stop, they picked up some electrons from the environment and formed helium gas, the lighter-than-air gas that makes party balloons float.14 He named these large particles alpha particles, to distinguish them from much smaller beta particles, which he also discovered and named. (A beta particle, as we’ve seen, is simply a high-speed electron that is ejected from the nucleus when an atom decays.)

 
