Zapped

by Bob Berman


  On August 12, 1896, Electrical Review reported that one Dr. H. D. Hawks had given a demonstration with a powerful new X-ray machine. After four days he noticed a drying of the skin, which he ignored. His right hand began to swell and gave the appearance of a deep skin burn. After two weeks the skin came off the hand, the knuckles became very sore, fingernail growth stopped, and the hair on the area exposed to the X-rays fell out. His eyes were bloodshot, and his vision became considerably impaired. His chest was also burned. Dr. Hawks’s physician treated this as a case of dermatitis. Hawks tried protecting his hands with petroleum jelly, then gloves, and finally by covering them with tinfoil. Within six weeks Hawks was partially recovered and was making light of his injuries. Electrical Review concluded by asking to hear from any of its readers who had had similar experiences.

  September brought a more troublesome account. A man named William Levy had been shot in the head by an escaping bank robber ten years previously. The bullet had entered his skull just above the left ear and presumably proceeded toward the back of the head. Having heard about X-rays, he decided he wanted the bullet located and extracted. Levy approached Professor Frederick S. Jones of the Physical Laboratory at the University of Minnesota. Cautious by nature, Professor Jones warned Levy against the exposure, but Levy was undeterred, and an X-ray was taken on July 8, 1896. The X-ray tube was placed over his forehead, in front of his open mouth, and behind his right ear. Levy sat through the exposures from eight o’clock in the morning until ten o’clock at night. Within twenty-four hours his entire head was blistered, and within a few days his lips were badly swollen, cracked, and bleeding. His right ear had doubled in size, and the hair on his right side had entirely fallen out. Professor Jones concluded that the one feature that was satisfactory to the patient was that a good picture of the bullet was obtained, showing it to be about an inch beneath the skull under the occipital protuberance.

  Even more horrific were the side effects experienced by a glassblower at Thomas Edison’s Menlo Park laboratory, Clarence Madison Dally, who insisted on certifying each Crookes tube he’d created. He tested the tubes’ radiation output by placing his hands directly in the beam, turned to full power. Over the course of several months in 1896, Dally severely burned his hands—yet he continued that practice for another two years. In 1902 his right arm was amputated at the shoulder to arrest the spread of skin cancer, and two years later his left arm was amputated for the same reason. Clarence Dally died in October of 1904 at the age of thirty-nine—probably the first casualty of X-rays. His death horrified Thomas Edison and prompted him to discontinue X-ray research in his laboratory. Indeed, Edison was so appalled that he became afraid of X-rays. Later, when a dentist insisted on taking an X-ray to find the source of a persistent toothache, Edison asked the dentist to pull the tooth outright rather than subject him to the diagnostic rays.

  By late in 1896, despite these troubling reports, the consensus among physicians was that X-rays were safe and that in cases of adverse effects the equipment had been improperly used. No one guessed that the deadliest X-ray perils take years to surface. For example, Friedrich Otto Walkhoff and Fritz Giesel established the world’s first dental X-ray—or roentgenological—laboratory in 1896 and provided practitioners with images of the jaw and head for years to come. It took more than thirty years—until 1927—for Fritz Giesel to die of metastatic carcinoma, presumably caused by heavy radiation exposure to his hands.

  So it was that those early warnings went unheeded. And to be fair to the profession, many medical reports were reassuring. For example, in Boston in 1897, a Dr. Williams reported that he had examined approximately 250 patients with X-rays and had not seen any harmful effects at all. In 1902, in a major Philadelphia medical journal, Dr. E. A. Codman conscientiously reviewed all papers on X-ray injuries. Of the eighty-eight X-ray injuries reported, fifty-five had occurred in 1896, twelve in 1897, six in 1898, nine in 1899, three in 1900, and one in 1901. Thus it seemed as if the practice of radiology was being steadily perfected and the risk dramatically declining. In actuality the decline may have simply been caused by the fact that neither X-rays nor the injuries they caused were novel or newsworthy as time went on and that therefore they went unreported.

  It took more than a quarter century, during which time so many people had fallen victim to the misuse of X-rays that the dangers simply could no longer be ignored, before a book finally appeared to sound the alarm. American Martyrs to Science Through the Roentgen Rays was written in 1930 by the Boston radiologist Dr. Percy Brown, who himself died of cancer twenty years later, presumably from overexposure to X-rays.

  Since then, the news has been a mixed bag. Radiology itself has become more cautious; these days it would be rare for a radiologist to be present in the room while X-rays are taken. On the other hand, CT scans, which deliver far more radiation than simple X-rays, have proliferated and have earned cautionary warnings from medical professionals who fear that their overuse is creating its own crop of future cancers.

  But back in 1896, though no one suspected it, an even deadlier form of invisible energy was about to be discovered. This proved to be the ultimate: the most powerful kind of unseen light as well as the most mysterious. It provided the greatest amount of deadly radiation ever known, causing more than one hundred thousand deaths in a single week less than half a century later.

  But before we open that story, we must look at exactly what radiation is and how it is quantified. In our modern times, this vital information is widely misunderstood.

  CHAPTER 15

  What’s in Your Basement?

  Did you know that a single whole-body CT scan often delivers more radiation than was received by Hiroshima survivors a mile from ground zero? Or that living across the street from a nuclear power plant for a full year gives you less radiation than eating a single banana? (That fruit contains a tiny bit of radioactive potassium-40, the main source of radioactivity in our bodies. It has a half-life of 1.25 billion years, so you might as well learn to like it.)

  What is radiation? And how much of it is too much? For the vast majority of us, this issue is utterly bewildering. In our exploration of unseen rays and invisible hazards, we need a serious “time-out” to understand radiation.

  Few terms are more misunderstood. Since the middle of the nineteenth century and the work of Faraday and Maxwell, all forms of light have been termed electromagnetic radiation. By this definition, a candle emits radiation, as do a night-light and the moon. Of course, such radiation is totally harmless.

  By the final years of the nineteenth century, physicists began to discover invisible emanations in their labs and, as it turned out, throughout nature. Radium darkened photographic paper just the way X-rays did. This, too, was radiation, but an unknown kind. Was radium emitting tiny particles smaller than atoms? Or, instead, was it emitting unknown varieties of light rays? Whatever its nature, scientists wondered whether it was harmful or benign. Could it even be salutary? No one knew.

  Soon all invisible emissions—whether particles or rays—that proved capable of affecting the body were labeled radiation, even though everyone knew that the word would also continue to be applied to harmless emissions such as those from starlight and fireflies. In short, the word radiation came to have at least two different meanings—which, confusingly enough, is still the case today.

  So in this chapter, from this point on, unless I say “electromagnetic radiation,” which just means some form of light, visible or invisible, my use of the word radiation will follow the most common usage and refer only to potentially dangerous emissions. Radiation can take the form of a submicroscopically tiny, high-speed, bulletlike piece of an atom, such as a proton, or a bit of light whose waves are so short that they can damage atoms they hit and thus induce cellular changes in living organisms.

  Yet even that isn’t the end of it. Electricity can kill you, yet nobody thinks of electricity as a form of radiation. No, to qualify as radiation, an emission must have the ability to fly through space or the atmosphere and not merely be transmissible through wires and such. Got all that?

  Let’s try this again: for the rest of this chapter, I’ll call radiation anything invisibly tiny that doesn’t require another substance to travel through but instead flies at superhigh speeds from point A to point B and can penetrate living tissue to affect animals and people. In discussions of biological damage, the word radiation means particles or energies that can alter atoms and therefore a cell’s genes, causing birth defects and cancer.

  As we’ve seen, radiation can mean tiny solid particles or it can constitute waves. Long waves can’t damage atoms. That’s why visible light, radio waves, and even microwaves cannot possibly injure genes and cause cancer. Living next to a cell-phone tower can jiggle entire atoms in your body so that they heat up tissue, but it can’t actually break those atoms or cause tumors to form. (It still might not be good for you! And we’ll return to microwaves in a later chapter.)

  By contrast, gamma rays’ and X-rays’ short waves do break apart atoms, and UV radiation can, too. This is ionizing radiation, the bad kind, which can sabotage genes. Heavy, fast-moving subatomic particles such as neutrons can also destroy atoms, so they’re often called radiation even though they’re particles and not energy waves. (The distinction is blurry anyway, since all matter has a wavelike aspect.)

  The cause of radiation can be as simple as an atom’s electron falling inward, closer to its center, which creates a bit of energy that then flies off at high speed. Another common cause is something that befalls any atom with a large, unstable nucleus. Suddenly a piece of that nucleus can break away and fly like shrapnel through anyone and anything in the vicinity. Or that large nucleus, as it abruptly releases a fragment of itself, can emit a flash of energy, such as a gamma ray.

  There is no way to predict exactly when any of these things will happen to a particular atom. However, a specific kind of nucleus—say, carbon-14, with its six protons and eight neutrons—always has a tendency to disintegrate within a particular period of time. Depending on the atom in question, that can be a short period, meaning a fraction of a second, or it can be years, even billions of years. An atom of the most common kind of uranium, for example, is most likely to “break up” into two smaller objects after 4.5 billion years. But if you’re studying uranium atoms, it might be a frustrating wait, because the event might happen one second from now or more than 4.5 billion years from now.

  So science considers only large groups of such unstable atoms and specifies how long we expect it will take for half the sample to disintegrate. In short, the issue is a statistical one. The half-life of uranium, 4.5 billion years, means that after 4.5 billion years we expect that half the batch of uranium we’re studying will have come apart and changed into a different element (in this case, lead), accompanied by the release of particles or energy.

  What’s weird is that an atom has no “memory” of its past. It does not “age” with the passage of time. The chances of its breaking down remain the same throughout its existence, though again, each radioactive element’s nucleus has its own specific decay-probability rate.
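  That “no memory” behavior is easy to demonstrate in a simulation. The sketch below is my own illustration, not anything from the book: it gives every surviving atom the same fifty-fifty chance of decaying during each half-life, no matter how long it has already survived.

```python
import random

def simulate_decay(n_atoms, half_lives):
    """Track a batch of unstable atoms across whole half-life steps.

    Every surviving atom has the same 50 percent chance of decaying
    during any one half-life, no matter how long it has already
    survived: the "no memory" property described in the text.
    """
    survivors = n_atoms
    history = [survivors]
    for _ in range(half_lives):
        # Each surviving atom flips the same fair coin every period.
        survivors = sum(1 for _ in range(survivors) if random.random() > 0.5)
        history.append(survivors)
    return history

random.seed(1)
counts = simulate_decay(100_000, 5)
print(counts)  # each entry is roughly half the one before it
```

  Run it repeatedly and the individual counts wobble, but each step hovers near half the previous one; that statistical regularity is what makes half-lives meaningful even though no single atom’s fate can be predicted.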

  Half-Life of Selected Substances

  Oxygen-15 122.24 seconds

  Any neutron (outside an atom) 10.3 minutes

  Carbon-11 20.334 minutes

  Iodine-131 8.02 days

  Sodium-22 2.602 years

  Plutonium-238 87.7 years

  Carbon-14 5,730 years

  Uranium-238 4.468 billion years
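  The entries above plug directly into the standard remaining-fraction formula: after an elapsed time t, the surviving fraction of a sample is one-half raised to the power of t divided by the half-life. A short sketch of my own, using two values from the table:

```python
def fraction_remaining(elapsed, half_life):
    """Fraction of an initial sample still intact after `elapsed` time.

    Both arguments must use the same unit (seconds, days, years, ...).
    """
    return 0.5 ** (elapsed / half_life)

# Iodine-131 (half-life 8.02 days): fraction left after a 30-day month.
print(round(fraction_remaining(30, 8.02), 3))  # about 0.075

# Carbon-14 (half-life 5,730 years): after 11,460 years, exactly two
# half-lives, one quarter of the original sample must remain.
print(fraction_remaining(11_460, 5_730))  # 0.25
```

  This is why iodine-131 from a reactor accident is a short-term worry while carbon-14 is useful for dating remains many thousands of years old.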

  Beyond figuring out the half-life of various radioactive substances, another challenge that confronted scientists in the late nineteenth and early twentieth centuries was to determine what constitutes a harmful quantity of radiation. Early radiologists exposed themselves to immense amounts of it to figure out where the safety boundaries lay. It was brave and ultimately killed many of them, although often not for thirty years or more after first exposure.

  We’ve already seen how the earliest years of, say, X-rays brought disconcerting reports, but no widespread, consistent, hard-enough evidence surfaced to make radiologists take serious precautions. It wasn’t until the 1930s that radiation’s hazards started to become sufficiently clear—an awareness that became especially keen in the late 1940s.

  In 1946 a statistical study of obituaries conducted by Dr. Helmuth Ulrich, published in the New England Journal of Medicine, found that the leukemia rate among radiologists was eight times that of other doctors. In 1956 the National Academy of Sciences supported those findings in a report that concluded that radiologists lived 5.2 fewer years than other MDs. In 1963, a study carried out by Dr. Edward B. Lewis found a significant incidence of deaths from leukemia, multiple myeloma, and aplastic anemia among radiologists, and two years later two Johns Hopkins researchers discovered that there was a 70 percent higher incidence of cardiovascular disease and certain cancers—and 730 percent more leukemia deaths—among radiologists than in the general population.

  But back in the 1930s and even into the 1940s, the danger was still largely unknown. At that time, teenagers—including my own mother—were often treated for acne, pimples, and other routine adolescent rashes with massive X-ray beams, which would indeed dry out the skin and seemingly cure the condition. Until the mid-1950s, many shoe stores had customer-operated X-ray machines into which people inserted their feet. Then they could gaze as long as they wished at the fluoroscope screen, pondering their bones and judging how their shoes fit. A friend of mine, in his mid-eighties at the time of this writing, witnessed seventeen H-bomb explosions while working as a technician with the navy. Wearing just casual clothing, he’d remove his protective glasses and gaze in awe at the mushroom cloud that he says was never more than ten miles away. That was in 1957, at Eniwetok, in the Pacific’s Marshall Islands. More than sixty years later, he’s still healthy and active. Obviously the danger of excess radiation has a hit-or-miss component as far as genetic damage is concerned. True, the quantity of radiation matters mightily. But when you talk to friends and family about radiation, most act as if any and all forms of it are highly perilous. This widespread fear, bordering on phobia, reveals the public’s deep ignorance of the subject.

  Consider the Three Mile Island nuclear power plant accident of March 28, 1979. This worst-ever nuclear accident in the United States started with a mechanical fault, a stuck valve, and was exacerbated by an operator who misread a warning light. It all eventually led to a partial meltdown of the nuclear fuel and the creation of a hydrogen bubble that had the potential to explode and perhaps breach the containment building, which would have released significant radiation over a wide area. As it turned out, however, the total radiation released by the accident, as experienced by the two million people closest to the Pennsylvania facility, was 1.4 millirems, or 0.014 millisieverts. (Millirems and millisieverts, abbreviated as mrem and mSv, are the units we use to measure radiation.) The final report compared this with the 80 millirems per year that residents of Denver receive from living in that high-altitude city. As further comparison, a patient receives more than twice the accident’s radiation, or 3.2 millirems, from a chest X-ray.
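  The unit conversion in that passage is worth making explicit: 100 millirems equal 1 millisievert. A quick sketch of my own arithmetic check; the dose figures are the ones quoted above:

```python
def mrem_to_msv(mrem):
    """Convert millirems to millisieverts (100 mrem = 1 mSv)."""
    return mrem / 100.0

# Doses quoted in the text, in millirems.
three_mile_island = 1.4   # average dose to the two million nearest residents
denver_per_year   = 80    # annual dose from living at Denver's altitude
chest_xray        = 3.2   # a single chest X-ray

print(round(mrem_to_msv(three_mile_island), 3))   # 0.014, matching the text
print(round(denver_per_year / three_mile_island)) # 57: Denver's annual dose
                                                  # is ~57 times the accident's
```

  Framed that way, a chest X-ray delivers more than double the accident’s dose, and a year in Denver delivers dozens of times more.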

  The Three Mile Island accident resulted in no private property damage and not a single injury to anyone. But in several almanacs and handbooks of industrial accidents, that event is categorized as a “calamity” or a “catastrophe.” As far as the media is concerned, radiation accidents belong to an ultraperilous, headline-deserving class that bears no relation to actual injury or damage.

  Perhaps even more surprisingly, experts from eighteen countries, when they assessed the Fukushima nuclear “disaster” of 2011, found that not only were there no fatalities, the likeliest number of future cancer deaths from the leaked radiation was also zero. Nuclear power plant “disasters” are thus far less perilous to health than is commonly perceived.

  And yet at the same time, radiation is hazardous if you receive enough of it. Whereas some 23 percent of us will ultimately die of cancer, the rate is fully 1 percent higher for career pilots and flight crew members, thanks to the additional radiation they receive from routinely being up at high altitudes, where the cosmic-ray intensity is greater than it is on the surface of the earth. Every day, real lives are lost from radiation.

  These days, fortunately, almost no one gets exposed to radiation in lethal doses. But it can be fatal. With an exposure of 700 rems (or 700,000 millirems, or 7,000 millisieverts), most people will die within a few days after very unpleasant initial effects, such as nausea, weakness, fever, and hair loss. Barring circumstances like the Chernobyl accident, which happened at a built-on-the-cheap reactor that lacked even a containment building, such intense exposures are unlikely for anyone on our planet except those in hospital settings, where a handful of people have indeed been fatally overexposed. These unfortunates were receiving routine radiation treatment when software glitches upped the dose to lethal levels.

  One of these involved the Therac-25, a radiation-therapy machine first sold in 1982 that was produced by a company called Atomic Energy of Canada. Michael Mah, managing partner of QSM Associates, a software consultancy, told me that IT engineers still cite the Therac-25 as an example of what can go horribly wrong when safety depends solely on software.

  The device was designed to treat cancer patients with beams of radiation. It could aim a beam of electrons for low-dose therapies if the tumor was not very deep, or it could be switched to an X-ray beam for deep or high-dose radiation treatment. Typically, a focused dose of around 70 rads, equal to around 70,000 millirems, was used. But in several instances, flaws in the software caused the patient to receive one hundred times more radiation than necessary—7,000 rads, or around 7 million millirems, which is a fatal dose. Between 1985 and 1987, the machine injured six people, three of whom died. One immediately felt the ultrahigh-dose electron beam as “an intense electrical shock,” and he leaped up from the table. Screaming, he ran to the door and tried to escape.
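  The overdose arithmetic in that account is straightforward. For electron and X-ray beams, 1 rad of absorbed dose corresponds to roughly 1 rem, or 1,000 millirems; the following is my own sketch of the figures quoted above, not the machine’s software:

```python
# For electron and X-ray beams, 1 rad of absorbed dose is roughly
# 1 rem of equivalent dose, i.e. 1,000 millirems.
MREM_PER_RAD = 1_000

therapeutic_rads = 70    # a typical focused treatment dose
overdose_factor = 100    # the multiplier produced by the software fault

overdose_rads = therapeutic_rads * overdose_factor
print(overdose_rads)                 # 7000 rads
print(overdose_rads * MREM_PER_RAD)  # 7000000 millirems, a fatal dose
```

  Compare that 7,000-rad beam with the 700-rem whole-body threshold mentioned earlier and the fatalities are no surprise.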

 
