Zapped


by Bob Berman


  Now to the cancer question: as of 2016, there are more than seven thousand major studies of RF “radiation” in the medical literature. As the Times reported, the very largest studies have failed to detect an association between cell-phone use and brain tumors or other cancers.

  The largest investigation is the Interphone Study, which involved thirteen countries, including Canada, the United Kingdom, Denmark, and Japan. Researchers questioned more than seven thousand people who had been diagnosed with a brain tumor as well as a control group of fourteen thousand healthy people about their previous cell-phone use. The study found no association between cell-phone use and glioma (cancerous brain tumor) rates except in the group of participants who reported using their cell phone for at least 1,640 hours in their lifetimes without a headset. Those participants were 40 percent more likely than those who never used a cell phone to have a glioma. Since this finding contradicted other studies that uncovered no increased cancer risk, the Interphone Study authors speculated that people with brain tumors, looking for an explanation for the tragic disease that had befallen them, might be more likely than healthy people to exaggerate their cell-phone use.

  Also reassuring are the results of studies involving workers whose occupations expose them to more than a thousand times more RF energy than the rest of us get. These lab technicians, cell-phone-tower maintenance workers, radar technicians, and others show no increased cancer rate whatsoever.

  Still, some research leaves the door open to doubt. The results of ongoing studies, in progress since 2013, that expose animals to various microwave intensities have so far been generally reassuring, but in 2016, a study conducted on rats exposed to high levels of cell-phone-type rays for more than two years, starting before birth, found a 2 percent rate of brain cancer—but only in males, not in females. Oddly, none of the control-group rats developed tumors, though the usual rate would have been 2 percent. In other words, if it weren’t for an abnormally low cancer incidence in the control group, the cancer rate for the exposed rats and the unexposed rats would have been the same, and microwaves would have been given a clean bill of health. The whole thing was puzzling—so puzzling that most researchers do not accept the results, although some do.

  One Danish study found that brain-tumor incidence increased among the segment of the population that used cell phones the most hours per day. It didn’t help when the giant British insurance company Lloyd’s, in 2014, announced that it would no longer sell insurance against health effects from microwaves. Many started wondering whether they were damaging themselves and their children by permitting unrestricted cell-phone usage.

  These puzzling outlier studies need to be acknowledged, and we need to continue research into cell-phone radiation. But the fact remains that there has been no convincing evidence to date that cell-phone use increases the risk of cancer. So why the IARC 2B classification of microwaves as a “possible carcinogen”? Well, context is important. After all, the WHO classifies coffee in the 2B category, despite some investigative organizations, such as Consumers Union, saying that coffee is actually healthful. If, after twenty years and seven thousand studies, researchers had instead found evidence that microwaves are “probably” carcinogenic, that would have earned them a 2A classification, still lower than a class 1 “definite” cancer-causing rating. In other words, the 2B designation indicates that any effect must be very subtle. Indeed, the way things look now, the worst we might eventually find out about microwaves in terms of carcinogenesis is a tiny effect along the lines of one case per several million users—a hazard that would probably not inspire anyone to change his or her habits. Meanwhile, cell-phone-signal emissions have been steadily decreasing since 2005 as the technology has improved, rendering obsolete any findings from studies conducted before then.

  As the American Cancer Society wisely points out, the fact that most studies so far have not found a link between cell-phone use and the development of tumors is unlikely to end the controversy and put us completely at ease—nor should it. These studies suffer from a number of limitations, which the ACS lays out: “First, studies have not yet been able to follow people for very long periods of time. When tumors form after a known cancer-causing exposure, it often takes decades for them to develop. Because cell phones have been in widespread use for only about twenty years in most countries, it is not possible to rule out future health effects that have not yet appeared. Second, cell phone usage is constantly changing. People are using their cell phones much more than they were even ten years ago, and the phones themselves are very different from what was used in the past. This makes it hard to know if the results of studies looking at cell phone use in years past would still apply today.”

  Frequency, intensity, and duration of exposure can affect the response to radio-frequency radiation (RFR), and these factors can interact with one another and produce various effects. In addition, in order to understand the biological consequence of RFR exposure, one must know whether the effect is cumulative or whether compensatory responses result. In short, the issue of whether there is any adverse biological effect from the entire radio-frequency band (which includes TV and radio towers and not merely cell-phone microwaves) is complex. Major study results will be announced between 2017 and 2020, so the last word about microwave safety is still to come as of this writing.

  While we wait for that last word, why not do what we can to minimize our exposure? Like all electromagnetic radiation, both visible and invisible, RFR intensity falls off inversely with the square of distance. This means that if you step twice as far away from a lightbulb as you were in the first place, it will appear 2², or four, times dimmer. Or if you spend your days wondering how bright the sun appears from Saturn, which is around ten times farther away from it than Earth is, simply calculate the square of ten. Thus from the ringed planet, the sun appears one hundred times dimmer than it does from Brooklyn. Quick and easy.

  Similarly, if instead of holding your cell phone tightly against the side of your head, a mere inch from your brain, you put it on speakerphone or use a headset so that the phone and its antenna are now in a pocket twelve inches from your brain, you have reduced your brain’s incoming microwave intensity by 12², or a factor of 144. Or you could just join the under-twenty-five generation and switch to texting rather than talking. That way, you eliminate any hazard.
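  To make the arithmetic above concrete, here is the inverse-square relationship written out as a worked equation (the one-inch and twelve-inch figures are the rough distances used in the example, not precise measurements):

$$ I \propto \frac{1}{d^{2}}, \qquad \frac{I_{\text{far}}}{I_{\text{near}}} = \left(\frac{d_{\text{near}}}{d_{\text{far}}}\right)^{2} $$

$$ \text{lightbulb: } \left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4}, \qquad \text{Saturn: } \left(\tfrac{1}{10}\right)^{2} = \tfrac{1}{100}, \qquad \text{phone: } \left(\tfrac{1\ \text{inch}}{12\ \text{inches}}\right)^{2} = \tfrac{1}{144}. $$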

  CHAPTER 19

  Cosmic Rays

  When the twenty-four Apollo astronauts—in groups of three, with three “repeats”—sped outward from Earth between 1968 and 1972, they experienced something no other humans have been subjected to before or since. They ventured beyond our planet’s magnetosphere—its protective magnetic field.

  The results were unexpected and bizarre. Each man saw something that resembled a streaking meteor cross his field of vision around once a minute. At first the astronauts kept this disquieting development to themselves. Nearly all of them were military pilots, and long experience made it an unwritten rule that no pilot ever reveals to any physician that anything is medically wrong with him. Especially something that might be construed as mental in origin.

  But a few were close enough to their fellow astronauts to confide in them about what was happening. In this manner they came to realize that the streaking-meteor phenomenon was befalling them all. Then it was safe to report it to mission control.

  NASA physicians had an immediate theory that was later verified. Powerful cosmic rays were zooming through the astronauts’ eyeballs. Traveling beyond both Earth’s atmosphere and its magnetosphere meant that those high-speed intruders from deep space had nothing to block them. Each “ray” was ripping a path through the cerebral cortex, exciting and no doubt damaging neurons and triggering the streaks.

  It was a twenty-nine-year-old Austrian physicist named Victor Hess who discovered cosmic rays. Born in Austria in June of 1883, Hess earned his PhD from the University of Graz in 1906. He first decided to study optics under famed physicist Paul Drude, the man who gave us the symbol c for the speed of light. Tragically and inexplicably, Drude committed suicide a few weeks before Hess was due to arrive.

  Hess instead accepted a teaching position at the University of Vienna. The discovery of radium by the Curies in 1898 had created a global sensation, and Hess began a serious study of that hottest issue in physics. Working as an assistant at the Institute for Radium Research at the Austrian Academy of Sciences, he became fascinated by a curious phenomenon: electrical charges were regularly detected inside electroscopes even when no radioactive elements were nearby, no matter how well those containers were insulated. The accepted explanation at the time was that earthly minerals such as quartz and granite emitted periodic radiation that caused such readings. If this were so, then the number of charges inside the device should diminish as one raised the electroscope farther off the ground.

  There were good hard-core physics reasons why this should be so. The intensity of light or any other electromagnetic radiation, as we saw in chapter 18, is inversely proportional to the square of the distance from the source. So if a radioactive bit of radium is twice as far away as it had been when it was last observed, you’d receive only one-quarter of its energy in the same time period. A widely accepted scientific paper spelled it out: assuming an even distribution of radioactive rocks on the earth’s surface, at an elevation of ten meters, or around three stories, the measured radiation should fall to 83 percent of its value on the ground. A height of ten stories should reduce it to 36 percent, and at a height of one thousand meters, or around three thousand feet, only 0.1 percent of the initial value should remain.

  So what accounted for the charges that Hess observed inside the electroscope? The answer lay in research that was just coming to light at the time. A few scientists were finding that radiation did not necessarily diminish with distance from the ground. For example, in 1910, Theodor Wulf took electroscope readings at both the bottom and top of the Eiffel Tower and found that there was far more ionization—i.e., radiation—at three hundred meters (the top) than one would expect if this effect were solely attributable to ground radiation.

  Could a major source of the ionization in Hess’s electroscopes be the sky rather than the ground? Hess first calculated that at a height of just 1,500 feet, enough insulating air should intervene to prevent any ground-based radiation from being detected. Then he mounted his instruments in a balloon, climbed in after them, and took a series of ionization measurements in ten ascents over three years, starting in 1911. He got the same results each time. Radiation activity first diminished as his balloon ascended but then started to rapidly rise. At an altitude of seventeen thousand feet, or just over three miles, the readings were always at least twice as great as they were at the surface. In a published scientific paper, Hess announced that “a radiation of very high penetrating power enters our atmosphere from above.”

  Hess was no coward. He conducted a perilous flight at night to eliminate the sun as a cause of the radiation. Sure enough, his readings remained just as strong after nightfall. He also went up on April 17, 1912, during a near-total solar eclipse, when most of the sun’s energy was blocked by the moon. Again the radiation intensity did not decrease.

  If the radiation Hess was picking up wasn’t coming from the sun or from earthly rocks, it must be coming from deep space. At a major 1913 science convention, the outer-space origin of these rays was generally accepted, but they were believed to be gamma rays. More than a decade later, Hess’s findings were confirmed by Robert Millikan, who dubbed the mysterious radiation “cosmic rays.” In 1936, Hess won the Nobel Prize in Physics, an acknowledgment of his discovery.*

  Turns out that Millikan was wrong to assume that these high-energy rays were a form of invisible light. In 1927, investigators began finding evidence that cosmic-ray intensity varied with one’s distance from the equator. This wouldn’t make sense if the rays were a form of light. But it would be reasonable if they were being deflected by our planet’s magnetic field. Thus they must be some kind of charged particle and not any kind of photon or ray. Nonetheless the word ray tenaciously stuck and remains in common use today.

  Then in 1930, an even weirder phenomenon emerged. Scientists started seeing a difference in intensity between cosmic rays arriving in our atmosphere from an easterly direction and those streaming in from the west. This “east-west effect” indicated not only that cosmic-ray particles carry a charge but also that the charge must be positive, since the earth’s magnetic field would deflect a negatively charged particle in the opposite direction. It all meant that cosmic rays are mostly protons, or hydrogen nuclei, whose charges are positive.
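  For readers who want the deflection argument spelled out, it rests on the standard magnetic-force relation (a textbook formula, added here for clarity rather than taken from the original account):

$$ \vec{F} = q\,\vec{v} \times \vec{B} $$

Reversing the sign of the charge q reverses the direction of the force, so negative particles would be bent the opposite way as they travel through the earth’s field; an east-west asymmetry in arrival directions therefore reveals the sign of the charge.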

  By the end of World War II, researchers had more or less determined the true composition of cosmic rays. It wasn’t pretty. Ninety percent of them are indeed simply protons—the nucleus of every hydrogen atom. Slightly less than 9 percent are alpha particles, which, as we’ll recall, are helium nuclei, meaning a hefty glob containing two protons and two neutrons. One percent are ordinary electrons, or beta particles. And a ragtag assortment of other stuff, including antimatter, makes up nearly another percent.

  We’ll get to antimatter later. For now, suffice it to say that the cosmic-ray ingredient list made no sense back then and remains just as bewildering today. True, protons and electrons are both expelled by the sun. But the superhigh energies of many cosmic rays rule out the sun as a likely source. So right from the beginning, astrophysicists assumed correctly that most come from supernova explosions far, far away. More recently, other violent celestial events, such as the explosions at the cores of some galaxies and the collapse of black holes, have also been pinned down as cosmic-ray sources. But supernovas and their remnants, such as the famous Crab Nebula, in Taurus, are the main cosmic-ray sources.

  Why their proton-heavy composition? There are just as many electrons in the universe as protons. Shouldn’t electrons be hurled outward by supernovas, too? What happened to them? Worse, a small percentage of cosmic rays are ultrafast, with unbelievable energies. Some of these subatomic bullets can deliver the same wallop as a baseball hitting your head at fifty-six miles per hour. Imagine: a single particle far smaller than an atom smashing anything it encounters with palpable impact. What would that do to one of your DNA strands? What about the particles’ overall effect on human health?

  If we broke down the radiation each of us is exposed to every year, we’d find that half of it does come from the ground—at least for those whose homes have radon leaking in from cracks in the basement. A small percentage emanates from within our own bodies, from radioactive carbon-14 and the potassium-40 in foods such as bananas. But around one-tenth of the radiation we receive penetrates us from above—and consists of these cosmic rays. As Hess discovered, they grow more intense the higher up we go. So those who live in high-altitude cities such as Denver get fully one-quarter of their annual radiation from cosmic rays alone.

  It’s even worse for those who spend their days higher than Mount Everest—career pilots and flight-crew members. Thanks to cosmic rays, they get twice as much radiation as people in other professions do, which translates into a 1 percent higher cancer rate than the general population. For the rest of us, to exceed the radiation limit established by the US government for nuclear power plant workers, you’d have to fly more than eighty-five thousand miles a year.

  Our exposure to cosmic rays depends not only on location but also on timing. Much of the cosmic radiation streaming in from deep space is ordinarily deflected by a boundary at the edge of the solar system, a shock wave where the outrushing solar wind goes from supersonic to subsonic. But the sun’s power varies with its eleven-year solar cycle, during which its storms and subatomic emissions, called the solar wind, alternately grow more intense and then less so. In years when the sun is wimpy, this termination shock zone becomes weaker and its protective barrier much more porous. That’s when deep-space cosmic rays stream to Earth with far greater intensity. At such times, and during particularly powerful solar flares, when the sun itself cranks up its cosmic-ray intensity, jetliners flying polar routes are bombarded with extra radiation.

  During those times, if you have a camcorder and look at its black screen while you’re in a dark closet, you’ll see flashes as cosmic rays strike the camera’s CCD chip. At such times you can feel lucky you’re not an astronaut—or a future Martian colonist—in a flimsy spacecraft outside Earth’s atmosphere and magnetosphere. You’d be particularly vulnerable during the half-year-long trip to Mars, when a severe cosmic-ray bombardment could produce truly hazardous radiation levels. You’d see more than mere streaks across your visual field. You might then be silently condemned to a short life span.

  By the late 1940s and early 1950s, cosmic rays were deemed a serious hazard to any living organism that ventured beyond our atmosphere. What did that mean for human space flight? To find out, the US Air Force, in a project led by Captain David Simons working under Colonel James Henry, first sent organisms such as fruit flies and mice, and then primates, to the upper regions of the atmosphere in captured German V-2 rockets.

  The first monkey flight was set for June 18, 1948, but the animal suffocated in the capsule before it had even left the ground. A year later, another monkey was sent aloft in a better-ventilated capsule. Sadly, this time the parachute failed, and so the creature was just as dead as its predecessor.

 
