
Denialism: How Irrational Thinking Hinders Scientific Progress, Harms the Planet, and Threatens Our Lives


by Michael Specter


  On December 3, 2005, in a videotaped deposition presented under subpoena at one of the many trials following the recall, Topol argued that Vioxx posed an “extraordinary risk.” A colleague from the Cleveland Clinic, Richard Rudick, told him that Gilmartin, the Merck CEO, had become infuriated by Topol’s public attacks and had complained bitterly to the clinic’s board about the articles in the Times and the New England Journal of Medicine. “What has Merck ever done to the clinic to warrant this?” Gilmartin asked.

  Two days after that testimony, Topol received an early-morning call telling him not to attend an 8 a.m. meeting of the board of governors. “My position—chief academic officer—had been abolished. I was also removed as provost of the medical school I founded.” The clinic released a statement saying that there was no connection between Topol’s Vioxx testimony and his sudden demotion, after fifteen years, from one of medicine’s most prominent positions. A spokeswoman for the clinic called it a simple reorganization. The timing, she assured reporters, was a coincidence.

  DID THE RECALL of Vioxx, or any other single event, cause millions of Americans to question the value of science as reflexively as they had once embraced it? Of course not. Over the decades, as our knowledge of the physical world has grown, we have also endured the steady drip of doubt—about both the definition of progress and whether the pursuit of science will always drive us in the direction we want to go. A market disaster like Vioxx, whether through malice, greed, or simple error, presented denialists with a rare opportunity: their claims of conspiracy actually came true. More than that, in pursuit of profits, it seemed as if a much-admired corporation had completely ignored the interests of its customers.

  It is also true, however, that spectacular technology can backfire spectacularly—and science doesn’t always live up to our expectations. When we see something fail that we had assumed would work, whether it’s a “miracle” drug or a powerful machine, we respond with fear and anger. People often point to the atomic bomb as the most telling evidence of that phenomenon. That’s not entirely fair: however much we may regret it, the bomb did what it was invented to do.

  That wasn’t the case in 1984, when a Union Carbide pesticide factory in Bhopal, India, released forty-two tons of toxic methyl isocyanate gas into the atmosphere, exposing more than half a million people to deadly fumes. The immediate death toll was 2,259; within two weeks that number grew to more than eight thousand. Nor was it true two years later, when an explosion at Unit 4 of the V. I. Lenin Atomic Power Station transformed a place called Chernobyl into a synonym for technological disaster. They were the worst industrial accidents in history—one inflicting immense casualties and the other a worldwide sense of dread. The message was hard to misinterpret: “Our lives depend on decisions made by other people; we have no control over these decisions and usually we do not even know the people who make them,” wrote Ted Kaczynski, better known as the Unabomber, in his essay “Industrial Society and Its Future”—the Unabomber Manifesto. “Our lives depend on whether safety standards at a nuclear power plant are properly maintained; on how much pesticide is allowed to get into our food or how much pollution into our air; on how skillful (or incompetent) our doctor is. . . . The individual’s search for security is therefore frustrated, which leads to a sense of powerlessness.”

  Kaczynski’s actions were violent, inexcusable, and antithetical to the spirit of humanity he professed to revere. But who hasn’t felt that sense of powerlessness or frustration? Reaping the benefits of technology often means giving up control. That only matters, of course, when something goes wrong. Few of us know how to fix our carburetors, or understand the mechanism that permits telephone calls to bounce instantly off satellites orbiting twenty-two thousand miles above the earth only to land a split second later in somebody else’s phone on the other side of the world.

  That’s okay; we don’t need to know how they function, as long as they do. Two hundred or even fifty years ago, most people understood their material possessions—in many cases they created them. That is no longer the case. Who can explain how their computer receives its constant stream of data from the Internet? Or understand the fundamental physics of a microwave? When you swallow antibiotics, or give them to your children, do you have any idea how they work? Or how preservatives are mixed into many of the foods we eat, or why? The proportion of our surroundings that any ordinary person can explain today is minute—and it keeps getting smaller.

  This growing gap between what we do every day and what we know how to do only makes us more desperate to find an easy explanation when something goes wrong. Denialism provides a way to cope with medical mistakes like Vioxx and to explain the technological errors of Chernobyl or Bhopal. There are no reassuring safety statistics during disasters and nobody wants to hear about the tens of thousands of factories that function flawlessly, because triumphs are expected, whereas calamities are unforgettable. That’s why anyone alive on January 28, 1986, is likely to remember that clear, cold day in central Florida, when the space shuttle Challenger lifted off from the Kennedy Space Center, only to explode seventy-three seconds later, then disintegrate in a dense white plume over the Atlantic. It would be hard to overstate the impact of that accident. The space program was the signature accomplishment of American technology: it took us to the moon, helped hold back the Russians, and made millions believe there was nothing we couldn’t accomplish. Even our most compelling disaster—the Apollo 13 mission—was a successful failure, ending with the triumph of technological mastery needed to bring the astronauts safely back to earth.

  By 1986, America had become so confident in its ability to control the rockets we routinely sent into space that on that particular January morning, along with its regular crew, NASA strapped a thirty-seven-year-old high school teacher named Christa McAuliffe from Concord, New Hampshire, onto what essentially was a giant bomb. She was the first participant in the new Teacher in Space program. And the last.

  The catastrophe was examined in merciless detail at many nationally televised hearings. During the most remarkable of them, Richard Feynman stunned the nation with a simple display of show-and-tell. Feynman, a no-nonsense man and one of the twentieth century’s greatest physicists, dropped a rubber O-ring into a glass of ice water, where it quickly lost resilience and cracked. The ring, used as a flexible buffer, couldn’t take the stress of the cold, and it turned out neither could one just like it on the shuttle booster rocket that unusually icy day in January. Like so many of our technological catastrophes, this was not wholly unforeseen. “My God, Thiokol, when do you want me to launch, next April?” Lawrence Mulloy, manager of the Solid Rocket Booster Project at NASA’s Marshall Space Flight Center, complained to the manufacturer, Morton Thiokol, when engineers from the company warned him the temperature was too low to guarantee their product would function properly.

  SCIENTISTS HAVE NEVER BEEN good about explaining what they do or how they do it. Like all human beings, though, they make mistakes, and sometimes abuse their power. The most cited of those abuses are the twin studies and other atrocities carried out by Nazi doctors under the supervision of Josef Mengele. While not as purely evil (because almost nothing could be), the most notorious event in American medical history occurred not long ago: from 1932 to 1972, in what became known as the Tuskegee Experiment, U.S. Public Health Service researchers refused to treat hundreds of poor, mostly illiterate African American sharecroppers for syphilis in order to get a better understanding of the natural progression of their disease. Acts of purposeful malevolence like those have been rare; the more subtle scientific tyranny of the elite has not been.

  In 1883, Charles Darwin’s cousin Francis Galton coined the term “eugenics,” which would turn out to be one of the most heavily freighted words in the history of science. Taken from a Greek word meaning “good in birth,” eugenics, as Galton defined it, simply meant improving the stock of humanity through breeding. Galton was convinced that positive characteristics like intelligence and beauty, as well as less desirable attributes like criminality and feeblemindedness, were wholly inherited and that a society could breed for them (or get rid of them) as it would, say, a Lipizzaner stallion or a tangerine. The idea was that with proper selection of mates we could dispense with many of the ills that kill us—high blood pressure, for instance, or obesity, as well as many types of cancer.

  Galton saw this as natural selection with a twist, and felt it would provide “the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable.” Galton posed the fundamental question in his 1869 book, Hereditary Genius: would it not be “quite practicable to produce a highly gifted race of men by judicious marriages during consecutive generations?” As the Yale historian Daniel J. Kevles points out in his definitive 1985 study In the Name of Eugenics, geneticists loved the idea and eagerly attempted to put it into action. Eugenics found particular favor in the United States.

  In 1936, the American Neurological Association published a thick volume titled Eugenical Sterilization, a report issued by many of the leading doctors in the United States and funded by the Carnegie Foundation. There were chapters on who should be sterilized and who shouldn’t, and it was chock full of charts and scientific data—about who entered the New York City hospital system, for example, at what age and for what purpose. The board of the ANA noted in a preface that “the report was presented to and approved by the American Neurological Association at its annual meeting in 1935. It had such an enthusiastic reception that it was felt advisable to publish it in a more permanent form and make it available to the general public.”

  Had their first recommendation appeared in a novel, no reader would have taken it seriously: “Our knowledge of human genetics has not the precision nor amplitude which would warrant the sterilization of people who themselves are normal [italics in the original] in order to prevent the appearance, in their descendants, of manic-depressive psychosis, dementia praecox, feeblemindedness, epilepsy, criminal conduct or any of the conditions which we have had under consideration. An exception may exist in the case of normal parents of one or more children suffering from certain familial diseases, such as Tay-Sachs’ amaurotic idiocy.” Of course, for people who were not considered normal, eugenics had already arrived. Between 1907 and 1928, nearly ten thousand Americans were sterilized on the general grounds that they were feebleminded. Some lawmakers even tried to make welfare and unemployment relief contingent upon sterilization.

  Today, our knowledge of genetics has both the precision and the amplitude it lacked seventy years ago. The Nazis helped us bury thoughts of eugenics, at least for a while. The subject remains hard to contemplate—but eventually, in the world of genomics, impossible to ignore. Nobody likes to dwell on evil. Yet there has never been a worse time for myopia or forgetfulness. By forgetting the Vioxxes and Vytorins, the nuclear accidents, and our constant flirtation with eugenics, and instead speaking only of science as a vehicle for miracles, we dismiss an important aspect of who we are. We need to remember both sides of any equation or we risk acting as if no mistakes are possible, no grievances just. This is an aspect of denialism shared broadly throughout society; we tend to consider only what matters to us now, and we create expectations for all kinds of technology that are simply impossible to meet. That always makes it easier for people, already skittish about their place in a complex world, to question whether vaccines work, whether AIDS is caused by HIV, or why they ought to take prescribed pain medication instead of chondroitin or some other useless remedy recommended wholeheartedly by alternative healers throughout the nation.

  IF YOU LIVED with intractable pain, would you risk a heart attack to stop it? What chances would be acceptable? One in ten? One in ten thousand? “These questions are impossible to answer completely,” Eric Topol told me when I asked him about it one day as we walked along the beach in California. “Merck sold Vioxx in an unacceptable and unethical way. But I would be perfectly happy if it was back on the market.”

  Huh? Eric Topol endorsing Vioxx seemed to make as much sense as Alice Waters campaigning for Monsanto and genetically modified food. “I can’t stress strongly enough how deplorable this catastrophe has been,” he said. “But you have to judge risk properly and almost nobody does. For one thing, you rarely see a discussion of the effect of not having drugs available.” Risk always has a numerator and a denominator. People tend to look at only one of those numbers, though, and they are far more likely to remember the bad than the good. That’s why we can fear flying although it is hundreds of times safer than almost any other form of transportation. When a plane crashes we see it. Nobody comes on television to announce the tens of thousands of safe landings that occur throughout the world each day.

  We make similar mistakes when judging our risks of illness. Disease risks are almost invariably presented as statistics, and what does it mean to have a lifetime heart attack risk 1.75 times the average? Or four times the risk of developing a certain kind of cancer? That depends: four times the risk of developing a cancer that affects 1 percent of the population isn’t terrible news. On the other hand, a heart attack risk 75 percent greater than average, in a nation where heart attacks are epidemic, presents a real problem. Few people, however, see graphic reality in numbers. We are simply not good at processing probabilistic information. Even something as straightforward as the relationship between cigarette smoking and cancer isn’t all that straightforward. When you tell a smoker he has a 25 percent chance of dying from cancer, the natural response is to wonder, “From this cigarette? And how likely is that really?” It is genuinely hard to know, so all too often we let emotion take over, both as individuals and as a culture.
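  To put those two figures on the same footing (a rough illustration only: the 1 percent cancer prevalence comes from the passage above, while the 30 percent lifetime heart attack baseline is an assumed round number for the sake of the example, not a statistic from this book):

\[
\underbrace{4 \times 1\%}_{\text{fourfold cancer risk}} = 4\%
\qquad \text{versus} \qquad
\underbrace{1.75 \times 30\%}_{\text{heart attack, assumed baseline}} \approx 53\%.
\]

  Expressed as absolute numbers rather than multipliers, the scarier-sounding fourfold risk turns out to be the smaller worry.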

  The week in 2003 that SARS swept through Hong Kong, the territory’s vast new airport was deserted, and so were the city’s usually impassable streets. Terrified merchants sold face masks and hand sanitizer to anyone foolish enough to go out in public. SARS was a serious disease, the first easily transmitted virus to emerge in the new millennium. Still, it killed fewer than a thousand people, according to World Health Organization statistics. Nevertheless, “it has been calculated that the SARS panic cost more than $37 billion globally,” Lars Svendsen wrote in A Philosophy of Fear. “For such a sum one probably could have eradicated tuberculosis, which costs several million people’s lives every year.”

  Harm isn’t simply a philosophical concept; it can be quantified. When Merck, or any other company, withholds information that would explain why a drug might “fail,” people have a right to their anger. Nonetheless, the bigger problem has little to do with any particular product or industry, but with the way we look at risk. America takes the Hollywood approach, going to extremes to avoid the rare but dramatic risk—the chance that minute residues of pesticide applied to our food will kill us, or that we will die in a plane crash. (There is no bigger scam than those insurance machines near airport gates, which urge passengers to buy a policy just in case the worst happens. A traveler is more likely to win the lottery than die in an airplane. According to Federal Aviation Administration statistics, scheduled American flights spent nearly nineteen million hours in the air in 2008. There wasn’t one fatality.)

  On the other hand, we constantly expose ourselves to the likely risks of daily life, riding bicycles (and even motorcycles) without helmets, for example. We think nothing of exceeding the speed limit, and rarely worry about the safety features of the cars we drive. The dramatic rarities, like plane crashes, don’t kill us. The banalities of everyday life do.

  We certainly know how to count the number of people who died while taking a particular medication, but we also ought to measure the deaths and injuries caused when certain drugs are not brought to market; that figure would almost always dwarf the harm caused by the drugs we actually use. That’s even true with Vioxx. Aspirin, ibuprofen, and similar medications, when used regularly for chronic pain, cause gastrointestinal bleeding that contributes to the death of more than fifteen thousand people in the United States each year. Another hundred thousand are hospitalized. The injuries—including heart attacks and strokes—caused by Vioxx do not compare in volume. In one study of twenty-six hundred patients, Vioxx, when taken regularly for longer than eighteen months, caused fifteen heart attacks or strokes per thousand patients. The comparable figure for those who received a placebo was seven and a half per thousand. There was no increased cardiovascular risk reported for people who took Vioxx for less than eighteen months. In other words, Vioxx increased the risk of having a stroke or heart attack by less than 1 percent. Those are odds that many people might well have been happy to take.
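  A quick check of that arithmetic, using only the per-thousand figures reported above (a back-of-the-envelope sketch, not a calculation taken from the book):

\[
\frac{15}{1000} - \frac{7.5}{1000} = \frac{7.5}{1000} = 0.75\% \quad \text{(absolute increase, i.e., less than 1 percent)},
\qquad
\frac{15}{7.5} = 2 \quad \text{(relative risk, roughly doubled)}.
\]

  The “less than 1 percent” in the passage is the absolute increase; the relative risk roughly doubled, which is why the same numbers can sound either reassuring or alarming depending on how they are framed.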

  “All Merck had to do was acknowledge the risk, and they fought that to the end,” Topol said. “After fifteen months of haggling with the FDA they put a tiny label on the package that you would need a microscope to find. If they had done it properly and prominently, Vioxx would still be on the market. But doctors and patients would know that if they had heart issues they shouldn’t take it.”

  Most human beings don’t walk out the door trying to hurt other people. So if you are not deliberately trying to do harm, what are the rules for using medicine supposed to be? What level of risk would be unacceptable? A better question might be, Is any risk acceptable? Unfortunately, we have permitted the development of unrealistic standards that are almost impossible to attain. The pharmaceutical industry, in part through its own greed (but only in part), has placed itself in a position where the public expects it never to cause harm. Yet drugs are chemicals we put into our bodies, and that entails risk. No matter how well they work, if one person in five thousand is injured, he could sue and have no trouble finding dozens of lawyers eager to represent him. People never measure the risk of keeping the drug off the market, though, and that is the problem. If you applied FDA phase I or II or III criteria—all required for drug approval—to driving an automobile in nearly any American city, nobody would be allowed to enter one. When we compare the risk of taking Vioxx to the risk of getting behind the wheel of a car, it’s not at all clear which is more dangerous.

 
