The Danger Within Us


by Jeanne Lenzer


  When a study that did include a medically treated control group was finally conducted in children, the positive results were reported in the publication abstract this way: “55.4 percent of patients had at least 50 percent reduction of seizure frequency” and “VNS has been proven to be an effective alternative in the treatment of pediatric patients with drug-resistant epilepsy.” But one has to carefully read the entire text to find what the authors failed to include in the abstract, their table of adverse events, or their conclusion section: during the three-year study period there were no deaths among the seventy-two children treated with medicines only, yet among just thirty-six children implanted with a VNS device, there were two deaths—one occurring after an increase in seizures.240

  Research shows that most busy doctors, if they get to read studies at all, only read the abstracts and conclusions. They rarely read, much less critically analyze, the data presented, which not infrequently contradict the researchers’ conclusions.155

  A further clue that the VNS device might not have any real efficacy was buried in the company’s 1997 physician manual (Cyberonics did not present these data during the 1997 hearing). In a review of what happened after a patient’s VNS device stopped delivering shocks because of battery depletion, of seventy-two patients, 15 percent had more seizures, while 58 percent had fewer seizures. In other words, patients were far more likely to do better rather than worse when the VNS device stopped working.241

  Of course, for patients with debilitating seizures, it’s understandable that they might grasp at the hope that they will be among the ones to have fewer seizures. As Patricia Kroboth said, “The implant doesn’t need to stop all seizure activity to change someone’s life.” Nor did it need to reduce seizures in every patient. But, as when playing the lottery, patients can’t know in advance whether they will be winners—or losers. And the cost can be far greater than the $40,000 price tag for the VNS device itself. By 2016, the MAUDE database held reports of nearly two thousand deaths among VNS patients, and according to Tomes’s database, the actual number was far higher.

  Jerry Hoffman’s crusade to educate doctors about the prevalence of medical illusions suggests that physicians can be an important bulwark against the harm such illusions can produce. But there are limits to what individual doctors can do to protect their patients. Doctors such as Fegan’s neurologist, Juan Bahamon, might be faulted for recommending a flawed device like the VNS. But they rely on industry to paint a clear and accurate picture of the products that pour onto the market annually. And Cyberonics’ claims were not only positive, they also appeared to have been vetted by the FDA. Fegan doesn’t blame Bahamon.

  Doctors can’t be expected to review and digest several hundred pages of FDA transcripts and complex statistical data for each drug and device they prescribe. Ultimately they rely on FDA experts to interpret industry-funded studies for accuracy. There simply are no truly independent sources of research in the US. Industry funding has extended its reach into every sector, from medical journals that present and interpret the research to universities and contract research entities that conduct the research to patient advocacy organizations that promote various treatments to medical education for doctors to the agencies that are supposed to protect the public interest—including the Centers for Disease Control and Prevention, the National Institutes of Health, and, of course, the FDA.

  Even if doctors could find time in the day to read and digest thousands of pages of original research data, most have not mastered the art of critical appraisal: the interpretation of statistics and of medical research methods (also called methodology).

  Part of the problem is that doctors are on information overload. By the time they graduate from medical school, they’ve had to memorize every nerve, bone, and muscle in the human body. They’ve been trained to regurgitate detailed information about cell structure and biochemistry, disease pathology and pathophysiology, and pharmacology. They have to learn myriad laboratory tests, both common and unusual, and they will be expected to interpret certain X-rays and CAT scans and be adept at performing ultrasound. They will learn a wide range of surgical procedures, and they will even be taught some complex statistical concepts and calculations, which they will forget shortly after they take their board exams. They will get credit for “knowing the literature,” but rarely do they gain an in-depth understanding of the critical skills necessary to distinguish solid science from research-based “evidence” that is actually unproved if not clearly wrong, as is so often the case.

  To test how well doctors understand basic statistical claims, 160 gynecologists were given all the data they needed to come up with the correct answer to a question about mammogram testing. They were told:

  the probability that a woman has breast cancer is 1 percent (prevalence);

  if a woman has breast cancer, the probability that she tests positive is 90 percent (sensitivity);

  if a woman does not have breast cancer, the probability that she nevertheless tests positive is 9 percent (false positive rate).

  Then the doctors were asked: if a woman has a positive test on mammography (“positive” means she might have cancer), what is the chance she actually has cancer?

  A. The probability that she has breast cancer is about 81 percent

  B. Out of ten women with a positive mammogram, about nine have breast cancer

  C. Out of ten women with a positive mammogram, about one has breast cancer

  D. The probability that she has breast cancer is about 1 percent

  The correct answer is C: out of ten women who test positive on a mammogram, only one will actually have breast cancer. Yet only 21 percent of the gynecologists answered correctly—slightly less than the 25 percent who would have answered correctly had they merely guessed among the four options. The authors wrote: “Disconcertingly, the majority of [gynecologists] grossly overestimated the probability of cancer, answering ‘90%’ and ‘81%.’” Expressed differently, of the ten total mammograms that are positive among one hundred women, nine are false positives* and only one is a true positive.242
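  The arithmetic behind answer C can be checked directly from the three figures the gynecologists were given. A minimal sketch (the cohort size of 1,000 is an arbitrary round number chosen for illustration):

```python
# Positive predictive value of a mammogram, using the figures in the text.
prevalence = 0.01            # 1 percent of women have breast cancer
sensitivity = 0.90           # 90 percent of cancers test positive
false_positive_rate = 0.09   # 9 percent of healthy women test positive

# Imagine 1,000 women screened:
n = 1000
with_cancer = n * prevalence                               # 10 women
true_positives = with_cancer * sensitivity                 # 9 women
false_positives = (n - with_cancer) * false_positive_rate  # about 89 women

# Chance that a woman with a positive test actually has cancer:
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # 0.09 -- roughly one in ten, answer C
```

Nine true positives drowned among roughly eighty-nine false positives is exactly why the intuitive answers of 90 percent and 81 percent are so far off.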

  Without an understanding of how research studies are designed, carried out, and interpreted, doctors—just like patients—fall prey to deceptive claims.

  Faith in biological plausibility, rather than misunderstood statistics, is the source of other medical missteps. For many years, doctors used a medical laser to perform “transmyocardial laser revascularization,” a procedure in which they would burn tiny holes in the heart muscle of patients with angina, or heart pain, in the belief that the holes would fill up with new blood vessels that would help deliver oxygen to the heart.243 The surgery had the requisite scientific-sounding name and the seductive allure of laser technology.

  It was biologically plausible that the holes could trigger new vascular growth—after all, the heart builds new blood vessels (called collateral circulation) to bypass clogged coronary arteries. But as Jerry Hoffman has pointed out, biological plausibility should be clinically tested before deciding that a treatment will be successful.

  It would be many years after the surgery was adopted before researchers discovered that the holes simply filled up with scar tissue. Despite this, 46 percent of patients declared that the surgery made them better and they had less angina. Yet tests showed there was no new vessel growth. The strong placebo effect, along with persistent, unfounded faith in biological plausibility, encouraged doctors to continue a useless intervention until reliable studies showed that the surgery caused a dramatic increase in deaths and the practice was deemed “obsolete.”244 As Steven Nissen, chair of the Cleveland Clinic’s department of cardiovascular medicine, likes to say, “The road to hell is paved with biological plausibility.”

  A patient’s hopes and beliefs can play a role in the placebo effect. Even seizures can have a subjective component in terms of perception and reporting—especially partial seizures, in which fleeting moments of mental blurriness assumed to be a seizure prior to implantation with the VNS device are instead interpreted as trivial forgetfulness (rather than a seizure) after implantation. Some patients in the Cyberonics studies reported that although they still had the same number of seizures, the seizures they had were “less strong.”

  The patients and family members who testified during the FDA hearing were telling the truth from their perspective, and it’s possible that some benefited from the device, but it’s also possible that they were benefiting from the normal waxing and waning of seizure disorders, the placebo effect, or even spontaneous regression. Was it possible that Robert Cassidy, who testified that after implantation his seizures became close to nonexistent, was among the one-third of people with epilepsy who eventually undergo spontaneous regression? And what of George, who went nineteen days without a seizure after he was implanted? After all, he went nineteen days without a seizure at age nine, many years before he was implanted.

  One myth about the placebo effect relevant to the VNS device is the belief that it only applies to symptoms that are psychological in nature or origin. Yet several studies dispel that myth. For example, a study of pain medicine versus placebo in patients thought to have psychological or functional pain rather than active disease (such as peptic ulcer disease or inflammatory bowel disease) showed that individuals with functional pain, as well as those with organic diseases, responded to the placebo.245 Other studies have found that malingerers and certain patients with psychological pain may need their pain for some reason and therefore might be less responsive to the healing effects of a placebo than patients with organic pain.

  The placebo effect can extend even to care providers. One young health aide caring for a mentally handicapped patient implanted with a VNS device said that when she activated the patient’s device with a magnet, the results were miraculous! The patient’s seizures would stop immediately (although the device fires every three or five minutes normally, patients can use a magnet to activate the device, causing an extra shock, intended to abort seizures). When asked how long the patient’s seizures usually lasted if his VNS device wasn’t activated, the aide said about two to three minutes. On further questioning, the aide acknowledged that it would take her thirty to sixty seconds to reach the patient, get the magnet, and swipe the VNS device. Adding up the time it took to first recognize that her patient was having a seizure, the time it took to get to the patient and disengage the magnet from the patient’s wrist, and the time it took (another thirty seconds to a minute) for the seizure to stop after the aide swiped the device, it wasn’t clear that anything had been gained.

  The aide had been told that the device would stop seizures, and she interpreted the events in a way that made sense to her.

  As if oblivious to the concepts raised by Hoffman’s lecture about the impact of language, and to the fact that the words significant and benefit are frequently misunderstood, the FDA seemed to ignore the results of a study that demonstrated the gap between statistical significance, as emphasized by Cyberonics, and clinical significance. In an apparently unpublished study mentioned in a single brief paragraph of the 212-page transcript of the FDA advisory meeting, panelists learned that when the researchers examined thirty-four “quality of life” measures, they found no benefit in thirty-one of the thirty-four measures.133 That is hardly evidence that the VNS device was improving the lives of patients, because chance alone would ensure that at least a few measures would be positive.

  The sum of Cyberonics’ research data, stripped of the emotional sway of anecdote and the misleading presentation of “significant” findings of “benefit,” added up to this: 18 percent of test subjects improved in the short term, most patients failed to improve, there was no—or questionable—benefit over the long term, some patients experienced substantial worsening, and a concerning number died.

  When Dennis Fegan learned that a sizable portion of VNS test subjects developed more seizures after implantation, he wondered: How does anyone know whether a patient is having a seizure or a near-fainting or fainting spell caused by the device as it stops or slows the heart? And if it’s a seizure, how does anyone know whether it’s triggered by the device itself? Just as asystole caused by the VNS device could be mistaken for SUDEP, it was also possible that seizures caused by the VNS device could be mistaken for underlying epilepsy. Fegan was grasping the concept of cure as cause, a source of widespread illusion in medicine.

  When Cyberonics attributed decreases in seizures to the VNS device but attributed increased seizures to the worsening of epilepsy, it ensured that cure as cause was excluded from consideration. The problem is far from unique to Cyberonics. Whenever researchers assume that improvements are the result of treatment and bad outcomes are the result of an underlying disease or condition, they are ignoring the possibility of cure as cause.

  Separate reporting of benefits and harms is necessary, but here’s where things get sticky: because industry gets to decide whether an adverse event is caused by a device or an underlying disease, manufacturers can categorize the adverse effects of their devices as problems of underlying disease, creating another opening for industry to exploit the word benefit in ways that undermine objective research.

  Hoffman says that cure as cause is not uncommon and should not be a particularly surprising phenomenon. Medical interventions, including those that involve devices, invariably interfere with the same physiologic pathways or organ systems that give rise to a disorder—and it’s easy to overshoot or further disturb an already disordered system. And when outcomes such as hip pain, suicidality, and seizures mirror the symptoms of the underlying disorder, it’s easy to miss cure as cause.

  The problem is common with devices as well as drugs. Stents, intended to open clogged coronary arteries and prevent heart attacks, can themselves cause clots and heart attacks. Surgical mesh, used to prevent urinary incontinence, can slice through pelvic tissues, causing incontinence. Metal-on-metal hip implants that shed cobalt into the surrounding tissues can cause tissue destruction, leading to worsening hip pain that is mistaken for worsening arthritis. Filters placed in the large vein leading to the heart to prevent clots from reaching the heart have been found to trigger clotting.224, 239, 246

  This isn’t to say that every bad outcome negates the overall value of a device. It is a reminder, however, that all medical interventions come at a cost—costs that should be fairly measured and honestly reported so that the public can decide whether the risks that come with a device are worth taking.

  * * *

  Nearly one hundred years ago, Sinclair Lewis’s Pulitzer Prize–winning book, Arrowsmith, featured Max Gottlieb, a medical researcher who railed against institutions that betrayed objective academic inquiry in favor of profits. Since that time, academic institutions and government agencies have become far more deeply enmeshed with industry, and the marketplace has been celebrated as a driving force that delivers cutting-edge science.

  Jerry Hoffman, who has spent decades studying the ways in which industry manipulates study design, interpretation, and physicians’ perception, has linked widespread financial conflicts of interest to misleading medical claims. For many years, like Lewis’s character Max Gottlieb, Hoffman’s seemed to be a lone voice in the wilderness.

  But in the early twenty-first century, others within the medical community began to take up the cause. In 2005, John P. A. Ioannidis, professor of health research and policy and director of the Stanford Prevention Research Center at the Stanford University School of Medicine, wrote an article entitled “Why Most Published Research Findings Are False.” The piece took the medical world by storm, becoming the most downloaded article in the history of the journal PLoS Medicine. Ioannidis focused on the statistical illusions that cause bad science, but he also wrote, “There may be conflicts of interest that tend to ‘bury’ significant findings.”

  Studies consistently show that industry-funded research tends to exaggerate benefits while burying or failing to publish negative findings. Richard Smith, former editor in chief of The BMJ, the venerable professional journal once known as the British Medical Journal, has called attention to the unreliability of research published in even the most prestigious journals, concluding, “Medical journals are an extension of the marketing arm of drug companies.” Marcia Angell, former editor in chief of the New England Journal of Medicine, echoed the same judgment, writing:

  Let me tell you the dirty secret of medical journals: It is very hard to find enough articles to publish. With a rejection rate of 90 percent for original research, we were hard pressed to find 10 percent that were worth publishing. So you end up publishing weak studies because there is so much bad work out there.

  When I spoke with Hoffman after his lecture at Bellevue, he was in a reflective mood. “I was given the task of addressing the topic ‘Diagnostic Decision-Making: How Clinicians Think,’” he said. “But I always like to bring this back to something I believe is just as important, which is our role as professionals. Drug companies have a fiduciary responsibility to their shareholders. As doctors, we have a fiduciary responsibility to put our patients first. By relying on industry-funded studies, we’re failing our patients. As professionals, we have a special responsibility to make independent assessments on behalf of our patients, and we can’t allow ourselves to be compromised by financial conflicts of interest.”
