The Affordable Care Act (ACA) was an attempt to broaden access for all Americans, and it did, although it did not provide complete coverage for everyone. In addition, health care costs continued to rise faster than the rate of inflation (although not as quickly as before the ACA was enacted), and there was no improvement in overall mortality in the United States (admittedly an incomplete measure of quality). Besides this, not everyone who liked his or her doctor was able to keep that doctor. Only the most sanguine observer would take the position that the ACA solved the access/cost/quality issue. Nor is it likely any replacement plan will work any better. It may well be that the access/cost/quality problem is unsolvable through any all-encompassing approach.
Perhaps there is another way for policy makers to approach the problem. They have been trying to solve American health care ills with a top-down broad-system approach. Have they been attempting too much? It might be more effective for the government to attack smaller problems—pharmaceutical prices, care for the indigent, the opiate and obesity crises—incrementally on a case-by-case basis.
No matter what is done, there is little reason to believe that twenty years down the line health care will be in any different state than it is now. In the words of baseball player Dan Quisenberry, “I have seen the future—and it is much like the present, only longer.”
29
THE DIGITAL INTRUSION INTO HEALTH CARE AND THE CREEPY LINE
* * *
And worse I may be yet: the worst is not
So long as we can say “This is the worst.”
—WILLIAM SHAKESPEARE, KING LEAR
IN 2010 TOP Google executive Eric Schmidt told the Atlantic, “Google policy is to get right up to the creepy line and not cross it. . . . We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”
Whether it intended to or not, Google has now crossed the creepy line—with ominous implications for patients everywhere. It partnered in a British medical project involving more than one million patients that was effectively hidden from the public until recently. The project’s lack of concern for privacy and informed consent was blatant exploitation of these patients, and unless greater attention is paid to digital companies entering the health care universe, the public will be at significant risk in the future.
It began, as so many notorious medical experiments do, with ostensibly good intentions. In 2015 Royal Free NHS Foundation Trust, which operates a number of British hospitals, entered into a seemingly benign agreement with a Google subsidiary, DeepMind. In an effort to develop an app to monitor patients at risk of kidney disease, DeepMind was granted access to the health information of 1.6 million patients. The assumption was that this information would be limited to factors related to kidney disease, but there was no explicit mention in the agreement of the nature or amount of data to be collected. Within months, Google-contracted servers were amassing sensitive personal medical information with little relation to kidney disease, from emergency room treatments to details of personal drug abuse.
Until journalists prompted a government investigation, DeepMind accessed the personally identifiable medical records of a large number of patients—with no guarantee of confidentiality, formal research protocol, research approval, or individual consent.
Also, neither Royal Free nor Google chose to explain why DeepMind, with virtually no health care experience, was selected for this project. Apparently neither British regulators nor physicians asked any substantive questions.
Elizabeth Denham of the UK Information Commissioner’s Office, the ombudsman for the country’s medical data, released a statement regarding a probe of the secretive DeepMind deal: “Our investigation found a number of shortcomings in the way patient records were shared for this trial. Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening.” An admirable, albeit belated, first step by the organization that failed to anticipate the obvious dangers of an arrangement between one of Britain’s largest health care providers and the world’s dominant data mining/advertising corporation.
There is, of course, a larger issue at stake, one that Denham failed to address. Medical information is the last fragile redoubt of our rapidly eroding personal privacy. While professing good intentions, Google has an unstated but obvious conflict of interest in the data mining of large populations. Did Google have an ulterior motive in collecting the medical information of such a huge patient cohort? And more important, when monolithic digital companies like Google, Microsoft, Apple, Facebook, and Amazon, which already control much of our personal and professional activity, enter the health care industry as they inevitably will, who will protect patients’ interests?
Once these companies introduce artificial intelligence and proprietary algorithms into medical care, will there be transparency? If not, what recourse will the public have? One author has likened Google to a one-way mirror—it knows much about us and is learning more every day, but we really know virtually nothing about it. The paramount concern of any medical research is to preserve the rights of patients and subjects, and this one-way mirror does little to ensure that.
After the UK Information Commissioner’s investigation, DeepMind cofounder Mustafa Suleyman assured the public that new safeguards would be instituted and that the company’s goal is to have a positive social impact. We expect him to say that, but the twentieth century was replete with notorious studies that were kept secret or justified on the basis of their supposed societal benefit. If the history of medical ethics has taught us anything, it is that patients do not exist to serve medical science and that they must never be deprived of the right to control their medical treatment, regardless of researchers’ stated beneficence.
Big data is coming to medicine, and it would be remiss not to acknowledge the potential benefits of machine learning and artificial intelligence. But no matter how valuable the promise of these new approaches and how well intentioned the motives of those responsible, without transparency, safeguards, and continual oversight, the seeds of abuse and tragedy are never far away. And here in the United States, will HIPAA offer sufficient protection?
Be forewarned, the story of Royal Free and Google DeepMind is a clarion call. It is merely the introductory chapter in a new marriage of health care and digital companies that seek to collect and control medical information. One is reminded of the warning given to Charles Foster Kane in Citizen Kane: “You’re going to need more than one lesson, and you are going to get more than one lesson.”
IV
RESEARCH, ETHICS,
DRUGS, AND MONEY
30
SHOULD YOU PUT YOUR TRUST IN MEDICAL RESEARCH?
* * *
The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
—ISAAC ASIMOV
A FRIEND OF MINE, a physician with thirty years of experience in medical research who has published in the world's top medical journals, recently said to me, "I don't believe most of the studies published in the medical literature anymore." Behind his candid skepticism lies a conviction that medical researchers are losing the trust of the public.
Trust is an essential ingredient of medical research, and the accelerating erosion of trust in the biomedical literature that my friend noted is the result of several factors: fraud, conflicts of interest, and inadequate scientific and journalistic peer review. These malign influences have corrupted scientific literature for generations, but their current manifestations are particularly acute in medical research.
A notable instance of fraud came to the fore in 2015 when the prestigious journal Science, thought by many to be the top science journal in the world, retracted a prominent paper on gay marriage after the lead author lied about certain features of the study. This high-profile retraction was only more evidence that fraud has become depressingly common in the biomedical literature.
A review of more than two thousand articles retracted by major journals revealed that more than two-thirds were retracted because of some type of fraud. Moreover, the percentage of articles retracted because of fraud is roughly ten times higher than it was in 1975. While some of this may be because of greater scrutiny, an increase of that magnitude should not be ignored, because the consequences of fraud in the medical literature can be devastating.
Consider two examples: The current movement against vaccination for children stemmed—in large part—from a well-publicized but fraudulent 1998 paper in the Lancet, one of Great Britain’s top medical journals. In another case, as many as forty thousand women were treated for breast cancer with bone marrow transplants in the 1990s at a cost of billions of dollars. This treatment was based on studies revealed to be fraudulent. Bone marrow transplant, effective for certain blood disorders, turned out to be not only relatively ineffective for breast cancer but also often dangerous.
The medical establishment has been lax about policing the rise in conflicts of interest. Scrutiny by the medical profession is supposed to guarantee against outside conflicts, but hypocrisy is rife as many doctors receive generous payments from pharmaceutical concerns and device manufacturers while publishing scientific studies that can hardly be described as disinterested. Likewise, doctors accept payment from the government to develop important nationwide medical guidelines that often involve millions of dollars. And government, like the private sector, has its own agendas.
While this goes on, medical journal editors openly acknowledge that they cannot find article reviewers without any financial ties to private companies. Quite often the physician-authors being reviewed have significant conflicts of interest. Despite this, the medical community assures itself, and the public, that the value of this expertise outweighs any bias the conflicts bring to medical science. Even politicians are held to a higher standard, if only slightly.
More than ever, medical researchers aim for articles that will attract the mainstream press. The rush to get research to the public sometimes means short-circuiting rigorous scientific peer review. That can leave the review process in the hands of journalists unqualified or unwilling to interpret data and conclusions.
To illustrate the problem, John Bohannon, a Harvard biologist, created a fake study in 2014. Using a pseudonym, he and his colleagues deliberately ran a poorly designed “clinical trial” with subjects they recruited and randomly assigned to different diet regimens. They mined the results for anything that looked interesting and found that people lost weight 10 percent faster if they ate a chocolate bar every day. It was nothing more than a random finding. Yet the study was accepted by several online scientific journals within twenty-four hours, and the study was reported on the front page of Europe’s largest daily newspaper. From there it went around the world via the Internet. Bohannon even concocted a cleverly designed news release to trumpet the findings. The results then appeared in magazines and on television in more than twenty countries. Ironically, only sharp-eyed online readers read the study critically. From the experience, Bohannon cautioned, “You have to know how to read a scientific paper—and actually bother to do it. . . . Hopefully our little experiment will make reporters and readers alike more skeptical.”
There is no quick fix for the erosion of trust in medical studies. There have been calls for a new spate of bureaucratic rules or greater federal funding for investigative bodies. This is a quixotic quest; trust in science does not come from rules or money. Doctors and researchers must be taught early in their careers that intellectual honesty is more valuable than anything else, even personal advancement—especially personal advancement. Along with that, the general public must be better educated and display greater interest in science. Journalists should be trained to read and interpret medical studies and be willing to question research.
The fault is not in our stars but in ourselves. Admittedly, the long-term prospects for these remedies are not promising. Unfortunately, if the current trend continues, medical studies will become the "reality television" of science, leaving outside observers unable to tell what has been manipulated and what hasn't.
31
COMPARATIVE EFFECTIVENESS RESEARCH: BUT WHAT IF THE RESEARCH DOESN'T SHOW WHAT YOU WANT?
* * *
One man’s risky and over-priced treatment is another man’s income stream.
—HEALTH WRITER MAGGIE MAHAR
SEVERAL YEARS AGO, Congress's $787 billion economic stimulus package earmarked $1.1 billion to compare different medical treatments for specific illnesses. This "comparative effectiveness research" attempted to answer questions such as whether drugs or surgery work better in various medical conditions such as low back pain.
The impetus for this program, endorsed by President Obama during his 2008 campaign, was a growing skepticism voiced by health economists and policy experts who feel much of what doctors currently do is expensive and doesn’t actually work. There is unquestionably much to be gained from such research. In many cases, surgical treatments haven’t been randomized against nonsurgical therapy, and many drugs used in psychiatry haven’t been evaluated comparatively against nonpharmacologic treatments. In addition there is insufficient follow-up on the long-term side effects of many approved drugs now on the market, and many medical devices used today haven’t been sufficiently evaluated.
Comparative effectiveness research has generated great optimism in Washington. Representative Pete Stark (D-CA), former chairman of the Ways and Means Health Subcommittee, summarized the optimism thusly: "The new research will eventually save money and lives." He explained that the United States spends over $1 trillion a year on health care and patients are put at risk, with billions of dollars spent each year on ineffective or unnecessary treatments, but "we have little information about which treatments work best for which patients." In a report accompanying the economic recovery package, the House Appropriations Committee echoed Stark's hopes by saying this research could "yield significant payoffs" because less effective, more expensive treatments "will no longer be prescribed."
Unfortunately, from a medical standpoint, these expectations often prove overconfident. It is foolish to predict the outcome of medical research in advance—the results may not be what we expect. We want cheaper therapies to be better, and in fact they often are. But in medicine, it simply doesn’t follow that cheaper is automatically better. What then? What does the government do if research finds expensive back surgery turns out to be more effective than medication and physical therapy? What happens if long-term psychotherapy happens to be more effective at treating depression than short-term antidepressant therapy? What if cardiac surgery demonstrates it prolongs life for patients in their eighties and nineties? Will researchers feel subtle, unstated pressure, or even overt pressure, to gear studies that will result in findings that allow the government or insurers to limit coverage for expensive treatments?
This is a fundamental dilemma with large-scale, government-sponsored medical research. Quite often the results depend on who does the studies. The makeup of any proposed council of government advisers will likely have a major influence not only on the type of studies but on the actual findings of those studies. This is not a theoretical concern. European countries have faced this exact problem translating government-sponsored comparative effectiveness research into public policy.
Completely disinterested researchers are not always those selected to perform studies. Some scientists may feel political pressure to turn out the results sought by their patrons. Moreover, it’s the rare specialist or surgeon who performs a study and acknowledges his or her specialty’s approach is inferior to the alternative, especially if these findings have major financial implications for that specialty. Even then, professional specialty organizations are often loath to accept such findings. Anyone who doubts this need only consult the nearest medical library.
All of this is likely to result in warfare between strange bedfellows. Consumer groups, unions, employers, and insurers are likely to support government research efforts as a means of reducing waste (and cutting costs). The for-profit health care industry will find itself on the other side of the trenches since programs to make health care more efficient may be viewed as a threat to its economic well-being.
Despite the best efforts of our best scientists who do comparative effectiveness research, there will always be uncertainty in medical treatment. I hope that the economists, policy makers, and doctors remember that fact, since billions of dollars and the nation's health are at stake. They should heed Swedish physicians with extensive experience in comparative effectiveness research in Europe, who caution, "A decision to prioritize a less therapeutically effective medicine because of cost-based considerations over an effective, but more expensive, medicine could lead to some serious political, social and moral dilemmas."
32
THE EASIEST PERSON TO FOOL IS YOURSELF
* * *