Here’s how the reasoning goes. The probability of your original choice of door number one being correct is one in three, unaffected by the new information. The probability of your original guess being wrong is then two in three. Since now there is only one other choice and your original guess is more likely to be wrong than to be right, you should change your guess to door number three. That may require some digesting, but the point is that new information may change probabilities of a given outcome in unexpected ways.
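For readers who would rather see the numbers than digest the argument, the reasoning is easy to check with a short simulation. This little Python sketch (our illustration, not anything from the statistics literature) plays the game many thousands of times under each strategy:

```python
import random

def play(switch, trials=100_000):
    """Simulate the three-door game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the prize
        pick = random.randrange(3)  # contestant's first choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

Staying wins about a third of the time; switching wins about two-thirds of the time, just as the argument predicts.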
But what does this have to do with how the good doctor would deal with her cancer patient? Well, she would start with the available population data, but then she would look carefully at this specific person. How old is he? Does he have any other health problems? What is his race or ethnic group, his socioeconomic status? Does he have a personal support system that he can rely on? Where does he live? Does he have readily available transportation? What are his health-related beliefs? Is there a family history of cancer and if so how was it treated and what was the outcome? And even, yes, what are the relevant base sequences in his genomic DNA? In short, this doctor gathers every bit of information about the person that has any chance of influencing how likely he is to accept and respond to a given treatment. Then, what about the cancer—the cell type, the clinical stage (caught early and still localized, widely spread to other sites in the body, or somewhere in between), and perhaps the cancer’s genotype as well? All of those variables and any other information that indicates possible effects of the personal characteristics of this patient and the specifics of his cancer are taken into account in designing an initial treatment plan.
Bayesian probability also changes as new information becomes available; if it’s raining on race day we just have to grit our teeth and put our money on Dogmeat. So this patient’s initial responses to a regimen, new discoveries about the biology of his kind of cancer, and other changes in his condition or circumstances may alter his prognosis and his therapy. One of University of Minnesota professor Phillip Peterson’s ten rules for doctoring is: “If What You’re Doing Doesn’t Seem to Be Working, Think About Doing Something Else.” Another of his rules is: “If What You’re Doing Seems to Be Working, Think About Continuing It.”112
Although perhaps less than thoroughly versed in the history, theory, and mathematics of Bayesian statistics, a doctor who has made peace with uncertainty does think like a Bayesian. The very fact that predicting outcomes is complex and that such probabilities are not static but are subject to change based on what happens in real time requires that the entire spectrum of possibilities remains open from beginning to end of a person’s illness. The responsible doctor knows this and stays in touch with all the possibilities. It also means that if “evidence” means the results of clinical trials analyzed by frequency-based statistics, then relying on evidence as the sole basis for deciding on therapy may limit the chances of getting the best results in any specific case.
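Thinking like a Bayesian, at its core, is just Bayes’ rule: start with a prior probability and revise it as evidence arrives. A toy diagnostic example (the prevalence, sensitivity, and specificity figures below are invented for illustration, not drawn from any real test) shows why the updated number often surprises people:

```python
def posterior(prior, sensitivity, specificity):
    """Bayes' rule: probability of disease given a positive test."""
    p_pos_if_disease = sensitivity
    p_pos_if_healthy = 1 - specificity
    p_positive = prior * p_pos_if_disease + (1 - prior) * p_pos_if_healthy
    return prior * p_pos_if_disease / p_positive

# Hypothetical numbers: 1% prevalence, 90% sensitive, 95% specific test.
print(f"{posterior(0.01, 0.90, 0.95):.3f}")  # about 0.154
```

Even with a quite accurate test, a positive result in a low-prevalence population leaves the odds of disease well under one in five, because false positives from the large healthy majority swamp the true positives. That is exactly the kind of revision-in-light-of-context the Bayesian doctor performs, whether or not she writes down the equation.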
Living in this twenty-first century, we cannot avoid attempts to convert all problems to number problems. There is a prevailing notion that most of the human dilemma will eventually be made tractable by the truly scary power of the computer, and medicine is not an exception. Granted, Stanford and Cambridge professor Sam Savage’s Flaw of Averages exposes the naiveté of imagining that a single number will magically transfer all critical decisions from the human brain to Watson or who(what)ever. But if we knew all the numbers and how they influence each other, that might be possible; it would just be a matter of working out the math. And given a marketplace teeming with devices for measuring stuff related to health that are not tethered to a laboratory or a doctor, is it possible that even the subjective part of diagnosis and treatment, the art of medicine, might become essentially a machine activity with little need for flesh-and-blood professionals?
There is a rising chorus of people who favor moving health care in that direction. (New York Times physician writer Abigail Zuger wonders whether such people are practicing medicine on a different planet than the one where she works.113) A serious challenge to this approach is converting each step of the process into formulas, equations, and, ultimately, numbers, what has been called the digitization of health care.114
There are mathematical approaches to analyzing how decisions emerge from complex interactions among people; one approach is called game theory.115 This is not game as in parlor game. This is serious stuff. Game theory has been used to analyze the Cuban missile crisis, the Vietnam peace negotiations, the Watergate scandal, and even the existence of God. You can’t get much more serious than that! There are several Nobel laureates in economics among the game theorists, and the species is also to be found comfortably ensconced in the ranks of sociologists, psychologists, astronomers, and evolutionary biologists. Maybe game theory can rescue medical decision making and models of medical consultation from the murky netherworld of subjectivity and intuition.
There have been efforts to do just that. For example, Carolyn Tarrant and her associates at the University of Leicester examined the utility of several game theory models with snappy names—the Prisoner’s Dilemma game, the Assurance game, the Centipede game—to analyze medical consultations.116 Others are doing similar work in an effort to understand and predict how medical decisions are made with the ultimate goal of making it a numbers game and delegating responsibility to an algorithm. No doubt a lot will be learned about how people behave in a medical setting from the game theorists, but the ultimate goal may be elusive. Perhaps some processes can be digitized, but people tend to be pretty determinedly analog. The effectiveness and efficiency of health care will benefit from efforts of the game theorists, but it seems unlikely that they will seriously threaten your doctor’s place in the care of the real person that is you.
Archimedes might be a more serious threat. Not the ancient Greek mathematician who ran into the streets naked from his bath shouting Eureka! Eureka! upon discovering that the displacement of water could be used to determine the volume of an irregular object. That guy did a lot of interesting things, but we mean, “Archimedes: An Analytical Tool for Improving the Quality and Efficiency of Health Care,” the brainchild of Kaiser Permanente’s David M. Eddy, who may have been the first person to use the term evidence based medicine.117
Dr. Eddy opens an article describing his brainchild this way: “The practice of medicine has become extraordinarily complex, and it promises to become even more complex as the pace of innovation accelerates.” As he goes on to describe his model, we are thinking, “You can say that again.” Digitizing health care, as Bette Davis said about getting old, ain’t no place for sissies. A fundamental complexity is what Joe Loscalzo and colleagues have called network medicine.118 It’s not just that there are probably millions of bits of information to deal with, but that each of those million bits interacts with all of the other millions of bits and those interactions have to be accounted for if anything useful is to come out of the effort. But computers are capable of dealing with mindboggling complexity if you can give them the numbers, so Dr. Eddy, who is both a physician and an information expert, had a go at it; Archimedes is the result.
Dr. Eddy and his colleagues scoured the literature for the relevant basic research, epidemiological studies, and clinical trials. They built a three-part model: a model of human physiology; care process models; and models of system resources, personnel, facilities, equipment, costs, etc. Then they wrote over two hundred (we didn’t count them) equations describing the interactions of all these variables, fed them into the computer, and flipped the start switch.
Does Archimedes work? Well, it can produce results of virtual clinical trials that very closely simulate the results of actual trials. More needs to be done, but if it were possible to do clinical trials in silico instead of in vivo, sparing living people the inconvenience and risks of such studies, that would be pretty amazing.
Archimedes has been acquired by Evidera, a “health care modeling and analytics organization.”119 They claim that “the company enables people to combine real-world health care data and simulation data to create compelling and actionable evidence used in individual health care decision making, as well as in populations, with applications in health and economic outcomes research, policy creation, and clinical trial design and operations.” Time will tell whether those promises will be kept and how they color the practice of medicine. Although the thinking doctor isn’t holding her breath, she will keep a close eye on Archimedes.
So, how does your doctor, knowing a good bit about how science comes by and interprets numbers, consider your personal health? Will she see you as that unique N of 1 for whom the general experiences of humanity may not always apply? Or will she lump you together with the statistical crowd, confidently relying on those general human experiences to tell her how to keep you healthy or get you well? Should we all start preparing ourselves for the brave new world of digitized medicine while doctors look for another job? Or can we have it both ways? Can you and your doctor preserve your individuality as a critical factor in your care while still benefitting from all of the quantitative stuff?
Stephen Jay Gould’s fascinating essay “The Median Isn’t the Message,” which we mentioned earlier, speaks elegantly to this quandary.120 He was diagnosed with a disease that the literature describes as incurable with a median survival time from diagnosis of five months. He died twenty years later of an unrelated condition. What was his secret?
Dr. Gould believed that at least part of his secret lay in his understanding of statistics and how they applied to his specific circumstance. Not a fatalist by nature, he was unwilling to accept the conclusion that he had only five months to live. He knew that a median survival time meant that half of the people with the disease lived longer than that. Further, he discovered that survivals of patients with his disease were “right skewed,” that is, a few people, although not many, lived a lot longer than the median—years longer. So Dr. Gould set about to see where he fit on this oddly right-skewed survival curve. He concluded that there were several reasons to think that his niche on the curve was way out to the right on the longevity tail—he was young, he was getting excellent medical care, his disease was diagnosed early, and he was enthusiastic about the future. During the ensuing twenty years, Dr. Gould made major contributions to our understanding of the world and inspired two generations of Harvard students to better understand and care about that world.
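Gould’s point about skew is easy to make concrete with a toy data set (the survival times below are invented for illustration and have nothing to do with his actual disease): in a right-skewed distribution the median can be a small number even while the tail stretches out for years.

```python
import statistics

# Invented survival times in months for eleven hypothetical patients.
# Most cluster near the median, but a long right tail stretches out.
survival = [2, 3, 4, 4, 5, 5, 6, 8, 24, 60, 240]

print(statistics.median(survival))          # 5 -- the headline number
print(round(statistics.mean(survival), 1))  # 32.8 -- pulled up by the tail
```

The median here is five months, yet one patient in this made-up cohort lives twenty years. A “median survival of five months” says nothing about which side of that number any one person will land on, or how far out the tail reaches.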
It is possible that you are a statistic but not a common one, that you have a number but that it falls somewhere outside the boundaries of conventional thinking. All the more reason to find yourself a doctor who can think conventionally, but is perfectly comfortable with unconventionality when the occasion calls for it.
Sam Savage, The Flaw of Averages (Hoboken: Wiley, 2009).
“Thomas Bayes,” Wikipedia, https://en.wikipedia.org/wiki/Thomas_Bayes.
Thomas Bayes, “An Essay Towards Solving a Problem in the Doctrine of Chances,” Philosophical Transactions of the Royal Society of London 53 (1763): 370.
Kevin Boone, “Bayesian Statistics for Dummies,” http://www.kevinboone.net/bayes.html.
F. D. Flam, “The Odds, Continually Updated,” The New York Times, September 29, 2014, http://nyti.ms/Ylzkqv.
Phillip K. Peterson, Get Inside Your Doctor’s Head (Baltimore: Johns Hopkins University Press, 2015).
Abigail Zuger, MD, “Patient, Heal Thyself,” The New York Times, January 5, 2015.
Jessica Oaks, “The Digitization of Healthcare,” IT Briefcase: IT News Resources and Events, July 7, 2015, http://www.itbriefcase.net/the-digitization-of-healthcare.
George Diamond, Alan Rozanski, and Michael Steuer, “Playing Doctor: Application of Game Theory to Medical Decision-Making,” Journal of Chronic Diseases 39 (1986): 669-677.
C. Tarrant, T. Stokes, and A. M. Colman, “Models of the Medical Consultation: Opportunities and Limitations of a Game Theory Perspective.” Quality and Safety in Health Care 13 (2004): 461-466.
David Eddy and Leonard Schlessinger, “Archimedes: An Analytical Tool for Improving the Quality and Efficiency of Health Care,” NCBI Bookshelf, http://www.ncbi.nlm.nih.gov/books/NBK22837/.
Albert-Laszlo Barabasi, Natalie Gulbahce, and Joseph Loscalzo, “Network Medicine: A Network-Based Approach to Human Disease,” Nature Reviews Genetics 12 (2011): 56-58.
“Archimedes Has Been Acquired by Evidera: Expanding Modeling and Analytics Services and Providing Additional Evidence-Based Solutions for Our Customers,” Evidera: Evidence Value Insight, http://archimedesmodel.com/.
Stephen Jay Gould, “The Median Isn’t the Message,” https://people.umass.edu/biep540w/pdf/Stephen%20Jay%20Gould.pdf.
CHAPTER 11
Don’t Believe Everything You Read, No Matter Where You Read It
The good doctor doesn’t need this meme to convince her of its message.121 She learned somewhere early in her education to be suspicious of claimed truths, especially those lacking any convincing evidence. And although she depends heavily on reputable scientific sources, this doctor knows that even highly respected science journals, her primary source of medical information, occasionally, in spite of their best efforts, wind up publishing things that turn out to be untrue, and once in a while they fail to publish potentially important and valid discoveries. While not a cynic, she rarely takes information at face value, no matter the source, and she always remembers that she may not be seeing the entire picture.
In 1914 American writer and explorer of anomalous phenomena Charles Fort published a work titled The Book of the Damned.122 He was convinced that mainstream scientists were slaves to current trends, excluding (damning) observations that didn’t fit their preconceptions. In The Book of the Damned he expounded his theory and discussed a number of truly strange phenomena that he concluded were damned to obscurity without a fair and objective hearing by the arrogant conventionality of popular science. While the doctor who accepts uncertainty is no Fortean—those students of strange phenomena who are ready to believe almost anything—she recognizes that Fort’s idea may sometimes have merit. Subjective factors do sometimes affect, at least for the time being, what science gets published and thus what facts become generally accepted.
For American medicine, as for all of western medicine, the primary source of credible evidence for or against a given intervention is the published medical literature. If research is not published, preferably in a reputable scientific journal, then the only people who know about it are the scientists who did the work and perhaps their friends and family and the bartender at the after-work hangout behind the hospital. Whether work is published is decided by the researchers who do it; they must analyze the data, do the statistics, and write the manuscript. But decisions about publication also involve the journal editors and a select group of usually two or three presumed experts in the area who review the manuscript and give it a thumbs up or down; that is called peer review. This process is not perfect; there are multiple opportunities for slips twixt the metaphorical cup and lip. The thinking doctor is very familiar with this system (at various times in her career she was probably both rewarded and punished by it) and she always remembers that the available information may not be the whole story.
For one thing, while well-done studies with negative results are seriously considered for publication by editors of highly regarded medical journals, they and their peer reviewers tend to be more excited by positive results. And academic careers depend heavily on publishing original research in well-regarded journals. Negative experimental outcomes can languish in lab notebooks, never even converted into a manuscript. Even if an industrious postdoc gins up enough energy to write up an elegantly done but thoroughly negative study, it can be hard to make it through a highly regarded journal’s editorial and peer reviews, and the return on the investment of time and energy needed to find a lesser journal that will publish it may not be enough to make it worth the trouble. Should the negative studies find their way into some less than stellar journal, they will have limited impact on conventional understanding of the subject because they will be less exposed to and less respected by influential people than the positive studies that are discussed ad infinitum by the elite readers of higher impact publications.
So, there is a body of research with negative results that no one knows about and that is not considered when compiling the evidence for or against a given intervention. Your doctor just doesn’t know a lot about what doesn’t work, or even what works sometimes under some circumstances in some people but doesn’t work in other situations. This is scientifically sound research that too rarely gets into the body of medical evidence because of the bias of a publication process favoring positive results that are not too far afield from conventional thinking.
And—a sad commentary on the whole scientific enterprise—occasionally there are scientists who lie and cheat and are clever enough at it to get their “work” published in respected journals. Some people think that this behavior is getting worse and blame it on increasing academic pressures to publish. There are academic pressures, but they don’t excuse dishonesty. Pressures to publish may smoke out the scoundrels among us, but lying and cheating are failures of character, not faults of a system.
How rampant is dishonesty in medical research? One measure, which no doubt underestimates the problem, is the number of articles that are published and subsequently retracted. Retraction Watch, which follows numbers of and reasons for retractions, estimates that scientific articles are retracted at a rate of one per day and that the rate is increasing.123 Surprisingly, the very journals that your doctor ought to be able to trust, the ones from which articles are most often cited by publishing scientists, are the ones that retract the most articles!124 The good doctor knows that you can’t really trust any one source, that figuring out what is right in a given circumstance requires gathering all of the information one can find and assembling as complete a picture as possible with all of the materials in hand.