Not surprisingly, the king died on February 6. But notice all the conspicuous effort in this story. If Charles’s physicians had simply prescribed soup and bed rest, everyone might have questioned whether “enough” had been done. Instead, the king’s treatments were elaborate and esoteric. By sparing no expense or effort—by procuring fluids from a torture victim and stones from exotic goat bellies—the physicians were safe from accusations of malpractice. Their heroic measures also reflected well on their employers, that is, the king’s family and advisers.
On Charles’s part, receiving these treatments was proof that he had the best doctors in the kingdom looking after him. And by agreeing to the especially painful treatments, he demonstrated that he was resolved to get well by any means necessary, which would have inspired confidence among his subjects (at least until his untimely demise).
This third-party scrutiny of medical treatments isn’t just a historical phenomenon. Even today, there are strong incentives to be seen receiving the best possible care. Consider what happened to Steve Jobs. When he died of pancreatic cancer in 2011, the world mourned the loss of a tech-industry titan. At the same time, many were harsh in condemning Jobs for refusing to follow the American Medical Association’s best practices for treating his cancer. “Jobs’s faith in alternative medicine likely cost him his life,” said Barrie Cassileth, a department chief at the Memorial Sloan Kettering Cancer Center. “He essentially committed suicide.”8
Now imagine, hypothetically, that Jobs’s son had come down with pancreatic cancer. If the Jobs family had pursued the same line of alternative treatment, the public outrage would have been considerably more severe. Cassileth’s remark that Jobs “essentially committed suicide,” for example, would have turned into the accusation that he “essentially committed murder.” We see a similar accusation leveled at Christian Scientists when they refuse mainstream medical treatment for their children.9
The point here is that whenever we fail to uphold the (perceived) highest standards for medical treatment, we risk becoming the subject of unwanted gossip and even open condemnation. Our seemingly “personal” medical decisions are, in fact, quite public and even political.
MEDICINE TODAY: TOO MUCH
Now, the evolutionary and historical perspectives suggest that our ancestors had reasons to value medicine apart from its therapeutic benefits. But medicine today is different in one crucial regard: it’s often very effective. Vaccines prevent dozens of deadly diseases. Emergency medicine routinely saves people from situations that would have killed them in the past. Obstetricians and advanced neonatal care save countless infants and mothers from the otherwise dangerous activity of childbirth. The list goes on.
But the fact that medicine is often effective doesn’t prevent us from also using it as a way to show that we care (and are cared for). So the question remains: Does modern medicine function, in part, as a conspicuous caring ritual? And if so, how important is the hidden caring motive relative to the overt healing motive? For example, if conspicuous caring were only 1/100th as important as the therapeutic motive, then we could, for all practical purposes, safely ignore it. However, if the conspicuous caring motive is half as strong as the healing motive, then it could make a huge difference to our medical behaviors.
To find out just how important conspicuous caring really is, we will need to look at some actual data on our medical behaviors.
The biggest prediction of the conspicuous caring hypothesis is that we’ll end up consuming too much medicine, that is, more than we need strictly for health purposes. After all, this is what usually happens when products or services are used as gifts. When people buy chocolates for their sweethearts on Valentine’s Day, for example, they usually buy special fancy chocolates in elaborate packaging, not the standard grocery-store Hershey’s bar. A feast usually offers more and better food than people eat at a typical meal. And Christmas gifts are usually more expensive, and often less useful, than items you would have bought for yourself.10 (Though, yes, some kids do get socks.)
Medical treatments vary greatly, in both their costs and potential health benefits. If patients are focused entirely on getting well, we should expect them to pay only for treatments whose expected health benefits exceed their costs (whether financial costs, time costs, or opportunity costs). But when there’s another source of demand (i.e., conspicuous caring), then we should expect consumption to rise past the point where treatments are cost-effective, to include treatments with higher costs and lower health benefits. Thus conspicuous care is to some extent excessive care.
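To make the logic concrete, here is one way to formalize it (our notation, not the book’s). A patient motivated purely by health should consume a treatment only when

\[
\mathbb{E}[\text{health benefit}] > \text{cost}.
\]

If medical consumption also carries a social payoff, the decision rule shifts to

\[
\mathbb{E}[\text{health benefit}] + \text{signaling value} > \text{cost},
\]

and treatments that fail the first test can still pass the second. That wedge between the two thresholds is the excess care the hypothesis predicts.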
(There’s another way to look at it, of course, which is that we are getting our money’s worth when we buy medicine, but the value isn’t just health; it’s also the opportunity to demonstrate support. It only looks like we’re getting ripped off if we measure the health benefits but ignore the social benefits.)
We will now look to see if people today consume too much medicine. For the most part, we won’t be looking at individual treatments. It’s easy to find specific drugs or surgeries that don’t work particularly well, but that won’t tell us much about the overall impact of medical spending. Instead we’re going to step back and examine the aggregate relationship between medicine and health. Given the treatments that people choose to undergo, across a wide range of circumstances, does more spending lead on average to better health outcomes? We’re also going to restrict our investigation to marginal medical spending. It’s not a question of whether some medicine is better than no medicine—it almost certainly is—but whether, say, $7,000 per year of medicine is better for our health than $5,000 per year, given the treatment options available to us in developed countries.11
One place to start this investigation is by comparing health outcomes across different regions of the same country. As it happens, there are often huge differences in how the same medical conditions are treated in different regions. In the United States, for example, the surgery rates for men with enlarged prostates vary more than fourfold across different regions, and the rates of bypass surgery and angioplasty vary more than threefold. Total medical spending on people in the last six months of life varies fivefold.12 These differences in practice are largely arbitrary; medical communities in different regions have mainly just converged on different standards for how to treat each condition.13
These variations result in a kind of natural experiment, allowing us to study the effects of regionally marginal medicine, that is, the medicine consumed in high-spending regions but not consumed in low-spending regions. And the research is fairly consistent in showing that the extra medicine doesn’t help. Patients in higher-spending regions, who get more treatment for their conditions, don’t end up healthier, on average, than patients in lower-spending regions who get fewer treatments. These results hold up even after controlling for many factors that affect both medical use and health—things like age, sex, race, education, and income.
One of the earliest of these studies was published in 1969.14 It found that variations in death rates15 across the 50 U.S. states were predicted by variations in income, education, and other variables, but not by variations in medical spending. A later study looked at 18,000 Medicare patients across the country who were diagnosed with the same condition, but who received different levels of treatment.16 Yet another study did the same for Veterans Affairs patients.17 All these studies found that patients treated in higher-spending places were no healthier than other patients.
Perhaps the largest study of regional variations looked at end-of-life hospital care for 5 million Medicare patients across 3,400 U.S. hospital regions. We might hope to see that patients live longer when local hospitals decide to keep them in the intensive care unit (ICU) for longer periods of time, relative to patients in hospitals that kick them out sooner. What the study found, however, was the opposite. For each extra day in the ICU, patients were estimated to live roughly 40 fewer days.18 The same study also estimated that spending an additional $1,000 on a patient resulted in somewhere between a gain of 5 days and a loss of 20 days of life.19 In short, the researchers found “no evidence that improved survival outcomes are associated with increased levels of spending.”20
These studies—along with many others (but not all21)—show that patients who receive more medicine don’t achieve better health outcomes. Still, these are just correlational studies, leaving open the possibility that some hidden factors are influencing the outcomes, and that somehow (despite the absence of correlation) more medicine really does improve our health. To make a strong case, then, we need to turn to the scientific gold standard: the randomized controlled study. This can better reveal whether increased medical care actually causes better outcomes.
Spoiler alert: it doesn’t.
THE RAND HEALTH INSURANCE EXPERIMENT
Between 1974 and 1982, the RAND Corporation, a nonprofit policy think tank, spent $50 million to study the causal effect of medicine on health. It was, and remains, “one of the largest and most comprehensive social science experiments ever performed in the United States.”22
Here’s how the RAND experiment worked. First, 5,800 non-elderly adults were drawn from six U.S. cities. Within each city, all participants were given access to the same set of doctors and hospitals, but they were randomly assigned different levels of medical subsidies. Some patients received a full subsidy for all medical visits and treatments; they could consume as much medicine as they wanted without paying a dime. Other patients received discounts ranging from 75 percent to 5 percent off their total bill.23 Note that a 5 percent discount is effectively unsubsidized, but the researchers needed to give patients some incentive to enroll in the study. Patients remained in the program between three and five years.24
As expected, patients whose medicine was fully subsidized (i.e., free) consumed a lot more of it than other patients. As measured by total spending, patients with full subsidies consumed 45 percent more than patients in the unsubsidized group.25 This 45 percent difference constituted the marginal medicine examined in this study, that is, the medicine that some people got that others did not.
Despite the large differences in medical consumption, however, the RAND experiment found almost no detectable health differences across these groups. To measure health, comprehensive physical exams were given to all participants both before and after the study.26 These exams included 22 physiological measurements like blood pressure, lung capacity, walking speed, and cholesterol levels. The exams also used extensive questionnaires to gauge five measures of overall well-being: physical functioning, role functioning (i.e., at work), mental health, social health, and general health perception.27
For the five measures of overall well-being, all groups fared the same.28 Of the 22 physiological measurements, only one—diastolic blood pressure—showed a statistically significant improvement in the fully subsidized group (relative to the other groups).29 But this is exactly the outcome we should expect purely by chance. At a 95 percent confidence level, roughly 1 in 20 truly null measurements will appear significant anyway, so among 22 noisy measurements we should expect about one false positive even if the extra medicine had no effect at all.
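To see why a single “significant” result out of 22 is unremarkable, here is a minimal Monte Carlo sketch of the multiple-comparisons point (illustrative only; the numbers are not from the RAND data):

```python
# How often does at least one of 22 truly null measurements cross the
# 5 percent significance threshold purely by chance?
import random

def false_positives(n_measurements=22, alpha=0.05):
    # Under the null hypothesis, each measurement independently has an
    # `alpha` chance of looking "statistically significant."
    return sum(random.random() < alpha for _ in range(n_measurements))

n_runs = 100_000
results = [false_positives() for _ in range(n_runs)]

# Expected count: 22 * 0.05 = 1.1 false positives per study.
print("average false positives per study:", sum(results) / n_runs)

# Probability of at least one: 1 - 0.95**22, roughly 0.68.
print("share of studies with >= 1 'significant' result:",
      sum(r >= 1 for r in results) / n_runs)
```

In other words, about two out of three such studies would turn up at least one spurious “improvement” even if the extra medicine did nothing.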
Needless to say, the RAND experiment researchers were surprised by their results. To look more closely, they wondered if their fully subsidized patients were choosing treatments that were less effective than the treatments chosen by other patients. For example, maybe the fully subsidized patients decided to get unnecessary surgeries, or to visit the doctor when they had milder symptoms. Unfortunately, this wasn’t the case. Doctors who were asked to look at patient records couldn’t tell the difference between the fully subsidized and unsubsidized patients. Severity of diagnosis and appropriateness of treatment were statistically indistinguishable between the two groups.30 The marginal medicine wasn’t “less useful medicine,” at least in the eyes of trained professionals.
Now, put yourself in the shoes of someone chosen to participate in the RAND study. Imagine getting assigned to the unsubsidized group, while a lucky friend of yours is assigned a full subsidy. Naturally you’re going to feel disappointed. For the next three to five years, you’ll have to pay for all of your medicine, while your friend gets everything for free. But in addition to the financial burden, you might also fear for your health. If you have a persistent cough, for example, you might decide not to go to the clinic, hoping your cough will clear up on its own. Or you might decide that you can’t afford the cholesterol medication recommended by your doctor.
This fear, however, is misplaced. The RAND study tells us that, on average, you’re going to end up just as healthy as your friend. Your bank account may suffer, but your body will be just fine.
The only other large, randomized study like the RAND experiment is the Oregon Health Insurance Experiment. In 2008, the state of Oregon held a lottery to decide who was eligible to enroll in Medicaid. This gave researchers the opportunity to compare the health outcomes of lottery winners and losers.31
As in the RAND study, lottery winners ended up consuming more medicine than lottery losers.32 Unlike the RAND study, however, the Oregon study found two areas where lottery winners fared significantly better than lottery losers. One of these areas was mental health: lottery winners had a lower incidence of depression.33 The other area was subjective: winners reported that they felt healthier. Surprisingly, however, two-thirds of this subjective benefit appeared immediately following the lottery, before the winning patients had any chance to avail themselves of their newly subsidized healthcare.34 In other words, lottery winners experienced something akin to the placebo effect.
In terms of physiological health, however, the Oregon study echoes the RAND study. By all objective measures, including blood pressure, lottery winners and losers ended up statistically indistinguishable.35
BUT! . . . BUT! . . .
We’ve now arrived at the unpalatable conclusion that people in the United States currently consume too much medicine. We could probably cut back our medical consumption by a third without suffering a large adverse effect on our health.36
This conclusion is more or less a consensus among health policy experts, but it isn’t nearly as well-known or well-received by the general public. Many people find the conclusion hard to reconcile with the extraordinary health gains we have achieved over the past century or two. Relative to our great-great-grandparents, today we live longer, healthier lives—and most of those gains are due to medicine, right?
Actually, no. Most scholars don’t see medicine as responsible for most improvements in health and longevity in developed countries.37 Yes, vaccines, penicillin, anesthesia, antiseptic techniques, and emergency medicine are all great, but their overall impact is actually quite modest. Other factors often cited as plausibly more important include better nutrition, improvements in public sanitation, and safer and easier jobs. Since 1600, for example, people have gotten a lot taller, owing mainly to better nutrition.
More to the point, however, the big historical improvements in medical technology don’t tell us much about the value of the marginal medicine we consume in developed countries. Remember, we’re not asking whether some medicine is better than no medicine, but whether spending $7,000 in a year is better for our health than spending $5,000. It’s perfectly consistent to believe that modern medicine performs miracles and that we frequently overtreat ourselves.
People also find it hard to reconcile the unpalatable conclusion with all the stories we hear from the media about promising new medical research. Today, it’s a better drug for reducing blood pressure. Tomorrow, a new and improved surgical technique. Why don’t these individual improvements add up to large gains in our aggregate studies?
There’s a simple and surprisingly well-accepted answer to this question: most published medical research is wrong.38 (Or at least overstated.) Medical journals are so eager to publish “interesting” new results that they don’t wait for the results to be replicated by others. Consequently, even the most celebrated studies are often statistical flukes. For example, one study looked at the 49 most-cited articles published in the three most prestigious medical journals. Of the 34 that were later tested by other researchers, only 20 were confirmed.39 And these were among the best-designed and most respected studies in all of published medical research. Less-celebrated research would probably be confirmed even less often.
Another hang-up some people have with the unpalatable conclusion is their belief in the value of specific marginal treatments. For example, if your uncle was helped by a pacemaker, but many people can’t afford pacemakers, you might think, “This marginal treatment has great value, so how could marginal medicine on average have no value?” The problem is that marginal medical treatments are just as likely to do harm as good. Prescription drugs almost always have side effects, some of them quite nasty. Surgeries often come with complications. Staying in the hospital puts patients at higher risk of contracting infections and communicable diseases. According to the Centers for Disease Control and Prevention, improper catheter use alone is responsible for 80,000 infections and 30,000 deaths every year.40 Few medical treatments are without risk.
TESTING CONSPICUOUS CARE
The fact that we consume too much medicine has many possible explanations. Perhaps the most tempting is the idea that health is so important to us that we’re willing to try anything, even if it’s unlikely to help much (as the RAND experiment shows).
To show that our medical behaviors are driven by the conspicuous caring motive, rather than “health at any cost,” we have to look at other predictions made by the conspicuous caring hypothesis.