SuperFreakonomics


by Steven D. Levitt

By Latham’s calculations, an increase of just 10 or 12 percent in the reflectivity of oceanic clouds would cool the earth enough to counteract even a doubling of current greenhouse gas levels. His solution: use the ocean itself to make more clouds.

  As it happens, the salt-rich spray from seawater creates excellent nuclei for cloud formation. All you have to do is get the spray into the air several yards above the ocean’s surface. From there, it naturally lofts upward to the altitude where clouds form.

  IV has considered a variety of ways to make this happen. At the moment, the favorite idea is a fleet of wind-powered fiberglass boats, designed by Stephen Salter, with underwater turbines that produce enough thrust to kick up a steady stream of spray. Because there is no engine, there is no pollution. The only ingredients—seawater and air—are of course free. The volume of spray (and, therefore, of cloud reflectivity) would be easily adjustable. Nor would the clouds reach land, where sunshine is so important to agriculture. The estimated price tag: less than $50 million for the first prototypes and then a few billion dollars for a fleet of vessels large enough to offset projected warming at least until 2050. In the annals of cheap and simple solutions to vexing problems, it is hard to think of a more elegant example than John Latham’s soggy mirrors—geoengineering that the greenest green could love.

  That said, Myhrvold fears that even IV’s gentlest proposals will find little favor within certain environmentalist circles. To him, this doesn’t compute.

  “If you believe that the scary stories could be true, or even possible, then you should also admit that relying only on reducing carbon-dioxide emissions is not a very good answer,” he says. In other words: it’s illogical to believe in a carbon-induced warming apocalypse and believe that such an apocalypse can be averted simply by curtailing new carbon emissions. “The scary scenarios could occur even if we make herculean efforts to reduce our emissions, in which case the only real answer is geoengineering.”

  Al Gore, meanwhile, counters with his own logic. “If we don’t know enough to stop putting 70 million tons of global-warming pollution into the atmosphere every day,” he says, “how in God’s name can we know enough to precisely counteract that?”

  But if you think like a cold-blooded economist instead of a warmhearted humanist, Gore’s reasoning doesn’t track. It’s not that we don’t know how to stop polluting the atmosphere. It’s that we don’t want to stop, or aren’t willing to pay the price.

  Most pollution, remember, is a negative externality of our consumption. As hard as engineering or physics may be, getting human beings to change their behavior is probably harder. At present, the rewards for limiting consumption are weak, as are the penalties for overconsuming. Gore and other environmentalists are pleading for humankind to consume less and therefore pollute less, and that is a noble invitation. But as incentives go, it’s not a very strong one.

  And collective behavior change, as beguiling as that may sound, can be maddeningly elusive. Just ask Ignatz Semmelweis.

  Back in 1847, when he solved the mystery of puerperal fever, Semmelweis was hailed as a hero—wasn’t he?

  Quite the opposite. Yes, the death rate in Vienna General’s maternity ward plummeted when he ordered doctors to wash their hands after performing autopsies. Elsewhere, however, doctors ignored Semmelweis’s findings. They even ridiculed him. Surely, they reasoned, such a ravaging illness could not be prevented simply by washing one’s hands! Moreover, doctors of that era—not the humblest lot—couldn’t accept the idea that they were the root of the trouble.

  Semmelweis grew frustrated, and in time his frustration curdled into vitriol. He cast himself as a scorned messiah, labeling every critic of his theory a murderer of women and babies. His arguments were often nonsensical; his personal behavior became odd, marked by lewdness and sexual impropriety. In retrospect, it’s safe to say that Ignatz Semmelweis was going mad. At the age of forty-seven, he was tricked into entering a sanitarium. He tried to escape, was forcibly restrained, and died within two weeks, his reputation shattered.

  But that doesn’t mean he wasn’t right. Semmelweis was posthumously vindicated by Louis Pasteur’s research in germ theory, after which it became standard practice for doctors to scrupulously clean their hands before treating patients.

  So do contemporary doctors follow Semmelweis’s orders?

  A raft of recent studies have shown that hospital personnel wash or disinfect their hands in fewer than half the instances they should. And doctors are the worst offenders, more lax than either nurses or aides.

  This failure seems puzzling. In the modern world, we tend to believe that dangerous behaviors are best solved by education. That is the thinking behind nearly every public-awareness campaign ever undertaken, from global warming to AIDS prevention to drunk driving. And doctors are the most educated people in the hospital.

  In a 1999 report called “To Err Is Human,” the Institute of Medicine estimated that between 44,000 and 98,000 Americans die each year because of preventable hospital errors—more than deaths from motor-vehicle crashes or breast cancer—and that one of the leading errors is wound infection. The best medicine for stopping infections? Getting doctors to wash their hands more frequently.

  In the wake of this report, hospitals all over the country hustled to fix the problem. Even a world-class hospital like Cedars-Sinai Medical Center in Los Angeles found it needed improvement, with a hand-hygiene rate of just 65 percent. Its senior administrators formed a committee to identify the reasons for this failure.

  For one, they acknowledged, doctors are incredibly busy, and time spent washing hands is time not spent treating patients. Craig Feied, our emergency-room revolutionary from Washington, estimates that he often interacted with more than one hundred patients per shift. “If I ran to wash my hands every time I touched a patient, following the protocol, I’d spend nearly half my time just standing over a sink.”

  Sinks, furthermore, aren’t always as accessible as they should be and, in patient rooms especially, they are sometimes barricaded by equipment or furniture. Cedars-Sinai, like a lot of other hospitals, had wall-mounted Purell dispensers for handy disinfection, but these too were often ignored.

  Doctors’ hand-washing failures also seem to have psychological components. The first might be (generously) called a perception deficit. During a five-month study in the intensive-care unit of an Australian children’s hospital, doctors were asked to track their own hand-washing frequency. Their self-reported rate? Seventy-three percent. Not perfect, but not so terrible either.

  Unbeknownst to these doctors, however, their nurses were spying on them, and recorded the docs’ actual hand-hygiene rate: a paltry 9 percent.

  Paul Silka, an emergency-room doctor at Cedars-Sinai who also served as the hospital’s chief of staff, points to a second psychological factor: arrogance. “The ego can kick in after you’ve been in practice a while,” he explains. “You say: ‘Hey, I couldn’t be carrying the bad bugs. It’s the other hospital personnel.’”

  Silka and the other administrators at Cedars-Sinai set out to change their colleagues’ behavior. They tried all sorts of incentives: gentle cajoling via posters and e-mail messages; greeting doctors every morning with a bottle of Purell; establishing a Hand Hygiene Safety Posse that roamed the wards, giving a $10 Starbucks card to doctors who were seen properly washing their hands. You might think the highest earners in a hospital would be immune to a $10 incentive. “But none of them turned down the card,” Silka says.

  After several weeks, the hand-hygiene rate at Cedars-Sinai had increased but not nearly enough. This news was delivered by Rekha Murthy, the hospital’s epidemiologist, during a lunch meeting of the Chief of Staff Advisory Committee. There were roughly twenty members, most of them top doctors in the hospital. They were openly discouraged by the report. When lunch was over, Murthy handed each of them an agar plate—a sterile petri dish loaded with a spongy layer of agar. “I would love to culture your hand,” she told them.

  They pressed their palms into the plates, which Murthy sent to the lab. The resulting images, Silka recalls, “were disgusting and striking, with gobs of colonies of bacteria.”

  Here were the most important people in the hospital, telling everyone else how to change their behavior, and yet even their own hands weren’t clean! (And, most disturbingly, this took place at a lunch meeting.)

  It may have been tempting to sweep this information under the rug. Instead, the administration decided to harness the disgusting power of the bacteria-laden handprints by installing one of them as the screen saver on computers throughout the hospital. For doctors—lifesavers by training, and by oath—this grisly warning proved more powerful than any other incentive. Hand-hygiene compliance at Cedars-Sinai promptly shot up to nearly 100 percent.

  As word got around, other hospitals began copying the screen-saver solution. And why not? It was cheap, simple, and effective.

  A happy ending, right?

  Yes, but…think about it for a moment. Why did it take so much effort to persuade doctors to do what they have known they should do since the age of Semmelweis? Why was it so hard to change their behavior when the price of compliance (a simple hand-wash) is so low and the potential cost of failure (the loss of a human life) so high?

  Once again, as with pollution, the answer has to do with externalities.

  When a doctor fails to wash his hands, his own life isn’t the one that is primarily endangered. It is the next patient he treats, the one with the open wound or the compromised immune system. The dangerous bacteria that patient receives are a negative externality of the doctor’s actions—just as pollution is a negative externality of anyone who drives a car, jacks up the air conditioner, or sends coal exhaust up a smokestack. The polluter has insufficient incentive to not pollute, and the doctor has insufficient incentive to wash his hands.

  This is what makes the science of behavior change so difficult.

  So instead of collectively wringing our filthy hands about behavior that is so hard to change, what if we can come up with engineering or design or incentive solutions that supersede the need for such change?

  That’s what Intellectual Ventures has in mind for global warming, and that is what public-health officials have finally embraced to cut down on hospital-acquired infections. Among the best solutions: using disposable blood-pressure cuffs on incoming patients; infusing hospital equipment with silver ion particles to create an antimicrobial shield; and forbidding doctors to wear neckties because, as the U.K. Department of Health has noted, they “are rarely laundered,” “perform no beneficial function in patient care,” and “have been shown to be colonized by pathogens.”

  That’s why Craig Feied has worn bow ties for years. He has also helped develop a virtual-reality interface that allows a gowned and gloved-up surgeon to scroll through X-rays on a computer without actually touching it—because computer keyboards and mice tend to collect pathogens at least as effectively as a doctor’s necktie. And the next time you find yourself in a hospital room, don’t pick up the TV remote control until you’ve disinfected the daylights out of it.

  Perhaps it’s not so surprising that it’s hard to change people’s behavior when someone else stands to reap most of the benefit. But surely we are capable of behavior change when our own welfare is at stake, yes?

  Sadly, no. If we were, every diet would always work (and there would be no need for diets in the first place). If we were, most smokers would be ex-smokers. If we were, no one who ever took a sex-ed class would be party to an unwanted pregnancy. But knowing and doing are two different things, especially when pleasure is involved.

  Consider the high rate of HIV and AIDS in Africa. For years, public-health officials from around the world have been fighting this problem. They have preached all sorts of behavior change—using condoms, limiting the number of sexual partners, and so on. Recently, however, a French researcher named Bertran Auvert ran a medical trial in South Africa and came upon findings so encouraging that the trial was halted so the new preventive measure could be applied at once.

  What was this magical treatment?

  Circumcision. For reasons Auvert and other scientists do not fully understand, circumcision was found to reduce the risk of HIV transmission by as much as 60 percent in heterosexual men. Subsequent studies in Kenya and Uganda corroborated Auvert’s results.

  All over Africa, foreskins began to fall. “People are used to policies that target behaviors,” said one South African health official, “but circumcision is a surgical intervention—it’s cold, hard steel.”

  The decision to undergo an adult circumcision is obviously a deeply personal one. We would hardly presume to counsel anyone in either direction. But for those who do choose circumcision, a simple word of advice: before the doctor gets anywhere near you, please make sure he washes his hands.

  EPILOGUE

  MONKEYS ARE PEOPLE TOO

  The branch of economics concerned with issues like inflation, recessions, and financial shocks is known as macroeconomics. When the economy is going well, macroeconomists are lauded as heroes; when it turns sour, as it did recently, they catch a lot of the blame. In either case, the headlines go to the macroeconomists.

  We hope that after reading this book, you’ll realize there is a whole different breed of economist out there—microeconomists—lurking in the shadows. They seek to understand the choices that individuals make, not just in terms of what they buy but also how often they wash their hands and whether they become terrorists.

  Some of these microeconomists do not even limit their research to the human race.

  Keith Chen, the son of Chinese immigrants, is a hyper-verbal, sharp-dressing thirty-three-year-old with spiky hair. After an itinerant upbringing in the rural Midwest, Chen attended Stanford, where after a brief infatuation with Marxism, he made an about-face and took up economics. Now he is an associate professor of economics at Yale.

  His research agenda was inspired by something written long ago by Adam Smith, the founder of classical economics: “Nobody ever saw a dog make a fair and deliberate exchange of one bone for another with another dog. Nobody ever saw one animal by its gestures and natural cries signify to another, this is mine, that yours; I am willing to give this for that.”

  In other words, Smith was certain that humankind alone had a knack for monetary exchange.

  But was he right?

  In economics, as in life, you’ll never find the answer to a question unless you’re willing to ask it, as silly as it may seem. Chen’s question was simply this: What would happen if I could teach a bunch of monkeys to use money?

  Chen’s monkey of choice was the capuchin, a cute, brown New World monkey about the size of a one-year-old child, or at least a scrawny one-year-old who has a very long tail. “The capuchin has a small brain,” Chen says, “and it’s pretty much focused on food and sex.” (This, we would argue, doesn’t make the capuchin so different from many people we know, but that’s another story.) “You should really think of a capuchin as a bottomless stomach of want. You can feed them marshmallows all day, they’ll throw up, and then come back for more.”

  To an economist, this makes the capuchin an excellent research subject.

  Chen, along with Venkat Lakshminarayanan, went to work with seven capuchins at a lab set up by the psychologist Laurie Santos at Yale–New Haven Hospital. In the tradition of monkey labs everywhere, the capuchins were given names—in this case, derived from characters in James Bond films. There were four females and three males. The alpha male was named Felix, after the CIA agent Felix Leiter. He was Chen’s favorite.

  The monkeys lived together in a large, open cage. Down at one end was a much smaller cage, the testing chamber, where one monkey at a time could enter to take part in experiments. For currency, Chen settled on a one-inch silver disc with a hole in the middle—“kind of like Chinese money,” he says.

  The first step was to teach the monkeys that the coins had value. This took some effort. If you give a capuchin a coin, he will sniff it and, after determining he can’t eat it (or have sex with it), he’ll toss it aside. If you repeat this several times, he may start tossing the coins at you, and hard.

  So Chen and his colleagues gave the monkey a coin and then showed a treat. Whenever the monkey gave the coin back to the researcher, it got the treat. It took many months, but the monkeys eventually learned that the coins could buy the treats.

  It turned out that individual monkeys had strong preferences for different treats. A capuchin would be presented with twelve coins on a tray—his budget constraint—and then be offered, say, Jell-O cubes by one researcher and apple slices by another. The monkey would hand his coins to whichever researcher held the food he preferred, and the researcher would fork over the goodies.

  Chen now introduced price shocks and income shocks to the monkeys’ economy. Let’s say Felix’s favorite food was Jell-O, and he was accustomed to getting three cubes of it for one coin. How would he respond if one coin suddenly bought just two cubes?

  To Chen’s surprise, Felix and the others responded rationally. When the price of a given food rose, the monkeys bought less of it, and when the price fell, they bought more. The most basic law of economics—that the demand curve slopes downward—held for monkeys as well as humans.
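The monkeys’ price-shock response can be sketched as a toy budget problem. Everything below is illustrative, not Chen’s actual data: the 12-coin budget comes from the text, but the square-root utility function and the exact treat quantities are assumptions. The point is only that a rational spender with diminishing marginal utility buys fewer Jell-O cubes once a coin buys fewer of them — the demand curve slopes downward.

```python
# Toy model: a capuchin with a 12-coin budget splits coins between
# Jell-O cubes and apple slices. "Price" is expressed as treats per
# coin, so fewer cubes per coin means Jell-O has gotten more expensive.
# Utility is sqrt(jello) + sqrt(apple): diminishing marginal utility.
import math

def best_bundle(budget, jello_per_coin, apple_per_coin):
    """Search all coin allocations; return the (jello, apple) bundle
    that maximizes total utility."""
    best, best_u = (0, 0), -1.0
    for coins_on_jello in range(budget + 1):
        jello = coins_on_jello * jello_per_coin
        apple = (budget - coins_on_jello) * apple_per_coin
        u = math.sqrt(jello) + math.sqrt(apple)
        if u > best_u:
            best_u, best = u, (jello, apple)
    return best

before = best_bundle(12, jello_per_coin=3, apple_per_coin=2)  # one coin buys 3 cubes
after = best_bundle(12, jello_per_coin=2, apple_per_coin=2)   # price shock: only 2
```

Under these made-up numbers, the optimal bundle shifts away from Jell-O after the shock: `before` allocates more cubes than `after`, exactly the downward-sloping-demand behavior the capuchins showed.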

  Now that he had witnessed their rational behavior, Chen wanted to test the capuchins for irrational behavior. He set up two gambling games. In the first, a capuchin was shown one grape and, depending on a coin flip, either got only that grape or won a bonus grape as well. In the second game, the capuchin started out seeing two grapes, but if the coin flip went against him, the researchers took away one grape and the monkey got only one.

  In both cases, the monkey got the same number of grapes on average. But the first gamble was framed as a potential gain while the second was framed as a potential loss.
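The two gambles are arithmetically identical, which is what makes the monkeys’ preference irrational. Assuming a fair coin (as the text implies), a one-line expected-value check makes the equivalence explicit:

```python
# Both gambles pay 1 grape or 2 grapes with equal probability; only
# the framing (start low and maybe gain vs. start high and maybe lose)
# differs, not the expected payoff.
def expected_value(equally_likely_outcomes):
    """Mean payoff over equally likely outcomes."""
    return sum(equally_likely_outcomes) / len(equally_likely_outcomes)

gain_frame = expected_value([1, 2])  # shown 1 grape, may win a bonus
loss_frame = expected_value([2, 1])  # shown 2 grapes, may lose one
```

Both frames work out to 1.5 grapes on average, so a risk-neutral monkey should be indifferent between the two researchers.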

  How did the capuchins react?

  Given that the monkeys aren’t very smart in the first place, you might assume that any gambling strategy was well beyond their capabilities. In that case, you’d expect them to prefer it when a researcher initially offered them two grapes instead of one. But precisely the opposite happened! Once the monkeys figured out that the two-grape researcher sometimes withheld the second grape and that the one-grape researcher sometimes added a bonus grape, the monkeys strongly preferred the one-grape researcher. A rational monkey wouldn’t have cared, but these irrational monkeys suffered from what psychologists call “loss aversion.” They behaved as if the pain from losing a grape was greater than the pleasure from gaining one.
