Black Box Thinking

by Matthew Syed

  In health care, this scientific approach to learning from failure has long been applied to creating new drugs, through clinical trials and other techniques. But the lesson of Virginia Mason is that it is vital to apply this approach to the complex question of how treatments are delivered by real people working in large systems. This is what health care has lacked for so long, and explains, in large part, why preventable medical error kills more people than traffic accidents.

  As Peter Pronovost, professor at the Johns Hopkins University School of Medicine and medical director of the Center for Innovation in Quality Patient Care, put it: “The fundamental problem with the quality of American medicine is that we have failed to view the delivery of health care as a science. You find genes, you find therapies, but how you deliver them is up to you . . . That has been a disaster. It is why we have so many people being harmed.”20

  Pronovost became interested in patient safety when his father died at the age of fifty due to medical error. He was wrongly diagnosed with leukemia when he, in fact, had lymphoma. “When I was a first-year medical student here at Johns Hopkins, I took him to one of our experts for a second opinion,” Pronovost said in an interview with the New York Times. “The specialist said, ‘If you had come earlier, you would have been eligible for a bone marrow transplant, but the cancer is too advanced now.’ The word ‘error’ was never spoken. But it was crystal clear. I was devastated. I was angry at the clinicians and myself. I kept thinking, ‘Medicine has to do better than this.’”21

  Over the following few years, Pronovost devoted his professional life to changing the culture. He wasn’t going to shrug his shoulders at the huge number of deaths occurring every day in American hospitals. He wasn’t prepared to regard these tragedies as unavoidable, or as a price worth paying for a system doing its best in difficult circumstances. Instead, he studied them. He compiled data. He looked for accident “signatures.” He tested and trialed possible reforms.

  One of his most seminal investigations was into the 30,000 to 60,000 deaths caused annually by central line infections (a central line is a catheter placed into a large vein to administer drugs, obtain blood tests, and so on). Pronovost discovered a number of pathways to failure, largely caused by doctors and nurses failing to wear masks or put sterile dressings over the catheter site once the line was in.22 Under the pressure of time, professionals were missing key steps.

  So Pronovost instituted a five-point checklist to ensure that all the steps were properly taken and, crucially, empowered nurses to speak up if surgeons failed to comply. Nurses would normally have been reluctant to do so, but they were reassured that the administration would back them if they did. Almost instantly, the ten-day line-infection rate dropped from 11 percent to 0. This one reform saved 1,500 lives and $100 million over the course of eighteen months in the state of Michigan alone. In 2008 Time magazine named Pronovost one of the 100 most influential people in the world for the scale of suffering he had helped to avert.

  In his remarkable book Safe Patients, Smart Hospitals, Pronovost wrote:

  My dad had suffered and died needlessly at the premature age of fifty thanks to medical errors and poor quality of care. In addition, my family and I also needlessly suffered. As a young doctor I vowed that, for my father and my family, I would do all that I could to improve the quality and safety of care delivered to patients . . . [And that meant] turning the delivery of health care into a science.

  Gary Kaplan, whose work at Virginia Mason has also saved thousands of lives, put the point rather more pithily: “We learn from our mistakes. It is as simple and as difficult as that.”

  The difference between aviation and health care is sometimes couched in the language of incentives. When pilots make mistakes, the result can be their own deaths. When doctors make mistakes, the result is the death of someone else. That, the argument goes, is why pilots are better motivated than doctors to reduce mistakes.

  But this analysis misses the crucial point. Remember that pilots died in large numbers in the early days of aviation. This was not because they lacked the incentive to live, but because the system had so many flaws. Failure is inevitable in a complex world. This is precisely why learning from mistakes is so imperative.

  But in health care, doctors are not supposed to make mistakes. The culture implies that senior clinicians are infallible. Is it any wonder that errors are stigmatized and that the system is set up to ignore and deny rather than investigate and learn?

  To put it a different way, in many circumstances incentives to improve performance can only have an impact if there is a prior understanding of how improvement actually happens. Think back to medieval doctors who killed patients, including their own family members, with bloodletting. This happened not because they didn’t care but because they did care. They thought the treatment worked.

  They trusted in the authority of Galen rather than trusting in the power of criticism and experimentation, which would have revealed the inevitable flaws in his ideas and set the stage for progress. Unless we alter the way we conceptualize failure, incentives for success will often be impotent.

  IV

  Virginia Mason and Michigan are two of the many bright spots that have emerged in health care in recent years. There are others, too. In anesthetics, for example, a study into adverse events in Massachusetts found that in half the anesthetic machines, a clockwise turn of the dial increased the concentration of drugs, but in the other half the very same turn of the dial decreased it.

  This was a defect of a kind similar to the one that had bedeviled the B-17 aircraft in the 1940s, which had identical switches with different functions side by side in the cockpit. But the flaw had not been spotted for a simple reason: accidents had never been analyzed or addressed.

  In the aftermath of the report, however, the machines were redesigned and the death rate dropped by 98 percent.23 This may sound miraculous, but we should not be surprised. Think back to the redesign of the B-17 cockpit controls, which pretty much eliminated runway crashes altogether.

  But amid these bright spots, there remain huge challenges. The Mid Staffordshire NHS Foundation Trust in England, for example, did not address repeated failures for more than a decade, leading to potentially hundreds of avoidable deaths. Warning signs of neglect and substandard care were obvious for years, but were overlooked not only by staff at the hospital, but also by every organization responsible for regulating the NHS, including the government’s Department of Health.24

  In many ways this reveals the depth of the cultural problem in health care. It wasn’t just the professionals failing to be open about their errors (and, in some cases, neglect); the regulators were also failing to investigate those mistakes.

  A different scandal at Furness General Hospital in the north of England revealed similar problems. Repeated errors and poor care in its maternity unit were not revealed for more than ten years. An influential 205-page report published in 2015 found “20 instances of significant or major failures of care at FGH, associated with three maternal deaths and the deaths of 16 babies at or shortly after birth.”25

  But these high-profile tragedies are, in fact, the tip of the iceberg; the deeper problem is the “routine” tragedies happening every day in hospitals around the world. This is not just about individual scandals; it is about health care in general. Shortly before this book went to print, a landmark report by the House of Commons Public Administration Select Committee revealed that the NHS is still struggling to learn from mistakes. “There is no systematic and independent process for investigating incidents and learning from the most serious clinical failures. No single person or organization is responsible and accountable for the quality of clinical investigations or for ensuring that lessons learned drive improvement in safety across the NHS.”

  The committee acknowledged that various reporting and incident structures are now in place, but made it clear that deeper cultural problems continue to prevent them from working. Scott Morrish, for example, a father who lost his son to medical error, found that the subsequent investigations were designed not to expose lessons but to conceal them. “Most of what we know now did not come to light through the analytical or investigative work of the NHS: it came to light despite the NHS,” he said in his evidence to the committee. Looking at NHS England as a whole, the committee concluded: “the processes for investigating and learning from incidents are complicated, take far too long and are preoccupied with blame or avoiding financial liability.* The quality of most investigations therefore falls far short of what patients, their families and NHS staff are entitled to expect.”26

  In the United States similar observations apply. In 2009 a report by the Hearst Foundation found that “20 states have no medical error reporting at all” and that “of the 20 states that require medical error reporting, hospitals report only a tiny percentage of their mistakes, standards vary wildly and enforcement is often nonexistent.” It also found that “only 17 states have systematic adverse-event reporting systems that are transparent enough to be useful to [patients].”27

  One particular problem in health care is not just the limited capacity to learn from mistakes, but also that, even when mistakes are detected, the lessons do not flow throughout the system. The speed at which they spread is sometimes called the “adoption rate.” Aviation, as we have seen, has protocols that enable every airline, pilot, and regulator to access every new piece of information in almost real time. Data is universally accessible and rapidly absorbed around the world. The adoption rate is almost instantaneous.

  However, in health care, the adoption rate has been sluggish for many years, as Michael Gillam, director of the Microsoft Medical Media Lab, has pointed out. In 1601, Captain James Lancaster, an English sailor, performed an experiment on the prevention of scurvy, one of the biggest killers at sea. On one of four ships bound for India, he prescribed three teaspoons of lemon juice a day for the crew. By the halfway point 110 men out of 278 had died on the other three ships. On the lemon-supplied ship, however, everyone survived.

  This was a vital finding. It was a way of avoiding hundreds of needless deaths on future journeys. But it took another 194 years for the British Royal Navy to enact new dietary guidelines. And it wasn’t until 1865 that the British Board of Trade created similar guidelines for the merchant fleet. That is a glacial adoption rate. “The total time from Lancaster’s definitive demonstration of how to prevent scurvy to adoption across the British Empire was 264 years,” Gillam says.28

  Today, the adoption rate in medicine remains chronically slow. One study examined the aftermath of nine major discoveries, including one finding that the pneumococcal vaccine protects adults from respiratory infections, and not just children. The study showed that it took doctors an average of seventeen years to adopt the new treatments for half of American patients. A major review published in the New England Journal of Medicine found that only half of Americans receive the treatment recommended by U.S. national standards.29

  The problem is not that the information doesn’t exist; rather, it is the way it is formatted. As Atul Gawande, a doctor and author, puts it:

  The reason . . . is not usually laziness or unwillingness. The reason is more often that the necessary knowledge has not been translated into a simple, usable and systematic form. If the only thing people did in aviation was issue dense, pages-long bulletins . . . it would be like subjecting pilots to the same deluge of almost 700,000 medical journal articles per year that clinicians must contend with. The information would be unmanageable. Instead . . . crash investigators [distill] the information into its practical essence.30

  Perhaps the most telling example of how far the culture of health care still has to travel is in the attitude to autopsies. A doctor can use intuition, run tests, use scanners, and much else besides to come up with a diagnosis while a patient is still alive. But an autopsy allows his colleagues to look inside a body and actually determine the precise cause of death. It is the medical equivalent of a black box.

  This has rather obvious implications for progress. After all, if the doctor turns out to be wrong in his diagnosis of the cause of death, he may also have been wrong in his choice of treatment in the days, perhaps months, leading up to death. That might enable him to reassess his reasoning, providing learning opportunities for him and his colleagues. It could save the lives of future patients.

  It is for this reason that autopsies have triggered many advances. They have been used to understand the causes of tuberculosis, how to combat Alzheimer’s disease, and so forth. In the armed forces, autopsies on American servicemen and -women who died in Iraq and Afghanistan in the years since 2001 have yielded vital data about injuries from bullets, blasts, and shrapnel.

  This information revealed deficiencies in body armor and vehicle shielding and has led to major improvements in battlefield helmets, protective clothing, and medical equipment31 (just as the “black box” analysis by Abraham Wald improved the armoring of bombers during World War II). Before 2001, however, military personnel were rarely autopsied, meaning that the lessons were not surfaced—leaving their comrades vulnerable to the same, potentially fatal, injuries.

  In the civilian world around 80 percent of families give permission for autopsies to be performed when asked, largely because they provide answers as to why a loved one died.32 But despite this willingness, autopsies are hardly ever performed. Data in the United States indicate that less than 10 percent of deaths are followed by an autopsy.33 Many hospitals perform none at all. Indeed, since 1995 we haven’t even known how many are conducted: the American National Center for Health Statistics no longer collects the data.*34

  All of this precious information is effectively disappearing. A huge amount of potentially life-saving learning is being frittered away. And yet it is not difficult to identify why doctors are reluctant to access the data: it hinges on the prevailing attitude toward failure.

  After all, why conduct an investigation if it might demonstrate that you made a mistake?

  • • •

  This chapter is not intended as a criticism of doctors, nurses, and other staff, who do heroic work every day. I have been looked after with diligence and compassion every time I have been hospitalized. It is also worth pointing out that aviation is not perfect. There are many occasions when it doesn’t live up to its noble ambition of learning from adverse events.

  But the cultural difference between these two industries is of deep importance if we are to understand the nature of closed loops, how they develop even when people are smart, motivated, and caring—and how to break free of them.

  It is also important to note that any direct comparison between aviation and health care should be handled with caution. For a start, health care is more complex. It has a huge diversity of equipment: for example, there are 300 types of surgical pump but just two models of long-distance aircraft. It is also more hands-on, and rarely has the benefit of autopilot—all of which adds to the scope for error.

  But this takes us to the deepest irony of all. When the probability of error is high, the importance of learning from mistakes is more essential, not less. As Professor James Reason, one of the world’s leading experts on system safety, put it: “This is the paradox in a nutshell: health care by its nature is highly error-provoking—yet health workers stigmatize fallibility and have had little or no training in error management or error detection.”35

  There are, of course, limits to the extent to which you can transfer procedures from one safety-critical industry to another. Checklists have transferred successfully from aviation to some health-care systems, but that is no guarantee that other procedures will do so. The key issue, however, is not about transferring procedures, but about transferring an attitude.

  As Gary Kaplan, CEO of Virginia Mason Health System, has said: “You can have the best procedures in the world but they won’t work unless you change attitudes toward error.”

  The underlying problem is not psychological or motivational. It is largely conceptual. And until we change the way we think about failure, the ambition of high performance will often remain a mirage, not just in health care but elsewhere, too.

  • • •

  In May 2005 Martin Bromiley’s persistence paid off. An investigation was commissioned by the general manager of the hospital where his wife died. It was headed by Michael Harmer, professor of Anesthetics and Intensive Care Medicine at Cardiff University School of Medicine.

  On July 30, Martin was called into the hospital to listen to its findings. The report listed a number of recommendations. Each of them could have been lifted directly from the National Transportation Safety Board’s report into United Airlines 173 almost thirty years previously. It called for better communication in operating theaters so that “any member of staff feels comfortable to make suggestions on treatment.”

  It also articulated the concern over the limitations of human awareness. “Given the problem with time passing unnoticed, should such an event occur again, a member of staff should be allocated to record timings of events and keep all involved aware of the elapsed time,” the report said.

  The findings were, in one sense, obvious. In another sense they were revolutionary. Bromiley published the report (with the names of medical staff altered to protect anonymity). He gave it maximum exposure. He wanted all clinicians to read it and learn from it. He even managed to get a BBC television documentary commissioned that explored the case and its ramifications.

  He then started a safety group to push forward reforms. The focus was not merely on the problem of blocked airways, but on the whole field of institutional learning. He heads the organization—the Clinical Human Factors Group—in a voluntary capacity to this day.

 
