Every Patient Tells a Story

by Lisa Sanders


  Podell started the patient on the corticosteroid prednisone, which is a highly effective anti-inflammatory medicine. Almost immediately her breathing became easier and the cough disappeared. Within a few days she was walking up and down stairs, something she hadn’t been able to do for more than a year. The damage to the nerves in her legs would take longer to treat and may not be completely reversible, but with the diagnosis now clear and effective treatments known, the prognosis for a full recovery was excellent.

  Dr. Podell wasn’t born an excellent diagnostician. He didn’t always know to check and double-check the work of other doctors earlier in the “train” for any particular patient. He learned this and many other invaluable lessons about diagnosis over the course of a long career. And that, in the end, is why we can be hopeful that doctors and other health care providers can avoid or even eliminate the types of cognitive errors we have encountered in this chapter. Yes, doctors are human beings and, thus, are prone to biases, distortions of perspective, and blind spots. But doctors have the capacity to learn from their mistakes, overcome built-in biases, and guard against the kinds of thinking errors that in other professions might only be an annoyance.

  I recall a rather mortifying moment in my own learning curve. I was in my third year of medical school. I was given a very simple task by an experienced doctor: to intubate an unconscious patient. Intubation is to medicine what boiling water is to cooking—one of the most basic techniques you can think of. And yet I blew it. Because both the trachea (the tube for air) and the esophagus (the tube for food) diverge at the back of the throat, it is relatively easy to slide the breathing tube into the esophagus. Doing so, of course, is a potentially deadly mistake. Students are therefore repeatedly taught to listen to the lungs for sounds of air movement after placing the breathing tube. If you’ve accidentally put the tube into the stomach, the lungs will be silent. When I listened I heard the terrible silence that means you’ve made this basic error. Under the gaze of my supervising doctor, I removed the tube and tried again, feeling extremely embarrassed in the process. But the doctor was not annoyed or disappointed. And what he said next has always stuck with me.

  “There’s no shame in intubating the esophagus,” he said. “But there is shame in not checking or catching the error.”

  His point was that errors themselves are unavoidable. Mistakes will always happen—all types of mistakes, from the technical to the cognitive. But that doesn’t mean we throw up our hands in helplessness. The key is designing our systems, our procedures, our protocols, and our own thinking processes to minimize mistakes as much as possible and then to catch mistakes when they are made.

  Medicine is not the only field in which mistakes can be deadly. The airline industry, to take just one example, has had to put into place many systems for preventing and catching human errors. In the 1930s, following a crash in which a test pilot and crewman were killed due to “pilot error,” the air force responded by requiring every pilot and copilot to complete a pre-takeoff checklist before each flight. The rate of accidents plummeted, and eventually this became standard practice for military and commercial pilots. Most airlines also now require pilots and crew to review the flight plan just before takeoff. This is done as a group and anyone in the crew, from pilot to steward, can bring up any problems they see or anticipate. Pilot and crew are drilled on safety procedures for a wide variety of problems, often using flight simulators to make the experience as real and useful as possible. These basic steps are part of a broader movement that has dramatically improved air travel safety.

  There is a national effort now being made to eliminate many of the errors in medicine, to implement layers of checks and double checks that catch errors before they cause harm. Many of the strategies developed by the airline industry have been adapted and incorporated into hospitals and operating rooms throughout the United States. For example, there is an effort to require surgeons to complete a pre-surgical checklist with all the members of the surgical team. Before any operation, the team meets and anyone, from the anesthesiologist to the scrub nurse, can bring up any problem they see or anticipate. A recent study in the New England Journal of Medicine showed that the use of a nineteen-item surgical safety checklist cut mortality by nearly 50 percent and the overall rate of complications by a third. Another study showed that the use of a checklist before certain procedures in the ICU can also reduce medical errors by 80 percent and save lives.

  Most of this effort has been directed at system errors—when the wrong drug is given or the wrong type of blood transfused. When the wrong leg is amputated. These were the errors identified in a report by the Institute of Medicine (IOM), To Err Is Human, published in 2000. Hospitals have been in the forefront of this movement and there are efforts to punish hospitals that have been slow to address these problems.

  Diagnostic error, however, hasn’t been part of that effort. In fact, when one researcher searched the text of the IOM report, the term “medication error” came up seventy times but the term “diagnostic error” came up only twice. This was true even though the study that this report was based on found that diagnostic errors accounted for 17 percent of all the errors made.

  Research into the causes of diagnostic error, and into possible solutions, is in its infancy. Most of the focus in this area is aimed at one of the most fundamental cognitive limitations that doctors must deal with: the limited capacity of our own brains. Medical knowledge has grown so vast that no single human can know it all—no matter how much experience they have, how many patients they’ve seen, how many textbooks they’ve read, or how many journals they keep up with. Some classes of cognitive errors are rooted in this limitation—you can’t see what you don’t know to look for. And even if you know about an illness, you may not think of it if a patient presents with an unusual version of the disease.

  One obvious solution to this dilemma is for doctors to augment their own, personal neural computers with actual computers, which don’t get tired, don’t get confused, and have memory capacities that far outstrip that of any single human brain. But, as we will see, this “obvious” solution has not been nearly as easy to implement as many medical professionals once believed.

  CHAPTER TEN

  Digital Diagnosis

  In 1976, Peter Szolovits had a vision of the future. He had a newly minted doctorate in information sciences from Caltech. He was in the vanguard of the computer-savvy. And he had a dream: that joining the data-gathering skills of the physician with the almost limitless memory and data-crunching ability of the computer would allow unprecedented accuracy in the physician’s art of diagnosis.

  Szolovits came of intellectual age in a time of heady optimism about the capacity of these marvelous inventions. It was the dawn of the modern computer era. Microcomputers were the cutting edge. These computers were the size of a desk rather than the room-sized mainframes that had been the previous state of the art. The personal computer—one that ordinary people could use in their homes—was still just a dream in a Palo Alto garage. Data was still stored on enormous reels of electromagnetic tape. The newly invented LP-sized disk drives were marvels of data storage technology because they could hold 7 megabytes of information.

  The rapidly growing ability of computers to store vast amounts of information seemed to fit perfectly with the needs of medicine, in particular the challenges of medical diagnosis. It was obvious that medical knowledge was also growing exponentially. In a 1976 article, a group of doctors working on a computer simulation of “clinical cognition” estimated that a practicing doctor draws on a store of at least two million medical facts. And it was clear that this mountain of knowledge would only grow larger with time. Using a computer “brain” to augment and support human brains in the often bedeviling work of diagnosing illness seemed to Szolovits a logical and technologically feasible goal.

  During these heady times Szolovits began a series of conversations with physicians about collaborating to design a computer to help doctors meet the demands of the rapidly expanding universe of medical knowledge. He was surprised by what he found. One conversation in particular, with a highly respected senior physician in a university teaching hospital, stands out from those days. After listening to Szolovits describe the possibilities of, for instance, entering a set of symptoms into a computer that would then generate a list of likely diagnoses, the physician interrupted him.

  “Son,” he said, raising his bare hands in front of Szolovits, “these are the hands of a surgeon, not a typist.” And he turned on his heel and walked away.

  It was an early indication that the application of computers to medical diagnosis might not be as straightforward as Szolovits had thought.

  Flash forward thirty years.

  By 2006, Szolovits was a full professor at MIT. An energetic man with just the barest hint of middle-aged thickening and a salt-and-pepper beard, he heads the group there devoted to designing computers and systems of artificial intelligence to address problems of medical decision making and diagnosis. Every fall he shares his ideas and insights into this world in a graduate student seminar called Biomedical Decision Support. I had read about this course and wanted to see what the future of diagnostic software was going to look like.

  I visited at the end of the semester, when students presented their final projects. Sitting in a hard plastic chair in the classroom, I watched as PowerPoint slides whizzed by, accompanied by rapid-fire, acronym-studded sentences. One group presented a new technique to look for “interesting hits” amid vast databases; another presented a user-friendly interface for a Web-based electronic medical records program; a third presented a program that bolsters the privacy of genetic test data. One group exceeded their fifteen-minute slot to describe an elegant program for identifying potentially harmful interactions between prescription drugs that performs better than the current state-of-the-art software.

  All of the projects seemed to improve or expand the boundaries of one or another aspect of health care delivery. Indeed, after the presentations Szolovits chatted with the team who’d created the drug interaction program because not only did it appear to be publishable, it might also be something the students could turn into a business opportunity.

  And yet something was missing. Despite the title of the course, none of the projects addressed the issue that had beckoned so alluringly to Szolovits thirty years ago—the task of improving clinical diagnosis with computers.

  In his office after the class, Szolovits leaned back in his chair, musing.

  “Thirty years ago we thought we could identify all of the best practices in medicine, create a system that would make diagnosis faster and easier, and bring it all to doctors via a computer,” he said. Twenty years ago he wrote a paper for the Annals of Internal Medicine that proclaimed artificial intelligence techniques would eventually give the computer a major role as an expert consultant to the physician. And today? Szolovits sighed. “As it turns out, it’s simply not possible.” It might be an interesting idea, but there’s no market for it. Doctors aren’t interested in buying it and so companies aren’t interested in designing and building it. “Rather than trying to bring the average doctors up to a level of being super-diagnosticians, the emphasis and attention has shifted toward bringing below-average doctors up to current standards and helping even good doctors avoid doing really stupid things. That turns out to provide greater benefits to patients. Plus, there is a financial model for it.”

  Szolovits ticked off some of the major reasons that most doctors today still rely on their own brains and the brains of their colleagues when making a diagnosis rather than a computerized diagnostic aid.

  First, computers can’t collect the data from the patients themselves. These machines excel at data crunching, not data collecting. Physicians must gather the data and then enter it into the program. And the programs themselves don’t make this easy. There are many ways of describing a patient’s symptoms and physical exam findings, and most programs don’t have the language skills to understand them all. You’re left choosing from long pull-down lists of every possible symptom variation, or typing in terms that the computer simply doesn’t recognize.

  There are technical difficulties as well. Doctors, laboratories, and hospitals all use different kinds of computer software. No single system can interface with the huge variety of software used to store patient data. Once again the physician must provide the data if she wants it to be considered. Then there are financial difficulties. Who is going to pay the doctor or hospital to invest in this kind of software? Szolovits noted that hospitals don’t get reimbursed for understanding things, they get reimbursed for doing things.

  But perhaps the greatest difficulty lies in persuading doctors themselves to use this kind of software. When confronted with a confusing clinical picture, it is often faster and easier for doctors to do what doctors have always done—ask for help from other doctors.

  For these and many other reasons, the medical community has yet to embrace any particular computerized diagnostic support system. The dream of a computer system that can “think” better, faster, and more comprehensively than any human doctor has not been realized. For all their limitations, well-trained human beings are still remarkably good at sizing up a problem, rapidly eliminating irrelevant information, and zeroing in on a “good-enough” decision.

  This is why human chess players held out for so long against computer opponents whose raw computational and memory abilities were many orders of magnitude better than those of a human brain. Humans devise shortcut strategies for making decisions and drawing conclusions that are simply impossible for computers. Humans are also extraordinarily good at pattern recognition—in chess, skilled players are able to size up the entire board at a glance and develop a feel, an intuition, for potential threats or opportunities.

  It took decades and millions of dollars to create a computer that was as good as a human at the game of chess. It is a complex game requiring higher order thinking but is two-dimensional and based on clear, fixed rules using pieces that never vary. The diagnosis of human beings, in contrast, is four-dimensional (encompassing the three spatial dimensions and the fourth dimension of time), has no invariable rules, and involves “pieces” (bodies), no two of which are exactly the same.

  In addition, of course, humans have a set of diagnostic tools that computers may never equal—five independent and exquisitely powerful sense organs. At a glance, a doctor can take in and almost immediately process reams of information about a patient—their posture, skin tone, quality of eye contact, aroma, voice quality, personal hygiene, and hints and clues so subtle they defy verbal description. A computer, in contrast, has only words and numbers, typed in by a human, that inadequately represent a living, breathing, and immensely complicated patient.

  Despite the challenges, Szolovits was among those who first attempted to develop computer programs to diagnose medical conditions. Dozens of prototype models were created and tested in a laboratory setting. But most foundered when attempts were made to scale them up, move them into a clinical setting, or make a profit on them. Computers lacked the necessary memory and processing speeds to make vast databases rapidly usable. Until the advent of the World Wide Web, programs had to be distributed via diskettes, or as part of a dedicated computer, or via dial-up modem connections. All of these challenges slowed momentum in the field.

  But even systems that have embraced more recent technological improvements have not seen wide success. A case in point is one of the earlier attempts to use computers to improve diagnosis. In 1984 a team of computer scientists from MIT’s Laboratory for Computer Science teamed up with a group of doctors from Massachusetts General Hospital, just across the river. They worked for two years to develop an electronic medical reference system and an aid to diagnosis. In 1986 the program, dubbed DXplain, was launched with a database of information on five hundred diseases. National distribution of DXplain, with an expanded database of about two thousand diseases, began in 1987 over a precursor to the Internet—a dedicated computer network using dial-up access. Between 1991 and 1996, DXplain was also distributed as a stand-alone version that could be loaded on an individual PC. Since 1996, Internet access to a Web-based version of DXplain has replaced all previous methods of distribution. The program has been continually expanded over the years and is now available to about 35,000 medical personnel, almost all of them in medical schools and teaching hospitals, where it is used as an educational tool.

  DXplain and other first-generation diagnostic decision support software programs use compiled knowledge bases of syndromes and diseases with their characteristic symptoms, signs, and laboratory findings. Users enter the data from their own patients by selecting from a menu of choices, and the programs use Bayesian logic or pattern-matching algorithms to suggest diagnostic possibilities.
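  The Bayesian approach these first-generation programs relied on can be illustrated with a toy sketch. Everything here is invented for illustration—the diseases, the prior probabilities, and the symptom likelihoods are hypothetical placeholders, not figures from DXplain or any real knowledge base—but the scoring logic (prior times the product of per-finding likelihoods, then normalize and rank) is the basic idea:

```python
# Toy sketch of Bayesian diagnostic ranking, in the spirit of
# first-generation decision-support programs. All diseases, priors,
# and likelihoods below are invented for illustration only.

# P(disease) before any findings are entered
PRIORS = {"flu": 0.05, "strep throat": 0.02, "mononucleosis": 0.005}

# P(finding present | disease), one entry per finding
LIKELIHOODS = {
    "flu":           {"fever": 0.9, "sore throat": 0.5, "fatigue": 0.8},
    "strep throat":  {"fever": 0.7, "sore throat": 0.95, "fatigue": 0.4},
    "mononucleosis": {"fever": 0.6, "sore throat": 0.7, "fatigue": 0.95},
}

def rank_diagnoses(findings):
    """Score each disease as prior x product of finding likelihoods
    (the 'naive' assumption that findings are independent given the
    disease), normalize the scores to sum to 1, and rank them."""
    scores = {}
    for disease, prior in PRIORS.items():
        score = prior
        for finding in findings:
            # Unlisted findings get a small default likelihood
            score *= LIKELIHOODS[disease].get(finding, 0.01)
        scores[disease] = score
    total = sum(scores.values())
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for disease, prob in rank_diagnoses(["fever", "sore throat"]):
        print(f"{disease}: {prob:.2f}")
```

A real system differs mainly in scale (thousands of diseases and findings rather than three) and in how the likelihoods are estimated, but the user experience the chapter describes—picking findings from menus and receiving a ranked list of possibilities—falls out of exactly this kind of calculation.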

  “There was a lot of work in the 1980s on using computers in diagnostic problem solving and then, in the 1990s, it sort of petered out,” says Eta Berner, a professor of health informatics at the University of Alabama. Berner may have been part of the reason this work petered out. In 1994 she and thirteen other physicians tested four of the most widely used programs in a paper published in the New England Journal of Medicine. They collected just over one hundred difficult cases from specialists around the country and entered the data from each patient into each of the four programs. Taken together, the four programs arrived at the correct diagnosis in just 63 of the 105 cases in the study. Individually, each program provided the correct diagnosis anywhere from 50 to 70 percent of the time—a solid C performance at best.
