Black Box Thinking


by Matthew Syed


  This reflects the point made earlier about the Virginia Mason Health System. It was only when professionals believed that reports of errors and near misses would be treated as learning opportunities rather than as a pretext for blame that this crucial information started to flow. Managers were initially worried that reducing the penalties for error would lead to an increase in the number of errors. In fact, the opposite happened. Insurance claims fell by a dramatic 74 percent. Similar results have been found elsewhere. Claims and lawsuits made against the University of Michigan Health System, for example, dropped from 262 in August 2001 to 83 in August 2007, following the introduction of an open disclosure policy. The number of lawsuits against the University of Illinois Medical Center fell by half within two years of its creating a system of open reporting.

  “Holding people accountable and [unfairly] blaming people are two quite different things,” Sidney Dekker, one of the world’s leading thinkers on complex systems, has said. “Blaming people may in fact make them less accountable: They will tell fewer accounts, they may feel less compelled to have their voice heard, to participate in improvement efforts.”5

  In a simple world, blame, as a management technique, made sense. On a one-dimensional production line, for example, mistakes are obvious, transparent, and often caused by a lack of focus. Management can reduce them by increasing the penalties for noncompliance. It can also send a motivational message by getting heavy once in a while. People rarely lose concentration when their jobs are on the line.

  But in a complex world this analysis flips on its head. In the worlds of business, politics, aviation, and health care, people often make mistakes for subtle, situational reasons. The problem is often not a lack of focus; it is a consequence of complexity. Increasing punishment, in this context, doesn’t reduce mistakes, it reduces openness. It drives the mistakes underground. The more unfair the culture, the greater the punishment for honest mistakes and the faster the rush to judgment, the deeper this information is buried. This means that lessons are not learned, so the same mistakes are made again and again, leading to harsher punishment, and even deeper concealment and back-covering.

  Consider the case of a major financial institution, which sustained heavy losses after a problem emerged in an automated trading program (I cannot name the bank for legal reasons). The chief technology officer (CTO) admitted that nobody fully understood the IT system that had been created.6 This is entirely normal: major IT systems are invariably complex beyond the understanding of their designers.

  He therefore recommended to the board that the engineers should not be fired. He didn’t think it would be fair. They had done their best, the program had been stress-tested, and it had operated perfectly for a number of months. But he was overruled. The board, which had not engaged in any systematic attempt to understand what had happened, thought that it was “just obvious” that the IT staff were to blame. After all, they had been closest to the system.

  The board had other concerns, too. The failure had cost millions of dollars and had been widely reported in the press. They were worried the event might “contaminate the franchise.” They thought that acting decisively would play better in PR terms. They also argued that it would send a resolute message to staff about the company’s sharp-edged attitude toward failure.

  All this sounds plausible, but now think of the cultural ramifications. The board thought they had sent a strong signal that they were tough on mistakes; they had, in fact, sent a chilling message to their staff. If you fail, we will blame you. If you mess up, you will be scapegoated. They had told their staff, with an eloquence that no memo could ever match: “Act defensively, cover your backs, and cover up the precious information that we need to flourish.”

  The IT department changed rather a lot after the firings, according to the CTO. Meetings became more fraught, colleagues stopped coming up with new ideas, and the flow of information dried up. The board felt that they had protected the brand, but they had, in reality, poisoned it. They had destroyed much of the data crucial to successful adaptation. The bank has had more than a dozen major IT incidents since the initial failure.7

  In management courses today, a contrast is often offered between a “blame culture” and an “anything goes” culture. In this conception, the cultural challenge is to find a sensible balance between these two seemingly competing objectives. Blame too much and people will clam up. Blame too little and they will become sloppy.

  But judged from a deeper level, these are not in conflict after all. The reconciliation of these seemingly contradictory objectives (discipline and openness) lies in black box thinking. A manager who takes the time to probe the data and who listens to the various perspectives has a crucial advantage. Not only does he figure out what really happened in the specific case, he also sends an empowering message to his staff: if you make an honest mistake we will not penalize you.

  This doesn’t mean that blame is never justified. If, after investigation, it turns out that a person was genuinely negligent, then punishment is not only justifiable, but imperative. Professionals themselves demand this. In aviation, for example, pilots are the most vocal in calling for punishments for colleagues who get drunk or demonstrate gross negligence. They don’t want the reputation of their profession undermined by irresponsible behavior.

  But the crucial point here is that justifiable blame does not undermine openness. Why? Because management has taken the time to find out what really happened rather than blaming preemptively, giving professionals the confidence that they can speak up without being penalized for honest mistakes. This is what is sometimes called a “just culture.”

  The question, according to Sidney Dekker, is not Who is to blame? It is not even Where, precisely, is the line between justifiable blame and an honest mistake? because this can never be determined in the abstract. Rather, the question is, Do those within the organization trust the people who are tasked with drawing that line? It is only when people trust those sitting in judgment that they will be open and diligent.8

  The nurses in the high-blame unit at Memorial Hospital didn’t trust their manager. To the hospital bosses, the manager doubtless looked like a no-nonsense leader, the kind of person who instilled toughness and discipline, someone who ensured that nurses were held accountable for their mistakes. It looked as if she was on the side of the most important people of all: patients.

  In reality, however, she was guilty of a distinctive kind of laziness. By failing to engage with the complexity of the system she managed, she was blaming preemptively and thus undermining openness and learning. She was weakening the most important accountability of all: what the philosopher Virginia Sharpe calls “forward-looking accountability.” This is the accountability to learn from adverse events so that future patients are not harmed by avoidable mistakes.

  The nurse managers in the low-blame units did not lack toughness. In many ways, they were the toughest of all. They didn’t wear suits; they wore scrubs. They got their hands dirty. They understood the high-pressure reality of those they managed. They were intimately aware of the complexity of the system and were therefore far more willing to engage with the demanding work of learning from mistakes. They were black box thinkers.

  Here is the summary of the findings for Memorial Nurse Unit 3, rated as the least open culture. Espoused attitude: blame. Nurse manager: hands off. Nurse manager attire: business suit. Nurse manager attitude toward staff: views residents as kids needing discipline, treats nurses in the same way, pays careful attention to reporting structures. Staff’s view of nurse manager: “Treats you as guilty if you make a mistake.” Staff’s view of errors: “You get put on trial.”

  Here is the summary of the findings for Memorial Nurse Unit 1, rated as the most open culture of all. Espoused attitude: learn. Nurse manager: hands on. Nurse manager attire: scrubs. Nurse manager attitude toward staff: “They are capable and seasoned.” Staff’s view of manager: “A superb leader and nurse.” Staff’s view of errors: normal, natural, important to document.

  This is not just about health care; it is about organizational culture in general. When we are dealing with complexity, blaming without proper analysis is one of the most common as well as one of the most perilous things an organization can do. And it rests, in part, on the erroneous belief that toughness and openness are in conflict with each other. They are not.

  This analysis does not apply only to learning from the mistakes that emerge from complex systems. It also applies to the risk-taking and experimentation vital for innovation. Think back to the biologists at Unilever who tested rapidly to drive learning. In all they made 449 “failures.” This kind of process cannot happen if mistakes are regarded as blameworthy. When we are testing assumptions, we are pushing out the frontiers of our knowledge about what works and what doesn’t. Penalizing these mistakes has a simple outcome: it destroys innovation and enlightened risk-taking.

  In short, blame undermines the information vital for meaningful adaptation. It obscures the complexity of our world, deluding us into thinking we understand our environment when we should be learning from it.

  As Amy Edmondson of Harvard Business School put it:

  Executives I’ve interviewed in organizations as different as hospitals and investment banks admit to being torn. How can they respond constructively to failures without giving rise to an anything-goes attitude? If people aren’t blamed for their failures, what will ensure they try as hard as possible? But this concern is based on a false dichotomy. In actuality, a culture that makes it safe to admit and report on failure can—and in some organizational contexts must—coexist with high standards for performance.9

  It is worth noting here, if only briefly, the link between blame and cognitive dissonance. In a culture where mistakes are considered blameworthy they are also likely to be dissonant. When the external culture stigmatizes mistakes, professionals are likely to internalize these attitudes. Blame and dissonance, in effect, are driven by the same misguided attitude to error, something we will return to in Part 5.

  IV

  The blame response can be observed in the laboratory. When volunteers are shown a film of a driver cutting across lanes, for example, they will almost unanimously apportion blame. They will infer that he is selfish, impatient, and out of control. And this inference may turn out to be true. But the situation is not always as cut-and-dried as it first appears.

  After all, the driver may have had the sun in his eyes. He may have been swerving to avoid a car that had veered into his lane. In fact, there are many possible mitigating factors. To most observers looking from the outside in, these do not register. It is not that they think such possibilities are irrelevant; it is that often they don’t even consider them. The brain just plumps for the simplest, most intuitive narrative: “He’s a homicidal fool!” This is sometimes called by the rather inelegant name of the fundamental attribution error.

  It is only when the question is flipped—“What happened the last time you jumped lanes?”—that volunteers pause to consider the situational factors. “Oh, yeah, that was because I thought a child was about to run across the street!” Often these excuses are self-serving. But they are not always so. Sometimes there really are wider issues that lead to mistakes—but we cannot even see them if we do not consider them, still less investigate them.

  Even in an absurdly simple event like this, then, it pays to pause, to look beneath the surface, to challenge the most obvious, reductionist narrative. This is not about being “soft,” but about learning what really went wrong. How much more important is it to engage in this kind of activity in a complex, interdependent system, like a hospital or business?

  It is noteworthy that even experienced aviation investigators fall prey to the fundamental attribution error. When they are first confronted with an accident, the sense-making part of the brain is already creating explanations before the black box has been discovered. This is why studies have shown that their first instinct is almost always (around 90 percent of the time) to blame “operator error.”

  As one airline investigator told me: “When you see an incident, your brain just seems to scream out: ‘What the hell was the pilot thinking!’ It is a knee-jerk response. It takes real discipline to probe the black box data without prejudging the issue.”*

  In a sense, blame is a subversion of the narrative fallacy. It is a way of collapsing a complex event into a simple and intuitive explanation: “It was his fault!”

  Of course, blame can sometimes be a matter not of cognitive bias, but of pure expediency. If we place the blame on someone else, it takes the heat off ourselves. This process can happen at a collective as well as at an individual level.

  Take, for example, the credit crunch of 2007–2008. This was a disaster involving investment bankers, regulators, politicians, mortgage brokers, central bankers, and retail creditors. But the public (and many politicians) chose to focus the blame almost exclusively on bankers.

  Many bankers did indeed behave recklessly. Some would argue that they should have been penalized more severely. But the narrow focus on bankers served to obscure a different truth. Many people had taken out loans they couldn’t afford to repay. Many had maxed out their credit cards. To put it simply: the public had contributed to the crisis too.

  But if we can’t accept our own failures, how can we learn from them?

  • • •

  Overcoming the blame tendency is a defining issue in the corporate world. Ben Dattner, a psychologist and organizational consultant, tells of an experience when he was working at the Republic National Bank of New York. He noticed a piece of paper that a coworker had stapled to his cubicle wall. It read:

  The six phases of a project:

  1. Enthusiasm

  2. Disillusionment

  3. Panic

  4. Search for the guilty

  5. Punishment of the innocent

  6. Rewards for the uninvolved.

  Dattner writes: “I have yet to come across a more accurate description of how most dramas play out in our working lives.”10

  His point is that you do not need to examine a high-profile failure to glimpse the dangers of blame; they can be seen in the most conventional of office environments.

  And this is the real problem. The evolutionary process cannot function without information about what is working, and what isn’t. This information can come from many sources, depending on the context (patients, consumers, experiments, whistleblowers, etc.). But professionals working on the ground have crucial data to share in almost any context. Health care, for example, cannot begin to reform procedures if doctors do not report their failures. And scientific theories cannot evolve if scientists cover up data that reveal the weaknesses in existing hypotheses.

  That is why openness is not an optional extra, a useful cultural add-on. Rather, it is a prerequisite for any adaptation worthy of the name. In a complex world, which we cannot fully understand from above, and must therefore discover from below, this cultural requirement trumps almost every other management issue.

  A transparent approach should not merely determine the response to failures; it should infiltrate decisions on strategy and preferment. Meritocracy is synonymous with forward-looking accountability.

  The alternative is not just that people will spend their time shielding themselves from blame and deflecting it onto others. They will also spend huge amounts of time trying to take credit for other people’s work. When a culture is unfair and opaque, it creates multiple perverse incentives. When a culture is fair and transparent, on the other hand, it bolsters the adaptive process.

  Our public culture is, if anything, the most blame-oriented of all. Politicians are vilified, sometimes with justification, often without. There is little understanding that the mistakes committed in public institutions provide precious opportunities to learn. They are just taken as evidence that political leaders are incompetent, negligent, or both. This adds to the wider phobia toward error, and increases the dissonance of mistakes. It inexorably leads to a culture of spin and subterfuge.

  It might be expedient to condemn newspapers for the tendency to blame public figures, but this would be to miss the point. The reason it is commercially profitable for papers to run stories that apportion instant blame is that there is a ready market for them. After all, we prefer easy stories; we all have an inbuilt bias toward simplicity over complexity. These stories are, in effect, mass-printed by-products of the narrative fallacy.

  In a more progressive culture, this market would be undermined. Such stories would be met with incredulity. Newspapers would have an incentive to provide deeper analysis before apportioning blame. This may sound like wishful thinking, but it indicates a direction of travel.

  The impetus that drives learning from mistakes is precisely the same as the one that aims at a just culture. Forward-looking accountability is nothing more and nothing less than learning from failure. To generate openness, we must avoid preemptive blaming. All these things interlock in a truly adaptive system.

  As the philosopher Karl Popper put it: “True ignorance is not the absence of knowledge, but the refusal to acquire it.”

  Chapter 12

  The Second Victim

  I

  To glimpse the full consequences of a blame culture, let us examine one of the defining British tragedies of recent years: the death of Peter Connelly, a seventeen-month-old baby, in Haringey, North London, in 2007. During the course of the trial, to preserve his anonymity, he was referred to in the British press as “Baby P.”1

 
