
Black Box Thinking

by Matthew Syed


  Libyan Arab Airlines Flight 114 is, in fact, a perfectly ordinary passenger flight from Benghazi to Cairo, which has veered off course, inadvertently flying into the Israeli warzone. Of the 113 passengers and crew, 108 die in the fireball.

  The following day there is understandable outrage around the world. How could the Israelis (who initially denied responsibility) have shot down an unarmed civilian plane? How dare they massacre so many innocents? What on earth were they thinking? The Israeli military leadership is blamed for a terrible tragedy.

  The Israelis, for their part, are perplexed when they discover that Libyan Arab Airlines Flight 114 was a routine flight from Benghazi to Cairo with no terrorist agenda. The Egyptian state was not involved. It was a plane full of innocent travelers and vacationers. The Israeli Air Force have been involved in a devastating tragedy.

  But from their perspective, which the rest of the world has not yet had access to, there was an equal and opposite response: to blame the crew of the airliner. After all, why didn’t they land? They had come within a few thousand feet of the Rephidim runway. Why did they turn west? Why did they keep going even after having their wing tips shot at by the Phantoms?

  Were they mad? Or just criminally negligent?

  • • •

  This is a chapter about the psychology of blame. We will see that this is an all-too-common response to failures and adverse events of all kinds. When something goes wrong, we like to point the finger at someone else. We like to collapse what could be a highly complex event into a simple headline: “Israeli murderers kill 108 innocents” or “negligent crew willfully ignore instruction to land.”

  For the most part in this chapter, we will look at how blame attaches to the failures that occur in safety-critical industries such as aviation and health care, before extending this analysis to other organizations and contexts. We will see that blame is, in many respects, a subversion of the narrative fallacy: an oversimplification driven by biases in the human brain. We will also see that it has subtle but measurable consequences, undermining our capacity to learn.

  A quick recap. We have seen that progress is driven by learning from failure and, in the previous two sections, looked at the evolutionary framework that underpins this idea. We also looked at organizations that have harnessed the evolutionary mechanism to drive progress, and confronted failure to inspire creative leaps. But we have also seen that an evolutionary system on its own is not enough. When we looked at the Virginia Mason Health System in chapter 3, we noted that a new system created to learn from mistakes initially made no difference because professionals didn’t make any reports. The information was suppressed due to a fear of blame and cognitive dissonance.

  If the previous two sections of the book were about systems that institutionalize the evolutionary mechanism, the next two sections will look at the psychological and cultural conditions that enable it to flourish. In Part 5 we will return to our study of cognitive dissonance, which can be thought of as the internal anxieties that cause us to squander the information provided by failure. And we will look at how to combat this tendency, thus unleashing openness, resilience, and growth. In this chapter and the next, we will look at the external pressures that lead people to suppress the information vital for adaptation: namely, the fear of blame. The instinct to blame creates powerful and often self-reinforcing dynamics within organizations and cultures that have to be addressed if meaningful evolution is going to take place.

  Think of it like this: if our first reaction is to assume that the person closest to a mistake has been negligent or malign, then blame will flow freely and the anticipation of blame will cause people to cover up their mistakes. But if our first reaction is to regard error as a learning opportunity, then we will be motivated to investigate what really happened.

  It may be that after proper investigation we discover the person who made the error really has been negligent or malign, in which case blame will be fully justified. But we may find that the error was caused not by negligence, but by a systemic defect—just as with the B-17 bombers in chapter 1, where identical levers side by side in the cockpit (one linked to the flaps and the other to the landing gear) were causing accidents during landing.

  Proper investigation achieves two things. It reveals a crucial learning opportunity, which means that the systemic problem can be fixed, leading to meaningful evolution. And it has a cultural consequence too: professionals will feel empowered to be open about honest mistakes, along with other vital information, because they know that they will not be unfairly penalized, thus driving evolution still further.

  In short, we have to engage with the complexity of the world if we are to learn from it; we have to resist the hardwired tendency to blame instantly, and look deeper into the factors surrounding error if we are going to figure out what really happened and thus create a culture based upon openness and honesty rather than defensiveness and back-covering.

  With this in mind, let us return to Libyan Arab Airlines Flight 114 and try to figure out what actually happened on the afternoon of February 21, 1973. In revisiting the tragedy we will return to the work of Zvi Lanir, a decision-researcher whose influential article “The Reasonable Choice of Disaster,” published in the Journal of Strategic Studies, must rate as among the most gripping academic papers ever written.

  Why, he asks, did the airliner keep flying when it had been confronted by Israeli Phantom jets? Why did it try to escape back toward Egypt? If it was a passenger jet, why did the crew endanger the lives of their passengers, as well as their own lives?

  We only have the answers to these questions for a simple but profound reason: the black box survived the fireball. This gives us the opportunity to conduct a proper investigation, and therefore to do something that the emotionally driven, often self-serving blame game, with its crude simplifications, can never achieve: reform of the system.

  II

  Libyan Arab Airlines Flight 114 is on a routine flight from Benghazi to Cairo. The captain, in the front left of the cockpit, is French, as is the flight engineer, who is sitting behind him. The co-pilot, front right, is Libyan. There has been a sandstorm across Egypt, reducing visibility.

  The pilot and flight engineer are chatting amiably. The co-pilot, who is not proficient in French, is not taking part in the conversation. All three are oblivious to the fact that the aircraft has drifted more than sixty miles off course, and has been flying over Egyptian military installations.

  This deviation should have been picked up by the Egyptian military’s early-warning system, but because of the sandstorm and other subtleties associated with the setup of the system, it is not. The airliner is now about to enter the Israeli warzone over Sinai.

  It is not until 13:44 that the pilot begins to have doubts about their position. He raises his concerns with the engineer, but not with the co-pilot. At 13:52, he receives permission from air traffic control at Cairo Approach to begin his descent.

  At 13:56 the pilot tries to pick up the radio transmitter signal from Cairo airport, but it is in a different position from where he was expecting. His confusion mounts. Are they off course? Is that the correct signal? He continues to fly “as scheduled” but he is now losing situational awareness. Cairo Approach has not yet indicated that he is now more than eighty miles off course.

  At 13:59 Cairo Approach finally informs the pilot that the airliner is deviating from the airport. They tell him to “stick to BEACON and report position,” but the Libyan co-pilot indicates that they are struggling to receive the signal from the radio beacon. A couple of minutes later Cairo Approach ask the pilot to start communicating directly with Cairo Control at the airport, indicating that they believe he is nearing his destination.

  The confusion in the cockpit mounts. Are they near Cairo? Why is that beacon signal so far to the west? But even as they are trying to figure out their position, they are startled by something completely unexpected: the roar of fighter jets. They are now surrounded by high-speed military aircraft.

  Crucially, the co-pilot misidentifies the aircraft as Egyptian MiGs rather than Israeli F-4 Phantoms, despite the highly visible Shield of David on their fuselages. “Four MiGs behind us,” he says.

  Given the good relationship between Libya and Egypt, the crew assume that these planes must be friendly. They assume that they have come to guide the plane, which they now accept must be off course, to Cairo airport. The captain informs Cairo Control: “I guess we have some problems with our heading and we now have four MiGs trying to get behind us.”

  But one of the “MiGs” pulls up alongside the cockpit and starts to gesticulate. He seems to be ordering them to land. Why the aggression? They are friendly, aren’t they? The pilot, clearly now in a state of bewilderment, reacts vocally. “Oh, no! I don’t understand such language,” he says (in other words “that’s no way for the MiGs to behave!”), but he is still communicating in French, and the co-pilot doesn’t understand.

  The crew are beginning to panic. Perception is narrowing. What on earth do these jets want?

  Between 14:06 and 14:10 Cairo Control is silent but the crew are no longer focused on their position. Tracer shells are fired in front of the nose of the aircraft. The crew are becoming frantic. Why are they firing at us?

  They know that there are two airports in the Egyptian capital: Cairo West, the civilian airport, and Cairo East, a military airport. Could it be that they have overflown Cairo West and veered into the territory of Cairo East? If so, perhaps the MiGs are trying to chivvy the airliner back to the civilian airport. Perhaps that is where they want them to land.

  They turn the plane toward the west and start to descend. The captain drops the landing gear into place. But now they notice that they are not at Cairo West after all. They can see military aircraft and hangars below. This is not a civilian airport at all. Where are they? (In fact, they are now descending toward the Israeli Rephidim airbase, more than 100 miles from the Egyptian capital.)

  Their confusion escalates even more. They make the logical decision to ascend and turn west once again, seeking out Cairo West. It is then that the fatal endgame commences. To their horror, the MiGs start to shoot at their wingtips. They are seized by panic. Why are Egyptians firing at a Libyan aircraft? Are they mad?

  At 14:09 the pilot radios to Cairo Control: “We are now shot by your fighter [my italics].” Cairo Control answers: “We are going to tell them [the military authorities] that you are an unreported aircraft . . . and we do not know where you are.” But the call to the military authorities merely adds to the bewilderment. The Egyptian military has no MiGs currently in the air.

  The crew are straining their eyes out of the window of the cockpit. They are desperately trying to make sense of a situation that has grown to Kafkaesque proportions. But it is too late. They are hit by direct fire to the base of their wings. The plane is crippled. They are going down.

  Too late, the co-pilot notices a sign that had been there all along, and which could have solved the entire mystery: the Shield of David on the fuselages of the jet fighters. They are not MiGs after all. They are Israeli Phantoms. They are not in Egyptian airspace. They are over occupied Sinai. If they had known that, they would have landed at Rephidim, and everything would have been solved.

  The crew lose control as the plane careers down into the desert.

  • • •

  Now, who is to blame? The Israeli Air Force command, which shot down a commercial jet? The crew of the Libyan airliner, who flew off course and were unable to understand what the Phantoms were trying to tell them? Egyptian air traffic control, who were not quick enough to alert Flight 114 as to how far they had drifted off course? All three?

  What should be crystal clear is that a desire to apportion blame, before taking the time to understand what really happened, is senseless. It may be intellectually satisfying to have a culprit, someone to hang the disaster on. And it certainly makes life simple. After all, why get into the fine print? It was clearly the fault of Israel/the crew/Egypt Control. What else needs saying?

  Instant blame often leads to what has been called a “circular firing squad.” This is where everyone is blaming everyone else. It is familiar in business, politics and the military. Sometimes, this is a mutual exercise in deflecting responsibility. But often everyone in a circular firing squad is being sincere. They all really think that it is the other guy’s fault.

  It is only when you look at the problem in the round that you glimpse how these contradictory perspectives can be reconciled and you can attempt something that an instantaneous blame game can never achieve: reform of the system. After all, if you don’t know what went wrong, how can you put things right?

  In the aftermath of the shooting down of Libyan Arab Airlines Flight 114, new laws and protocols were developed in an attempt to reduce the number of inadvertent attacks on civilian aircraft by military forces. An amendment to the Chicago Convention governing the problem of aerial intrusions into theaters of war was signed at an extraordinary session of the International Civil Aviation Organization on May 10, 1984. The black box analysis helped to make future tragedies less likely.*2

  It set the stage for evolution.

  III

  Let us move away from the high-altitude misunderstandings that caused Libyan Arab Airlines Flight 114 to crash and focus, instead, on the kinds of errors that blight major organizations. Mistakes are made at businesses, hospitals, and government departments all the time. It is an inevitable part of our everyday interaction with a complex world.

  And yet if professionals think they are going to be blamed for honest mistakes, why would they be open about them? If they do not trust their managers to take the trouble to see what really happened, why would they report what is going wrong, and how can the system adapt?

  And the truth is that companies blame all the time. It is not just because managers instinctively jump to the blame response. There is also a more insidious reason: managers often feel that it is expedient to blame. After all, if a major company disaster can be conveniently pinned on a few “bad apples,” it may play better in PR terms. “It wasn’t us; it was them!”

  There is also a widespread management view that punishment can exert a benign disciplinary effect. It will make people sit up and take notice. By stigmatizing mistakes, by being tough on them, managers think that staff will become more diligent and motivated.

  Perhaps these considerations explain the sheer pervasiveness of the blame game. According to one report by Harvard Business School, executives believe that only around 2 to 5 percent of the failures in their organizations are “truly blameworthy.” But when asked how many of these failures were treated as blameworthy, they admitted that the figure was between 70 and 90 percent.

  This is one of the most pressing cultural issues in the corporate and political worlds today.3

  In 2004, Amy Edmondson, a professor at Harvard Business School, and colleagues conducted an influential study into the consequences of a blame culture. Her particular focus was on drug administration errors at two hospitals in the United States (she calls them University Hospital and Memorial Hospital to protect anonymity), but the implications reached far wider.4

  Drug administration errors are alarmingly common in health care. Edmondson cites the example of a nurse reporting for duty at 3 p.m. and noticing that a bag hanging upside down on an Intensive Care drip contained not heparin, a blood thinner used routinely to prevent clotting after surgery, but lidocaine, a heart rhythm stabilizer. The absence of heparin could have been fatal, although on this occasion the error was addressed before the patient suffered ill effects.

  Sadly, as we know from the first part of the book, medical errors are often much more serious. According to a paper published by the U.S. Food and Drug Administration, errors in drug administration, just one type of medical error, injure approximately 1.3 million patients each year in the United States. Edmondson cites evidence that the average patient can expect between one and two medication errors during every hospital stay.

  In her six-month investigation Edmondson focused on eight different units in Memorial and University hospitals. She found that some of these units, across both hospitals, had tough, disciplined cultures. In one unit, the nurse manager was “dressed impeccably in a business suit” and she had tough discussions with the nurses “behind closed doors.” In another the manager was described as “an authority.”

  Blame in these units was common. Nurses said things like: “The environment is unforgiving; heads will roll,” “You get put on trial” and “You’re guilty if you make a mistake.” The managers thought they had their staff on a tight leash. They thought they had a disciplined, high-performance culture. Mistakes were penalized. The managers believed they were on the side of patients, holding the clinicians to account.

  And, at first, it seemed as if these managers were right. Blame seemed to be having a positive impact on performance. Edmondson was amazed to discover that the nurses in these units were hardly ever reporting mistakes. Remarkably, at the toughest unit of all (as determined by a questionnaire and a subjective survey undertaken by an independent researcher), the number of errors reported was less than 10 percent of another unit’s.

  But then Edmondson probed deeper with the help of an anthropologist and found something curious. These nurses in the so-called disciplined cultures may have been reporting fewer errors, but they were making more errors. In the low-blame teams, on the other hand, this finding was reversed. They were reporting more errors, but were making fewer errors overall.*

  What was going on? The mystery was, in fact, easy to solve. It was precisely because the nurses in low-blame teams were reporting so many errors that they were learning from them, and not making the same mistakes again. Nurses in the high-blame teams were not speaking up because they feared the consequences, and so learning was being squandered.

 
