The Intelligence Trap

by David Robson


  Tellingly, Toyota had set up a high-level task force in 2005 to deal with quality control, but the company disbanded the group in early 2009, claiming that quality ‘was part of the company’s DNA and therefore they didn’t need a special committee to enforce it’. Senior management also turned a deaf ear to specific warnings from more junior executives, while focusing on rapid corporate growth.17 This was apparently a symptom of a generally insular way of operating that did not welcome outside input, in which important decisions were made only by those at the very top of the hierarchy. Like Nokia’s management, it seems they simply didn’t want to hear bad news that might sidetrack them from their broader goals.

  The ultimate cost to Toyota’s brand was greater than any of the savings they imagined they would make by not heeding these warnings. By 2010, 31 per cent of Americans believed that Toyota cars were unsafe18 – a dramatic fall from grace for a company that was once renowned for its products’ quality and customer satisfaction.

  Or consider Air France Flight 4590 from Paris to New York City. As it prepared for take-off on 25 July 2000, the Concorde airliner ran over some sharp debris left on the runway, causing a 4.5 kg chunk of tyre to fly into the underside of the aircraft’s wing. The resulting shockwave ruptured a fuel tank, and the leaking fuel caught light during take-off. The plane crashed into a nearby hotel, killing 113 people in total. Subsequent analyses revealed 57 previous instances in which a Concorde tyre had burst on the runway, and in one case the damage was very nearly the same as for Flight 4590 – except, through sheer good luck, the leaking fuel had failed to ignite. Yet these near misses were not taken as serious warning signs requiring urgent action.19

  These crises are dramatic case studies in high-risk industries, but Tinsley argues that the same thinking processes will present latent dangers for many other organisations. She points to research on workplace safety, for instance, showing that for every thousand near misses, there will be one serious injury or fatality and at least ten smaller injuries.20

  Tinsley does not frame her work as an example of ‘functional stupidity’, but the outcome bias appears to arise from the same lack of reflection and curiosity that Spicer and Alvesson have outlined.

  And even small changes to a company’s environment can increase the chances that near misses are spotted. In both lab experiments and data gathered during real NASA projects, Tinsley has found that people are far more likely to note and report near misses when safety is emphasised as part of the overall culture – in mission statements, for instance – sometimes with as much as a five-fold increase in reporting.21

  As an example, consider one of those scenarios involving the NASA manager planning the unmanned space mission. Participants told that ‘NASA, which pushes the frontiers of knowledge, must operate in a high-risk, risk-tolerant environment’ were much less likely to notice the near miss. Those told that ‘NASA, as a highly visible organization, must operate in a high-safety, safety-first environment’, in contrast, successfully identified the latent danger. The same was also true when the participants were told that they would need to justify their judgement to the board. ‘Then the near miss also looks more like the failure condition,’ Tinsley explains.

  Remember we are talking about unconscious biases here: no participant had weighed up the near miss and decided it was worth ignoring; unless they were prompted, they simply didn’t think about it at all. Some companies may expect that the value of safety is already implicitly understood, but Tinsley’s work demonstrates that it needs to be highly salient. It is telling that NASA’s motto had been ‘Faster, Better, Cheaper’ for most of the decade leading up to the Columbia disaster.

  Before we end our conversation, Tinsley emphasises that some risks will be inevitable; the danger is when we are not even aware they exist. She recalls a seminar during which a NASA engineer raised his hand in frustration. ‘Do you not want us to take any risks?’ he asked. ‘Space missions are inherently risky.’

  ‘And my response was that I’m not here to tell you what your risk tolerance should be. I’m here to say that when you experience a near miss, your risk tolerance will increase and you won’t be aware of it.’ As the fate of the Challenger and Columbia missions shows, no organisation can afford that blind spot.

  In hindsight, it is all too easy to see how Deepwater Horizon became a hotbed of irrationality before the spill. By the time of the explosion, it was six weeks behind schedule, with the delay costing $1 million a day, and some staff were unhappy with the pressure they were subjected to. In one email, written six days before the explosion, the engineer Brian Morel labelled it ‘a nightmare well that has everyone all over the place’.

  These are exactly the high-pressure conditions that are now known to reduce reflection and analytical thinking. The result was a collective blind spot that prevented many of Deepwater Horizon’s employees (from BP and its partners, Halliburton and Transocean) from seeing the disaster looming, and contributed to a series of striking errors.

  To try to reduce the accumulating costs, for instance, they chose to use a cheaper mix of cement to secure the well, without investigating the possibility that it might not be stable enough for the job at hand. They also reduced the total volume of cement used – violating their own guidelines – and skimped on the equipment required to hold the well in place.

  On the day of the accident itself, the team avoided completing the full suite of tests to ensure the seal was secure, while also ignoring anomalous results that might have predicted the build-up of pressure inside the well.22 Worse still, the equipment necessary to contain the blowout, once it occurred, was poorly maintained.

  Each of these risk factors could have been identified long before disaster struck; as we have seen, there were many minor blowouts that should have served as significant warnings of the underlying dangers and prompted new, updated safety procedures. Thanks to lucky circumstances, however – even the random direction of the wind – none had been fatal, and so the underlying factors, including severe corner-cutting and inadequate safety training, had not been examined.23 And the more they tempted fate, the more they were lulled into complacency and the less concerned they became about cutting corners.24 It was a classic case of the outcome bias that Tinsley has documented – and the error seemed to have been prevalent across the whole of the oil industry.

  Eight months previously, another oil and gas company, PTT, had even witnessed a blowout and spill in the Timor Sea, off Australia. Halliburton, which had also worked on the Macondo well, was the company behind the cement job there, too, and although a subsequent report had claimed that Halliburton itself held little responsibility, it might still have been taken as a vivid reminder of the dangers involved. A lack of communication between operators and experts, however, meant the lessons were largely ignored by the Deepwater Horizon team.25

  In this way, we can see that the disaster wasn’t down to the behaviour of any one employee, but to an endemic lack of reflection, engagement and critical thinking that meant decision makers across the project had failed to consider the true consequences of their actions.

  ‘It is the underlying “unconscious mind” that governs the actions of an organization and its personnel’, a report from the Center for Catastrophic Risk Management (CCRM) at the University of California, Berkeley, concluded.26 ‘These failures . . . appear to be deeply rooted in a multi-decade history of organizational malfunction and short-sightedness.’ In particular, the management had become so obsessed with pursuing further success, they had forgotten their own fallibilities and the vulnerabilities of the technology they were using. They had ‘forgotten to be afraid’.

  Or as Karlene Roberts, the director of the CCRM, told me in an interview, ‘Often, when organisations look for the errors that caused something catastrophic to happen, they look for someone to name, blame and then train or get rid of . . . But it’s rarely what happened on the spot that caused the accident. It’s often what happened years before.’

  If this ‘unconscious mind’ represents an organisational intelligence trap, how can an institution wake up to latent risks?

  In addition to studying disasters, Roberts’ team has also examined the common structures and behaviours of ‘high-reliability organisations’ such as nuclear power plants, aircraft carriers, and air traffic control systems that operate with enormous uncertainty and potential for hazard, yet somehow achieve extremely low failure rates.

  Much like the theories of functional stupidity, their findings emphasise the need for reflection, questioning, and the consideration of long-term consequences – including, for example, policies that give employees the ‘licence to think’.

  Refining these findings to a set of core characteristics, Karl Weick and Kathleen Sutcliffe have shown that high-reliability organisations all demonstrate:27

  Preoccupation with failure: The organisation is never complacent with success, and workers assume ‘each day will be a bad day’. The organisation rewards employees for self-reporting errors.

  Reluctance to simplify interpretations: Employees are rewarded for questioning assumptions and for being sceptical of received wisdom. At Deepwater Horizon, for instance, more engineers and managers might have raised concerns about the poor quality of the cement and asked for further tests.

  Sensitivity to operations: Team members continue to communicate and interact, to update their understanding of the situation at hand and search for the root causes of anomalies. On Deepwater Horizon, the rig staff should have been more curious about the anomalous pressure tests, rather than accepting the first explanation.

  Commitment to resilience: Building the necessary knowledge and resources to bounce back after error occurs, including regular ‘pre-mortems’ and discussions of near misses. Long before the Deepwater Horizon explosion, BP might have examined the underlying organisational factors leading to previous, less serious accidents, and ensured all team members were adequately prepared to deal with a blowout.

  Deference to expertise: This relates to the importance of communication between ranks of the hierarchy, and the intellectual humility of those at the top. Executives need to trust the people on the ground. Toyota and NASA, for instance, both failed to heed the concerns of engineers; similarly, after the Deepwater Horizon explosion, the media reported that workers at BP had been scared of raising concerns in case they were fired.28

  The commitment to resilience may be evident in small gestures that allow workers to know that their commitment to safety is valued. On one aircraft carrier, the USS Carl Vinson, a crewmember reported that he had lost a tool on deck that could have been sucked into a jet engine. All aircraft were redirected to land – at significant cost – but rather than punishing the team member for his carelessness, he was commended for his honesty in a formal ceremony the next day. The message was clear – errors would be tolerated if they were reported, meaning that the team as a whole were less likely to overlook much smaller mistakes.

  The US Navy, meanwhile, has employed the SUBSAFE system to reduce accidents on its nuclear submarines. The system was first implemented following the loss of the USS Thresher in 1963, which flooded due to a poor joint in its pumping system, resulting in the deaths of 112 Navy personnel and 17 civilians.29 SUBSAFE specifically instructs officers to experience ‘chronic uneasiness’, summarised in the saying ‘trust, but verify’, and in more than five decades since, they haven’t lost a single submarine using the system.30

  Inspired by Ellen Langer’s work, Weick refers to these combined characteristics as ‘collective mindfulness’. The underlying principle is that the organisation should implement any measures that encourage its employees to remain attentive, proactive, open to new ideas, questioning of every possibility, and devoted to discovering and learning from mistakes, rather than simply repeating the same behaviours over and over.

  There is good evidence that adopting this framework can result in dramatic improvements. Some of the most notable successes of applying collective mindfulness have come from healthcare. (We’ve already seen how doctors are changing how individuals think – but this specifically concerns the overall culture and group reasoning.) These measures involve empowering junior staff to question assumptions and to be more critical of the evidence presented to them, and encouraging senior staff to actively engage the opinions of those beneath them so that everyone is accountable to everyone else. The staff also have regular ‘safety huddles’, proactively report errors and perform detailed ‘root-cause analyses’ to examine the underlying processes that may have contributed to any mistake or near miss.

  Using such techniques, one Canadian hospital, St Joseph’s Healthcare in London, Ontario, has reduced medication errors (the wrong drugs given to the wrong person) to just two mistakes in more than 800,000 medications dispensed in the second quarter of 2016. The Golden Valley Memorial in Missouri, meanwhile, has reduced drug-resistant Staphylococcus aureus infections to zero using the same principles, and patient falls – a serious cause of unnecessary injury in hospitals – have dropped by 41 per cent.31

  Despite the additional responsibilities, staff in mindful organisations often thrive on the extra workload, with a lower turnover rate than at institutions that do not impose these measures.32 Contrary to expectations, it is more rewarding to feel like you are fully engaging your mind for the greater good, rather than simply going through the motions.

  In these ways, the research on functional stupidity and mindful organisations perfectly complement each other, revealing the ways that our environment can either engage the group brain in reflection and deep thinking, or dangerously narrow its focus so that it loses the benefits of its combined intelligence and expertise. They offer us a framework to understand the intelligence trap and evidence-based wisdom on a grand scale.

  Beyond these general principles, the research also reveals specific practical steps for any organisation hoping to reduce error. Given that our biases are often amplified by feelings of time pressure, Tinsley suggests that organisations should encourage employees to examine their actions and ask: ‘If I had more time and resources, would I make the same decisions?’ She also believes that people working on high-stakes projects should take regular breaks to ‘pause and learn’, where they may specifically look for near misses and examine the factors underlying them – a strategy, she says, that NASA has now applied. They should institute near-miss reporting systems; ‘and if you don’t report a near miss, you are then held accountable’.

  Spicer, meanwhile, proposes adding regular reflective routines to team meetings, including pre-mortems and post-mortems, and appointing a devil’s advocate whose role is to question decisions and look for flaws in their logic. ‘There’s lots of social psychology that says it leads to slightly dissatisfied people but better-quality decisions.’ He also recommends taking advantage of the outside perspective, by either inviting secondments from other companies, or encouraging staff to shadow employees from other organisations and other industries, a strategy that can help puncture the bias blind spot.

  The aim is to do whatever you can to embrace that ‘chronic uneasiness’ – the sense that there might always be a better way of doing things.

  Looking to research from further afield, organisations may also benefit from tests such as Keith Stanovich’s rationality quotient, which would allow them to screen employees working on high-risk projects, checking whether they are more or less susceptible to bias and whether they are in need of further training. They might also think of establishing critical thinking programmes within the company.

  They may also analyse the mindset embedded in their culture: whether it encourages the growth of talent or leads employees to believe that their abilities are set in stone. Carol Dweck’s team of researchers asked employees at seven Fortune 1000 companies to rate their level of agreement with a series of statements, such as: ‘When it comes to being successful, this company seems to believe that people have a certain amount of talent, and they really can’t do much to change it’ (reflecting a collective fixed mindset) or ‘This company genuinely values the development and growth of its employees’ (reflecting a collective growth mindset).

  As you might hope, companies cultivating a collective growth mindset enjoyed greater innovation and productivity, more collaboration within teams and higher employee commitment. Importantly, employees were also less likely to cut corners, or cheat to get ahead. They knew their development would be encouraged and were therefore less likely to cover up their perceived failings.33

  During their corporate training, organisations could also make use of productive struggle and desirable difficulties to ensure that their employees process the information more deeply. As we saw in Chapter 8, this not only means that the material is recalled more readily; it also increases overall engagement with the underlying concepts and means that the lessons are more readily transferable to new situations.

  Ultimately, the secrets of wise decision making for the organisation are very similar to the secrets of wise decision making for the intelligent individual. Whether you are a forensic scientist, doctor, student, teacher, financier or aeronautical engineer, it pays to humbly recognise your limits and the possibility of failure, take account of ambiguity and uncertainty, remain curious and open to new information, recognise the potential to grow from errors, and actively question everything.

  In the Presidential Commission’s damning report on the Deepwater Horizon explosion, one particular recommendation catches the eye: it points to a revolutionary change in US nuclear power plants as a model for how an industry may deal with risk more mindfully.34

  As you might have come to expect, the trigger was a real crisis. (‘Everyone waits to be punished before they act,’ Roberts said.) In this case it was the partial meltdown of a radioactive core at the Three Mile Island Nuclear Generating Station in 1979. The disaster led to the foundation of a new industry oversight body, the Institute of Nuclear Power Operations (INPO), which incorporates a number of important characteristics.

 
