Thompson suggests that brainwriting should have just one rule: nobody is allowed to identify themselves with their written contributions. The marketing director should not offer a ‘tell’ by referring on the card to a client associated with him. ‘This is crucial,’ Thompson says. ‘By anonymising the contributions, you separate the idea from the status of the person who came up with it. This creates a meritocracy of ideas. People vote on the quality of the proposal, rather than on the seniority of the person who suggested it, or out of a desire to curry favour. It changes the dynamic.’
After voting on the ideas, groups are typically divided into four-person teams to ‘take them to the next level’, combining ideas, or sparking new insights. ‘Using this iterative technique, brainwriting can be woven into interactive team meetings in a way that engages everyone,’ Thompson says. When brainwriting is put head to head with brainstorming, it generates twice the volume of ideas and also produces higher-quality ideas when rated by independent assessors. The reason is simple. Brainwriting liberates diversity from the constraints of dominance dynamics.
Ray Dalio has built one of the most successful hedge funds in the world with similar methods. Bridgewater operates according to more than two hundred behavioural ‘principles’, but the key theme can be summarised in one sentence: the expression of rebel ideas. He calls it ‘radical transparency’. The culture is one in which people are not merely unafraid to express what they think; they are duty-bound to do so. As Dalio put it in an interview with the psychologist Adam Grant: ‘The greatest tragedy of mankind comes from the inability of people to have thoughtful disagreement to find out what’s true.’41
At another company, every person invited to a meeting is asked to submit a one-pager on their views. This is what you might call the price of attendance. These one-pagers are then shuffled, handed around the table, and read out in random order. This is another way of separating a perspective from the status of the person who proposed it.
All these techniques may seem different, but they share the same underlying pattern. They protect cognitive diversity from the dangers of dominance.
VI
In 2014, Eric M. Anicich, a psychologist at the University of Southern California, and his colleagues collected data on 30,625 Himalayan climbers from 56 nations across more than 5,100 expeditions. It was the largest analysis of high-altitude mountaineering ever conducted. The researchers were interested in one issue above all else: do dominance hierarchies lead to a higher probability of disaster?42
They couldn’t measure the hierarchies of the teams directly, given that the climbers were dispersed around the world and many of the expeditions had occurred years previously. But they did the next best thing. They examined the nations the climbers hailed from. In some cultures, people are deferential to authority figures and, on average, less likely to speak up. In others, speaking up to those in positions of leadership is tolerated and even encouraged.
Would these small national differences show up in the data? Would they emerge in the number of fatalities? As Anicich probed the evidence, the answer was clear: teams with more dominant hierarchies are ‘significantly more likely to die’. Crucially, the finding did not apply to solo expeditions. Only teams from hierarchical nations had the problem, which shows that it wasn’t about the skill of individual climbers but about how they interacted. Adam Galinsky, one of the co-authors, has written:
In cultures that are hierarchical, decision-making tends to be a top-down process. People from these countries are more likely to die on difficult mountain climbs because they are less likely to speak up and less likely to alert leaders to changing conditions and impending problems. By not speaking up, these climbers preserved order but endangered their own lives. Importantly, we isolated the role of group processes by showing that the higher fatality rate occurred for group, but not solo, expeditions. It was only when a group of individuals had to communicate effectively that hierarchical cultures produced disaster.43
This finding, published in Proceedings of the National Academy of Sciences, is significant enough on its own. But it becomes compelling as an explanation for the Everest disaster when corroborated by the evidence from Google, anthropological data, controlled lab studies and more. As Galinsky put it: ‘The Himalayan context highlights a key feature that creates complex decisions: a dynamic and changing environment. When the environment can change dramatically and suddenly, people have to adapt and come up with a new plan. In these cases, we need everyone’s perspective brought to bear and hierarchy can hurt by suppressing these insights.’44
It is worth reiterating that none of this invalidates the notion of hierarchy. Most teams function better with a chain of command. Hierarchy creates a division of labour, where leaders can focus on the big picture while others grapple with the detail. It also ensures that teams can coordinate their actions. If there is no hierarchy, team members might constantly argue about what to do next. This can be disruptive and dangerous.
But the real choice is not between hierarchy and diversity; the question is how to gain the benefits of both. As Galinsky put it:
In complex tasks, from flying a plane to performing surgery to deciding whether a country should go to war, people need to process and integrate a vast amount of information while also imagining myriad possible future scenarios . . . To make the best complex decisions, we need to tap the ideas from all rungs of the hierarchical ladder and learn from everyone who has relevant knowledge to share.
*
Let us conclude our analysis of hierarchy with perhaps the deepest irony of all. One emphatic finding from psychological research is that humans dislike uncertainty and the sense that we lack control over our lives. When faced with uncertainty, we often attempt to regain control by putting our faith in a dominant figurehead who can restore order. This is sometimes called ‘compensatory control’.
Look at the rise of authoritarian states in times of economic uncertainty, such as in Germany and Italy after the chaos of the First World War. A study led by Michele Gelfand of the University of Maryland analysed more than thirty nations and found that they responded to external forces that threatened certainty or security by reaching for steeper political hierarchies.45
This has religious implications, too. One study analysed church membership in the United States over two periods: one characterised by job security (the 1920s) and one characterised by intense uncertainty (the 1930s). The researchers then divided churches into two categories: hierarchical churches with many layers of authority (the Roman Catholic Church, the Church of Jesus Christ of Latter-day Saints) and non-hierarchical churches with few layers (the Protestant Episcopal Church, the Presbyterian Church, etc.).46
Sure enough, when the economy was rosy, people were far more likely to join non-hierarchical churches. When jobs were insecure, on the other hand, and people felt that they lacked control over their lives, they converted to the hierarchical churches. They compensated for their feelings of insecurity by putting their faith in theologies with higher levels of theistic power and control.
If this seems a little abstract, think of the last time you experienced heavy turbulence on a plane. Did you whisper a silent prayer? This is another classic manifestation of compensatory control. In the teeth of uncertain events, you reintroduced certainty by imputing power to God, or Fate, or some other omnipotent force. It may not have made the plane any safer (depending on your religious outlook), but it made you feel a little more secure.
This happens in organisations, too. When a company faces external threats or economic uncertainty, its shareholders are significantly more likely to appoint a dominant leader. Within organisations, dominant individuals tend to rise more rapidly during times of uncertainty. The strong voice, the authoritarian personality, provides reassurance against the loss of control we collectively feel.
This leaves us with a dangerous paradox. When the environment is complex and uncertain, this is precisely when one brain – even a dominant brain – is insufficient to solve the problem. It is precisely when we need diverse voices to maximise collective intelligence. And yet this is precisely the time when we unconsciously acquiesce in the dubious comfort of a dominant leader. Dominance is not just about the leaders, then, but often linked to the silent wishes of those who constitute a team, organisation or nation. Indeed, individuals who might naturally favour a prestigious style of leadership can find themselves morphing towards dominance the moment the team starts to lose control of events – with disastrous consequences.
Rob Hall was an admirable man. The more one reads about him, the more one understands why he was regarded with such awe. One obituary, written just after the Everest disaster, captured his heroism: ‘Crippled by frostbite, running out of oxygen and stranded without food, fluid or shelter, he . . . died that night . . . The fact that he died whilst trying to save an exhausted client confirmed his status as the world’s most respected leader of commercial Himalayan expeditions.’47
Hall was not naturally dominant. He was open and inclusive, a man loved by almost all who knew him. The problem is that he had come to believe that a dominant style of leadership would prove an asset on one of the most challenging climbs of his career – and was encouraged in that view by a team experiencing deep anxiety about the sheer volatility in the Death Zone. These unconscious dynamics play out in organisations, charities, unions, schools and governments in millions of ways, every day, all over the world, but in the high-stakes circumstances of a Himalayan climb, they would have fatal consequences.
In his last radio communication to Base Camp, amid the storm on the south-east ridge, Hall was told by his comrades that they would patch him through to his wife, Jan, in New Zealand, seven months pregnant with their first child. Hall asked for a moment to steady himself. He knew that he was now beyond hope, but didn’t want his deteriorating condition to cause grief to his beloved. ‘Give me a minute,’ he said. ‘My mouth’s dry. I want to eat a bit of snow before I talk to her.’
Finally, with his mouth moistened, Hall spoke: ‘Hi, my sweetheart,’ he said. ‘I hope you’re tucked up in a nice warm bed . . .’
‘I can’t tell you how much I’m thinking about you,’ Jan replied. ‘I’m looking forward to making you completely better when you come home . . . Don’t feel you’re alone. I’m sending all my positive energy your way.’
One hundred and twenty-five vertical metres above the South Summit, his friends Doug Hansen and Andy Harris dead, and with the hurricane still raging around him, Hall uttered his last words:
‘I love you. Sleep well, my sweetheart. Please don’t worry too much.’48
4
Innovation
I
David Dudley Bloom was, by any reckoning, a remarkable chap. Born in Pennsylvania on 20 September 1922, he served in the navy during the Second World War and, according to some accounts, became the youngest commanding officer in the fleet, taking charge of USS Liberty in December 1944 during the New Guinea campaign. At the time he was just twenty-two.
After leaving the navy in 1945, he worked in different jobs – including as a clerk in a law firm and a buyer in a department store – before becoming director of product research at American Metals Specialties Corporation (AMSCO), a small toy manufacturer. Perhaps because of his experiences of war, he sought to move the company away from military-themed toys such as guns, rifles and soldiers. As he put it in an interview in the 1950s: ‘If we teach our children war and crime, we haven’t much of a future to look forward to.’
His first big idea was a ‘magic milk bottle’, the milk seeming to disappear from the bottle as it was turned upside down. He also came up with miniature consumer products, such as kitchen utensils, so that children could pretend to be chefs.
But it wasn’t until 1958 that Bloom hit upon the idea that should have changed his life, not to mention the wider world. He had left AMSCO a few months earlier to work for the Atlantic Luggage Company of Ellwood City, Pennsylvania, where he had been offered the job of director of product development for a popular line of travel luggage.
It was here that he was struck by a thought: why do suitcases – heavy, burdensome, and partly responsible for his own back pain – not have wheels? Wouldn’t this make them easier to move around? Wouldn’t it remove the need for expensive porters? Wouldn’t it alleviate the sense of foreboding when arriving in transit, the ever more frequent shifting of the case from one hand to the other as one trudged from place to place, until one merely had a choice between continuing pain in one arm or instant pain in the other? More generally, wasn’t this a perfect solution for a world moving in the direction of mass travel?
He took a prototype of his idea – a suitcase attached to a platform with castors and a handle – to the chairman of the Atlantic Luggage Company. Bloom was expectant, almost exultant. The product would be cheap to make, would tie in with the company’s existing designs and distribution channels, and seemed like the most sure-fire thing in the history of the sector, enabling them to dominate a multibillion-dollar global market.
The chairman’s reaction? He described it as ‘impractical’ and ‘unwieldy’. ‘Who’d want to buy luggage on wheels?’ he scoffed.
In 2010, Ian Morris, a British archaeologist and historian, completed a seminal study of the history of innovation. He was nothing if not thorough. He examined development from 14,000 BC to today, carefully tabulating the consequences of every leap forward.
The major episodes were not difficult to spot. The domestication of animals. The birth of organised religion. The invention of writing. Morris noted that each of these events had eloquent advocates when it came to the question: which single change had the greatest impact upon humanity? Morris wanted an objective answer, so he painstakingly quantified the impact of the various breakthroughs on social development. This he defined as ‘a group’s ability to master its physical and intellectual environment to get things done’, an idea that correlates closely with economic growth.1
His data is striking, for it shows that all the various innovations already mentioned did indeed influence social development. The line gradually slopes upwards over the course of the millennia. But one innovation had an impact beyond any other, shifting the curve from near horizontal to near vertical: the Industrial Revolution. Morris writes: ‘The western-led take-off since 1800 made a mockery of all the drama of the world’s earlier history.’ Erik Brynjolfsson and Andrew McAfee, two professors at the MIT Sloan School of Management, concur: ‘The industrial revolution ushered in humanity’s first machine age – the first time our progress was driven primarily by technological innovation – and it was the most profound time of transformation our world has ever seen.’2
But there was one anomaly in this picture. When historians zoomed in on this transformation and examined the curve in finer detail, they noticed something odd. The second phase of the Industrial Revolution came with electrification in the late nineteenth century. This meant that electric motors could replace the older, less efficient steam engines. It created a second surge in growth and productivity, the consequences of which we are still living with today.
Except for one thing. This surge was curiously delayed. It didn’t happen instantly, but seemed to pause, pregnant and static, for around twenty-five years before taking off. Perhaps the most curious thing of all is that many of the most successful corporations in the United States, which were in the perfect position to benefit from electrification, did nothing of the kind. On the contrary, many went bust. They were on the verge of victory, and snatched defeat from its jaws.
Electricity, it is worth noting, offered huge dividends, not just in terms of power but in the redesign of the manufacturing process itself. In a traditional factory, machines were positioned around water and, later, the steam engine. They clustered in this way out of necessity. The production process was umbilically linked to the sole source of power, with the various machines connected via an elaborate – but often unreliable – set of pulleys, gears and crankshafts.3
Electrification meant that manufacturing could break free of these constraints. Electric motors do not suffer major reductions in efficiency when reduced in size, so machines could have their own source of power, allowing the layout of factories to be based around the most efficient workflow of materials. Instead of a single unit of power (the steam engine), electricity permitted ‘group drive’ – clusters of machines, and ultimately individual machines, powered by their own motors. It is as obvious an advantage as adding wheels to a suitcase. As McAfee and Brynjolfsson put it: ‘Today, of course, it’s completely ridiculous to imagine doing anything other than this; indeed, many machines now go even further and have multiple electric motors built into their design . . . It’s clear that intelligent electrification made a factory much more productive than it could otherwise be.’4
Electrification, then, was a gift from the gods, an opportunity for the companies that dominated US production to increase their efficiency and profits. They had the existing plants. They had the existing machines. And they now had a technology – electricity – to increase their efficiency, streamlining their operations and opening up new streams of growth.
And yet they didn’t do anything of the kind. In a move eerily reminiscent of the early luggage companies that turned their backs on wheels, many stuck to the old centralised design: instead of streamlining their factories, they dumped a single large electric motor in the middle of the factory, as if it were a substitute steam engine. In doing so, they completely – almost inexplicably – missed the point. This would prove catastrophic. The economist Shaw Livermore found that more than 40 per cent of the industrial trusts formed between 1888 and 1905 had failed by the early 1930s.5 A study by Richard Caves, another expert in economic history, found that even those that managed to remain in existence shrank by over a third. It was one of the most brutal periods in industrial history.6 It is a pattern that repeats endlessly: organisations in the perfect position to win instead manage, against all odds, to lose.