
Black Box Thinking


by Matthew Syed


  The vapor is siphoned away while the powder is collected in a vat, where collagen and various other ingredients are added. Then it is packed into boxes, branded with names like Daz and Bold, and sold at a hefty markup. It is a neat business concept, and has become a huge industry. Annual sales of detergent are over $3 billion in the United States alone.

  But the problem for Unilever was that the nozzles didn’t work smoothly. To quote Steve Jones, who briefly worked at the Liverpool soap factory in the 1970s before going on to become one of the world’s most influential evolutionary biologists, they kept clogging up.1 “The nozzles were a damn nuisance,” he has said. “They were inefficient, kept blocking and made detergent grains of different sizes.”

  This was a major problem for the company, not just because of maintenance and lost time, but also in terms of the quality of the product. They needed to come up with a superior nozzle. Fast.

  And so they turned to their crack team of mathematicians. Unilever, even back then, was a rich company, so it could afford the brightest and best. These were not just ordinary mathematicians, but experts in high-pressure systems, fluid dynamics, and other aspects of chemical analysis. They had special grounding in the physics of “phase transition”: the processes governing the transformation of matter from one state (liquid) to another (gas or solid).

  These mathematicians were what we today might call “intelligent designers.” These are the kind of people we generally turn to when we need to solve problems, whether business, technical, or political: get the right people, with the right training, to come up with the optimal plan.

  They delved ever deeper into the problems of phase transition, and derived sophisticated equations. They held meetings and seminars. And, after a long period of study, they came up with a new design.

  You have probably guessed what is coming: it didn’t work. It kept blocking. The powder granularity remained inconsistent. It was inefficient.

  Almost in desperation, Unilever turned to its team of biologists. These people had little understanding of fluid dynamics. They would not have known a phase transition if it had jumped up and bitten them. But they had something more valuable: a profound understanding of the relationship between failure and success.

  They took ten copies of the nozzle and applied small changes to each one, and then subjected them to failure by testing them. “Some nozzles were longer, some shorter, some had a bigger or smaller hole, maybe a few grooves on the inside,” Jones says. “But one of them improved a very small amount on the original, perhaps by just one or two percent.”

  They then took the “winning” nozzle and created ten slightly different copies, and repeated the process. They then repeated it again, and again. After 45 generations and 449 “failures,” they had a nozzle that was outstanding. It worked “many times better than the original.”

  Progress had been delivered not through a beautifully constructed master plan (there was no plan), but by rapid interaction with the world. A single, outstanding nozzle was discovered as a consequence of testing, and discarding, 449 failures.
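
  In modern terms, the biologists’ procedure is a simple evolutionary search: vary, test, select, repeat. The following is a minimal sketch of that loop in Python, not Unilever’s actual method; the design parameters and the scoring function are hypothetical stand-ins for the physical nozzles and the physical trials:

      import random

      def test_design(design):
          # Hypothetical stand-in for the physical trial: in reality each
          # candidate nozzle was built and run, and its output measured.
          return -sum((x - 1.0) ** 2 for x in design)

      def perturb(design, scale=0.1):
          # Copy a design, applying small random changes to each parameter.
          return [x + random.gauss(0, scale) for x in design]

      def evolve(initial, variants=10, generations=45):
          # A "design" here is just a list of numbers (length, aperture, ...).
          best = initial
          for _ in range(generations):
              candidates = [perturb(best) for _ in range(variants)]
              # Keep whichever candidate tests best; discard the failures.
              best = max(candidates + [best], key=test_design)
          return best

      final_nozzle = evolve([0.0, 0.0, 0.0])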

  II

  So far in the book, we have seen that learning from mistakes relies on two components: first, you need to have the right kind of system—one that harnesses errors as a means of driving progress; and second, you need a mindset that enables such a system to flourish.

  In the previous section we concerned ourselves with the mindset aspect of this equation. Cognitive dissonance occurs when mistakes are too threatening to admit to, so they are reframed or ignored. This can be thought of as the internal fear of failure: how we struggle to admit mistakes to ourselves.

  [Figure: The original nozzle is at the top. The final nozzle, after 45 generations and 449 iterations, is at the bottom. It has a shape no mathematician could possibly have anticipated.]

  In sections 5 and 6, we will return to this crucial issue. We will look at how to create a culture where mistakes are not reframed or suppressed, but wielded as a means of driving progress. We will also look at the external fear of failure—the fear of being unfairly blamed or punished—which also undermines learning from mistakes.

  Ultimately, we will see that strong, resilient, growth-oriented cultures are built from specific psychological foundations, and we will look at practical examples of cutting-edge companies, sports teams, and even schools that are leading the way.

  But now we are going to delve into the system side of the equation. We have already touched upon this in our examination of institutions that successfully learn from mistakes, such as aviation and the Virginia Mason Health System. But now we are going to look at the rich theoretical framework that underpins these examples. We will see that all systems that learn from failure have a distinctive structure, one that can be found in many places, including the natural world, artificial intelligence, and science. This will then give us an opportunity to examine the ways in which some of the most innovative organizations in the world are harnessing this structure—with often startling results.

  It is this structure that is so marvelously evoked by the Unilever example. What the development of the nozzle reveals, above all, is the power of testing. Even though the biologists knew nothing about the physics of phase transition, they were able to develop an efficient nozzle by trialing lots of different ones, rejecting those that didn’t work and then varying the best nozzle in each generation.

  It is not coincidental that the biologists chose this strategy: it mirrors how change happens in nature. Evolution is a process that relies on a “failure test” called natural selection. Organisms with greater “fitness” survive and reproduce, with their offspring inheriting their genes subject to a random process known as mutation. It is a system, like the one that created the Unilever nozzle, of trial and error.

  In one way, these failures are different from those we examined in aviation, health care, and the criminal justice system. The biologists realized they would create many failures: in fact they did so deliberately to find out which designs worked and which didn’t. In aviation nobody sets out to fail deliberately. The whole idea is to minimize accidents.

  But despite this difference there is a vital similarity. Failures in aviation set the stage for reform. The errors are part and parcel of the dynamic process of change: not just real accidents and failures, but also those that occur in simulators and near-miss events. Likewise, the rejected nozzles helped to drive the progression of the design. They all share an essential pattern: an adaptive process driven by the detection and response to failure.

  Evolution as a process is powerful because of its cumulative nature. Richard Dawkins offers a neat way to think about cumulative selection in his wonderful book The Blind Watchmaker. He invites us to consider a monkey trying to type a single line from Hamlet: “Methinks it is like a weasel.” The odds of the monkey getting it right at random are vanishingly small.

  If the monkey is typing at random and there are 27 letters (counting the space bar as a letter), it has a 1 in 27 chance of getting the first letter right, a 1 in 27 chance of getting the next letter right, and so on. So the odds of getting just the first three letters in a row correct are 1/27 multiplied by 1/27 multiplied by 1/27: one chance in 19,683. To get all 28 characters in the sequence right, the odds are around 1 in 10,000 million, million, million, million, million, million.
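
  The arithmetic can be checked in two lines of Python (27 options per character, 28 characters in the phrase):

      print(27 ** 3)            # first three characters: 19683
      print(f"{27 ** 28:.1e}")  # all 28 characters: about 1.2e+40, i.e., ~10^40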

  But now suppose that we provide a selection mechanism (i.e., a failure test) that is cumulative. Dawkins set up a computer program to do just this. Its first few attempts at getting the phrase are random, just like a monkey’s. But then the computer scans the various nonsense phrases to see which is closest, however slightly, to the target phrase. It rejects all the others. It then randomly varies the winning phrase, and then scans the new generation. And so on.

  The winning phrase after the first generation of running the experiment on the computer was: WDLTMNLT DTJBSWIRZREZLMQCO P. After ten generations, by homing in on the phrase closest to the target phrase, and rejecting the others, it was: MDLDMNLS ITJISWHRZREZ MECS P. After twenty generations, it looked like this: MELDINLS IT ISWPRKE Z WECSEL. After thirty generations, the resemblance is visible to the naked eye: METHINGS IT ISWLIKE B WECSEL. By the forty-third generation, the computer got the right phrase. It took only a few moments to get there.
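
  For readers who like to see the machinery, here is a minimal sketch of such a cumulative-selection loop in Python. It illustrates the idea rather than reproducing Dawkins’s actual program; the population size and mutation rate are arbitrary choices:

      import random

      TARGET = "METHINKS IT IS LIKE A WEASEL"
      ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 27 characters, space included

      def score(phrase):
          # The "failure test": how many characters already match the target?
          return sum(a == b for a, b in zip(phrase, TARGET))

      def mutate(phrase, rate=0.05):
          # Copy the phrase, altering each character with small probability.
          return "".join(random.choice(ALPHABET) if random.random() < rate
                         else c for c in phrase)

      best = "".join(random.choice(ALPHABET) for _ in TARGET)  # pure gibberish
      generation = 0
      while best != TARGET:
          generation += 1
          # Breed varied copies; keep whichever is closest to the target.
          offspring = [mutate(best) for _ in range(100)]
          best = max(offspring + [best], key=score)

      print(f"Reached the target in {generation} generations.")

  The crucial detail is that each generation starts from the best phrase so far: that retained phrase is the “memory” that makes the selection cumulative.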

  Cumulative selection works, then, if there is some form of “memory”: i.e., if the results of one selection test are fed into the next, and into the next, and so on. This process is so powerful that, in the natural world, it confers what has been called “the illusion of design”: animals that look as if they were designed by a vast intelligence when they were, in fact, created by a blind process.

  An echo of this illusion can be seen in the nozzle example. The final shape is so uniquely suited to creating fine-grained detergent that it invites the thought that a master designer must have been at work. In fact, as we have seen, the biologists used no “design” capability at all. They simply harnessed the power of the evolutionary process.

  There are many systems in the world that are essentially evolutionary in nature. Indeed, many of the greatest thinkers of the last two centuries favored free market systems because they mimic the process of biological change,2 as the author Tim Harford notes in his excellent book Adapt.3 Different companies competing with each other, with some failing and some surviving, facilitate the adaptation of the system. This is why markets—provided they are well regulated—are such efficient solvers of problems: they create an ongoing process of trial and error.

  The equivalent of natural selection in a market system is bankruptcy. When a company goes bust it is a bit like the failure of a particular nozzle design. It reveals that something (product, price, strategy, advertising, management, process, etc.) wasn’t working compared with the competition. Weaker ideas and products are jettisoned. Successful ideas are replicated by other companies. The evolution of the system is driven, just like the design of the Unilever nozzle, by cumulative adaptation.

  The failure of companies in a free market, then, is not a defect of the system, or an unfortunate by-product of competition; rather, it is an indispensable aspect of any evolutionary process. According to one economist, 10 percent of American companies go bankrupt every year.4 The economist Joseph Schumpeter called this “creative destruction.”

  Now, compare this with centrally planned economies, where there are almost no failures at all. Companies are protected from failure by subsidy. The state is protected from failure by the printing press, which can inflate its way out of trouble. At first, this may look like an enlightened way to go about solving the problems of economic production, distribution, and exchange. Nothing ever fails and, by implication, everything looks successful.

  But this is precisely why planned economies didn’t work. They were manned by intelligent planners who decided how much grain to produce, how much iron to mine, and who used complicated calculations to determine the optimal solutions. But they faced the same problem as the Unilever mathematicians: their ideas, however enlightened, were not tested rapidly enough—and so had little opportunity to be reformed in the light of failure.

  Even if the planners were ten times smarter than the businessmen operating in a market economy, they would still fall way behind. Without the benefit of a valid test, the system is plagued by rigidity. In markets, on the other hand, it is the thousands of little failures that lubricate and, in a sense, guide the system. When companies go under, other entrepreneurs learn from these mistakes, the system creates new ideas, and consumers ultimately benefit.

  In a roughly similar way, accidents in aviation, while tragic for the passengers on the fatal flights, bolster the safety of future flights. The failure sets the stage for meaningful change.

  That is not to say that markets are perfect. There are problems of monopoly, collusion, inequality, price-fixing, and companies that are too big to fail and therefore protected by a taxpayer guarantee. All these things militate against the adaptive process. But the underlying point remains: markets work not in spite of the many business failures that occur, but because of them.

  It is not just systems that can benefit from a process of testing and learning; so, too, can organizations. Indeed, many of the most innovative companies in the world are bringing some of the basic lessons of evolutionary theory into the way they think about strategy. Few companies tinker randomly like the Unilever biologists, because with complex problems it can take a long time to home in on a solution.

  Rather, they make judicious use of tests, challenge their own assumptions, and wield the lessons to guide strategy. It is a mix of top-down reasoning (as per the mathematicians) and bottom-up iteration (as per the biologists); the fusing of the knowledge they already have with the knowledge that can be gained by revealing its inevitable flaws. It is about having the courage of one’s convictions, but also the humility to test early, and to adapt rapidly.

  • • •

  An echo of these ideas can be seen in the process of technological change. The conventional way we think about technology is that it is essentially top-down in character. Academics conduct high-level research, which creates scientific theories, which are then used by practical people to create machines, gadgets, and other technologies.

  This is sometimes called the linear model and it can be represented with a simple flowchart: Research and theory → Technology → Practical applications. In the case of the Industrial Revolution, for example, the conventional picture is that it was largely inspired by the earlier scientific revolution; the ideas of Boyle, Hooke, and Locke gave rise to the machinery that changed the world.

  But there is a problem with the linear model: in most areas of human development, it severely underestimates the role of bottom-up testing and learning of the kind adopted by the Unilever biologists. In his book The Economic Laws of Scientific Research, Terence Kealey, a practicing scientist, debunks the conventional narrative surrounding the Industrial Revolution:

  In 1733, John Kay invented the flying shuttle, which mechanized weaving, and in 1770 James Hargreaves invented the spinning jenny, which, as its name implies, mechanized spinning. These major developments in textile technology, as well as those of Wyatt and Paul (spinning frame, 1758), Arkwright (water frame, 1769), presaged the Industrial Revolution, yet they owed nothing to science; they were empirical developments based on the trial, error and experimentation of skilled craftsmen who were trying to improve the productivity, and so the profits, of their factories.5

  Note the final sentence: these world-changing machines were developed, like Unilever’s nozzle, through trial and error. Amateurs and artisans, men of practical wisdom, motivated by practical problems, worked out how to build these machines, by trying, failing, and learning. They didn’t fully understand the theory underpinning their inventions. They couldn’t have talked through the science. But—like the Unilever biologists—they didn’t really need to.*

  And this is where the direction of causality can flip. Take the first steam engine for pumping water. This was built by Thomas Newcomen, a barely literate, provincial ironmonger and Baptist lay preacher, and developed further by James Watt. The understanding of both men was intuitive and practical. But the success of the engine raised a deep question: why does this incredible device actually work (it broke the then-known laws of physics)? This question inspired Nicolas Léonard Sadi Carnot, a French physicist, to develop the laws of thermodynamics. Trial and error inspired the technology, which in turn inspired the theory. This is the linear model in reverse.

  In his seminal book Antifragile, Nassim Nicholas Taleb shows how the linear model is wrong (or, at best, misleading) in everything from cybernetics, to derivatives, to medicine, to the jet engine. In each case history reveals that these innovations emerged from a process similar to the one utilized by the biologists at Unilever, and became encoded in heuristics (rules of thumb) and practical know-how. The problems were often too complex to solve theoretically, or via a blueprint, or in the seminar room. They were solved by failing, learning, and failing again.

  Architecture is a particularly interesting case, because it is widely believed that ancient buildings and cathedrals, with their wonderful shapes and curves, were inspired by the formal geometry of Euclid. How else could the ancients have built these intricate structures? In fact, geometry played almost no role. As Taleb shows, it is almost certain that the practical wisdom of architects inspired Euclid to write his Elements, so as to formalize what the builders already knew.

  “Take a look at Vitruvius’ manual, De architectura, the bible of architects, written about three hundred years after Euclid’s Elements,” Taleb writes. “There is little formal geometry in it, and, of course, no mention of Euclid, mostly heuristics, the kind of knowledge that comes out of a master guiding his apprentices . . . Builders could figure out the resistance of materials without the equations we have today—buildings that are, for the most part, still standing.”6

  These examples do not show that theoretical knowledge is worthless. Quite the reverse. A conceptual framework is vital even for the most practical men going about their business. In many circumstances, new theories have led to direct technological breakthroughs (such as the atom bomb emerging from the Theory of Relativity).

  The real issue here is speed. Theoretical change is itself driven by a feedback mechanism, as we noted in chapter 3: science learns from failure. But when a theory fails, as when the Unilever mathematicians failed in their attempt to create an efficient nozzle design, it takes time to come up with a new, all-encompassing theory. To gain practical knowledge, however, you just need to try a different-sized aperture. Tinkering, tweaking, learning from practical mistakes: all have speed on their side. Theoretical leaps, while prodigious, are far less frequent.

  Ultimately, technological progress is a complex interplay between theoretical and practical knowledge, each informing the other in an upward spiral*. But we often neglect the messy, iterative, bottom-up aspect of this change because it is easy to regard the world, so to speak, in a top-down way. We try to comprehend it from above rather than discovering it from below.

 
