Antifragile: Things That Gain from Disorder


by Nassim Nicholas Taleb


  How to Exit a Movie Theater

  Another example of the costs of a squeeze: Imagine how people exit a movie theater. Someone shouts “fire,” and you have a dozen persons squashed to death. So we have a fragility of the theater to size, stemming from the fact that every additional person exiting brings more and more trauma (such disproportional harm is a negative convexity effect). A thousand people exiting (or trying to exit) in one minute is not the same as the same number exiting in half an hour. Someone unfamiliar with the business who naively optimizes the size of the place (Heathrow airport, for example) might miss the idea that smooth functioning at regular times is different from the rough functioning at times of stress.

  It so happens that contemporary economic optimized life causes us to build larger and larger theaters, but with the exact same door. They no longer make this mistake too often while building cinemas, theaters, and stadiums, but we tend to make the mistake in other domains, such as, for instance, natural resources and food supplies. Just consider that the price of wheat more than tripled in the years 2004–2007 in response to a small increase in net demand, around 1 percent.3

  Bottlenecks are the mothers of all squeezes.

  PROJECTS AND PREDICTION

  Why Planes Don’t Arrive Early

  Let us start as usual with a transportation problem, and generalize to other areas. Travelers (typically) do not like uncertainty—especially when they are on a set schedule. Why? There is a one-way effect.

  I’ve taken the very same London–New York flight most of my life. The flight takes about seven hours, the equivalent of a short book plus a brief polite chat with a neighbor and a meal with port wine, stilton cheese, and crackers. I recall a few instances in which I arrived early, about twenty minutes, no more. But there have been instances in which I got there more than two or three hours late, and in at least one instance it has taken me more than two days to reach my destination.

  Because travel time cannot be really negative, uncertainty tends to cause delays, making arrival time increase, almost never decrease. Or it makes arrival time decrease by just minutes, but increase by hours, an obvious asymmetry. Anything unexpected, any shock, any volatility, is much more likely to extend the total flying time.
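
  To see the asymmetry concretely, here is a minimal simulation sketch. The disruption probabilities and delay magnitudes below are invented for illustration, not drawn from airline data; the point is only that gains are capped at around twenty minutes while each rare disruption adds open-ended time.

```python
import random

random.seed(42)

SCHEDULED_HOURS = 7.0  # nominal London-New York flight time (from the text)

def simulated_flight_time():
    """One hypothetical flight: small bounded gains, unbounded losses."""
    time = SCHEDULED_HOURS
    # Favorable conditions (tailwind, early slot) shave at most ~20 minutes.
    time -= random.uniform(0, 20) / 60
    # Each disruption is rare, but when it hits it ADDS open-ended time.
    for probability, mean_delay_hours in [(0.10, 1.0),    # weather reroute
                                          (0.05, 2.0),    # mechanical check
                                          (0.01, 12.0)]:  # cancellation, rebooking
        if random.random() < probability:
            time += random.expovariate(1 / mean_delay_hours)
    return time

runs = [simulated_flight_time() for _ in range(100_000)]
print(f"scheduled: {SCHEDULED_HOURS:.2f} h")
print(f"mean:      {sum(runs) / len(runs):.2f} h")  # above schedule: delays dominate
print(f"earliest:  {min(runs):.2f} h")              # only minutes early
print(f"latest:    {max(runs):.2f} h")              # hours, even days, late
```

  The earliest run beats the schedule by minutes; the latest misses it by hours or days. The average drifts late even though the scheduled time remains the most common outcome, which is the one-way effect described above.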

  This also explains the irreversibility of time, in a way, if you consider the passage of time as an increase in disorder.

  Let us now apply this concept to projects. Just as when you add uncertainty to a flight, the planes tend to land later, not earlier (and these laws of physics are so universal that they even work in Russia), when you add uncertainty to projects, they tend to cost more and take longer to complete. This applies to many, in fact almost all, projects.

  The interpretation I had in the past was that a psychological bias, the underestimation of the random structure of the world, was the cause behind such delays—projects take longer than planned because the estimates are too optimistic. We have evidence of such bias, called overconfidence. Decision scientists and business psychologists have theorized something called the “planning fallacy,” in which they try to explain the fact that projects take longer, rarely less time, using psychological factors.

  But the puzzle was that such underestimation did not seem to exist in the past century or so, though we were dealing with the very same humans, endowed with the same biases. Many large-scale projects a century and a half ago were completed on time; many of the tall buildings and monuments we see today are not just more elegant than modernistic structures but were completed within, and often ahead of, schedule. These include not just the Empire State Building (still standing in New York), but the London Crystal Palace, erected for the Great Exhibition of 1851, the hallmark of the Victorian reign, based on the inventive ideas of a gardener. The Palace, which housed the exhibition, went from concept to grand opening in just nine months. The building took the form of a massive glass house, 1,851 feet long by 454 feet wide; it was constructed from cast iron frame components and glass made almost exclusively in Birmingham and Smethwick.

  The obvious is usually missed here: the Crystal Palace project did not use computers, and the parts were built not far from the source, with a small number of businesses involved in the supply chain. Further, there were no business schools at the time to teach something called “project management” and increase overconfidence. There were no consulting firms. The agency problem (which we defined as the divergence between the interest of the agent and that of his client) was not significant. In other words, it was a much more linear economy—less complex—than today. And we have more nonlinearities—asymmetries, convexities—in today’s world.

  Black Swan effects are necessarily increasing, as a result of complexity, interdependence between parts, globalization, and the beastly thing called “efficiency” that makes people now sail too close to the wind. Add to that consultants and business schools. One problem somewhere can halt the entire project—so the projects tend to get as weak as the weakest link in their chain (an acute negative convexity effect). The world is getting less and less predictable, and we rely more and more on technologies that have errors and interactions that are harder to estimate, let alone predict.
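
  A back-of-the-envelope calculation shows how interdependence alone fragilizes a schedule. The 99 percent per-link reliability below is an assumed figure, chosen only to make the arithmetic visible; nothing here comes from project data.

```python
# If a project needs n interdependent parts and each is on time with
# probability p, the whole chain is on time only with probability p**n.
p = 0.99  # each link on schedule 99% of the time (assumed)
for n in (10, 100, 1000):
    print(f"{n:>4} links: chain on schedule with probability {p**n:.3%}")
# 10 links: ~90%    100 links: ~37%    1000 links: ~0.004%
```

  No single link gets any worse; the mere multiplication of links makes an on-time project nearly impossible, which is what it means to be as weak as the weakest link.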

  And the information economy is the culprit. Bent Flyvbjerg, the researcher on bridge and road projects mentioned earlier in this chapter, showed another result: the problem of cost overruns and delays is much more acute in the presence of information technologies (IT), as computer projects cause a large share of these overruns, so it is better to focus principally on these. But even outside of IT-heavy projects, we tend to see very severe delays.

  But the logic is simple: again, negative convexity effects are the main culprit, a direct and visible cause. There is an asymmetry in the way errors hit you—the same as with travel.

  No psychologist who has discussed the “planning fallacy” has realized that, at the core, it is not essentially a psychological problem, not an issue with human errors; it is inherent to the nonlinear structure of the projects. Just as time cannot be negative, a three-month project cannot be completed in zero or negative time. So, on a timeline going left to right, errors add to the right end, not the left end of it. If uncertainty were linear we would observe some projects completed extremely early (just as we would arrive sometimes very early, sometimes very late). But this is not the case.

  Wars, Deficits, and Deficits

  The Great War was estimated to last only a few months; by the time it was over, it had gotten France and Britain heavily into debt; they incurred at least ten times what they thought their financial costs would be, aside from all the horrors, suffering, and destruction. The same of course for the Second World War, which left the United Kingdom heavily indebted, mostly to the United States.

  In the United States the prime example remains the Iraq war, expected by George W. Bush and his friends to cost thirty to sixty billion, which so far, taking into account all the indirect costs, may have swelled to more than two trillion—indirect costs multiply, causing chains, explosive chains of interactions, all going in the same direction of more costs, not less. Complexity plus asymmetry (plus such types as George W. Bush), once again, lead to explosive errors.

  The larger the military, the disproportionately larger the cost overruns.

  But wars—with more than twentyfold errors—are only illustrative of the way governments underestimate explosive nonlinearities (convexity effects) and why they should not be trusted with finances or any large-scale decisions. Indeed, governments do not need wars at all to get us in trouble with deficits: the underestimation of the costs of their projects is chronic for the very same reason 98 percent of contemporary projects have overruns. They just end up spending more than they tell us. This has led me to propose a governmental golden rule: no borrowing allowed, forced fiscal balance.

  WHERE THE “EFFICIENT” IS NOT EFFICIENT

  We can easily see the costs of fragility swelling in front of us, visible to the naked eye. Global disaster costs are today more than three times what they were in the 1980s, adjusting for inflation. The effect, noted a while ago by the visionary researcher on extreme events Daniel Zajdenweber, seems to be accelerating. The economy can get more and more “efficient,” but fragility is causing the costs of errors to be higher.

  The stock exchanges have converted from “open outcry,” where wild traders face each other, yelling and screaming as in a souk, then go drink together, to electronic trading. Traders were replaced by computers, for very small visible benefits and massively large risks. While errors made by traders are confined and distributed, those made by computerized systems go wild—in May 2010, a computer error made the entire market crash (the “flash crash”); in August 2012, as this manuscript was heading to the printer, the Knight Capital Group had its computer system go wild and cause $10 million of losses a minute, losing $480 million.

  And naive cost-benefit analyses can be a bit harmful, an effect that of course swells with size. For instance, the French have in the past focused on nuclear energy as it seemed “clean” and cheap. And “optimal” on a computer screen. Then, after the wake-up call of the Fukushima disaster of 2011, they realized that they needed additional safety features and scrambled to add them, at any cost. In a way this is similar to the squeeze I mentioned earlier: they are forced to invest, regardless of price. Such additional expense was not part of the cost-benefit analysis that went into the initial decision and looked good on a computer screen. So when deciding on one source of fuel against another, or similar comparisons, we do not realize that model error may hit one side more than the other.

  Pollution and Harm to the Planet

  From this we can generate a simple ecological policy. We know that fossil fuels are harmful in a nonlinear way. The harm is necessarily concave (if a little bit of it is devoid of harm, a lot can cause climatic disturbances). While on epistemological grounds, because of opacity, we do not need to believe in anthropogenic climate change (caused by humans) in order to be ecologically conservative, we can put these convexity effects to use in producing a risk management rule for pollution. Simply, just as with size, split your sources of pollution among many natural sources. The harm from polluting with ten different sources is smaller than the equivalent pollution from a single source.4
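
  A minimal sketch of the arithmetic behind that rule, assuming an accelerating dose-harm curve; the square law below is arbitrary, chosen only to make the numbers visible.

```python
# Assumed accelerating dose-harm curve: H(0) = 0 and harm grows faster
# than linearly with dose. The square law is a stand-in for illustration,
# not a climate model.
def harm(dose):
    return dose ** 2

total = 10.0
print(harm(total))            # one concentrated source: 100.0 units of harm
print(10 * harm(total / 10))  # ten split sources:        10.0 units of harm
# For any convex H with H(0) = 0, n * H(x / n) <= H(x): splitting the same
# total dose across n independent sinks can only reduce the total harm.
```

  The direction of the inequality is what matters, not the square law: any accelerating harm curve rewards dispersion of the same total dose.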

  Let’s look at naturelike ancestral mechanisms for regulating the concentration effects. We contemporary humans go to the stores to purchase the same items, say tuna, coffee or tea, rice, mozzarella, Cabernet wine, olive oil, and other items that appear to us as not easily substitutable. Because of sticky contemporary habits, cultural contagion, and the rigidity of factories, we are led to the excessive use of specific products. This concentration is harmful. Extreme consumption of, say, tuna, can hurt other animals, mess with the ecosystem, and lead species to extinction. And not only does the harm scale nonlinearly, but the shortages lead to disproportional rises in prices.

  Ancestral humans did it differently. Jennifer Dunne, a complexity researcher who studies hunter-gatherers, examined evidence about the behavior of the Aleuts, a North American native tribe, for which we have ample data, covering five millennia. They exhibited a remarkable lack of concentration in their predatorial behavior, with a strategy of prey switching. They were not as sticky and rigid as we are in their habits. Whenever they got low on a resource, they switched to another one, as if to preserve the ecosystem. So they understood convexity effects—or, rather, their habits did.

  Note that globalization has had the effect of making contagions planetary—as if the entire world became a huge room with narrow exits and people rushing to the same doors, with accelerated harm. Just as about every child reads Harry Potter and joins (for now) Facebook, people when they get rich are starting to engage in the same activities and buy the same items. They drink Cabernet wine, hope to visit Venice and Florence, dream of buying a second home in the South of France, etc. Tourist locations are becoming unbearable: just go to Venice next July.

  The Nonlinearity of Wealth

  We can certainly attribute the fragilizing effect of contemporary globalization to complexity, and how connectivity and cultural contagions make gyrations in economic variables much more severe—the classic switch to Extremistan. But there is another effect: wealth. Wealth means more, and because of nonlinear scaling, more is different. We are prone to make more severe errors because we are simply wealthier. Just as projects of one hundred million dollars are more unpredictable and more likely to incur overruns than five-million-dollar ones, simply by being richer, the world is troubled with additional unpredictability and fragility. This comes with growth—at a country level, this Highly Dreamed-of GDP Growth. Even at an individual level, wealth means more headaches; we may need to work harder at mitigating the complications arising from wealth than we do at acquiring it.

  Conclusion

  To conclude this chapter, fragility in any domain, from a porcelain cup to an organism, to a political system, to the size of a firm, or to delays in airports, resides in the nonlinear. Further, discovery can be seen as an antideficit. Think of the exact opposite of airplane delays or project overruns—something that benefits from uncertainty. And discovery presents the mirror image of what we saw as fragile, randomness-hating situations.

  1 Actually there are different muscle fibers, each one responding to different sets of conditions with varied asymmetries of responses. The so-called “fast-twitch” fibers, the ones used to lift very heavy objects, are very antifragile, as they are convex to weight. And they die in the absence of intensity.

  2 A nuance: the notions of “large” and “small” are relative to a given ecology or business structure. Small for an airplane maker is different from “small” when it comes to a bakery. As with the European Union’s subsidiarity principle, “small” here means the smallest possible unit for a given function or task that can operate with a certain level of efficiency.

  3 The other problem is that of misunderstanding the nonlinearity of natural resources, or anything particularly scarce and vital. Economists have the so-called law of scarcity, by which things increase in value according to the demand for them—but they ignore the consequences of nonlinearities on risk. My former thesis director, Hélyette Geman, and I are currently studying a “law of convexity” that makes commodities, particularly vital ones, even dearer than previously thought.

  4 Volatility and uncertainty are equivalent, as we saw with the table of the Disorder family. Accordingly, note that the fragile is harmed by an increase in uncertainty.

  CHAPTER 19

  The Philosopher’s Stone and Its Inverse

  They tell you when they are going bust—Gold is sometimes a special variety of lead

  And now, reader, after the Herculean effort I put into making the ideas of the last few chapters clearer to you, my turn to take it easy and express things technically, sort of. Accordingly, this chapter—a deepening of the ideas of the previous one—will be denser and should be skipped by the enlightened reader.

  HOW TO DETECT WHO WILL GO BUST

  Let us examine a method to detect fragility—the inverse philosopher’s stone. We can illustrate it with the story of the giant government-sponsored lending firm called Fannie Mae, a corporation that collapsed leaving the United States taxpayer with hundreds of billions of dollars of losses (and, alas, still counting).

  One day in 2003, Alex Berenson, a New York Times journalist, came into my office with the secret risk reports of Fannie Mae, given to him by a defector. It was the kind of report that gets into the guts of the methodology for risk calculation, the kind that only an insider can see—Fannie Mae made its own risk calculations and disclosed what it wanted to whomever it wanted, the public or someone else. Only a defector could show how the risk was actually calculated.

  We looked at the report: simply, a move upward in an economic variable led to massive losses, a move downward (in the opposite direction), to small profits. Further moves upward led to even larger additional losses and further moves downward to even smaller profits. It looked exactly like the story of the stone in Figure 9. The acceleration of harm was obvious—in fact it was monstrous. So we immediately saw that their blowup was inevitable: their exposures were severely “concave,” similar to the graph of traffic in Figure 14: losses that accelerate as the economic variable deviates further (I did not even need to understand which variable it was, as fragility of this magnitude to one variable implies fragility to all other parameters). I worked with my emotions, not my brain, and I had a pang before even understanding what numbers I had been looking at. It was the mother of all fragilities and, thanks to Berenson, The New York Times presented my concern. A smear campaign ensued, but nothing too notable. For I had in the meantime called a few key people charlatans, and they were not too excited about it.
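
  The detection logic can be sketched in a few lines. The payoff function below is a hypothetical concave exposure invented for illustration, since Fannie Mae's actual book is not public; the test itself is just the asymmetry and acceleration described above.

```python
# Hypothetical concave exposure: accelerating losses for up moves in some
# economic variable, small bounded profits for down moves. Invented numbers.
def portfolio_pnl(move):
    return -100 * max(move, 0) ** 2 + 5 * max(-move, 0)

def fragility_signal(pnl, step=1.0):
    # Asymmetry: average payoff of equal opposite moves; negative means
    # the harm from one side outweighs the gain from the other.
    asymmetry = (pnl(step) + pnl(-step)) / 2
    # Acceleration: discrete second difference on the harmful side;
    # negative means losses grow faster than linearly (a concave exposure).
    acceleration = pnl(2 * step) - 2 * pnl(step) + pnl(0)
    return asymmetry, acceleration

asym, accel = fragility_signal(portfolio_pnl)
print(f"asymmetry:    {asym:+.1f}")   # -47.5: harm dominates gain
print(f"acceleration: {accel:+.1f}")  # -200.0: harm accelerates
```

  Note that no probabilities are needed: two perturbations on each side of the variable are enough to reveal the concavity, which is why such a report was so damning.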

  The key is that the nonlinear is vastly more affected by extreme events—and nobody was interested in extreme events since they had a mental block against them.

 
