Antifragile: Things That Gain from Disorder

by Taleb, Nassim Nicholas


  Green Lumber Fallacy: Mistaking the source of important or even necessary knowledge—the greenness of lumber—for another, less visible from the outside, less tractable one. How theoreticians impute wrong weights to what one should know in a certain business or, more generally, how many things we call “relevant knowledge” aren’t so much so.

  Skin in the Game / Captain and Ship Rule: Every captain goes down with every ship. This removes the agency problem and the lack of doxastic commitment.

  Empedocles’ Tile: A dog sleeps on the same tile because of a natural, biological, explainable or nonexplainable match, confirmed by long series of recurrent frequentation. We may never know the reason, but the match is there. Example: why we read books.

  Cherry-picking: Selecting from the data what serves to prove one’s point and ignoring disconfirming elements.

  Ethical Problems as Transfers of Asymmetry (fragility): Someone steals antifragility and optionality from others, getting the upside and sticking others with the downside. “Others’ skin in the game.”

  The Robert Rubin violation: Stolen optionality. Getting upside from a strategy without downside for oneself, leaving the harm to society. Rubin got $120 million in compensation from Citibank; taxpayers are retrospectively paying for his errors.

  The Alan Blinder problem: (1) Using privileges of office retrospectively at the expense of citizens. (2) Violating moral rules while complying perfectly with the law; confusion of ethical and legal. (3) The regulator’s incentive to make complicated regulations in order to subsequently sell his “expertise” to the private sector.

  The Joseph Stiglitz problem: Lack of penalty from bad recommendation causing harm to others. Mental cherry-picking, leading to contributing to the cause of a crisis while being convinced of the opposite—and thinking he predicted it. Applies to people with opinions without skin in the game.

  Rational Optionality: Not being locked into a given program, so one can change his mind as he goes along based on discovery or new information. Also applies to rational flâneur.

  Ethical Inversion: Fitting one’s ethics to actions (or profession) rather than the reverse.

  Narrative Fallacy: Our need to fit a story, or pattern, to a series of connected or disconnected facts. The statistical application is data mining.

  Narrative Discipline: Discipline that consists of fitting a convincing and good-sounding story to the past. Opposed to experimental discipline. A great way to fool people is to use statistics as part of the narrative, by ferreting out “good stories” from the data thanks to cherry picking; in medicine, epidemiological studies tend to be marred with the narrative fallacy, less so controlled experiments. Controlled experiments are more rigorous, less subjected to cherry-picking.

  Non-narrative action: Does not depend on a narrative for the action to be right—the narrative is just there to motivate, entertain, or prompt action. See flâneur.

  Robust Narrative: When the narrative does not produce opposite conclusions or recommendations for action under change of assumption or environment. The narrative is otherwise fragile. Similarly, a robust model or mathematical tool does not lead to different policies when you change some parts of the model.

  Subtractive Knowledge: You know what is wrong with more certainty than you know anything else. An application of via negativa.

  Via negativa: In theology and philosophy, the focus on what something is not, an indirect definition. In action, it is a recipe for what to avoid, what not to do—subtraction, not addition, say, in medicine.

  Subtractive Prophecy: Predicting the future by removing what is fragile from it rather than naively adding to it. An application of via negativa.

  Lindy Effect: A technology, or anything nonperishable, increases in life expectancy with every day of its life—unlike perishable items (such as humans, cats, dogs, and tomatoes). So a book that has been a hundred years in print is likely to stay in print another hundred years.

  Neomania: A love of change for its own sake, a form of philistinism that neither complies with the Lindy effect nor understands fragility. Forecasts the future by adding, not subtracting.

  Opacity: You do not see the barrel when someone is playing Russian roulette. More generally, some things remain opaque to us, leading to illusions of understanding.

  Mediocristan: A process dominated by the mediocre, with few extreme successes or failures (say, income for a dentist). No single observation can meaningfully affect the aggregate. Also called “thin-tailed,” or member of the Gaussian family of distributions.

  Extremistan: A process where the total can be conceivably impacted by a single observation (say, income for a writer). Also called “fat-tailed.” Includes the fractal, or power-law, family of distributions.

  Nonlinearities, Convexity Effects (smiles and frowns): Nonlinearities can be concave or convex, or a mix of both. The term convexity effects is an extension and generalization of the fundamental asymmetry. The technical name for fragility is negative convexity effects and for antifragility is positive convexity effects. Convex is good (a smiley), concave is bad (a frowny).

  Philosopher’s Stone, also called Convexity Bias (very technical): The exact measure of benefits derived from nonlinearity or optionality (or, even more technically, the difference between x and a convex function of x). For instance, such bias can quantify the health benefits of variable intensity of pulmonary ventilation over steady pressure, or compute the gains from infrequent feeding. The Procrustean bed from the neglect of nonlinearity (to “simplify”) lies in assuming such convexity bias does not exist.
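
  A minimal sketch of the convexity bias, using an invented convex benefit function and a made-up variable-dose regimen (nothing below comes from the book’s own examples): the bias is the gap between the average of f over the variable regimen and f evaluated at the steady dose, i.e., E[f(x)] − f(E[x]).

```python
import random

# Hypothetical convex "benefit" function; any convex f illustrates the point.
def f(x):
    return x ** 2

random.seed(1)
steady_dose = 1.0
# Variable regimen with the same average dose: alternate 0.5 and 1.5.
variable_doses = [random.choice([0.5, 1.5]) for _ in range(100_000)]

benefit_steady = f(steady_dose)                                              # f(E[x])
benefit_variable = sum(f(d) for d in variable_doses) / len(variable_doses)  # ~ E[f(x)]

# The convexity bias: E[f(x)] - f(E[x]), positive whenever f is convex.
print(f"steady: {benefit_steady:.3f}  variable: {benefit_variable:.3f}  "
      f"bias: {benefit_variable - benefit_steady:.3f}")
```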

  Appendix I:

  A GRAPHICAL TOUR OF THE BOOK

  For those nonliterary folks who like to see things in graphs, rather than words, and those only.

  NONLINEARITY AND LESS IS MORE (& PROCRUSTEAN BED)

  FIGURE 19. This graph explains both the nonlinear response and the “less is more” idea. As the dose increases beyond a certain point, benefits reverse. We saw that everything nonlinear is either convex, concave, or, as in this graph, mixed. Also shows how under nonlinearities, reductions fail: the Procrustean bed of words “good for you” or “bad” is severely distorting.

  Also shows why tinkering-derived heuristics matter because they don’t take you into the danger zone—words and narratives do. Note how the “more is more” zone is convex, meaning accelerated initial benefits. (In Levantine Arabic, the zone beyond the saturation has a name: “more of it is like less of it.”)

  Finally, it shows why competitive “sophistication” (rather, complication masked as sophistication) is harmful, as compared to the practitioner’s craving for optimal simplicity.
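
  For readers who want a numeric counterpart to Figure 19: the toy dose-response below (an invented function, not one from the book) has the same mixed shape, with accelerating benefits at low doses and a reversal past the saturation point.

```python
# Invented dose-response with the shape of Figure 19: convex ("more is more")
# at low doses, saturating, then harmful at high doses.
def response(dose):
    benefit = dose ** 2        # accelerating initial benefits (convex zone)
    harm = 0.1 * dose ** 4     # harm eventually dominates
    return benefit - harm

for dose in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"dose {dose}: net effect {response(dose):7.2f}")
# Past the peak, "more of it is like less of it": the net effect declines.
```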

  Fragility Transfer Theorem:

  Note that by the Fragility Transfer Theorem,

  CONVEX EXPOSURE [OVER SOME RANGE] ↔ LIKES VOLATILITY [UP TO SOME POINT]

  (volatility and other members of the disorder cluster), and

  CONCAVE EXPOSURE ↔ DISLIKES VOLATILITY
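
  A minimal numeric illustration of the theorem’s direction, using toy payoffs rather than the formal statement in Taleb and Douady: averaging a convex payoff over a symmetric spread around the mean raises it, while the same spread lowers a concave payoff.

```python
import statistics

convex = lambda x: x ** 2          # toy convex exposure
concave = lambda x: -(x ** 2)      # toy concave exposure

def average_payoff(f, spread, mean=1.0):
    # Symmetric two-point "volatility" around the mean.
    return statistics.mean(f(x) for x in (mean - spread, mean + spread))

for spread in (0.0, 0.5, 1.0):
    print(f"spread {spread}: convex {average_payoff(convex, spread):5.2f}  "
          f"concave {average_payoff(concave, spread):6.2f}")
# Over this range, the convex exposure gains from volatility and the concave
# exposure loses from it.
```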

  MAPPING OF FRAGILITIES

  In Time Series Space

  FIGURE 20. Fragile variations through time, two types of fragilities. A representative series. The horizontal axis shows time, the vertical one shows variations. This can apply to anything: a health indicator, changes in wealth, your happiness, etc. We can see small (or no) benefits and variations most of the time and occasional large adverse outcomes. Uncertainty can hit in a rather hard way. Notice that the loss can occur at any time and exceed the previous cumulative gains. Type 2 (top) and Type 1 (bottom) differ in that Type 2 does not experience large positive effects from uncertainty while Type 1 does.

  FIGURE 21. The Just Robust (but not antifragile) (top): It experiences small or no variations through time. Never large ones. The Antifragile system (bottom): Uncertainty benefits a lot more than it hurts—the exact opposite of the first graph in Figure 20.

  Seen in Probabilities

  FIGURE 22. The horizontal axis represents outcomes, the vertical their probability (i.e., their frequency). The Robust: Small positive and negative outcomes. The Fragile (Type 1, very rare): Can deliver both large negative and large positive outcomes. Why is it rare? Symmetry is very, very rare empirically yet all statistical distributions tend to simplify by using it. The Fragile (Type 2): We see large improbable downside (often hidden and ignored), small upside. There is a possibility of a severe unfavorable outcome (left), much more than a hugely favorable one, as the left side is thicker than the right one. The Antifragile: Large upside, small downside. Large favorable outcomes are possible, large unfavorable ones less so (if not impossible). The right “tail,” for favorable outcomes, is larger than the left one.

  Fragility has a left tail and, what is crucial, is therefore sensitive to perturbations of the left side of the probability distribution.

  FIGURE 23. Definition of Fragility (top graph): Fragility is the shaded area, the increase in the mass in the left tail below a certain level K of the target variable in response to any change in parameter of the source variable—mostly the “volatility” or something a bit more tuned. We subsume all these changes in s⁻, about which more later in the notes section (where I managed to hide equations).

  For a definition of antifragility (bottom graph), which is not exactly symmetric, the same mirror image for the right tail plus robustness in the left tail. The parameter perturbed is s⁺.

  It is key that while we may not be able to specify the probability distribution with any precision, we can probe the response through heuristics thanks to the “transfer theorem” in Taleb and Douady (2012). In other words, we do not need to understand the future probability of events, but we can figure out the fragility to these events.
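
  A rough sketch of that probe, with an assumed Gaussian source of randomness purely for illustration (this is not the Taleb and Douady estimator itself): perturb the scale parameter s and watch how the probability mass below the level K responds.

```python
import random

def left_tail_mass(scale, K=-2.0, n=200_000, seed=7):
    # Monte Carlo estimate of P(X < K) for an assumed Gaussian source.
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.gauss(0.0, scale) < K) / n

s, bump = 1.0, 0.1
before = left_tail_mass(s)
after = left_tail_mass(s + bump)
print(f"P(X < K) at s:        {before:.4f}")
print(f"P(X < K) at s + bump: {after:.4f}")
print(f"fragility proxy (increase in left-tail mass): {after - before:.4f}")
```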

  BARBELL TRANSFORMATION IN TIME SERIES

  FIGURE 24. Barbell seen in time series space. Flooring payoffs while keeping upside.

  BARBELLS (CONVEX TRANSFORMATIONS) AND THEIR PROPERTIES IN PROBABILITY SPACE

  A graphical expression of the barbell idea.

  FIGURE 25. Case 1, the Symmetric Case. Injecting uncertainty into the system makes us move from one bell shape—the first, with a narrow range of possible outcomes—to the second, with a lower peak but more spread out. So it causes an increase of both positive and negative surprises, both positive and negative Black Swans.

  FIGURE 26. Case 2 (top): Fragile. Limited gains, larger losses. Increasing uncertainty in the system causes an augmentation of mostly (sometimes only) negative outcomes, just negative Black Swans. Case 3 (bottom): Antifragile. Increasing randomness and uncertainty in the system raises the probability of very favorable outcomes, and accordingly expands the expected payoff. It shows how discovery is, mathematically, exactly like an anti–airplane delay.

  TECHNICAL VERSION OF FAT TONY’S “NOT THE SAME ‘TING,’ ” OR THE CONFLATION OF EVENTS AND EXPOSURE TO EVENTS

  This note will also explain a “convex transformation.”

  f(x) is exposure to the variable x. f(x) can equivalently be called “payoff from x,” “exposure to x,” even “utility of payoff from x” where we introduce in f a utility function. x can be anything.

  Example: x is the intensity of an earthquake on some scale in some specific area, f(x) is the number of persons dying from it. We can easily see that f(x) can be made more predictable than x (if we force people to stay away from a specific area or build to some standards, etc.).

  Example: x is the number of meters of my fall to the ground when someone pushes me from height x, f(x) is a measure of my physical condition from the effect of the fall. Clearly I cannot predict x (who will push me); f(x), however, is something I can work on.

  Example: x is the number of cars in NYC at noon tomorrow, f(x) is travel time from point A to point B for a certain agent. f(x) can be made more predictable than x (take the subway, or, even better, walk).

  Some people talk about f(x) thinking they are talking about x. This is the problem of the conflation of event and exposure. This error, present in Aristotle, is virtually ubiquitous in the philosophy of probability (say, Hacking).

  One can become antifragile to x without understanding x, through convexity of f(x).

  The answer to the question “what do you do in a world you don’t understand?” is, simply, work on the undesirable states of f(x).

  It is often easier to modify f(x) than to get better knowledge of x. (In other words, robustification rather than forecasting Black Swans.)

  Example: If I buy insurance against the market, here x, dropping more than 20 percent, f(x) will be independent of the part of the probability distribution of x below that 20 percent drop and impervious to changes in its scale parameter. (This is an example of a barbell.)

  FIGURE 27. Convex Transformation (f(x) is a convex function of x). The difference between x and exposure to x. There is no downside risk in the second graph. The key is to modify f(x) in order to make knowledge of the properties of x on the left side of the distribution as irrelevant as possible. This operation is called convex transformation, nicknamed “barbell” here.
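
  A small sketch of the insurance example above, with illustrative numbers: flooring losses at 20 percent makes the exposure indifferent to everything in the left tail of x below that level.

```python
# Illustrative barbell / convex transformation: losses floored at -20%.
def insured_payoff(market_return, floor=-0.20):
    return max(market_return, floor)

for r in (-0.60, -0.35, -0.20, 0.00, 0.30):
    print(f"market {r:+.2f} -> exposure {insured_payoff(r):+.2f}")
# Everything below -0.20 maps to the same payoff: the properties of x on the
# left side of its distribution become irrelevant to f(x).
```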

  Green lumber fallacy: When one confuses f(x) for another function g(x), one that has different nonlinearities.

  More technically: If one is antifragile to x, then the variance (or volatility, or other measures of variation) of x benefits f(x), since skewed distributions have their mean depend on the variance, and when skewed right, their expectation increases with the variance (the lognormal, for instance, has a mean that includes a +½σ² term).
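
  To spell out the lognormal term cited above (a standard result, written here for convenience):

```latex
X \sim \mathcal{N}(\mu,\sigma^{2}) \;\Longrightarrow\;
\mathbb{E}\!\left[e^{X}\right] = e^{\mu + \tfrac{1}{2}\sigma^{2}}
```

  The expectation of the convex exposure e^X therefore rises with the variance σ²: volatility in x benefits f(x).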

  Further, the probability distribution of f(x) is markedly different from that of x, particularly in the presence of nonlinearities.

  When f(x) is convex (concave) monotonically, f(x) is right (left) skewed.

  When f(x) is increasing and convex on the left then concave to the right, the probability distribution of f(x) is thinner-tailed than that of x. For instance, in Kahneman-Tversky’s prospect theory, the so-called utility of changes in wealth is more “robust” than that of wealth.

  Why payoff matters more than probability (technical): Where p(x) is the density, the expectation, that is ∫ f(x)p(x)dx, will depend increasingly on f rather than p, and the more nonlinear f, the more it will depend on f rather than p.
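
  A minimal sketch, assuming a standard normal p and invented payoffs: the three payoffs below are identical to first order at the mean (same value and slope at x = 0), yet their expectations differ widely once they are nonlinear, so it is the shape of f, more than finer knowledge of p, that determines ∫ f(x)p(x)dx.

```python
import random

def expectation(f, n=200_000, seed=3):
    # Monte Carlo estimate of E[f(X)] = ∫ f(x) p(x) dx with p standard normal.
    rng = random.Random(seed)
    return sum(f(rng.gauss(0.0, 1.0)) for _ in range(n)) / n

# Invented payoffs with the same value and slope at the mean (x = 0),
# differing only in their nonlinearity.
payoffs = {
    "linear      f(x) = x":       lambda x: x,
    "convex      f(x) = x + x^2": lambda x: x + x ** 2,
    "more convex f(x) = x + x^4": lambda x: x + x ** 4,
}
for name, f in payoffs.items():
    print(f"{name}:  E[f(X)] = {expectation(f):+.2f}")
```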

  THE FOURTH QUADRANT (TALEB, 2009)

  The idea is that tail events are not computable (in fat-tailed domains), but we can assess our exposure to the problem. Assume f(x) is an increasing function; Table 10 connects the idea to the notion of the Fourth Quadrant.

  LOCAL AND GLOBAL CONVEXITIES (TECHNICAL)

  Nothing is open-ended in nature—death is a maximum outcome for a unit. So things end up convex on one end, concave on the other.

  In fact, there is maximum harm at some point in things biological. Let us revisit the concave figure of the stone and pebbles in Chapter 18: by widening the range we see that boundedness of harm brings convexities somewhere. Concavity was dominant, but local. Figure 28 looks at the continuation of the story of the stone and pebbles.

  FIGURE 28. The top graph shows a broader range in the story of the stone and pebbles in Chapter 18. At some point, the concave turns convex as we hit maximum harm. The bottom graph shows strong antifragility, with no known upper limit (leading to Extremistan). These payoffs are only available in economic variables, say, sales of books, or matters unbounded or near-unbounded. I am unable to find such an effect in nature.

  FIGURE 29. Weak Antifragility (Mediocristan), with bounded maximum. Typical in nature.

  FREAK NONLINEARITIES (VERY TECHNICAL)

  The next two types of nonlinearities are almost never seen outside of economic variables; they are mostly limited to those caused by derivatives.

  FIGURE 30. The top graph shows a convex-concave increasing function, the opposite of the bounded dose-response functions we see in nature. It leads to Type 2, Fragile (very, very fat tails). The bottom graph shows the most dangerous of all: pseudoconvexity. Local antifragility, global fragility.

  MEDICAL NONLINEARITIES AND THEIR PROBABILITY CORRESPONDENCE (CHAPTERS 21 & 22)

  FIGURE 31. Medical Iatrogenics: Case of small benefits and large Black Swan–style losses seen in probability space. Iatrogenics occurs when we have small identifiable gains (say, avoidance of small discomfort or a minor infection) and exposure to Black Swans with delayed invisible large side effects (say, death). These concave benefits from medicine are just like selling a financial option (plenty of risk) against tiny immediate gains while claiming “evidence of no harm.”

 
