The Black Swan


by Nassim Nicholas Taleb


  Then, during the summer of 1998, a combination of large events, triggered by a Russian financial crisis, took place that lay outside their models. It was a Black Swan. LTCM went bust and almost took down the entire financial system with it, as the exposures were massive. Since their models ruled out the possibility of large deviations, they allowed themselves to take a monstrous amount of risk. The ideas of Merton and Scholes, as well as those of Modern Portfolio Theory, were starting to go bust. The magnitude of the losses was spectacular, too spectacular to allow us to ignore the intellectual comedy. Many friends and I thought that the portfolio theorists would suffer the fate of tobacco companies: they were endangering people’s savings and would soon be brought to account for the consequences of their Gaussian-inspired methods.

  None of that happened.

  Instead, MBAs in business schools went on learning portfolio theory. And the option formula went on bearing the name Black-Scholes-Merton, instead of reverting to its true owners, Louis Bachelier, Ed Thorp, and others.

  How to “Prove” Things

  Merton the younger is a representative of the school of neoclassical economics, which, as we have seen with LTCM, represents most powerfully the dangers of Platonified knowledge.* Looking at his methodology, I see the following pattern. He starts with rigidly Platonic assumptions, completely unrealistic—such as the Gaussian probabilities, along with many more equally disturbing ones. Then he generates “theorems” and “proofs” from these. The math is tight and elegant. The theorems are compatible with other theorems from Modern Portfolio Theory, themselves compatible with still other theorems, building a grand theory of how people consume, save, face uncertainty, spend, and project the future. He assumes that we know the likelihood of events. The beastly word equilibrium is always present. But the whole edifice is like a game that is entirely closed, like Monopoly with all of its rules.

  A scholar who applies such methodology resembles Locke’s definition of a madman: someone “reasoning correctly from erroneous premises.”

  Now, elegant mathematics has this property: it is perfectly right, not 99 percent so. This property appeals to mechanistic minds who do not want to deal with ambiguities. Unfortunately you have to cheat somewhere to make the world fit perfect mathematics; and you have to fudge your assumptions somewhere. We have seen with the Hardy quote that professional “pure” mathematicians, however, are as honest as they come.

  So where matters get confusing is when someone like Merton tries to be mathematical and airtight rather than focus on fitness to reality.

  This is where you learn from the minds of military people and those who have responsibilities in security. They do not care about “perfect” ludic reasoning; they want realistic ecological assumptions. In the end, they care about lives.

  I mentioned in Chapter 11 how those who started the game of “formal thinking,” by manufacturing phony premises in order to generate “rigorous” theories, were Paul Samuelson, Merton’s tutor, and, in the United Kingdom, John Hicks. These two wrecked the ideas of John Maynard Keynes, which they tried to formalize (Keynes was interested in uncertainty, and complained about the mind-closing certainties induced by models). Other participants in the formal thinking venture were Kenneth Arrow and Gerard Debreu. All four were Nobeled. All four were in a delusional state under the effect of mathematics—what Dieudonné called “the music of reason,” and what I call Locke’s madness. All of them can be safely accused of having invented an imaginary world, one that lent itself to their mathematics. The insightful scholar Martin Shubik, who held that the degree of excessive abstraction of these models, a few steps beyond necessity, makes them totally unusable, found himself ostracized, a common fate for dissenters.*

  If you question what they do, as I did with Merton Jr., they will ask for “tight proof.” So they set the rules of the game, and you need to play by them. Coming from a practitioner background in which the principal asset is being able to work with messy, but empirically acceptable, mathematics, I cannot accept a pretense of science. I much prefer a sophisticated craft, focused on tricks, to a failed science looking for certainties. Or could these neoclassical model builders be doing something worse? Could it be that they are involved in what Bishop Huet calls the manufacturing of certainties?

  TABLE 4: TWO WAYS TO APPROACH RANDOMNESS

  Skeptical Empiricism and the a-Platonic School | The Platonic Approach

  Interested in what lies outside the Platonic fold | Focuses on the inside of the Platonic fold

  Respect for those who have the guts to say “I don’t know” | “You keep criticizing these models. These models are all we have.”

  Fat Tony | Dr. John

  Thinks of Black Swans as a dominant source of randomness | Thinks of ordinary fluctuations as a dominant source of randomness, with jumps as an afterthought

  Bottom-up | Top-down

  Would ordinarily not wear suits (except to funerals) | Wears dark suits, white shirts; speaks in a boring tone

  Prefers to be broadly right | Prefers to be precisely wrong

  Minimal theory; considers theorizing as a disease to resist | Everything needs to fit some grand, general socioeconomic model and “the rigor of economic theory”; frowns on the “descriptive”

  Does not believe that we can easily compute probabilities | Built their entire apparatus on the assumption that we can compute probabilities

  Model: Sextus Empiricus and the school of evidence-based, minimum-theory empirical medicine | Model: Laplacian mechanics, the world and the economy like a clock

  Develops intuitions from practice, goes from observations to books | Relies on scientific papers, goes from books to practice

  Not inspired by any science; uses messy mathematics and computational methods | Inspired by physics, relies on abstract mathematics

  Ideas based on skepticism, on the unread books in the library | Ideas based on beliefs, on what they think they know

  Assumes Extremistan as a starting point | Assumes Mediocristan as a starting point

  Sophisticated craft | Poor science

  Seeks to be approximately right across a broad set of eventualities | Seeks to be perfectly right in a narrow model, under precise assumptions

  Let us see.

  Skeptical empiricism advocates the opposite method. I care about the premises more than the theories, and I want to minimize reliance on theories, stay light on my feet, and reduce my surprises. I want to be broadly right rather than precisely wrong. Elegance in the theories is often indicative of Platonicity and weakness—it invites you to seek elegance for elegance’s sake. A theory is like medicine (or government): often useless, sometimes necessary, always self-serving, and on occasion lethal. So it needs to be used with care, moderation, and close adult supervision.

  The distinction in the above table between my model of the modern, skeptical empiricist and what Samuelson’s puppies represent can be generalized across disciplines.

  I’ve presented my ideas in finance because that’s where I refined them. Let us now examine a category of people expected to be more thoughtful: the philosophers.

  * This is a simple illustration of the general point of this book in finance and economics. If you do not believe in applying the bell curve to social variables, and if, like many professionals, you are already convinced that “modern” financial theory is dangerous junk science, you can safely skip this chapter.

  * Granted, the Gaussian has been tinkered with, using such methods as complementary “jumps,” stress testing, regime switching, or the elaborate methods known as GARCH, but while these methods represent a good effort, they fail to address the bell curve’s fundamental flaws. Such methods are not scale-invariant. This, in my opinion, can explain the failures of sophisticated methods in real life as shown by the Makridakis competition.

  * More technically, remember my career as an option professional. Not only does an option on a very long shot benefit from Black Swans, but it benefits disproportionately from them—something Scholes and Merton’s “formula” misses. The option payoff is so powerful that you do not have to be right on the odds: you can be wrong on the probability, but get a monstrously large payoff. I’ve called this the “double bubble”: the mispricing of the probability and that of the payoff.

  * I am selecting Merton because I found him very illustrative of academically stamped obscurantism. I discovered Merton’s shortcomings from an angry and threatening seven-page letter he sent me that gave me the impression that he was not too familiar with how we trade options, his very subject matter. He seemed to be under the impression that traders rely on “rigorous” economic theory—as if birds had to study (bad) engineering in order to fly.

  * Medieval medicine was also based on equilibrium ideas when it was top-down and similar to theology. Luckily its practitioners went out of business, as they could not compete with the bottom-up surgeons, ecologically driven former barbers who gained clinical experience, and after whom a-Platonic clinical science was born. If I am alive today, it is because scholastic top-down medicine went out of business a few centuries ago.

  Chapter Eighteen

  THE UNCERTAINTY OF THE PHONY

  Philosophers in the wrong places—Uncertainty about (mostly) lunch—What I don’t care about—Education and intelligence

  This final chapter of Part Three focuses on a major ramification of the ludic fallacy: how those whose job it is to make us aware of uncertainty fail us and divert us into bogus certainties through the back door.

  LUDIC FALLACY REDUX

  I have explained the ludic fallacy with the casino story, and have insisted that the sterilized randomness of games does not resemble randomness in real life. Look again at Figure 7 in Chapter 15. The dice average out so quickly that I can say with certainty that the casino will beat me in the very near long run at, say, roulette, as the noise will cancel out, though not the skills (here, the casino’s advantage). The more you extend the period (or reduce the size of the bets) the more randomness, by virtue of averaging, drops out of these gambling constructs.
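  Taleb’s point about noise averaging out of gambling constructs can be sketched numerically. The following simulation is my illustration, not the book’s; it assumes a standard European roulette wheel and a hypothetical even-money bet on red:

```python
import random

random.seed(42)

def roulette_pnl(n_bets):
    """Player's total profit from n_bets even-money bets on red.

    European wheel: 18 red, 18 black, 1 zero, so the player wins
    with probability 18/37 and the house edge is 1/37 per bet.
    """
    return sum(1 if random.random() < 18 / 37 else -1 for _ in range(n_bets))

# Noise in the total grows like sqrt(n), but the house edge grows like n,
# so the per-bet average converges to -1/37 (about -2.7 percent).
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} bets, average P&L per bet: {roulette_pnl(n) / n:+.4f}")
```

  With enough spins the sample average pins down the house’s 2.7 percent edge: this is the “near long run” in which the casino’s skill beats the noise.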

  The ludic fallacy is present in the following chance setups: random walk, dice throwing, coin flipping, the infamous digital “heads or tails” expressed as 0 or 1, Brownian motion (which corresponds to the movement of pollen particles in water), and similar examples. These setups generate a quality of randomness that does not even qualify as randomness—protorandomness would be a more appropriate designation. At their core, all theories built around the ludic fallacy ignore a layer of uncertainty. Worse, their proponents do not know it!

  One severe application of such focus on small, as opposed to large, uncertainty concerns the hackneyed greater uncertainty principle.

  Find the Phony

  The greater uncertainty principle states that in quantum physics, one cannot measure certain pairs of values (with arbitrary precision), such as the position and momentum of particles. You will hit a lower bound of measurement: what you gain in the precision of one, you lose in the other. So there is an incompressible uncertainty that, in theory, will defy science and forever remain an uncertainty. This minimum uncertainty was discovered by Werner Heisenberg in 1927. I find it ludicrous to present the uncertainty principle as having anything to do with uncertainty. Why? First, this uncertainty is Gaussian. On average, it will disappear—recall that no one person’s weight will significantly change the total weight of a thousand people. We may always remain uncertain about the future positions of small particles, but these uncertainties are very small and very numerous, and they average out—for Pluto’s sake, they average out! They obey the law of large numbers we discussed in Chapter 15. Most other types of randomness do not average out! If there is one thing on this planet that is not so uncertain, it is the behavior of a collection of subatomic particles! Why? Because, as I have said earlier, when you look at an object, composed of a collection of particles, the fluctuations of the particles tend to balance out.
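  The contrast Taleb draws—Gaussian fluctuations that average out versus randomness that does not—can be seen in a few lines. This sketch is mine, not the book’s; the heavy-tailed Cauchy distribution stands in for “wild” randomness:

```python
import math
import random
import statistics

random.seed(7)

def sample_mean(draw, n):
    """Mean of n independent draws from the sampler `draw`."""
    return statistics.fmean(draw() for _ in range(n))

def gaussian():
    return random.gauss(0.0, 1.0)

def cauchy():
    # Inverse-transform sampling of a standard Cauchy: heavy tails,
    # no finite mean, so the law of large numbers does not apply.
    return math.tan(math.pi * (random.random() - 0.5))

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}  Gaussian mean: {sample_mean(gaussian, n):+.4f}  "
          f"Cauchy mean: {sample_mean(cauchy, n):+.4f}")
```

  The Gaussian column shrinks toward zero as n grows; the Cauchy column keeps jumping around, because a single extreme draw can dominate the entire sample—Taleb’s point that most other types of randomness do not average out.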

  But political, social, and weather events do not have this handy property, and we patently cannot predict them, so when you hear “experts” presenting the problems of uncertainty in terms of subatomic particles, odds are that the expert is a phony. As a matter of fact, this may be the best way to spot a phony.

  I often hear people say, “Of course there are limits to our knowledge,” then invoke the greater uncertainty principle as they try to explain that “we cannot model everything”—I have heard such types as the economist Myron Scholes say this at conferences. But I am sitting here in New York, in August 2006, trying to go to my ancestral village of Amioun, Lebanon. Beirut’s airport is closed owing to the conflict between Israel and the Shiite militia Hezbollah. There is no published airline schedule that will inform me when the war will end, if it ends. I can’t figure out if my house will be standing, if Amioun will still be on the map—recall that the family house was destroyed once before. I can’t figure out whether the war is going to degenerate into something even more severe. Looking into the outcome of the war, with all my relatives, friends, and property exposed to it, I face true limits of knowledge. Can someone explain to me why I should care about subatomic particles that, anyway, converge to a Gaussian? People can’t predict how long they will be happy with recently acquired objects, how long their marriages will last, how their new jobs will turn out, yet it’s subatomic particles that they cite as “limits of prediction.” They’re ignoring a mammoth standing in front of them in favor of matter even a microscope would not allow them to see.

  Can Philosophers Be Dangerous to Society?

  I will go further: people who worry about pennies instead of dollars can be dangerous to society. They mean well, but, invoking my Bastiat argument of Chapter 8, they are a threat to us. They are wasting our studies of uncertainty by focusing on the insignificant. Our resources (both cognitive and scientific) are limited, perhaps too limited. Those who distract us increase the risk of Black Swans.

  This commoditization of the notion of uncertainty, symptomatic of Black Swan blindness, is worth discussing further here.

  Given that people in finance and economics are steeped in the Gaussian to the point of choking on it, I looked for financial economists with philosophical bents to see how their critical thinking allows them to handle this problem. I found a few. One such person got a PhD in philosophy, then, four years later, another in finance; he published papers in both fields, as well as numerous textbooks in finance. But I was disheartened by him: he seemed to have compartmentalized his ideas on uncertainty so that he had two distinct professions: philosophy and quantitative finance. The problem of induction, Mediocristan, epistemic opacity, or the offensive assumption of the Gaussian—these did not hit him as true problems. His numerous textbooks drilled Gaussian methods into students’ heads, as though their author had forgotten that he was a philosopher. Then he promptly remembered that he was when writing philosophy texts on seemingly scholarly matters.

  The same context specificity leads people to take the escalator to the StairMasters, but the philosopher’s case is far, far more dangerous since he uses up our storage for critical thinking in a sterile occupation. Philosophers like to practice philosophical thinking on me-too subjects that other philosophers call philosophy, and they leave their minds at the door when they are outside of these subjects.

  The Problem of Practice

  As much as I rail against the bell curve, Platonicity, and the ludic fallacy, my principal problem is not so much with statisticians—after all, these are computing people, not thinkers. We should be far less tolerant of philosophers, with their bureaucratic apparatchiks closing our minds. Philosophers, the watchdogs of critical thinking, have duties beyond those of other professions.

  HOW MANY WITTGENSTEINS CAN DANCE ON THE HEAD OF A PIN?

  A number of semishabbily dressed (but thoughtful-looking) people gather in a room, silently looking at a guest speaker. They are all professional philosophers attending the prestigious weekly colloquium at a New York–area university. The speaker sits with his nose drowned in a set of typewritten pages, from which he reads in a monotone voice. He is hard to follow, so I daydream a bit and lose his thread. I can vaguely tell that the discussion revolves around some “philosophical” debate about Martians invading your head and controlling your will, all the while preventing you from knowing it. There seem to be several theories concerning this idea, but the speaker’s opinion differs from those of other writers on the subject. He spends some time showing where his research on these head-hijacking Martians is unique. After his monologue (fifty-five minutes of relentless reading of the typewritten material) there is a short break, then another fifty-five minutes of discussion about Martians planting chips and other outlandish conjectures. Wittgenstein is occasionally mentioned (you can always mention Wittgenstein since he is vague enough to always seem relevant).

  Every Friday, at four P.M., the paychecks of these philosophers will hit their respective bank accounts. A fixed proportion of their earnings, about 16 percent on average, will go into the stock market in the form of an automatic investment into the university’s pension plan. These people are professionally employed in the business of questioning what we take for granted; they are trained to argue about the existence of god(s), the definition of truth, the redness of red, the meaning of meaning, the difference between the semantic theories of truth, conceptual and nonconceptual representations … Yet they believe blindly in the stock market, and in the abilities of their pension plan manager. Why do they do so? Because they accept that this is what people should do with their savings, because “experts” tell them so. They doubt their own senses, but not for a second do they doubt their automatic purchases in the stock market. This domain dependence of skepticism is no different from that of medical doctors (as we saw in Chapter 8).

 
