The Black Swan

by Nassim Nicholas Taleb


  Even in testing a hypothesis, we tend to look for instances where the hypothesis proved true. Of course we can easily find confirmation; all we have to do is look, or have a researcher do it for us. I can find confirmation for just about anything, the way a skilled London cabbie can find traffic to increase the fare, even on a holiday.

  Some people go further and give me examples of events that we have been able to foresee with some success—indeed there are a few, like landing a man on the moon and the economic growth of the twentieth century. One can find plenty of “counterevidence” to the points in this book, the best being that newspapers are excellent at predicting movie and theater schedules. Look, I predicted yesterday that the sun would rise today, and it did!

  NEGATIVE EMPIRICISM

  The good news is that there is a way around this naïve empiricism. I am saying that a series of corroborative facts is not necessarily evidence. Seeing white swans does not confirm the nonexistence of black swans. There is an exception, however: I know what statement is wrong, but not necessarily what statement is correct. If I see a black swan I can certify that not all swans are white! If I see someone kill, then I can be practically certain that he is a criminal. If I don’t see him kill, I cannot be certain that he is innocent. The same applies to cancer detection: the finding of a single malignant tumor proves that you have cancer, but the absence of such a finding cannot allow you to say with certainty that you are cancer-free.
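
  In standard logic (my notation, not the book’s), the asymmetry can be written out. Take S(x) for “x is a swan” and W(x) for “x is white”:

    \text{Claim:}\quad \forall x\,\bigl(S(x) \to W(x)\bigr) \qquad \text{(“all swans are white”)}

    \text{One black swan:}\quad S(a) \wedge \neg W(a) \;\Longrightarrow\; \neg\,\forall x\,\bigl(S(x) \to W(x)\bigr)

    \text{Any finite run of white swans:}\quad S(a_1) \wedge W(a_1),\ \dots,\ S(a_n) \wedge W(a_n) \;\not\Longrightarrow\; \forall x\,\bigl(S(x) \to W(x)\bigr)

  One counterexample settles the universal claim in the negative; no finite pile of confirmations can settle it in the affirmative.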

  We can get closer to the truth by negative instances, not by verification! It is misleading to build a general rule from observed facts. Contrary to conventional wisdom, our body of knowledge does not increase from a series of confirmatory observations, like the turkey’s. But there are some things I can remain skeptical about, and others I can safely consider certain. This makes the consequences of observations one-sided. It is not much more difficult than that.

  This asymmetry is immensely practical. It tells us that we do not have to be complete skeptics, just semiskeptics. The subtlety of real life over the books is that, in your decision making, you need be interested only in one side of the story: if you seek certainty about whether the patient has cancer, not certainty about whether he is healthy, then you might be satisfied with negative inference, since it will supply you the certainty you seek. So we can learn a lot from data—but not as much as we expect. Sometimes a lot of data can be meaningless; at other times one single piece of information can be very meaningful. It is true that a thousand days cannot prove you right, but one day can prove you to be wrong.

  The person who is credited with the promotion of this idea of one-sided semiskepticism is Sir Doktor Professor Karl Raimund Popper, who may be the only philosopher of science who is actually read and discussed by actors in the real world (though not as enthusiastically by professional philosophers). As I am writing these lines, a black-and-white picture of him is hanging on the wall of my study. It was a gift I got in Munich from the essayist Jochen Wegner, who, like me, considers Popper to be about all “we’ve got” among modern thinkers—well, almost. He writes to us, not to other philosophers. “We” are the empirical decision makers who hold that uncertainty is our discipline, and that understanding how to act under conditions of incomplete information is the highest and most urgent human pursuit.

  Popper generated a large-scale theory around this asymmetry, based on a technique called “falsification” (to falsify is to prove wrong) meant to distinguish between science and nonscience, and people immediately started splitting hairs about its technicalities, even though it is not the most interesting, or the most original, of Popper’s ideas. This idea about the asymmetry of knowledge is so well liked by practitioners because it is obvious to them; it is the way they run their business. The philosopher maudit Charles Sanders Peirce, who, like an artist, got only posthumous respect, also came up with a version of this Black Swan solution when Popper was wearing diapers—some people even called it the Peirce-Popper approach. Popper’s far more powerful and original idea is the “open” society, one that relies on skepticism as a modus operandi, refusing and resisting definitive truths. He accused Plato of closing our minds, according to the arguments I described in the Prologue. But Popper’s biggest idea was his insight concerning the fundamental, severe, and incurable unpredictability of the world, and that I will leave for the chapter on prediction.*

  Of course, it is not so easy to “falsify,” i.e., to state that something is wrong with full certainty. Imperfections in your testing method may yield a mistaken “no.” The doctor discovering cancer cells might have faulty equipment causing optical illusions; or he could be a bell-curve-using economist disguised as a doctor. An eyewitness to a crime might be drunk. But it remains the case that you know what is wrong with a lot more confidence than you know what is right. All pieces of information are not equal in importance.

  Popper introduced the mechanism of conjectures and refutations, which works as follows: you formulate a (bold) conjecture and you start looking for the observation that would prove you wrong. This is the alternative to our search for confirmatory instances. If you think the task is easy, you will be disappointed—few humans have a natural ability to do this. I confess that I am not one of them; it does not come naturally to me.*

  Counting to Three

  Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias. There are some experiments showing that people focus only on the books read in Umberto Eco’s library. You can test a given rule either directly, by looking at instances where it works, or indirectly, by focusing on where it does not work. As we saw earlier, disconfirming instances are far more powerful in establishing truth. Yet we tend not to be aware of this property.

  The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule. Once confident in their answers, the subjects would formulate the rule. (Note the similarity of this experiment to the discussion in Chapter 1 of the way history presents itself to us: assuming history is generated according to some logic, we see only the events, never the rules, but need to guess how it works.) The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it because in order to do so they had to offer a series in descending order (that the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up.
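
  A minimal sketch of the task in Python (illustrative only; Wason’s protocol was verbal, and the function names and the “+2” hypothesis are my own stand-ins for what subjects typically guessed) shows why confirmatory probes can never expose a hypothesis that is narrower than the true rule:

    # Sketch of Wason's 2-4-6 task (illustrative; not his actual protocol).

    def true_rule(a, b, c):
        """Wason's actual rule: any three numbers in ascending order."""
        return a < b < c

    def typical_hypothesis(a, b, c):
        """A typical guess after seeing 2, 4, 6: ascending by steps of two."""
        return b - a == 2 and c - b == 2

    # Confirmatory probes: sequences chosen because the hypothesis says "yes".
    # The experimenter also answers "yes" to each, so confidence grows...
    for probe in [(4, 6, 8), (10, 12, 14), (100, 102, 104)]:
        assert true_rule(*probe) and typical_hypothesis(*probe)

    # ...but a single probe the hypothesis forbids settles the matter.
    print(true_rule(1, 2, 3))           # True: the real rule allows it
    print(typical_hypothesis(1, 2, 3))  # False: the "+2" guess is refuted

  No run of “yes” answers to +2 sequences could distinguish the two functions above; only a probe the subject’s own rule rejects can.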

  This experiment inspired a collection of similar tests; here is another example. Subjects were asked which questions to ask to find out whether a person was extroverted, purportedly for another type of experiment. It was established that subjects supplied mostly questions for which a “yes” answer would support the hypothesis.

  But there are exceptions. Among them figure chess grand masters, who, it has been shown, actually do focus on where a speculative move might be weak; rookies, by comparison, look for confirmatory instances instead of falsifying ones. But don’t play chess to practice skepticism. Scientists believe that it is the search for their own weaknesses that makes them good chess players, not the practice of chess that turns them into skeptics. Similarly, the speculator George Soros, when making a financial bet, keeps looking for instances that would prove his initial theory wrong. This, perhaps, is true self-confidence: the ability to look at the world without the need to find signs that stroke one’s ego.*

  Sadly, the notion of corroboration is rooted in our intellectual habits and discourse. Consider this comment by the writer and critic John Updike: “When Julian Jaynes … speculates that until late in the second millennium B.C. men had no consciousness but were automatically obeying the voices of gods, we are astounded but compelled to follow this remarkable thesis through all the corroborative evidence.” Jaynes’s thesis may be right, but, Mr. Updike, the central problem of knowledge (and the point of this chapter) is that there is no such animal as corroborative evidence.

  Saw Another Red Mini!

  The following point further illustrates the absurdity of confirmation. If you believe that witnessing an additional white swan will bring confirmation that there are no black swans, then you should also accept the statement, on purely logical grounds, that the sighting of a red Mini Cooper should confirm that there are no black swans.

  Why? Just consider that the statement “all swans are white” is equivalent to “all nonwhite objects are not swans.” What confirms the latter statement should confirm the former. Therefore, a mind with a confirmation bent would infer that the sighting of a nonwhite object that is not a swan should bring such confirmation. This argument, known as Hempel’s raven paradox, was rediscovered by my friend the (thinking) mathematician Bruno Dupire during one of our intense meditating walks in London—one of those intense walk-discussions, intense to the point of our not noticing the rain. He pointed to a red Mini and shouted, “Look, Nassim, look! No Black Swan!”
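
  The step Dupire is exploiting is the contrapositive. Writing S(x) for “x is a swan” and W(x) for “x is white” (my notation, not the book’s):

    \forall x\,\bigl(S(x) \to W(x)\bigr) \;\equiv\; \forall x\,\bigl(\neg W(x) \to \neg S(x)\bigr)

  A red Mini is a nonwhite object that is not a swan, hence an instance of the right-hand statement; so if instances really confirmed universal claims, the Mini would “confirm” that all swans are white.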

  Not Everything

  We are not naïve enough to believe that someone will be immortal because we have never seen him die, or that someone is innocent of murder because we have never seen him kill. The problem of naïve generalization does not plague us everywhere. But such smart pockets of inductive skepticism tend to involve events that we have encountered in our natural environment, matters from which we have learned to avoid foolish generalization.

  For instance, when children are presented with the picture of a single member of a group and are asked to guess the properties of other unseen members, they are capable of selecting which attributes to generalize. Show a child a photograph of someone overweight, tell her that he is a member of a tribe, and ask her to describe the rest of the population: she will (most likely) not jump to the conclusion that all the members of the tribe are weight-challenged. But she would respond differently to generalizations involving skin color. If you show her people of dark complexion and ask her to describe their co-tribesmen, she will assume that they too have dark skin.

  So it seems that we are endowed with specific and elaborate inductive instincts showing us the way. Contrary to the opinion of the great David Hume and the British empiricist tradition, who held that belief arises from custom and that we learn generalizations solely from experience and empirical observations, studies of infant behavior show that we come equipped with mental machinery that causes us to selectively generalize from experiences (i.e., to selectively acquire inductive learning in some domains but remain skeptical in others). By doing so, we are not learning from a mere thousand days, but benefiting, thanks to evolution, from the learning of our ancestors—which found its way into our biology.

  Back to Mediocristan

  And we may have learned things wrong from our ancestors. I speculate here that we probably inherited the instincts adequate for survival in the East African Great Lakes region where we presumably hail from, but these instincts are certainly not well adapted to the present, post-alphabet, intensely informational, and statistically complex environment.

  Indeed our environment is a bit more complex than we (and our institutions) seem to realize. How? The modern world, being Extremistan, is dominated by rare—very rare—events. It can deliver a Black Swan after thousands and thousands of white ones, so we need to withhold judgment for longer than we are inclined to. As I said in Chapter 3, it is impossible—biologically impossible—to run into a human several hundred miles tall, so our intuitions rule these events out. But the sales of a book or the magnitude of social events do not follow such strictures. It takes a lot more than a thousand days to accept that a writer is ungifted, a market will not crash, a war will not happen, a project is hopeless, a country is “our ally,” a company will not go bust, a brokerage-house security analyst is not a charlatan, or a neighbor will not attack us. In the distant past, humans could make inferences far more accurately and quickly.

  Furthermore, the sources of Black Swans today have multiplied beyond measurability.* In the primitive environment they were limited to newly encountered wild animals, new enemies, and abrupt weather changes. These events were repeatable enough for us to have built an innate fear of them. This instinct to make inferences rather quickly, and to “tunnel” (i.e., focus on a small number of sources of uncertainty, or causes of known Black Swans) remains rather ingrained in us. This instinct, in a word, is our predicament.

  * Neither Peirce nor Popper was the first to come up with this asymmetry. The philosopher Victor Brochard mentioned the importance of negative empiricism in 1878, as if it were a matter held by the empiricists to be the sound way to do business—ancients understood it implicitly. Out-of-print books deliver many surprises.

  * As I said in the Prologue, the likely not happening is also a Black Swan. So disconfirming the likely is equivalent to confirming the unlikely.

  * This confirmation problem pervades our modern life, since most conflicts have at their root the following mental bias: when Arabs and Israelis watch news reports they see different stories in the same succession of events. Likewise, Democrats and Republicans look at different parts of the same data and never converge to the same opinions. Once your mind is inhabited with a certain view of the world, you will tend to only consider instances proving you to be right. Paradoxically, the more information you have, the more justified you will feel in your views.

  * Clearly, weather-related and geological events (such as tornadoes and earthquakes) have not changed much over the past millennium, but what have changed are the socioeconomic consequences of such occurrences. Today, an earthquake or hurricane commands ever more severe economic consequences than it did in the past because of the interlocking relationships between economic entities and the intensification of the “network effects” that we will discuss in Part Three. Matters that used to have mild effects now command a high impact. Tokyo’s 1923 earthquake caused a drop of about a third in Japan’s GNP. Extrapolating from the tragedy of Kobe in 1995, we can easily infer that the consequences of another such earthquake in Tokyo would be far costlier than those of its predecessor.

  Chapter Six

  THE NARRATIVE FALLACY

  The cause of the because—How to split a brain—Effective methods of pointing at the ceiling—Dopamine will help you win—I will stop riding motorcycles (but not today)—Both empirical and psychologist? Since when?

  ON THE CAUSES OF MY REJECTION OF CAUSES

  During the fall of 2004, I attended a conference on aesthetics and science in Rome, perhaps the best possible location for such a meeting since aesthetics permeates everything there, down to one’s personal behavior and tone of voice. At lunch, a prominent professor from a university in southern Italy greeted me with extreme enthusiasm. I had listened earlier that morning to his impassioned presentation; he was so charismatic, so convinced, and so convincing that, although I could not understand much of what he said, I found myself fully agreeing with everything. I could only make out a sentence here and there, since my knowledge of Italian worked better in cocktail parties than in intellectual and scholarly venues. At some point during his speech, he turned all red with anger—thus convincing me (and the audience) that he was definitely right.

  He assailed me during lunch to congratulate me for showing the effects of those causal links that are more prevalent in the human mind than in reality. The conversation got so animated that we stood together near the buffet table, blocking the other delegates from getting close to the food. He was speaking accented French (with his hands), I was answering in primitive Italian (with my hands), and we were so vivacious that the other guests were afraid to interrupt a conversation of such importance and animation. He was emphatic about my previous book on randomness, a sort of angry trader’s reaction against blindness to luck in life and in the markets, which had been published there under the musical title Giocati dal caso. I had been lucky to have a translator who knew almost more about the topic than I did, and the book found a small following among Italian intellectuals. “I am a huge fan of your ideas, but I feel slighted. These are truly mine too, and you wrote the book that I (almost) planned to write,” he said. “You are a lucky man; you presented in such a comprehensive way the effect of chance on society and the overestimation of cause and effect. You show how stupid we are to systematically try to explain skills.”

  He stopped, then added, in a calmer tone: “But, mon cher ami, let me tell you quelque chose [uttered very slowly, with his thumb hitting his index and middle fingers]: had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause and effect because of your Eastern Orthodox Mediterranean heritage.” He was using the French à cause. And he was so convincing that, for a minute, I agreed with his interpretation.

 
