They all had come to pay the price of sophistication. Soon they would listen for several hours to a collection of oversize men and women singing endlessly in Russian. Many of the opera-bound people looked like they worked for the local office of J. P. Morgan, or some other financial institution where employees experience differential wealth from the rest of the local population, with concomitant pressures on them to live by a sophisticated script (wine and opera). But I was not there to take a peek at the neosophisticates. I had come to look at the Sydney Opera House, a building that adorns every Australian tourist brochure. Indeed, it is striking, though it looks like the sort of building architects create in order to impress other architects.
That evening walk in the very pleasant part of Sydney called the Rocks was a pilgrimage. While Australians were under the illusion that they had built a monument to distinguish their skyline, what they had really done was to construct a monument to our failure to predict, to plan, and to come to grips with our unknowledge of the future—our systematic underestimation of what the future has in store.
The Australians had actually built a symbol of the epistemic arrogance of the human race. The story is as follows. The Sydney Opera House was supposed to open in early 1963 at a cost of AU$ 7 million. It finally opened its doors more than ten years later, and, although it was a less ambitious version than initially envisioned, it ended up costing around AU$ 104 million. While there are far worse cases of planning failures (namely the Soviet Union), or failures to forecast (all important historical events), the Sydney Opera House provides an aesthetic (at least in principle) illustration of the difficulties. This opera-house story is the mildest of all the distortions we will discuss in this section (it was only money, and it did not cause the spilling of innocent blood). But it is nevertheless emblematic.
This chapter has two topics. First, we are demonstrably arrogant about what we think we know. We certainly know a lot, but we have a built-in tendency to think that we know a little bit more than we actually do, enough of that little bit to occasionally get into serious trouble. We shall see how you can verify, even measure, such arrogance in your own living room.
Second, we will look at the implications of this arrogance for all the activities involving prediction.
Why on earth do we predict so much? Worse, even, and more interesting: Why don’t we talk about our record in predicting? Why don’t we see how we (almost) always miss the big events? I call this the scandal of prediction.
ON THE VAGUENESS OF CATHERINE’S LOVER COUNT
Let us examine what I call epistemic arrogance, literally, our hubris concerning the limits of our knowledge. Epistēmē is a Greek word that refers to knowledge; giving a Greek name to an abstract concept makes it sound important. True, our knowledge does grow, but it is threatened by greater increases in confidence, which make our increase in knowledge at the same time an increase in confusion, ignorance, and conceit.
Take a room full of people. Randomly pick a number. The number could correspond to anything: the proportion of psychopathic stockbrokers in western Ukraine, the sales of this book during the months with r in them, the average IQ of business-book editors (or business writers), the number of lovers of Catherine II of Russia, et cetera. Ask each person in the room to independently estimate a range of possible values for that number, set in such a way that they believe that they have a 98 percent chance of being right and less than a 2 percent chance of being wrong. In other words, whatever they are guessing has about a 2 percent chance of falling outside their range. For example:
“I am 98 percent confident that the population of Rajasthan is between 15 and 23 million.”
“I am 98 percent confident that Catherine II of Russia had between 34 and 63 lovers.”
You can make inferences about human nature by counting how many people in your sample guessed wrong; it is not expected to be too much higher than two out of a hundred participants. Note that the subjects (your victims) are free to set their range as wide as they want: you are not trying to gauge their knowledge but rather their evaluation of their own knowledge.
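For readers who want to run the living-room version of this test, here is a minimal sketch in Python of how the scoring works. The stated ranges, the “true value,” and all names below are invented for illustration; they are not data from any of the studies discussed here. The only point of the sketch is the counting rule: the test measures how often the stated ranges fail, not how close the guesses are.

```python
# Scoring a calibration quiz of the kind described above (illustrative sketch).
# Each respondent states a (low, high) range meant to contain the true value
# with 98 percent confidence; all figures below are made up.
responses = [
    (15_000_000, 23_000_000),
    (20_000_000, 90_000_000),
    (5_000_000, 40_000_000),
]

true_value = 68_000_000  # hypothetical answer, used only for this example

# A respondent "misses" when the true value falls outside the stated range.
misses = sum(1 for low, high in responses if not (low <= true_value <= high))
miss_rate = misses / len(responses)

# Well-calibrated guessers would miss about 2 percent of the time; the studies
# described in the text report miss rates closer to 15-45 percent.
print(f"Miss rate: {miss_rate:.0%} (intended: 2%)")
```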
Now, the results. Like many things in life, the discovery was unplanned, serendipitous, surprising, and took a while to digest. Legend has it that Alpert and Raiffa, the researchers who noticed it, were actually looking for something quite different, and more boring: how humans figure out probabilities in their decision making when uncertainty is involved (what the learned call calibrating). The researchers came out befuddled. The 2 percent error rate turned out to be close to 45 percent in the population being tested! It is quite telling that the first sample consisted of Harvard Business School students, a breed not particularly renowned for their humility or introspective orientation. MBAs are particularly nasty in this regard, which might explain their business success. Later studies document more humility, or rather a smaller degree of arrogance, in other populations. Janitors and cabdrivers are rather humble. Politicians and corporate executives, alas … I’ll leave them for later.
Are we twenty-two times too comfortable with what we know? It seems so.
This experiment has been replicated dozens of times, across populations, professions, and cultures, and just about every empirical psychologist and decision theorist has tried it on his class to show his students the big problem of humankind: we are simply not wise enough to be trusted with knowledge. The intended 2 percent error rate usually turns out to be between 15 percent and 30 percent, depending on the population and the subject matter.
I have tested myself and, sure enough, failed, even while consciously trying to be humble by carefully setting a wide range—and yet such underestimation happens to be, as we will see, the core of my professional activities. This bias seems present in all cultures, even those that favor humility—there may be no consequential difference between downtown Kuala Lumpur and the ancient settlement of Amioun, (currently) Lebanon. Yesterday afternoon, I gave a workshop in London, and had been mentally writing on my way to the venue because the cabdriver had an above-average ability to “find traffic.” I decided to make a quick experiment during my talk.
I asked the participants to take a stab at a range for the number of books in Umberto Eco’s library, which, as we know from the introduction to Part One, contains 30,000 volumes. Of the sixty attendees, not a single one made the range wide enough to include the actual number (the 2 percent error rate became 100 percent). This case may be an aberration, but the distortion is exacerbated with quantities that are out of the ordinary. Interestingly, the crowd erred on the very high and the very low sides: some set their ranges at 2,000 to 4,000; others at 300,000 to 600,000.
True, someone warned about the nature of the test can play it safe and set the range between zero and infinity; but this would no longer be “calibrating”—that person would not be conveying any information, and could not produce an informed decision in such a manner. In this case it is more honorable to just say, “I don’t want to play the game; I have no clue.”
It is not uncommon to find counterexamples, people who overshoot in the opposite direction and actually overestimate their error rate: you may have a cousin particularly careful in what he says, or you may remember that college biology professor who exhibited pathological humility; the tendency that I am discussing here applies to the average of the population, not to every single individual. There are sufficient variations around the average to warrant occasional counterexamples. Such people are in the minority—and, sadly, since they do not easily achieve prominence, they do not seem to play too influential a role in society.
Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown).
The applications of this distortion extend beyond the mere pursuit of knowledge: just look into the lives of the people around you. Literally any decision pertaining to the future is likely to be infected by it. Our human race is affected by a chronic underestimation of the possibility of the future straying from the course initially envisioned (in addition to other biases that sometimes exert a compounding effect). To take an obvious example, think about how many people divorce. Almost all of them are acquainted with the statistic that between one-third and one-half of all marriages fail, something the parties involved did not forecast while tying the knot. Of course, “not us,” because “we get along so well” (as if others tying the knot got along poorly).
I remind the reader that I am not testing how much people know, but assessing the difference between what people actually know and how much they think they know. I am reminded of a measure my mother concocted, as a joke, when I decided to become a businessman. Being ironic about my (perceived) confidence, though not necessarily unconvinced of my abilities, she found a way for me to make a killing. How? Someone who could figure out how to buy me at the price I am truly worth and sell me at what I think I am worth would be able to pocket a huge difference. Though I keep trying to convince her of my internal humility and insecurity concealed under a confident exterior; though I keep telling her that I am an introspector—she remains skeptical. Introspector shmintrospector; at the time of this writing, she still jokes that I am a little ahead of myself.
BLACK SWAN BLINDNESS REDUX
The simple test above suggests the presence of an ingrained tendency in humans to underestimate outliers—or Black Swans. Left to our own devices, we tend to think that what happens every decade in fact only happens once every century, and, furthermore, that we know what’s going on.
This miscalculation problem is a little more subtle. In truth, outliers are not as sensitive to underestimation since they are fragile to estimation errors, which can go in both directions. As we saw in Chapter 6, there are conditions under which people overestimate the unusual or some specific unusual event (say when sensational images come to their minds)—which, we have seen, is how insurance companies thrive. So my general point is that these events are very fragile to miscalculation, with a general severe underestimation mixed with an occasional severe overestimation.
The errors get worse with the degree of remoteness to the event. So far, we have only considered a 2 percent error rate in the game we saw earlier, but if you look at, say, situations where the odds are one in a hundred, one in a thousand, or one in a million, then the errors become monstrous. The longer the odds, the larger the epistemic arrogance.
Note here one particularity of our intuitive judgment: even if we lived in Mediocristan, in which large events are rare (and, mostly, inconsequential), we would still underestimate extremes—we would think that they are even rarer. We underestimate our error rate even with Gaussian variables. Our intuitions are sub-Mediocristani. But we do not live in Mediocristan. The numbers we are likely to estimate on a daily basis belong largely in Extremistan, i.e., they are run by concentration and subjected to Black Swans.
Guessing and Predicting
There is no effective difference between my guessing a variable that is not random, but for which my information is partial or deficient, such as the number of lovers who transited through the bed of Catherine II of Russia, and predicting a random one, like tomorrow’s unemployment rate or next year’s stock market. In this sense, guessing (what I don’t know, but what someone else may know) and predicting (what has not taken place yet) are the same thing.
To further appreciate the connection between guessing and predicting, assume that instead of trying to gauge the number of lovers of Catherine of Russia, you are estimating the less interesting but, for some, more important question of the population growth for the next century, the stock market returns, the social-security deficit, the price of oil, the results of your great-uncle’s estate sale, or the environmental conditions of Brazil two decades from now. Or, if you are the publisher of Yevgenia Krasnova’s book, you may need to produce an estimate of the possible future sales. We are now getting into dangerous waters: just consider that most professionals who make forecasts are also afflicted with the mental impediment discussed above. Furthermore, people who make forecasts professionally are often more affected by such impediments than those who don’t.
INFORMATION IS BAD FOR KNOWLEDGE
You may wonder how learning, education, and experience affect epistemic arrogance—how educated people might score on the above test, as compared with the rest of the population (using Mikhail the cabdriver as a benchmark). You will be surprised by the answer: it depends on the profession. I will first look at the advantages of the “informed” over the rest of us in the humbling business of prediction.
I recall visiting a friend at a New York investment bank and seeing a frenetic hotshot “master of the universe” type walking around with a set of wireless headphones wrapped around his ears and a microphone jutting out of the right side that prevented me from focusing on his lips during my twenty-second conversation with him. I asked my friend the purpose of that contraption. “He likes to keep in touch with London,” I was told. When you are employed, hence dependent on other people’s judgment, looking busy can help you claim responsibility for the results in a random environment. The appearance of busyness reinforces the perception of causality, of the link between results and one’s role in them. This of course applies even more to the CEOs of large companies who need to trumpet a link between their “presence” and “leadership” and the results of the company. I am not aware of any studies that probe the usefulness of their time being invested in conversations and the absorption of small-time information—nor have too many writers had the guts to question how large the CEO’s role is in a corporation’s success.
Let us discuss one main effect of information: impediment to knowledge.
Aristotle Onassis, perhaps the first mediatized tycoon, was principally famous for being rich—and for exhibiting it. An ethnic Greek refugee from southern Turkey, he went to Argentina, made a lump of cash by importing Turkish tobacco, then became a shipping magnate. He was reviled when he married Jacqueline Kennedy, the widow of the American president John F. Kennedy, which drove the heartbroken opera singer Maria Callas to immure herself in a Paris apartment to await death.
If you study Onassis’s life, which I spent part of my early adulthood doing, you will notice an interesting regularity: “work,” in the conventional sense, was not his thing. He did not even bother to have a desk, let alone an office. He was not just a dealmaker, which does not necessitate having an office, but he also ran a shipping empire, which requires day-to-day monitoring. Yet his main tool was a notebook, which contained all the information he needed. Onassis spent his life trying to socialize with the rich and famous, and to pursue (and collect) women. He generally woke up at noon. If he needed legal advice, he would summon his lawyers to some nightclub in Paris at two A.M. He was said to have an irresistible charm, which helped him take advantage of people.
Let us go beyond the anecdote. There may be a “fooled by randomness” effect here, of making a causal link between Onassis’s success and his modus operandi. I may never know if Onassis was skilled or lucky, though I am convinced that his charm opened doors for him, but I can subject his modus to a rigorous examination by looking at empirical research on the link between information and understanding. So this statement, “additional knowledge of the minutiae of daily business can be useless, even actually toxic,” is indirectly but quite effectively testable.
Show two groups of people a blurry image of a fire hydrant, blurry enough for them not to recognize what it is. For one group, increase the resolution slowly, in ten steps. For the second, do it faster, in five steps. Stop at a point where both groups have been presented with an identical image and ask each of them to identify what they see. The members of the group that saw fewer intermediate steps are likely to recognize the hydrant much faster. Moral? The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be. They see more random noise and mistake it for information.
The problem is that our ideas are sticky: once we produce a theory, we are not likely to change our minds—so those who delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate. Two mechanisms are at play here: the confirmation bias that we saw in Chapter 5, and belief perseverance, the tendency not to reverse opinions you already have. Remember that we treat ideas like possessions, and it will be hard for us to part with them.
The fire hydrant experiment was first done in the sixties, and replicated several times since. I have also studied this effect using the mathematics of information: the more detailed knowledge one gets of empirical reality, the more one will see the noise (i.e., the anecdote) and mistake it for actual information. Remember that we are swayed by the sensational. Listening to the news on the radio every hour is far worse for you than reading a weekly magazine, because the longer interval allows information to be filtered a bit.
In 1965, Stuart Oskamp supplied clinical psychologists with successive files, each containing an increasing amount of information about patients; the psychologists’ diagnostic abilities did not grow with the additional supply of information. They just got more confident in their original diagnosis. Granted, one may not expect too much of psychologists of the 1965 variety, but these findings seem to hold across disciplines.
Finally, in another telling experiment, the psychologist Paul Slovic asked bookmakers to select from eighty-eight variables in past horse races those that they found useful in computing the odds. These variables included all manner of statistical information about past performances. The bookmakers were given the ten most useful variables, then asked to predict the outcome of races. Then they were given ten more and asked to predict again. The increase in the information set did not lead to an increase in their accuracy; their confidence in their choices, on the other hand, went up markedly. Information proved to be toxic. I’ve struggled much of my life with the common middlebrow belief that “more is better”—more is sometimes, but not always, better. This toxicity of knowledge will show in our investigation of the so-called expert.