This subtle but extremely consequential property of scalable randomness is unusually counterintuitive. We misunderstand the logic of large deviations from the norm.
I will get deeper into these properties of scalable randomness in Part Three. But let us say for now that they are central to our misunderstanding of the business of prediction.
DON’T CROSS A RIVER IF IT IS (ON AVERAGE) FOUR FEET DEEP
Corporate and government projections have an additional easy-to-spot flaw: they do not attach a possible error rate to their scenarios. Even in the absence of Black Swans this omission would be a mistake.
I once gave a talk to policy wonks at the Woodrow Wilson Center in Washington, D.C., challenging them to be aware of our weaknesses in seeing ahead.
The attendees were tame and silent. What I was telling them was against everything they believed and stood for; I had gotten carried away with my aggressive message, but they looked thoughtful compared to the testosterone-charged characters one encounters in business. I felt guilty for my stance. Few asked questions. The person who organized the talk and invited me must have been pulling a joke on his colleagues. I was like an aggressive atheist making his case in front of a synod of cardinals, while dispensing with the usual formulaic euphemisms.
Yet some members of the audience were sympathetic to the message. One anonymous person (he is employed by a governmental agency) explained to me privately after the talk that in January 2004 his department was forecasting the price of oil for twenty-five years later at $27 a barrel, slightly higher than what it was at the time. Six months later, around June 2004, after oil doubled in price, they had to revise their estimate to $54 (the price of oil is currently, as I am writing these lines, close to $79 a barrel). It did not dawn on them that, since their forecast was off so early and so markedly, it was ludicrous to forecast a second time, or that this whole business of forecasting had to be somehow questioned. And they were looking twenty-five years ahead! Nor did it hit them that there was something called an error rate to take into account.*
Forecasting without incorporating an error rate uncovers three fallacies, all arising from the same misconception about the nature of uncertainty.
The first fallacy: variability matters. The first error lies in taking a projection too seriously, without heeding its accuracy. Yet, for planning purposes, the accuracy in your forecast matters far more than the forecast itself. I will explain it as follows.
Don’t cross a river if it is four feet deep on average. You would take a different set of clothes on your trip to some remote destination if I told you that the temperature was expected to be seventy degrees Fahrenheit, with an expected error rate of forty degrees, than if I told you that my margin of error was only five degrees. The policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number. I have seen, while working for a bank, how people project cash flows for companies without wrapping them in the thinnest layer of uncertainty. Go to the stockbroker and check on what method they use to forecast sales ten years ahead to “calibrate” their valuation models. Go find out how analysts forecast government deficits. Go to a bank or security-analysis training program and see how they teach trainees to make assumptions; they do not teach you to build an error rate around those assumptions—but their error rate is so large that it is far more significant than the projection itself!
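To make the river metaphor concrete, here is a minimal Python sketch; every number in it is invented purely for illustration. A crossing that is four feet deep on average can still have plenty of spots deep enough to drown in, which is why the spread around the forecast, not the forecast itself, should drive the decision.

```python
import random

# Hypothetical river: depth averages roughly four feet but varies a lot
# from spot to spot. All numbers are made up for illustration only.
random.seed(42)
depths = [max(0.0, random.gauss(4.0, 3.0)) for _ in range(10_000)]

average_depth = sum(depths) / len(depths)
share_over_six_feet = sum(d > 6.0 for d in depths) / len(depths)

print(f"average depth: {average_depth:.1f} feet")                # about four feet, as advertised
print(f"spots deeper than six feet: {share_over_six_feet:.0%}")  # the part that drowns you
```

The same average can come from a placid stream or from a river with sudden ten-foot holes; only the error rate tells you which one you are wading into.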
The second fallacy lies in failing to take into account forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between near and far futures. Yet the degradation in such forecasting through time becomes evident through simple introspective examination—without even recourse to scientific papers, which on this topic are suspiciously rare. Consider forecasts, whether economic or technological, made in 1905 for the following quarter of a century. How close to the projections did 1925 turn out to be? For a convincing experience, go read George Orwell’s 1984. Or look at more recent forecasts made in 1975 about the prospects for the new millennium. Many events have taken place and new technologies have appeared that lay outside the forecasters’ imaginations; many more that were expected to take place or appear did not do so. Our forecast errors have traditionally been enormous, and there may be no reasons for us to believe that we are suddenly in a more privileged position to see into the future compared to our blind predecessors. Forecasting by bureaucrats tends to be used for anxiety relief rather than for adequate policy making.
The third fallacy, and perhaps the gravest, concerns a misunderstanding of the random character of the variables being forecast. Owing to the Black Swan, these variables can accommodate far more optimistic—or far more pessimistic—scenarios than are currently expected. Recall from my experiment with Dan Goldstein testing the domain-specificity of our intuitions, how we tend to make no mistakes in Mediocristan, but make large ones in Extremistan as we do not realize the consequences of the rare event.
What is the implication here? Even if you agree with a given forecast, you have to worry about the real possibility of significant divergence from it. These divergences may be welcomed by a speculator who does not depend on steady income; a retiree, however, with set risk attributes cannot afford such gyrations. I would go even further and, using the argument about the depth of the river, state that it is the lower bound of estimates (i.e., the worst case) that matters when engaging in a policy—the worst case is far more consequential than the forecast itself. This is particularly true if the bad scenario is not acceptable. Yet the current phraseology makes no allowance for that. None.
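As a rough illustration of why the worst case dwarfs the forecast once outcomes live in Extremistan rather than Mediocristan, here is a hedged Python sketch; the distributions and every parameter are invented for illustration and are not taken from the book.

```python
import random

random.seed(7)
N = 100_000
FORECAST = 100.0  # the "official" projection, in arbitrary units

# Mediocristan-style outcomes: mild Gaussian scatter around the forecast.
mediocristan = [random.gauss(FORECAST, 10.0) for _ in range(N)]

# Extremistan-style outcomes: fat-tailed (Pareto) draws rescaled so their
# theoretical mean matches the forecast. alpha is arbitrary; a smaller
# alpha means wilder tails.
alpha = 1.5
pareto_mean = alpha / (alpha - 1.0)
extremistan = [FORECAST * random.paretovariate(alpha) / pareto_mean for _ in range(N)]

for name, outcomes in (("Mediocristan", mediocristan), ("Extremistan", extremistan)):
    mean = sum(outcomes) / len(outcomes)
    extreme = max(outcomes)  # the single largest divergence from the forecast
    print(f"{name:12s} mean ~ {mean:9.1f}   most extreme outcome ~ {extreme:12.1f}")
```

With comparable averages, the Gaussian case never strays far from the projection, while the fat-tailed case routinely produces a single outcome many multiples of the forecast; it is that outlier, not the average, that the retiree or the policy maker has to survive.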
It is often said that “wise is he who can see things coming.” Perhaps the wise one is the one who knows that he cannot see things far away.
Get Another Job
The two typical replies I face when I question forecasters’ business are: “What should he do? Do you have a better way for us to predict?” and “If you’re so smart, show me your own prediction.” In fact, the latter question, usually boastfully presented, aims to show the superiority of the practitioner and “doer” over the philosopher, and mostly comes from people who do not know that I was a trader. If there is one advantage of having been in the daily practice of uncertainty, it is that one does not have to take any crap from bureaucrats.
One of my clients asked for my predictions. When I told him I had none, he was offended and decided to dispense with my services. There is in fact a routine, unintrospective habit of making businesses answer questionnaires and fill out paragraphs showing their “outlooks.” I have never had an outlook and have never made professional predictions—but at least I know that I cannot forecast and a small number of people (those I care about) take that as an asset.
There are those people who produce forecasts uncritically. When asked why they forecast, they answer, “Well, that’s what we’re paid to do here.”
My suggestion: get another job.
This suggestion is not too demanding: unless you are a slave, I assume you have some amount of control over your job selection. Otherwise this becomes a problem of ethics, and a grave one at that. People who are trapped in their jobs and who forecast simply because “that’s my job,” knowing pretty well that their forecast is ineffectual, are not what I would call ethical. What they do is no different from repeating lies simply because “it’s my job.”
Anyone who causes harm by forecasting should be treated as either a fool or a liar. Some forecasters cause more damage to society than criminals. Please, don’t drive a school bus blindfolded.
At JFK
At New York’s JFK airport you can find gigantic newsstands with walls full of magazines. They are usually manned by a very polite family from the Indian subcontinent (just the parents; the children are in medical school). These walls present you with the entire corpus of what an “informed” person needs in order “to know what’s going on.” I wonder how long it would take to read every single one of these magazines, excluding the fishing and motorcycle periodicals (but including the gossip magazines—you might as well have some fun). Half a lifetime? An entire lifetime?
[Illustration: Caravaggio’s The Fortune-Teller. We have always been suckers for those who tell us about the future. In this picture the fortune-teller is stealing the victim’s ring.]
Sadly, all this knowledge would not help the reader to forecast what is to happen tomorrow. Actually, it might decrease his ability to forecast.
There is another aspect to the problem of prediction: its inherent limitations, those that have little to do with human nature, but instead arise from the very nature of information itself. I have said that the Black Swan has three attributes: unpredictability, consequences, and retrospective explainability. Let us examine this unpredictability business.*
* The book you have in your hands is approximately and “unexpectedly” fifteen months late.
* While forecast errors have always been entertaining, commodity prices have been a great trap for suckers. Consider this 1970 forecast by U.S. officials (signed by the U.S. Secretaries of the Treasury, State, Interior, and Defense): “the standard price of foreign crude oil by 1980 may well decline and will in any event not experience a substantial increase.” Oil prices went up tenfold by 1980. I just wonder whether current forecasters lack intellectual curiosity or are intentionally ignoring forecast errors.
Also note this additional aberration: since high oil prices are marking up their inventories, oil companies are making record bucks and oil executives are getting huge bonuses because “they did a good job”—as if they brought profits by causing the rise of oil prices.
* I owe the reader an answer concerning Catherine’s lover count. She had only twelve.
Chapter Eleven
HOW TO LOOK FOR BIRD POOP
Popper’s prediction about the predictors—Poincaré plays with billiard balls—Von Hayek is allowed to be irreverent—Anticipation machines—Paul Samuelson wants you to be rational—Beware the philosopher—Demand some certainties.
We’ve seen that a) we tend to both tunnel and think “narrowly” (epistemic arrogance), and b) our prediction record is highly overestimated—many people who think they can predict actually can’t.
We will now go deeper into the unadvertised structural limitations on our ability to predict. These limitations may arise not from us but from the nature of the activity itself—too complicated, not just for us, but for any tools we have or can conceivably obtain. Some Black Swans will remain elusive, enough to kill our forecasts.
HOW TO LOOK FOR BIRD POOP
In the summer of 1998 I worked at a European-owned financial institution. It wanted to distinguish itself by being rigorous and farsighted. The unit involved in trading had five managers, all serious-looking (always in dark blue suits, even on dress-down Fridays), who had to meet throughout the summer in order “to formulate the five-year plan.” This was supposed to be a meaty document, a sort of user’s manual for the firm. A five-year plan? To a fellow deeply skeptical of the central planner, the notion was ludicrous; growth within the firm had been organic and unpredictable, bottom-up not top-down. It was well known that the firm’s most lucrative department was the product of a chance call from a customer asking for a specific but strange financial transaction. The firm accidentally realized that they could build a unit just to handle these transactions, since they were profitable, and it rapidly grew to dominate their activities.
The managers flew across the world in order to meet: Barcelona, Hong Kong, et cetera. A lot of miles for a lot of verbiage. Needless to say they were usually sleep-deprived. Being an executive does not require very developed frontal lobes, but rather a combination of charisma, a capacity to sustain boredom, and the ability to shallowly perform on harrying schedules. Add to these tasks the “duty” of attending opera performances.
The managers sat down to brainstorm during these meetings, about, of course, the medium-term future—they wanted to have “vision.” But then an event occurred that was not in the previous five-year plan: the Black Swan of the Russian financial default of 1998 and the accompanying meltdown of the values of Latin American debt markets. It had such an effect on the firm that, although the institution had a sticky employment policy of retaining managers, none of the five was still employed there a month after the sketch of the 1998 five-year plan.
Yet I am confident that today their replacements are still meeting to work on the next “five-year plan.” We never learn.
Inadvertent Discoveries
The discovery of human epistemic arrogance, as we saw in the previous chapter, was allegedly inadvertent. But so were many other discoveries as well. Many more than we think.
The classical model of discovery is as follows: you search for what you know (say, a new way to reach India) and find something you didn’t know was there (America).
If you think that the inventions we see around us came from someone sitting in a cubicle and concocting them according to a timetable, think again: almost everything of the moment is the product of serendipity. The term serendipity was coined in a letter by the writer Horace Walpole, who derived it from a fairy tale, “The Three Princes of Serendip.” These princes “were always making discoveries by accident or sagacity, of things which they were not in quest of.”
In other words, you find something you are not looking for and it changes the world, and after its discovery you wonder why it “took so long” to arrive at something so obvious. No journalist was present when the wheel was invented, but I am ready to bet that people did not just embark on the project of inventing the wheel (that main engine of growth) and then complete it according to a timetable. Likewise with most inventions.
Sir Francis Bacon commented that the most important advances are the least predictable ones, those “lying out of the path of the imagination.” Bacon was not the last intellectual to point this out. The idea keeps popping up, yet then rapidly dying out. Almost half a century ago, the bestselling novelist Arthur Koestler wrote an entire book about it, aptly called The Sleepwalkers. It describes discoverers as sleepwalkers stumbling upon results and not realizing what they have in their hands. We think that the import of Copernicus’s discoveries concerning planetary motions was obvious to him and to others in his day; he had been dead seventy-five years before the authorities started getting offended. Likewise we think that Galileo was a victim in the name of science; in fact, the church didn’t take him too seriously. It seems, rather, that Galileo caused the uproar himself by ruffling a few feathers. At the end of the year in which Darwin and Wallace presented their papers on evolution by natural selection that changed the way we view the world, the president of the Linnean Society, where the papers were presented, announced that the society saw “no striking discovery,” nothing in particular that could revolutionize science.
We forget about unpredictability when it is our turn to predict. This is why people can read this chapter and similar accounts, agree entirely with them, yet fail to heed their arguments when thinking about the future.
Take this dramatic example of a serendipitous discovery. Alexander Fleming was cleaning up his laboratory when he found that Penicillium mold had contaminated one of his old experiments. He thus happened upon the antibacterial properties of penicillin, the reason many of us are alive today (including, as I said in Chapter 8, myself, for typhoid fever is often fatal when untreated). True, Fleming was looking for “something,” but the actual discovery was simply serendipitous. Furthermore, while in hindsight the discovery appears momentous, it took a very long time for health officials to realize the importance of what they had on their hands. Even Fleming lost faith in the idea before it was subsequently revived.
In 1965 two radio astronomers at Bell Labs in New Jersey who were mounting a large antenna were bothered by a background noise, a hiss, like the static that you hear when you have bad reception. The noise could not be eradicated—even after they cleaned the bird excrement out of the dish, since they were convinced that bird poop was behind the noise. It took a while for them to figure out that what they were hearing was the trace of the birth of the universe, the cosmic microwave background radiation. This discovery revived the big bang theory, a languishing idea that was posited by earlier researchers. I found the following comments on Bell Labs’ website on how this “discovery” was one of the century’s greatest advances:
Dan Stanzione, then Bell Labs president and Lucent’s chief operating officer when Penzias [one of the radio astronomers involved in the discovery] retired, said Penzias “embodies the creativity and technical excellence that are the hallmarks of Bell Labs.” He called him a Renaissance figure who “extended our fragile understanding of creation, and advanced the frontiers of science in many important areas.”
Renaissance shmenaissance. The two fellows were looking for bird poop! Not only were they not looking for anything remotely like the evidence of the big bang but, as usual in these cases, they did not immediately see the importance of their find. Sadly, the physicist Ralph Alpher, the person who initially conceived of the idea, in a paper coauthored with heavyweights George Gamow and Hans Bethe, was surprised to read about the discovery in The New York Times. In fact, in the languishing papers positing the birth of the universe, scientists were doubtful whether such radiation could ever be measured. As happens so often in discovery, those looking for evidence did not find it; those not looking for it found it and were hailed as discoverers.