Fooled by Randomness


by Nassim Nicholas Taleb


  Just as Nero cannot “think” in complicated shades, consumers consider a 75% fat-free hamburger to be different from a 25% fat one. Likewise with statistical significance: even specialists tend to infer too quickly from data when accepting or rejecting things. Recall the dentist whose emotional well-being depends on the recent performance of his portfolio. Why? Because, as we will see, rule-determined behavior does not require nuances. Either you kill your neighbor or you don’t. Intermediate sentiments (leading, say, to killing him only halfway) are either useless or downright dangerous when you do things. The emotional apparatus that jolts us into action does not understand such nuances—it is not efficient to understand things. The rest of this chapter will rapidly illustrate some manifestations of such blindness, with a cursory exposition of the research in that area (only what connects to the topics in this book).

  BEWARE THE PHILOSOPHER BUREAUCRAT

  For a long time we had the wrong product specifications when we thought of ourselves. We humans have been under the belief that we were endowed with a beautiful machine for thinking and understanding things. However, among the factory specifications for us is the lack of awareness of the true factory specifications (why complicate things?). The problem with thinking is that it causes you to develop illusions. And thinking may be such a waste of energy! Who needs it!

  Consider that you are standing in front of a government clerk in a heavily socialist country where being a bureaucrat is held to be what respectable people do for a living. You are there to get your papers stamped by him so you can export some of their lovely chocolate candies to the New Jersey area, where you think the local population would have a great taste for them. What do you think his function is? Do you think for a minute that he cares about the general economic theory behind the transaction? His job is just to verify that you have the twelve or so signatures from the right departments, true/false; then stamp your papers and let you go. General considerations of economic growth or the balance of trade are of no concern to him. In fact you are lucky that he doesn’t spend any time meditating about these things: Consider how long the procedure would take if he had to solve balance-of-trade equations. He just has a rulebook and, over a career spanning forty to forty-five years, he will just stamp documents, be mildly rude, and go home to drink nonpasteurized beer and watch soccer games. If you gave him Paul Krugman’s book on international economics he would either sell it in the black market or give it to his nephew.

  Accordingly, rules have their value. We follow them not because they are the best but because they are useful and they save time and effort. Consider that those who, upon seeing a tiger, started theorizing about whether it was of this or that taxonomic variety, and about the degree of danger it represented, ended up being eaten by it. Others who just ran away at the smallest presumption, and were not slowed down by the smallest amount of thinking, ended up either outrunning the tiger or outrunning their cousin, who ended up being eaten by it.

  Satisficing

  It is a fact that our brains would not be able to operate without such shortcuts. The first thinker who figured it out was Herbert Simon, an interesting fellow in intellectual history. He started out as a political scientist (but he was a formal thinker, not the literary variety of political scientists who write about Afghanistan in Foreign Affairs); he was an artificial-intelligence pioneer, taught computer science and psychology, did research in cognitive science, philosophy, and applied mathematics, and received the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel. His idea is that if we were to optimize at every step in life, then it would cost us an infinite amount of time and energy. Accordingly, there has to be in us an approximation process that stops somewhere. Clearly he got his intuitions from computer science—he spent his entire career at Carnegie Mellon University in Pittsburgh, which has a reputation as a computer science center. “Satisficing” was his idea (the melding together of satisfy and suffice): You stop when you get a near-satisfactory solution. Otherwise it may take you an eternity to reach the smallest conclusion or perform the smallest act. We are therefore rational, but in a limited way: “boundedly rational.” He believed that our brains were a large optimizing machine that had built-in rules to stop somewhere.
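
  (A minimal sketch, in Python, of Simon’s distinction—the candidate scores and the “good enough” threshold below are invented for illustration: the optimizer must examine every option, while the satisficer stops at the first one that clears the bar.)

    import random

    def optimize(options, score):
        """Exhaustive search: examine every option and keep the best.
        The cost grows with the number of options."""
        return max(options, key=score)

    def satisfice(options, score, good_enough):
        """Simon's shortcut: take options as they come and stop at the
        first one whose score clears the 'good enough' threshold."""
        for option in options:
            if score(option) >= good_enough:
                return option  # near-satisfactory: stop searching
        return None  # nothing cleared the bar

    # Made-up numbers: 1,000 candidate "solutions" scored between 0 and 1;
    # anything scoring 0.9 or better is accepted as good enough.
    random.seed(42)
    options = [random.random() for _ in range(1000)]
    print(optimize(options, score=lambda x: x))  # inspects all 1,000 options
    print(satisfice(options, score=lambda x: x, good_enough=0.9))
    # The satisficer typically stops after a handful of draws.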

  Not quite so, perhaps. It may not be just a rough approximation. For two (initially) Israeli researchers on human nature, how we behave seemed to be a completely different process from the optimizing machine presented by Simon. The two sat down introspecting in Jerusalem, looked at aspects of their own thinking, compared them to rational models, and noticed qualitative differences. Whenever they both seemed to make the same mistake of reasoning, they ran empirical tests on subjects, mostly students, and discovered very surprising results on the relation between thinking and rationality. It is to their discovery that we turn next.

  FLAWED, NOT JUST IMPERFECT

  Kahneman and Tversky

  Who has exerted the most influence on economic thinking over the past two centuries? No, it is not John Maynard Keynes, not Alfred Marshall, not Paul Samuelson, and certainly not Milton Friedman. The answer is two noneconomists: Daniel Kahneman and Amos Tversky, the two Israeli introspectors, whose specialty was to uncover areas where human beings are not endowed with rational probabilistic thinking and optimal behavior under uncertainty. Strangely, economists studied uncertainty for a long time and did not figure out much—if anything, they thought they knew something and were fooled by it. Aside from some penetrating minds like Keynes, Knight, and Shackle, economists did not even figure out that they had no clue about uncertainty—the discussions on risk by their idols show that they did not know how much they did not know. Psychologists, on the other hand, looked at the problem and came out with solid results. Note that, unlike economists, they conducted experiments—true controlled experiments of a repeatable nature, ones that could be run in Ulan Bator, Mongolia, tomorrow if necessary. Conventional economists do not have this luxury, as they observe the past, make lengthy and mathematical comments about it, then bicker with each other over them.

  Kahneman and Tversky went in a completely different direction from Simon and started figuring out rules in humans that did not make them rational—but the findings went beyond mere shortcuts. For them, these rules, which are called heuristics, were not merely a simplification of rational models; they were different in methodology and category. They called them “quick and dirty” heuristics. There is a dirty part: These shortcuts come with side effects, and those side effects are the biases, most of which I discussed previously throughout the text (such as the inability to accept anything abstract as risk). This started an empirical research tradition, the “heuristics and biases” tradition, that attempted to catalogue them—impressive because of its empiricism and the experimental aspect of the methods used.

  Since the Kahneman and Tversky results, an entire discipline called behavioral finance and economics has flourished. It is in open contradiction with the orthodox so-called neoclassical economics taught in business schools and economics departments under the normative names of efficient markets, rational expectations, and other such concepts. It is worth stopping, at this juncture, and discussing the distinction between normative and positive sciences. A normative science (clearly a self-contradictory concept) offers prescriptive teachings; it studies how things should be. Some economists, for example those of the efficient-market religion, believe that our studies should be based on the hypothesis that humans are rational and act rationally because it is the best thing for them to do (it is mathematically “optimal”). The opposite is a positive science, which is based on how people actually are observed to behave. In spite of economists’ envy of physicists, physics is an inherently positive science while economics, particularly microeconomics and financial economics, is predominantly a normative one. Normative economics is like religion without the aesthetics.

  Note that the experimental aspect of the research implies that Daniel Kahneman and the experimental ponytailed economist Vernon Smith were the first true scientists ever to bow in front of the Swedish king for the economics prize, something that should give credibility to the Nobel academy, particularly if, like many, one takes Daniel Kahneman far more seriously than a collection of serious-looking (and very human, hence fallible) Swedes. There is another hint of the scientific firmness of this research: It is extremely readable for someone outside of psychology, unlike papers in conventional economics and finance that even people in the field have difficulty reading (as the discussions are jargon-laden and heavily mathematical, to give the illusion of science). A motivated reader can find the collection of the major heuristics and biases papers concentrated in four volumes.

  Economists were not, at the time, very interested in hearing these stories of irrationality: Homo economicus, as we said, is a normative concept. While they could easily buy the “Simon” argument that we are not perfectly rational and that life implies approximations, particularly when the stakes are not large enough, they were not willing to accept that people were flawed rather than merely imperfect. But they are. Kahneman and Tversky showed that these biases do not disappear when there are incentives, which means that they are not necessarily cost saving. They reflect a different form of reasoning, one in which probabilistic reasoning is weak.

  WHERE IS NAPOLEON WHEN WE NEED HIM?

  If your mind operates by a series of different, disconnected rules, these may not be consistent with each other, and while they may still do the job locally, they will not necessarily do so globally. Consider them stored as a rulebook of sorts. Your reaction will depend on which page of the book you open to at any point in time. I will illustrate this with another socialist example.

  After the collapse of the Soviet Union, Western businesspeople involved in what became Russia discovered an annoying (or entertaining) fact about the legal system: It had conflicting and contradictory laws. It just depended on which chapter you looked up. I don’t know whether the Russians wanted it as a prank (after all, they lived long, humorless years of oppression), but the confusion led to situations where someone had to violate one law to comply with another. I have to say that lawyers are quite dull people to talk to; talking to a dull lawyer who speaks broken English with a strong accent and vodka breath can be quite straining—so you give up. This spaghetti legal system came from the piecewise development of the rules: You add a law here and there, and the situation becomes too complicated, as there is no central system consulted every time to ensure the compatibility of all the parts. Napoleon faced a similar situation in France and remedied it by setting up a top-down code of law that aimed at full logical consistency. The problem with us humans is not so much that no Napoleon has shown up so far to dynamite the old structure and then reengineer our minds like a big central program; it is that our minds are far more complicated than a system of laws, and the requirement for efficiency is far greater.

  Consider that your brain reacts differently to the same situation depending on which chapter you open to. The absence of a central processing system makes us engage in decisions that can be in conflict with each other. You may prefer apples to oranges, oranges to pears, but pears to apples—it depends on how the choices are presented to you. The fact that your mind cannot retain and use everything you know at once is the cause of such biases. One central aspect of a heuristic is that it is blind to reasoning.
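
  (A toy Python sketch of how such a cycle arises—the fruits’ attribute scores and the “salient” attribute for each pairing are entirely made up: each pairwise choice consults only the page of the rulebook that the framing happens to open.)

    # A toy model of choices made by local rules rather than a global
    # ranking; the attribute scores are entirely made up.
    fruits = {
        "apple":  {"sweetness": 3, "crunch": 9, "juiciness": 4},
        "orange": {"sweetness": 7, "crunch": 2, "juiciness": 9},
        "pear":   {"sweetness": 8, "crunch": 5, "juiciness": 6},
    }

    def prefer(a, b, salient):
        """Pick whichever option scores higher on the one attribute the
        current framing makes salient; ignore everything else you know."""
        return a if fruits[a][salient] > fruits[b][salient] else b

    print(prefer("apple", "orange", salient="crunch"))    # apple
    print(prefer("orange", "pear", salient="juiciness"))  # orange
    print(prefer("pear", "apple", salient="sweetness"))   # pear -> a cycle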

  “I’m As Good As My Last Trade” and Other Heuristics

  There exist plenty of different catalogues of these heuristics in the literature (many of them overlapping); the object of this discussion is to provide the intuition behind their formation rather than to list them. For a long time we traders were totally ignorant of the behavioral research and saw situations where, with strange regularity, there was a wedge between simple probabilistic reasoning and people’s perception of things. We gave them names such as the “I’m as good as my last trade” effect, the “sound-bite effect,” the “Monday morning quarterback” heuristic, and the “It was obvious after the fact” effect. It was both vindicating for traders’ pride and disappointing to discover that they already existed in the heuristics literature as “anchoring,” the “affect heuristic,” and “hindsight bias” (it makes us feel that trading is true, experimental scientific research). The correspondence between the two worlds is shown in Table 11.1.

  I start with the “I’m as good as my last trade” heuristic (or the “loss of perspective” bias)—the fact that the counter is reset at zero and you start a new day or month from scratch, whether it is your accountant who does it or your own mind. This is the most significant distortion and the one that carries the most consequences. You do not have everything you know in your mind at all times, so you retrieve the knowledge you require at any given moment in a piecemeal fashion, which puts the retrieved chunks in their local rather than their general context. This means that you have an arbitrary reference point and react to differences from that point, forgetting that you are only looking at differences from that particular local perspective, not at absolutes.

  Table 11.1 Trader and Scientific Approach

  Trader’s name                       Name in the heuristics literature
  “I’m as good as my last trade”      Anchoring (loss of perspective)
  “Sound-bite effect”                 Affect heuristic
  “Monday morning quarterback”        Hindsight bias
  “It was obvious after the fact”     Hindsight bias

  There is the well-known trader maxim “life is incremental.” Consider that as an investor you examine your performance like the dentist in Chapter 3, at some set interval. What do you look at: your monthly, your daily, your life-to-date, or your hourly performance? You can have a good month and a bad day. Which period should dominate?

  When you take a gamble, do you say: “My net worth will end up at $99,000 or $101,500 after the gamble” or do you say “I lose $1,000 or make $1,500?” Your attitude toward the risks and rewards of the gamble will vary according to whether you look at your net worth or changes in it. But in fact in real life you will be put in situations where you will only look at your changes. The fact that the losses hurt more than the gains, and differently, makes your accumulated performance, that is, your total wealth, less relevant than the last change in it.

  This dependence on the local rather than the global status (coupled with the effect of the losses hitting harder than the gains) has an impact on your perception of well-being. Say you get a windfall profit of $1 million. The next month you lose $300,000. You adjust to a given wealth (unless of course you are very poor), so the following loss would hurt you emotionally, something that would not have taken place if you had received the net amount of $700,000 in one block, or, better, two sums of $350,000 each. In addition, it is easier for your brain to detect differences than absolutes; hence rich or poor will be (above the minimum level) in relation to something else (remember Marc and Janet). Now, when something is in relation to something else, that something else can be manipulated. Psychologists call this effect of comparing to a given reference anchoring. If we took it to its logical limit, we would realize that, because of this resetting, wealth itself does not really make one happy (above, of course, some subsistence level); but positive changes in wealth may, especially if they come as “steady” increases. More on that later with my discussion of option blindness.
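
  (The arithmetic of the windfall example can be made concrete with the value function Kahneman and Tversky later fitted to experimental data; the Python sketch below uses their commonly cited 1992 parameters—an exponent near 0.88 and a loss-aversion multiplier near 2.25—so the exact figures are illustrative, not the book’s.)

    def value(x, alpha=0.88, lam=2.25):
        """Kahneman-Tversky prospect-theory value function: people respond
        to changes in wealth, not levels, and losses weigh about 2.25 times
        as heavily as gains (parameters from their 1992 estimates)."""
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha

    # The windfall-then-loss story, felt change by change:
    print(round(value(1_000_000) + value(-300_000)))  # ~41,900: the loss stings
    print(round(value(700_000)))                      # ~139,200: one block is better
    print(round(2 * value(350_000)))                  # ~151,300: two gains feel best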

  Other aspects of anchoring. Given that you may use two different anchors in the same situation, the way you act depends on so little. When people are asked to estimate a number, they will position it with respect to a number they have in mind or one they just heard, so “big” or “small” will be comparative. Kahneman and Tversky asked subjects to estimate the proportion of African countries in the United Nations after making them consciously pull a random number between 0 and 100 (they knew it was a random number). People guessed in relation to that number, which they used as anchor: Those who randomized a high number guessed higher than those who randomized a low one. This morning I did my bit of anecdotal empiricism and asked the hotel concierge how long it takes to go to the airport. “40 minutes?” I asked. “About 35,” he answered. Then I asked the lady at the reception if the journey was 20 minutes. “No, about 25,” she answered. I timed the trip: 31 minutes.

  This anchoring to a number is the reason people do not react to their total accumulated wealth, but to differences of wealth from whatever number they are currently anchored to. This is the major conflict with economic theory, as, according to economists, someone with $1 million in the bank would be more satisfied than if he had half a million. But we saw John reaching $1 million after having had a total of $10 million; he was happier when he only had half a million (starting at nothing) than where we left him in Chapter 1. Also recall the dentist whose emotions depended on how frequently he checked his portfolio.

  Degree in a Fortune Cookie

  I used to attend a health club in the middle of the day and chat with an interesting Eastern European fellow with two Ph.D. degrees, one in physics (statistical no less), the other in finance. He worked for a trading house and was obsessed with the anecdotal aspects of the markets. He once asked me doggedly what I thought the stock market would do that day. Clearly I gave him a social answer of the kind “I don’t know, perhaps lower”—quite possibly the opposite answer to what I would have given him had he asked me an hour earlier. The next day he showed great alarm upon seeing me. He went on and on discussing my credibility and wondering how I could be so wrong in my “predictions,” since the market went up subsequently. The man was able to derive conclusions about my ability to predict and my “credibility” from a single observation. Now, if I went to the phone and called him and disguised my voice and said, “Hello, this is Doktorr Talebski from the Academy of Lodz and I have an interrresting prrroblem,” then presented the issue as a statistical puzzle, he would laugh at me. “Doktorr Talebski, did you get your degree in a fortune cookie?” Why is it so?

  Clearly there are two problems. First, the quant did not use his statistical brain when making the inference, but a different one. Second, he made the mistake of overstating the importance of small samples (in this case just one single observation, the worst possible inferential mistake a person can make). Mathematicians tend to make egregious mathematical mistakes outside of their theoretical habitat. When Tversky and Kahneman sampled mathematical psychologists, some of whom were authors of statistical textbooks, they were puzzled by their errors: “Respondents put too much confidence in the result of small samples and their statistical judgment showed little sensitivity to sample size.” The puzzling aspect is that not only should they have known better, “they did know better.” And yet . . .
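
  (How little a single observation proves is simple binomial arithmetic; a short Python sketch, with made-up numbers of market calls, makes the point.)

    from math import comb

    def prob_at_least(k, n, p=0.5):
        """Probability that a no-skill forecaster (right with p = 0.5 on
        each call) gets at least k of n market calls right by pure luck."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # With one observation, a coin-flipper "succeeds" half the time anyway,
    # so one wrong (or right) call carries almost no information about skill.
    print(prob_at_least(1, 1))             # 0.5
    # Even 7 right calls out of 10 occur by luck about 17% of the time.
    print(round(prob_at_least(7, 10), 3))  # 0.172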

 
