
The Art of Thinking Clearly


by Rolf Dobelli


  In certain areas, skill plays no role whatsoever. In his book Thinking, Fast and Slow, Kahneman describes his visit to an asset management company. To brief him, they sent him a spreadsheet showing the performance of each investment adviser over the past eight years. From this, a ranking was assigned to each: number 1, 2, 3, and so on in descending order. This was compiled every year. Kahneman quickly calculated the relationship between the years’ rankings. Specifically, he calculated the correlation of the rankings between year 1 and year 2, between year 1 and year 3, year 1 and year 4, up until year 7 and year 8. The result: pure coincidence. Sometimes the adviser was at the very top and sometimes the very bottom. If an adviser had a great year, this was neither bolstered by previous years nor carried into subsequent years. The correlation was zero. And yet the consultants pocketed bonuses for their performance. In other words, the company was rewarding luck rather than skill.
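  Kahneman's ranking exercise is easy to reproduce with simulated data. A minimal sketch, assuming 30 hypothetical advisers whose yearly rankings are pure shuffles (all numbers here are invented for illustration): if performance were luck, the average correlation between any two years' rankings should hover near zero.

```python
import random

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
advisers = list(range(30))   # 30 hypothetical advisers

# If performance is pure luck, each year's ranking is an independent shuffle.
def yearly_ranking():
    r = advisers[:]
    random.shuffle(r)
    return r

years = [yearly_ranking() for _ in range(8)]

# Average correlation across all pairs of years, as Kahneman did:
pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
avg = sum(correlation(years[i], years[j]) for i, j in pairs) / len(pairs)
print(round(avg, 2))   # hovers near 0.0: this year's rank predicts nothing
```

Any one pair of years can show a sizable correlation by chance; it is the average over all 28 pairs that exposes the absence of skill.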

  In conclusion: Certain people make a living from their abilities, such as pilots, plumbers, and lawyers. In other areas, skill is necessary but not critical, as with entrepreneurs and leaders. Finally, chance is the deciding factor in a number of fields, such as in financial markets. Here, the illusion of skill pervades. So, give plumbers due respect and chuckle at successful financial jesters.

  95

  Why Checklists Deceive You

  Feature-Positive Effect

  Two series of numbers: The first, series A, consists of: 724, 947, 421, 843, 394, 411, 054, 646. What do these numbers have in common? Don’t read on until you have an answer. It’s simpler than you think: The number 4 features in each of them. Now examine series B: 349, 851, 274, 905, 772, 032, 854, 113. What links these numbers? Do not read further until you’ve figured it out. Series B is more difficult, right? Answer: None use the number 6. What can you learn from this? Absence is much harder to detect than presence. In other words, we place greater emphasis on what is present than on what is absent.
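  The asymmetry shows up even in code: checking for a shared digit is a single intersection, while checking for a missing digit first requires collecting everything that is present. A minimal sketch using the two series from the text:

```python
series_a = ["724", "947", "421", "843", "394", "411", "054", "646"]
series_b = ["349", "851", "274", "905", "772", "032", "854", "113"]

# Presence is easy to state: which digits appear in *every* number?
common = set("0123456789")
for n in series_a:
    common &= set(n)
print(common)   # {'4'}

# Absence takes an extra step: which digits appear in *no* number?
used = set()
for n in series_b:
    used |= set(n)
missing = set("0123456789") - used
print(missing)  # {'6'}
```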

  Last week, while on a walk, it occurred to me that nothing hurt. It was an unexpected thought. I rarely experience pain anyway, but when I do, it is very present. But the absence of pain I rarely recognize. It was such a simple, obvious fact, it amazed me. For a moment, I was elated—until this little revelation slipped from my mind again.

  At a classical recital, an orchestra performed Beethoven’s Ninth Symphony. A storm of enthusiasm gripped the concert hall. During the ode in the fourth movement, tears of joy could be seen here and there. How fortunate we are that this symphony exists, I thought. But is that really true? Would we be less happy without the work? Probably not. Had the symphony never been composed, no one would miss it. The director would receive no angry calls saying: “Please have this symphony written and performed immediately.” In short, what exists means a lot more than what is missing. Science calls this the feature-positive effect.

  Prevention campaigns utilize this well. “Smoking causes lung cancer” is much more powerful than “Not smoking leads to a life free of lung cancer.” Auditors and other professionals who employ checklists are prone to the feature-positive effect: Outstanding tax declarations are immediately obvious because they feature on their lists. What does not appear, however, is more artistic fraud, such as the goings-on at Enron and with Bernie Madoff’s Ponzi scheme. Also absent are the undertakings of “rogue traders,” such as Nick Leeson and Jerome Kerviel, to whom Barings and Société Générale fell victim. Financial vagaries of this kind are not on any checklist. And they do not have to be illegal: A mortgage bank will be on the lookout for credit risk due to a drop in the debtor’s income because this appears on its list; however, it will overlook the devaluation of property, say, through the construction of an incineration plant in the vicinity.

  Suppose you manufacture a dubious product, such as a salad dressing with a high level of cholesterol. What do you do? On the label, you promote the twenty different vitamins in the dressing and omit the cholesterol level. Consumers won’t notice its absence. And the positive, present features will make sure that they feel safe and informed.

  In academia, we constantly encounter the feature-positive effect. The confirmation of hypotheses leads to publications, and in exceptional cases these are rewarded with Nobel Prizes. On the other hand, the falsification of a hypothesis is a lot harder to get published, and as far as I know, there has never been a Nobel Prize awarded for this. However, such falsification is as scientifically valuable as confirmation. Another consequence of the effect is that we are also much more open to positive advice (do X) than to negative suggestions (forget about Y)—no matter how useful the latter may be.

  In conclusion: We have problems perceiving nonevents. We are blind to what does not exist. We realize if there is a war, but we do not appreciate the absence of war during peacetime. If we are healthy, we rarely think about being sick. Or, if we get off the plane in Cancún, we do not stop to notice that we did not crash. If we thought more frequently about absence, we might well be happier. But it is tough mental work. The greatest philosophical question is: Why does something and not nothing exist? Don’t expect a quick answer; rather, the question itself represents a useful instrument for combating the feature-positive effect.

  96

  Drawing the Bull’s-Eye around the Arrow

  Cherry Picking

  On their websites, hotels present themselves in the very best light. They carefully select each photo, and only beautiful, majestic images make the cut. Unflattering angles, dripping pipes, and drab breakfast rooms are swept under the tattered carpet. Of course, you know this is true. When you are confronted by the shabby lobby for the first time, you simply shrug your shoulders and head to the registration desk.

  What the hotel did, explains Nassim Taleb, is called cherry picking: showcasing the most attractive features and hiding the rest. As with the hotel experience, you approach other things with the same muted expectations: brochures for cars, real estate, or law firms. You know how they work, and you don’t fall for them.

  However, you respond differently to the annual reports of companies, foundations, and government organizations. Here, you tend to expect objective depictions. You are mistaken. These bodies also cherry-pick: If goals are achieved, they are talked up; if they falter, they are not even mentioned.

  Suppose you are the head of a department. The board invites you to present your team’s state of play. How do you tackle this? You devote most of your PowerPoint slides to the team’s triumphs and throw in a token few to identify “challenges.” Any other unmet goals you conveniently forget.

  Anecdotes are a particularly tricky sort of cherry picking. Imagine you are the managing director of a company that manufactures some kind of technical device. A survey has revealed that the vast majority of customers cannot operate your gadget. It’s too complicated. Now the HR manager gives his two cents, proclaiming: “My father-in-law picked it up yesterday and figured out how to work it right away.” How much weight would you attach to this particular cherry? Right: close to zero. To rebuff an anecdote is difficult because it is a mini-story, and we know how vulnerable our brains are to those. To prevent this, cunning leaders train themselves throughout their careers to be hypersensitive to such anecdotes and to shoot them down as soon as they are uttered.

  The more elevated or elite a field is, the more we fall for cherry picking. In Antifragile, Taleb describes how all areas of research—from philosophy to medicine to economics—brag about their results: “Like politicians, academia is well equipped to tell us what it did for us, not what it did not—hence it shows how indispensable her methods are.” Pure cherry picking. But our respect for academics is far too great for us to notice this.

  Or consider the medical profession: To tell people that they should not smoke is the greatest medical contribution of the past sixty years—superior to all the research and medical advances since the end of the Second World War. Physician Druin Burch confirms this in his book Taking the Medicine. A few cherries—antibiotics, for instance—distract us, and so drug researchers are celebrated while antismoking activists are not.

  Administrative departments in large companies glorify themselves like hoteliers do. They are masters at showcasing all they have done, but they never communicate what they haven’t achieved for the company. What should you do? If you sit on the supervisory board of such an organization, ask about the “leftover cherries,” the failed projects and missed goals. You learn a lot more from this than from the successes. It is amazing how seldom such questions are asked. Second: Instead of employing a horde of financial controllers to calculate costs to the nearest cent, double-check targets. You will be amazed to find that, over time, the original goals have faded. These have been replaced, quietly and secretly, with self-set goals that are always attainable. If you hear of such targets, alarm bells should sound. It is the equivalent of shooting an arrow and drawing a bull’s-eye around where it lands.

  97

  The Stone Age Hunt for Scapegoats

  Fallacy of the Single Cause

  Chris Matthews is one of MSNBC’s top journalists. In his news show, so-called political experts are wheeled in one after the other and interviewed. I’ve never understood what a political expert is or why such a career is worthwhile. In 2003, the U.S. invasion of Iraq was the issue on everybody’s lips. More important than the experts’ answers were Chris Matthews’s questions: “What is the motive behind the war?” “I wanted to know whether 9/11 is the reason, because a lot of people think it’s payback.” “Do you think that the weapons of mass destruction was the reason for this war?” “Why do you think we invaded Iraq? The real reason, not the sales pitch.” And so on.

  I can’t abide questions like that anymore. They are symptomatic of the most common of all mental errors, a mistake for which, strangely enough, there is no everyday term. For now, the awkward phrase, the fallacy of the single cause, will have to do.

  Five years later, in 2008, panic reigned in the financial markets. Banks caved in and had to be nursed back to health with tax dollars. Investors, politicians, and journalists probed furiously for the root of the crisis: Greenspan’s loose monetary policy? The stupidity of investors? The dubious rating agencies? Corrupt auditors? Bad risk models? Pure greed? Not a single one, and yet every one of these, is the cause.

  A balmy Indian summer, a friend’s divorce, the First World War, cancer, a school shooting, the worldwide success of a company, the invention of writing—any clear-thinking person knows that no single factor leads to such events. Rather, there are hundreds, thousands, an infinite number of factors that add up. Still, we keep trying to pin the blame on just one.

  “When an apple ripens and falls—what makes it fall? Is it that it is attracted to the ground, is it that the stem withers, is it that the sun has dried it up, that it has grown heavier, that the wind shakes it, that the boy standing underneath it wants to eat it? No one thing is the cause.” In this passage from War and Peace, Tolstoy hit the nail on the head.

  Suppose you are the product manager for a well-known breakfast cereal brand. You have just launched an organic, low-sugar variety. After a month, it’s painfully clear that the new product is a flop. How do you go about investigating the cause? First, you know that there will never be one sole factor. Take a sheet of paper and sketch out all the potential reasons. Do the same for the reasons behind these reasons. After a while, you will have a network of possible influencing factors. Second, highlight those you can change and delete those you cannot (such as “human nature”). Third, conduct empirical tests by varying the highlighted factors in different markets. This costs time and money, but it’s the only way to escape the swamp of superficial assumptions.

  The fallacy of the single cause is as ancient as it is dangerous. We have learned to see people as the “masters of their own destinies.” Aristotle proclaimed this 2,500 years ago. Today we know that it is wrong. The notion of free will is up for debate. Our actions are brought about by the interaction of thousands of factors—from genetic predisposition to upbringing, from education to the concentration of hormones between individual brain cells. Still we hold firmly to the old image of self-governance. This is not only wrong but also morally questionable. As long as we believe in singular reasons, we will always be able to trace triumphs or disasters back to individuals and stamp them “responsible.” The idiotic hunt for a scapegoat goes hand in hand with the exercise of power—a game that people have been playing for thousands of years.

  And yet the fallacy of the single cause is so popular that Tracy Chapman was able to build her worldwide success on it. “Give Me One Reason” is the song that secured her success. But hold on—weren’t there a few others, too?

  98

  Why Speed Demons Appear to Be Safer Drivers

  Intention-to-Treat Error

  You’ll find it hard to believe, but speed demons drive more safely than so-called careful drivers. Why? Well, consider this: The distance from Miami to West Palm Beach is around seventy-five miles. Drivers who cover the distance in an hour or less we’ll categorize as “reckless drivers” because they’re traveling at an average of 75 mph or more. All others we put into the group of careful drivers. Which group experiences fewer accidents? Without a doubt, it is the “reckless drivers.” They all completed the journey in less than an hour, so they could not have been involved in any accidents. This automatically puts all drivers who end up in accidents in the slower drivers’ category. This example illustrates a treacherous fallacy, the so-called intention-to-treat error. Unfortunately, there is no catchier term for it.
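  The driver classification can be simulated with invented numbers. A minimal sketch, assuming a 5 percent accident rate that is entirely independent of speed: the “reckless” group still comes out accident-free, purely by construction, because a crashed trip can never finish within the hour.

```python
import random

random.seed(1)
DISTANCE = 75.0   # miles, Miami to West Palm Beach

# Hypothetical trips: a random cruising speed, plus a small chance
# of an accident, which prevents the trip from finishing on time.
def trip():
    speed = random.uniform(55, 95)      # mph
    crashed = random.random() < 0.05    # 5% accident rate, speed-independent
    hours = DISTANCE / speed
    if crashed:
        hours = float("inf")            # never arrives within the hour
    return hours, crashed

trips = [trip() for _ in range(10_000)]

# Classify by *observed* travel time, as in the text:
reckless = [crashed for hours, crashed in trips if hours <= 1.0]
careful  = [crashed for hours, crashed in trips if hours > 1.0]

print(sum(reckless) / len(reckless))   # 0.0: by construction, no crashes
print(sum(careful) / len(careful))     # every crash lands in this group
```

The accident rate was identical for everyone; only the classification rule manufactured the difference.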

  This might sound to you like the survivorship bias (chapter 1), but it’s different. In the survivorship bias you see only the survivors, not the failed projects or cars involved in accidents. In the intention-to-treat error, the failed projects or cars with accidents prominently show up, just in the wrong category.

  A banker showed me an interesting study recently. Its conclusion: Companies with debt on their balance sheets are significantly more profitable than firms with no debt (equity only). The banker vehemently insisted that every company should borrow at will, and, of course, his bank is the best place to do it. I examined the study more closely. How could that be? Indeed, from one thousand randomly selected firms, those with large loans displayed higher returns not only on their equity but also on their total capital. They were in every respect more successful than the independently financed firms. Then the penny dropped: Unprofitable companies don’t get corporate loans. Thus, they form part of the “equity-only” group. The other firms that make up this set have bigger cash cushions, stay afloat longer, and, no matter how sickly they are, remain part of the study. On the other side, firms that have borrowed a lot go bankrupt more quickly. Once they cannot pay back the interest, the bank takes over, and the companies are sold off—thus disappearing from the sample. The ones that remain in the “debt group” are relatively healthy, regardless of how much debt they have amassed on their balance sheets.

  If you’re thinking, “Okay, got it,” watch out. The intention-to-treat error is not easy to recognize. A fictional example from medicine: A pharmaceutical company has developed a new drug to fight heart disease. A study “proves” that it significantly reduces patients’ mortality rates. The data speaks for itself: Among patients who have taken the drug regularly, the five-year mortality rate is 15 percent. For those who have swallowed placebo pills, it is about the same, indicating that the pill doesn’t work. However—and this is crucial—the mortality rate of patients who have taken the drug at irregular intervals is 30 percent—twice as high! A big difference between regular and irregular intake. So, the pill is a complete success. Or is it?

  Here’s the snag: The pill is probably not the decisive factor; rather, it is the patients’ behavior. Perhaps patients discontinued the pill following severe side effects and thus landed in the “irregular intake” category. Maybe they were so ill that there was no way to continue it on a regular basis. Either way, only relatively healthy patients remain in the “regular” group, which makes the drug look a lot more effective than it really is. The really sick patients who, for this very reason, couldn’t take the drug on a regular basis ended up populating the “irregular intake” group.

  In reputable studies, medical researchers evaluate the data of all patients whom they originally intend to treat (hence the title); it doesn’t matter if they take part in the trial or they drop out. Unfortunately, many studies flout this rule. Whether this is intentional or accidental remains to be seen. Therefore, be on your guard: Always check whether test subjects—drivers who end up in accidents, bankrupt companies, critically ill patients—have, for whatever reason, vanished from the sample. If so, you should file the study where it belongs: in the trash can.
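  The fictional trial can be simulated. A hedged sketch with invented probabilities, assuming the drug does nothing at all: mortality depends only on how sick a patient is, and sicker patients are also more likely to stop taking the pill. Splitting by intake then makes the drug look effective, while the intention-to-treat rate over everyone reveals the truth.

```python
import random

random.seed(2)

# Hypothetical drug arm: the pill has zero effect. Sickness alone drives
# both mortality and the odds of sticking to the regimen.
def patient():
    sick = random.random() < 0.5                       # 50% severely ill
    died = random.random() < (0.30 if sick else 0.10)  # mortality by health
    regular = random.random() < (0.3 if sick else 0.9) # adherence by health
    return died, regular

drug_arm = [patient() for _ in range(100_000)]

regular_deaths   = [died for died, reg in drug_arm if reg]
irregular_deaths = [died for died, reg in drug_arm if not reg]
itt_deaths       = [died for died, _ in drug_arm]   # intention-to-treat: everyone

rate = lambda xs: sum(xs) / len(xs)
print(round(rate(regular_deaths), 2))    # low: mostly healthier patients
print(round(rate(irregular_deaths), 2))  # high: mostly sicker patients
print(round(rate(itt_deaths), 2))        # the honest number, roughly 0.20
```

With these invented probabilities, the regular-intake group shows roughly the 15 percent of the text and the irregular group roughly double that, even though the pill did nothing; only the intention-to-treat rate is free of the selection effect.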

  99

  Why You Shouldn’t Read the News

  News Illusion

 
