Let’s apply the same thinking to the phone call. Keep in mind the many occasions when “Andy” thinks of you but doesn’t call; when you think of him and he doesn’t call; when you don’t think of him and he calls; when he doesn’t think of you and you call. . . . There is an almost infinite number of occasions when you don’t think of him and he doesn’t call. But since people spend about 90 percent of their time thinking about others, it is not unlikely that, eventually, two people will think of each other and one of them will pick up the phone. And it need not be just Andy: If you have a hundred other friends, the probability of such a coincidence increases manifold.
We tend to stumble when estimating probabilities. If someone says “never,” I usually register this as a minuscule probability greater than zero since “never” cannot be compensated by a negative probability.
In sum: Let’s not get too excited. Improbable coincidences are precisely that: rare but very possible events. It’s not surprising when they finally happen. What would be more surprising is if they never came to be.
25
The Calamity of Conformity
Groupthink
Have you ever bitten your tongue in a meeting? Surely. You sit there, say nothing, and nod along to proposals. After all, you don’t want to be the (eternal) naysayer. Moreover, you might not be 100 percent sure why you disagree, whereas the others are unanimous—and far from stupid. So you keep your mouth shut for another day. When everyone thinks and acts like this, groupthink is at work: This is where a group of smart people makes reckless decisions because everyone aligns their opinions with the supposed consensus. Thus, motions are passed that each individual group member would have rejected if no peer pressure had been involved. Groupthink is a special branch of social proof, a flaw that we discussed in chapter 4.
In March 1960, the CIA began to mobilize anticommunist exiles from Cuba, most of them living in Miami, to use against Fidel Castro’s regime. In January 1961, two days after taking office, President Kennedy was informed about the secret plan to invade Cuba. Three months later, a key meeting took place at the White House, where Kennedy and his advisers all voted in favor of the invasion. On April 17, 1961, a brigade of 1,400 exiled Cubans landed at the Bay of Pigs, on Cuba’s south coast, with the help of the U.S. Navy, the Air Force, and the CIA. The aim was to overthrow Castro’s government. However, nothing went as planned. On the first day, not a single supply ship reached the coast. The Cuban air force sank the first two, and the next two turned around and fled back to the United States. A day later, Castro’s army completely surrounded the brigade. On the third day, the 1,200 survivors were taken into custody and sent to military prisons.
Kennedy’s invasion of the Bay of Pigs is regarded as one of the biggest flops in American foreign policy. That such an absurd plan was ever agreed upon, never mind put into action, is astounding. All of the assumptions that spoke in favor of the invasion were erroneous. For example, Kennedy’s team completely underestimated the strength of Cuba’s air force. Also, it was expected that, in an emergency, the brigade would be able to hide in the Escambray Mountains and carry out an underground war against Castro from there. A glance at the map shows that the refuge was 100 miles away from the Bay of Pigs, with an insurmountable swamp in between. And yet Kennedy and his advisers were among the most intelligent people to ever run an American government. What went wrong between January and April 1961?
Psychology professor Irving Janis has studied many fiascoes. He concluded that they share the following pattern: Members of a close-knit group cultivate team spirit by (unconsciously) building illusions. One of these fantasies is a belief in invincibility: “If both our leader [in this case, Kennedy] and the group are confident that the plan will work, then luck will be on our side.” Next comes the illusion of unanimity: If the others are of the same opinion, any dissenting view must be wrong. No one wants to be the naysayer that destroys team unity. Finally, each person is happy to be part of the group. Expressing reservations could mean exclusion from it. In our evolutionary past, such banishment guaranteed death; hence our strong urge to remain in the group’s favor.
Groupthink is no stranger in the business world. A classic example is the fate of the world-class airline Swissair. Here, a group of highly paid consultants rallied around the former CEO and, bolstered by the euphoria of past successes, they developed a high-risk expansion strategy (including the acquisition of several European airlines). The zealous team built up such a strong consensus that even rational reservations were suppressed, leading to the airline’s collapse in 2001.
If you ever find yourself in a tight, unanimous group, you must speak your mind, even if your team does not like it. Question tacit assumptions, even if you risk expulsion from the warm nest. And, if you lead a group, appoint someone as devil’s advocate. She will not be the most popular member of the team, but she might be the most important.
26
Why You’ll Soon Be Playing Mega Trillions
Neglect of Probability
Two games of chance: In the first, you can win $10 million, and in the second, $10,000. Which do you play? If you win the first game, it changes your life completely: You can quit your job, tell your boss where to go, and live off the winnings. If you hit the jackpot in the second game, you can take a nice vacation in the Caribbean, but you’ll be back at your desk quickly enough to see your postcard arrive. The probability of winning is one in 100 million in the first game, and one in 10,000 in the second game. So which do you choose?
Our emotions draw us to the first game, even though the second is, objectively considered, ten times better (the expected win equals the prize multiplied by the probability of winning). Therefore, the trend is toward ever-larger jackpots—Mega Millions, Mega Billions, Mega Trillions—no matter how small the odds are.
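For the numerically inclined, here is a minimal sketch of that expected-value arithmetic, using only the payouts and odds stated above (ticket prices ignored):

```python
# Expected win = prize * probability of winning (ticket price ignored)
game_1 = 10_000_000 * (1 / 100_000_000)  # huge jackpot, tiny odds
game_2 = 10_000 * (1 / 10_000)           # modest jackpot, better odds

print(f"Game 1 expected win: ${game_1:.2f}")  # $0.10
print(f"Game 2 expected win: ${game_2:.2f}")  # $1.00 -> ten times better
```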
In a classic experiment from 1972, participants were divided into two groups. The members of the first group were told that they would receive a small electric shock. In the second group, subjects were told that the risk of this happening was only 50 percent. The researchers measured physical anxiety (heart rate, nervousness, sweating, etc.) shortly before commencing. The results were, well, shocking: There was absolutely no difference. Participants in both groups were equally stressed. Next, the researchers announced a series of reductions in the probability of a shock for the second group: from 50 percent to 20 percent, then 10 percent, then 5 percent. The result: still no difference! However, when they declared they would increase the strength of the expected current, both groups’ anxiety levels rose—again, by the same degree. This illustrates that we respond to the expected magnitude of an event (the size of the jackpot or the amount of electricity), but not to its likelihood. In other words: We lack an intuitive grasp of probability.
The proper term for this is neglect of probability, and it leads to errors in decision making. We invest in start-ups because the potential profit makes dollar signs flash before our eyes, but we forget (or are too lazy) to investigate the slim chances of new businesses actually achieving such growth. Similarly, following extensive media coverage of a plane crash, we cancel flights without really considering the minuscule probability of crashing (which, of course, remains the same before and after such a disaster). Many amateur investors compare their investments solely on the basis of yield. For them, Google shares with a return of 20 percent must be twice as good as property that returns 10 percent. That’s wrong. It would be a lot smarter to also consider both investments’ risks. But then again, we have no natural feel for this, so we often turn a blind eye to it.
Back to the experiment with the electric shocks: In the second group, the probability of getting a jolt was reduced further, from 5 percent to 4 percent to 3 percent. Only when the probability reached zero did the second group respond differently from the first. To us, 0 percent risk seems infinitely better than a (highly improbable) 1 percent risk.
To test this, let’s examine two methods of treating drinking water. Suppose a river has two equally large tributaries. One is treated using method A, which reduces the risk of dying from contaminated water from 5 percent to 2 percent. The other is treated using method B, which reduces the risk from 1 percent to 0 percent, that is, the threat is completely eliminated. So, method A or B? If you think like most people, you will opt for method B—which is silly because with method A, 3 percent fewer people die, and with B, just 1 percent fewer. Method A is three times as good! This fallacy is called the “zero-risk bias.”
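A back-of-the-envelope calculation, using the percentages above and an assumed population of 1,000 people on each tributary, makes the comparison concrete:

```python
population = 1_000  # assumed number of people served by each tributary

# Method A: risk of dying drops from 5% to 2%
saved_by_a = population * (0.05 - 0.02)  # 30 lives saved
# Method B: risk drops from 1% to 0% (risk eliminated entirely)
saved_by_b = population * (0.01 - 0.00)  # 10 lives saved

print(saved_by_a, saved_by_b)  # 30.0 10.0 -> method A saves three times as many
```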
A classic example of this is the Delaney Clause of the 1958 U.S. Food Additives Amendment, which prohibits food additives that cause cancer. Instituted to achieve zero risk of cancer, this ban sounds good at first, but it ended up leading to the use of more dangerous (but noncarcinogenic) food additives. It is also absurd: As Paracelsus illustrated in the sixteenth century, poisoning is always a question of dosage. Furthermore, this law can never be enforced properly since it is impossible to remove the last “banned” molecule from food. Each farm would have to function like a hyper-sterile computer-chip factory, and the cost of food would increase a hundredfold. Economically, zero risk rarely makes sense. One exception is when the consequences are colossal, such as a deadly, highly contagious virus escaping from a biotech laboratory.
We have no intuitive grasp of risk and thus distinguish poorly among different threats. The more serious the threat and the more emotional the topic (such as radioactivity), the less reassuring a reduction in risk seems to us. Two researchers at the University of Chicago have shown that people are equally afraid of a 99 percent chance as they are of a 1 percent chance of contamination by toxic chemicals. An irrational response, but a common one.
27
Why the Last Cookie in the Jar Makes Your Mouth Water
Scarcity Error
Coffee at a friend’s house. We sat trying to make conversation while her three children grappled with one another on the floor. Suddenly I remembered that I had brought some glass marbles with me—a whole bag full. I spilled them out on the floor, in the hope that the little angels would play with them in peace. Far from it: A heated argument ensued. I didn’t understand what was happening until I looked more closely. Apparently, among the countless marbles, there was just one blue one, and the children scrambled for it. All the marbles were exactly the same size and shiny and bright. But the blue one had an advantage over the others—it was one of a kind. I had to laugh at how childish children are!
In August 2005, when I heard that Google would launch its own e-mail service, I was dead-set on getting an account. (In the end I did.) At the time, new accounts were very restricted and were given out only by invitation. This made me want one even more. But why? Certainly not because I needed another e-mail account (back then, I already had four), or because Gmail was better than the competition, but simply because not everyone had access to it. Looking back, I have to laugh at how childish adults are!
Rara sunt cara, said the Romans. Rare is valuable. In fact, the scarcity error is as old as mankind. My friend with the three children is a part-time real estate agent. Whenever she has an interested buyer who cannot decide, she calls and says: “A doctor from London saw the plot of land yesterday. He liked it a lot. What about you? Are you still interested?” The doctor from London—sometimes it’s a professor or a banker—is, of course, fictitious. The effect is very real, though: It causes prospects to see the opportunity disappearing before their eyes, so they act and close the deal. Why? The potential shortage of supply, yet again. Objectively, this makes no sense: Either the prospect wants the land for the set price or he does not—regardless of any doctors from London.
To assess the quality of cookies, Professor Stephen Worchel split participants into two groups. The first group received an entire box of cookies, and the second group just two. In the end, the subjects with just two cookies rated the quality much higher than the first group did. The experiment was repeated several times and always showed the same result.
“Only while stocks last,” the ads alert. “Today only,” warn the posters. Gallery owners take advantage of the scarcity error by placing red “sold” dots under most of their paintings, transforming the remaining few works into rare items that must be snatched up quickly. We collect stamps, coins, vintage cars even when they serve no practical purpose. The post office doesn’t accept the old stamps, the banks don’t take old coins, and the vintage cars are no longer allowed on the road. These are all side issues; the attraction is that they are in short supply.
In one study, students were asked to arrange ten posters in order of attractiveness—with the agreement that afterward they could keep one poster as a reward for their participation. Five minutes later, they were told that the poster with the third-highest rating was no longer available. Then they were asked to judge all ten from scratch. The poster that was no longer available was suddenly classified as the most beautiful. In psychology, this phenomenon is called “reactance”: When we are deprived of an option, we suddenly deem it more attractive. It is a kind of act of defiance. It is also known as the “Romeo and Juliet effect”: Because the love between the tragic Shakespearean teenagers is forbidden, it knows no bounds. This yearning need not be romantic. In the United States, where alcohol is off-limits until the age of twenty-one, student parties are often littered with desperately drunk teenagers. In Europe, where the age limit is eighteen, you don’t witness this type of behavior.
In conclusion: The typical response to scarcity is a lapse in clear thinking. Assess products and services solely on the basis of their price and benefits. It should be of no importance if an item is disappearing fast or if any doctors from London take an interest.
28
When You Hear Hoofbeats, Don’t Expect a Zebra
Base-Rate Neglect
Mark is a thin man from Germany with glasses who likes to listen to Mozart. Which is more likely? That (a) Mark is a truck driver or (b) he is a professor of literature in Frankfurt. Most will bet on (b), which is wrong. Germany has ten thousand times more truck drivers than Frankfurt has literature professors. Therefore, it is more likely that Mark is a truck driver. So what just happened? The detailed description enticed us to overlook the statistical reality. Scientists call this fallacy base-rate neglect: a disregard of how frequently something occurs in the underlying population. It is one of the most common errors in reasoning. Virtually all journalists, economists, and politicians fall for it on a regular basis.
Here is a second example: A young man is stabbed and fatally injured. Which of these is more likely? (a) The attacker is a Russian immigrant and imports combat knives illegally, or (b) the attacker is a middle-class American. You know the drill now: Option B is much more likely because there are a million times more middle-class Americans than there are Russian knife importers.
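Take the first example, Mark: a rough Bayesian sketch shows why the vivid description cannot outweigh the base rate. The 10,000-to-1 ratio comes from the text above; the assumption that the description fits a literature professor a hundred times better than a truck driver is purely illustrative:

```python
# Prior odds: truck drivers vs. Frankfurt literature professors (from the text)
prior_odds = 10_000 / 1

# Likelihood ratio: assume the "thin, glasses, Mozart" description is
# 100 times more typical of a professor than of a truck driver (illustrative)
likelihood_ratio = 1 / 100

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 100.0 -> still 100-to-1 in favor of the truck driver
```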
In medicine, base-rate neglect plays an important role. For example, migraines can point (among other things) to a viral infection or a brain tumor. However, viral infections are much more common (in other words, they have a higher base rate), so doctors assess patients for these first before testing for tumors. This is very reasonable. In medical school, residents spend a lot of time learning to avoid base-rate neglect. The motto drummed into any prospective doctor in the United States is: “When you hear hoofbeats behind you, don’t expect to see a zebra,” which means: Investigate the most likely ailments before you start diagnosing exotic diseases, even if you happen to specialize in them. Doctors are the only professionals who enjoy this base-rate training.
Regrettably, few people in business are exposed to base-rate training. Now and then I see high-flying entrepreneurs’ business plans and get very excited by their products, ideas, and personalities. I often catch myself thinking: This could be the next Google! But a glance at the base rate brings me back down to earth. The probability that a firm will survive the first five years is 20 percent. So what, then, is the probability that they will grow into a global corporation? Almost zero. Warren Buffett once explained why he does not invest in biotech companies:
“How many of these companies make revenues of several hundred million dollars? It simply does not happen. . . . The most likely scenario is that these firms will just hover somewhere in the middle.” This is clear base-rate thinking. Survivorship bias (chapter 1) is one cause of base-rate neglect: We tend to see only the successful individuals and companies, because failures go unreported (or are underreported), and so we overlook the large, “invisible” majority of cases.
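The same arithmetic applies to the start-up enthusiasm above. A minimal sketch, using the 20 percent five-year survival rate quoted in the text and an assumed, purely illustrative 1-in-10,000 chance that a survivor grows into a global corporation:

```python
p_survive_5_years = 0.20           # base rate quoted in the text
p_global_if_survives = 1 / 10_000  # assumed figure, purely illustrative

p_next_google = p_survive_5_years * p_global_if_survives
print(f"{p_next_google:.6%}")  # 0.002000% -> almost zero, as the base rate suggests
```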
Imagine you are sampling wine in a restaurant and have to guess which country it comes from. The label of the bottle is covered. If, like me, you are not a wine connoisseur, the only lifeline you have is the base rate. You know from experience that about three-quarters of the wines on the menu are of French origin, so, reasonably, you guess France, even if you suspect a Chilean or Californian twist.