Everything Is Obvious


by Duncan J. Watts


  This tendency, which psychologists call creeping determinism, is related to the better-known phenomenon of hindsight bias, the after-the-fact tendency to think that we “knew it all along.” In a variety of lab experiments, psychologists have asked participants to make predictions about future events and then reinterviewed them after the events in question had taken place. When recalling their previous predictions, subjects consistently report being more certain of their correct predictions, and less certain of their incorrect predictions, than they had reported at the time they made them. Creeping determinism, however, is subtly different from hindsight bias and even more deceptive. Hindsight bias, it turns out, can be counteracted by reminding people of what they said before they knew the answer or by forcing them to keep records of their predictions. But even when we recall perfectly accurately how uncertain we were about the way events would transpire—even when we concede that we were caught completely by surprise—we still have a tendency to treat the realized outcome as inevitable. Ahead of time, for example, it might have seemed that the surge was just as likely to have no effect as to lead to a drop in violence. But once we know that the drop in violence is what actually happened, it doesn’t matter whether or not we knew all along that it was going to happen (hindsight bias). We still believe that it was going to happen, because it did.3

  SAMPLING BIAS

  Creeping determinism means that we pay less attention than we should to the things that don’t happen. But we also pay too little attention to most of what does happen. We notice when we just miss the train, but not all the times when it arrives shortly after we do. We notice when we unexpectedly run into an acquaintance at the airport, but not all the times when we do not. We notice when a mutual fund manager beats the S&P 500 ten years in a row or when a basketball player has a “hot hand” or when a baseball player has a long hitting streak, but not all the times when fund managers and sportsmen alike do not display streaks of any kind. And we notice when a new trend appears or a small company becomes phenomenally successful, but not all the times when potential trends or new companies disappear before even registering on the public consciousness.

  Just as with our tendency to emphasize the things that happened over those that didn’t, our bias toward “interesting” things is completely understandable. Why would we be interested in uninteresting things? Nevertheless, it exacerbates our tendency to construct explanations that account for only some of the data. If we want to know why some people are rich, for example, or why some companies are successful, it may seem sensible to look for rich people or successful companies and identify which attributes they share. But what this exercise can’t reveal is that if we instead looked at people who aren’t rich or companies that aren’t successful, we might find that they exhibit many of the same attributes. The only way to identify attributes that differentiate successful from unsuccessful entities is to consider both kinds, and to look for systematic differences. Yet because what we care about is success, it seems pointless—or simply uninteresting—to worry about the absence of success. Thus we infer that certain attributes are related to success when in fact they may be equally related to failure.
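  To see how this plays out in numbers, consider a minimal sketch in Python (the figures are invented purely for illustration, not taken from any study): an attribute that is equally common among successes and failures will look like a cause of success if we only ever examine the successes.

    # Toy illustration of sampling bias: the attribute is unrelated to the outcome.
    import random

    random.seed(0)
    companies = []
    for _ in range(10_000):
        charismatic_ceo = random.random() < 0.6   # attribute, same rate for everyone
        successful = random.random() < 0.05       # success is rare and independent of it
        companies.append((charismatic_ceo, successful))

    successes = [c for c in companies if c[1]]
    failures = [c for c in companies if not c[1]]

    # Studying only the successes: "60 percent of successful companies have charismatic CEOs!"
    print(sum(c[0] for c in successes) / len(successes))
    # ...but the same is true of the companies that failed.
    print(sum(c[0] for c in failures) / len(failures))

  Both printed shares come out near 60 percent; the attribute appears to explain success only because the failures were never consulted.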

  This problem of “sampling bias” is especially acute when the things we pay attention to—the interesting events—happen only rarely. For example, when Western Airlines Flight 2605 crashed into a truck that had been left on an unused runway at Mexico City on October 31, 1979, investigators quickly identified five contributing factors. First, both the pilot and the navigator were fatigued, each having had only a few hours’ sleep in the past twenty-four hours. Second, there was a communication mix-up between the crew and the air traffic controller, who had instructed the plane to come in on the radar beam that was oriented on the unused runway, and then shift to the active runway for the landing. Third, this mix-up was compounded by a malfunctioning radio, which failed for a critical part of the approach, during which time the confusion might have been clarified. Fourth, the airport was shrouded in heavy fog, obscuring both the truck and the active runway from the pilot’s view. And fifth, the ground controller got confused during the final approach, probably due to the stressful situation, and thought that it was the inactive runway that had been lit.

  As the psychologist Robyn Dawes explains in his account of the accident, the investigation concluded that although no one of these factors—fatigue, communication mix-up, radio failure, weather, and stress—had caused the accident on its own, the combination of all five together had proven fatal. It seems like a pretty reasonable conclusion, and it’s consistent with the explanations we’re familiar with for plane crashes in general. But as Dawes also points out, these same five factors arise all the time, including many, many instances where the planes did not crash. So if instead of starting with the crash and working backward to identify its causes, we worked forward, counting all the times when we observed some combination of fatigue, communication mix-up, radio failure, weather, and stress, chances are that most of those events would not result in crashes either.4

  The difference between these two ways of looking at the world is illustrated in the figure below. In the left-hand panel, we see the five risk factors identified by the Flight 2605 investigation and all the corresponding outcomes. One of those outcomes is indeed the crash, but there are many other noncrash outcomes as well. These factors, in other words, are “necessary but not sufficient” conditions: Without them, it’s extremely unlikely that we’d have a crash; but just because they’re present doesn’t mean that a crash will happen, or is even all that likely. Once we do see a crash, however, our view of the world shifts to the right-hand panel. Now all the “noncrashes” have disappeared, because we’re no longer trying to explain them—we’re only trying to account for the crash—and all the arrows from the factors to the noncrashes have disappeared as well. The result is that the very same set of factors that in the left-hand panel appeared to do a poor job of predicting the crash now seems to do an excellent job.
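  To put rough numbers on the two panels, here is a small Python sketch (the quantities are assumptions chosen for illustration, not figures from the accident report): treating the five factors as necessary but not sufficient, the forward-looking and backward-looking views of the very same situation come apart.

    # Toy numbers: the factor combination is rare, and a crash is rare even then.
    flights = 1_000_000
    flights_with_all_factors = flights // 1_000   # assumed: combination present on 0.1% of flights
    crashes = 10                                  # assumed: every crash occurs among those flights

    # Forward view (left-hand panel): given the factors, a crash is still very unlikely.
    print("P(crash | factors) =", crashes / flights_with_all_factors)   # 0.01

    # Backward view (right-hand panel): given a crash, the factors were always present.
    print("P(factors | crash) =", crashes / crashes)                    # 1.0

  Looking forward, the factors predict a crash only one time in a hundred; looking backward from a crash, they are always there, which is what makes them feel like an explanation.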

  By identifying necessary conditions, the investigations that follow plane crashes help to keep them rare—which is obviously a good thing—but the resulting temptation to treat them as sufficient conditions nevertheless plays havoc with our intuition for why crashes happen when they do. And much the same is true of other rare events, like school shootings, terrorist attacks, and stock market crashes. Most school shooters, for example, are teenage boys who have distant or strained relationships with their parents, have been exposed to violent TV and video games, are alienated from their peers, and have fantasized about taking revenge. But these same attributes describe literally thousands of teenage boys, almost all of whom do not go on to hurt anyone, ever.5 Likewise, the so-called systemic failure that almost allowed Umar Farouk Abdulmutallab, a twenty-three-year-old Nigerian, to bring down a Northwest Airlines flight landing in Detroit on Christmas Day 2009 comprised the sorts of errors and oversights that likely happen in the intelligence and homeland security agencies thousands of times every year—almost always with no adverse consequences. And for every day in which the stock market experiences a wild plunge, there are thousands of days in which roughly the same sorts of circumstances produce nothing remarkable at all.

  IMAGINED CAUSES

  Together, creeping determinism and sampling bias lead commonsense explanations to suffer from what is called the post-hoc fallacy. The fallacy is related to a fundamental requirement of cause and effect—that in order for A to be said to cause B, A must precede B in time. If a billiard ball starts to move before it is struck by another billiard ball, something else must have caused it to move. Conversely, if we feel the wind blow and only then see the branches of a nearby tree begin to sway, we feel safe concluding that it was the wind that caused the movement. All of this is fine. But just because B follows A doesn’t mean that A has caused B. If you hear a bird sing or see a cat walk along a wall, and then see the branches start to wave, you probably don’t conclude that either the bird or the cat is causing the branches to move. It’s an obvious point, and in the physical world we have good enough theories about how things work that we can usually sort plausible from implausible. But when it comes to social phenomena, common sense is extremely good at making all sorts of potential causes seem plausible. The result is that we are tempted to infer a cause-and-effect relationship when all we have witnessed is a sequence of events. This is the post-hoc fallacy.

  Malcolm Gladwell’s “law of the few,” discussed in the last chapter, is a poster child for the post-hoc fallacy. Any time something interesting happens, whether it is a surprise best seller, a breakout artist, or a hit product, it will invariably be the case that someone was buying it or doing it before everyone else, and that person is going to seem influential. The Tipping Point, in fact, is replete with stories about interesting people who seem to have played critical roles in important events: Paul Revere and his famous midnight ride from Boston to Lexington that energized the local militias and triggered the American Revolution. Gaëtan Dugas, the sexually voracious Canadian flight attendant who became known as Patient Zero of the American HIV epidemic. Lois Weisberg, the title character of Gladwell’s earlier New Yorker article, who seems to know everyone, and has a gift for connecting people. And the group of East Village hipsters whose ironic embrace of Hush Puppies shoes preceded a dramatic revival in the brand’s fortunes.

  These are all great stories, and it’s hard to read them and not agree with Gladwell that when something happens that is as surprising and dramatic as the Minutemen’s unexpectedly fierce defense of Lexington on April 19, 1775, someone special—someone like Paul Revere—must have helped it along. Gladwell’s explanation is especially convincing because he also relates the story of William Dawes, another rider that night who also tried to alert the local militia, but who rode a different route than Revere. Whereas the locals along Revere’s route turned out in force the next day, the townsfolk in places like Waltham, Massachusetts, which Dawes visited, seemed not to have found out about the British movements until it was too late. Because Revere rode one route and Dawes rode the other, it seems clear that the difference in outcomes can be attributed to differences between the two men. Revere was a connector, and Dawes wasn’t.6

  What Gladwell doesn’t consider, however, is that many other factors were also different about the two rides: different routes, different towns, and different people who made different choices about whom to alert once they had heard the news themselves. Paul Revere may well have been as remarkable and charismatic as Gladwell claims, while William Dawes may not have been. But in reality there was so much else going on that night that it’s no more possible to attribute the outcomes the next day to the intrinsic attributes of the two men than it is to attribute the success of the Mona Lisa to its particular features, or the drop in violence in the Sunni Triangle of Iraq in 2008 to the surge. Rather, people like Revere, who after the fact seem to have been influential in causing some dramatic outcome, may instead be more like the “accidental influentials” that Peter Dodds and I found in our simulations—individuals whose apparent role actually depended on a confluence of other factors.

  To illustrate how easily the post-hoc fallacy can generate accidental influentials, consider the following example from a real epidemic: the SARS epidemic that exploded in Hong Kong in early 2003. One of the most striking findings of the subsequent investigation was that a single patient, a young man who had traveled to Hong Kong by train from mainland China, and had been admitted to the Prince of Wales Hospital, had directly infected fifty others, leading eventually to 156 cases in the hospital alone. Subsequently the Prince of Wales outbreak led to a second major outbreak in Hong Kong, which in turn led to the epidemic’s spread to Canada and other countries. Based on examples like the SARS epidemic, a growing number of epidemiologists have become convinced that the ultimate seriousness of the epidemic depends disproportionately on the activities of superspreaders—individuals like Gaëtan Dugas and the Prince of Wales patient who single-handedly infect many others.7

  But how special are these people really? A closer look at the SARS case reveals that the real source of the problem was a misdiagnosis of pneumonia when the patient checked into the hospital. Instead of being isolated—the standard procedure for a patient infected with an unknown respiratory virus—the misdiagnosed SARS victim was placed in an open ward with poor air circulation. Even worse, because the diagnosis was pneumonia, a bronchial ventilator was placed into his lungs, which then proceeded to spew vast numbers of viral particles into the air around him. The conditions in the crowded ward resulted in a number of medical workers as well as other patients becoming infected. The event was important in spreading the disease—at least locally. But what was important about it was not the patient himself so much as the particular details of how he was treated. Prior to that, nothing you could have known about the patient would have led you to suspect that there was anything special about him, because there was nothing special about him.

  Even after the Prince of Wales outbreak, it would have been a mistake to focus on superspreading individuals rather than the circumstances that led to the virus being spread. The next major SARS outbreak, for example, took place shortly afterward in a Hong Kong apartment building, the Amoy Gardens. This time the responsible person, who had become infected at the hospital while being treated for renal failure, also had a bad case of diarrhea. Unfortunately, the building’s plumbing system was also poorly maintained, and the infection spread to three hundred other individuals in the building via a leaking drain, even though none of these victims was ever in the same room as him. Whatever lessons one might have inferred about superspreaders by studying the particular characteristics of the patient in the Prince of Wales Hospital, therefore, would have been next to useless in the Amoy Gardens. In both cases, the so-called superspreaders were simply accidental by-products of other, more complicated circumstances.
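  The same point can be made with a toy branching simulation in Python (an invented illustration, not an epidemiological model): every infected person below has exactly the same transmissibility, yet rare circumstances, a crowded ward here, a leaking drain there, still concentrate most of the infections in a handful of apparent superspreaders.

    # Toy model: identical individuals, occasional environmental amplification.
    import random

    random.seed(1)
    secondary_cases = []
    for _ in range(1_000):
        contacts = random.randint(5, 15)   # same range of contacts for everyone
        p_transmit = 0.02                  # same transmissibility for everyone
        if random.random() < 0.02:         # rare unlucky circumstances amplify exposure
            contacts *= 30
        infected = sum(1 for _ in range(contacts) if random.random() < p_transmit)
        secondary_cases.append(infected)

    secondary_cases.sort(reverse=True)
    print("top 20 infectors account for", sum(secondary_cases[:20]),
          "of", sum(secondary_cases), "secondary infections")

  Ex ante, the individuals in this toy model are indistinguishable; the few who end up at the top of the list do so because of the circumstances they happened to encounter, which is the sense in which they are accidental.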

  We’ll never know what would have happened at Lexington on April 19, 1775, had Paul Revere instead ridden William Dawes’s midnight ride and Dawes ridden Revere’s. But it’s entirely possible that it would have worked out the same way, with the exception that it would have been William Dawes’s name that was passed down in history, not Paul Revere’s. Just as the outbreaks at the Prince of Wales Hospital and the Amoy Gardens happened for a complex combination of reasons, so too the victory at Lexington depended on the decisions and interactions of thousands of people, not to mention other accidents of fate. In other words, although it is tempting to attribute the outcome to a single special person, we should remember that the temptation arises simply because this is how we’d like the world to work, not because that is how it actually works. In this example, as in many others, common sense and history conspire to generate the illusion of cause and effect where none exists. On the one hand, common sense excels in generating plausible causes, whether special people, or special attributes, or special circumstances. And on the other hand, history obligingly discards most of the evidence, leaving only a single thread of events to explain. Commonsense explanations therefore seem to tell us why something happened when in fact all they’re doing is describing what happened.

  HISTORY CANNOT BE TOLD WHILE IT’S HAPPENING

  The inability to differentiate the “why” from the “what” of historical events presents a serious problem to anyone hoping to learn from the past. But surely we can at least be confident that we know what happened, even if we can’t be sure why. If anything seems like a matter of common sense, it is that history is a literal description of past events. And yet as the Russian-British philosopher Isaiah Berlin argued, the kinds of descriptions that historians give of historical events wouldn’t have made much sense to the people who actually participated in them. Berlin illustrated this problem with a scene from Tolstoy’s War and Peace, in which “Pierre Bezukhov wanders about, ‘lost’ on the battlefield of Borodino, looking for something which he imagines as a kind of set-piece; a battle as depicted by the historians or the painters. But he finds only the ordinary confusion of individual human beings haphazardly attending to this or that human want … a succession of ‘accidents’ whose origins and consequences are, by and large, untraceable and unpredictable; only loosely strung groups of events forming an ever-varying pattern, following no discernable order.”8

  Faced with such an objection, a historian might reasonably respond that Bezukhov simply lacked the ability to observe all the various parts of the battlefield puzzle, or else the wherewithal to put all the pieces together in his mind in real time. Perhaps, in other words, the only difference between the historian’s view of the battle and Bezukhov’s is that the historian has had the time and leisure to gather and synthesize information from many different participants, none of whom was in a position to witness the whole picture. Viewed from this perspective, it may indeed be difficult or even impossible to understand what is happening at the time it is happening. But the difficulty derives solely from a practical problem about the speed with which one can realistically assemble the relevant facts. If true, this response implies that it ought to be possible for someone like Bezukhov to have known what was going on at the battle of Borodino in principle, even if not in practice.9

  But let’s imagine for a moment that we could solve this practical problem. Imagine that we could summon up a truly panoptical being, able to observe in real time every single person, object, action, thought, and intention in Tolstoy’s battle, or any other event. In fact, the philosopher Arthur Danto proposed precisely such a hypothetical being, which he called the Ideal Chronicler, or IC. Replacing Pierre Bezukhov with Danto’s Ideal Chronicler, one could then ask the question, What would the IC observe? To begin with, the Ideal Chronicler would have a lot of advantages over poor Bezukhov. Not only could it observe every action of every combatant at Borodino, but it could also observe everything else going on in the world as well. Having been around forever, moreover, the Ideal Chronicler would also know everything that had happened right up to that point, and would have the power to synthesize all that information, and even make inferences about where it might be leading. The IC, in other words, would have far more information, and infinitely greater ability to process it, than any mortal historian.

 
