Everything Is Obvious


by Duncan J. Watts


  Amazingly, in spite of all that, the Ideal Chronicler would still have essentially the same problem as Bezukhov; it could not give the kind of descriptions of what was happening that historians provide. The reason is that when historians describe the past, they invariably rely on what Danto calls narrative sentences, meaning sentences that purport to be describing something that happened at a particular point in time but do so in a way that invokes knowledge of a later point. For example, consider the following sentence: “One afternoon about a year ago, Bob was out in his garden planting roses.” This is what Danto calls a normal sentence, in that it does nothing more than describe what was happening at the time. But consider now the same sentence, slightly modified: “One afternoon about a year ago, Bob was out in his garden planting his prize-winning roses.” This is a narrative sentence, because it implicitly refers to an event—Bob’s roses winning a prize—that hadn’t happened at the time of the planting.

  The difference between the two sentences seems negligible. But what Danto points out is that only the first kind of sentence—the normal one—would have made sense to the participants at the time. That is, Bob might have said at the time “I am planting roses” or even “I am planting roses and they are going to be prizewinners.” But it would be very strange for him to have said “I am planting my prize-winning roses” before they’d actually won any prizes. The reason is that while the first two statements make predictions about the future—that the roots Bob is putting in the ground will one day bloom into a rosebush, or that he intends to submit them to a contest and thinks he will win—the third is something different: It assumes foreknowledge of a very specific event that will only color the events of the present after it has actually happened. It’s the kind of thing that Bob could say only if he were a prophet—a character who sees the future with sufficient clarity that he can speak about the present as though looking back on it.

  Danto’s point is that the all-knowing, hypothetical Ideal Chronicler can’t use narrative sentences either. It knows everything that is happening now, as well as everything that has led up to now. It can even make inferences about how all the events it knows about might fit together. But what it can’t do is foresee the future; it cannot refer to what is happening now in light of future events. So when English and French ships began to skirmish in the English Channel in 1337, the Ideal Chronicler might have noted that a war of some kind seemed likely, but it could not have recorded the observation “The Hundred Years War began today.” Not only was the extent of the conflict between the two countries unknown at the time, but the term “Hundred Years War” was only invented long after it ended as shorthand to describe what was in actuality a series of intermittent conflicts from 1337 to 1453. Likewise, when Isaac Newton published his masterpiece, Principia, the Ideal Chronicler might have been able to say it was a major contribution to celestial mechanics, and even predicted that it would revolutionize science. But to claim that Newton was laying the foundation for what became modern science, or was playing a key role in the Enlightenment, would be beyond the IC. These are narrative sentences that could only be uttered after the future events had taken place.10

  This may sound like a trivial argument over semantics. Surely even if the Ideal Chronicler can’t use exactly the words that historians use, it can still perceive the essence of what is happening as well as they do. But in fact Danto’s point is precisely that historical descriptions of “what is happening” are impossible without narrative sentences—that narrative sentences are the very essence of historical explanations. This is a critical distinction, because historical accounts do often claim to be describing “only” what happened in detached, dispassionate detail. Yet as Berlin and Danto both argue, literal descriptions of what happened are impossible. Perhaps even more important, they would also not serve the purpose of historical explanation, which is not to reproduce the events of the past so much as to explain why they mattered. And the only way to know what mattered, and why, is to have been able to see what happened as a result—information that, by definition, not even the impossibly talented Ideal Chronicler possesses. History cannot be told while it is happening, therefore, not only because the people involved are too busy or too confused to puzzle it out, but because what is happening can’t be made sense of until its implications have been resolved. And when will that be? As it turns out, even this innocent question can pose problems for commonsense explanations.

  IT’S NOT OVER TILL IT’S OVER

  In the classic movie Butch Cassidy and the Sundance Kid, Butch, Sundance, and Etta decide to escape their troubles in the United States by fleeing to Bolivia, where, according to Butch, the gold is practically digging itself out of the ground. But when they finally arrive, after a long and glamorous journey aboard a steamer from New York, they are greeted by a dusty yard filled with pigs and chickens and a couple of run-down stone huts. The Sundance Kid is furious and even Etta looks depressed. “You get much more for your money in Bolivia,” claims Butch optimistically. “What could they possibly have that you could possibly want to buy?” replies the Kid in disgust. Of course we know that things will soon be looking up for our pair of charming bank robbers. And sure enough, after some amusing misadventures with the language, they are. But we also know that it is eventually going to end in tears, with Butch and Sundance frozen in that timeless sepia image, bursting out of their hiding place, pistols drawn, into a barrage of gunfire.

  So was the decision to go to Bolivia a good decision or a bad one? Intuitively, it seems like the latter because it led inexorably to Butch and the Kid’s ultimate demise. But now we know that that way of thinking suffers from creeping determinism—the assumption that because we know things ended badly, they had to have ended badly. To avoid this error, therefore, we need to imagine “running” history many times, and comparing the different potential outcomes that Butch and the Kid might have experienced had they made different decisions. But at what point in these various histories should we make our comparison? At first, leaving the United States seemed like a great idea—they were escaping what seemed like certain death at the hands of the lawman Joe Lefors and his posse, and the journey was all fun and games. Later in the story, the decision seemed like a terrible idea—of all the many places they might have escaped to, why this godforsaken wasteland? Then it seemed like a good decision again—they were making loads of easy money robbing small-town banks. And then, finally, it seemed like a bad idea again as their exploits caught up to them. Even if you granted them the benefit of foresight, in other words—something we already know is impossible—they may still have reached very different conclusions about their choice, depending on which point in the future they chose to evaluate it. Which one is right?

  Within the narrow confines of a movie narrative, it seems obvious that the right time to evaluate everything should be at the end. But in real life, the situation is far more ambiguous. Just as the characters in a story don’t know when the ending is, we can’t know when the movie of our own life will reach its final scene. And even if we did, we could hardly go around evaluating all choices, however trivial, in light of our final state on our deathbed. In fact, even then we couldn’t be sure of the meaning of what we had accomplished. At least when Achilles decided to go to Troy, he knew what the bargain was: his life, in return for everlasting fame. But for the rest of us, the choices we make are far less certain. Today’s embarrassment may become tomorrow’s valuable lesson. Or yesterday’s “mission accomplished” may become today’s painful irony. Perhaps that painting we picked up at the market will turn out to be an old master. Perhaps our leadership of the family firm will be sullied by the unearthing of some ethical scandal, about which we may not have known. Perhaps our children will go on to achieve great things and attribute their success to the many small lessons we taught them. Or perhaps we will have unwittingly pushed them into the wrong career and undermined their chances of real happiness. Choices that seem insignificant at the time we make them may one day turn out to be of immense import. And choices that seem incredibly important to us now may later seem to have been of little consequence. We just won’t know until we know. And even then we still may not know, because it may not be entirely up to us to decide.

  In much of life, in other words, the very notion of a well-defined “outcome,” at which point we can evaluate, once and for all, the consequences of an action is a convenient fiction. In reality, the events that we label as outcomes are never really endpoints. Instead, they are artificially imposed milestones, just as the ending of a movie is really an artificial end to what in reality would be an ongoing story. And depending on where we choose to impose an “end” to a process, we may infer very different lessons from the outcome. Let’s say, for example, that we observe that a company is hugely successful and we want to emulate that success with our own company. How should we go about doing that? Common sense (along with a number of bestselling business books) suggests that we should study the successful company, identify the key drivers of its success, and then replicate those practices and attributes in our own organization. But what if I told you that a year later this same company has lost 80 percent of its market value, and the same business press that is raving about it now will be howling for blood? Common sense would suggest that perhaps you should look somewhere else for a model of success. But how will you know that? And how will you know what will happen the year after, or the year after that?

  Problems like this actually arise in the business world all the time. In the late 1990s, for example, Cisco Systems—a manufacturer of Internet routers and telecommunications switching equipment—was a star of Silicon Valley and the darling of Wall Street. It rose from humble beginnings at the dawn of the Internet era to become, in March 2000, the most valuable company in the world, with a market capitalization of over $500 billion. As you might expect, the business press went wild. Fortune called Cisco “computing’s new superpower” and hailed John Chambers, the CEO, as the best CEO of the information age. In 2001, however, Cisco’s stock plummeted, bottoming out that April at $14, down from its high of $80 just over a year earlier. The same business press that had fallen over itself to praise the firm now lambasted its strategy, its execution, and its leadership. Was it all a sham? It seemed so at the time, and many articles were written explaining how a company that had seemed so successful could have been so flawed. But not so fast: by late 2007, the stock had more than doubled to over $33, and the company, still guided by the same CEO, was handsomely profitable.11

  So was Cisco the great company that it was supposed to have been in the late 1990s after all? Or was it still the house of cards that it appeared to be in 2001? Or was it both, or neither? Following the stock price since 2007, you couldn’t tell. At first, Cisco dropped again to $14 in early 2009 in the depths of the financial crisis. But by 2010, it had recovered yet again to $24. No one knows where Cisco’s stock price will be a year from now, or ten years from now. But chances are that the business press at the time will have a story that “explains” all the ups and downs it has experienced to that point in a way that leads neatly to whatever the current valuation is. Unfortunately, these explanations will suffer from exactly the same problem as all the explanations that went before them—that at no point in time is the story ever really “over.” Something always happens afterward, and what happens afterward is liable to change our perception of the current outcome, as well as our perception of the outcomes that we have already explained. It’s actually quite remarkable in a way that we are able to completely rewrite our previous explanations without experiencing any discomfort about the one we are currently articulating, each time acting as if now is the right time to evaluate the outcome. Yet as we can see from the example of Cisco, not to mention countless other examples from business, politics, and planning, there is no reason to think that now is any better time to stop and evaluate than any other.

  WHOEVER TELLS THE BEST STORY WINS

  Historical explanations, in other words, are neither causal explanations nor even really descriptions—at least not in the sense that we imagine them to be. Rather, they are stories. As the historian John Lewis Gaddis points out, they are stories that are constrained by certain historical facts and other observable evidence.12 Nevertheless, like a good story, historical explanations concentrate on what’s interesting, downplaying multiple causes and omitting all the things that might have happened but didn’t. As with a good story, they enhance drama by focusing the action around a few events and actors, thereby imbuing them with special significance or meaning. And like good stories, good historical explanations are also coherent, which means they tend to emphasize simple, linear determinism over complexity, randomness, and ambiguity. Most of all, they have a beginning, a middle, and an end, at which point everything—including the characters identified, the order in which the events are presented, and the manner in which both characters and events are described—all has to make sense.

  So powerful is the appeal of a good story that even when we are trying to evaluate an explanation scientifically—that is, on the basis of how well it accounts for the data—we can’t help judging it in terms of its narrative attributes. In a range of experiments, for example, psychologists have found that simpler explanations are judged more likely to be true than complex explanations, not because simpler explanations actually explain more, but rather just because they are simpler. In one study, for example, when faced with a choice of explanations for a fictitious set of medical symptoms, a majority of respondents chose an explanation involving only one disease over an alternative explanation involving two diseases, even when the combination of the two diseases was statistically twice as likely as the single-disease explanation.13 Somewhat paradoxically, explanations are also judged to be more likely to be true when they have informative details added, even when the extra details are irrelevant or actually make the explanation less likely. In one famous experiment, for example, students shown descriptions of two fictitious individuals, “Bill” and “Linda,” consistently preferred more detailed backstories—that Bill was both an accountant and a jazz player rather than simply a jazz player, or that Linda was a feminist bank teller rather than just a bank teller—even though the less detailed descriptions were logically more likely.14 In addition to their content, moreover, explanations that are skillfully delivered are judged more plausible than poorly delivered ones, even when the explanations themselves are identical. And explanations that are intuitively plausible are judged more likely than those that are counterintuitive—even though, as we know from all those Agatha Christie novels, the most plausible explanation can be badly wrong. Finally, people are observed to be more confident about their judgments when they have an explanation at hand, even when they have no idea how likely the explanation is to be correct.15

  It’s true, of course, that scientific explanations often start out as stories as well, and so have some of the same attributes.16 The key difference between science and storytelling, however, is that in science we perform experiments that explicitly test our “stories.” And when they don’t work, we modify them until they do. Even in branches of science like astronomy, where true experiments are impossible, we do something analogous—building theories based on past observations and testing them on future ones. Because history is only run once, however, our inability to do experiments effectively excludes precisely the kind of evidence that would be necessary to infer a genuine cause-and-effect relation. In the absence of experiments, therefore, our storytelling abilities are allowed to run unchecked, in the process burying most of the evidence that is left, either because it’s not interesting or doesn’t fit with the story we want to tell. Expecting history to obey the standards of scientific explanation is therefore not just unrealistic, but fundamentally confused—it is, as Berlin concluded, “to ask it to contradict its essence.”17

  For much the same reason, professional historians are often at pains to emphasize the difficulty of generalizing from any one
particular context to any other. Nevertheless, because accounts of the past, once constructed, bear such a strong resemblance to the sorts of theories that we construct in science, it is tempting to treat them as if they have the same power of generalization—even for the most careful historians.18 When we try to understand why a particular book became a bestseller, in other words, we are implicitly asking a question about how books in general become bestsellers, and therefore how that experience can be repeated by other authors or publishers. When we investigate the causes of the recent housing bubble or of the terrorist attacks of September 11, we are inevitably also seeking insight that we hope we’ll be able to apply in the future—to improve our national security or the stability of our financial markets. And when we conclude from the surge in Iraq that it caused the subsequent drop in violence, we are invariably tempted to apply the same strategy again, as indeed the current administration has done in Afghanistan. No matter what we say we are doing, in other words, whenever we seek to learn about the past, we are invariably seeking to learn from it as well—an association that is implicit in the words of the philosopher George Santayana: “Those who cannot remember the past are condemned to repeat it.”19

  This confusion between stories and theories gets to the heart of the problem with using common sense as a way of understanding the world. In one breath, we speak as if all we’re trying to do is to make sense of something that has already happened. But in the next breath we’re applying the “lesson” that we think we have learned to whatever plan or policy we’re intending to implement in the future. We make this switch between storytelling and theory building so easily and instinctively that most of the time we’re not even aware that we’re doing it. But the switch overlooks that the two are fundamentally different exercises with different objectives and standards of evidence. It should not be surprising then that explanations that were chosen on the basis of their qualities as stories do a poor job of predicting future patterns or trends. Yet that is nonetheless what we use them for. Understanding the limits of what we can explain about the past ought therefore to shed light on what it is that we can predict about the future. And because prediction is so central to planning, policy, strategy, management, marketing, and all the other problems that we will discuss later, it is to prediction that we now turn.
