Everything Is Obvious


by Duncan J. Watts


  CHAPTER 6

  The Dream of Prediction

  Humans love to make predictions—whether about the movements of the stars, the gyrations of the stock market, or the upcoming season’s hot color. Pick up the newspaper on any given day and you’ll immediately encounter a mass of predictions—so many, in fact, that you probably don’t even notice them. To illustrate the point, let’s consider a single news story chosen more or less at random from the front page of the New York Times. The story, which was published in the summer of 2009, was about trends in retail sales and contained no fewer than ten predictions about the upcoming back-to-school season. For example, according to one source cited in the article—an industry group called the National Retail Federation—the average family with school-age children was predicted to spend “nearly 8 percent less this year than last,” while according to the research firm ShopperTrak, customer traffic in stores was predicted to be down 10 percent. Finally, an expert who was identified as president of Customer Growth Partners, a retailing consultant firm, was quoted as claiming that the season was “going to be the worst back-to-school season in many, many years.”1

  All three predictions were made by authoritative-sounding sources and were explicit enough to have been scored for accuracy. But how accurate were they? To be honest, I have no idea. The New York Times doesn’t publish statistics on the accuracy of the predictions made in its pages, nor do most of the research companies that provide them. One of the strange things about predictions, in fact, is that our eagerness to make pronouncements about the future is matched only by our reluctance to be held accountable for the predictions we make. In the mid-1980s, the psychologist Philip Tetlock noticed exactly this pattern among political experts of the day. Determined to make them put their proverbial money where their mouths were, Tetlock designed a remarkable test that was to unfold over twenty years. To begin with, he convinced 284 political experts to make nearly a hundred predictions each about a variety of possible future events, ranging from the outcomes of specific elections to the likelihood that two nations would engage in armed conflict with each other. For each of these predictions, Tetlock insisted that the experts specify which of two outcomes they expected and also assign a probability to their prediction. He did this in such a way that confident predictions scored more points when correct but also lost more points when mistaken. With those predictions in hand, he then sat back and waited for the events themselves to play out. Twenty years later, he published his results, and what he found was striking: Although the experts performed slightly better than random guessing, they did not perform as well as even a minimally sophisticated statistical model. Even more surprisingly, the experts did slightly better when operating outside their area of expertise than within it.2
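The property described here, that confident predictions gain more when correct and lose more when wrong, is exactly what a quadratic scoring rule (the Brier score) delivers. The sketch below illustrates that property; it is an illustration of the idea, not a reconstruction of Tetlock’s exact scoring scheme:

```python
def brier_score(forecast_prob, outcome):
    """Quadratic penalty: 0 is a perfect forecast, 1 is maximally wrong.

    forecast_prob: probability the forecaster assigned to the event.
    outcome: 1 if the event occurred, 0 otherwise.
    """
    return (forecast_prob - outcome) ** 2

# A confident forecast is penalized less when right, and more when wrong,
# than a hedged one:
print(brier_score(0.9, 1), brier_score(0.6, 1))  # 0.01 vs 0.16 (event happened)
print(brier_score(0.9, 0), brier_score(0.6, 0))  # 0.81 vs 0.36 (event did not)
```

Under this rule there is no way to game the test: the score is minimized, on average, by reporting the probability you actually believe.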

  Tetlock’s results are often interpreted as demonstrating the fatuousness of so-called experts, and no doubt there’s some truth to that. But although experts are probably no better than the rest of us at making predictions, they are also probably no worse. When I was young, for example, many people believed that the future would be filled with flying cars, orbiting space cities, and endless free time. Instead, we drive internal combustion cars on crumbling, congested freeways, endure endless cuts in airline service, and work more hours than ever. Meanwhile, Web search, mobile phones, and online shopping—the technologies that have, in fact, affected our lives—came more or less out of nowhere. Around the same time that Tetlock was beginning his experiment, in fact, a management scientist named Steven Schnaars tried to quantify the accuracy of technology-trend predictions by combing through a large collection of books, magazines, and industry reports, and recording hundreds of predictions that had been made during the 1970s. He concluded that roughly 80 percent of all predictions were wrong, whether they were made by experts or not.3

  Nor is it just forecasters of long-term social and technology trends that have lousy records. Publishers, producers, and marketers—experienced and motivated professionals in business with plenty of skin in the game—have just as much difficulty predicting which books, movies, and products will become the next big hit as political experts have in predicting the next revolution. In fact, the history of cultural markets is crowded with examples of future blockbusters—Elvis, Star Wars, Seinfeld, Harry Potter, American Idol—that publishers and movie studios left for dead while simultaneously betting big on total failures.4 And whether we consider the most spectacular business meltdowns of recent times—Long-Term Capital Management in 1998, Enron in 2001, WorldCom in 2002, the near-collapse of the entire financial system in 2008—or spectacular success stories like the rise of Google and Facebook, what is perhaps most striking about them is that virtually nobody seems to have had any idea what was about to happen. In September 2008, for example, even as Lehman Brothers’ collapse was imminent, Treasury and Federal Reserve officials—who arguably had the best information available to anyone in the world—failed to anticipate the devastating freeze in global credit markets that followed. Conversely, in the late 1990s the founders of Google, Sergey Brin and Larry Page, tried to sell their company for $1.6 million. Fortunately for them, nobody was interested, because Google went on to attain a market value of over $160 billion, or about 100,000 times what they and everybody else apparently thought it was worth only a few years earlier.5

  Results like these seem to show that humans are simply bad at making predictions, but in fact that’s not quite right either. In reality there are all sorts of predictions that we could make very well if we chose to. I would bet, for example, that I could do a pretty good job of forecasting the weather in Santa Fe, New Mexico—in fact, I bet I would be correct more than 80 percent of the time. As impressive as that sounds compared to the lousy record of Tetlock’s experts, however, my ability to predict the weather in Santa Fe is not going to land me a job at the Weather Bureau. The problem is that in Santa Fe it is sunny roughly 300 days a year, so one can be right 300 days out of 365 simply by making the mindless prediction that “tomorrow it will be sunny.” Likewise, predictions that the United States will not go to war with Canada in the next decade or that the sun will continue to rise in the east are also likely to be accurate, but impress no one. The real problem of prediction, in other words, is not that we are universally good or bad at it, but rather that we are bad at distinguishing predictions that we can make reliably from those that we can’t.
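The Santa Fe arithmetic can be checked directly, using the figures given in the text:

```python
# Illustrative numbers from the text: roughly 300 sunny days out of 365 in Santa Fe.
sunny_days, total_days = 300, 365

# The mindless rule "tomorrow it will be sunny" is right on every sunny day,
# so its accuracy is simply the base rate of sunshine:
base_rate_accuracy = sunny_days / total_days
print(f"{base_rate_accuracy:.0%}")  # 82%
```

An accuracy of 82 percent sounds impressive next to Tetlock’s experts, yet it requires no skill at all, which is exactly why raw accuracy is a poor way to compare predictions across problems with different base rates.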

  LAPLACE’S DEMON

  In a way this problem goes all the way back to Newton. Starting from his three laws of motion, along with his universal law of gravitation, Newton was able to derive not only Kepler’s laws of planetary motion but also the timing of the tides, the trajectories of projectiles, and a truly astonishing array of other natural phenomena. It was a singular scientific accomplishment, but it also set an expectation for what could be accomplished by mathematical laws that would prove difficult to match. The movements of the planets, the timing of the tides—these are amazing things to be able to predict. But aside from maybe the vibrations of electrons or the time required for light to travel a certain distance, they are also about the most predictable phenomena in all of nature. And yet, because predicting these movements was among the first problems that scientists and mathematicians set their sights on, and because they met with such stunning success, it was tempting to conclude that everything worked that way. As Newton himself wrote:

  If only we could derive the other phenomena of nature from mechanical principles by the same kind of reasoning! For many things lead me to have a suspicion that all phenomena may depend on certain forces by which particles of bodies, by causes not yet known, either are impelled toward one another and cohere in regular figures, or are repelled from one another and recede.6

  A century later, the French mathematician and astronomer Pierre-Simon Laplace pushed Newton’s vision to its logical extreme, claiming in effect that Newtonian mechanics had reduced the prediction of the future—even the future of the universe—to a matter of mere computation. Laplace envisioned an “intellect” that knew all the forces that “set nature in motion, and all positions of all items of which nature is composed.” Laplace went on, “for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”7

  The “intellect” of Laplace’s imagination eventually received a name—“Laplace’s demon”—and it has been lurking around the edges of mankind’s view of the future ever since. For philosophers, the demon was controversial because in reducing the prediction of the future to a mechanical exercise, it seemed to rob humanity of free will. As it turned out, though, they needn’t have worried too much. Starting with the second law of thermodynamics, and continuing through quantum mechanics and finally chaos theory, Laplace’s idea of a clockwork universe—and with it the concerns about free will—has been receding for more than a century now. But that doesn’t mean the demon has gone away. In spite of the controversy over free will, there was something incredibly appealing about the notion that the laws of nature, applied to the appropriate data, could be used to predict the future. People of course had been making predictions about the future since the beginnings of civilization, but what was different about Laplace’s boast was that it wasn’t based on any claim to magical powers, or even special insight, that he possessed himself. Rather it depended only on the existence of scientific laws that in principle anyone could master. Thus prediction, once the realm of oracles and mystics, was brought within the objective, rational sphere of modern science.

  In doing so, however, the demon obscured a critical difference between two different sorts of processes, which for the sake of argument I’ll call simple and complex.8 Simple systems are those for which a model can capture all or most of the variation in what we observe. The oscillations of pendulums and the orbits of satellites are therefore “simple” in this sense, even though it’s not necessarily a simple matter to be able to model and predict them. Somewhat paradoxically, in fact, the most complicated models in science—models that predict the trajectories of interplanetary space probes, or pinpoint the location of GPS devices—often describe relatively simple processes. The basic equations of motion governing the orbit of a communications satellite or the lift on an aircraft wing can be taught to a high-school physics student. But because the difference in performance between a good model and a slightly better one can be critical, the actual models used by engineers to build satellite GPS systems and 747s need to account for all sorts of tiny corrections, and so end up being far more complicated. When the NASA Mars Climate Orbiter burned up and disintegrated in the Martian atmosphere in 1999, for example, the mishap was traced to a simple programming error (imperial units were used instead of metric) that put the probe into an orbit of about 60 kilometers instead of 140 kilometers from Mars’s surface. When you consider that in order to get to Mars, the orbiter first had to traverse more than 50 million kilometers, the magnitude of the error seems trivial. Yet it was the difference between a triumphant success for NASA and an embarrassing failure.

  Complex systems are another animal entirely. Nobody really agrees on what makes a complex system “complex,” but it’s generally accepted that complexity arises out of many interdependent components interacting in nonlinear ways. The U.S. economy, for example, is the product of the individual actions of millions of people, as well as hundreds of thousands of firms, thousands of government agencies, and countless other external and internal factors, ranging from the weather in Texas to interest rates in China. Modeling the trajectory of the economy is therefore not like modeling the trajectory of a rocket. In complex systems, tiny disturbances in one part of the system can get amplified to produce large effects somewhere else—the “butterfly effect” from chaos theory that came up in the earlier discussion of cumulative advantage and unpredictability. When every tiny factor in a complex system can get potentially amplified in unpredictable ways, there is only so much that a model can predict. As a result, models of complex systems tend to be rather simple—not because simple models perform well, but because incremental improvements make little difference in the face of the massive errors that remain. Economists, for example, can only dream of modeling the economy with the same kind of accuracy that led to the destruction of the Mars Climate Orbiter. The problem, however, is not so much that their models are bad as that all models of complex systems are bad.9

  The fatal flaw in Laplace’s vision, therefore, is that his demon works only for simple systems. Yet pretty much everything in the social world—from the effect of a marketing campaign to the consequences of some economic policy or the outcome of a corporate plan—falls into the category of complex systems. Whenever people get together—in social gatherings, sports crowds, business firms, volunteer organizations, markets, political parties, or even entire societies—they affect one another’s thinking and behavior. As I discussed in Chapter 3, it is these interactions that make social systems “social” in the first place—because they cause a collection of people to be something other than just a collection of people. But in the process they also produce tremendous complexity.

  THE FUTURE IS NOT LIKE THE PAST

  The ubiquity of complex systems in the social world is important because it severely restricts the kinds of predictions we can make. In simple systems, that is, it is possible to predict with high probability what will actually happen—for example when Halley’s Comet will next return or what orbit a particular satellite will enter. For complex systems, by contrast, the best that we can hope for is to correctly predict the probability that something will happen.10 At first glance, these two exercises sound similar, but they’re fundamentally different. To see how, imagine that you’re calling the toss of a coin. Because it’s a random event, the best you can do is predict that it will come up heads, on average, half the time. A rule that says “over the long run, 50 percent of coin tosses will be heads, and 50 percent will be tails” is, in fact, perfectly accurate in the sense that heads and tails do, on average, show up exactly half the time. But even knowing this rule, we still can’t correctly predict the outcome of a single coin toss any more than 50 percent of the time, no matter what strategy we adopt.11 Complex systems are not really random in the same way that a coin toss is random, but in practice it’s extremely difficult to tell the difference. As the Music Lab experiment demonstrated earlier, you could know everything about every person in the market—you could ask them a thousand survey questions, follow them around to see what they do, and put them in brain scanners while they’re doing it—and still the best you could do would be to predict the probability that a particular song will be the winner in any particular virtual world. Some songs were more likely to win on average than others, but in any given world the interactions between individuals magnified tiny random fluctuations to produce unpredictable outcomes.

  To understand why this kind of unpredictability is problematic, consider another example of a complex system about which we like to make predictions—namely, the weather. At least in the very near future—which generally means the next forty-eight hours—weather predictions are actually pretty accurate, or as forecasters call it, “reliable.” That is, of the days when the weather service says there is a 60 percent chance of rain, it does, in fact, rain on about 60 percent of them.12 So why is it that people complain about the accuracy of weather forecasts? The reason is not that they aren’t reliable—although possibly they could be more reliable than they are—but rather that reliability isn’t the kind of accuracy that we want. We don’t want to know what is going to happen 60 percent of the time on days like tomorrow. Rather, we want to know what is actually going to happen tomorrow—and tomorrow, it will either rain or it will not. So when we hear “60 percent chance of rain tomorrow,” it’s natural to interpret the information as the weather service telling us that it’s probably going to rain tomorrow. And when it fails to rain almost half the times we listen to them and take an umbrella to work, we conclude that they don’t know what they’re talking about.
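The reliability check described here, comparing each stated probability with how often it actually rained on the days given that forecast, can be sketched as follows. The forecast history is hypothetical, invented for illustration:

```python
from collections import defaultdict

def reliability_table(forecasts):
    """Given (stated_probability, it_rained) pairs, return the observed
    frequency of rain for each stated probability. For a reliable
    forecaster, each key should roughly match its value."""
    buckets = defaultdict(list)
    for stated_prob, rained in forecasts:
        buckets[stated_prob].append(rained)
    return {p: sum(days) / len(days) for p, days in sorted(buckets.items())}

# Hypothetical record: on days called "60% chance of rain" it rained 6 of 10
# times; on "20%" days, 2 of 10.
history = [(0.6, 1)] * 6 + [(0.6, 0)] * 4 + [(0.2, 1)] * 2 + [(0.2, 0)] * 8
print(reliability_table(history))  # {0.2: 0.2, 0.6: 0.6} -- perfectly reliable
```

Note that this forecaster is perfectly reliable and yet still fails to tell you what will happen tomorrow, which is the complaint the paragraph above describes.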

  Thinking of future events in terms of probabilities is difficult enough for even coin tossing or weather forecasting, where more or less the same kind of thing is happening over and over again. But for events that happen only once in a lifetime, like the outbreak of a war, the election of a president, or even which college you get accepted to, the distinction becomes almost impossible to grasp. What does it mean, for example, to have said the day before Barack Obama’s victory in the 2008 presidential election that he had a 90 percent chance of winning? That he would have won nine out of ten attempts? Clearly not, as there will only ever be one election, and any attempt to repeat it—say in the next election—will not be comparable in the way that consecutive coin tosses are. So does it instead translate to the odds one ought to take in a gamble? That is, to win $10 if he is elected, I will have to bet $9, whereas if he loses, you can win $10 by betting only $1? But how are we to determine what the “correct” odds are, seeing as this gamble will only ever be resolved once? If the answer isn’t clear to you, you’re not alone—even mathematicians argue about what it means to assign a probability to a single event.13 So if even they have trouble wrapping their heads around the meaning of the statement that “the probability of rain tomorrow is 60 percent,” then it’s no surprise that the rest of us do as well.
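The betting interpretation in this paragraph amounts to converting a probability into fair stakes on each side of a wager. A minimal sketch, where the function name and the payout convention (the $10 includes the stake) are illustrative assumptions, not anything from the text:

```python
def fair_stakes(prob, payout=10.0):
    """Stake each side must put up so that a bet returning `payout`
    dollars (stake included) has zero expected value."""
    return {"for": round(prob * payout, 2),
            "against": round((1 - prob) * payout, 2)}

# The text's example: a 90 percent chance of an Obama win means betting
# $9 to win $10 that he wins, or $1 to win $10 that he loses.
print(fair_stakes(0.9))  # {'for': 9.0, 'against': 1.0}
```

The arithmetic is mechanical; the hard question the paragraph raises is untouched by it, namely where the number 0.9 comes from when the bet can only ever be settled once.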

 
