Everything Is Obvious


by Duncan J. Watts


  HOW COMMON SENSE FAILS US

  Without a doubt, the experience of participating in the social world greatly facilitates our ability to understand it. Were it not for the intimate knowledge of our own thought processes, along with countless observations of the words, actions, and explanations of others—both experienced in person and also learned remotely—the vast intricacies of human behavior might well be inscrutable. Nevertheless, the combination of intuition, experience, and received wisdom on which we rely to generate commonsense explanations of the social world also disguises certain errors of reasoning that are every bit as systematic and pervasive as the errors of commonsense physics. Part One of this book is devoted to exploring these errors, which fall into three broad categories.

  The first type of error is that when we think about why people do what they do, we invariably focus on factors like incentives, motivations, and beliefs, of which we are consciously aware. As sensible as it sounds, decades of research in psychology and cognitive science have shown that this view of human behavior encompasses just the tip of the proverbial iceberg. It doesn’t occur to us, for example, that the music playing in the background can influence our choice of wine in the liquor store, or that the font in which a statement is written may make it more or less believable; so we don’t factor these details into our anticipation of how people will react. But they do matter, as do many other apparently trivial or seemingly irrelevant factors. In fact, as we’ll see, it is probably impossible to anticipate everything that might be relevant to a given situation. The result is that no matter how carefully we try to put ourselves in someone else’s shoes, we are likely to make serious mistakes when predicting how they’ll behave anywhere outside of the immediate here and now.

  If the first type of commonsense error is that our mental model of individual behavior is systematically flawed, the second type is that our mental model of collective behavior is even worse. The basic problem here is that whenever people get together in groups—whether at social events, workplaces, volunteer organizations, markets, political parties, or even as entire societies—they interact with one another, sharing information, spreading rumors, passing along recommendations, comparing themselves to their friends, rewarding and punishing each other’s behaviors, learning from the experience of others, and generally influencing one another’s perspectives about what is good and bad, cheap and expensive, right and wrong. As sociologists have long argued, these influences pile up in unexpected ways, generating collective behavior that is “emergent” in the sense that it cannot be understood solely in terms of its component parts. Faced with such complexity, however, commonsense explanations instinctively fall back on the logic of individual action. Sometimes we invoke fictitious “representative individuals” like “the crowd,” “the market,” “the workers,” or “the electorate,” whose actions stand in for the actions and interactions of the many. And sometimes we single out “special people,” like leaders, visionaries, or “influencers” to whom we attribute all the agency. Regardless of which trick we use, however, the result is that our explanations of collective behavior paper over most of what is actually happening.

  The third and final type of problem with commonsense reasoning is that we learn less from history than we think we do, and that this misperception in turn skews our perception of the future. Whenever something interesting, dramatic, or terrible happens—Hush Puppies become popular again, a book by an unknown author becomes an international best seller, the housing bubble bursts, or terrorists crash planes into the World Trade Center—we instinctively look for explanations. Yet because we seek to explain these events only after the fact, our explanations place far too much emphasis on what actually happened relative to what might have happened but didn’t. Moreover, because we only try to explain events that strike us as sufficiently interesting, our explanations account only for a tiny fraction even of the things that do happen. The result is that what appear to us to be causal explanations are in fact just stories—descriptions of what happened that tell us little, if anything, about the mechanisms at work. Nevertheless, because these stories have the form of causal explanations, we treat them as if they have predictive power. In this way, we deceive ourselves into believing that we can make predictions that are impossible, even in principle.

  Commonsense reasoning, therefore, does not suffer from a single overriding limitation but rather from a combination of limitations, all of which reinforce and even disguise one another. The net result is that common sense is wonderful at making sense of the world, but not necessarily at understanding it. By analogy, in ancient times, when our ancestors were startled by lightning bolts descending from the heavens, accompanied by claps of thunder, they assuaged their fears with elaborate stories about the gods, whose all-too-human struggles were held responsible for what we now understand to be entirely natural processes. In explaining away otherwise strange and frightening phenomena in terms of stories they did understand, they were able to make sense of them, effectively creating an illusion of understanding about the world that was enough to get them out of bed in the morning. All of which is fine. But we would not say that our ancestors “understood” what was going on, in the sense of having a successful scientific theory. Indeed, we tend to regard the ancient mythologies as vaguely amusing.

  What we don’t realize, however, is that common sense often works just like mythology. By providing ready explanations for whatever particular circumstances the world throws at us, commonsense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe. The cost, however, is that we think we have understood things that in fact we have simply papered over with a plausible-sounding story. And because this illusion of understanding in turn undercuts our motivation to treat social problems the way we treat problems in medicine, engineering, and science, the unfortunate result is that common sense actually inhibits our understanding of the world. Addressing this problem is not easy, although in Part Two of the book I will offer some suggestions, along with examples of approaches that are already being tried in the worlds of business, policy, and science. The main point, though, is that just as an unquestioning belief in the correspondence between natural events and godly affairs had to give way in order for “real” explanations to be developed, so too, real explanations of the social world will require us to examine what it is about our common sense that misleads us into thinking that we know more than we do.25

  CHAPTER 2

  Thinking About Thinking

  In many countries around the world, it is common for the state to ask its citizens if they will volunteer to be organ donors. Now, organ donation is one of those issues that elicit strong feelings from many people. On the one hand, it’s an opportunity to turn one person’s loss into another person’s salvation. But on the other hand, it’s more than a little unsettling to be making plans for your organs that don’t involve you. It’s not surprising, therefore, that different people make different decisions, nor is it surprising that rates of organ donation vary considerably from country to country. It might surprise you to learn, however, how much cross-national variation there is. In a study conducted a few years ago, two psychologists, Eric Johnson and Dan Goldstein, found that rates at which citizens consented to donate their organs varied across different European countries, from as low as 4.25 percent to as high as 99.98 percent. What was even more striking about these differences is that they weren’t scattered all over the spectrum, but rather were clustered into two distinct groups—one group that had organ-donation rates in the single digits and teens, and one group that had rates in the high nineties—with almost nothing in between.1

  What could explain such a huge difference? That’s the question I put to a classroom of bright Columbia undergraduates not long after the study was published. Actually, what I asked them to consider was two anonymous countries, A and B. In country A, roughly
12 percent of citizens agree to be organ donors, while in country B 99.9 percent do. So what did they think was different about these two countries that could account for the choices of their citizens? Being smart and creative students, they came up with lots of possibilities. Perhaps one country was secular while the other was highly religious. Perhaps one had more advanced medical care, and better success rates at organ transplants, than the other. Perhaps the rate of accidental death was higher in one than another, resulting in more available organs. Or perhaps one had a highly socialist culture, emphasizing the importance of community, while the other prized the rights of individuals.

  All were good explanations. But then came the curveball. Country A was in fact Germany, and country B was … Austria. My poor students were stumped—what on earth could be so different about Germany and Austria? But they weren’t giving up yet. Maybe there was some difference in the legal or education systems that they didn’t know about? Or perhaps there had been some important event or media campaign in Austria that had galvanized support for organ donation. Was it something to do with World War II? Or maybe Austrians and Germans are more different than they seem. My students didn’t know what the reason for the difference was, but they were sure it was something big—you don’t see extreme differences like that by accident. Well, no—but you can get differences like that for reasons that you’d never expect. And for all their creativity, my students never pegged the real reason, which is actually absurdly simple: In Austria, the default choice is to be an organ donor, whereas in Germany the default is not to be. The difference in policies seems trivial—it’s just the difference between having to mail in a simple form and not having to—but it’s enough to push the donor rate from 12 percent to 99.9 percent. And what was true for Austria and Germany was true across all of Europe—all the countries with very high rates of organ donation had opt-out policies, while the countries with low rates were all opt-in.

  DECISIONS, DECISIONS

  Understanding the influence of default settings on the choices we make is important, because our beliefs about what people choose and why they choose it affect virtually all our explanations of social, economic, and political outcomes. Read the op-ed section of any newspaper, watch any pundit on TV, or listen to any late-night talk radio, and you will be bombarded with theories of why we choose this over that. And although we often decry these experts, the broader truth is that all of us—from politicians and bureaucrats, to newspaper columnists, to corporate executives and ordinary citizens—are equally willing to espouse our own theory of human choice. Indeed, virtually every argument of social consequence—whether about politics, economic policy, taxes, education, healthcare, free markets, global warming, energy policy, foreign policy, immigration policy, sexual behavior, the death penalty, abortion rights, or consumer demand—is either explicitly or implicitly an argument about why people make the choices they make. And, of course, how they can be encouraged, educated, legislated, or coerced into making different ones.

  Given the ubiquity of choice in the world and its relevance to virtually every aspect of life—from everyday decisions to the grand events of history—it should come as little surprise that theories about how people make choices are also central to most of the social sciences. Commenting on an early paper by the Nobel laureate Gary Becker, the economist James Duesenberry famously quipped that “economics is all about choice, while sociology is about why people have no choices.”2 But the truth is that sociologists are every bit as interested in how people make choices as economists are—not to mention political scientists, anthropologists, psychologists, and legal, business, and management scholars. Nevertheless, Duesenberry had a point in that for much of the last century, social and behavioral scientists of different stripes have tended to view the matter of choice in strikingly different ways. More than anything, they have differed, sometimes acrimoniously, over the nature and importance of human rationality.

  COMMON SENSE AND RATIONALITY

  To many sociologists, the phrase “rational choice” evokes the image of a cold, calculating individual who cares only for himself and who relentlessly seeks to maximize his economic well-being. Nor is this reaction entirely unjustified. For many years, economists seeking to understand market behavior invoked something like this notion of rationality—sometimes referred to as “homo economicus”—in large part because it lends itself naturally to mathematical models that are simple enough to be written down and solved. And yet, as countless examples like the ultimatum game from the previous chapter show, real people care not only about their own welfare, economic or otherwise, but also about the welfare of others, for whom they will often make considerable sacrifices. We also care about upholding social norms and conventions, and frequently punish others who violate them—even when doing so is costly.3 And finally, we often care about intangible benefits, like our reputation, belonging to a group, and “doing the right thing,” sometimes as much as or even more than we care about wealth, comfort, and worldly possessions.

  Critics of homo economicus have raised all these objections, and many more, over the years. In response, advocates of what is often called rational choice theory have expanded the scope of what is considered rational behavior dramatically to include not just self-interested economic behavior, but also more realistic social and political behavior as well.4 These days, in fact, rational choice theory is not so much a single theory at all as it is a family of theories that make often rather different assumptions depending on the application in question. Nevertheless, all such theories tend to include variations on two fundamental insights—first, that people have preferences for some outcomes over others; and second, that given these preferences they select among the means available to them as best they can to realize the outcomes that they prefer. To take a simple example, if my preference for ice cream exceeds my preference for the money I have in my pocket, and there is an available course of action that allows me to exchange my money for the ice cream, then that’s what I’ll choose to do. But if, for example, the weather is cold, or the ice cream is expensive, my preferred course of action may instead be to keep the money for a sunnier day. Similarly, if buying the ice cream requires a lengthy detour, my preference to get where I am going may also cause me to wait for another time. Regardless of what I end up choosing—the money, the ice cream, the walk followed by the ice cream, or some other alternative—I am always doing what is “best” for me, given the preferences I have at the time I make the decision.

  What is so appealing about this way of thinking is its implication that all human behavior can be understood in terms of individuals’ attempts to satisfy their preferences. I watch TV shows because I enjoy the experience enough to devote the time to them rather than doing something else. I vote because I care about participating in politics, and when I vote, I choose the candidate I think will best serve my interests. I apply to the colleges that I think I can get into, and of those I get accepted to, I attend the one that offers the best combination of status, financial aid, and student life. When I get there, I study what is most interesting to me, and when I graduate, I take the best job I can get. I make friends with people I like, and keep those friends whose company I continue to enjoy. I get married when the benefits of stability and security outweigh the excitement of dating. We have children when the benefits of a family (the joy of having children who we can love unconditionally, as well as having someone to care for us in our old age) outweigh the costs of increased responsibility, diminished freedom, and extra mouths to feed.5

  In Freakonomics, Steven Levitt and Stephen Dubner illustrate the explanatory power of rational choice theory in a series of stories about initially puzzling behavior that, upon closer examination, turns out to be perfectly rational. You might think, for example, that because your real estate agent works on commission, she will try to get you the highest price possible for your house. But as it turns out, real estate agents keep their own houses on the market longer, and sell them for higher prices, than
the houses of their clients. Why? Because when it’s your house they’re selling, they receive only a small percentage of any increase in the price, whereas when it’s their own house, they keep the whole difference. The latter is enough money to hold out for, but the former isn’t. Once you understand the incentives that real estate agents face, in other words, their true preferences, and hence their actions, become instantly clear.

  Likewise, it might at first surprise you to learn that parents at an Israeli day school, when fined for picking up their children late, actually arrived late more often than they did before any fine was imposed. But once you understand that the fine assuaged the pangs of guilt they were feeling at inconveniencing the school staff—essentially, they felt they were paying for the right to be late—it makes perfect sense. So does the initially surprising observation that most gang members live with their mothers. Once you do the math, it turns out that gang members don’t make nearly as much money as you would think; thus it makes perfect economic sense for them to live at home. Similarly, one can explain the troubling behavior of a number of high-school teachers who, in response to the new accountability standards introduced by the Bush Administration’s 2002 No Child Left Behind legislation, actually altered the test responses of their students. Even though cheating could cost them their jobs, the risk of getting caught seemed small enough that the cost of being stuck with a low-performing class outweighed the potential for being punished for cheating.6

 
