Everything Is Obvious


by Duncan J. Watts




  Copyright © 2011 by Duncan Watts

  All rights reserved.

  Published in the United States by Crown Business,

  an imprint of the Crown Publishing Group,

  a division of Random House, Inc., New York.

  www.crownpublishing.com

  Crown Business is a trademark and the Rising Sun colophon is a registered trademark of Random House, Inc.

  Library of Congress Cataloging-in-Publication Data

  Watts, Duncan J., 1971–

  Everything is obvious : once you know the answer /

  Duncan Watts. — 1st ed.

  p. cm.

  1. Thought and thinking. 2. Common sense. 3. Reasoning.

  I. Title.

  BF441.W347 2011

  153.4–dc22

  2010031550

  eISBN: 978-0-385-53169-6

  Jacket design by Laura Duffy

  Jacket Photographs © George Diebold Photography / Getty Images

  v3.1

  For Jack and Lily

  CONTENTS

  Cover

  Title Page

  Copyright

  Dedication

  PREFACE A Sociologist’s Apology

  PART I COMMON SENSE

  CHAPTER 1 The Myth of Common Sense

  CHAPTER 2 Thinking About Thinking

  CHAPTER 3 The Wisdom (and Madness) of Crowds

  CHAPTER 4 Special People

  CHAPTER 5 History, The Fickle Teacher

  CHAPTER 6 The Dream of Prediction

  PART II UNCOMMON SENSE

  CHAPTER 7 The Best-Laid Plans

  CHAPTER 8 The Measure of All Things

  CHAPTER 9 Fairness and Justice

  CHAPTER 10 The Proper Study of Mankind

  ACKNOWLEDGMENTS

  BIBLIOGRAPHY

  NOTES

  About the Author

  PREFACE

  A Sociologist’s Apology

  In January 1998, about halfway through my first year out of graduate school, my housemate at the time handed me a copy of New Scientist magazine containing a book review by the physicist and science writer John Gribbin. The book Gribbin was reviewing was called Tricks of the Trade, by the Chicago sociologist Howard Becker, and was mostly a collection of Becker’s musings on how to do productive social science research. Gribbin clearly hated it, judging Becker’s insights to be the kind of self-evident checks that “real scientists learn in the cradle.” But he didn’t stop there. He went on to note that the book had merely reinforced his opinion that all of social science was “something of an oxymoron” and that “any physicist threatened by cuts in funding ought to consider a career in the social sciences, where it ought to be possible to solve the problems the social scientists are worked up about in a trice.”1

  There was a reason my roommate had given me this particular review to read and why that particular line stuck in my head. I had majored in physics at college, and at the time when I read Gribbin’s review I had just finished my PhD in engineering; I had written my dissertation on the mathematics of what are now called small-world networks.2 But although my training had been in physics and mathematics, my interests had turned increasingly toward the social sciences and I was just beginning what turned out to be a career in sociology. So I felt that in a sense I was embarking on a miniature version of Gribbin’s proposed experiment. And to be honest, I might have suspected that he was right.

  Twelve years later, however, I think I can say that the problems sociologists, economists, and other social scientists are “worked up about” are not going to be solved in a trice, by me or even by a legion of physicists. I say this because since the late 1990s many hundreds, if not thousands, of physicists, computer scientists, mathematicians, and other “hard” scientists have taken an increasing interest in questions that have traditionally been the province of the social sciences—questions about the structure of social networks, the dynamics of group formation, the spread of information and influence, or the evolution of cities and markets. Whole fields have arisen over the past decade with ambitious names like “network science” and “econophysics.” Datasets of immense proportions have been analyzed, countless new theoretical models have been proposed, and thousands of papers have been published, many of them in the world’s leading science journals, such as Science, Nature, and Physical Review Letters. Entire new funding programs have come into existence to support these new research directions. Conferences on topics such as “computational social science” increasingly provide forums for scientists to interact across old disciplinary boundaries. And yes, many new jobs have appeared that offer young physicists the chance to explore problems that once would have been deemed beneath them.

  The sum total of this activity has far exceeded the level of effort that Gribbin’s offhand remark implied was required. So what have we learned about those problems that social scientists were so worked up about back in 1998? What do we really know about the nature of deviant behavior or the origins of social practices or the forces that shift cultural norms—the kinds of problems that Becker talks about in his book—that we didn’t know then? What new solutions has this new science provided to real-world problems, like helping relief agencies respond more effectively to humanitarian disasters in places like Haiti or New Orleans, or helping law enforcement agencies stop terrorist attacks, or helping financial regulatory agencies police Wall Street and reduce systemic risk? And for all the thousands of papers that have been published by physicists in the past decade, how much closer are we to answering the really big questions of social science, like the economic development of nations, the globalization of the economy, or the relationship between immigration, inequality, and intolerance? Pick up the newspaper and judge for yourself, but I would say not much.3

  If there’s a lesson here, you might think it would be that the problems of social science are hard not just for social scientists, but for physicists as well. But this lesson, it seems, has not been learned. Quite to the contrary, in fact, in 2006 Senator Kay Bailey Hutchison, a Republican from Texas, proposed that Congress cut the entire social and behavioral sciences budget of the National Science Foundation. Bailey Hutchison, it should be noted, is not antiscience—in 2005 she proposed doubling funds for medical science. Rather, it was exclusively social science research that she felt “is not where we should be directing [NSF] resources at this time.” Ultimately the proposal was defeated, but one might still wonder what the good senator was thinking. Presumably she doesn’t think that social problems are unimportant—surely no one would argue that immigration, economic development, and inequality are problems that are somehow unworthy of attention. Rather it appears that, like Gribbin, she doesn’t consider social problems to be scientific problems, worthy of the prolonged attention of serious scientists. Or as Hutchison’s colleague from Oklahoma, Senator Tom Coburn, put it three years later in a similar proposal, “Theories on political behavior are best left to CNN, pollsters, pundits, historians, candidates, political parties, and the voters.”4

  Senators Hutchison and Coburn are not alone in their skepticism of what social science has to offer. Since becoming a sociologist, I have frequently been asked by curious outsiders what sociology has to say about the world that an intelligent person couldn’t have figured out on their own. It’s a reasonable question, but as the sociologist Paul Lazarsfeld pointed out nearly sixty years ago, it also reveals a common misconception about the nature of social science. Lazarsfeld was writing about The American Soldier, a then-recently published study of more than 600,000 servicemen that had been conducted by the research branch of the war department during and immediately after the Second World War. To make his point, Lazarsfeld listed six findings from the study that he claimed were representative of the report. For example, number two was that “Men from rural backgrounds were usually in better spirits during their Army life than soldiers from city backgrounds.” “Aha,” says Lazarsfeld’s imagined reader, “that makes perfect sense. Rural men in the 1940s were accustomed to harsher living standards and more physical labor than city men, so naturally they had an easier time adjusting. Why did we need such a vast and expensive study to tell me what I could have figured out on my own?”

  Why indeed.… But Lazarsfeld then reveals that all six of the “findings” were in fact the exact opposite of what the study actually found. It was city men, not rural men, who were happier during their Army life. Of course, had the reader been told the real answers in the first place she could just as easily have reconciled them with other things that she already thought she knew: “City men are more used to working in crowded conditions and in corporations, with chains of command, strict standards of clothing and social etiquette, and so on. That’s obvious!” But that’s exactly the point that Lazarsfeld was making. When every answer and its opposite appears equally obvious, then, as Lazarsfeld put it, “something is wrong with the entire argument of ‘obviousness.’ ”5

  Lazarsfeld was talking about social science, but what I will argue in this book is that his point is equally relevant to any activity—whether politics, business, marketing, philanthropy—that involves understanding, predicting, changing, or responding to the behavior of people. Politicians trying to decide how to deal with urban poverty already feel that they have a pretty good idea why people are poor. Marketers planning an advertising campaign already feel that they have a decent sense of what consumers want and how to make them want more of it. And policy makers designing new schemes to drive down healthcare costs or to improve teaching quality in public schools or to reduce smoking or to improve energy conservation already feel that they can do a reasonable job of getting the incentives right. Typically people in these positions do not expect to get everything right all the time. But they also feel that the problems they are contemplating are mostly within their ability to solve—that “it’s not rocket science,” as it were.6 Well, I’m no rocket scientist, and I have immense respect for the people who can land a machine the size of a small car on another planet. But the sad fact is that we’re actually much better at planning the flight path of an interplanetary rocket than we are at managing the economy, merging two corporations, or even predicting how many copies of a book will sell. So why is it that rocket science seems hard, whereas problems having to do with people—which arguably are much harder—seem like they ought to be just a matter of common sense? In this book, I argue that the key to the paradox is common sense itself.

  Criticizing common sense, it must be said, is a tricky business, if only because it’s almost universally regarded as a good thing—when was the last time you were told not to use it? Well, I’m going to tell you that a lot. As we’ll see, common sense is indeed exquisitely adapted to handling the kind of complexity that arises in everyday situations. And for those situations, it’s every bit as good as advertised. But “situations” involving corporations, cultures, markets, nation-states, and global institutions exhibit a very different kind of complexity from everyday situations. And under these circumstances, common sense turns out to suffer from a number of errors that systematically mislead us. Yet because of the way we learn from experience—even experiences that are never repeated, or that take place in other times and places—the failings of commonsense reasoning are rarely apparent to us. Rather, they manifest themselves to us simply as “things we didn’t know at the time” but which seem obvious in hindsight. The paradox of common sense, therefore, is that even as it helps us make sense of the world, it can actively undermine our ability to understand it. If you don’t quite understand what that last sentence means, that’s OK, because explaining it, along with its implications for policy, planning, forecasting, business strategy, marketing, and social science is what the rest of this book is about.

  Before I start, though, I would like to make one related point: that in talking with friends and colleagues about this book, I’ve noticed an interesting pattern. When I describe the argument in the abstract—that the way we make sense of the world can actually prevent us from understanding it—they nod their heads in vigorous agreement. “Yes,” they say, “I’ve always thought that people believe all sorts of silly things in order to make themselves feel like they understand things that in fact they don’t understand at all.” Yet when the very same argument calls into question some particular belief of their own, they invariably change their tune. “Everything you are saying about the pitfalls of common sense and intuition may be right,” they are in effect saying, “but it doesn’t undermine my own confidence in the particular beliefs I happen to hold.” It’s as if the failure of commonsense reasoning is only the failure of other people’s reasoning, not their own.

  People, of course, make this sort of error all the time. Around 90 percent of Americans believe they are better-than-average drivers, and a similarly impossible number of people claim that they are happier, more popular, or more likely to succeed than the average person. In one study, an incredible 25 percent of respondents rated themselves in the top 1 percent in terms of leadership ability.7 This “illusory superiority” effect is so common and so well known that it even has a colloquial catchphrase—the Lake Wobegon effect, named for A Prairie Home Companion host Garrison Keillor’s fictitious town where “all the children are above average.” It’s probably not surprising, therefore, that people are much more willing to believe that others have misguided beliefs about the world than that their own beliefs are misguided. Nevertheless, the uncomfortable reality is that what applies to “everyone” necessarily applies to us, too. That is, the fallacies embedded in our everyday thinking and explanations, which I will be discussing in more detail later, must apply to many of our own, possibly deeply held, beliefs.

  None of this is to say that we should abandon all our beliefs and start over from scratch—only that we should hold them up to a spotlight and regard them with suspicion. For example, I do think that I’m an above-average driver—even though I know that statistically speaking, nearly half the people who think the same thing as I do are wrong. I just can’t help it. Knowing this, however, I can at least consider the possibility that I might be deluding myself, and so try to pay attention to when I make mistakes as well as when others do. Possibly I can begin to accept that not every altercation is necessarily the other guy’s fault, even if I’m still inclined to think it is. And perhaps I can learn from these experiences to determine what I should do differently as well as what others should be doing differently. Even after doing all this, I can’t be sure that I’m a better-than-average driver. But I can at least become a better driver.

  In the same way, when we challenge our assumptions about the world—or even more important, when we realize we’re making an assumption that we didn’t even know we were making—we may or may not change our views. But even if we don’t, the exercise of challenging them should at least force us to notice our own stubbornness, which in turn should give us pause. Questioning our own beliefs in this way isn’t easy, but it is the first step in forming new, hopefully more accurate, beliefs. Because the chances that we’re already correct in everything we believe are essentially zero. In fact, the argument that Howard Becker was really making in the book that I read about all those years ago—an argument that was obviously lost on his reviewer, and at the time would have been lost on me, too—was that learning to think like a sociologist means learning to question precisely your instincts about how things work, and possibly to unlearn them altogether. So if reading this book only confirms what you already thought you knew about the world, then I apologize. As a sociologist, I will not have done my job.

  PART ONE

  COMMON SENSE

  CHAPTER 1

  The Myth of Common Sense

  Every day in New York City five million people ride the subways. Starting from their homes throughout the boroughs of Manhattan, Brooklyn, Queens, and the Bronx, they pour themselves in through hundreds of stations, pack themselves into thousands of cars that barrel through the dark labyrinth of the Metropolitan Transportation Authority’s tunnel system, and then once again flood the platforms and stairwells—a subterranean river of humanity urgently seeking the nearest exit and the open air beyond. As anyone who has ever participated in this daily ritual can attest, the New York subway system is something between a miracle and a nightmare, a Rube Goldberg contraption of machines, concrete, and people that in spite of innumerable breakdowns, inexplicable delays, and indecipherable public announcements, more or less gets everyone where they’re going, but not without exacting a certain amount of wear and tear on their psyche. Rush hour in particular verges on a citywide mosh pit—of tired workers, frazzled mothers, and shouting, shoving teenagers, all scrabbling over finite increments of space, time, and oxygen. It’s not the kind of place you go in search of the milk of human kindness. It’s not the kind of place where you’d expect a perfectly healthy, physically able young man to walk up to you and ask you for your seat.

  And yet that’s precisely what happened one day in the early 1970s when a group of psychology students went out into the subway system on the suggestion of their teacher, the social psychologist Stanley Milgram. Milgram was already famous for his controversial “obedience” studies, conducted some years earlier at Yale, in which he had shown that ordinary people brought into a lab would apply what they thought were deadly electrical shocks to a human subject (really an actor who was pretending to be shocked) simply because they were told to do so by a white-coated researcher who claimed to be running an experiment on learning. The finding that otherwise respectable citizens could, under relatively unexceptional circumstances, perform what seemed like morally incomprehensible acts was deeply disturbing to many people—and the phrase “obedience to authority” has carried a negative connotation ever since.1

 
