Everything Is Obvious


by Duncan J. Watts


  But let’s say that none of these attempts is effective. Perhaps the brand in question is just not appealing to particular demographics, or perhaps those people don’t respond to online advertising. Even in that event, however, the advertiser can at least stop wasting money advertising to them, freeing more resources to focus on the population that might actually be swayed. Regardless, the only way to improve one’s marketing effectiveness over time is to first know what is working and what isn’t. Advertising experiments, therefore, should not be viewed as a one-off exercise that either yields “the answer” or doesn’t, but rather as part of an ongoing learning process that is built into all advertising.21

  A small but growing community of researchers is now arguing that the same mentality should be applied not just to advertising but to all manner of business and policy planning, both online and off. In a recent article in MIT Sloan Management Review, for example, MIT professors Erik Brynjolfsson and Michael Schrage argue that new technologies for tracking inventory, sales, and other business parameters are bringing about a new era of controlled experiments in business, whether the experiment concerns the layout of links on a search page, the arrangement of products on a store shelf, or the details of a special direct mail offer. Brynjolfsson and Schrage even quote Gary Loveman, the chief executive of the casino company Harrah’s, as saying, “There are two ways to get fired from Harrah’s: stealing from the company, or failing to include a proper control group in your business experiment.” You might find it disturbing that casino operators are ahead of the curve in terms of science-based business practice, but the mind-set of routinely including experimental controls is one from which other businesses could clearly benefit.22

  Field experiments are even beginning to gain traction in the more tradition-bound worlds of economics and politics. Researchers associated with the MIT Poverty Action Lab, for example, have conducted more than a hundred field experiments to test the efficacy of various aid policies, mostly in the areas of public health, education, and savings and credit. Political scientists have tested the effect of advertising and phone solicitations on voter turnout, as well as the effect of newspapers on political opinions. And labor economists have conducted numerous field experiments to test the effectiveness of different compensation schemes, or how feedback affects performance. Typically the questions these researchers pose are quite specific. Should aid agencies give away mosquito nets or charge for them? How do workers respond to fixed wages versus performance-based pay? Does offering people a savings plan help them to save more? Yet answers to even these modest questions would be useful to managers and planners. And field experiments could be conducted on grander scales as well. For example, public policy analyst Randal O’Toole has advocated conducting field experiments for the National Park Service that would test different ways to manage and govern the national parks by applying them randomly to different parks (Yellowstone, Yosemite, Glacier, etc.) and measuring which ones work best.23

  THE IMPORTANCE OF LOCAL KNOWLEDGE

  The potential of field experiments is exciting, and there is no doubt that they are used far less often than they could be. Nevertheless, it isn’t always possible to conduct experiments. The United States cannot go to war with half of Iraq and remain at peace with the other half just to see which strategy works better over the long haul. Nor can a company easily rebrand just a part of itself, or rebrand itself with respect to only some consumers and not others.24 For decisions like these, it’s unlikely that an experimental approach will be of much help; nevertheless, the decisions still have to get made. It’s all well and good for academics and researchers to debate the finer points of cause and effect, but our politicians and business leaders must often act in the absence of certainty. In such a world, the first order of business is not to let the perfect be the enemy of the good, or as my Navy instructors constantly reminded us, sometimes even a bad plan is better than no plan at all.

  Fair enough. In many circumstances, it may well be true that realistically all one can do is pick the course of action that seems to have the greatest likelihood of success and commit to it. But the combination of power and necessity can also lead planners to have more faith in their instincts than they ought to, often with disastrous consequences. As I mentioned in Chapter 1, the late nineteenth and early twentieth centuries were characterized by pervasive optimism among engineers, architects, scientists, and government technocrats that the problems of society could be solved just like problems in science and engineering. Yet as the political scientist James Scott has written, this optimism was based on a misguided belief that the intuition of planners was as precise and reliable as mankind’s accumulated scientific expertise.

  According to Scott, the central flaw in this “high modernist” philosophy was that it underemphasized the importance of local, context-dependent knowledge in favor of rigid mental models of cause and effect. As Scott put it, applying generic rules to a complex world was “an invitation to practical failure, social disillusionment, or most likely both.” The solution, Scott argued, is that plans should be designed to exploit “a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment.” This kind of knowledge, moreover, is hard to reduce to generally applicable principles precisely because “the environments in which it is exercised are so complex and non-repeatable that formal procedures of rational decision making are impossible to apply.” In other words, the knowledge on which plans should be based is necessarily local to the concrete situation in which it is to be applied.25

  Scott’s argument in favor of local knowledge was in fact presaged many years earlier in a famous paper titled “The Use of Knowledge in Society” by the economist Friedrich Hayek, who argued that planning was fundamentally a matter of aggregating knowledge. Knowing what resources to allocate, and where, required knowing who needed how much of what relative to everyone else. Hayek also argued, however, that aggregating all this knowledge across a broad economy made up of hundreds of millions of people is impossible for any single central planner, no matter how smart or well intentioned. Yet it is precisely the aggregation of all this information that markets achieve every day, without any oversight or direction. If, for example, someone, somewhere invents a new use for iron that allows him to make more profitable use of it than anyone else, that person will also be willing to pay more for the iron than anyone else will. And because aggregate demand has now gone up, all else being equal, so will its price. The people who have less productive uses will therefore buy less iron, while the people who have more productive uses will buy more of it. Nobody needs to know why the price went up, or who it is that suddenly wants more iron—in fact, no one needs to know anything about the process at all. Rather, it is the “invisible hand” of the market that automatically allocates the limited amount of iron in the world to whoever can make the best use of it.

  Hayek’s paper is often held up by free market advocates as an argument that government-designed solutions are always worse than market-based ones, and no doubt there are cases where this conclusion is correct. For example, “cap and trade” policies to reduce carbon emissions explicitly invoke Hayek’s reasoning. Rather than the government instructing businesses on how to reduce their carbon emissions—as would be the case with typical government regulation—cap and trade simply places a cost on carbon by “capping” the total amount that can be emitted by the economy as a whole, and then leaves it up to individual businesses to figure out how best to respond. Some businesses would find ways to reduce their energy consumption, while others would switch to alternative sources of energy, and others still would look for ways to clean up their existing emissions. Finally, some businesses would prefer to pay for the privilege of continuing to emit carbon by buying credits from those who prefer to cut back, where the price of the credits would depend on the overall supply and demand—just as in other markets.26

  Market-based mechanisms like cap and trade do indeed seem to have a better chance of working than centralized bureaucratic solutions. But market-based mechanisms are not the only way to exploit local knowledge, nor are they necessarily the best way. Critics of cap-and-trade policies, for example, point out that markets for carbon credits are likely to spawn all manner of complex derivatives—like the derivatives that brought the financial system to its knees in 2008—with consequences that may undermine the intent of the policy. A less easily gamed approach, they argue, would be to increase the cost of carbon simply by taxing it, thereby still offering businesses incentives to reduce emissions and still giving them the flexibility to decide how best to reduce them, but without all the overhead and complexity of a market.

  Another nonmarket approach to harnessing local knowledge that is increasingly popular among governments and foundations alike is the prize competition. Rather than allocating resources ahead of time to preselected recipients, prize competitions reverse the funding mechanism, allowing anyone to work on the problem, but only rewarding solutions that satisfy prespecified objectives. Prize competitions have attracted a lot of attention in recent years for the incredible amount of creativity they have managed to leverage out of relatively small prize pools. The funding agency DARPA, for example, was able to harness the collective creativity of dozens of university research labs to build self-driving robot vehicles by offering just a few million dollars in prize money—far less than it would have cost to fund the same amount of work with conventional research grants. Likewise, the $10 million Ansari X Prize elicited more than $100 million worth of research and development in pursuit of building a reusable spacecraft. And the video rental company Netflix got some of the world’s most talented computer scientists to help it improve its movie recommendation algorithms for just a $1 million prize.

  Inspired by these examples—along with “open innovation” companies like InnoCentive, which conducts hundreds of prize competitions in engineering, computer science, math, chemistry, life sciences, physical sciences, and business—governments are wondering whether the same approach can be used to solve otherwise intractable policy problems. In the past year, for example, the Obama administration has generated shock waves throughout the education establishment by announcing its “Race to the Top”—effectively a prize competition among US states for public education resources, allocated on the basis of plans that the states must submit, which are scored on a variety of dimensions, including student performance measurement, teacher accountability, and labor contract reforms. Much of the controversy around the Race to the Top takes issue with its emphasis on teacher quality as the primary determinant of student performance and on standardized testing as a way to measure it. These legitimate critiques notwithstanding, however, the Race to the Top remains an interesting policy experiment for the simple reason that, like cap and trade, it specifies the “solution” only at the highest level, while leaving the specifics up to the states themselves.27

  DON’T “SOLVE”: BOOTSTRAP

  Market-based solutions and prize competitions are both good ideas, but they’re not the only way that centralized bureaucracies can take advantage of local knowledge. A different approach altogether begins with the observation that in any troubled system, there are often instances of individuals and groups—called bright spots by the authors Chip and Dan Heath in their book Switch—who have figured out workable solutions to specific problems. The bright-spot approach was first developed by Tufts University nutrition professor Marian Zeitlin, who noticed that a number of studies of child nutrition in impoverished communities had found that within any given community, some children seemed to be better nourished than others. After understanding these naturally occurring success stories—how the children’s mothers behaved differently, what they fed them and when—Zeitlin realized that she could help other mothers to take better care of their children simply by teaching them the homegrown solutions that already existed in their own communities. Subsequently, the bright-spot approach has been used successfully in developing nations, and even in the United States, where certain hand-washing practices in a small number of hospitals are being replicated in order to help reduce bacterial infections—the leading cause of preventable hospital deaths—throughout the medical system.28

  The bright-spot approach is also similar to what political scientist Charles Sabel calls bootstrapping, a philosophy that has begun to gain popularity in the world of economic development. Bootstrapping is modeled on the famous Toyota Production System, which has been embraced not only by Japanese automotive firms but also more broadly across industries and cultures. The basic idea is that production systems should be engineered along “just in time” principles, which ensure that if one part of the system fails, the whole system must stop until the problem is fixed. At first, this sounds like a bad idea (and it has led Toyota to the brink of disaster at least once), but its advantage is that it forces organizations to address problems quickly and aggressively. It also forces them to trace problems to their “root causes”—a process that frequently requires looking beyond the immediate cause of the failure to discover how flaws in one part of the system can result in failures somewhere else. And finally, it forces them either to look for existing solutions or to adapt solutions from related activities—a process known as benchmarking. Together these three practices—identifying failure points, tracing problems to root causes, and searching for solutions outside the confines of existing routines—can transform the organization itself from one that offers solutions to complex problems in a centralized managerial manner into one that searches for solutions among a broad network of collaborators.29

  Like bright spots, bootstrapping focuses on concrete solutions to local problems, and seeks to extract solutions that are working from what is already happening on the ground. However, bootstrapping goes one step further, sniffing out not only what is working, but also what could work if certain impediments were removed, constraints lifted, or problems solved elsewhere in the system. A potential downside of bootstrapping is that it requires a motivated workforce with strong incentives to solve problems as they arise. So one might legitimately wonder whether the model can be translated from highly competitive industrial settings to the world of economic development or public policy. But as Sabel points out, there are now so many examples of local successes—footwear producers in the Sinos Valley of Brazil, wine growers in Mendoza, Argentina, or soccer ball manufacturers in Sialkot, Pakistan—that have flourished on the strength of the bootstrapping approach that it is hard to dismiss them as mere aberrations.30

  PLANNING AND COMMON SENSE

  Most important, what both bright spots and bootstrapping have in common is that they require a shift in mind-set on the part of planners. First, planners must recognize that no matter what the problem is—creating a more nutritious diet in impoverished villages, reducing infection rates in hospitals, or improving the competitiveness of local industries—chances are that somebody out there already has part of the solution and is willing to share it with others. And second, having realized that they do not need to figure out the solution to every problem on their own, planners can instead devote their resources to finding the existing solutions, wherever they occur, and spreading their practice more widely.31

  In effect, this is also the lesson of thinkers like Scott and Hayek, both of whom advocate that policy makers devise plans that revolve around the knowledge and motivation of local actors rather than relying on their own. Planners, in other words, need to learn to behave more like what the development economist William Easterly calls searchers. As Easterly puts it,

  A Planner thinks he already knows the answer; he thinks of poverty as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors … and hopes to find answers to individual problems by trial and error.… A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.32

  As different as they appear on the surface, in fact, all these approaches to planning—along with Mintzberg’s emergent strategy, Peretti’s mullet strategy, crowdsourcing, and field experiments—are really just variations on the same general theme of “measuring and reacting.” Sometimes what is being measured is the detailed knowledge of local actors, and sometimes it is mouse clicks or search terms. Sometimes it is sufficient merely to gather data, and sometimes one must conduct a randomized experiment. Sometimes the appropriate reaction is to shift resources from one program or topic or ad campaign to another, while at other times it is to expand on someone else’s homegrown solution. There are, in fact, as many ways to measure and react to different problems as there are problems to solve, and no one-size-fits-all approach exists. What they all have in common, however, is that they require planners—whether government planners trying to reduce global poverty or advertising planners trying to launch a new campaign for a client—to abandon the conceit that they can develop plans on the basis of intuition and experience alone. Plans fail, in other words, not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them.

  This seems like an easy trap to avoid, but it isn’t. Whenever we contemplate the question of why it is that things turned out the way they did, or why people do what they do, we are always able to come up with plausible answers. We may even be so convinced by our answers that whatever prediction or explanation we arrive at may seem obvious. We will always be tempted to think that we know how other people will react to a new product, or to a politician’s campaign speech, or to a new tax law. “It’ll never work,” we will want to say, “because people just don’t like that kind of thing,” or “No one will be fooled by his obvious chicanery,” or “Such a tax will reduce incentives to work hard and invest in the economy.” None of this can be helped—we cannot suppress our commonsense intuition any more than we can will our heart to stop beating. What we can do, however, is remember that whenever it comes to questions of business strategy or government policy, or even marketing campaigns and website design, we must rely less on our common sense and more on what we can measure.

 
