Everyday Chaos

by David Weinberger


  There’s nothing extraordinary about this story. From the checkup through the postoperative treatment, Timo received excellent health care. All has gone as well as he and his medical team hoped. But we should be at least curious about the everyday fact that while Timo’s initial checkup did not lead to a prediction of a heart attack, once the event occurred, the same evidence was read backward as an explanation of that event.

  Pierre-Simon Laplace would have been pleased. His omniscient demon that can predict everything that will happen based on its complete knowledge about any one moment can just as easily “postdict” everything that has happened. For the demon, explanations are exactly the same as predictions, except the predictions look forward and the explanations look backward.
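
  To make the demon’s symmetry concrete, here is a minimal sketch, mine rather than the book’s, of a toy deterministic world: an invertible update rule can be run forward to predict a later state or inverted to “postdict” an earlier one, and the two are the same computation seen from different ends. The rule and numbers are hypothetical.

    # Toy deterministic world: the state evolves by an invertible rule,
    # so prediction (forward) and postdiction (backward) are symmetric.
    def step_forward(x: float) -> float:
        return 3 * x + 1  # next state from the current state

    def step_backward(x: float) -> float:
        return (x - 1) / 3  # previous state from the current state

    state = 5.0
    later = step_forward(state)     # the demon predicts: 16.0
    earlier = step_backward(later)  # and postdicts its way back: 5.0
    assert earlier == state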

  We humans, of course, don’t know now what we will know later, so predictions and explanations are different for us.1 In Timo’s case, the most important difference between what the physicians knew before and after his heart attack was that a heart attack had occurred. Once we know that, we can see the path to the attack. We can reconstruct it.

  Or at least we think we can. It should concern us that when we look backward, we find reasons for just about everything. The stock market fell yesterday because of fears about the Middle East. Our town voted down the school tax increase because people think the town government is fiscally irresponsible. The car ahead of us sat through the entire green light because the driver was probably texting. If something happens, we envision the path that led up to it. We are a species that explains things even when we’re just making it up.

  We can do this because we’ve decided that usually an explanation need only point to the sine qua non cause, or the “but for x” cause, as in, “But for the want of a nail, a kingdom was lost”—or, more likely for the rest of us, “But for that nail, I wouldn’t have gotten a flat tire.”

  “We ran over a nail” is a fine explanation of a flat tire, especially if the nail is still sticking in the tire, but in truth there are many other but fors that apply to that situation: but for our being late and having to take the Gardner Street shortcut, where the nail was; but for tires being made out of a material softer than iron; but for pointy objects being able to penetrate materials as stiff as tires; but for our having been born after pneumatic tires were invented; but for rust-based extraterrestrials not using space magnets to pull all iron objects off the surface of the earth … and so on until we, demon-like, are done listing everything that had to happen and not happen for us to find ourselves pulled over on a dark road thumbing through a manual to find out where we’re supposed to attach the car jack.

  The sine qua non form of explanation has such deep roots in our thinking in part because of the social role of explanations. Outside of scientific research, we generally want explanations for events that vary from our expectations: Why did we get a flat? Why did I get a stomachache? Why did the guy in the car ahead of me sit through an entire green light even though I honked? For each of these special cases, we find the “but for x” explanation that points to what was unique in each case: the exceptional, differentiating fact.

  Sine qua nons work well when the exceptional case is a problem: the nail in the tire is the explanation because the nail is the thing we can change that will fix the problem. We can’t go back in time and take a different road or change the relative hardness of rubber and metal. But we can take the nail out of the tire. Explanations are tools, as we discussed back in chapter 2. They are not a picture of how the world works; more often, they are a picture of how the world went wrong. By isolating one factor, they enable us to address problems—pull the nail out of the tire, put stents into Timo—which is no small thing, but the world is not a single-cause sort of place. In focusing on what’s unusual, explanations can mask the usual in all its enormous richness and complexity.

  Then there’s the unsettling truth that machine learning is putting before our reluctant eyes: in some instances, there may be no dominant unusual fact that can serve as a useful explanation. A machine learning diagnostic system’s conclusion that there is a 73 percent chance that Aunt Ida will have a heart attack within the next five years might be based on a particular constellation of variables. Changing any of those variables may only minutely affect the percentage probability. There may be no dominant “but for x” in this case.
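
  To see why, consider a minimal sketch, mine rather than the book’s, of such a diagnosis: a toy logistic model in which the 73 percent rests on dozens of small contributions at once, so removing any single factor barely moves the output. All the factor names, weights, and numbers here are hypothetical.

    import math

    def risk(features, weights, bias=-2.3):
        """Toy logistic model: probability from a weighted sum of factors."""
        score = bias + sum(weights[name] * value
                           for name, value in features.items())
        return 1 / (1 + math.exp(-score))

    # Thirty equally weak factors; no single dominant "but for x" cause.
    weights = {f"factor_{i}": 0.11 for i in range(30)}
    aunt_ida = {f"factor_{i}": 1.0 for i in range(30)}

    baseline = risk(aunt_ida, weights)
    print(f"baseline risk: {baseline:.2f}")  # about 0.73 with these numbers

    # Remove each factor in turn: every change is tiny.
    for name in aunt_ida:
        without = dict(aunt_ida, **{name: 0.0})
        drop = baseline - risk(without, weights)
        print(f"without {name}: risk falls by only {drop:.3f}")  # ~0.02 each

  In a toy world like this one, an explanation of Aunt Ida’s 73 percent would have to point at all thirty factors at once; pulling out any one of them explains almost nothing.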

  This can make machine learning “explanations” more like how we think about our own lives when we pause to marvel—in joy or regret—at how we got wherever we are at the moment. All the ifs, too many to count! If Dad hadn’t been so supportive, or so angry. If you hadn’t mistakenly signed up for that college course that changed your life. If you had looked right instead of left when stepping off that curb. If you hadn’t walked into that one bar of all the gin joints in all the towns in all the world. We got to here—wherever we are—because of countless things that happened and a larger number of things that did not. We got here because of everything.

  In moments like that, we remember what explanations hide from us.

  Levers without Explanations

  In 2008 the editor of Wired magazine, Chris Anderson, angered many scientists by declaring the “end of theory.”2 The anger came in part from the post’s subtitle, which declared the scientific method to be “obsolete,” a claim not made or discussed in the article itself. Apparently even editors of magazines don’t get to write the headlines for their stories.

  Anderson in fact maintained that models are always simplifications and pointed to areas where we’ve succeeded without them: Google can translate one language to another based only on statistical correlations among word-usage patterns, geneticists can find correlations between genes and biological effects without having a hypothesis about why the correlations hold, and so on.

  Massimo Pigliucci, a philosophy professor, summarized many scientists’ objections in a report published for molecular biologists: “[I]f we stop looking for models and hypotheses, are we still really doing science? Science … is not about finding patterns—although that is certainly part of the process—it is about finding explanations for those patterns.”3

  Not all scientists agreed. A 2009 book of essays by scientists argued for using Big Data analysis to find patterns, titling the approach “the Fourth Paradigm,” a phrase coined by Jim Gray, a Microsoft researcher who had disappeared at sea two years before.4 Many but not all of the contributors assumed those patterns would yield theories and explanations, but now, as claims about the power of Big Data have morphed into claims about inexplicable deep learning, Anderson’s claim is again being debated.

  In one particular field, though, the practice of model-free explanations is outrunning that debate. When it comes to understanding human motivation—how we decide what to make happen—we are getting accustomed to the notion that much of what we do may not have, and does not need, an explanation.

  * * *

  In 2008, the highly acclaimed academics Richard Thaler and Cass Sunstein opened their best seller, Nudge, with a hypothetical example of a school system that discovers that arbitrary changes in the placement of food items on the cafeteria counter can dramatically change the choices students make.5 “[S]mall and apparently insignificant details can have major impacts on people’s behavior,” the book concludes6—a lesson we learned in the introduction of this book when we looked at A/B testing. Since all design decisions affect our behavior—“there is no such thing as a ‘neutral’ design”—Nudge argues we should engineer systems to nudge people toward the behavior we want.7

  This is a powerful idea that is being widely deployed by businesses and governments—Sunstein worked in the Obama White House—because we’ve gotten better at it. And we’ve gotten better at it because we’ve largely given up on trying to find explanations of how it works. But it is not the first time our culture has heard that there are surprising, and surprisingly effective, nonrational levers for changing behavior.

  You can see the distance this idea has traveled by comparing Nudge to Vance Packard’s 1957 Hidden Persuaders, a best seller that today is best remembered for its warnings about subliminal advertising: flashing an image of an ice cream bar onto a movie screen so briefly that it does not consciously register was said to increase ice cream sales at the concession stand. In truth, Packard’s book spends less than two pages on the topic, most of it casting doubt on it.8 Nowadays, other than the occasional crank who finds the word sex written in the nighttime stars over Simba’s head in The Lion King, one does not hear much about subliminal advertising of this sort.9

  Packard’s real concern was the way advertisers were short-circuiting our decision-making processes through what was then called motivational research, or MR. MR assumed the Freudian model that said our unconscious mind is a cauldron of desires, fears, and memories suppressed by our higher levels of consciousness. By using coded words and images to appeal to those repressed urges, advertisers could stimulate powerful associations. For example, since smoking cigarettes is “really” a way of assuaging men’s anxieties about their virility, ads should show manly men smoking as sexy ladies look on. Likewise, cars express aggression, and home freezers represent “security, warmth, and safety.” Air conditioners are for people “yearning for a return to the security of the womb.” Shaving “is a kind of daily castration.” Those associations may sound outlandish now, but Fortune magazine in 1956 estimated that $1 billion—worth $9 billion today—spent on advertising in 1955 came from firms using MR to guide them.10

  Both nudges and MR-based ads aim at influencing our choices without our knowing it, but the theories behind them are very different. Nudge is based on a well-supported modern theory of the brain: beneath the Reflective Brain is the Automatic System that we share with lizards … and, as Thaler and Sunstein playfully point out, also with puppies.11 The Automatic System responds so quickly that it often leaps to the wrong conclusion. By appealing to it, advertisers can nudge us in ways that our Reflective Brain would not have agreed to. In contrast, MR is based on an out-of-favor psychological theory that assumes that even the nonrational parts of our minds are still understandable in terms of human desires, fears, anxieties, and the like. We can give a Freudian explanation of why men prefer razors with thick, meaty handles, but the explanation of why a nudge works—in the cases where an explanation is even offered—will be more like the explanation of why giraffes have long necks: What about our evolutionary history might have led to our being susceptible to being nudged in that direction? We have moved so far from explaining our behavior based on our rationality that we don’t even point to our irrational psychology.

  Theories, of course, still have value, but if there’s a way to influence a shopper’s behavior or to cure a genetic disease, we’re not waiting for a theory before we give the lever a pull.

  Unlevered

  I’m just a bill. Yes I am only a bill. And I’m sitting here on Capitol Hill.

  If those words have evoked a melody in your head that you will not be able to extricate until tomorrow afternoon, then it’s highly likely you either were a child or had young children sometime between 1976 and 1983, when ABC’s Schoolhouse Rock! aired its most famous educational music video.12

  In the unlikely event you’ve never heard it, it’s about how the levers of government work. Or, in its many parodies, how they don’t work.13 Even so, complaining that a machine doesn’t work as well as it should accepts that it should be working like a machine. That has been our model.

  The Occupy movement disagreed. A loose confederation of people who established communal camps at institutions they thought had too much power, Occupy sought to bring change but refused to pull on any of the known levers. It rejected the idea that it was a citizens’ lobbying group. It didn’t try to raise money or circulate petitions. It resisted even coming up with a list of the changes it wanted to bring about.

  Occupy was about gravity, not levers.

  Granted, Occupy was weird. And one might certainly argue that it failed. But that assumes a particular definition of success. Gravity—or “pull,” as John Hagel, John Seely Brown, and Lang Davison call it—works differently from how levers do.14

  The essence of a lever is that it has a direct effect on something. If it doesn’t, it’s a broken lever. Or possibly it’s not attached to anything, in which case it’s like a child’s pretend steering wheel mounted on the dashboard of a real car. If Occupy thought that a bunch of young people hanging out in tents for several months was going to bring about legislative change, then Occupy was pretend politics. If the aim of Occupy was to directly bring about government reform or a more equitable society, it failed.

  That’s how it looks if we take Occupy as an attempt to pull on a lever. In fact, Occupy and many protest movements are like gravity in Einstein’s sense: space-time is reshaped by objects with mass. The more people who are pulled into the gravity well, the greater the movement’s mass. As its gravity grows, it starts to affect the environment more widely and more powerfully, sometimes at such a distance that people don’t always know they are being pulled by it. If you now think about tax and budget proposals in terms of what they mean for the 1 percent, then Occupy has shaped your space-time with its one-percenter rhetoric.

  Occupy’s rejection of the lever-based theory of change is espoused not just by activists camping out in city squares but also by every marketing professional or individual with a Facebook or Twitter account. We now talk about social influencers shaping their environments. We measure our likes, our followers, and our upvotes as a type of mass that increases our gravity the more followers we attract.

  Public relations agencies used to try to manage a client’s brand by managing its communications. Now they are likely to talk about reaching the influencers by giving them something to talk about, free products, or cash. This is very unlike the MR approach that for decades assumed customers could be manipulated by putting words and images in front of them that would trigger their unconscious Freudian fears and desires. It is not even as direct as nudges that use evolutionary accidents of our brain to move us in the desired direction. It is instead about increasing the gravitational pull of the people who populate our online universe.

  Even the functional elements of online tools frequently work gravitationally. For example, in a literal sense, a hashtag on Twitter is nothing but a label: a # followed by a word or a spaceless phrase that acts as a searchable ID for disconnected tweets on the same topic. But that misses what’s significant: hashtags exert more influence the more often they’re used. The #MeToo hashtag, for instance, took on mass in 2017, attracting more women (and some men) to attach their story to it, and more people of every gender to retweet it. It became a massive comet dense with stories, anger, pain, and commitment. Its pull was so strong that it reached beyond the internet and deep into culture, business, politics, and personal lives. The significance of Occupy is arguable; the significance of #MeToo is not.

  Levers are for machines. Gravity is for worlds held together by interests, attention, ideas, words, and every other driver of connection.

  Stories

  Tweets scroll past at a pace that would dismay Laplace’s demon. A news site that hasn’t changed since we visited it ten minutes ago feels as if it’s printed on yellowing paper. There’s no point in pretending we’re keeping up with every friend’s post on Facebook and every colleague’s latest job news on LinkedIn. News used to come in diurnal cycles, the paper thumping onto our archetypal porch each morning, and the nightly news showing up on our televisions at dinnertime. Now you can’t step into the same network twice.

  In his provocative book Present Shock, Douglas Rushkoff argues that the net is wiping out our sense of the future and the past.15 As one piece of evidence for this “presentism,” as he calls it, Rushkoff points to our impatience with stories. We don’t have the attention span for anything but the quickest hit off of YouTube, and then it’s time to carom to the next shiny online object.

  Rushkoff’s book talks about something we all feel, but there’s a second phenomenon that points in the opposite direction: we love long narratives more than ever.16 When people talk about the “new golden age of television,” they almost always point first to series with scores of characters and arcs that stretch over years: Game of Thrones, The Sopranos, Breaking Bad. We are in the age of hundred-hour stories, as Steven Johnson points out in Everything Bad Is Good for You.17 He presents evidence that our television series have become far more complex over time, perhaps not coincidentally as the internet has come to prominence—far more complex than a Dickens novel, although with six hundred characters, War and Peace still sets a high-water mark. Even beyond the blockbuster long narratives, storytelling is entrenching itself just about everywhere we look. Podcasts that tell stories are a rising cultural force, whether it’s fiction (Welcome to Night Vale, Fruit), journalistic investigations (Serial, S-Town), personal stories (The Moth Radio Hour), or the storifying of ideas (This American Life, Radiolab). There are courses on storytelling and conferences about the future of storytelling, including one with exactly that name. Our story these days is all about storytelling.

  How can we be simultaneously approaching Peak Storytelling and Peak Distraction?

  Any ordinary person of two centuries ago could expect to die in the bed in which he had been born. He lived on a virtually changeless diet, eating from a bowl that would be passed on to his grandchildren.

 
