Everyday Chaos


by David Weinberger


  * * *

  There are some good reasons to move to governing through optimizations rather than through a blanket insistence on explicability, at least in some domains:

  1. It lets us benefit from AI systems that have advanced beyond the ability of humans to understand them.

  2. It focuses the discussion at the system level rather than on individual incidents, letting us evaluate AI in comparison to the processes it replaces, thus swerving around some of the moral panic AI is occasioning.

  3. It treats the governance questions as social questions to be settled through our existing democratic processes, rather than leaving it up to the AI vendors.

  4. It places the governance of these systems within our human, social framework, subordinating them to human needs, desires, and rights.

  By treating the governance of AI as a question of optimizations, we can focus the necessary argument about them on what truly matters:

  What is it that we want from a system, and what are we willing to give up to get it?

  Chapter Three

  Beyond Preparation

  Unanticipation

  We have long known that the world is too big for us to fully understand, much less control, but we have long thought that underneath the chaos there must be an order. For the past few hundred years, we thought that order consisted of simple rules governing a complicated universe. The rules were enough to make the place explicable, and when the inevitably unexpected happened, we would just put it in the “accidents” column.

  Even before machine learning began to give us a different model, the old model had been shaken up by twenty years of online life. We have developed tools and processes that make perfect sense given the internet’s capabilities and weaknesses, but that implicitly present us with a view of how things happen that’s very different from the classical one we’ve continued to think we inhabit.

  Our experiences on the net have not just retrained our thinking. They have proved the pragmatic benefits of adopting a new stance that undoes one of the most basic premises of our decision making and plans. Perhaps thriving in a scaled, connected world requires at least sometimes forgoing anticipating and preparing for the future.

  * * *

  After Henry Ford spent more than ten years selling millions of cars with no changes beyond removing an unnecessary water pump, engineers showed Ford a prototype of an updated version. Ford’s response was to smash it to pieces with a sledgehammer. He then turned and walked out without saying a word. Henry Ford was the paragon of getting something right from the start—a strategy (or perhaps just a personality trait) that assumes the future is predictable in ways our new models are bringing us to question.1

  He had gotten his car right, but it hadn’t come easy. In 1906, Ford initiated a two-year design process by handpicking a small group of engineers he knew and felt comfortable with. He installed them in a fifteen-by-twelve-foot room with blackboards and metal machining tools because Ford would rather hold a part in his hand than evaluate its design specification written on paper. Day after day, Ford and his team focused on designing a car that would meet the minimum requirements of customers. It had to be easy to learn how to control because for most buyers it would be the first car they’d ever driven. It had to be high off the ground because it would be traveling rutted roads designed for horses. It had to be cheap to manufacture so they could make and sell them like four-wheeled hotcakes.

  After it launched, Ford made no significant alterations to the Model T’s design for nineteen years. The company by then had sold fifteen million of them, revolutionizing transportation, manufacturing, what it means to be middle class, and the open road as a symbol of freedom. The room where it was designed is now a national heritage site.2 That’s a great American success story.

  But the Model T’s design process is also a great Paleolithic success story, for its design methodology was essentially the same one humanity employed for tens of thousands of years. The seventy-one-thousand-year-old arrowheads that anthropologists found in a cave on the southern coast of South Africa were produced via a process as inflexible as Ford’s: Find a type of stone called silcrete. Build a fire to heat it so you can begin the slow process of chipping away at it with other rocks gathered for the purpose. Gather wood or bone to make mounts so that you can attach the arrowheads to wooden shafts using resin gathered from particular plants. Let it dry. This was such a complex and extended process that some have said it’s evidence that our forebears must have had language at that point.3

  The particulars of Ford’s design process are of course very different from our Paleolithic ancestors’, except for their most fundamental element: both succeeded by anticipating the future and preparing for it.

  Anticipations need not—and usually do not—rise to the level of a prediction in which one makes an explicit statement about what the future holds. An anticipation can be as implicit as the expectations that let us confidently fish our keys out of our pocket as we approach our house. Or anticipation can be as deliberate as Ford’s correctly thinking that customers would want headlights on their car so they could drive it at night.

  Either way, anticipation that leads to some preparatory action is the fundamental way we engage with our world. If we were to stop anticipating and preparing, we literally would not dip a spoon into a bowl of soup or look ahead at where we’re walking. It’s at the heart of our strategies, as well as the way we navigate the most mundane of our everyday activities.

  For example, it explains why there’s a bottle of cream of tartar over forty years old on my spice shelf. At some point many decades ago, some recipe—probably for lemon meringue pie—called for it; it’s not the sort of item one buys on a madcap impulse. Whatever its origin story, I have dutifully moved that bottle from apartment to apartment and house to house, always thinking it makes more sense to pack it than to toss it. You never know when a recipe is going to call for a pinch of the “spice” officially known as potassium bitartrate.

  This is a perfectly rational strategy. But it’s also a slightly crazy one, since at this rate I’m going to be buried with the bottle without ever having used it again. Assuming that I am not a pathological hoarder, why do I still have it? Because this is how we prepare for an unknown future.

  It’s obviously a good strategy, since we have survived so long by employing it. But it’s not without its costs. Otherwise, we would all be proud owners of the Wenger 16999 Swiss Army Knife, an eighty-seven-tool beast that makes you ready for anything. Got a divot stuck between your golf shoe cleats? One of its tools is designed precisely for that. Need to adjust your gun sight? It’s got just the thing. Ream a chisel point, disgorge a fishhook, and then clip a congratulatory cigar? Yes, yes, and yes.

  So why don’t we all have Wenger 16999 Swiss Army Knives? For one thing, it costs about $1,200. But even if it were free, we still wouldn’t be rocking one in our tool belts because, weighing in at seven pounds and with a width of about nine inches, it’s only technically portable and requires two hands to use, which makes it an awkward screwdriver and renders its nail clipper useless for anything except your toes. The Wenger 16999 is a collector’s item, a curiosity, a conversation piece, not a real tool, even though on paper a single Boy Scout equipped with one would be as well prepared as a troop of about thirty Scouts with their pathetic three-blade knives in their pockets.

  The 16999 makes clear the risks inherent in our anticipate-and-prepare strategy. There’s a price to being overprepared. Each additional blade, useful on its own, makes the knife more unwieldy. Cream of tartar lurks on our spice shelves and gathers dust. Cave people may have wasted time preparing arrows for a flock of birds that never showed up. But go too far in the other direction and we run the risk of being underprepared, as when a cave person makes five arrowheads and then runs into a flock of a hundred slow birds. Worst of all, if the cave person prepares arrows but runs into a saber-toothed tiger that requires a spear to be subdued, then she is misprepared, and probably dead.

  Affluent societies routinely over-, under-, and misprepare without even recognizing these lapses as failures, just as no one cares that a spice shelf holds an undisturbed jar of cream of tartar that will be a silent witness as we marry, have children, and then watch our children have children who one day ask, “What’s that dusty bottle of white powder for, Gramps?” We don’t count that as a failure of the anticipate-and-prepare strategy because its cost is so low. But we also discount far more consequential failures as just the cost of doing business. Factories tend toward overpreparation when stocking materials because a single missing component can bring the entire operation to a halt. Your local artisanal ice cream shop probably tends to underprepare because it knows that if it runs out of strawberry crème brûlée, it can always plop in a bucket of burnt banana cacao instead and not lose any customers.

  But the cost of the anticipate-and-prepare strategy can also be tragic. In a horrifying testament to the problem of over- and mispreparing, we Americans throw out a full 40 percent of our food and ingredients—equivalent to $165 billion each year—because we cooked too much or bought supplies that outran their use-by dates.4 In 1995, at the height of the personal computer boom, one study showed that costs related to “mismatches between demand and supply leading to excess inventory … equaled the PC business’s total operating margin”; excess inventory is just part of the cost of doing business—CODB.5 Likewise, publishers so accept that they will print more books than they’ll sell that they have had to establish a process to deal with the overstock without burdening the bookstores: booksellers rip the covers off of paperbacks, mail the covers to the publisher, and pulp what remains. CODB.

  There’s a simple reason none of this goes onto the scales when we assess our reliance on our prehistoric strategy of anticipating and preparing for a future most marked by its unpredictability: we have no scales and we do no weighing because we have had no alternative.

  Now we do. We can adopt strategies of unanticipation.

  Modes of Unanticipation

  Unanticipation has shown itself in how we’ve been conducting business and living our lives over the past twenty years. Here are some of the more illustrative, important, and sometimes quite familiar examples.

  Minimum Viable Anticipation

  In 2004 the software startup IMVU was feeling some urgency to get its product into people’s hands. So, says cofounder Eric Ries, they decided to do “everything wrong.” In his 2011 best seller, The Lean Startup, Ries explains, “[I]nstead of spending years perfecting our technology, we build a minimum viable product … that is full of bugs.… Then we ship it to customers way before it’s ready.” He adds, “And we charge money for it.”6

  IMVU was developing an instant messaging app that would represent users with visual avatars in a simulated 3-D space of the sort familiar to video game players. The users would be able to create and sell the online items that would turn this space into an inhabited world. Ries notes that shipping product before it’s ready goes against every best practice developed over the past generation for ensuring quality, but, he writes, “[t]hese discussions of quality presuppose that the company already knows what attributes of the product the customer will perceive as worthwhile.”7 These practices assume that the company can anticipate customer needs and values.

  Often we only think that we can. For example, IMVU assumed that customers would want to be able to move their avatars around. But adding the programming code to enable animated walking was relatively complex since it meant not only doing the graphics work but also creating path-finding algorithms that would let avatars move from point A to point B without bumping into the objects customers had unpredictably plopped down into their world. So IMVU shipped the product without providing even this most basic animation. Instead, users could “teleport” their avatars from A to B without any transitional animation, or even any fancy sound effects.

  “You can imagine our surprise when we started to get customer feedback,” Ries recounts. “[W]hen asked to name the top things about IMVU they liked best, customers constantly listed avatar ‘teleportation’ among the top three.” Many of them even specifically said that it was an advance over the slick animated travel in the game The Sims that IMVU had assumed had set the bar for this type of visualization.8

  IMVU may seem to have lucked out, but the real strength of its approach was that it diminished the role of luck. If customers hated the lack of animations, then IMVU would know what feature to add next—not because it guessed correctly, but because real, paying users were bellyaching.

  IMVU was following the new strategy of releasing a “minimum viable product” (MVP), a term coined by Frank Robinson, the cofounder of a product development consultancy, in 2001.9 An MVP reverses the usual order of “design, build, sell,” a process followed even by the earliest arrow-makers, except they would have replaced “sell” with “shoot.” Or, in our terms, it replaces “anticipate and prepare” with “unanticipate and learn.”

  It is hard for even the most diligent of companies to anticipate customer needs because customers don’t know what they want. That’s not because we customers are dumb. It’s because products are complex, and how they best fit into our complex workflows and lives can only be discovered by actually using them. And then those usages can give rise to new needs and new ideas.

  That’s why when Dropbox launched in September 2008, it shipped a product that did just one thing well: users could work on the same file from multiple machines without hitting any speed bumps.10 Since then, Dropbox has incrementally added more features based on what users turn out to want: publicly shareable files, automatic backups, collaborative editing, and more. Dropbox has continued to add major features as it learns from customers what they will actually use.

  There’s a similar story behind Slack, a workgroup chat app modeled on an ancient internet service, IRC (Internet Relay Chat), that lets people create “channels” over which they can communicate by typing. When it launched in 2013, Slack offered minimal functionality. As it got taken up by more and larger organizations, it discovered it needed to provide better navigation tools for users who may now have dozens of channels. Slack continues to devote a great deal of its resources to learning what its users actually want. Founder Stewart Butterfield says they get about eight thousand help and enhancement requests every month, and ten thousand tweets, “and we respond to all of them.” “Whenever they hear something new that seems like it’s actually a really good idea—or it’s a pretty good idea but it’s very easy for us to implement—it gets posted to a [Slack] channel where we discuss new features. That’s an ongoing, daily thing.” He adds, “There have already been 50 messages posted today.”11

  Why anticipate when you can launch, learn, and iterate?

  * * *

  The MVP approach is now familiar to many segments of business. It was even featured in Harvard Business Review in 2013. But we should pause to remember just how counterintuitive the MVP process is … or at least was, until our success with such strategies changed our intuitions.

  Business has worked on systematizing quality processes at least since W. Edwards Deming started teaching his management techniques to Americans in the 1950s. In the early 1980s, the US Navy started applying Deming’s techniques and dubbed the program Total Quality Management. As taken up by companies such as Ford, Motorola, and ExxonMobil, TQM is a cultural and organizational commitment to “[d]o the right things, right the first time, every time”—very Henry Ford.12

  It’s hard to argue with that. But not impossible. The emphasis on quality has often led to efforts to systematize “best practices” on the grounds that “there’s only one best way to do things,” as adherents say. For repetitive processes on an assembly line, a best practices approach—mirroring Taylorism, with its clipboards and stopwatches—makes sense, except of course for its abject dehumanization of workers. But when best practices apply uniform processes to unique situations, as they do in virtually every nonmechanized environment, they can miss opportunities or create inefficiencies. They can become ritualized and outlast their utility. As Tom Peters, coauthor of In Search of Excellence, says, “[I]n a world with so much change … what is the shelf life of a best practice anyway?”13

  Our temptation to rely on best practices is backed by the real benefits they can bring, but also by a misapplication of one of the rules of how things happen we discussed in the introduction: equal causes have equal effects, but only if the situations are truly the same. Only in the most mechanized of environments are the situations so self-similar, and even there, emergencies and opportunities arise that can turn best practices into suboptimal practices or even disastrous practices.

  Certainly the companies that have released MVPs are not arguing against quality, even as they charge customers for the privilege of using underfeatured and possibly buggy products. Rather, they are against the idea that quality is best achieved the way Ford did: by knowing beforehand exactly what you want and then planning the perfect procedures that will get it right every time. Releasing an imperfect, incomplete product to users who want to help shape its future often results in a higher-quality product that is more highly valued by its users.

 
