Everyday Chaos


by David Weinberger


  No matter which sort of explanation we choose, we’re holding to an unexpressed basic tenet: if the same operation is done on the same sort of object but doesn’t have the same effect, then either it wasn’t really the same object (the exploding phone was different from the 99.99 percent) or the causes weren’t the same (the exploding phone was squashed in a bouncy environment). Things happen in law-like ways.

  But …

  Now it’s becoming increasingly clear that these laws may not always be the most useful tools for us to grapple with the world’s complexity. A/B tests may be so sensitive to the minute particularities of each case that applying the laws would be as difficult as determining exactly which piece of gravel is going to strike your windshield exactly right to leave you looking through a glass spider web. We know this because if we could use laws to determine the outcome of A/B tests, we’d skip doing the testing and just apply the laws; we’d skip building Deep Patient and just let physicians predict diagnoses; we’d know which phones to leave out of our baggage; and we’d cease to murmur in wonder at how beautifully a machine is playing a complex game.

  2. We can understand how things happen

  The ancient Egyptians knew that if they ate some willow bark, their aches and pains would be reduced. They didn’t have anything that we would recognize as a scientific theory of why it worked—their medical practices were advanced for their time, but were based on ideas about gods, spirits, and blockages in bodily channels—but the willow bark worked. The British reverend Edward Stone likewise did not have a scientific theory when he rediscovered the power of willow bark in the 1760s. Neither did the Bayer company in 1899 when it began producing what we now know as aspirin, based on the chemicals in willow bark. The theory did not arrive until the late 1970s, resulting in the 1982 Nobel Prize for its discoverers.10

  But there is a difference between the Egyptians’ lack of a theory and Bayer’s: unlike the Egyptians, Bayer’s chemists believed that there is a theory—a causal connection explained by law-governed chemical interactions—and that we would eventually discover it. We hold firmly to the tenet that not only are changes caused by laws that apply equally to all similar cases, but we humans can know those laws. That makes us special in the universe.

  But …

  Important predictions like the ones made by Deep Patient are being made more accurately than ever before by machine learning systems that we may never be able to understand. We are losing our naive confidence that we can understand how things happen.

  3. We can make things happen by pulling the right levers

  Based on her examination of the BuzzFeed site, home of viral posts, Josefina Casas advises that if you want your post to go viral, give it a title with a number in it. Appropriately, the title of her post is “5 Tricks for Writing Great Headlines on Twitter and Facebook as Awesome and Clickable as BuzzFeed’s.”11

  Her post repeats one of the most basic promises our theory of change makes to us: because what happens follows knowable laws, you just have to find the right levers to pull when you want to make something specific happen.

  But …

  A stunt video much like a million others is posted on the internet, and for reasons we may never understand, it inspires seventeen million people around the world to dump a bucket of ice water on their heads, raising $100 million for a good cause.12 A thousand other charities are inspired to try some variation on that campaign. None work. Our feeds are filled with the results of all sorts of nonreproducible lever pulls, as unpredictable as which A/B is going to get more clicks.

  If a lever behaves differently every time you pull it, is it a lever at all?

  4. Change is proportional to effect

  If you want to lift a hundred-pound bag of potatoes, it’s going to take twice as much effort as lifting a fifty-pound bag. When it comes to simple physics, that’s just the way it is.

  But …

  A tiny pebble that hits your windshield can shatter it. A snowball can unleash an avalanche. An amateur video can go viral, bringing millions of people out into the streets. In each of these cases, it still takes a lot of energy to make a big change, but that energy can come from tiny changes distributed throughout the system, if the system is large, complex, and densely connected enough.

  Now most of us spend a good portion of our day in just such a system: the internet. And a configuration of thousands of tiny variables in a deep learning system may foretell life-threatening cardiac problems for the complex system we call the human body.

  * * *

  As we inch away from each of these four assumptions, perhaps our everyday understanding of how things happen is finally catching up with the way the world actually works, and how scientists have been thinking about it for a while now.

  Normal Chaos

  You get in your car. You drive to the mall. Along the way, you pull over to let an ambulance go by. It’s a totally normal trip.

  Braden R. Allenby and Daniel Sarewitz in The Techno-Human Condition want us to understand just how complex that normal trip actually is. Your car is what they call a Level I complex system because you can open up the hood and figure out how it works. The mall owes its existence to Level II complexity: malls weren’t feasible before there were cars, yet you could not predict their rise just by examining a car. The ambulance is explicable only as part of a Level III system that exists because of the intersection of multiple systems: cars, roads, traffic laws, a health care system that relies on centralized facilities, and more. If all you knew was what you saw under the hood of your car, you could never, ever predict ambulances.13 Allenby and Sarewitz lay this out to dissuade us from continuing to apply Level I solutions to Level III problems such as climate change, but another consequence of their analysis is the recognition that simple things around us can only seem simple because we ignore the complex systems that make them possible.

  Yet we didn’t have a theory that directly addressed complexity until about sixty years ago. If we’re willing to ironically oversimplify its history, we can mark Chaos Theory’s rise to public awareness from a 1972 talk by Edward Lorenz, one of the parents of this new science: “Predictability: Does the Flap of a Butterfly’s Wing in Brazil Set Off a Tornado in Texas?”14 That arrestingly implausible idea made it easy for the media to present the new discipline to the public as just another one of science’s crazy theories, in the “What will they think of next?” category.

  But of course Chaos Theory isn’t crazy at all. In fact, before machine learning let us put data to use without always requiring us to understand how it fits together, and before the internet let us directly experience just how unpredictable a complex system can be, Chaos Theory prepared the ground for the disruption of our settled ideas about how change happens.

  Chaos Theory isn’t crazy, but it can seem that way because it describes nonlinear systems—systems that work differently as they scale up. For example, if you want to add people to your dinner party for four, at a certain point you won’t be able to just add more chairs and increase the ingredients in your recipes; that would be a linear system. At some precise point you’re going to throw up your arms with the realization that you have to hire a hall, find a caterer, make arrangements with the local police to manage the traffic, and give up on having everyone stand up and introduce themselves. It’s going to be a very different sort of party.

  Weather is a more typical nonlinear system because, for example, a tiny rise in temperature can affect the air pressure and wind speed enough to change the pattern of evaporation and condensation, resulting in a hurricane. When a small effect produces a large change in how a system works, you’ve got a nonlinear system.

  Chaos Theory gave us mathematical tools for modeling highly complex, nonlinear systems, making it possible to rigorously analyze everything from the flow of water around a boulder, to climate change, to the direction a bead of water takes when flowing down Laura Dern’s hand.15 Of course, this new science’s explanations are usually beyond the comprehension of those of us who, like me, lack advanced math degrees.

  Not long after Chaos Theory started taking shape, a related type of phenomenon became an object of study: complex adaptive systems. Some of the ground for the public’s appreciation of this phenomenon was prepared by Rachel Carson’s 1962 best seller, Silent Spring, which brought to public awareness the delicacy of intertwined ecosystems—a term only coined in 1935.16 Altering one element can have surprising and dramatic effects on entire enmeshed systems, the way a butterfly can theoretically cause a hurricane, or the way the actual reintroduction of wolves into Yellowstone National Park kicked off a set of changes that ultimately altered the course of local rivers.17 Such complex systems can have emergent effects that can’t be understood just by looking at their constituent parts: no matter how finely you dissect a brain, you won’t find an idea, a pain, or a person.

  Over the past few decades, lots of developments outside the scientific realms of Chaos Theory and complex adaptive systems theory have conspired to make the world seem not nearly as neatly understandable as we’d thought for hundreds of years. Many of these developments occurred on a global scale: World War II shook up our faith in the reasonableness of Western cultures. Philosophical existentialism taught a generation that meanings are just our inventions. Feminism has challenged the exaltation of purely analytical thinking as often a male power move. What’s called postmodern philosophy has denied that there is a single reality grounding our differing interpretations of it. Behavioral economics has pointed out just how irrational we are in our behavior; for instance, hearing a lie debunked turns out to set that lie more firmly in our minds.

  All of those influences and more have brought us to question whether our understanding of how things happen is too simple, too constrained by historic drives for power and mastery, too naive about the brain’s reliability as an instrument that aims at truth. Instead, we are beginning to see that the factors that determine what happens are so complex, so difficult, and so dependent on the finest-grained particularities of situations that to understand them we have had to turn them into stories far simpler than the phenomena themselves.

  Our vision has been clarified because at last we have tools that extract value from vast and chaotic details. We have tools that let us get everyday value out of the theory. The internet has plunged us into a world that does not hide its wildness but rather revels in it. AI in the form of machine learning, and especially deep learning, is letting us benefit from data we used to exclude as too vast, messy, and trivial.

  So now, at last, we are moving from Chaos Theory to chaos practice—putting the heady ideas of that theory to everyday use.

  Complexity beyond Prediction

  We’re going to spend the rest of this book thinking about the strategies we’re adopting as we face up to and embrace the overwhelming complexity of our world, but here are some quick examples of practices that leverage our growing recognition of the chaos beneath the apparent order:

  In business, we take on-demand manufacturing for granted because it helps us avoid under- or overestimating demand in essentially unpredictable markets. We talk admiringly about companies that can pivot or that disrupt themselves. Some leading companies are launching minimum viable products that have as few features as customers are willing to pay for so that the company can see what the users actually want. Companies often rely on agile development techniques that are more responsive to new ideas and developmental interdependencies than traditional task management processes are. Many companies are preparing for black swans that could at any moment smash a business’s foundations.18

  Governments, nonprofits, and other public institutions, as well as some for-profit companies, have been adopting open platforms that provide data and services without trying to anticipate what users might do with them. Using them, independent developers can create apps and services that the original institution never anticipated. By adopting open standards, users can mash up data from multiple organizations, thus creating new findings and resources not foreseen by the original publishers of the data.

  In science, advanced statistical analysis tools can outrun hypotheses and theories. Machine learning and deep learning are opening up new domains for prediction based on more factors than humans can count and more delicately balanced interrelationships than we can comprehend.

  Video games—the revenues of which dwarf those of the movie industry—routinely enable users to create their own mods and total conversions that transform games in ways beyond the intentions and imagination of the games’ creators.

  In our personal lives, from the free agent nation19 to the gig economy, we’ve been getting used to the idea that the current generation is not going to have careers that carry them through their futures the way that Boomers did.

  If all you knew were these italicized buzzwords, you might think that we’ve spent the past twenty years or so coming up with ways to avoid having to anticipate what’s going to happen next.

  You’d be right. That’s exactly what we’ve been doing.

  How This Book Works

  The aim of this book is to reveal a shift that explains many of the changes around us in business, our personal lives, and our institutions.

  The plan of this book is to examine the before and after of these changes in particular domains of our lives, even though in most instances we have not yet reached the full “after.” What was our old system for understanding how things happened in the world, and why? How is that changing—and with what benefits (and challenges) to us as business leaders, citizens, and humans?

  The structure of this book skips around in time a bit. Chapters 1 and 2 look at the old way we predict and then the new AI-based ways in order to see the change in how we think things happen. But AI isn’t the only technology that’s transforming our ideas about how the world works. So, in chapter 3, we look at the many ways we’ve taken advantage of digital networks over the past twenty years in order to escape from our age-old patterns of dealing with the future by trying to out-guess it. Chapter 4 looks for the ground of all the changes discussed so far. Chapters 5 and 6 explore two examples of the profound effect this new ground is having: how our high-level approaches to strategies have mirrored changes in how we think about the nature of possibilities, and what progress now looks like. Chapter 7 is a reflection on what all this means beyond business and practicalities.

  The oddness of this book is that each chapter, except this introduction and the closing chapter, ends with an essay about how these changes are affecting some of the most basic formations of our understanding—things like how we think about what it means to be moral, or the way in which we divide the course of our lives into what’s normal and the accidents that befall us. I’m calling these brief essays “codas,” although a musical coda closes a piece, whereas I hope that these essays will open the chapters up, giving an indication of how deep and far-reaching these changes are likely to be in our lives.

  That sense of a future opening up is entirely appropriate given the themes we are about to explore.

  The author of this book dislikes talking about himself, but a little context might help. I’ve been driven toward the questions this book approaches ever since I was a philosophy major in college (although technically I majored in meaning—it was the 1960s), and I continued to pursue them throughout my doctoral studies and my six years long ago as a philosophy professor. How do our tools affect our experience and understanding of the world, and vice versa? What does our everyday experience teach us that our ideas deny? And, most of all, what have we sacrificed in our attempt to make the world understandable and controllable?

  My interest in these questions only intensified when I went into high tech, initially as a marketing writer but ultimately as a vice president of marketing and a strategic marketing consultant. I became fascinated by the internet in the 1980s and then by the early web precisely because they seemed to me to tear down institutions and ways of knowing that maintained control by narrowing our possibilities; that is the subtext of the four books I’ve written about the internet, starting with The Cluetrain Manifesto (as a coauthor) and most recently with Too Big to Know. I have been a fellow at Harvard’s Berkman Klein Center for Internet & Society since 2004 and am now a senior researcher there, and I have also been a journalism fellow at Harvard’s Shorenstein Center, a Franklin Fellow at the US State Department, and a writer-in-residence at Google’s People + AI Research (PAIR) group.20 Additionally, for almost five years, I codirected Harvard’s Library Innovation Lab, where I got to try out in practical ways some of the central ideas in this book.

  The why of this book is that we are living through a deep change in our understanding of ourselves and our world. We should be “leaning in” to this by rethinking our fundamental premises about how change happens. How much control do we have? Is finding the right levers to pull still the most effective way to turn events our way? What can we learn from how our technology is already enabling us to succeed and even thrive? What is the role of explanations? What constitutes success? The pursuit of these questions and more throughout this book will lead us down some unexpected paths, and at the end there won’t be a chapter with a numbered list of rules for success. If only. Instead, as we will see, leaning in means embracing the complexity and chaos our tech is letting us see and put to use.
