  EVERYDAY

  CHAOS

  Technology, Complexity, and How We’re Thriving in a New World of Possibility

  DAVID WEINBERGER

  HARVARD BUSINESS REVIEW PRESS

  Boston, Massachusetts

  HBR Press Quantity Sales Discounts

  Harvard Business Review Press titles are available at significant quantity discounts when purchased in bulk for client gifts, sales promotions, and premiums. Special editions, including books with corporate logos, customized covers, and letters from the company or CEO printed in the front matter, as well as excerpts of existing books, can also be created in large quantities for special needs.

  For details and discount information for both print and ebook formats, contact [email protected], tel. 800-988-0886, or www.hbr.org/bulksales.

  Copyright 2019 David Weinberger

  All rights reserved

  No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording, or otherwise), without the prior permission of the publisher. Requests for permission should be directed to [email protected], or mailed to Permissions, Harvard Business School Publishing, 60 Harvard Way, Boston, Massachusetts 02163.

  The web addresses referenced in this book were live and correct at the time of the book’s publication but may be subject to change.

  Library of Congress Cataloging-in-Publication Data

  Names: Weinberger, David, 1950– author.

  Title: Everyday chaos : technology, complexity, and how we’re thriving in a new world of possibility / David Weinberger.

  Description: Boston, Massachusetts : Harvard Business Review Press, [2019] | Includes bibliographical references and index.

  Identifiers: LCCN 2018049644 | ISBN 9781633693951 (hardcover)

  Subjects: LCSH: Chaotic behavior in systems—Industrial applications. | Prediction theory—Technological innovations. | Economic forecasting. | Technological innovations.

  Classification: LCC Q172.5.C45 W44 2019 | DDC 006.3/101—dc23 LC record available at https://lccn.loc.gov/2018049644

  CONTENTS

  Introduction

  Everything All at Once

  Chapter One

  The Evolution of Prediction

  Chapter Two

  Inexplicable Models

  Chapter Three

  Beyond Preparation: Unanticipation

  Chapter Four

  Beyond Causality: Interoperability

  Chapter Five

  Strategy and Possibility

  Chapter Six

  Progress and Creativity

  Chapter Seven

  Make. More. Meaning.

  Notes

  Bibliography

  Index

  Acknowledgments

  About the Author

  Introduction

  Everything All at Once

  Deep Patient doesn’t know that being knocked on the head can make us humans dizzy or that diabetics shouldn’t eat five-pound Toblerone bars in one sitting. It doesn’t even know that the arm bone is connected to the wrist bone. All it knows is what researchers at Mount Sinai Hospital in New York fed it in 2015: the medical records of seven hundred thousand patients as discombobulated data, with no skeleton of understanding to hang it all on. Yet after analyzing the relationships among these blind bits, not only was Deep Patient able to predict the likelihood of individual patients developing particular diseases, it was in some instances more accurate than human physicians, including for some diseases that have until now utterly defied prediction.1

  If you ask your physician why Deep Patient thinks it might be wise for you to start taking statins or undergo preventive surgery, your doctor might not be able to tell you, but not because she’s not sufficiently smart or technical. Deep Patient is a type of artificial intelligence called deep learning (itself a type of machine learning) that finds relationships among pieces of data, knowing nothing about what that data represents. From this it assembles a network of information points, each with a weighting that determines how likely it is that the points it’s connected to will “fire,” which in turn affects the points they’re connected to, the way firing a neuron in a brain would. To understand why Deep Patient thinks, say, that there’s a 72 percent chance that a particular patient will develop schizophrenia, a doctor would have to internalize those millions of points and each of their connections and weightings. But there are just too many, and they are in relationships that are too complex. You as a patient are, of course, free to reject Deep Patient’s probabilistic conclusions, but you do so at some risk, for the reality is that we use “black-box” diagnostic systems that cannot explain their predictions because, in some cases, they are significantly more accurate than human doctors.
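  The mechanism is easier to picture in miniature. What follows is a minimal sketch, in Python, of the kind of weighted, firing network described above. It is emphatically not Deep Patient’s actual architecture; the layer sizes, the random weights, and the sigmoid “firing” function are all illustrative assumptions.

    import math
    import random

    def fire(total_input):
        # Squash a point's total weighted input into a 0-to-1 "firing" strength.
        return 1.0 / (1.0 + math.exp(-total_input))

    def layer(inputs, weights, biases):
        # Each point sums its weighted inputs and fires; its output in turn
        # feeds the points of the next layer, like neurons firing in a brain.
        return [fire(sum(w * x for w, x in zip(ws, inputs)) + b)
                for ws, b in zip(weights, biases)]

    random.seed(0)

    # Toy dimensions: 5 record features -> 3 hidden points -> 1 output.
    # Deep Patient works the same way in principle, but across millions of
    # points and connections learned from 700,000 patient records.
    w_hidden = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(3)]
    w_output = [[random.uniform(-1, 1) for _ in range(3)]]

    record = [0.2, 0.9, 0.1, 0.4, 0.7]           # made-up patient data
    hidden = layer(record, w_hidden, [0.0] * 3)
    risk = layer(hidden, w_output, [0.0])[0]
    print(f"predicted risk: {risk:.0%}")          # e.g., "72%"

  Even in this toy, the “why” behind the final number is nothing but the weights. Explaining a real diagnosis would mean tracing millions of them at once.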

  This is the future, and not just for medicine. Your phone’s navigation system, type-ahead predictions, language translation, music recommendations, and much more already rely on machine learning.

  As this form of computation gets more advanced, it can get more mysterious. For example, if you subtract the number of possible chess moves from the number of possible moves in the Chinese game of go, the remainder is still many times larger than the number of atoms in the universe.2 Yet Google’s AI-based AlphaGo program routinely beats the top-ranked human players, even though it knows nothing about go except what it’s learned from analyzing sixty million moves in 130,000 recorded games. If you examine AlphaGo’s inner states to try to discover why it made any one particular move, you are likely to see nothing but an ineffably complex set of weighted relationships among its data. AlphaGo simply may not be able to tell you in terms a human can understand why it made the moves that it did.
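  The scale of that comparison is easier to feel with rough numbers. Because Python’s integers have arbitrary precision, the commonly cited estimates can simply be written out; the round figures below (roughly 10^170 legal go positions, 10^120 chess games, 10^80 atoms) are standard back-of-the-envelope values, not figures taken from this book.

    # Commonly cited order-of-magnitude estimates; these round figures are
    # assumptions for illustration, not numbers from the text above.
    go_positions = 10 ** 170   # legal board positions in go
    chess_games  = 10 ** 120   # Shannon's estimate of the chess game tree
    atoms        = 10 ** 80    # atoms in the observable universe

    remainder = go_positions - chess_games
    print(remainder > atoms)              # True
    print(len(str(remainder // atoms)))   # 90 digits: the remainder exceeds the
                                          # atom count by ~90 orders of magnitude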

  Yet about an AlphaGo move that left some commentators literally speechless, one go master, Fan Hui, said, “It’s not a human move. I’ve never seen a human play this move.” Then, softly, “So beautiful. Beautiful. Beautiful. Beautiful.”3

  Deep learning’s algorithms work because they capture better than any human can the complexity, fluidity, and even beauty of a universe in which everything affects everything else, all at once.

  As we will see, machine learning is just one of many tools and strategies that have been increasingly bringing us face to face with the incomprehensible intricacy of our everyday world. But this benefit comes at a price: we need to give up our insistence on always understanding our world and how things happen in it.

  * * *

  We humans have long been under the impression that if we can just understand the immutable laws of how things happen, we’ll be able to perfectly predict, plan for, and manage the future. If we know how weather happens, weather reports can tell us whether to take an umbrella to work. If we know what makes people click on one thing and not another in their Facebook feeds, we can design the perfect ad campaign. If we know how epidemics happen, we can prevent them from spreading. We have therefore made it our business to know how things happen by discovering the laws and models that govern our world.

  Given how imperfect our knowledge has always been, this assumption has rested on a deeper one. Our unstated contract with the universe has been that if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus at least somewhat pliable to our will.

  But now that our new tools, especially machine learning and the internet,4 are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it. Our newly capacious machines can get closer to understanding it than we can, even though they, as machines, don’t really understand anything at all.

  This, in turn, challenges another assumption we hold one level further down: the universe is knowable to us because we humans (we’ve assumed) are uniquely able to understand how the universe works. At least since the ancient Hebrews, we have thought ourselves to be the creatures uniquely made by God with the capacity to receive His revelation of the truth. Since the ancient Greeks, we’ve defined ourselves as the rational animals who are able to see the logic and order beneath the apparent chaos of the world. Our most basic strategies have relied on this special relationship between us and our world.

  Giving up on this traditional self-image of our species is wrenching and painful. Feeling crushed by information overload and nervously awaiting the next disruption of our business, government, or culture are just the localized pains of a deeper malady: the sense—sometimes expressed in uneasy jokes about the rise of robot overlords—that we are not as well adapted to our universe as we’d thought. Evolution has given us minds tuned for survival and only incidentally for truth. Our claims about what makes our species special—emotion, intuition, creativity—are beginning to sound overinsistent and a bit desperate.

  This literal disillusionment is something for us to embrace—and not only because it’s happening whether we embrace it or not. We are at the beginning of a great leap forward in our powers of understanding and managing the future: rather than always having to wrestle our world down to a size we can predict, control, and feel comfortable with, we are starting to build strategies that take our world’s complexity into account.

  We are taking this leap because these strategies are already enabling us to be more efficient and effective, in touch with more people and ideas, more creative, and more joyful. This shift is already recontextualizing many of our most basic ideas and our most deeply accustomed practices in our business and personal lives. It is reverberating through every reach of our culture.

  The signs are all around us, but in many cases they’re hidden in practices and ideas that already seem normal and obvious. For example, before machine learning came to prominence, the internet was already getting us used to these changes…

  The A/B Mystery

  When Barack Obama’s first presidential campaign tried out different versions of a sign-up button on its website, it found that the one labeled “Learn More” drew dramatically more clicks than the same button labeled “Join Us Now” or “Sign Up Now.”

  Another test showed that a black-and-white photo of the Obama family unexpectedly generated far more clicks than the color image the site had been using.

  Then, when they put the “Learn More” button together with the black-and-white photo, sign-ups increased 40 percent.

  Overall, the campaign estimated that almost a third of the thirteen million names on its email list and about $75 million in donations were due to the improved performance provided by this sort of A/B testing, in which a site tries out variants of an ad or content on unknowing sets of random users and then uses the results to decide which version the rest of the users will see.5
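  The mechanics are simple enough to sketch. In the Python snippet below, visitors have been randomly split between two variants; a two-proportion z-test then asks whether the winner’s higher click rate could plausibly be chance. The tallies, the one-sided test, and the 5 percent significance threshold are illustrative assumptions, not details from the Obama campaign.

    from statistics import NormalDist

    def ab_test(clicks_a, n_a, clicks_b, n_b):
        # Given click tallies for randomly assigned variants A and B,
        # run a two-proportion z-test on the difference in click rates.
        rate_a, rate_b = clicks_a / n_a, clicks_b / n_b
        pooled = (clicks_a + clicks_b) / (n_a + n_b)
        se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (rate_a - rate_b) / se
        p_value = 1 - NormalDist().cdf(z)     # one-sided p-value
        return rate_a, rate_b, p_value

    # Made-up tallies for two button labels.
    rate_a, rate_b, p = ab_test(clicks_a=480, n_a=5000, clicks_b=400, n_b=5000)
    print(f"Learn More: {rate_a:.1%}   Sign Up Now: {rate_b:.1%}   p = {p:.4f}")
    if p < 0.05:
        print("Show the winning button to everyone else.")

  Notice what the test does not produce: any hypothesis about why one button wins. It only tells you which one did.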

  It was even more surprising when the Obama team realized that a video of the candidate whipping up a crowd at a rally generated far fewer clicks than displaying a purely text-based message. What could explain this difference, given their candidate’s talents as an orator? The team did not know. Nor did they need to know. The empirical data told them which content to post on the campaign site, even if it didn’t tell them why. The results: more clicks, more donations, and probably more votes.

  A/B testing has become a common practice. The results you get on a search page at Google are themselves the product of A/B testing.6 The layout of movies at Netflix results from A/B testing. Even some headlines used by the New York Times are the result of A/B testing.7 Between 2014 and 2016, Bing software engineers performed 21,200 A/B tests, a third of which led to changes to the service.8

  A/B testing works without needing, or generating, a hypothesis about why it works. Why does some ad at Amazon generate more sales if the image of the smiling young woman is on the left instead of the right? We can make up a theory, but we’d still be well advised to A/B test the position of the model in the next ad we create. That a black-and-white photo worked for Obama does not mean that his opponent, John McCain, should have ditched his color photos. That using a blue background instead of a green one worked for Amazon’s pitch for an outdoor grill gives us no reason to think it will work for an indoor grill or for a book of barbecue recipes.

  In fact, it’s entirely plausible that the factors affecting people’s preferences are microscopic and fleeting. Maybe men over fifty prefer the ad with the model on the left but only if they are coming from a page that had a funny headline, while women from Detroit prefer the model on the right if the sun just peeked through their windows after two overcast days. Maybe some people prefer the black-and-white photo if they were just watching a high-contrast video and others prefer the color version if the Yankees just lost a game. Maybe some generalizations will emerge. Maybe not. We don’t know. The reasons may be as varied as the world itself is.

  We’ve been brought up to believe that the truth and reality of the world are expressed by a handful of immutable laws. Learn the laws and you can make predictions. Discover new laws and you can predict more things. If someone wants to know how you came up with a prediction, you can trot out the laws and the data you’ve plugged into them. But with A/B testing, we often don’t have a mental framework that explains why one version of an ad works better than another.

  Think about throwing a beach ball. You expect the ball to arc while moving in the general direction you threw it in, for our mental model—the set of rules for how we think things interact—takes account of gravity and momentum. If the ball goes in another direction, you don’t throw out the model. Rather, you assume you missed some element of the situation; maybe there was a gust of wind, or your hand slipped.

  That is precisely what we don’t do for A/B testing. We don’t need to know why a black-and-white photo and a “Learn More” label increased donations to one particular campaign. And if the lessons we learned from a Democrat’s ad turn out not to work for her Republican opposition—and they well may not—that’s OK too, for it’s cheap enough just to run another A/B test.

  A/B testing is just one example of a technique that inconspicuously shows us that principles, laws, and generalizations aren’t as important as we thought. Maybe—maybe—principles are what we use when we can’t handle the fine grains of reality.

  * * *

  We’ve just looked at examples of two computer-based technologies that are quite different: a programming technique (machine learning) and a global place (the internet) where we encounter others and their expressions of meaning and creativity. Of course, these technologies are often enmeshed: machine learning uses the internet to gather information at the scale it needs, and ever more internet-based services both use and feed machine learning.

  These two technologies also have at least three things in common that have been teaching us about how the world works: Both are huge. Both are connected. Both are complex.

  Their hugeness—their scale—is not of the sort we encounter when we visit the home of the world’s largest ball of twine or imagine all the world’s potatoes in a single pile. What matters about the hugeness of machine learning and the internet is the level of detail they enable. Rather than having to get rid of detail by generalizing or suppressing “marginal” information and ideas, both of these technologies thrive on details and uniqueness.

  The connectedness of both of these technologies means that the bits and pieces contained within them can affect one another without a backward glance at the barriers that physical distance imposes. This connectedness is essential to both of these technologies: a network that connected one piece to another, one at a time, would be not the internet but the old telephone system. Our new technologies’ connectedness is massive, multiway, distanceless, and essential.

  The scale and connectedness of machine learning and the internet result in their complexity. The connections among the huge number of pieces can sometimes lead to chains of events that end up wildly far from where they started. Tiny differences can cause these systems to take unexpectedly sharp turns.

  We don’t use these technologies because they are huge, connected, and complex. We use them because they work. Our success with these technologies—rather than the technologies themselves—is showing us the world as more complex and chaotic than we thought, which, in turn, is encouraging us to explore new approaches and strategies, challenging our assumptions about the nature and importance of understanding and explanations, and ultimately leading us to a new sense of how things happen.

  How We Think Things Happen

  Over the millennia, we’ve had plenty of ideas about how things happen. Whether it’s the ancient Greek idea that things naturally strive to blossom into what they are, or our more modern idea of cause and effect operating with the cold ruthlessness of a machine, we have, throughout our culture’s history, generally accepted four assumptions about how the next emerges from the now—assumptions that are now being challenged.

  1. Things happen according to laws

  There are few worse nightmares a company can imagine than having airlines add a line to their safety spiel instructing passengers to turn off the company’s product before it explodes.

  In 2016, passengers heard that warning about the Galaxy Note 7.

  After 35 of the phones had caught fire—a number that eventually reached about 400—Samsung recalled all 2.5 million of the devices, losing perhaps $5 billion in revenues and reducing the company’s market capitalization by $14 billion.

  The issue turned out to be with the lithium-ion batteries, a defect Samsung says affected only 0.01 percent of the handsets sold.9

  So why didn’t the other 99.99 percent catch fire? We only have a few different sorts of answers available. First, maybe the combustible ones were manufactured in some faulty way: the materials were substandard, or the assembly process was imprecise. Or maybe there was something unusual about the circumstances that caused the phones to explode: perhaps they were stressed by being sat on by users. Or perhaps we need to combine the two explanations: some people subjected a handful of poorly manufactured units to unusual circumstances.

 
