Everyday Chaos

by David Weinberger

  Why have we so insisted on turning complex histories into simple stories? Marshall McLuhan was right: the medium is the message. We shrank our ideas to fit on pages sewn in a sequence that we then glued between cardboard stops. Books are good at telling stories and bad at guiding us through knowledge that bursts out in every conceivable direction, as all knowledge does when we let it.

  But now the medium of our daily experience—the internet—has the capacity, the connections, and the engine needed to express the richly chaotic nature of the world. This comes at the price of the comforting illusion of comprehension, as artificial intelligence has been teaching us. Indeed, when it comes to AI, the stages of innovation now sometimes seem to mirror the stages of grief: denial, anger, bargaining, depression, and acceptance. Before long, AI will be the fully accepted norm. In fact, it already is for many of the services we rely on.

  Our acceptance of machine learning is likely to shape our idea of progress as much as computers themselves did. In some domains we may, for good reason, decide to require AI to produce conclusions only through processes that we can understand, much as Samuel Butler’s 1872 novel Erewhon prophesied that we would stop the development of new machines for fear that they would supplant us.33 But in most domains we are likely to continue to embrace machines that make recommendations based on data and relationships that surpass our understanding.

  The line of progress is not an arrow pointing up a hill. It looks much more like the densely branched maps of machine learning’s model of the world. Those models may be impenetrable to our will to understand, but they are nevertheless enabling us to see that the world, its people, its things, and its history are like those models but ever so much more so.

  The Shape of Surprise

  Bob: I can can I I everything else.

  Alice: Balls have zero to me to me to me to me to me to me to me to me to.

  These are two Facebook bots talking to one another in a language they invented. They started out with English, but as they negotiated with one another, they invented their own pidgin English.34 Or recall the two AlphaGo programs we encountered in this book’s introduction that played each other and came up with what seemed like nonhuman strategies. In another case, Antonio Torralba, a computer science professor at MIT, was seeing whether he could train a machine learning system to differentiate photos of residential bedrooms from those of motel rooms without telling the system what to look for. When he examined how the system was making the distinction, he found to his amazement that it had taught itself to identify wall sconces, and was using their presence as a strong indicator of a motel room.35 It’s a little bit like a machine learning system designed to distinguish human voices from traffic noise that, on its own, begins to understand what the humans are saying.

  Tick marks on a time line don’t seem to do justice to this sort of autonomous generativity. These machines don’t necessarily proceed step by step. The mass of deeply related data points can give rise to unexpected, emergent phenomena, the way dead-simple starting configurations of John Conway’s Game of Life can result in blocky creatures that grow wings and fly off the grid.
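
  For readers who want to see that kind of emergence firsthand, here is a minimal sketch of Conway’s Game of Life in plain Python (my illustration, not the author’s; the grid representation and the glider coordinates are choices made for brevity). The three update rules in the step function say nothing about motion, yet the five-cell “glider” crawls diagonally across the grid forever.

```python
from collections import Counter

def step(live):
    """One generation of Life: 'live' is a set of (row, col) cells that are alive."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or has exactly 2 live neighbors and is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: none of the rules mention movement, yet every
# 4 generations the pattern reappears one cell down and one to the right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

cells = glider
for _ in range(8):
    cells = step(cells)

assert cells == {(r + 2, c + 2) for r, c in glider}  # shifted by (2, 2) after 8 steps
```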

  When a machine learning system goes not from A to B but from A to G or perhaps from A to mauve, we have tick marks but no lines. We have advances but no story.

  We are already familiar with this type of lineless movement. On the net, a click can take us to a subworld we did not anticipate and that may be related in ways we do not understand. We’ve built a world together in which anything can be connected in any way that one of us imagines. We do this online, and we are doing it now with connections that machines on their own make among the real-world data we provide them. The densely linked structure of the net seems to be reflected in the picture of the world machine learning is constructing from the data we feed it.

  * * *

  The idea of progress was first applied to knowledge and our moral nature. In both of those domains, there is a perfect end to which we can aspire. Our knowledge can edge toward complete and error-free understanding. Our souls and behavior can move closer to their divine purpose. When we began to apply progress to our tools and technology, it too could be seen as advancing toward a perfect end: the train tracks stretch across the nation, and the train engines run faster, with fewer breakdowns, and require less fuel. Clocks keep better time, work on rocking ships, and then on rocket ships.

  The pull of that perfect endpoint made sense of progress. It still does. Each version number of a product should make it better or cheaper, and occasionally both. That’s old-school progress. And it’s powerful enough to get Apple fanboys to line up for days waiting for the latest iProduct.

  But if tech progress suddenly meant only that we get upgrades to our products, we would feel that we were in fact in an age of decline. We instead now measure technological progress not by its movement closer to perfection but by its generativity, its left turns, its disruption of expectations. As I write, virtual reality systems are making rapid and traditional-style progress in their quality—screen resolution, sound, weight, ease of setup—and price. But quality and price now seem simply like rather boring inhibitions we have to overcome in order to unleash the imagination of creators who will do things with VR that will startle us. Even before most of us have played a VR game, we cannot wait to see how VR will be deployed as a platform for everything from therapy sessions to interactive storytelling to new ways to engage socially. VR holds promise as a generative tick mark from which will emanate lines that lead to ideas in expected domains and to domains we never ever expected we’d be strapping on silly-looking goggles to experience.

  What drives this type of progress does not compel it to move in a particular direction. There is no perfection pulling it forward. Interoperability isn’t directional. It can’t be, for we are not the animators of the cold metal of the world, bending it to our will. Our relationship to technology is far more complex than that. Our will—our being—has been shaped from its beginnings by what the world offers us. We and our things work each other out in mutual play. That we will do so is inevitable. How we do so in an interoperable world is unpredictable.36

  Generative progress leaves lines that sprout, rush forward, twist back, abruptly stop, and then perhaps suddenly start again sprouting new branches with their own convolutions. Generativity turns what had been laborious into child’s play, sometimes literally. There is no Newtonian force driving this. Often, even looking backward, we cannot see a trace of inevitability. The motive force of this new type of progress may be commercial or social, but more often is—perhaps simultaneously—someone feeling in her heart that playing with some thing or system will reveal more of what it is, what she is, what we are, and what we could be.

  We have a word for the shape formed when movement rapidly emanates from a single point in myriad directions. It’s not an inclined line.

  It’s an explosion.

  Coda: What We Learn from Things

  I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.

  When John Perry Barlow wrote these words in his “Declaration of the Independence of Cyberspace” in 1996, they captured the sense many of us early enthusiasts had that not only would the World Wide Web give us a second chance at creating the era of peace, love, and understanding we’d tried for in the 1960s, but that this new world would arrive inevitably.

  We may have been wrong about the opportunity, but we were definitely wrong about the inevitability of the outcome.

  While more of the transformation occurred than we often acknowledge, and more of that transformation has been positive than we allow ourselves to believe, there’s no denying that the net hasn’t worked out the way we’d thought. There are many reasons why my cohort went so wrong about it. We vastly underestimated the tenacity and power of the existing institutions. We didn’t foresee the centralization of online power. While the open web had connected people to pages, we didn’t anticipate commercial entities being the ones that connect people to people. Perhaps most humiliating for me is the extent to which my vision was blurred by my privileged position as a middle-class, Western, white man. You mean not everyone would have the leisure time to browse, or the freedom and confidence to blog their views? Shocking—or at least an inconvenient truth. And those in more vulnerable positions might find their comments overrun by racist, sexist threats? That was worse than shocking, and remains appalling.

  The sense of inevitability with which the web’s early cohort, including me, greeted its supposed triumph is harder to understand. It seems to be a classic case of falling for technodeterminism, but I was aware of that trap. So why did I, at least, seem to walk straight into it?

  Technodeterminism is the belief that technology causes changes in a culture and society. How wide and deep those changes are, and how inevitable they are, determines just how much of a technodeterminist you are. For example, in 1962 the historian Lynn White Jr. wrote, “Few inventions have been so simple as the stirrup, but have had so catalytic an influence on history.”37 By putting the full thrust of a horse behind a lance, stirrups changed warfare, which then changed the social structure required to support horse-borne soldiers, eventually resulting in feudalism.38 In the case of the internet, technodeterminists say things like, “The internet will transform politics! The internet will make us free!”

  Technodeterminism has fallen so far into disrepute that just about the only people who seem to be technodeterminists are those who think the internet is a threat to civilization. When Nicholas Carr says that using the internet damages our ability to engage in long-form thought, he is being a technodeterminist.39 When Sherry Turkle says that using mobile phones is turning our children into narcissists, she is being a technodeterminist.40 Perhaps correctly.

  It took me an embarrassingly long time to realize the source of my assumption of the inevitability of the net’s triumph. It wasn’t technology that was the driver but rather my idealistic conviction that given an opportunity, people would rush to satisfy their human yearning to connect, to create, to speak in their own voices about what matters to them. Give us the means to do those things, and we will not let anything stop us. The determinacy I sensed was coming not from the tech but from our deep human need to connect and to create.

  * * *

  But that is too simple an answer. If technodeterminism attributes too much power over us to our tools, attributing all of tech’s effect to our humanity undervalues the role of tools in shaping us.

  Around the same time that my cohort was initially besotted with the internet, a philosopher named Andy Clark was coming up with an insight that helps explain how technology affects us: we think out in the world with tools.

  Clark means this quite literally. Take the whiteboard away from the physicist and she can no longer do the math that is her work. Take the graph paper away from the architect and she can no longer think about exactly where to place the stairs, and probably won’t come up with the idea of moving the closet to the left so the stairway can take an extra turn.

  As soon as I read the idea, I felt as if I had known it all along—the sign of a powerful idea. Nor am I alone in this: Clark’s article on the topic is the most cited philosophy paper of the 1990s.41

  Clark’s idea seems novel because in the West we’ve been brought up to think that our minds are radically separate from our bodies. Our bodies are physical objects, subject to the same physical laws as the clothes they wear and the ground they tread. But our minds escape those laws, at least according to thousands of years of Western tradition. Our minds are immaterial, and, as souls, are possibly eternal.

  There is beauty to that vision, but also terrible problems with it. Once you’ve decided the physical and mental realms are separate, you have to go through philosophical contortions to explain how the two can in fact affect each other. Morally, you may well end up denigrating the body not only as a mere vessel but as the source of desires that degrade our minds.

  Clark instead asks us to consider how we actually think. We figure out seating charts by shuffling name cards on a diagram of the tables. We figure out what we think about a topic by using an outliner—or PowerPoint—that lets us see how our ideas flow and fit together. We count coins by making physical stacks of them. We confirm the existence of the Higgs boson by building a particle collider 16.8 miles long. We know more than Socrates did because, despite his objections, we became literate and wrote down what we learned.42 We think not just with our heads but also with our hands and the tools they hold.

  In fact, all of our experience exists in our engagement in the world outside our heads; knowledge is just a special case of this.

  * * *

  Walking by the water’s edge, we see a flat rock and heft it. The stone suggests a project. We jiggle it slightly, our hand assessing its suitability. But for this project to have presented itself to us, we need more than stones and ponds. We need to have learned the nature of water’s ever-changing surface by having had to drink quickly from our cupped hands. Only then could our older cousin’s skipping of a rock on a lake make us laugh with the discovery of water’s hard top. We need fingers that can find the stone’s thinner edge that now shows itself as its front. We need a moment free of chores and countdown timers, and access to water that isn’t barricaded by wire fencing. We need everything to be ready for the plink of a stone marking with circles where it forgot to sink.

  The whole world is in this experience of our body’s engagement with a flat rock. We learn about ourselves by playing with things. We learn how the world works by playing with things.

  That is why technodeterminism is too simple to accept or to refute. We are not an effect of things, and things are not simply caused by us. Our purposes are shaped by what the things of the world allow, and those purposes reveal things in their relevant facets: the smoothness of the stone, the resistance of the water’s surface.43

  If we think out in the world with tools, and if our use of those tools shows us what sort of place the world is, and if our new tools are substantially different from the old ones, then perhaps we are beginning to understand our world differently.

  Perhaps very differently. We can disassemble a car engine to see how it works, and while no single person understands everything about the Large Hadron Collider, we can inquire about any aspect of it and expect to be able to find the answer. But not always with machine learning. Machine learning works, yet we cannot always interrogate it about why it works.

  Machine learning thereby undoes a founding idea of Western civilization: The Agreement that the human mind is uniquely attuned to the truth of the universe. For the ancient Hebrews, this was expressed by God’s making us in His image, not physically but in giving us minds that within our mortal limits can understand and appreciate His creation. For the ancient Greeks, the Logos was both the beautiful order of the universe and the rationality by which we mortals can apprehend that order. The Agreement has meant that our attempts to understand how things happen are not futile. It has meant that we belong in this universe. It has meant that we are special in this universe.

  The fact that a new technology is leading us to recognize that our ancient agreement is broken is not itself technodeterminist, any more than saying the flat stone reveals the pond as having a hidden surface is technodeterminist. We think out in the world with things in our hands. We experience out in the world with things in our hands. Each revelation is mutual. Each revelation is of the whole.

  Now we have a new tool in our hands.

  Chapter Seven

  Make. More. Meaning.

  Our success with the internet and machine learning is changing our minds about how things happen.

  As our tools head toward having the power to model dust in all its particularity, we are more willingly accepting the overwhelming complexity of our world.

  We are learning this first through our engaged hands. Our heads are lagging, as is to be expected.

  We are in transition. We are confused.

  Good.

  * * *

  Yes, it’s odd for a book to have a coda for every chapter, an essay different in style of writing and thought. That oddness is intentional. The codas are there to signal that this book does not intend to encapsulate its topics but to open them up. How could it be otherwise when the Twitter version of this book’s imperative is “Make. More. Future.”?

  A new paradigm for something as fundamental as how things happen affects not just business, government, education, and the other large-scale domains into which we traditionally divide our world. It pervades our understanding of everything.

  This last chapter—a coda of codas—attempts to trace some of the ways our embrace of complexity, even as it overwhelms our understanding, is enabling us to discover more of what our understanding aims at: a sense of meaning.

  Explanations

  My friend Timo Hannay was forty-six when he gave in to his wife’s counsel and went for his first physical exam in about ten years. He was told that all his systems were in good shape, although he could stand to lose a little weight.

  Three months later, he woke up on a Saturday morning feeling ill enough that his wife took him to the Royal Free Hospital in north London. “I ended up spending a week there,” he told me in an email. “They gave me an angiogram (thus diagnosing it formally as a myocardial infarction), inserted three stents and put me on a cocktail of drugs (anti-platelets, beta-blockers and statins), some of which I’ll continue to take for life.” He’s been following the regimen of meds, exercise, and diet, and feels healthier than he has in years.
