
Everyday Chaos


by David Weinberger


  Yet when Halley asked Newton for help calculating the path of the comet that would come to bear Halley’s name, Newton said no. The task was just too complicated.

  For Halley to prove that the heavenly body he had observed in 1682 was the same one recorded in 1607, 1531, and multiple times earlier, he had to predict when it would again return. This would be straightforward if the interval between observations were constant, but the intervals differed from one another by about a year. Halley thought that this might be caused by the gravitational attraction of Jupiter, Saturn, and the sun as the comet passed through the solar system. All he had to do was use Newton’s equations to factor in the pull of the planets and the sun, and out would pop the comet’s path and the date of its next pass through our solar system.

  That sounds easy. But the combined gravitational pull of those three bodies is different at every moment because they are in constant motion relative to one another, which means the gravitational forces they exert on each other are also constantly changing. This makes it a classic “three-body problem.” Such problems were so notoriously difficult that Newton declined Halley’s request because it would simply take too long to do the calculations. He had bigger thoughts to think.

  Halley took a swing at it on his own. Through mathematical cleverness and some approximations, he came to expect the comet sometime around the end of 1758. He died sixteen years before he could see whether his hypothesis would be confirmed.

  The predictability of the return of that particular light in the sky became a thumbs-up or thumbs-down moment for Newton’s theories themselves, touted by well-known intellectuals such as Adam Smith.23 With that much riding on it, three French aristocrats—Alexis Claude Clairaut, Joseph Jérôme Lefrançois de Lalande, and Nicole-Reine Étable de Labrière Lepaute (back when names were names!)—stepped in. They spent the summer months of 1757 filling in row after row of a table, calculating where the sun, Saturn, and Jupiter were relative to each other at one moment, how their gravitational fields would affect the course of the comet, and where their slightly altered positions would put the comet at the next moment. They did this in incremental steps of one or two degrees for each of 150 years of the comet’s path, with Clairaut checking the results for errors that, if not caught, could throw off all the subsequent calculations based on them.

  It was painstaking work, day after day from June through September, that Lalande later claimed left him permanently ill. On the other hand, Clairaut reported that Lepaute exhibited an “ardor” that was “surprising”—perhaps surprising to him because Lepaute was a woman; he later removed the acknowledgment of Lepaute’s considerable contribution from the published text. (Much of her later work was published without attribution by other people, including her husband, France’s royal clockmaker.)

  Clairaut presented the trio’s findings to the Académie des Sciences in November 1758, giving a two-month stretch during which they believed the comet would return. A German amateur astronomer spotted it that December, and on March 13, 1759, the comet made its closest pass by the sun, just two days outside that window. Modern scientists attribute this minor inaccuracy to the team’s failure to figure in the gravitational influence of Uranus and Neptune, planets undiscovered at the time. Later scientists also found two errors in the trio’s calculations that, luckily, canceled each other out.24

  In this story we see the next level of complexity in applying Newton’s laws. At the first level, those laws let us skip ahead: plug in the right data and you can tell that there will be a solar eclipse on January 30, 2921, as easily as you can predict the one on June 10, 2021. But there was no jumping ahead to predict the path of Halley’s comet—not because Newton’s laws don’t govern its motion, but because when multiple bodies are moving relative to one another and that movement affects where they are in relation to one another, the numbers to be plugged into the equations are constantly changing. The formulas remained simple, but the computation process was complicated. That’s why solving the problem took a summer of three aristocrats walking through it one step at a time.

  At this level, the complexity merely requires the patient reapplication of the known laws. We still do this today, although our computers do many summers of French aristocratic work in instants.
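
  To make the step-by-step character of that work concrete, here is a minimal sketch of the same kind of calculation in modern form: advance a comet a little at a time, recomputing the gravitational pull at every step because the geometry keeps changing. The two-dimensional setup, the simple Euler stepping scheme, and the placeholder starting numbers are illustrative assumptions, not the trio’s actual method.

```python
# A minimal sketch, not the 1757 method: nudge a comet forward one small
# time step at a time, recomputing gravity at each step. Positions are
# (x, y) pairs in meters, masses in kilograms, dt in seconds.

G = 6.674e-11  # gravitational constant

def step(pos, vel, attractors, dt):
    """Advance the comet by one time step using simple Euler integration."""
    x, y = pos
    ax = ay = 0.0
    for (bx, by), mass in attractors:
        dx, dy = bx - x, by - y
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += G * mass * dx / r3
        ay += G * mass * dy / r3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Placeholder bodies: the sun at the origin plus two "planets."
# In a fuller version, the planets' own positions would also be advanced
# every step, which is exactly what made the hand calculation so laborious:
# every step changes the inputs to the next one.
attractors = [((0.0, 0.0), 1.99e30), ((7.8e11, 0.0), 1.9e27), ((1.4e12, 0.0), 5.7e26)]
pos, vel = (5.0e12, 0.0), (0.0, 1.0e3)
for _ in range(10_000):                            # no shortcut: just repeat the step
    pos, vel = step(pos, vel, attractors, 86_400)  # one day per step
```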

  Level-two predictions show the world as a complicated but still predictable place. Their success maintains and reinforces our traditional paradigm of how things happen: knowable rules ensure that similar causes have similar, knowable effects.

  The path of a comet among massive moving bodies is a relatively simple problem involving only a tiny handful of moving parts, isolated from each other in the vastness of space. Prediction soon took two turns. The first, toward statistics and probability, acknowledged what Newton knew: the universe is so complicated that in fact we can’t always know the conditions under which the rules are operating. The second, as we’ll see, found an important problem with one of Newton’s—and our—assumptions about laws.

  Simple but Complicated

  We flip a coin to come up with a random decision because we can’t predict how it will land. Yet we know that Newton’s laws fully explain the coin’s flight, its descent, and whether it lands heads or tails. We also flip coins because we know something else: the odds are fifty-fifty that it will land either way.

  Probability theory originated several decades before the publication of Newton’s major work. It’s usually traced back to correspondence between Blaise Pascal and Pierre de Fermat in the mid-1600s about a question posed by a gambler: If you roll a pair of dice twenty-four times, are the chances that at least one of those rolls will be double sixes really fifty-fifty, as was assumed at the time? Pascal and Fermat’s work led to the publication of the first book on probability, written in 1657 by the mathematician and astronomer Christiaan Huygens. De Ratiociniis in Ludo Aleae—The Value of All Chances in Games of Fortune—was about gambling, a specialized case in which we want randomness to reign, but its math applied more broadly.
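
  The arithmetic behind the gambler’s question is short enough to show. A single roll of two dice misses double sixes with probability 35/36, so twenty-four independent rolls all miss with probability (35/36) raised to the twenty-fourth power, which puts the chance of at least one double six just under one half:

```python
from fractions import Fraction

p_miss_once = Fraction(35, 36)   # one roll of two dice: no double six
p_miss_all = p_miss_once ** 24   # twenty-four independent rolls, all misses
p_at_least_one = 1 - p_miss_all

print(float(p_at_least_one))     # about 0.4914: slightly worse than even odds
```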

  The idea of probability arose far earlier but was not pursued as a science. In fact, Plato himself dismissed it as the opposite of mathematics because the point and beauty of math were its provable, knowable, absolute rightness; the perfection of the heavenly sphere was embodied in its geometric precision.25 Here on Earth, the Greeks assumed that the gods determined the outcome of what we think of as random events.

  But by the seventeenth century, science was on the rise, the gods were well in retreat, and the world seemed to be ruled by systematic, repeatable causes. The roll of dice obeyed the causal laws, although the outcome was determined by minute, unmeasurable differences in the starting positions of the dice, the strength of the toss, the bounciness of the surface they landed on, and who knows (literally) what else. We realized that in some controlled instances, such as dice throws in which there’s a limited set of causes and possible outcomes, we can use the logic of mathematics to predict the probability of the various possible outcomes.

  Then, beginning with a paper in 1774, Laplace inverted probability theory, giving impetus to what we today call statistics. As Leonard Mlodinow puts it in The Drunkard’s Walk, probability “concerns predictions based on fixed probabilities”: you know the likelihood of rolling double sixes without having to gather data about the history of dice rolls. But statistics “concerns the inference of those probabilities based on observed data.”26 For example, based on the data about the punctuality of the number 66 bus, what are the chances that you’re going to arrive on time?
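
  The contrast can be put in a few lines of code. The bus records below are invented, and the two estimators shown (the raw frequency and Laplace’s own “rule of succession”) are simply the most elementary ways of turning observed data into a probability; nothing here is specific to Mlodinow’s example.

```python
# Invented punctuality records for the number 66 bus: True means it arrived on time.
observations = [True, True, False, True, True, True, False, True, True, True]

on_time = sum(observations)
trials = len(observations)

# Probability runs "forward": we never needed data to know a fair die's odds.
# Statistics runs the other way: infer the odds from what has been observed.
frequency_estimate = on_time / trials              # 0.8
laplace_estimate = (on_time + 1) / (trials + 2)    # 0.75, Laplace's rule of succession

print(frequency_estimate, laplace_estimate)
```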

  From the early nineteenth century on, statistics have had a tremendous influence on policy making. While from its start some have argued that it demeans human dignity to say not only that our behavior is predictable but that it can be read from mere columns of numbers, it’s demonstrably true that clouds of facts and data can yield insights into the behavior of masses of people, markets, and crowds, as well as into systems that, like the weather, lack free will.

  This has at least two profound effects on how we think about how what happens next emerges from what’s happening now. First, statistical research still at times shocks us by finding regularities in what appear to us to be events subject to many influences: people walking wherever they want on errands of every sort nevertheless wear paths into the ground. Second, probability theory and statistics have gotten us to accept that Plato was wrong: a statement that is uncertain can still count as scientific knowledge if it includes an accurate assessment of that uncertainty. “There’s a 50 percent chance that this coin will land heads up” is as true as “This coin just landed heads up.” Without these two consequences of probability theory and statistics, we would not be able to run our modern governments, businesses, or lives.

  But for all their importance, probability theory and statistics remain solidly within the Newtonian clockwork universe, usually yielding level-two predictions. They assume a causal universe that follows knowable laws. Their outcomes are probabilistic because the starting conditions are too complicated or minute to measure. These mathematical sciences not only did not contradict the clockwork paradigm about how the universe works but confirmed it by extending its reach to outcomes that once seemed to be random or accidental, due to the gods’ machinations, or the result of unpredictable free will. Such events may not be completely explicable, but they are probabilistically predictable because they are fully determined by the same laws that explain and predict the comets.

  From dust to stars, one set of laws shall rule them all.

  Simple and Complex

  When the age of computers began in the 1950s, it further cemented the Newtonian view of how things happen. A computer program was a tiny universe completely governed by knowable laws, with the important difference that humans got to decide what the laws would be. And we saw and it was good, and we called it programming. Once programmed, a computer operated like a perfect clockwork, resulting in completely reliable and predictable output no matter what data was entered—assuming the programmers had done their jobs well and the data was good.

  For sure, back then computers were very limited in the amount and complexity of the data they could deal with, even though it seemed so overwhelming that in the 1960s we started hearing about “information overload” as an imminent danger.27 That’s why computers looked like instruments of conformity to much of the culture: Exactly the same set of information was tracked for every person in a human resources database, and a different but equally uniform set for every product in a computerized inventory system. Because of the limits on computer memory and processing speed, these systems tracked the minimal information required. So, while IBM’s own internal personnel system tracked employees’ names, social security numbers, and wage scales, it’s highly unlikely that there was a field to note that that troublemaker in operations sometimes showed up to work in a sports coat instead of a conservative blue suit, or that Sasha in accounting was a serious student of flamenco dancing. Computers were a domain of perfect order enabled by a ruthless and uniform concision.

  So it was when, in 1970, John Conway invented a simple little game.

  Conway has held a distinguished professorship at Princeton for over twenty-five years and has authored seminal books on everything from telecommunications to particle physics. In 2015, the Guardian called him “the world’s most charismatic mathematician” and “the world’s most lovable egomaniac.” Outside his scholarly fields, he is best known for “the Game of Life,” a “no-player never-ending” game.28 The game may not have players or winners, but it does have a board, counters, and rules.

  The board is a grid. Each square represents a space where a person (represented by a counter) might live. At each turn, four rules are applied to each square to determine whether that square will have a counter in it on the next turn; the rules look at how many of the eight surrounding squares are currently occupied.29 A turn in the game consists in applying the rules to every square at once. This may sound a bit like the French aristocrats calculating the path of Halley’s comet, but the results are startlingly different.
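
  Conway’s four rules boil down to this: a live square with two or three live neighbors stays alive, a dead square with exactly three live neighbors comes to life, and every other square is empty on the next turn. Here is a minimal sketch of one turn; storing the board as a set of occupied coordinates is an implementation convenience, not part of the game’s definition.

```python
from collections import Counter

def life_step(live):
    """Apply Conway's rules once. `live` is a set of (x, y) occupied squares."""
    # Count, for every square adjacent to a live one, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly three neighbors; survival on two or three; all else empty.
    return {
        square
        for square, count in neighbor_counts.items()
        if count == 3 or (count == 2 and square in live)
    }

# One of the famous "gliders": apply life_step to it repeatedly and the
# five-square shape walks diagonally across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```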

  When the game was first made famous by a Scientific American column about it by Martin Gardner, computing time was so expensive that the Game of Life was designed to be played with graph paper, a pencil, and an eraser.30 In addition to being laborious, applying the rules by hand can mask what the computer’s rapid application of them makes clear: some of the starting patterns can fly.

  Most initial sets of filled-in squares either fritter away into random patterns or become uninterestingly repetitive, perhaps the same two squares blinking on and off forever. But some mutate into unexpected shapes. Some endlessly cycle between a set of quite different patterns. Some spawn “spaceships” or “gliders” that, when looked at in sequence, move across the page or shoot “bullets.” To this day, enthusiasts are still discovering patterns that move particularly quickly, that complexify rapidly, or that “eat” other shapes that come near them. Even in 2016, forty-six years after the invention of the game, people were breathlessly announcing new finds.31

  That’s evidence of the depth and complexity of this four-rule game. And that’s the point: Conway’s puzzle shows that simple rules can generate results that range from the boring, to the random and unpredictable, to animated birds that flap their wings and fly off the page. If, in a clockwork universe, simple rules yield simple, predictable results, in this universe, simple rules yield complexity and surprises. A clockwork that generated such results would be not just broken but surreal.

  The Game of Life was taken up as no mere game. The philosopher Daniel C. Dennett in the early 1990s thought the ideas behind it might explain consciousness itself.32 The technologist Raymond Kurzweil thinks that simple rules instantiated as computer programs will give rise to machines that not only think but think better than we do.33 The Game of Life influenced the mathematician and chaos theorist Stephen Wolfram’s development of a “New Kind of Science,” which explains the universe as a vast computer.34 Wolfram uses this approach—simple rules with complex results—to explain everything from the patterns in shattered glass to the placement of branches circling a sapling’s trunk.

  The Game of Life might even have confounded Laplace’s demon. Put yourself in the demon’s position. The board is set up, and some counters are already in place. How do you take the board to its next state? Apply the rules to square 1, and record the result. Then square 2. Continue until you’ve gone through all the squares. But now suppose you, the Imp of All Knowing, want to know how the board will look in two moves, or ten moves, or a thousand moves. Even someone with your powers can only get those answers by going through each of the moves. There are no shortcuts, no way to leap ahead, not even for an all-knowing imp. (Wolfram calls this the principle of computational irreducibility.) Isn’t it odd that we think we can leap ahead in predicting our own lives and business, but not when playing a game not much more complicated than tic-tac-toe?
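
  In terms of the earlier sketch (reusing its life_step function and glider pattern), the demon’s predicament is a one-line loop: there is no formula that jumps straight to generation one thousand, only the repeated application of the rules.

```python
def run(live, generations):
    """Advance a Game of Life board by the given number of turns."""
    for _ in range(generations):
        live = life_step(live)  # no shortcut: every intermediate board must be computed
    return live

board_after_1000 = run(glider, 1000)
```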

  The Game of Life shows that a universe with simple rules doesn’t itself have to be as predictable as a clock, where each tick is followed by a tock. Instead, what follows might be a tock, or it might be the blare of a foghorn or the smell of rye toast … and the only way to find out is to wind it up and give it a go. When small changes can have giant effects, even when we know the rules, we may not be able to predict the future. To know it, we have to live through it.

  While this third level of prediction means we have less control than we thought, it also has a certain appeal. Few of us would trade the net for a medium as predictable as the old three-network television was. Likewise, few of the generation brought up on open-world video games long to go back to the arcade days when, for a quarter, you got to move your avatar left and right while shooting at a steadily advancing line of low-resolution space aliens that inevitably overran you. And who would ban today’s best, most unpredictable television shows so we can go back to the good old days of 1950s TV?

  But even in the realms where the surprises that simple rules can generate are not pleasant diversions but rather global threats—biological, geopolitical, climatic—we are accepting this lack of predictability for two reasons. First, it’s a fact. Second, we are—seemingly paradoxically—getting better at predicting. We can predict further ahead. We can predict with greater accuracy. We can predict in domains—including the social—that we’d thought were simply impervious to any attempts to anticipate them.

  We are getting so much better at predicting in some domains because our technology—especially machine learning—does not insist on reducing complexity to a handful of simple rules but instead embraces complexity that far surpasses human understanding. When we were unable to do anything with this complexity, we ignored it and cast it aside as mere noise. Now that unfathomable complexity is enabling our machines to break the old boundaries of prediction, we’re able to open our eyes to the complexity in which our lives have always been embedded.

  As we’ll see in the next chapter, our new engines of prediction are able to make more accurate predictions and to make predictions in domains that we used to think were impervious to them because this new technology can handle far more data, constrained by fewer human expectations about how that data fits together, with more complex rules, more complex interdependencies, and more sensitivity to starting points. Our new technology is both further enlightening us and removing the requirement that we understand the how of our world in order to be able to predict what happens next.

  Such a radical change in the mechanics of prediction means fundamental changes in how we think the world works, and our role in what happens. In the next chapter, we’ll explore these changes by asking how AI “thinks” about the world.

 
