Algorithms to Live By


by Brian Christian


  In physics, what we call “temperature” is really velocity—random motion at the molecular scale. This was directly analogous, Kirkpatrick reasoned, to the random jitter that can be added to a hill-climbing algorithm to make it sometimes backtrack from better solutions to worse ones. In fact, the Metropolis Algorithm itself had initially been designed to model random behavior in physical systems (in that case, nuclear explosions). So what would happen, Kirkpatrick wondered, if you treated an optimization problem like an annealing problem—if you “heated it up” and then slowly “cooled it off”?

  Taking the ten-city vacation problem from above, we could start at a “high temperature” by picking our starting itinerary entirely at random, plucking one out of the whole space of possible solutions regardless of price. Then we can start to slowly “cool down” our search by rolling a die whenever we are considering a tweak to the city sequence. Taking a superior variation always makes sense, but we would only take inferior ones when the die shows, say, a 2 or more. After a while, we’d cool it further by only taking a higher-price change if the die shows a 3 or greater—then 4, then 5. Eventually we’d be mostly hill climbing, making the inferior move just occasionally when the die shows a 6. Finally we’d start going only uphill, and stop when we reached the next local max.
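
  If you want to see that schedule in action, here is a minimal sketch in Python. The flight prices are made-up stand-ins (the book gives no actual fares), and the number of tweaks per “temperature” stage is arbitrary:

```python
import random

# A stand-in for the ten-city vacation problem: a random symmetric
# table of flight prices between the cities (hypothetical data).
random.seed(0)
N = 10
price = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        price[i][j] = price[j][i] = random.randint(50, 500)

def trip_cost(order):
    """Total price of the round trip visiting the cities in this order."""
    return sum(price[order[k]][order[(k + 1) % N]] for k in range(N))

def anneal(steps_per_stage=2000):
    # "High temperature": start from a completely random itinerary.
    order = list(range(N))
    random.shuffle(order)
    # "Cool down": the die threshold rises from 2 to 7; since a die
    # can't show 7, the final stage is pure hill climbing.
    for threshold in range(2, 8):
        for _ in range(steps_per_stage):
            i, j = random.sample(range(N), 2)
            candidate = order[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if trip_cost(candidate) < trip_cost(order):
                order = candidate  # always take a superior variation
            elif random.randint(1, 6) >= threshold:
                order = candidate  # take an inferior one on a lucky roll
    return order, trip_cost(order)

print(anneal())
```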

  This approach, called Simulated Annealing, seemed like an intriguing way to map physics onto problem solving. But would it work? The initial reaction among more traditional optimization researchers was that this whole approach just seemed a little too … metaphorical. “I couldn’t convince math people that this messy stuff with temperatures, all this analogy-based stuff, was real,” says Kirkpatrick, “because mathematicians are trained to really distrust intuition.”

  But any distrust regarding the analogy-based approach would soon vanish: at IBM, Kirkpatrick and Gelatt’s simulated annealing algorithms started making better chip layouts than the guru. Rather than keep mum about their secret weapon and become cryptic guru figures themselves, they published their method in a paper in Science, opening it up to others. Over the next few decades, that paper would be cited a whopping thirty-two thousand times. To this day, simulated annealing remains one of the most promising approaches to optimization problems known to the field.

  Randomness, Evolution, and Creativity

  In 1943, Salvador Luria didn’t know he was about to make a discovery that would lead to a Nobel Prize; he thought he was going to a dance. A recent immigrant to the United States from Mussolini’s Italy, where his Sephardic Jewish family had lived, Luria was a researcher studying how bacteria developed immunity to viruses. But at this moment his research was far from his mind, as he attended a faculty gathering at a country club near Indiana University.

  Luria was watching one of his colleagues play a slot machine:

  Not a gambler myself, I was teasing him about his inevitable losses, when he suddenly hit the jackpot, about three dollars in dimes, gave me a dirty look, and walked away. Right then I began giving some thought to the actual numerology of slot machines; in doing so it dawned on me that slot machines and bacterial mutations have something to teach each other.

  In the 1940s, it wasn’t known exactly why or how bacterial resistance to viruses (and, for that matter, to antibiotics) came about. Was resistance a reaction within the bacteria to the virus, or were there simply ongoing mutations that occasionally produced it by accident? There seemed no way to devise an experiment that would offer a decisive answer one way or the other—that is, until Luria saw that slot machine and something clicked. Luria realized that if he bred several generations of different lineages of bacteria, then exposed the last generation to a virus, one of two radically different things would happen. If resistance was a response to the virus, he’d expect roughly the same number of resistant bacteria to appear in every one of his bacterial cultures, regardless of their lineage. On the other hand, if resistance emerged from chance mutations, he’d expect to see something a lot more uneven—just like a slot machine’s payouts. That is, bacteria from most lineages would show no resistance at all; some lineages would have a single “grandchild” culture that had mutated to become resistant; and on rare occasions, if the proper mutation had happened several generations up the “family tree,” there would be a jackpot: all the “grandchildren” in the lineage would be resistant. Luria left the dance as soon as he could and set the experiment in motion.
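
  The logic of the experiment is easy to simulate. Below is a toy Monte Carlo sketch of the two hypotheses; the mutation rate, culture count, and number of generations are illustrative values, not Luria’s actual figures:

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuation_test(cultures=50, generations=20, mu=1e-6):
    """Grow each culture from a single sensitive cell by doubling; at
    each division a daughter mutates to resistance with probability mu,
    and mutants breed true. Returns the resistant count per culture."""
    counts = []
    for _ in range(cultures):
        sensitive, resistant = 1, 0
        for _ in range(generations):
            new_mutants = rng.binomial(sensitive, mu)  # chance mutations
            sensitive = 2 * sensitive - new_mutants
            resistant = 2 * resistant + new_mutants
        counts.append(resistant)
    return np.array(counts)

mutation = fluctuation_test()
# Under the rival "induced response" hypothesis, every culture would be
# hit by roughly the same (Poisson-distributed) number of resistance
# events, so we can model it with a Poisson draw of equal mean.
induced = rng.poisson(mutation.mean(), size=len(mutation))

# The telltale signature: chance mutation gives a variance far above the
# mean (the jackpots); an induced response keeps them roughly equal.
print(f"mutation: mean {mutation.mean():.1f}, variance {mutation.var():.1f}")
print(f"induced:  mean {induced.mean():.1f}, variance {induced.var():.1f}")
```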

  After several days of tense, restless waiting, Luria returned to the lab to check on his colonies. Jackpot.

  Luria’s discovery was about the power of chance: about how random, haphazard mutations can produce viral resistance. But it was also, at least in part, due to the power of chance. He was in the right place at the right time, where seeing the slot machine triggered a new idea. Tales of discovery often feature a similar moment: Newton’s (possibly apocryphal) apple, Archimedes’ bathtub “Eureka!,” the neglected petri dish that grew Penicillium mold. Indeed, it’s a common enough phenomenon that a word was invented to capture it: in 1754, Horace Walpole coined the term “serendipity,” based on the fairy tale adventures of The Three Princes of Serendip (Serendip being the archaic name of Sri Lanka), who “were always making discoveries, by accidents and sagacity, of things they were not in quest of.”

  This double role of randomness—a key part of biology, a key part of discovery—has repeatedly caught the eye of psychologists who want to explain human creativity. An early instance of this idea was offered by William James. In 1880, having recently been appointed assistant professor of psychology at Harvard, and ten years away from publishing his definitive Principles of Psychology, James wrote an article in the Atlantic Monthly called “Great Men, Great Thoughts, and the Environment.” The article opens with his thesis:

  A remarkable parallel, which to my knowledge has never been noticed, obtains between the facts of social evolution and the mental growth of the race, on the one hand, and of zoölogical evolution, as expounded by Mr. Darwin, on the other.

  At the time James was writing, the idea of “zoölogical evolution” was still fresh—On the Origin of Species having been published in 1859 and Mr. Darwin himself still alive. James discussed how evolutionary ideas might be applied to different aspects of human society, and toward the end of the article turned to the evolution of ideas:

  New conceptions, emotions, and active tendencies which evolve are originally produced in the shape of random images, fancies, accidental out-births of spontaneous variation in the functional activity of the excessively unstable human brain, which the outer environment simply confirms or refutes, adopts or rejects, preserves or destroys—selects, in short, just as it selects morphological and social variations due to molecular accidents of an analogous sort.

  James thus viewed randomness as the heart of creativity. And he believed it was magnified in the most creative people. In their presence, he wrote, “we seem suddenly introduced into a seething caldron of ideas, where everything is fizzling and bobbing about in a state of bewildering activity, where partnerships can be joined or loosened in an instant, treadmill routine is unknown, and the unexpected seems the only law.” (Note here the same “annealing” intuition, rooted in metaphors of temperature, where wild permutation equals heat.)

  The modern instantiation of James’s theory appears in the work of Donald Campbell, a psychologist who lived a hundred years later. In 1960, Campbell published a paper called “Blind Variation and Selective Retention in Creative Thought as in Other Knowledge Processes.” Like James, he opened with his central thesis: “A blind-variation-and-selective-retention process is fundamental to all inductive achievements, to all genuine increases in knowledge, to all increases in fit of system to environment.” And like James he was inspired by evolution, thinking about creative innovation as the outcome of new ideas being generated randomly and astute human minds retaining the best of those ideas. Campbell supported his argument liberally with quotes from other scientists and mathematicians about the processes behind their own discoveries. The nineteenth-century physicists and philosophers Ernst Mach and Henri Poincaré both seemed to offer an account similar to Campbell’s, with Mach going so far as to declare that “thus are to be explained the statements of Newton, Mozart, Richard Wagner, and others, when they say that thought, melodies, and harmonies had poured in upon them, and that they had simply retained the right ones.”

  When it comes to stimulating creativity, a common technique is introducing a random element, such as a word that people have to form associations with. For example, musician Brian Eno and artist Peter Schmidt created a deck of cards known as Oblique Strategies for solving creative problems. Pick a card, any card, and you will get a random new perspective on your project. (And if that sounds like too much work, you can now download an app that will pick a card for you.) Eno’s account of why they developed the cards has clear parallels with the idea of escaping local maxima:

  When you’re very in the middle of something, you forget the most obvious things. You come out of the studio and you think “why didn’t we remember to do this or that?” These [cards] really are just ways of throwing you out of the frame, of breaking the context a little bit, so that you’re not a band in a studio focused on one song, but you’re people who are alive and in the world and aware of a lot of other things as well.

  Being randomly jittered, thrown out of the frame and focused on a larger scale, provides a way to leave what might be locally good and get back to the pursuit of what might be globally optimal.

  And you don’t need to be Brian Eno to add a little random stimulation to your life. Wikipedia, for instance, offers a “Random article” link, and Tom has been using it as his browser’s default homepage for several years, seeing a randomly selected Wikipedia entry each time he opens a new window. While this hasn’t yet resulted in any striking discoveries, he now knows a lot about some obscure topics (such as the kind of knife used by the Chilean armed forces) and he feels that some of these have enriched his life. (For example, he’s learned that there is a word in Portuguese for a “vague and constant desire for something that does not and probably cannot exist,” a problem we still can’t solve with a search engine.) An interesting side effect is that he now also has a better sense not just of what sorts of topics are covered on Wikipedia, but also of what randomness really looks like. For example, pages that feel like they have some connection to him—articles about people or places he knows—show up with what seems like surprising frequency. (In a test, he got “Members of the Western Australian Legislative Council, 1962–1965” after just two reloads, and he grew up in Western Australia.) Knowing that these are actually randomly generated makes it possible to become better calibrated for evaluating other “coincidences” in the rest of his life.

  In the physical world, you can randomize your vegetables by joining a Community-Supported Agriculture farm, which will deliver a box of produce to you every week. As we saw earlier, a CSA subscription does potentially pose a scheduling problem, but being sent fruits and vegetables you wouldn’t normally buy is a great way to get knocked out of a local maximum in your recipe rotation. Likewise, book-, wine-, and chocolate-of-the-month clubs are a way to get exposed to intellectual, oenophilic, and gustatory possibilities that you might never have encountered otherwise.

  You might worry that making every decision by flipping a coin could lead to trouble, not least with your boss, friends, and family. And it’s true that mainlining randomness into your life is not necessarily a recipe for success. The cult classic 1971 novel The Dice Man by Luke Rhinehart (real name: George Cockcroft) provides a cautionary tale. Its narrator, a man who replaces decision-making with dice rolling, quickly ends up in situations that most of us would probably like to avoid.

  But perhaps it’s just a case of a little knowledge being a dangerous thing. If the Dice Man had only had a deeper grasp of computer science, he’d have had some guidance. First, from Hill Climbing: even if you’re in the habit of sometimes acting on bad ideas, you should always act on good ones. Second, from the Metropolis Algorithm: your likelihood of following a bad idea should be inversely proportional to how bad an idea it is. Third, from Simulated Annealing: you should front-load randomness, rapidly cooling out of a totally random state, using ever less and less randomness as time goes on, lingering longest as you approach freezing. Temper yourself—literally.
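
  In code, those three lessons combine into a single acceptance test. Here is a sketch using the standard exponential form of the Metropolis rule (that exponential is the physics convention; the text above says only “inversely proportional”) with a geometric cooling schedule:

```python
import math
import random

def follow_idea(badness, temperature):
    """Hill Climbing: always act on good ideas (badness <= 0).
    Metropolis: act on a bad one with a probability that shrinks as
    the badness grows and as the temperature cools."""
    if badness <= 0:
        return True
    return random.random() < math.exp(-badness / temperature)

# Simulated Annealing: front-load the randomness, then cool off.
temperature = 10.0
for day in range(365):
    badness = random.uniform(-1, 5)            # today's idea, scored
    acted = follow_idea(badness, temperature)  # follow it or not
    temperature *= 0.98                        # less random every day
```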

  This last point wasn’t lost on the novel’s author. Cockcroft himself apparently turned, not unlike his protagonist, to “dicing” for a time in his life, living nomadically with his family on a Mediterranean sailboat, in a kind of Brownian slow motion. At some point, however, his annealing schedule cooled off: he settled down comfortably into a local maximum, on a lake in upstate New York. Now in his eighties, he’s still contentedly there. “Once you got somewhere you were happy,” he told the Guardian, “you’d be stupid to shake it up any further.”

  *Interestingly, some of these experiments appear to have produced a far better estimate of π than would be expected by chance—which suggests that they may have been deliberately cut short at a good stopping point, or faked altogether. For example, in 1901 the Italian mathematician Mario Lazzarini supposedly made 3,408 tosses and obtained an estimate of π ≈ 355⁄113 = 3.1415929 (the actual value of π to seven decimal places is 3.1415927). But if the number of times the needle crossed the line had been off by just a single toss, the estimate would have been far less pretty—3.1398 or 3.1433—which makes Lazzarini’s report seem suspicious. Laplace might have found it fitting that we can use Bayes’s Rule to confirm that this result is unlikely to have arisen from a valid experiment.
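
  The footnote’s arithmetic is easy to check. Assuming the needle-to-line-spacing ratio of 5/6 that Lazzarini reportedly used, Buffon’s formula estimates π as 2(l/d) · n/h for n tosses and h crossings:

```python
from fractions import Fraction

n = 3408                      # Lazzarini's reported number of tosses
for h in (1807, 1808, 1809):  # his reported crossings, plus or minus one
    estimate = Fraction(2 * 5, 6) * n / h     # pi ~ 2 * (l/d) * n / h
    print(h, estimate, float(estimate))
# 1808 crossings give exactly 355/113 = 3.14159292...; being off by a
# single crossing yields the far less pretty values quoted above.
```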

  *You don’t need to check beyond the square root, because if a number has a factor greater than its square root then by definition it must also have a corresponding factor smaller than the square root—so you would have caught it already. If you’re looking for factors of 100, for instance, every factor that’s greater than 10 will be paired with a factor smaller than 10: 20 is matched up with 5, 25 with 4, and so on.
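
  For the curious, here is a minimal, generic trial-division sketch of that shortcut (not code from the book):

```python
import math

def is_prime(n):
    """Trial division, checking candidates only up to the square root:
    any factor above sqrt(n) would pair with one below it (20 with 5,
    25 with 4, ...), which would already have been found."""
    if n < 2:
        return False
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return False
    return True

print([p for p in range(2, 30) if is_prime(p)])  # 2, 3, 5, 7, 11, ...
```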

  *Twin primes are consecutive odd numbers that are both prime, like 5 and 7.

  *Note that we deliberately took the very first story from the site—that is, we did not read through all of them to pick one to share, which would have defeated the purpose.

  10 Networking

  How We Connect

  The term connection has a wide variety of meanings. It can refer to a physical or logical path between two entities, it can refer to the flow over the path, it can inferentially refer to an action associated with the setting up of a path, or it can refer to an association between two or more entities, with or without regard to any path between them.

  —VINT CERF AND BOB KAHN

  Only connect.

  —E. M. FORSTER

  The long-distance telegraph began with a portent—Samuel F. B. Morse, standing in the chambers of the US Supreme Court on May 24, 1844, wiring his assistant Alfred Vail in Baltimore a verse from the Old Testament: “WHAT HATH GOD WROUGHT.” The first thing we ask of any new connection is how it began, and from that origin we can’t help trying to augur its future.

  The first telephone call in history, made by Alexander Graham Bell to his assistant on March 10, 1876, began with a bit of a paradox. “Mr. Watson, come here; I want to see you”—a simultaneous testament to its ability and inability to overcome physical distance.

  The cell phone began with a boast—Motorola’s Martin Cooper walking down Sixth Avenue on April 3, 1973, as Manhattan pedestrians gawked, calling his rival Joel Engel at AT&T: “Joel, I’m calling you from a cellular phone. A real cellular phone: a handheld, portable, real cellular phone.” (“I don’t remember exactly what he said,” Cooper recalls, “but it was really quiet for a while. My assumption was that he was grinding his teeth.”)

  And the text message began, on December 3, 1992, with cheer: Neil Papworth at Sema Group Telecoms wishing Vodafone’s Richard Jarvis an early “Merry Christmas.”

  The beginnings of the Internet were, somehow fittingly, much humbler and more inauspicious than all of that. It was October 29, 1969, and Charley Kline at UCLA sent to Bill Duvall at the Stanford Research Institute the first message ever transmitted from one computer to another via the ARPANET. The message was “login”—or would have been, had the receiving machine not crashed after “lo.”

  Lo—verily, Kline managed to sound portentous and Old Testament despite himself.

  The foundation of human connection is protocol—a shared convention of procedures and expectations, from handshakes and hellos to etiquette, politesse, and the full gamut of social norms. Machine connection is no different. Protocol is how we get on the same page; in fact, the word is rooted in the Greek protokollon, “first glue,” which referred to the outer page attached to a book or manuscript.

  In interpersonal affairs, these protocols prove a subtle but perennial source of anxiety. I sent so-and-so a message however many days ago; at what point do I begin to suspect they never received it? It’s now 12:05 p.m. and our call was set for noon; are we both expecting each other to be the one calling? Your answer seems odd; did I mishear you or did you mishear me? Come again?

  Most of our communication technology—from the telegraph to the text—has merely provided us with new conduits to experience these familiar person-to-person challenges. But with the Internet, computers became not only the conduit but also the endpoints: the ones doing the talking. As such, they’ve needed to be responsible for solving their own communication issues. These machine-to-machine problems—and their solutions—at once mimic and illuminate our own.

 
