Algorithms to Live By

by Brian Christian

  Algorithmic game theory has made huge contributions to a number of practical applications over the past twenty years: helping us understand packet routing on the Internet, improving FCC spectrum auctions that allocate precious (if invisible) public goods, and enhancing the matching algorithms that pair medical students with hospitals, among others. And this is likely just the beginning of a much larger transformation. “We are just scratching the surface,” says Nisan. “Even in the theory we are just starting to understand it. And there is another generation probably until what I completely understand today theoretically will successfully be applied to humans. It’s a generation; I think not more than that. It will take a generation.”

  French existentialist philosopher Jean-Paul Sartre famously wrote that “Hell is other people.” He didn’t mean that others are inherently malicious or unpleasant, but rather that they complicate our own thoughts and beliefs:

  When we think about ourselves, when we try to know ourselves … we use the knowledge of us which other people already have. We judge ourselves with the means other people have and have given us for judging ourselves. Into whatever I say about myself someone else’s judgment always enters. Into whatever I feel within myself someone else’s judgment enters.… But that does not at all mean that one cannot have relations with other people. It simply brings out the capital importance of all other people for each one of us.

  Perhaps, given what we’ve seen in this chapter, we might endeavor to revise Sartre’s statement. Interacting with others doesn’t have to be a nightmare—although in the wrong game it surely can be. As Keynes observed, popularity is complicated, intractable, a recursive hall of mirrors; but beauty, in the eye of the beholder, is not. Adopting a strategy that doesn’t require anticipating, predicting, reading into, or changing course because of the tactics of others is one way to cut the Gordian knot of recursion. And sometimes that strategy is not just easy—it’s optimal.

  If changing strategies doesn’t help, you can try to change the game. And if that’s not possible, you can at least exercise some control over which games you choose to play. The road to hell is paved with intractable recursions, bad equilibria, and information cascades. Seek out games where honesty is the dominant strategy. Then just be yourself.

  *Indeed, it’s the origin of all modern computers—it was the halting problem that inspired Turing to formally define computation, via what we now call the Turing machine.

  *Binmore adds another insight: games like the prisoner’s dilemma seemingly obliterate Immanuel Kant’s argument that rationality consists of what he called the “categorical imperative,” acting the way you wish everyone else would act. The categorical imperative would give us a better outcome in the prisoner’s dilemma than the equilibrium strategy, but there’s no getting around the fact that this outcome isn’t a stable one.

  Conclusion

  Computational Kindness

  I firmly believe that the important things about humans are social in character and that relief by machines from many of our present demanding intellectual functions will finally give the human race time and incentive to learn how to live well together.

  —MERRILL FLOOD

  Any dynamic system subject to the constraints of space and time is up against a core set of fundamental and unavoidable problems. These problems are computational in nature, which makes computers not only our tools but also our comrades. From this come three simple pieces of wisdom.

  First, there are cases where computer scientists and mathematicians have identified good algorithmic approaches that can simply be transferred over to human problems. The 37% Rule, the Least Recently Used criterion for handling overflowing caches, and the Upper Confidence Bound as a guide to exploration are all examples of this.

  Second, knowing that you are using an optimal algorithm should be a relief even if you don’t get the results you were looking for. The 37% Rule fails 63% of the time. Maintaining your cache with LRU doesn’t guarantee that you will always find what you’re looking for; in fact, neither would clairvoyance. Using the Upper Confidence Bound approach to the explore/exploit tradeoff doesn’t mean that you will have no regrets, just that those regrets will accumulate ever more slowly as you go through life. Even the best strategy sometimes yields bad results—which is why computer scientists take care to distinguish between “process” and “outcome.” If you followed the best possible process, then you’ve done all you can, and you shouldn’t blame yourself if things didn’t go your way.
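  As a concrete check on that 63% figure, here is a minimal Python simulation of the 37% Rule. The function name and the choice of one hundred candidates are our own illustration, not anything from the book:

```python
import random

def secretary_trial(n=100):
    """One run of the 37% Rule: look at the first ~n/e candidates
    without committing, then leap at the first one who beats them all.
    Returns True if the candidate chosen is the overall best."""
    ranks = list(range(n))          # higher number = better candidate
    random.shuffle(ranks)
    cutoff = round(n / 2.71828)     # n/e is roughly 37% of n
    best_seen = max(ranks[:cutoff])
    for rank in ranks[cutoff:]:
        if rank > best_seen:
            return rank == n - 1    # leapt: did we land the very best?
    return ranks[-1] == n - 1       # never leapt: stuck with the last one

trials = 100_000
wins = sum(secretary_trial() for _ in range(trials))
print(f"found the best candidate in {wins / trials:.1%} of runs")  # ~37%
```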

  Outcomes make news headlines—indeed, they make the world we live in—so it’s easy to become fixated on them. But processes are what we have control over. As Bertrand Russell put it, “it would seem we must take account of probability in judging of objective rightness.… The objectively right act is the one which will probably be most fortunate. I shall define this as the wisest act.” We can hope to be fortunate—but we should strive to be wise. Call it a kind of computational Stoicism.

  Finally, we can draw a clear line between problems that admit straightforward solutions and problems that don’t. If you wind up stuck in an intractable scenario, remember that heuristics, approximations, and strategic use of randomness can help you find workable solutions. A theme that came up again and again in our interviews with computer scientists was: sometimes “good enough” really is good enough. What’s more, being aware of complexity can help us pick our problems: if we have control over which situations we confront, we should choose the ones that are tractable.

  But we don’t only pick the problems that we pose to ourselves. We also pick the problems we pose each other, whether it’s the way we design a city or the way we ask a question. This creates a surprising bridge from computer science to ethics—in the form of a principle that we call computational kindness.

  * * *

  There’s a certain paradox the two of us observed when it came to scheduling the interviews that went into this book. Our interviewees were on average more likely to be available when we requested a meeting, say, “next Tuesday between 1:00 and 2:00 p.m. PST” than “at a convenient time this coming week.” At first this seems absurd, like the celebrated studies where people on average donate more money to save the life of one penguin than eight thousand penguins, or report being more worried about dying in an act of terrorism than about dying from any cause, terrorism included. In the case of interviews, it seems that people preferred receiving a constrained problem, even if the constraints were plucked out of thin air, to a wide-open one. It was seemingly less difficult for them to accommodate our preferences and constraints than to compute a better option based on their own. Computer scientists would nod knowingly here, citing the complexity gap between “verification” and “search”—which is about as wide as the gap between knowing a good song when you hear it and writing one on the spot.
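  To make that gap vivid with a toy example of our own (subset-sum, a classic hard search problem, is our stand-in here; it isn’t one the book uses), checking a proposed answer takes one line, while producing an answer from scratch can mean trying exponentially many possibilities:

```python
import itertools

def verify(numbers, subset, target):
    # Verification: given a candidate answer, checking it is instant.
    return set(subset) <= set(numbers) and sum(subset) == target

def search(numbers, target):
    # Search: with no candidate in hand, the obvious method tries
    # every subset -- 2^n of them in the worst case.
    for r in range(len(numbers) + 1):
        for subset in itertools.combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, (4, 5), 9))   # True: knowing a good answer when you see it
print(search(nums, 9))           # (4, 5): writing one on the spot is work
```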

  One of the implicit principles of computer science, as odd as it may sound, is that computation is bad: the underlying directive of any good algorithm is to minimize the labor of thought. When we interact with other people, we present them with computational problems—not just explicit requests and demands, but implicit challenges such as interpreting our intentions, our beliefs, and our preferences. It stands to reason, therefore, that a computational understanding of such problems casts light on the nature of human interaction. We can be “computationally kind” to others by framing issues in terms that make the underlying computational problem easier. This matters because many problems—especially social ones, as we’ve seen—are intrinsically and inextricably hard.

  Consider this all-too-common scenario. A group of friends are standing around, trying to figure out where to go for dinner. Each of them clearly has some preferences, albeit potentially weak ones. But none of them wants to state those preferences explicitly, so they politely navigate the social hazards with guesses and half-hints instead.

  They may well come to a resolution that is satisfying to all. But this procedure can easily go awry. The summer after college, for instance, Brian and two friends took a trip to Spain. They negotiated the trip itinerary on the fly, and at one point it became clear that they wouldn’t have time to go to the bullfight they’d researched and planned. Only then, as each of the three attempted to console the others, did they suddenly discover that in fact none of them had wanted to see the bullfight in the first place. Each had just gamely adopted what they’d perceived to be the others’ level of enthusiasm, thereby producing the level of enthusiasm that the others gamely adopted in turn.

  Likewise, seemingly innocuous language like “Oh, I’m flexible” or “What do you want to do tonight?” has a dark computational underbelly that should make you think twice. It has the veneer of kindness about it, but it does two deeply alarming things. First, it passes the cognitive buck: “Here’s a problem, you handle it.” Second, by not stating your preferences, it invites the others to simulate or imagine them. And as we have seen, the simulation of the minds of others is one of the biggest computational challenges a mind (or machine) can ever face.

  In such situations, computational kindness and conventional etiquette diverge. Politely withholding your preferences puts the computational problem of inferring them on the rest of the group. In contrast, politely asserting your preferences (“Personally, I’m inclined toward x. What do you think?”) helps shoulder the cognitive load of moving the group toward resolution.

  Alternatively, you can try to reduce, rather than maximize, the number of options that you give other people—say, offering a choice between two or three restaurants rather than ten. If each person in the group eliminates their least preferred option, that makes the task easier for everyone. And if you’re inviting somebody out to lunch, or scheduling a meeting, offering one or two concrete proposals that they can accept or decline is a good starting point.

  None of these actions is necessarily “polite,” but all of them can significantly lower the computational cost of interaction.

  * * *

  Computational kindness isn’t just a principle of behavior; it’s also a design principle.

  In 2003, University of Waterloo computer scientist Jeffrey Shallit investigated the question of what coin, if put into circulation in the United States, would most help to minimize the number of coins needed on average to make change. Delightfully, the answer turned out to be an 18-cent piece—but Shallit was somewhat stayed from making a policy recommendation by computational concerns.

  At present, change-making is dead simple: for any given amount, just use as many quarters as you can without going over, then as many dimes as possible, and so on down the denominations. For instance, fifty-four cents is two quarters, then four pennies. With an 18-cent piece, that simple algorithm is no longer optimal: fifty-four cents is then best made with three 18-cent pieces—and no quarters at all. In fact, Shallit observed that ungainly denominations turn change-making into something “at least as hard … as the traveling salesman problem.” That’s a lot to ask of a cashier. If ease of computation is taken into account, Shallit found, then what the US money supply could best make use of is either a 2-cent or a 3-cent piece. Not quite as exciting as an 18-cent coin—but almost as good, and computationally kinder by a long shot.
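  A small Python sketch of the contrast (our own illustration; Shallit’s analysis is far more general) puts the cashier’s greedy rule next to a dynamic-programming search for the true minimum:

```python
def greedy_change(amount, coins):
    """The cashier's rule: largest coin first. Optimal for US coins."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def optimal_change(amount, coins):
    """Fewest coins via dynamic programming, for when greedy breaks."""
    best = {0: []}      # best[t] = shortest coin list summing to t cents
    for total in range(1, amount + 1):
        options = [best[total - c] + [c]
                   for c in coins if total - c in best]
        if options:
            best[total] = min(options, key=len)
    return best.get(amount)

print(greedy_change(54, (25, 10, 5, 1)))       # [25, 25, 1, 1, 1, 1]
print(greedy_change(54, (25, 18, 10, 5, 1)))   # still six coins
print(optimal_change(54, (25, 18, 10, 5, 1)))  # [18, 18, 18] -- three coins
```

  With the standard denominations, greedy and optimal always agree; add the 18-cent piece, and a cashier following the familiar rule silently hands over twice as many coins as necessary.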

  The deeper point is that subtle changes in design can radically shift the kind of cognitive problem posed to human users. Architects and urban planners, for instance, have choices about how they construct our environment—which means they have choices about how they will structure the computational problems we have to solve.

  Consider a large parking lot, with an array of different lanes, of the kind often found at stadiums and shopping centers. You may drive in one lane toward the destination, see a spot, decide to let it go in favor of (hopefully) a better one farther ahead—but then, finding no such luck, reach the destination and head away down a neighboring lane. After a certain amount of driving, you must decide whether another space is good enough to take, or so far away that you’ll try searching in a third lane instead.

  An algorithmic perspective here is useful not just for the driver but also for the architect. Contrast the hairy, messy decision problem posed by one of those lots to a single linear path going away from one’s destination. In that case, one simply takes the first available space—no game theory, no analysis, no look-then-leap rule needed. Some parking garages are structured this way, with a single helix winding upward from the ground level. Their computational load is zero: one simply drives forward until the first space appears, then takes it. Whatever the other possible factors for and against this kind of construction, we can definitely say that it’s cognitively humane to its drivers—computationally kind.

  One of the chief goals of design ought to be protecting people from unnecessary tension, friction, and mental labor. (This is not just an abstract concern; when mall parking becomes a source of stress, for instance, shoppers may spend less money and return less frequently.) Urban planners and architects routinely weigh how different lot designs will use resources such as limited space, materials, and money. But they rarely account for the way their designs tax the computational resources of the people who use them. Recognizing the algorithmic underpinnings of our daily lives—in this case, optimal stopping—would not only allow drivers to make the best decisions when they’re in a particular scenario, but also encourage planners to be more thoughtful about the problems they’re forcing drivers into in the first place.

  There are a number of other cases where computationally kinder designs suggest themselves. For example, consider restaurant seating policies. Some restaurants have an “open seating” policy, where waiting customers simply hover until a table opens up, and the first to sit down gets the table. Others will take your name, let you have a drink at the bar, and notify you when a table is ready. These approaches to the management of scarce shared resources mirror the distinction in computer science between “spinning” and “blocking.” When a processing thread requests a resource and can’t get it, the computer can either allow that thread to “spin”—to continue checking for the resource in a perpetual “Is it ready yet?” loop—or it can “block”: halt that thread, work on something else, and then come back around whenever the resource becomes free. To a computer scientist, this is a practical tradeoff: weighing the time lost to spinning against the time lost in context switching. But at a restaurant, not all of the resources being traded off are their own. A policy of “spinning” fills empty tables faster, but the CPUs being worn out in the meantime are the minds of their customers, trapped in a tedious but consuming vigilance.
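  In code, the distinction looks something like this toy Python sketch (the restaurant framing of the two threads is ours):

```python
import threading
import time

table_ready = threading.Event()

def spinning_diner():
    # "Open seating": hover and poll in a perpetual
    # "is it ready yet?" loop, paying attention on every check.
    while not table_ready.is_set():
        time.sleep(0.1)
    print("spinning diner pounced on the table")

def blocking_diner():
    # "Take your name": sleep until the host signals;
    # no attention is spent in the meantime.
    table_ready.wait()
    print("blocking diner was called when the table opened")

threading.Thread(target=spinning_diner).start()
threading.Thread(target=blocking_diner).start()
time.sleep(1)        # a party lingers over dessert...
table_ready.set()    # ...then the table frees up
```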

  As a parallel example, consider the computational problem posed by a bus stop. If there is a live display saying that the next bus is “arriving in 10 minutes,” then you get to decide once whether to wait, rather than taking the bus’s continued not-coming as a stream of inferential evidence, moment by moment, and having to redecide and redecide. Moreover, you can take your attention away from squinting down the road—spinning—for those ten minutes straight. (For cities that aren’t up to the implementation necessary to predict the next arrival, we saw how Bayesian inference can even make knowing when the last bus left a useful proxy.) Such subtle acts of computational kindness could do as much for ridership, if not more, as subsidizing the fares: think of it as a cognitive subsidy.
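  A tiny sketch of the difference, under the simplest assumption we could pick (buses running on a fixed headway with no delays; entirely our illustration):

```python
def decide_with_display(minutes_shown, patience=15):
    # Live display: one decision, made once; attention freed afterward.
    return "wait" if minutes_shown <= patience else "walk"

def next_bus_estimate(minutes_since_last_bus, headway=10):
    # No display: knowing when the last bus left becomes a useful
    # proxy, since a fixed headway pins down the next arrival.
    return max(headway - minutes_since_last_bus, 0)

print(decide_with_display(10))   # "wait" -- settled in a single decision
print(next_bus_estimate(7))      # last bus left 7 minutes ago -> ~3 to go
```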

  * * *

  If we can be kinder to others, we can also be kinder to ourselves. Not just computationally kinder—all the algorithms and ideas we have discussed will help with that. But also more forgiving.

  The intuitive standard for rational decision-making is carefully considering all available options and taking the best one. At first glance, computers look like the paragons of this approach, grinding their way through complex computations for as long as it takes to get perfect answers. But as we’ve seen, that is an outdated picture of what computers do: it’s a luxury afforded by an easy problem. In the hard cases, the best algorithms are all about doing what makes the most sense in the least amount of time, which by no means involves giving careful consideration to every factor and pursuing every computation to the end. Life is just too complicated for that.

  In almost every domain we’ve considered, we have seen how the more real-world factors we include—whether it’s having incomplete information when interviewing job applicants, dealing with a changing world when trying to resolve the explore/exploit dilemma, or having certain tasks depend on others when we’re trying to get things done—the more likely we are to end up in a situation where finding the perfect solution takes unreasonably long. And indeed, people are almost always confronting what computer science regards as the hard cases. Up against such hard cases, effective algorithms make assumptions, show a bias toward simpler solutions, trade off the costs of error against the costs of delay, and take chances.

  These aren’t the concessions we make when we can’t be rational. They’re what being rational means.

 
