Algorithms to Live By


by Brian Christian


  This setup is arguably a far cry from most searches for an apartment, a partner, or even a secretary. Imagine instead that we had some kind of objective criterion—if every secretary, for instance, had taken a typing exam scored by percentile, in the fashion of the SAT or GRE or LSAT. That is, every applicant’s score will tell us where they fall among all the typists who took the test: a 51st-percentile typist is just above average, a 75th-percentile typist is better than three test takers out of four, and so on.

  Suppose that our applicant pool is representative of the population at large and isn’t skewed or self-selected in any way. Furthermore, suppose we decide that typing speed is the only thing that matters about our applicants. Then we have what mathematicians call “full information,” and everything changes. “No buildup of experience is needed to set a standard,” as the seminal 1966 paper on the problem put it, “and a profitable choice can sometimes be made immediately.” In other words, if a 95th-percentile applicant happens to be the first one we evaluate, we know it instantly and can confidently hire her on the spot—that is, of course, assuming we don’t think there’s a 96th-percentile applicant in the pool.

  And there’s the rub. If our goal is, again, to get the single best person for the job, we still need to weigh the likelihood that there’s a stronger applicant out there. However, the fact that we have full information gives us everything we need to calculate those odds directly. The chance that our next applicant is in the 96th percentile or higher will always be 1 in 20, for instance. Thus the decision of whether to stop comes down entirely to how many applicants we have left to see. Full information means that we don’t need to look before we leap. We can instead use the Threshold Rule, where we immediately accept an applicant if she is above a certain percentile. We don’t need to look at an initial group of candidates to set this threshold—but we do, however, need to be keenly aware of how much looking remains available.

  The math shows that when there are a lot of applicants left in the pool, you should pass up even a very good applicant in the hopes of finding someone still better than that—but as your options dwindle, you should be prepared to hire anyone who’s simply better than average. It’s a familiar, if not exactly inspiring, message: in the face of slim pickings, lower your standards. It also makes clear the converse: with more fish in the sea, raise them. In both cases, crucially, the math tells you exactly by how much.

  The easiest way to understand the numbers for this scenario is to start at the end and think backward. If you’re down to the last applicant, of course, you are necessarily forced to choose her. But when looking at the next-to-last applicant, the question becomes: is she above the 50th percentile? If yes, then hire her; if not, it’s worth rolling the dice on the last applicant instead, since her odds of being above the 50th percentile are 50/50 by definition. Likewise, you should choose the third-to-last applicant if she’s above the 69th percentile, the fourth-to-last applicant if she’s above the 78th, and so on, being more choosy the more applicants are left. No matter what, never hire someone who’s below average unless you’re totally out of options. (And since you’re still interested only in finding the very best person in the applicant pool, never hire someone who isn’t the best you’ve seen so far.)
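The backward reasoning above can be sketched in a few lines of Python. This is a hedged reconstruction, not the book's own code: it solves the indifference equation from Gilbert and Mosteller's 1966 paper for the full-information best-choice problem, where the acceptance threshold with k applicants remaining is the d solving, with u = 1/d − 1, the equation Σᵢ C(k, i)·uⁱ/i = 1.

```python
import math

def threshold(k, tol=1e-9):
    """Percentile threshold to demand when k applicants remain after this one.

    Solves the Gilbert-Mosteller (1966) indifference equation by bisection:
    with u = 1/d - 1, the decision number d satisfies
        sum_{i=1..k} C(k, i) * u**i / i = 1.
    """
    def f(u):
        return sum(math.comb(k, i) * u**i / i for i in range(1, k + 1)) - 1

    lo, hi = 0.0, 1.0          # f is increasing, f(0) = -1, f(1) >= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    u = (lo + hi) / 2
    return 1 / (1 + u)

# Next-to-last, third-to-last, and fourth-to-last applicant:
for k in (1, 2, 3):
    print(f"{k} remaining: accept anyone above the {threshold(k):.0%} percentile")
```

Running this reproduces the figures in the text: roughly the 50th, 69th, and 78th percentiles for one, two, and three applicants remaining, with the bar rising as more looking remains available.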

  The chance of ending up with the single best applicant in this full-information version of the secretary problem comes to 58%—still far from a guarantee, but considerably better than the 37% success rate offered by the 37% Rule in the no-information game. If you have all the facts, you can succeed more often than not, even as the applicant pool grows arbitrarily large.

  Optimal stopping thresholds in the full-information secretary problem.

  The full-information game thus offers an unexpected and somewhat bizarre takeaway. Gold digging is more likely to succeed than a quest for love. If you’re evaluating your partners based on any kind of objective criterion—say, their income percentile—then you’ve got a lot more information at your disposal than if you’re after a nebulous emotional response (“love”) that might require both experience and comparison to calibrate.

  Of course, there’s no reason that net worth—or, for that matter, typing speed—needs to be the thing that you’re measuring. Any yardstick that provides full information on where an applicant stands relative to the population at large will change the solution from the Look-Then-Leap Rule to the Threshold Rule and will dramatically boost your chances of finding the single best applicant in the group.

  There are many more variants of the secretary problem that modify its other assumptions, perhaps bringing it more in line with the real-world challenges of finding love (or a secretary). But the lessons to be learned from optimal stopping aren’t limited to dating or hiring. In fact, trying to make the best choice when options only present themselves one by one is also the basic structure of selling a house, parking a car, and quitting when you’re ahead. And they’re all, to some degree or other, solved problems.

  When to Sell

  If we alter two more aspects of the classical secretary problem, we find ourselves catapulted from the realm of dating to the realm of real estate. Earlier we talked about the process of renting an apartment as an optimal stopping problem, but owning a home has no shortage of optimal stopping either.

  Imagine selling a house, for instance. After consulting with several real estate agents, you put your place on the market; a new coat of paint, some landscaping, and then it’s just a matter of waiting for the offers to come in. As each offer arrives, you typically have to decide whether to accept it or turn it down. But turning down an offer comes at a cost—another week (or month) of mortgage payments while you wait for the next offer, which isn’t guaranteed to be any better.

  Selling a house is similar to the full-information game. We know the objective dollar value of the offers, telling us not only which ones are better than which, but also by how much. What’s more, we have information about the broader state of the market, which enables us to at least roughly predict the range of offers to expect. (This gives us the same “percentile” information about each offer that we had with the typing exam above.) The difference here, however, is that our goal isn’t actually to secure the single best offer—it’s to make the most money through the process overall. Given that waiting has a cost measured in dollars, a good offer today beats a slightly better one several months from now.

  Having this information, we don’t need to look noncommittally to set a threshold. Instead, we can set one going in, ignore everything below it, and take the first option to exceed it. Granted, if we have a limited amount of savings that will run out if we don’t sell by a certain time, or if we expect to get only a limited number of offers and no more interest thereafter, then we should lower our standards as such limits approach. (There’s a reason why home buyers look for “motivated” sellers.) But if neither concern leads us to believe that our backs are against the wall, then we can simply focus on a cost-benefit analysis of the waiting game.

  Here we’ll analyze one of the simplest cases: where we know for certain the price range in which offers will come, and where all offers within that range are equally likely. If we don’t have to worry about the offers (or our savings) running out, then we can think purely in terms of what we can expect to gain or lose by waiting for a better deal. If we decline the current offer, will the chance of a better one, multiplied by how much better we expect it to be, more than compensate for the cost of the wait? As it turns out, the math here is quite clean, giving us an explicit function for stopping price as a function of the cost of waiting for an offer.

  This particular mathematical result doesn’t care whether you’re selling a mansion worth millions or a ramshackle shed. The only thing it cares about is the difference between the highest and lowest offers you’re likely to receive. By plugging in some concrete figures, we can see how this algorithm offers us a considerable amount of explicit guidance. For instance, let’s say the range of offers we’re expecting runs from $400,000 to $500,000. First, if the cost of waiting is trivial, we’re able to be almost infinitely choosy. If the cost of getting another offer is only a dollar, we’ll maximize our earnings by waiting for someone willing to offer us $499,552.79 and not a dime less. If waiting costs $2,000 an offer, we should hold out for an even $480,000. In a slow market where waiting costs $10,000 an offer, we should take anything over $455,279. Finally, if waiting costs half or more of our expected range of offers—in this case, $50,000—then there’s no advantage whatsoever to holding out; we’ll do best by taking the very first offer that comes along and calling it done. Beggars can’t be choosers.
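These figures follow from a short calculation, sketched below under the stated assumptions: offers are uniform on a known range, and we set the threshold where the expected gain from one more offer exactly equals the cost of waiting for it. Declining an offer of t and drawing again improves our take by (high − t)²/(2·range) in expectation, so the indifference point is t = high − √(2 · cost · range).

```python
import math

def stopping_price(low, high, cost):
    """Lowest offer worth accepting, assuming offers are uniform on
    [low, high] and each further offer costs `cost` to wait for.

    Indifference condition: (high - t)**2 / (2 * (high - low)) == cost,
    so t = high - sqrt(2 * cost * range).
    """
    spread = high - low
    if cost >= spread / 2:       # waiting can never pay for itself:
        return low               # take the first offer that comes along
    return high - math.sqrt(2 * cost * spread)

for c in (1, 2_000, 10_000, 50_000):
    t = stopping_price(400_000, 500_000, c)
    print(f"waiting costs ${c:>6,}: hold out for ${t:,.2f}")
```

Plugging in the $400,000-to-$500,000 range recovers the numbers in the text: $499,552.79 at a dollar per offer, an even $480,000 at $2,000, about $455,279 at $10,000, and "take anything" once waiting costs $50,000, half the spread.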

  Optimal stopping thresholds in the house-selling problem.

  The critical thing to note in this problem is that our threshold depends only on the cost of search. Since the chances of the next offer being a good one—and the cost of finding out—never change, our stopping price has no reason to ever get lower as the search goes on, regardless of our luck. We set it once, before we even begin, and then we quite simply hold fast.

  The University of Wisconsin–Madison’s Laura Albert McLay, an optimization expert, recalls turning to her knowledge of optimal stopping problems when it came time to sell her own house. “The first offer we got was great,” she explains, “but it had this huge cost because they wanted us to move out a month before we were ready. There was another competitive offer … [but] we just kind of held out until we got the right one.” For many sellers, turning down a good offer or two can be a nerve-racking proposition, especially if the ones that immediately follow are no better. But McLay held her ground and stayed cool. “That would have been really, really hard,” she admits, “if I didn’t know the math was on my side.”

  This principle applies to any situation where you get a series of offers and pay a cost to seek or wait for the next. As a consequence, it’s relevant to cases that go far beyond selling a house. For example, economists have used this algorithm to model how people look for jobs, where it handily explains the otherwise seemingly paradoxical fact of unemployed workers and unfilled vacancies existing at the same time.

  In fact, these variations on the optimal stopping problem have another, even more surprising property. As we saw, the ability to “recall” a past opportunity was vital in Kepler’s quest for love. But in house selling and job hunting, even if it’s possible to reconsider an earlier offer, and even if that offer is guaranteed to still be on the table, you should nonetheless never do so. If it wasn’t above your threshold then, it won’t be above your threshold now. What you’ve paid to keep searching is a sunk cost. Don’t compromise, don’t second-guess. And don’t look back.

  When to Park

  I find that the three major administrative problems on a campus are sex for the students, athletics for the alumni, and parking for the faculty.

  —CLARK KERR, PRESIDENT OF UC BERKELEY, 1958–1967

  Another domain where optimal stopping problems abound—and where looking back is also generally ill-advised—is the car. Motorists feature in some of the earliest literature on the secretary problem, and the framework of constant forward motion makes almost every car-trip decision into a stopping problem: the search for a restaurant; the search for a bathroom; and, most acutely for urban drivers, the search for a parking space. Who better to talk to about the ins and outs of parking than the man described by the Los Angeles Times as “the parking rock star,” UCLA Distinguished Professor of Urban Planning Donald Shoup? We drove down from Northern California to visit him, reassuring Shoup that we’d be leaving plenty of time for unexpected traffic. “As for planning on ‘unexpected traffic,’ I think you should plan on expected traffic,” he replied. Shoup is perhaps best known for his book The High Cost of Free Parking, and he has done much to advance the discussion and understanding of what really happens when someone drives to their destination.

  We should pity the poor driver. The ideal parking space, as Shoup models it, is one that optimizes a precise balance between the “sticker price” of the space, the time and inconvenience of walking, the time taken seeking the space (which varies wildly with destination, time of day, etc.), and the gas burned in doing so. The equation changes with the number of passengers in the car, who can split the monetary cost of a space but not the search time or the walk. At the same time, the driver needs to consider that the area with the most parking supply may also be the area with the most demand; parking has a game-theoretic component, as you try to outsmart the other drivers on the road while they in turn are trying to outsmart you.* That said, many of the challenges of parking boil down to a single number: the occupancy rate. This is the proportion of all parking spots that are currently occupied. If the occupancy rate is low, it’s easy to find a good parking spot. If it’s high, finding anywhere at all to park is a challenge.

  Shoup argues that many of the headaches of parking are consequences of cities adopting policies that result in extremely high occupancy rates. If the cost of parking in a particular location is too low (or—horrors!—nothing at all), then there is a high incentive to park there, rather than to park a little farther away and walk. So everybody tries to park there, but most of them find the spaces are already full, and people end up wasting time and burning fossil fuel as they cruise for a spot.

  Shoup’s solution involves installing digital parking meters that are capable of adaptive prices that rise with demand. (This has now been implemented in downtown San Francisco.) The prices are set with a target occupancy rate in mind, and Shoup argues that this rate should be somewhere around 85%—a radical drop from the nearly 100%-packed curbs of most major cities. As he notes, when occupancy goes from 90% to 95%, it accommodates only 5% more cars but doubles the length of everyone’s search.

  The key impact that occupancy rate has on parking strategy becomes clear once we recognize that parking is an optimal stopping problem. As you drive along the street, every time you see the occasional empty spot you have to make a decision: should you take this spot, or go a little closer to your destination and try your luck?

  Assume you’re on an infinitely long road, with parking spots evenly spaced, and your goal is to minimize the distance you end up walking to your destination. Then the solution is the Look-Then-Leap Rule. The optimally stopping driver should pass up all vacant spots occurring more than a certain distance from the destination and then take the first space that appears thereafter. And the distance at which to switch from looking to leaping depends on the proportion of spots that are likely to be filled—the occupancy rate. The table on the next page gives the distances for some representative proportions.

  How to optimally find parking.

  If this infinite street has a big-city occupancy rate of 99%, with just 1% of spots vacant, then you should take the first spot you see starting at almost 70 spots—more than a quarter mile—from your destination. But if Shoup has his way and occupancy rates drop to just 85%, you don’t need to start seriously looking until you’re half a block away.
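The book doesn't spell out the model behind its table, but the quoted distances can be reproduced by a simple heuristic rule, offered here as an assumption rather than the authors' derivation: treating each spot as vacant independently, take a vacant spot once the chance of finding another vacancy before your destination drops below one half.

```python
import math

def switch_distance(occupancy):
    """Distance (in parking spots) at which to start taking any vacancy.

    A heuristic sketch of the Look-Then-Leap switch point: take a vacant
    spot at distance n from the destination once the probability that all
    n closer spots are full exceeds one half, i.e. the smallest n with
    occupancy**n <= 1/2.
    """
    return math.ceil(math.log(2) / -math.log(occupancy))

for occ in (0.85, 0.90, 0.95, 0.99):
    print(f"{occ:.0%} occupancy: start leaping {switch_distance(occ)} spots out")
```

Under this rule, 99% occupancy means grabbing the first vacancy from 69 spots out (the "almost 70" in the text), while at Shoup's 85% target the switch point is only about five spots from the destination.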

  Most of us don’t drive on perfectly straight, infinitely long roads. So as with other optimal stopping problems, researchers have considered a variety of tweaks to this basic scenario. For instance, they have studied the optimal parking strategy for cases where the driver can make U-turns, where fewer parking spaces are available the closer one gets to the destination, and where the driver is in competition against rival drivers also heading to the same destination. But whatever the exact parameters of the problem, more vacant spots are always going to make life easier. It’s something of a policy reminder to municipal governments: parking is not as simple as having a resource (spots) and maximizing its utilization (occupancy). Parking is also a process—an optimal stopping problem—and it’s one that consumes attention, time, and fuel, and generates both pollution and congestion. The right policy addresses the whole problem. And, counterintuitively, empty spots on highly desirable blocks can be the sign that things are working correctly.

  We asked Shoup if his research allows him to optimize his own commute, through the Los Angeles traffic to his office at UCLA. Does arguably the world’s top expert on parking have some kind of secret weapon?

  He does: “I ride my bike.”

  When to Quit

  In 1997, Forbes magazine identified Boris Berezovsky as the richest man in Russia, with a fortune of roughly $3 billion. Just ten years earlier he had been living on a mathematician’s salary from the USSR Academy of Sciences. He made his billions by drawing on industrial relationships he’d formed through his research to found a company that facilitated interaction between foreign carmakers and the Soviet car manufacturer AvtoVAZ. Berezovsky’s company then became a large-scale dealer for the cars that AvtoVAZ produced, using a payment installment scheme to take advantage of hyperinflation in the ruble. Using the funds from this partnership he bought partial ownership of AvtoVAZ itself, then the ORT Television network, and finally the Sibneft oil company. Becoming one of a new class of oligarchs, he participated in politics, supporting Boris Yeltsin’s re-election in 1996 and the choice of Vladimir Putin as his successor in 1999.

  But that’s when Berezovsky’s luck turned. Shortly after Putin’s election, Berezovsky publicly objected to proposed constitutional reforms that would expand the power of the president. His continued public criticism of Putin led to the deterioration of their relationship. In October 2000, when Putin was asked about Berezovsky’s criticisms, he replied, “The state has a cudgel in its hands that you use to hit just once, but on the head. We haven’t used this cudgel yet.… The day we get really angry, we won’t hesitate.” Berezovsky left Russia permanently the next month, taking up exile in England, where he continued to criticize Putin’s regime.

  How did Berezovsky decide it was time to leave Russia? Is there a way, perhaps, to think mathematically about the advice to “quit while you’re ahead”? Berezovsky in particular might have considered this very question himself, since the topic he had worked on all those years ago as a mathematician was none other than optimal stopping; he authored the first (and, so far, the only) book entirely devoted to the secretary problem.

 
