Algorithms to Live By

by Brian Christian


  So the rational argument for love is twofold: the emotions of attachment not only spare you from recursively overthinking your partner’s intentions, but by changing the payoffs actually enable a better outcome altogether. What’s more, being able to fall involuntarily in love makes you, in turn, a more attractive partner to have. Your capacity for heartbreak, for sleeping with the emotional fishes, is the very quality that makes you such a trusty accomplice.

  Information Cascades: The Tragic Rationality of Bubbles

  Whenever you find yourself on the side of the majority, it is time to pause and reflect.

  —MARK TWAIN

  Part of the reason why it’s a good idea to pay attention to the behavior of others is that in doing so, you get to add their information about the world to your own. A popular restaurant is probably good; a half-empty concert hall is probably a bad sign; and if someone you’re talking to abruptly yanks their gaze toward something you can’t see, it’s probably not a bad idea to turn your head, too.

  On the other hand, learning from others doesn’t always seem particularly rational. Fads and fashions are the result of following others’ behavior without being anchored to any underlying objective truth about the world. What’s worse, the assumption that other people’s actions are a useful guide can lead to the sort of herd-following that precipitates economic disaster. If everybody else is investing in real estate, it seems like a good idea to buy a house; after all, the price is only going to go up. Isn’t it?

  An interesting aspect of the 2007–2009 mortgage crisis is that everybody involved seemed to feel like they were unfairly punished for simply doing what they were supposed to. A generation of Americans who grew up believing that houses were fail-safe investments, and who saw everyone around them buying houses despite (or because of) rapidly rising prices, were badly burned when those prices finally started to tumble. Bankers, meanwhile, felt they were unfairly blamed for doing what they had always done—offering opportunities, which their clients could accept or decline. In the wake of an abrupt market collapse, the temptation is always to assign blame. Here game theory offers a sobering perspective: catastrophes like this can happen even when no one’s at fault.

  Properly appreciating the mechanics of financial bubbles begins with understanding auctions. While auctions may seem like niche corners of the economy—evoking either million-dollar oil paintings at Sotheby’s and Christie’s, or Beanie Babies and other collectibles on eBay—they actually power a substantial portion of the economy. Google, for instance, makes more than 90% of its revenue from selling ads, and those ads are all sold via auctions. Meanwhile, governments use auctions to sell rights to bands of the telecommunications spectrum (such as cell phone transmission frequencies), raising tens of billions of dollars in revenue. In fact, many global markets, in everything from homes to books to tulips, operate via auctions of various styles.

  One of the simplest auction formats has each participant write down their bid in secret, and the one whose bid is highest wins the item for whatever price they wrote down. This is known as a “sealed-bid first-price auction,” and from an algorithmic game theory perspective there’s a big problem with it—actually, several. For one thing, there’s a sense in which the winner always overpays: if you value an item at $25 and I value it at $10, and we both bid our true valuations ($25 and $10), then you end up buying it for $25 when you could have had it for just a hair over $10. This problem, in turn, leads to another one, which is that in order to bid properly—that is, in order not to overpay—you need to predict the true valuation of the other players in the auction and “shade” your bid accordingly. That’s bad enough—but the other players aren’t going to bid their true valuations either, because they’re shading their bids based on their prediction of yours! We are back in the land of recursion.
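
  To see the shading problem concretely, here is a minimal Python sketch (not from the book) of a sealed-bid first-price auction. It assumes the textbook symmetric equilibrium for bidders with independent, uniformly distributed valuations, in which each of n bidders shades down to (n − 1)/n of their true value:

```python
import random

def first_price_auction(valuations):
    """Sealed-bid first-price auction with equilibrium bid shading.

    Assumes valuations are independent and uniformly distributed, in
    which case the classic symmetric equilibrium is for each of the n
    bidders to bid (n - 1)/n of their true value.
    """
    n = len(valuations)
    bids = [v * (n - 1) / n for v in valuations]   # shaded bids
    winner = max(range(n), key=lambda i: bids[i])  # highest bid wins...
    return winner, bids[winner]                    # ...and pays their own bid

random.seed(0)
vals = [random.uniform(0, 100) for _ in range(2)]
winner, price = first_price_auction(vals)
print(f"valuations: {[round(v, 2) for v in vals]}, winner {winner} pays {price:.2f}")
```

  With two bidders, each bids half their value; the recursion described above is hiding inside that fraction, which is only correct if everyone believes everyone else is shading the same way.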

  Another classic auction format, the “Dutch auction” or “descending auction,” gradually lowers an item’s price until someone is willing to buy it. The name references the Aalsmeer Flower Auction, the largest flower auction in the world, which takes place daily in the Netherlands—but Dutch auctions are more prevalent than they might initially seem. A store marking down its unsold items, and landlords listing apartments at the highest price they think the market will bear, both share its basic quality: the seller is likely to begin optimistically and nudge the price down until a buyer is found. The descending auction resembles the first-price auction in that you’re more likely to win by paying near the top of your range (i.e., you’ll be poised to bid as the price falls to $25), and therefore will want to shade your offer by some complexly strategic amount. Do you buy at $25, or stay your hand and wait for a lower price? Every dollar you save risks losing the item altogether.
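
  The same logic can be seen in a hypothetical descending-clock sketch, in which each bidder pre-commits to a shaded stopping price and the first threshold the falling clock reaches wins:

```python
def dutch_auction(stop_prices, start_price=100.0, step=1.0):
    """Descending-clock ('Dutch') auction sketch.

    The price ticks downward until it reaches some bidder's pre-chosen
    stopping price; that bidder buys at the current price. Each stop
    price is a strategically shaded guess, exactly as in the sealed-bid
    first-price auction.
    """
    price = start_price
    while price > 0:
        for bidder, stop in enumerate(stop_prices):
            if price <= stop:         # first bidder willing at this price...
                return bidder, price  # ...claims the item and pays it
        price -= step
    return None, 0.0

# Poised to bid as the clock falls to $25, you win there; waiting for
# a lower tick would have risked losing the item to a rival entirely.
print(dutch_auction(stop_prices=[25.0, 10.0]))  # -> (0, 25.0)
```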

  The inverse of a Dutch or descending auction is what’s known as an “English auction” or “ascending auction”—the most familiar auction format. In an English auction, bidders alternate raising the price until all but one of them drop out. This seems to offer something closer to what we want: here, if you value an item at $25 and I value it at $10, you’ll win it for just over $10 without either having to go all the way to $25 or disappearing down the strategic rabbit hole.
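
  A minimal sketch, assuming each bidder simply stays in until the price passes their true value:

```python
def english_auction(valuations, increment=0.01):
    """Ascending ('English') auction sketch.

    If every bidder stays in exactly until the price passes their true
    value, the highest-value bidder wins at roughly the second-highest
    valuation plus one bid increment.
    """
    ranked = sorted(range(len(valuations)), key=lambda i: valuations[i])
    winner = ranked[-1]                  # last bidder standing
    runner_up = valuations[ranked[-2]]   # price at which the rival quit
    return winner, runner_up + increment

# You value it at $25, I value it at $10: you win for $10.01.
print(english_auction([25.0, 10.0]))  # -> (0, 10.01)
```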

  Both the Dutch auction and English auction introduce an extra level of complexity when compared to a sealed-bid auction, however. They involve not only the private information that each bidder has but also the public flow of bidding behavior. (In a Dutch auction, it is the absence of a bid that reveals information, by making it clear that none of the other bidders value the item at the current price level.) And under the right circumstances, this mixing of private and public data can prove toxic.

  Imagine the bidders are doubtful about their own estimations of the value of an auction lot—say, the right to drill for oil in some part of the ocean. As University College London game theorist Ken Binmore notes, “the amount of oil in a tract is the same for everybody, but the buyers’ estimates of how much oil is likely to be in a tract will depend on their differing geological surveys. Such surveys aren’t only expensive, but notoriously unreliable.” In such a situation, it seems natural to look closely at your opponents’ bids, to augment your own meager private information with the public information.

  But this public information might not be nearly as informative as it seems. You don’t actually get to know the other bidders’ beliefs—only their actions. And it is entirely possible that their behavior is based on your own, just as your behavior is being influenced by theirs. It’s easy to imagine a bunch of people all going over a cliff together because “everyone else” was acting as though it’d all be fine—when in reality each person had qualms, but suppressed them because of the apparent confidence of everyone else in the group.

  Just as with the tragedy of the commons, this failure is not necessarily the players’ fault. An enormously influential paper by the economists Sushil Bikhchandani, David Hirshleifer, and Ivo Welch has demonstrated that under the right circumstances, a group of agents who are all behaving perfectly rationally and perfectly appropriately can nonetheless fall prey to what is effectively infinite misinformation. This has come to be known as an “information cascade.”

  To continue the oil drilling rights scenario, imagine there are ten companies that might bid on the rights for a given tract. One of them has a geological survey suggesting the tract is rich with oil; another’s survey is inconclusive; the reconnaissance of the other eight suggests it’s barren. But being competitors, of course, the companies do not share their survey results with each other, and instead can only watch each other’s actions. When the auction begins, the first company, with the promising report, makes a high initial bid. The second company, encouraged by this bid to take an optimistic view of their own ambiguous survey, bids even higher. The third company has a weak survey but now doesn’t trust it in light of what they take to be two independent surveys that suggest it’s a gold mine, so they make a new high bid. The fourth company, which also has a lackluster survey, is now even more strongly inclined to disregard it, as it seems like three of their competitors all think it’s a winner. So they bid too. The “consensus” unglues from reality. A cascade has formed.
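
  A loose sketch of the Bikhchandani–Hirshleifer–Welch dynamic, with surveys simplified to +1 (promising) or −1 (poor), the second firm’s inconclusive survey read optimistically as a +1, and a cascade beginning once the inferred public count outweighs any single private signal:

```python
def cascade(signals):
    """Sketch of the Bikhchandani-Hirshleifer-Welch cascade model.

    signals: each firm's private survey, +1 (promising) or -1 (poor).
    Firms act in order, seeing only predecessors' actions. While no
    cascade has started, a firm's action reveals its signal; once the
    inferred public count reaches +/-2, it outweighs any single survey,
    so later firms rationally ignore their own data, and their actions
    stop adding information to the public pool.
    """
    public = 0                       # net count of signals inferable so far
    actions = []
    for s in signals:
        if public >= 2:
            actions.append("bid")    # cascade up: own survey ignored
        elif public <= -2:
            actions.append("pass")   # cascade down: own survey ignored
        else:
            total = public + s       # public evidence plus own survey
            act = "bid" if total > 0 or (total == 0 and s > 0) else "pass"
            actions.append(act)
            public += s              # this action reveals the survey
    return actions

# One promising survey, one read optimistically, then eight poor ones:
print(cascade([+1, +1, -1, -1, -1, -1, -1, -1, -1, -1]))
# -> ['bid'] * 10: the eight poor surveys never enter the public pool
```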

  No single bidder has acted irrationally, yet the net result is catastrophe. As Hirshleifer puts it, “Something very important happens once somebody decides to follow blindly his predecessors independently of his own information signal, and that is that his action becomes uninformative to all later decision makers. Now the public pool of information is no longer growing. That welfare benefit of having public information … has ceased.”

  To see what happens in the real world when an information cascade takes over, and the bidders have almost nothing but one another’s behavior from which to estimate an item’s value, look no further than Peter A. Lawrence’s developmental biology text The Making of a Fly, which in April 2011 was selling for $23,698,655.93 (plus $3.99 shipping) on Amazon’s third-party marketplace. How and why had this—admittedly respected—book reached a sale price of more than $23 million? It turns out that two of the sellers were setting their prices algorithmically as constant fractions of each other: one was always setting it to 0.99830 times the competitor’s price, while the competitor was automatically setting their own price to 1.27059 times the other’s. Neither seller apparently thought to set any limit on the resulting numbers, and eventually the process spiraled totally out of control.
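
  The arithmetic of the runaway is easy to check: 0.99830 × 1.27059 ≈ 1.268, so each full round of mutual repricing inflates both listings by about 27 percent, and without a cap the growth is geometric. A sketch (the $30 starting price is an assumption for illustration, not a reported figure):

```python
# Two third-party sellers repricing against each other with the
# multipliers reported for The Making of a Fly.
a, b = 30.00, 30.00               # assumed starting prices
rounds = 0
while b < 23_698_655.93:          # the price observed in April 2011
    a = 0.99830 * b               # seller A slightly undercuts seller B
    b = 1.27059 * a               # seller B marks up over seller A
    rounds += 1
print(rounds, round(a, 2), round(b, 2))
# Roughly 58 rounds of repricing climb from $30 past $23 million.
```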

  It’s possible that a similar mechanism was in play during the enigmatic and controversial stock market “flash crash” of May 6, 2010, when, in a matter of minutes, the price of several seemingly random companies in the S&P 500 rose to more than $100,000 a share, while others dropped precipitously—sometimes to $0.01 a share. Almost $1 trillion of value instantaneously went up in smoke. As CNBC’s Jim Cramer reported live, dumbfounded, “That … it can’t be there. That is not a real price. Oh well, just go buy Procter! Just go buy Procter & Gamble, they reported a decent quarter, just go buy it.… I mean, this is ridi—this is a good opportunity.” Cramer’s incredulity is his private information holding up against the public information. He’s seemingly the only person in the world willing to pay, in this case, $49 for a stock that the market is apparently valuing at under $40, but he doesn’t care; he’s seen the quarterly reports, he’s certain in what he knows.

  Investors are said to fall into two broad camps: “fundamental” investors, who trade on what they perceive as the underlying value of a company, and “technical” investors, who trade on the fluctuations of the market. The rise of high-speed algorithmic trading has upset the balance between these two strategies, and it’s frequently complained that computers, unanchored to the real-world value of goods—unbothered at pricing a textbook at tens of millions of dollars and blue-chip stocks at a penny—worsen the irrationality of the market. But while this critique is typically leveled at computers, people do the same kind of thing too, as any number of investment bubbles can testify. Again, the fault is often not with the players but with the game itself.

  Information cascades offer a rational theory not only of bubbles, but also of fads and herd behavior more generally. They offer an account of how it’s easily possible for any market to spike and collapse, even in the absence of irrationality, malevolence, or malfeasance. The takeaways are several. For one, be wary of cases where public information seems to exceed private information, where you know more about what people are doing than why they’re doing it, where you’re more concerned with your judgments fitting the consensus than fitting the facts. When you’re mostly looking to others to set a course, they may well be looking right back at you to do the same. Second, remember that actions are not beliefs; cascades arise in part when we misinterpret what others think based on what they do. We should be especially hesitant to overrule our own doubts—and if we do, we might want to find some way to broadcast those doubts even as we move forward, lest others fail to distinguish the reluctance in our minds from the implied enthusiasm in our actions. Last, we should remember from the prisoner’s dilemma that sometimes a game can have irredeemably lousy rules. There may be nothing we can do once we’re in it, but the theory of information cascades may help us to avoid such a game in the first place.

  And if you’re the kind of person who always does what you think is right, no matter how crazy others think it is, take heart. The bad news is that you will be wrong more often than the herd followers. The good news is that sticking to your convictions creates a positive externality, letting people make accurate inferences from your behavior. There may come a time when you will save the entire herd from disaster.

  To Thine Own Self Compute

  The application of computer science to game theory has revealed that being obligated to strategize is itself a part—often a big part—of the price we pay in competing with one another. And as the difficulties of recursion demonstrate, nowhere is that price as high as when we’re required to get inside each other’s heads. Here, algorithmic game theory gives us a way to rethink mechanism design: to take into account not only the outcome of the games, but also the computational effort required of the players.

  We’ve seen how seemingly innocuous auction mechanisms, for instance, can run into all sorts of problems: overthinking, overpaying, runaway cascades. But the situation is not completely hopeless. In fact, there’s one auction design in particular that cuts through the burden of mental recursion like a hot knife through butter. It’s called the Vickrey auction.

  Named for Nobel Prize–winning economist William Vickrey, the Vickrey auction, just like the first-price auction, is a “sealed bid” auction process. That is, every participant simply writes down a single number in secret, and the highest bidder wins. However, in a Vickrey auction, the winner ends up paying not the amount of their own bid, but that of the second-place bidder. That is to say, if you bid $25 and I bid $10, you win the item at my price: you only have to pay $10.

  To a game theorist, a Vickrey auction has a number of attractive properties. And to an algorithmic game theorist in particular, one property especially stands out: the participants are incentivized to be honest. In fact, there is no better strategy than just bidding your “true value” for the item—exactly what you think the item is worth. Bidding any more than your true value is obviously silly, as you might end up stuck buying something for more than you think it’s worth. And bidding any less than your true value (i.e., shading your bid) risks losing the auction for no good reason, since it doesn’t save you any money—because if you win, you’ll only be paying the value of the second-highest bid, regardless of how high your own was. This makes the Vickrey auction what mechanism designers call “strategy-proof,” or just “truthful.” In the Vickrey auction, honesty is literally the best policy.
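
  A minimal sketch of the mechanism, with a spot check (illustrative, not a proof) that against any fixed rival bids, no deviation from bidding your true value ever earns you more:

```python
def vickrey(bids):
    """Sealed-bid second-price (Vickrey) auction: the highest bid wins,
    and the winner pays the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]        # (winner, price paid)

def my_payoff(my_value, my_bid, rival_bids):
    """Bidder 0's payoff: value minus price if they win, else zero."""
    winner, price = vickrey([my_bid] + rival_bids)
    return my_value - price if winner == 0 else 0.0

# Against any fixed rival bids, no deviation beats bidding your true value.
value, rivals = 25.0, [10.0, 18.0]
truthful = my_payoff(value, value, rivals)     # win at 18, payoff 7
assert all(my_payoff(value, dev, rivals) <= truthful
           for dev in (0.0, 5.0, 17.0, 24.0, 26.0, 100.0))
print(f"truthful payoff: {truthful}")          # -> 7.0
```

  Notice that bidding 24 or 26 earns exactly what bidding 25 does: your bid decides only whether you win, never what you pay, which is why shading saves you nothing.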

  Even better, honesty remains the best policy regardless of whether the other bidders are honest themselves. In the prisoner’s dilemma, we saw how defection turned out to be the “dominant” strategy—the best move no matter whether your partner defected or cooperated. In a Vickrey auction, on the other hand, honesty is the dominant strategy. This is the mechanism designer’s holy grail. You do not need to strategize or recurse.

  Now, it seems like the Vickrey auction would cost the seller some money compared to the first-price auction, but this isn’t necessarily true. In a first-price auction, every bidder is shading their bid down to avoid overpaying; in the second-price Vickrey auction, there’s no need to—in a sense, the auction itself is optimally shading their bid for them. In fact, a game-theoretic principle called “revenue equivalence” establishes that over time, the average expected sale price in a first-price auction will converge to precisely the same as in a Vickrey auction. Thus the Vickrey equilibrium involves the same bidder winning the item for the same price—without any strategizing by any of the bidders whatsoever. As Tim Roughgarden tells his Stanford students, the Vickrey auction is “awesome.”
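
  A quick Monte Carlo check of revenue equivalence, again assuming independent uniform valuations and the (n − 1)/n shading rule from the first-price sketch above:

```python
import random

random.seed(1)
N, TRIALS = 5, 200_000           # five bidders, values uniform on [0, 1]

fp_total = sp_total = 0.0
for _ in range(TRIALS):
    vals = sorted(random.random() for _ in range(N))
    fp_total += vals[-1] * (N - 1) / N   # first-price: winner pays own shaded bid
    sp_total += vals[-2]                 # second-price: winner pays the runner-up
print(f"first-price: {fp_total / TRIALS:.4f}, second-price: {sp_total / TRIALS:.4f}")
# Both hover around (N - 1)/(N + 1) = 0.6667: the same expected revenue.
```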

  For Hebrew University algorithmic game theorist Noam Nisan, this awesomeness has an air to it that’s nearly utopian. “You would like to get some kind of rules of society where it’s not worthwhile to lie, and then people won’t lie so much, right? That’s the basic idea. From my point of view, the amazing thing about Vickrey is that you wouldn’t expect that in general it’s possible to do that, right? Especially in things like an auction, where of course I want to pay less, how can you ever get— And then yet Vickrey shows, here is the way to do that. I think that’s really fantastic.”

  In fact, the lesson here goes far beyond auctions. In a landmark finding called the “revelation principle,” Nobel laureate Roger Myerson proved that any game that requires strategically masking the truth can be transformed into a game that requires nothing but simple honesty. Paul Milgrom, Myerson’s colleague at the time, reflects: “It’s one of those results that as you look at it from different sides, on the one side, it’s just absolutely shocking and amazing, and on the other side, it’s trivial. And that’s totally wonderful, it’s so awesome: that’s how you know you’re looking at one of the best things you can see.”

  The revelation principle may seem hard to accept on its face, but its proof is actually quite intuitive. Imagine that you have an agent or a lawyer who will be playing the game for you. If you trust them to represent your interests, you’re going to simply tell them exactly what you want, and let them handle all of the strategic bid-shading and the recursive strategizing on your behalf. In the Vickrey auction, the game itself performs this function. And the revelation principle just expands this idea: any game that can be played for you by agents to whom you’ll tell the truth, it says, will become an honesty-is-best game if the behavior you want from your agent is incorporated into the rules of the game itself. As Nisan puts it, “The basic thing is if you don’t want your clients to optimize against you, you’d better optimize for them. That’s the whole proof.… If I design an algorithm that already optimizes for you, there is nothing you can do.”
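
  A toy version of Myerson’s construction, reusing the uniform-valuation shading rule from the first-price sketch as the “agent” whose strategy is folded into the rules (the function names are mine, for illustration):

```python
def strategic_bid(true_value, n_bidders):
    """The shading an expert agent would apply for you in a first-price
    auction (equilibrium for independent uniform valuations)."""
    return true_value * (n_bidders - 1) / n_bidders

def direct_mechanism(reported_values):
    """Revelation-principle construction: players report values, the
    mechanism runs the agent's shading strategy on their behalf, then
    holds the original first-price auction on the shaded bids."""
    n = len(reported_values)
    bids = [strategic_bid(v, n) for v in reported_values]
    winner = max(range(n), key=lambda i: bids[i])
    return winner, bids[winner]

# Reporting honestly reproduces exactly the outcome of strategizing
# for yourself, so there is nothing to gain by misreporting.
print(direct_mechanism([25.0, 10.0]))  # -> (0, 12.5)
```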

 
