Ellsberg, the young superstar, had a taste for crossing up established expectations. After graduating third in his class at Harvard, he had startled his intellectual comrades by enlisting in the Marine Corps, where he served for three years as an infantryman. In 1959, as a Harvard Junior Fellow, he delivered a lecture on strategy in foreign policy at the Boston Public Library, in which he famously contemplated the effectiveness of Adolf Hitler as a geopolitical tactician: “There is the artist to study, to learn what can be hoped for, what can be done with the threat of violence.” (Ellsberg always insisted that he didn’t recommend that the United States adopt Hitler-style strategies, but only wanted to make a dispassionate study of their effectiveness—maybe so, but it’s hard to doubt he was trying to get a rise out of his audience.)
So it’s perhaps no surprise that Ellsberg was not content to accept the prevailing views. In fact, he’d been picking at the foundations of game theory since his undergraduate senior thesis. At RAND, he devised a famous experiment now known as Ellsberg’s paradox.
Suppose there’s an urn* with ninety balls inside. You know that thirty of the balls are red; concerning the other sixty balls, you know only that some are black and some are yellow. The experimenter describes to you the following four bets.
RED: You get $100 if the next ball pulled from the urn is red; otherwise, you get nothing.
BLACK: You get $100 if the next ball is black, otherwise nothing.
NOT-RED: You get $100 if the next ball is either black or yellow, otherwise nothing.
NOT-BLACK: You get $100 if the next ball is either red or yellow, otherwise nothing.
Which bet do you prefer, RED or BLACK? What about NOT-RED versus NOT-BLACK?
Ellsberg quizzed his subjects to find out which of these bets they preferred, given the choice. What he found was that the people he polled tended to prefer RED to BLACK. With RED, you know where you stand: you’ve got a 1-in-3 chance of getting the money. With BLACK, you have no idea what odds to expect. As for NOT-RED and NOT-BLACK, the situation is just the same; Ellsberg’s subjects liked NOT-RED better, preferring the state of knowing that their chance of a payoff is exactly 2/3.
Now suppose you have a more complicated choice: you have to pick two of the bets. And not any two you like: you have to take either “RED and NOT-RED” or “BLACK and NOT-BLACK.” If you prefer RED to BLACK and NOT-RED to NOT-BLACK, it seems reasonable that you prefer “RED and NOT-RED” to “BLACK and NOT-BLACK.”
But now here’s the problem. Picking RED and NOT-RED is the same thing as giving yourself $100. But so is BLACK and NOT-BLACK! How can one be preferable to the other when they’re the same thing?
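You can make the bookkeeping explicit with a few lines of code. Here's a minimal sketch in Python (the ball counts come from the setup above; the loop runs through every possible way the sixty mystery balls could split between black and yellow):

```python
# For every possible number of black balls b, total each pair's payoff
# across the 90 equally likely draws, using integers to avoid rounding.
for b in range(61):
    red, black, yellow = 30, b, 60 - b

    red_and_not_red = 100 * red + 100 * (black + yellow)
    black_and_not_black = 100 * black + 100 * (red + yellow)

    # Exactly one bet in each pair wins on any draw, so each pair totals
    # $100 × 90 no matter what b is: a guaranteed $100 per draw.
    assert red_and_not_red == black_and_not_black == 100 * 90

print("Both paired bets are a sure $100, whatever the urn contains.")
```

However the urn is stocked, the two packages are the same sure thing; only our feelings about them differ.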
For a proponent of expected utility theory, Ellsberg’s results looked very strange. Each bet must be worth a certain number of utils, and if RED has more utility than BLACK, and NOT-RED more than NOT-BLACK, it just has to be the case that RED + NOT-RED is worth more utils than BLACK + NOT-BLACK; but the two are the same. If you want to believe in utils, you have to believe that the participants in Ellsberg’s study are just plain wrong in their preferences; they are bad at calculating, or they’re not paying close attention to the question, or they’re simply crazy. Since the people Ellsberg asked were in fact well-known economists and decision theorists, this conclusion presents its own problems for the status quo.
For Ellsberg, the answer to the paradox is simply that expected utility theory is incorrect. As Donald Rumsfeld would later put it, there are known unknowns and there are unknown unknowns, and the two are to be processed differently. The “known unknowns” are like RED—we don’t know which ball we’ll get, but we can quantify the probability that the ball will be the color we want. BLACK, on the other hand, subjects the player to an “unknown unknown”—not only are we not sure whether the ball will be black, we don’t have any knowledge of how likely it is to be black. In the decision-theory literature, the former kind of unknown is called risk, the latter uncertainty. Risky strategies can be analyzed numerically; uncertain strategies, Ellsberg suggested, were beyond the bounds of formal mathematical analysis, or at least beyond the bounds of the flavor of mathematical analysis beloved at RAND.
None of which is to deny the incredible utility of utility theory. There are many situations, lotteries being one, where the mystery we’re subject to is all risk, governed by well-defined probabilities; and there are many more circumstances where “unknown unknowns” are present but play only a small role. We see here the characteristic push and pull of the mathematical approach to science. Mathematicians like Bernoulli and von Neumann construct formalisms that apply a penetrating light to a sphere of inquiry only dimly understood before; mathematically fluent scientists like Ellsberg work to understand the limits of those formalisms, to refine and improve them where it’s possible to do so, and to post strongly worded warning signs where it’s not.
Ellsberg’s paper is written in a vivid, literary style uncharacteristic of technical economics. In his concluding paragraph, he writes of his experimental subjects that “the Bayesian or Savage approach gives wrong predictions and, by their lights, bad advice. They act in conflict with the axioms deliberately, without apology, because it seems to them the sensible way to behave. Are they clearly mistaken?”
In the world of cold war Washington and RAND, decision theory and game theory were held in the highest intellectual esteem, seen as the scientific tools that would win the next world war, as the atom bomb had won the last one. That those tools might actually be limited in their application, especially in contexts for which there was no precedent and thus no means of estimating probabilities—like, say, the instantaneous reduction of the human race to radioactive dust—must have been at least a little troubling for Ellsberg. Was it here, over a disagreement about math, that his doubts about the military establishment really began?
THIRTEEN
WHERE THE TRAIN TRACKS MEET
The notion of utility helps make sense of a puzzling feature of the Cash WinFall story. When Gerald Selbee’s betting group bought massive quantities of tickets, they used Quic Pic, letting the lottery’s computers pick the numbers on their slips at random. Random Strategies, on the other hand, picked their numbers themselves; this meant they had to fill out hundreds of thousands of slips by hand, then feed them through the machines at their chosen convenience stores one by one, a massive and incredibly dull undertaking.
The winning numbers are completely random, so every lottery ticket has the same expected value; Selbee’s 100,000 Quic Pics would bring in the same amount of prize money, on average, as Harvey and Lu’s 100,000 artisanally marked tickets. As far as expected value is concerned, Random Strategies did a lot of painful work for no reward. Why?
Consider this case, which is simpler but of the same nature. Would you rather have $50,000, or would you rather have a 50/50 bet between losing $100,000 and gaining $200,000? The expected value of the bet is
(1/2) × (−$100,000) + (1/2) × ($200,000) = $50,000,
the same as the cash. And there is indeed some reason to feel indifferent between the two choices; if you made that bet time after time after time, you’d almost certainly make $200,000 about half the time and lose $100,000 the other half. Imagine you alternated winning and losing: after two bets you’ve won $200,000 and lost $100,000 for a net gain of $100,000, after four bets you’re up $200,000, after six bets $300,000, and so on: a profit of $50,000 per bet on average, just the same as if you’d gone the safe route.
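If you'd rather not trust the algebra, you can let a computer grind out the long run. A quick simulation sketch (the dollar figures are from the bet above; the rest is plumbing):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

trials = 1_000_000
total = 0
for _ in range(trials):
    # A fair coin flip: lose $100,000 or win $200,000.
    total += random.choice([-100_000, 200_000])

print(total / trials)  # lands very near $50,000 per bet
```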
But now pretend for a moment that you’re not a character in a word problem in an economics textbook, but rather an actual person—an actual person who does not have $100,000 cash on hand. When you lose that first bet and your bookie—let us say your big, angry, bald, power-lifting bookie—comes to collect, do you say, “An expected value calculation shows that it’s very likely I’ll be able to pay you back in the long run”? You do not. That argument, while mathematically sound, will not achieve its goals.
If you’re an actual person, you should take the $50,000.
This reasoning is well captured by utility theory. If I’m a corporation with limitless funds, losing $100,000 might not be so bad—let’s say it’s worth −100 utils—while winning $200,000 brings me 200 utils. In that case, dollars and utils might match up to be nicely linear; a util is just another name for a grand.
But if I’m an actual person with meager savings, the calculus is rather different. Winning $200,000 would change my life more than it would the corporation’s, so maybe it’s worth more to me—say 400 utils. But losing $100,000 doesn’t just clean out my bank account, it puts me in hock to the angry bald power lifter. That’s not just a bad day for the balance sheet, it’s a serious injury hazard. Maybe we rate it at −1,000 utils. In which case the expected utility of the bet is
(1/2) × (−1,000) + (1/2) × (400) = −300.
An expected utility below zero means the bet is not only worse than a sure $50,000, it’s worse than doing nothing at all. The 50% chance of being totally wiped out is a risk you just can’t afford—at least, not without the promise of a much bigger reward.
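Spelled out in code, with the util numbers from the text (the corporation's one-util-per-$1,000 scale is the assumed baseline):

```python
def expected_utility(p_win, utils_win, utils_lose):
    """Expected utility of a bet with two outcomes."""
    return p_win * utils_win + (1 - p_win) * utils_lose

# Deep-pocketed corporation: utils track dollars, one util per $1,000.
print(expected_utility(0.5, 200, -100))   # +50: the bet beats doing nothing

# Actual person: the windfall is worth 400 utils, but the loss (and the
# angry power lifter) costs 1,000.
print(expected_utility(0.5, 400, -1000))  # -300: worse than not betting
```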
This is a mathematical way of formalizing a principle you already know: the richer you are, the more risks you can afford to take. Bets like the one above are like risky stock investments with a positive expected dollar payoff; if you make a lot of these investments, you might sometimes lose a bunch of cash at once, but in the long run you’ll come out ahead. The rich person, who has enough reserves to absorb those occasional losses, invests and gets richer; the nonrich people stay right where they are.
A risky investment can make sense even if you don’t have the money to cover your losses—as long as you have a backup plan. A certain market move might come with a 99% chance of making a million dollars and a 1% chance of losing $50 million. Should you make that move? It has a positive expected value, so it seems like a good strategy. But you might also balk at the risk of absorbing such a big loss—especially because small probabilities are notoriously hard to be certain about.* The pros call moves like this “picking up pennies in front of a steamroller”—most of the time you make a little money, but one small slip and you’re squashed.
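Both the expected value and the way the danger compounds are quick to check, using the probabilities from the example:

```python
p_squash = 0.01  # the 1% chance of losing $50 million

# Expected value of a single move:
print(0.99 * 1_000_000 - p_squash * 50_000_000)  # $490,000, nicely positive

# But keep making the move, and the steamroller gets harder to dodge:
for n in (10, 100, 500):
    print(n, round(1 - (1 - p_squash) ** n, 3))
# 10 moves: ~0.096; 100 moves: ~0.634; 500 moves: ~0.993
```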
So what do you do? One strategy is to leverage yourself up to the eyeballs until you’ve got enough paper assets to make the risky move, but scaled up by a factor of one hundred. Now you’re very likely to make $100 million per transaction—great! And if the steamroller gets you? You’re out $5 billion. Except you’re not—because the world economy, in these interconnected times, is a big rickety tree house held together with rusty nails and string. An epic collapse of one part of the structure runs a serious risk of pulling down the whole shebang. The Federal Reserve has a strong disposition not to let that happen. As the old saying goes, if you’re down a million bucks, it’s your problem; but if you’re down five billion bucks, it’s the government’s problem.
This financial strategy is cynical, but it often works—it worked for Long-Term Capital Management in the 1990s, as chronicled in Roger Lowenstein’s superb book When Genius Failed, and it worked for the firms that survived, and even profited from, the financial collapse of 2008. Absent fundamental changes that seem nowhere in sight, it will work again.*
Financial firms are not human, and most humans, even rich humans, don’t like uncertainty. The rich investor might happily take the 50-50 bet with an expected value of $50,000, but would probably prefer to take the $50,000 outright. The relevant term of art is variance, a measure of how widely spread out the possible outcomes of a decision are, and how likely one is to encounter the extremes on either end. Among bets with the same expected dollar value, most people, especially people without limitless liquid assets, prefer the one with lower variance. That’s why some people invest in municipal bonds, even though stocks offer higher rates of return in the long run. With bonds, you’re sure you’re going to get your money. Invest in stocks, with their greater variance, and you’re likely to do better—but you might end up much worse.
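Variance has a precise definition: it is the expected squared distance of the outcome from its mean. A sketch comparing the sure $50,000 with the 50/50 bet from earlier in the chapter:

```python
def mean_and_variance(bet):
    """bet: list of (dollar outcome, probability) pairs."""
    mean = sum(x * p for x, p in bet)
    variance = sum(p * (x - mean) ** 2 for x, p in bet)
    return mean, variance

sure_thing = [(50_000, 1.0)]
risky_bet = [(-100_000, 0.5), (200_000, 0.5)]

for bet in (sure_thing, risky_bet):
    mean, variance = mean_and_variance(bet)
    print(mean, variance ** 0.5)
# Same $50,000 mean; standard deviation $0 versus $150,000.
```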
Battling variance is one of the main challenges of managing money, whether you call it that or not. It’s because of variance that retirement funds diversify their holdings. If you have all your money in oil and gas stocks, one big shock to the energy sector can torch your whole portfolio. But if you’re half in gas and half in tech, a big move in one batch of stocks needn’t be accompanied by any action in the other; it’s a lower-variance portfolio. You want to have your eggs in different baskets, lots of different baskets; this is exactly what you do when you stash your savings in a giant index fund, which distributes its investments across the entire economy. The more mathematically minded financial self-help books, like Burton Malkiel’s A Random Walk down Wall Street, are fond of this strategy; it’s dull, but it works. If retirement planning is exciting, you’re doing it wrong.
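The variance-lowering effect of splitting your money is easy to see in a toy simulation. The sketch below assumes two independent stocks with identical, made-up return distributions; real sectors are correlated, so diversification helps somewhat less than this ideal case:

```python
import random

random.seed(1)

def yearly_return():
    # Hypothetical stock: equally likely to gain 30% or lose 10%.
    return random.choice([0.30, -0.10])

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

trials = 100_000
all_in = []   # entire savings in one stock
split = []    # half in each of two independent stocks
for _ in range(trials):
    a, b = yearly_return(), yearly_return()
    all_in.append(a)
    split.append(0.5 * a + 0.5 * b)

print(variance(split) / variance(all_in))  # ~0.5: same mean, half the variance
```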
Stocks, at least in the long run, tend to get more valuable on average; investing in the stock market, in other words, is a positive expected-value move. For bets that have negative expected value, the calculus flips; people hate a sure loss as much as they like a sure win. So you go for bigger variance, not smaller. You don’t see people swagger up to the roulette wheel and lay one chip on every number; that’s just an unnecessarily elaborate way of handing chips to the dealer.
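That last claim is a two-line computation. A sketch, assuming an American wheel, since the text doesn't specify (38 pockets, single-number bets paying 35 to 1):

```python
pockets, payout = 38, 35

# One chip on every number: exactly one bet wins, returning its 35-chip
# payout plus the original chip, while the other 37 chips are lost.
net = (payout + 1) - pockets
print(net)  # -2 chips, guaranteed, on every spin: zero variance, sure loss
```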
What does all this have to do with Cash WinFall? As we said at the top, the expected dollar value of 100,000 lottery tickets is what it is, no matter which tickets you buy. But the variance is a different story. Suppose, for instance, I decide to go into the high-volume betting game, but I take a different approach; I buy 100,000 copies of the same ticket.
If that ticket happens to match 4 out of the 6 numbers in the lottery drawing, then I’m the lucky holder of 100,000 pick-4 winners, and I’m basically going to sweep up the entire $1.4 million prize pool, for a tidy 600% profit. But if my set of numbers is a loser, I lose my whole $200,000 pile. That’s a high-variance bet, with a big chance of a big loss and a small chance of an even bigger win.
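To put numbers on the variance gap, here's a toy Monte Carlo. Every figure in it is hypothetical (a thousand-ticket lottery with an invented prize); only the structure mirrors the situation above: two strategies with identical expected value, one of them all-in on a single ticket.

```python
import random

random.seed(2)

# Hypothetical lottery: 1,000 possible tickets at $1 each, one winning
# combination per drawing, each copy of the winner paying $2,500.
N_COMBOS, PRIZE, N_TICKETS = 1000, 2500, 100

def profit_same_ticket():
    # 100 copies of one ticket: all-or-nothing.
    win = random.randrange(N_COMBOS) == 0
    return (N_TICKETS * PRIZE if win else 0) - N_TICKETS

def profit_spread_out():
    # 100 distinct tickets: a win a hundred times as often, for a
    # hundredth of the payout.
    win = random.randrange(N_COMBOS) < N_TICKETS
    return (PRIZE if win else 0) - N_TICKETS

def mean_and_sd(xs):
    m = sum(xs) / len(xs)
    return round(m), round((sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5)

trials = 200_000
print(mean_and_sd([profit_same_ticket() for _ in range(trials)]))
print(mean_and_sd([profit_spread_out() for _ in range(trials)]))
# Both hover near a mean profit of $150, but the same-ticket strategy's
# standard deviation (~$7,900) dwarfs the spread-out one's (~$750).
```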
So “don’t put all your money on one number” is pretty good advice—much better to spread your bets around. But wasn’t that exactly what Selbee’s gang was doing by using the Quic Pic machine, which chooses numbers at random?
Not quite. First of all, while Selbee wasn’t putting all his money on one ticket, he was buying the same ticket multiple times. At first, that seems strange. At his most active, he was buying 300,000 tickets per drawing, letting the computer pick his numbers randomly from almost 10 million choices. So his purchases amounted to a mere 3% of the possible tickets; what are the odds he’d buy the same ticket twice?
Actually, they’re really, really good. Old chestnut: bet the guests at a party that two people in the room have the same birthday. It had better be a good-sized party—say there are thirty people there. Thirty birthdays out of 365 options* isn’t very many, so you might think it pretty unlikely that two of those birthdays would land on the same day. But the relevant quantity isn’t the number of people: it’s the number of pairs of people. It’s not hard to check that there are 435 pairs of people,* and each pair has a 1 in 365 chance of sharing a birthday; so in a party that size you’d expect to see a pair sharing a birthday, or maybe even two pairs. In fact, the chance that two people out of thirty share a birthday turns out to be a little over 70%—pretty good odds. And if you buy 300,000 randomly chosen lottery tickets out of 10 million options, the chance of buying the same ticket twice is so close to 1 that I’d rather just say “it’s a certainty” than figure out how many more 9s I’d need after “99.9%” to specify the probability on the nose.
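Both of those numbers are quick to verify. A sketch, taking Cash WinFall to be a choose-6-of-46 game, which matches the “almost 10 million” above (46 choose 6 is 9,366,819):

```python
import math

# Probability that at least two of thirty people share a birthday,
# with the usual simplification of 365 equally likely days:
p_distinct = math.prod((365 - i) / 365 for i in range(30))
print(1 - p_distinct)  # ~0.706, the "little over 70%"

# Same idea for 300,000 random tickets out of 9,366,819 possibilities.
# The product underflows to zero, so add logarithms instead.
n, N = 300_000, math.comb(46, 6)
log10_p_distinct = sum(math.log10((N - i) / N) for i in range(n))
print(log10_p_distinct)
# About -2109: the chance of NO repeated ticket is roughly 1 in 10^2109,
# which is to say the repeat is as certain as anything gets.
```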
And it’s not just repeated tickets that cause the trouble. As always, it can be easier to see what’s going on with the math if we make the numbers small enough that we can draw pictures. So let’s posit a lottery draw with just seven balls, of which the state picks three as the jackpot combination. There are thirty-five possible jackpot combos, corresponding to the thirty-five different ways that three numbers can be chosen from the set 1, 2, 3, 4, 5, 6, 7. (Mathematicians like to say, for short, “7 choose 3 is 35.”) Here they are, in numerical order:
123 124 125 126 127
134 135 136 137
145 146 147
156 157
167
234 235 236 237
245 246 247
256 257
267
345 346 347
356 357
367
456 457
467
567
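The table is small enough to generate and count by machine; a quick sketch:

```python
from itertools import combinations

combos = list(combinations(range(1, 8), 3))
print(len(combos))  # 35, i.e., "7 choose 3 is 35"
```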
Say Gerald Selbee goes to the store and uses the Quic Pic to buy seven tickets at random. His chance of winning the jackpot remains pretty small. But in this lottery, you also get a prize for hitting two out of three numbers. (This particular lottery structure is sometimes called the Transylvanian lottery, though I could find no evidence that such a game has ever been played in Transylvania, or by vampires.)
Two out of three is a pretty easy win. So I don’t have to keep typing “two out of three,” let’s call a ticket that wins this lesser prize a deuce. If the jackpot drawing is 1, 4, and 7, for example, the four tickets with a 1, a 4, and some number other than 7 are all deuces. And besides those four, there are the four tickets that hit 1-7 and the four that hit 4-7. So twelve out of thirty-five, just over a third of the possible tickets, are deuces. Which suggests there are probably at least a couple of deuces among Gerald Selbee’s seven tickets. To be precise, you can compute that Selbee has, on average, 7 × 12/35 = 2.4 deuces among his seven tickets.
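You can check the deuce count, and that last computation, by brute force over the thirty-five combinations. A sketch using the 1, 4, 7 jackpot from the example:

```python
from itertools import combinations

jackpot = {1, 4, 7}
combos = [set(c) for c in combinations(range(1, 8), 3)]

# A deuce matches exactly two of the three jackpot numbers.
deuces = sum(1 for c in combos if len(c & jackpot) == 2)
print(deuces)                    # 12 of the 35 possible tickets

# Each random Quic Pic ticket is a deuce with probability 12/35,
# so seven of them average:
print(7 * deuces / len(combos))  # 2.4 deuces
```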