9
The Irrational Game: Why There’s No Perfect System
ROBERT NORTHCOTT
I walked into the room with a plan. After many Friday-night games with friends, this was my first tournament in a real casino. I had read the books, and now I had formulated the perfect Hold’em betting strategy. While the suckers followed their hunches, instincts, and anecdotes, I would be cutting through all that loser-talk. I would be the robot, the machine, remorselessly accumulating. (That was just as well, since I was also playing with money I did not strictly have, having borrowed it out of next month’s salary.)
Every year brings scores of books from poker experts, but are there really any can’t-lose systems out there? Does it really pay to play the odds? Once you start to explore these questions more deeply, you soon find yourself involved in philosophical questions about truth, probability, and chance. In short, you find yourself in the philosophy of science.
The Luck of the Draw
The crucial hand this night turned out to be one where my opponent was staying in, seemingly against all poker sense, chasing a very unlikely inside straight. I was set to win big, become the richest stud at my table, one of the leaders overall. . . when, by fluke, on the river, he actually made it. This was the definition of a bad beat—I had played correctly, the odds were greatly in my favor, but by pure luck the cards came through for the other guy anyway.
A first question is: what does luck really mean here? Ought my opponent to have bet on completing his straight? The odds said that it was crazy unlikely. Most of the time he would not have completed his straight, therefore in this situation it was bad strategy to bet so much on such a low probability. The fact that he did then complete it and win big was due to luck, not skill. The skill comes in playing the percentages correctly; the luck in how, after doing that, the cards then fall. No serious player can hope to compete at poker without a working knowledge of card odds. Should I bet twenty bucks to win a pot of two hundred? If it’s a fifty-fifty shot, of course a twenty-for-two hundred gamble is good business, but a one-in-a-hundred shot makes it the play of a fool. In the long run, luck evens out, so a night spent pursuing only gambles with favorable odds will usually prove very profitable—and “usually” is the way to bet if you want to win. Leave the long shots for the losers in the bar.
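As a minimal sketch, purely for illustration, the pot-odds arithmetic above can be written as a simple expected-value calculation under a two-outcome model; the winning probabilities are just the ones assumed in the examples.

```python
def call_ev(bet, pot, win_prob):
    """Expected chips gained by calling `bet` to win a pot of `pot`,
    in a simple two-outcome model with winning probability `win_prob`."""
    return win_prob * pot - (1 - win_prob) * bet

# The twenty-for-two-hundred gamble from the text:
print(call_ev(20, 200, 0.5))   # fifty-fifty shot: 0.5*200 - 0.5*20 = +90, good business
print(call_ev(20, 200, 0.01))  # one-in-a-hundred shot: about -17.8, the play of a fool
```

A positive expected value is what playing the percentages amounts to: over many such calls, the average result per call tends toward that number, which is exactly the long-run evening-out appealed to above.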
At first all this sounds very reasonable, but look more closely. These supposedly objective probabilities more often simply reflect our own ignorance. In this particular case, the card my opponent was about to be dealt was “decided” already, sitting waiting on top of the deck. That is, it was already certain that he would hit his straight; it’s just that neither of us knew that. So in a funny way, his bet was the correct move after all.
All the same, there does remain something objectively compelling about the relevant probability calculations. When the first card is dealt from a full and shuffled deck, for example, an ace of spades really is a one in fifty-two chance of coming up, we feel, not one in three, and the rational player needs to know that. What gives? A common move is to reason as follows: given the four face-up cards and my opponent’s two hole cards, he knew the forty-six remaining cards that the river card could have been. And out of those forty-six, only four would have completed his straight. Therefore his objective odds were four in forty-six. Actually though, in this particular case, his true odds were either 0 or 1, depending on whether or not the next card waiting to be dealt actually was one of the four he wanted.
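To make the four-in-forty-six figure concrete, here is a quick sketch in Python. The particular cards are invented for illustration, since the actual hand is not specified here: suppose the opponent holds 9♦ 8♣ and the board reads 5♥ 6♠ K♦ Q♣, so only a seven (four of them remain unseen) completes the inside straight 5-6-7-8-9.

```python
from itertools import product

RANKS = "23456789TJQKA"
SUITS = "cdhs"
deck = {rank + suit for rank, suit in product(RANKS, SUITS)}  # all 52 cards

# Invented example: opponent's two hole cards plus the four face-up board cards.
known = {"9d", "8c", "5h", "6s", "Kd", "Qc"}
unseen = deck - known                                # the 46 cards the river could be
outs = {card for card in unseen if card[0] == "7"}   # the four sevens complete the straight

print(len(unseen), len(outs))     # 46 4
print(len(outs) / len(unseen))    # 0.0869..., i.e. four in forty-six
```

The point in the main text still stands: this number is computed relative to what the player can see, and a better-informed observer (the magician, the camera) would compute something different from exactly the same deck.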
And who’s to say that four in forty-six was the correct objective chance facing my opponent anyway? I don’t mean that perhaps he had peeked at the card or otherwise cheated. Rather, imagine a trained magician who noted carefully the cards of the previous hand, and then further noted carefully the croupier’s shuffling technique between hands. Close observation would enable the magician still to retain some information about the order of cards in the new hand, and hence objectively to know that the odds in this case were perhaps either slightly above or below four in forty-six. This new probability seems to be more objective than the crude old four in forty-six.
Next imagine an even more skilled magician, this time one able to track the shuffled cards even better and perhaps able to fix on a probability now very different from four in forty-six. Perhaps indeed, had she successfully tracked a key card, she might even have noted that that very card was the next one up, and so known that the chance of completing the straight was actually one hundred percent. Similarly, a computer equipped with a camera might also be able to reliably pick out the probability as either 0 or 1. In other words, which number we select as the objective probability in fact seems to depend on very messy un-objective personal factors like just how good I am at tracking shuffled cards.
So did I really suffer a bad beat after all? Well, on the assumption that neither I nor my opponent was preternaturally skilled at tracking shuffled cards, yes I did. But if I had been more like the magician, then I might have known that the objective probability was actually higher than four in forty-six, and perhaps then I should have seen it coming that my opponent would complete his inside straight.
Imagine an incompetent beginner who forgets his own hole cards when calculating the odds of completing a straight and goes wrong because of that. No one would say he suffered a bad beat; rather we would just say he was incompetent. So why should I say that I suffered a bad beat in my case? Was I not just incompetent too? Relative to a magician I was. Generally, all talk of luck and bad beats seems not to be absolutely objective after all. Rather, such concepts are only relative to whatever we happen to deem the normal level of observation skills.
Bad Beats in a Random World
These classic philosophical difficulties in making sense of objective probability tie into an even deeper issue, that of determinism. Is any event truly chancy, or is all uncertainty merely a result of our ignorance? For example, if only we knew the exact micro-composition of a coin, the exact movements of the air molecules and the exact strength with which the coin had been flipped, could we not always calculate with certainty whether it would come up heads or tails? If yes, then saying that a coin flip is fifty-fifty is merely to express our ignorance of the relevant micro-details rather than to capture any deep physical fact about the world. Under the influence of Newtonian physics, for centuries many scientists and philosophers thought that deep down the universe really is utterly deterministic and predictable in this way. If only we knew every detail, nothing would be uncertain to us.1 More recently though, quantum mechanics has often been taken to imply that perhaps the universe is fundamentally chancy after all, and uncertainty is not a symptom merely of our ignorance. Albert Einstein famously rejected this latter view, quipping that “God does not play dice with the universe.” But many others think he does (God, that is, not Einstein). The controversy continues today.
What if we also apply the idea of determinism to the human brain? After all, our brains are presumably part of the physical universe too. But then in principle we should be able to predict other humans’ behavior—I could know in advance whether my opponent would fold, and really know, with certainty. But then could I not also know my own future actions? In fact, what would there be for me to decide about at all? After all, my future decisions, and all my opponents’ too, would already be determined and perhaps knowable. Poker would become rather boring. So it seems that what philosophers call “decision theory,” one example of which is poker strategy, makes no sense without the assumption of free will, that is, without the assumption that we are free to decide our actions one way or the other, just as we please.2
So why not simply assume free will when talking about poker then? Once outside the philosophy classroom that’s what we usually do anyway. Okay, but then our previous issue arises again—how can we justify declaring one poker strategy objectively superior to another? For as we have seen, as soon as we try to do this, we get entangled with messy issues of how much decision-makers know . . . including, ultimately, how much they do or do not know about their own and others’ brains.
What follows from all this? That any winning poker system making use of objective poker odds is actually at best only winning relative to a particular level of ignorance or imperfect knowledge. That in turn leaves us, like all ignorant people, vulnerable to rude awakenings. And what we call a bad beat is just our label for such an awakening.
What Might Have Been . . .
So perhaps my best strategy was in fact best only relative to my imperfect knowledge, but nevertheless, a bad beat is still galling. And the hand with the straight was especially galling precisely because it was a big hand, and losing it left me dead in the water and virtually chipless. If the guy had not made that fluky straight, then I would have been winning.
That now brings up a thorny new issue. Did this one moment really cost me, as I imagined afterwards, the whole tournament? To assess that, we’d need to know what would have happened had I instead won that hand. How would the rest of the tournament have unfolded? We can never know for sure since of course there is no way of directly measuring or observing things that never actually happen. How then can we say anything sensible about them? Such hypothetical situations are known to philosophers as counterfactuals, since they are counter to the facts of what actually did happen. And they are notoriously tricky to handle.
Suppose that the hand in which my opponent made his fluky straight was worth X chips. Then clearly, if I had won I would have been X chips better off. In order to see what would have happened after that, could we not just re-run the tape of what actually did happen, and add X chips to my score? But sooner or later this method would break down. In particular, in the real tournament I was quickly eliminated, after which of course it continued without me. But with an extra X chips I would not yet have been eliminated, so any tape running on without me would then no longer be right.
Perhaps, given the psychological and strategic impact of winning that hand, subsequent hands would have worked out differently. My spirits would have soared and my opponents’ sunk; with my extra chip stack I could have commanded the table better, and in the end I would have won the whole tournament. Well, perhaps—and then again, perhaps not. We can guess, but how can we know which guess is the most reasonable?
Here’s a further difficulty. In order for my opponent not to have made his fluky straight, the order of the deck would have to have been different than it actually was. It seems to follow that this would have led to the order being different on subsequent deals too, hence that all the subsequent hands would have featured totally different cards to those that were dealt in reality. Moreover, my different reaction to winning rather than losing the hand with the straight would presumably have had some micro-impact on the dealer’s brain, perhaps resulting in her shuffling the cards slightly differently, hence again resulting in totally different subsequent hands. Given these kinds of considerations, any hypothetical extrapolation starts to seem very difficult. Maybe we need to know in exactly what way the order of the deck was different in order to know just what these new hands would have been. For instance, how do we know that, if I had won the hand with the straight, I would not then have been dealt pocket aces in every subsequent hand?
These and other complications meant that for many years philosophers were rather skeptical about whether any rigorous evaluation of counterfactual claims could really be made at all. There don’t seem to be any actual facts we can appeal to directly.
Perhaps then, the solution is just to cleanse ourselves of speaking about these messy counterfactuals. But the trouble is that we unavoidably talk about counterfactuals all the time. Some examples: he wouldn’t be sick if he had taken the medicine, if you drop the cup it’ll smash, if you don’t practice you won’t improve, if you’d called and told me I could have done something about it, if the government passed this law then crime would soar . . . and so on. Generally, it’s often impossible for us to assign moral praise or blame, to explain some occurrence, or to decide between alternative actions without thinking about counterfactuals. For instance, it is right to warn you not to drop the cup only if the associated counterfactual claim—that if you were to drop it then it would indeed smash—is also right.
Looking at the previous examples more carefully, it seems that there is an intimate connection between counterfactuals and saying that something caused something else. (Exactly how intimate is controversial.) For example, claiming that dropping the cup caused it to smash might be understood as saying just that if you had not dropped it then it would not have smashed. Similarly, when I nonchalantly claim that it was only my bad beat with the straight that caused me not to win everything, this can be understood as if I had won that hand then I would have won the whole tournament. This aspect of causation was first mentioned by the famous eighteenth-century Scottish philosopher David Hume. In recent times, the late American philosopher David Lewis (1941–2001) particularly championed the idea that counterfactuals in fact capture causation’s very essence.
Beginning in the late 1960s, Lewis and others proposed a clever and complicated formal system for evaluating just the kind of troublesome counterfactuals we have been looking at. In particular, he proposed an apparatus of “possible worlds” in addition to our actual world, where the different ways that things could have been—but in fact aren’t—still do occur, only in parallel worlds instead. Some possible worlds are “closer” or “more similar” to the actual world than are others. A counterfactual claim can then be evaluated, roughly speaking, according to whether the possible worlds in which it is true are closer to the actual world than are possible worlds in which alternative counterfactuals are true.
What exactly similarity amounts to, and to what extent talk of possible worlds is merely metaphorical, are themselves topics of lively debate. Other controversies remain too, and so far there’s still no agreed solution. Therefore, regarding counterfactuals, as yet philosophers are only able to say: can’t live with ’em, can’t live without ’em. That is, even though we have no agreed way of actually deciding whether any given counterfactual is right or not, we still can’t help invoking them anyway.
Among other things, I’m afraid it follows that all those endless arguments in sports will . . . remain endless, which may be good news for TV sportscasters but bad news for everyone else. Thus we’re still going to hear: “He’d have won if he’d made that shot,” “He’d have won if he was mentally stronger,” “What a great career he would have had if he hadn’t gotten injured,” and so on. Well, no one can ever know! And no one can ever know either whether I really would have won that tournament if it weren’t for the bad beat – which is convenient, because that means I can carry on forever explaining to you all why indeed I would have. . .
Finally, all this has implications not just for evaluating whiny hard-luck stories but also for any winning system at poker. Put simply, why should I play one expert’s system? The answer must be, at some level, that by doing so I would stand a better chance of winning than otherwise. Or, to phrase it differently, perhaps playing one system will cause me to have a better chance. Either way, the answer makes no sense without some problematic appeal to counterfactuals.
Game Theory
Maybe there is one branch of decision theory that can make everything very easy. For in principle it might give us an optimal strategy for poker that we can all just follow mindlessly. Poker would become a matter merely of mechanically following this single set of golden rules, and psychology would be rendered as irrelevant as if we were playing tic-tac-toe. This is the promise of game theory.3
Notwithstanding its name, game theory is not just about frivolous parlor games. Rather, it offers a rigorous mathematical treatment of situations of strategic interaction where your best move depends in part on what other people do. Of course, what they do depends in part on what they think you will do, so what you will do depends in part on what you think they think you will do, so what they will do depends in part . . . You get the point. It is one of the achievements of game theory to cut through this seemingly endless chain of second and third guessing to arrive at concrete optimal strategies: all things considered, what you should do in this situation is such-and-such. Poker is a particularly good example of strategic interaction, for your optimal betting strategy depends in part on how the other players will bet, which in turn depends in part on how they think you will bet, which in turn . . . It’s said that the inspiration for the development of game theory came to its founder, the Hungarian mathematician John von Neumann, precisely while playing poker. (Von Neumann, incidentally, was later an alleged model for the movie character Dr Strangelove.) And many of the pioneers of game theory in the 1940s and 1950s, such as John Nash of Beautiful Mind fame, were also inspired by the game.
The dream is that one day someone will work out a complete theoretical analysis for games like poker. Unfortunately the mathematical complications multiply horribly swiftly, and so far no one is anywhere near achieving such a triumph. For now then, you can safely play against a game theorist without fear. Some progress was made in the early days but only with respect to greatly simplified versions, typically with only a single bet each, and only one level of permitted raise.4 Nowadays, even these efforts have rather petered out.
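For a flavor of what even a greatly simplified analysis involves, here is a toy sketch; the two-by-two payoff matrix is invented purely for illustration and is not taken from von Neumann’s model or any other in the literature. A bettor commits to bluffing often or rarely, an opponent commits to calling often or folding often, and the standard closed-form formula for a 2x2 zero-sum game gives each side’s optimal mixed strategy.

```python
from fractions import Fraction

def solve_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with payoff matrix
    [[a, b], [c, d]] to the row player, assuming no pure-strategy saddle point
    (so the denominator below is non-zero). Returns (p, q, v): the row player
    plays row 1 with probability p, the column player plays column 1 with
    probability q, and v is the value of the game to the row player."""
    denom = (a - b) + (d - c)
    p = Fraction(d - c, denom)           # leaves the column player indifferent
    q = Fraction(d - b, denom)           # leaves the row player indifferent
    v = Fraction(a * d - b * c, denom)   # expected payoff at equilibrium
    return p, q, v

# Invented payoffs (chips to the bettor):
#                       opponent calls often   opponent folds often
# bettor bluffs often           -2                     +4
# bettor bluffs rarely          +1                      0
p, q, v = solve_2x2_zero_sum(-2, 4, 1, 0)
print(f"bluff often with probability {p}, call often with probability {q}, value {v}")
# -> bluff often with probability 1/7, call often with probability 4/7, value 4/7
```

Even this toy shows the characteristic game-theoretic result: the optimal play is a mixed strategy, bluffing at a fixed frequency rather than on a hunch, which is precisely the kind of golden rule that a complete analysis would deliver at full scale.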
But suppose one day someone does hit the jackpot and a complete analysis of poker appears, one that applies to versions played by real people and not just mathematical models. Would that then spell the death of the game? Would we no longer see ESPN specials on the World Series of Poker, just as today we don’t see ESPN specials on the world series of tic-tac-toe? Or maybe the clever theorist would keep such knowledge to herself like a secret golden code, all the better to astonish us by mysteriously sweeping all the winnings at every table she ever visited? But one recent branch of philosophy of science suggests that all this is overstated. In particular, once we look in detail at how abstract scientific models like those of game theory are applied to real-world phenomena like actual poker games with actual human players, we see that a golden code, even were it ever to be discovered, would in reality not be so golden after all.