The Signal and the Noise
What Angelo eventually came to realize is that, for all his skill, his periodic bouts of tilt prevented him from doing much more than scrape by. As we have seen, it is considerably easier to lose money at poker when you play badly than to make money when you are playing well. Meanwhile, the edge even for a long-term winning player is quite small. It is quite plausible for someone who plays at a world-class level 90 percent of the time to lose money from the game overall if he tilts the other 10 percent of the time.
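A quick back-of-the-envelope sketch shows how this can happen. The hourly win and loss rates below are illustrative assumptions on my part, not figures from Angelo; the point is only that a small edge sustained 90 percent of the time is easily swamped by heavier losses the other 10 percent:

```python
# Illustrative sketch: a small edge most of the time can be wiped out
# by occasional tilt. All rates below are assumed for illustration.

WIN_RATE_A_GAME = 5.0    # dollars won per hour while playing world-class poker (assumed)
LOSS_RATE_TILT = -60.0   # dollars lost per hour while on tilt (assumed)
SHARE_A_GAME = 0.90      # fraction of hours played well
SHARE_TILT = 0.10        # fraction of hours spent tilting

expected_hourly = SHARE_A_GAME * WIN_RATE_A_GAME + SHARE_TILT * LOSS_RATE_TILT
print(f"Expected result: {expected_hourly:+.2f} dollars per hour")
# Expected result: -1.50 dollars per hour: a long-term loser despite
# playing at a world-class level 90 percent of the time.
```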
Angelo realized the full extent of his tilt problems once he was in his forties, after he had started writing about the game and coaching other players. He is naturally a perceptive person, and what started out as strategy sessions often turned into therapy sessions.
“I was coaching so many different types of people with so many different types of problems related to poker,” he told me. “The problems were very easy to see in another person. I’d say here’s a guy that’s just as smart as me. And I know for a fact he’s delusional about his skill. And I know that if everyone else is delusional, I have to be too.”
Every poker player tilts at least to some degree, Angelo thinks. “If someone came up to me and said ‘I don’t tilt,’ my mind registers, ‘There’s another delusional statement from a delusional human.’ It happens all the time.”
I had my own types of tilt when I played poker actively. I wasn’t one of those throw-stuff-around-the-room type of guys. Nor would I turn into a crazy maniac when I tilted, trying to play every hand (although my “A-game” was wild enough). Sometimes I’d even tighten up my game a bit. However, I’d play mechanically, unthinkingly, for long stretches of time and often late into the night—I’d just call a lot and hope that the pot got pushed my way. I had given up, deep down, on really trying to beat the game.
I realize now (I’m not sure that I did when I was playing) what my tilt triggers were. The biggest one was a sense of entitlement. I didn’t mind so much when I wasn’t getting many cards and had to fold for long stretches of time—that, I realized, was part of the statistical variance in the game. But when I thought I had played particularly well—let’s say I correctly detected an opponent’s bluff, for instance—but then he caught a miracle card on the river and beat my hand anyway, that’s what could really tilt me. I thought I had earned the pot, but he made the money.
By tilting, I could bring things back into a perverse equilibrium: I’d start playing badly enough that I deserved to lose. The fundamental reason that poker players tilt is that this balance is so often out of whack: over the short term, and often over the medium term, a player’s results aren’t very highly correlated with his skill. It certainly doesn’t help matters when players have an unrealistic notion about their skill level, as they very often will. “We tend to latch onto data that supports our theory,” Angelo told me. “And the theory is usually ‘I’m better than they are.’”
Beyond Results-Oriented Thinking
In the United States, we live in a very results-oriented society. If someone is rich or famous or beautiful, we tend to think they deserve to be those things. Often, in fact, these factors are self-reinforcing: making money begets more opportunities to make money; being famous provides someone with more ways to leverage their celebrity; standards of beauty may change with the look of a Hollywood starlet.
This is not intended as a political statement, an argument for (or against) greater redistribution of wealth or anything like that. As an empirical matter, however, success is determined by some combination of hard work, natural talent, and a person’s opportunities and environment—in other words, some combination of noise and signal. In the U.S., we tend to emphasize the signal component most of the time—except perhaps when it comes to our own shortcomings, which we tend to attribute to bad luck. We can account for our neighbors’ success by the size of their home, but we don’t know as much about the hurdles they had to overcome to get there.
When it comes to prediction, we’re really results-oriented. The investor who calls the stock market bottom is heralded as a genius, even if he had some buggy statistical model that just happened to get it right. The general manager who builds a team that wins the World Series is assumed to be better than his peers, even if, when you examine his track record, the team succeeded despite the moves he made rather than because of them. And this is certainly the case when it comes to poker. Chris Moneymaker wouldn’t have been much of a story if the marketing pitch were “Here’s some slob gambler who caught a bunch of lucky cards.”
Sometimes we take consideration of luck too far in the other direction, when we excuse predictions that really were bad by claiming they were unlucky. The credit-rating agencies used a version of this excuse when their incompetence helped to usher in the financial collapse. But as a default, just as we perceive more signal than there really is when we make predictions, we also tend to attribute more skill than is warranted to successful predictions when we assess them later.
Part of the solution is to apply more rigor in how we evaluate predictions. The question of how skillful a forecast is can often be addressed through empirical methods; the long run is achieved more quickly in some fields than in others. But another part of the solution—and sometimes the only solution when the data is very noisy—is to focus more on process than on results. If the sample of predictions is too noisy to determine whether a forecaster is much good, we can instead ask whether he is applying the attitudes and aptitudes that we know are correlated with forecasting success over the long run. (In a sense, we’ll be predicting how good his predictions will be.)
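As a rough illustration of how slowly the "long run" can arrive in a noisy field, the simulation below compares a modestly skilled forecaster with a coin-flipper over a short track record. The hit rates, sample size, and number of trials are assumptions chosen for illustration, not estimates from any study:

```python
# Minimal simulation: over a short sample, a genuinely skilled forecaster's
# results often look no better than an unskilled one's. Parameters are
# illustrative assumptions.
import random

random.seed(42)

N_PREDICTIONS = 100        # a "short" track record (assumed)
SKILLED_HIT_RATE = 0.55    # skilled forecaster is right 55% of the time (assumed)
UNSKILLED_HIT_RATE = 0.50  # unskilled forecaster is a coin flip (assumed)
TRIALS = 10_000

def hits(p: float, n: int) -> int:
    """Number of correct predictions out of n, each correct with probability p."""
    return sum(random.random() < p for _ in range(n))

# How often does the unskilled forecaster end up with a record at least as good?
upsets = sum(
    hits(UNSKILLED_HIT_RATE, N_PREDICTIONS) >= hits(SKILLED_HIT_RATE, N_PREDICTIONS)
    for _ in range(TRIALS)
)
print(f"Unskilled matches or beats skilled in {upsets / TRIALS:.0%} of samples")
# With these assumptions, that happens in roughly a quarter to a third of
# 100-prediction samples: too often to judge the forecaster by results alone.
```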
Poker players tend to understand this more than most other people, if only because they tend to experience the ups and downs in such a visceral way. A high-stakes player like Dwan might experience as much volatility in a single session of poker as a stock market investor would in his lifetime. Play well and win; play well and lose; play badly and lose; play badly and win: every poker player has experienced each of these conditions so many times over that they know there is a difference between process and results.
If you talk with the very best players, they don’t take any of their success for granted; they focus as much as they can on self-improvement. “Anyone who thinks they’ve gotten good enough, good enough that they’ve solved poker, they’re getting ready for a big downswing,” Dwan told me.
Angelo tries to speed the process along with his clients. “We’re walking around in this cloud of noise all the time,” he said. “Very often we don’t see what’s going on accurately.” Angelo’s methods for achieving this are varied and sometimes unconventional: he is an advocate of meditation, for instance. Not all his clients meditate, but the broader idea is to increase their level of self-awareness, encouraging them to develop a better sense for which things are and are not within their control.*
When we play poker, we control our decision-making process but not how the cards come down. If you correctly detect an opponent’s bluff, but he gets a lucky card and wins the hand anyway, you should be pleased rather than angry, because you played the hand as well as you could. The irony is that by being less focused on your results, you may achieve better ones.
Still, we are imperfect creatures living in an uncertain world. If we make a prediction and it goes badly, we can never really be certain whether it was our fault or not, whether our model was flawed or we just got unlucky. The closest approximation to a solution is to achieve a state of equanimity with the noise and the signal, recognizing that both are an irreducible part of our universe, and devote ourselves to appreciating each for what it is.
11
IF YOU CAN’T BEAT ’EM . . .
In 2009, a year after a financial crisis had wrecked the global economy, American investors traded $8 million in stocks every second that the New York Stock Exchange was open for business. Over the course of the typical trading day, the volume grew to $185 billion, roughly as much as the economies of Nigeria, the Philippines, or Ireland produce in an entire year. Over the course of the whole of 2009, more than $46 trillion1 in stocks were traded: four times more than the revenues of all the companies in the Fortune 500 put together.2
This furious velocity of trading is something fairly new. In the 1950s, the average share of common stock in an American company was held for about six years before being traded—consistent with the idea that stocks are a long-term investment. By the 2000s, the velocity of trading had increased roughly twelvefold. Instead of being held for six years, the same share of stock was traded after just six months.3 The trend shows few signs of abating: stock market volumes have been doubling once every four or five years. With the advent of high-frequency trading, some stocks are now literally bought and sold in a New York microsecond.4
FIGURE 11-1: AVERAGE TIME U.S. COMMON STOCK WAS HELD
Economics 101 teaches that trading is rational only when it makes both parties better off. A baseball team with two good shortstops but no pitching trades one of them to a team with plenty of good arms but a shortstop who’s batting .190. Or an investor who is getting ready to retire cashes out her stocks and trades them to another investor who is just getting his feet wet in the market.
But very little of the trading that occurs on Wall Street today conforms to this view. Most of it reflects true differences of opinion—contrasting predictions—about the future returns of a stock.* Never before in human history have so many predictions been made so quickly and for such high stakes.
Why so much trading occurs is one of the greatest mysteries in finance.5 More and more people seem to think they can outpredict the collective wisdom of the market. Are these traders being rational? And if not, can we expect the market to settle on a rational price?
A Trip to Bayesland
If you follow the guidance provided by Bayes’s theorem, as this book recommends, then you’ll think about the future in terms of a series of probabilistic beliefs or forecasts. What are the chances that Barack Obama will be reelected? That Lindsay Lohan will be arrested again? That we will discover evidence of life on another planet? That Rafael Nadal will win at Wimbledon? Some Bayesians assert6 that the most sensible way to think about these probabilities is in terms of the betting line we would set. If you take the idea to its logical extreme, then in Bayesland we all walk around with giant sandwich boards advertising our odds on each of these bets:
FIGURE 11-2: BAYESIAN SANDWICH BOARD
TODAY’S PRICES
OBAMA WINS REELECTION 55%
LOHAN GETS ARRESTED 99%
STOCK MARKET CRASHES 10%
LIFE ON MARS 2%
NADAL WINS WIMBLEDON 30%
In Bayesland, when two people pass each other by and find that they have different forecasts, they are obliged to do one of two things. The first option is to come to a consensus and revise their forecasts to match. If my sandwich board says that Nadal has a 30 percent chance to win Wimbledon, and yours says that he has a 50 percent chance instead, then perhaps we both revise our estimates to 40 percent. But we don’t necessarily have to meet in the middle; if you’re more up on Lindsay Lohan gossip than I am, perhaps I capitulate to you and concede that your Lindsay Lohan forecast is better, thereby adopting it as my own belief. Either way, we walk away from the meeting with the same number in mind—a revised and, we hope, more accurate forecast about the probability of some real-world event.
But sometimes we won’t agree. The law of the land says that we must then settle our differences by placing a bet on our forecasts. In Bayesland, you must make one of these two choices: come to a consensus or bet.* Otherwise, to a Bayesian, you are not really being rational. If after we have our little chat, you still think your forecast is better than mine, you should be happy to bet on it, since you stand to make money. If you don’t, you should have taken my forecast and adopted it as your own.
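A minimal sketch of the arithmetic behind that rule, using the Nadal example above. The $100 payoff and the choice to bet at the midpoint price are my own simplifying assumptions:

```python
# Sketch of why two Bayesians with different forecasts should be willing to bet.
# I think Nadal has a 30% chance to win Wimbledon; you think 50%.
# Suppose we bet at the midpoint price of 40% (an assumed, simple split):
# you pay $40 for a ticket that pays $100 if Nadal wins, and I sell it to you.

MY_PROB = 0.30     # my forecast
YOUR_PROB = 0.50   # your forecast
PRICE = 40.0       # agreed price of a $100-if-Nadal-wins ticket (assumed)
PAYOFF = 100.0

# Each of us evaluates the same bet under our own beliefs.
your_ev = YOUR_PROB * PAYOFF - PRICE   # buyer's expected profit
my_ev = PRICE - MY_PROB * PAYOFF       # seller's expected profit

print(f"Buyer's expected profit:  {your_ev:+.2f}")   # +10.00 under your 50% belief
print(f"Seller's expected profit: {my_ev:+.2f}")     # +10.00 under my 30% belief
# Both of us expect to profit, which is only possible because at least
# one of our forecasts must be wrong.
```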
Of course, this whole process would be incredibly inefficient. We’d have to maintain forecasts on thousands and thousands of events and keep a ledger of the hundreds of bets that we had outstanding at any given time. In the real world, this is the function that markets serve: they allow us to make transactions at a single consensus price rather than having to barter or bet on everything.7
The Bayesian Invisible Hand
In fact, free-market capitalism and Bayes’s theorem emerge from something of the same intellectual tradition. Adam Smith and Thomas Bayes were contemporaries, and both were educated in Scotland and heavily influenced by the philosopher David Hume. Smith’s “invisible hand” might be thought of as a Bayesian process, in which prices are gradually updated in response to changes in supply and demand, eventually reaching some equilibrium. Or Bayesian reasoning might be thought of as an “invisible hand” wherein we gradually update and improve our beliefs as we debate our ideas, sometimes placing bets on them when we can’t agree. Both are consensus-seeking processes that take advantage of the wisdom of crowds.
It might follow, then, that markets are an especially good way to make predictions. That’s really what the stock market is: a series of predictions about the future earnings and dividends of a company.8 My view is that this notion is mostly right most of the time. I advocate the use of betting markets for forecasting economic variables like GDP, for instance. One might expect these markets to improve predictions for the simple reason that they force us to put our money where our mouth is, and create an incentive for our forecasts to be accurate.
Another viewpoint, the efficient-market hypothesis, makes this point much more forcefully: it holds that, under certain conditions, it is impossible to outpredict markets. This view, which was the orthodoxy in economics departments for several decades, has become unpopular given the recent bubbles and busts in the market, some of which seemed predictable after the fact. But the theory is more robust than you might think.
And yet, a central premise of this book is that we must accept the fallibility of our judgment if we want to come to more accurate predictions. To the extent that markets are reflections of our collective judgment, they are fallible too. In fact, a market that makes perfect predictions is a logical impossibility.
Justin Wolfers, Prediction Markets Cop
If there really were a Bayesland, then Justin Wolfers, a fast-talking, ponytailed polymath who is among America’s best young economists, would be its chief of police, writing a ticket anytime he observed someone refusing to bet on their forecasts. Wolfers challenged me to a dinner bet after I wrote on my blog that I thought Rick Santorum would win the Iowa caucus, bucking the prediction market Intrade (as well as my own predictive model), which still showed Mitt Romney ahead. In that case I was willing to commit to the bet, which turned out well for me when Santorum won by literally a few dozen votes following a weeks-long recount.* But there have been other times when I have been less willing to accept one of Wolfers’s challenges. If you are a betting man, as I am, what good is a prediction if you aren’t willing to put money on it?
Wolfers is from Australia, where he supported himself in college by running numbers for a bookie in Sydney.9 He now lives in Philadelphia, where he teaches at the Wharton School and writes for the Freakonomics blog. I visited Wolfers at his home, where he was an outstanding host, having ordered a full complement of hoagie sandwiches from Sarcone’s to welcome me, my research assistant Arikia Millikan, and one of his most talented students, David Rothschild. But he was buttering me up for a roast.
Wolfers and Rothschild had been studying the behavior of prediction markets like Intrade, a sort of real-life version of Bayesland in which traders buy and sell shares of stock that represent real-world news predictions—everything from who will win the Academy Award for Best Picture to the chance of an Israeli air strike on Iran. Political events are especially popular subjects for betting. One stock, for instance, might represent the possibility that Hillary Clinton would win the Democratic nomination in 2008. The stock pays a dividend of $100 if the proposition turns out to be true (Clinton wins the nomination) but nothing otherwise. However, the traders can exchange their shares as much as they want until the outcome is determined. The market price for a share thus represents a consensus prediction about an outcome’s likelihood. (At one market,10 shares in Clinton stock crashed to $18 after she lost the Iowa caucuses, rebounded to $66 when she won the New Hampshire primary, and slowly drifted back down toward $0 as Obama outlasted her in the campaign.) Markets like these have a long tradition in politics, dating back to at least the 1892 presidential election, when stocks in Grover Cleveland and Benjamin Harrison were traded just steps away from the American Stock Exchange.11
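Because each share pays $100 if the proposition comes true and nothing otherwise, the market price maps directly onto an implied probability. A minimal sketch, using the Clinton prices quoted above:

```python
# A share pays $100 if the event occurs and $0 otherwise, so a market
# price of $P corresponds to a consensus probability of roughly P percent.

PAYOFF = 100.0

def implied_probability(price: float) -> float:
    """Consensus probability implied by the market price of a $100-payoff share."""
    return price / PAYOFF

# Prices quoted above for the "Clinton wins the 2008 nomination" share:
for label, price in [("after Iowa", 18.0), ("after New Hampshire", 66.0)]:
    print(f"{label}: ${price:.0f} implies a {implied_probability(price):.0%} chance")
# after Iowa: $18 implies a 18% chance
# after New Hampshire: $66 implies a 66% chance
```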
“You should tell Nate about the comparison paper,” Wolfers said to Rothschild a few minutes into lunch, a mischievous grin on his face.
“I did a paper for an academic journal that looked at de-biased Internet-based polling, comparing it to prediction markets in 2008, showing they were comparable,” Rothschild volunteered.
“That’s way too polite,” Wolfers interrupted. “It was Intrade versus Nate.”
“And Intrade won,” Rothschild said.
Rothschild’s paper, which was published in Public Opinion Quarterly,12 compared the forecasts I made at FiveThirtyEight over the course of the 2008 election cycle with the predictions at Intrade. It concluded that, although FiveThirtyEight’s forecasts had done fairly well, Intrade’s were better.
The Benefits (and Limitations) of Group Forecasts