
The Perfect Bet


by Adam Kucharski


  Although poker wasn’t mentioned in the Gambling Act, Judge Weinstein said this didn’t automatically mean the game wasn’t gambling. But the omission did mean that the role of chance in poker was up for debate. And Weinstein had found Heeb’s evidence convincing. Until that summer, no court had ever ruled on whether poker was gambling under federal law. Weinstein delivered his conclusion on August 21, 2012, and ruled that poker was predominantly governed by skill rather than chance. In other words, it did not count as gambling under federal law. DiCristina’s conviction was overturned.

  The victory was to be short-lived, however. Although Weinstein ruled that DiCristina had not broken federal law, New York State has a stricter definition of gambling. Its laws cover any game that “depends in a material degree upon an element of chance.” As a result, DiCristina’s acquittal was overturned in August 2013. Weinstein’s ruling on the relative role of luck and skill was not questioned. Rather, the state law meant that poker still fell under the definition of a gambling business.

  The DiCristina case is part of a growing debate about how much luck comes into games like poker. Definitions like “material degree of chance” will undoubtedly raise more questions in the future. Given the close links between gambling and certain parts of finance, surely this definition would cover some financial investments, too? Where do we draw the line between flair and fluke?

  IT IS TEMPTING TO sort games into separate boxes marked luck and skill. Roulette, often used as an example of pure luck, might go into one; chess, a game that many believe relies only on skill, might go in the other. But it isn’t this simple. To start with, processes that we think are as good as random are usually far from it.

  Despite its popular image as the pinnacle of randomness, roulette was first beaten with statistics, and then with physics. Other games have fallen to science too. Poker players have exploited game theory, and syndicates have turned sports betting into investments. According to Stanislaw Ulam, who worked on the hydrogen bomb at Los Alamos, the presence of skill is not always obvious in such games. “There may be such a thing as habitual luck,” he said. “People who are said to be lucky at cards probably have certain hidden talents for those games in which skill plays a role.” Ulam believed the same could be said of scientific research. Some scientists ran into seemingly good fortune so often that it was impossible not to suspect that there was an element of talent involved. Chemist Louis Pasteur put forward a similar philosophy in the nineteenth century. “Chance favours the prepared mind” was how he put it.

  Luck is rarely embedded so deeply in a situation that it can’t be altered. It might not be possible to completely remove luck, but history has shown that it can often be replaced by skill to some extent. Moreover, games that we assume rely solely on skill do not. Take chess. There is no inherent randomness in a game of chess: if two players make identical moves every time, the result will always be the same. But luck still plays a role. Because the optimal strategy is not known, there is a chance that a series of random moves could defeat even the best player.

  Unfortunately, when it comes to making decisions, we sometimes take a rather one-sided view of chance. If our choices do well, we put it down to skill; if they fail, it’s the result of bad luck. Our notion of skill can also be skewed by external sources. Newspapers print stories about entrepreneurs who have hit a trend and made millions or celebrities who have suddenly become household names. We hear tales of new writers who have produced instant best sellers and bands that have become famous overnight. We see success and wonder why those people were so special. But what if they are not?

  In 2006, Matthew Salganik and colleagues at Columbia University published a study of an artificial “music market,” in which participants could listen to, rate, and download dozens of different tracks. In total there were fourteen thousand participants, whom the researchers secretly split into nine groups. In eight of the groups, participants could see which tracks were popular with their fellow group members. The final group was the control group, in which participants had no idea what others were downloading.

  The researchers found that the most popular songs in the control group—a ranking that depended purely on the merits of the songs themselves, and not on what other people were downloading—were not necessarily popular in the eight social groups. In fact, the song rankings in these eight groups varied wildly. Although the “best” songs usually racked up some downloads, mass popularity was not guaranteed. Instead, fame developed in two stages. First, randomness influenced which tracks people happened to pick early on. The popularity of these first downloaded tracks was then amplified by social behavior, with people looking at the rankings and wanting to imitate their peers. “Fame has much less to do with intrinsic quality than we believe it does,” Peter Sheridan Dodds, one of the study authors, later wrote, “and much more to do with the characteristics of the people among whom fame spreads.”
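The two-stage dynamic described above, random early downloads amplified by imitation, can be sketched in a short simulation. The setup is invented for illustration (ten songs, a simple quality-plus-popularity weighting) and is not the Columbia team's actual design:

```python
import random

def simulate_world(qualities, listeners=2000, social=True, seed=0):
    """Simulate one 'world' of a toy music market.

    Each listener downloads one song, chosen with probability
    proportional to its intrinsic quality plus (if social feedback
    is on) its current download count.
    """
    rng = random.Random(seed)
    counts = [0] * len(qualities)
    for _ in range(listeners):
        if social:
            weights = [q + c for q, c in zip(qualities, counts)]
        else:
            weights = list(qualities)  # control world: quality only
        song = rng.choices(range(len(qualities)), weights=weights)[0]
        counts[song] += 1
    return counts

# Ten songs with slowly increasing intrinsic quality; song 9 is "best".
qualities = [1.0 + 0.1 * i for i in range(10)]

# Eight social worlds, each with a different random history.
winners = []
for s in range(8):
    counts = simulate_world(qualities, seed=s)
    winners.append(counts.index(max(counts)))
print("most-downloaded song in each social world:", winners)
```

Because early picks are random and then reinforced, the social worlds can crown different winners even though the songs' intrinsic qualities never change.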

  Mark Roulston and David Hand, statisticians at the hedge fund Winton Capital Management, point out that the randomness of popularity may also influence the ranking of investment funds. “Consider a set of funds with no skill,” they wrote in 2013. “Some will produce decent returns simply by chance and these will attract investors, while the poorly performing funds will close and their results may disappear from view. Looking at the results of those surviving funds, you would think that on average they do have some skill.”
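Roulston and Hand's point is easy to check with a simulation. The sketch below invents a population of zero-skill funds and an arbitrary survival rule; both the return distribution and the minus 5 percent cutoff are assumptions for illustration:

```python
import random
import statistics

def zero_skill_returns(n_funds=1000, years=5, seed=42):
    """Annual returns for funds with no skill: mean 0%, sd 10%."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 0.10) for _ in range(years)]
            for _ in range(n_funds)]

funds = zero_skill_returns()

# A fund "survives" if it never posts a year worse than -5%; the
# threshold is invented, standing in for investors abandoning losers.
survivors = [f for f in funds if min(f) > -0.05]

all_mean = statistics.mean(r for f in funds for r in f)
surv_mean = statistics.mean(r for f in survivors for r in f)

print(f"mean annual return, all funds:  {all_mean:+.2%}")
print(f"mean annual return, survivors: {surv_mean:+.2%}")
```

Because survival filters out the worst years, the surviving funds' average return comes out above the population's, despite none of them having any skill.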

  The line between luck and skill—and between gambling and investing—is rarely as clear as we think. Lotteries should be textbook examples of gambling, but after several weeks of rollovers, they can produce a positive expected payoff: buy up all the combinations of numbers, and you’ll make a profit. Sometimes the crossover happens the other way, with investments being more like wagers. Take Premium Bonds, a popular form of investment in the United Kingdom. Rather than receiving a fixed rate of interest as with regular bonds, investors in Premium Bonds are instead entered into a monthly prize draw. The top prize is £1 million, tax-free, and there are several smaller prizes, too. By investing in Premium Bonds, people are in effect gambling the interest they would have otherwise earned. If they instead put their savings in a regular bond, withdrew the interest, and used that money to buy rollover lottery tickets, the expected payoff would not be that different.
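The rollover arithmetic comes down to comparing the jackpot with the cost of covering every combination. The sketch below uses a hypothetical 6-from-49 lottery and deliberately ignores shared jackpots and smaller prize tiers, so it is a simplification rather than a real betting plan:

```python
from math import comb

# Hypothetical 6-from-49 lottery; every number here is illustrative.
ticket_price = 1.0
n_tickets = comb(49, 6)  # 13,983,816 distinct combinations

def expected_profit(jackpot):
    """Expected profit per ticket: chance of winning times the
    jackpot, minus the price of the ticket."""
    return jackpot / n_tickets - ticket_price

print(expected_profit(10_000_000))  # ordinary draw: negative expectation
print(expected_profit(20_000_000))  # after rollovers: positive expectation
```

Once the jackpot climbs past the roughly 14 million needed to buy every combination, the expected payoff per ticket turns positive.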

  If we want to separate luck and skill in a given situation, we must first find a way to measure them. But sometimes an outcome is very sensitive to small changes, with seemingly innocuous decisions completely altering the result. Individual events can have dramatic effects, particularly in sports like soccer and ice hockey where goals are relatively rare. It might be an ambitious pass that sets up a winning shot or a puck that hits the post. How can we distinguish between a hockey victory that is mostly down to talent and one that benefited from lots of lucky breaks?

  In 2008, hockey analyst Brian King suggested a way to measure how fortunate a particular NHL player had been. “Let’s pretend there was a stat called ‘blind luck,’” as he put it. To calculate his statistic, he took the proportion of total shots that a team scored while that player was on the ice and the proportion of opponents’ shots that were saved, and then added these two values together. King argued that although creating shooting opportunities involves a lot of skill, there was more luck influencing whether a shot went in or not. Worryingly, when King tested out the statistic on his local NHL team, it showed that the luckiest players were getting contract extensions while the unlucky ones were being dropped.
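King's statistic is simple enough to state in a few lines of code. The two players below are hypothetical, chosen so that identical shot counts but different bounces separate their scores:

```python
def pdo(goals_for, shots_for, goals_against, shots_against):
    """King's 'blind luck' statistic: the share of the team's shots
    that went in while the player was on the ice, plus the share of
    opponents' shots that were saved."""
    shooting_pct = goals_for / shots_for
    save_pct = 1 - goals_against / shots_against
    return shooting_pct + save_pct

# Two hypothetical players with identical shot counts but different bounces.
lucky = pdo(goals_for=12, shots_for=100, goals_against=5, shots_against=100)
unlucky = pdo(goals_for=5, shots_for=100, goals_against=12, shots_against=100)
print(f"lucky player:   {lucky:.2f}")    # shooting 12% + saves 95%
print(f"unlucky player: {unlucky:.2f}")  # shooting 5% + saves 88%
```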

  The statistic, later dubbed “PDO” after King’s online moniker, has since been used to assess the fortunes of players—and teams—in other sports, too. In the 2014 soccer World Cup, several top teams failed to make it out of the preliminary group stage. Spain, Italy, Portugal, and England all fell at the first hurdle. Was it because they were lackluster or unlucky? The England team is famously used to misfortune, from disallowed goals to missed penalties. It seems that 2014 was no different: England had the lowest PDO of any team in the tournament, with a score of 0.66.

  We might think that teams with a very low PDO are just hapless. Maybe they have a particularly error-prone striker or weak keeper. But teams rarely maintain an unusually low (or high) PDO in the long run. If we analyze more games, a team’s PDO will quickly settle down to numbers near the average value of one. It’s what Francis Galton called “regression to mediocrity”: if a team has a PDO that is noticeably above or below one after a handful of games, it is likely a sign of luck.
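A quick simulation shows why the settling happens. Give a team exactly average finishing and goaltending (the 9 percent scoring chance and 30 shots a side below are invented round numbers) and track its PDO over longer samples:

```python
import random

def simulated_pdo(games, shots_per_side=30, p_goal=0.09, seed=1):
    """PDO for a perfectly average team: every shot, for or against,
    goes in with the same fixed probability."""
    rng = random.Random(seed)
    shots = games * shots_per_side
    goals_for = sum(rng.random() < p_goal for _ in range(shots))
    goals_against = sum(rng.random() < p_goal for _ in range(shots))
    return goals_for / shots + (1 - goals_against / shots)

for games in (5, 50, 500):
    print(f"PDO after {games:3d} games: {simulated_pdo(games):.3f}")
```

Over a handful of games the statistic can wander well away from one; over hundreds it pins itself to the average.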

  Statistics like PDO can be useful for assessing how lucky teams are, but they aren’t necessarily that helpful when placing bets. Gamblers are more interested in making predictions. In other words, they want to find factors that reflect ability rather than luck. But how important is it to actually understand skill?

  Take horse races. Predicting events at a racetrack is a messy process. All sorts of factors could influence a horse’s performance in a race, from past experience to track conditions. Some of these factors provide clear hints about the future, while others just muddy the predictions. To pin down which factors are useful, syndicates need to collect reliable, repeated observations about races. Hong Kong was the closest Bill Benter could find to a laboratory setup, with the same horses racing on a regular basis on the same tracks in similar conditions.

  Using his statistical model, Benter identified factors that could lead to successful race predictions. He found that some came out as more important than others. In Benter’s early analysis, for example, the model said the number of races a horse had previously run was a crucial factor when making predictions. In fact, it was more important than almost any other factor. Maybe the finding isn’t all that surprising. We might expect horses that have run more races to be used to the terrain and less intimidated by their opponents.
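The shape of such a model can be caricatured in a few lines: score each horse with a weighted sum of its factors, then convert the scores into win probabilities across the field. The factors and weights below are invented for illustration; Benter's real model had far more inputs, fitted to years of Hong Kong race data:

```python
import math

# Invented factor weights; a real model would fit these to past races.
weights = {"races_run": 0.05, "recent_form": 0.8, "weight_carried": -0.02}

def win_probabilities(horses):
    """Score each horse with a weighted sum of its factors, then turn
    the scores into win probabilities with a softmax over the field."""
    scores = [sum(weights[k] * h[k] for k in weights) for h in horses]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

race = [
    {"races_run": 20, "recent_form": 0.6, "weight_carried": 55},
    {"races_run": 5,  "recent_form": 0.7, "weight_carried": 53},
    {"races_run": 12, "recent_form": 0.4, "weight_carried": 57},
]
probs = win_probabilities(race)
print([round(p, 3) for p in probs])
```

The softmax step guarantees the estimated probabilities for a race sum to one, which is what lets them be compared directly against bookmakers' odds.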

  It’s easy to think up explanations for observed results. Given a statement that seems intuitive, we can convince ourselves as to why that should be the case, and why we shouldn’t be surprised at the result. This can be a problem when making predictions. By creating an explanation, we are assuming that one process has directly caused another. Horses in Hong Kong win because they are familiar with the terrain, and they are familiar with it because they have run lots of races. But just because two things are apparently related—like probability of winning and number of races run—it doesn’t mean that one directly causes the other.

  An oft-quoted mantra in the world of statistics is that “correlation does not imply causation.” Take the wine budget of Cambridge colleges. It turns out that the amount of money each Cambridge college spent on wine in the 2012–2013 academic year was positively correlated with students’ exam results during the same period. The more the colleges spent on wine, the better the results generally were. (King’s College, once home to Karl Pearson and Alan Turing, topped the wine list with a spend of £338,559, or about £850 per student.)

  Similar curiosities appear in other places, too. Countries that consume lots of chocolate win more Nobel prizes. When ice cream sales rise in New York City, so does the murder rate. Of course, buying ice cream doesn’t make us homicidal, just as eating chocolate is unlikely to turn us into Nobel-quality researchers and drinking wine won’t make us better at exams.

  In each of these cases, there might be a separate underlying factor that could explain the pattern. For Cambridge colleges it could be wealth, which would influence both wine spending and exam results. Or there could be a more complicated set of reasons lurking behind the observations. This is why Bill Benter doesn’t try to interpret why some factors appeared to be so important in his horseracing model. The number of races a horse has run might be related to another (hidden) factor that directly influenced performance. Alternatively, there could be an intricate trade-off between races run and other factors—like weight and jockey experience—which Benter could never hope to distill into a neat “A causes B” conclusion. But Benter is happy to sacrifice elegance and explanation if it means having good predictions. It doesn’t matter if his factors are counterintuitive or hard to justify. The model is there to estimate the probability a certain horse will win, not to explain why that horse will win.
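The "separate underlying factor" explanation is easy to demonstrate. In the sketch below a hidden wealth variable drives both wine spending and exam results, with no causal link between the two; all of the numbers are invented:

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A hidden "wealth" factor drives both wine spending and exam results;
# neither causes the other. All numbers are invented.
rng = random.Random(7)
wine_spend, exam_score = [], []
for _ in range(30):
    wealth = rng.gauss(0, 1)
    wine_spend.append(100_000 + 50_000 * wealth + rng.gauss(0, 20_000))
    exam_score.append(60 + 5 * wealth + rng.gauss(0, 2))

print(f"wine vs exams correlation: {pearson(wine_spend, exam_score):.2f}")
```

The two quantities come out strongly correlated even though neither one influences the other, which is exactly the trap a predictive model should sidestep.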

  From hockey to horse racing, sports analysis methods have come a long way in recent years. They have enabled gamblers to study matches in more detail than ever, combining bigger models with better data. As a result, scientific betting has moved far beyond card counting.

  ON THE FINAL PAGE of his blackjack book Beat the Dealer, Edward Thorp predicted that the following decades would see a whole host of new methods attempting to tame chance. He knew it was hopeless to try to anticipate what they might be. “Most of the possibilities are beyond the reach of our present imagination and dreams,” he wrote. “It will be exciting to see them unfold.”

  Since Thorp made his prediction, the science of betting has indeed evolved. It has brought together new fields of research, spreading far from the felt tables and plastic chips of Las Vegas casinos. Yet the popular image of scientific wagers remains very much in the past. Stories of gambling strategies rarely stray far from the adventures of Thorp or the Eudaemons. Successful betting is viewed as a matter of card counting or watching roulette tables. Tales follow a mathematical path, with decisions reduced to basic probabilities.

  But the advantage of simple equations over human ingenuity is not as clear as these stories suggest. In poker, the ability to calculate the probability of getting a particular hand is helpful but by no means a sure route to victory. Gamblers also need to account for their opponents’ behavior. When John von Neumann developed game theory to tackle this problem, he found that employing deceptive tactics such as bluffing was actually the optimal thing to do. The gamblers had been right all along, even if they didn’t know why.

  Sometimes it’s necessary to stray from mathematical perfection altogether. As researchers delve further into the science of poker, they are finding situations where game theory comes up short and where traditional gambling traits—reading opponents, exploiting weaknesses, spotting emotion—can help computer players become the best in the world. It is not enough to know just probabilities; successful bots need to combine mathematics and human psychology.

  The same is true in sports. Analysts are increasingly trying to capture the individual quirks that make up a team performance. During the early 2000s, Billy Beane famously used “sabermetrics” to identify underrated players and take the cash-strapped Oakland A’s to the Major League Baseball playoffs. The techniques are now appearing in other sports. In the English Premier League, more and more soccer teams are employing statisticians to advise on team performances and potential transfers. When Manchester City won the league in 2014, they had almost a dozen analysts helping to put together tactics.

  Sometimes the human element can be the dominant factor, overshadowing the statistics gleaned from available match data. After all, the probability of a goal depends both on the physics of the ball and on the psyche of the player kicking it. Roberto Martinez, manager of Everton soccer club, has suggested that mind-set is as important as performances when assessing potential signings. Managers want to know how a player will settle into a new country or whether he can cope with pressure from a hostile crowd. And, clearly, it is very hard to measure factors like this.

  Measurement is often a difficult problem in sports. From the defenders who never make a tackle to the NFL cornerbacks who hardly ever touch the ball, we can’t always pin down valuable information. But knowing what we are missing is crucial if we want to fully understand what is happening in a match and what might happen in the future.

  When researchers develop a theoretical model of a sport, they are reducing reality to an abstraction. They are choosing to remove detail and concentrate only on key features, much like Pablo Picasso so famously did. When Picasso worked his “Bull” lithographs in the winter of 1945, he started by creating a realistic representation of the animal. “It was a superb, well-rounded bull,” said an assistant watching at the time. “I thought to myself that that was that.” But Picasso was not finished. After completing his first image, he moved on to a second, and then a third. As Picasso worked on each new picture, the assistant noticed the bull was changing. “It began to diminish, to lose weight,” he said. “Picasso was taking away rather than adding to his composition.” With each image, Picasso carved further, keeping only the crucial contours, until he reached the eleventh lithograph. Almost every detail had gone, with nothing left but a handful of lines. Yet the shape was still recognizable as a bull. In those few strokes, Picasso had captured the essence of the animal, creating an image that was abstract, but not ambiguous. As Albert Einstein once said of scientific models, it was a case of “everything should be made as simple as possible, but not simpler.”

  Abstraction is not limited to the worlds of art and science. It is common in other areas of life, too. Take money. Whenever we pay with a credit card, we are replacing physical cash with an abstract representation. The numbers remain the same, but superfluous details—the texture, the color, the smell—have been removed. Maps are another example of abstraction: if a detail is unnecessary, it isn’t shown. Weather is abandoned when the focus is on transport and traffic; motorways vanish if we’re interested in sun and showers.

  Abstractions make a complex world easier to navigate. For most of us, a car accelerator is simply a device that makes the vehicle go faster. We don’t care—or need to know—about the chain of events between our foot and the wheels. Likewise, we rarely look at phones as transmitters that convert sound waves to electronic signals; in daily life, they are a series of buttons that produce a conversation.

 
