The Perfect Bet

by Adam Kucharski


  With computer programs cleaning up at chess, checkers, and now poker, it might be tempting to argue that humans can no longer compete at such games. Computers can analyze more data, remember more strategies, and examine more possibilities. They are able to spend longer learning and longer playing. Bots can teach themselves supposedly “human” tactics such as bluffing and even “superhuman” strategies humans haven’t spotted yet. So, is there anything that computers are not so good at?

  ALAN TURING ONCE NOTED that if a man tried to pretend to be a machine, “he would clearly make a very poor showing.” Ask the human to perform a calculation, and he’d be much slower, not to mention more error prone, than the computer. Even so, there are still some situations that bots struggle with. When playing Jeopardy!, Watson found the short clues the most difficult. If the host read out a single category and a name—such as “first ladies” and Ronald Reagan—Watson would take too long to search through its database to find the correct response (which is “Who is Nancy Reagan?”). Whereas Watson would beat a human contestant in a race to solve a long, complicated clue, the human would prevail if there were only a few words to go by. In quiz shows, it seems that brevity is the enemy of machines.

  The same is true of poker. Bots need time to study their opponents, learning their betting styles so they can be exploited. In contrast, human professionals are able to evaluate other players much more quickly. “Humans are good at making assumptions about an opponent with very little data,” Schaeffer said.

  In 2012, researchers at the University of London suggested that some people might be especially good at sizing up others. They designed a game, called the Deceptive Interaction Task, to test players’ ability to lie and detect lies. In the task, participants were placed in groups, with one person given a cue card containing an opinion—such as “I’m in favor of reality TV”—and instructions to either lie or tell the truth. After stating the opinion, the person had to give reasons for holding that view. The others in the group had to decide whether they thought the person was lying or not.

  The researchers found that people who were lying generally took longer to start speaking after receiving the cue card: liars took an average of 6.5 seconds, compared to 4.6 seconds for honest speakers. It also turned out that good liars were effective lie detectors, much like the proverb “it takes a thief to catch a thief.” Although liars appeared to be better at spotting deceit in the game, it was not clear why. The researchers suggested it might be because they were better—whether consciously or unconsciously—both at picking up on others’ slow responses and at speeding up their own speech.

  Unfortunately, people aren’t so good at identifying the specific signs of lying. In a 2006 survey spanning fifty-eight countries, participants were asked “How can you tell when people are lying?” One answer dominated the responses, coming up in every country and topping the list in most: liars avoid eye contact. Although it’s a popular lie-detection method, it doesn’t appear to be a particularly good one. There’s no evidence that liars avert their gaze more than truthful people. Other supposed giveaways have dubious foundations, too. It is not clear that liars are noticeably more animated or shift posture when speaking.

  Behavior might not always reveal liars, but it can influence games in other ways. Psychologists at Harvard University and Caltech have shown that certain facial expressions can lure opponents into making bad bets. In a 2010 study, they had participants play a simplified poker game against a computer-generated player whose face was displayed on a screen. The researchers told participants the computer would be using different styles of play but said nothing about the face on the screen. In reality, the instructions were a ruse: the computer picked moves randomly; all that changed was its face. The simulated player displayed three possible expressions, which followed stereotypes about honesty: one seemingly trustworthy, one neutral, and one untrustworthy. The researchers found that players facing computer opponents with dishonest or neutral faces made relatively good choices. However, when they played “trustworthy” computer opponents, participants made significantly worse decisions, often folding when they had the stronger hand.

  The researchers pointed out that the study involved a cartoon version of poker, played by beginners. Professional poker games are likely to be very different. However, the study suggests that facial expressions might not influence poker in the way we assume. “Contrary to the popular belief that the optimal poker face is neutral in appearance,” the authors noted, “the face that invokes the most betting mistakes by our subjects has attributes that are correlated with trustworthiness.”

  Emotion can also influence overall playing style. The University of Alberta poker group has found that humans are particularly susceptible to strong-arm tactics. “In general, a lot of the knowledge that human poker pros have about how to beat other humans revolves around aggression,” Michael Johanson said. “An aggressive strategy that puts a lot of pressure on opponents, making them make tough decisions, tends to be very effective.” When playing humans, the bots try to mimic this behavior and push opponents into making mistakes. It seems that bots have a lot to gain by copying the behavior of humans. Sometimes, it even pays to copy their flaws.

  WHEN MATT MAZUR DECIDED to build a poker bot in 2006, he knew it would have to avoid detection. Poker websites would ban anyone they suspected of running computer players. It wasn’t enough to have a bot that could beat humans; Mazur would need a bot that could look human while doing it.

  A computer scientist based in Colorado, Mazur worked on a variety of software projects in his spare time. In 2006, the new project was poker. Mazur’s first attempt at a bot, created that autumn, was a program that played a “short stacking” strategy. This involved buying into games with very little money, and then playing very aggressively, hoping to scare off players and steal the pot. It’s often seen as an irritating tactic, and Mazur discovered it wasn’t a particularly successful one either. Six months in, the bot had played almost fifty thousand hands and lost over $1,000. Abandoning his flawed first draft, Mazur designed a new bot, which would play two-player poker properly. The finished bot played a tight game, choosing its moves carefully, and was aggressive in its betting. Mazur said the bot was reasonably competitive against humans in small-stakes games.

  The next challenge was to avoid getting caught. Unfortunately, there wasn’t much information out there to help Mazur. “Online poker sites are understandably quiet when it comes to what they look at to detect bots,” he said, “so bot developers are forced to make educated guesses.” While designing his poker program, Mazur therefore tried to put himself in the position of a bot hunter. “If I was trying to detect a bot, I would look at a lot of different factors, weigh them, and then manually investigate the evidence in order to make a call as far as whether a player was a bot or not.”

  One obvious red flag would be strange betting patterns. If a bot placed too many bets, or too quickly, it might look suspicious. Unfortunately, Mazur found his bots could sometimes behave strangely by accident. The bots worked in pairs to compete on poker websites. One of them would register for new games, and the other would play them. On one occasion, Mazur was away from his computer when the game-playing program crashed. The other bot had no idea what had happened, so it kept on registering for new games. Without the game-playing bot ready to take a seat at the table, Mazur’s account skipped over twenty games in a row. Mazur later realized his bots had other quirks, too. For instance, they would often play with the same stakes for hundreds of games. Mazur points out that humans rarely behave like that: they would generally get confident (or bored) over time and move up to higher-stakes games for a while.
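
  A simple guard would have prevented that cascade. The sketch below is a reconstruction under assumed details, not Mazur’s actual code: the registering bot checks a heartbeat file that the game-playing bot refreshes while it is alive, and stops signing up for games when the heartbeat goes stale. The file name and timeout are illustrative.

```python
import os
import time

HEARTBEAT = "player_bot.heartbeat"   # illustrative path the player bot touches
MAX_SILENCE = 60                     # seconds of silence before assuming a crash

def player_is_alive() -> bool:
    """True if the game-playing bot refreshed its heartbeat file recently."""
    try:
        return time.time() - os.path.getmtime(HEARTBEAT) < MAX_SILENCE
    except OSError:
        return False                 # no heartbeat file at all: assume a crash

if player_is_alive():
    print("safe to register for the next game")
else:
    print("player bot looks dead; stop registering")
```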

  As well as playing sensibly, Mazur’s bots also had to navigate their way around the poker websites. Mazur found that some websites had features—be they accidental or deliberate—that made automated navigation harder. Sometimes they would subtly alter what appeared on his screen, perhaps by changing the size or shape of windows or moving buttons. Such changes wouldn’t cause problems for a human, but they could throw bots that had been taught to navigate a specific set of dimensions. Mazur had to get his bots to track the locations of the windows and buttons and adjust where they clicked to account for any changes.
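
  In modern terms, that adjustment amounts to image-based automation rather than hard-coded coordinates. Below is a hedged sketch in that spirit, not Mazur’s tooling: it assumes the pyautogui screen-automation library and a saved screenshot of the target button (fold_button.png, an illustrative file name).

```python
import pyautogui

def click_button(template: str) -> bool:
    """Find a button by its image instead of fixed coordinates, then click it."""
    try:
        # Re-locates the button on every call, so moved or resized windows
        # are tolerated. (confidence matching needs the optional opencv-python.)
        center = pyautogui.locateCenterOnScreen(template, confidence=0.8)
    except pyautogui.ImageNotFoundException:
        center = None  # newer versions raise; older ones return None
    if center is None:
        return False   # button not on screen; the caller can retry later
    pyautogui.click(center.x, center.y)
    return True

click_button("fold_button.png")
```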

  The whole process was like a version of Turing’s imitation game. To avoid detection, Mazur’s bots had to convince the website they were playing like humans. Sometimes, bots even found themselves facing Turing’s original test. Most poker websites include a chat feature, which lets players talk to each other. Generally, this isn’t a problem; players often remain silent in poker games. But there were some conversations that Mazur decided he couldn’t avoid. If someone accused his bot of being a computer program and the bot didn’t reply, there was a risk that the account would be reported to the website owners. Mazur therefore put together a list of terms that suspicious opponents might use. If someone mentioned words such as “bot” or “cheater” during a game, he’d get an alert and intervene. It meant he’d have to be near his computer when his bot was playing, but the alternative was potentially much worse: an unsupervised program could easily run into trouble and not know how to get out of it.
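
  The tripwire itself needs nothing sophisticated. Here is a minimal sketch of the kind of keyword alert Mazur describes; the word list and the alert mechanism are illustrative guesses, not his actual code.

```python
SUSPICIOUS = {"bot", "bots", "robot", "cheat", "cheater", "cheating"}

def needs_human_reply(message: str) -> bool:
    """True if a chat line contains a word that should summon the operator."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not SUSPICIOUS.isdisjoint(words)

if needs_human_reply("are you a bot?"):
    print("\a ALERT: suspicious chat, take over the conversation")  # \a beeps
```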

  It took a while for Mazur’s bots to become winners: the programs didn’t make money for the first eighteen months they were active. Eventually, in spring 2008, the bots started to produce modest profits. The successful run came to an abrupt end a few months later, however. On October 2, 2008, Mazur got an e-mail from the poker website informing him that his account had been suspended. So, what gave it away? “In retrospect,” he said, “I think the thing that got my bot caught was that it was simply playing too many games.” Mazur’s bot concentrated on heads-up “Sit ’n Go” games, which commence as soon as two players join the game. “A normal player might play ten to fifteen No Limit Heads Up Sit ’n Gos in a day,” Mazur said. “At its peak, my bot was playing fifty to sixty per day. That probably threw up some flags.” Of course, this is only his best guess. “It’s possible that it was something else entirely. I’ll probably never know for sure.”

  Mazur wasn’t actually that bothered about the loss of profit from his bot. “When my account was eventually suspended, I had not netted that much money,” he said. “I would have been much better off financially if I’d actually used that time to play poker instead. But then again, I didn’t build the bot to make money; I built it for the challenge.”

  After his account was suspended, Mazur e-mailed the poker website that had banned him and offered to explain exactly what he’d done. He knew several ways to make life even harder for bots, which he hoped might improve security for human poker players. Mazur told the company all the things they should look out for, from high volumes of games to unusual mouse movements. He even suggested countermeasures that could hinder bot development, such as varying the size and location of buttons on the screen.

  Mazur also posted a detailed history of his bot’s creation on his website, including screenshots and schematics. He wanted to show people that poker bots are hard to build, and there are much more useful things they could be doing with computers. “I realized that if I was going to spend that much time on a software project, I should devote that energy to more worthwhile endeavors.” Looking back, however, he doesn’t regret the experience. “Had I not built the poker bot, who knows where I’d be.”

  8

  BEYOND CARD COUNTING

  IF YOU EVER VISIT a Las Vegas casino, look up. Hundreds of cameras cling to the ceiling like jet-black barnacles, watching the tables below. The artificial eyes are there to protect the casino’s income from the quick-witted and light-fingered. Until the 1960s, casinos’ definition of such cheating was fairly clear-cut. They only had to worry about things like dealers paying out on losing hands or players slipping high-value chips into their stake after the roulette ball had landed. The games themselves were fine; they were unbeatable.

  Except it turned out that wasn’t true. Edward Thorp found a loophole in blackjack big enough to fit a best-selling book through. Then a group of physics students tamed roulette, traditionally the epitome of chance. Beyond the casino floor, people have even scooped lottery jackpots using a mix of math and manpower.

  The debate over whether winning depends on luck or skill is now spreading to other games. It may even determine the fate of the once lucrative American poker industry. In 2011, US authorities shut down a number of major poker websites, bringing an end to the “poker boom” that had gripped the country for the previous few years. The legislative muscle for the shake-up came from the Unlawful Internet Gambling Enforcement Act. Passed in 2006, it banned bank transfers related to games where the “opportunity to win is predominantly subject to chance.” Although the act has helped curb the spread of poker, it doesn’t cover stock trading or horseracing. So, how do we decide what makes something a game of chance?

  During the summer of 2012, the answer would turn out to be worth a lot to one man. As well as taking on the big poker companies, federal authorities had also gone after people operating smaller games. That included Lawrence DiCristina, who ran a poker room on Staten Island in New York. The case went to trial in 2012, and DiCristina was convicted of operating an illegal gambling business.

  DiCristina filed a motion to dismiss the conviction, and the following month he was back in court arguing his case. During the hearing, DiCristina’s lawyer called economist Randal Heeb as an expert witness. Heeb’s aim was to convince the judge that poker was predominantly a game of skill and therefore didn’t fall under the definition of illegal gambling. While giving evidence, Heeb presented data from millions of online poker games. He showed that, bar a few bad days, the top-ranked players won pretty consistently. In contrast, the worst players lost throughout the year. The fact that some people could make a living from poker was surely evidence that the game involved skill.

  The prosecution also had an expert witness, an economist named David DeRosa. He did not share Heeb’s views about poker. DeRosa had used a computer to simulate what might happen if a thousand people each tossed a coin ten thousand times. A certain outcome—such as tails—counted as a win, so the number of times a particular person won was entirely random. And yet the results that came out were remarkably similar to those Heeb presented: a handful of people appeared to win consistently, and another group seemed to lose a large number of times. This wasn’t evidence that a coin toss involves skill, just that—much like the infinite number of monkeys typing—unlikely events can happen if we look at a large enough group.
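
  DeRosa’s simulation is easy to reproduce in outline. The sketch below follows his description rather than his actual code: with a thousand fair-coin players, pure chance alone still produces apparent stars and apparent duds.

```python
import random

# 1,000 players, each tossing a fair coin 10,000 times; tails counts as a win
wins = sorted(sum(random.random() < 0.5 for _ in range(10_000))
              for _ in range(1_000))
print("unluckiest player:", wins[0], "wins")    # typically around 4,850
print("luckiest player:", wins[-1], "wins")     # typically around 5,150
```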

  Another concern for DeRosa was the number of players who lost money. Based on Heeb’s data, it seemed that about 95 percent of people playing online poker ended up out of pocket. “How could it be skillful playing if you’re losing money?” DeRosa said. “I don’t consider it skill if you lose less money than the unfortunate fellow who lost more money.”

  Heeb admitted that, in a particular game, only 10 to 20 percent of players were skillful enough to win consistently. He said the reason so many more people lost than won was partly down to the house fee, with poker operators taking a cut from the pot of money in each round (in DiCristina’s games, the fee was 5 percent). But he did not think the apparent existence of a skilled poker elite was the result of chance. Although a small group may appear to win consistently if lots of people flip coins, good poker players generally continue to win after they’ve been ranked highly. The same cannot be said for the people who are fortunate with coin tosses.
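
  Heeb’s persistence point lends itself to a quick sanity check. Below is a hedged sketch of that test, with assumed numbers rather than Heeb’s trial data: rank simulated players on one block of hands, then see whether the top group keeps winning on a fresh block. The 2 percent per-hand edge and the group sizes are illustrative.

```python
import random

def block_wins(edge: float, n: int = 5_000) -> int:
    """Wins over n hands for a player whose per-hand win probability is 0.5 + edge."""
    return sum(random.random() < 0.5 + edge for _ in range(n))

edges = [0.02] * 100 + [0.0] * 900   # 100 skilled players among 1,000
first = [block_wins(e) for e in edges]
second = [block_wins(e) for e in edges]

top = set(sorted(range(1_000), key=lambda i: first[i], reverse=True)[:100])
again = set(sorted(range(1_000), key=lambda i: second[i], reverse=True)[:100])
print("top-100 players who stayed top-100:", len(top & again))
# With the 0.02 edge the overlap is several times chance level; set every
# edge to 0.0 (pure coin tossing) and it collapses to about 10.
```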

  According to Heeb, part of the reason good players can win is that in poker players have control over events. If bettors place a bet on a sports match or a roulette wheel, their wagers do not affect the result. But poker players can change the outcome of the game with their betting. “In poker, the wager is not in the same sense a wager on the outcome,” Heeb said. “It is the strategic choice that you are making. You are trying to influence the outcome of the game.”

  But DeRosa argued that it doesn’t make sense to look at a player’s performance over several hands. The cards that are dealt are different each time, so each hand is independent of the last. If a single hand involves a lot of luck, there is no reason to think that player will have a successful round after a costly one. DeRosa compared the situation to the Monte Carlo fallacy. “If red has come up 20 times in a row in roulette,” he said, “it does not mean that ‘black is due.’”

  Heeb conceded that a single hand involves a lot of chance, but it did not mean the game was chiefly one of luck. He used the example of a baseball pitcher. Although pitching involves skill, a single pitch is also susceptible to chance: a weak pitcher could produce a good ball, and a strong pitcher could throw a bad one. To identify the best—and worst—pitchers, we need to look at lots of throws.

  The key issue, Heeb argued, is how long we must wait for the effects of skill to outweigh chance. If it takes a large number of hands (i.e., longer than most people will play), then poker should be viewed as a game of chance. Heeb’s analysis of the online poker games suggested this wasn’t the case. It seemed that skill overtook luck after a relatively small number of hands. After a few sessions of play, a skillful player could therefore expect to hold an advantage.
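
  The crossover Heeb describes can be put in back-of-envelope terms. Expected profit grows in proportion to the number of hands, while luck’s swings grow only with its square root, so skill eventually dominates. The numbers below are illustrative assumptions, not figures from the trial.

```python
# Profit after n hands is roughly n * edge, while a two-sigma lucky streak
# is roughly 2 * sigma * sqrt(n); skill wins once n > (2 * sigma / edge) ** 2.
edge = 0.5    # assumed average profit per hand, in big blinds
sigma = 5.0   # assumed standard deviation per hand, in big blinds
n = (2 * sigma / edge) ** 2
print(f"skill outweighs a two-sigma lucky streak after ~{n:,.0f} hands")
# With these numbers, about 400 hands: a few sessions of play.
```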

  It fell to the judge, a New Yorker named Jack Weinstein, to weigh the arguments. Weinstein noted that the law used to convict DiCristina—the Illegal Gambling Business Act—listed games such as roulette and slot machines, but it did not explicitly mention poker. Weinstein said it wasn’t the first time a law had failed to specify a crucial detail. In October 1926, airport operator William McBoyle helped arrange the theft of an airplane in Ottawa, Illinois. Although he was convicted under the National Motor Vehicle Theft Act, McBoyle appealed the result. His lawyers argued that the act did not explicitly cover airplanes, because it defined a vehicle as “an automobile, automobile truck, automobile wagon, motor cycle, or any other self-propelled vehicle not designed for running on rails.” According to McBoyle’s lawyers, this meant an airplane was not a vehicle, and so McBoyle could not be guilty of the federal crime of transporting a stolen vehicle. The US Supreme Court agreed. They noted that the wording of the law evoked the mental image of vehicles moving on land, so it shouldn’t be extended to aircraft simply because it seemed that a similar rule ought to apply. The conviction was reversed.

 
