The Perfect Bet

by Adam Kucharski


  FIGURE 5.2. With a medium growth rate, the population density oscillates.

  FIGURE 5.3. High growth rates lead to highly variable population dynamics.

  May found that chaos theory could explain what was going on. The fluctuations in density were the result of the population being sensitive to initial conditions. Just as Poincaré had found for roulette, a small change in the initial setup had a big effect on what happened further down the line. Despite the population following a straightforward biological process, it was not feasible to predict how it would behave far into the future.

  We might expect roulette to produce unexpected outcomes, but ecologists were stunned to find that something as simple as the logistic map could generate such complex patterns. May warned that the result could have some troubling consequences in other fields, too. From politics to economics, people needed to be aware that simple systems do not necessarily behave in simple ways.
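
  The map itself is simple enough to explore in a few lines of code. Here is a minimal Python sketch (the growth-rate values are illustrative choices, not figures from the text) of how the logistic map moves from steady behavior to oscillation to chaos, and of how quickly two nearly identical starting densities drift apart:

```python
# A minimal sketch of the logistic map May studied:
#     x[n+1] = r * x[n] * (1 - x[n])
# where x is the population density (between 0 and 1) and r is the
# growth rate.

def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map from density x0 and return the path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Low growth rate: the density settles down to a single value.
print(logistic_trajectory(2.8, 0.2, 100)[-3:])

# Medium growth rate: the density oscillates, as in Figure 5.2.
print(logistic_trajectory(3.5, 0.2, 100)[-4:])

# High growth rate: chaos, as in Figure 5.3. Two starting densities
# that differ by one part in a million soon bear no resemblance.
a = logistic_trajectory(3.9, 0.200000, 50)
b = logistic_trajectory(3.9, 0.200001, 50)
print(abs(a[-1] - b[-1]))  # typically of order 0.1 or more
```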

  As well as studying single populations, May thought about ecosystems as a whole. For example, what happens when more and more creatures join an environment, generating a complicated web of interactions? In the early 1970s, many ecologists would have said the answer was a positive one. They believed that complexity was generally a good thing in nature; the more diversity there was in an ecosystem, the more robust it would be in the face of a sudden shock.

  That was the dogma, at least, and May was not convinced it was correct. To examine whether a complex system could really be stable, he looked at a hypothetical ecosystem with a large number of interacting species. The interactions were chosen at random: some were beneficial to a species, some harmful. He then measured the stability of the ecosystem by seeing what happened when it was disrupted. Would it return to its original state, or do something completely different, like collapse? This was one of the advantages of working with a theoretical model: he could test stability without disrupting the real ecosystem.

  May found that the larger the ecosystem, the less stable it would be. In fact, as the number of species grew very large, the probability of the ecosystem surviving shrank to zero. Increasing the level of complexity had a similarly harmful effect. When the ecosystem was more connected, with a higher chance of any two given species interacting with each other, it was less stable. The model suggested that the existence of large, complex ecosystems was unlikely, if not impossible.
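
  May's calculation can be reproduced in miniature. The Python sketch below builds a random community matrix in the spirit of his model and applies the standard stability test, asking whether every eigenvalue has a negative real part; the Gaussian interaction strengths and the specific connectance and strength values are assumptions chosen for illustration, not numbers from the text:

```python
import numpy as np

def random_ecosystem_stable(n_species, connectance, strength, rng):
    """Build a random community matrix and test its stability.

    Each off-diagonal entry, the effect of one species on another, is
    nonzero with probability `connectance` and drawn from a normal
    distribution with standard deviation `strength`. The diagonal is
    set to -1 (each species regulates itself). The system counts as
    stable if every eigenvalue has a negative real part.
    """
    m = rng.normal(0.0, strength, (n_species, n_species))
    mask = rng.random((n_species, n_species)) < connectance
    m = np.where(mask, m, 0.0)
    np.fill_diagonal(m, -1.0)
    return np.max(np.linalg.eigvals(m).real) < 0

rng = np.random.default_rng(0)
for n in (10, 50, 250):
    stable = [random_ecosystem_stable(n, 0.3, 0.2, rng) for _ in range(50)]
    # The fraction of stable ecosystems falls toward zero as n grows.
    print(n, sum(stable) / len(stable))
```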

  Of course, there are plenty of examples of complex yet seemingly robust ecosystems in nature. Rainforests and coral reefs have vast numbers of different species, yet they haven’t all collapsed. According to ecologist Andrew Dobson, the situation is the biological equivalent of a joke made in the early days of the European currency union. Although the euro worked in practice, observers said, it was not clear why it worked in theory.

  To explain the difference between theory and reality, May suggested that nature had to resort to “devious strategies” to maintain stability. Researchers have since put forward all sorts of intricate strategies in an attempt to drag the theory closer to nature. Yet, according to Stefano Allesina and Si Tang, two ecologists at the University of Chicago, this might not be necessary. In 2013, they proposed a possible explanation for the discrepancy between May’s model and real ecosystems.

  Whereas May had assumed random interactions between different species—some positive, some negative—Allesina and Tang focused on three specific relationships that are common in nature. The first of these was a predator-prey interaction, with one species eating another; obviously, the predator will gain from this relationship, and the prey will lose out. As well as predation, Allesina and Tang also included cooperation, where both parties benefit from the relationship, and competition, with both species suffering negative effects.

  Next, the researchers looked at whether each relationship stabilized the overall system or not. They found that excessive levels of competitive and cooperative relationships were destabilizing, whereas predator-prey relationships had a stabilizing effect on the system. In other words, a large ecosystem could be robust to disruption as long as it had a series of predator-prey interactions at its core.
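
  Allesina and Tang's point can be tested with one change to the same recipe: force each interacting pair to take opposite signs, so that one species' gain is the other's loss. In the sketch below (again with assumed, illustrative parameters), an ecosystem of a size that almost never survives in the random version now comes out stable almost every time:

```python
import numpy as np

def predator_prey_stable(n_species, connectance, strength, rng):
    """Stability test for a community built from predator-prey pairs."""
    m = np.zeros((n_species, n_species))
    for i in range(n_species):
        for j in range(i + 1, n_species):
            if rng.random() < connectance:
                # One species gains from the interaction, the other
                # loses; which one is the predator is chosen at random.
                effect_on_i = abs(rng.normal(0.0, strength))
                effect_on_j = -abs(rng.normal(0.0, strength))
                if rng.random() < 0.5:
                    effect_on_i, effect_on_j = effect_on_j, effect_on_i
                m[i, j] = effect_on_i  # effect of species j on species i
                m[j, i] = effect_on_j  # effect of species i on species j
    np.fill_diagonal(m, -1.0)  # self-regulation, as before
    return np.max(np.linalg.eigvals(m).real) < 0

rng = np.random.default_rng(1)
stable = [predator_prey_stable(250, 0.3, 0.2, rng) for _ in range(20)]
print(sum(stable) / len(stable))  # close to 1.0: stable almost every time
```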

  So, what does all this mean for betting and financial markets? Much like ecosystems, markets are now inhabited by several different bot species. Each has a different objective and specific strengths and weaknesses. There are bots out hunting for arbitrage opportunities; they are trying to react to new information first, be it an important event or an incorrect price. Then there are the “market makers,” offering to accept trades or bets on both sides and pocket the difference. These bots are essentially bookmakers, making their money by anticipating where the action will be. They buy low and sell high, with the aim of balancing their books. There are also bots trying to hide large transactions by sneaking smaller trades into the market. And there are predator bots watching for these large trades, hoping to spot a big transaction and take advantage of the subsequent shift in the market.

  During the flash crash on May 6, 2010, there were over fifteen thousand different accounts trading the futures contracts involved in the crisis. In a subsequent report, the Securities and Exchange Commission (SEC) divided the trading accounts into several different categories, depending on their role and strategy. Although there has been much debate about precisely what happened that afternoon, if the crash was indeed triggered by a single event—as the SEC report suggested—the havoc that followed was not the result of one algorithm. Chances are it came from the interaction between lots of different trading programs, with each one reacting to the situation in its own way.

  Some interactions had particularly damaging effects during the flash crash. In the middle of the crisis, at 2:45 p.m., there was a drought of buyers for futures contracts. High-frequency algorithms therefore traded among themselves, swapping over twenty-seven thousand futures in the space of fourteen seconds. Normality only resumed after the exchange deliberately paused the market for a few seconds, halting the runaway drop in price.

  Rather than treating betting or financial markets as a set of static economic rules, it makes sense to view them as an ecosystem. Some traders are predators, feeding off weaker prey. Others are competitors, fighting over the same strategy and both losing out. Many of the ideas and warnings from ecology can therefore apply to markets. Simplicity does not mean predictability, for example. Even if algorithms follow simple rules, they won’t necessarily behave in simple ways. Markets also involve webs of interactions—some strong, some brittle—which means that having lots of different bots in the same place does not necessarily help matters. Just as May showed, making an ecosystem more complex doesn’t necessarily make it more stable.

  Unfortunately, increased complexity is inevitable when there are lots of people looking for profitable strategies. Whether in betting or finance, ideas are less lucrative once others notice what is going on. As exploitable situations become widely known, the market gets more efficient and the advantage disappears. Strategies therefore have to evolve as existing approaches become redundant.

  Doyne Farmer has pointed out that the process of evolution can be broken down into several stages. To come up with a good strategy, you first need to spot a situation that can be exploited. Next, you need to get ahold of enough data to test whether your strategy works. Just as gamblers need plenty of data to rate horses or sports teams, traders need enough information to be sure that the advantage is really there, and not a random anomaly. At Prediction Company, this process was entirely algorithm-driven. The trading strategies were what Farmer called “evolving automata,” with the decision-making process mutating as the computers accumulated new experience.

  The shelf life of a trading strategy depends on how easy it is to complete each evolutionary stage. Farmer has suggested that it can often take years for markets to become efficient and strategies to become useless. Of course, the bigger the inefficiency is, the easier it is to spot and exploit. Because computer-based strategies tend to be highly lucrative at first, copycats are more likely to appear. Algorithmic approaches therefore have to evolve faster than other types of strategy. “There’s going to be an ongoing saga of one-upmanship,” Farmer said.

  RECENT YEARS HAVE SEEN a huge growth in the number of algorithms scouring financial markets and betting exchanges. It is the latest connection between two industries that have a history of shared ideas, from probability theory to arbitrage. But the distinction between finance and gambling is blurring more than ever before.

  Several betting websites now allow people to bet on financial markets. As with other types of online betting, these transactions constitute gambling and hence are exempt from tax in many European countries (at least for the customer; there is still a tax burden on the bookmaker). One of the most popular types of financial wager is spread betting. In 2013, around a hundred thousand people in Britain placed bets in this way.

  In a traditional bet, the stake and potential payoff are fixed. You might bet on a certain team winning or on a share price rising. If the outcome goes your way, you get the payoff. If not, you lose your stake. Spread betting is slightly different. Your profit depends not just on the outcome but also on the size of the outcome. Let’s say a share is currently priced at $50, and you think it will increase in value in the next week. A spread betting company might offer you a spread bet at $1 per point over $51 (the difference between the current price and the offered number is the “spread,” and how the bookmaker makes its money). For every dollar the price rises above $51, you will get $1, and for every dollar it drops below, you will lose $1. In terms of payoff, it’s not that different from simply buying the share and then selling it a week later. You’ll make pretty much the same amount of profit (or loss) on both the bet and the financial transaction.
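
  The arithmetic of the example is easy to spell out. A few lines of Python (the function and its numbers simply restate the bet described above) give the profit or loss for any final price:

```python
def spread_bet_pnl(final_price, offer_level=51.0, stake_per_point=1.0):
    """Profit or loss on the spread bet described above: gain
    stake_per_point for every dollar the price finishes above the
    offered level, and lose the same for every dollar below it."""
    return stake_per_point * (final_price - offer_level)

# If the share rises from $50 to $55, the bet pays $4, while buying
# the share outright would have made $5; the $1 gap is the spread,
# which is how the bookmaker makes its money.
print(spread_bet_pnl(55.0))  #  4.0
print(spread_bet_pnl(48.0))  # -3.0 (the price fell below the offer)
```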

  But there is a crucial difference. If you make a profitable stock trade in the United Kingdom, you have to pay stamp duty and capital gains tax. If you place a spread bet, you don’t. Things are different in other countries. In Australia, profits from spread betting are classed as income and are therefore subject to tax.

  Deciding how to regulate transactions is a challenge in both gambling and finance. When dealing with an intricate trading ecosystem, however, it is not always clear what effects regulation will have. In 2006, the US Federal Reserve and the National Academy of Sciences brought together financiers and scientists to debate “systemic risk” in finance. The idea was to consider the stability of the financial system as a whole rather than just the behavior of individual components.

  During the meeting, Vincent Reinhart, an economist at the Federal Reserve, pointed out that a single action could have multiple potential outcomes. The question, of course, is which one will prevail. The result won’t depend only on what regulators do. It could also depend on how the policy is communicated and how the market reacts to news. This is where economic approaches borrowed from the physical sciences can come up short. Physicists study interactions that follow known rules; they don’t generally have to deal with human behavior. “The odds on a hundred-year storm do not change because people think it has become more likely,” Reinhart said.

  Ecologist Simon Levin, who also attended the meeting, elaborated on the unpredictability of behavior. He noted that economic interventions—like the ones available to the Federal Reserve—aim to change individual behavior in the hope of improving the system as a whole. Although certain measures can change what individuals do, it is very difficult to stop panic spreading through a market.

  Yet the spread of information is only going to get faster. News no longer has to be read and processed by humans. Bots are absorbing news automatically and handing it to programs that make trading decisions. Individual algorithms react to what others do, with decisions made on the sort of timescales that humans can never fully supervise. This can lead to dramatic, unexpected behavior. Such problems often come from the fact that high-frequency algorithms are designed to be simple and fast. The bots are rarely complex or clever: the aim is to exploit an advantage before anyone else gets there. Creating successful artificial gamblers is not always a matter of being first, however. As we shall discover, sometimes it pays to be smart.

  6

  LIFE CONSISTS OF BLUFFING

  IN SUMMER 2010, POKER WEBSITES LAUNCHED A CRACKDOWN ON robot players. By pretending to be people, these bots had been winning tens of thousands of dollars. Naturally, their human opponents weren’t too happy. In retaliation, website owners shut down any accounts that were apparently run by software. One company handed almost $60,000 back to players after discovering that bots had been winning on their tables.

  It wasn’t long before computer programs again surfaced in online poker games. In February 2013, Swedish police started investigating poker bots that had been operating on a state-owned poker website. It turned out that these bots had made the equivalent of over half a million dollars. It wasn’t just the size of the haul that worried poker companies; it was how the money was made. Rather than taking money from weaker players in low-stakes games, the bots had been winning on high-stakes tables. Until these sophisticated computer players were discovered, few people in the industry had realized that bots were capable of playing so well.

  Yet poker algorithms have not always been so successful. When bots first became popular in the early 2000s, they were easily beaten. So, what has changed in recent years? To understand why bots are getting better at poker, we must first look at how humans play games.

  WHEN THE US CONGRESS put forward a bill in 1969 suggesting that cigarette advertisements be banned from television, people expected American tobacco companies to be furious. After all, this was an industry that had spent over $300 million promoting its products the previous year. With that much at stake, a clampdown would surely trigger the powerful weapons of the tobacco lobby. They would hire lawyers, challenge members of Congress, fight antismoking campaigners. The vote was scheduled to take place in December 1970, which gave the firms eighteen months to make their move. So, what did they choose to do? Pretty much nothing.

  Far from hurting tobacco companies’ profits, the ban actually worked in the companies’ favor. For years, the firms had been trapped in an absurd game. Television advertising had little effect on whether people smoked, which in theory made it a waste of money. If the firms had all got together and stopped their promotions, profits would almost certainly have increased. However, ads did have an impact on which brand people smoked. So, if all the firms stopped their publicity, and one of them started advertising again, that company would steal customers from all the others.

  Whatever their competitors did, it was always best for a firm to advertise. By doing so, it would either take market share from companies that didn’t promote their products or avoid losing customers to firms that did. Although everyone would save money by cooperating, each individual firm would always benefit by advertising. Which meant all the companies inevitably ended up in the same position, putting out advertisements to hinder the other firms. Economists refer to such a situation—where each person is making the best decision possible given the choices made by others—as a “Nash equilibrium.” Spending would rise further and further until this costly game stopped. Or somebody forced it to stop.

  Congress finally banned tobacco ads from television in January 1971. One year later, the total spent on cigarette advertising had fallen by over 25 percent. Yet tobacco revenues held steady. Thanks to the government, the equilibrium had been broken.

  JOHN NASH PUBLISHED HIS first papers on game theory while he was a PhD student at Princeton. He’d arrived at the university in 1948, after being awarded a scholarship on the strength of his undergraduate tutor’s reference, a two-sentence letter that read, “Mr. Nash is nineteen years old and is graduating from Carnegie Tech in June. He is a mathematical genius.”

  During the next two years, Nash worked on a version of the “prisoner’s dilemma.” This hypothetical problem involves two suspects caught at the scene of a crime. Each is placed in a separate cell and must choose whether to remain silent or testify against the other person. If they both keep quiet, both receive one-year sentences. If one remains silent and the other talks, the quiet prisoner gets three years and the one who blames him is released. If both talk, both are sent down for two years.

  Overall, it would be best if both prisoners kept their mouths shut and took the one-year sentence. However, if you are a prisoner stuck alone in a cell, unable to tell what your accomplice is going to do, it is always better to talk: if your partner stays silent, you get off; if your partner talks, you receive two years rather than three. The Nash equilibrium for the prisoner’s dilemma game therefore has both players talking. Although they will end up suffering two years in prison rather than one, neither will gain anything if one alone changes strategy. Substitute talking and silence for advertising and cutting promotions, and it is the same problem the advertising firms faced.
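
  The logic of the dilemma can be checked mechanically. In the Python sketch below, the payoff table simply transcribes the sentences given in the text, and talking emerges as the best response to either choice:

```python
# The prisoner's dilemma from the text, with payoffs measured in years
# of prison (so lower is better). Entry YEARS[my_move][their_move].
SILENT, TALK = 0, 1
YEARS = [
    [1, 3],  # I stay silent: 1 year if they do too, 3 years if they talk
    [0, 2],  # I talk: released if they stay silent, 2 years if they talk
]

def best_response(their_move):
    """Return the move that minimizes my sentence, given the other's."""
    return min((SILENT, TALK), key=lambda my_move: YEARS[my_move][their_move])

# Talking is the best response whatever the other prisoner does, so
# (talk, talk) is the Nash equilibrium, even though (silent, silent)
# would leave both better off. Swap "talk" for "advertise" and the
# same logic traps the tobacco firms.
print(best_response(SILENT) == TALK)  # True
print(best_response(TALK) == TALK)    # True
```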

  Nash received his PhD in 1950, for a twenty-seven-page thesis describing how his equilibrium can sometimes thwart seemingly beneficial outcomes. But Nash wasn’t the first person to take a mathematical hammer to the problem of competitive games. History has given that accolade to John von Neumann. Although later known for his time at Los Alamos and Princeton, in 1926 von Neumann was a young lecturer at the University of Berlin. In fact, he was the youngest in its history. Despite his prodigious academic record, however, there were still some things he wasn’t very good at. One of them was poker.

  Poker might seem like the ideal game for a mathematician. At first glance, it’s just a matter of probabilities: the probability you receive a good hand; the probability your opponent gets a better one. But anyone who has played poker using only probability knows that things are not so simple. “Real life consists of bluffing,” von Neumann noted, “of little tactics of deception, of asking yourself what is the other man going to think I mean to do.” If he was to grasp poker, he would need to find a way to account for his opponent’s strategy.

 
