The Perfect Bet


by Adam Kucharski


  It turned out to be a superb piece of detective work. Two days later Betfair admitted that the error had indeed been caused by a faulty bot. “Due to a technical glitch within the core exchange database,” they said, “one of the bets evaded the prevention system and was shown on the site.” Apparently, the bot’s owner had less than £1,000 in an account at the time, so as well as fixing the glitch, Betfair voided the bets that had been made.

  As several Betfair users had already pointed out, such ridiculous odds should never have been available. The two hundred or so gamblers who had bet on the race would therefore have struggled to persuade a lawyer to take their case. “You cannot win—or lose—what is not there in the first place,” Greg Wood, the Guardian’s racing correspondent, wrote at the time, “and even the most opportunistic ambulance-chaser is likely to take one look at this fact and point to the door.”

  Unfortunately, the damage created by bots isn’t always so limited. Computer trading software is also becoming popular in finance, where the stakes can be much higher. Six months after the Voler La Vedette bot got its odds wrong, one financial company was to discover just how expensive a troublesome program could be.

  THE SUMMER OF 2012 was a busy time for Knight Capital. The New Jersey–based stockbroker was getting its computer systems ready for the launch of the New York Stock Exchange’s Retail Liquidity Program on August 1. The idea of the liquidity program was to make it cheaper for customers to carry out large stock trades. The trades themselves would be executed by brokers like Knight, which would provide the bridge between the customer and the market.

  Knight used a piece of software called SMARS to handle customers’ trades. The software was a high-speed order router: when a trade request came in from a client, SMARS would execute a series of smaller child orders until the original request had been filled. To avoid overshooting the required value, the program kept a tally of how many child orders had been completed and how much of the original request still needed to be executed.
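  To picture how such a router works, here is a minimal sketch in Python, assuming a simple send_child_order hook that returns the number of shares it executed. The names and structure are illustrative assumptions, not Knight’s actual SMARS code.

```python
# A tally-based order router: split a parent order into child orders
# and stop once the running total of executed shares reaches the goal.

def route_order(total_shares, child_size, send_child_order):
    filled = 0  # tally of shares executed so far
    while filled < total_shares:
        remaining = total_shares - filled
        # never request more than what is still outstanding
        filled += send_child_order(min(child_size, remaining))
    return filled
```

  The tally is the crucial safeguard: it is the only thing telling the loop that the original request has been filled.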

  A program named Power Peg had been responsible for halting trading once the order had been met, but Knight stopped using it in 2003. Two years later, the firm disabled the Power Peg code and installed the tally counter into a different part of the SMARS software. But, according to a subsequent US government report, Knight did not check what would happen if the Power Peg program was accidentally triggered again.
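  The hazard is easier to see in code. In this hypothetical sketch, the tally lives only in the new path, and a leftover setting can still steer a server down the retired one; every name here is an assumption for illustration, not a detail from the report.

```python
# Hypothetical: one stale flag on one un-updated server is enough to
# bypass the tally and send child orders without end.

USE_RETIRED_PATH = True  # stale setting on the eighth server

def route_order(total_shares, child_size, send_child_order):
    filled = 0
    while filled < total_shares:
        if USE_RETIRED_PATH:
            send_child_order(child_size)  # old path: tally never updates
        else:
            filled += send_child_order(min(child_size, total_shares - filled))
```

  Because the old path never updates the tally, the loop has no way to learn that the order is complete.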

  Toward the end of July 2012, technicians at Knight Capital started to update the software on each of the company’s servers. Over a series of days, they installed the new computer code on seven of the eight servers. However, they reportedly failed to add it to the eighth server, which still contained the old Power Peg program.

  Launch day arrived, and trade orders started coming in from customers and other brokers. Although Knight’s seven updated servers did their job properly, the eighth was unaware of how many requests had already been completed. It therefore did its own thing, peppering the market with millions of orders and buying and selling stocks in a rapid-fire trading spree. As the erroneous orders piled up, the tangle of trades that would later have to be unraveled grew larger and larger. While technology staff worked to identify the problem, the company’s portfolio grew. Over the course of forty-five minutes, Knight bought around $3.5 billion worth of stocks and sold over $3 billion. When it eventually stopped the algorithm and unwound the trades, the error would cost it over $460 million, equivalent to a loss of $170,000 per second. The incident left a massive dent in Knight’s finances, and in December of that year the company was acquired by a rival trading firm.

  Although Knight’s losses came from the unanticipated behavior of a computer program, technical problems are not the only enemy of algorithmic strategies. Even when automated software is working as planned, companies can still be vulnerable. If their program is too well behaved—and hence too predictable—a competitor might find a way to take advantage of it.

  In 2007, a trader named Svend Egil Larsen noticed that the algorithms of one US-based broker would always respond to certain trades in the same way. No matter how many stocks were bought, the broker’s software would raise the price in a similar manner. Larsen, who was based in Norway, realized that he could nudge up the price by making lots of little purchases, and then sell a large amount of stock back at the higher price. He’d become the financial equivalent of Professor Pavlov, ringing his bell and watching the algorithm respond obediently. Over the course of a few months, the tactic earned Larsen over $50,000.
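  The broker’s actual algorithm has never been published, so consider a toy model: suppose, purely as an assumption, that it raised its quote by a fixed step after every purchase, however small. A sketch of the tactic then looks like this.

```python
# Toy model of the exploit: each tiny buy nudges the quote up by a
# fixed `step`; the accumulated shares are then sold back at the top.

def pavlov_trade(start_price, step, n_nudges, nudge_size):
    price, cost, shares = start_price, 0.0, 0
    for _ in range(n_nudges):       # lots of little purchases...
        cost += price * nudge_size
        shares += nudge_size
        price += step               # ...each one rings the bell
    return price * shares - cost    # sell the lot at the higher price

# e.g. pavlov_trade(10.0, 0.05, 20, 1) returns 10.5: a profit that only
# exists because the price response ignores the size of each trade
```

  The toy ignores the market impact of the final sale; the real edge came from the algorithm’s size-blind response, which made the nudges cheap relative to the payoff.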

  Not everybody appreciated the ingenuity of his strategy. In 2010, Larsen and fellow trader Peder Veiby—who’d been doing the same thing—were charged with manipulating the market. The courts seized their profits and handed the pair suspended sentences. When the verdict was announced, Veiby’s lawyer argued that the nature of the opponent had biased the ruling. Had the pair profited from a stupid human trader rather than a stupid algorithm, the court would not have reached the same conclusion. Public opinion sided with Larsen and Veiby, with the press comparing their exploits to those of Robin Hood. That support was vindicated two years later, when Norway’s Supreme Court overturned the verdict, clearing the two men of all charges.

  There are several ways algorithms can wander into dangerous territory. They might be influenced by an error in the code, or they might be running on an out-of-date system. Sometimes they take a wrong turn; sometimes a competitor leads them astray. But so far we have only looked at single events. Larsen targeted a specific broker. Knight was a lone company. Just one gambler offered ridiculous odds on Voler La Vedette. Yet there are an increasing number of algorithms in betting and finance. If a single bot can take the wrong path, what happens when lots of firms use these programs?

  DOYNE FARMER’S WORK ON prediction did not end with the path of a casino roulette ball. After obtaining his PhD from the University of California, Santa Cruz, in 1981, Farmer moved to Los Alamos National Laboratory in New Mexico. While there, he developed an interest in finance. Over a few short years, he went from forecasting roulette spins to anticipating the behavior of stock markets. In 1991, he founded a hedge fund with fellow ex-Eudaemon Norman Packard. It was named Prediction Company, and the plan was to apply concepts from chaos theory to the financial world. Mixing physics and finance was to prove extremely successful, and Farmer spent eight years with the company before deciding to return to academia.

  Farmer is now a professor at the University of Oxford, where he looks at the effects of introducing complexity to economics. Although there is already plenty of mathematical thinking in the world of finance, Farmer has pointed out that it is generally aimed at specific transactions. People use mathematics to decide the price of their financial products or to estimate the risk involved in certain trades. But how do all these interactions fit together? If bots influence each other’s decisions, what effect could it have on the economic system as a whole? And what might happen when things go wrong?

  A crisis can sometimes begin with a single sentence. At lunchtime on April 23, 2013, the following message appeared on the Associated Press’s Twitter feed: “Breaking: Two Explosions in the White House and Barack Obama is injured.” The news was relayed to the millions of people who follow the Associated Press on Twitter, with many of them reposting the message to their own followers.

  Reporters were quick to question the authenticity of the tweet, not least because the White House was hosting a press conference at the time (which had not seen any explosions). The message indeed turned out to be a hoax, posted by hackers. The tweet was soon removed, and the Associated Press Twitter account was temporarily suspended.

  Unfortunately, financial markets had already reacted to the news. Or, rather, they had overreacted. Within three minutes of the fake announcement, the S&P 500 stock index had lost $136 billion in value. Although markets soon returned to their original level, the speed—and severity—of the reaction made some financial analysts wonder whether it was really caused by human traders. Would people have really spotted an errant tweet so quickly? And would they have believed it so easily?

  It wasn’t the first time a stock index had ended up looking like a sharp stalactite, stretching down from the realms of sanity. One of the biggest market shocks came on May 6, 2010. When the US financial markets opened that morning, several potential clouds were already on the horizon, including the upcoming British election and ongoing financial difficulties in Greece. Yet nobody foresaw the storm that was to arrive midafternoon.

  Although the Dow Jones Industrial Average had dipped a little earlier in the day, at 2:32 p.m. it started to decline sharply. By 2:42 p.m. it had lost almost 4 percent in value. The decline accelerated, and five minutes later the index was down another 5 percent. In barely twenty minutes, almost $900 billion had been wiped from the market’s value. The descent triggered one of the exchange’s fail-safe mechanisms, which paused trading for a few moments. This allowed prices to stabilize, and the index started to clamber back toward its original level. Even so, the drop had been staggering. So, what had happened?

  Severe market disruptions can often be traced to one main trigger event. In 2013, it was the hoax Twitter announcement about the White House. Bots that scour online newsfeeds, attempting to exploit information before their competitors, would have likely picked up on this and started making trades. The story gained a curious footnote in the following year, when the Associated Press introduced automated company earnings reports. Algorithms sift through the reports and produce a couple of hundred words summarizing firms’ performance in the Associated Press’s traditional writing style. The change means that humans are now even more absent from the financial news process. In press offices, algorithms convert reports into prose; on trading floors, their fellow robots turn these words into trading decisions.
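  As a rough illustration of how such newsfeed-scanning bots can behave, here is a minimal keyword-trigger sketch; the keyword list, ticker, and sell hook are all assumptions, not any firm’s actual system.

```python
# A naive headline-driven rule: if a post contains alarming words,
# sell first and verify later. Real systems are faster, not subtler.

NEGATIVE = {"explosion", "explosions", "injured", "attack"}

def react_to_headline(headline, sell):
    words = set(headline.lower().split())
    if words & NEGATIVE:
        sell("SPY", 1_000)   # hypothetical ticker and order size

# react_to_headline("Breaking: Two Explosions in the White House ...",
#                   lambda ticker, n: print("SELL", ticker, n))
```

  A rule this crude will dump stock on a hoax just as eagerly as on genuine news, which is exactly what made the fake tweet so potent.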

  The 2010 Dow Jones “flash crash” was thought to be the result of a different type of trigger event: a trade rather than an announcement. At 2:32 p.m., a mutual fund had used an automated program to sell seventy-five thousand futures contracts. Instead of spreading the order over a period of time, as a series of small icebergs, the program had apparently dropped the whole thing onto the market almost all at once. The previous time the fund had dealt with a trade that big, it had taken five hours to sell seventy-five thousand contracts. On this occasion, it had completed the whole transaction in barely twenty minutes.
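  For contrast, here is a sketch of the kind of time-sliced execution that spreads a big order out; the five-hour figure above corresponds to roughly three hundred one-minute slices of 250 contracts each. The send_order hook is an assumption.

```python
# Time-sliced ("iceberg"-style) execution: sell in small equal slices
# at fixed intervals instead of dumping the parent order at once.

import time

def twap_sell(contracts, n_slices, interval_s, send_order):
    slice_size = contracts // n_slices
    for _ in range(n_slices):
        send_order(slice_size)
        time.sleep(interval_s)

# twap_sell(75_000, 300, 60, print) spreads the sale over five hours;
# the 2010 program completed the same volume in barely twenty minutes
```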

  It was undoubtedly a massive order, but it was just one order, made by a single firm. Likewise, bots that analyze Twitter feeds are relatively niche applications: the majority of banks and hedge funds do not trade in this way. Yet the reaction of these Twitter-happy algorithms led to a spike that wiped billions off the stock market. How did these seemingly isolated events lead to such turbulence?

  To understand the problem, we can turn to an observation made by economist John Maynard Keynes in 1936. During the 1930s, English newspapers would often run beauty contests. They would publish a collection of girls’ photos and ask readers to vote for the six they thought would be most popular overall. Keynes pointed out that shrewd readers wouldn’t simply choose the girls they liked best. Instead, they would select the ones they thought everyone else would pick. And, if readers were especially sharp, they would go to the next level and try to work out which girl everyone else would expect to be the most popular.

  According to Keynes, the stock market often works in much the same way. When speculating on share prices, investors are in effect trying to anticipate what everyone else will do. Prices don’t necessarily rise because a company is fundamentally sound; they increase because other investors think the company is valuable. The desire to know what others are thinking means lots of second-guessing. What’s more, modern markets are moving further and further away from a carefully considered newspaper contest. Information arrives fast, and so does the action. And this is where algorithms can run into trouble.

  Bots are often viewed as complicated, opaque creatures. Indeed, complex seems to be the preferred adjective of journalists writing about trading algorithms (or any algorithm, for that matter). But in high-frequency trading, it’s quite the opposite: if you want to be quick, you need to keep things simple. The more instructions you have to deal with when trading financial products, the longer things take. Rather than clogging up their bots with subtlety and nuance, creators instead limit strategies to a few lines of computer code. Doyne Farmer warns that this doesn’t leave much room for reason and rationality. “As soon as you limit what you can do to ten lines of code, you’re non-rational,” he said. “You’re not even at insect-level intelligence.”
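  To make the scale concrete, here is a trend-following rule of roughly the size Farmer describes; it is a generic illustration, not any firm’s strategy.

```python
# A deliberately tiny trading rule: about ten lines, no nuance.
# `prices` is the recent tick history, most recent last.

def decide(prices):
    if len(prices) < 3:
        return "hold"
    if prices[-1] > prices[-2] > prices[-3]:
        return "buy"     # two upticks in a row: join the move
    if prices[-1] < prices[-2] < prices[-3]:
        return "sell"    # two downticks in a row: get out
    return "hold"
```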

  When traders react to a big event—whether a Twitter post or a major sell order—it piques the attention of the high-speed algorithms monitoring market activity. If others are selling stocks, they join in. As prices plummet, the programs follow each other’s trades, driving prices further downward. The market turns into an extremely fast beauty contest, with no one wanting to pick the wrong girl. The speed of the game can lead to serious problems. After all, it’s hard to work out who will move first when algorithms are faster than the eye can see. “You don’t have much time to think,” Farmer said. “It creates a big danger of over-reaction and herding.”
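  A toy simulation shows how a crowd of such rules can herd. Reusing the decide rule from the sketch above, suppose each bot’s trades push the price by a small fixed impact; the constants here are arbitrary assumptions.

```python
import random

# Many copies of the same simple rule react to one small downtick and
# amplify it: each round of selling produces the next sell signal.

def simulate(n_bots=100, steps=20, impact=0.0001):
    prices = [100.0, 99.95, 99.9]           # a small initial shock
    for _ in range(steps):
        votes = 0
        for _ in range(n_bots):
            action = decide(prices)
            if action == "sell" and random.random() < 0.9:
                votes -= 1                  # most bots join the selling
            elif action == "buy" and random.random() < 0.9:
                votes += 1
        prices.append(prices[-1] * (1 + impact * votes))
    return prices
```

  With nothing to interrupt the loop, the simulated price ratchets steadily downward: the beauty contest in fast motion.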

  Some traders have reported that mini flash crashes happen frequently. These shocks are not severe enough to grab headlines, but they are still there to be found by anyone who looks hard enough. A share price might drop in a fraction of a second, or trading activity will suddenly increase a hundredfold. In fact, there might be several such crashes every day. When researchers at the University of Miami looked at stock market data between 2006 and 2011, they found thousands of “ultrafast extreme events” in which a stock crashed or spiked in value—and recovered again—in less than a second. According to Neil Johnson, who led the research, these events are a world away from the kind of situations covered by traditional financial theories. “Humans are unable to participate in real time,” he said, “and instead, an ultrafast ecology of robots rises up to take control.”
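  The study’s criterion can be approximated in a few lines; the thresholds below (a move of more than 0.8 percent completed within a second) are simplified assumptions in the spirit of the definition, not the researchers’ actual code.

```python
# Scan a tick series for sub-second crashes or spikes. `ticks` is a
# list of (timestamp_seconds, price) pairs in time order.

def find_flash_events(ticks, move=0.008, window=1.0):
    events = []
    for i, (t0, p0) in enumerate(ticks):
        for t1, p1 in ticks[i + 1:]:
            if t1 - t0 > window:
                break
            if abs(p1 - p0) / p0 > move:
                events.append((t0, t1, p0, p1))
                break
    return events
```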

  WHEN PEOPLE TALK ABOUT chaos theory, they often focus on the physics side of things. They might mention Edward Lorenz and his work on forecasting and the butterfly effect: the unpredictability of the weather, and the tornado caused by the flap of an insect’s wings. Or they might recall the story of the Eudaemons and roulette prediction, and how the trajectory of a billiard ball can be sensitive to initial conditions. Yet chaos theory has reached beyond the physical sciences. While the Eudaemons were preparing to take their roulette strategy to Las Vegas, on the other side of the United States ecologist Robert May was working on an idea that would fundamentally change how we think about biological systems.

  Princeton University is a world away from the glittering high-rises of Las Vegas. The campus is a maze of neo-Gothic halls and sun-dappled quads; squirrels dash through ivy-clad archways, while students’ distinctive orange and black scarves billow in the New Jersey wind. Look carefully and there are also traces of famous past residents. There’s an “Einstein Drive,” which loops in front of the nearby Institute for Advanced Study. For a while there was also a “Von Neumann corner,” named after all the car accidents the mathematician reportedly had there. The story goes that von Neumann had a particularly ambitious excuse for one of his collisions. “I was proceeding down the road,” he said. “The trees on the right were passing me in orderly fashion at sixty miles per hour. Suddenly one of them stepped in my path.”

  During the 1970s, May was a professor of zoology at the university. He spent much of his time studying animal communities. He was particularly interested in how animal numbers changed over time. To examine how different factors influenced ecological systems, he constructed some simple mathematical models of population growth.

  From a mathematical point of view, the simplest type of population is one that reproduces in discrete bursts. Take insects: many species in temperate regions breed once per season. Ecologists can explore the behavior of hypothetical insect populations using an equation called “the logistic map.” The concept was first proposed in 1838 by statistician Pierre Verhulst, who was investigating potential limits to population. To calculate the population density in a particular year using the logistic map, we multiply three factors together: the population growth rate, the density in the previous year, and the amount of space—and hence resources—still available. Mathematically, this takes the form:

  Density in next year = Growth rate × Current density × (1 − Current density)
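  In the standard notation, with x for the density (a number between 0 and 1) and r for the growth rate, this is x_next = r × x × (1 − x). For example, with a growth rate of 2 and a current density of 0.3, next year’s density is 2 × 0.3 × 0.7 = 0.42.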

  The logistic map is built on a simple set of assumptions, and when the growth rate is small it churns out a simple result. Over a few seasons, the population settles down to equilibrium, with the population density remaining the same from one year to the next.

  FIGURE 5.1. Results from the logistic map with a low growth rate.

  The situation changes as the growth rate increases. Eventually, the population density starts to oscillate. In one year, lots of insects hatch, which reduces available resources; the next year, fewer insects survive, which leaves space for more creatures the following year, and so on. If we sketch out how the population changes over time, we get the picture shown in Figure 5.2.

  When the growth rate gets even larger, something strange happens. Rather than settle down to a fixed value, or switch between two values in a predictable way, the population density begins to vary wildly.

  Remember that there is no randomness in the model, no chance events. The animal density depends on a simple one-line equation. And yet the result is a bumpy, noisy set of values, which do not appear to follow a straightforward pattern.
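  A few lines of Python are enough to see all three regimes; this is a minimal sketch of the equation above, with the growth rates chosen only for illustration.

```python
# Iterate the logistic map: next = rate * density * (1 - density).
# Low rates settle to equilibrium, middling rates oscillate between
# two values, and high rates produce the wild, chaotic variation.

def logistic(rate, density, years):
    history = [density]
    for _ in range(years):
        density = rate * density * (1 - density)
        history.append(density)
    return history

for rate in (1.5, 3.2, 3.9):   # equilibrium, oscillation, chaos
    print(rate, [round(x, 3) for x in logistic(rate, 0.3, 50)[-4:]])
```

  Running it shows the first series flattening out at a single value, the second flipping between two, and the third jumping around with no obvious pattern, despite there being no randomness anywhere in the code.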

 
