
The Rules of Contagion


by Adam Kucharski


  In February 2007, a year before Bear Stearns collapsed, credit specialist Janet Tavakoli wrote about the rise of investment products like CDOs. She was particularly unimpressed with the models used to estimate correlations between mortgages. By making assumptions that were so far removed from reality, these models had in effect created a mathematical illusion, a way of making high-risk loans look like low-risk investments.[8] ‘Correlation trading has spread through the psyche of the financial markets like a highly infectious thought virus,’ Tavakoli noted. ‘So far, there have been few fatalities, but several victims have fallen ill, and the disease is rapidly spreading.’[9] Others shared her skepticism, viewing popular correlation methods as an overly simplistic way of analysing mortgage products. One leading hedge fund reportedly kept an abacus in one of its conference rooms; there was a label next to it that read ‘correlation model’.[10]

  Despite the problems with these models, mortgage products remained popular. Then reality caught up, as house prices started to fall. During that summer of 2008, I got the impression that many people had been well aware of the potential implications. The investments were tumbling in value by the day, but it didn’t seem to matter as long as there were still naïve investors out there to sell them on to. It was like carrying a sack of money that you know has a massive hole in the bottom, but not caring because you’re stuffing so much more in the top.

  As a strategy it was, well, full of holes. By August 2008, speculation was rife about just how empty the money bags were. Across the city, banks were looking for injections of funding, competing to court sovereign wealth funds in the Middle East. I remember equity traders grabbing passing interns to point out the latest drop in Lehman’s share price. I’d walk past empty desks, where once profitable CDO teams had been let go. Some of my colleagues would glance up nervously whenever security walked by, wondering if they’d be next. The fear was spreading. Then came the crash.

  The rise of complex financial products – and fall of funds like Long Term Capital Management – had persuaded central banks that they needed to understand the tangled web of financial trading. In May 2006 the Federal Reserve Bank of New York organised a conference to discuss ‘systemic risk’. They wanted to identify factors that might affect the stability of the financial network.[11]

  The conference attendees came from a range of scientific fields. One was ecologist George Sugihara. His lab in San Diego focused on marine conservation, using models to understand the dynamics of fish populations. Sugihara was also familiar with the world of finance, having spent four years working for Deutsche Bank in the late 1990s. During that period, banks had rapidly expanded their quantitative teams, seeking out people with experience of mathematical models. In an attempt to recruit Sugihara, Deutsche Bank had taken him on a luxury trip to a British country estate. The story goes that during dinner, a senior banker wrote a huge salary offer on a napkin. An astonished Sugihara didn’t know what to say. Mistaking Sugihara’s silence for disdain, the banker withdrew the napkin and proceeded to write an even bigger number. There was another pause, followed by another number. This time, Sugihara took the offer.[12]

  Those years with Deutsche Bank would be highly profitable for both parties. Although the data involved financial stocks rather than fish stocks, Sugihara’s experience with predictive models successfully transferred across to his new field. ‘Basically, I modelled the fear and greed of mobs that trade,’ he later told Nature.[13]

  Another person to join the Federal Reserve discussions was Robert May, who had previously supervised Sugihara’s PhD. An ecologist by training, May had worked extensively on analysis of infectious diseases. Although May was drawn into financial research largely by accident, he would go on to publish several studies looking at contagion in financial markets. In a 2013 piece for The Lancet medical journal, he noted the apparent similarity between disease outbreaks and financial bubbles. ‘The recent rise in financial assets and the subsequent crash have rather precisely the same shape as the typical rise and fall of cases in an outbreak of measles or other infection,’ he wrote. May pointed out that when an infectious disease epidemic rises it’s bad news, and when it falls, it’s good news. In contrast, it’s generally seen as positive when financial prices rise and bad when they fall. But he argued that this is a false distinction: rising prices are not always a good sign. ‘When something is going up without a convincing explanation about why it’s going up, that really is an illustration of the foolishness of the people,’ as he put it.[14]

  One of the best-known historical bubbles is ‘tulip mania’, which gripped the Netherlands in the 1630s. In popular culture, it’s a classic story of financial madness. Rich and poor alike poured more and more money into the flowers, to the point where tulip bulbs were going for the price of houses. One sailor who mistook a bulb for a tasty onion ended up in jail. Legend has it that when the market crashed in 1637, the economy suffered and some people drowned themselves in canals.[15] Yet according to Anne Goldgar at King’s College London, there wasn’t really that much of a bulb bubble. She couldn’t find a record of anybody who was ruined by the crash. Only a handful of wealthy people splashed out on the most expensive tulips. The economy was unharmed. Nobody drowned.[16]

  Other bubbles have had a much larger impact. The first time that people used the word ‘bubble’ to describe overinflated investments was during the South Sea Bubble.[17] Founded in 1711, the British South Sea Company controlled several trading and slavery contracts in the Americas. In 1719, they secured a lucrative financial deal with the British government. The following year, the company’s share price surged, rising four-fold in a matter of weeks, before falling just as sharply a couple of months later.[19]

  Price of South Sea Company shares, 1720 (data from Frehen et al., 2013[18])

  Isaac Newton had sold most of his shares during the spring of 1720, only to invest again during the summer peak. According to mathematician Andrew Odlyzko, ‘Newton did not just taste of the Bubble’s madness, but drank deeply of it.’ Some people timed their investments better. Bookseller Thomas Guy, an early investor, got out before the peak and used the profits to establish Guy’s Hospital in London.[20]

  There have been many other bubbles since, from Britain’s Railway Mania in the 1840s to the US dot-com bubble in the late 1990s. Bubbles generally involve a situation where investors pile in, leading to a rapid rise in price, followed by a crash when the bubble bursts. Odlyzko calls them ‘beautiful illusions’, luring investors away from reality. During a bubble, prices can climb far above values that can be logically justified. Sometimes people invest simply on the assumption that more will join afterwards, driving up the value of their investment.[21] This can lead to what is known as the ‘greater fool theory’: people may know it’s foolish to buy something expensive, but believe there is a greater fool out there, who will later buy it off them at a higher price.[22]

  One of the most extreme examples of the greater fool theory is a pyramid scheme. Such schemes come in a variety of forms, but all have the same basic premise. Recruiters encourage people to invest in the scheme, with the promise that they’ll get a share of the total pot if they can recruit enough other people. Because pyramid schemes follow a rigid format, they are relatively easy to analyse. Suppose a scheme starts with ten people paying in, and each of these people has to recruit ten others to get their payout. If they all manage to pull in another ten, it will mean 100 new people. Each of the new recruits will need to persuade another ten, which would grow the scheme by another 1,000 people. Expanding another step would require 10,000 extra people, then 100,000, then a million. It doesn’t take long to spot that in the later stages of the scheme, there simply aren’t enough people out there to persuade: the bubble will probably burst after a few rounds of recruitment. If we know how many people are susceptible to the idea, and might plausibly sign up, we can therefore predict how quickly the scheme will fail.
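  To make the arithmetic concrete, here is a minimal sketch of the recruitment rounds described above. It is my own illustration rather than anything from the book: the pool size and the rule that each member must recruit ten others are assumptions chosen to match the example.

```python
# A toy model of pyramid-scheme recruitment, assuming a finite pool of
# potential participants. Pool size and recruitment ratio are illustrative.
def rounds_until_collapse(susceptible_pool, founders=10, recruits_each=10):
    """Count recruitment rounds before the scheme runs out of new people."""
    newest_members = founders
    susceptible = susceptible_pool - founders
    rounds = 0
    while True:
        needed = newest_members * recruits_each  # recruits owed to the latest layer
        if needed > susceptible:
            return rounds  # not enough people left: the scheme collapses
        newest_members = needed
        susceptible -= needed
        rounds += 1

# Even with a pool the size of a large country, collapse comes quickly:
print(rounds_until_collapse(100_000_000))  # -> 6 rounds under these assumptions
```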

  Given their unsustainable nature, pyramid schemes are generally illegal. But the potential for rapid growth, and the money it brings for the people at the top, means that they remain a popular option for scammers, particularly if there is a large pool of potential participants. In China, some pyramid schemes – or ‘business cults’ as the authorities call them – have reached a huge scale. Since 2010, several schemes have managed to recruit over a million investors each.[23]

  The four phases of a bubble (adapted from an original graphic by Jean-Paul Rodrigue)

  Unlike pyramid schemes, which follow a rigid structure, financial bubbles can be harder to analyse. However, economist Jean-Paul Rodrigue suggests we can still divide a bubble into four main stages. First, there is a stealth phase, where specialist investors put money into a new idea. Next comes the awareness phase, with a wider range of investors getting involved. There may be an initial sell-off during this period as early investors cash in, like Newton did in the early stages of the South Sea Bubble. As the idea becomes more popular, the media and public join in, sending prices higher and higher in a mania. Eventually the bubble peaks and starts its decline during a ‘blow off’ phase, perhaps with some small secondary peaks as optimistic investors hope for another rise. These bubble stages are analogous to the four stages of an outbreak: spark, growth, peak, decline.[24]

  One signature feature of a bubble is that it grows rapidly, with the rate of buying activity increasing over time. Bubbles often feature what’s known as ‘super-exponential’ growth;[25] not only does the buying activity accelerate, the acceleration itself accelerates. With every increase in price, even more investors join in, driving the price higher. And like an infection, the faster a bubble grows, the faster it will burn through the population of susceptible people.
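  One way to picture that distinction is a toy simulation in which the growth rate itself rises with the price, so each wave of buying speeds up the next. The sketch below is my own, with made-up numbers rather than a fitted model of any real bubble.

```python
# Toy comparison of exponential and 'super-exponential' growth.
# Parameters are illustrative, not fitted to any real market.
def simulate(growth_rate_fn, price=1.0, steps=10):
    prices = [price]
    for _ in range(steps):
        price *= 1 + growth_rate_fn(price)
        prices.append(price)
    return prices

exponential = simulate(lambda p: 0.10)            # growth rate fixed at 10% per step
super_exponential = simulate(lambda p: 0.10 * p)  # growth rate rises with the price

print([round(x, 2) for x in exponential])        # steady compounding
print([round(x, 2) for x in super_exponential])  # each step grows faster than the last
```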

  Unfortunately, it can be difficult to know how many people out there are still susceptible. This is a common problem when analysing an outbreak: during the initial growth phase, it’s hard to work out how far through we are. For infectious disease outbreaks, a lot depends on how many infections show up as cases. Suppose most infections go unreported. This means that for every case we see, there will be a lot of other new infections out there, reducing the number of people who are still susceptible. In contrast, if the majority of infections are reported, there could still be a lot of people at risk of infection. One way around this problem is to collect and test blood samples from a population. If most people have already been infected and developed immunity to the disease, it’s unlikely the outbreak can continue for much longer. Of course, it’s not always possible to collect a large number of samples in a short space of time. Even so, we can still say something about the maximum possible outbreak size. By definition, it’s impossible to have more infections than there are people in the population.
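  The effect of the reporting rate is easy to see with some rough numbers. The sketch below is my own back-of-the-envelope calculation; the population size, case count and reporting rates are invented for illustration.

```python
# How the assumed reporting rate changes the estimate of who is still at risk.
# All numbers here are made up for illustration.
def susceptible_fraction(population, reported_cases, reporting_rate):
    """Estimate the fraction of the population still susceptible."""
    estimated_infections = reported_cases / reporting_rate
    # By definition, there cannot be more infections than people.
    estimated_infections = min(estimated_infections, population)
    return 1 - estimated_infections / population

# The same 10,000 reported cases imply very different epidemics:
print(round(susceptible_fraction(1_000_000, 10_000, reporting_rate=0.9), 2))   # ~0.99 still susceptible
print(round(susceptible_fraction(1_000_000, 10_000, reporting_rate=0.02), 2))  # 0.5 still susceptible
```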

  Things aren’t so simple for financial bubbles. People can leverage their trades, borrowing money to cover additional investments. This makes it much harder to estimate how much susceptibility there is, and hence what phase of the bubble we’re in. Still, it is sometimes possible to spot the signals of unsustainable growth. As the dot-com bubble grew in the late 1990s, a common justification for rising prices was the claim that internet traffic was doubling every 100 days. This explained why infrastructure companies were being valued at hundreds of billions of dollars and investors were pouring money into internet providers like WorldCom. But the claim was nonsense. In 1998, Andrew Odlyzko, then a researcher at AT&T Labs, realised the internet was growing at a much slower rate, taking about a year to double in size.[26] In one press release, WorldCom had claimed that user demand was growing by 10 per cent every week. For this growth to be sustainable, it would mean that within a year or so, everyone in the world would have had to be active online for twenty-four hours a day.[27] There were simply not enough susceptible people out there.
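  The arithmetic behind that last point is worth spelling out. This is my own rough calculation with round numbers, not WorldCom’s actual figures.

```python
# Compounding 10 per cent weekly growth quickly becomes implausible.
weekly_growth = 1.10
per_year = weekly_growth ** 52

print(round(per_year))       # roughly 142-fold growth in one year
print(round(per_year ** 2))  # roughly 20,000-fold after two years

# Whatever the starting level of demand, growth at that pace soon collides with
# a hard ceiling: a finite number of people, each with only twenty-four hours
# in the day to spend online.
```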

  Arguably the greatest bubble of recent years has been Bitcoin, which uses a shared public transaction record with strong encryption to create a decentralised digital currency. Or as comedian John Oliver described it: ‘everything you don’t understand about money combined with everything you don’t understand about computers.’[28] The price of one Bitcoin climbed to almost $20,000 in December 2017, before dropping to less than a fifth of this value a year later.[29] It was the latest in a series of mini-bubbles; Bitcoin prices had risen and crashed several times since the currency emerged in 2009. (Prices would start to rise again in mid-2019.)

  Each Bitcoin bubble involved a larger group of susceptible people, like an outbreak gradually making its way from a village into a town and finally into a city. At first, a small group of early investors got involved; they understood the Bitcoin technology and believed in its underlying value. Then a wider range of investors joined in, bringing more money and higher prices. Finally, Bitcoin hit the mass-market, with coverage on the front pages of newspapers and adverts on public transport. The delay between each of the historical Bitcoin peaks suggests that the idea didn’t spread very efficiently between these different groups. If susceptible populations are strongly connected, an epidemic will generally peak around the same time, rather than as a series of smaller outbreaks.

  According to Jean-Paul Rodrigue, there is a dramatic shift during the main growth phase of a bubble. The amount of money available increases, while the average knowledge base decreases. ‘The market gradually becomes more exuberant as “paper fortunes” are made from regular “investors” and greed sets in,’ he suggested.[30] Economist Charles Kindleberger, who wrote the landmark book Manias, Panics, and Crashes in 1978 (later editions were revised with Robert Aliber), emphasised the role of social contagion during this phase of a bubble: ‘There is nothing so disturbing to one’s well-being and judgment as to see a friend get rich’.[31] Investors’ desire to be part of a growing trend can even cause warnings about a bubble to backfire. During the British Railway Mania in the 1840s, newspapers like The Times argued that railway investment was growing too fast, potentially putting other parts of the economy at risk. But this only encouraged investors, who saw it as evidence that railway company stock prices would continue rising.[32]

  In the later stages of a bubble, fear can spread in much the same way as enthusiasm. The first ripple in the 2008 mortgage bubble appeared as early as April 2006, when US house prices peaked.[33] It sparked the idea that mortgage investments were much riskier than people had thought, an idea that would spread through the industry, eventually bringing down entire banks in the process. Lehman Brothers would collapse on 15 September 2008, a week or so after I finished my internship in Canary Wharf. Unlike Long Term Capital Management, there would be no saviour. Lehman’s collapse triggered fears that the entire global financial system could go under. In the US and Europe, governments and central banks provided over $14 trillion worth of support to prop up the industry. The scale of the intervention reflected how much banks’ investments had expanded in the preceding decades. Between the 1880s and 1960s, British banks’ assets were generally around half the size of the country’s economy. By 2008, they were more than five times larger.[34]

  I didn’t realise it at the time, but as I was leaving finance for a career in epidemiology, in another part of London the two fields were coming together. Over on Threadneedle Street, the Bank of England was battling to limit the fallout from Lehman’s collapse.[35] More than ever, it was clear that many had overestimated the stability of the financial network. Popular assumptions of robustness and resilience no longer held up; contagion was a much bigger problem than people had thought.

  This is where the disease researchers came in. Building on that 2006 conference at the Federal Reserve, Robert May had started to discuss the problem with other scientists. One of them was Nim Arinaminpathy, a colleague at the University of Oxford. Arinaminpathy recalled that, pre-2007, it was unusual to study the financial system as a whole. ‘There was a lot of faith in the vast, complex financial system being self-correcting,’ he said. ‘The attitude was “we don’t need to know how the system works, instead we can concentrate on individual institutions”.’[36] Unfortunately, the events of 2008 would reveal the weakness in this approach. Surely there was a better way?

  During the late 1990s, May had been Chief Scientific Adviser to the UK Government. As part of this role, he’d got to know Mervyn King, who would later become Governor of the Bank of England. When the 2008 crisis hit, May suggested they look at the issue of contagion in more detail. If a bank suffered a shock, how might it propagate through the financial system? May and his colleagues were well placed to tackle the problem. In the preceding decades, they had studied a range of infections – from measles to HIV – and developed new methods to guide disease control programmes. These ideas would eventually revolutionise central banks’ approach to financial contagion. However, to understand how these methods work, we first need to look at a more fundamental question: how do we work out whether an infection – or a crisis – will spread or not?

  After William Kermack and Anderson McKendrick announced their work on epidemic theory in the 1920s, the field took a sharp mathematical turn. Although people continued working on outbreak analysis, the work became more abstract and technical. Researchers like Alfred Lotka published lengthy, complicated papers, moving the field away from real-life epidemics. They found ways to study hypothetical outbreaks involving random events, intricate transmission processes and multiple populations. The emergence of computers helped drive these technical developments; models that were previously difficult to analyse by hand could now be simulated.[37]
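  To give a flavour of what that shift made possible, here is a minimal discrete-time version of the kind of susceptible-infectious-recovered model that grew out of Kermack and McKendrick’s work. It is my own sketch with illustrative parameter values: a few lines of simulation rather than the pages of algebra earlier researchers had to work through by hand.

```python
# A minimal discrete-time SIR-type simulation. Parameter values are
# illustrative only.
def simulate_sir(population=10_000, initial_infectious=1,
                 transmission_rate=0.3, recovery_rate=0.1, days=200):
    s = population - initial_infectious  # susceptible
    i = initial_infectious               # infectious
    r = 0                                # recovered
    infectious_over_time = []
    for _ in range(days):
        new_infections = transmission_rate * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infectious_over_time.append(i)
    return infectious_over_time

curve = simulate_sir()
peak_day = curve.index(max(curve))
print(f"Epidemic peaks around day {peak_day}, with about {max(curve):.0f} infectious")
```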

 
