The Rules of Contagion

by Adam Kucharski


  The 2008 crisis wasn’t the first time Andy Haldane had thought about contagion in financial systems.[82] ‘I remember back in 2004/5, writing a note about us having entered the era of “super-systemic risk” as a result of these sorts of infections.’ His note suggested that the financial network might be robust in some situations and extremely fragile in others. The idea was well-established in ecology: the structure of a network might make it resilient to minor shocks, but the same structure could also leave it vulnerable to complete collapse if put under enough stress. Think about a team at work. If most people are doing well, weaker members can get away with mistakes because they are linked to high performers. However, if most of the team are struggling, the same links will instead drag strong members down. ‘The basic point was that all this integration did indeed reduce the probability of mini-crashes,’ Haldane said, ‘but increased the probability of a maxi-crash.’

  It may have been a prescient idea, but it didn’t spread very far. ‘That note didn’t really go anywhere unfortunately,’ he said, ‘until the big one came.’ Why didn’t the idea take off? ‘It was hard to spot any examples of such systemic risk at the time. It appeared to be a very flat ocean at that point.’ That would change in autumn 2008. After Lehman Brothers collapsed, people across the banking industry started thinking in terms of epidemics. According to Haldane, it was the only way to explain what had happened. ‘You couldn’t tell a story about why Lehman had brought the financial system down without telling a contagion story.’

  If you were to make a list of network features that could amplify contagion, you’d find that the pre-2008 banking system had most of them. Let’s start with the distribution of links between banks. Rather than connections being scattered evenly, a handful of firms dominated the network, creating massive potential for superspreading. In 2006, researchers working with the Federal Reserve Bank of New York picked apart the structure of the US Fedwire payment network. When they looked at the $1.3 trillion of transfers that happened between thousands of US banks on a typical day, they found that 75 per cent of the payments involved just 66 institutions.[83]

  [Figure: illustration of assortative and disassortative networks. Adapted from Hao et al., 2011]

  The variability in links wasn’t the only problem. It was also how these big banks fitted into the rest of the network. In 1989, epidemiologist Sunetra Gupta led a study showing that the dynamics of infections could depend on whether a network is what mathematicians call ‘assortative’ or ‘disassortative’. In an assortative network, highly connected individuals are linked mostly to other highly connected people. This results in an outbreak that spreads quickly through these clusters of high-risk individuals, but struggles to reach the other, less connected, parts of the network. In contrast, in a disassortative network, high-risk people are mostly linked to low-risk ones. This makes the infection spread more slowly at first, but leads to a larger overall epidemic.[84]
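
  To make the distinction concrete, here is a minimal Python sketch; the book itself contains no code, and this is not Gupta’s actual model. It builds two networks with the same heavy-tailed degree sequence, rewires one to be assortative and the other disassortative, and runs a crude outbreak on each. Every function name and parameter value below (rewire_by_degree, outbreak_size, beta and so on) is an assumption made purely for illustration.

```python
# Illustrative sketch only: same degree sequence, opposite mixing patterns.
import random
import networkx as nx

random.seed(1)

def rewire_by_degree(g, assortative, swaps=5000):
    """Swap pairs of edges to push degree correlations up or down
    (a simple variant of Xulvi-Brunet/Sokolov-style rewiring)."""
    g = nx.Graph(g)
    for _ in range(swaps):
        (a, b), (c, d) = random.sample(list(g.edges()), 2)
        if len({a, b, c, d}) < 4:
            continue
        nodes = sorted([a, b, c, d], key=g.degree)
        if assortative:   # link like with like
            new_edges = [(nodes[0], nodes[1]), (nodes[2], nodes[3])]
        else:             # link hubs with the periphery
            new_edges = [(nodes[0], nodes[3]), (nodes[1], nodes[2])]
        if any(g.has_edge(u, v) for u, v in new_edges):
            continue
        g.remove_edges_from([(a, b), (c, d)])
        g.add_edges_from(new_edges)
    return g

def outbreak_size(g, beta=0.2):
    """Crude outbreak: each infected node gets one chance to pass the
    infection to each susceptible neighbour, with probability beta."""
    infected = {random.choice(list(g))}
    recovered = set()
    while infected:
        new_cases = set()
        for node in infected:
            for nbr in g.neighbors(node):
                if nbr not in recovered and nbr not in infected and random.random() < beta:
                    new_cases.add(nbr)
        recovered |= infected
        infected = new_cases
    return len(recovered)

base = nx.barabasi_albert_graph(500, 3, seed=1)   # a few hubs, many small nodes
for label, assort in [("assortative", True), ("disassortative", False)]:
    g = rewire_by_degree(base, assortative=assort)
    r = nx.degree_assortativity_coefficient(g)
    sizes = [outbreak_size(g) for _ in range(200)]
    print(f"{label:>15}: assortativity {r:+.2f}, mean outbreak {sum(sizes)/len(sizes):.0f} nodes")
```

  With these made-up settings, the disassortative version tends to produce the larger epidemics, echoing the pattern described above, though the exact figures depend entirely on the assumed parameters.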

  The banking network, of course, turned out to be disassortative. A major bank like Lehman Brothers could therefore spread contagion widely; when Lehman failed, it had trading relationships with over one million counter-parties.[85] ‘It was entangled in this mesh of exposures – derivatives and cash – and no one had the faintest idea quite who owed what to whom,’ Haldane said. It didn’t help that there were numerous, often hidden, loops in the wider network, creating multiple routes of transmission from Lehman to other companies and markets. What’s more, these routes could be very short. The international financial network had become a smaller world during the 1990s and 2000s. By 2008, each country was only a step or two away from another nation’s crisis.[86]

  In February 2009, investor Warren Buffett used his annual letter to shareholders to warn about the ‘frightening web of mutual dependence’ between large banks.[87] ‘Participants seeking to dodge troubles face the same problem as someone seeking to avoid venereal disease,’ he wrote. ‘It’s not just whom you sleep with, but also whom they are sleeping with.’ As well as putting supposedly careful institutions at risk, Buffett suggested that the network structure could also incentivise bad behaviour. If the government needed to step in and help during a crisis, the first companies on the list would be those that were capable of infecting many others. ‘Sleeping around, to continue our metaphor, can actually be useful for large derivatives dealers because it assures them government aid if trouble hits.’

  Given the apparent vulnerability of the financial network, central banks and regulators needed to understand the 2008 crisis. What else had been driving transmission? The Bank of England had already been working on models of financial contagion pre-crisis, but 2008 brought a new, real-life urgency to the work. ‘We started using them in practice when the crisis broke,’ Haldane said. ‘Not just for making sense of what was going on, but more importantly for what we might do to stop it happening again.’

  When one bank lends money to another, it creates a tangible link between the two: if the borrower goes under, the lender loses their money. In theory, we could trace this network to understand the outbreak risk, just as we can for STIs. But there’s more to it than that. Nim Arinaminpathy has pointed out that networks of loans were just one of several problems in 2008. ‘It’s almost like HIV,’ he said. ‘You can have transmission through sexual contacts, as well as needle exchanges or blood transfusions. There are multiple routes of transmission.’ In finance, contagion can also come from several different sources. ‘It isn’t just lending relationships, it’s also about shared assets and other exposures.’

  A long-standing idea in finance is that banks can use diversification to reduce their overall risk. By holding a range of investments, individual risks will balance each other out, improving the bank’s stability. In the lead up to 2008, most banks had adopted this approach to investment. They’d also chosen to do it in the same way, chasing the same types of assets and investment ideas. Although each individual bank had diversified their investments, there was little diversity in the way they had collectively done it.

  Why the similarity in behaviour? During the Great Depression that followed the 1929 Wall Street crash, economist John Maynard Keynes observed that there is a strong incentive to follow the crowd. ‘A sound banker, alas, is not one who foresees danger and avoids it,’ he once wrote, ‘but one who, when he is ruined, is ruined in a conventional way along with his fellows, so that no one can really blame him.’[88] The incentive works the other way too. Pre-2008, many companies started investing in trendy financial products like CDOs, which were far outside their area of expertise. Janet Tavakoli has pointed out that banks were happy to indulge them, inflating the bubble further. ‘As they say in poker, if you don’t know how to spot the sucker at the table, it is you.’[89]

  When multiple banks invest in the same asset, it creates a potential route of transmission between them. If a crisis hits and one bank starts selling off its assets, it will affect all the other firms who hold these investments. The more the largest banks diversify their investments, the more opportunities for shared contagion. Several studies have found that during a financial crisis, diversification can destabilise the wider network.[90]
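
  As a rough illustration of this shared-asset channel, the Python sketch below (an invented toy example, not the model used in the studies cited above) gives each bank a handful of overlapping asset holdings and a thin capital buffer. When one bank fails and dumps its portfolio, prices fall, and other holders of the same assets can be pushed under in turn. All the numbers here, such as PRICE_IMPACT and the buffer sizes, are assumptions made for the example.

```python
# Toy fire-sale cascade (illustrative only): overlapping portfolios turn one
# bank's forced selling into losses for every other holder of the same assets.
import random

random.seed(4)

N_BANKS, N_ASSETS = 20, 10
PRICE_IMPACT = 0.15                      # assumed price drop per liquidated holding

# Each bank holds one unit of four randomly chosen assets, valued at 1.0 each,
# and can absorb losses up to a small capital buffer before failing.
holdings = [set(random.sample(range(N_ASSETS), 4)) for _ in range(N_BANKS)]
capital = [0.4] * N_BANKS
prices = [1.0] * N_ASSETS
failed = set()

to_liquidate = [0]                       # shock: bank 0 fails first
while to_liquidate:
    bank = to_liquidate.pop()
    failed.add(bank)
    for asset in holdings[bank]:         # forced selling pushes the price down
        prices[asset] = max(prices[asset] - PRICE_IMPACT, 0.0)
    for b in range(N_BANKS):             # re-mark every survivor's portfolio
        if b in failed or b in to_liquidate:
            continue
        loss = sum(1.0 - prices[a] for a in holdings[b])
        if loss > capital[b]:            # buffer exhausted: this bank fails too
            to_liquidate.append(b)

print(f"{len(failed)} of {N_BANKS} banks fail after a single initial failure")
```

  The more assets the banks hold in common in this toy system, the further a single failure travels, which is the destabilising side of diversification that those studies describe.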

  Robert May and Andy Haldane noted that historically, the largest banks had held lower amounts of capital than their smaller peers. The popular argument was that because these banks held more diverse investments, they were at less risk; they didn’t need to have a big buffer against unexpected losses. The 2008 crisis revealed the flaws in this thinking. Large banks were no less likely to fail than smaller ones. What’s more, these big firms were disproportionately important to the stability of the financial network. ‘What matters is not a bank’s closeness to the edge of the cliff,’ May and Haldane wrote in 2011, ‘it is the extent of the fall.’[91]

  Two days after Lehman went under, Financial Times journalist John Authers visited a Manhattan branch of Citibank during his lunch break. He wanted to move some cash out of his account. Some of his money was covered by government deposit insurance, but only up to a limit; if Citibank collapsed too, he’d lose the rest. He wasn’t the only one who’d had this idea. ‘At Citi, I found a long queue, all well-dressed Wall Streeters,’ he later wrote.[92] ‘They were doing the same as me.’ The bank staff helped him open additional accounts in the name of his wife and children, reducing his risk. Authers was shocked to discover they’d been doing this all morning. ‘I was finding it a little hard to breathe. There was a bank run happening, in New York’s financial district. The people panicking were the Wall Streeters who best understood what was going on.’ Should he report what was happening? Given the severity of the crisis, Authers decided it would only make the situation worse. ‘Such a story on the FT’s front page might have been enough to push the system over the edge.’ His counterparts at other newspapers came to the same conclusion, and the news went unreported.

  The analogy between financial and biological contagion is a useful starting point, but there is one situation it doesn’t cover. To get infected during a disease outbreak, a person needs to be exposed to the pathogen. Financial contagion can also spread through tangible exposures, like a loan between banks or an investment in the same asset as someone else. The difference with finance is that firms don’t always need a direct exposure to fall ill. ‘There’s one way this is unlike any other network we’ve dealt with,’ said Nim Arinaminpathy. ‘You can have apparently healthy institutions come crashing down.’ If the public believes that a bank will go under, they may try to withdraw their money all at once, which would sink even a healthy bank. Likewise, when banks lose confidence in the financial system – as happened in 2007/8 – they often hoard money rather than lending it out. The rumour and speculation that circulates from one trader to another may therefore bring down firms that would otherwise have survived the crisis.

  During 2011, Arinaminpathy and Robert May worked with Sujit Kapadia at the Bank of England to investigate not only direct transmission through bad loans or shared investments, but also the indirect effect of fear and panic. They found that if bankers started hoarding money when they lost confidence in the system, it could exacerbate a crisis: banks that would otherwise have had enough capital to ride it out would instead fail. The damage was much worse when a large bank was involved because they tended to be in the middle of the financial network.[93] This suggested that rather than simply looking at the size of banks, regulators should consider who is at the heart of the system. It isn’t just about banks being ‘too big to fail’; it is more about them being ‘too central to fail’.
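
  A stripped-down way to see why centrality matters more than size alone is the sketch below. It is a generic threshold cascade invented for illustration, not the Bank of England model just described: in a stylised hub-and-spoke interbank network, the same failure rule wipes out a large chunk of the system when a core bank goes down, but almost nothing when a peripheral bank does. The network layout and the 25 per cent threshold are assumptions made for the example.

```python
# Illustrative 'too central to fail' sketch: a stylised core-periphery
# interbank network with a simple counterparty-failure threshold rule.
import networkx as nx

g = nx.Graph()
core = range(5)                                    # five core banks lend to each other
g.add_edges_from((i, j) for i in core for j in core if i < j)
for i in core:                                     # each also serves twenty peripheral banks
    for k in range(20):
        g.add_edge(i, f"periph-{i}-{k}")

def cascade_size(graph, first_failure, threshold=0.25):
    """A bank fails once more than `threshold` of its counterparties have failed."""
    failed = {first_failure}
    changed = True
    while changed:
        changed = False
        for bank in graph:
            if bank in failed:
                continue
            neighbours = list(graph.neighbors(bank))
            if sum(n in failed for n in neighbours) / len(neighbours) > threshold:
                failed.add(bank)
                changed = True
    return len(failed)

print("core bank fails first:      ", cascade_size(g, 0))            # drags its whole cluster down
print("peripheral bank fails first:", cascade_size(g, "periph-0-0")) # barely registers
```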

  These kinds of insights from epidemic theory are now being put into practice, something Haldane described as a ‘philosophical shift’ in how we think about financial contagion. One major change has been to get banks to hold more capital if they are important to the network, reducing their susceptibility to infection. Then there is the issue of the network links that transmitted the infection in the first place. Could regulators target these too? ‘The hardest part of this was when you went to questions of “Should we act to alter the very structure of the web”?’ Haldane said. ‘That’s when people started to kick up more of a fuss because it was a more intrusive intervention in their business model.’

  In 2011, a commission chaired by John Vickers recommended that larger British banks put a ‘ring-fence’ around their riskier trading activities.[94] This would help prevent the fallout from bad investments spreading to the retail parts of banks, which deal with high-street services like our savings accounts. ‘The ring-fence would help insulate UK retail banking from external shocks,’ the commission suggested. ‘A channel of financial system interconnectedness – and hence of contagion – would be made safer.’ The UK government eventually put the recommendation into practice, forcing banks to split their activities. Because it was such a tough policy to get through, it wasn’t picked up elsewhere; ring-fencing was proposed in other parts of Europe, but not implemented.[95]

  Ring-fencing isn’t the only strategy for reducing transmission. When banks trade financial derivatives, it’s often done ‘over the counter’ from one firm direct to another, rather than through a central exchange. Such trading activity came to almost $600 trillion in 2018.[96] However, since 2009, the largest derivatives contracts are no longer traded directly between major banks. They now have to go through independently run central hubs which have the effect of simplifying the network structure.
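
  The structural effect of central clearing is easy to sketch in outline; the back-of-the-envelope comparison below is an illustration rather than a figure from the book. With n dealers trading bilaterally, up to n(n-1)/2 distinct webs of exposure can build up between them, whereas routing every contract through a single hub leaves each dealer with just one link, to the hub itself.

```python
# Rough illustration: bilateral dealing versus clearing through a central hub.
for n in (10, 50, 100):
    bilateral_links = n * (n - 1) // 2   # every pair of dealers can be exposed to each other
    hub_links = n                        # each dealer is exposed only to the hub
    print(f"{n:>3} dealers: up to {bilateral_links:>4} bilateral links, or {hub_links} links via a hub")
```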

  The danger, of course, is that if a hub fails, it could become a giant superspreader. ‘If there is a big shock, it makes things worse because the risk is concentrated,’ said Barbara Casu, an economist at Cass Business School.[97] ‘It should act as a risk buffer, but in extreme cases it could act as a risk amplifier.’ To guard against this problem, hubs have access to emergency capital from the members who use them. This mutual approach has drawn criticism from financiers who prefer an every-firm-for-themselves style of banking.[98] But by removing the tangle of hidden loops from the network, the hubs should mean fewer opportunities for contagion, and less uncertainty about who is at risk.

  Despite progress in our understanding of financial contagion, there is still work to be done. ‘It’s like infectious disease modelling in the 1970s and 1980s,’ said Arinaminpathy. ‘There was a lot of great theory and the data had some catching up to do.’ One of the big obstacles is access to trading information. Banks are naturally protective of their business activities, making it difficult for researchers to form a picture of exactly how institutions are connected, particularly at the global level, and hence to assess potential contagion. Network scientists have found that, when examining the probability of a crisis, small errors in knowledge about the lending network could lead to big errors in estimates of system-wide risk.[99]

  Yet it’s not only a matter of trading data. As well as studying the structure of networks, we need to think more about Newton’s ‘madness of people’. We need to consider how beliefs and behaviours arise, and how they can spread. This means thinking about people as well as pathogens. From innovations to infections, contagion is often a social process.

  3

  The measure of friendship

  The terms of the wager were simple. If John Ellis lost at darts, he had to get the word ‘penguin’ into his next scientific paper. It was 1977, and Ellis and his colleagues were in a pub near the CERN particle physics laboratory, just outside Geneva. Ellis was playing against Melissa Franklin, a visiting student. She had to leave before the end of the game, but another researcher took her place and sealed the victory. ‘Nevertheless,’ Ellis later said,[1] ‘I felt obligated to carry out the conditions of the bet.’

  That raised the question of how to sneak a penguin into a physics paper. At the time, Ellis was working on a manuscript that described how a particular type of subatomic particle – the so-called ‘bottom quark’ – behaved. As was common in physics, he sketched out a diagram with arrows and loops showing how the particles would transition from one state to another. First introduced by Richard Feynman in 1948, these ‘Feynman diagrams’ had become a popular tool for physicists. The drawings provided Ellis with the inspiration he needed. ‘One evening, after working at CERN, I stopped on my way back to my apartment to visit some friends living in Meyrin where I smoked some illegal substance,’ he recalled. ‘Later, when I got back to my apartment and continued working on our paper, I had a sudden flash that the famous diagrams look like penguins.’

  Ellis’s idea would catch on. Since the paper was published, his ‘penguin diagrams’ have been cited thousands of times by other physicists. Even so, the penguins are nowhere near as widespread as the figures they are based on. Feynman diagrams would spread rapidly after their 1948 debut, transforming physics. One of the reasons the idea sparked was the Institute for Advanced Study in Princeton, New Jersey. Its director was J. Robert Oppenheimer, who’d previously led the US effort to develop the atomic bomb. Oppenheimer called the institute his ‘intellectual hotel’, bringing in a series of junior researchers on two-year positions.[2] Young minds arrived from around the world, with Oppenheimer wanting to encourage the global flow of ideas. ‘The best way to send information is to wrap it up in a person,’ as he put it.

  The spread of scientific concepts would inspire some of the first research into the transmission of ideas. During the early 1960s, US mathematician William Goffman suggested that the transfer of information between scientists worked much like an epidemic.[3] Just as diseases like malaria spread from person to person via mosquitoes, scientific research often passed from scientist to scientist via academic papers. From Darwin’s theory of evolution to Newton’s laws of motion and Freud’s psychoanalytic movement, new concepts had spread to ‘susceptible’ scientists who came into contact with them.

  Still, not everyone was susceptible to Feynman diagrams. One sceptic was Lev Landau at the Moscow Institute for Physical Problems. A highly respected physicist, Landau had clear ideas about how much he respected others; he was known to maintain a list rating his fellow researchers. Landau used an inverted scale from 0 to 5. A score of 0 indicated the greatest physicist – a position held only by Newton in the list – and 5 meant ‘mundane’. Landau rated himself a 2.5, upgrading this to a 2 after he won the 1962 Nobel Prize.[4]

  Although Landau rated Feynman as a 1, he wasn’t impressed by the diagrams, seeing them as a distraction from more important problems. Landau hosted a popular weekly seminar at the Moscow Institute. Twice, speakers tried to present Feynman diagrams; both times they were kicked off the podium before they could finish their talks. When a PhD student said he was planning to follow Feynman’s lead, Landau accused him of ‘fashion chasing’. Landau did eventually use the diagrams in a 1954 paper, but he outsourced the tricky analysis to two of his students. ‘This is the first work where I could not carry out the calculations myself’, he admitted to a colleague.[5]

 
