The Rules of Contagion


by Adam Kucharski


  Growth of an independent happening over time. Example shows what would happen if everyone had a 5 per cent or 10 per cent chance of being affected per year

  The curve gradually flattens off, though, because the size of the unaffected group shrinks over time. Each year, a proportion of people who were previously unaffected get the condition, but because there are fewer and fewer of such people over time, the overall total doesn’t grow so much later on. If the chance of being affected each year is lower, the curve will grow more slowly initially, but still eventually plateau. In reality, the curve won’t necessarily level off at 100 per cent: the final number of people affected will depend on who is initially ‘susceptible’ to the happening.
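This flattening can be reproduced in a few lines of code. The sketch below is an illustration rather than anything from the book: with a fixed annual chance p of being affected, the unaffected fraction shrinks by a factor of (1 − p) each year, which produces exactly the curve described above. The 5 per cent and 10 per cent values are the ones used in the figure.

```python
# Independent happening: each unaffected person has the same chance p of
# being affected each year, so the affected fraction after t years is
# 1 - (1 - p)**t. The curve rises quickly at first, then flattens as the
# unaffected pool shrinks.

def affected_fraction(p, years):
    """Cumulative fraction affected after each year, for annual risk p."""
    return [1 - (1 - p) ** t for t in range(years + 1)]

for p in (0.05, 0.10):
    curve = affected_fraction(p, 50)
    print(f"p = {p:.0%}: after 10 years {curve[10]:.0%}, after 50 years {curve[50]:.0%}")
```

Note that the curve approaches, but never quite reaches, 100 per cent: there is always some sliver of the population that has so far escaped.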

  As an illustration, consider home ownership in the UK. Of people who were born in 1960, very few were homeowners by the age of twenty, but the majority had owned a house by the time they were thirty years old. In contrast, people who were born in 1980 or 1990 had a much lower chance of becoming a homeowner during each year of their twenties. If we plot the proportion of people who become homeowners over time, we can see how quickly ownership grows in different age groups.

  Percentage of people who were homeowners by a given age, based on year of birth

  Data: Council of Mortgage Lenders[50]

  Of course, home ownership isn’t completely random – factors such as inheritance influence people’s chance of buying – but the overall pattern lines up with Ross’s concept of an independent happening. On average, one twenty-year-old becoming a homeowner won’t have much effect on whether another gets on the housing ladder. As long as events occur independently of one another at a fairly consistent rate, this overall pattern won’t vary much. Whether we plot the number of people who are on the housing ladder by a certain age, or the chance your bus has arrived after a certain time waiting, we’ll get a similar picture.

  Independent happenings are a natural starting point, but things get more interesting when events are contagious. Ross called these types of events ‘dependent happenings’, because what happens to one person depends on how many others are currently affected. The simplest type of outbreak is one where affected people pass the condition on to others, and once affected, people remain so. In this situation, the happening will gradually permeate through the population. Ross noted that such epidemics would follow the shape of a ‘long-drawn-out letter S’. The number of people affected grows exponentially at first, with the number of new cases rising faster and faster over time. Eventually, this growth slows down and levels off.
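Ross’s ‘long-drawn-out letter S’ can be reproduced with a simple simulation of a dependent happening. The sketch below assumes the logistic form that his simple model reduces to, dx/dt = βx(1 − x), with illustrative values of β standing in for the ‘more contagious’ and ‘less contagious’ curves; none of the numbers come from the book.

```python
# Dependent happening: affected people pass the condition on and remain
# affected. With contact rate beta, the affected fraction x follows
# dx/dt = beta * x * (1 - x) -- exponential at first, then levelling off
# as susceptibles run out, tracing Ross's S-shaped curve.

def s_curve(beta, x0=0.01, steps=100, dt=0.5):
    """Euler simulation of logistic growth; returns affected fraction over time."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + dt * beta * x * (1 - x))
    return xs

fast = s_curve(beta=0.4)   # more contagious
slow = s_curve(beta=0.2)   # less contagious
```

Both curves end up in the same place; the contagiousness parameter only changes how quickly they get there.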

  Illustrative example of the S-shaped growth of a dependent happening, based on Ross’s model. The plot shows the growth of a more contagious and less contagious happening

  The assumption that people remain affected indefinitely doesn’t usually apply to infectious diseases, because people may recover, receive treatment, or die from the infection. But it can capture other kinds of spread. The S-shaped curve would later become popular in sociology, after Everett Rogers featured it in his 1962 book Diffusion of Innovations.[51] He noted that the initial adoption of new ideas and products generally followed this shape. In the mid twentieth century, the adoption of products like radios and refrigerators traced out an S-curve; later on, televisions, microwave ovens and mobile phones would do so as well.

  According to Rogers, four different types of people are responsible for the growth of a product: initial uptake comes from ‘innovators’, followed by ‘early adopters’, then the majority of the population, and finally ‘laggards’. His research into innovations mostly followed this descriptive approach, starting with the S-curve and trying to find possible explanations.

  Ross had worked in the opposite direction. He’d used his mechanistic reasoning to derive the curve from scratch, showing that the spread of such happenings would inevitably lead to this pattern. Ross’s model also gives us an explanation for why the adoption of new ideas gradually slows down. As more people adopt, it becomes harder and harder to meet someone who has not yet heard about the idea. Although the overall number of adopters continues to grow, there are fewer and fewer people adopting it at each point in time. The number of new adoptions therefore begins to decline.

  VCR ownership over time in the United States

  Data: Consumer Electronics Association

  In the 1960s, marketing researcher Frank Bass developed what was essentially an extended version of Ross’s model.[52] Unlike Rogers’s descriptive analysis, Bass used his model to look at the timescale of adoption as well as the overall shape. By thinking about the way people might adopt innovations, Bass was able to make predictions about the uptake of new technology. In Rogers’s curve, innovators are responsible for the first 2.5 per cent of adoptions, with everyone else in the remaining 97.5 per cent. These values are somewhat arbitrary: because Rogers relied on a descriptive method, he needed to know the full shape of the S-curve; it was only possible to categorise people once an idea had been fully adopted. In contrast, Bass could use the early shape of the adoption curve to estimate the relative roles of innovators and everyone else, whom he called ‘imitators’. In a 1966 working paper, he predicted that new colour television sales – then still rising – would peak in 1968. ‘Industry forecasts were much more optimistic than mine,’ Bass later noted,[53] ‘and it was perhaps to be expected that my forecast would not be well received.’ Bass’s prediction wasn’t popular, but it ended up being much closer to reality. New sales indeed slowed then peaked, just as the model suggested they would.
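Bass’s model can be written down compactly. In the standard formulation, the cumulative fraction of eventual adopters F grows as dF/dt = (p + qF)(1 − F), where p captures spontaneous adoption by innovators and q captures imitation of existing adopters. The coefficients below are illustrative, not Bass’s fitted colour-television estimates.

```python
# Bass diffusion model: adoption rate combines 'innovators' (coefficient p)
# and 'imitators' who respond to how many have already adopted (coefficient q).
# New adoptions per step rise, peak, then decline as susceptibles run out.

def bass_curve(p, q, steps=200, dt=0.25):
    """Simulate cumulative adoption F and per-step new adoptions."""
    F, cumulative, new = 0.0, [0.0], []
    for _ in range(steps):
        dF = (p + q * F) * (1 - F) * dt
        new.append(dF)
        F += dF
        cumulative.append(F)
    return cumulative, new

cum, new = bass_curve(p=0.03, q=0.38)
peak_step = new.index(max(new))   # the sales peak Bass could forecast in advance
```

Because p and q can be estimated from the early part of the curve, the position of the peak falls out of the model before the data ever turn downwards, which is what made Bass’s 1968 forecast possible.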

  As well as looking at how interest plateaus, we can also examine the early stages of adoption. When Everett Rogers published the S-curve in the early 1960s, he suggested that a new idea had ‘taken off’ once 20–25 per cent of people had adopted it. ‘After that point, it is probably impossible to stop the further diffusion of a new idea,’ he argued, ‘even if one wishes to do so’. Based on outbreak dynamics, we can come up with a more precise definition for this take-off point. Specifically, we can work out when the number of new adoptions is growing fastest. After this point, a lack of susceptible people will start to slow the spread, causing the outbreak to eventually plateau. In Ross’s simple model, the fastest growth occurs when just over 21 per cent of the potential audience have adopted the idea. Remarkably, this is the case regardless of how easily the innovation spreads.[54]
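The 21 per cent figure follows from the logistic model itself: new adoptions grow fastest where the second derivative of the adopted fraction peaks, which happens when a fraction (3 − √3)/6 ≈ 0.211 of the potential audience has adopted, whatever the transmission rate r. The sketch below is a numerical check of that claim, not a calculation from the book.

```python
# Take-off point for the logistic model dI/dt = r * I * (1 - I).
# New adoptions grow fastest where d2I/dt2 is largest; setting the third
# derivative to zero gives I = (3 - sqrt(3)) / 6, independent of r.

from math import sqrt

def takeoff_fraction():
    """Adopted fraction at which new adoptions grow fastest."""
    return (3 - sqrt(3)) / 6

def numeric_takeoff(r=0.5, n=100_000):
    """Brute-force check: find where d2I/dt2 peaks on a fine grid of I."""
    best_i, best_accel = 0.0, float("-inf")
    for k in range(1, n):
        i = k / n
        rate = r * i * (1 - i)            # dI/dt
        accel = r * rate * (1 - 2 * i)    # d2I/dt2 = r * (dI/dt) * (1 - 2I)
        if accel > best_accel:
            best_i, best_accel = i, accel
    return best_i

print(round(takeoff_fraction(), 4))  # 0.2113
```

Running the grid search with different values of r returns the same fraction each time, which is the ‘remarkable’ part: the take-off point doesn’t depend on how easily the innovation spreads.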

  Ross’s mechanistic approach is useful because it shows us what different types of happenings might look like in real life. Think about how the VCR adoption curve compares with the home ownership one: both eventually plateau, but the VCR curve grows exponentially at first. Simple models of contagion will usually predict this kind of growth, because each new adoption creates even more adoptions, whereas models of independent happenings will not. It doesn’t mean that exponential growth is always a sign that something is contagious – there might be other reasons why people increasingly adopt a technology – but it does show how different infection processes can affect the shape of an outbreak.

  If we think about the dynamics of an outbreak, we can also identify shapes that would be very unlikely in reality. Imagine a disease epidemic that increases exponentially until all of the population is affected. What would be required to generate this shape?

  In large epidemics, transmission generally slows down because there aren’t many susceptible people left to infect. For the epidemic to keep increasing faster and faster, infectious people would have to actively start seeking out the remaining susceptibles in the later stages of the epidemic. It’s the equivalent of you catching a cold, finding all your friends who hadn’t got it yet and deliberately coughing on them until they got infected. The most familiar scenario that would create this outbreak shape is therefore a fictional one: a group of zombies hunting down the last few surviving humans.

  Illustration of an outbreak curve that grows exponentially until everyone is affected

  Back in real life, there are a few infections that affect their hosts in a way that increases transmission. Animals infected with rabies are often more aggressive, which helps the virus to spread through bites,[55] and people who have malaria can give off an odour that makes them more attractive to mosquitoes.[56] But such effects generally aren’t large enough to overcome declining numbers of susceptibles in the later stages of an epidemic. What’s more, many infections have the opposite effect on behaviour, causing lethargy or inactivity, which reduces the potential for transmission.[57] From innovations to infections, epidemics almost inevitably slow down as susceptibles become harder to find.

  Ronald Ross had planned to study a whole range of outbreaks, but as his models became more complicated, the mathematics became trickier. He could outline the transmission processes, but he couldn’t analyse the resulting dynamics. That’s when he turned to Hilda Hudson, a lecturer at London’s West Ham Technical Institute.[58] The daughter of a mathematician, Hudson had published her first piece of research in the journal Nature when she was ten years old.[59] She later studied at the University of Cambridge, where she was the only woman in her year to get first class marks in mathematics. Although she matched the results of the male student who ranked seventh, her performance wasn’t included in the official listing (it wasn’t until 1948 that women were allowed to receive Cambridge degrees[60]).

  Hudson’s expertise made it possible to expand the Theory of Happenings, visualising the patterns the different models could produce. Some happenings simmered away over time, gradually affecting everyone. Others rose sharply then fell. Some caused large outbreaks then settled down to a lower endemic level. There were outbreaks that came in steady waves, rising and falling with the seasons, and outbreaks that recurred sporadically. Ross and Hudson argued that the methods would cover most real-life situations. ‘The rise and fall of epidemics as far as we can see at present can be explained by the general laws of happenings,’ they suggested.[61]

  Unfortunately, Hudson and Ross’s work on the Theory of Happenings would be limited to three papers. One barrier was the First World War. In 1916, Hudson was called away to help design aircraft as part of the British war effort, work for which she would later get an OBE.[62] After the war, they faced another hurdle, with the papers ignored by their target audience. ‘So little interest was taken in them by the “health authorities,” that I have thought it useless to continue,’ Ross later wrote.

  When Ross first started working on the Theory of Happenings, he’d hoped it could eventually tackle ‘questions connected with statistics, demography, public health, the theory of evolution, and even commerce, politics and statesmanship’.[63] It was a grand vision, and one that would eventually transform how we think about contagion. Yet even in the field of infectious disease research, several decades would pass before the methods became popular. And it would take even longer for the ideas to make their way into other areas of life.

  2

  Panics and pandemics

  ‘I can calculate the motion of heavenly bodies but not the madness of people.’ According to legend, Isaac Newton said this after losing a fortune investing in the South Sea Company. He’d bought shares in late 1719 and initially seen his investment rise, which persuaded him to cash in. However, the share price continued to climb and Newton – regretting his hasty sale – reinvested. When the bubble burst a few months later, he lost £20,000, equivalent to around £20 million in today’s money.[1]

  Great academic minds have a mixed record when it comes to financial markets. Some, like mathematicians Edward Thorp and James Simons, have set up successful investment funds, bringing in huge profits. Others have succeeded in sending money the opposite way. Take the hedge fund Long Term Capital Management (LTCM), which suffered massive losses following the Asian and Russian Financial Crises in 1997 and 1998. With two Nobel Prize-winning economists on its board and healthy initial profits, the firm had been the envy of Wall Street. Investment banks had lent them increasingly large sums of money to pursue increasingly ambitious trading strategies, to the point that when the fund went under in 1998, they had liabilities of over $100 billion.[2]

  During the mid-1990s, a new phrase had become popular among bankers. ‘Financial contagion’ described the spread of economic problems from one country to another. The Asian Financial Crisis was a prime example.[3] It wasn’t the crisis itself that hit funds like LTCM; it was the indirect shockwaves that propagated through other markets. And because they’d lent so much to LTCM, banks also found themselves at risk. When some of Wall Street’s most powerful bankers gathered on the tenth floor of the Federal Reserve Bank of New York on 23 September 1998, it was this fear of contagion that brought them there. To avoid LTCM’s woes spreading to other institutions, they agreed a $3.6bn bailout. It was an expensive lesson, but unfortunately not one that was learned. Almost exactly ten years later, the same banks would be having the same conversations about financial contagion. This time it would be much worse.

  I spent the summer of 2008 thinking about how to buy and sell the statistical concept of correlation. I’d just finished my penultimate year of university, and was interning with an investment bank in London’s Canary Wharf. The basic idea was simple enough. Correlation measures how much things move in line with each other: if a stock market is highly correlated, stocks will tend to rise or fall together; if it’s uncorrelated, some stocks might go up while others go down. If you think stocks are going to behave similarly in future, you’d ideally want a trading strategy that profited from this correlation. My job was to help develop such a strategy.
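The measure in question, Pearson correlation, can be computed in a few lines: it is the covariance of two series scaled by their individual variability, giving a value between −1 and +1. The return series below are invented for illustration, not market data.

```python
# Pearson correlation of two return series: +1 means they move in lockstep,
# 0 means no linear relationship, -1 means they move in opposition.

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

stock_a = [0.01, -0.02, 0.015, 0.005, -0.01]
stock_b = [0.012, -0.018, 0.02, 0.004, -0.008]   # tends to move with stock_a
print(round(correlation(stock_a, stock_b), 2))
```

A highly correlated market offers little protection from diversification, since holding many stocks that all fall together is not much safer than holding one.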

  Correlation isn’t just some niche topic to keep a mathematically minded intern occupied. It turns out to be crucial for understanding why 2008 would end with a full-blown financial crisis. It can also help explain how contagion spreads more generally, from social behaviour to sexually transmitted infections. As we’ll see, it’s a link that would eventually pull outbreak analysis into the heart of modern finance.

  Each morning that summer, I took the Docklands Light Railway to work. Just before it reached my stop at Canary Wharf, the train would pass the skyscraper at 25 Bank Street. The building was home to Lehman Brothers. When I’d applied for internships in late 2007, Lehman had been one of the coveted destinations for many applicants. It was part of the elite ‘bulge bracket’ group of banks, which also included firms like Goldman Sachs, JP Morgan, and Merrill Lynch. Bear Stearns had been part of the club too, until its collapse in March 2008.

  Bear, as the bankers called it, had gone under because of failed investments in the mortgage market. Soon after, JP Morgan bought the carcass for less than a tenth of its earlier value. By the summer, everyone in the industry was speculating on which firm would go under next. Lehman seemed to be top of the list.

  For mathematics students, an internship in finance was the brightly lit path that distracted from all others. Everyone I knew on my degree course, regardless of their eventual career, signed up for one. I was about a month or so into my internship when I changed my mind, and decided to pursue a PhD instead of a job offer. A major factor was the course in epidemiology I’d taken earlier that year. I’d become fascinated by the idea that disease outbreaks didn’t have to be mysterious, unpredictable occurrences. With the right methods, we could pick them apart, uncover what was really going on, and hopefully do something about it.

  But first, there was the question of what was going on around me in Canary Wharf. Despite having settled on another career path, I still wanted to understand what was happening to the banking industry. Why had rows of trading desks recently been emptied of their employees? Why were celebrated financial ideas suddenly crumbling? And how bad could it get?

  I was based in equities, analysing company share prices, but in the preceding years the real money had been in credit-based investments. One investment stood out in particular: banks had increasingly bunched together mortgages and other loans into ‘collateralized debt obligations’ (CDOs). These products let investors take on some of the mortgage lender’s risk and earn money in return.[4] Such approaches could be extremely lucrative. Sajid Javid, who in 2019 was appointed the UK’s Chancellor of the Exchequer, reportedly earned around £3m a year trading various credit products before he left banking in 2009.[5]

  CDOs were based on an idea borrowed from the life insurance industry. Insurers had noticed that people were more likely to die following the death of a spouse, a social effect known as ‘broken heart syndrome’. In the mid-1990s, they developed a way to account for this effect when calculating insurance costs. It didn’t take long for bankers to borrow the idea and find a new use for it. Rather than looking at deaths, banks were interested in what happened when someone defaulted on a mortgage. Would other households follow? Such borrowing of mathematical models is common in finance, as well as in other fields. ‘Human beings have limited foresight and great imagination,’ financial mathematician Emanuel Derman once noted, ‘so that, inevitably, a model will be used in ways its creator never intended.’[6]

  Unfortunately, the mortgage models had some major flaws. Perhaps the biggest problem was that they were based on historical house prices, which had risen for the best part of two decades. This period of history suggested that the mortgage market wasn’t particularly correlated: if someone in Florida missed a payment, for example, it didn’t mean someone in California would too. Although some had speculated that housing was a bubble set to burst, many remained optimistic. In July 2005, CNBC interviewed Ben Bernanke, who chaired President Bush’s Council of Economic Advisers and would shortly become Chairman of the US Federal Reserve. What did Bernanke think the worst-case scenario was? What would happen if house prices dropped across the country? ‘It’s a pretty unlikely possibility,’ Bernanke said.[7] ‘We’ve never had a decline in house prices on a nationwide basis.’

 
