
Money


by Felix Martin


  WHY IT IS A PROBLEM: THE ANSWER TO THE QUEEN’S QUESTION

  Most people are not that interested in the fine details of what academic economists get up to—and had these abstruse theoretical developments in macroeconomics and finance remained cloistered in the ivory towers, they would be right to be indifferent. But that is not what happened at all. It is a rare faith that does not at some point become so convinced of its own rectitude that it sets out to convert the world at large. By the late 1990s, the disciples of both modern, orthodox macroeconomics and modern, academic finance were proudly marching out under their respective banners to fight the good fight and evangelise their gospels.

  The case of finance has become more notorious. In its early days, the older and more worldly of its proponents did wonder about the relevance of it all. In 1969, for example, the Nobel laureate James Tobin saw it as evidence of a worrying lack of realism that in the world depicted by academic finance “[t]here would be no room for monetary policy to affect aggregate demand” and that “[t]he real economy would call the tune for the financial sector, with no feedback in the other direction.”23 These features, he dared to suggest, showed that it needed careful handling before being used to guide policy in the real world. As the capital markets grew in size and scope, as innovation accelerated, and as the theory developed, however, newer proponents argued that Tobin’s qualms were irrelevant, since what they were doing was showing the marvellous way the world could be, even if it wasn’t that way yet. The pitch that such zealotry reached was demonstrated by the verdict delivered in 1995 by Fischer Black, one of the founding fathers of options theory, on the cornucopia of new financial instruments that his models had helped to create. “I don’t see that the private market, in creating this wonderful array of derivatives, is creating any systemic risk,” Black argued; “[h]owever, there is someone around creating systemic risk: the government.”24

  The manner in which anti-authority fantasies of this sort, and the automatic presumption in favour of practical financial deregulation which they supported, were rudely interrupted by reality during the crash of 2008 needs no rehearsal. Perhaps less well known are the practical consequences of the conversion of the policy-making world to the doctrines of the orthodox, New Keynesian macroeconomics on the other side of the schism. The most important of these concerned the correct objectives of monetary policy. The sole monetary ill that had been permitted into the New Keynesian theory was high or volatile inflation, which was deemed to retard the growth of GDP.25 The appropriate policy objective, therefore, was low and stable inflation, or “monetary stability.” Henceforth, governments should confine their role to establishing a reasonable inflation target, and then delegate the job of setting interest rates to an independent central bank staffed by able technicians.26 On such grounds, the Bank of England was granted its independence and given a mandate to target inflation in 1997, and the European Central Bank was founded as an independent, inflation-targeting central bank in 1998.

  There is little doubt that under most circumstances, low and stable inflation is a good thing for both the distribution of wealth and income, and the stimulation of economic prosperity. But in retrospect, it is clear that “monetary stability” alone was far too narrow a policy objective as it was pursued from the mid-1990s to the mid-2000s. Disconcerting signs of impending disaster in the pre-crisis economy—booming house prices, a drastic underpricing of liquidity in asset markets, the emergence of the shadow banking system, the declines in lending standards, bank capital, and liquidity ratios—were not given the priority they merited, because, unlike low and stable inflation, they were simply not identified as being relevant. As the Chairman of the U.K.’s Financial Services Authority admitted bluntly in 2012, central banks had “a flawed theory of economic stability … which believed that achieving low and stable current inflation was sufficient to ensure economic and financial stability, and which failed to identify that credit and asset price cycles are key drivers of instability.”27

  Indeed, the fruits of a decade’s devoted worship at the shrine of monetary stability were more damaging even than this. The single-minded pursuit of low and stable inflation not only drew attention away from the other monetary and financial factors that were to bring the global economy to its knees in 2008—it exacerbated them. The heretical Cassandra Hyman Minsky had warned of this baleful possibility many years before.28 The more successful a central bank is in mitigating one type of risk by achieving low and stable inflation, the more confident investors will become, and the more they will willingly assume other types of risk by investing in uncertain and illiquid securities. Squeezing the balloon in one place—eliminating high and volatile inflation—will simply reinflate it in another—causing catastrophic instability in asset markets. Monetary stability will actually breed financial instability.

  Not all policy-makers were unaware that the orthodox theory might be leading them into error—and why it might be doing so. In 2001, Mervyn King—an internationally renowned macroeconomist and future Governor of the Bank of England—was to be found lamenting the fact that while “[m]ost people think that economics is the study of money,” it was in fact nothing of the sort. “Most economists,” he explained, “hold conversations in which the word ‘money’ hardly appears at all.”29 “My own belief,” he warned, “is that the absence of money in the standard models which economists use will cause problems in the future … Money, I conjecture, will regain an important place in the conversation of economists.”30 The global financial crisis has shown his belief to have been prophetic, though precisely because his conjecture was not.

  What was it in the end that frustrated the dream of Bagehot and of Keynes for an economics that takes money seriously? The ultimate answer lies in the powerful influence of Locke’s monetary doctrines. By the time Bagehot launched his assault it was too late. Money had already gone through the Looking-Glass. The conventional understanding of money as a commodity medium of exchange was already in place—and neither evidence nor argument to the contrary was even intelligible any longer to anyone under its spell. As a result, the crisis of 1866 and Bagehot’s famous reaction to it was not, it turned out, the point at which two ways of thinking about money and the economy converged—but the one from which they parted ways.

  From the moneyless economics of the classical school there evolved modern, orthodox macroeconomics: the science of monetary society taught in universities and deployed by central banks. From the practitioners’ economics of Bagehot, meanwhile, there evolved the academic discipline of finance—the tools of the trade taught in business schools, used by bankers and bond traders. One was an intellectual framework for understanding the economy without money, banks, and finance. The other was a framework for understanding money, banks, and finance, without the rest of the economy. The result of this intellectual apartheid was that when in 2008 a crisis in the financial sector caused the biggest macroeconomic crash in history, and when the economy failed to recover afterwards because the banking sector was broken, neither modern macroeconomics nor modern finance could make head or tail of it. Fortunately, as Lawrence Summers pointed out, there were alternative traditions to fall back on. But the answer to the Queen’s question—Why did none of the economists see it coming?—is simple. Their main framework for understanding the macroeconomy didn’t include money. And by the same token, the question that many were keen to put to the bankers and their regulators—Why didn’t they realise that what they were doing was so risky?—also turned out to be simple. Their framework for understanding finance did not include the macroeconomy.

  It would all have been comical—or just irrelevant—had it not ended in such a cataclysmic economic disaster. At the end of his speech at Bretton Woods, Lawrence Summers noted how economics had lost track of finance over the previous two decades—and acknowledged that the crash showed how it, and thereby the world, had suffered as a result. But as Keynes, Bagehot, and indeed William Lowndes before them, would have been eager to explain, the divergence was much older than that. And at the root of it all was a deceptively simple change of perspective: the difference between two conceptions of money.

  14 How to Turn the Locusts into Bees

  CAN WE AVOID THE ISLAND OF DR. MOREAU?

  In November 2004, the Chairman of Germany’s governing Social Democratic Party, Franz Müntefering, made a famous speech attacking the culture of modern financial capitalism. He launched a vitriolic tirade against contemporary financiers, describing them as “irresponsible locust swarms, who measure success in quarterly intervals, suck off substance, and let companies die once they have eaten them away.”1 It was a metaphor that struck a chord with the public all over Europe—and one that stood in ironic contrast to the analogy of the enterprising and co-operative beehive which the Dutchman Bernard Mandeville had employed to convince sceptics of the benefits of monetary society in the early eighteenth century.2

  At the time, Müntefering’s invective seemed the nadir of finance’s public reputation in Europe. Nine years later, the stock of banks and bankers across the globe had sunk infinitely lower still. The immediate catalyst was the global financial crisis of 2007–8. It was in the banking sector, after all, that the macroeconomic disaster that left millions out of work and societies deeply fractured began; and to add insult to injury, the general public was forced to bail out the very institutions which caused the crisis. In Southern Europe, popular resentment found a target in “the dictatorship of the bankers.”3 Even in the centres of global capitalism, banking’s reputation took such a battering that by mid-2012 the house magazine of the global financial elite, The Economist, required only one word to summarise its assessment of contemporary finance professionals: “Banksters.”4

  The crisis and its aftermath reactivated the old suspicion—perfectly captured in Müntefering’s rhetoric—that banking is basically a parasitic rather than a productive activity. Banking has always been difficult for outsiders to understand, but the last decade and a half has seen an exponential increase in the rate of innovation and sophistication in finance. When many of these same innovations were implicated in the crash and it was taxpayers rather than bankers who were stuck with the bill, old doubts resurfaced. What was the point of the CDOs and CDSes, the ABCP and the SPVs, that the 1990s and 2000s gave us? It was not just brassed-off account-holders and exasperated taxpayers who expressed their doubts, but some of the leading lights of the financial industry itself. Adair Turner, Chairman of the U.K. Financial Services Authority, put it diplomatically in August 2009 when he said that at least some of the previous decade of financial innovation had been “socially useless.”5 Paul Volcker, the grand old man of global financial regulation, was more direct. The only financial innovation of the previous two decades that had added any genuine value to the broader economy, he said with withering contempt, was the ATM.6

  The result of this powerful and widespread reaction to the crisis is that today, for the first time in decades, there are serious campaigns in progress in virtually all of the world’s most developed economies to reform banking, finance, and the entire framework of monetary policy and financial regulation. There has been a slew of investigations, reports, panels, and legislation—all of which have come on top of other, ongoing, and international efforts.7 The politicians and regulators, it appears, have been eager to heed the well-known motto of the ex–White House chief of staff Rahm Emanuel: “Never let a serious crisis go to waste.”8

  Or have they? If the unauthorised biography of money we have unearthed tells us something about what went wrong with economic theory and policy before and after the crisis, does it also have something to contribute to the very live debates over the more structural questions of whether the monetary and financial system can be fixed so that a repeat of today’s economic and social catastrophe can be avoided? Is there anything we can learn from the neglected tradition of monetary scepticism that would help solve this pressing policy problem? And might it be rather more radical than the reforms currently working their way through the parliaments and regulators of the world’s financial capitals? The stated aim of all these processes is to make banking and finance serve the real economy and society again—to turn Franz Müntefering’s locusts into Bernard Mandeville’s bees. But as connoisseurs of the horror genre, from H. G. Wells’ 1896 novel The Island of Dr. Moreau to David Cronenberg’s 1986 film The Fly, know only too well, genetic engineering is a risky business. Get it wrong, and you can end up with a monster.

  FROM QUID PRO QUO TO SOMETHING FOR NOTHING

  On 14 September 2007, the U.K. Chancellor of the Exchequer announced that he had authorised the Bank of England to provide a “liquidity support facility”—effectively, a larger than normal overdraft—to Northern Rock, a medium-sized British bank that specialised in residential mortgages.9 Northern Rock had run into trouble because it funded a large part of its book of mortgage lending—by its nature, a collection of very long-term promises to pay—by selling short-dated bills and bonds to investors; that is, short-term promises to pay. When problems emerged in international financial markets in the course of 2007, this short-term funding disappeared. And when Northern Rock’s depositors saw the way the wind was blowing, they also began to pull out their money. A run on the bank in the so-called “wholesale” funding markets—the markets for its bills and bonds—had become a run on the bank in its “retail” funding market—its deposits from individuals and companies. All of a sudden, Northern Rock was in the throes of a classic liquidity crisis. The “run on the Rock,” as it soon became known, had begun.10

  This was hardly a novel problem in the world of banking. As we have seen, the purest essence of banking is the business of maintaining the synchronisation of payments in and out of the balance sheet.11 The generic challenge is that the assets which banks hold—the loans they have made—are typically to be repaid relatively far in the future, while their liabilities potentially come due much sooner—indeed, on demand, in the case of many kinds of deposits. There is, in other words, an intrinsic mismatch—the bank’s “maturity gap” as it is called—that cannot be eliminated from a banking system like the one that exists today. Most of the time, the maturity gap is not a problem. Indeed, its very existence is in one sense the whole purpose of the banking system. The bank’s depositors get the freedom of being able to withdraw or make payments with their deposits at a moment’s notice, while also earning interest that can only be generated by risky and illiquid loans. But it makes synchronising payments a particularly delicate art. If, for one reason or another, depositors and bondholders lose confidence in a bank’s ability to meet its commitments to them as they come due, and they therefore withdraw their deposits and refuse to roll over their lending en masse, the maturity gap presents an insuperable problem for the bank if it can only rely on its own resources.
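  The maturity gap can be made concrete with a toy balance sheet. All figures and the run scenario below are hypothetical, invented purely to illustrate the mechanism just described; this is a minimal sketch, not a model of Northern Rock or any actual bank.

```python
# Toy illustration of a bank's maturity gap: long-dated assets
# funded by liabilities that can be withdrawn much sooner.
# All figures are hypothetical.

assets = {"25-year mortgages": 100.0}        # repaid far in the future
liabilities = {
    "demand deposits": 60.0,                 # withdrawable at a moment's notice
    "3-month wholesale bills": 35.0,         # must be rolled over constantly
}
equity = sum(assets.values()) - sum(liabilities.values())

# Cash the bank could actually raise at short notice (hypothetical figure):
liquid_reserves = 8.0

# A run: wholesale lenders refuse to roll over their bills, and nervous
# depositors withdraw a quarter of their money.
demanded_now = liabilities["3-month wholesale bills"] + 0.25 * liabilities["demand deposits"]

print(f"equity capital:   {equity}")         # positive: solvent on paper
print(f"cash demanded now: {demanded_now}")
print(f"liquid reserves:   {liquid_reserves}")
print("liquidity crisis" if demanded_now > liquid_reserves else "payments synchronised")
```

  Note that the toy bank is solvent throughout: its assets exceed its liabilities. The crisis arises purely from timing, exactly the "delicate art" of synchronisation the passage describes.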

  Fortunately, however, modern banks have friends in high places. Under the terms of the Great Monetary Settlement, a bank’s liabilities, unlike the liabilities of normal companies, are an officially endorsed component of the national money supply. And since money is the central co-ordinating institution of the economy, any impairment of its transferability would impose grave costs on the whole of society—not just on the particular bank that issued it. Money must therefore be protected from suspicions concerning any one of the banks that operate it. Just as electricity is delivered through a network for which the failure of a single power station can be disastrous, the vast majority of modern money is provided and operated by a network of banks in which the failure of one can disrupt the system as a whole. In fact, even greater vigilance is required in the case of the banking system. Disruption of the electricity grid at least requires the malfunction of physical infrastructure. In the banking system, a mere loss of confidence in one of the parts can be fatal to the whole.

  Preventing liquidity crises in banks has therefore long been recognised as an important responsibility of the sovereign: as we saw earlier, it was Walter Bagehot who formalised the rules for how to cure a crisis when it occurs. If panic strikes and a bank’s depositors and bondholders withdraw their funding, he taught, the correct remedy is for the sovereign to step into their shoes. As bondholders and depositors demand payment, the bank should be permitted to borrow from the Bank of England in order to pay them out in sovereign money. More and more of its balance sheet will be funded, in effect, by the central bank, and less and less by private investors. And by the same token, private investors will hold fewer and fewer claims on the private bank and more and more claims on the Bank of England; or cash, as it is more commonly known. Bagehot’s solution became standard practice throughout the world. Even the U.S., a latecomer to the wonders of modern central banking, installed the system in 1913. This was the time-honoured palliative being deployed in September 2007 by the Bank of England—for the first time, it was said, since the collapse of Overend, Gurney a hundred and forty years earlier.12
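  Bagehot's remedy amounts to a simple substitution of one funder for another, which can be sketched with invented numbers. The successive waves of withdrawals below are hypothetical, chosen only to show how the central bank steps into the shoes of departing private creditors.

```python
# Bagehot's remedy, sketched: as private funding flees, the central bank
# steps into the departing creditors' shoes. All figures are hypothetical.

private_funding = 95.0      # deposits and bonds held by private investors
central_bank_loan = 0.0     # the "liquidity support facility"

withdrawals = [20.0, 30.0, 25.0]   # successive waves of the run
for w in withdrawals:
    private_funding -= w           # depositors and bondholders are paid out...
    central_bank_loan += w         # ...with sovereign money borrowed from the central bank

# The balance sheet remains fully funded; only the funder has changed.
print(f"private funding remaining: {private_funding}")
print(f"central bank funding:      {central_bank_loan}")
```

  At every step the bank's total funding is unchanged; what shifts is the composition, from private claims on the bank to claims on the central bank, i.e. cash.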

  As the months went on, it became clear that Northern Rock’s problem was not just one of liquidity, however. Many of the loans it had made were no good. This was no longer a problem of synchronising payments that were going to be made as agreed. It meant that no matter how good the synchronisation, the sums might not add up. The total value of Northern Rock’s liabilities, it seemed, was larger than the value of its assets—regardless of when one or the other was coming due. Under normal circumstances—if it has been doing its job properly—the value of a bank’s assets will be larger than its liabilities. The difference between the two is the bank’s equity capital. When it is positive, the bank is said to be solvent, and the more positive it is, the larger the decline in asset values that the bank can withstand without becoming insolvent. Northern Rock, it seemed, had been sailing too close to the wind. It had operated with a small amount of equity capital. When the housing market had deteriorated and the economy gone into recession, the value of the mortgages that made up much of its assets had started to fall. The value of the bank’s liabilities, on the other hand, had stayed the same—as liabilities awkwardly do. The bank’s equity capital had quickly been eroded. The market price of a share in that equity had collapsed in response. From a high of over £12 in the halcyon days of February 2007, it had already fallen to around £7 in late August, and then to £3 two days after the announcement of the Bank of England’s liquidity support operation. Now it dropped to below a pound a share. In the absence of external assistance, it was clear that the market believed Northern Rock to be not just illiquid, but insolvent.
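  The solvency arithmetic described here is simple to state: equity is assets minus liabilities, and it is the cushion that absorbs falls in asset values. The sketch below uses invented numbers, not Northern Rock's actual accounts, to show how a thin equity cushion is wiped out by a modest decline in the value of the loan book while liabilities stay fixed.

```python
# Solvency arithmetic: equity capital = assets - liabilities.
# A hypothetical balance sheet with a thin equity cushion.

assets = 100.0        # the mortgage book, at its current valuation
liabilities = 96.0    # deposits and bonds: fixed, as liabilities awkwardly are
equity = assets - liabilities
print(f"equity before: {equity}")             # positive: solvent

# A 6% fall in the value of the mortgage book...
assets_after = assets * 0.94
equity_after = assets_after - liabilities
print(f"equity after:  {equity_after:.1f}")   # negative: insolvent

# The bank can absorb losses only up to its equity cushion:
max_survivable_fall = equity / assets
print(f"maximum survivable fall in asset values: {max_survivable_fall:.0%}")
```

  The final line makes the passage's point about "sailing too close to the wind": the smaller the equity capital, the smaller the decline in asset values the bank can withstand without becoming insolvent.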

 
