The Cash Nexus: Money and Politics in Modern History, 1700-2000


by Niall Ferguson


  The key point is that the Bank continued to have multiple roles: a political duty to attend to the government’s financial needs, largely in abeyance in the Victorian era; a statutory duty to maintain the convertibility of banknotes into gold; and a commercial duty to pay dividends to its shareholders. With the 1870s came the recognition of a fourth role: as ‘lender of last resort’ to the banking system as a whole. That it should perform such a function was the conclusion Bagehot drew from its actions during the financial crises of 1825, 1839, 1847, 1857 and 1866, the last of which was precipitated by the failure of the huge discount house of Overend Gurney.97 The Bank had occasionally bailed out ailing banks in the past;98 but in ‘lifeboat operations’ such as that which rescued Barings in 1890, the Bank was able to use its special relationship with government to underwrite a salvaging operation by the principal merchant banks.99 The crisis of July–August 1914 extended the role of lender of last resort further: after the traditional emergency measures had been adopted (suspension of the 1844 Act, suspension of gold convertibility), a moratorium on bills of exchange led to the Bank’s taking over an unknown (but large) quantity of bad debts; this bailed out the bill-brokers whose foreign remittances had dried up as a result of the diplomatic crisis. The issue of new £1 and 10s. Treasury notes also acted as an injection of base money.100 Though the circumstances of 1914 were certainly exceptional, this represented a significant extension of the Bank’s public role: having once been able to focus its gaze on ‘the proportion’, it now had to be concerned about general financial, and by extension even macroeconomic, stability.101 It was only gradually in the course of the twentieth century that economists became conscious of the problem of ‘moral hazard’ that followed from the central bank’s new role as lender of last resort. If banks that were ‘too big to fail’ could more or less rely on being bailed out by the authorities, then they were likely to be even less risk-averse in their business. (The same problem arose with the system of deposit insurance introduced in the United States in the 1930s.)

  This was the British model, then: a synthesis of Peelite principle and Bagehotian pragmatism. But it should be stressed that the evolution of central bank functions varied considerably from country to country. Rules governing gold reserves were not all the same, and not all countries redeemed in coin and bullion.102 Moreover, other countries broadened the remit of their central banks beyond specie convertibility from the very outset. According to its 1875 statute, the German Reichsbank was supposed ‘to regulate the money supply in the entire Reich area, to facilitate the balancing of payments and to ensure the utilization of the available capital’.103 The American Federal Reserve system as it was established by the Act of December 1913 was supposed to relate its monetary policy to the volume of ‘notes, drafts and bills of exchange arising out of actual commercial transactions’ – an echo of the ‘real bills’ doctrine advanced by the British opponents of ‘bullionism’ in the 1810s.104

  In some respects, the First World War and its aftermath tended to diminish these differences, on paper at least. For all the combatants, the war took central bank–state relations back to the eighteenth century: the government deficit came first, while the suspension of gold convertibility was a means not only of avoiding a general liquidity crisis but also of centralizing the gold needed to finance ballooning trade deficits. More novel was the way central banks everywhere in Europe sought to manage their exchange rates in the absence of the gold peg. Exchange controls and requisitions of overseas assets in private portfolios were designed to limit depreciation against the dollar. After the war, on the other hand, the banks sought to reassert themselves by regaining or increasing their independence from government – in the words of the 1921 Brussels Conference, all ‘banks of issue should be freed from political pressure’105 – and proclaiming their faith in the ‘rules’ of the restored gold standard. The Genoa Conference held in 1922 issued a clarion call for central bank independence and gold convertibility – a model adopted in the wake of currency reforms in Austria (1922), Hungary (1923) and Germany (1924), as well as in Chile (1926), Canada (1935) and Argentina (1936).106

  Why then was there such a divergence in monetary experience after 1918, with some countries inflating and others deflating? The answer is that behind their outward similarities the bankers’ priorities were quite different. Rudolf Havenstein, President of the Reichsbank throughout the inflation years, regarded the maintenance of German industrial production and employment as his principal objectives; currency stability he disregarded, possibly because he subscribed to the view that the depreciation of the mark would persuade Britain and the United States to reduce the reparations burden imposed on Germany, perhaps because he sincerely believed Knapp’s legalistic ‘state theory of money’ (which, in true Prussian fashion, maintained that paper money would retain its value if the state said it did).107 His successor, Hjalmar Schacht, though outwardly a devotee of gold and central bank independence, also saw monetary policy as a potential instrument of revisionist diplomacy, ultimately aligning himself with Hitler.108 In Britain, by contrast, the restoration and defence of the pre-war exchange rate was seen as indispensable if confidence in London as a financial centre was to be restored; and this became Montagu Norman’s mission as Governor of the Bank of England. Meanwhile, France and the United States attached more importance to domestic conditions than the rules of the game: both countries systematically sterilized gold inflows to prevent their large balance of payments surpluses translating into higher domestic inflation.109 Partly because of this – but also because sterling was overvalued after the return to gold – the British attempt to turn back the clock of monetary history ended with the great international financial crisis of 1931, after which one country after another abandoned gold.

  The Federal Reserve Bank of New York developed an especially aberrant monetary theory after the death in 1928 of its President, Benjamin Strong. Focusing on nominal rates of interest and bank borrowing, convinced that there had been excessive monetary expansion in the 1920s, the Fed repeatedly did the wrong thing: failing to halt contraction after the Wall Street crash (October 1929); sterilizing gold inflows and even inducing a perverse monetary contraction; raising interest rates to stem gold outflows (September 1931 and again in February 1933); and discontinuing open market purchases of government securities in 1932 even when its reserve ratio was double the required minimum.110 If a single human agency can be blamed for the severity of the Great Depression, it was to be found here.

  FROM INDISCRETION TO INDEPENDENCE

  Revolution, depression and another world war led between them to the subordination of central banks almost everywhere to governments. Given the mess they had made of the 1920s and 1930s, it was a fate most of them deserved. The extreme case was in the Soviet Union, where credit was entirely centralized within the framework of the Five Year Plans. In Germany the Reichsbank under Schacht imposed an array of controls on the financial system, only to find itself in turn subjugated by Hitler, who responded to Schacht’s warnings about the inflationary effects of rearmament by sacking him. But the erosion of central bank power happened in democracies too: even before the Second World War the Danish, New Zealand and Canadian central banks had all been nationalized. The Federal Reserve system was effectively subordinated to the Treasury under the New Deal (though this did not prevent another avoidable recession in 1936–7, when the Fed needlessly raised reserve requirements).111 By the end of the Second World War even the Bank of England was so manifestly the money-printing wing of the Treasury that nationalization was barely resisted.112 Today it is still the case that most central banks are state-owned.113

  The logic of nationalization was that the private ownership of central banks was incompatible with their macroeconomic responsibility, which in practice meant maintaining low interest rates, while fiscal policy did the serious Keynesian work of achieving the ideal level of demand. In the words of the Radcliffe Committee report (1959), ‘Monetary policy … cannot be envisaged as a form of economic strategy that pursues its own objectives. It is a part of a country’s economic policy as a whole and must be planned as such.’114 In practice – and this was especially true in Britain – it was the struggle to maintain successive dollar pegs under the Bretton Woods system that really dominated monetary policy. The Bank of England no longer relied on changing the discount rate; it now had a wide range of credit controls at its disposal. Successive Chancellors tinkered with these in an almost impossible struggle to maintain full employment without weakening sterling.115 In the United States, by contrast, the Federal Reserve retained considerable freedom to engineer economic contractions to reduce inflation (or ‘lean against the wind’): it did so on six occasions between 1947 and 1979, with substantial and enduring real effects. On average, a shift to anti-inflationary policy led to a 12 per cent reduction in industrial production and a two-percentage-point increase in unemployment.116 This was what William McChesney Martin – Chairman of the Federal Reserve from 1951 until 1970 – meant by ‘tak[ing] away the punch bowl just when the party is getting going’.

  Two events exposed the inflationary dangers of central bank impotence: the Vietnam War, which, along with the ‘Great Society’ welfare programme, pushed American deficits up (though not by as much as is often asserted);117 and the oil crises triggered by the Yom Kippur War of 1973 and the Iranian Revolution of 1979. The collapse of the Bretton Woods system – because of European refusals to revalue against the dollar – removed the external check on monetary expansion. To proponents of the ‘political business cycle’ theory, there was nothing now to prevent politicians manipulating monetary policy so as to secure re-election – except the rapidly worsening trade-off between inflation and employment as popular expectations adjusted and the ‘non-accelerating inflation rate of unemployment’ (‘nairu’) rose (see Chapter 8).

  How far the high inflation of the 1970s was directly responsible for low growth remains a matter for debate. Some economists maintain that reducing inflation to zero would promote growth, since inflation creates a bias in favour of consumption over saving;118 others that pushing the unemployment rate below the ‘nairu’ has only mild inflationary effects.119 But even if it is true that inflation is only detrimental to growth at rates of more than 40 per cent – and may even be helpful at around 8 per cent120 – there were other obvious reasons for checking the acceleration in inflation, not least the questionable legitimacy of income and wealth redistribution by this means.121

  There were three intellectual responses to the ‘stagflationary’ crisis. The first was that central banks should now make price stability their paramount, if not sole, objective. The second was that they should do this by targeting the growth of the money supply. The third was that they should be made more independent from governmental pressure.

  Never have the rules of the game changed as rapidly as they did in the 1970s, as various central banks experimented with a plethora of monetary targets (such as M0 and M3 in Britain and non-borrowed reserves in the United States).122 In itself ‘monetarism’ was a compromised revolution almost from the outset, as the economic theorists disapproved of the bankers’ reliance on the old interest-rate tool (they wanted the monetary base to be directly controlled to achieve the target for the monetary aggregate). In any case, the deregulation of the financial system which accompanied the new policy (especially in Britain) had the perverse effect of changing the very monetary aggregates that were being targeted. Almost as soon as they had abandoned one system of fixed exchange rates, European politicians began to devise a new system for themselves; even the British and Americans acknowledged by the mid-1980s that exchange rates could not simply be left to their own very volatile devices. The real significance of monetarism was as part of the broader regime change symbolized politically by the elections of Margaret Thatcher and Ronald Reagan and the accession to power of Helmut Kohl in Germany. The monetary shocks inflicted in 1979–82 as nominal interest rates rose sharply broke the upward spiral of inflationary expectations.

  This success compensated for the theoretical failure, however: behind the scenes ‘rules’ were quietly dropped in favour of ‘discretion’ – by which was meant a reliance on a multiplicity of rules, not all of them explicit or consistent with one another. The nemesis of this incoherence was most painful in Britain, where monetary targeting was abandoned by Nigel Lawson in favour of ‘shadowing’ the deutschmark, and ultimately joining the Exchange Rate Mechanism at the very moment when German reunification was driving German interest rates upwards.123 In the aftermath of sterling’s ignominious exit from the ERM, the Bank followed the example of the Reserve Bank of New Zealand in targeting neither money nor the exchange rate but inflation itself. In the course of the 1990s this approach was adopted by more than fifty other central banks – though not the Federal Reserve, which still chooses to pursue its dual statutory goals of ‘maximum employment’ and ‘stable prices’ using open market operations and with reference to an eclectic mixture of variables.124

  The 1990s are sometimes seen as ‘the age of the central bankers’.125 Thanks to the proliferation of new nations, there were more central banks than ever: from just 18 in 1900 and 59 in 1950, their number had risen to 161 by 1990 and 172 by 1999. Over 90 per cent of all members of the United Nations now have their own central banks.126 Great power is frequently attributed to the élite handful of these institutions. Before Economic and Monetary Union, the Bundesbank was portrayed as ‘the Bank that rules Europe’.127 In the United States first Paul Volcker and then Alan Greenspan were so successful in enhancing the power and prestige of the chairmanship of the Federal Reserve Board that its holder came to be seen as more economically powerful than the President. The fact that inflation had been discernibly lower in countries with independent central banks128 persuaded many theorists, bankers and politicians that a separation of economic powers was the key to price stability (if not to higher growth).129 This was, as so often in the history of economic policy, an old idea in a new guise. In 1931 the Bank of England’s roving monetary expert Otto Niemeyer (Keynes’s arch-rival since their Cambridge days) had spelt out the principle in a report presented to the New Zealand House of Representatives:

  The bank must be entirely free from both the actual fact and the fear of political interference. If that cannot be secured, its existence will do more harm than good, for, while a Central Bank must serve the Community, it cannot carry out its difficult technical functions and hope to form a connecting-link with other Central Banks of the world if it is subject to political pressures or influences other than economic.130

  The rediscovery of this argument has led to greater autonomy for a rising proportion of the world’s central banks. Within less than a week of coming to power in 1997, the new Labour government unexpectedly granted the Bank of England ‘operational independence’, meaning freedom to set interest rates so as to achieve a publicly announced inflation target.131 So high is the esteem in which the Chairman of the Federal Reserve is held at the time of writing that he is absolved from explicit targets, instead dispensing occasional Delphic utterances.

  FROM INDEPENDENCE TO IRRELEVANCE?

  Nevertheless, the ultimate power of the executive and legislature over the central bank should never be lost sight of: even the most independent central bank in the world will ultimately have to yield to the wishes of the government in a national emergency. This does not necessarily have to be a war, as the Bundesbank discovered to its discomfort in 1990, when Chancellor Kohl overruled President Karl-Otto Pöhl on the terms of German monetary reunification. Arguably, central banks have only gained more independence because the political will to achieve lower inflation has grown; there is no evidence that they achieve lower inflation at a lower cost in terms of growth and employment than banks that are not independent.132

  More importantly, the dramatic expansion and evolution of financial markets since the 1980s have significantly reduced the leverage central banks can exert over private sector credit. As Benjamin Friedman has pointed out, the total volume of reserves that banks and other financial institutions maintain with the Federal Reserve System is less than $50 billion, a tiny fraction of total US GDP (0.5 per cent). By comparison, the outstanding volume of securities issued by the US Treasury is $3.7 trillion; add the issues of government sponsored or guaranteed institutions, and the total comes to $7.1 trillion; and if private-sector bonds are included the total US bond market amounts to $13.6 trillion. The equity market is even larger. True, the central bank is still the monopoly supplier (or withdrawer) of bank reserves; so relatively small changes in its policy may in theory influence the financial system as a whole. But innovations in the payments system – electronic money and ‘smart cards’ – may begin to reduce the need for traditional bank reserves and centralized national clearing systems.133
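  The scale of the mismatch Friedman describes can be restated as a rough back-of-the-envelope calculation using only the figures quoted above (his estimates for the late 1990s); the implied GDP figure is simply inferred from the statement that reserves of under $50 billion amount to about 0.5 per cent of GDP:

\[
\frac{\text{bank reserves}}{\text{US bond market}} \approx \frac{\$0.05\ \text{trillion}}{\$13.6\ \text{trillion}} \approx 0.4\%,
\qquad
\text{implied GDP} \approx \frac{\$0.05\ \text{trillion}}{0.005} \approx \$10\ \text{trillion}.
\]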

  Already the growth of non-bank credit – loans by institutions which are not banks on the basis of liabilities other than bank reserves – is tending to limit the importance of bank reserves. Pension funds, insurance companies and mutual funds do not hold reserves; yet their share of the US credit market has been increasing steadily. In 1950 the commercial banks accounted for more than half the total US credit market; by 1998 their share was down to less than a quarter. This reflects the improvements in data processing and information technology, which have significantly reduced informational ‘asymmetries’ – the very raison d’être of traditional commercial banks. At the same time, the growth of ‘securitization’, whereby traditional forms of bank loan are sold on to non-bank investors and packaged into aggregated portfolios, has further weakened the link between the central bank’s reserve system and the credit system as a whole. For all these reasons, Friedman has characterized the modern central bank of the (near) future as ‘an army with only a signal corps’.134 In any case, central banks that rely on changes in short-term interest rates to maintain price stability must base their decisions on forecasts of price inflation at least two years into the future.135 So the signals they send may turn out to be the wrong ones if the forecasts are wrong.

 
