The Great Degeneration: How Institutions Decay and Economies Die
In any case, the differences in economic welfare within countries are in some ways just as big as the differences between them. In 2007 the average income of Americans in the top 1 per cent of the income distribution was thirty times that of Americans in the remaining 99 per cent. This is another differential that has changed rapidly in recent years – but, unlike inter-country inequality, intra-country inequality has been increasing rather than diminishing. In 1978 the top percentile was just ten times richer than everyone else. By most measures, American society is as unequal today as it was in the late 1920s.4 Another way of putting this is that a massive proportion of the benefits of the last thirty-five years of economic growth has gone to the super-elite. That was not true in the period between the Great Depression of the 1930s and the Great Inflation of the 1970s. Between 1933 and 1973 the average real income of the 99 per cent rose (before tax) by a factor of four and a half. Yet from 1973 until 2010 it actually fell.5
So what exactly is going on? As we have seen, narrowly economic explanations that focus on the impact of financial forces (‘deleveraging’), international integration (‘globalization’), the role of information technology (‘offshoring’ and ‘outsourcing’) or fiscal policy (‘stimulus’ versus ‘austerity’) do not offer sufficient explanations. We need to delve into the history of institutions to understand the complex dynamics of convergence and divergence that characterize today’s world. The democratic deficits of Chapter 1, the regulatory fragility of Chapter 2, the rule of lawyers of Chapter 3 and the uncivil society of Chapter 4: these offer better explanations of why the West is now delivering lower growth and greater inequality than in the past – in other words, why it is now the West that finds itself in Adam Smith’s stationary state.
The Urban Future
In these concluding pages, I want to ask what my diagnosis of a great institutional degeneration in the Western world implies about the future. To answer that question it is helpful to borrow former US Defense Secretary Donald Rumsfeld’s famous typology of ‘known knowns’, ‘known unknowns’ and ‘unknown unknowns’ – but to add a fourth category: ‘unknown knowns’. These are the future scenarios that are quite well known to students of history, but which are ignored by everybody else.
Let us begin with the known knowns. Aside from the laws of physics and chemistry, the following things are unlikely to change significantly in the foreseeable future: the normal (or bell-curve) distribution of intelligence in any population, the cognitive biases of the human mind, and our evolved biological behaviours. We can also assume that the global population will continue to rise towards nine billion, though with nearly all of the increase concentrated in Africa and South Asia, and that in the rest of the world the age structure will tilt further in the direction of the elderly. On the other hand, at least some key commodities – base metals and rare earths in particular – will remain in finite supply. However, the pace of global technological diffusion seems likely to remain high and this will encourage the continued migration of people from the country to the cities. The developing world’s new ‘megacities’ – conurbations with populations of more than ten million – will thus play a defining role in the twenty-first century. There are already twenty of these: six (led by Shanghai) in China, three (led by Mumbai) in India, along with Mexico City, São Paulo, Dhaka, Karachi, Buenos Aires, Manila, Rio de Janeiro, Moscow, Cairo, Istanbul and Lagos. They, along with 420 other non-Western cities, could generate close to half of all the growth between 2012 and 2025, according to the McKinsey Global Institute.6
In many ways, this is an exciting prospect. The physicist Geoffrey West has shown that there are both economies of scale (in infrastructure) and increasing returns to scale (in human creativity) from the process of urbanization. In his words: ‘Cities are . . . the cause of the good life. They are the centres of wealth creation, creativity, innovation, and invention. They’re the exciting places. They are these magnets that suck people in.’ West and his colleagues at the Santa Fe Institute have identified two remarkable statistical regularities. First, ‘every infrastructural quantity . . . from total length of roadways to the length of electrical lines to the length of gas lines . . . scaled in the same way as the number of gas stations.’ That is to say, the bigger the city, the fewer gas stations were needed per capita, an economy of scale with a fairly consistent exponent of around 0.85 (meaning that, when a city’s population increases by 100 per cent, its total number of gas stations needs to increase by only about 85 per cent). Secondly, and more surprisingly:
Socioeconomic . . . things like wages, the number of educational institutions, the number of patents produced, et cetera . . . scaled in what we called a super-linear fashion. Instead of being an exponent less than one, indicating economies of scale, the exponent was bigger than one, indicating . . . increasing returns to scale . . . That says that systematically, the bigger the city, the more wages you can expect, the more educational institutions in principle, [the] more cultural events, [the] more patents are produced, it’s more innovative and so on. Remarkably, all to the same degree. There was a universal exponent which turned out to be approximately 1.15, which . . . says something like the following: If you double the size of a city from 50,000 to a hundred thousand, a million to two million, five million to ten million . . . systematically, you get a roughly 15 per cent increase in productivity, patents, the number of research institutions, wages [per capita] . . . and you get systematically a 15 per cent saving in length of roads and general infrastructure.7
People even walk disproportionately faster in big cities than in small ones. There is a disproportionately wider range of possible jobs to do. All this is best explained in terms of network effects. True, there are equally large negative externalities: bigger cities have disproportionately bigger problems with crime, disease and pollution. But provided we can innovate fast enough, West argues, our megacities can avoid – or at least postpone – the moment of collapse.*
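To make the arithmetic behind these exponents concrete, here is a minimal sketch in Python of the simple power law, Y = Y0 (N/N0)^beta, that West’s regularities imply. The city sizes and baseline quantities are hypothetical, chosen only to show how a sub-linear exponent (0.85) and a super-linear one (1.15) behave when a population doubles.

    # Illustrative only: the exponents follow West's reported values;
    # the baseline city and quantities are hypothetical.
    SUBLINEAR = 0.85    # infrastructure (roads, electrical lines, gas stations)
    SUPERLINEAR = 1.15  # socioeconomic outputs (wages, patents, institutions)

    def scaled(quantity_at_base: float, base_pop: int, new_pop: int, exponent: float) -> float:
        """Power-law scaling: Y = Y0 * (N / N0) ** beta."""
        return quantity_at_base * (new_pop / base_pop) ** exponent

    base_pop, new_pop = 1_000_000, 2_000_000           # a doubling of population
    infra = scaled(100.0, base_pop, new_pop, SUBLINEAR)       # ~180, i.e. ~80% more in total
    output = scaled(100.0, base_pop, new_pop, SUPERLINEAR)    # ~222, i.e. ~11% more per capita

    print(f"infrastructure: {infra:.0f} (per capita falls to {infra / 2:.0f}% of the old level)")
    print(f"socioeconomic output: {output:.0f} (per capita rises to {output / 2:.0f}% of the old level)")

On this reading, doubling a city’s population yields roughly a 10 per cent per-capita saving on infrastructure and a per-capita gain of about 11 per cent in socioeconomic outputs, the same order of magnitude as the ‘roughly 15 per cent’ in the quotation above.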
West’s analysis explains why the process of urbanization – which is in many ways at the heart of the history of civilization – is more than exponential. Although his data are drawn from all over the world, we know that there is a major difference in the benefits of urbanization between New York or London, on the one hand, and Mumbai or Lagos, on the other. In late July 2012, a massive failure of the power grid in northern India – which deprived 640 million people of electricity – provided a reminder that megacities are very fragile networks. We know, too, that at times in New York’s history – notably the late 1980s, when violent crime peaked – the negative externalities of urban networks came close to outweighing the positives.
The argument of this book implies that the net benefits of urbanization are conditioned by the institutional framework within which cities operate. Where there is effective representative government, where there is a dynamic market economy, where the rule of law is upheld and where civil society is independent from the state, the benefits of a dense population overwhelm the costs. Where these conditions do not pertain, the opposite applies. Put differently, in a secure institutional framework, urban networks are what Nassim Taleb calls ‘anti-fragile’: they evolve in ways that are not only resilient in the face of perturbations, but actually gain strength from them (like London during the Blitz). But where that framework is lacking, urban networks are fragile: they can collapse in the face of a relatively small shock (like Rome when attacked by the Visigoths in AD 410).
Shooters and Diggers
In the Spaghetti Western The Good, the Bad and the Ugly, there is a memorable scene that sums up the world economy today. Blondie (Clint Eastwood) and Tuco (Eli Wallach) have finally found the cemetery where they know the gold they seek is buried – a vast Civil War graveyard. Eastwood looks at his gun, looks at Wallach and utters the immortal line: ‘In this world, there are two kinds of people, my friend. Those with loaded guns . . . and those who dig.’
In the post-crisis economic order, there are likewise two kinds of economies. Those with vast accumulations of assets, including sovereign wealth funds (currently in excess of $4 trillion) and hard-currency reserves ($5.5 trillion for emerging markets alone), are the ones with loaded guns. The economies with huge public debts (which now total nearly $50 trillion worldwide), by contrast, are the ones that have to dig. In such a world, it pays to have underground resources. But these are not distributed at all fairly. By my calculations, the estimated market value of the world’s proven subsoil mineral reserves is around $359 trillion, of which over 60 per cent is owned by just ten countries: Russia, the United States, Australia, Saudi Arabia, China, Guinea (which is rich in bauxite), Iran, Venezuela, South Africa and Kazakhstan.8
Now we enter the realm of the known unknowns. We do not know by how much resource discoveries (especially in unsurveyed Africa) and technological advances (such as hydraulic fracturing) will increase the supply of natural resources in the years to come. Nor do we know what impact financial crises will have on commodity prices and therefore the incentive to exploit new sources of fuel and material. Finally, we do not know with any certainty how politics will affect a sector that is more vulnerable to expropriation and arbitrary taxation than any other because of the immobile nature of its assets. We know that the unrestricted burning of fossil fuels is likely to lead to changes in the earth’s climate, but we do not know exactly what these will be or when they will be disruptive enough to generate a meaningful policy response. Until then, the West will indulge itself with fantasies about ‘green’ energy, and the Rest will continue to burn coal as fast as it can be dug up, instead of doing the things that would really reduce carbon dioxide emissions: building nuclear and clean-coal power-plants, converting vehicles to natural gas and increasing the energy efficiency of the average home.9 All these known unknowns explain the extraordinary whipsaw movements in commodity prices that we have witnessed since 2002.
Also in the category of known unknowns belong two distinct kinds of natural disaster: earthquakes and the tsunamis they cause, which are randomly generated by the movements of the earth’s tectonic plates (so we know their location but not their timing or magnitude), and pandemics, which arise from the similarly random mutation of viruses like influenza. The most that can be said about these two threats to humanity is that they will kill many more people in the future than in the past because of the increasing concentration of our species in cities in the Asia-Pacific region, which, perversely, are often located close to fault lines because of the human fondness for coastal locations. Add to this the problem of nuclear proliferation, and it does not seem unreasonable to regard the world as a more dangerous place than it was during the Cold War, when the principal threat to mankind was the calculable risk of a worst-case outcome to a simple two-player game. Today we face more uncertainty than calculable risk. Such is the result of exchanging a bipolar world for a networked one.
By their very nature, the unknown unknowns are impossible to anticipate. But what of the unknown knowns – the insights that history has to offer, which most people choose to ignore? Asked in late 2011 to name ‘the key risks that could derail growth in fast-growth markets over the next three years’, nearly a thousand global business executives identified asset-price bubbles, political corruption, inequality of income and failure to tackle inflation as the four biggest.10 By 2014, these fears may seem misplaced. From an historian’s point of view, the real risks in the non-Western world today are of revolution and war. These are precisely the events we should expect under the circumstances described above. Revolutions are caused by a combination of food-price spikes, a youthful population, a rising middle class, a disruptive ideology, a corrupt old regime and a weakening international order. All these conditions are present in the Middle East today – and of course the Islamist revolution is already well under way, albeit under the misleading Western label of the ‘Arab Spring’. The thing to worry about is the war that nearly always follows a revolution of such magnitude. For despite Steven Pinker’s optimistic claim that the long-run trend of human history is away from violence, the statistical incidence of war exhibits no such pattern.11 Like earthquakes, we know where wars are likely to occur, but we cannot know when they will break out or how big they will be.
Against ‘Technoptimism’
Revolution and war are not new threats. In the eighteenth century the disruptive ideology that grew out of the Enlightenment became the basis for two major challenges to the Anglophone empire that then bestrode the globe. In fighting revolution on both sides of the Atlantic, the British state accumulated a very large public debt, mainly as a result of its wars against revolutionary France. By the end of the Napoleonic era, the national debt exceeded 250 per cent of GDP. Yet the subsequent deleveraging – which reduced the debt burden by an order of magnitude to just 25 per cent of GDP – was perhaps the most successful in recorded history. Inflation played no role whatsoever. The British government consistently ran peacetime primary surpluses, thanks to a combination of fiscal discipline and a growth rate higher than the rate of interest. This ‘beautiful deleveraging’* was not without its ugly episodes, notably in the mid-1820s and late 1840s, when austerity policies caused social unrest (and failed to alleviate a disastrous famine in Ireland). Nevertheless, the deleveraging process coincided with the key phase of the first Industrial Revolution – the railway mania – and the expansion of the British Empire to very nearly its maximum extent. The lesson of history is that a country that achieves technological innovation and profitable geopolitical expansion can grow its way out from under a mountain of debt.12
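The mechanics of that deleveraging can be expressed in the standard debt-dynamics identity: next year’s debt-to-GDP ratio equals this year’s ratio multiplied by (1 + r)/(1 + g), where r is the interest rate and g the growth rate of nominal GDP, minus the primary surplus. The short Python sketch below is purely illustrative; the parameter values are hypothetical round numbers, not estimates of actual nineteenth-century British rates, but they show how a ratio of 250 per cent of GDP can fall towards 25 per cent over roughly a century when growth modestly exceeds the interest rate and primary surpluses are sustained.

    # Debt dynamics: d[t+1] = d[t] * (1 + r) / (1 + g) - primary_surplus
    # All parameter values are hypothetical, chosen only to illustrate the mechanism.
    def simulate_debt_ratio(d0: float, r: float, g: float, primary_surplus: float, years: int) -> float:
        d = d0
        for _ in range(years):
            d = d * (1 + r) / (1 + g) - primary_surplus
        return d

    # Start at 250% of GDP; growth half a point above the interest rate;
    # a persistent primary surplus of 1.6% of GDP.
    final = simulate_debt_ratio(d0=2.50, r=0.030, g=0.035, primary_surplus=0.016, years=100)
    print(f"debt after a century: {final:.0%} of GDP")  # roughly 25-30% on these assumptions

The two levers in the formula are precisely the ones just described: growth above the interest rate shrinks the ratio automatically, and primary surpluses pay it down directly.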
Can the United States emulate this feat? I doubt it. First, the evidence suggests that it is very hard to achieve higher growth under a heavy debt burden. In their study of twenty-six episodes of ‘debt overhang’ – cases when public debt in advanced countries exceeded 90 per cent of GDP for at least five years – Carmen and Vincent Reinhart and Ken Rogoff show that debt overhangs were associated with growth rates 1.2 percentage points lower over protracted periods (lasting an average of twenty-three years), lowering the level of output by nearly a quarter relative to the pre-overhang trend.13 Significantly, the negative impact on growth was not necessarily the result of higher real interest rates. A crucial point is the non-linear character of the relationship between debt and growth. Because the debt burden lowers growth only when it rises above the 90 per cent of GDP threshold, the habit of running deficits gets well established before it becomes deleterious. This evidence poses a serious problem for those Keynesian economists who believe that the correct response to a reduction in aggregate demand via private sector deleveraging is for the already indebted public sector to borrow even more. It also casts doubt on the validity of the claim that low interest rates on US Treasuries are a market signal that the government can and should issue more debt.14
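The ‘nearly a quarter’ figure is a matter of simple compounding, as the short check below shows; the 3 per cent baseline growth rate is an assumption for illustration only, not a number taken from the Reinhart–Rogoff study.

    # How a 1.2-point growth shortfall sustained for 23 years cuts the level of output.
    # The 3% baseline growth rate is an illustrative assumption.
    baseline_growth, shortfall, years = 0.03, 0.012, 23
    relative_output = ((1 + baseline_growth - shortfall) / (1 + baseline_growth)) ** years
    print(f"output relative to the no-overhang path: {relative_output:.0%}")  # about 76%, i.e. nearly a quarter lower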
Equally remote is the prospect that a technological breakthrough comparable with the railways could provide the United States with a ‘get out of jail’ card. The harsh reality is that, from the vantage point of 2012, the next twenty-five years (2013–38) are highly unlikely to see more dramatic changes than science and technology produced in the last twenty-five (1987–2012). For a start, the end of the Cold War and the Asian economic miracle provided one-off, non-repeatable stimuli to the process of innovation in the form of a massive reduction in labour costs and therefore the price of hardware, not to mention all those ex-Soviet PhDs who could finally do something useful. The IT revolution that began in the 1980s was important in terms of its productivity impact inside the US – though even this should not be exaggerated – but we are surely now in the realm of diminishing returns (the symptoms of which are deflation plus underemployment due partly to the automation of unskilled work). Likewise, the breakthroughs in medical science we can expect as a result of the successful mapping of the human genome will probably result in further extensions of the average lifespan, but if we make no commensurate advances in neuroscience – if we succeed in protracting the life of the body but not of the mind – the net economic consequences will be negative, because we will simply increase the number of dependent elderly.
My pessimism about the likelihood of a technological deus ex machina is supported by a simple historical observation. The achievements of the last twenty-five years were not especially impressive compared with what we did in the preceding twenty-five years, 1961–86 (for example, landing men on the moon). And the technological milestones of the twenty-five years before that, 1935–60, were even more remarkable (such as splitting the atom). In the words of Peter Thiel, perhaps the lone sceptic within a hundred miles of Palo Alto, ‘We wanted flying cars, instead we got 140 characters.’* Travel speeds have declined since the days of Concorde. Green energy is ‘unaffordable energy’. And we lack the ambition to ‘declare war’ on Alzheimer’s disease, ‘even though nearly a third of America’s 85-year-olds suffer from some form of dementia’.15 Moreover, technological optimists have to explain why the rapid scientific and technological progress of those earlier periods coincided with massive conflict between armed ideologies. (Question: Which was the world’s most scientifically advanced society in 1932, in terms of Nobel Prize-winners in the sciences? Answer: Germany.) The implications are clear. More and faster information is not good in itself. Knowledge is not always the cure. And network effects are not always positive. There was great technological progress during the 1930s. But it did not end the Depression. That took a world war.