Ehrlich thought that he was investigating a planet’s physical resources and predicting their rate of decline. In fact he was prophesying the content of future knowledge. And, by envisaging a future in which only the best knowledge of 1971 was deployed, he was implicitly assuming that only a small and rapidly dwindling set of problems would ever be solved again. Furthermore, by casting problems in terms of ‘resource depletion’, and ignoring the human level of explanation, he missed all the important determinants of what he was trying to predict, namely: did the relevant people and institutions have what it takes to solve problems? And, more broadly, what does it take to solve problems?
A few years later, a graduate student in the then new subject of environmental science explained to me that colour television was a sign of the imminent collapse of our ‘consumer society’. Why? Because, first of all, he said, it served no useful purpose. All the useful functions of television could be performed just as well in monochrome. Adding colour, at several times the cost, was merely ‘conspicuous consumption’. That term had been coined by the economist Thorstein Veblen in 1899, a couple of decades before even monochrome television was invented; it meant wanting new possessions in order to show off to the neighbours. That we had now reached the physical limit of conspicuous consumption could be proved, said my colleague, by analysing the resource constraints scientifically. The cathode-ray tubes in colour televisions depended on the element europium to make the red phosphors on the screen. Europium is one of the rarest elements on Earth. The planet’s total known reserves were only enough to build a few hundred million more colour televisions. After that, it would be back to monochrome. But worse – think what this would mean. From then on there would be two kinds of people: those with colour televisions and those without. And the same would be true of everything else that was being consumed. It would be a world with permanent class distinction, in which the elites would hoard the last of the resources and live lives of gaudy display, while, to sustain that illusory state through its final years, everyone else would be labouring on in drab resentment. And so it went on, nightmare built upon nightmare.
I asked him how he knew that no new source of europium would be discovered. He asked how I knew that it would. And, even if it were, what would we do then? I asked how he knew that colour cathode-ray tubes could not be built without europium. He assured me that they could not: it was a miracle that there existed even one element with the necessary properties. After all, why should nature supply elements with properties to suit our convenience?
I had to concede the point. There aren’t that many elements, and each of them has only a few energy levels that could be used to emit light. No doubt they had all been assessed by physicists. If the bottom line was that there was no alternative to europium for making colour televisions, then there was no alternative.
Yet something deeply puzzled me about that ‘miracle’ of the red phosphor. If nature provides only one pair of suitable energy levels, why does it provide even one? I had not yet heard of the fine-tuning problem (it was new at the time), but this was puzzling for a similar reason. Transmitting accurate images in real time is a natural thing for people to want to do, like travelling fast. It would not have been puzzling if the laws of physics forbade it, just as they do forbid faster-than-light travel. For them to allow it but only if one knew how would be normal too. But for them only just to allow it would be a fine-tuning coincidence. Why would the laws of physics draw the line so close to a point that happened to have significance for human technology? It would be as if the centre of the Earth had turned out to be within a few kilometres of the centre of the universe. It seemed to violate the Principle of Mediocrity.
What made this even more puzzling was that, as with the real fine-tuning problem, my colleague was claiming that there were many such coincidences. His whole point was that the colour-television problem was just one representative instance of a phenomenon that was happening simultaneously in many areas of technology: the ultimate limits were being reached. Just as we were using up the last stocks of the rarest of rare-earth elements for the frivolous purpose of watching soap operas in colour, so everything that looked like progress was actually just an insane rush to exploit the last resources left on our planet. The 1970s were, he believed, a unique and terrible moment in history.
He was right in one respect: no alternative red phosphor has been discovered to this day. Yet, as I write this chapter, I see before me a superbly coloured computer display that contains not one atom of europium. Its pixels are liquid crystals consisting entirely of common elements, and it does not require a cathode-ray tube. Nor would it matter if it did, for by now enough europium has been mined to supply every human being on earth with a dozen europium-type screens, and the known reserves of the element comprise several times that amount.
Even while my pessimistic colleague was dismissing colour television technology as useless and doomed, optimistic people were discovering new ways of achieving it, and new uses for it – uses that he thought he had ruled out by considering for five minutes how well colour televisions could do the existing job of monochrome ones. But what stands out, for me, is not the failed prophecy and its underlying fallacy, nor relief that the nightmare never happened. It is the contrast between two different conceptions of what people are. In the pessimistic conception, they are wasters: they take precious resources and madly convert them into useless coloured pictures. This is true of static societies: those statues really were what my colleague thought colour televisions are – which is why comparing our society with the ‘old culture’ of Easter Island is exactly wrong. In the optimistic conception – the one that was unforeseeably vindicated by events – people are problem-solvers: creators of the unsustainable solution and hence also of the next problem. In the pessimistic conception, that distinctive ability of people is a disease for which sustainability is the cure. In the optimistic one, sustainability is the disease and people are the cure.
Since then, whole new industries have come into existence to harness great waves of innovation, and in many of those – from medical imaging to video games to desktop publishing to nature documentaries like Attenborough’s – colour television proved to be very useful after all. And, far from there being a permanent class distinction between monochrome- and colour-television users, the monochrome technology is now practically extinct, as are cathode-ray televisions. Colour displays are now so cheap that they are being given away free with magazines as advertising gimmicks. And all those technologies, far from being divisive, are inherently egalitarian, sweeping away many formerly entrenched barriers to people’s access to information, opinion, art and education.
Optimistic opponents of Malthusian arguments are often – rightly – keen to stress that all evils are due to lack of knowledge, and that problems are soluble. Prophecies of disaster such as the ones I have described do illustrate the fact that the prophetic mode of thinking, no matter how plausible it seems prospectively, is fallacious and inherently biased. However, to expect that problems will always be solved in time to avert disasters would be the same fallacy. And, indeed, the deeper and more dangerous mistake made by Malthusians is that they claim to have a way of averting resource-allocation disasters (namely, sustainability). Thus they also deny that other great truth that I suggested we engrave in stone: problems are inevitable.
A solution may be problem-free for a period, and in a parochial application, but there is no way of identifying in advance which problems will have such a solution. Hence there is no way, short of stasis, to avoid unforeseen problems arising from new solutions. But stasis is itself unsustainable, as witness every static society in history. Malthus could not have known that the obscure element uranium, which had just been discovered, would eventually become relevant to the survival of civilization, just as my colleague could not have known that, within his lifetime, colour televisions would be saving lives every day.
So there is no resource-management strategy that can prevent disasters, just as there is no political system that provides only good leaders and good policies, nor a scientific method that provides only true theories. But there are ideas that reliably cause disasters, and one of them is, notoriously, the idea that the future can be scientifically planned. The only rational policy, in all three cases, is to judge institutions, plans and ways of life according to how good they are at correcting mistakes: removing bad policies and leaders, superseding bad explanations, and recovering from disasters.
For example, one of the triumphs of twentieth-century progress was the discovery of antibiotics, which ended many of the plagues and endemic illnesses that had caused suffering and death since time immemorial. However, it has been pointed out almost from the outset by critics of ‘so-called progress’ that this triumph may only be temporary, because of the evolution of antibiotic-resistant pathogens. This is often held up as an indictment of – to give it its broad context – Enlightenment hubris. We need lose only one battle in this war of science against bacteria and their weapon, evolution (so the argument goes), to be doomed, because our other ‘so-called progress’ – such as cheap worldwide air travel, global trade, enormous cities – makes us more vulnerable than ever before to a global pandemic that could exceed the Black Death in destructiveness and even cause our extinction.
But all triumphs are temporary. So to use this fact to reinterpret progress as ‘so-called progress’ is bad philosophy. The fact that reliance on specific antibiotics is unsustainable is only an indictment from the point of view of someone who expects a sustainable lifestyle. But in reality there is no such thing. Only progress is sustainable.
The prophetic approach can see only what one might do to postpone disaster, namely improve sustainability: drastically reduce and disperse the population, make travel difficult, suppress contact between different geographical areas. A society which did this would not be able to afford the kind of scientific research that would lead to new antibiotics. Its members would hope that their lifestyle would protect them instead. But note that this lifestyle did not, when it was tried, prevent the Black Death. Nor would it cure cancer.
Prevention and delaying tactics are useful, but they can be no more than a minor part of a viable strategy for the future. Problems are inevitable, and sooner or later survival will depend on being able to cope when prevention and delaying tactics have failed. Obviously we need to work towards cures. But we can do that only for diseases that we already know about. So we need the capacity to deal with unforeseen, unforeseeable failures. For this we need a large and vibrant research community, interested in explanation and problem-solving. We need the wealth to fund it, and the technological capacity to implement what it discovers.
This is also true of the problem of climate change, about which there is currently great controversy. We face the prospect that carbon-dioxide emissions from technology will cause an increase in the average temperature of the atmosphere, with harmful effects such as droughts, sea-level rises, disruption to agriculture, and the extinctions of some species. These are forecast to outweigh the beneficial effects, such as an increase in crop yields, a general boost to plant life, and a reduction in the number of people dying of hypothermia in winter. Trillions of dollars, and a great deal of legislation and institutional change, intended to reduce those emissions, currently hang on the outcomes of simulations of the planet’s climate by the most powerful supercomputers, and on projections by economists about what those computations imply about the economy in the next century. In the light of the above discussion, we should notice several things about the controversy and about the underlying problem.
First, we have been lucky so far. Regardless of how accurate the prevailing climate models are, it is uncontroversial from the laws of physics, without any need for supercomputers or sophisticated modelling, that such emissions must, eventually, increase the temperature, which must, eventually, be harmful. Consider, therefore: what if the relevant parameters had been just slightly different and the moment of disaster had been in, say, 1902 – Veblen’s time – when carbon-dioxide emissions were already orders of magnitude above their pre-Enlightenment values? Then the disaster would have happened before anyone could have predicted it or known what was happening. Sea levels would have risen, agriculture would have been disrupted, millions would have begun to die, with worse to come. And the great issue of the day would have been not how to prevent it but what could be done about it.
They had no supercomputers then. Because of Babbage’s failures and the scientific community’s misjudgements – and, perhaps most importantly, their lack of wealth – they lacked the vital technology of automated computing altogether. Mechanical calculators and roomfuls of clerks would have been insufficient. But, much worse: they had almost no atmospheric physicists. In fact the total number of physicists of all kinds was a small fraction of the number who today work on climate change alone. From society’s point of view, physicists were a luxury in 1902, as colour televisions were in the 1970s. Yet, to recover from the disaster, society would have needed more scientific knowledge, and better technology, and more of it – that is to say, more wealth. For instance, in 1902, building a sea wall to protect the coast of a low-lying island would have required resources so enormous that the only islands that could have afforded it would have been those with either large concentrations of cheap labour or exceptional wealth, as in the Netherlands, much of whose population already lived below sea level thanks to the technology of dyke-building.
This is a challenge that is highly susceptible to automation. But people were in no position to address it in that way. All relevant machines were underpowered, unreliable, expensive, and impossible to produce in large numbers. An enormous effort to construct a Panama canal had just failed with the loss of thousands of lives and vast amounts of money, due to inadequate technology and scientific knowledge. And, to compound those problems, the world as a whole had very little wealth by today’s standards. Today, a coastal defence project would be well within the capabilities of almost any coastal nation – and would add decades to the time available to find other solutions to rising sea levels.
If none were found, what would we do then? That is a question of a wholly different kind, which brings me to my second observation on the climate-change controversy. It is that, while the supercomputer simulations make (conditional) predictions, the economic forecasts make almost pure prophecies. For we can expect the future of human responses to climate change to depend heavily on how successful people are at creating new knowledge to address the problems that arise. So comparing predictions with prophecies is going to lead to that same old mistake.
Again, suppose that disaster had already been under way in 1902. Consider what it would have taken for scientists to forecast, say, carbon-dioxide emissions for the twentieth century. On the (shaky) assumption that energy use would continue to increase by roughly the same exponential factor as before, they could have estimated the resulting increase in emissions. But that estimate would not have included the effects of nuclear power. It could not have, because radioactivity itself had only just been discovered, and would not be harnessed for power until the middle of the century. But suppose that somehow they had been able to foresee that. Then they might have modified their carbon-dioxide forecast, and concluded that emissions could easily be restored to below the 1902 level by the end of the century. But, again, that would only be because they could not possibly foresee the campaign against nuclear power, which would put a stop to its expansion (ironically, on environmental grounds) before it ever became a significant factor in reducing emissions. And so on. Time and again, the unpredictable factor of new human ideas, both good and bad, would make the scientific prediction useless. The same is bound to be true – even more so – of forecasts today for the coming century. Which brings me to my third observation about the current controversy.
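The arithmetic of such a forecast is simple enough to sketch; what cannot be sketched is the new knowledge that invalidates it. Here is a minimal illustration of that point, with every figure (the growth rate, the dates, the halving of growth after 1950) invented purely for the purpose – none of it is historical data or part of the argument above:

    # Hypothetical illustration only: a naive exponential extrapolation of
    # emissions, compared with the same trajectory after one unforeseen
    # structural change. All figures are invented.

    def extrapolate(e0, growth, years):
        """Project emissions forward assuming a constant annual growth factor."""
        return e0 * (1 + growth) ** years

    e_1902 = 1.0        # emissions in 1902, in arbitrary units
    growth_rate = 0.03  # assumed 3 per cent annual growth, as observed up to 1902

    # What a 1902 forecaster could have projected for the year 2000.
    naive_2000 = extrapolate(e_1902, growth_rate, 98)

    # Suppose an unforeseeable development (new knowledge) halves the growth
    # rate from 1950 onward - the kind of factor no extrapolation can contain.
    to_1950 = extrapolate(e_1902, growth_rate, 48)
    revised_2000 = extrapolate(to_1950, growth_rate / 2, 50)

    print(f"Naive 1902 projection for 2000:      {naive_2000:.1f}")
    print(f"With one unforeseen change included: {revised_2000:.1f}")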
It is not yet accurately known how sensitive the atmosphere’s temperature is to the concentration of carbon dioxide – that is, how much a given increase in concentration increases the temperature. This number is important politically, because it affects how urgent the problem is: high sensitivity means high urgency; low sensitivity means the opposite. Unfortunately, this has led to the political debate being dominated by the side issue of how ‘anthropogenic’ (human-caused) the increase in temperature to date has been. It is as if people were arguing about how best to prepare for the next hurricane while all agreeing that the only hurricanes one should prepare for are human-induced ones. All sides seem to assume that if it turns out that a random fluctuation in the temperature is about to raise sea levels, disrupt agriculture, wipe out species and so on, our best plan would be simply to grin and bear it. Or that, if two-thirds of the increase is anthropogenic, we should not mitigate the effects of the other third.
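For reference (this framing is not part of the argument above, and the value itself is what remains disputed), the ‘sensitivity’ in question is conventionally stated as an equilibrium warming S per doubling of carbon-dioxide concentration, because the warming effect is approximately logarithmic in the concentration C relative to a reference level C₀:

    \Delta T \;\approx\; S \,\log_2\!\left(\frac{C}{C_0}\right)

On that conventional framing, the political question of urgency is largely the question of how large S is.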
Trying to predict what our net effect on the environment will be for the next century and then subordinating all policy decisions to optimizing that prediction cannot work. We cannot know how much to reduce emissions by, nor how much effect that will have, because we cannot know the future discoveries that will make some of our present actions seem wise, some counter-productive and some irrelevant, nor how much our efforts are going to be assisted or impeded by sheer luck. Tactics to delay the onset of foreseeable problems may help. But they cannot replace, and must be subordinate to, increasing our ability to intervene after events have turned out in ways we did not foresee. If that does not happen in regard to carbon-dioxide-induced warming, it will happen with something else.
Indeed, we did not foresee the global-warming disaster. I call it a disaster because the prevailing theory is that our best option is to prevent carbon-dioxide emissions by spending vast sums and enforcing severe worldwide restrictions on behaviour, and that is already a disaster by any reasonable measure. I call it unforeseen because we now realize that it was already under way even in 1971, when I attended that lecture. Ehrlich did tell us that agriculture was soon going to be devastated by rapid climate change. But the change in question was going to be global cooling, caused by smog and the condensation trails of supersonic aircraft. The possibility of warming caused by gas emissions had already been mooted by some scientists, but Ehrlich did not consider it worth mentioning. He told us that the evidence was that a general cooling trend had already begun, and that it would continue with catastrophic effects, though it would be reversed in the very long term because of ‘heat pollution’ from industry (an effect that is currently at least a hundred times smaller than the global warming that preoccupies us).