The Beginning of Infinity
TERMINOLOGY
One-to-one correspondence Tallying each member of one set with each member of another.
Infinite (mathematical) A set is infinite if it can be placed in one-to-one correspondence with part of itself.
Infinite (physical) A rather vague concept meaning something like ‘larger than anything that could in principle be encompassed by experience’.
Countably infinite Infinite, but small enough to be placed in one-to-one correspondence with the natural numbers.
Measure A method by which a theory gives meaning to proportions and averages of infinite sets of things, such as universes.
Singularity A situation in which something physical becomes unboundedly large, while remaining everywhere finite.
Multiverse A unified physical entity that contains more than one universe.
Infinite regress A fallacy in which an argument or explanation depends on a sub-argument of the same form which purports to address essentially the same problem as the original argument.
Computation A physical process that instantiates the properties of some abstract entity.
Proof A computation which, given a theory of how the computer on which it runs works, establishes the truth of some abstract proposition.
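The first and fourth definitions above can be made concrete with a small sketch (the function names are my own, for illustration only): the pairing n ↦ 2n puts all the natural numbers in one-to-one correspondence with the even numbers, a proper part of themselves, which is exactly what makes the naturals infinite in the mathematical sense, and countably so.

```python
# One-to-one correspondence between the natural numbers and the even
# natural numbers -- a proper subset of themselves. The existence of
# such a pairing is what makes the naturals an infinite set in the
# mathematical sense defined above.

def to_even(n):
    """Pair the natural number n with the even number 2n."""
    return 2 * n

def from_even(m):
    """Recover the natural number paired with the even number m."""
    return m // 2

# The correspondence itself covers infinitely many numbers; a program
# can only spot-check a finite sample of it.
sample = range(10)
assert all(from_even(to_even(n)) == n for n in sample)  # pairing is reversible
assert all(to_even(n) % 2 == 0 for n in sample)         # every partner is even
```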
MEANINGS OF ‘THE BEGINNING OF INFINITY’ ENCOUNTERED IN THIS CHAPTER
– The ending of the ancient aversion to the infinite (and the universal).
– Calculus, Cantor’s theory and other theories of the infinite and the infinitesimal in mathematics.
– The view along a corridor of Infinity Hotel.
– The property of infinite sequences that every element is exceptionally close to the beginning.
– The universality of reason.
– The infinite reach of some ideas.
– The internal structure of a multiverse which gives meaning to an ‘infinity of universes’.
– The unpredictability of the content of future knowledge is a necessary condition for the unlimited growth of that knowledge.
SUMMARY
We can understand infinity through the infinite reach of some explanations. It makes sense, both in mathematics and in physics. But it has counter-intuitive properties, some of which are illustrated by Hilbert’s thought experiment of Infinity Hotel. One of them is that, if unlimited progress really is going to happen, not only are we now at almost the very beginning of it, we always shall be. Cantor proved, with his diagonal argument, that there are infinitely many levels of infinity, of which physics uses at most the first one or two: the infinity of the natural numbers and the infinity of the continuum. Where there are infinitely many identical copies of an observer (for instance in multiple universes), probability and proportions do not make sense unless the collection as a whole has a structure subject to laws of physics that give them meaning. A mere infinite sequence of universes, like the rooms in Infinity Hotel, does not have such structure, which means that anthropic reasoning by itself is insufficient to explain the apparent ‘fine-tuning’ of the constants of physics. Proof is a physical process: whether a mathematical proposition is provable or unprovable, decidable or undecidable, depends on the laws of physics, which determine which abstract entities and relationships are modelled by physical objects. Similarly, whether a task or pattern is simple or complex depends on what the laws of physics are.
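The diagonal argument mentioned in the summary can be sketched in a few lines. Suppose someone claims to have listed every infinite sequence of 0s and 1s; flipping the k-th digit of the k-th listed sequence yields a sequence that differs from every entry in the list, so no such list can be complete. The sketch below runs the construction on a finite toy list (the list itself is my own illustrative example):

```python
def cantor_diagonal(listed):
    """Given the first n sequences (each shown to n places) of a
    purported enumeration of all infinite 0/1 sequences, build a
    sequence that differs from the k-th listed sequence at position k."""
    return [1 - listed[k][k] for k in range(len(listed))]

# A toy 'enumeration' of four sequences, each shown to four places.
listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]

diagonal = cantor_diagonal(listed)
# The diagonal sequence disagrees with every row at the diagonal
# position, so it cannot appear anywhere in the list.
assert all(diagonal[k] != listed[k][k] for k in range(len(listed)))
```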
9
Optimism
The possibilities that lie in the future are infinite. When I say ‘It is our duty to remain optimists,’ this includes not only the openness of the future but also that which all of us contribute to it by everything we do: we are all responsible for what the future holds in store. Thus it is our duty, not to prophesy evil but, rather, to fight for a better world.
Karl Popper, The Myth of the Framework (1994)
Martin Rees suspects that civilization was lucky to survive the twentieth century. For throughout the Cold War there was always a possibility that another world war would break out, this time fought with hydrogen bombs, and that civilization would be destroyed. That danger seems to have receded, but in Rees’s book Our Final Century, published in 2003, he came to the worrying conclusion that civilization now had only a 50 per cent chance of surviving the twenty-first century.
Again this was because of the danger that newly created knowledge would have catastrophic consequences. For example, Rees thought it likely that civilization-destroying weapons, particularly biological ones, would soon become so easy to make that terrorist organizations, or even malevolent individuals, could not be prevented from acquiring them. He also feared accidental catastrophes, such as the escape of genetically modified micro-organisms from a laboratory, resulting in a pandemic of an incurable disease. Intelligent robots, and nanotechnology (engineering on the atomic scale), ‘could in the long run be even more threatening’, he wrote. And ‘it is not inconceivable that physics could be dangerous too.’ For instance, it has been suggested that elementary-particle accelerators that briefly create conditions that are in some respects more extreme than any since the Big Bang might destabilize the very vacuum of space and destroy our entire universe.
Rees pointed out that, for his conclusion to hold, it is not necessary for any one of those catastrophes to be at all probable, because we need be unlucky only once, and we incur the risk afresh every time progress is made in a variety of fields. He compared this with playing Russian roulette.
But there is a crucial difference between the human condition and Russian roulette: the probability of winning at Russian roulette is unaffected by anything that the player may think or do. Within its rules, it is a game of pure chance. In contrast, the future of civilization depends entirely on what we think and do. If civilization falls, that will not be something that just happens to us: it will be the outcome of choices that people make. If civilization survives, that will be because people succeed in solving the problems of survival, and that too will not have happened by chance.
Both the future of civilization and the outcome of a game of Russian roulette are unpredictable, but in different senses and for entirely unrelated reasons. Russian roulette is merely random. Although we cannot predict the outcome, we do know what the possible outcomes are, and the probability of each, provided that the rules of the game are obeyed. The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created. Hence the possible outcomes are not yet known, let alone their probabilities.
The growth of knowledge cannot change that fact. On the contrary, it contributes strongly to it: the ability of scientific theories to predict the future depends on the reach of their explanations, but no explanation has enough reach to predict the content of its own successors – or their effects, or those of other ideas that have not yet been thought of. Just as no one in 1900 could have foreseen the consequences of innovations made during the twentieth century – including whole new fields such as nuclear physics, computer science and biotechnology – so our own future will be shaped by knowledge that we do not yet have. We cannot even predict most of the problems that we shall encounter, or most of the opportunities to solve them, let alone the solutions and attempted solutions and how they will affect events. People in 1900 did not consider the internet or nuclear power unlikely: they did not conceive of them at all.
No good explanation can predict the outcome, or the probability of an outcome, of a phenomenon whose course is going to be significantly affected by the creation of new knowledge. This is a fundamental limitation on the reach of scientific prediction, and, when planning for the future, it is vital to come to terms with it. Following Popper, I shall use the term prediction for conclusions about future events that follow from good explanations, and prophecy for anything that purports to know what is not yet knowable. Trying to know the unknowable leads inexorably to error and self-deception. Among other things, it creates a bias towards pessimism. For example, in 1894 the physicist Albert Michelson made the following prophecy about the future of physics:
The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote . . . Our future discoveries must be looked for in the sixth place of decimals.
Albert Michelson, address at the opening of the Ryerson Physical Laboratory, University of Chicago, 1894
What exactly was Michelson doing when he judged that there was only an ‘exceedingly remote’ chance that the foundations of physics as he knew them would ever be superseded? He was prophesying the future. How? On the basis of the best knowledge available at the time. But that consisted of the physics of 1894! Powerful and accurate though it was in countless applications, it was not capable of predicting the content of its successors. It was poorly suited even to imagining the changes that relativity and quantum theory would bring – which is why the physicists who did imagine them won Nobel prizes. Michelson would not have put the expansion of the universe, or the existence of parallel universes, or the non-existence of the force of gravity, on any list of possible discoveries whose probability was ‘exceedingly remote’. He just didn’t conceive of them at all.
A century earlier, the mathematician Joseph-Louis Lagrange had remarked that Isaac Newton had not only been the greatest genius who ever lived, but also the luckiest, for ‘the system of the world can be discovered only once.’ Lagrange would never know that some of his own work, which he had regarded as a mere translation of Newton’s into a more elegant mathematical language, was a step towards the replacement of Newton’s ‘system of the world’. Michelson did live to see a series of discoveries that spectacularly refuted the physics of 1894, and with it his own prophecy.
Like Lagrange, Michelson himself had already contributed unwittingly to the new system – in this case with an experimental result. In 1887 he and his colleague Edward Morley had observed that the speed of light relative to an observer remains constant when the observer moves. This astoundingly counter-intuitive fact later became the centrepiece of Einstein’s special theory of relativity. But Michelson and Morley did not realize that that was what they had observed. Observations are theory-laden. Given an experimental oddity, we have no way of predicting whether it will eventually be explained merely by correcting a minor parochial assumption or by revolutionizing entire sciences. We can know that only after we have seen it in the light of a new explanation. In the meantime we have no option but to see the world through our best existing explanations – which include our existing misconceptions. And that biases our intuition. Among other things, it inhibits us from conceiving of significant changes.
When the determinants of future events are unknowable, how should one prepare for them? How can one? Given that some of those determinants are beyond the reach of scientific prediction, what is the right philosophy of the unknown future? What is the rational approach to the unknowable – to the inconceivable? That is the subject of this chapter.
The terms ‘optimism’ and ‘pessimism’ have always been about the unknowable, but they did not originally refer especially to the future, as they do today. Originally, ‘optimism’ was the doctrine that the world – past, present and future – is as good as it could possibly be. The term was first used to describe an argument of Leibniz (1646–1716) that God, being ‘perfect’, would have created nothing less than ‘the best of all possible worlds’. Leibniz believed that this idea solved the ‘problem of evil’, which I mentioned in Chapter 4: he proposed that all apparent evils in the world are outweighed by good consequences that are too remote to be known. Similarly, all apparently good events that fail to happen – including all improvements that humans are unsuccessful in achieving – fail because they would have had bad consequences that would have outweighed the good.
Since consequences are determined by the laws of physics, the larger part of Leibniz’s claim must be that the laws of physics are the best possible too. Alternative laws that made scientific progress easier, or made disease an impossible phenomenon, or made even one disease slightly less unpleasant – in short, any alternative that would seem to be an improvement upon our actual history with all its plagues, tortures, tyrannies and natural disasters – would in fact have been even worse on balance, according to Leibniz.
That theory is a spectacularly bad explanation. Not only can any observed sequence of events be explained as ‘best’ by that method, but an alternative Leibniz could equally well have claimed that we live in the worst of all possible worlds, and that every good event is necessary in order to prevent something even better from happening. Indeed, some philosophers, such as Arthur Schopenhauer, have claimed just that. Their stance is called philosophical ‘pessimism’. Or one could claim that the world is exactly halfway between the best possible and the worst possible – and so on. Notice that, despite their superficial differences, all those theories have something important in common: if any of them were true, rational thought would have almost no power to discover true explanations. For, since we can always imagine states of affairs that seem better than what we observe, we would always be mistaken in believing them to be better, no matter how good our explanations were. So, in such a world, the true explanations of events are never even imaginable. For instance, in Leibniz’s ‘optimistic’ world, whenever we try to solve a problem and fail, it is because we have been thwarted by an unimaginably vast intelligence that determined that it was best for us to fail. And, still worse, whenever someone rejects reason and decides instead to rely on bad explanations or logical fallacies – or, for that matter, on pure malevolence – they still achieve, in every case, a better outcome on balance than the most rational and benevolent thought possibly could have. This does not describe an explicable world. And that would be very bad news for us, its inhabitants. Both the original ‘optimism’ and the original ‘pessimism’ are close to pure pessimism as I shall define it.
In everyday usage, it is commonly said that ‘an optimist calls a glass half full while a pessimist calls it half empty’. But those attitudes are not what I am referring to either: they are matters not of philosophy but of psychology – more ‘spin’ than substance. The terms can also refer to moods, such as cheerfulness or depression, but, again, moods do not necessitate any particular stance about the future: the statesman Winston Churchill suffered from intense depression, yet his outlook on the future of civilization, and his specific expectations as wartime leader, were unusually positive. Conversely, the economist Thomas Malthus, a notorious prophet of doom (of whom more below), is said to have been a serene and happy fellow, who often had his companions at the dinner table in gales of laughter.
Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen. The opposite approach, blind pessimism, often called the precautionary principle, seeks to ward off disaster by avoiding everything not known to be safe. No one seriously advocates either of these two as a universal policy, but their assumptions and their arguments are common, and often creep into people’s planning.
Blind optimism is also known as ‘overconfidence’ or ‘recklessness’. An often cited example, perhaps unfairly, is the judgement of the builders of the ocean liner Titanic that it was ‘practically unsinkable’. The largest ship of its day, it sank on its maiden voyage in 1912. Designed to survive every foreseeable disaster, it collided with an iceberg in a manner that had not been foreseen. A blind pessimist argues that there is an inherent asymmetry between good and bad consequences: a successful maiden voyage cannot possibly do as much good as a disastrous one can do harm. As Rees points out, a single catastrophic consequence of an otherwise beneficial innovation could put an end to human progress for ever. So the blindly pessimistic approach to building ocean liners is to stick with existing designs and refrain from attempting any records.
But blind pessimism is a blindly optimistic doctrine. It assumes that unforeseen disastrous consequences cannot follow from existing knowledge too (or, rather, from existing ignorance). Not all shipwrecks happen to record-breaking ships. Not all unforeseen physical disasters need be caused by physics experiments or new technology. But one thing we do know is that protecting ourselves from any disaster, foreseeable or not, or recovering from it once it has happened, requires knowledge; and knowledge has to be created. The harm that can flow from any innovation that does not destroy the growth of knowledge is always finite; the good can be unlimited. There would be no existing ship designs to stick with, nor records to stay within, if no one had ever violated the precautionary principle.
Because pessimism needs to counter that argument in order to be at all persuasive, a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.
More generally, what they lacked was a certain combination of abstract knowledge and knowledge embodied in technological artefacts, namely sufficient wealth. Let me define that in a non-parochial way as the repertoire of physical transformations that they would be capable of causing.