Poincaré was the archetypal absent-minded academic – no, come to think of it he was ‘present-minded somewhere else’, namely in his mathematics, and it’s easy to understand why. He was probably the most naturally gifted mathematician of the nineteenth century. If you had a mind like his, you’d spend most of your time somewhere else, too, revelling in the beauty of the mathiverse.
Poincaré ranged over almost all of mathematics, and he wrote several best-selling popular science books, too. In one piece of research, which single-handedly created a new ‘qualitative’ way of thinking about dynamics, he pointed out that when you are studying some physical system that can exist in a variety of different states, then it may be a good idea to consider the states that it could be in, but isn’t, as well as the particular state in which it is. By doing that, you set up a context that lets you understand what the system is doing, and why. This context is the ‘phase space’ of the system. Each possible state can be thought of as a point in that phase space. As time passes, the state changes, so this representative point traces out a curve, the trajectory of the system. The rule that determines the successive steps in the trajectory is the dynamic of the system. In most areas of physics, the dynamic is completely determined, once and for all, but we can extend this terminology to cases where the rule involves possible choices. A good example is a game. Now the phase space is the space of possible positions, the dynamic is the rules of the game and a trajectory is a legal sequence of moves by the players.
The formal setting and terminology for phase spaces is not as important, for us, as the viewpoint that they encourage. For example, you might wonder why the surface of a pool of water, in the absence of wind or other disturbances, is flat. It just sits there, flat; it isn’t even doing anything. But you start to make progress immediately if you ask the question ‘what would happen if it wasn’t flat?’ For instance, why can’t the water be piled up into a hump in the middle of the pond? Well, imagine that it was. Imagine that you can control the position of every molecule of water, and that you pile it up in this way, miraculously keeping every molecule just where you’ve placed it. Then, you ‘let go’. What would happen? The heap of water would collapse, and waves would slosh across the pool until everything settled down to that nice, flat surface that we’ve learned to expect. Again, suppose you arranged the water so that there was a big dip in the middle. Then as soon as you let go, water would move in from the sides to fill the dip.
Mathematically, this idea can be formalised in terms of the space of all possible shapes for the water’s surface. ‘Possible’ here doesn’t mean physically possible: the only shape you’ll ever see in the real world, barring disturbances, is a flat surface. ‘Possible’ means ‘conceptually possible’. So we can set up this space of all possible shapes for the surface as a simple mathematical construct, and this is the phase space for the problem. Each ‘point’ – location – in phase space represents a conceivable shape for the surface. Just one of those points, one state, represents ‘flat’.
Having defined the appropriate phase space, the next step is to understand the dynamic: the way that the natural flow of water under gravity affects the possible surfaces of the pool. In this case, there is a simple principle that solves the whole problem: the idea that water flows so as to make its total energy as small as possible. If you put the water into some particular state, like that piled-up hump, and then let go, the surface will follow the ‘energy gradient’ downhill, until it finds the lowest possible energy. Then (after some sloshing around which slowly subsides because of friction) it will remain at rest in this lowest-energy state.
The energy in this problem is ‘potential energy’, determined by gravity. The potential energy of a mass of water is equal to its height above some arbitrary reference level, multiplied by the mass concerned. Suppose that the water is not flat. Then some parts are higher up than others. So we can transfer some water from the high level to the lower one, by flattening a hump and filling a dip. When we do that, the water involved moves downwards, so the total energy decreases. Conclusion: if the surface is not flat, then the energy is not as small as possible. Or, to put it the other way round: the minimum energy configuration occurs when the surface is flat.
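If you like, you can watch this argument happen on a computer. The sketch below is ours, not anything in the original text, and it uses an idealised energy formula: treat the surface as a row of water columns, where a column of height h has potential energy proportional to h²/2 (each thin layer contributes its height times its mass). Repeatedly moving a little water from the highest column to the lowest always lowers the total, and the surface settles toward flat.

```python
# Toy model (ours): the pool surface as five columns of water.
# A column of height h has potential energy h**2 / 2 in arbitrary units.

def potential_energy(heights):
    """Total potential energy of the surface."""
    return sum(h * h / 2 for h in heights)

surface = [0.5, 1.0, 2.0, 1.0, 0.5]  # a hump in the middle, total volume 5
flat = [1.0] * 5                     # the same volume, spread flat

print(potential_energy(surface))  # 3.25
print(potential_energy(flat))     # 2.5 -- the flat surface has less energy

# Let water 'flow downhill': transfer a little from the highest column
# to the lowest, over and over. The energy falls at every step.
for _ in range(1000):
    hi = surface.index(max(surface))
    lo = surface.index(min(surface))
    surface[hi] -= 0.01
    surface[lo] += 0.01

print(round(potential_energy(surface), 2))  # 2.5 -- essentially flat
```

The minimum of the sum of squares, for a fixed total volume, is reached when every column has the same height: which is exactly the flat surface.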
The shape of a soap bubble is another example. Why is it round? The way to answer that question is to compare the actual round shape with a hypothetical non-round shape. What’s different? Yes, the alternative isn’t round, but is there some less obvious difference? According to Greek legend, Dido was offered as much land (in northern Africa) as she could enclose with a bull’s hide. She cut it into a very long, thin strip and enclosed a circle. There she founded the city of Carthage. Why did she choose a circle? Because the circle is the shape with greatest area, for a given perimeter. In the same way, a sphere is the shape with greatest volume, for a given surface area; or, to put it another way, it is the shape with the smallest surface area that contains a given volume. A soap bubble contains a fixed volume of air, and its surface area gives the energy of the soap film due to surface tension. In the space of all possible shapes for bubbles, the one with the least energy is a sphere. All other shapes have larger energy, and are therefore ruled out.
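The sphere’s advantage is easy to check numerically against one rival shape. This little Python comparison is ours (the surface-area and volume formulas for a sphere and a cube are the standard ones): for the same enclosed volume, the cube needs noticeably more surface than the sphere.

```python
import math

V = 1.0  # one unit of enclosed air

# Sphere of volume V: r = (3V / 4*pi)**(1/3), area = 4*pi*r**2.
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r ** 2

# Cube of the same volume: side = V**(1/3), area = 6 * side**2.
side = V ** (1 / 3)
cube_area = 6 * side ** 2

print(round(sphere_area, 3))  # 4.836
print(round(cube_area, 3))    # 6.0 -- about 24 per cent more soap film
```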
You may not feel that bubbles are important. But the same principle explains why Roundworld (the planet not the universe, but maybe that, too) is round. When it was molten rock, it settled into a spherical shape, because that had the least energy. For the same reason, the heavy materials like iron sank into the core, and the lighter ones, like continents and air, floated up to the top. Actually, Roundworld isn’t exactly a sphere, because it rotates, so centrifugal forces cause it to bulge at the equator. But the amount of bulge is only one-third of one per cent. And that bulging shape is the minimum-energy configuration for a mass of liquid spinning at the same speed as the Earth’s rotation when it was just starting to solidify.
The physics here isn’t important for the message of this book. What is important is the ‘Worlds of If’ point of view involved in the application of phase spaces. When we discussed the shape of water in a pond, we pretty much ignored the flat surface, the thing we were trying to explain. The entire argument hinged upon non-flat surfaces, humps and dips, and hypothetical transfers of water from one to the other. Almost all of the explanation involved thinking about things that don’t actually happen. Only at the end, having ruled out all non-flat surfaces, did we observe that the only possibility left was therefore what the water would actually do. The same goes for the bubble.
At first sight, this might seem to be a very oblique way of doing physics. It takes the stance that the way to understand the real world is to ignore it, and focus instead on all the possible alternative unreal worlds. Then we find some principle (in this case, minimum energy) to rule out nearly all of the unreal worlds, and see what’s left. Wouldn’t it be easier to start with the real world, and focus solely on that? No, it wouldn’t. As we’ve just seen, the real world alone is too limited to offer a convincing explanation. What you get from the real world alone is ‘the world is like it is, and there’s nothing more to be said’. However, if you take the imaginative leap of considering unreal worlds, too, you can compare the real world with all of those unreal worlds, and maybe find a principle that picks out the real one from all the others. Then you have answered the question ‘Why is the world the way it is, rather than something else?’
An excellent way to approach ‘why’ questions is to consider alternatives and rule them out. ‘Why did you park the car round the corner down a side-street?’ ‘Because if I’d parked outside the front door on the double yellow lines, a traffic warden would have given me a parking ticket.’ This particular ‘why’ question is a story, a piece of fiction: a hypothetical discussion of the likely consequences of an action that never occurred. Humans invented their own brand of narrativium as an aid to the exploration of I-space, the space of ‘insteads’. Narrative provides I-space with a geography: if I did this instead of that, then what would happen would be …
On Discworld, phase spaces are real. The fictitious alternatives to the one actual state exist, too, and you can get inside the phase space and roam over its landscape – provided you know the right spells, secret entrances and other magical paraphernalia. L-space is a case in point. On Roundworld, we can pretend that phase space exists, and we can imagine exploring its geography. This pretence has turned out to be extraordinarily insightful.
Associated with any physical system, then, is a phase space, a space of the possible. If you’re studying the solar system, then the phase space comprises all possible ways to arrange one star, nine planets, a considerable number of moons and a gigantic number of asteroids in space. If you’re studying a sand-pile, then the phase space comprises all possible ways to arrange several million grains of sand. If you’re studying thermodynamics, then the phase space comprises all possible positions and velocities for a large number of gas molecules. Indeed, for each molecule there are three position coordinates and three velocity coordinates, because the molecule lives in three-dimensional space. So with N molecules there are 6N coordinates altogether. If you’re looking at games of chess, then the phase space consists of all possible positions of the pieces on the board. If you’re thinking about all possible books, then the phase space is L-space. And if you’re thinking about all possible universes, you’re contemplating U-space. Each ‘point’ of U-space is an entire universe (and you have to invent the multiverse to hold them all …)
When cosmologists think about varying the natural constants, as we described in Chapter 2 in connection with the carbon resonance in stars, they are thinking about one tiny and rather obvious piece of U-space, the part that can be derived from our universe by changing the fundamental constants but otherwise keeping the laws the same. There are infinitely many other ways to set up an alternative universe: they range from having 101 dimensions and totally different laws to being identical with our universe except for six atoms of dysprosium in the core of the star Procyon that change into iodine on Thursdays.
As this example suggests, the first thing to appreciate about phase spaces is that they are generally rather big. What the universe actually does is a tiny proportion of all the things it could have done instead. For instance, suppose that a car park has one hundred parking slots, and that cars are either red, blue, green, white, or black. When the car park is full, how many different patterns of colour are there? Ignore the make of car, ignore how well or badly it is parked; focus solely on the pattern of colours.
Mathematicians call this kind of question ‘combinatorics’, and they have devised all sorts of clever ways to find answers. Roughly speaking, combinatorics is the art of counting things without actually counting them. Many years ago a mathematical acquaintance of ours came across a university administrator counting light bulbs in the roof of a lecture hall. The lights were arranged in a perfect rectangular grid, 10 by 20. The administrator was staring at the ceiling, going ‘49, 50, 51 …’
‘Two hundred,’ said the mathematician.
‘How do you know that?’
‘Well, it’s a 10 by 20 grid, and 10 times 20 is 200.’
‘No, no,’ replied the administrator. ‘I want the exact number.’2
Back to those cars. There are five colours, and each slot can be filled by just one of them. So there are five ways to fill the first slot, five ways to fill the second, and so on. Any way to fill the first slot can be combined with any way to fill the second, so those two slots can be filled in 5 × 5 = 25 ways. Each of those can be combined with any of the five ways to fill the third slot, so now we have 25 × 5 = 125 possibilities. By the same reasoning, the total number of ways to fill the whole car park is 5 × 5 × 5 … × 5, with a hundred fives. This is 5^100, which is rather big. To be precise, it is
78886090522101180541172856528278622
96732064351090230047702789306640625
(we’ve broken the number in two so that it fits the page width) which has 70 digits. It took a computer algebra system about five seconds to work that out, by the way, and about 4.999 of those seconds were taken up with giving it the instructions. And most of the rest was used up printing the result to the screen. Anyway, you now see why combinatorics is the art of counting without actually counting; if you listed all the possibilities and counted them ‘1, 2, 3, 4 …’ you’d never finish. So it’s a good job that the university administrator wasn’t in charge of car parking.
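If you have a computer to hand, that 70-digit figure takes rather less than five seconds to reproduce. A two-line Python check (ours, not part of the original text); Python handles integers of this size exactly:

```python
# Five colour choices for each of 100 parking slots, multiplied together.
patterns = 5 ** 100

print(len(str(patterns)))  # 70 -- a 70-digit number, as claimed
print(patterns)            # 78886090522101180541... down to ...306640625
```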
How big is L-space? The Librarian said it is infinite, which is true if you use ‘infinity’ to mean ‘a much larger number than I can envisage’, or if you don’t place an upper limit on how big a book can be,3 or if you allow all possible alphabets, syllabaries, and pictograms. If we stick to ‘ordinary-sized’ English books, we can reduce the estimate.
A typical book is 100,000 words long, or about 600,000 characters (letters and spaces, we’ll ignore punctuation marks). There are 26 letters in the English alphabet, plus a space, making 27 characters that can go into each of the 600,000 possible positions. The counting principle that we used to solve the car-parking problem now implies that the maximum number of books of this length is 27^600,000, which is roughly 10^860,000 (that is, an 860,000-digit number). Of course, most of those ‘books’ make very little sense, because we’ve not yet insisted that the letters make sensible words. If we assume that the words are drawn from a list of 10,000 standard ones, and calculate the number of ways to arrange 100,000 words in order, then the figure changes to 10,000^100,000, equal to 10^400,000, and this is quite a bit smaller … but still enormous. Mind you, most of those books wouldn’t make much sense either; they’d read something like ‘Cabbage patronymic forgotten prohibit hostile quintessence’ continuing at book length.4 So maybe we ought to work with sentences … At any rate, even if we cut the numbers down in that manner, it turns out that the universe is not big enough to contain that many physical books. So it’s a good job that L-space is available, and now we know why there’s never enough shelf space. We like to think that our major libraries, such as the British Library or the Library of Congress, are pretty big. But, in fact, the space of those books that actually exist is a tiny, tiny fraction of L-space, of all the books that could have existed. In particular, we’re never going to run out of new books to write.
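Numbers like 27^600,000 are far too long to write out, but their digit counts follow directly from logarithms: a positive whole number n has ⌊log₁₀ n⌋ + 1 digits. A short Python sketch of ours makes the counts above concrete:

```python
import math

# A positive integer n has floor(log10(n)) + 1 digits, so base**exponent
# has floor(exponent * log10(base)) + 1 of them -- no need to compute it.
def digits(base, exponent):
    return math.floor(exponent * math.log10(base)) + 1

print(digits(27, 600_000))      # 858819 digits, i.e. roughly 10**860,000
print(digits(10_000, 100_000))  # 400001 digits, i.e. exactly 10**400,000
print(digits(5, 100))           # 70 -- the car-park number again
```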
Poincaré’s phase space viewpoint has proved to be so useful that nowadays you’ll find it in every area of science – and in areas that aren’t science at all. A major consumer of phase spaces is economics. Suppose that a national economy involves a million different goods: cheese, bicycles, rats-on-a-stick, and so on. Associated with each good is a price, say £2.35 for a lump of cheese, £449.99 for a bicycle, £15.00 for a rat-on-a-stick. So the state of the economy is a list of one million numbers. The phase space consists of all possible lists of a million numbers, including many lists that make no economic sense at all, such as lists that include the £0.02 bicycle or the £999,999,999.95 rat. The economist’s job is to discover the principles that select, from the space of all possible lists of numbers, the actual list that is observed.
The classic principle of this kind is the Law of Supply and Demand, which says that if goods are in short supply and you really, really want them, then the price goes up. It sometimes works, but it often doesn’t. Finding such laws is something of a black art, and the results are not totally convincing, but that just tells us that economics is hard. Poor results notwithstanding, the economist’s way of thinking is a phase space point of view.
Here’s a little tale that shows just how far removed economic theory is from reality. The basis of conventional economics is the idea of a rational agent with perfect information, who maximises utility. According to these assumptions, a taxi-driver, for example, will arrange his activities to generate the most money for the least effort.
Now, the income of a taxi-driver depends on circumstances. On good days, with lots of passengers around, he will do well; on bad days, he won’t. A rational taxi-driver will therefore work longer on good days and give up early on bad ones. However, a study of taxi-drivers in New York carried out by Colin Camerer and others shows the exact opposite. The taxi-drivers seem to set themselves a daily target, and stop working once they reach it. So they work shorter hours on good days, and longer hours on bad ones. They could increase their earnings by 8 per cent just by working the same number of hours every day, for the same total working time. If they worked longer on good days and shorter on bad ones, they could increase their earnings by 15 per cent. But they don’t have a good enough intuition for economic phase space to appreciate this. They are adopting a common human trait of placing too much value on what they have today, and too little on what they may gain tomorrow.
Biology, too, has been invaded by phase spaces. The first of these to gain widespread currency was DNA-space. Associated with every living organism is its genome, a string of chemical molecules called DNA. The DNA molecule is a double helix, two spirals wrapped round a common core. Each spiral is made up of a string of ‘bases’ or ‘nucleotides’, which come in four varieties: cytosine, guanine, adenine, thymine, normally abbreviated to their initials C, G, A, T. The sequences on the two strings are ‘complementary’: wherever C appears on one string, you get G on the other, and similarly for A and T. So the DNA contains two copies of the sequence, one positive and one negative, so to speak. In the abstract, then, the genome can be thought of as a single sequence of these four letters, something like AATGGCCTCAG … going on for rather a long time. The human genome, for example, goes on for about three billion letters.
The phase space for genomes, DNA-space, consists of all possible sequences of a given length. If we’re thinking about human beings, the relevant DNA-space comprises all possible sequences of three billion code letters C, G, A, T. How big is that space? It’s the same problem as the cars in the car park, mathematically speaking, so the answer is 4 × 4 × 4 × … × 4 with three billion 4s. That is, 4^3,000,000,000. This number is a lot bigger than the 70-digit number we got for the car-parking problem. It’s a lot bigger than L-space for normal-sized books, too. In fact, it has about 1,800,000,000 digits. If you wrote it out with 3,000 digits per page, you’d need a 600,000-page book to hold it.
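The digit count quoted here can be verified by logarithm rather than by computing the number itself; this Python sketch is ours, not from the text:

```python
import math

# Size of DNA-space for a human-length genome: 4**3_000_000_000 sequences.
# A number n has floor(log10(n)) + 1 digits.
genome_length = 3_000_000_000
digit_count = math.floor(genome_length * math.log10(4)) + 1

print(digit_count)         # 1806179974 -- about 1.8 billion digits
print(digit_count / 3000)  # roughly 600,000 pages at 3,000 digits per page
```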
The Science of Discworld II Page 5