by Peter Watson
The string revolution came about because of a fundamental paradox. Although each was successful on its own account, the theory of general relativity, explaining the large-scale structure of the universe, and quantum mechanics, explaining the minuscule subatomic scale, were mutually incompatible. Physicists could not believe that nature would allow such a state of affairs – one set of laws for large things, another for small things – and for some time they had been seeking ways to reconcile this incompatibility, which many felt was not unrelated to their failure to explain gravity. There were other fundamental questions, too, which the string theorists faced up to: Why are there four fundamental forces?33 Why are there the number of particles that there are, and why do they have the properties they do? The answer that string theorists propose is that the basic constituent of matter is not, in fact, a series of particles – point-shaped entities – but very tiny, one-dimensional strings, as often as not formed into loops. These strings are very small – about 10⁻³³ of a centimetre – which means that they are far beyond the reach of direct observation by current measuring instruments. Nevertheless, according to string theory an electron is a string vibrating one way, an up quark is a string vibrating another way, and a tau particle is a string vibrating in a third way, and so on, just as the strings on a violin vibrate in different ways so as to produce different notes. As the figures show, we are dealing here with very small entities indeed – about a hundred billion billion (10²⁰) times smaller than an atomic nucleus. But, say the string theorists, at this level it is possible to reconcile relativity and quantum theory. As a by-product and a bonus, they also say that a gravity particle – the graviton – emerges naturally from the calculations.
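The scale comparison can be made explicit. Taking roughly 10⁻¹³ cm for the diameter of an atomic nucleus – a standard textbook figure, not one given in the passage – the arithmetic runs:

```latex
\frac{\text{size of nucleus}}{\text{size of string}}
  \approx \frac{10^{-13}\ \text{cm}}{10^{-33}\ \text{cm}} = 10^{20}
```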
String theory first emerged in 1968–70, when Gabriele Veneziano, at CERN, noticed that a mathematical formula first worked out some 200 years earlier seemed, quite by accident, to describe various aspects of particle physics.34 Then three other physicists, Yoichiro Nambu, Holger Nielsen and Leonard Susskind, showed that this mathematics could be better understood if particles were not point-shaped objects but small strings that vibrated. The approach was later discarded, however, after it failed to explain the strong force. But the idea refused to die, and the first string revolution, as it came to be called, took off in 1984, after a landmark paper by Michael Green and John Schwarz first showed that relativity and quantum theory could be reconciled by string theory. This breakthrough stimulated an avalanche of research, and in the next two years more than a thousand papers on string theory were published, together showing that many of the main features of particle physics emerge naturally from the theory. This fecundity of string theory, however, brought its own problems. For a while there were in fact five string theories, all equally elegant, but no one could tell which was the ‘real’ one. Once more string theory stalled, until the ‘Strings 1995’ conference, held in March at the University of Southern California, where Edward Witten introduced the ‘second superstring revolution.’35 Witten was able to convince his colleagues that the five apparently different theories were in fact five aspects of the same underlying concept, which then became known as M-theory, the M standing variously for mystery, meta, or ‘mother of all theories.’*
In dealing with such tiny entities as strings, possibilities arise that physicists had not earlier entertained, one being that there may be ‘hidden dimensions,’ and to explain this another analogy is needed. Start with the idea that particles are seen as particles only because our instruments are too blunt to resolve anything that small. To use Greene’s own example, think of a hosepipe seen from a distance. It looks like a filament in one dimension, like a line drawn on a page. In fact, of course, when you are close up it has two dimensions – and always did have, only we weren’t close enough to see it. Physicists say it is (or may be) the same at string level – there are hidden dimensions, curled up so small that we are not at present aware of them. In fact, they say that there may be eleven dimensions in all, ten of space and one of time.36 This is a difficult if not impossible idea to imagine or visualise, but the scientists make their arguments for mathematical reasons (math that even mathematicians find difficult). When they do make this allowance, however, many things about the universe fall into place. For example, black holes can be explained – perhaps as similar to fundamental particles, or as gateways to other universes. The extra dimensions are also needed because the way they curl and bend, string theorists say, may determine the size and frequency of the vibrations of the strings – in other words, explaining why the familiar ‘particles’ have the mass, energy and number that they do. In its latest configuration, string theory involves more than strings: two-, three-, and higher-dimensional membranes, or ‘branes,’ small packets, the understanding of which will be the main work of the twenty-first century.37
The most startling thing about string theory, other than the existence of strings themselves, is that it suggests there may be a prehistory to the universe, a period before the Big Bang. As Greene puts it, string theory ‘suggests that rather than being enormously hot and tightly curled into a tiny spatial speck, the universe started out as cold and essentially infinite in spatial extent.’38 Then, he says, an instability kicked in, there was a period of inflation, and our universe formed as we know it. This also has the merit of allowing all four forces, including gravity, to be unified.
String theory stretches everyone’s comprehension to its limits. Visual analogies break down, and the math is hard even for mathematicians, but there are a few ideas we can all grasp. First, strings concern a world at and below the Planck length. This is, in a way, a logical outcome of Planck’s conception of the quantum, which he first had in 1900. Second, as yet it is 99 percent theory; physicists are beginning to find ways to test the new theories experimentally, but as of now there is no shortage of sceptics as to whether strings even exist. Third, at these very small levels, we may enter a spaceless and timeless realm. The very latest research involves structures known as zero branes, in whose realm ordinary geometry is replaced by ‘noncommutative geometry,’ conceived by the French mathematician Alain Connes. Greene believes this may be a major step forward philosophically as well as scientifically, a breakthrough ‘that is capable of giving us an answer to the question of how the universe began and why there are such things as space and time – a formalism that will take us a step closer to answering Leibniz’s question of why there is something rather than nothing.’39 Finally, in superstring theory we have the virtually complete amalgamation of physics and mathematics. The two have always been close, but never more so than now, as we approach the possibility that, in a sense, the very basis for reality is mathematical.
Many scientists believe we are living in a golden age for mathematics. Two areas in particular have attracted widespread attention among mathematicians themselves.
Chaoplexity is an amalgam of chaos and complexity. In his 1987 book Chaos: Making a New Science, James Gleick introduced this new area of intellectual activity.40 Chaos research starts from the concept that there are many phenomena in the world that are, as the mathematicians say, nonlinear, meaning that tiny differences in their starting conditions grow so rapidly that long-term prediction becomes impossible in practice. The most famous illustration is the so-called butterfly effect, whereby a butterfly fluttering its wings in, say, the Midwest of America can trigger a whole raft of events that might culminate in a monsoon in the Far East. A second aspect of the theory is that of the ‘emergent’ property, which refers to the fact that there are on Earth phenomena that ‘cannot be predicted, or understood, simply by examining the system’s parts.’ Consciousness is a good example here, since even if it can be understood (a very moot point), it cannot be understood from inspection of the neurons and chemicals within the brain. However, this only goes halfway to what the chaos scientists are saying. They also argue that the advent of the computer enables us to conduct much more powerful mathematics than ever before, with the result that we shall eventually be able to model – and therefore simulate – complex systems, such as large molecules, neural networks, population growth and weather patterns. In other words, the deep order underlying this apparent chaos will be revealed.
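That unpredictability can be made concrete in a few lines of code. The sketch below is an illustration in the spirit of the passage rather than anything Gleick describes: it iterates the logistic map, a standard toy model of nonlinear dynamics, from two starting values that differ by one part in a billion, and watches the difference explode.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x). All parameter choices here are illustrative.
r = 4.0                    # a value of r in the chaotic regime
x, y = 0.4, 0.4 + 1e-9     # two almost identical starting conditions

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```

Within a few dozen iterations the two trajectories bear no resemblance to one another, even though the rule itself is entirely deterministic.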
The basic idea in chaoplexity comes from Benoit Mandelbrot, an applied mathematician at IBM, who identified what he called the ‘fractal.’ The classic fractal is a coastline, but others include snowflakes and trees. Seen from a distance, they have one shape or outline; closer up, more intricate details are revealed; closer still, and there is yet more detail. However close you go, there is always more intricacy in the outline, often with the same patterns repeated at different scales. Because these outlines never resolve themselves into smooth lines – in other words, never conform to some simple mathematical function – Mandelbrot called them the ‘most complex objects in mathematics.’41 At the same time, however, it turns out that simple mathematical rules can be fed into computer programs that, after many generations, give rise to complicated patterns, patterns that ‘never quite repeat themselves.’ From this, and from their observations of real-life fractals, mathematicians now infer that there are in nature some very powerful rules governing apparently chaotic and complex systems that have yet to be unearthed – another example of deep order.
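To see how a simple rule can generate endless intricacy, here is a minimal sketch of the escape-time iteration z → z² + c behind Mandelbrot’s most famous object, rendered as ASCII art; the grid size, viewing window and iteration cap are arbitrary illustrative choices.

```python
# Escape-time rendering of the Mandelbrot set: one simple rule,
# iterated, produces boundary detail at every scale.
def escapes_after(c, max_iter=30):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit flies off to infinity
            return n
    return max_iter           # still bounded: treat as inside the set

for row in range(24):
    line = ""
    for col in range(64):
        # map the character grid onto a window of the complex plane
        c = complex(-2.0 + 2.6 * col / 63, -1.2 + 2.4 * row / 23)
        line += " .:-=+*#%@"[min(escapes_after(c) // 3, 9)]
    print(line)
```

Zooming the window in on any stretch of the boundary reveals fresh structure, which is exactly the fractal property described above.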
In the late 1980s and early 1990s chaos suddenly blossomed as one of the most popular forms of mathematics, and a new research outfit was founded, the Santa Fe Institute in New Mexico, southeast of Los Alamos, where Murray Gell-Mann, discoverer of the quark, joined the faculty.42 This new specialty has come up with several new concepts, among them ‘self-organised criticality,’ ‘catastrophe theory,’ the hierarchical structure of reality, ‘artificial life,’ and ‘self-organisation.’ Self-organised criticality is the brainchild of Per Bak, a Danish physicist who emigrated to the United States in the 1970s.43 His starting point, as he told John Horgan, is a sandpile. As one adds grains of sand and the pile grows, there comes a point – the critical state – at which the addition of a single grain can cause an avalanche, and an avalanche of any size. Bak was struck by the apparent similarity of this process to other phenomena – stock market crashes, the extinction of species, earthquakes, and so on. He takes the view that these processes can be understood mathematically – that is, described mathematically. We may one day be able to understand why these things happen, though that does not necessarily mean we shall be able to control or prevent them. It is not far from Per Bak’s theory to the French mathematician René Thom’s idea of catastrophe theory – that purely mathematical calculations can explain ‘discontinuous behaviour,’ such as the emergence of life, the change from a caterpillar into a butterfly, or the collapse of civilisations. They are all aspects of the search for deep order.
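Bak’s sandpile picture translates directly into a toy simulation. The sketch below follows the standard Bak–Tang–Wiesenfeld rule (a cell holding four or more grains topples, shedding one grain to each neighbour, possibly setting off a chain reaction); the grid size, grain count and reporting threshold are illustrative choices, not parameters from Bak’s own work.

```python
# A toy Bak-Tang-Wiesenfeld sandpile: drop grains one at a time and
# measure the avalanches that relaxation produces.
import random

N = 20
grid = [[0] * N for _ in range(N)]

def topple():
    """Relax the grid; return the number of topplings (avalanche size)."""
    size = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(N):
            for j in range(N):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    size += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < N and 0 <= j + dj < N:
                            grid[i + di][j + dj] += 1
                        # grains toppled off the edge are simply lost
    return size

for grain in range(5000):
    grid[random.randrange(N)][random.randrange(N)] += 1
    avalanche = topple()
    if avalanche > 50:                 # report only the larger avalanches
        print(f"grain {grain}: avalanche of {avalanche} topplings")
```

Once the pile reaches its critical state, single grains trigger avalanches of wildly varying size, which is the behaviour Bak saw echoed in crashes, extinctions and earthquakes.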
Against all this, the work of Philip Anderson stands out. He won a Nobel Prize in 1977 for his theoretical investigations of the electronic structure of magnetic and disordered systems. Instead of arguing for underlying order, Anderson’s view is that there is a hierarchy of order – each level of organisation in the world, and in biology in particular, is independent of the order in the levels above and below it. ‘At each stage, entirely new laws, concepts and generalisations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry … you mustn’t give in to the temptation that when you have a good general principle at one level that it’s going to work at all levels.’44
There is a somewhat disappointed air about the chaoplexologists at the turn of the century. What seemed so thrilling in the early 1990s has not, as yet, produced anything nearly as exciting as string theory, for example. Where math does remain exciting and undismayed, however, is in its relationship to biology. These achievements were summarised by Ian Stewart, professor of mathematics at Warwick University in Britain, in his 1998 book Life’s Other Secret.45 Stewart comes from a tradition less well known than the Hawking-Penrose-Feynman-Glashow physics/cosmology set, or the Dawkins-Gould-Dennett evolution set. He is the latest in a line that includes D’Arcy Wentworth Thompson (On Growth and Form, 1917), Stuart Kauffman (The Origins of Order, 1993), and Brian Goodwin (How the Leopard Changed Its Spots, 1994). Their collective message is that genetics is not, and never can be, a complete explanation of life. What is also needed, surprising as it may seem, is a knowledge of mathematics, because it is mathematics that governs the physical substances – the deep order – out of which, in the end, all living things are made.
Life’s Other Secret is dedicated to showing that mathematics ‘now informs our understanding of life at every level from DNA to rain forests, from viruses to flocks of birds, from the origins of the first self-copying molecule to the stately unstoppable march of evolution.’46 Some of Stewart’s examples are a mixture of the enchanting and the provocative, such as the mathematics of spiders’ webs and snowflakes, the population variations of ant colonies, and the formation of swarms of starlings; he also explores the branching systems of plants and the patterned skins of such animals as leopards and tigers. He has a whole chapter, ‘Flowers for Fibonacci,’ outlining patterns in the plant kingdom. The Fibonacci sequence of numbers –
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 …
– was first set out in the West by Leonardo of Pisa in 1202, Leonardo being the son of Bonaccio, hence ‘Fi-bonacci.’ In the sequence, each number is the sum of the two that precede it, and this simple arrangement describes a great deal: lilies have 3 petals, for example, buttercups have 5, delphiniums 8, marigolds 13, asters 21, and daisies 34, 55 or 89.47 But Stewart’s book, and thinking, are much more ambitious and interesting than this. He begins by showing that the division of cells in the embryo displays a remarkable similarity to the way soap bubbles form in foams, and that the way chromosomes are laid out in a dividing cell is similar to the way mutually repelling magnets arrange themselves. In other words, whatever instructions are coded into genes, many biological entities behave as though they are constrained by the physical properties they possess, properties that can be written as mathematical equations. For Stewart this is no accident: this is life taking advantage of the mathematics and physics of nature for its own purposes. He finds that there is a ‘deep geometry’ of molecules, especially in DNA, which forms knots and coils, this architecture being all-important. For example, he quotes a remarkable experiment carried out by Heinz Fraenkel-Conrat and Robley Williams with the tobacco mosaic virus.48 This virus, says Stewart, is a bridge between the inorganic and organic worlds; if its components are separated in a test tube and then left to their own devices, they spontaneously reassemble into a complete virus that can replicate. In other words, it is the architecture of the molecules that automatically produces life. In theory, therefore, this form of virus – of life – could be created by preparing synthetic substances and putting them together in a test tube. In the latter half of the 1990s, mathematicians came to understand the processes by which primitive forms of life – the slime mould Dictyostelium discoideum, a soil amoeba, for example – organise themselves. These turn out to be described by mathematical equations that are not so very difficult. ‘The main point here,’ says Stewart, ‘is that a lot of properties of life are turning out to be physics, not biology.’49
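As a minimal sketch of the rule just described – each term the sum of the two before it, starting conventionally from 1, 1 – the whole sequence can be generated in a few lines:

```python
# Generate the Fibonacci sequence from its defining recurrence.
# The petal counts quoted above (3, 5, 8, 13, 21, 34, 55, 89)
# all appear among the terms.
def fibonacci(n):
    """Return the first n terms of the Fibonacci sequence."""
    terms = [1, 1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

print(fibonacci(12))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```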
Perhaps most revealing are the experiments that Stewart and others call ‘artificial life.’ These are essentially games played on computers, designed to replicate in symbolic form various aspects of evolution.50 The screen will typically have a grid, say 100 squares wide and 100 squares deep. Into each of these squares is allotted a ‘bush’ or a ‘flower,’ say, or alternatively a ‘slug’ or ‘an animal that preys on slugs.’ Various rules are programmed in: one rule might be that a predator can move five squares each turn, whereas a slug can move only one square; another might be that slugs on green flowers are less likely to be seen (and eaten) than slugs on red flowers, and so on. Then, since computers are being used, this artificial life can be turned on and run for, say, 10,000 moves, or even 50 million moves, to see what ‘A-volves’ (A = artificial); a minimal sketch of such a toy world is given after this paragraph. A number of these programs have been tried. The most startling was Andrew Pargellis’s ‘Amoeba,’ begun in 1996. This was seeded only with a random block of computer code, 7 percent of which was randomly replaced every 100,000 steps (to simulate mutation). Pargellis found that about every 50 million steps a self-replicating segment of code appeared, simply as a result of the math on which the program was based. As Stewart put it, ‘Replication didn’t have to be built into the rules – it just happened.’51 Other surprises included symbiosis, the appearance of parasites, and long periods of stasis punctuated by rapid change – in other words, punctuated equilibrium much as described by Niles Eldredge and Stephen Jay Gould. Just as these models (they are not really experiments in the traditional sense) show how life might have begun, Stewart also quotes mathematical models suggesting that a network of neural cells, a ‘neural net,’ when hooked together, naturally acquires the ability to make computations, a phenomenon known as ‘emergent computation.’52 It means that nets with raw computational ability can arise spontaneously through the workings of ordinary physics: ‘Evolution will then select whichever nets can carry out computations that enhance the organism’s survival ability, leading to specific computation of an increasingly sophisticated kind.’53
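Here is that sketch: a deliberately crude toy world in the spirit of the games described, in which every rule and number (grid size, movement ranges, the camouflage probability, the run length) is an invented illustration rather than a reconstruction of any published program. Because slugs on green flowers escape notice half the time, the surviving population tends to shift toward green over many turns – selection emerging from simple rules.

```python
# A toy "artificial life" grid: one predator hunts slugs that sit on
# red (conspicuous) or green (camouflaged) flowers.
import random

SIZE = 100                                  # the 100 x 100 grid of the text

def clamp(v, lo=0, hi=SIZE - 1):
    return max(lo, min(hi, v))

# thirty slugs at random positions, each on a red or a green flower
slugs = [[random.randrange(SIZE), random.randrange(SIZE),
          random.choice(["red", "green"])] for _ in range(30)]
px, py = SIZE // 2, SIZE // 2               # predator starts in the centre

# a shorter run than the text's 10,000 moves, so some slugs survive to count
for turn in range(2000):
    # the predator may move up to five squares per turn
    px = clamp(px + random.randint(-5, 5))
    py = clamp(py + random.randint(-5, 5))
    survivors = []
    for sx, sy, colour in slugs:
        seen = abs(sx - px) <= 1 and abs(sy - py) <= 1
        # slugs on green flowers escape notice half the time
        if seen and not (colour == "green" and random.random() < 0.5):
            continue                        # seen and eaten
        # a slug can move only one square per turn
        survivors.append([clamp(sx + random.randint(-1, 1)),
                          clamp(sy + random.randint(-1, 1)), colour])
    slugs = survivors

print(f"{len(slugs)} slugs survive; "
      f"{sum(s[2] == 'green' for s in slugs)} of them are on green flowers")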