Why String Theory?


by Joseph Conlon


  Colliding particles always have an axis along which the particles approach each other. Hard scattering occurs when particles emerge both at large energies and large angles from the collision axis. Its opposite is soft scattering, where the particles continue mostly on their original paths and are only deflected by small amounts. Hard scattering is also the distinctive feature of point-like scattering. If a BB gun is fired at jelly, the shot will pass through in the general direction it was fired. However if the same gun is fired towards a set of marbles, the shot may ricochet at large angles.

  In a previous age, it had been the existence of hard scattering that had enabled Ernest Rutherford to discover in 1911 that the atomic charge was concentrated within a nucleus, rather than being diffusely smeared throughout the atom in the plum pudding model. Rutherford had fired energetic particles at a thin sheet of metal and had seen some of them bouncing back. It was this (surprising) observation of hard scattering that forced Rutherford to the conclusion that the atom contained an almost point-like nucleus:

  It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.

  Rutherford’s story was now being relived almost sixty years later. It was the surprising observation of hard scattering in strong interaction events that led the physicists of the 1960s and 1970s to the conclusion that the atomic nucleus itself was also made up of point-like components – partons or quarks – and point-like objects are not string-like objects.3

  Morally, the string model of the strong force was like the plum-pudding model of the atom – a model in which ‘charge’ was smeared out across an extended region. This implied a distinctive property of the Veneziano formula and its relatives – as you increased the energy of collisions, scattering would become softer, and in the limit of extremely high energy, hard scattering events would become extremely unlikely. However, as the collision energies were increased throughout 1972 and 1973, the data told the reverse story. Furthermore, the data appeared precisely consistent with the new theory of quantum chromodynamics – and as more data came in, the characteristic predictions of quantum chromodynamics became better and better fits to this data. Even before the results became decisive, the scent of the right theory was picked up. Leonard Susskind recalls,

  At a physics conference [in 1974] I asked, ‘You people, I want to know your belief about the probability that QCD is the right theory of hadrons.’ I took a poll. Nobody gave it more than 5 percent. Then I asked, ‘What are you working on?’ QCD, QCD, QCD.

  The data had surged in one direction, while the string theory of strong interactions – Dual Resonance models – was pointed along the opposite axis. String theory as an account of the strong interactions received a final coup de grâce: the new theory of quantum chromodynamics could also explain why string-like models had succeeded in giving an approximate description of strong interactions. In quantum chromodynamics, the force lines of the strong interaction did indeed tend to bunch into strings, in a similar way to how the magnetic field lines of electromagnetism bunch together. The baryons and mesons of the strong interaction, which the Veneziano model had been created to describe, did have an approximate description as points of strong charge bound together by tubes of strong force flux – and so resembling relativistic strings under tension. This explained why the string theory of strong interactions had captured certain features of the data but also could never be more than an approximation to the true theory.

  As measured by the number of papers published, interest in string theory died as rapidly as it had risen. In 1974, as Abba won the Eurovision Song Contest with ‘Waterloo’, Johan Cruyff and the Netherlands dazzled the world with Total Football, and Gerald Ford replaced Richard Nixon as president, a question to most physicists as to what string theory was, and what it was for, would have received a bleak answer: string theory was a failed theory of the strong interactions, and it was good for nothing.

  5.3 THE WILDERNESS YEARS

  However, for string theory this was not the end – nor even the beginning of the end. With hindsight, it was only the end of the beginning. In the period from 1968 to 1973, the attraction of the Veneziano amplitude, the Dual Resonance models, and subsequently string theory was not simply the chance that they might provide a description of the strong interactions. Another attraction was the realisation that there was an underlying structure that was at once both interesting and non-trivial. There was clearly something there. This happens sufficiently infrequently in physics that, when it does, people pay attention. Most of physics is about modifying or exploiting known theories in known ways, in the hope that the modification can explain data that is not otherwise explained. The appearance of genuinely new structures or new theories is not so common.

  It is worth digressing here about what is meant by a theory or a structure. The terminology is loose, and I do not intend to be doctrinaire, but I do want to avoid some misunderstandings. What is a theory not? It is not just a guess, whether simple or educated, as in ‘my theory for why the bucket is in the bedroom and not in the kitchen is that one of the children was using it as a policeman’s hat’. Neither is it only a possible explanation for an experimental anomaly, to be resolved by more data – ‘model’ is the term used here. It is more a chain of ideas and equations that is glued together by some combination of physical and mathematical reasoning. By itself, neither mathematics nor logic is enough. A purely logical chain tends to be far too brittle – if a single step fails, then the entire argument collapses. A good physical theory is more robust, being able to survive errors of thought and interpretation to still give useful results.

  It may seem a bit paradoxical to say that logic is insufficient. After all, is not logic the principal component of clear thinking and good argument? Actually, beyond a certain elementary level the answer is no. The rules of logic are to good science what the rules of prose as taught at primary school are to good writing. Why is this so? As an example, consider the sum

  4 × (1 − 1/3 + 1/5 − 1/7 + 1/9 − 1/11 + …)

  If you evaluate this sum for the first one hundred thousand terms, you will find that it equals 3.14158. We also remember that π starts its decimal expansion as 3.14159. Now, there is no logical reason why a series whose first hundred thousand terms sum to a number looking suspiciously like π should have anything to do with π. As far as logic is concerned, the fact that the partial sums appear to be tending towards π could be nothing more than an accident.
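
  As a quick check of the quoted value, here is a minimal sketch in Python (mine rather than the book’s, and assuming the series written above) that adds up its first one hundred thousand terms; the names are purely illustrative.

    # Partial sum of 4 * (1 - 1/3 + 1/5 - 1/7 + ...) over its
    # first one hundred thousand terms.
    total = 0.0
    for k in range(100_000):
        total += (-1) ** k / (2 * k + 1)
    print(4 * total)  # about 3.14158, tantalisingly close to pi = 3.14159...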

  Now, in fact this series does converge exactly to π, and this can be shown with the full panoply of formal pure mathematics. However, one should not require this to give serious credence to the idea that the series does indeed sum to π. The numerical coincidence should be sufficient motivation to devote time to investigating further the properties of the sum. While this was a deliberately simple example, the general (true) lesson is that patterns should attract attention, even if they have no strict logical significance.

  The everyday life of theoretical physics is full of analogous examples, and it is not formal logic that is used to buttress the conclusions. Instead, it is supporting evidence, the use of multiple different lines of argument to obtain the same result, the presence of unexpected cancellations and simplifications, and the occurrence of Goldilocks calculations where everything turns out ‘just right’.

  One of the principal attractions of first Dual Resonance models and then string theory was the feeling that a Goldilocks structure had been encountered. Although the direction it led was unclear, the consistency of the theory was ‘just right’. Calculations worked – but only just, imposing consistency requirements that could be satisfied with nothing to spare. It was this sense that kept people working on string theory even when it was clear it had failed as a theory of the strong interactions. John Schwarz describes this period:

  [We] felt that string theory was too beautiful to be just a mathematical curiosity. It ought to have some physical relevance. We had frequently been struck by the fact that string theories exhibit unanticipated miraculous properties. What this means is that they have a very deep mathematical structure that is not fully understood. By digging deeper one could reasonably expect to find more surprises and then learn new lessons.

  While the majority answer in 1974 to ‘What is string theory?’ would then have been ‘A failed theory of the strong interactions’, to some the answer was ‘An intriguing framework that still needs exploring’. To many of those who had worked on the subject from 1968 to 1973, something had been found – and the question was what that something was. The answers in 1974 were fragmentary. The ‘something’ was certainly a theory of strings. The Dual Resonance models had been successfully reinterpreted as arising from a theory of quantised strings. The scattering amplitudes that arose from the Dual Resonance models were those of the scattering of strings. The ‘something’ also appeared to involve more dimensions than the ordinary four, as these original string theories, called bosonic string theories, required a total of twenty-six spacetime dimensions for consistency.

  However, many puzzles still existed. Not the least of these puzzles was the fact that string theories always contained an unphysical particle, called a tachyon. The equations for a tachyon naively make it appear to be a particle travelling faster than the speed of light. This is not, however, the right way to think about it. Really, a tachyon signals an instability in a theory, like the case of a ball balanced precariously at the top of a mountain. Given the slightest perturbation, the ball will roll down and away. The only thing unphysical about this ball is the assumption that its position is a permanent state of affairs. Its location is perfectly physical, just unstable – and the ball will not long survive there.
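
  In the language of quantum field theory (a standard illustration, not something taken from this book), a tachyon is a particle whose mass-squared is negative, so that the configuration it sits at is a maximum of the potential energy rather than a minimum:

    V(\phi) \;=\; \tfrac{1}{2}\, m^{2} \phi^{2}, \qquad m^{2} < 0 ,

  making the point φ = 0 the top of the mountain in the analogy above: any small disturbance grows rather than dying away, which is exactly the instability described here.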

  The presence of the tachyon indicated that the bosonic string was also unstable. Despite all efforts, no direct solution to this problem could be found, until in the early 1970s first Pierre Ramond, and then André Neveu and John Schwarz, made a modification to the bosonic string equations to create the superstring. The essence of the modification was to introduce additional fermionic particles into the theory. The modification also crucially required an additional symmetry – supersymmetry, which has subsequently played an important role in models of particle physics.

  Compared to the bosonic string, the equations and properties of the superstring were similar but better. In the superstring, the tachyonic particle and its instability were absent – although this was not something that was entirely obvious at first. Furthermore, technical reasons implied that the critical number of spacetime dimensions also changed – the Goldilocks number was now ten rather than twenty-six. The superstring also contained in its spectrum a massless particle with two units of intrinsic quantum angular momentum. These were known to be the properties of the graviton, the hypothesised particle that plays the same role for gravity as the photon plays for electromagnetism. In 1973 and 1974 Tamiaki Yoneya, and separately Joel Scherk and John Schwarz, showed that the interactions of this stringy particle were indeed the same as those of the graviton of general relativity.

  With hindsight, these properties were pointing towards the reinterpretation of string theory as a candidate theory of quantum gravity rather than a candidate theory of the strong interactions. However, this change of perspective did not come easily. As happens frequently in science, the change was preceded by a period of first trying to force the theory into the answers that were desired, rather than looking at what the theory actually said. For example, the change in perspective also involved an enormous change in energies. Strings have a tension. Tension is measured in energy per unit length. The shift in target from strong interactions to gravitational interactions involved a shift in the magnitude of this tension by a mere factor of one hundred billion billion billion billion, or 10³⁸.
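
  To see roughly where this factor comes from (a back-of-the-envelope estimate rather than a calculation from the book; the tension labels below are introduced purely for illustration): in natural units a tension is an energy squared, the hadronic strings of the Veneziano model need a tension set by the typical hadronic scale of around 1 GeV, while a string whose massless spin-two state is to describe gravity needs a tension set by the Planck scale of roughly 10¹⁹ GeV:

    \frac{T_{\text{gravity}}}{T_{\text{hadron}}} \;\sim\; \frac{(10^{19}\ \mathrm{GeV})^{2}}{(1\ \mathrm{GeV})^{2}} \;=\; 10^{38}.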

  The proposal that string theory should be viewed as a fundamental theory of gravity was first put in print by Scherk and Schwarz in 1974. While this represents the modern view, this proposal was not received as if it had come down from Mount Sinai. The development of a quantum theory of the gravitational force is the historical problema di tutti i problemi of theoretical physics: string theory had already failed once at a simpler task, and there was no reason to expect success simply through doubling down and multiplying the stakes.

  In any case, few had motivation to care. The 1970s was also the decade of the triumph of the Standard Model. Its basic structure was confirmed and reconfirmed as new particles were discovered – the charm quark in 1974, the tau lepton in 1975, the bottom quark in 1977, and the gluon, the force carrier of the strong interaction, in 1979. The predictions of the Standard Model were verified in experiment after experiment. As energies increased, jets of strongly interacting particles – a key prediction of quantum chromodynamics – were observed in the aftermath of particle collisions. Such was the confidence in the success of the Standard Model that at the end of the decade, some of its developers – Sheldon Glashow, Abdus Salam and Steven Weinberg – were awarded the 1979 Nobel Prize, even though it would still be another four years before the force carriers of the weak interactions, the W and Z bosons, were discovered in 1983.

  For theorists who had stuck with quantum field theory through the 1960s, this was a period when everything they touched turned to gold. Emboldened by their prior success, their thoughts started turning upwards. Was there a unification principle behind the Standard Model? At large enough energies, could the forces of the Standard Model simply be different aspects of the same force? If this were true, all the interactions of the Standard Model would be merely different facets of a single theory. Many (grand) unification models were proposed using quantum field theory, starting with a 1974 paper by Howard Georgi and Sheldon Glashow of Harvard.4 These theories predicted that the proton, the particle at the centre of the hydrogen atom and therefore necessary for life as we know it, was unstable, decaying with a lifetime vastly longer than the age of the universe. Ambitious experiments were proposed to look for these predicted decays, with the expectation of imminent success. For those who worked on understanding and predicting experimental results, this was a time of triumphalism.

  Amidst an atmosphere of total indifference a few people continued research on string theory throughout the 1970s.5 It was not a fashionable topic and string theory was not an easy subject to make a career in. However, those who remained had a free choice of problems. In 1988 the British physicist Michael Green – now recently retired from Isaac Newton’s old professorship, the Lucasian Chair of Mathematics at Cambridge – recalled this era as follows:

  In a sense life was very nice in those days because particle physics is generally a very competitive subject and it was just nice to be working on something that we could take at our own pace without feeling pressurised.

  String theory in the later 1970s was not about experimental data. Instead, it was about as far removed from data as it was possible to be while still remaining within the umbrella of science. It was not even really a theory. It was more an embryonic hope budding inside a chrysalis. It was a hope that this poorly understood set of ideas, once the many uncertainties that still accompanied it were resolved, might just possibly give a quantum account of the gravitational force.

  However, even if this best-case scenario were true, it seemed entirely implausible that there would ever be any way to test it, given the enormous energy scales involved in quantum gravity. In short, string theory in the 1970s was theory for theorists. The problem it was concerned with had almost no empirical consequences. The solution, if found, appeared entirely untestable.

  There is nothing wrong with this. Scientists are different, and different scientists are good at different things. Some problems require years of closeted monk-like attention, while other problems require an obsessive focus on the latest data. Some problems can be elegantly solved with pen and paper, whereas others can only be tackled by techie code wizards harnessing hundreds of computers. Some scientists want to dominate others, intellectually and socially, while others prefer to walk their own path in their own way. The belief in a single approach to good science results in only a single style of science getting done.

  So string theory research in the late 1970s and early 1980s – in so far as it existed at all – consisted of untangling and understanding the structure of the theory. It was also clearly a topic on the boundaries of what was respectable. In the meantime, several new theoretical ideas appeared and commanded much attention. Although not obvious at the time, it would subsequently turn out that much of this apparent mainstream would end up as tributaries that fed into string theory.

  5.4 HARBINGERS OF GREAT JOY

  One mainstream idea of the 1970s was that of supersymmetry, which over the course of the decade went from nothing to become a large part of theoretical physics. We have already seen in chapter 3 the importance attached to symmetries in physics. There are two basic kinds of particles, bosons and fermions. The distinguishing feature of supersymmetry is that it transforms bosons into fermions and fermions into bosons. Under this symmetry, the interactions and masses of bosonic particles are related to those of fermionic particles. In a sense that can be made precise, this symmetry relating bosonic and fermionic particles can also be viewed as an extension of the relativistic symmetries of space and time – and in this same sense, it is also true that supersymmetry is not just an extension but the extension. Supersymmetry is the only possible extension of the spacetime symmetries of relativity.

 
