
Farewell to Reality


by Jim Baggott


  In this interpretation of the holographic principle, our three-dimensional world is an illusion. It is really a hologram, like the three-dimensional image of the sexy young woman who winks at you as you walk by. Reality is actually information stored on the boundary of the universe.

  This is Plato’s allegory of the cave in reverse. In that story, the prisoner perceived reality as a two-dimensional shadow projection of a three-dimensional ‘real’ world. The holographic principle says that our perceived three-dimensional reality is actually a projection from a two-dimensional hologram ‘painted’ on the boundary of the universe.

  This is where we find the source code of the cosmos.

  Holography and superstring theory

  The holographic principle might well have remained an interesting curiosity in information theory and a fascinating slice of metaphysics. But in 1998, Argentinian theorist Juan Maldacena announced a powerful new result. He showed that the physics described by a Type IIB superstring theory within an n-dimensional bulk spacetime is entirely equivalent to the physics described by low-energy quantum field theory applied to its (n-1)-dimensional boundary. This was a whole new superstring duality.

  What makes this duality extraordinary is that low-energy quantum field theory does not include gravity. Yet its dual superstring theory does.

  In his paper, Maldacena did not make explicit the connection between his result and the holographic principle. But just a few months later Witten posted a paper elaborating Maldacena’s result. On seeing Witten’s paper, Susskind understood that the black hole war had finally been won:

  Quantum field theory is a special case of quantum mechanics, and information in quantum mechanics can never be destroyed. Whatever else Maldacena and Witten had done, they had proved beyond any shadow of a doubt that information would never be lost behind a black hole horizon. The string theorists would understand this immediately; the relativists would take longer. But the war was over.13

  Indeed, Witten used this new duality to show that a black hole in the bulk spacetime is equivalent to a relatively mundane hot ‘soup’ of elementary particles, such as gluons, on the boundary surface.

  There are caveats. Maldacena had considered a model consisting of a stack of D-branes sandwiched together. D-branes are places where the ends of open strings ‘stick’. The open strings can wander all over the surface of a D-brane, but they can’t escape it. Closed strings, on the other hand, are free to wander through the bulk.

  The stack of D-branes forms a slab. Open strings can now wander over the different layers of the slab, and their ends may be located on a single layer or on different layers. When only low-energy configurations (low-mass strings) are considered, closed strings (and hence gravity) are eliminated and the model can be represented by a quantum field theory such as QCD, in which the open strings are gluons.

  Maldacena then changed perspective. We can consider the D-branes as surfaces on which open strings move about, but we can also consider them as physical entities in their own right, with their own energy and mass. Stack enough D-branes together and spacetime starts to warp, just as it does in the presence of any mass. Add more D-branes and we cross a threshold. We form a black brane. The curved spacetime created by the black brane has some singular properties. It is a so-called anti-de Sitter space (or AdS).

  In 1917, Dutch physicist Willem de Sitter solved Einstein’s gravitational field equations for a model universe empty of matter in which spacetime expands exponentially. We can think of such a universe as consisting only of dark energy, producing a positive cosmological constant. It therefore has positive curvature. As on the surface of a sphere, the angles of a triangle drawn in de Sitter space will add up to more than 180°.

  In an anti-de Sitter space, the cosmological constant is negative, the spacetime curvature is negative and the angles of a triangle add up to less than 180°.* This is a hyperbolic universe shaped more like a saddle. Inject matter into an anti-de Sitter universe and the curvature of spacetime causes it to be pushed away from the boundary and drawn towards the centre, even in the absence of a conventional gravitational field.
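
  The triangle test for curvature mentioned here can be stated exactly. For a geodesic triangle of area A drawn on a surface of constant curvature with radius of curvature R, a standard result (not given in the text) fixes the angle sum in each case:

```latex
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
  \quad \text{(positive curvature: sphere-like, de Sitter)}

\alpha + \beta + \gamma = \pi - \frac{A}{R^2}
  \quad \text{(negative curvature: saddle-like, anti-de Sitter)}
```

  In the flat (Euclidean) limit, R tends to infinity and both expressions reduce to the familiar 180°.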

  Closed strings moving through the bulk of this anti-de Sitter space are no different in principle from closed strings moving through the different D-brane layers in the first perspective. But closed strings that wander close to the boundary defined by the black brane appear very different. The curvature of anti-de Sitter space in the region of the boundary pushes against the closed strings, such that they appear to lose energy.

  This was Maldacena’s insight. These two apparently very different perspectives must describe the same physics. Open strings moving through a stack of D-branes (described equivalently in terms of a quantum field theory without gravity) must yield the same physics as low-energy closed strings (described by a Type IIB superstring theory with gravity) near the boundary of an anti-de Sitter space. The type of quantum field theory used is referred to as conformal field theory, or CFT. The relationship Maldacena had discovered is therefore called a CFT/AdS duality, more commonly written as the AdS/CFT correspondence.

  The caveat is that our own universe is not an anti-de Sitter space. With the accelerating expansion of spacetime and the consequent dilution of matter (both light and dark) that this implies, our universe will come in time to resemble a de Sitter, not an anti-de Sitter, universe.

  The black hole war might have been won, but Hawking did not immediately concede defeat. Eventually, on 23 April 2007, he paid out on a bet that he had made with Don Page. In 1980 the two had agreed a bet of $1 to £1 that, in essence, black holes destroy information. Hawking had expressed this mathematically through a structure he called the ‘$-matrix’. Now he declared: ‘I concede in light of the weakness of the $.’14

  The reality check

  Battles of wits between rival theorists will always make entertaining reading. In this case, there was an important physical principle at stake, and there can be no doubt that the black hole war helped to promote an important theoretical advance in the form of the holographic principle.

  Much of this chapter has been concerned with quantum information and black hole physics, with a consistent thread running through all the key episodes: information and entropy, black holes and the second law of thermodynamics, Hawking radiation, the black hole information paradox, black hole complementarity and the holographic principle, and finally the triumph of the CFT/AdS duality.

  We must now ask the awkward question: how can we tell if any of this theory is right?

  By definition, a black hole doesn’t give anything away. But we can infer its presence indirectly by the effect it has on visible matter in its neighbourhood. Consequently, there is some astronomical evidence for the existence of black holes.

  The accretion of a large quantity of in-falling gas causes the temperature of the gas to rise so high that the material surrounding the black hole emits energetic X-rays which can be detected. This is not to be confused with Hawking radiation, which is emitted by the black hole itself, independently of any in-falling material. Similarly, binary systems composed of a black hole and a companion star can emit X-rays as the stellar material is sucked towards the black hole’s event horizon. A strong X-ray source in the constellation Cygnus — called Cygnus X-1 — was one of the first such black hole candidates, with an event horizon thought to be just 26 kilometres in diameter.

  So-called ‘active galaxies’ have cores that are far more luminous, over at least some parts of the electromagnetic spectrum, than those of ordinary galaxies. Such activity is believed to result from the accretion of material by a supermassive black hole at the centre of the galaxy. The general consensus is that such supermassive black holes exist at the centres of most galaxies, including our own Milky Way.

  But whilst this indirect evidence certainly favours the existence of black holes, there is no astronomical data to tell us what they might actually be like. Consequently, we’re not in a position to verify the ‘no hair’ theorem. We can’t measure the temperature of a black hole, or its entropy. We can’t determine if black holes really do emit Hawking radiation (all black holes of a size likely to be indirectly detectable will in any case absorb more CMB radiation than they emit). We can’t tell if black holes really do evaporate, although the Fermi Gamma-ray Space Telescope is searching for telltale gamma-ray bursts from evaporating primordial black holes.

  In The Hidden Reality, Greene acknowledges this ‘fine print’:

  We think the answer to each of these questions [concerning information and black hole physics] is yes because of the coherent, consistent, and carefully constructed theoretical edifice into which the conclusions perfectly fit. But since none of these ideas has been subject to the experimenter’s scalpel, it is certainly possible (though in my view highly unlikely) that future advances will convince us that one or more of these essential intermediate steps are wrong.15

  The theorists are no doubt in the best position to judge these things. Few seriously doubt that Hawking radiation is a ‘real’ phenomenon. But it’s worth bearing in mind that a proper description of the quantum nature of black holes really demands a proper quantum theory of gravity. Hawking made great strides by applying general relativity to describe a black hole and then separately applying quantum field theory in the vicinity of the black hole’s event horizon. But who is to say what the application of a consistent theory of quantum gravity would yield?

  And, just to be clear, there are plenty of examples from the history of science where few theorists have doubted something that has ultimately proved to be quite wrong.

  What, then, of the role of quantum information in our description of the universe? What does ‘information is physical’ really mean? I believe there are two ways in which we can interpret this statement. One is scientific, the second is metaphysical.

  The scientific interpretation acknowledges that information is not much different from other physical quantities. But, as such, it is a secondary quality. It relies on the properties of physical objects, such as photons with different polarization states or electrons with different spin orientations. In this sense it is like heat or temperature, which is a secondary quality determined by the motions of physical objects. ‘Information is physical’ means that information must be embodied in a physical system of some kind and processing information therefore has physical consequences. Take the physical system away, and there can be no information.

  I obviously have no issue with this.

  The metaphysical interpretation suggests that information exists independently of the physical system, that it is a primary quality, the ultimate manifestation of an independent reality. ‘Information is physical’ then acknowledges that in our empirical reality of observation and measurement, information becomes dressed in a clothing of physical properties. This is a bit like suggesting that heat or temperature is the ultimate reality, existing independently but projected into our empirical world of experience in terms of the motions of physical objects.

  I have no real issue with this either, so long as we don’t pretend that it is science.

  Coherence vs correspondence

  I have another motive for including the tale of the black hole war in this book. Clearly, this is an area of theoretical physics to which we struggle to apply either the Testability or the Veracity Principle. As there is simply no observational or experimental data to which we can refer the theoretical predictions, we might conclude that this is a branch of theoretical physics that is necessarily speculative or metaphysical in nature.

  Clearly, the notion that the entire universe is a hologram projected from information encoded on its boundary belongs firmly in the bucket labelled ‘fairy-tale physics’.

  But there is a very strong sense that some considerable scientific progress has been made here. Debates have raged over real issues of principle, and these debates have been resolved, it would appear, by arriving at the ‘right’ answer. It really does seem that a singular truth has been established. This is not something that can be lightly dismissed.

  How come? If none of this theory is testable, and can’t be compared with observational or experimental facts, how has resolving the black hole information paradox helped to establish the ‘truth’?

  The answer is, I think, reasonably obvious after a moment’s reflection. The scientific method is premised on a correspondence theory of truth. This is what the Veracity Principle is all about. A scientific statement is held to be true if, and only if, it corresponds to facts that we can establish about empirical reality. This implies that there can be no scientific truth — no scientific right or wrong — without reference to facts about the external world.

  But truth, like reality, is a movable feast. There is another kind of truth. We can establish the internal logical consistency of a collection of sentences or propositions — or mathematical structures — independently of the facts. In this coherence theory of truth, we seek to establish right and wrong in relation to the theoretical system itself, irrespective of whether or not the theory can be tested by reference to the facts, and so irrespective of whether or not the theory itself is scientifically right or wrong.

  Now, of course this is not inconsistent with correspondence to the facts. We would rightly expect that scientific theories of empirical reality should be internally consistent and coherent. We might therefore conclude that a properly scientific theory must be both internally consistent and coherent and must make predictions that can be shown to correspond to the facts. But I think you can see how it is possible for theories to be developed which can be demonstrated to be internally consistent and coherent but which do not make predictions that can be tested.

  The simple truth is that the holographic principle was ‘proved’ not by reference to the observed behaviours of bits of quantum information and the observed physical properties of black holes, but by reference to a superstring duality; in other words, by reference to another theoretical structure that itself makes no testable predictions.

  The truth established in the black hole war is a ‘coherence truth’, not a ‘correspondence truth’, and there’s a big difference. If we stick rigidly to the definition of science that I outlined in Chapter 1 (and which may have seemed so very reasonable back then), then we must conclude that ‘coherence truth’ is in itself insufficient to qualify as scientific truth. The history of science is littered with examples of theories that have been demonstrated to be internally consistent and coherently true (and which few have doubted), but which have been shown to be ultimately worthless as scientific theories.

  Now, I am by no means suggesting here that the holographic principle should be considered worthless. There may well be applications of the principle in information theory that can be tested as techniques in quantum computing are developed.

  I guess all I’m really asking for is the exercise of a little scientific scepticism. When we read popular science books and articles and watch television documentaries that tell us that information is the ultimate reality and that the universe is a hologram, let’s just put all this into a proper perspective.

  Let’s stay focused on the nature of the ‘truth’ that is being communicated.

  * Or, as Neo pondered in a scene in The Matrix, perhaps there is no spoon.

  * Yes, yes, I know. Quarks and leptons come in three generations. Quark colour comes in three varieties — red, green and blue. Boy, you can be really pedantic at times!

  * Named for Ronald Rivest, Adi Shamir and Leonard Adleman. The RSA algorithm is a form of public-key encryption.

  * By ‘microscopic’, I mean a state defined by the individual positions and speeds of the molecules. A macroscopic state is then defined by averaging over all the different microscopic state possibilities.

  * In the American Standard Code for Information Interchange (ASCII) system, the letter ‘s’ (for example), transcribes as the bit string 1110011.
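
  The ASCII claim in this footnote is easy to verify. A one-line check in Python (my illustration, not the book’s):

```python
# ASCII assigns the letter 's' code point 115; in 7-bit binary
# that is the bit string quoted in the footnote.
bits = format(ord('s'), '07b')  # '07b' pads to the full 7 bits
print(bits)  # → 1110011
```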

  * A high probability (p) has a consequently low inverse probability (1/p) and the logarithm of this will be small.
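
  This is Shannon’s self-information, which can be made concrete with a short sketch (the function name is mine, not standard terminology from the text):

```python
import math

def self_information(p):
    """Shannon self-information in bits: the base-2 logarithm of 1/p."""
    return math.log2(1 / p)

# A near-certain event carries almost no information;
# a rare event carries a lot.
print(self_information(0.99))  # small, close to zero
print(self_information(0.01))  # large, several bits
```

  A fair coin flip, with p = 0.5, carries exactly one bit of information, which is where the unit comes from.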

  ** The entropy of Shakespeare? I worked as a post-doctoral researcher at Oxford University in the early 1980s. In those days, analysing experimental data typically required a Fortran program running on a mainframe computer. I’d occasionally get irritated by the fact that some students were hogging the teletype machines in the computer centre. Bafflingly, they seemed to be typing the entire works of Shakespeare into a computer program. I learned later that they were using the program to determine the entropy of Shakespeare.

  * This is known as the Chandrasekhar limit, named for Indian physicist Subrahmanyan Chandrasekhar.

  * Hawking demonstrated that the entropy of a black hole is proportional to one quarter of the black hole surface area, measured in units defined by the Planck area (the square of the Planck length), about 0.26 billionths of a trillionth of a trillionth of a trillionth of a trillionth of a trillionth of a square metre (2.6 × 10⁻⁷⁰ m²).

  ** Although this is very small. A black hole of 1 M☉ (one solar mass) is expected to have a temperature just 60 billionths of a degree above absolute zero. It would actually absorb more cosmic microwave background radiation than it emits.
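
  The figures quoted in these two footnotes can be checked with a few lines of Python. This is only a sketch using the standard Bekenstein–Hawking entropy and Hawking temperature formulas with textbook SI constants; the function names are mine, not Baggott’s:

```python
import math

# Physical constants (SI units, approximate textbook values)
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
k_B = 1.381e-23    # Boltzmann constant
M_sun = 1.989e30   # solar mass in kg

def schwarzschild_radius(M):
    """Event horizon radius r = 2GM/c^2 for a mass M in kg."""
    return 2 * G * M / c**2

def bh_entropy(M):
    """Bekenstein-Hawking entropy S/k_B = A / (4 * Planck area)."""
    A = 4 * math.pi * schwarzschild_radius(M)**2
    planck_area = hbar * G / c**3   # ≈ 2.6e-70 m², as in the footnote
    return A / (4 * planck_area)

def hawking_temperature(M):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))  # ≈ 6e-8 K, i.e. about 60 nK
print(bh_entropy(M_sun))           # an enormous number, of order 1e77
```

  The temperature comes out at roughly 60 billionths of a kelvin, far below the 2.7 K of the cosmic microwave background, which is why such a black hole absorbs more radiation than it emits.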

  * Now named the Kavli Institute for Theoretical Physics, in recognition of a substantial donation to the institute from Norwegian-born Fred Kavli, a physicist, inventor, entrepreneur and philanthropist.

  * For the purists, the information in the n-dimensional space is isomorphic with information contained in its (n - 1)-dimensional bounding surface.

  * Recall that Euclidean geometry, in which the angles of a triangle add up to precisely 180°, is based on the assumption of a ‘flat’ spacetime.

  11

  Ego Sum Ergo Est

  I Am Therefore It Is: the Anthropic Cosmological Principle

  A theorist goes astray in two ways: 1. The devil leads him by the nose with a false hypothesis. (For this he deserves our pity.) 2. His arguments are erroneous and sloppy. (For this he deserves a beating.)

 
