
Farewell to Reality


by Jim Baggott


  In this hypothesis, however, Tegmark argues that the effectiveness of mathematics is not at all unreasonable, or even surprising.

  First, some groundwork. The hypothesis is premised on what Tegmark calls the External Reality Hypothesis (ERH). There exists an external physical reality that is completely independent of human beings.

  This is hardly radical. The ERH defines what it means to be a realist, though as I have explained, there can be no observational or experimental verification for such a hypothesis. The assumption of an independent reality is really an act of faith, or, if this is a tad too theological for your taste, an act of metaphysics.

  What is radical is what Tegmark concludes from the ERH: ‘The Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.’3

  What exactly is this supposed to mean?

  Imagine strolling barefoot along a beach late one afternoon. The sun is arcing towards the distant horizon, and we’re killing time picking up interesting-looking shells that happen to cross our path. You pick up three shells and add these to the four you already have in your pocket. How many shells do you now have? Easy. You know that three plus four equals seven.

  A mathematical structure consists of a set of abstract entities and the relations that can be established to exist between these. In our beachcombing example, the abstract entities are integer numbers and the relations consist of addition, subtraction, etc. These are the relations of ordinary algebra.

  The idea of numbers and the relationships between them is so commonplace that we often don’t think of it as ‘abstract’ at all. But, of course, in our real-world beachcombing scenario, there appear to be only shells. The idea that there are numbers of shells and that these can be added or subtracted in a logical fashion is an additional structure that is arguably inherent in our empirical reality of shells. In the MUH, Tegmark argues that when we strip away reality’s empirical dressing (the shells, me, you, the beach and everything else), what we are left with is a universe defined solely by the abstract entities and their relations — the numbers and the algebra.

  In fact, he goes further. Our tendency is to deploy different mathematical structures as appropriate in an attempt to describe our single (empirical) universe. Tegmark argues that the structure is the universe. Every structure that can be conceived therefore describes a different (parallel) universe.

  This is not an entirely new vision. The ancient Greek philosopher Plato argued that the independent reality of things-in-themselves consists of perfect, unchanging ‘forms’. The ‘forms’ are abstract concepts or ideas such as ‘goodness’ and ‘beauty’, accessible to us only through our powers of reason. Our empirical reality is then an imperfect or corrupt projection of these forms into our world of perception and experience. If we think of the ‘forms’ as abstract mathematical structures, then Tegmark’s MUH is a kind of radical Platonism. As Tegmark himself explained in an interview with Adam Frank for Discover magazine:

  Well, Galileo and Wigner and lots of other scientists would argue that abstract mathematics ‘describes’ reality. Plato would say that mathematics exists somewhere out there as an ideal reality. I am working in between. I have this sort of crazy-sounding idea that the reason why mathematics is so effective at describing reality is that it is reality. That is the mathematical universe hypothesis: Mathematical things actually exist, and they are actually physical reality.4

  This kind of logic obviously begs all sorts of questions. One possible counter-argument is that mathematical structures are not independently existing things. Rather, they are actually human inventions. They are systems of logic and reasoning with concepts, rules and language that we have devised and which we find particularly powerful when used to describe our physical world. By associating mathematics with the human mind in this way, we conclude that in a universe with no minds there can be no mathematics — mathematics is not an independently existing thing that minds have a knack of ‘discovering’.

  You might be inclined to conclude that this is really all just some kind of philosophical nit-picking, and I would be tempted to agree with you. But Tegmark claims that the MUH is testable. If mathematical structures exist independently of the things they are used to describe, then physicists can expect to continue to uncover more and more mathematical regularities. And if our universe is but one in what Tegmark calls the ‘Level IV’ multiverse, then we can test this by determining the statistical likelihood of a universe described by the mathematical structure that prevails compared with universes described by other structures.

  I, for one, don’t find this very convincing. You can come to your own conclusions. Despite his claims of testability, Tegmark himself seems to acknowledge that it is really all philosophical speculation. In an apparent throwaway remark towards the end of the interview with Discover, he comments that his wife, a respected cosmologist, ‘makes fun of me for my philosophical “bananas stuff”, but we try not to talk about it too much’.

  Now that sounds like good advice.

  Quantum information

  The second possibility is that the basic stuff of the universe is information. So how is this supposed to work?

  This logic is driven by the observation that the physical world appears to consist of opposites. We have positive and negative, spin-up and spin-down, vertical and horizontal, left and right, particle and anti-particle, and so on. Once again, if we strip away the empirical dressing, such as charge, spin, etc., what we are left with is a fundamental ‘oppositeness’. An elementary ‘on’ and ‘off’.

  Or, alternatively, an elementary ‘0’ and ‘1’.*

  In one of the simplest mathematical structures we can devise (or discover, depending on your point of view), the abstract entities are ‘bits’ which have one of only two possible values, 0 or 1. As most readers will be aware, these are the basic — so-called ‘binary’ — units of information used in all computer processes.

  But now here’s a twist. Classical bits have the values 0 or 1. Their values are one or the other. They cannot be added together in mysterious ways to make superpositions of 0 and 1. However, if we form our bits from quantum particles such as photons or electrons, then curious superpositions of 0 and 1 become perfectly possible. Such ‘quantum bits’ are often referred to as ‘qubits’. Because we can form superpositions of qubits, the processing of such quantum information works very differently compared with the processing of classical information.

  Suppose we have a system consisting of just two classical bits. The bits have values 0 or 1, so there are four (or 2² = 2 × 2) different possible ‘bit strings’. Both bits could have the value 0, giving the bit string 00. There are two possibilities for the situation where one bit has the value 0 and one has the value 1: 01 and 10. Finally, if both bits have the value 1, the bit string is 11.

  If we extend this logic to three classical bits, then we anticipate eight (or 2³ = 2 × 2 × 2) different possible bit strings: 000, 100, 010, 001, 110, 101, 011 and 111. We could go on, but the process quickly becomes tedious and in any case you get the point. A system of n classical bits gives 2ⁿ different possible bit strings.
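
  As a quick check of this counting argument, here is a minimal sketch in Python (my own illustration, not anything from the book) that simply enumerates the bit strings:

```python
# Enumerate every classical bit string of length n; there are 2**n of them.
from itertools import product

def bit_strings(n):
    """Return all 2**n classical bit strings of length n."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bit_strings(2))       # ['00', '01', '10', '11'] -> four strings
print(len(bit_strings(3)))  # 8 -> the eight strings listed above
```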

  But a system of two or three classical bits will form only one of these bit strings at a time. In a system consisting of two or three qubits, we can form superpositions of all these different possible combinations. The physical state of the superposition is determined by the amplitudes of the wavefunctions of each qubit combination, subject to the restriction that the squared moduli of the amplitudes sum to 1 (measurement can give one, and only one, bit string). This means that the state of a superposition of n qubits is described by 2ⁿ amplitude factors.
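
  In symbols (a standard way of writing this, added here as a gloss rather than quoted from the book), the state of n qubits assigns one amplitude to each of the 2ⁿ bit strings:

```latex
% One amplitude a_x for each of the 2^n bit strings x; the squared moduli sum to 1,
% since a measurement returns one, and only one, bit string.
|\psi\rangle = \sum_{x \in \{0,1\}^n} a_x\,|x\rangle ,
\qquad \sum_{x \in \{0,1\}^n} |a_x|^2 = 1
```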

  Here’s where it gets interesting. If we apply a computational process to a classical bit, then the value of that bit may change — the bit string changes from one possibility to another. For example, in a system with two bits, the string may change from 00 to 10. But applying a computational process to a qubit superposition changes all 2ⁿ components of the superposition simultaneously. An input superposition yields an output superposition. This is important. When we apply a computation to a single input in a classical computer, we get a single output. In a quantum computer we get, in effect, an exponential number of computations in the same amount of time.
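
  A rough numerical sketch of this point (Python with NumPy; the two-qubit example and the choice of gate are my illustration, not the book’s): a quantum operation is a matrix acting on the whole vector of amplitudes, so a single application of it updates every component at once.

```python
# A gate on n qubits is a unitary matrix acting on the full 2**n-element
# amplitude vector, so one application changes all components together.
import numpy as np

n = 2                                          # two qubits -> 4 amplitudes
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
U = np.kron(H, H)                              # the same gate applied to both qubits

state = np.zeros(2**n)
state[0] = 1.0                                 # start in the bit string 00

state = U @ state                              # one step updates all 4 amplitudes
print(state)                                   # [0.5 0.5 0.5 0.5]: an equal superposition
```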

  But, hang on. What about the collapse of the wavefunction? Isn’t it the case that when we make a measurement to discover what the qubit string actually is, we lose all the other components of the superposition? Yes, this is true. However, by exploiting quantum interference effects between different computational paths, we can fix it so that the probability of observing the correct bit string (i.e. the string that represents the logically correct result of the computation) is enhanced and all the other bit strings are suppressed.

  Exponentially scaling up the output of a computation sounds vaguely like a good thing to do, but if we’re going to discard most of the possible results then where’s the benefit? But the fact is that we don’t ‘discard’ the other results. We set up the input superposition so that the computation proceeds exponentially and the amplitudes of the components in the output superposition are ‘concentrated’ around the logically correct result.

  It’s difficult to comprehend just what this means without some examples. The prospects for quantum computation were set out by Oxford theorist David Deutsch in 1985, but it took almost ten years for computer scientists to develop algorithms based on its mechanics.

  In 1994, American mathematician Peter Shor devised a quantum algorithm for finding the prime factors of large numbers. Three years later, Indian-born American computer scientist Lov Grover developed a quantum algorithm that can search an unsorted database in roughly the square root of the time required by a classical computer.
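
  To give a feel for where that square-root saving comes from, here is a minimal numerical sketch of Grover-style amplitude amplification (Python with NumPy; the three-qubit example, the chosen target and the iteration count are my illustration, not anything from the book). The interference steps described above concentrate the amplitude on the ‘correct’ bit string:

```python
# Grover-style amplitude amplification: starting from a uniform superposition,
# an 'oracle' sign-flip plus a reflection about the mean amplitude boosts the
# target bit string and suppresses the rest. About (pi/4)*sqrt(N) rounds suffice.
import numpy as np

n = 3                                   # 3 qubits -> N = 8 bit strings
N = 2**n
target = 5                              # index of the 'correct' string, 101

state = np.full(N, 1/np.sqrt(N))        # uniform superposition over all strings

for _ in range(2):                      # ~ (pi/4)*sqrt(8) ~ 2 iterations
    state[target] *= -1                 # oracle: flip the sign of the target amplitude
    state = 2*state.mean() - state      # diffusion: reflect amplitudes about their mean

print(np.round(state**2, 3))            # probability is now ~0.95 on index 5
```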

  These examples may not sound particularly earth-shattering, but don’t be misled. The cryptographic systems used for most internet-based financial transactions (such as the RSA algorithm)* are founded on the simple fact that factoring large integer numbers requires a vast amount of computing power and is regarded as virtually impossible with conventional computers. For example, it has been estimated that a network of a million conventional computers would require over a million years to factorize a 250-digit number. Yet this feat could in principle be performed in a matter of minutes using Shor’s algorithm on a single quantum computer.5

  Excitement is building. There have been many recent reports of practical, though small, laboratory-scale quantum computers. Entangled quantum states and superpositions are extremely fragile and can decohere very quickly, so any system relying on the constant establishment (and collapsing) of entangled states has to be operated in a very carefully managed environment.

  In February 2012, researchers at IBM announced significant technological advances which bring them ‘very close to the minimum requirements for a full-scale quantum computing system as determined by the world-wide research community’.6

  Stay tuned.

  Information and entropy

  Advances in quantum computing make the subject of quantum information both fascinating and important. But other than notions that the universe could be considered to be one vast quantum computer, they don’t get us any closer to the idea that information might be the ultimate reality.

  Indeed, we might be inclined to dismiss this idea for the same reason we might have dismissed the MUH. We could argue that the concept of information, like mathematics, is an abstraction based on the fundamental properties of the empirical entities of the material universe. Quantum particles have properties that we can interpret in terms of information. But, we might conclude, quantum information cannot exist without quantum properties. Which came first? The properties or the information? The chicken or the egg?

  But there is one relationship that might cause us to at least pause and think before concluding that only physical properties can shape physical reality. It lends some credibility to the notion that information itself might be considered as a physical thing.

  This is the relationship between information and entropy.

  Entropy is a thermodynamic quantity that we tend to interpret as the amount of ‘disorder’ in a system. For example, as a block of ice melts, it transforms into a more disordered, liquid form. As liquid water is heated to steam, it transforms into an even more disordered, gaseous form. The measured entropy of water increases as water transforms from solid to liquid to gas.

  The second law of thermodynamics claims that in a spontaneous change, entropy always increases. If we take a substance — such as air — contained in a closed system, prevented from exchanging energy with the outside world, then the entropy of the air will increase spontaneously and inexorably to a maximum as the air reaches equilibrium. It seems intuitively obvious that the oxygen and nitrogen molecules and trace atoms of inert gas that make up the air will not all huddle together in one corner of the container. Instead, the air spreads out to give a uniform air pressure. This is the state with maximum entropy.

  Unlike other thermodynamic quantities, such as heat or work, entropy has always seemed a bit mysterious. The second law ties entropy to the ‘arrow of time’, the experience that despite being able to move freely in three spatial dimensions — forward-back, left-right, up-down — we appear obliged to follow time in only one direction — forwards. Suppose we watch as a smashed cocktail glass spontaneously reassembles itself, refills with Singapore sling and flies upwards through the air to return to a guest’s fingers. We would quickly conclude that we’re watching a film of these events playing backwards in time.

  Austrian physicist Ludwig Boltzmann established that entropy and the second law are essentially statistical in nature. Molecules of air might be injected into one corner of the container, but they soon discover that this system has many more microscopic physical states available than the small number accessible to them huddled in the corner.* Statistically speaking, there are many more states in which the air molecules move through all the space available in the container than there are states in which the molecules group together.

  Another way of putting this is that the macroscopic state with a uniform average distribution of molecular positions and speeds is the most probable, simply because there are so many more microscopic states that contribute to the average. The air molecules expand in the container from a less probable to a more probable macroscopic state, and the entropy increases. Boltzmann discovered that the entropy is proportional to the logarithm of the number of possible microscopic states that the system can have. The higher the number of these states, the higher the probability of the macroscopic state that results.
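
  In symbols (the standard statement of Boltzmann’s result, added here as a gloss rather than quoted from the book):

```latex
% Boltzmann's entropy formula: S grows with the logarithm of the number W of
% microscopic states compatible with the macroscopic state; k_B is Boltzmann's constant.
S = k_{\mathrm{B}} \ln W
```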

  Note that this is all about statistical probabilities. There is in principle nothing preventing all the air molecules in my study from suddenly rushing into one corner of the room, causing me to die from asphyxiation. It’s just that this macroscopic state of the air molecules is very, very improbable.

  There’s yet another way of thinking about all this. Suppose we wanted to keep track of the positions and velocities of molecules in a sample of water. This is obviously a lot easier to do if the water is in the form of ice, as the molecules form a reasonably regular and predictable array with fixed positions. But as we heat the ice and convert it eventually to steam, we lose the ability to keep track of these molecules. The molecules are all still present, and we can use statistics to give us some notion of their average speeds, but we can no longer tell where every molecule is, where it’s going or how fast.

  Now imagine we could apply a similar logic to one of the great soliloquies from Shakespeare’s Macbeth. From Act V, scene v, we have:

  She should have died hereafter;

  There would have been a time for such a word.

  Tomorrow, and tomorrow, and tomorrow,

  Creeps in this petty pace from day to day,

  To the last syllable of recorded time;

  And all our yesterdays have lighted fools

  The way to dusty death. Out, out, brief candle!

  Life’s but a walking shadow, a poor player

  That struts and frets his hour upon the stage

  And then is heard no more. It is a tale

  Told by an idiot, full of sound and fury,

  Signifying nothing.7

  Let’s suppose we can ‘heat’ this soliloquy. At first, the passage melts and the words lose their places in the structure — ‘And syllable but shadow a frets sound all …’ Eventually, the words come apart and transform into a soup of individual letters — ‘s’, ‘t’, ‘A’, ‘n’, ‘e’… But the letters of the English alphabet can be coded as a series of bit strings.* With further heating, the bit strings come apart to produce a random ‘steam’ of bits: ‘0’, ‘0’, ‘1’, ‘0’, ‘1’ …
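
  To make the footnoted point concrete, here is a small sketch in Python (using the standard ASCII coding, which is my choice of illustration rather than the book’s):

```python
# Encode a fragment of the soliloquy as 8-bit ASCII strings, then 'heat' it by
# shuffling the individual bits into a random stream. No bits are lost, but the
# meaning becomes very hard to recover.
import random

text = "Out, out, brief candle!"
bit_strings = [format(ord(ch), "08b") for ch in text]   # one 8-bit string per character
print(bit_strings[:3])        # ['01001111', '01110101', '01110100'] for 'O', 'u', 't'

bits = list("".join(bit_strings))
random.shuffle(bits)          # the random 'steam' of bits
print("".join(bits[:16]), "...")
```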

  All the resonance and meaning in the soliloquy — all the information it contained — has not exactly been lost in this process. After all, we still have all the bits. But our ability to recover the information has become extremely difficult. Our ignorance has increased. It would take an enormous amount of effort to reconstruct the soliloquy from the now scrambled bits, just as it would take an awful lot of work to reconstruct the cocktail glass from all the shards picked up from the floor. If the information isn’t lost, then it has certainly become almost irretrievably ‘hidden’ (or, alternatively, our ignorance of the soliloquy has become very stubborn).

  What this suggests is that there is a deep relationship between information and entropy.

  In 1948, American mathematician and engineer Claude Shannon developed an early but very powerful form of information theory. Shannon worked at Bell Laboratories in New Jersey, the prestigious research establishment of American Telephone and Telegraph (AT&T) and Western Electric (it is now the research and development subsidiary of Alcatel-Lucent). He was interested in the efficiency of information transfer via communications channels such as telegraphy, and he found that ‘information’ as a concept could be expressed as the logarithm of the inverse of the probability of the value of a random variable used to communicate the information.
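
  In symbols (the standard form of Shannon’s measure, added here as a gloss rather than quoted from the book), the information carried by an outcome x of probability p(x), and its average over a source, are:

```latex
% Shannon's measure: the information in an outcome x with probability p(x) is the
% logarithm of the inverse probability; averaging over outcomes gives the entropy H.
I(x) = \log_2 \frac{1}{p(x)} = -\log_2 p(x) ,
\qquad H = \sum_x p(x) \log_2 \frac{1}{p(x)}
```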

 
