by Max Tegmark
That was the good news. Although the CUH has attractive features such as ensuring that our Universe is rigorously defined and perhaps mitigating the cosmological measure problem by limiting what exists, it also poses serious challenges that need to be resolved.
A first concern about the CUH is that it may sound like a surrender of the philosophical high ground, effectively conceding that although all possible mathematical structures are “out there,” some have privileged status. However, my guess is that if the CUH turns out to be correct, it will instead be because the rest of the mathematical landscape was a mere illusion, fundamentally undefined and simply not existing in any meaningful sense.
A more immediate challenge is that our current standard model (and virtually all historically successful theories) violates the CUH, and it’s far from obvious whether a viable computable alternative exists. The main source of CUH violation comes from incorporating the continuum, usually in the form of real or complex numbers, which can’t even comprise the input to a finite computation, since they generically require infinitely many bits to specify. Even approaches attempting to banish the classical spacetime continuum by discretizing or quantizing it tend to maintain continuous variables in other aspects of the theory, such as the strength of the electromagnetic field or the amplitude of the quantum wavefunction.
One interesting approach to this continuum challenge involves replacing real numbers by a mathematical structure that emulates the continuum while remaining computable—for example, what mathematicians refer to as algebraic numbers. Another approach that I feel is worth exploring is abandoning the continuum as fundamental and trying to recover it as an approximation. As mentioned, we’ve never measured anything in physics to more than about sixteen significant digits, and no experiment has been carried out whose outcome depends on the hypothesis that a true continuum exists, or hinges on nature computing something uncomputable. It’s striking that many of the continuum models of classical mathematical physics (for example, the equations describing waves, diffusion or liquid flow) are known to be mere approximations of an underlying discrete collection of atoms. Quantum-gravity research suggests that even classical spacetime breaks down on very small scales. We therefore can’t be sure that quantities that we still treat as continuous (such as spacetime, field strengths and quantum wavefunction amplitudes) aren’t mere approximations of something discrete. Indeed, certain discrete computable structures (even finite ones satisfying the FUH) can approximate our continuum physics models so well that we physicists use them when we need to compute things in practice, leaving open the question of whether the mathematical structure of our Universe is more like the former or more like the latter. Some authors such as Konrad Zuse, John Barrow, Jürgen Schmidhuber and Stephen Wolfram have gone as far as suggesting that the laws of nature are both computable and finite like a cellular automaton or computer simulation. (Note, however, that these suggestions differ from the CUH and FUH, by requiring the time evolution rather than the description [the relations] of the structure to be computable.)
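The sense in which a finite, discrete structure can approximate a continuum model is familiar from how we physicists actually compute things in practice. Here is a minimal sketch (the grid size, diffusion constant and step sizes are arbitrary illustrative choices) of the standard finite-difference approximation to the one-dimensional diffusion equation:

```python
# Approximating the continuum diffusion equation du/dt = D * d^2u/dx^2
# with a finite, discrete structure: a list of numbers on a grid, updated
# by a simple local rule. D, dx, dt and N are illustrative choices.

D = 1.0    # diffusion constant
dx = 0.1   # grid spacing
dt = 0.004 # time step (D*dt/dx^2 = 0.4 <= 0.5, so the scheme is stable)
N = 100    # number of grid points

def step(u):
    """One explicit finite-difference update of the profile u."""
    r = D * dt / dx**2
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, N - 1)
    ] + [u[-1]]

# Start with a sharp spike in the middle; repeated discrete updates
# smooth it into the bell curve that the continuum equation predicts.
u = [0.0] * N
u[N // 2] = 1.0
for _ in range(500):
    u = step(u)
```

For practical purposes, the finite list of numbers and the local update rule are all there is; the continuum equation enters only as the limit they approximate as the grid is refined.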
Adding further twists, physics has also produced examples of how something continuous (like quantum fields) can produce a discrete solution (like a crystal lattice), which in turn appears like a continuous medium on large scales, which in turn has vibrations that behave like discrete particles called phonons. My MIT colleague Xiao-Gang Wen has shown that such “emergent” particles may even behave like ones in our standard model, raising the possibility that we may have multiple layers of effective continuous and discrete descriptions on top of what’s ultimately a discrete computable structure.
The Transcendent Structure of Level IV
Above we explored how mathematical structures and computations are closely related, in that the former are defined by the latter. On the other hand, computations are merely special cases of mathematical structures. For example, the information content (memory state) of a digital computer is a string of bits, say, “1001011100111001 …” of great but finite length, equivalent to some large but finite whole number n written in binary. The information processing of a computer is a deterministic rule for changing each memory state into another (applied over and over again), so mathematically, it’s simply a function f mapping the whole numbers onto themselves that gets iterated: n→f(n)→f(f(n))→…. In other words, even the most sophisticated computer simulation is merely a special case of a mathematical structure, hence included in the Level IV multiverse.
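The view of a computation as an iterated function on whole numbers can be made concrete in a few lines. The particular update rule below is of course a toy stand-in for a real computer’s, but the structure—a finite memory state n and a deterministic function f applied over and over—is the same:

```python
# A computer's memory state as a finite bit string, i.e. a whole number n,
# and its information processing as a deterministic function f, iterated:
# n -> f(n) -> f(f(n)) -> ...
# This particular f (XOR the state with a shifted copy of itself, truncated
# to 16 bits) is a hypothetical stand-in for a real machine's update rule.

BITS = 16
MASK = (1 << BITS) - 1  # keeps the memory a fixed, finite size

def f(n: int) -> int:
    """One deterministic step of the toy 'computer'."""
    return (n ^ (n << 1)) & MASK

def run(n: int, steps: int) -> list[int]:
    """Iterate n -> f(n) -> f(f(n)) -> ..., recording each memory state."""
    history = [n]
    for _ in range(steps):
        n = f(n)
        history.append(n)
    return history

trajectory = run(0b1001011100111001, 4)
print([format(s, f"0{BITS}b") for s in trajectory])
```

The same start state always yields the same trajectory: nothing here is anything other than a mathematical structure, a function on the whole numbers being iterated.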
Figure 12.6: The arrows indicate the close relations between mathematical structures, formal systems and computations. The question mark suggests that these are all aspects of the same transcendent structure, whose nature we still haven’t fully understood.
Figure 12.6 illustrates how computations and mathematical structures are related not only to each other, but also to formal systems, the abstract symbolic systems of axioms and deduction rules that mathematicians use to prove theorems about mathematical structures. The boxes in Figure 12.1 correspond to such formal systems. If a formal system describes a mathematical structure, mathematicians say that the latter is a model of the former. Moreover, computations can generate theorems in formal systems (indeed, for certain classes of formal systems, there are algorithms that can compute all theorems).
Figure 12.6 also illustrates that there are potential problems at all three vertices of the triangle: mathematical structures may have relations that are undefined, formal systems may contain statements that are undecidable, and computations may fail to halt after a finite number of steps. The relations between the three vertices with their corresponding complications are illustrated by six arrows, explained in more detail in my 2007 mathematical-universe paper. Since different arrows are studied by different specialists in a range of fields from mathematical logic to computer science, the study of the triangle as a whole is somewhat interdisciplinary, and I think it deserves more attention.
I’ve drawn a question mark at the center of the triangle to suggest that the three vertices (mathematical structures, formal systems and computations) are simply different aspects of one underlying transcendent structure whose nature we still don’t fully understand. This structure (perhaps restricted to the defined/decidable/halting part as per the CUH) exists “out there” in a baggage-free way, and is both the totality of what has mathematical existence and the totality of what has physical existence.
Implications of the Level IV Multiverse
So far in this chapter, we’ve argued that the ultimate physical reality is the Level IV multiverse, and started exploring its mathematical properties. Now let’s explore its physical properties as well as other implications of the Level IV idea.
Symmetries and Beyond
If we turn our attention to some particular mathematical structure on the master list that serves as our atlas of the Level IV multiverse, how can we derive the physical properties that a self-aware observer in it would perceive it to have? In other words, how would an infinitely intelligent mathematician start with its mathematical definition and derive the physics description that we called the “consensus reality” in Chapter 9?¹
We argued in Chapter 10 that her first step would be to calculate what symmetries the mathematical structure has. Symmetry properties are among the very few types of properties that every mathematical structure possesses, and they can manifest themselves as physical symmetries to the structure’s inhabitants.
The question of what she should calculate next when exploring an arbitrary structure is largely uncharted territory, but I find it striking that in the particular mathematical structure that we inhabit, further study of its symmetries has led to a gold mine of further insights. The German mathematician Emmy Noether proved in 1915 that each continuous symmetry of our mathematical structure leads to a so-called conservation law of physics, whereby some quantity is guaranteed to stay constant—and thereby has the sort of permanence that might make self-aware observers take note of it and give it a “baggage” name. All the conserved quantities that we discussed in Chapter 7 correspond to such symmetries: for example, energy corresponds to time-translation symmetry (that our laws of physics stay the same for all time), momentum corresponds to space-translation symmetry (that the laws are the same everywhere), angular momentum corresponds to rotation symmetry (that empty space has no special “up” direction) and electric charge corresponds to a certain symmetry of quantum mechanics. The Hungarian physicist Eugene Wigner went on to show that these symmetries also dictated all the quantum properties that particles can have, including mass and spin. In other words, between the two of them, Noether and Wigner showed that, at least in our own mathematical structure, studying the symmetries reveals what sort of “stuff” can exist in it. As I mentioned in Chapter 7, some physics colleagues of mine with a penchant for math jargon like to quip that a particle is simply “an element of an irreducible representation of the symmetry group.” It’s become clear that practically all our laws of physics originate in symmetries, and the physics Nobel laureate Philip Warren Anderson has gone even further, saying, “It is only slightly overstating the case to say that physics is the study of symmetry.”
Why do symmetries play such an important role in physics? The MUH provides the answer that our physical reality has symmetry properties because it’s a mathematical structure, and mathematical structures have symmetry properties. The deeper question of why the particular structure that we inhabit has so much symmetry then becomes equivalent to asking why we find ourselves in this particular structure, rather than in another one with less symmetry. Part of the answer may be that symmetries appear to be more the rule than the exception in mathematical structures, especially in large ones not too far down the master list, where simple algorithms can define relations for a vast number of elements precisely because they all have properties in common. An anthropic-selection effect may be at work as well: as pointed out by Wigner himself, the existence of observers able to spot regularities in the world around them probably requires symmetries, so given that we’re observers, we should expect to find ourselves in a highly symmetric mathematical structure. For example, imagine trying to make sense of a world where experiments were never repeatable because their outcome depended on exactly where and when you performed them. If dropped rocks sometimes fell down, sometimes fell up and sometimes fell sideways, and everything else around us similarly behaved in a seemingly random way, without any discernible patterns or regularities, then there might have been no point in evolving a brain.
The way modern physics is usually presented, symmetries are treated as an input rather than an output. For example, Einstein founded special relativity on what’s called Lorentz symmetry (the postulate that you can’t tell whether you’re standing still because all laws of physics, including those governing the speed of light, are the same for all uniformly moving observers). Similarly, a symmetry called SU(3) × SU(2) × U(1) is usually taken as a starting assumption for the standard model of particle physics. Under the Mathematical Universe Hypothesis, the logic is reversed: the symmetries aren’t an assumption, but simply properties of the mathematical structure that can be calculated from its definition on the master list.
* * *
¹In the philosophy of science, the conventional approach holds that a theory of mathematical physics can be broken down into (i) a mathematical structure, (ii) an empirical domain and (iii) a set of correspondence rules that link parts of the mathematical structure with parts of the empirical domain. If the MUH is correct, then (ii) and (iii) are redundant in the sense that they can, at least in principle, be derived from (i). Instead, they can be viewed as a handy user’s manual for the theory defined by (i).
The Illusion of Initial Conditions
Compared to how we usually teach physics at MIT, the Level IV multiverse provides a very different starting point for the subject, and this causes most traditional physics concepts to be reinterpreted. As we just saw, some concepts such as symmetries retain their central status. In contrast, other concepts, such as initial conditions, complexity and randomness, get reinterpreted as mere illusions, existing only in the mind of the beholder and not in the external physical reality.
Let’s first examine initial conditions, which we briefly encountered in Chapter 6. Nobody captures the traditional view of initial conditions better than Eugene Wigner: “Our knowledge of the physical world has been divided into two categories: initial conditions and the laws of nature. The state of the world is described by the initial conditions. These are complicated and no accurate regularity has been discovered in them. In a sense, the physicist isn’t interested in the initial conditions, but leaves their study to the astronomer, geologist, geographer, etc.” In other words, we physicists traditionally call the regularities that we understand “laws” and dismiss much of what we don’t understand as “initial conditions.” The laws let us predict how these conditions will change over time, but give no information about why they started out the way they did.
In contrast, the Mathematical Universe Hypothesis leaves no room for such arbitrary initial conditions, eliminating them altogether as a fundamental concept. This is because our physical reality is a mathematical structure that is completely specified in all respects by its mathematical definition in the master list. A purported Theory of Everything saying that everything just “started out” or “was created” in some not fully specified state would constitute an incomplete description, thus violating the MUH. A mathematical structure isn’t allowed to be partly undefined. So traditional physics embraces initial conditions, while the MUH rejects them: what are we to make of this?
The Illusion of Randomness
Because of this requirement that everything be defined, the MUH also banishes another concept that has played a central role in physics: randomness. Regardless of whether anything seems random to an observer, it must ultimately be an illusion, not existing at the fundamental level, because there’s nothing random about a mathematical structure. Yet the physics textbooks on my office bookshelves are full of that word: quantum measurements are said to produce random outcomes, and the heat in a cup of coffee is alleged to be caused by random motion of its molecules. Again traditional physics embraces something that the MUH rejects: what are we to make of this?
The initial-condition puzzle and the randomness puzzle are linked, and raise a pressing question. By a crude estimate, it takes almost a googol (10¹⁰⁰) bits of information to specify the actual state of every particle in our Universe right now. What’s the origin of this information? The traditional answer involves a combination of initial conditions and randomness: lots of bits are needed to describe how our Universe started out, since traditional laws of physics don’t specify this, and then we need additional bits to describe the outcome of various random processes that happened between then and now. Now that the MUH requires everything to be specified and banishes both initial conditions and randomness, how are we to account for all this information? If the mathematical structure is simple enough to be described by equations on a T-shirt, this at face value appears downright impossible! Let’s now tackle these questions.
The Illusion of Complexity
How much information does our Universe really contain? As we have discussed, the information content (algorithmic complexity) of something is the length in bits of its shortest self-contained description. To appreciate the subtlety of this, let’s first ask how much information each of the six different patterns in Figure 12.7 contains. At first glance, the two leftmost ones look very similar, like seemingly random patterns of 128 × 128 = 16,384 tiny black and white pixels. This suggests that we need about 16,384 bits to describe either of them, one bit to specify the color of each pixel. But whereas this is probably true for the upper pattern, which I created with a quantum random-number generator, there’s a hidden simplicity in the lower pattern: it’s just the binary digits of the square root of two! That simple description is enough to calculate the whole pattern: √2 ≈ 1.414213562…, which is written as 1.0110101000001001… in binary. For argument’s sake, let’s say that this pattern of zeros and ones can be generated by a computer program that’s 100 bits long. Then the apparent complexity of the lower left pattern is an illusion: we’re looking not at 16,384 bits of information, but merely 100!
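A short program really does suffice. The sketch below (using exact integer arithmetic rather than floating point, so no digit is ever wrong due to rounding) generates as many binary digits of √2 as we like:

```python
# A few lines of code generate arbitrarily many binary digits of sqrt(2),
# illustrating that the 16,384-pixel pattern has a very short description.
# isqrt works on exact integers, so no floating-point rounding can creep in.

from math import isqrt

def sqrt2_bits(n: int) -> str:
    """First n binary digits of sqrt(2) after the binary point."""
    # floor(sqrt(2 * 4^n)) = floor(sqrt(2) * 2^n), whose binary expansion is
    # a leading '1' followed by the first n fractional digits of sqrt(2).
    bits = bin(isqrt(2 << (2 * n)))[2:]
    return bits[1:]

print("1." + sqrt2_bits(16))  # enough to start filling in a pixel pattern
```

The program’s length stays fixed no matter how many of the 16,384 pixels we ask it for, which is exactly why the pattern’s algorithmic complexity is so much smaller than its pixel count.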
Figure 12.7: The complexity of a pattern (how many bits of information are needed to describe it) isn’t always obvious. The upper left panel shows 128 × 128 = 16,384 squares that are randomly colored black or white, which typically can’t be described using fewer than 16,384 bits. The smaller pieces of this pattern (top middle and right) consist of fewer random squares and therefore require fewer bits to describe. The lower left pattern, on the other hand, can be generated by a very short (100-bit, say) program, because it’s simply the binary digits of √2 (0 = black square, 1 = white square). Describing the bottom middle pattern requires an additional 14 bits to specify which digit of √2 it starts at. Finally, the lower right pattern requires 9 bits just like the one above it; the pattern is so short that it doesn’t help to specify that it’s part of √2.