Programming the Universe


by Seth Lloyd


  The paper divides measures of complexity into four categories: first, measures (like algorithmic information) of how hard it is to describe something; second, measures (like computational complexity) of how hard it is to do something; third, measures of the degree of organization in a system; fourth, non-quantitative ideas associated with complexity (like self-organization or complex adaptive systems). Of the forty-two, those I find most interesting are measures that combine how hard it is to describe something, how hard it is to do something, and the degree of organization into a single measure. It is on this set of measures I will concentrate here.

  The laws of physics describe trade-offs and relationships between measurable quantities, and the laws of complexity do the same. A particularly useful trade-off is that between information and effort. Take algorithmic information as the measure of information content and computational complexity as the measure of effort required, and consider the effort needed to produce a particular bit string—the first million digits of π, for example. These digits can be produced with relatively little computational complexity (only a few million logical operations, taking under a second on a conventional computer) by the million-plus-digit program that says “PRINT 3.1415926 . . .” Although we don’t know the exact algorithmic information of the first million digits of π (recall that algorithmic information is uncomputable), we can always find an upper bound on that algorithmic information by producing short programs that compute the first million digits of π. For instance, a program that computes those digits using a mathematical technique called a continued fraction representation could have a length of less than 1,000 digits, but this compact program takes a lot longer to produce the first million digits of π than the simple but bulky PRINT program: it would take billions of logical operations to produce those million digits.
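
  To make the trade-off concrete, here is a minimal sketch in Python. One caveat: instead of the continued-fraction method mentioned above, it uses Machin's formula for π (a standard alternative), since the point is only the contrast between a bulky, fast program and a compact, slow one.

```python
# A toy contrast between the two kinds of program discussed above.
# Program 1 ("PRINT ..."): roughly as long as its output, but very fast.
# Program 2 (Machin's formula, standing in for the continued-fraction
# program in the text): a few hundred characters, but it must perform
# many more arithmetic operations to regenerate the same digits.

def pi_digits_print(digits_literal: str) -> str:
    # The bulky program: the digits are stored inside the program itself.
    return digits_literal

def pi_digits_machin(n: int) -> str:
    # The compact program: pi = 16*arctan(1/5) - 4*arctan(1/239),
    # evaluated with scaled integer arithmetic to n decimal digits.
    scale = 10 ** (n + 10)                      # 10 guard digits
    def atan_inv(x: int) -> int:                # arctan(1/x), scaled
        total, term, k = 0, scale // x, 1
        while term:
            total += term if k % 4 == 1 else -term
            k += 2
            term = scale // (x ** k * k)
        return total
    pi_scaled = 16 * atan_inv(5) - 4 * atan_inv(239)
    digits = str(pi_scaled // 10 ** 10)         # drop the guard digits
    return digits[0] + "." + digits[1:n + 1]

if __name__ == "__main__":
    print(pi_digits_machin(50))   # pi to 50 decimal places, slowly but compactly
```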

  In the early 1980s, Charles Bennett proposed a simple definition of complexity that relies on the trade-off between information and effort. Following Solomonoff, Bennett identified the most plausible explanation of a bit string or data set with the shortest program that produced it. (If there were several programs that were almost as short as the shortest, Bennett included those as plausible explanations, too.) Then Bennett looked at the computational complexity of those short programs. He called this quantity—the effort required to produce the bit string from its most plausible explanation—“logical depth.”

  Of all the measures of complexity Heinz and I studied, logical depth was the most appealing. Bit strings that are obviously simple, like the string consisting of a billion 1’s, have short fast-running programs that can produce them (e.g., “PRINT 1 ONE BILLION TIMES”) and are logically shallow. Random bit strings (e.g., 11010101100010 . . . 011, a bit string I got by flipping a coin and calling heads 1 and tails 0) are plausibly produced by long fast-running programs (e.g., “PRINT 11010101100010 . . . 011”) and are also logically shallow. By contrast, bit strings corresponding to the first million digits of π take a long time to produce from their shortest known programs and are logically deep. Logically deep bit strings possess large amounts of structure—structure that takes a long time to compute from the shortest possible program.

  Heinz and I were very taken by Bennett’s ideas on complexity. Heinz’s only complaint was that this scheme was not physical enough. “Logical depth” referred to bit strings, computer programs, and logical operations. Heinz wanted a measure of complexity that referred to physical systems—energy and entropy. So he and I concocted a physical analog to logical depth, which we called “thermodynamic depth,” to emphasize the connection to Bennett’s work. Instead of being a property of bit strings, thermodynamic depth is a property of physical systems. Instead of identifying the most plausible way that a bit string was produced with the shortest program that produced it, Heinz and I looked directly at the most plausible way that a physical system was produced. Finally, instead of using the computational complexity—the number of logical operations—that went into producing a given bit string, we looked at the amount of physical resources needed to produce the given physical system—an atom, say, or an elephant.

  The particular physical resource Heinz and I considered was related to entropy. Recall that entropy is measured in bits. Entropy consists of random, unknown bits. The opposite of entropy is called “negentropy.” Negentropy consists of known, structured bits. A system’s negentropy is a measure of how far away that system is from its maximum possible entropy. A living, breathing human being has lots of negentropy, as opposed to, say, a gas of helium atoms at uniform temperature, which has no negentropy. You can think of entropy as consisting of random, junky bits and negentropy as consisting of ordered, useful bits. The thermodynamic depth of a physical system is equal to the number of useful bits that went into assembling the system.
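
  A minimal numerical sketch of negentropy, using Shannon entropy as a stand-in for thermodynamic entropy (a toy of my own, not a calculation from the text): the negentropy of an n-bit register is simply how far its entropy falls short of the maximum possible n bits.

```python
import math

def shannon_entropy_bits(probs):
    # Shannon entropy, in bits, of a probability distribution over configurations.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def negentropy_bits(probs, n_bits):
    # Negentropy = maximum possible entropy (n_bits) minus the actual entropy.
    return n_bits - shannon_entropy_bits(probs)

# A fully known 4-bit register: one configuration with probability 1.
print(negentropy_bits([1.0], 4))          # 4.0 -> four useful, structured bits
# A fully randomized 4-bit register: all 16 configurations equally likely.
print(negentropy_bits([1 / 16] * 16, 4))  # 0.0 -> no negentropy, like the helium gas
```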

  Because it is the acknowledged offspring of logical depth, thermodynamic depth shares many of logical depth’s good qualities. Simple, regular systems that are easily assembled, such as salt crystals, are typically thermodynamically shallow. Fully random systems, such as our gas of helium atoms, generated by a straightforward random process such as heating, are also thermodynamically shallow. But intricate, structured systems, such as living systems, required a huge investment of useful bits over billions of years to assemble and are thermodynamically deep.

  When applied to bit strings (for example, those produced by a randomly programmed quantum computer), thermodynamic depth is even closer to logical depth. The most plausible way a bit string can be produced is from the shortest program. Thus the thermodynamic depth of the bit string is the amount of memory space used by the quantum computer in producing the string; that is, the thermodynamic depth is the spatial computational complexity of the shortest program.

  In the computational universe—in which each physical system does indeed correspond to a string of quantum bits and its behavior is programmed by random quantum fluctuations—thermodynamic depth and logical depth are complementary, closely related quantities. To nail down the analogy between thermodynamic depth and logical depth, we need to find a physical analog to the elementary logical operation, or op. The previous chapter defined just such an analog: every time a quantum wave wiggles, an op is performed. To give a physical analog of the number of ops that went into constructing a bit string, just count the number of wiggles that went into constructing a physical system.

  Recall from the previous chapter that this number of wiggles is proportional to what in physics is called the action of the physical system. Action is the number of wiggles multiplied by Planck’s constant. Action divided by Planck’s constant is a good physical analog for the number of ops, i.e., for computational complexity. To estimate how hard it was to construct a given physical system, just look at the action that went into putting it together. (“The action is where the action is.”)
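
  As a rough numerical sketch of “action divided by Planck’s constant counts ops” (my own back-of-the-envelope check, using a Margolus-Levitin-type bound with its factor of 2/π; the exact prefactor is not essential): one kilogram of matter evolving for one second corresponds to an enormous number of elementary operations.

```python
import math

# Order-of-magnitude sketch: how many ops does one kilogram of matter,
# left to evolve for one second, correspond to? Take the number of ops
# as roughly 2*E*t / (pi * hbar), i.e. the action scale divided by
# Planck's constant.

hbar = 1.055e-34        # reduced Planck's constant, J*s
c = 3.0e8               # speed of light, m/s
mass, t = 1.0, 1.0      # 1 kilogram, 1 second

E = mass * c ** 2                      # rest energy, ~9e16 joules
ops = 2 * E * t / (math.pi * hbar)
print(f"{ops:.1e} ops")                # ~5e50 elementary operations
```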

  The results of the previous chapter now allow us to estimate the logical depth and thermodynamic depth of the universe as a whole and so to put an upper bound on the depth of everything that it contains. The total amount of computational effort that went into putting the universe together is 10^122 ops (the logical depth) performed on 10^92 bits (the thermodynamic depth).
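
  Those two numbers can be checked to order of magnitude with nothing more than the age of the universe and the Planck time. The scalings below (ops of order the square of the elapsed Planck times, bits of order the 3/2 power) are my shorthand for the estimates of the previous chapter; the constants are approximate.

```python
# Back-of-the-envelope check of the figures quoted above.
age_s    = 4.3e17    # age of the universe in seconds (~13.7 billion years)
planck_s = 5.4e-44   # Planck time in seconds

n = age_s / planck_s                 # elapsed Planck times, ~8e60
print(f"ops  ~ {n ** 2:.0e}")        # ~6e121, i.e. of order 10^122
print(f"bits ~ {n ** 1.5:.0e}")      # ~2e91,  i.e. of order 10^92
```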

  Effective Complexity

  Logical and thermodynamic depth are not the only measures that quantify some aspect of complexity. Depending on which feature of a complex system you wish to characterize, there are other measures that are equally or even more useful. One such measure is the quantity known as “effective complexity,” a measure of the amount of regularity in a system; this definition of complexity was originally proposed by Murray Gell-Mann. Over the last decade, Gell-Mann and I have worked to make the notion of effective complexity mathematically precise.

  Effective complexity is a simple and elegant measure of complexity. Every physical system has associated with it a quantity of information—the amount required to describe the physical state of the system to the accuracy allowed by quantum mechanics. The basic way to measure something’s effective complexity is to divide that amount into two parts: information that describes the regular aspects of the thing and information that describes its random aspects. The amount of information required to describe a system’s regularities is its effective complexity.

  In an engineered system, such as an airplane, the effective complexity is essentially equal to the length of the system’s blueprint: it is the amount of information required to put the system together. In an airplane, for example, the blueprint specifies the shape of the wings and the chemical content and manufacturing procedure for the alloy from which the wings are made. Wing shape and alloy composition are regularities of the design; the bits that specify these features have to take on specific values if the airplane is to fly. These bits figure in the airplane’s effective complexity. But the blueprint does not specify the position of each and every atom in the wing. The bits that specify just where each atom is at one or another point in time are accidents; they do not contribute to the flightworthiness of the airplane, nor are they an indicator of its complexity.
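
  Here is a toy version of the blueprint-versus-accidents split, using a general-purpose compressor as a crude stand-in for “length of the shortest description” (an illustration of mine, not a calculation from the text; the blueprint string below is made up):

```python
import random
import zlib

# Toy split of a description into a "regular" part and an "accidental" part.
# The regularities (a repeating design rule) compress to almost nothing; the
# accidents (atom-level noise) barely compress at all. In this toy picture,
# effective complexity is approximated by the compressed size of the regular
# part alone.

random.seed(0)
blueprint = b"WING_SECTION:NACA2412;ALLOY:AL7075;" * 100                 # regularities
accidents = bytes(random.randrange(256) for _ in range(len(blueprint)))  # noise

print(len(blueprint), "raw bytes in each part")
print(len(zlib.compress(blueprint)), "compressed bytes of regularity")   # small
print(len(zlib.compress(accidents)), "compressed bytes of accident")     # large
```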

  As the example of an airplane suggests, complexity is a key issue in engineering. How can we engineer complex systems that are still robust in their behavior? The maxim we teach engineering undergraduates at MIT goes by the acronym KISS: Keep It Simple, Stupid! But what if the system you’re engineering is itself complex, like an airplane? MIT has an entire division, the Engineering Systems Division, that brings together researchers from engineering, the hard sciences, and the social sciences to identify and solve problems of complex engineered systems. One promising technique for engineering complex systems is known as axiomatic design, an approach conceived by Nam Suh, the former head of MIT’s Department of Mechanical Engineering. The idea of axiomatic design is to minimize the information content of the engineered system while maintaining its ability to carry out its functional requirements. Properly applied, axiomatic design results in airplanes, software, and toasters all just complex enough, and no more, to attain their design goals. Axiomatic design minimizes the effective complexity of the engineered system while maintaining the system’s effectiveness. Keep It Simple, Stupid—but not too simple.

  Determining the effective complexity of a physical system obviously involves a judgment about what constitutes a regularity and what does not. That is, you must establish criteria that indicate when a bit is an “important” bit, a bit of regularity, and when it is “unimportant,” a bit of randomness. In an engineered system, the important bits are those that have to take on particular values or else the system will not do what it’s supposed to do. In evolved systems, such as a bacterium, it is less obvious which bits are important and which are unimportant. A simple criterion for figuring out whether a bit is important and so contributes to the effective complexity is to flip it and see what happens. If flipping the bit has a significant effect, it is important; if flipping the bit has no significant effect, it is unimportant. If the bit affects the bacterium’s ability to survive and reproduce, then that bit contributes to the effective complexity of the bacterium. A bacterium’s important bits are those that affect the bacterium’s future in a significant way; the effective complexity of any system that exhibits purposeful behavior can be similarly measured. Any bit that affects the ability of the system to attain its purpose contributes to the system’s effective complexity.
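
  The flip test is easy to phrase as a sketch. In the toy below (my own illustration), “fitness” is a made-up function that cares only about the first eight bits of a genome; flipping each bit in turn and watching the fitness counts eight important bits, which is this toy system’s effective complexity.

```python
def important_bit_count(bits, fitness, threshold=0.0):
    # A bit is "important" if flipping it changes fitness by more than threshold.
    base = fitness(bits)
    important = 0
    for i in range(len(bits)):
        flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
        if abs(fitness(flipped) - base) > threshold:
            important += 1
    return important

# Hypothetical fitness function: only the first 8 bits affect survival.
fitness = lambda genome: sum(genome[:8])
genome = [1, 0, 1, 1, 0, 0, 1, 0] + [0, 1] * 8   # 8 important bits + 16 junk bits

print(important_bit_count(genome, fitness))       # 8
```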

  Of course, the definition of purposeful behavior is to some degree subjective. But suppose we focus on behavior that allows a system to (a) get energy and (b) use that energy to construct copies of itself. Living systems devote most of their time and effort to eating and reproducing. However one defines life, any system that can accomplish those two actions has gone a long way on the road to being alive. Once we identify as purposeful those behaviors that enhance the system’s ability to get energy and use it to reproduce, then we can measure the effective complexity of all living systems and all systems that may someday be alive. As we’ll see, effectively complex systems that get energy and reproduce arise naturally out of the underlying computational processes of the universe.

  Why Is the Universe Complex?

  Now that we have formally defined complexity, we can demonstrate that the universe necessarily generates it. The laws of physics are computationally universal, enabling the universe to contain logically deep systems and systems with high effective complexity. But we can also show that the universe must contain such complex systems. Let’s look at the first information-processing revolution—the creation of the universe—in detail.

  In measuring the complexity of the universe, we’ll work within the current standard cosmological model. In this model, there is not enough matter in the universe to eventually slow and then reverse its expansion, causing it to end in a Big Crunch. It will continue to expand forever. Such a universe is spatially infinite, even at the very beginning. But we are interested in the computation that the universe performs—that is, the causally connected part of the universe, the part within the horizon, made up of bits that can talk to one another. Unless we are explicitly talking about what is happening beyond the horizon, we will adopt the usual practice and refer to the part of the universe within the horizon as “the universe.”

  The first information-processing revolution begins with the beginning of the universe. Before the beginning there is nothing—no space, no time, no energy, no bits. At the very instant of beginning, nothing yet has happened. The monkeys have not yet begun to type.

  Observational evidence suggests that in the beginning the universe was simple. As far as we can tell, there may have been only one possible initial state, and that state was everywhere the same. If there were only one possible initial state at time zero, the universe contained zero bits of information. Its logical depth, thermodynamic depth, and effective complexity were also zero.

  Now the universe begins to compute. One Planck time later (10^-44 seconds), the universe contains one bit within the horizon. The amount of computation that can be performed on this bit in one Planck time is one op; that is, the universe’s effective complexity and thermodynamic depth can be no greater than one bit, and its logical depth can be no greater than one op. The monkeys have typed one bit.

  As the universe expands, the number of bits within the horizon grows and the number of ops accumulates. The maximum logical depth is limited by the number of ops, and the effective complexity and thermodynamic depth are limited by the number of bits. Though the universe is growing in complexity, it is still relatively simple. But the monkeys are typing away.
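
  Under the same rough scalings used in the earlier check (bits growing like the 3/2 power of the elapsed Planck times, ops like the square), the bounds accumulate like this:

```python
# How the upper bounds on complexity grow as the horizon expands (toy scalings).
for n in [1, 10 ** 10, 10 ** 30, 8 * 10 ** 60]:   # elapsed Planck times; ~8e60 today
    print(f"after {n:.0e} Planck times: <= ~{n ** 1.5:.0e} bits, <= ~{n ** 2:.0e} ops")
```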

  What is the universe computing during these early times? As usual, it is computing its own behavior. The universe computes itself. If we knew more about quantum gravity, we could reproduce the first few steps of the universe’s computation on existing, man-made quantum computers, simple though they are. In fact, the computational theory of quantum gravity advocated earlier gives a straightforward picture of what the universe is computing. In this picture, the universe embarks on all possible computations at once.

  Recall that quantum computers have the ability to perform many computations simultaneously, using quantum parallelism. Almost all input quantum bits are superpositions of 0 and 1. There is only one state that is 0 and one state that is 1, but there are an infinite number of possible input states that are superpositions of 0 and 1. Consequently, almost all one-qubit inputs to the quantum computer tell it to do this and that simultaneously.

  Similarly, almost all two-qubit input states are superpositions of 00, 01, 10, and 11. If each of these four inputs instructs the computer to perform a specific computation, then almost all two-qubit input states instruct the quantum computer to perform these four computations in quantum parallel. And so it goes. As the number of input qubits grows, the universal quantum computer continues to embark upon all possible computations at once.
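
  A minimal state-vector sketch of this kind of parallelism on two qubits (plain linear algebra, no quantum hardware; the function f below is a made-up example): a single application of one operator acts on all four inputs 00, 01, 10, and 11 at once.

```python
import numpy as np

# Two Hadamard gates turn |00> into an equal superposition of 00, 01, 10, 11.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.kron(H, H) @ np.array([1.0, 0.0, 0.0, 0.0])

# A hypothetical two-bit function f, applied as a phase: |x> -> (-1)^f(x) |x>.
f = [0, 1, 1, 0]                         # f(00)=0, f(01)=1, f(10)=1, f(11)=0
U_f = np.diag([(-1) ** fx for fx in f])

# One application of U_f evaluates f on all four inputs in quantum parallel.
print(np.round(U_f @ state, 3))          # [ 0.5 -0.5 -0.5  0.5]
```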

  Even though the very early universe is simple, neither effectively complex nor logically deep, it has a glorious future ahead. The early universe is what Charles Bennett calls an “ambitious” system: even if it is not initially complex, it is intrinsically able to generate large amounts of complexity over time.

  In the early universe, our quantum monkeys are typing in superpositions of all possible inputs. The computational universe interprets these inputs as instructions to perform all possible computations in quantum parallel. (This superposition of all possible structures is sometimes called the multiverse.) In one of these parallel quantum computations, it generates the particular complexity we see around us. As always, with monkeys typing into computers, structures that can be generated from short programs are more likely than structures that require long programs to produce.

  The universe is computing. Bits are flipping. What are these bits? Bits in the early universe represent local values of energy density. For example, a 0 can represent a lower-than-average energy density and a 1 a higher-than-average energy density. Because of the simple, homogeneous nature of the initial state, the average density of energy is everywhere the same, but there are quantum fluctuations about that average. The quantum bits of the universe are in a superposition of lower density plus higher density. In terms of energy, the natural dynamics of the universe create regions in which the energy density takes on a superposition of different values.

  As soon as the universe begins, its qubits begin to flip and interact. That is, as soon as the monkeys have begun to type their program by creating a quantum superposition, the laws of physics start to interpret that program. Recall that information, once created, tends to spread. Information is infectious. Because of the sensitivity of quantum bits to interactions with the other quantum bits in their surroundings, quantum information is particularly infectious. As noted earlier, this spreading of quantum information leads to decoherence, the severing of histories.

 
