Programming the Universe


by Seth Lloyd


  Nowadays, by contrast, telescopes reveal huge variations and nonuniformities in the universe. Matter clusters together to form planets such as Earth and stars such as the sun. Planets and suns cluster together to form solar systems. Our solar system clusters together with billions of others to form a galaxy, the Milky Way. The Milky Way, in turn, is only one of tens of galaxies in a cluster of galaxies—and our cluster of galaxies is only one cluster in a supercluster. This hierarchy of clusters of matter, separated by cosmic voids, makes up the present, large-scale structure of the universe.

  How did this large-scale structure come about? Where did the bits of information come from? These bits had their origins in the very early universe we’ve just explored. Their origins can be explained by the laws of quantum mechanics, coupled to the laws of gravity.

  Quantum mechanics is the theory that describes how matter and energy behave at their most fundamental levels. At the small scale, quantum mechanics describes the behavior of molecules, atoms, and elementary particles. At larger scales, it describes the behavior of you and me. Larger still, it describes the behavior of the universe as a whole. The laws of quantum mechanics are responsible for the emergence of detail and structure in the universe.

  The theory of quantum mechanics gives rise to large-scale structure because of its intrinsically probabilistic nature. Counterintuitive as it may seem, quantum mechanics produces detail and structure because it is inherently uncertain.

  The early universe was uniform: the density of energy was everywhere almost the same. But it was not exactly the same. In quantum mechanics, quantities such as position, velocity, and energy density do not have exact values. Instead, their values fluctuate. We can describe their probable values—the most likely location of a particle, for example—but we cannot claim perfect certainty. Because of these quantum fluctuations, some regions of the early universe were ever so slightly more dense than other regions.

  As time passed, the attractive force of gravity caused more matter to move toward these denser regions, further increasing their energy density, and decreasing density in the surrounding volume. Gravity thus amplified the effect of an initially tiny discrepancy, causing it to increase. Just such a tiny quantum fluctuation near the beginning of time formed the seed for what would eventually become a cluster of galaxies. Slightly later on, further fluctuations formed the seeds for the positions of individual galaxies within the cluster, and still later fluctuations seeded the positions of planets and stars.
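  To make the amplification mechanism concrete, here is a minimal toy sketch in Python (an illustration, not from the book; the linear growth law and the numbers are assumptions, not real cosmology). It treats gravity as a simple instability in which an overdensity grows at a rate proportional to its own size:

```python
# Toy model of gravitational instability: d(delta)/dt = rate * delta.
# The growth rate, time step, and seed values are illustrative only.

def amplify(delta0, rate=1.0, dt=0.01, steps=2000):
    """Euler-integrate the linear growth of an overdensity delta."""
    delta = delta0
    for _ in range(steps):
        delta += rate * delta * dt
    return delta

print(amplify(1.0e-30))   # a vanishingly small seed fluctuation...
print(amplify(2.0e-30))   # ...and one only slightly larger
# Both grow by the same enormous factor (~4e8 here), so an initially
# imperceptible difference between two regions becomes a large one.
```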

  In the process of creating this large-scale structure, gravity also created the free energy that living things require to survive. As the matter clumped together, it moved faster and faster, gaining energy from the gravitational field; that is, the matter heated up. The larger the clump grew, the hotter the matter became. If enough matter clumped together, the temperature in the center of the clump rose to the point at which thermonuclear reactions ignited: the sun began to shine! The light from the sun has lots of free energy—energy plants would use for photosynthesis, for example. As soon as plants came into existence, that is.

  The ability of gravity to amplify small fluctuations in density is a reflection of a physical phenomenon known as “chaos.” In a chaotic system, what begins as a tiny difference is amplified in time. Perhaps the most famous example of chaos is the so-called butterfly effect. The equations for motion within Earth’s atmosphere are inherently chaotic; thus, a tiny perturbation, such as the flutter of a butterfly’s wing, can be amplified over time and distance, becoming a hurricane months and miles down the line. The minuscule quantum fluctuations of energy density at the time of the Big Bang are the butterfly effects that would come to yield the large-scale structure of the universe.
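  The butterfly effect is easy to demonstrate on a computer. The sketch below (a standard illustration, not Lloyd’s own example; the logistic map with r = 4 is a textbook chaotic system) follows two trajectories whose starting points differ by one part in a trillion:

```python
# Chaos in miniature: the logistic map x -> r * x * (1 - x) with r = 4.
# Two starting points differing by 1e-12 diverge until they disagree
# completely -- a tiny perturbation amplified into a macroscopic one.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-12
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.3e}")
# The gap roughly doubles each step; by step ~50 it is of order 1.
```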

  Every galaxy, star, and planet owes its mass and position to quantum accidents of the early universe. But there’s more: these accidents are also the source of the universe’s minute details. Chance is a crucial element of the language of nature. Every roll of the quantum dice injects a few more bits of detail into the world. As these details accumulate, they form the seeds for all the variety of the universe. Every tree, branch, leaf, cell, and strand of DNA owes its particular form to some past toss of the quantum dice. Without the laws of quantum mechanics, the universe would still be featureless and bare. Gambling for money may be infernal, but betting on throws of the quantum dice is divine.

  The Universal Computer

  We have seen that the universe computes by registering and transforming information, so we might call what we see around us the “universal computer.” But there is another, more technical meaning to that phrase. In computer science, a universal computer is a device that can be programmed to process bits of information in any desired way. Conventional digital computers of the sort on which this book is being written are universal computers, and their languages are universal languages. Human beings are capable of universal computation, and human languages are universal. Most systems that can be programmed to perform arbitrarily long sequences of simple transformations of information are universal.

  Universal computers can do pretty much anything with information. Two of the inventors of universal computers and universal languages, Alonzo Church and Alan Turing, hypothesized that any possible mathematical manipulation can be performed on a universal computer; that is, universal computers can generate mathematical patterns of any level of complexity. A universal computer itself, though, need not be a complicated machine; all it must be able to do is take bits, one or two at a time, and perform simple operations upon them. Any desired transformation of however large a set of bits can be enacted by repeatedly performing operations on just one or two bits at a time. And any machine that can enact this sequence of simple logical operations is a universal computer.
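  A concrete way to see this is to build everything out of a single two-bit operation. The Python sketch below (an illustration; the book does not single out a particular gate) uses NAND, a standard universal gate, to construct NOT, AND, OR, XOR, and finally a half adder, the first step toward arbitrary arithmetic:

```python
# Universality from one simple two-bit operation: every logical
# transformation can be composed from NAND gates alone.

def nand(a, b):
    return 1 - (a & b)

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor_(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum bit, carry bit), built entirely from NAND."""
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))   # (0, 1): one plus one is binary 10
```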

  Significantly, universal computers can be programmed to transform information in any way desired, and any universal computer can be programmed to transform information in the same way as any other universal computer. That is, any universal computer can simulate another, and vice versa. This intersimulatability means that all universal computers can perform the same set of tasks. This feature of computational universality is a familiar one: if a program will run on a PC, it can necessarily be translated to run on a Mac.

  Of course, the program may take longer to run on the Mac than the PC, or vice versa. Programs written for a specific universal computer tend to run faster on that computer than the translated program runs on another. But the translated program will still run. In fact, every universal computer can be shown not only to simulate every other universal computer, but to do so efficiently. The slowdown due to translation is relatively insignificant.

  Digital vs. Quantum

  The universe computes. Its computer language consists of the laws of physics and their chemical and biological consequences. But is the universe nothing more than a universal digital computer, in the technical sense elucidated by Church and Turing? It is possible to give a precise scientific answer to this question. The answer is No.

  The idea that the universe might be, at bottom, a digital computer is decades old. In the 1960s, Edward Fredkin, then a professor at MIT, and Konrad Zuse, who constructed the first programmable digital computers in Germany in the early 1940s, both proposed that the universe was fundamentally a universal digital computer. (More recently, this idea has found an advocate in the computer scientist Stephen Wolfram.) The idea is an appealing one: digital systems are simple, yet able to reproduce behavior of any degree of complexity. In particular, computers whose architecture mimics the structure of space and time (so-called cellular automata) can efficiently reproduce the motions of classical particles and the interactions between them.

  In addition to the aesthetic appeal of a digital universe, there is powerful observational evidence for the computational ability of physical laws. The laws of physics clearly support universal computation. The problem with identifying the universe as a classical digital computer is that the universe appears to be significantly more computationally powerful.

  Two computing machines have the same computational power if each can simulate the other efficiently. The key word here is “efficiently.” The laws of physics can simulate digital computation efficiently; the universe effortlessly encompasses conventional digital computers. But now consider whether or not the universe can be simulated efficiently by a conventional computer. In fact, a conventional digital computer seems unable to simulate the universe efficiently.

  At first, it might seem otherwise. After all, the laws of physics are apparently simple. Even if they turn out to be somewhat more complicated than we currently suspect, they are still mathematical laws that can be expressed in a conventional computer language; that is, a conventional computer can simulate the laws of physics and their consequences. If you had a large enough computer, then, you could program it (using, for example, a language such as Java) with descriptions of the initial state of the universe, and of the laws of physics, and set it running. Eventually, you would expect this computer to come up with accurate descriptions of the state of the universe at any later time.

  The problem with such simulations is not that they are impossible, but that they are inefficient. The universe is fundamentally quantum-mechanical, and conventional digital computers have a hard time simulating quantum-mechanical systems. Why? Quantum mechanics is just as weird and counterintuitive for conventional computers as it is for human beings. In fact, in order to simulate even a tiny piece of the universe—consisting, say, of a few hundred atoms—for a tiny fraction of a second, a conventional computer would need more memory space than there are atoms in the universe as a whole, and would take more time to complete the task than the current age of the universe. Now that’s inefficient.

  This is not to say that classical computers are useless for capturing certain aspects of quantum behavior: they are quite good at calculating approximate energies and ground states of quantum systems. It’s just that there is no known way for them to perform a full-blown dynamical simulation of a complex quantum system without using vast amounts of dynamical resources. Classical bits are very bad at storing the information required to characterize a quantum system: the number of bits grows exponentially with the number of pieces of the system. What does this mean? The failure of classical simulation of quantum systems suggests that the universe is intrinsically more computationally powerful than a classical digital computer.
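  A quick back-of-the-envelope calculation shows how fast this exponential growth bites. In the sketch below (the byte count per amplitude and the ~10^80-atom figure are rough conventional estimates, not numbers from the book), storing the full state of n two-level quantum systems takes 2^n complex amplitudes:

```python
# Memory needed to store the full quantum state of n two-level systems
# on a classical machine: 2**n complex amplitudes. The 16 bytes per
# amplitude and the 1e80-atom estimate are rough conventional figures.

BYTES_PER_AMPLITUDE = 16      # one double-precision complex number
ATOMS_IN_UNIVERSE = 1e80      # common order-of-magnitude estimate

for n in (10, 50, 100, 300):
    amplitudes = 2.0 ** n
    print(f"n = {n:3d}: {amplitudes:.1e} amplitudes, "
          f"{amplitudes * BYTES_PER_AMPLITUDE:.1e} bytes")

# n = 300 already needs ~2e90 amplitudes -- more than the estimated
# number of atoms in the observable universe, even at one bit per atom.
```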

  But what about a quantum computer? A few years ago, acting on a suggestion from the physicist Richard Feynman, I showed that quantum computers can simulate any system that obeys the known laws of physics (and even those that obey as yet undiscovered laws!) in a straightforward and efficient way.

  In brief, the simulation proceeds as follows: First, map the state of every piece of a quantum system—every atom, electron, or photon—onto the state of some small set of quantum bits, known as a quantum register. Because the register is itself quantum-mechanical, it has no problem storing the quantum information inherent in the original system on just a few quantum bits. Then enact the natural dynamics of the quantum system using simple quantum logic operations—interactions between quantum bits. Because the dynamics of a physical system consists of interactions between its constituent parts, these interactions can be simulated directly by quantum logic operations mapped onto the bits in the quantum register that correspond to those parts.

  This method of quantum simulation is direct and efficient. The amount of time the quantum computer takes to perform the simulation is proportional to the time over which the simulated system evolves, and the amount of memory space required for the simulation is proportional to the number of subsystems or subvolumes of the simulated system. The simulation proceeds by a direct mapping of the dynamics of the system onto the dynamics of the quantum computer. Indeed, an observer that interacted with the quantum computer via a suitable interface would be unable to tell the difference between the quantum computer and the system itself. All measurements made on the computer would yield exactly the same results as the analogous measurements made on the system. Quantum computers, then, are universal quantum simulators.
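  A classical emulation of this recipe, shrunk to two spins, gives the flavor. The sketch below is an illustrative stand-in, not Lloyd’s construction: it uses numpy and scipy to map two spins onto a four-amplitude register and approximates evolution under a toy Hamiltonian by repeating two simple operations (a standard Trotter decomposition); the Hamiltonian and step count are assumptions:

```python
# Sketch of quantum simulation by Trotterization: evolve two coupled
# spins by alternating short pulses of the interaction (H1) and the
# local fields (H2). Hamiltonian and step count are illustrative.

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

H1 = np.kron(Z, Z)                          # spin-spin interaction
H2 = 0.5 * (np.kron(X, I) + np.kron(I, X))  # local fields

t, steps = 1.0, 100
dt = t / steps
step = expm(-1j * H1 * dt) @ expm(-1j * H2 * dt)  # two simple "logic ops"

state = np.zeros(4, dtype=complex)
state[0] = 1.0                              # register initialized to |00>
for _ in range(steps):
    state = step @ state                    # repeat the short step

exact = expm(-1j * (H1 + H2) * t) @ np.eye(4, dtype=complex)[:, 0]
print(abs(np.vdot(exact, state)) ** 2)      # fidelity close to 1
```

  The number of steps grows linearly with the simulated time, and on a genuine quantum register the memory would grow only linearly with the number of spins, which is exactly the scaling described above.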

  The universe is a physical system. Thus, it could be simulated efficiently by a quantum computer—one exactly the same size as the universe itself. Because the universe supports quantum computation and can be efficiently simulated by a quantum computer, the universe is neither more nor less computationally powerful than a universal quantum computer.

  In fact, the universe is indistinguishable from a quantum computer. Consider a quantum computer performing an efficient simulation of the universe. Now, compare the results of measurements taken in the universe with measurements taken in the quantum computer. Measurements in the universe are taken by one piece of the universe—in this case, us—on the remainder. The analogous processes occur in a quantum computer when one register of the computer gains information about another register. Because the quantum computer can perform an efficient and accurate simulation, the results of these two sets of measurements will be indistinguishable.

  The universe possesses the same information-processing power as a universal quantum computer. A universal quantum computer can accurately and efficiently simulate the universe. The results of measurements made in the universe are indistinguishable from the results of measurement processes in a quantum computer. We can now give a precise answer to the question of whether the universe is a quantum computer in the technical sense. The answer is Yes. The universe is a quantum computer.

  What is the universe computing? Everything we see and everything we don’t see is a manifestation of the universe’s quantum computation. We won’t know exactly how the universe performs its most minute computations until we obtain a complete theory of fundamental physics, but even without knowing the full details, we can see that the quantum-computational power of the universe provides a direct explanation for its intricacy, diversity, and complexity.

  Computation and Complexity

  The universe we see outside our windows is amazingly complex, full of form and transformation. Yet the laws of physics are simple, as far as we can tell. What is it about these simple laws that allows for such complex phenomena?

  To answer this question, first let’s consider an old, and incorrect, theory of why the universe is complex. In the latter part of the nineteenth century, three physicists—James Clerk Maxwell, Ludwig Boltzmann, and Josiah Willard Gibbs—discovered that the thermodynamic quantity known as entropy was, as we’ve noted, a form of information: namely, information that isn’t known. Inspired by concepts of information, Boltzmann proposed an explanation for the order and diversity of the universe. Suppose, said Boltzmann, that the information that defines the universe resulted from a completely random process, as if each bit were determined by the toss of a coin.

  This explanation of the order and diversity of the universe is equivalent to a well-known scenario, apparently proposed by the French mathematician Émile Borel at the beginning of the twentieth century. Borel imagined a million monkeys (singes dactylographes, “typewriting monkeys”) typing at typewriters for ten hours a day. Borel pointed out that over the course of a single year, the scripts the monkeys produced could conceivably contain all of the texts shelved in the world’s richest libraries. (He then went on to dismiss the probability of this happening as infinitesimally small.)

  Borel’s image of monkeys pounding on the keys was subsequently appropriated by the British astronomer and mathematician Arthur Eddington, who settled on reproducing all the books in the British Museum. Eddington’s version was taken up by Sir James Jeans, who erroneously ascribed it to Thomas Huxley. In Huxley’s 1860 debate with Bishop Wilberforce regarding Darwin’s Origin of Species, he did indeed mention monkeys. Wilberforce asked whether Huxley was descended from a monkey on his grandfather’s or his grandmother’s side, to which Huxley responded that he would rather be descended from a monkey than from a man of great intelligence who used his gifts in the service of falsehood. But none of the contemporary reports of the debate mention monkeys plugging away at typewriters (which had barely been invented at the time).

  By the mid-twentieth century, the idea of monkeys inadvertently reproducing the world’s literature had made it into the pages of The New Yorker, in Russell Maloney’s short story “Inflexible Logic.” Typing monkeys began to proliferate in the stories of Isaac Asimov, Douglas Adams, and others. A typical monkey-typing story begins with a researcher assembling a simian team and teaching them to hit the typewriter keys. One monkey inserts a fresh sheet of paper and begins to type: “Hamlet. Act I, Scene I. . . .”

  It is certainly possible for one or another of these monkeys to type Hamlet. It is possible, as well, that the information defining the universe was created by similarly random processes. After all, if we identify heads with 1 and tails with 0, tossing a coin repeatedly will eventually produce any desired string of bits of a finite length, including a bit string that describes the universe as a whole.

  The explicit argument against the creation of long texts by completely random processes dates back more than two thousand years, to Cicero. In his De natura deorum (“On the Nature of the Gods”), Balbus the Stoic presents the following argument against the atomists (such as Democritus), who have argued that the order of nature arose out of the random collision of atoms: “I can’t but marvel that there could be anyone who can persuade themselves that solid atoms moving under the force of gravity could construct this elaborate and beautiful world out of their chance collisions. If they believe this could have happened, then I don’t understand why they shouldn’t also think that if innumerable copies of the twenty-one letters of the alphabet, made of gold or what have you, were shaken together and thrown out on the ground they could spell out the whole text of the Annals of Ennius. I doubt whether chance would succeed in spelling out a single verse!”
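  Cicero’s skepticism is easy to make quantitative. In the toy calculation below (the verse length, typing rate, and monkey count are illustrative assumptions), a verse of forty letters drawn from a twenty-one-letter alphabet is essentially unreachable by chance:

```python
# Odds that one random throw of letters spells out a given verse of
# forty letters from a twenty-one-letter alphabet. All rates and
# counts here are illustrative assumptions.

ALPHABET = 21
VERSE_LENGTH = 40

p = (1.0 / ALPHABET) ** VERSE_LENGTH
print(f"p = {p:.1e} per attempt")           # about 1e-53

attempts = 1e6 * 10 * 4e17   # a million monkeys, ten tries a second,
                             # for the ~4e17-second age of the universe
print(f"expected successes: {p * attempts:.1e}")   # about 5e-29
```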

 
