Programming the Universe


by Seth Lloyd


  Let’s rephrase Balbus’s argument in terms of monkeys. Although the universe could have been created entirely by random flips of a coin, it is highly unlikely, given the finite age and extent of the universe. In fact, the chance of an ordered universe like ours arising out of random flips of a coin is so small as to be effectively zero. To see just how small, consider the monkeys once again. There are about fifty keys on a standard typewriter keyboard. Even ignoring capitalization, the chance of a monkey typing “h” is one in fifty. The probability of typing “ha” is one-fiftieth of one in fifty, or 1 in 2,500. The probability of typing “ham” is one in fifty times fifty times fifty, or 1 in 125,000. The probability of a monkey typing out a phrase with twenty-two characters is one divided by fifty raised to the twenty-second power, or about 10⁻³⁸. It would take a billion billion monkeys, each typing ten characters per second, for each of the roughly billion billion seconds since the universe began, just to have one of them type out “hamlet. act i, scene i.”
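
  For readers who want to check the arithmetic, here is a minimal sketch in Python, not from the book, that reproduces the numbers above; the fifty-key alphabet, the twenty-two-character phrase, and the monkey-and-second counts are taken straight from the text, and only the bookkeeping is mine.

```python
# Back-of-the-envelope check of the monkey arithmetic in the passage above.
KEYS = 50                 # keys on a standard typewriter keyboard
PHRASE_LEN = 22           # length of "hamlet. act i, scene i."

p = KEYS ** -PHRASE_LEN   # chance that one 22-character attempt matches
print(f"probability per attempt: {p:.1e}")            # ~4.2e-38, i.e. about 10^-38

# a billion billion monkeys x ten characters per second x a billion billion
# seconds, counting every keystroke as the start of a fresh attempt
attempts = 10**18 * 10 * 10**18
print(f"characters typed in total: {attempts:.1e}")   # 1.0e+37
print(f"expected successes: {p * attempts:.2f}")      # ~0.42: about one, at best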

  By way of practical example, a popular Web site, http://user.tninet.se/~ecf599g/aardasnails/java/Monkey/webpages/, enlists your computer as a “monkey” in an attempt to reproduce passages from Shakespeare at random. The record as of this writing is the first twenty-four letters of Henry IV, Part 2, typed after 2,737,850 million billion billion billion monkey years.

  The combination of very small probabilities together with the finite age and extent of the visible universe makes the completely random generation of order extremely unlikely. If the universe were infinite in age or extent, then sometime or somewhere every possible pattern, text, and string of bits would be generated. Even in an infinite universe, however, Boltzmann’s argument fails. If the order we see were generated completely at random, then whenever we obtained new bits of information, they too would be highly likely to be random. But this is not the case: new bits revealed by observation are rarely wholly random. If you question this statement, just go to the window and look out, or pick up an apple and bite into it. Either action will reveal new, but non-random bits.

  Here’s another example: In astronomy, new galaxies and other cosmic structures, such as quasars, are constantly swimming into view. If the argument for complete randomness were true, then as new objects swam into view they would reveal completely random arrangements of matter—a sort of cosmic slush—rather than the quasars and ordered, if mysterious, objects that we do, in fact, see.

  In short, Boltzmann’s explanation of order is not impossible. But it is hugely improbable.

  Just for the fun of it, let’s see how much of Hamlet could have been generated by random processes since the universe began. The universe is full of photons—particles of light left over from the Big Bang. There are about 10⁹⁰ photons, and each photon registers a few random bits. If we interpret those bits as characters in English, then somewhere out there is a bunch of photons that reads “Hamlet, Act I, Scene I. Enter Barnardo and Francisco.” Even if we imagine that every elementary particle is a monkey that has been typing at the maximum rate allowed by the laws of physics since the universe began, the longest initial piece of Hamlet that could have been generated by random typing is “Hamlet, Act I, Scene I. Enter Barnardo and Francisco. Barnardo: Who’s there?”

  Just to create the first few lines of Hamlet by a fully random process such as monkeys typing would take the entire computational resources of the universe. To create anything more complicated by a random process would require greater computational resources than the universe possesses.

  Boltzmann was wrong: the universe is not completely random. However, this does not mean that Cicero’s Balbus was right. He was wrong, too. The existence of complex and intricate patterns does not require that these patterns be produced by a complex and intricate machine or intelligence. I’ll say it again: Computers are simple machines. They operate by performing a small set of almost trivial operations, over and over again. But despite their simplicity, they can be programmed to produce patterns of any desired complexity, and the programs that produce these patterns need not possess any apparent order themselves: they can be random sequences of bits. The generation of random bits does play a key role in the establishment of order in the universe, just not as directly as Boltzmann imagined.

  The universe contains random bits whose origins can be traced back to quantum fluctuations in the wake of the Big Bang. We have seen how these random bits can serve as “seeds” of future detail ranging from the positions of galaxies to the locations of mutations in DNA. These random bits, introduced by quantum mechanics, in effect programmed the later behavior of the universe.

  Back to the monkeys. This time, instead of having the monkeys type random sequences of characters into typewriters, let’s have them type random sequences into a computer. (The image of monkeys typing away at computers is ubiquitous, at least in cyberspace. I first heard about it from Charles Bennett and Gregory Chaitin of IBM in the 1980s.) For example, let’s say we sit a monkey down at a PC, and tell the computer that the typescript is a program in a computer language, such as Java. The computer then interprets the monkey’s random output not as a text, but as a computer program—that is, as a sequence of instructions in a particular computer language.
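
  To make the thought experiment concrete, here is a rough sketch of it in code. The book imagines the typescript being read as Java; the sketch below, which is mine and not the book’s, uses Python instead: it generates random strings of keystrokes, hands each one to the interpreter, and counts how many run at all.

```python
import random
import string

# A roughly fifty-key "keyboard": letters, digits, and a handful of
# punctuation marks a programming language might care about.
KEYS = string.ascii_lowercase + string.digits + " \n()+-/=.:'\""

def monkey_typescript(length: int) -> str:
    """Produce one random typescript of the given length."""
    return "".join(random.choice(KEYS) for _ in range(length))

def runs_as_program(source: str) -> bool:
    """Interpret the typescript as Python and report whether it runs at all."""
    try:
        exec(compile(source, "<monkey>", "exec"), {})
        return True
    except Exception:      # almost always a SyntaxError: garbage in, garbage out
        return False

random.seed(0)             # fixed seed so the experiment is repeatable
TRIALS, LENGTH = 100_000, 10
survivors = sum(runs_as_program(monkey_typescript(LENGTH)) for _ in range(TRIALS))
print(f"{survivors} of {TRIALS} random {LENGTH}-character typescripts ran without error")
```

  With only ten keystrokes, the survivors are almost all trivial: a bare run of digits, a quoted string, a little arithmetic on literals. But the key move of the argument is visible in the sketch: the very same random keystrokes that a typewriter would merely reproduce are treated by the computer as instructions to be carried out.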

  What happens when the computer tries to execute this random program? Most of the time, it will become confused and stop, issuing an error message. Garbage in, garbage out. But some short computer programs—and thus, programs with a relatively high probability of being randomly generated—actually have interesting outputs. For example, a few lines of code will make the computer start outputting all the digits of pi (π). Another short program will make the computer produce intricate fractals. Another short program will cause it to simulate the standard model of elementary particles. Another will make it simulate the early moments of the Big Bang. Yet another will allow the computer to simulate chemistry. And still another will start the computer off toward proving all possible mathematical theorems.
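
  The pi example is easy to make concrete. The few lines below, a standard “unbounded spigot” routine in Python rather than anything from the book, will keep producing decimal digits of pi for as long as you let them run: a tiny program with an endless, perfectly ordered output.

```python
def pi_digits():
    """Yield the decimal digits of pi one at a time (Gibbons's unbounded spigot)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n        # the next decimal digit is ready
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(15)))  # 314159265358979
```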

  Why do computers generate interesting results from short programs? A computer can be thought of as a device that generates patterns: Any conceivable pattern that can be described in language can be generated by a computer. The key difference between monkeys typing into typewriters and monkeys typing into computers is that in the latter case the random bits they generate are interpreted as instructions.

  As we’ve illustrated, almost nothing a monkey types exhibits a pattern in and of itself. When a monkey types random strings into a typewriter, all the typewriter does is faithfully reproduce those patternless strings. But when the monkey types into a computer, the computer interprets those patternless strings as instructions and uses them as a basis for constructing patterns.

  Quantum mechanics supplies the universe with “monkeys” in the form of random quantum fluctuations, such as those that seeded the locations of galaxies. The computer into which they type is the universe itself. From a simple initial state, obeying simple physical laws, the universe has systematically processed and amplified the bits of information embodied in those quantum fluctuations. The result of this information processing is the diverse, information-packed universe we see around us: programmed by quanta, physics gave rise first to chemistry and then to life; programmed by mutation and recombination, life gave rise to Shakespeare; programmed by experience and imagination, Shakespeare gave rise to Hamlet. You might say that the difference between a monkey at a typewriter and a monkey at a computer is all the difference in the world.

  Part 2

  A CLOSER LOOK

  CHAPTER 4

  Information and Physical Systems

  Information Is Physical

  By now you know that the central theme of this book is that all physical systems register and process information, and that by understanding how the universe computes, we can understand why it is complex. So, when did the realization that all physical systems register and process information—something previously thought of as nonphysical—come about? The scientific study of information and computation originated in the 1930s and underwent explosive growth in the last half of the twentieth century. But the realization that information is a fundamental physical quantity predated the scientific study of either information or computation. By the end of the nineteenth century, it had been well established that all physical systems register a definable quantity of information and that their dynamics transform and process that information. In particular, the physical quantity known as entropy came to be seen as a measure of information registered by the individual atoms that make up matter.

  The great nineteenth-century statistical physicists James Clerk Maxwell in the United Kingdom, Ludwig Boltzmann in Austria, and Josiah Willard Gibbs in the United States derived the fundamental formulas of what would go on to be called “information theory,” and used them to characterize the behavior of atoms. In particular, they applied these formulas to justify the second law of thermodynamics.

  As noted, the first law of thermodynamics is a statement about energy: energy is conserved when it is transformed from mechanical energy to heat. The second law of thermodynamics, however, is a statement about information, and about how it is processed at the microscopic scale. The law states that entropy (which is a measure of information) tends to increase. More precisely, it states that each physical system contains a certain number of bits of information—both invisible information (or entropy) and visible information—and that the physical dynamics that process and transform that information never decrease that total number of bits.
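
  Stated in symbols, using my notation rather than the book’s, the accounting looks like this:

```latex
% A system's bits divide into visible (accessible) information and invisible
% information, the entropy, and the dynamics never decrease the total.
\[
  I_{\mathrm{total}} = I_{\mathrm{visible}} + I_{\mathrm{invisible}},
  \qquad
  I_{\mathrm{total}}(t_2) \ge I_{\mathrm{total}}(t_1) \quad \text{for } t_2 > t_1 .
\]
```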

  Although it is more than a century and a half old, the second law of thermodynamics remains a subject of scientific controversy. Almost no scientist doubts its truth, but many disagree as to why it is true. The computational nature of the universe can resolve at least part of this controversy. Properly understood, the second law of thermodynamics arises from the interplay between “visible information,” the information we have access to about the state of matter, and “invisible information,” the bits of entropy—no less physical—that are registered by the atoms forming that matter.

  Origins of the Computational Model

  My undergraduate curriculum at Harvard went by the name “General Education.” In practice, this seemed to mean that if I could talk my way into a course, then it was part of my curriculum. Accordingly, with the blessing—or, at any rate, the signature—of my advisor, the Nobel laureate Sheldon Glashow, I designed my undergraduate physics curriculum around Robert Fitzgerald’s courses on prosody and on Homer, Virgil, and Dante, supported by Leon Kirchner’s course on chamber music performance and I. Bernard Cohen’s graduate seminar on the Influences of the Physical Sciences on the Social. Glashow also insisted that I take some physics.

  Two courses I took sent me down the path that would lead to the computational model of the universe. The first was Michael Tinkham’s course in statistical mechanics, the remarkable synthesis of quantum mechanics (the physics of atoms and molecules) and thermodynamics (the study of heat and work). As a science, statistical mechanics began in the last years of the nineteenth century and has led to lasers, lightbulbs, and transistors, to name just a few of its consequences. The primary message of Tinkham’s course was that the thermodynamic quantity called entropy—known as a measure of the heat energy that can’t be turned into mechanical energy in a closed thermodynamic system—can also be understood as a measure of information.

  Entropy (from the Greek for “in turning”) was first defined by Rudolf Clausius in 1865 as a mysterious thermodynamic quantity that limits the power of steam engines. Heat has lots of entropy. Engines that run off of heat, like steam engines, have to do something with that entropy; typically they get rid of it in the form of exhaust. They can’t turn all of the energy in heat into useful work. Entropy, said Clausius, tends to increase.
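
  In modern textbook notation, which is standard physics rather than anything taken from the book, Clausius’s definition and the constraint it places on engines read:

```latex
% Clausius's definition of entropy change, for heat delta-Q absorbed
% reversibly at absolute temperature T:
\[
  dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\]
% and his statement that entropy tends to increase:
\[
  \Delta S \ge 0 \quad \text{for an isolated system.}
\]
% The cost to engines: a heat engine running between a hot reservoir at
% temperature T_h and a cold one at T_c can turn at most the fraction
\[
  \eta_{\max} = 1 - \frac{T_c}{T_h}
\]
% of the heat it takes in into useful work; the rest leaves as exhaust.
```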

  At the end of the nineteenth century, the founders of statistical mechanics—Maxwell, Boltzmann, and Gibbs—realized that entropy was also a form of information: entropy is a measure of the number of bits of unavailable information registered by the atoms and molecules that make up the world. The second law of thermodynamics comes about, then, by combining this notion with the fact that the laws of physics preserve information, as we will soon discuss. Nature does not destroy bits.
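
  The link they found fits in a line or two of standard notation (mine, not the book’s):

```latex
% Boltzmann's relation: entropy counts the number W of microscopic
% arrangements of the atoms consistent with what we can observe,
\[
  S = k_B \ln W ,
\]
% so the "unavailable" information registered by those atoms is
\[
  \frac{S}{k_B \ln 2} = \log_2 W \quad \text{bits.}
\]
```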

  But surely it takes an infinite number of bits of entropy to specify the positions and velocities of even a single atom exactly, my class objected. Not so, said Tinkham. The laws of quantum mechanics, which govern the microscopic behavior of physical systems, ensure that atoms and molecules register a finite amount of information.

  Hot bits! This was great stuff, even if I didn’t fully understand it. All physical systems can be characterized in terms of information, and Maxwell, Boltzmann, and Gibbs had figured this out fifty years before the word “bit” was even invented! But what about this quantum mechanics? Clearly I needed to know more. So I took Norman Ramsey’s introductory course on quantum mechanics. Ramsey is one of the world’s most expert quantum-mechanical masseurs. He developed many of the techniques used today to convince atoms and molecules to give up their energy and their secrets, techniques for which he went on to win a Nobel Prize.

  But what was plain to Ramsey about quantum mechanics remained opaque to me. How could it be, for example, that an electron can be in two places at once? Ramsey assured us through detailed experimental data that not only was an electron allowed to be in many places at the same time, it was in fact required to be there (and there, and there, and there). Perhaps the early hour in the lecture hall, lit only by the glow of a transparency projector, had induced a trancelike state—but I didn’t get it. I would not awaken from this particular trance until years later, when I was working for Ramsey at the Institut Laue-Langevin in Grenoble, France, on his experiment to measure separation of electric charge inside the neutron.

  The neutron and its charged partner, the proton, are the particles that make up the nuclei of atoms. Neutrons and protons are in turn made up of electrically charged particles called quarks. The separation of electric charge that Ramsey wanted to measure corresponded to a distance of one billion billion billionth of a meter between the quarks within the neutron, a distance smaller, relative to the size of the neutron, than the size of the neutron, relative to us. The experiment involved taking neutrons from a nuclear reactor, cooling them down until they were moving at walking pace (in the final cooling stage, the neutrons were made to run uphill until they were exhausted and almost came to a halt), subjecting them to electric and magnetic fields, and then “massaging” them into a state in which they would reveal their secrets.

  As you may guess, you have to massage a neutron very sensitively for it to reveal anything at all. Everything has to go right in such an experiment, or nothing happens. Our neutrons were fickle, and no matter how many times we polished their electrodes and pumped out their vacuum, they refused to talk to us. In this slack time, Ramsey assigned to me a simple calculation based on the neutron spinning both clockwise and counterclockwise at the same time, all the while conversing with the particles of light around it. Perhaps it was because this was the first time I had ever been asked to do a calculation for a real experiment, or perhaps it was because Ramsey snapped his fingers, but I awoke from my trance. Neutrons, I saw, had to spin clockwise and counterclockwise at the same time. They had no choice: it was in their nature. The language that neutrons spoke was not the ordinary language of yes or no, it was yes and no at once. If I wanted to talk to neutrons and have them talk back, I had to listen when they said yes and no at the same time. If this sounds confusing, it is. But I had finally learned my first words in the quantum language of love. You will learn to say a few words in this language yourself in the next chapter.

  Michael Tinkham’s course on statistical mechanics taught me that physical objects might be thought of as being made of information. Ramsey’s course on quantum mechanics taught me how the laws of physics governed the way in which that information was represented and processed. Most of the scientific work I have done since I took those courses has revolved around the interplay between physics and information. The computational nature of the universe itself arises from this interplay.

  The Atomic Hypothesis

  The mathematical theory of information was developed in the middle of the twentieth century by Harry Nyquist, Claude Shannon, Norbert Wiener, and others. These researchers used mathematical arguments to derive formulas for the number of bits of information that could reliably be sent down communication channels such as telephone lines. When Shannon showed his new formula for information to the mathematician John von Neumann and asked him what the quantity he had just defined should be called, von Neumann is said to have replied, “H.”

  “Why H?” asked Shannon.

  “Because that’s what Boltzmann called it,” said von Neumann. The basic formulas for information theory had already been derived by Maxwell, Boltzmann, and Gibbs.
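
  The coincidence von Neumann had in mind is easy to display; in standard notation, Shannon’s measure and the Gibbs-Boltzmann entropy are the same expression up to a constant:

```latex
% Shannon's measure of information for outcomes with probabilities p_i:
\[
  H = -\sum_i p_i \log_2 p_i \quad \text{(in bits)},
\]
% Gibbs's statistical-mechanical entropy for microstates with the same
% probabilities:
\[
  S = -k_B \sum_i p_i \ln p_i = (k_B \ln 2)\, H .
\]
```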

  To understand what information has to do with atoms, look at the origins of the atomic hypothesis. The ancient Greeks postulated that all matter was made of atoms (the Greek atomos meant “unsplittable”). The atomic hypothesis was based on an aesthetic notion: distaste for the infinite. The ancients simply did not want to believe that you could keep subdividing matter into ever smaller pieces. Isaac Newton’s and Gottfried Wilhelm Leibniz’s invention of calculus in the seventeenth century, however, provided mathematical methods for dealing with the infinitely small, and early attempts to describe solids, liquids, and gases mathematically modeled them as continuous substances that could be subdivided an infinite number of times. The power and elegance of calculus, together with the lack of direct evidence for the existence of atoms, made for scientific theories based on the continuum. But by the second half of the nineteenth century, observational evidence had begun to indicate that, as proposed by the atomic hypothesis, matter might indeed be made up of very small, discrete chunks, rather than being continuous.

 
