The Universe: Leading Scientists Explore the Origin, Mysteries, and Future of the Cosmos


by John Brockman


  I wrote a paper a few years ago that compared the evolutionary power of human beings to that of bacteria. The point of comparison was the number of bits per second of new genetic combinations that a population of human beings generated, compared with the number generated by a culture of bacteria. A culture of bacteria in a swimming pool of seawater has about a trillion bacteria, reproducing once every thirty minutes. Compare this with the genetic power of a small town with a few thousand people in New England—say, Peyton Place—reproducing every thirty years. Despite the huge difference in population, Peyton Place can generate as many new genetic combinations as the culture of bacteria a billion times more numerous. This assumes that the bacteria are only generating new combinations via mutation, which of course is not all they do, but for present purposes we will not discuss bacteria having sex. In daytime TV, the sexual recombination and selection happen much faster, of course.

  Sexual reproduction is a great revolution. Then, of course, there's the grandmother or granddaddy of all information-processing revolutions, life itself. The discovery, however it came about, that information can be stored and processed genetically, and that this could be used to encode functions inside an organism that can reproduce, is an incredible revolution. It happened roughly 4 billion years ago on Earth—maybe earlier, if one believes that life developed elsewhere and then was transported here. At any rate, since the universe is only 13.8 billion years old, it happened sometime in the last 13.8 billion years.

  We forgot to talk about the human brain—or should I say, my brain forgot to talk about the brain? There are many information-processing revolutions, and I'm presumably leaving out many thousands we don't even know about but which were just as important as the ones we've discussed.

  To pull a Kuhnian maneuver, the main thing I’d like to point out about these information-processing revolutions is that each one arises out of the technology of the previous one. Electronic information-processing, for instance, comes out of the notion of written language—of having zeroes and ones, the idea that you can make machines to copy and transmit information. A printing press is not so useful without written language. Without spoken language, you wouldn’t come up with written language. It’s hard to speak if you don’t have a brain. And what are brains for but to help you have sex? You can’t have sex without life. Music came from the ability to make sound, and the ability to make sound evolved for the purpose of having sex. You either need vocal cords to sing with or sticks to beat on a drum with. To make sound, you need a physical object. Every information-processing revolution requires either living systems, electromechanical systems, or mechanical systems. For every information-processing revolution, there is a technology.

  OK, so life is the big one, the mother of all information-processing revolutions. But what revolution occurred that allowed life to exist? I would claim that, in fact, all information-processing revolutions have their origin in the intrinsic computational nature of the universe. The first information-processing revolution was the Big Bang. Information-processing revolutions come into existence because at some level the universe is constructed of information. It is made out of bits.

  Of course, the universe is also made out of elementary particles, unknown dark energy, and lots of other things. I’m not advocating that we junk our normal picture of the universe as being constructed out of quarks, electrons, and protons. But in fact it’s been known ever since the latter part of the 19th century that every elementary particle, every photon, every electron, registers a certain number of bits of information. Whenever two elementary particles bounce off each other, those bits flip. The universe computes.

  The notion that the universe is, at bottom, processing information sounds like some radical idea. In fact, it’s an old discovery, dating back to Maxwell, Boltzmann, and Gibbs, the physicists who developed statistical mechanics from 1860 to 1900. They showed that in fact the universe is fundamentally about information. They, of course, called this information “entropy,” but if you look at their scientific discoveries through the lens of 20th-century technology, what in fact they discovered was that entropy is the number of bits of information registered by atoms. So in fact it’s scientifically uncontroversial that the universe at bottom is processing information. My claim is that this intrinsic ability of the universe to register and process information is actually responsible for all the subsequent information-processing revolutions.
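  To make that translation concrete, here is the standard restatement in modern units, nothing beyond what the passage already says: Boltzmann's entropy counts the number of microstates W a system can occupy, and dividing the same quantity by k_B ln 2 expresses it as a number of bits.

```latex
% Boltzmann entropy over W microstates, and the same quantity expressed in bits.
S = k_B \ln W,
\qquad
\text{bits} \;=\; \log_2 W \;=\; \frac{S}{k_B \ln 2}
```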

  How do we think of information these days? The contemporary scientific view of information is based on the theories of Claude Shannon. When Shannon came up with his fundamental formula for information, he went to the physicist and polymath John von Neumann and said, “What shall I call this?” and von Neumann said, “You’ll call it H, because that’s what Boltzmann called it,” referring to Boltzmann’s famous H-theorem. The founders of information theory were well aware that the formulas they were using had been developed back in the 19th century to describe the motions of atoms. When Shannon talked about the number of bits in a signal that can be sent down a communications channel, he was using the same formulas to describe it that Maxwell and Boltzmann had used to describe the amount of information, or the entropy, required to describe the positions and momenta of a set of interacting particles in a gas.
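  For reference, the two formulas being compared here really are the same expression, differing only by Boltzmann's constant and the base of the logarithm: Shannon's information in bits and the Gibbs entropy of statistical mechanics, taken over the same probabilities.

```latex
% Shannon information (in bits) and Gibbs entropy over the same probabilities p_i.
H = -\sum_i p_i \log_2 p_i ,
\qquad
S = -k_B \sum_i p_i \ln p_i
```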

  What is a bit of information? Let’s get down to the question of what information is. When you buy a computer, you ask how many bits its memory can register. A bit comes from a distinction between two different possibilities. In a computer, a bit is a little electric switch, which can be open or closed; or it’s a capacitor that can be charged, which is called 1, or uncharged, which is called 0. Anything that has two distinct states registers a bit of information. At the elementary-particle level, a proton can have two distinct states: spin-up or spin-down. Each proton registers one bit of information. In fact, the proton registers a bit whether it wants to or not, or whether this information is interpreted or not. It registers a bit merely by the fact of existing. A proton possesses two different states and so registers a bit.
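  Stated as a formula (a standard definition, not anything new to the argument): a system with N reliably distinguishable states registers log2 N bits, so the proton's two spin states register exactly one.

```latex
% Bits registered by N distinguishable states.
I(N) = \log_2 N, \qquad I(2) = \log_2 2 = 1 \text{ bit}
```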

  We exploit the intrinsic information-processing ability of atoms when building quantum computers, because many of our quantum computers consist of arrays of protons interacting with their neighbors, each of which stores a bit. Each proton would be storing a bit of information whether we were asking them to flip those bits or not. Similarly, if you have a bunch of atoms zipping around, they bounce off each other. Take two helium atoms in a child’s balloon. The atoms come together and they bounce off each other and then they move apart again. Maxwell and Boltzmann realized that there’s essentially a string of bits that attach to each of these atoms to describe its position and momentum. When the atoms bounce off each other, the string of bits changes, because the atoms’ momenta change. When the atoms collide, their bits flip.
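  Here is a toy version of that bookkeeping, a classical one-dimensional sketch rather than anything quantum: each atom's momentum is written out as a fixed-point bit string, and an elastic collision between equal masses exchanges the momenta, so the strings describing the two atoms change. The encoding width and scale are arbitrary choices made only for the illustration.

```python
# Toy illustration (classical, 1-D, equal masses): each atom's momentum is
# described by a string of bits; an elastic collision changes the momenta,
# so the describing bits flip. The 16-bit fixed-point encoding is arbitrary.

def momentum_bits(p, scale=1000, width=16):
    """Encode a momentum value as a fixed-point two's-complement bit string."""
    q = int(round(p * scale)) & ((1 << width) - 1)
    return format(q, f"0{width}b")

# Two helium atoms heading toward each other.
p1, p2 = 2.5, -1.0
print("before:", momentum_bits(p1), momentum_bits(p2))

# An elastic collision of equal masses in one dimension exchanges the momenta.
p1, p2 = p2, p1
print("after: ", momentum_bits(p1), momentum_bits(p2))
```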

  The number of bits registered by each atom is well known and has been quantified ever since Maxwell and Boltzmann. Each particle—for instance, each of the molecules in this room—registers something on the order of 30 or 40 bits of information as it bounces around. This feature of the universe—that it registers and processes information at its most fundamental level—is scientifically uncontroversial, in the sense that it has been known for 120 years and is the accepted dogma of physics.
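  That figure can be checked roughly with the Sackur-Tetrode formula for the translational entropy of an ideal gas, converted from natural-log units into bits. Taking nitrogen at room temperature and atmospheric pressure is an illustrative assumption, and the answer comes out in the right ballpark of a few dozen bits per molecule.

```python
# Rough check of the bits-per-molecule figure: translational entropy per
# molecule of an ideal gas (Sackur-Tetrode), converted from nats to bits.
# Nitrogen at room temperature and pressure is an illustrative choice;
# rotational degrees of freedom would add a few more bits.
import math

k_B = 1.380649e-23      # J/K
h   = 6.62607015e-34    # J*s
amu = 1.66053907e-27    # kg

T = 300.0               # K
P = 101325.0            # Pa
m = 28.0 * amu          # mass of an N2 molecule

# Thermal de Broglie wavelength and number density.
lam = h / math.sqrt(2 * math.pi * m * k_B * T)
n   = P / (k_B * T)     # molecules per cubic meter

# Sackur-Tetrode entropy per molecule: S/N = k_B * (ln(1/(n*lam^3)) + 5/2).
s_per_molecule = k_B * (math.log(1.0 / (n * lam**3)) + 2.5)

bits = s_per_molecule / (k_B * math.log(2))
print(f"~{bits:.0f} bits per molecule")   # on the order of 25-30 bits
```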

  The universe computes. My claim is that this intrinsic information-processing ability of the universe is responsible for the remainder of the information-processing revolutions we see around us, from life up to electronic computers. Let me repeat the claim: It’s a scientific fact that the universe is a big computer. More technically, the universe is a gigantic information processor capable of universal computation. That’s the definition of a computer.

  If he were here, Marvin Minsky would say, “Ed Fredkin and Konrad Zuse, back in the 1960s, claimed that the universe was a computer, a giant cellular automaton.” Konrad Zuse built one of the first programmable digital computers, in the early 1940s. He and Ed Fredkin of MIT each arrived at the idea that the universe might be a gigantic type of computer called a cellular automaton. This is an idea that has since been developed by Stephen Wolfram. The idea that the universe is some kind of digital computer is, in fact, an old claim as well.
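  To make the cellular-automaton picture tangible, here is a minimal sketch of an elementary cellular automaton, in which each cell's next state depends only on itself and its two neighbors. Rule 110 is an arbitrary but famously rich choice (it is known to be computationally universal); the grid width, step count, and wrap-around boundary are just conveniences for the illustration.

```python
# Minimal elementary cellular automaton. Rule 110 is chosen because it is
# known to be computationally universal; grid size and step count are arbitrary.
RULE = 110
WIDTH, STEPS = 64, 32

# The rule maps each (left, center, right) neighborhood to the next state.
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if x else "." for x in row))
    row = [rule_table[(row[i - 1], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
```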

  Thus, my claim that the universe computes is an old one dating back at least half a century. This claim could actually be substantiated from a scientific perspective. One could prove, by looking at the basic laws of physics, that the universe is or is not a computer, and if so, what kind of computer it is. We have very good experimental evidence that the laws of physics support computation. I own a computer, and it obeys the laws of physics, whatever those laws are. We know the universe supports computation, at least on a macroscopic scale. My claim is that the universe supports computation at its most tiny scale. We know that the universe processes information at this level, and we know that at the larger level it’s capable of doing universal computations and creating things like human beings.

  The thesis that the universe is, at bottom, a computer, is in fact an old notion. The work of Maxwell, Boltzmann, and Gibbs established the basic computational framework more than a century ago. But for some reason, the consequences of the computational nature of the universe have yet to be explored in a systematic way. What does it mean to us that the universe computes? This question is worthy of significant scientific investigation. Most of my work investigates the scientific consequences of the computational universe.

  One of the primary consequences of the computational nature of the universe is that the complexity we see around us arises in a natural way, without outside intervention. Indeed, if the universe computes, complex systems like life must necessarily arise. So describing the universe in terms of how it processes information, rather than describing it solely in terms of the interactions of elementary particles, is not some kind of empty exercise. Rather, the computational nature of the universe has dramatic consequences.

  Let’s be more explicit about why something that’s computationally capable, like the universe, must necessarily spontaneously generate the kind of complexity that’s around us. There’s a famous story, “Inflexible Logic,” by Russell Maloney, which appeared in The New Yorker in 1940, in which a wealthy dilettante hears the old saying that if you had enough monkeys typing, they would type the works of Shakespeare. Because he’s got a lot of money, he assembles a team of monkeys and a professional trainer and has them start typing. At a cocktail party, he has an argument with a Yale mathematician, who says that this is really implausible, because any calculation of the odds will show it will never happen. The gentleman invites the mathematician up to his estate in Greenwich, Connecticut, and takes him to where the monkeys have just started to write out Tom Sawyer and Love’s Labour’s Lost. They’re doing it without a single mistake. The mathematician is so upset that he kills all the monkeys. I’m not sure what the moral of this story is.

  The image of monkeys typing on typewriters is quite old. I spent a fair amount of time this summer going over the Internet and talking with various experts around the world about the origins of this story. Some people ascribe it to Thomas Huxley in his debate with Bishop Wilberforce in 1860, shortly after the appearance of On the Origin of Species. From eyewitness reports of that debate, it’s clear that Wilberforce asked Huxley from which side of his family, his mother’s or his father’s, he was descended from an ape. Huxley said, “I would rather be descended from a humble ape than from a great gentleman who uses considerable intellectual gifts in the service of falsehood.” A woman in the audience fainted when he said that. They didn’t have R-rated movies back then.

  Although Huxley made a stirring defense of Darwin’s theory of natural selection during this debate, and although he did refer to monkeys, apparently he did not talk about monkeys typing on typewriters, because, for one thing, typewriters as we know them had not yet been invented in 1860. The erroneous attribution of the image of typing monkeys to Huxley seems to have arisen because Arthur Eddington, in 1928, speculated about monkeys typing all the books in the British Museum. Subsequently Sir James Jeans ascribed the typing monkeys to Huxley.

  In fact, it seems to have been the French mathematician Émile Borel who came up with the image of typing monkeys, in 1913. Borel helped found measure theory and the modern mathematical theory of probability. He imagined a million monkeys each typing ten characters a second at random, and pointed out that these monkeys could in fact produce all the books in all the richest libraries of the world. He then went on to dismiss the probability of their doing so as infinitesimally small.
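  Borel’s dismissal is easy to reproduce. The sketch below uses his million monkeys at ten keystrokes per second; the 45-key keyboard and the particular target phrase are illustrative assumptions, and the non-overlapping-block counting makes this a deliberately rough estimate.

```python
# How small "infinitesimally small" is: the chance that random keystrokes
# reproduce a given text. Borel's million monkeys at ten keystrokes per
# second; the 45-key keyboard and the target phrase are assumptions made
# for the example, and non-overlapping blocks keep the estimate rough.
target = "To be, or not to be, that is the question"
n_keys = 45
n = len(target)                               # 41 characters

p_match = n_keys ** (-n)                      # one block of n keystrokes matches

seconds_per_year = 365.25 * 24 * 3600
keystrokes_per_year = 1_000_000 * 10 * seconds_per_year
attempts_per_year = keystrokes_per_year / n

print(f"P(one attempt matches) = {p_match:.3e}")
print(f"expected wait for the whole million-monkey team ~ "
      f"{1 / (p_match * attempts_per_year):.3e} years")
```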

  It is true that the monkeys would, in fact, type gibberish. If you plug “monkeys typing” into Google, you’ll find a website that will enlist your computer to emulate typing monkeys. The site keeps records of how many monkey-years it takes to type out the opening bits of various Shakespeare plays; the current record is 17 characters of Love’s Labour’s Lost, after 483 billion monkey-years. Monkeys typing on typewriters generate random gobbledygook.

  Before Borel, Boltzmann advanced a monkeys-typing explanation for why the universe is complex. The universe, he said, is just a big thermal fluctuation. Like the flips of a coin, the universe is, in fact, just random information. His colleagues soon dissuaded him from this position, because it’s obviously not so. If it were, then every new bit of information you got that you hadn’t received before would be random. But when our telescopes look out in space, they get new information all the time and it’s not random. Far from it: The new information they gather is full of structure. Why is that?

  To see why the universe is full of complex structure, imagine that the monkeys are typing into a computer rather than a typewriter. The computer, in turn, rather than just running Microsoft Word, interprets what the monkeys type as an instruction in some suitable computer language, like Java. Now, even though the monkeys are still typing gobbledygook, something remarkable happens. The computer starts to generate complex structures.

  At first this seems to contradict the old rule of garbage in, garbage out. But in fact there are short, random-looking computer programs that will produce very complicated structures. For example, one short, random-looking program will make the computer start proving all provable mathematical theorems. A second short, random-looking program will make the computer evaluate the consequences of the laws of physics. There are computer programs to do many things, and you don’t need a lot of extra information to produce all sorts of complex phenomena from monkeys typing into a computer.
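  As a stand-in for that point (not the theorem-proving program itself, which would take more machinery), here is a deliberately tiny program whose output is nonetheless an intricate structure: Pascal’s triangle modulo 2, the Sierpinski gasket.

```python
# A deliberately tiny, almost random-looking program whose output is an
# intricate structure (the Sierpinski gasket, via Pascal's triangle mod 2).
# It illustrates that short programs can generate complexity.
row = [1]
for _ in range(32):
    print("".join("#" if x & 1 else " " for x in row))
    row = [a + b for a, b in zip([0] + row, row + [0])]
```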

  There’s a mathematical theory called algorithmic information, which can be thought of as the theory of what happens when monkeys type into computers. This theory was developed in the early 1960s by Ray Solomonoff in Cambridge, Massachusetts; by Gregory Chaitin, then a fifteen-year-old enfant terrible who later joined IBM; and by Andrey Kolmogorov, the famous Russian mathematician. Algorithmic information theory tells you the probability of producing complex patterns from randomly programmed computers. The bottom line is that monkeys typing into computers provide a reasonable explanation for why we have complexity in our universe: they have a reasonable probability of producing almost any computable form of order that exists. In this monkey universe, you would not be surprised to see all sorts of interesting things arising. You might not get Hamlet, because something like Hamlet requires huge sophistication and the evolution of societies, and so on. But things like the laws of chemistry, or autocatalytic sets, or some kind of prebiotic form of protolife are exactly the kinds of things you would expect to see happen.
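  The central quantity of algorithmic information theory can be stated in one line. The algorithmic (Solomonoff-Levin) probability of an output x is the chance that a universal prefix-free computer U fed random bits produces x; it is dominated by the shortest program for x, whose length K(x) is the Kolmogorov complexity, so structured outputs with short programs are exponentially more probable than generic random strings.

```latex
% Algorithmic probability of output x on a universal prefix-free computer U,
% summing over programs p of length |p| that produce x. K(x) is the length
% of the shortest such program (the Kolmogorov complexity of x).
P_U(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|} \;\approx\; 2^{-K(x)}
```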

  To apply this explanation to the origin of complexity in our universe, we need two things: a computer and monkeys. We have the computer, which is the universe itself. As was pointed out a century ago, the universe registers and processes information systematically at its most fundamental level. The machinery is there to be typed on. So all you need is monkeys. Where do you get the monkeys?

  The monkeys that program our universe are supplied by the laws of quantum mechanics. Quantum mechanics is inherently chancy. You may have heard Einstein’s phrase, “God does not play dice.” In the case of quantum mechanics, Einstein was, famously, wrong: God does play dice. In fact, it is just when God plays dice that these little quantum blips, or fluctuations, get programmed into our universe.
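  As a toy picture of that chanciness (a sketch only: cosmological fluctuations are field modes, not lab qubits, and a classical random-number generator can only mimic the genuinely irreducible quantum randomness), measuring a spin prepared in an equal superposition yields a random bit with probability given by the Born rule.

```python
# Toy picture of quantum "dice": a spin prepared in the equal superposition
# (|up> + |down>)/sqrt(2) gives, on measurement, up or down with probability
# |amplitude|^2 = 1/2 each. The Born rule is simulated here with a classical
# RNG, which only mimics the real quantum randomness.
import random

amplitude_up = amplitude_down = 2 ** -0.5     # equal superposition
p_up = abs(amplitude_up) ** 2                 # Born rule: probability 0.5

bits = [1 if random.random() < p_up else 0 for _ in range(32)]
print("quantum-monkey bits:", "".join(map(str, bits)))
```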

  For example, Alan Guth has done work on how such quantum fluctuations form the seeds for the formation of large-scale structure in the universe. Why is our galaxy here rather than somewhere 100 million light-years away? It’s here because way back in the very, very, very, very early universe there was a little quantum fluctuation that made a slight overdensity of matter somewhere near here. This overdensity of matter was very tiny, but it was enough to make a seed around which other matter could clump. The structure we see, like the large-scale structure of the universe, is in fact made by quantum monkeys typing.

  We have all the ingredients, then, for a reasonable explanation of why the universe is complex. You don’t require very complicated dynamics for the universe to compute. The computational dynamics of the universe can be very simple. Almost anything will work. The universe computes. Then the universe is filled with little quantum monkeys, in the form of quantum fluctuations, that program it. Quantum fluctuations get processed by the intrinsic computational power of the universe and eventually give rise to the order that we see around us.

 
