
Programming the Universe


by Seth Lloyd


  Computers date back to the early days of Homo sapiens. Like the first tools, the first computers were rocks. “Calculus” is the Latin word for pebble, and the first calculations were performed by arranging and rearranging just that. And rock computers didn’t have to be small. Stonehenge may well have been a big rock computer for calculating the relations between the calendar and the arrangement of the planets.

  The technology used in computing puts intrinsic limits on the computations that can be performed (think rocks versus an Intel Pentium IV). Rock computers are good for counting, adding, and subtracting, but they aren’t so good at multiplying and dividing. And to deal with large numbers, you need a lot of rocks. Several thousand years ago, someone had the bright idea of combining rocks with wood: If you kept the rocks in grooves on a wooden table, they were easier to move back and forth. Then it was discovered that if you used beads instead of rocks and mounted them on wooden rods, the beads were not only easy to move back and forth but also hard to lose.

  The wooden computer, or abacus, is a powerful calculating tool. Before the invention of electronic computers, a trained abacus operator could out-calculate a trained adding-machine operator. But the abacus is not merely a convenient machine for manipulating pebbles. It embodies a powerful mathematical abstraction: zero. The concept of zero is the crucial piece of the Arabic number system—a system allowing arbitrarily large numbers to be represented and manipulated with ease—and the abacus is its mechanical incorporation. But which came first? Given the origin of the word for zero and the antiquity of the first abacus, it seems likely that the machine did.1 Sometimes, machines make ideas.

  Ideas also make machines. First rock, then wood: what material would supply the next advance in information processing? Bone. In the early seventeenth century, the Scottish mathematician John Napier discovered a way of changing the process of multiplication into addition. He carved ivory into bars, ruled marks corresponding to numbers on the bars, and then performed multiplication by sliding the bars alongside each other until the marks corresponding to the two numbers lined up. The total length of the two bars together then gave the product of the two numbers. The slide rule was born.

  In the beginning of the nineteenth century, an eccentric Englishman named Charles Babbage proposed making computers out of metal. Babbage’s “Difference Engine,” which was meant to calculate various trigonometric and logarithmic tables, was to be built out of gears and shafts, like a steam engine. Each gear would register information by its position, and then process that information by meshing with other gears and turning. Though entirely mechanical in its construction, the way Babbage’s machine organized information foreshadowed the way contemporary electronic computers organize information. It was designed with a central processing unit and a memory bank that could hold both program and data.

  Despite generous financial support from the British crown, Babbage’s plan to build a Difference Engine failed. Early-nineteenth-century technology possessed neither sufficiently precise machining techniques nor sufficiently hard alloys to construct the engine’s gears and shafts. (The effort wasn’t wasted, however: the development by Babbage’s technicians of more precise machining techniques and harder alloys significantly accelerated the ongoing Industrial Revolution.) Although effective mechanical calculators were available by the end of the nineteenth century, large-scale working computers had to await the development of electronic circuit technology in the beginning of the twentieth.

  By 1940, an international competition had arisen between various groups to construct computers using electronic switches such as vacuum tubes or electromechanical relays. The first simple electronic computer was built by Konrad Zuse in Germany in 1941, followed by large-scale computers in the United States and Great Britain later in the 1940s. These consisted of several rooms’ worth of vacuum tubes, switching circuits, and power supplies, but they were puny in terms of their computational power—about a million times less powerful than the computer on which this book is being written.

  Though costly, the initial electronic computers were useful enough that efforts were made to streamline them. In the 1960s, vacuum tubes and electromechanical relays were replaced by transistors, semiconductor switches that were smaller and more reliable and required less energy. A semiconductor is a material such as silicon that conducts electricity better than insulators such as glass or rubber, but less well than conductors such as copper. Starting in the late 1960s, the transistors were made still smaller by etching them on silicon-based integrated circuits, which collected all the components needed to process information on one semiconductor chip.

  Since the 1960s, advances in photolithography—the science of engineering ever more detailed circuits—have halved the size of the components of integrated circuits every eighteen months or so. As a result, computers have doubled in power at the same rate, the phenomenon known as Moore’s law. Nowadays, the wires in the integrated circuits in a run-of-the-mill computer are only 1,000 atoms wide.

  For future reference, let me define some of the types of computers to which I will refer. A digital computer is a computer that operates by applying logic gates to bits; a digital computer can be electronic or mechanical. A classical computer is a computer that computes using the laws of classical mechanics. A classical digital computer is one that computes by performing classical logical operations on classical bits. An electronic computer is one that computes using electronic devices such as vacuum tubes or transistors. A digital electronic computer is a digital computer that operates electronically. An analog computer is one that operates on continuous signals as opposed to bits; it gets its name because such a computer is typically used to construct a computational “analog” of a physical system. Analog computers can be electronic or mechanical. A quantum computer is one that operates using the laws of quantum mechanics. Quantum computers have both digital and analog aspects.

  Logic Circuits

  What are our ever more powerful computers doing? They are processing information by breaking it up into its component bits and operating on those bits a few at a time. As noted earlier, the information to be processed is presented to the computer in the form of a program, a set of instructions in a computer language. The program is encoded in the computer’s memory as a sequence of bits. For example, the command PRINT is written in ASCII code as P = 1010000 R = 1010010 I = 1001001 N = 1001110 T = 1010100. The computer looks at the program a few bits at a time, interprets the bits as an instruction, and executes the instruction. Then it looks at the next few bits and executes that instruction. And so on. Complicated procedures can be built up from sets of simple instructions—but there’s more on that to come.
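  The ASCII encoding of PRINT given above can be reproduced in a few lines of Python. This is a modern illustration of the bit-encoding idea, not a depiction of how any particular early machine stored its programs:

```python
# Encode each character of the command PRINT as its 7-bit ASCII code.
command = "PRINT"
for c in command:
    print(c, "=", format(ord(c), "07b"))
# P = 1010000, R = 1010010, I = 1001001, N = 1001110, T = 1010100
```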

  Conventional computers consist primarily of electronic circuits that physically implement “logic circuits.” Logic circuits allow a complicated logical expression to be built up out of simple operations that act on just a few bits at a time. Physically, logic circuits consist of bits, wires, and gates. Bits, as we have seen, can register either 0 or 1; wires move the bits from one place to another; gates transform the bits one or two at a time.

  For example, a NOT gate takes its input bit and flips it: that is, NOT transforms 0 into 1 and 1 into 0. A COPY gate makes a copy of a bit: it transforms an input bit 0 into two output bits 00 and an input bit 1 into two output bits 11. An AND gate takes two input bits and produces a single output bit equal to 1 if, and only if, both input bits are equal to 1; otherwise it produces the output 0. An OR gate takes two input bits and produces an output bit equal to 1 if one or both of the input bits is equal to 1; if both input bits are equal to 0, then it produces the output 0. Since the 1854 publication of An Investigation of the Laws of Thought by the logician George Boole of Queen’s College, Cork, it has been known that any desired logical expression, including complex mathematical calculations, can be built up out of NOT, COPY, AND, and OR. They make up a universal set of logic gates.
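  The behavior of these four gates can be captured directly as functions on bits, here sketched in Python with 0 and 1 standing in for the two states of a physical bit:

```python
# The four gates described in the text, modeled as functions on bits (0 or 1).
def NOT(a):    return 1 - a        # flips the bit
def COPY(a):   return (a, a)       # one input bit, two identical output bits
def AND(a, b): return a & b        # 1 only if both inputs are 1
def OR(a, b):  return a | b        # 0 only if both inputs are 0
```

Checking each function against its truth table confirms it matches the verbal description: AND(1, 1) is 1 but AND(1, 0) is 0, while OR(0, 0) is 0 but OR(0, 1) is 1.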

  Figure 3. Logic Gates

  Logic gates are devices that take one or more input bits and transform them into one or more output bits. Clockwise from upper left: OR gate, AND gate, NOT gate, and COPY gate.

  Boole’s Laws of Thought imply that any logical expression or computation can be encoded as a logic circuit. A digital computer is a computer that operates by implementing a large logic circuit consisting of millions of logic gates. Familiar computers such as Macintoshes and PCs are electronic realizations of digital computers.

  Figure 4. Logic Circuits

  AND, OR, NOT, and COPY gates can be wired together to make logic circuits. A logic circuit can perform more complicated transformations of its input bits.
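  The idea of wiring gates into a circuit can be sketched in Python. Here XOR ("one or the other, but not both"), which is not in the basic set, is built from AND, OR, and NOT, and the result is used to add two bits; the function names are illustrative, not drawn from the book:

```python
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def XOR(a, b):
    # "a or b, but not both": OR(a, b) AND NOT(AND(a, b))
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Adds two bits: XOR gives the sum bit, AND gives the carry bit.
    return XOR(a, b), AND(a, b)

# half_adder(1, 1) gives (0, 1): sum 0, carry 1, i.e. 1 + 1 = 10 in binary.
```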

  In an electronic computer, bits are registered by electronic devices such as capacitors. A capacitor is like a bucket that holds electrons. To fill the bucket, a voltage is applied to the capacitor. A capacitor at zero voltage has no excess electrons and is said to be uncharged. An uncharged capacitor in a computer registers a 0. A capacitor at non-zero voltage holds lots of excess electrons and registers a 1.

  Capacitors are not the only electronic devices that computers use to store information. In your computer’s hard drive, bits are registered by tiny magnets: a magnet whose north pole points up registers a 0 and a magnet whose north pole points down registers a 1. As always, any device that has two reliably distinguishable states can register a bit.

  In a conventional digital electronic computer, logic gates are implemented using transistors. A transistor can be thought of as a switch. When the switch is open, current can’t flow through it. When the switch is closed, current flows through. A transistor has two inputs and one output. In an n-type transistor, when the first input is kept at a low voltage the switch is open and current can’t flow from the second input to the output; raising the voltage on the first input allows current to flow. In a p-type transistor, when the first input is kept at low voltage the switch is closed, so current can flow from the second input to the output; n- and p-type transistors can be wired together to create AND, OR, NOT, and COPY gates.

  When a computer computes, all it is doing is applying logic gates to bits. Computer games, word processing, number crunching, and spam all arise out of the electronic transformation of bits, one or two at a time.

  Uncomputability

  Up to this point, we have emphasized the underlying simplicity of information and computation. A bit is a simple thing; a computer is a simple machine. But this doesn’t mean that computers are incapable of complex and sophisticated behavior. One counterintuitive result of a computer’s fundamentally logical operation is that its future behavior is intrinsically unpredictable. The only way to find out what a computer will do once it has embarked upon a computation is to wait and see what happens.

  In the 1930s, the Austrian logician Kurt Gödel showed that any sufficiently powerful mathematical theory has statements that, if false, would render the theory inconsistent but that cannot be proved to be true. All sufficiently powerful systems of logic contain unprovable statements. The computational analog of an unprovable statement is an uncomputable quantity.

  A well-known problem whose answer is uncomputable is the so-called halting problem: Program a computer. Set it running. Does the computer ever halt and give an output? Or does it run forever? There is no general procedure to compute the answer to this question. That is, no computer program can take as input another computer program and determine with 100 percent probability whether the first computer program halts or not.

  Of course, for many programs, you can tell whether or not the computer will halt. For example, the program PRINT 1,000,000,000 clearly halts: A computer given this program as input prints 1,000,000,000, then halts. But as a rule, no matter how long a computer has gone on computing without halting, you cannot conclude that it will never halt.
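  Turing's argument for why no such general procedure can exist can be sketched in code. The function halts below is hypothetical and deliberately unimplemented; the sketch shows why any claimed implementation would contradict itself:

```python
def halts(program, argument):
    """Hypothetical universal halting tester -- assumed, not real.
    Turing showed no such general procedure can be written."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever halts() predicts for program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop forever, so halt

# Running contrary on itself forces halts() into a contradiction:
# whichever answer it gives about contrary(contrary), contrary does
# the opposite, so the answer is wrong and halts() cannot exist.
```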

  Although it may sound abstract, the halting problem has many practical consequences. Take, for example, the debugging of computer programs. Most computer programs contain “bugs” or errors that make the computer behave in unexpected ways, e.g., crash. It would be useful to have a “universal debugger” for computer programs. Such a debugger would take as input the computer program, together with a description of what the program is supposed to accomplish, and then check to see that the program does what it is supposed to do. Such a debugger cannot exist.

  The universal debugger is supposed to verify that its input program gives the right output. So the first thing a universal debugger should check is whether the input program has any output at all. But to verify that the program gives an output, the universal debugger needs to solve the halting problem. That it cannot do. The only way to determine if the program will halt is to run it and see, and at that point, we no longer need the universal debugger. So the next time a bug freezes your computer, you can take solace in the deep mathematical truth that there is no systematic way to eliminate all bugs. Or you can just curse and reboot.

  Gödel showed that the capacity for self-reference leads automatically to paradoxes in logic; the British mathematician Alan Turing showed that self-reference leads to uncomputability in computers. It is tempting to identify similar paradoxes in how human beings function. After all, human beings are masters of self-reference (some humans seem capable of no other form of reference) and are certainly subject to paradox.

  Humans are notoriously unable to predict their own future actions. This is an important feature of what we call free will. “Free will” refers to our apparent freedom to make decisions. For example, when I sit down in a restaurant and look at the menu, I and only I decide what I will order, and before I decide, even I don’t know what I will choose. That is, our own future choices are inscrutable to ourselves. (They may not, of course, be inscrutable to others. For years my wife and I would go for lunch to Josie’s in Santa Fe. I, after spending a long time scrutinizing the menu, would always order the half plate of chiles rellenos, with red and green chile, and posole instead of rice. I felt strongly that I was exercising free will: until I chose the rellenos half plate, I felt anything was possible. My wife, however, knew exactly what I was going to order all the time.)

  The inscrutable nature of our choices when we exercise free will is a close analog of the halting problem: once we set a train of thought in motion, we do not know whether it will lead anywhere at all. Even if it does lead somewhere, we don’t know where that somewhere is until we get there.

  Ironically, it is customary to assign our own unpredictable behavior and that of other humans to irrationality: were we to behave rationally, we reason, the world would be more predictable. In fact, it is just when we behave rationally, moving logically, like a computer, from step to step, that our behavior becomes provably unpredictable. Rationality combines with self-reference to make our actions intrinsically paradoxical and uncertain.

  This lovely inscrutability of pure reason harks back to an earlier account of the role of logic in the universe. From his home in Cordova, the twelfth-century Muslim philosopher Averroës (Ibn Rushd) in his studies of Aristotle concluded that what is immortal in human beings is not their soul but their capacity for reason. Reason is immortal exactly because it is not specific to any individual; instead, it is the common property of all reasoning beings.

  Computers certainly possess the ability to reason and the capacity for self-reference. And just because they do, their actions are intrinsically inscrutable. Consequently, as they become more powerful and perform a more varied set of tasks, computers exhibit an unpredictability approaching that of human beings. Indeed, by Averroës’s standards, they possess the same degree of immortality as humans.

  Programming computers to perform simple human tasks is difficult: getting a computerized robot to vacuum a room or empty a dishwasher, even to minimal standards, is a problem that has outstripped the abilities of several generations of researchers in artificial intelligence. By contrast, no special effort is required to program a computer to behave in unpredictable and annoying ways. When it comes to their capacity to screw things up, computers are becoming more human every day.2

  CHAPTER 3

  The Computational Universe

  The Story of the Universe: Part One

  The universe is made of atoms and elementary particles, such as electrons, photons, quarks, and neutrinos. Although we will soon delve into a vision of the universe based on a computational model, we would be foolish not to first explore the stunning discoveries of cosmology and elementary-particle physics. Science already affords us excellent ways of describing the universe in terms of physics, chemistry, and biology. The computational universe is not an alternative to the physical universe. The universe that evolves by processing information and the universe that evolves by the laws of physics are one and the same. The two descriptions, computational and physical, are complementary ways of capturing the same phenomena.

  Of course, humans have been speculating about the origins of the universe far longer than they have been dabbling in modern science. Telling stories about the universe is as old as telling stories. In Norse mythology, the universe begins when a giant cow licks the gods out of the salty lip of a primordial pit. In Japanese mythology, Japan arises from the incestuous embrace of the brother and sister gods Izanagi and Izanami. In one Hindu creation myth, all creatures rise out of the clarified butter obtained from the sacrifice of the thousand-headed Purusha. More recently, though, over the last century or so, astrophysicists and cosmologists have constructed a detailed history of the universe supported by observational evidence.

  The universe began a little less than 14 billion years ago, in a huge explosion called the Big Bang. As it expanded and cooled down, various familiar forms of matter condensed out of the cosmic soup. Three minutes after the Big Bang, the building blocks for simple atoms such as hydrogen and helium had formed. These building blocks clumped together under the influence of gravity to form the first stars and galaxies 200 million years after the Big Bang. Heavier elements, such as iron, were produced when these early stars exploded in supernovae. Our own sun and solar system formed 5 billion years ago, and life on Earth was up and running a little over a billion years later.

 
