The Universe: Leading Scientists Explore the Origin, Mysteries, and Future of the Cosmos


by John Brockman


  Biologists often emphasize the part that computers will play, and it’s true that computers will be indispensable, but there’s a third leg, which is good theoretical ideas. It won’t be enough to have big computers and great data. You need ideas, and I think those ideas will be expressed in the language of complex-systems mathematics. Although that phrase, “complex systems,” gets talked about a lot, I hope people out there appreciate what a feeble state the field is in, theoretically speaking. We really don’t understand much about it. We have a lot of computer simulations that show stunning phenomena, but where’s the understanding? Where’s the insight? There are very few cases that we understand, and that brings me back to synchrony. I like synchrony as an example of spontaneous order because it’s one of the few cases we can understand mathematically. If we want to solve these problems, we’ve got to do the math problems we can do, starting with the simplest phenomena, and synchrony is among them. It’s going to be a long slog to really understand these problems.
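Synchrony earns its place as the tractable example because its standard mathematical model fits in a few lines. The sketch below is a minimal version of the Kuramoto model of coupled oscillators (my choice of illustration; the text names no specific model): each oscillator’s phase drifts at its own natural frequency while being nudged toward the rest of the population, and a single number, the order parameter, measures how synchronized the population is.

```python
import cmath
import math
import random

def kuramoto(n=40, coupling=2.0, dt=0.05, steps=1000, seed=1):
    """Simulate n coupled phase oscillators (the Kuramoto model): each phase
    drifts at its own natural frequency while being pulled toward the others."""
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]           # natural frequencies
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # Synchronous Euler update: the comprehension reads the old phase list.
        phases = [
            p + dt * (w + (coupling / n) * sum(math.sin(q - p) for q in phases))
            for p, w in zip(phases, freqs)
        ]
    # Order parameter r: near 0 for scattered phases, near 1 for synchrony.
    return abs(sum(cmath.exp(1j * p) for p in phases)) / n

r_coupled = kuramoto(coupling=2.0)    # coupling well above the sync threshold
r_uncoupled = kuramoto(coupling=0.0)  # no coupling: phases stay incoherent
```

With coupling well above the critical value the order parameter climbs toward 1; with no coupling it stays small, of order 1/√n. That the onset of synchrony can be located analytically is what makes this one of the understood cases.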

  Another thought, though, is that we may not need understanding. It could be that understanding is overrated. Perhaps insight is something that’s been good for three or four hundred years, since Isaac Newton, but it is not the ultimate end. The ultimate end is really just control of these diseases and avoiding horrible ecological scenarios. If we could get there, even without knowing what we’re doing, that would maybe be good enough. Computers might understand it, but we don’t have to. There could be a real story here about the overrating of understanding.

  In broad strokes, there were hundreds of years after Aristotle when we didn’t really understand a whole lot. Once Kepler, Copernicus, and Newton began explaining what they saw through math, there was a great era of understanding, through certain classes of math problems that could be solved. All the mathematics that let us understand laws of physics—Maxwell’s equations, thermodynamics, on through quantum theory—all involve a certain class of math problems that we know how to solve completely and thoroughly: that is, linear problems. It’s only in the past few decades that we’ve been banging our heads on the nonlinear ones. Of those, we understand just the smallest ones, using only three or four variables—that’s chaos theory. As soon as you have hundreds, or millions, or billions of variables—like in the brain—we don’t understand those problems at all. That’s what complex systems is supposed to be about, but we’re not even close to understanding them. We can simulate them in a computer, but that’s not really that different from just watching. We still don’t understand.
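The “three or four variables” of chaos theory can be made concrete with the Lorenz system (my example; the speaker names no specific system): three coupled nonlinear equations whose trajectories diverge exponentially from nearly identical starting points, which is exactly why even small nonlinear problems resist the kind of complete solution we have for linear ones.

```python
def lorenz_step(state, dt=0.0005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations, the classic 3-variable chaotic system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(p, q):
    return sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)        # differs in x by one part in a million
initial_gap = separation(a, b)
for _ in range(40000):            # integrate both copies out to t = 20
    a, b = lorenz_step(a), lorenz_step(b)
final_gap = separation(a, b)      # the tiny difference has grown enormously
```

Both trajectories stay on the same bounded attractor, yet the microscopic difference in starting points is amplified by many orders of magnitude: simulation shows you the phenomenon, but only the mathematics of the small cases explains it.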

  20

  Constructor Theory

  David Deutsch

  Physicist, University of Oxford; author, The Beginning of Infinity; recipient, Edge of Computation Science Prize

  Some considerable time ago, we were discussing my idea, new at the time, for constructor theory, which was and is an idea I had for generalizing the quantum theory of computation to cover not just computation but all physical processes. I guessed, and still guess, that this is going to provide a new mode of description of physical systems and laws of physics. It will also have new laws of its own, which will be deeper than the deepest existing theories such as quantum theory and relativity. At the time, I was very enthusiastic about this, and what intervened between then and now is that writing a book took much longer than I expected. But now I’m back to it, and we’re working on constructor theory, and, if anything, I would say it’s fulfilling its promise more than I expected and sooner than I expected.

  One of the first rather unexpected yields of this theory has been a new foundation for information theory. There’s a notorious problem with defining information within physics—namely, that on the one hand information is purely abstract, and the original theory of computation as developed by Alan Turing and others regarded computers and the information they manipulate purely abstractly, as mathematical objects. Many mathematicians to this day don’t realize that information is physical and that there is no such thing as an abstract computer. Only a physical object can compute things.

  On the other hand, physicists have always known that in order to do the work that the theory of information does within physics—such as informing the theory of statistical mechanics, and thereby thermodynamics, the second law of thermodynamics—information has to be a physical quantity. And yet information is independent of the physical object it resides in.

  I’m speaking to you now: Information starts as some kind of electrochemical signals in my brain, and then it gets converted into other signals in my nerves and then into sound waves and then into the vibrations of a microphone, mechanical vibrations, then into electricity, and so on, and presumably will eventually go on the Internet. This “something” has been instantiated in radically different physical objects that obey different laws of physics. Yet in order to describe this process, you have to refer to the thing that has remained unchanged throughout the process, which is only the information rather than any obviously physical thing like energy or momentum.

  The way to get this substrate independence of information is to refer it to a level of physics that is below and more fundamental than things like laws of motion, which we have been used to thinking of as near the lowest, most fundamental level of physics. Constructor theory is that deeper level of physics, physical laws, and physical systems, more fundamental than the existing prevailing conception of what physics is—namely, particles and waves and space and time and an initial state and laws of motion that describe the evolution of that initial state. What led to this hope for a new kind of foundation for the laws of physics was really the quantum theory of computation.

  I had thought for a while that the quantum theory of computation is the whole of physics. The reason it seemed reasonable to think that was that a universal quantum computer can simulate any other finite physical object with arbitrary accuracy, and that means that the set of all possible motions of a universal computer, which is to say its computations, corresponds to the set of all possible motions of anything. There’s a certain sense in which studying the universal quantum computer is the same thing as studying every other physical object. It contains all possible motions of all possible physical objects within its own possible diversity. I used to say that the quantum theory of computation is the whole of physics because of this property. But then I realized that that isn’t quite true and there’s an important gap in that connection. Namely, although the quantum computer can simulate any other object and can represent any other object, so that you can study any object via its characteristic programs, what the quantum theory of computation can’t tell you is which program corresponds to which physical object.

  This might sound like an inessential technicality, but it’s actually of fundamental importance, because not knowing which abstraction in the computer corresponds to which object is a little bit like having a bank account and the bank telling you, “Oh, your balance is some number.” Unless you know what number it is, you haven’t really expressed the whole of the physical situation of you and your bank account. Similarly, if you’re told only that your physical system corresponds to some program of the quantum computer and you haven’t said which, then you haven’t specified the whole of physics.

  Then I thought, “What we need is a generalization of the quantum theory of computation that does say that—that assigns to each program the corresponding real object.” That was an early conception of constructor theory—making it directly a generalization of the theory of computation. But then I realized that that’s not quite the way to go, because that still tries to cast constructor theory within the same mold as all existing theories, and therefore it wouldn’t solve this problem of providing an underlying framework. It still would mean that, just as a program has an initial state and then laws of motion—that is, the laws of the operation of the computer—and then a final state which is the output of the computer, so that way of looking at constructor theory would have simply been a translation of existing physics. It wouldn’t have provided anything new.
  The new thing, which I think is the key to the fact that constructor theory delivers new content, was that the laws of constructor theory are not about an initial state, laws of motion, final state, or anything like that. They’re just about which transformations are possible and which are impossible. The laws of motion, and that kind of thing, are indirect remote consequences of just saying what’s possible and what’s impossible. Also, the laws of constructor theory are not about the constructor; they’re not about how you do it, only whether you can do it, and this is analogous to the theory of computation. The theory of computation isn’t about transistors and wires and input/output devices and so on. It’s about which transformations of information are possible and which aren’t possible. Since we have the universal computer, we know that each possible one corresponds to a program for a universal computer, but the universal computer can be made in lots of different ways. How you make it is inessential to the deep laws of computation.

  In the case of constructor theory, what’s important is which transformations of physical objects are possible and which are impossible. When they’re possible, you’ll be able to do them in lots of different ways, usually. When they’re impossible, that will always be because some law of physics forbids them, and that’s why, as Karl Popper said, the content of a physical theory, of any scientific theory, is in what it forbids and also in how it explains what it forbids.

  If you have this theory of what is possible and what is impossible, it implicitly tells you what all the laws of physics are. That very simple basis is proving very fruitful already, and I have great hopes that various niggling problems and notorious difficulties in existing formulations of physics will be solved by this single idea. It may well take a lot of work to see how, but that’s what I expect, and I think that’s what we’re beginning to see. Popper’s criterion of demarcation, which I’ll come to in a moment, is often misunderstood as claiming that only scientific theories are worth having. Now that, as Popper once remarked, is a silly interpretation. For example, Popper’s own theory is a philosophical theory. He certainly wasn’t saying that it was an illegitimate theory.

  In some ways, this theory, just like quantum theory and relativity and anything that’s fundamental in physics, overlaps with philosophy. So having the right philosophy—which is the philosophy of Karl Popper, basically—though not essential, is extremely helpful to avoid going down blind alleys. Popper, I suppose, is most famous for his criterion of demarcation between science and metaphysics: Scientific theories are those that are in principle testable by experiment, and what he called metaphysical theories—I think they would be better called philosophical theories—are the ones that can’t be tested.

  Being testable is not as simple a concept as it sounds. Popper investigated it in great detail and laid down principles that lead me to the question, “In what sense is constructor theory testable?” Constructor theory consists of a language in which to express other scientific theories—well, a language can’t be true or false, it can only be convenient or inconvenient—but also laws. These laws are not about physical objects; they’re laws about other laws. They say that other laws have to obey constructor-theoretic principles.

  That raises the issue of how you can test a law about laws, because if it says that laws have to have such-and-such a property, you can’t actually go around and find a law that doesn’t have that property, because experiment could never tell you that that law was true. Fortunately, this problem has been solved by Popper. You have to go indirectly, in the case of these laws about laws. I want to introduce the terminology that laws about laws should be called principles. A lot of people already use that kind of terminology, but I’d rather make it standardized.

  For example, take the principle of the conservation of energy, which is a statement that all laws have to respect the conservation of energy. Perhaps it’s not obvious to you, but there is no experiment that would show a violation of the conservation of energy, because if somebody presented you with an object that produced more energy than it took in, you could always say, “Ah, well, that’s due to an invisible thing, or a law of motion that’s different from what we think, or maybe the formula for energy is different for this object than what we thought,” so there’s no experiment that could ever refute it.

  And in fact in the history of physics the discovery of the neutrino was made by exactly that method. It appeared that the law of conservation of energy was not being obeyed in beta decay, and then Pauli suggested that maybe the energy was being carried off by an invisible particle that you couldn’t detect. It turned out that he was right, but the way you have to test that is not by doing an experiment on beta decay but by seeing whether the theory, the law that says that the neutrino exists, is successful and independently testable. It’s the testability of the law that the principle tells you about, that in effect provides the testability of the principle.

  One thing I think is important to stress about constructor theory is that when I say we want to reformulate physics in terms of what can and can’t be done, that sounds like a retreat into operationalism, or into positivism, or something: that we shouldn’t worry about the constructor—that is, the thing that does the transformation—but only about the input and output and whether they’re compatible. But actually that’s not how it works in constructor theory.

  Constructor theory is all about how it comes about; it just expresses this in a different language. I’m not very familiar with the popular idea of cybernetics that came about a few decades ago, but I wouldn’t be surprised if those ideas, which proved at the time not to lead anywhere, were actually an early avatar of constructor theory. If so, we’ll be able to see that only in hindsight, because some of the ideas of constructor theory are really impossible to have until you have a conceptual framework that’s post–quantum theory of computation—i.e., after the theory of computation has been explicitly incorporated into physics, not just philosophically. That’s what the quantum theory of computation did.

  I’m not sure whether von Neumann used the term “constructor theory”—or did he just call it the universal constructor? Von Neumann’s work in the 1940s is another place where constructor theory could be thought to have its antecedents. But von Neumann was interested in different issues. He was interested in how living things can possibly exist given what the laws of physics are. This was before the DNA mechanism was discovered.

  He was interested in issues of principle—how the existence of a self-replicating object was even consistent with the laws of physics as we know them. He realized that there was an underlying logic and an underlying algebra, something in which one could express this and show what was needed. He actually solved the problem of how a living thing could possibly exist, basically by showing that it couldn’t possibly work by literally copying itself. It had to have within it a code, a recipe, a specification—or computer program, as we would say today—specifying how to build it, and therefore the self-replication process had to take place in two stages. He did all this before the DNA system was known, but he never got any further, because, from my perspective, he never got anywhere with constructor theory or with realizing that this was all at the foundations of physics rather than just the foundations of biology. He was stuck in the prevailing conception of physics as being about initial conditions, laws of motion, and final state, in which, among other things, you have to include the constructor in your description of a system, which means that you don’t see the laws about the transformation for what they are.

  When he found he couldn’t make a mathematical model of a living object by writing down its equations on a piece of paper, he resorted to simplifying the laws of physics and then simplifying them again and again, and eventually he invented the whole field we now call cellular automata. It’s a very interesting field, but it takes us away from real physics, because it abstracts away the laws of physics. What I want to do is go in the other direction—to integrate it with laws of physics, not as they are now but with laws of physics that have an underlying algebra that resembles, or is a generalization of, the theory of computation.
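The flavor of the field von Neumann invented can be seen in its simplest modern cousin, a one-dimensional, two-state cellular automaton (rule 90 in Wolfram’s later numbering, far simpler than von Neumann’s 29-state design): a trivial local law, each cell becoming the XOR of its two neighbors, generates an intricate global pattern from a single live cell.

```python
def rule90_row(row):
    """One step of elementary cellular automaton rule 90: each cell becomes
    the XOR of its two neighbors (cells beyond the edge count as 0)."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(row) + 1)]

width = 17
row = [0] * width
row[width // 2] = 1               # a single live cell in the middle
history = [row]
for _ in range(8):
    row = rule90_row(row)
    history.append(row)

for r in history:                 # the familiar Sierpinski-triangle pattern
    print("".join("#" if c else "." for c in r))
```

This is exactly the sense in which cellular automata abstract away the laws of physics: the update rule replaces dynamics entirely, which makes the model easy to study but disconnects it from the real physical laws constructor theory aims to reconnect with.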

  Several strands led toward this, and I was lucky enough to be placed in more than one of them. The main one starts with Turing and then Rolf Landauer, who was a lone voice in the 1960s saying that computation is physics. The theory of computation to this day is regarded by mathematicians as being about abstractions rather than about physics. Landauer realized that the concept of a purely abstract computer doesn’t make sense, and that the theory of computation has to be a theory of what physical objects can do to information. Landauer focused on what restrictions the laws of physics impose on what kinds of computation can be done. Unfortunately, that was the wrong way around, because, as we later discovered, the most important and most striking thing about the relationship of physics with computation is that quantum theory—i.e., the deepest laws of physics that we know—permits new modes of computation that wouldn’t be possible in classical physics. Once you have established the quantum theory of computation, you’ve got a theory of computation that is wholly within physics, and it’s then natural to try to generalize that, which is what I wanted to do. So that’s one of the directions.

  Von Neumann was motivated, really, by theoretical biology rather than theoretical physics. Another thing that I think inhibited von Neumann from realizing that his theory was fundamental physics was that he had the wrong idea about quantum theory. He had settled for, and was one of the pioneers of, building a cop-out version of quantum theory that made it into just an operational theory, where you would use quantum theory just to work out and predict the outcomes of experiments rather than express the laws of how the outcome comes about. That was one of the reasons von Neumann never thought of his own theory as being a generalization of quantum theory—because he didn’t really take quantum theory seriously. His contribution to quantum theory was to provide this set of von Neumann rules that allows you to use the theory in practice without ever wondering what it means.

 
