It is interesting that Turing himself earlier rejected this argument as a basis for believing that machines cannot, in principle, think. He argued that there was no proof that similar limitations didn’t exist for the human intellect, and further that a human being could triumph only over one machine at a time, not simultaneously over all machines. “In short, then,” he wrote, “there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.”
I find Turing’s essay on the issue of machine intelligence, although it is almost 50 years old, a clear and most refreshing discussion. In this essay Turing proposed what has since become known as the Turing Test for machine intelligence. In the spirit of a physicist, it is an operational test, which Turing dubbed “the imitation game.” If the machine passed—that is, if it succeeded most of the time in fooling a human interrogator, located in another room, into thinking it was human—then the answer to the question “Can machines think?” would be in the affirmative.
Turing made his own views on the issue quite clear:
I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ [bits], to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after 5 minutes of questioning. The original question, “Can Machines Think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
Predictions of “educated opinion” are notoriously chancy, and Turing was clearly overoptimistic. While we now possess machines of the storage capacity he hoped for, I don’t think that any of them have yet unambiguously passed the Turing Test (notwithstanding Garry Kasparov’s suspicions that some of Deep Blue’s moves were made by its human programmers). It is certainly clear to me that neither of the two most famous fictional intelligent computers, HAL and Data, would be likely to pass the test. (Despite this, I sided with Jean-Luc Picard when he argued before a Federation tribunal, in “The Measure of a Man,” that Data was a sentient living being, entitled to the rights of such, and not merely property of the Federation.) Moreover, the issue of machine intelligence is still as hotly debated today as it was when Turing made his predictions.
A fundamental difference between Turing’s arguments and Penrose’s is that Penrose goes further than the use of mathematics to argue his point. He attempts to isolate the fundamental physical difference. From my point of view, this approach is the only reason that the issues of intelligence and consciousness are appropriate for physicists to debate (and the chief reason I have introduced them at all). As I understand it, Penrose claims that the difference between human intelligence and computing algorithms originates in the mysterious nature of quantum mechanics, which of course governs the functioning of the fundamental atomic constituents of the brain. Moreover, he argues that a full understanding of human consciousness will rely on new laws of physics, which he claims are required in order for us to properly understand how the classical world arises from the quantum-mechanical world. He introduces the unfortunate notion that a proper understanding of quantum gravity will be integral to this understanding of consciousness. However, even if one completely disagrees with this last claim, one can explore whether the non-classical physics associated with the human mind will distinguish it forever from a computer. And I believe that some exciting developments in the past few years suggest that the opposite is true!
As computers get smaller and smaller, the individual logic units—the “bits” of the machine—will eventually become the size of atoms. (Data’s positronic brain apparently uses positrons, the antiparticles of electrons, but what the heck.) Richard Feynman used to speculate on how small you could make various machines and still have them work. He realized that once bits were the size of atoms, the laws of quantum mechanics, which allow atoms to behave very differently from billiard balls, must be taken into account.
Indeed, while the field of computer science is based on the mathematical theory of computation, computations are carried out using physical devices, and thus it is the task of physics in the end to determine what is practically computable and how. Since the physical world at a fundamental level is quantum mechanical in nature, the theory of computation must also take into account quantum mechanics. Thus, the classical theories of Turing and others on computation should really be thought of as approximations of a more general “quantum theory of computing.”
It has been explicitly demonstrated in the past few years that many of the limits on practical computing with digital computers—which use standard, classical bits for their computations—can be overcome by quantum computers. Algorithms can be developed which, if the computer components are quantum mechanical in nature, will allow calculations to be made exponentially faster than classical computation theory allows. A particular example involves an algorithm to find a nontrivial factor of a large number (that is, a factor not equal to 1 or the number), but the specifics of this example are not important here. What is important is an appreciation of why quantum computations can be different from classical ones. But to get an inkling of some of the physical processes that may well underlie our conscious awareness requires us to enter the murky world of quantum mechanics and explore phenomena that defy all classical reasoning.
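To get a feel for what the classical side of that comparison looks like, here is a minimal sketch in Python (my own illustration, not anything from the text) of the most naive classical approach to the factoring problem just mentioned: trial division. The number of candidate divisors grows with the square root of the number, which itself grows exponentially with the number of digits, and it is precisely this kind of explosive growth that a quantum algorithm can evade.

```python
# Minimal illustrative sketch: the naive classical way to find a
# nontrivial factor of n (a factor not equal to 1 or n itself).
# The loop may run up to sqrt(n) times, so the work grows exponentially
# with the number of digits of n -- the cost a quantum factoring
# algorithm is designed to sidestep.

def nontrivial_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d        # a factor that is neither 1 nor n
        d += 1
    return None             # n is prime: no nontrivial factor exists

print(nontrivial_factor(10403))   # 101, since 10403 = 101 * 103
print(nontrivial_factor(101))     # None: 101 is prime
# For numbers hundreds of digits long, this approach is utterly hopeless.
```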
CHAPTER FOURTEEN
THE GHOST IN THE MACHINE
After three hours I asked him to summon up the soul of Jimi Hendrix and requested “All Along the Watchtower.” You know, the guy’s been dead twenty years, but he still hasn’t lost his edge!
—Fox Mulder
The physicist Frank Wilczek once confided to me that the most amusing physics blooper he regularly hears in the mass media is the description of some development or other as a “quantum leap.” At the risk of sounding like William Safire, let me elaborate. This phrase has come to denote a “great leap forward, of huge significance.” Needless to say, that’s the exact opposite of what a quantum leap really is. (Of course, since I enjoyed the television series Quantum Leap, I like to think its producers were not thinking of a huge quantum leap so much as a huge leap in time made possible by quantum mechanics.) Quantum mechanics is based on the idea that at a fundamental level the continuous universe we know is really not continuous at all. On a scale much smaller than we can normally experience directly (although I’ll come to some recent striking exceptions to this rule), the laws of quantum mechanics tell us that a finite system can exist only in a range of discrete states. To go from one state to another—to make a “quantum leap,” in other words—the system must absorb or release a quantum, or small package of energy. The fact that energy can be absorbed only in such small packages, always a fixed multiple of a single quantum, was the realization that began the revolution that became quantum mechanics.
The reason it took until 1905 or so before the apparently continuous flow of energy was shown to be discrete is that individual quanta of energy are so small that their discrete nature is irrelevant on the human scale. Thus, whenever a system makes a quantum leap, the change is not directly noticeable (and sometimes also unknowable)! Now, while this unnoticeability may seem a little strange, it doesn’t begin to prefigure the revolution in thinking, and in the understanding of the world, which quantum mechanics brought about. Einstein’s theories of relativity are taxing on one’s sense of reality, but after a little work they and their implications can become intuitively as well as mathematically clear. A popular myth is that shortly after Einstein invented relativity there were only fifteen people in the world who understood it; nowadays, special relativity, in particular, is accessible to anyone with a high school knowledge of mathematics. However, almost a century after the first stirrings of the quantum theory, no one really understands it.
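A back-of-the-envelope calculation (my own illustrative numbers) shows just how small a single quantum is, using Planck’s relation between a quantum’s energy and the frequency of the light it belongs to:

```python
# Illustrative back-of-the-envelope numbers (mine, not from the text):
# the energy of one quantum of visible light, E = h * f.

h = 6.626e-34     # Planck's constant, in joule-seconds
f = 5.5e14        # approximate frequency of green light, in hertz

E_quantum = h * f
print(E_quantum)            # roughly 3.6e-19 joules per quantum

# For comparison, lifting an apple one meter takes about one joule --
# the equivalent of nearly three billion billion such quanta.
print(1.0 / E_quantum)      # roughly 2.7e18 quanta per joule
```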
In my last book, I borrowed an argument from Harvard physicist Sidney Coleman to explain this lapse: Because our entire experience of the world involves scales on which quantum phenomena are not directly observable, our intuition and our language are completely classical in character. We can’t help but try to explain quantum phenomena using classical pictures. This approach is usually called the interpretation of quantum mechanics. But as Coleman has emphasized, it is doomed from the start. What we should really be studying is the interpretation of classical mechanics, since the universe at its most fundamental level is quantum mechanical in character and the classical world of our experience is only an approximation of the underlying reality. It is therefore no more appropriate to try to understand and explain the real, quantum universe in terms of purely classical concepts than it is to try to explain 3-dimensional motion in terms of 2-dimensional concepts, or to describe the actions of twins in terms of one member of the set. In such approaches, paradoxes inevitably result.
To prepare us for the paradoxes that follow, let’s imagine ourselves employing some of the wrongheaded approaches mentioned above. Say I take a baseball and throw it up in the air toward center field. Now, if I have access only to the horizontal position of the baseball, I will see a ball moving horizontally at a constant velocity until it comes to rest in the glove of the outfielder. Now, say I throw the ball up a lot harder and higher, with a far greater vertical velocity but with the same horizontal velocity. If I have access only to the horizontal data, I will see exactly what I saw before—except that this time the baseball will hit the outfielder’s glove a lot harder. “That’s crazy!” I’ll exclaim, because both cases appeared to be exactly the same, so the laws of classical baseball tell me that the impact on the outfielder’s glove will be exactly the same.
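A quick sketch in code (with throw speeds I have simply made up) makes the point of the analogy concrete: the two throws present identical horizontal data, and only the hidden vertical velocity explains the different impacts.

```python
# Sketch of the baseball analogy, with made-up numbers. An observer who
# can see only horizontal data gets the same picture for both throws;
# the vertical velocity is the "hidden variable" behind the harder impact.

def horizontal_view(vx, vz):
    impact_speed = (vx**2 + vz**2) ** 0.5   # true speed at the glove
    return {"horizontal velocity seen": vx, "impact speed": impact_speed}

print(horizontal_view(vx=15.0, vz=5.0))    # a gentle toss
print(horizontal_view(vx=15.0, vz=20.0))   # same horizontal view, harder impact
```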
Now let’s turn to the twins. I notice that one of the twins is behind me in line in the hardware store, as I’m paying for a hammer. Then I go next door to the grocery store, and I see the other twin at the checkout counter just as I enter. I do a double take, because I know it’s impossible that the person behind me in line in one store beat me to the other store. Something is wrong with the picture.
These two cases may seem similar, but there is an important difference between them. In the former case, the paradox results simply because there is some “hidden variable”—namely, the third dimension, which, if taken into account, resolves the problem. Classical mechanics works perfectly to describe the 3-dimensional motion of the baseballs. Fundamentally, my description of a single ball traveling in space according to Newton’s laws is sound.
In the latter case, however, the paradox results because the twins are not a single person. When they are in the same vicinity, there’s no way in which I can make sense of appearances, given my classical worldview. However, as long as they are far enough apart so that I don’t see both of them in succession, it doesn’t really matter to me whether I am looking at June or Jane—in other words, they might as well be one person. Nevertheless, I must still also understand that treating them as one person, even if it works in certain circumstances, is not the underlying reality.
The key question is, Which of these two examples provides the better analogy to quantum mechanics? Are our classical notions fundamentally sound or are we ignoring some hidden variable that will make the nonsensical quantum universe right again? Or is a quantum-mechanical particle like the twins of the second example? Is it fundamentally incorrect, at some scale, to imagine that this quantum-mechanical object is really explicable in terms of a classical object? Well, you can guess the answer. Experiments on simple quantum systems—systems consisting of just several atoms or several photons—have put the issue to rest. If the first alternative were correct, I probably wouldn’t have bothered with this whole discussion.
Once you accept the fact that quantum particles are not the same as classical particles, and that instilling them with the properties we see in the macroscopic universe forces paradoxes akin to seeing a person behind you in line suddenly appear ahead of you in the next line, the paradoxes become somewhat easier to accept—at least, for me. Having said this, it is now an appropriate time to introduce some of the properties of the quantum universe. But let me do so in terms of the workings of a computer, so that we can begin to see immediately how quantum mechanics changes the rules.
A classical computer is based on fundamental units of information called bits, which exist in memory locations that store either a 1 or a 0. All information can be encoded in bits, and all computations can be reduced to operating on bits—changing 1s to 0s, or 0s to 1s, or leaving the numbers as is. Nowadays the storage devices are made up of small metal “gates” placed on top of insulating bases; these gates can have either a lot of charge stored on them (1), or very little charge stored on them (0). In practice, “lots” of charge means, say, 100,000 extra electrons, while very little charge means less than 10 or 100 extra electrons. Because the number of extra charges that differentiate a 1 from a 0 is so large, these states can easily be distinguished, so that each gate can be unambiguously read as being in a 1 or 0 state.
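As a minimal sketch of what “operating on bits” amounts to (my own illustration, not a description of any particular machine), here are the operations just mentioned applied to a small register in Python:

```python
# Four classical bits in a single register, each unambiguously 0 or 1.
register = 0b0000

register |= 0b0100               # set bit 2 to 1 (pile charge onto that gate)
register &= 0b1111 & ~0b0100     # clear bit 2 back to 0 (drain the charge)
register ^= 0b0001               # flip bit 0: a 0 becomes a 1
register |= register             # leave everything as is

print(format(register, "04b"))   # -> 0001
```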
Now, the problem—or, rather, the opportunity—that arises as the physical device carrying this binary information gets smaller and smaller is that it becomes difficult or impossible to unambiguously differentiate between the two states of the system. Once a system becomes small enough so that the laws of quantum mechanics become important, a system that can be in one of two different states when measured is, in general, not in either state at any time before the measurement (nor is it in any other particular state)!
This sounds like gibberish, but it is the gibberish on which quantum mechanics is based, and it works. The central point—which relates directly both to the discrete energy levels of systems and to Heisenberg’s uncertainty principle—is that making a measurement of a system can change the system. The prototypical example of this is an elementary particle with “spin.” Many elementary particles possess this property—the physicists’ term for it is angular momentum—though they do not actually spin the way macroscopic objects do. In any case, spin defines an axis—the axis of rotation. If we choose an axis about which we measure an elementary particle’s spin, it turns out that due to quantum mechanics, some of these particles spin in one direction—say, clockwise—and some spin the same amount in the other direction, counterclockwise. We call the former case “spin up” and the latter “spin down.”
Thus, the spin configuration of certain elementary particles can take one of two values, making them two-state, or binary, systems. Whenever you make a measurement of the particle’s spin, you will find that it is either spinning up or that it is spinning down. But you would not be correct to assume that the particle was spinning up or spinning down before you made the measurement—that is a classical assumption, akin to treating twins as if they were a single person.
We simply cannot attribute any physical reality to the particle’s spin along a certain axis until after we measure it. This may sound like a New Age argument, but that’s just because we are accustomed to a classical reality and not a quantum reality. What may be even more surprising to some readers is that quantum mechanics involves not just this sort of observer-created reality but also an underlying objective reality, independent of the observer, and that, moreover, the theory is deterministic. It often disheartens me that even in books purporting to provide popular explanations of quantum mechanics, these points are either not emphasized or are ignored or misstated.
What makes things confusing is that objective reality in quantum mechanics is not necessarily associated with quantities that we can classically observe but rather with something called the quantum-mechanical “wavefunction” of a system. This mathematically well-defined object completely describes the configuration of the system at any time. It is objective, and determines what we will measure, even though our measurement may then affect the future evolution of the wavefunction. Moreover, it evolves by laws as deterministic as Newton’s laws of motion.
What makes things appear to be subjective and indeterminate is that the wavefunction cannot be measured directly. Rather, the wavefunction gives the probability that a given measurement will yield a given result. Even if we know the exact form of the wavefunction in advance, we cannot in general say exactly what a given measurement will yield. The result of the measurement is known only with some probability. Thus does indeterminacy sneak into the actual world of observation and measurement.
The other consequence of the nature of the wavefunction is even more striking. The reason it yields the probabilities for various results that may arise when one makes a series of measurements on equivalent systems is that the wavefunction is given by the sum of the different states—each state implying a different result of the measurement—each multiplied by a coefficient related to the probability that the system will be in that particular state when it’s measured.
This may not sound so strange at first, but think about it for a minute. The wavefunction can incorporate two mutually exclusive configurations—say, spin up and spin down—at the same time. Since the wavefunction governs the evolution of the quantum-mechanical particle system, this means that the particle is neither spinning up nor spinning down before the measurement, but rather is, in some weird sense, doing both. When you make the measurement, you find one or the other result (with the probability having been determined by the wavefunction). Moreover, after the measurement, since the particle is now restricted to existing in the spin state you measured, the nature of the wavefunction describing the particle will have changed. It will now not involve a sum of both states, but only one state.
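Here is a toy sketch (entirely my own, and nothing like real quantum hardware) of that bookkeeping for a single spin: the wavefunction is just a pair of coefficients for “up” and “down,” the squared magnitude of each coefficient gives the probability of that result, and the act of measurement collapses the pair to a single state.

```python
import random

# Toy wavefunction for a single spin: one coefficient for "up" and one
# for "down." Squared magnitudes give the measurement probabilities.
wavefunction = {"up": 2**-0.5, "down": 2**-0.5}   # an equal mix of both

def measure(wf):
    p_up = abs(wf["up"]) ** 2
    result = "up" if random.random() < p_up else "down"
    # After the measurement the wavefunction no longer involves a sum of
    # both states; it describes only the state that was actually found.
    wf["up"], wf["down"] = (1.0, 0.0) if result == "up" else (0.0, 1.0)
    return result

print(measure(wavefunction))   # "up" or "down", each with probability 1/2
print(measure(wavefunction))   # repeats the first result: the sum is gone
```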
Things can get even weirder. The wavefunction for a particle that starts out at one point (A) and is then measured later at another point (B) is made up of the sum of many different quantum configurations, each of which traveled along its own separate trajectory between the two points. Thus, there is no sense in which the particle that went from A to B took some specific path between those two points, unless you measured the path. Thus for example, an electron that starts out on one side of a barrier with two slits, and ends up on the other side of the barrier, in some sense goes through both slits before being measured on the other side.
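A toy calculation (with a made-up wavelength and path lengths) shows how this summing over paths differs from classical reasoning: each slit contributes an amplitude whose phase depends on the length of its path, the amplitudes are added first, and only then is the square taken, so the two routes can reinforce or cancel each other.

```python
import cmath

wavelength = 1.0   # made-up units; only the ratio of path length to
                   # wavelength matters here

def amplitude(path_length):
    phase = 2 * cmath.pi * path_length / wavelength
    return cmath.exp(1j * phase) / 2**0.5    # equal weight for each slit

def relative_intensity(path1, path2):
    # Add the amplitudes for the two paths first, then square:
    # the electron in some sense takes both routes.
    return abs(amplitude(path1) + amplitude(path2)) ** 2

print(relative_intensity(100.0, 100.0))   # paths in step: ~2, a bright fringe
print(relative_intensity(100.0, 100.5))   # half a wavelength apart: ~0, dark
# Classical reasoning would add the two probabilities (0.5 + 0.5) and
# predict the same brightness everywhere -- no fringes at all.
```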