The Age of Spiritual Machines: When Computers Exceed Human Intelligence


by Ray Kurzweil


  Suppose No One Ever Looks at the Answer

  Consider that the quantum ambiguity a quantum computer relies on is decohered, that is, disambiguated, when a conscious entity observes the ambiguous phenomenon. The conscious entities in this case are us, the users of the quantum computer. But in using a quantum computer, we are not directly looking at the spin states of individual electrons. The spin states are measured by an apparatus that in turn answers some question that the quantum computer has been asked to solve. These measurements are then processed by other electronic gadgets, manipulated further by conventional computing equipment, and finally displayed or printed on a piece of paper.

  Suppose no human or other conscious entity ever looks at the printout. In this situation, there has been no conscious observation, and therefore no decoherence. As I discussed earlier, the physical world only bothers to manifest itself in an unambiguous state when one of us conscious entities decides to interact with it. So the page with the answer is ambiguous, undetermined—until and unless a conscious entity looks at it. Then instantly all the ambiguity is retroactively resolved, and the answer is there on the page. The implication is that the answer is not there until we look at it. But don’t try to sneak up on the page fast enough to see the answerless page; the quantum effects are instantaneous.

  What Is It Good For?

  A key requirement for quantum computing is a way to test the answer. Such a test does not always exist. However, a quantum computer would be a great mathematician. It could simultaneously consider every possible combination of axioms and previously solved theorems (within a quantum computer’s qubit capacity) to prove or disprove virtually any provable or disprovable conjecture. Although a mathematical proof is often extremely difficult to come up with, confirming its validity is usually straightforward, so the quantum approach is well suited.
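
  The asymmetry can be made concrete with a minimal sketch (the Boolean formula below is an arbitrary illustration, not anything from the text): finding an assignment that satisfies the formula may require trying every combination of values, while checking a proposed answer is a single cheap evaluation.

```python
# Minimal sketch of the search/verification asymmetry: finding an answer may
# require trying every combination, but checking a proposed answer is one
# cheap evaluation. The formula is an arbitrary stand-in for "the conjecture."
from itertools import product

def formula(a, b, c, d):
    return (a or not b) and (b or c) and (not c or d) and (a != d)

def find_solution():
    # Exhaustive search: 2**4 combinations here, 2**n in general.
    for bits in product([False, True], repeat=4):
        if formula(*bits):
            return bits
    return None

def verify(bits):
    # Cheap test: a single evaluation confirms the answer.
    return formula(*bits)

answer = find_solution()
print(answer, verify(answer))
```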

  Quantum computing is not directly applicable, however, to problems such as playing a board game. Although the “perfect” chess move for a given board is a good example of a finite but intractable computing problem, there is no easy way to test the answer. If a person or process were to present an answer, there would be no way to test its validity other than to build the same move-countermove tree that generated the answer in the first place. Even for merely “good” moves, a quantum computer would have no obvious advantage over a digital computer.
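
  A minimal sketch of the point, using a toy Nim-like game rather than chess (the game and function names are illustrative assumptions): the only way to verify that a candidate move is optimal is to rebuild the same move-countermove tree that found it.

```python
# Toy game: a pile of counters, each player takes 1 or 2, whoever takes the
# last counter wins. Verifying a "best move" requires the same tree search
# that produced it in the first place.

def minimax(pile, maximizing):
    if pile == 0:
        # The previous player took the last counter; the side to move lost.
        return -1 if maximizing else 1
    values = [minimax(pile - m, not maximizing) for m in (1, 2) if m <= pile]
    return max(values) if maximizing else min(values)

def best_move(pile):
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: minimax(pile - m, False))

def verify_best_move(pile, candidate):
    # No shortcut: verification rebuilds the full game tree.
    best_value = max(minimax(pile - m, False) for m in (1, 2) if m <= pile)
    return minimax(pile - candidate, False) == best_value

print(best_move(7), verify_best_move(7, best_move(7)))
```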

  How about creating art? Here a quantum computer would have considerable value. Creating a work of art involves solving a series, possibly an extensive series, of problems. A quantum computer could consider every possible combination of elements—words, notes, strokes—for each such decision. We still need a way to test each answer to the sequence of aesthetic problems, but the quantum computer would be ideal for instantly searching through a Universe of possibilities.

  Encryption Destroyed and Resurrected

  As mentioned above, the classic problem that a quantum computer is ideally suited for is cracking encryption codes, which relies on factoring large numbers. The strength of an encryption code is measured by the size, in bits, of the number that must be factored. For example, it is illegal in the United States to export encryption technology using more than 40 bits (56 bits if you give a key to law-enforcement authorities). A 40-bit encryption method is not very secure. In September 1997, Ian Goldberg, a University of California at Berkeley graduate student, was able to crack a 40-bit code in three and a half hours using a network of 250 small computers.15 A 56-bit code is a bit better (16 bits better, actually). Ten months later, John Gilmore, a computer privacy activist, and Paul Kocher, an encryption expert, were able to break the 56-bit code in 56 hours using a specially designed computer that cost them $250,000 to build. But a quantum computer can easily factor a number of any size (within its capacity). Quantum computing technology would essentially destroy digital encryption.
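
  A back-of-the-envelope sketch of why those 16 extra bits matter: each added bit doubles the key space, so a 56-bit key space is 2^16 = 65,536 times larger than a 40-bit one. Naively scaling Goldberg’s 3.5-hour result (assuming, purely for illustration, the same search rate on the same hardware) gives a feel for the difference.

```python
# Back-of-the-envelope keyspace arithmetic. Assumes the same hypothetical
# search rate for both key sizes, unlike the real 1997-98 efforts.

keys_40 = 2 ** 40
keys_56 = 2 ** 56
ratio = keys_56 // keys_40             # 2**16 = 65,536 times more keys

hours_40 = 3.5                         # Goldberg's 40-bit crack
naive_hours_56 = hours_40 * ratio      # ~229,376 hours, roughly 26 years
print(ratio, naive_hours_56 / 24 / 365)
```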

  But as technology takes away, it also gives. A related quantum effect can provide a new method of encryption that can never be broken. Again, keep in mind that, in view of the Law of Accelerating Returns, “never” is not as long as it used to be.

  This effect is called quantum entanglement. Einstein, who was not a fan of quantum mechanics, had a different name for it, calling it “spooky action at a distance.” The phenomenon was recently demonstrated by Dr. Nicolas Gisin of the University of Geneva in an experiment across that city.16 Dr. Gisin sent twin photons in opposite directions through optical fibers. Once the photons were about seven miles apart, they each encountered a glass plate from which they could either bounce off or pass through. Thus, each was forced to choose between two equally probable pathways. Since there was no possible communication link between the two photons, classical physics would predict that their decisions would be independent. But they both made the same decision. And they did so at the same instant in time, so even if there were an unknown communication path between them, there was not enough time for a message to travel from one photon to the other at the speed of light. The two particles were quantum entangled and communicated instantly with each other regardless of their separation. The effect was reliably repeated over many such photon pairs.

  The apparent communication between the two photons takes place at a speed far greater than the speed of light. In theory, the speed is infinite, in that the decoherence of the two photons’ travel decisions, according to quantum theory, takes place at exactly the same instant. Dr. Gisin’s experiment was sufficiently sensitive to demonstrate that the communication was at least ten thousand times faster than the speed of light.

  So, does this violate Einstein’s Special Theory of Relativity, which postulates the speed of light as the fastest speed at which we can transmit information? The answer is no—there is no information being communicated by the entangled photons. The decision of the photons is random—a profound quantum randomness—and randomness is precisely not information. Both the sender and the receiver of the message simultaneously access the identical random decisions of the entangled photons, which are used to encode and decode, respectively, the message. So we are communicating randomness—not information—at speeds far greater than the speed of light. The only way we could convert the random decisions of the photons into information is if we edited the random sequence of photon decisions. But editing this random sequence would require observing the photon decisions, which in turn would cause quantum decoherence, which would destroy the quantum entanglement. So Einstein’s theory is preserved.

  Even though we cannot instantly transmit information using quantum entanglement, transmitting randomness is still very useful. It allows us to resurrect the process of encryption that quantum computing would destroy. If the sender and receiver of a message are at the two ends of an optical fiber, they can use the precisely matched random decisions of a stream of quantum entangled photons to respectively encode and decode a message. Since the encryption is fundamentally random and nonrepeating, it cannot be broken. Eavesdropping would also be impossible, as this would cause quantum decoherence that could be detected at both ends. So privacy is preserved.
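
  A minimal sketch of the scheme in conventional terms: the matched random stream is simply a one-time pad, and encoding and decoding are both a bitwise XOR against it. (The pad here is generated locally for illustration; in the scheme described above it would be supplied identically to both ends by the entangled photons.)

```python
# One-time-pad sketch: a shared random pad encodes and decodes a message by
# XOR. With a truly random, never-reused pad, the ciphertext carries no
# information about the plaintext.
import secrets

def xor_bytes(data, pad):
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # the shared randomness

ciphertext = xor_bytes(message, pad)      # sender encodes
recovered = xor_bytes(ciphertext, pad)    # receiver decodes
print(recovered == message)               # True
```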

  Note that in quantum encryption, we are transmitting the code instantly. The actual message will arrive much more slowly—at only the speed of light.

  Quantum Consciousness Revisited

  The prospect of computers competing with the full range of human capabilities generates strong, often adverse feelings, as well as no shortage of arguments that such a specter is theoretically impossible. One of the more interesting such arguments comes from an Oxford mathematician and physicist, Roger Penrose.

  In his 1989 best-seller, The Emperor’s New Mind, Penrose puts forth two conjectures.17 The first has to do with an unsettling theorem proved by a Czech mathematician, Kurt Gödel. Gödel’s famous “incompleteness theorem,” which has been called the most important theorem in mathematics, states that in a mathematical system powerful enough to generate the natural numbers, there inevitably exist propositions that can be neither proved nor disproved. This was another one of those twentieth-century insights that upset the orderliness of nineteenth-century thinking.

  A corollary of Gödel’s theorem is that there are mathematical propositions that cannot be decided by an algorithm. In essence, these Gödelian impossible problems require an infinite number of steps to be solved. So Penrose’s first conjecture is that machines cannot do what humans can do because machines can only follow an algorithm. An algorithm cannot solve a Gödelian unsolvable problem. But humans can. Therefore, humans are better.

  Penrose goes on to state that humans can solve unsolvable problems because our brains do quantum computing. Subsequently responding to criticism that neurons are too big to exhibit quantum effects, Penrose cited small structures in the neurons called microtubules that may be capable of quantum computation.

  However, Penrose’s first conjecture—that humans are inherently superior to machines—is unconvincing for at least three reasons:

  1. It is true that machines can’t solve Gödelian impossible problems. But humans can’t solve them either. Humans can only estimate them. Computers can make estimates as well, and in recent years are doing a better job of this than humans.

  2. In any event, quantum computing does not permit solving Gödelian impossible problems either. Solving a Gödelian impossible problem requires an algorithm with an infinite number of steps. Quantum computing can turn an intractable problem that could not be solved on a conventional computer in trillions of years into an instantaneous computation. But it still falls short of infinite computing.

  3. Even if (1) and (2) above were wrong, that is, if humans could solve Gödelian impossible problems and do so because of their quantum-computing ability, that still does not restrict quantum computing from machines. The opposite is the case. If the human brain exhibits quantum computing, this would only confirm that quantum computing is possible, that matter following natural laws can perform quantum computing. Any mechanisms in human neurons capable of quantum computing, such as the microtubules, would be replicable in a machine. Machines use quantum effects—tunneling—in trillions of devices (that is, transistors) today.18 There is nothing to suggest that the human brain has exclusive access to quantum computing.

  Penrose’s second conjecture is more difficult to resolve. It is that an entity exhibiting quantum computing is conscious. He is saying that it is the human’s quantum computing that accounts for her consciousness. Thus quantum computing—quantum decoherence—yields consciousness.

  Now we do know that there is a link between consciousness and quantum decoherence. That is, consciousness observing a quantum uncertainty causes quantum decoherence. Penrose, however, is asserting a link in the opposite direction. This does not follow logically. Of course quantum mechanics is not logical in the usual sense—it follows quantum logic (some observers use the word “strange” to describe quantum logic). But even applying quantum logic, Penrose’s second conjecture does not appear to follow. On the other hand, I am unable to reject it out of hand because there is a strong nexus between consciousness and quantum decoherence in that the former causes the latter. I have thought about this issue for three years, and have been unable to accept it or reject it. Perhaps before writing my next book I will have an opinion on Penrose’s second conjecture.

  REVERSE ENGINEERING A PROVEN DESIGN: THE HUMAN BRAIN

  For many people the mind is the last refuge of mystery against the encroaching spread of science, and they don’t like the idea of science engulfing the last bit of terra incognita.

  —Herb Simon as quoted by Daniel Dennett

  Cannot we let people be themselves, and enjoy life in their own way? You are trying to make another you. One’s enough.

  —Ralph Waldo Emerson

  For the wise men of old ... the solution has been knowledge and self-discipline, ... and in the practice of this technique, are ready to do things hitherto regarded as disgusting and impious—such as digging up and mutilating the dead.

  —C. S. Lewis

  Intelligence is: (a) the most complex phenomenon in the Universe; or (b) a profoundly simple process.

  The answer, of course, is (c) both of the above. It’s another one of those great dualities that make life interesting. We’ve already talked about the simplicity of intelligence: simple paradigms and the simple process of computation. Let’s talk about the complexity.

  We come back to knowledge, which starts out with simple seeds but ultimately becomes elaborate as the knowledge-gathering process interacts with the chaotic real world. Indeed, that is how intelligence originated. It was the result of the evolutionary process we call natural selection, itself a simple paradigm, that drew its complexity from the pandemonium of its environment. We see the same phenomenon when we harness evolution in the computer. We start with simple formulas, add the simple process of evolutionary iteration and combine this with the simplicity of massive computation. The result is often complex, capable, and intelligent algorithms.
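
  A toy sketch of that recipe, with arbitrary parameters chosen only for illustration: a simple rule set (random mutation plus selection of the fittest), iterated over many generations, evolves a population of bit strings toward a target that stands in for the environment.

```python
# Toy evolutionary sketch: simple rules (mutate, select) iterated many times
# produce a fit solution. The target and parameters are arbitrary.
import random

TARGET = [1] * 32                          # the "environment" rewards all-ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]            # selection
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print(fitness(max(population, key=fitness)), "of", len(TARGET))
```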

  IS THE BRAIN BIG ENOUGH?

  Are our conception of human neuron functioning and our estimates of the number of neurons and connections in the human brain consistent with what we know about the brain’s capabilities? Perhaps human neurons are far more capable than we think they are. If so, building a machine with human-level capabilities might take longer than expected.

  We find that estimates of the number of concepts (“chunks” of knowledge) that a human expert in a particular field has mastered are remarkably consistent: about 50,000 to 100,000. This approximate range appears to be valid over a wide range of human endeavors: the number of board positions mastered by a chess grand master, the concepts mastered by an expert in a technical field such as medicine, the vocabulary of a writer (Shakespeare used 29,000 words;19 this book uses a lot fewer).

  This type of professional knowledge is, of course, only a small subset of the knowledge we need to function as human beings. Basic knowledge of the world, including so-called common sense, is more extensive. We also have an ability to recognize patterns: spoken language, written language, objects, faces. And we have our skills: walking, talking, catching balls. I believe that a reasonably conservative estimate of the general knowledge of a typical human is a thousand times greater than the knowledge of an expert in her professional field. This provides us a rough estimate of 100 million chunks (bits of understanding, concepts, patterns, specific skills) per human. As we will see below, even if this estimate is low (by a factor of up to a thousand), the brain is still big enough.

  The number of neurons in the human brain is estimated at approximately 100 billion, with an average of 1,000 connections per neuron, for a total of 100 trillion connections. With 100 trillion connections and 100 million chunks of knowledge (including patterns and skills), we get an estimate of about a million connections per chunk.
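
  The arithmetic behind that estimate, spelled out (the figures are the rough ones quoted above):

```python
# The rough arithmetic behind the connections-per-chunk estimate.
neurons = 100e9                    # ~100 billion neurons
connections_per_neuron = 1_000
chunks = 100e6                     # ~100 million chunks of knowledge

total_connections = neurons * connections_per_neuron   # 100 trillion
connections_per_chunk = total_connections / chunks     # ~1 million
print(f"{total_connections:.0e} connections, {connections_per_chunk:.0e} per chunk")
```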

  Our computer simulations of neural nets use a variety of different types of neuron models, all of which are relatively simple. Efforts to provide detailed electronic models of real mammalian neurons appear to show that while animal neurons are more complicated than typical computer models, the difference in complexity is modest. Even using our simpler computer versions of neurons, we find that we can model a chunk of knowledge (a face, a character shape, a phoneme, a word sense) using as few as a thousand connections per chunk. Thus our rough estimate of a million neural connections in the human brain per human knowledge chunk appears reasonable.
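
  As a rough illustration of what a thousand-connection chunk might look like in such a simple neuron model (the sizes and threshold below are assumptions for illustration, not measurements): a single model neuron whose weights store one pattern responds to that pattern, and to noisy versions of it, while ignoring unrelated input.

```python
# A "chunk" as one model neuron with 1,000 weighted connections: it fires when
# the input resembles the stored pattern. Sizes and threshold are illustrative.
import random

N = 1_000
stored_pattern = [random.choice([-1, 1]) for _ in range(N)]
weights = stored_pattern[:]          # Hebbian-style: weights copy the pattern

def chunk_fires(inputs, threshold=0.8 * N):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return activation >= threshold

noisy = [x if random.random() > 0.05 else -x for x in stored_pattern]
unrelated = [random.choice([-1, 1]) for _ in range(N)]
print(chunk_fires(stored_pattern), chunk_fires(noisy), chunk_fires(unrelated))
```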

  Indeed it appears ample. Thus we could make my estimate (of the number of knowledge chunks) a thousand times greater, and the calculation still works. It is likely, however, that the brain’s encoding of knowledge is less efficient than the methods we use in our machines. This apparent inefficiency is consistent with our understanding that the human brain is conservatively designed. The brain relies on a large degree of redundancy and a relatively low density of information storage to gain reliability and to continue to function effectively despite a high rate of neuron loss as we age. My conclusion is that it does not appear that we need to contemplate a model of information processing in individual neurons that is significantly more complex than we currently understand in order to explain human capability. The brain is big enough.

  But we don’t need to simulate the entire evolution of the human brain in order to tap the intricate secrets it contains. Just as a technology company will take apart and “reverse engineer” (analyze to understand the methods of) a rival’s products, we can do the same with the human brain. It is, after all, the best example we can get our hands on of an intelligent process. We can tap the architecture, organization, and innate knowledge of the human brain in order to greatly accelerate our understanding of how to design intelligence in a machine. By probing the brain’s circuits, we can copy and imitate a proven design, one that took its original designer several billion years to develop. (And it’s not even copyrighted.)

  As we approach the computational ability to simulate the human brain—we’re not there today, but we will begin to be in about a decade’s time—such an effort will be intensely pursued. Indeed, this endeavor has already begun.

  For example, Synaptics’ vision chip is fundamentally a copy of the neural organization, implemented in silicon of course, of not only the human retina but also the early stages of mammalian visual processing. It has captured the essence of the algorithm of early mammalian visual processing, an algorithm called center-surround filtering. It is not a particularly complicated chip, yet it realistically captures the essence of the initial stages of human vision.
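
  A minimal sketch of center-surround filtering itself (the kernel below is a common textbook choice, not Synaptics’ actual circuit): each output value weighs the center pixel against the average of its neighbors, so uniform regions cancel out while edges and spots stand out.

```python
# Center-surround (Laplacian-style) filter sketch: the center weight is
# excitatory, the surround inhibitory, so flat regions cancel and edges remain.
import numpy as np

kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]) / 8.0

def center_surround(image):
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(kernel * image[y - 1:y + 2, x - 1:x + 2])
    return out

# A flat field with one bright edge: the filter responds only at the edge.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
print(np.round(center_surround(img), 2))
```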

  There is a popular conceit among observers, both informed and uninformed, that such a reverse engineering project is infeasible. Hofstadter worries that “our brains may be too weak to understand themselves.”20 But that is not what we are finding. As we probe the brain’s circuits, we find that the massively parallel algorithms are far from incomprehensible. Nor is there anything like an infinite number of them. There are hundreds of specialized regions in the brain, and it does have a rather ornate architecture, the consequence of its long history. The entire puzzle is not beyond our comprehension. It will certainly not be beyond the comprehension of twenty-first-century machines.

 
