As with the DNA computer described above, a key to successful quantum computing is a careful statement of the problem, including a precise way to test possible answers. The quantum computer effectively tests every possible combination of values for the qubits. So a quantum computer with one thousand qubits would test 2^1,000 (a number approximately equal to one followed by 301 zeroes) potential solutions simultaneously.
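That figure is easy to verify directly; the following minimal Python check (illustrative only) confirms that 2^1,000 runs to 302 decimal digits, roughly a one followed by 301 zeroes:

```python
# Number of basis states spanned by a register of 1,000 qubits.
states = 2 ** 1000
print(len(str(states)))       # 302 digits, i.e. roughly a 1 followed by 301 zeroes
print(format(states, ".2e"))  # ~1.07e+301
```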
A thousand-bit quantum computer would vastly outperform any conceivable DNA computer, or for that matter any conceivable nonquantum computer. There are two limitations to the process, however. The first is that, like the DNA and optical computers discussed above, only a special set of problems is amenable to being presented to a quantum computer. In essence, we need to be able to test each possible answer in a simple way.
The classic example of a practical use for quantum computing is in factoring very large numbers (finding which smaller numbers, when multiplied together, result in the large number). Factoring numbers with more than 512 bits is currently not achievable on a digital computer, even a massively parallel one.32 Interesting classes of problems amenable to quantum computing include breaking encryption codes (which rely on factoring large numbers). The other problem is that the computational power of a quantum computer depends on the number of entangled qubits, and the state of the art is currently limited to around ten bits. A ten-bit quantum computer is not very useful, since 2^10 is only 1,024. In a conventional computer, it is a straightforward process to combine memory bits and logic gates. We cannot, however, create a twenty-qubit quantum computer simply by combining two ten-qubit machines. All of the qubits have to be quantum-entangled together, and that has proved to be challenging.
A key question is: how difficult is it to add each additional qubit? The computational power of a quantum computer grows exponentially with each added qubit, but if it turns out that adding each additional qubit makes the engineering task exponentially more difficult, we will not be gaining any leverage. (That is, the computational power of a quantum computer will be only linearly proportional to the engineering difficulty.) In general, proposed methods for adding qubits make the resulting systems significantly more delicate and susceptible to premature decoherence.
There are proposals to increase significantly the number of qubits, although these have not yet been proved in practice. For example, Stephan Gulde and his colleagues at the University of Innsbruck have built a quantum computer using a single atom of calcium that has the potential to simultaneously encode dozens of qubits—possibly up to one hundred—using different quantum properties within the atom.33 The ultimate role of quantum computing remains unresolved. But even if a quantum computer with hundreds of entangled qubits proves feasible, it will remain a special-purpose device, although one with remarkable capabilities that cannot be emulated in any other way.
When I suggested in The Age of Spiritual Machines that molecular computing would be the sixth major computing paradigm, the idea was still controversial. There has been so much progress in the past five years that there has been a sea change in attitude among experts, and this is now a mainstream view. We already have proofs of concept for all of the major requirements for three-dimensional molecular computing: single-molecule transistors, memory cells based on atoms, nanowires, and methods to self-assemble and self-diagnose the trillions (potentially trillions of trillions) of components.
Contemporary electronics proceeds from the design of detailed chip layouts to photolithography to the manufacturing of chips in large, centralized factories. Nanocircuits are more likely to be created in small chemistry flasks, a development that will be another important step in the decentralization of our industrial infrastructure and will maintain the law of accelerating returns through this century and beyond.
The Computational Capacity of the Human Brain
It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty. . . . Since 1990, the power available to individual AI and robotics programs has doubled yearly, to 30 MIPS by 1994 and 500 MIPS by 1998. Seeds long ago alleged barren are suddenly sprouting. Machines read text, recognize speech, even translate languages. Robots drive cross-country, crawl across Mars, and trundle down office corridors. In 1996 a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a Boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only Spring. Wait until Summer.
—HANS MORAVEC, “WHEN WILL COMPUTER HARDWARE MATCH THE HUMAN BRAIN?” 1997
What is the computational capacity of a human brain? A number of estimates have been made, based on replicating the functionality of brain regions that have been reverse engineered (that is, the methods understood) at human levels of performance. Once we have an estimate of the computational capacity for a particular region, we can extrapolate that capacity to the entire brain by considering what portion of the brain that region represents. These estimates are based on functional simulation, which replicates the overall functionality of a region rather than simulating each neuron and interneuronal connection in that region.
Although we would not want to rely on any single calculation, we find that various assessments of different regions of the brain all provide reasonably close estimates for the entire brain. The following are order-of-magnitude estimates, meaning that we are attempting to determine the appropriate figures to the closest multiple of ten. The fact that different ways of making the same estimate provide similar answers corroborates the approach and indicates that the estimates are in an appropriate range.
The prediction that the Singularity—an expansion of human intelligence by a factor of trillions through merger with its nonbiological form—will occur within the next several decades does not depend on the precision of these calculations. Even if our estimate of the amount of computation required to simulate the human brain was too optimistic (that is, too low) by a factor of even one thousand (which I believe is unlikely), that would delay the Singularity by only about eight years.34 A factor of one million would mean a delay of only about fifteen years, and a factor of one billion would be a delay of about twenty-one years.35
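The sketch below replays the reasoning behind those delay figures under the simplifying assumption of a constant 1.25 price-performance doublings per year; the constant rate is an assumption for illustration, and an accelerating doubling rate would give the somewhat shorter delays cited above for the larger factors:

```python
import math

def delay_years(underestimate_factor, doublings_per_year=1.25):
    # Each doubling of price-performance halves the shortfall, so the delay is
    # the number of extra doublings needed divided by the doubling rate.
    return math.log2(underestimate_factor) / doublings_per_year

for factor in (1e3, 1e6, 1e9):
    print(f"{factor:.0e} shortfall -> ~{delay_years(factor):.0f}-year delay")
# ~8, ~16, and ~24 years under this constant-rate assumption.
```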
Hans Moravec, legendary roboticist at Carnegie Mellon University, has analyzed the transformations performed by the neural image-processing circuitry contained in the retina.36 The retina is about two centimeters wide and a half millimeter thick. Most of the retina’s depth is devoted to capturing an image, but one fifth of it is devoted to image processing, which includes distinguishing dark and light, and detecting motion in about one million small regions of the image.
The retina, according to Moravec’s analysis, performs ten million of these edge and motion detections each second. Based on his several decades of experience in creating robotic vision systems, he estimates that the execution of about one hundred computer instructions is required to re-create each such detection at human levels of performance, meaning that replicating the image-processing functionality of this portion of the retina requires 1,000 MIPS. The human brain is about 75,000 times heavier than the 0.02 grams of neurons in this portion of the retina, resulting in an estimate of about 10^14 (100 trillion) instructions per second for the entire brain.37
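The extrapolation can be replayed directly; the sketch below uses only the figures stated in the paragraph above:

```python
detections_per_second = 10_000_000   # edge and motion detections performed by the retina
instructions_per_detection = 100     # Moravec's estimate to match each detection
retina_ips = detections_per_second * instructions_per_detection   # 1e9, i.e. 1,000 MIPS

brain_to_retina_ratio = 75_000       # whole brain vs. the 0.02 grams of processing neurons
whole_brain_ips = retina_ips * brain_to_retina_ratio

print(f"Retina image processing: {retina_ips / 1e6:,.0f} MIPS")
print(f"Whole-brain extrapolation: ~{whole_brain_ips:.0e} instructions per second")  # ~7.5e13, i.e. ~10^14
```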
Another estimate comes from the work of Lloyd Watts and his colleagues on creating functional simulations of regions of the human auditory system, which I discuss further in chapter 4.38 One of the functions of the software Watts has developed is a task called “stream separation,” which is used in teleconferencing and other applications to achieve telepresence (the localization of each participant in a remote audio teleconference). To accomplish this, Watts explains, means “precisely measuring the time delay between sound sensors that are separated in space and that both receive the sound.” The process involves pitch analysis, spatial position, and speech cues, including language-specific cues. “One of the important cues used by humans for localizing the position of a sound source is the Interaural Time Difference (ITD), that is, the difference in time of arrival of sounds at the two ears.”39
Watts’s own group has created functionally equivalent re-creations of these brain regions derived from reverse engineering. He estimates that 10^11 cps are required to achieve human-level localization of sounds. The auditory cortex regions responsible for this processing comprise at least 0.1 percent of the brain’s neurons. So we again arrive at a ballpark estimate of around 10^14 cps (10^11 cps × 10^3).
Yet another estimate comes from a simulation at the University of Texas that represents the functionality of a cerebellum region containing 10^4 neurons; this required about 10^8 cps, or about 10^4 cps per neuron. Extrapolating this over an estimated 10^11 neurons results in a figure of about 10^15 cps for the entire brain.
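The same scale-up arithmetic applies to the auditory and cerebellum estimates; this sketch simply replays the multiplications using the figures stated above:

```python
# Auditory localization (Watts): 1e11 cps for regions holding ~0.1% of the brain's neurons.
auditory_whole_brain = 1e11 / 0.001             # ~1e14 cps

# Cerebellum simulation (University of Texas): 1e8 cps for a region of 1e4 neurons.
cps_per_neuron = 1e8 / 1e4                      # ~1e4 cps per neuron
cerebellum_whole_brain = cps_per_neuron * 1e11  # ~1e15 cps over ~1e11 neurons

print(f"Auditory extrapolation:   ~{auditory_whole_brain:.0e} cps")
print(f"Cerebellum extrapolation: ~{cerebellum_whole_brain:.0e} cps")
```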
We will discuss the state of human-brain reverse engineering later, but it is clear that we can emulate the functionality of brain regions with less computation than would be required to simulate the precise nonlinear operation of each neuron and all of the neural components (that is, all of the complex interactions that take place inside each neuron). We come to the same conclusion when we attempt to simulate the functionality of organs in the body. For example, implantable devices are being tested that simulate the functionality of the human pancreas in regulating insulin levels.40 These devices work by measuring glucose levels in the blood and releasing insulin in a controlled fashion to keep the levels in an appropriate range. While they follow a method similar to that of a biological pancreas, they do not attempt to simulate each pancreatic islet cell, and there would be no reason to do so.
These estimates all result in comparable orders of magnitude (10^14 to 10^15 cps). Given the early stage of human-brain reverse engineering, I will use a more conservative figure of 10^16 cps for our subsequent discussions.
Functional simulation of the brain is sufficient to re-create human powers of pattern recognition, intellect, and emotional intelligence. On the other hand, if we want to “upload” a particular person’s personality (that is, capture all of his or her knowledge, skills, and personality, a concept I will explore in greater detail at the end of chapter 4), then we may need to simulate neural processes at the level of individual neurons and portions of neurons, such as the soma (cell body), axon (output connection), dendrites (trees of incoming connections), and synapses (regions connecting axons and dendrites). For this, we need to look at detailed models of individual neurons. The “fan out” (number of interneuronal connections) per neuron is estimated at 10^3. With an estimated 10^11 neurons, that’s about 10^14 connections. With a reset time of five milliseconds, that comes to about 10^16 synaptic transactions per second.
Neuron-model simulations indicate the need for about 10^3 calculations per synaptic transaction to capture the nonlinearities (complex interactions) in the dendrites and other neuron regions, resulting in an overall estimate of about 10^19 cps for simulating the human brain at this level.41 We can therefore consider this an upper bound, but 10^14 to 10^16 cps to achieve functional equivalence of all brain regions is likely to be sufficient.
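Replaying the figures just given makes the neuron-level estimate concrete (a sketch using only numbers stated above):

```python
neurons = 1e11
fan_out = 1e3                        # interneuronal connections per neuron
connections = neurons * fan_out      # ~1e14 connections
reset_time_s = 0.005                 # five milliseconds per synaptic transaction
transactions_per_s = connections / reset_time_s     # ~2e16, i.e. on the order of 1e16

calcs_per_transaction = 1e3          # to capture dendritic and other nonlinearities
upper_bound_cps = transactions_per_s * calcs_per_transaction   # ~2e19, i.e. on the order of 1e19

print(f"Synaptic transactions per second: ~{transactions_per_s:.0e}")
print(f"Neuron-level upper bound:         ~{upper_bound_cps:.0e} cps")
```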
IBM’s Blue Gene/L supercomputer, now being built and scheduled to be completed around the time of the publication of this book, is projected to provide 360 trillion calculations per second (3.6 × 10^14 cps).42 This figure is already greater than the lower estimates described above. Blue Gene/L will also have around one hundred terabytes (about 10^15 bits) of main storage, more than our memory estimate for functional emulation of the human brain (see below). In line with my earlier predictions, supercomputers will achieve my more conservative estimate of 10^16 cps for functional human-brain emulation by early in the next decade (see the “Supercomputer Power” figure on p. 71).
Accelerating the Availability of Human-Level Personal Computing. Personal computers today provide more than 10^9 cps. According to the projections in the “Exponential Growth of Computing” chart (p. 70), we will achieve 10^16 cps by 2025. However, there are several ways this timeline can be accelerated. Rather than using general-purpose processors, one can use application-specific integrated circuits (ASICs) to provide greater price-performance for very repetitive calculations. Such circuits already provide extremely high computational throughput for the repetitive calculations used in generating moving images in video games. ASICs can increase price-performance a thousandfold, cutting about eight years off the 2025 date. The varied programs that a simulation of the human brain will comprise will also include a great deal of repetition and thus will be amenable to ASIC implementation. The cerebellum, for example, repeats a basic wiring pattern billions of times.
We will also be able to amplify the power of personal computers by harvesting the unused computation power of devices on the Internet. New communication paradigms such as “mesh” computing contemplate treating every device in the network as a node rather than just a “spoke.”43 In other words, instead of devices (such as personal computers and PDAs) merely sending information to and from nodes, each device will act as a node itself, sending information to and receiving information from every other device. That will create very robust, self-organizing communication networks. It will also make it easier for computers and other devices to tap unused CPU cycles of the devices in their region of the mesh.
Currently at least 99 percent, if not 99.9 percent, of the computational capacity of all the computers on the Internet lies unused. Effectively harnessing this computation can provide another factor of 10^2 or 10^3 in increased price-performance. For these reasons, it is reasonable to expect human brain capacity, at least in terms of hardware computational capacity, for one thousand dollars by around 2020.
Yet another approach to accelerate the availability of human-level computation in a personal computer is to use transistors in their native “analog” mode. Many of the processes in the human brain are analog, not digital. Although we can emulate analog processes to any desired degree of accuracy with digital computation, we lose several orders of magnitude of efficiency in doing so. A single transistor can multiply two values represented as analog levels; doing so with digital circuits requires thousands of transistors. California Institute of Technology’s Carver Mead has been pioneering this concept.44 One disadvantage of Mead’s approach is that the engineering design time required for such native analog computing is lengthy, so most researchers developing software to emulate regions of the brain usually prefer the rapid turnaround of software simulations.
Human Memory Capacity. How does computational capacity compare to human memory capacity? It turns out that we arrive at similar time-frame estimates if we look at human memory requirements. The number of “chunks” of knowledge mastered by an expert in a domain is approximately 10^5 for a variety of domains. These chunks represent patterns (such as faces) as well as specific knowledge. For example, a world-class chess master is estimated to have mastered about 100,000 board positions. Shakespeare used 29,000 words but close to 100,000 meanings of those words. Development of expert systems in medicine indicates that humans can master about 100,000 concepts in a domain. If we estimate that this “professional” knowledge represents as little as 1 percent of the overall pattern and knowledge store of a human, we arrive at an estimate of 10^7 chunks.
Based on my own experience in designing systems that can store similar chunks of knowledge in either rule-based expert systems or self-organizing pattern-recognition systems, a reasonable estimate is about 10^6 bits per chunk (pattern or item of knowledge), for a total capacity of 10^13 (10 trillion) bits for a human’s functional memory.
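The chunk arithmetic is straightforward to replay (a sketch using only the figures above; the 1 percent fraction is, as noted, an estimate):

```python
professional_chunks = 1e5        # patterns and knowledge items mastered by an expert
fraction_of_total = 0.01         # professional knowledge assumed to be ~1% of the whole
total_chunks = professional_chunks / fraction_of_total   # ~1e7 chunks

bits_per_chunk = 1e6             # estimate from rule-based and pattern-recognition systems
functional_memory_bits = total_chunks * bits_per_chunk   # ~1e13 bits

print(f"Functional memory estimate: ~{functional_memory_bits:.0e} bits")
```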
According to the projections from the ITRS road map (see RAM chart on p. 57), we will be able to purchase 10^13 bits of memory for one thousand dollars by around 2018. Keep in mind that this memory will be millions of times faster than the electrochemical memory process used in the human brain and thus will be far more effective.
Again, if we model human memory on the level of individual interneuronal connections, we get a higher estimate. We can estimate about 10^4 bits per connection to store the connection patterns and neurotransmitter concentrations. With an estimated 10^14 connections, that comes to 10^18 (a billion billion) bits.
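Here the arithmetic is a single multiplication (sketch using the figures just stated):

```python
connections = 1e14            # ~1e11 neurons x ~1e3 connections each
bits_per_connection = 1e4     # connection pattern plus neurotransmitter concentrations
print(f"Connection-level memory: ~{connections * bits_per_connection:.0e} bits")  # ~1e18
```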
Based on the above analyses, it is reasonable to expect the hardware that can emulate human-brain functionality to be available for approximately one thousand dollars by around 2020. As we will discuss in chapter 4, the software that will replicate that functionality will take about a decade longer. However, the exponential growth of the price-performance, capacity, and speed of our hardware technology will continue during that period, so by 2030 it will take a village of human brains (around one thousand) to match a thousand dollars’ worth of computing. By 2050, one thousand dollars of computing will exceed the processing power of all human brains on Earth. Of course, this figure includes those brains still using only biological neurons.
While human neurons are wondrous creations, we wouldn’t (and don’t) design computing circuits using the same slow methods. Despite the ingenuity of the designs evolved through natural selection, they are many orders of magnitude less capable than what we will be able to engineer. As we reverse engineer our bodies and brains, we will be in a position to create comparable systems that are far more durable and that operate thousands to millions of times faster than our naturally evolved systems. Our electronic circuits are already more than one million times faster than a neuron’s electrochemical processes, and this speed is continuing to accelerate.
Most of the complexity of a human neuron is devoted to maintaining its life-support functions, not its information-processing capabilities. Ultimately, we will be able to port our mental processes to a more suitable computational substrate. Then our minds won’t have to stay so small.