The Singularity Is Near: When Humans Transcend Biology


by Ray Kurzweil

MOLLY 2004: One that I’m thinking of, anyway. You can merge your thinking with someone else and still keep your separate identity at the same time.

  MOLLY 2104: If the situation—and the person—is right, then, yes, it’s a very sublime thing to do.

  MOLLY 2004: Like falling in love?

  MOLLY 2104: Like being in love. It’s the ultimate way to share.

  GEORGE 2048: I think you’ll go for it, Molly 2004.

  MOLLY 2104: You ought to know, George, since you were the first person I did it with.

  CHAPTER FOUR

  * * *

  Achieving the Software of

  Human Intelligence

  How to Reverse Engineer the Human Brain

  There are good reasons to believe that we are at a turning point, and that it will be possible within the next two decades to formulate a meaningful understanding of brain function. This optimistic view is based on several measurable trends, and a simple observation which has been proven repeatedly in the history of science: Scientific advances are enabled by a technology advance that allows us to see what we have not been able to see before. At about the turn of the twenty-first century, we passed a detectable turning point in both neuroscience knowledge and computing power. For the first time in history, we collectively know enough about our own brains, and have developed such advanced computing technology, that we can now seriously undertake the construction of a verifiable, real-time, high-resolution model of significant parts of our intelligence.

  —LLOYD WATTS, NEUROSCIENTIST1

  Now, for the first time, we are observing the brain at work in a global manner with such clarity that we should be able to discover the overall programs behind its magnificent powers.

  —J. G. TAYLOR, B. HORWITZ, K. J. FRISTON, NEUROSCIENTISTS2

  The brain is good: it is an existence proof that a certain arrangement of matter can produce mind, perform intelligent reasoning, pattern recognition, learning and a lot of other important tasks of engineering interest. Hence we can learn to build new systems by borrowing ideas from the brain. . . . The brain is bad: it is an evolved, messy system where a lot of interactions happen because of evolutionary contingencies. . . . On the other hand, it must also be robust (since we can survive with it) and be able to stand fairly major variations and environmental insults, so the truly valuable insight from the brain might be how to create resilient complex systems that self-organize well. . . . The interactions within a neuron are complex, but on the next level neurons seem to be somewhat simple objects that can be put together flexibly into networks. The cortical networks are a real mess locally, but again on the next level the connectivity isn’t that complex. It would be likely that evolution has produced a number of modules or repeating themes that are being re-used, and when we understand them and their interactions we can do something similar.

  —ANDERS SANDBERG, COMPUTATIONAL NEUROSCIENTIST, ROYAL INSTITUTE OF TECHNOLOGY, SWEDEN

  Reverse Engineering the Brain: An Overview of the Task

  The combination of human-level intelligence with a computer’s inherent superiority in speed, accuracy, and memory-sharing ability will be formidable. To date, however, most AI research and development has utilized engineering methods that are not necessarily based on how the human brain functions, for the simple reason that we have not had the precise tools needed to develop detailed models of human cognition.

  Our ability to reverse engineer the brain—to see inside, model it, and simulate its regions—is growing exponentially. We will ultimately understand the principles of operation underlying the full range of our own thinking, knowledge that will provide us with powerful procedures for developing the software of intelligent machines. We will modify, refine, and extend these techniques as we apply them to computational technologies that are far more powerful than the electrochemical processing that takes place in biological neurons. A key benefit of this grand project will be the precise insights it offers into ourselves. We will also gain powerful new ways to treat neurological problems such as Alzheimer’s, stroke, Parkinson’s disease, and sensory disabilities, and ultimately will be able to vastly extend our intelligence.

  New Brain-Imaging and Modeling Tools. The first step in reverse engineering the brain is to peer into the brain to determine how it works. So far, our tools for doing this have been crude, but that is now changing, as a significant number of new scanning technologies feature greatly improved spatial and temporal resolution, price-performance, and bandwidth. Simultaneously we are rapidly accumulating data on the precise characteristics and dynamics of the constituent parts and systems of the brain, ranging from individual synapses to large regions such as the cerebellum, which comprises more than half of the brain’s neurons. Extensive databases are methodically cataloging our exponentially growing knowledge of the brain.3

  Researchers have also shown they can rapidly understand and apply this information by building models and working simulations. These simulations of brain regions are based on the mathematical principles of complexity theory and chaotic computing and are already providing results that closely match experiments performed on actual human and animal brains.

  As noted in chapter 2, the power of the scanning and computational tools needed for the task of reverse engineering the brain is accelerating, similar to the acceleration in technology that made the genome project feasible. When we get to the nanobot era (see “Scanning Using Nanobots” on p. 163), we will be able to scan from inside the brain with exquisitely high spatial and temporal resolution.4 There are no inherent barriers to our being able to reverse engineer the operating principles of human intelligence and replicate these capabilities in the more powerful computational substrates that will become available in the decades ahead. The human brain is a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling.

  The Software of the Brain. The price-performance of computation and communication is doubling every year. As we saw earlier, the computational capacity needed to emulate human intelligence will be available in less than two decades.5 A principal assumption underlying the expectation of the Singularity is that nonbiological mediums will be able to emulate the richness, subtlety, and depth of human thinking. But achieving the hardware computational capacity of a single human brain—or even of the collective intelligence of villages and nations—will not automatically produce human levels of capability. (By “human levels” I include all the diverse and subtle ways humans are intelligent, including musical and artistic aptitude, creativity, physical motion through the world, and understanding and responding appropriately to emotions.) The hardware computational capacity is necessary but not sufficient. Understanding the organization and content of these resources—the software of intelligence—is even more critical and is the objective of the brain reverse-engineering undertaking.

  Once a computer achieves a human level of intelligence, it will necessarily soar past it. A key advantage of nonbiological intelligence is that machines can easily share their knowledge. If you learn French or read War and Peace, you can’t readily download that learning to me, as I have to acquire that scholarship the same painstaking way that you did. I can’t (yet) quickly access or transmit your knowledge, which is embedded in a vast pattern of neurotransmitter concentrations (levels of chemicals in the synapses that allow one neuron to influence another) and interneuronal connections (portions of the neurons called axons and dendrites that connect neurons).

  But consider the case of a machine’s intelligence. At one of my companies, we spent years teaching one research computer how to recognize continuous human speech, using pattern-recognition software.6 We exposed it to thousands of hours of recorded speech, corrected its errors, and patiently improved its performance by training its “chaotic” self-organizing algorithms (methods that modify their own rules, based on processes that use semirandom initial information, and with results that are not fully predictable). Finally, the computer became quite adept at recognizing speech. Now, if you want your own personal computer to recognize speech, you don’t have to put it through the same painstaking learning process (as we do with each human child); you can simply download the already established patterns in seconds.
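The train-once, download-everywhere contrast can be sketched with a toy trainable recognizer. The perceptron below is purely illustrative (it is not the speech software described above): one instance learns a pattern slowly through repeated error correction, and a second acquires the same capability instantly by copying the learned weights.

```python
class TinyRecognizer:
    """A minimal trainable pattern recognizer (an illustrative perceptron)."""

    def __init__(self):
        self.w = [0.0, 0.0]
        self.b = 0.0

    def predict(self, x):
        return 1 if self.w[0] * x[0] + self.w[1] * x[1] + self.b > 0 else 0

    def train(self, samples, epochs=20, lr=0.1):
        # The slow way: repeated exposure and error correction.
        for _ in range(epochs):
            for x, target in samples:
                err = target - self.predict(x)
                self.w[0] += lr * err * x[0]
                self.w[1] += lr * err * x[1]
                self.b += lr * err

# One machine learns the pattern through many corrections...
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
teacher = TinyRecognizer()
teacher.train(data)

# ...and a second machine "downloads" that learning in a single step.
student = TinyRecognizer()
student.w, student.b = list(teacher.w), teacher.b
assert all(student.predict(x) == y for x, y in data)
```

The copy on the last lines is the whole point: the student never sees the training data, yet performs identically to the teacher.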

  Analytic Versus Neuromorphic Modeling of the Brain. A good example of the divergence between human intelligence and contemporary AI is how each undertakes the solution of a chess problem. Humans do so by recognizing patterns, while machines build huge logical “trees” of possible moves and counter-moves. Most technology (of all kinds) to date has used this latter type of “top-down,” analytic, engineering approach. Our flying machines, for example, do not attempt to re-create the physiology and mechanics of birds. But as our tools for reverse engineering the ways of nature are growing rapidly in sophistication, technology is moving toward emulating nature while implementing these techniques in far more capable substrates.
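The “trees” of possible moves and countermoves that chess machines build are classically searched with the minimax procedure. A minimal sketch, using a tiny hand-built tree rather than a real chess position:

```python
def minimax(node, maximizing=True):
    """Exhaustive search of a game tree of moves and countermoves.
    A node is either a numeric leaf score or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-built tree: our move, the opponent's reply, our reply (leaf scores).
tree = [
    [[3, 5], [2, 9]],   # first candidate move: the opponent can hold us to 5
    [[0, 7], [4, 6]],   # second candidate move: the opponent can hold us to 6
]
assert minimax(tree) == 6   # the machine picks the second move
```

The machine assumes the opponent always replies with the move worst for us, and chooses accordingly; a human master, by contrast, prunes nearly all of this tree by recognizing familiar patterns.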

  The most compelling scenario for mastering the software of intelligence is to tap directly into the blueprint of the best example we can get our hands on of an intelligent process: the human brain. Although it took its original “designer” (evolution) several billion years to develop the brain, it’s readily available to us, protected by a skull but with the right tools not hidden from our view. Its contents are not yet copyrighted or patented. (We can, however, expect that to change; patent applications have already been filed based on brain reverse engineering.)7 We will apply the thousands of trillions of bytes of information derived from brain scans and neural models at many levels to design more intelligent parallel algorithms for our machines, particularly those based on self-organizing paradigms.

  With this self-organizing approach, we don’t have to attempt to replicate every single neural connection. There is a great deal of repetition and redundancy within any particular brain region. We are discovering that higher-level models of brain regions are often simpler than the detailed models of their neuronal components.

  How Complex Is the Brain? Although the information contained in a human brain would require on the order of one billion billion bits (see chapter 3), the initial design of the brain is based on the rather compact human genome. The entire genome consists of eight hundred million bytes, but most of it is redundant, leaving only about thirty to one hundred million bytes (less than 10^9 bits) of unique information (after compression), which is smaller than the program for Microsoft Word.8 To be fair, we should also take into account “epigenetic” data, which is information stored in proteins that control gene expression (that is, that determine which genes are allowed to create proteins in each cell), as well as the entire protein-replication machinery, such as the ribosomes and a host of enzymes. However, such additional information does not significantly change the order of magnitude of this calculation.9 Slightly more than half of the genetic and epigenetic information characterizes the initial state of the human brain.

  Of course, the complexity of our brains greatly increases as we interact with the world (by a factor of about one billion over the genome).10 But highly repetitive patterns are found in each specific brain region, so it is not necessary to capture each particular detail to successfully reverse engineer the relevant algorithms, which combine digital and analog methods (for example, the firing of a neuron can be considered a digital event whereas neurotransmitter levels in the synapse can be considered analog values). The basic wiring pattern of the cerebellum, for example, is described in the genome only once but repeated billions of times. With the information from brain scanning and modeling studies, we can design simulated “neuromorphic” equivalent software (that is, algorithms functionally equivalent to the overall performance of a brain region).

  The pace of building working models and simulations is only slightly behind the availability of brain-scanning and neuron-structure information. There are more than fifty thousand neuroscientists in the world, writing articles for more than three hundred journals.11 The field is broad and diverse, with scientists and engineers creating new scanning and sensing technologies and developing models and theories at many levels. So even people in the field are often not completely aware of the full dimensions of contemporary research.

  Modeling the Brain. In contemporary neuroscience, models and simulations are being developed from diverse sources, including brain scans, interneuronal connection models, neuronal models, and psychophysical testing. As mentioned earlier, auditory-system researcher Lloyd Watts has developed a comprehensive model of a significant portion of the human auditory-processing system from neurobiology studies of specific neuron types and interneuronal-connection information. Watts’s model includes five parallel paths and the actual representations of auditory information at each stage of neural processing. Watts has implemented his model in a computer as real-time software that can locate and identify sounds, functioning much as human hearing does. Although a work in progress, the model illustrates the feasibility of converting neurobiological models and brain-connection data into working simulations.

  As Hans Moravec and others have speculated, these efficient functional simulations require about one thousand times less computation than would be required if we simulated the nonlinearities in each dendrite, synapse, and other subneural structure in the region being simulated. (As I discussed in chapter 3, we can estimate the computation required for functional simulation of the brain at 10^16 calculations per second [cps], versus 10^19 cps to simulate the subneural nonlinearities.)12
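The thousandfold saving follows directly from the two chapter 3 estimates:

```python
# The thousandfold saving, from the two chapter 3 estimates.
functional_cps = 10**16   # functional simulation of brain regions
subneural_cps = 10**19    # simulating every dendritic and synaptic nonlinearity
assert subneural_cps // functional_cps == 1000
```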

  The actual speed ratio between contemporary electronics and the electrochemical signaling in biological interneuronal connections is at least one million to one. We find this same inefficiency in all aspects of our biology, because biological evolution built all of its mechanisms and systems with a severely constrained set of materials: namely, cells, which are themselves made from a limited set of proteins. Although biological proteins are three-dimensional, they are restricted to complex molecules that can be folded from a linear (one-dimensional) sequence of amino acids.

  Peeling the Onion. The brain is not a single information-processing organ but rather an intricate and intertwined collection of hundreds of specialized regions. The process of “peeling the onion” to understand the functions of these interleaved regions is well under way. As the requisite neuron descriptions and brain-interconnection data become available, detailed and implementable replicas such as the simulation of the auditory regions described below (see “Another Example: Watts’s Model of the Auditory Regions” on p. 183) will be developed for all brain regions.

  Most brain-modeling algorithms are not the sequential, logical methods that are commonly used in digital computing today. The brain tends to use self-organizing, chaotic, holographic processes (that is, information not located in one place but distributed throughout a region). It is also massively parallel and utilizes hybrid digital-controlled analog techniques. However, a wide range of projects has demonstrated our ability to understand these techniques and to extract them from our rapidly escalating knowledge of the brain and its organization.

  After the algorithms of a particular region are understood, they can be refined and extended before being implemented in synthetic neural equivalents. They can be run on a computational substrate that is already far faster than neural circuitry. (Current computers perform computations in billionths of a second, compared to thousandths of a second for interneuronal transactions.) And we can also make use of the methods for building intelligent machines that we already understand.

  Is the Human Brain Different from a Computer?

  The answer to this question depends on what we mean by the word “computer.” Most computers today are all digital and perform one (or perhaps a few) computations at a time at extremely high speed. In contrast, the human brain combines digital and analog methods but performs most computations in the analog (continuous) domain, using neurotransmitters and related mechanisms. Although these neurons execute calculations at extremely slow speeds (typically two hundred transactions per second), the brain as a whole is massively parallel: most of its neurons work at the same time, resulting in up to one hundred trillion computations being carried out simultaneously.
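A rough multiplication (my own back-of-envelope check, using the two figures just given) shows that the implied aggregate throughput lands near the 10^16 cps functional-simulation estimate from chapter 3:

```python
# Aggregate throughput implied by the figures above.
simultaneous_computations = 100 * 10**12   # "up to one hundred trillion" at once
transactions_per_second = 200              # typical interneuronal transaction rate
aggregate_cps = simultaneous_computations * transactions_per_second
assert aggregate_cps == 2 * 10**16         # on the order of 10^16 cps
```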

  The massive parallelism of the human brain is the key to its pattern-recognition ability, which is one of the pillars of our species’ thinking. Mammalian neurons engage in a chaotic dance (that is, with many apparently random interactions), and if the neural network has learned its lessons well, a stable pattern will emerge, reflecting the network’s decision. At present, parallel designs for computers are somewhat limited. But there is no reason why functionally equivalent nonbiological re-creations of biological neural networks cannot be built using these principles. Indeed, dozens of efforts around the world have already succeeded in doing so. My own technical field is pattern recognition, and the projects that I have been involved in for about forty years use this form of trainable and nondeterministic computing.
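The settling of many apparently random interactions into a stable pattern can be illustrated with a classic self-organizing model, the Hopfield network (a standard textbook example; the text does not name a specific model). The network stores a pattern through Hebbian learning, and repeated local updates pull a corrupted cue back to the stored attractor:

```python
def train_hopfield(patterns):
    """Hebbian learning: each stored pattern strengthens mutually consistent weights."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    """Repeated local updates: each unit aligns itself with its weighted input
    until the whole network settles into a stable pattern."""
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, 1, -1, -1, 1, -1]       # the pattern the network has "learned"
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]       # a cue with one unit flipped
assert recall(w, noisy) == stored    # the network settles back onto the learned pattern
```

No single unit holds the pattern; it is distributed across the weights, and the network's "decision" emerges from the collective dynamics, much as the paragraph above describes.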

  Many of the brain’s characteristic methods of organization can also be effectively simulated using conventional computing of sufficient power. Duplicating the design paradigms of nature will, I believe, be a key trend in future computing. We should keep in mind, as well, that digital computing can be functionally equivalent to analog computing—that is, we can perform all of the functions of a hybrid digital-analog network with an all-digital computer. The reverse is not true: we can’t simulate all of the functions of a digital computer with an analog one.

  However, analog computing does have an engineering advantage: it is potentially thousands of times more efficient. An analog computation can be performed by a few transistors or, in the case of mammalian neurons, specific electrochemical processes. A digital computation, in contrast, requires thousands or tens of thousands of transistors. On the other hand, this advantage can be offset by the ease of programming (and modifying) digital computer-based simulations.

 
