
The Age of Spiritual Machines: When Computers Exceed Human Intelligence


by Ray Kurzweil


  I FEEL GOOD WHEN I LEARN SOMETHING, BUT ACQUIRING KNOWLEDGE SURE IS A TEDIOUS PROCESS. PARTICULARLY WHEN I’VE BEEN UP ALL NIGHT STUDYING FOR AN EXAM. AND I’M NOT SURE HOW MUCH OF THIS STUFF I RETAIN.

  That’s another weakness of the human form of intelligence. Computers can share their knowledge with each other readily and quickly. We humans don’t have a means for sharing knowledge directly, other than the slow process of human communication, of human teaching and learning.

  DIDN’T YOU SAY THAT COMPUTER NEURAL NETS LEARN THE SAME WAY PEOPLE DO?

  You mean, slowly?

  EXACTLY, BY BEING EXPOSED TO PATTERNS THOUSANDS OF TIMES, JUST LIKE US.

  Yes, that’s the point of neural nets; they’re intended as analogues of human neural nets, at least simplified versions of what we understand them to be. However, we can build our electronic nets in such a way that once the net has painstakingly learned its lessons, the pattern of its synaptic connection strengths can be captured and then quickly downloaded to another machine, or to millions of other machines. Machines can readily share all of their accumulated knowledge, so only one machine has to do the learning. We humans can’t do that. That’s one reason I said that when computers reach the level of human intelligence, they will necessarily roar past it.
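  As a minimal sketch of that idea, the toy network below captures its learned connection strengths and copies them into a second network in one step. The class, its methods, and the JSON transfer format are illustrative assumptions, not any real neural net library.

```python
# Illustrative sketch: once one network has painstakingly learned its
# connection strengths, those strengths can be copied to another machine
# (or millions of machines) in an instant.
import json
import random

class TinyNet:
    """A toy fully connected layer; weights stand in for synaptic strengths."""
    def __init__(self, n_inputs, n_outputs):
        self.weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]

    def export_weights(self):
        # "Capture the pattern of synaptic connection strengths."
        return json.dumps(self.weights)

    def import_weights(self, blob):
        # "Download" another net's accumulated learning in one step.
        self.weights = json.loads(blob)

teacher = TinyNet(4, 2)    # imagine this net spent hours learning
student = TinyNet(4, 2)    # this one acquires the same knowledge instantly
student.import_weights(teacher.export_weights())
assert student.weights == teacher.weights
```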

  SO IS TECHNOLOGY GOING TO ENABLE US HUMANS TO DOWNLOAD KNOWLEDGE IN THE FUTURE? I MEAN, I ENJOY LEARNING, DEPENDING ON THE PROFESSOR, OF COURSE, BUT IT CAN BE A DRAG.

  The technology to communicate between the electronic world and the human neural world is already taking shape. So we will be able to directly feed streams of data to our neural pathways. Unfortunately, that doesn’t mean we can directly download knowledge, at least not to the human neural circuits we now use. As we’ve talked about, human learning is distributed throughout a region of our brain. Knowledge involves millions of connections, so our knowledge structures are not localized. Nature didn’t provide a direct pathway to adjust all those connections, other than the slow conventional way. While we will be able to create certain specific pathways to our neural connections, and indeed we’re already doing that, I don’t see how it would be practical to directly communicate to the many millions of interneuronal connections necessary to quickly download knowledge.

  I GUESS I’LL JUST HAVE TO KEEP HITTING THE BOOKS. SOME OF MY PROFESSORS ARE KIND OF COOL, THOUGH, THE WAY THEY SEEM TO KNOW EVERYTHING.

  As I said, humans are good at faking it when we go outside of our area of expertise. However, there is a way that downloading knowledge will be feasible by the middle of the twenty-first century.

  I’M LISTENING.

  Downloading knowledge will be one of the benefits of the neural-implant technology. We’ll have implants that extend our capacity for retaining knowledge, for enhancing memory. Unlike nature, we won’t leave out a quick knowledge downloading port in the electronic version of our synapses. So it will be feasible to quickly download knowledge to these electronic extensions of our brains. Of course, when we fully port our minds to a new computational medium, downloading knowledge will become even easier.

  SO I’LL BE ABLE TO BUY MEMORY IMPLANTS PRELOADED WITH A KNOWLEDGE OF, SAY, MY FRENCH LIT COURSE.

  Sure, or you can mentally click on a French literature web site and download the knowledge directly from the site.

  KIND OF DEFEATS THE PURPOSE OF LITERATURE, DOESN’T IT? I MEAN SOME OF THIS STUFF IS NEAT TO READ.

  I would prefer to think that intensifying knowledge will enhance the appreciation of literature, or any art form. After all, we need knowledge to appreciate an artistic expression. Otherwise, we don’t understand the vocabulary and the allusions.

  Anyway, you’ll still be able to read, just a lot faster. In the second half of the twenty-first century, you’ll be able to read a book in a few seconds.

  I DON’T THINK I COULD TURN THE PAGES THAT FAST.

  Oh come on, the pages will be—

  VIRTUAL PAGES, OF COURSE.

  PART TWO

  PREPARING THE PRESENT

  CHAPTER SIX

  BUILDING NEW BRAINS ...

  THE HARDWARE OF INTELLIGENCE

  You can only make a certain amount with your hands, but with your mind, it’s unlimited.

  —Kal Seinfeld’s advice to his son, Jerry

  Let’s review what we need to build an intelligent machine. One resource required is the right set of formulas. We examined three quintessential formulas in chapter 4. There are dozens of others in use, and a more complete understanding of the brain will undoubtedly introduce hundreds more. But all of these appear to be variations on the three basic themes: recursive search, self-organizing networks of elements, and evolutionary improvement through repeated struggle among competing designs.

  A second resource needed is knowledge. Some pieces of knowledge are needed as seeds for a process to converge on a meaningful result. Much of the rest can be automatically learned by adaptive methods when neural nets or evolutionary algorithms are exposed to the right learning environment.

  The third resource required is computation itself. In this regard, the human brain is eminently capable in some ways, and remarkably weak in others. Its strength is reflected in its massive parallelism, an approach that our computers can also benefit from. The brain’s weakness is the extraordinarily slow speed of its computing medium, a limitation that computers do not share with us. For this reason, DNA-based evolution will eventually have to be abandoned. DNA-based evolution is good at tinkering with and extending its designs, but it is unable to scrap an entire design and start over. Organisms created through DNA-based evolution are stuck with an extremely plodding type of circuitry.

  But the Law of Accelerating Returns tells us that evolution will not remain stuck at a dead end for very long. And indeed, evolution has found a way around the computational limitations of neural circuitry. Cleverly, it has created organisms that in turn invented a computational technology a million times faster than carbon-based neurons (which are continuing to get yet faster). Ultimately, the computing conducted on extremely slow mammalian neural circuits will be ported to a far more versatile and speedier electronic (and photonic) equivalent.

  When will this happen? Let’s take another look at the Law of Accelerating Returns as applied to computation.

  Achieving the Hardware Capacity of the Human Brain

  In the chapter 1 chart, “The Exponential Growth of Computing, 1900-1998,” we saw that the slope of the curve representing exponential growth was itself gradually increasing. Computer speed (as measured in calculations per second per thousand dollars) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year. This suggests possible exponential growth in the rate of exponential growth.1

  This apparent acceleration in the acceleration may result, however, from the confounding of the two strands of the Law of Accelerating Returns, which for the past forty years has expressed itself using the Moore’s Law paradigm of shrinking transistor sizes on an integrated circuit. As transistor die sizes decrease, the electrons streaming through the transistor have less distance to travel, hence the switching speed of the transistor increases. So exponentially improving speed is the first strand. Reduced transistor die sizes also enable chip manufacturers to squeeze a greater number of transistors onto an integrated circuit, so exponentially improving densities of computation is the second strand.

  In the early years of the computer age, it was primarily the first strand—increasing circuit speeds—that improved the overall computation rate of computers. During the 1990s, however, advanced microprocessors began using a form of parallel processing called pipelining, in which multiple calculations were performed at the same time (some mainframes going back to the 1970s used this technique). Thus the speed of computer processors as measured in instructions per second now also reflects the second strand: greater densities of computation resulting from the use of parallel processing.

  As we are approaching more perfect harnessing of the improving density of computation, processor speeds are now effectively doubling every twelve months. This is fully feasible today when we build hardware-based neural nets because neural net processors are relatively simple and highly parallel. Here we create a processor for each neuron and eventually one for each interneuronal connection. Moore’s Law thereby enables us to double both the number of processors as well as their speed every two years, an effective quadrupling of the number of interneuronal-connection calculations per second.

  This apparent acceleration in the acceleration of computer speeds may result, therefore, from an improving ability to benefit from both strands of the Law of Accelerating Returns. When Moore’s Law dies by the year 2020, new forms of circuitry beyond integrated circuits will continue both strands of exponential improvement. But ordinary exponential growth—two strands of it—is dramatic enough. Using the more conservative prediction of just one level of acceleration as our guide, let’s consider where the Law of Accelerating Returns will take us in the twenty-first century.

  The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation. That’s rather massive parallel processing, and one key to the strength of human thinking. A profound weakness, however, is the excruciatingly slow speed of neural circuitry, only 200 calculations per second. For problems that benefit from massive parallelism, such as neural-net-based pattern recognition, the human brain does a great job. For problems that require extensive sequential thinking, the human brain is only mediocre.

  With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate; other estimates are lower by one to three orders of magnitude. So when will we see the computing speed of the human brain in your personal computer?
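  As a quick check of the arithmetic in this paragraph (using the estimates stated in the text, not measured values), a few lines of Python reproduce the 20 million billion figure:

```python
# The arithmetic behind the 20 million billion figure.
neurons = 100e9                   # about 100 billion neurons
connections_per_neuron = 1_000    # about 1,000 connections each
calcs_per_connection = 200        # about 200 calculations per second

connections = neurons * connections_per_neuron            # 1e14 = 100 trillion
brain_calcs_per_sec = connections * calcs_per_connection  # 2e16
print(f"{brain_calcs_per_sec:.0e}")   # 2e+16, i.e. 20 million billion per second
```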

  The answer depends on the type of computer we are trying to build. The most relevant is a massively parallel neural net computer. In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion connection calculations per second. Since neural net emulations benefit from both strands of the acceleration of computational power, this capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.
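  The projection can be checked the same way; the starting point and doubling rate below are the ones stated in the text:

```python
# Roughly 2 billion connection calculations per second for $2,000 of
# neural-net chips in 1997, doubling every twelve months through 2020.
base_rate_1997 = 2e9                 # connection calculations per second
doublings = 2020 - 1997              # 23 yearly doublings
rate_2020 = base_rate_1997 * 2 ** doublings
print(f"{rate_2020:.1e}")            # ~1.7e16, roughly the 2e16 brain estimate
```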

  If we apply the same analysis to an “ordinary” personal computer, we get the year 2025 to achieve human brain capacity in a $1,000 device.2 This is because the general-purpose type of computations that a conventional personal computer is designed for are inherently more expensive than the simpler, highly repetitive neural-connection calculations. Thus I believe that the 2020 estimate is more accurate because by 2020, most of the computations performed in our computers will be of the neural-connection type.

  The memory capacity of the human brain is about 100 trillion synapse strengths (neurotransmitter concentrations at interneuronal connections), which we can estimate at about a million billion bits. In 1998, a billion bits of RAM (128 megabytes) cost about $200. The capacity of memory circuits has been doubling every eighteen months. Thus by the year 2023, a million billion bits will cost about $1,000.3 However, this silicon equivalent will run more than a billion times faster than the human brain. There are techniques for trading off memory for speed, so we can effectively match human memory for $1,000 sooner than 2023.
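  A rough check of the memory projection, again using only the figures stated in the text, lands within a factor of two of the $1,000 target for 2023, close enough given the roughness of the estimates:

```python
# Memory-cost projection: ~1e15 bits to match the brain's synapse strengths.
bits_needed = 1e15
cost_per_billion_bits_1998 = 200       # $200 for a billion bits of RAM in 1998
doubling_period_years = 1.5            # capacity per dollar doubles every 18 months

cost_1998 = bits_needed / 1e9 * cost_per_billion_bits_1998   # $200 million in 1998
improvement = 2 ** ((2023 - 1998) / doubling_period_years)   # ~1e5 by 2023
print(f"${cost_1998 / improvement:,.0f}")                    # on the order of $1,000-$2,000
```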

  [Chart: The Exponential Growth of Computing, 1900-2100]

  Taking all of this into consideration, it is reasonable to estimate that a $1,000 personal computer will match the computing speed and capacity of the human brain by around the year 2020, particularly for the neuron-connection calculation, which appears to comprise the bulk of the computation in the human brain. Supercomputers are one thousand to ten thousand times faster than personal computers. As this book is being written, IBM is building a supercomputer based on the design of Deep Blue, its silicon chess champion, capable of 10 teraflops (that is, 10 trillion calculations per second), only 2,000 times slower than the human brain. Japan’s Nippon Electric Company hopes to beat that with a 32-teraflop machine. IBM then hopes to follow that with 100 teraflops by around the year 2004 (just what Moore’s Law predicts, by the way). Supercomputers will reach the 20 million billion calculations per second capacity of the human brain around 2010, a decade earlier than personal computers.4
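  The supercomputer comparison works out as follows, a sketch that reuses the 2e16 brain estimate from above:

```python
import math

brain = 2e16                    # calculations per second, estimated above
deep_blue_successor = 10e12     # 10 teraflops
print(brain / deep_blue_successor)                         # 2000.0 times slower

# Doubling yearly from 10 teraflops around 1999 takes about 11 doublings
# to reach 2e16, i.e. roughly the year 2010, as the text projects.
print(math.ceil(math.log2(brain / deep_blue_successor)))   # 11
```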

  In another approach, projects such as Sun Microsystems’ Jini program have been initiated to harvest the unused computation on the Internet. Note that at any particular moment, the significant majority of the computers on the Internet are not being used. Even those that are being used are not being used to capacity (for example, typing text uses less than one percent of a typical notebook computer’s computing capacity). Under the Internet computation harvesting proposals, cooperating sites would load special software that would enable a virtual massively parallel computer to be created out of the computers on the network. Each user would still have priority over his or her own machine, but in the background, a significant fraction of the millions of computers on the Internet would be harvested into one or more supercomputers. The amount of unused computation on the Internet today exceeds the computational capacity of the human brain, so we already have available in at least one form the hardware side of human intelligence. And with the continuation of the Law of Accelerating Returns, this availability will become increasingly ubiquitous.
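  The harvesting idea can be illustrated with a short sketch: idle machines pull work units from a shared queue and return results. This is an illustration of the concept only, with invented names; it is not Sun’s Jini API or any real protocol.

```python
# Conceptual sketch of harvesting idle computation across a network.
from queue import Queue

def harvest(work_items, machines):
    """Split a big job across volunteer machines that are otherwise idle."""
    jobs, results = Queue(), []
    for item in work_items:
        jobs.put(item)
    while not jobs.empty():
        for machine in machines:
            if jobs.empty():
                break
            if machine["idle"]:            # only use spare capacity
                results.append(machine["run"](jobs.get()))
    return results

machines = [{"idle": True, "run": lambda x: x * x} for _ in range(3)]
print(harvest(range(6), machines))         # [0, 1, 4, 9, 16, 25]
```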

  After human capacity in a $1,000 personal computer is achieved around the year 2020, our thinking machines will improve the cost performance of their computing by a factor of two every twelve months. That means that the capacity of computing will double ten times every decade, which is a factor of one thousand (2¹⁰) every ten years. So your personal computer will be able to simulate the brain power of a small village by the year 2030, the entire population of the United States by 2048, and a trillion human brains by 2060.5 If we estimate the human Earth population at 10 billion persons, one penny’s worth of computing circa 2099 will have a billion times greater computing capacity than all humans on Earth.6
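  The decade-by-decade projection follows directly from yearly doubling, since ten doublings give a factor of 2¹⁰ = 1024, about a thousand:

```python
# Long-range projection: one human brain equivalent per $1,000 in 2020,
# doubling every year thereafter.
for year in (2030, 2048, 2060):
    brain_equivalents = 2 ** (year - 2020)
    print(year, f"{brain_equivalents:.1e}")
# 2030 -> 1.0e+03  (a small village)
# 2048 -> 2.7e+08  (roughly the population of the United States)
# 2060 -> 1.1e+12  (about a trillion human brains)
```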

  Of course I may be off by a year or two. But computers in the twenty-first century will not be wanting for computing capacity or memory.

  Computing Substrates in the Twenty-First Century

  I’ve noted that the continued exponential growth of computing is implied by the Law of Accelerating Returns, which states that any process that moves toward greater order—evolution in particular—will exponentially speed up its pace as time passes. The two resources that the exploding pace of an evolutionary process—such as the progression of computer technology—requires are (1) its own increasing order, and (2) the chaos in the environment in which it takes place. Both of these resources are essentially without limit.

  Although we can anticipate the overall acceleration in technological progress, one might still expect that the actual manifestation of this progression would still be somewhat irregular. After all, it depends on such variable phenomena as individual innovation, business conditions, investment patterns, and the like. Contemporary theories of evolutionary processes, such as the Punctuated Equilibrium theories,7 posit that evolution works by periodic leaps or discontinuities followed by periods of relative stability. It is thus remarkable how predictable computer progress has been.

  So, how will the Law of Accelerating Returns as applied to computation roll out in the decades beyond the demise of Moore’s Law on Integrated Circuits by the year 2020? For the immediate future, Moore’s Law will continue with ever smaller component geometries packing greater numbers of yet faster transistors on each chip. But as circuit dimensions reach near atomic sizes, undesirable quantum effects such as unwanted electron tunneling will produce unreliable results. Nonetheless, Moore’s standard methodology will get very close to human processing power in a personal computer and beyond that in a supercomputer.

  The next frontier is the third dimension. Already, venture-backed companies (mostly California-based) are competing to build chips with dozens and ultimately thousands of layers of circuitry. With names like Cubic Memory, Dense-Pac, and Staktek, these companies are already shipping functional three-dimensional “cubes” of circuitry. Although not yet cost competitive with the customary flat chips, the third dimension will be there when we run out of space in the first two.8

  Computing with Light

  Beyond that, there is no shortage of exotic computing technologies being developed in research labs, many of which have already demonstrated promising results. Optical computing uses streams of photons (particles of light) rather than electrons. A laser can produce billions of coherent streams of photons, with each stream performing its own independent series of calculations. The calculations on each stream are performed in parallel by special optical elements such as lenses, mirrors, and diffraction gratings. Several companies, including Quanta-Image, Photonics, and Mytec Technologies, have applied optical computing to the recognition of fingerprints. Lockheed has applied optical computing to the automatic identification of malignant breast lesions.9

 
