The Singularity Is Near: When Humans Transcend Biology


by Ray Kurzweil


  RAY: I think you’ll settle somewhere in your thirties and stay there for a while.

  MOLLY 2004: Thirties sounds pretty good. I think a slightly more mature age than twenty-five is a good idea anyway. But what do you mean “for a while”?

  RAY: Stopping and reversing aging is only the beginning. Using nanobots for health and longevity is just the early adoption phase of introducing nanotechnology and intelligent computation into our bodies and brains. The more profound implication is that we’ll augment our thinking processes with nanobots that communicate with one another and with our biological neurons. Once nonbiological intelligence gets a foothold, so to speak, in our brains, it will be subject to the law of accelerating returns and expand exponentially. Our biological thinking, on the other hand, is basically stuck.

  MOLLY 2004: There you go again with things accelerating, but when this really gets going, thinking with biological neurons will be pretty trivial in comparison.

  RAY: That’s a fair statement.

  MOLLY 2004: So, Miss Molly of the future, when did I drop my biological body and brain?

  MOLLY 2104: Well, you don’t really want me to spell out your future, do you? And anyway it’s actually not a straightforward question.

  MOLLY 2004: How’s that?

  MOLLY 2104: In the 2040s we developed the means to instantly create new portions of ourselves, either biological or nonbiological. It became apparent that our true nature was a pattern of information, but we still needed to manifest ourselves in some physical form. However, we could quickly change that physical form.

  MOLLY 2004: By?

  MOLLY 2104: By applying new high-speed MNT (molecular nanotechnology) manufacturing, we could readily and rapidly redesign our physical instantiation. So I could have a biological body at one time and not at another, then have it again, then change it, and so on.

  MOLLY 2004: I think I’m following this.

  MOLLY 2104: The point is that I could have my biological brain and/or body or not have it. It’s not a matter of dropping anything, because we can always get back something we drop.

  MOLLY 2004: So you’re still doing this?

  MOLLY 2104: Some people still do this, but now in 2104 it’s a bit anachronistic. I mean, the simulations of biology are totally indistinguishable from actual biology, so why bother with physical instantiations?

  MOLLY 2004: Yeah, it’s messy isn’t it?

  MOLLY 2104: I’ll say.

  MOLLY 2004: I do have to say that it seems strange to be able to change your physical embodiment. I mean, where’s your—my—continuity?

  MOLLY 2104: It’s the same as your continuity in 2004. You’re changing your particles all the time also. It’s just your pattern of information that has continuity.

  MOLLY 2004: But in 2104 you’re able to change your pattern of information quickly also. I can’t do that yet.

  MOLLY 2104: It’s really not that different. You change your pattern—your memory, skills, experiences, even personality over time—but there is a continuity, a core that changes only gradually.

  MOLLY 2004: But I thought you could change your appearance and personality dramatically in an instant?

  MOLLY 2104: Yes, but that’s just a surface manifestation. My true core changes only gradually, just like when I was you in 2004.

  MOLLY 2004: Well, there are lots of times when I’d be delighted to instantly change my surface appearance.

  Robotics: Strong AI

  Consider another argument put forth by Turing. So far we have constructed only fairly simple and predictable artifacts. When we increase the complexity of our machines, there may, perhaps, be surprises in store for us. He draws a parallel with a fission pile. Below a certain “critical” size, nothing much happens: but above the critical size, the sparks begin to fly. So too, perhaps, with brains and machines. Most brains and all machines are, at present, “sub-critical”—they react to incoming stimuli in a stodgy and uninteresting way, have no ideas of their own, can produce only stock responses—but a few brains at present, and possibly some machines in the future, are super-critical, and scintillate on their own account. Turing is suggesting that it is only a matter of complexity, and that above a certain level of complexity a qualitative difference appears, so that “super-critical” machines will be quite unlike the simple ones hitherto envisaged.

  —J. R. LUCAS, OXFORD PHILOSOPHER, IN HIS 1961 ESSAY “MINDS, MACHINES, AND GÖDEL” 157

  Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is a competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment—there is no end to the list of consumer-benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobics could plausibly argue “hither but not further.”

  —NICK BOSTROM, “HOW LONG BEFORE SUPERINTELLIGENCE?” 1997

  It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.

  —NICK BOSTROM, “ETHICAL ISSUES IN ADVANCED ARTIFICIAL INTELLIGENCE,” 2003

  Will robots inherit the earth? Yes, but they will be our children.

  —MARVIN MINSKY, 1995

  Of the three primary revolutions underlying the Singularity (G, N, and R), the most profound is R, which refers to the creation of nonbiological intelligence that exceeds that of unenhanced humans. A more intelligent process will inherently outcompete one that is less intelligent, making intelligence the most powerful force in the universe.

  While the R in GNR stands for robotics, the real issue involved here is strong AI (artificial intelligence that exceeds human intelligence). The standard reason for emphasizing robotics in this formulation is that intelligence needs an embodiment, a physical presence, to affect the world. I disagree with the emphasis on physical presence, however, for I believe that the central concern is intelligence. Intelligence will inherently find a way to influence the world, including creating its own means for embodiment and physical manipulation. Furthermore, we can include physical skills as a fundamental part of intelligence; a large portion of the human brain (the cerebellum, comprising more than half our neurons), for example, is devoted to coordinating our skills and muscles.

  Artificial intelligence at human levels will necessarily greatly exceed human intelligence for several reasons. As I pointed out earlier, machines can readily share their knowledge. As unenhanced humans we do not have the means of sharing the vast patterns of interneuronal connections and neurotransmitter-concentration levels that comprise our learning, knowledge, and skills, other than through slow, language-based communication. Of course, even this method of communication has been very beneficial, as it has distinguished us from other animals and has been an enabling factor in the creation of technology.

  Human skills are able to develop only in ways that have been evolutionarily encouraged. Those skills, which are primarily based on massively parallel pattern recognition, provide proficiency for certain tasks, such as distinguishing faces, identifying objects, and recognizing language sounds. But they’re not suited for many others, such as determining patterns in financial data. Once we fully master pattern-recognition paradigms, machine methods can apply these techniques to any type of pattern.158

  Machines can pool their resources in ways that humans cannot. Although teams of humans can accomplish both physical and mental feats that individual humans cannot achieve, machines can more easily and readily aggregate their computational, memory, and communications resources. As discussed earlier, the Internet is evolving into a worldwide grid of computing resources that can instantly be brought together to form massive supercomputers.

  Machines have exacting memories. Contemporary computers can master billions of facts accurately, a capability that is doubling every year.159 The underlying speed and price-performance of computing itself is doubling every year, and the rate of doubling is itself accelerating.
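  To make that compounding concrete, here is a minimal Python sketch (the 5 percent shrink rate and the other parameters are my own illustrative assumptions, not figures from the text): each doubling of price-performance arrives a little sooner than the last, which is what it means for the rate of doubling itself to accelerate.

    # Toy model of accelerating doublings (illustrative assumptions only):
    # price-performance doubles once per period, and each period is 5%
    # shorter than the one before, so the doubling rate itself speeds up.
    def growth_multiple(years, first_period=1.0, shrink=0.95,
                        max_doublings=200):
        """Growth multiple reached after `years` years."""
        multiple, elapsed, period = 1.0, 0.0, first_period
        for _ in range(max_doublings):
            if elapsed + period > years:
                break
            multiple *= 2.0      # one full doubling completed
            elapsed += period
            period *= shrink     # the next doubling arrives sooner
        return multiple

    for y in (5, 10, 15):
        print(f"after {y:2d} years: ~{growth_multiple(y):,.0f}x")
    # Prints roughly 32x, 8,192x, and 134,217,728x: early on this looks
    # like simple doubling, but the shrinking periods eventually dominate.

  Under these assumed numbers the periods form a geometric series summing to twenty years, so the toy model packs unboundedly many doublings into a finite span; that is the flavor of growth the law of accelerating returns describes.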

  As human knowledge migrates to the Web, machines will be able to read, understand, and synthesize all human-machine information. The last time a biological human was able to grasp all human scientific knowledge was hundreds of years ago.

  Another advantage of machine intelligence is that it can consistently perform at peak levels and can combine peak skills. Among humans one person may have mastered music composition, while another may have mastered transistor design, but given the fixed architecture of our brains we do not have the capacity (or the time) to develop and utilize the highest level of skill in every increasingly specialized area. Humans also vary a great deal in a particular skill, so that when we speak, say, of human levels of composing music, do we mean Beethoven, or do we mean the average person? Nonbiological intelligence will be able to match and exceed peak human skills in each area.

  For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it and then continue its double-exponential ascent.

  A key question regarding the Singularity is whether the “chicken” (strong AI) or the “egg” (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology.

  The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation. Likewise the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby achieve the completion of reverse engineering the human brain.

  Both premises are logical; it’s clear that either technology can assist the other. The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other. However, I do expect that full MNT will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI).

  As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled.

  Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI but takes less time than the cycle before it, as is the nature of technological evolution (or any evolutionary process). The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence.160
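  The arithmetic of that premise can be sketched in a few lines of Python (a toy model whose parameters I have chosen purely for illustration: each redesign cycle multiplies capability by 1.5 and takes 80 percent as long as the cycle before it):

    # Toy model of runaway self-improvement (illustrative parameters only).
    def runaway(cycles, gain=1.5, speedup=0.8, first_cycle_years=2.0):
        capability, elapsed, duration = 1.0, 0.0, first_cycle_years
        for i in range(1, cycles + 1):
            elapsed += duration
            capability *= gain   # each cycle yields a more capable successor...
            duration *= speedup  # ...which completes the next cycle faster
            print(f"cycle {i:2d}: {capability:8.1f}x after {elapsed:5.2f} years")

    runaway(10)

  Because the cycle times form a geometric series (here totaling less than 2 / (1 - 0.8) = 10 years), capability grows without bound inside a bounded span of time. That is the formal shape of the runaway claim.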

  My own view is only slightly different. The logic of runaway AI is valid, but we still need to consider the timing. Achieving human levels in a machine will not immediately cause a runaway phenomenon. Consider that a human level of intelligence has limitations. We have examples of this today—about six billion of them. Consider a scenario in which you took one hundred humans from, say, a shopping mall. This group would constitute examples of reasonably well-educated humans. Yet if this group were presented with the task of improving human intelligence, it wouldn’t get very far, even if provided with the templates of human intelligence. It would probably have a hard time creating a simple computer. Speeding up the thinking and expanding the memory capacities of these one hundred humans would not immediately solve this problem.

  I pointed out above that machines will match (and quickly exceed) peak human skills in each area of skill. So instead, let’s take one hundred scientists and engineers. A group of technically trained people with the right backgrounds would be capable of improving accessible designs. If a machine attained equivalence to one hundred (and eventually one thousand, then one million) technically trained humans, each operating much faster than a biological human, a rapid acceleration of intelligence would ultimately follow.

  However, this acceleration won’t happen immediately when a computer passes the Turing test. The Turing test is comparable to matching the capabilities of an average, educated human and thus is closer to the example of humans from a shopping mall. It will take time for computers to master all of the requisite skills and to marry these skills with all the necessary knowledge bases.

  Once we’ve succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period will be an era of consolidation in which nonbiological intelligence will make rapid gains. However, the extraordinary expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won’t take place until the mid-2040s (as discussed in chapter 3).

  The AI Winter

  There’s this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the day. People just don’t notice it. You’ve got A.I. systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you use a piece of Microsoft software, you’ve got an A.I. system trying to figure out what you’re doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they’re all little A.I. characters behaving as a group. Every time you play a video game, you’re playing against an A.I. system.

  —RODNEY BROOKS, DIRECTOR OF THE MIT AI LAB161

  I still run into people who claim that artificial intelligence withered in the 1980s, an argument that is comparable to insisting that the Internet died in the dot-com bust of the early 2000s.162 The bandwidth and price-performance of Internet technologies, the number of nodes (servers), and the dollar volume of e-commerce all accelerated smoothly through the boom as well as the bust and the period since. The same has been true for AI.

  The technology hype cycle for a paradigm shift—railroads, AI, Internet, telecommunications, possibly now nanotechnology—typically starts with a period of unrealistic expectations based on a lack of understanding of all the enabling factors required. Although utilization of the new paradigm does increase exponentially, early growth is slow until the knee of the exponential-growth curve is realized. While the widespread expectations for revolutionary change are accurate, they are incorrectly timed. When the prospects do not quickly pan out, a period of disillusionment sets in. Nevertheless exponential growth continues unabated, and years later a more mature and more realistic transformation does occur.

  We saw this in the railroad frenzy of the nineteenth century, which was followed by widespread bankruptcies. (I have some of these early unpaid railroad bonds in my collection of historical documents.) And we are still feeling the effects of the e-commerce and telecommunications busts of several years ago, which helped fuel a recession from which we are now recovering.
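  The timing mismatch at the heart of this cycle is easy to see numerically. A minimal Python sketch (the starting share and doubling rate are invented for illustration) shows why an accurately predicted exponential still disappoints its early backers:

    # Why exponentials disappoint early (illustrative numbers only): adoption
    # that doubles yearly from a 0.01% base looks negligible for a decade,
    # then sweeps the whole market in a few final doublings.
    adoption = 0.0001                      # 0.01% of the potential market
    for year in range(1, 15):
        adoption = min(1.0, adoption * 2)  # doubles yearly, capped at 100%
        print(f"year {year:2d}: {adoption:7.2%}")
    # Through year 9 adoption stays at or below about 5%, the stretch where
    # disillusionment sets in; the knee arrives around year 10, and
    # saturation follows within four more years.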

  AI experienced a similar premature optimism in the wake of programs such as the 1957 General Problem Solver created by Allen Newell, J. C. Shaw, and Herbert Simon, which was able to find proofs for theorems that had stumped mathematicians such as Bertrand Russell, and early programs from the MIT Artificial Intelligence Laboratory, which could answer SAT questions (such as analogies and story problems) at the level of college students.163 A rash of AI companies sprang up in the 1970s, but when profits did not materialize there was an AI “bust” in the 1980s, which has become known as the “AI winter.” Many observers still think that the AI winter was the end of the story and that nothing has since come of the AI field.

  Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry. Most of these applications were research projects ten to fifteen years ago. People who ask, “Whatever happened to AI?” remind me of travelers to the rain forest who wonder, “Where are all the many species that are supposed to live here?” when hundreds of species of flora and fauna are flourishing only a few dozen meters away, deeply integrated into the local ecology.

  We are well into the era of “narrow AI,” which refers to artificial intelligence that performs a useful and specific function that once required human intelligence to perform, and does so at human levels or better. Often narrow AI systems greatly exceed the speed of humans, as well as provide the ability to manage and consider thousands of variables simultaneously. I describe a broad variety of narrow AI examples below.

 
