The Singularity Is Near: When Humans Transcend Biology


by Ray Kurzweil


  MOLLY 2004: Just the human knowledge? What about all the machine knowledge?

  GEORGE 2048: We like to think of it as one civilization.

  CHARLES: So, it does appear that machines will be able to improve their own design.

  MOLLY 2004: Oh, we humans are starting to do that now.

  RAY: But we’re just tinkering with a few details. Inherently, DNA-based intelligence is just so very slow and limited.

  CHARLES: So the machines will design their own next generation rather quickly.

  GEORGE 2048: Indeed, in 2048, that is certainly the case.

  CHARLES: Just what I was getting at, a new line of evolution then.

  NED: Sounds more like a precarious runaway phenomenon.

  CHARLES: Basically, that’s what evolution is.

  NED: But what of the interaction of the machines with their progenitors? I mean, I don’t think I’d want to get in their way. I was able to hide from the English authorities for a few years in the early 1800s, but I suspect that will be more difficult with these . . .

  GEORGE 2048: Guys.

  MOLLY 2004: Hiding from those little robots—

  RAY: Nanobots, you mean.

  MOLLY 2004: Yes, hiding from the nanobots will be difficult, for sure.

  RAY: I would expect the intelligence that arises from the Singularity to have great respect for their biological heritage.

  GEORGE 2048: Absolutely, it’s more than respect, it’s . . . reverence.

  MOLLY 2004: That’s great, George, I’ll be your revered pet. Not what I had in mind.

  NED: That’s just how Ted Kaczynski puts it: we’re going to become pets. That’s our destiny, to become contented pets but certainly not free men.

  MOLLY 2004: And what about this Epoch Six? If I stay biological, I’ll be using up all this precious matter and energy in a most inefficient way. You’ll want to turn me into, like, a billion virtual Mollys and Georges, each of them thinking a lot faster than I do now. Seems like there will be a lot of pressure to go over to the other side.

  RAY: Still, you represent only a tiny fraction of the available matter and energy. Keeping you biological won’t appreciably change the order of magnitude of matter and energy available to the Singularity. It will be well worth it to maintain the biological heritage.

  GEORGE 2048: Absolutely.

  RAY: Just like today we seek to preserve the rain forest and the diversity of species.

  MOLLY 2004: That’s just what I was afraid of. I mean, we’re doing such a wonderful job with the rain forest. I think we still have a little bit of it left. We’ll end up like those endangered species.

  NED: Or extinct ones.

  MOLLY 2004: And there’s not just me. How about all the stuff I use? I go through a lot of stuff.

  GEORGE 2048: That’s not a problem, we’ll just recycle all your stuff. We’ll create the environments you need as you need them.

  MOLLY 2004: Oh, I’ll be in virtual reality?

  RAY: No, actually, foglet reality.

  MOLLY 2004: I’ll be in a fog?

  RAY: No, no, foglets.

  MOLLY 2004: Excuse me?

  RAY: I’ll explain later in the book.

  MOLLY 2004: Well, give me a hint.

  RAY: Foglets are nanobots—robots the size of blood cells—that can connect themselves to replicate any physical structure. Moreover, they can direct visual and auditory information in such a way as to bring the morphing qualities of virtual reality into real reality.38

  MOLLY 2004: I’m sorry I asked. But, as I think about it, I want more than just my stuff. I want all the animals and plants, too. Even if I don’t get to see and touch them all, I like to know they’re there.

  GEORGE 2048: But nothing will be lost.

  MOLLY 2004: I know you keep saying that. But I mean actually there—you know, as in biological reality.

  RAY: Actually, the entire biosphere is less than one millionth of the matter and energy in the solar system.

  CHARLES: It includes a lot of the carbon.

  RAY: It’s still worth keeping all of it to make sure we haven’t lost anything.

  GEORGE 2048: That has been the consensus for at least several years now.

  MOLLY 2004: So, basically, I’ll have everything I need at my fingertips?

  GEORGE 2048: Indeed.

  MOLLY 2004: Sounds like King Midas. You know, everything he touched turned to gold.

  NED: Yes, and as you will recall he died of starvation as a result.

  MOLLY 2004: Well, if I do end up going over to the other side, with all of that vast expanse of subjective time, I think I’ll die of boredom.

  GEORGE 2048: Oh, that could never happen. I will make sure of it.

  CHAPTER TWO

  * * *

  A Theory of Technology Evolution

  The Law of Accelerating Returns

  The further backward you look, the further forward you can see.

  —WINSTON CHURCHILL

  Two billion years ago, our ancestors were microbes; a half-billion years ago, fish; a hundred million years ago, something like mice; ten million years ago, arboreal apes; and a million years ago, proto-humans puzzling out the taming of fire. Our evolutionary lineage is marked by mastery of change. In our time, the pace is quickening.

  —CARL SAGAN

  Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve. . . . [T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.

  —ELIEZER S. YUDKOWSKY, STARING INTO THE SINGULARITY, 1996

  “The future can’t be predicted,” is a common refrain. . . . But . . . when [this perspective] is wrong, it is profoundly wrong.

  —JOHN SMART1

  The ongoing acceleration of technology is the implication and inevitable result of what I call the law of accelerating returns, which describes the acceleration of the pace of and the exponential growth of the products of an evolutionary process. These products include, in particular, information-bearing technologies such as computation, and their acceleration extends substantially beyond the predictions made by what has become known as Moore’s Law. The Singularity is the inexorable result of the law of accelerating returns, so it is important that we examine the nature of this evolutionary process.

  The Nature of Order. The previous chapter featured several graphs demonstrating the acceleration of paradigm shift. (Paradigm shifts are major changes in methods and intellectual processes to accomplish tasks; examples include written language and the computer.) The graphs plotted what fifteen thinkers and reference works regarded as the key events in biological and technological evolution from the Big Bang to the Internet. We see some expected variation, but an unmistakable exponential trend: key events have been occurring at an ever-hastening pace.

  The criteria for what constituted “key events” varied from one thinker’s list to another. But it’s worth considering the principles they used in making their selections. Some observers have judged that the truly epochal advances in the history of biology and technology have involved increases in complexity.2 Although increased complexity does appear to follow advances in both biological and technological evolution, I believe that this observation is not precisely correct. But let’s first examine what complexity means.

  Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of information required to represent a process. Let’s say you have a design for a system (for example, a computer program or a computer-assisted design file for a computer), which can be described by a data file containing one million bits. We could say your design has a complexity of one million bits. But suppose we notice that the one million bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing the size of the file by a factor of about one thousand.
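  To make that arithmetic concrete, here is a minimal Python sketch. The 1,000-bit pattern is invented for illustration; the point is only that storing the pattern once, together with a repeat count, recovers the full million bits exactly.

```python
# A hypothetical million-bit design file: a 1,000-bit pattern
# repeated 1,000 times (the pattern's content is arbitrary).
pattern = "10110" * 200            # 1,000 bits
design = pattern * 1000            # 1,000,000 bits

# Store the pattern once plus a repeat count instead of all million bits.
compact = (pattern, 1000)

# Nothing is lost: the original file is recoverable exactly.
assert compact[0] * compact[1] == design

print(len(design))        # 1000000
print(len(compact[0]))    # 1000, roughly a thousandfold reduction
```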

  The most popular data-compression techniques use similar methods of finding redundancy within information.3 But after you’ve compressed a data file in this way, can you be absolutely certain that there are no other rules or methods that might be discovered that would enable you to express the file in even more compact terms? For example, suppose my file was simply “pi” (3.1415 . . .) expressed to one million bits of precision. Most data-compression programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness.
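  The failure is easy to observe. In the sketch below, pseudo-random bytes from Python's os.urandom stand in for the bits of pi, since to a redundancy-hunting compressor such as zlib the two look alike; the exact sizes are illustrative.

```python
import os
import zlib

# One million bits = 125,000 bytes of effectively random data,
# standing in for a binary expansion of pi.
data = os.urandom(125_000)

compressed = zlib.compress(data, 9)

# With no repeated patterns to exploit, the "compressed" output is
# as large as the input, or slightly larger due to format overhead.
print(len(data), len(compressed))
```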

  But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that portion of it) very compactly as “pi to one million bits of accuracy.” Since we can never be sure that we have not overlooked some even more compact representation of an information sequence, any amount of compression sets only an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along these lines. He defines the “algorithmic information content” (AIC) of a set of information as “the length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt.”4
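  In standard notation (a formalization I am supplying here, not one from the text), AIC is what theorists call Kolmogorov complexity relative to a universal computer U, and compression can only ever bound it from above:

```latex
\mathrm{AIC}(s) = \min \{\, |p| : U(p) \text{ prints } s \text{ and halts} \,\},
\qquad
\mathrm{AIC}(s) \le |\mathrm{compress}(s)| + c
```

  The constant c covers the fixed cost of the decompressor: any compressed file, bundled with its decompressor, is itself a program that prints s and halts.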

  However, Gell-Mann’s concept is not fully adequate. If we have a file with random information, it cannot be compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random. However, if any random sequence will do for a particular design, then this information can be characterized by a simple instruction, such as “put random sequence of numbers here.” So the random sequence, whether it’s ten bits or one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple instruction. This is the difference between a random sequence and an unpredictable sequence of information that has purpose.

  To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of every atom in the rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10^25 atoms which, as I will discuss in the next chapter, can hold up to 10^27 bits of information. That’s one hundred million billion times more information than the genetic code of a human (even without compressing the genetic code).5 But for most common purposes, the bulk of this information is largely random and of little consequence. So we can characterize the rock for most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus, it is reasonable to consider the complexity of an ordinary rock to be far less than that of a human even though the rock theoretically contains vast amounts of information.6
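  As a rough check on that comparison (assuming, for illustration, an uncompressed human genome of about 8 × 10^9 bits, on the order of 800 million bytes):

```latex
\frac{10^{27}\ \text{bits (rock)}}{8 \times 10^{9}\ \text{bits (genome)}}
\approx 10^{17} = 10^{8} \times 10^{9}
```

  That is, one hundred million times a billion, which matches the “one hundred million billion” figure above.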

  One concept of complexity is the minimum amount of meaningful, non-random, but unpredictable information needed to characterize a system or process.

  In Gell-Mann’s concept, the AIC of a million-bit random string would be about a million bits long. So I am adding to Gell-Mann’s AIC concept the idea of replacing each random string with a simple instruction to “put random bits” here.

  However, even this is not sufficient. Another issue is raised by strings of arbitrary data, such as names and phone numbers in a phone book, or periodic measurements of radiation levels or temperature. Such data is not random, and data-compression methods will only succeed in reducing it to a small degree. Yet it does not represent complexity as that term is generally understood. It is just data. So we need another simple instruction to “put arbitrary data sequence” here.

  To summarize my proposed measure of the complexity of a set of information, we first consider its AIC as Gell-Mann has defined it. We then replace each random string with a simple instruction to insert a random string. We then do the same for arbitrary data strings. Now we have a measure of complexity that reasonably matches our intuition.
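  No such measure is computable in general, since deciding whether a string is truly random is itself undecidable, but a toy sketch conveys the three-step structure. The chunking scheme, the thresholds, and the use of zlib's compressed length as a stand-in for AIC are all my own illustration, not a method specified in the text:

```python
import zlib

RANDOM_TOKEN = b"<random>"        # stands for "put random bits here"
ARBITRARY_TOKEN = b"<arbitrary>"  # stands for "put arbitrary data here"

def looks_random(chunk: bytes) -> bool:
    # Crude stand-in for a randomness test: chunks that zlib cannot
    # shrink at all are treated as random. Real tests are far subtler.
    return len(zlib.compress(chunk, 9)) >= len(chunk)

def looks_arbitrary(chunk: bytes) -> bool:
    # Crude stand-in for "just data" (names, phone numbers, readings):
    # non-random, yet only slightly compressible.
    c = len(zlib.compress(chunk, 9))
    return 0.8 * len(chunk) <= c < len(chunk)

def complexity(data: bytes, chunk_size: int = 1024) -> int:
    # Step 1 of the proposed measure is AIC; compressed length serves
    # here as a computable proxy. Steps 2 and 3 replace random and
    # arbitrary chunks with one-token placeholders before compressing.
    pieces = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        if looks_random(chunk):
            pieces.append(RANDOM_TOKEN)
        elif looks_arbitrary(chunk):
            pieces.append(ARBITRARY_TOKEN)
        else:
            pieces.append(chunk)
    return len(zlib.compress(b"".join(pieces), 9))
```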

  It is a fair observation that paradigm shifts in an evolutionary process such as biology—and its continuation through technology—each represent an increase in complexity, as I have defined it above. For example, the evolution of DNA allowed for more complex organisms, whose biological information processes could be controlled by the DNA molecule’s flexible data storage. The Cambrian explosion provided a stable set of animal body plans (in DNA), so that the evolutionary process could concentrate on more complex cerebral development. In technology, the invention of the computer provided a means for human civilization to store and manipulate ever more complex sets of information. The extensive interconnectedness of the Internet provides for even greater complexity.

  “Increasing complexity” on its own is not, however, the ultimate goal or end-product of these evolutionary processes. Evolution results in better answers, not necessarily more complicated ones. Sometimes a superior solution is a simpler one. So let’s consider another concept: order. Order is not the same as the opposite of disorder. If disorder represents a random sequence of events, the opposite of disorder should be “not randomness.” Information is a sequence of data that is meaningful in a process, such as the DNA code of an organism or the bits in a computer program. “Noise,” on the other hand, is a random sequence. Noise is inherently unpredictable but carries no information. Information, however, is also unpredictable. If we can predict future data from past data, that future data stops being information. Thus, neither information nor noise can be compressed (and restored to exactly the same sequence). We might consider a predictably alternating pattern (such as 0101010 . . .) to be orderly, but it carries no information beyond the first couple of bits.
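  Compression makes these distinctions tangible. In this sketch (zlib as the compressor; both sequences are invented for illustration), the predictable pattern collapses to almost nothing, while the unpredictable sequence does not shrink at all; genuinely meaningful information resists compression for the same reason noise does:

```python
import os
import zlib

pattern = b"01" * 62_500       # predictable: orderly, but almost no information
noise = os.urandom(125_000)    # unpredictable and meaningless

print(len(zlib.compress(pattern, 9)))   # a few hundred bytes at most
print(len(zlib.compress(noise, 9)))     # ~125,000 bytes: incompressible
```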

  Thus, orderliness does not constitute order, because order requires information. Order is information that fits a purpose. The measure of order is the measure of how well the information fits the purpose. In the evolution of life-forms, the purpose is to survive. In an evolutionary algorithm (a computer program that simulates evolution to solve a problem) applied to, say, designing a jet engine, the purpose is to optimize engine performance, efficiency, and possibly other criteria.7

  Measuring order is more difficult than measuring complexity. There are proposed measures of complexity, as I discussed above. For order, we need a measure of “success” that would be tailored to each situation. When we create evolutionary algorithms, the programmer needs to provide such a success measure (called the “utility function”). In the evolutionary process of technology development, we could assign a measure of economic success.
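  A minimal evolutionary algorithm makes the role of the utility function concrete. In this Python toy (the bit-string target, population size, and mutation rate are all invented for illustration), the programmer-supplied utility function is the only place where “success” is defined; swapping in, say, a jet-engine simulator would leave the rest of the skeleton unchanged:

```python
import random

TARGET = "11110000111100001111"          # illustrative "design goal"

def utility(candidate: str) -> int:
    # The success measure the programmer must supply: here, simply
    # how many bits of the candidate match the target design.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    # Variation: flip each bit with a small probability.
    return "".join(
        ("1" if c == "0" else "0") if random.random() < rate else c
        for c in candidate
    )

population = ["".join(random.choice("01") for _ in TARGET) for _ in range(50)]

for generation in range(200):
    # Selection: keep the fittest half, as judged by the utility function.
    population.sort(key=utility, reverse=True)
    survivors = population[:25]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
    if utility(population[0]) == len(TARGET):
        break

print(generation, population[0])
```

  Selection keeps whatever the utility function scores highest; change that one function and the identical loop evolves toward a different “purpose.”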

  Simply having more information does not necessarily result in a better fit. Sometimes, a deeper order—a better fit to a purpose—is achieved through simplification rather than further increases in complexity. For example, a new theory that ties together apparently disparate ideas into one broader, more coherent theory reduces complexity but nonetheless may increase the “order for a purpose.” (In this case, the purpose is to accurately model observed phenomena.) Indeed, achieving simpler theories is a driving force in science. (As Einstein said, “Make everything as simple as possible, but no simpler.”)

  An important example of this concept is one that represented a key step in the evolution of hominids: the shift in the thumb’s pivot point, which allowed more precise manipulation of the environment.8 Primates such as chimpanzees can grasp, but they cannot manipulate objects with either a “power grip” or sufficient fine-motor coordination to write or to shape objects. A change in the thumb’s pivot point did not significantly increase the complexity of the animal but nonetheless did represent an increase in order, enabling, among other things, the development of technology. Evolution has shown, however, that the general trend toward greater order does typically result in greater complexity.9

  Thus improving a solution to a problem—which usually increases but sometimes decreases complexity—increases order. Now we are left with the issue of defining the problem. Indeed, the key to an evolutionary algorithm (and to biological and technological evolution in general) is exactly this: defining the problem (which includes the utility function). In biological evolution the overall problem has always been to survive. In particular ecological niches this overriding challenge translates into more specific objectives, such as the ability of certain species to survive in extreme environments or to camouflage themselves from predators. As biological evolution moved toward humanoids, the objective itself evolved to the ability to outthink adversaries and to manipulate the environment accordingly.

  It may appear that this aspect of the law of accelerating returns contradicts the second law of thermodynamics, which implies that entropy (randomness in a closed system) cannot decrease and, therefore, generally increases.10 However, the law of accelerating returns pertains to evolution, which is not a closed system. It takes place amid great chaos and indeed depends on the disorder in its midst, from which it draws its options for diversity. And from these options, an evolutionary process continually prunes its choices to create ever greater order. Even a crisis, such as the periodic large asteroids that have crashed into the Earth, although increasing chaos temporarily, ends up increasing—deepening—the order created by biological evolution.

  To summarize, evolution increases order, which may or may not increase complexity (but usually does). A primary reason that evolution—of life-forms or of technology—speeds up is that it builds on its own increasing order, with ever more sophisticated means of recording and manipulating information. Innovations created by evolution encourage and enable faster evolution. In the case of the evolution of life-forms, the most notable early example is DNA, which provides a recorded and protected transcription of life’s design from which to launch further experiments. In the case of the evolution of technology, ever-improving human methods of recording information have fostered yet further advances in technology. The first computers were designed on paper and assembled by hand. Today, they are designed on computer workstations, with the computers themselves working out many details of the next generation’s design, and are then produced in fully automated factories with only limited human intervention.

 
