The Singularity Is Near: When Humans Transcend Biology


by Ray Kurzweil


  The compelling benefits of overcoming profound diseases and disabilities will keep these technologies on a rapid course, but medical applications represent only the early-adoption phase. As the technologies become established, there will be no barriers to using them for vast expansion of human potential.

  Stephen Hawking recently commented in the German magazine Focus that computer intelligence will surpass that of humans within a few decades. He advocated that we “urgently need to develop direct connections to the brain, so that computers can add to human intelligence, rather than be in opposition.”25 Hawking can take comfort that the development program he is recommending is well under way.

  There will be many variations of human body version 2.0, and each organ and body system will have its own course of development and refinement. Biological evolution is only capable of what is called “local optimization,” meaning that it can improve a design but only within the constraints of design “decisions” that biology arrived at long ago. For example, biological evolution is restricted to building everything from a very limited class of materials—namely, proteins, which are folded from one-dimensional strings of amino acids. It is restricted to thinking processes (pattern recognition, logical analysis, skill formation, and other cognitive skills) that use extremely slow chemical switching. And biological evolution itself works very slowly, only incrementally improving designs that continue to apply these basic concepts. It is incapable of suddenly changing, for example, to structural materials made of diamondoid or to nanotube-based logical switching.

  However, there is a way around this inherent limitation. Biological evolution did create a species that could think and manipulate its environment. That species is now succeeding in accessing—and improving—its own design and is capable of reconsidering and altering these basic tenets of biology.

  Human Body Version 3.0. I envision human body 3.0—in the 2030s and 2040s—as a more fundamental redesign. Rather than reformulating each subsystem, we (both the biological and nonbiological portions of our thinking, working together) will have the opportunity to revamp our bodies based on our experience with version 2.0. As with the transition from 1.0 to 2.0, the transition to 3.0 will be gradual and will involve many competing ideas.

  One attribute I envision for version 3.0 is the ability to change our bodies. We’ll be able to do that very easily in virtual-reality environments (see the next section), but we will also acquire the means to do this in real reality. We will incorporate MNT-based fabrication into ourselves, so we’ll be able to rapidly alter our physical manifestation at will.

  Even with our mostly nonbiological brains we’re likely to keep the aesthetics and emotional import of human bodies, given the influence this aesthetic has on the human brain. (Even when extended, the nonbiological portion of our intelligence will still have been derived from biological human intelligence.) That is, human body version 3.0 is likely still to look human by today’s standards, but given the greatly expanded plasticity that our bodies will have, ideas of what constitutes beauty will expand over time. Already, people augment their bodies with body piercing, tattoos, and plastic surgery, and social acceptance of these changes has rapidly increased. Since we’ll be able to make changes that are readily reversible, there is likely to be far greater experimentation.

  J. Storrs Hall has described nanobot designs he calls “foglets” that are able to link together to form a great variety of structures and that can quickly change their structural organization. They’re called “foglets” because if there’s a sufficient density of them in an area, they can control sound and light to form variable sounds and images. They essentially create virtual-reality environments externally (that is, in the physical world) rather than internally (in the nervous system). Using them, a person can modify his body or his environment, though some of these changes will actually be illusions, since the foglets can control sound and images.26 Hall’s foglets are one conceptual design for creating real morphable bodies to compete with those in virtual reality.

  BILL (AN ENVIRONMENTALIST): On this human body version 2.0 stuff, aren’t you throwing the baby out—quite literally—with the bathwater? You’re suggesting replacing the entire human body and brain with machines. There’s no human being left.

  RAY: We don’t agree on the definition of human, but just where do you suggest drawing the line? Augmenting the human body and brain with biological or nonbiological interventions is hardly a new concept. There’s still a lot of human suffering.

  BILL: I have no objection to alleviating human suffering. But replacing a human body with a machine to exceed human performance leaves you with, well, a machine. We have cars that can travel on the ground faster than a human, but we don’t consider them to be human.

  RAY: The problem here has a lot to do with the word “machine.” Your conception of a machine is of something that is much less valued—less complex, less creative, less intelligent, less knowledgeable, less subtle and supple—than a human. That’s reasonable for today’s machines because all the machines we’ve ever met—like cars—are like this. The whole point of my thesis, of the coming Singularity revolution, is that this notion of a machine—of nonbiological intelligence—will fundamentally change.

  BILL: Well, that’s exactly my problem. Part of our humanness is our limitations. We don’t claim to be the fastest entity possible, to have memories with the biggest capacity possible, and so on. But there is an indefinable, spiritual quality to being human that a machine inherently doesn’t possess.

  RAY: Again, where do you draw the line? Humans are already replacing parts of their bodies and brains with nonbiological replacements that work better at performing their “human” functions.

  BILL: Better only in the sense of replacing diseased or disabled organs and systems. But you’re replacing essentially all of our humanness to enhance human ability, and that’s inherently inhuman.

  RAY: Then perhaps our basic disagreement is over the nature of being human. To me, the essence of being human is not our limitations—although we do have many—it’s our ability to reach beyond our limitations. We didn’t stay on the ground. We didn’t even stay on the planet. And we are already not settling for the limitations of our biology.

  BILL: We have to use these technological powers with great discretion. Past a certain point, we’re losing some ineffable quality that gives life meaning.

  RAY: I think we’re in agreement that we need to recognize what’s important in our humanity. But there is no reason to celebrate our limitations.

  . . . on the Human Brain

  Is all that we see or seem but a dream within a dream?

  —EDGAR ALLAN POE

  The computer programmer is a creator of universes for which he alone is the lawgiver. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.

  —JOSEPH WEIZENBAUM

  One windy day two monks were arguing about a flapping banner. The first said, “I say the banner is moving, not the wind.” The second said, “I say the wind is moving, not the banner.” A third monk passed by and said, “The wind is not moving. The banner is not moving. Your minds are moving.”

  —ZEN PARABLE

  Suppose someone were to say, “Imagine this butterfly exactly as it is, but ugly instead of beautiful.”

  —LUDWIG WITTGENSTEIN

  The 2010 Scenario. Computers arriving at the beginning of the next decade will become essentially invisible: woven into our clothing, embedded in our furniture and environment. They will tap into the worldwide mesh (what the World Wide Web will become once all of its linked devices become communicating Web servers, thereby forming vast supercomputers and memory banks) of high-speed communications and computational resources. We’ll have very high-bandwidth, wireless communication to the Internet at all times. Displays will be built into our eyeglasses and contact lenses, and images will be projected directly onto our retinas. The Department of Defense is already using technology along these lines to create virtual-reality environments in which to train soldiers.27 An impressive immersive virtual-reality system already demonstrated by the army’s Institute for Creative Technologies includes virtual humans that respond appropriately to the user’s actions.

  Similar tiny devices will project auditory environments. Cell phones are already being introduced in clothing that projects sound to the ears.28 And there’s an MP3 player that vibrates your skull to play music that only you can hear.29 The army has also pioneered transmitting sound through the skull from a soldier’s helmet.

  There are also systems that can project from a distance sound that only a specific person can hear, a technology that was dramatized by the personalized talking street ads in the movie Minority Report. The Hypersonic Sound technology and the Audio Spotlight systems achieve this by modulating the sound on ultrasonic beams, which can be precisely aimed. Sound is generated by the beams interacting with air, which restores sound in the audible range. By focusing multiple sets of beams on a wall or other surface, a new kind of personalized surround sound without speakers is also possible.30
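The demodulation step described above can be sketched numerically. The following is a minimal, hedged illustration of the parametric-array principle behind directed-audio systems of this kind: an audible signal is amplitude-modulated onto an ultrasonic carrier, and the air’s nonlinearity (modeled here very crudely as a squaring term followed by a low-pass filter) recovers the audible envelope along the beam. The frequencies and the filter are illustrative assumptions, not specifications of any actual product.

```python
import numpy as np

fs = 192_000                      # sample rate high enough to represent a 40 kHz carrier
t = np.arange(fs) / fs            # one second of time samples
audio = np.sin(2 * np.pi * 1_000 * t)      # 1 kHz test tone (the audible "message")
carrier = np.sin(2 * np.pi * 40_000 * t)   # 40 kHz ultrasonic carrier

# Amplitude modulation: the audio rides on the carrier as its envelope.
transmitted = (1 + 0.5 * audio) * carrier

# Air responds nonlinearly at high intensities; a squaring term is the
# simplest stand-in. Squaring shifts energy down into the audible band
# (and up to ~80 kHz, which the filter below discards).
distorted = transmitted ** 2

# Simple moving-average low-pass filter keeps only the audible band.
kernel = np.ones(64) / 64
demodulated = np.convolve(distorted, kernel, mode="same")
demodulated -= demodulated.mean()          # remove the DC offset

# Correlation between the recovered signal and the original tone
# (edges trimmed to avoid filter boundary effects); it should be
# close to 1 for a clean demodulation.
corr = np.corrcoef(audio[fs // 10 : -fs // 10],
                   demodulated[fs // 10 : -fs // 10])[0, 1]
```

The squaring model explains why the beam itself is inaudible until it interacts with air (or a surface): the audible signal exists only as a byproduct of the nonlinear medium, which is also why reflecting the beams off a wall can create speakerless surround sound.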

  These resources will provide high-resolution, full-immersion visual-auditory virtual reality at any time. We will also have augmented reality with displays overlaying the real world to provide real-time guidance and explanations. For example, your retinal display might remind you, “That’s Dr. John Smith, director of the ABC Institute—you last saw him six months ago at the XYZ conference” or, “That’s the Time-Life Building—your meeting is on the tenth floor.”

  We’ll have real-time translation of foreign languages, essentially subtitles on the world, and access to many forms of online information integrated into our daily activities. Virtual personalities that overlay the real world will help us with information retrieval and our chores and transactions. These virtual assistants won’t always wait for questions and directives but will step forward if they see us struggling to find a piece of information. (As we wonder about “That actress . . . who played the princess, or was it the queen . . . in that movie with the robot,” our virtual assistant may whisper in our ear or display in our visual field of view: “Natalie Portman as Queen Amidala in Star Wars, episodes 1, 2, and 3.”)

  The 2030 Scenario. Nanobot technology will provide fully immersive, totally convincing virtual reality. Nanobots will take up positions in close physical proximity to every interneuronal connection coming from our senses. We already have the technology for electronic devices to communicate with neurons in both directions, without requiring any direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron, or alternatively can cause a nearby neuron to fire or suppress it from firing.31 This amounts to two-way communication between neurons and the electronic-based neuron transistors. As mentioned above, quantum dots have also shown the ability to provide noninvasive communication between neurons and electronics.32

  If we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from our actual senses and replace them with the signals that would be appropriate for the virtual environment.33 Your brain experiences these signals as if they came from your physical body. After all, the brain does not experience the body directly. As I discussed in chapter 4, inputs from the body—comprising a few hundred megabits per second—representing information about touch, temperature, acid levels, the movement of food, and other physical events, stream through the Lamina 1 neurons, then through the posterior ventromedial nucleus, ending up in the two insula regions of cortex. If these are coded correctly—and we will know how to do that from the brain reverse-engineering effort—your brain will experience the synthetic signals just as it would real ones. You could decide to cause your muscles and limbs to move as you normally would, but the nanobots would intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move, appropriately adjusting your vestibular system and providing the appropriate movement and reorientation in the virtual environment.

  The Web will provide a panoply of virtual environments to explore. Some will be re-creations of real places; others will be fanciful environments that have no counterpart in the physical world. Some, indeed, would be impossible, perhaps because they violate the laws of physics. We will be able to visit these virtual places and have any kind of interaction with other real, as well as simulated, people (of course, ultimately there won’t be a clear distinction between the two), ranging from business negotiations to sensual encounters. “Virtual-reality environment designer” will be a new job description and a new art form.

  Become Someone Else. In virtual reality we won’t be restricted to a single personality, since we will be able to change our appearance and effectively become other people. Without altering our physical body (in real reality) we will be able to readily transform our projected body in these three-dimensional virtual environments. We will be able to select different bodies at the same time for different people, so your parents may see you as one person while your girlfriend experiences you as another. However, the other person may choose to override your selections, preferring to see you differently from the way you have chosen to appear. You could pick different body projections for different people: Ben Franklin for a wise uncle, a clown for an annoying coworker. Romantic couples can choose whom they wish to be, even to become each other. These are all easily changeable decisions.

  I had the opportunity to experience what it is like to project myself as another persona in a virtual-reality demonstration at the 2001 TED (technology, entertainment, design) conference in Monterey. By means of magnetic sensors in my clothing, a computer was able to track all of my movements. With ultrahigh-speed animation the computer created a life-size, near photorealistic image of a young woman—Ramona—who followed my movements in real time. Using signal-processing technology, my voice was transformed into a woman’s voice and also controlled the movements of Ramona’s lips. So it appeared to the TED audience as if Ramona herself were giving the presentation.34

  To make the concept understandable, the audience could see me and see Ramona at the same time, both moving simultaneously in exactly the same way. A band came onstage, and I—Ramona—performed Jefferson Airplane’s “White Rabbit,” as well as an original song. My daughter, then fourteen, also equipped with magnetic sensors, joined me, and her dance movements were transformed into those of a male backup dancer—who happened to be a virtual Richard Saul Wurman, the impresario of the TED conference. The hit of the presentation was seeing Wurman—not known for his hip-hop moves—convincingly doing my daughter’s dance steps. Present in the audience was the creative leadership of Warner Bros., who then went off and created the movie Simone, in which the character played by Al Pacino transforms himself into Simone in essentially the same way.

  The experience was a profound and moving one for me. When I looked in the “cybermirror” (a display showing me what the audience was seeing), I saw myself as Ramona rather than the person I usually see in the mirror. I experienced the emotional force—and not just the intellectual idea—of transforming myself into someone else.

  People’s identities are frequently closely tied to their bodies (“I’m a person with a big nose,” “I’m skinny,” “I’m a big guy,” and so on). I found the opportunity to become a different person liberating. All of us have a variety of personalities that we are capable of conveying but generally suppress, since we have no readily available means of expressing them. Today we have very limited technologies available—such as fashion, makeup, and hairstyle—to change who we are for different relationships and occasions, but our palette of personalities will greatly expand in future full-immersion virtual-reality environments.

  In addition to encompassing all of the senses, these shared environments can include emotional overlays. Nanobots will be capable of generating the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions. Experiments during open brain surgery have demonstrated that stimulating certain specific points in the brain can trigger emotional experiences (for example, the girl who found everything funny when stimulated in a particular spot of her brain, as I reported in The Age of Spiritual Machines).35 Some emotions and secondary reactions involve a pattern of activity in the brain rather than the stimulation of a specific neuron, but with massively distributed nanobots, stimulating these patterns will also be feasible.

  Experience Beamers. “Experience beamers” will send the entire flow of their sensory experiences as well as the neurological correlates of their emotional reactions out onto the Web, just as people today beam their bedroom images from their Web cams. A popular pastime will be to plug into someone else’s sensory-emotional beam and experience what it’s like to be that person, à la the premise of the movie Being John Malkovich. There will also be a vast selection of archived experiences to choose from, with virtual-experience design another new art form.

  Expand Your Mind. The most important application of circa-2030 nanobots will be literally to expand our minds through the merger of biological and nonbiological intelligence. The first stage will be to augment our hundred trillion very slow interneuronal connections with high-speed virtual connections via nanorobot communication.36 This will provide us with the opportunity to greatly boost our pattern-recognition abilities, memories, and overall thinking capacity, as well as to directly interface with powerful forms of nonbiological intelligence. The technology will also provide wireless communication from one brain to another.

 
