Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human
Lanier’s father was a writer for fiction magazines, his mother a concert pianist who was also the family’s primary breadwinner. They managed because of her pioneering remote stock trading, buying and selling on Wall Street by phone from the yucca-punctuated desert. “I lived down that ditch,” he remarks today, pointing to an irrigation canal. Mesilla is now showing signs of wealth he does not remember from his childhood. People are using metal for fencing, not just woven twigs. The park now has grass.
Lanier had a singular upbringing, which shapes him and his views about the future of human nature. “My childhood is divided into two pieces—when my mom was alive and after she died. When she was alive it was a childhood that was kind of structured in a nice house kept nicely.” Then she died, suddenly, in a horrible automobile accident. Lanier was nine. “The year after my mom died I just spent in a hospital trying to die.” He suffered a succession of diseases. “I have a lost year there,” he says. He recalls little of man’s first landing on the moon that year.
Meanwhile, “my father made some disastrous financial decisions. We ended up utterly impoverished. We moved to this cheap piece of land and we lived in tents.” Trying to recoup, Lanier’s father—always something of a bohemian—decided to erect a homemade home. Astoundingly, he let his strange, brilliant, motherless child design it. The result was spectacularly, predictably unpredictable. “I designed this crazy building,” Lanier recalls.
Lanier had read a book by the creator of The Whole Earth Catalog lauding the charms of geodesic domes. Many years later he would meet the author, Stewart Brand. “The first time I ever met Stewart I said, ‘You know, I grew up in a dome,’ and his first words to me were, ‘Did it leak?’” Lanier, sputtering, replied, “Of course it leaked; what were you thinking?” A rant ensued concerning malpractice. Nonetheless, it was quite a structure. “This house wasn’t made only of domes. It had some fantastic crystal forms and weird spires and towers and jutting parts. I mean, it was a really strange house.” Much of it has since fallen down.
Architectural follies, of course, hardly began to resolve the central issue of a preteen’s life, even if “you couldn’t really build a dome without some trigonometry.”
“The degree to which I was a social failure is impossible to even state,” he says. “It was just extreme beyond. . . ” Lanier trails off. It was beyond not having friends. People surrounding him were utterly “mean-spirited, hostile and threatening. If I’d been in any normal place, some kind of serious intervention would have happened. The Spanish and the Mexicans despised each other, the Indians were darkly depressed and just swallowed by hopelessness, and the white culture was redneck—extremely intolerant, boorish, violent and uneducated. They were warring camps. It was just a terribly mean-spirited environment that I happened to be in. I figured out how to connect to people socially much later in life than most people do. In the technical academic world you meet a lot of awkward young men in the math department or whatever. People are always saying, ‘Oh, we have a real weirdo this semester,’ and there will be some hairy creature in some little dingy basement room who is sort of suckling on a computer.”
He laughs. “I mean, they don’t even know the meaning of awkward young technical guy. They don’t even know the beginning of the meaning. But I got it eventually. It was just very hard. I remember I would go into a little convenience store and my goal would be to buy something without creating a scene. Without embarrassing myself. I had to practice that.”
In this world, any connections were magical. One night when he was 11, the phone system went crazy. Suddenly, “all the phone lines were connected together, so if you picked it up there were hundreds of people. It was kids, and they were all talking to each other. There were all these floating voices. It was just this one night, and what was cool about it, for me, it was like the thing on the Internet where no one knows you’re a dog. That was an amazing moment for me.”
When he was 14, he found refuge in a summer program for high-schoolers at New Mexico State University in nearby Las Cruces. There, “I think I went through developmental stages that people usually go through pre-puberty—in terms of learning how to have conversations and stuff. I started to learn how to talk to people.”
By today’s standards, New Mexico State—home of the Aggies, reflecting its rural heritage—was a glorified community college. But because of New Mexico’s role in nuclear weapons development it harbored a few math, physics and engineering faculty members with world-class minds. The Manhattan Project scientists had been hidden in Los Alamos up by Santa Fe. The White Sands Missile Range is nearby, as is the Trinity site at Alamogordo, where the world’s first atomic bomb was exploded on July 16, 1945. Thus, a neighbor was Clyde Tombaugh, who discovered Pluto. His backyard bristled with telescopes, which he let the young Lanier use. Tombaugh was in charge of designing optical missile trackers at White Sands.
At the end of that summer, Lanier decided okay, I am simply not leaving. He had not graduated from high school, but he was more than clever enough to obfuscate the paperwork. “I just stayed in college instead of going back to high school, and that actually started to work.”
The most astonishing thing about New Mexico State was that, thanks to the federal government’s interest in nukes, it had a computer facility that Lanier describes as “kick-ass.” Some of the pioneers of graphical representation came out of there. This was a very big deal. Graphical representation is what the trash can image on your desktop is all about. It’s at the heart of the point-and-click system that you take for granted in Windows. It’s also the foundation of all the games that today pull in more money than do Hollywood box office receipts. As a result of his experience there, Lanier wound up as a programmer in Silicon Valley, joining what commentator Jon Katz calls “an inconspicuous movement, attracting millions of intelligent, technologically aware, community-oriented, self-described outsiders, mesmerized by finding themselves in a club that will not only take them in, but puts them in charge.”
Today Lanier lives very near the crest of the Berkeley Hills in a stunning place adventurously created with the help of “an avant-garde seismic engineer.” It has a spectacular view of the Golden Gate Bridge. The down payment came from his cashing out of a recent start-up company. Still without a diploma of any kind, Lanier almost absentmindedly collects faculty appointments that keep him on the transcontinental jets he professes to loathe. These postings include the engineering school of Dartmouth, the business school of the University of Pennsylvania, the arts school of New York University, and the computer science department of Columbia. The Micronesian island chain of Palau has issued a postage stamp honoring him. In his spare time he is fundamentally rethinking the historical underpinnings of all computers. He believes the way they work now is whacked. There has got to be a better way.
When faced by the prospect of a sudden transformation of human nature, Ray Kurzweil, Bill Joy and Jaron Lanier each responds from the deep recesses of his soul. Kurzweil worships the power of ideas to resolve all problems; Joy in his lonely fashion engages death; Lanier attributes all his subsequent work to finding “the connection I lost.”
The thinking of Kurzweil, Joy and Lanier describes a triangle. Lanier’s is not some middle vision between that of Kurzweil and Joy. He is off in an entirely other territory that pokes and prods their technological determinism. Lanier agrees with Kurzweil that it is not tremendously likely that you can stop radical evolution by willing it gone. He agrees with Joy that The Curve could lead to mortal dangers. Yet Lanier would not relinquish transcendence even were that possible. Indeed, he views the prospect of exploring all the ways humans could expand their connections as the greatest adventure on which the species has ever embarked. Lanier’s critical difference is that he does not see The Curve yielding some inevitable, preordained result, as in the fashion of the Heaven and Hell Scenarios. “If it turns out Bill or Ray are right, I’ll be disappointed mostly because it’s such a profoundly dull and unheroic outcome,” he says. “It’s such a gizmo outcome. There is no depth to it at all.”
Lanier believes it is well within the power of the species to transcend to something far beyond the current understanding of human nature. He just views as sterile the prospect of uploading some portion of our brains into computers. Instead, he pictures a rich and tasty brew of opportunities. He can see a vast array of transcendences. He imagines humans making intelligent decisions, exercising creative control. If you were graphing Lanier’s idea, it would not be represented by smooth curves, either up or down, as in the first two scenarios. It would doubtless have fits and starts, hiccups and coughs, reverses and loops—not unlike the history we humans always have known. It would be messy and chaotic, like humans themselves. Technology would not be in control. It would not be on rails, inexorably deciding human affairs. At the same time, the outcome would definitely involve radical change.
I call visions like this The Prevail Scenario.
UNCERTAINTY SUFFUSES The Prevail Scenario. For Lanier, that’s not a bug. It’s a feature. “The universe doesn’t provide us a way to have absolute truth,” he says. “I am not fanatical about my ideas. I’m perfectly happy to see where there are holes in them. This idea is something I believe—in the sense that I act on it. But let me tell you the trap I want to avoid falling into.” He judges Kurzweil and Joy to be “severe exaggerators and overstaters. Their reasoning is similar to that of a paranoid person in that they find only the little bits that fit into their worldview and build this cage in which they imprison themselves. I’m not willing to be a fanatic and demand that people see that every bit of data supports my view. I want to be given the latitude to present my own thing more softly. I actually perceive it with less of a sense of certainty and bullheadedness. It’s just my best guess.”
His key point about The Prevail Scenario: “I will argue for perceiving a gradual ramp of increased bridging of the interpersonal gap. I believe that that’s demonstrable. I do not perceive it as being an exponential increase. I do not perceive it as something where there is an economy of scale and it’s compounding itself and it’s heading towards some asymptotic point. I am not saying it’s accelerating.” The Prevail Scenario, he’s saying, is measured by its impact on human society. He is specifically arguing that even if technology is on a curve, its impact is not. This is why he is skeptical about the idea of a Singularity—technology increasing so quickly as to create an imminent and cataclysmic upheaval in human affairs.
In his version of The Prevail Scenario, Lanier is talking about transcendence through an “infinite game.” “The future that I’m trying to find is one where people are in the center and there’s this ever-expanding game of connecting people that creates a game into the future.”
James P. Carse, the emeritus director of religious studies and professor of the history of literature at NYU, in 1986 published a book called Finite and Infinite Games. In it, Carse describes the familiar contests of everyday life—games played in business and politics, in the bedroom and on the battlefield. Finite games have winners and losers, a beginning and an end. Finite players try to control the game, predict everything that will happen, and set the bottom line in advance. They are serious and determined about getting that outcome. They try to fix the future based on the past.
Players of infinite games, by contrast, enjoy being surprised. Continuously running into something one didn’t know will ensure that the game will go on forever. The meaning of the past changes depending on what happens in the future. “A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play,” Carse says. Infinite games never end, for they are unscripted and unpredictable. Carse sees them as more rewarding, and Lanier vibrates to this chord. Finite players play within the rules. Infinite players play with the rules. “Life, liberty, and the pursuit of happiness” is an infinite game, Lanier believes. Infinite games are the real transcendence games. They allow you to transcend your boundaries. They allow you to transcend who you are.
On several levels, Lanier questions what he sees as the finite-game premises of the Heaven and Hell Scenarios. He doesn’t doubt that there are exponential processes at work, including Moore’s Law and all the rest. But he wonders if they are immutable, and questions the nature of their social impact. After all, by definition you can only measure closed environments with fixed boundaries. A lot of life isn’t like that, Lanier points out. Sure, the price of chips is plummeting. But is that making us smarter?
How would it be possible to measure the system known as the U.S. Constitution? Lanier asks. To prevail from horse-drawn days to the present, it has to be a miraculously sophisticated document. Yet we have no units to measure that sophistication, he points out. The Constitution is not a computer operating system but a human operating system. That’s the difference between a closed system and an open one. The great irony is that if we can measure something, it can’t be all that complex. How can we measure creativity? Human nature is the ultimate example of the immeasurable.
That’s why Lanier is far more bothered by nerds trying to mold human nature to their closed-system computers than he is by human enhancement. Right now he thinks computers are making us stupider.
How can you say that? I protest. If you call up an airline reservation number, you get an amazingly sophisticated machine that can understand what you’re saying and respond in meaningful ways.
“The very nature of oppression has always been to force people to live within the confines of some idea about what a person is,” he replies. “That is true whether you’re talking about some ancient religious oppressive regime, or a communist regime, or a fascist regime, or one of the big bad industrial-age companies” that reduced people to cogs in their organizational machine. “Or for that matter Freud. There are a lot of people who have this idea about what a human is and expect other people to live within the confines of that theory.”
He views the belief that a human is like a computer as the current repression. “In the computer-human loop, the human is the more flexible portion. So whenever you change a piece of computer technology, the chances are that the human users will actually be changing more than the technology itself changed.” You quickly learn that there are only certain questions that the airline reservation bot can handle, and only certain words, and you dumb down your activity to deal with its limitations. If you treat a computer like a person, thinking that there might be any real intelligence there, you make yourself stupid.
“I don’t think it is particularly dangerous yet. But that’s the start of a potential trend that I think could be a big problem.” You start by learning how to con your allegedly smart word program. You have to. Otherwise it will automatically fix things from right to wrong, and you won’t get what you really meant to type. Then you organize all your finances so that they’ll look good to a pathetically simplistic credit-rating computer. When you have machines evaluating people, as in school testing, you have to learn what the machine wants and play its little game in order to establish that you are a satisfactory human being. Keep this up—accepting the notion that we can trust machines to do some of our thinking for us—and you depart from reality, Lanier believes. We model ourselves after our technologies, becoming some sort of anti-Pinocchios. Human spunk begins to evaporate.
“To train ourselves to adapt to a low-grade form in order to get some machine to work is a little bit like asking people to reduce their vocabulary so that language will work better overall,” Lanier says. “Or asking people not to play any new musical chords because all the musical instruments are designed for the existing chords or something like that. It shuts down the game.”
Lanier’s critical concern is connectedness between human beings, not transistors. Suppose that someday bots run convenience stores and dry-cleaning emporia, replacing immigrants. Will a bot ever get to know you well enough to, one spring day, along with your shirts, give you garden seeds for Thai eggplant and melon?
If vapidity is where The Curve is taking us, Lanier wants no part of it. Enhancement, by contrast, doesn’t worry him particularly. “If somebody put some brain chip in their head and it’s supposed to enhance their memory but instead it makes them weird in some way, as long as they are still part of the game of society in connecting with people, they would probably just be an interesting and eccentric person, as long as they are not homicidal or something like that. I’m not saying there aren’t any potential problems. But to me that can be part of an adventure. That doesn’t intrinsically scare me as much as a society that voluntarily endures a slow suicide through nerdification, in which they blanch out their own lives of any flavor or meaning. That scares me more.”
Lanier is dismissive of what he describes as “the religion of the elite technologists,” from Moravec to Minsky, in the halls of “true believers” at Stanford, MIT and Carnegie Mellon. They believe in a key anticipated outcome of The Heaven Scenario: “That computers are becoming autonomous and a successor species.”
“My feeling about spiritual questions is that there is a tightrope that I try to stay on, not always successfully. If you fall to the right side, you become an excessive reductionist. You pretend to know more than you do and you become overly rational. If you fall to the left side, you become superstitious and you believe that there are magic tricks of meaning. Staying right on that line is where you’re a skeptic but also acknowledge the degree of mystery in our lives. If you can adhere to that, I think that’s where truth lies. Sometimes it’s lonely and frustrating. For a lot of these questions, I think ‘I don’t know’ is the most dignified and profound answer. A profound ‘I don’t know’ is the result of a lot of work.”
Lanier wants to stay open to the possibility that “the world we manipulate here isn’t all there is. The world accessible by technologies isn’t all there is. I don’t want to become a superstitious fool and believe I can say anything about this other world. That’s very important. I don’t want to start saying, ‘Oh, there are these angels here.’ The idea of God as an entity that talks and stuff doesn’t quite fit for me. It’s also not something I’m gonna dismiss.” He makes a small joke by pretending to be the systems administrator of all creation: “We have limited privileges in this area.”