Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human
That’s just the way: a person does a low-down thing, and then he don’t want to take no consequences of it. Thinks as long as he can hide, it ain’t no disgrace. That was my fix exactly. The more I studied about this the more my conscience went to grinding me, and the more wicked and low-down and ornery I got to feeling. And at last, when it hit me all of a sudden that here was the plain hand of Providence slapping me in the face and letting me know my wickedness was being watched all the time from up there in heaven, whilst I was stealing a poor old woman’s nigger that hadn’t ever done me no harm, and now was showing me there’s One that’s always on the lookout, and ain’t a-going to allow no such miserable doings to go only just so fur and no further, I most dropped in my tracks I was so scared. Well, I tried the best I could to kinder soften it up somehow for myself by saying I was brung up wicked, and so I warn’t so much to blame; but something inside of me kept saying, “There was the Sunday-school, you could a gone to it; and if you’d a done it they’d a learnt you there that people that acts as I’d been acting about that nigger goes to everlasting fire.”
Huck decides right then and there to abandon a life of sin, avoid eternal damnation and for once in his life do the right thing by society’s lights. He decides to squeal, to write a letter to Jim’s owner telling her how to recapture her slave.
Then he gets to thinking about human nature:
I felt good and all washed clean of sin for the first time I had ever felt so in my life, and I knowed I could pray now. But I didn’t do it straight off, but laid the paper down and set there thinking—thinking how good it was all this happened so, and how near I come to being lost and going to hell. And went on thinking. And got to thinking over our trip down the river; and I see Jim before me, all the time: in the day, and in the night-time, sometimes moonlight, sometimes storms, and we a-floating along, talking, and singing, and laughing. But somehow I couldn’t seem to strike no places to harden me against him, but only the other kind. I’d see him standing my watch on top of his’n, ’stead of calling me, so I could go on sleeping; and see him how glad he was when I come back out of the fog; and when I come to him again in the swamp, up there where the feud was; and such-like times; and would always call me honey, and pet me, and do everything he could think of for me, and how good he always was; and at last I struck the time I saved him by telling the men we had small-pox aboard, and he was so grateful, and said I was the best friend old Jim ever had in the world, and the only one he’s got now; and then I happened to look around, and see that paper.
It was a close place. I took it up, and held it in my hand. I was a-trembling, because I’d got to decide, forever, betwixt two things, and I knowed it. I studied a minute, sort of holding my breath, and then says to myself:
“All right, then, I’ll go to hell”—and tore it up.
It was awful thoughts, and awful words, but they was said. And I let them stay said; and never thought no more about reforming. I shoved the whole thing out of my head; and said I would take up wickedness again, which was in my line, being brung up to it, and the other warn’t. And for a starter, I would go to work and steal Jim out of slavery again; and if I could think up anything worse, I would do that, too; because as long as I was in, and in for good, I might as well go the whole hog.
A classic example of The Prevail Scenario is arguably the most perfect film Hollywood ever made, Casablanca. Humphrey Bogart as Rick is ensconced in a cozy world of thieves, swindlers, gamblers, drunks, parasites, refugees, soldiers of fortune, genially corrupt French police and terrifying Nazis. Rick’s cynicism is his pride; he sticks his neck out for nobody. His only interest is in seeing his Café Américain flourish. And then, of course, of all the gin joints in all the towns in all the world, Ilsa (Ingrid Bergman) walks into his. The rest of the film concerns him betraying his own cauterized heart in service of a higher purpose. As Rick says, “It’s still a story without an ending.”
Interestingly, the most phenomenally successful film series of the recent era—the Star Wars, Harry Potter, Matrix and Lord of the Rings movies—are all exemplars of the Prevail myth, from Han Solo’s grudging heroism to little people with furry feet vanquishing the combined forces of Darkness. If the ageless way humans process information is by telling stories, what does our hunger for that story say?
All scenarios are to some degree faith-based. They rest upon assumptions that cannot be proven. In fact, that is one of the key points of scenario exercises—discovering what people’s hidden assumptions are, in order to hold them up to the light. In a scenario exercise, should you hear someone say, “Oh, that can’t happen,” that’s a surefire sign of an embedded and probably unexamined assumption.
This is not to say embedded assumptions are necessarily wrong. It is simply useful to know what they are, why we believe them to be valid, what the early warning signs would be if messy reality started to challenge them, and what we would do about it if our most cherished assumptions turned out to be flawed.
In both the Heaven and Hell Scenarios, the embedded assumption is that human destiny can be projected reliably if you apply enough logic, rationality and empiricism to the project.
In The Prevail Scenario, by contrast, the embedded assumption is that even if a smooth curve does describe the future of technology, it is not likely to describe the real world of human fortune. The analogy is to the utter failure of the straight-line projections of Malthusians, who believed industrial development would lead to starvation, when in fact the problem turned out to be obesity.
The Prevail Scenario is essentially driven by a faith in human cussedness. It is based on a hunch that you can count on humans to throw The Curve a curve. It is an instinct that human change will bounce strangely in the course of being translated from technological change. It is also a belief that transcendence is unlikely to be part of any simple scheme. Prevail does not, however, assign a path to how this outcome will be achieved. The mean-spirited may say it expects a very large miracle. The more sympathetic may say it expects many millions of small miracles.
It is dangerously wrong to assign probabilities to scenarios and ignore those that strike you as unlikely. History shows that the low-probability, high-impact scenarios are the ones that really shock—Pearl Harbor, for example. It would be unfortunate if, as you lay dying, surrounded by millions of others, your last thought was, I wish I’d paid more attention to Bill Joy. It would be unnerving if you woke up one day to find your world unhinged due to the rise of greater-than-human intelligence, and your first thought was, Didn’t Ray Kurzweil say something about this?
Prevail’s trick is that it embraces uncertainty. Even in the face of unprecedented threats, it displays a faith that the ragged human convoy of divergent perceptions, piqued honor, posturing, insecurity and humor will wend its way to glory. It puts a shocking premium on Faulkner’s hope that man will prevail “because he has a soul, a spirit capable of compassion and sacrifice and endurance.” It assumes that even as change picks up speed, giving us less and less time to react, we will still be able to rely on the impulse that Churchill described when he said, “Americans can always be counted on to do the right thing—after they have exhausted all other possibilities.”
To focus his version of Prevail, Lanier adds a fourth proposition:
• The key measure of Prevail’s success is an increasing intensity of links between humans, not transistors. If some sort of transcendence is achieved beyond today’s understanding of human nature, it will not be through some individual becoming superman. In Lanier’s Prevail Scenario, transcendence is social, not solitary. The measure is the extent to which many transform together.
In Lanier’s version of Prevail, the idea of progress is progressing. Lanier points out that, historically, there have been two measures of the march of human progress. One is technological and economic advance, starting with fire and the wheel and marking points on The Curve up through the steam engine and beyond.
The second ramp is moral improvement. It starts with the Ten Commandments and proceeds through the jury convicting Martha Stewart on all counts. Some find our moral improvement difficult to perceive, pointing to the variety and abundance of 20th century atrocities. It’s hard to argue with these people. They may be right. But Lanier thinks those who deny the existence of a moral incline are not in touch with the enthusiasm humans once brought to raping, pillaging and burning. Genghis Khan’s Mongols killed nearly as many people as did all of World War II, back when 50 million dead was a significant portion of the entire human race. Their achievement—making the streets of Beijing “greasy with the fat of the slain,” for example—is still a marvel given their severely limited technologies of fire and the sword.
Lanier has issues with both of these ramps. Note that Lanier uses the word ramp because he does not necessarily believe either is on an exponential Curve. The technological incline is a flawed measure of progress on many levels, Lanier says, most particularly because it suggests that the meaning of humanity can be reduced to zeros and ones. The moral ramp is a problem because, taken to its logical outcome, it requires more energy than humans have, and also can lead to holy wars. So his version of Prevail rests on the proposition that a third ramp exists and that it is the important one. That is the ramp of increased connection between people.
Many of the flaws Lanier sees in using technology as a measure of progress begin with his experience as a software scientist. Lanier seriously questions whether information technology will work well enough anytime soon to produce either Heaven or Hell. He completely believes that the moment nanobots are poised to eat humanity, for example, they will be felled by a Windows crash. “I’m serious about that—no joke,” he says. “Legacy code and bugs all get worse when code gets giant. If code is at all similar in the coming century to what it is now, super-smart nanobots will run for nanoseconds between crashes. The fact that software doesn’t follow Moore’s Law is the most important factor in the future of technology.” DARPA has similar concerns. It is fundamentally rethinking how computers work. As Col. Tim Gibson, a program manager for DARPA’s Advanced Technology Office, put it, “You go to Wal-Mart and buy a telephone for less than $10 and you expect it to work. We don’t expect computers to work; we expect them to have a problem. If a commander expects a system to have a problem, then how could he rely upon it?” In fact, Lanier sees full global employment as the main virtue of increasingly crappy giant software. The only solution will be “the planet of the help desks.” Everybody on earth will have to be employed taking phone calls giving advice on how to make the stuff work.
There are a host of reminders of the limits of technological prognostication. In 1950, in the article “Miracles You’ll See in the Next 50 Years,” Popular Mechanics claimed that in the year 2000, eight-room houses with all the furnishings, completely “synthetic in the best chemical sense of the term,” would cost $5,000. To clean it, the housewife would simply hose everything down. Food “out of the reach of any Roman emperor” would be made from sawdust and wood pulp. Discarded rayon underwear would be made into candy. Spreading oil on the ocean and igniting it would divert hurricanes. The flu and the common cold would be easily cured. The rooftop family helicopter would accomplish voyages of over 20 miles, including much commuting. For short trips, the answer would be one’s teardrop-shaped, alcohol-burning car. Popular Mechanics didn’t get it all wrong, of course. They noted that the telegraph companies were hitting hard times in the year 2000, because of the fax machine.
Nonetheless, there are some distinct categories into which bad predictions fall:
• The enterprise turned out to be a lot more complicated than it sounded. This is why we don’t have robotic maids, or electricity from nuclear fusion, or an explanation for what causes cancer.
• The cost/benefit ratio never worked out. This is why we don’t have vacation hotels in orbit.
• The future was overtaken by new technologies. This is why automotive standard equipment does not include CB radios.
• Bad experience inoculated us against the plan. This is why there are so few new nuclear fission power plants.
And most important:
• Inventors fundamentally misunderstood human behavior. This is why we have so few paperless offices.
“We should start from the point of view that it is best to make the assumption that we know less than we think we do about reality,” Lanier says. “It’s hard to know for sure—it’s a guess—but probably there’s a lot more to reality than we think. If you think a human is just like a naturally occurring technology that’s almost understood—. If you think that a human is something that you just have to figure out a few little things about, but basically, the underlying theory about it is coming together and all you have to do is trace those genes and proteomics and a little bit of stuff about how neural networks work and in about another 20 or 30 years, basically, you’ll have it nailed—. If you believe that that would be a complete description of what a human is—. There is this danger that you might have missed something and you have reduced what a human is.”
Needless to say, his peers pillory Lanier for his heresy. He is the one guilty of linear thinking, they believe. In the near future we will not so much write software, laborious line by laborious line, as grow it—reverse engineering the techniques we find in nature and adapting them to our software needs. “If we are still plunking around with software in 2012 or 2015, that would be a really bad sign for people who expect a real-soon-now Singularity,” says Vernor Vinge. If, however, you start seeing large networks reliably coordinating difficult tasks such as air traffic control by learning from their experience, or parallel processors behaving like biological cells, The Curve will be on track to change society, he says. Kurzweil is hurt that anyone would compare his elaborate methodology to harebrained rabbit-out-of-the-hat predictions from the past. He scoffs at the notion that software is not improving. He points out that his company’s voice-recognition software in 1985 cost $5,000 for a 1,000-word vocabulary. By the turn of the century it cost $50 for 100,000 words, and the newer software was much more accurate and easier to use. He acknowledges that software is not advancing as fast as hardware, but he estimates its value doubles every six years—still an exponential increase. He also accuses Lanier of “engineer’s pessimism.” That suggests Lanier is simply displaying the melancholy naysaying of someone who can’t face another programming deadline. It is also a subtle slur. Calling Lanier an engineer suggests that he is not so much a scientist, much less a visionary, as he is a cubicle-inhabiting code monkey. Lanier replies, hey, these guys talk a great game and wave their arms. I actually do this stuff. If they can write the software, they can prove I’m wrong. That retort suggests some people with hefty credentials are dilettantes and poseurs. This altercation goes around and around.
Lanier sees more virtue in measuring progress through the second ramp—moral improvement. “I have to admit that I want to believe in one particular large-scale, smooth, ascending curve as a governor of mankind’s history,” he says. “Specifically, I want to believe that moral progress has been real, and continues today.” Should you start with the revolutionary proposition that “all men are created equal” in 1776, Lanier suggests, you can then plot the graph of increasing dignity and autonomy through the abolition of slavery in the United States with the Civil War, women gaining the right to vote, the abolition of legal racial discrimination with the civil rights struggle, American empathy with those with whom we were supposed to be at war in Vietnam, the widespread acceptance of the sexes being treated equally, the breaking down of legal barriers against gays, and now an increased insistence that animals are not machines but feeling beings who should not be made to suffer gratuitously at the hand of humans. “You could plot all these on a graph and see an exponential rate of expansion of the ‘circle of empathy,’” Lanier says.
This empathy notion is that people draw a mental circle around themselves. Inside the circle is everyone we care about and for whom we have deep compassion and understanding. Outside are the ones for whom we don’t. “Most people, when they’re young and idealistic, tend to want to draw the circle pretty large,” Lanier observes. “Indeed, it would be lovely to draw it really, really large, to be able to live life in such a way that one caused no harm at all.” The problem is, if you draw the circle too large, you starve. If you try to kill no living creature, what about those bacteria? If you say, “Okay, not bacteria, but I’ll try not to kill insects,” well, what about those bugs you might find in your flour? The point is that you have to set some limits. Otherwise, universal empathy “takes so much energy that you can’t do very much else,” he says. There is also, of course, the opposite hazard of drawing the circle so small that you cut off people who are important to making you who you are.
Lanier ultimately finds the circle of empathy troublesome as a measure of The Prevail Scenario. For one thing, the technological elite is trying to co-opt it. Those who worship the idea that computers are becoming sufficiently smart to be a successor species to humans would have you believe that soon we will be morally obligated to bring silicon beings inside our circle of empathy. Lanier thinks that is perilous hogwash. He thinks it cheapens the standing of humans inside that circle.
He also thinks that focusing on a process of increased morality is dangerously narcissistic. That’s “the tragedy of religion and the tragedy of most utopias,” he says. “If your utopia is based on everybody adhering to some ideal of what is good, then what you’re saying is, ‘I know what is good, and all of you will love the same goodness that I love.’ So it’s really ultimately about you.” Others will be good your way or be tied to a stake surrounded by kindling.