Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human

by Joel Garreau


  When pressed, Fukuyama acknowledges that human nature has not been static over all these millennia. It has evolved. Christianity, for example, was an effort at transcendence that made a difference. “Look at the difference between the Christian world and the Roman world in terms of things like sympathy, compassion, the prevalence of cruelty,” he says. “I mean, the Romans were unbelievably cruel. Basically, if I’m powerful I’m going to get all the women I want and all the money I want and the hell with you.”

  He recalls a passage in Democracy in America not two centuries ago in which Alexis de Tocqueville quotes a letter from a French noble lady, Madame de Sévigné, dated October 30, 1675. The lower classes in Brittany have just staged a revolt against the imposition of a new tax, and they are being put down. Madame de Sévigné is very cultured; she sponsors a salon. She’s writing to her daughter and says well isn’t it marvelous that you seem to have kissed every lad in Provence, and then she immediately goes on to give the news from Rennes. It seems old men, women about to give birth and children are “wandering around and crying on their departure from this city, without knowing where to go, and without food or a place to lie in. Day before yesterday, a fiddler was broken on the wheel for getting up a dance and stealing some stamped paper. He was quartered after death,” she reports cheerfully, “and his limbs exposed at the four corners of the city.” Then she immediately goes on to say what a wonderful day she’s had with Madame de Tarente. Soon after, she writes, “You talk very pleasantly about our miseries, but we are no longer so jaded with capital punishments; only one a week now, just to keep up appearances. It is true that hanging now seems to me quite a cooling entertainment.”

  Tocqueville says, “It would be a mistake to suppose Madame de Sévigné, who wrote these lines, was a selfish or cruel person; she was passionately attached to her children and very ready to sympathize in the sorrows of her friends; nay, her letters show that she treated her vassals and servants with kindness and indulgence. But Madame de Sévigné has no clear notion of suffering in anyone who was not a person of quality. In our time”—a century and a half later; he wrote this in the 1830s—“the harshest man, writing to the most insensitive person of his acquaintance, would not venture to indulge in the cruel jocularity that I have quoted; and even if his own manners allowed him to do so, the manners of society at large would forbid it. Whence does this arise?” Tocqueville attributes it to the transformative notion that “all men are created equal,” especially as the idea evolved in American democracy. “In democratic ages men rarely sacrifice themselves for one another, but they display general compassion for the members of the human race,” he writes. “At the time of their highest culture the Romans slaughtered the generals of their enemies, after having dragged them in triumph behind a car; and they flung their prisoners to the beasts of the Circus for the amusement of the people.” Even Cicero does not see barbarians as belonging “to the same human race as a Roman. On the contrary, in proportion as nations become more like each other, they become reciprocally more compassionate, and the law of nations is mitigated.”

  This is one of Fukuyama’s big arguments against changing human nature. It threatens the optimal “end of history” of modern capitalistic democracy that he firmly believes suits humans better than any other existing system.

  Okay, so cultural evolution works, I tell him. We can learn from experience and pass that learning on to our descendants. Fukuyama acknowledges that. He even points out that humankind’s constant effort to fix its shortcomings is what drives human history. So suppose we modify ourselves through technological evolution. The problem with that would be what?

  It’s wrong, he replies, to assume that technology always produces positive social outcomes. The cotton gin was bad, for example. In the late 1700s, slavery was becoming unprofitable in America. It might soon have waned. Eli Whitney’s clever invention, however, made lucrative the use of slaves to harvest cotton. So bondage expanded. The ultimate result was the bloodiest conflict in American history, the Civil War. Today, “you can’t have modern democracy,” says Fukuyama, “unless you have this basic belief in equality, which means that you should empathize with suffering and feelings of other people and recognize their rights as equal to your own.” His concern is that the divisions between The Enhanced, The Naturals and The Rest may be so profound as to make past ruptures over race and religion seem quaint and paltry. If wealthy parents figure out a way to increase the intelligence of all of their descendants, “we have the makings not just of a moral dilemma but of a full-scale class war,” he writes. What’s more, “human nature is what gives us a moral sense, provides us with the social skills to live in society, and serves as a ground for more sophisticated philosophical discussions of rights, justice, and morality.” He’s terrified that The Enhanced, in time, “will look, think, act, and perhaps even feel differently from those who were not similarly chosen, and may come in time to think of themselves as different kinds of creatures.” He sees that separation as easily leading to people getting “off their couches and into the streets . . . actually picking up guns and bombs and using them on other people.”

  But suppose we end up using technology to expand our circle of empathy by increasing contact with people all over the world? I ask Fukuyama. Why are you so sure we can’t improve on human nature?

  “I think the answer is no, we’re not sure that we can’t. But I would say there’s a high probability of our screwing this thing up,” he replies. “Certainly no one would say that we want more hatred. But if you think about things like anger and the kind of violence and pride and the responses that lie behind a lot of acts of violence, it actually is all in the service of defending norms of communities. So the question is whether you can actually intervene to dampen that emotional response in ways that won’t undercut your ability to actually defend your community. If you could get rid of just random, pointless violence but not directed, necessary violence—if you think that you’re good enough to figure out what the sources of one are, and not the other—then be my guest. I just think it’s so likely that we’re going to screw this thing up. We just don’t understand how complex these interdependent feelings are.

  “Even something like the elimination of pain and suffering, you know. This is the argument that’s the most difficult to make. But I think it’s ultimately the most critical one. There’s something about the experience of pain and longing and anxiety and all of these things that our therapeutic society is trying to get rid of. It is somehow necessary to our self-understanding of what we are as human beings. I mean, you can’t have courage without risk. You can’t have real compassion or sympathy without the personal experience of pain.

  “I’m not sure that we would be better off as gods. For example, I presume that one of the attributes of being a god is not ever having to worry about your own mortality. That seems to me a perfectly good case of something that every individual would wish for but which is going to be disastrous for society as a whole.” He denies the possibility that we might gain new wisdom with age. To the contrary, he doesn’t think immortals will ever have a new idea again. He believes the only way new ideas get accepted is, “literally, people dying off.” This is an intriguing hypothesis. If immortals are not capable of innovation, that at least does solve the problem of The Curve’s ever-exponentially-increasing technological change—unless robots learn to dream.

  Fukuyama fears a world in which “you’ve got two-thirds to three-quarters of your population beyond the age of sexual interest, very rigid views, kind of fundamentally unable to adapt in certain basic ways to changes in what’s going on in the world.” Fukuyama, who was born in 1952, can’t see the possibility that people might enjoy having a series of new lives. He doesn’t like the fact that with the current extension of age, a smaller and smaller portion of our lives is concerned with the raising and socialization of children. He thinks the more time we spend raising children, the better humans we become: “It plays a role in the socialization of the parents.” He also doesn’t like the idea that “sex becomes a fairly minor part of life. There’s all this political correctness about there’s no reason why people in their 70s can’t be sexy. Total bullshit, you know. I would prefer to live in a more natural kind of society. I think it’s nice to look at young, sexy women.”

  Apart from the utilitarian aspects of this outcome, Fukuyama sees a moral issue in immortality. “The deeper issue is, can people conceive of dying for a cause higher than themselves and their own fucking little petty lives? I mean, can they think of dying for God or for their country or for their community or anything beyond themselves? I think that the very aspiration is wrong—this aspiration to want to live forever—because your own petty life trumps all other values, and I think that any traditional notion of transcendence began with the notion that the continuation of your personal human life is not the highest of all goods. No animal is capable of formulating an abstract cause to die for.”

  You’re not going to like the next 100 years of your life, I tease him.

  “Well, no male in my family has ever lived past about the age of 75, and I’m not expecting to, either.”

  What if you get double-crossed by advancing technology and last for a very long time?

  “I’m not sure that I’d be happy about that. My mother had a stroke about six or seven years before she died, and she was really never the same person after that. When my father died, he was in a hot bath in Japan and he was 73. Just had a heart attack. I would much rather go the way my father did than the way my mother did. I wouldn’t want the extra 10 years.”

  Of course, the real problem Fukuyama has with The Hell Scenario is not whether it is persuasive and realistic and horrifying. That’s not a hard case to make. But what do you do to prevent it?

  “It has been a long time since anyone has proposed that what the world needs is more regulation,” he writes. That is exactly what he proposes, however, going so far as to insist on a regulatory regimen covering the entire globe. He thinks little of scientific self-regulation, arguing that there are too many greedy people chasing too much money to leave it up to scientists to regulate themselves. “Science cannot by itself establish the ends to which it is put. . . . Only ‘theology, philosophy or politics’ can do that.”

  He dismisses the notion that the GRIN technologies are beyond control. To do it, though, he thinks we will need new institutions, reaching into the internal workings of China, India, Japan, Korea and Europe, as well as the United States, to bring the power of public opinion to bear on technologies he views as offering an unprecedented threat. That might happen, he says, only after enormous public outcry caused by, for example, hideously deformed babies—products of experiments gone horribly wrong. We might need to suffer as much revulsion as we did after Hiroshima, he believes.

  BILL JOY AGREES with Fukuyama that a tragedy may be needed to bring action. His version of optimism is to hope that the wake-up call will be only a medium-sized catastrophe, killing millions. As opposed to some full-blown version of The Hell Scenario, such as a genetically altered virus eating the flesh of the entire species, or legions of tiny robots sucking the nutrients out of the entire biosphere, or there being worldwide class warfare between divergent kinds of humans, or super-intelligent machines arising that perceive humans either as pests to be destroyed or, perhaps worse, as pets.

  But what exactly do you do in response to the agonizing end of the human race so vividly projected in The Hell Scenario? If you find this scenario utterly persuasive, what do you do to defeat it? That remains the key issue.

  Joy has several valuable ideas. He believes scientists can and should regulate themselves, being deathly cautious about creating anything that can uncontrollably replicate itself. “Scientists do not believe they can do their work if they have to consider consequences,” he says. “But such free passes are no longer sensible in the age of self-replication. Scientists and technologists must take clear responsibility for the consequences of their discoveries. That this will slow the pace of discovery is unfortunate. But there is no alternative. It is not sufficient to have great science, to have great and repeatable laboratory results. What we need is great results in the world.” Joy believes that “market forces should replace regulation. Companies that wish to make personalized drugs can take legal and financial responsibility for the outcomes.” He wants liability law to put a cost on catastrophic risks. “We don’t need the costs to be perfect. If they are at least roughly proportionate to the magnitude of the true risk, some very dangerous things, to our great relief, will become uneconomic. There will still be rogues, but this is a game of risk reduction, not risk elimination, and the markets can provide us a great leg up.” He also believes we need to recognize that information is now the same thing as a physical object. If you view an organism as so dangerous as to require P4 containment—the highest level, complete with airlocks, moon suits, double-door autoclaves and liquid waste sterilizers—then keep information about that organism under the same kind of wraps. “There is no reason to publish the plague genome to everybody. You can publish it to people who need to know it. You wouldn’t sell them the material. If the material is the same as the information, then why give them the information? That’s common sense. It’s just in two different forms. The idea that everything ought to be available to everybody is foolishness. The reason is we’re scientists and we’ve always published and if we don’t publish, blah, blah, blah, blah. That’s bullshit! A realistic view of the world would say that people are evil and people make mistakes and we ought to use common sense. These technologies were created by large groups. An individual couldn’t have created bioengineering, and so the group of people who created it has a responsibility for the way in which it’s used. A directed attack against a particular important node in the network can cause incredible havoc. Slow the thing down to give yourself some time. There are some messes we can’t clean up.”

  Nonetheless, when pressed, no matter how convincingly Joy portrays his Hell Scenario, he finally dribbles off when it comes to conquering it. “We should relinquish stuff that we decide is too dangerous for us—which is like a tautology. That’s a definition of sanity—that you don’t do what is fatal to you,” Joy says.

  He’s right on several counts, one being that it is a tautology, a needless repetition of an idea in different words. How do you decide what’s fatal? Who decides what’s fatal? What happens if there is a disagreement? Especially if, say, Asians, Americans and Europeans have widely diverging views? Nuclear weapons are easy to control compared to the GRIN technologies. Few people want nuclear war. Many people, however, want healthy, smart, athletic, beautiful, long-lived children.

  Even as distinguished a describer of The Hell Scenario as Martin Rees runs into problems figuring out what to do about it. Rees is the “astronomer royal” of the United Kingdom—sort of a poet laureate to the stars. From this lofty vantage point, Sir Martin has produced the most far-reaching version of The Hell Scenario imaginable. He can even see that “experiments that crash atoms together with immense force could start a chain reaction that erodes everything on Earth; the experiments could even tear the fabric of space itself, an ultimate ‘Doomsday’ catastrophe whose fallout spreads at the speed of light to engulf the entire universe.” In his book Our Final Hour: A Scientist’s Warning: How Terror, Error and Environmental Disaster Threaten Humankind’s Future in This Century—On Earth and Beyond, Rees vociferously makes the case that “technical advances will in themselves render society more vulnerable to disruption.” He puts the odds of our species surviving to the end of the 21st century at no better than 50-50.

  Chapter by chapter, Rees catalogs everything that could go colossally wrong in this century. It’s hard to believe he’s missed anything. Climate change, asteroid collision, flesh-eating viruses assembled and released by madmen, genetics run amok, nanobots run amok, machine intelligences run amok—it’s all in there. So are quite a few exotic catastrophes that could theoretically result from experiments in Rees’ specialty, physics—such as the universe tearing mentioned above.

  As expert and devastating as his recitation is, Rees has difficulty with the issue of what to do about these doomsdays. He does not like risk taking at all. Take our world today. If it were up to him, we wouldn’t be living in it. Given the risks of a nuclear exchange during the Cold War, Rees flat out says he would “rather be red than dead,” in the words of the old slogan. “I personally would not have chosen to risk a one in six chance of a disaster that would have killed hundreds of millions and shattered the physical fabric of all of our cities, even if the alternative was a certainty of a Soviet takeover of Western Europe,” he says.
