Falter: Has the Human Game Begun to Play Itself Out?


by Bill McKibben


  But, in fact, both mate selection and parental pressure come with strong limits built in. You can spend a great deal of time looking for the spouse you think will provide your child with the best possible genes, but in the end all you can do is create a set of possibilities, just change the odds some. Nature works within the borders imposed by the genes belonging to the parents; the outcomes are not guaranteed. Even if you’re using the PGD technology we discussed, where fertility technicians can help parents create several embryos and then choose the one they like the most, you live within the bounds imposed by your particular genetic codes.

  As for nurture, its limits are almost the point, given that people can and do resist their parents’ plans for them. For many people, this rejection becomes the turning point in their lives. Rebelling against the wishes and hopes of your parents is how a great many of us define who we are. It may be hard, and it may be painful, and some people may never manage it. And some never need to, because their parents were wise and gentle enough to help them down a congenial path. But it’s not impossible.

  Whereas the point of CRISPR, if used for germline engineering of embryos, would be to replace chance with design. Because the parents would no longer be playing the odds, and because no child can rebel against a protein.

  And if you think about it this way, you soon realize that this is the most antilibertarian technology ever devised. Yes, it increases the ability of parents to make choices. But only by turning the object of their choices, their child, into something we’ve never seen before: a human built to spec, designed (that is, forced) to be a certain way. Her parents, sitting there in the clinic with their Visa card in hand, will make a series of choices that will then play out over her lifetime and, because those choices will be heritable, over her children’s lifetimes, and yea unto the generations. This is control of a kind that tyrants only dream about.

  Consider even the early kinds of small changes that would-be baby designers want to target. Though enhanced intelligence is a common goal—“it’s not much fun being around dumb people,” in the words of James Watson—and though CRISPR advocates say the technique “can in principle be used to boost the expected intelligence of an embryo by a considerable amount,” it may actually be hard to get at, as intelligence seems to be spread across a wide array of genes. “Each one accounts for such a small proportion of variance, they are hard to pinpoint,” as Steven Pinker explains.4 Other things are easier: Julian Savulescu describes a variant of the COMT gene associated with altruism, and an MAOA gene variant linked with nonviolence.5 The gene for the dopamine receptor D4 (in particular, the “hypervariable coding in its third exon”) seems linked straight to mood, as certain variations make people more likely to seek out novelty and to answer yes to statements such as “Sometimes I bubble with happiness” or “I am a cheerful optimist.” Other individual genes are also clearly linked to obvious physical traits: MSTN produces “big, lean muscles,” Harvard’s George Church has noted. When researchers tweak that gene in pigs, they get “double-muscled” swine that “would make body-builders jealous,” he said.6

  And, of course, as our power to transform children accelerates, it may get spookier. Gregory Stock, the former head of UCLA’s Program on Medicine, Technology and Society, offered a set of predictions years ago, at the dawn of the genetic manipulation era: “People will be inclined to give their children those skills and traits that align with their own temperaments and lifestyles. An optimist may feel so good about his optimism and energy that he wants more of it for his child. A concert pianist may see music as so integral to life that she wants to give her daughter greater talent than her own. A devout individual may want his child to be even more religious and resistant to temptation.”7 Does this sound absurd? We can put someone in an MRI and see what portions of his brain light up when he prays. In the early summer of 2018, researchers at Columbia and Yale announced they’d discovered the “neurobiological home” for spirituality somewhere in the parietal cortex, directly behind the frontal lobe.8

  “The best humans have not been produced yet,” Michigan State’s Stephen Hsu insists flatly. “If you want to produce smart humans, nice humans, honorable humans, caring humans, whatever it is, those are traits that are related to the presence or absence of certain genes and we’ll have much finer control over the types of people that are born in the future through this.”9

  Let’s assume we can. Even if, at first, we’re limited to relatively simple shifts, there’s every reason to think that the power will grow swiftly—as Ray Kurzweil points out, it took seven years to decipher the first 1 percent of the human genome, and then just seven years more to finish the job, because the rate of understanding kept doubling. (Seven doublings take you from 1 percent to more than 100 percent.) “Everything to do with information technology is doubling every 12 to 15 months, and information technology is encompassing everything,” he says10—including, of course, our ability to design our kids.

  So—and here we reach the crucial point of this whole discussion—what does it feel like to be that kid? Let’s say it works well. Let’s say her parents chose to make her more optimistic, “sunnier.” Maybe they were able to add a few IQ points—not a genius yet, but still. And an extra dose of EPO, so her longer, leaner muscles wouldn’t tire so easily.

  Here’s the first thing it feels like: disconnected.

  Because time doesn’t stop. You get one chance to improve your child, there in the fertility clinic before the egg is implanted, and then she’s stuck for the rest of her life with whatever enhancements you’ve selected. Meanwhile, science marches on. (Fast, fast. In the winter of 2018, a company called Synthego announced that it had figured out how to accelerate CRISPR research so that scientists don’t need to “spend weeks” organizing their modifications.)11 So, by the time your next kid comes along, a year or two later, our ability to manipulate the genome may have doubled. Now you can order up a child with a fancier package of improvements, the human equivalent of a moon roof and leather seats. And so, who is child number one? She’s Windows 8, she’s iPhone 6, and so on, forever. Her younger brother is smarter, sure, but by the time he’s twenty-four and looking for work? The twenty-one-year-olds are going to have an edge, no?

  Think about how lonely this feels. On the one hand, you’re no longer really related to your past. Current humans have changed so little over the millennia that, say, Stonehenge still makes us feel something. It was created by creatures genetically very much like us, creatures who processed dopamine the same way we do. They are much more like us than our grandchildren would be, should we go down this path. But those modified grandchildren will also no longer be really related to their future. They’ll be marooned on an island in time, in a way that no human being has ever been before or will be again. When we engineer and design, we turn people into a form of technology, and obsolescence is an utterly predictable feature of every technology we’ve ever seen. For a few years, you’re more useful than any humans who’ve ever come before, and then you’re more useless.

  But that’s just the beginning of the loneliness. With your purchase you will have installed into the nucleus of every cell in your child’s body a code that will pump out proteins designed to change her. For a few years that presents no existential problem; she’s just chugging along. But then comes adolescence, the moment when we begin to seriously question ourselves, when we try to understand who we are. That’s our great task as human beings, and now it can’t really be done. She’s feeling happy and optimistic? Is that because of some event, some new idea of herself—or is it because she’s been constructed to feel that way? How would one know? Every journey of self-discovery would end, ultimately, in the design specs from the fertility clinic. They’d be, in essence if not actuality, the first documents in the baby book and the last testament.

  She works hard and takes pride in her achievement—straight As! Why the pride, though, when it’s just what she was programmed to do? She takes up running and, holy cow, can she move! Those long, lean muscles never seem to run out of oxygen. But what does that teach her about herself, beyond that she was designed that way? I doubt if Lance Armstrong earned any insights into his character (beyond “I’m a fraud”). In that sense, my athletic career has been far more fruitful than his.

  Even the parents seem cheated in this scheme. I take great pride in my daughter Sophie’s progress through the world, even though her mother is far more responsible for it than I am, and even though neither of us is all that responsible. But we can maintain some sense that our devoted care—all those books read, hikes hiked—helped make her the smart and sunny person she is. Yes, like all of us, she is a creature of her genes, but at least those genes weren’t designed to produce a certain outcome. It’s one thing to understand that you are who you are in part because of your genes; it’s another to understand that you were specifically engineered for a certain outcome. The randomness of our current genetic inheritance allows each of us a certain mental freedom from determinism, but that freedom disappears the day we understand ourselves to be, in essence, a product. Sometimes we need to engineer ourselves: hence Prozac. But you can stop taking Prozac. You can’t turn off the engineered dopamine receptor. That’s you, and you will never know yourself without it. As climate change has shrunk the effective size of our planet, the creation of designer babies shrinks the effective range of our souls.

  * * *

  And in return we get … what? In the best of worlds, where everyone has access to this technology, we’d get more intelligence, more athletic ability. That sounds good. For at least a century, in our high-consumer paradise, we’ve devoutly believed that more is better. For some things, that seems to be true: My phone has more memory; therefore, it’s superior. My camera captures more pixels; therefore, hooray! With humans, though, the “therefore” is almost certainly wrong.

  I’m assuming that, for many of us, happiness is one goal of our own personal human game. We actually have a fairly good idea of what makes human beings happy, thanks in large part to Mihaly Csikszentmihalyi, the longtime head of the psychology department at the University of Chicago. Back in the 1960s, he was studying painters and noted the “almost trance-like state” they entered when their work was going well. They didn’t seem to be motivated by finishing the painting, or by the money they’d get for selling it. It seemed to be the work itself that spurred them on, even in the face of hunger or fatigue.

  To follow up on this clue, Csikszentmihalyi and his colleagues developed a method they called “experience sampling.” They’d give their study subjects a pager and then buzz them at random intervals throughout the day. When the buzz sounded, they were supposed to quickly fill out a short form listing what they were doing and their mood. Such surveys yielded immense insights—for instance, if people were feeling chaotic and out of control in midafternoon, they were going to spend a lot of the evening watching TV, apparently because it reordered their lives. But the most remarkable finding, robust after many years, was that people were happiest when they were engaged in what Csikszentmihalyi came to call “flow”—that is, when, like those painters, they were fully engaged, and at the limit of their skills. A person in a state of flow has neither less challenge than she can handle, nor more. So, if you’re a beginning rock climber, a single boulder can provide you with enough challenge to become fully absorbed; once you master it, you need a steeper wall. Dancers require choreography that they can actually perform; basketball players require opponents good enough to test their skills. It’s “a stretching of oneself toward new dimensions of skill and competence,” Csikszentmihalyi said.12

  No one can do this all the time, of course, hence the need for The Bachelorette and a bottle of beer. But it’s what defines us at our best.

  And so it should sting the Kurzweils of the world to grasp that you can’t make a more realized human being by giving him extra talent. The greatest cross-country skier on earth doesn’t get more out of a race than I do, even if he finishes it in half the time. As long as I’m fully engaged, the world drops away—and the point is the world dropping away. If you could engineer a rock climber to have stronger fingers and no fear of heights, she would be able to climb more routes than she can now. But so what? She wouldn’t get extra satisfaction from her new talent, because the satisfaction comes from being at the edge of her abilities. In fact, you might complicate her life considerably, because she’d have to go farther afield to find cliffs big enough to match her souped-up abilities. If you were eventually able to engineer her to the point where dashing up Mount Everest presented no great challenge, you would have robbed the entire exercise of its point. Flow doesn’t increase if you have more ability; it simply requires challenge sufficient to your ability.

  We are already capable of being as absorbed and engaged as we ever could be. We’re good enough.

  17

  One reason that techno-utopians don’t worry about the loss of human meaning is that they’re not particularly attached to humans.

  There are, to be sure, plenty of doctors hoping for new ways to treat human suffering. But the streak of misanthropy that runs through the conversation of the digital and technological elite is hard to miss: Human brains, the artificial intelligence pioneer Marvin Minsky once explained, are simply “machines that happen to be made out of meat.”1 Robert Haynes, president of the Sixteenth International Congress of Genetics, said in his keynote address that “the ability to manipulate genes should indicate to people the very deep extent to which we are biological machines.” It’s no longer possible, he insisted, “to live by the idea that there is something special, unique, or even sacred about living organisms.”2 Indeed, in the spring of 2018 a University of Washington professor proposed using CRISPR to create a “humanzee,” a human-chimp hybrid, specifically to prove that people aren’t special. “The fundamental take-home message of such a creation would be to drive a stake into the heart of the destructive disinformation campaign” holding that people are different from the rest of creation, he explained.3 This kind of self-loathing permeates the whole subculture. Robert Ettinger, the first man to start freezing his fellow humans so they could be revived in a century or two, looked forward to a golden posthuman age, one where, among other things, we would be reengineered to achieve the “elimination of elimination.” He found defecation so unpleasant that he wanted “alternative organs” that would “occasionally expel small, dry compact residues.”4

  By this logic, if we are machines, then our destiny is to be surpassed by better machines. And we shouldn’t complain; we should welcome it. The approaching epochal moment when computers will be as smart as humans becomes just a meaningless way station. As the science writer Tim Urban points out, an AI “wouldn’t see human-level intelligence as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to stop at our level. And given the advantages over us that even human-intelligence-equivalent artificial general intelligence (AGI) would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.”5

  After all, AGI’s got better components. Already today’s microprocessors run about ten million times the speed of our brains, whose internal communications “are horribly outmatched by a computer’s ability to communicate optically at the speed of light,” Urban observes. And our human constraints aren’t going away: “the brain is locked into its size by the shape of our skulls,” while “computers can expand to any physical size, allowing far more hardware to be put to work.” Also, humans fatigue easily; also, our software can’t be as easily updated. And a group of computers can “take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.”6 James Lovelock, the British scientist who formulated the Gaia theory, insisted that robots would inevitably take over simply because it takes a neuron a second to send a message a foot in our brains, while an electron can speed along a foot of wire in a nanosecond. “It’s a million times faster, simple as that,” he said. “So to a robot, once fully established in that new world, a second is a million seconds. Everything is happening so fast that they have on earth a million times longer to live, to grow up, to evolve than we do.”7

  In other words, forget about the fact that the self-driving truck is going to take away your job. The practical risks we’re running pale next to the questions about human meaning: what on earth would be the point of people in this new world? The historian Yuval Harari provides one answer: we could devote our lives to playing ever-more-immersive video games. “If you have a home with a teenage son,” he writes, “you can conduct your own experiment. Provide him with a minimum subsidy of Coke and pizza, and then remove all demands for work and all parental supervision. The likely outcome is that he will remain in his room for days, glued to the screen. He won’t do any homework or housework, will skip school, skip meals, and even skip showers and sleep. Yet he is unlikely to suffer from boredom or a sense of purposelessness.”8 Steve Wozniak, cofounder of Apple, predicts that robots will graciously take us on as pets so we can “be taken care of all the time.”9 He added that he was now feeding his dog filet mignon, on the principle of “do unto others.” None of that is why we’re developing artificial intelligence. (We’re developing it to make money, one business at a time.) But that is what many of the people who look closely at it think may happen.

  You can already sense the beginnings of this shift. The average person now touches, swipes, or taps his phone 2,617 times a day.10 Eighty-seven percent of people with smartphones wake up and go to sleep with them. This is by far the largest change in the texture of everyday life during my six decades on earth; nothing else comes close. The artificial intelligences at the other end, the giant algorithms that run Google and Facebook and the like, by now know when we’re bored; they understand that we crave the positive reinforcement of “likes”; they know what to feed us to keep us clicking. As Jaron Lanier points out, because the business models of the social media giants prize “engagement” above all, they’ve learned to shovel negative information at us because “emotions such as fear and anger well up more easily and dwell in us longer than positive ones.… Fight-or-flight responses occur in seconds,” which is about the right time frame for Twitter, as opposed to, say, a novel or a record album.11 In the political realm, they’ve learned that we respond to an ever-greater sense of outrage; hence, Trump.

 
