The important thing, Doudna stressed, was that CRISPR had in fact opened the door to precisely such enhancements. “Had this conversation occurred just a few years earlier, Sam and I would have dismissed Christina’s proposal as pure fantasy,” she said. “Sure, genetically modified humans made for great science fiction, but unless the Homo sapiens genome suddenly became as easy to manipulate as the genome of a laboratory bacterium like E. coli, there was little chance” of it actually happening. “Making the human genome as easily manipulable as that of a bacterium was, after all, precisely what CRISPR had accomplished.” CRISPR has already been used to change the metabolism of monkeys. Given the money at stake, “it seems only a matter of time before humans were added to the growing list of creatures whose genomes” were up for grabs.25
I’d hazard a guess that “Christina” will not be the last entrepreneur down this road. In fact, there are already players milling around the starting gate, many of them true heavy hitters from Silicon Valley. The best-known “consumer-facing” genetics company is probably 23andMe, founded by Anne Wojcicki. Anne’s father, Stanley, chaired Stanford’s physics department in the late 1990s; around that time, two Stanford graduate students named Sergey Brin and Larry Page were starting a thing called Google. In fact, they started it in Anne’s sister Susan’s garage. (Anne would later marry and divorce Brin; Susan is now the CEO of YouTube, owned of course by Google.) The company 23andMe is best known for its saliva test that unveils your genetics, though one of its patents envisions using this knowledge to help people, in the words of UC Davis’s Paul Knoepfler, “select a potential mate from a group of possible mates.”
Some of its competitors are pushing the envelope a little further. Take GenePeeks, a genetic research company whose main product, Matchright, examines DNA from you and your potential partner and estimates your chances of producing offspring with genetic disorders. Its cofounder and chief scientific officer is a Princeton professor named Lee Silver. If you’d like to know what’s in the back of his mind, he laid it out many years ago in a book called Remaking Eden. The first germline therapies, he predicted, would be performed to eliminate a few obvious diseases such as cystic fibrosis, and those early and compassionate interventions would cause “fears to subside.” (This is apparently what Dr. He intended with Lulu and Nana, though in the short term it seems to have backfired.) Silver envisions what comes next: a mother in a maternity room rejoicing in her new son. “I knew Max would be a boy,” she explains to visitors. “And while I was at it I made sure that Max wouldn’t turn out to be fat like my brother Tom.” A few iterations later and this time a mother is comforting herself during labor by leafing through a photo album of what her infant daughter will look like when she’s sixteen: “Five feet five inches tall with a pretty face.”26
“By the time scientists had employed CRISPR in primate embryos to create the first gene-edited monkeys, I was asking myself how long it would be before some maverick scientist attempted to do the same in humans,” Doudna writes. It was time for a “conversation,” she felt, and “given that this scientific development affects all of humankind, it seemed imperative to get as many sectors of society as possible involved. What’s more, I felt the conversation should begin immediately, before further applications of the technology thwarted any attempts to rein it in.”27 That makes sense to me. Clearly, CRISPR is a perfect example of what Ray Kurzweil meant when he said that exponential increases in computing power would change the world. It’s one instance, one of the most striking, of what that new power might produce. It couldn’t be more remarkable: a “word processor” for the DNA that is at our core.
So: what could germline engineering do to humans, and to the game we’ve been playing?
* * *
The advertisement writes itself: As we get better at germline engineering over the years, we could produce improved children. Their smiles would reveal broad rows of evenly spaced teeth, and of course they’d be smiling a lot because they’d be in a good, sunny mood. And why not, given that their fine-tuned brains would be earning them high grades. “Going for perfection,” as James Watson, the father of the genetic age, once put it. “Who wants an ugly baby?” Who indeed. (Of course, you might want to be a little careful here, as someone has to define “ugly.” Watson, for instance, also said, “[W]hen you interview fat people, you feel bad, because you know you’re not going to hire them,” and suggested further that germline engineering could be used to deal with the problem of “cold fish.”)1 We have giant industries based on the idea of what constitutes beauty, and libraries full of self-help books that point us toward particular personalities, so it stands to reason that many people will see this kind of genetic improvement as an obvious next step in our progress as a species.
In the first flush of enthusiasm about new technologies, though, we often overlook the possible drawbacks. For example, if you knew everything you now know about how the smartphone and social media were going to affect your life, and our society, would you still welcome them as enthusiastically as you did the first time you saw an iPhone or logged on to Facebook? That’s not a useful question at this point; we have the world we have, Twitter and all. But as we don’t yet quite have a world with germline genetic engineering, we should raise the questions now.
It’s not as if possible worries are buried very far down. Jennifer Doudna reports that in the years since she pioneered CRISPR, she’s had a series of nightmares, most notably one in which Adolf Hitler (with a pig face, “perhaps because I had spent so much time thinking about the humanized pig genome that was being rewritten with CRISPR around this time”) summons her to tell him about “the uses and implications of this amazing technology you’ve developed.”2
It’s never a good sign when even an imagined Adolf Hitler is interested in your work, but for the moment, let’s leave aside the specter of cloned soldiers in jackboots and concentrate instead on the more practical problems and immediate difficulties that could arise from human genetic engineering, or from the strong artificial intelligence that scientists say may be just around the corner.
* * *
It’s worth remembering that any new technology arrives in a world that’s already shaped a certain way. If it’s a powerful technology, it can either shake up that pattern or help set it in stone. So, for instance, we’ve seen that most of the planet is at a moment of maximal inequality right now. And we can say with some certainty that engineering your baby will be expensive. Even now, after many decades, IVF treatment for couples with fertility problems runs quickly into the tens of thousands of dollars, usually not covered by insurance. So, even a pundit with an unimproved IQ can confidently predict that this new technology will make inequality worse. “Since the wealthy would be able to afford the procedure more often,” Doudna points out, “and since any beneficial genetic modifications made to an embryo would be transmitted to all of that person’s offspring, linkages between class and genetics would ineluctably grow from one generation to the next, no matter how small the disparity in access might be.” (She’s generously considering how this will play out “in countries with comprehensive health-care systems,” which is a polite way of saying “not America.”) “If you think our world is unequal now,” she adds, “just imagine it stratified along both socioeconomic and genetic lines.”3
In truth, this objection is so obvious that the people who plan on carrying out this work don’t even bother pretending otherwise. Lee Silver, the Princeton professor who runs GenePeeks, said long ago that eventually “all aspects of the economy, the media, the entertainment industry, and the knowledge industry will be controlled by members of the GenRich class.” Meanwhile, “Naturals” will work “as low-paid service providers or laborers.” Before too long, he added, the two groups will be genetically distinct enough that they’ll have “no ability to cross-breed, and with as much romantic interest in each other as a current human would have for a chimpanzee.” Even before mating becomes impossible, he says, “GenRich parents will put intense pressure on their children not to dilute their expensive genetic endowment in this way.”4 The Oxford ethics professor Julian Savulescu, a proponent of human engineering whom we will meet again later, told an interviewer that, “in all likelihood,” the technology would exacerbate inequality. His solution: genetically improving the moral impulses of early adopters so that they would “make these technologies available to more people and reduce inequality.”5 This seems a fairly roundabout way of proceeding, though perhaps no odder than the proposal from some geneticists for a government-run lottery awarding tickets to genetically enhance your kid. (Call it “Charlie and the Baby Factory.”)
In fact, if one were genuinely worried about inequality—or indeed, if one were worried about disease in general, or happiness, or children—one wouldn’t spend much time and money on biology at all. Genetics plays a part in determining who we are and how our lives proceed, but as Nathaniel Comfort, a professor of the history of biology at Johns Hopkins, points out, “Decent, affordable housing; access to real food, education, and transportation; and reducing exposure to crime and violence are far more important.”6 Consider the experience of the writer Johann Hari, invited to a conference organized by Peter Thiel on depression, anxiety, and addiction. He was amazed to find that most of the participants were convinced that such problems were caused by “malformations of the brain.” When it was his turn to speak, Hari said, “As your society becomes more unequal, you are more likely to be depressed.” Humans, he continued, “crave connection—to other people, to meaning, to the natural world. So we have begun to live in ways that don’t work for us, and it is causing us deep pain.”7 If we wanted to somehow engineer better humans, we’d start by engineering their neighborhoods and schools, not their genes. But, of course, that’s not politically plausible in the world we currently inhabit, the world where “there is no such thing as society. There are just individuals.” If there are just individuals, that’s where you start and end.
* * *
The advertisements for ever-greater artificial intelligence write themselves, too: cars that drive you where you want to go, bartenders that mix perfect drinks. As with somatic gene repair for sick patients, there are uses for these new technologies that seem to make perfect sense: the specialized robots that are beginning the decades-long cleanup of the Fukushima reactors, for instance—when one of those robots emerges from the core, it must be “sealed in a steel cask and interred with other radioactive waste,”8 which you wouldn’t want to do with a human. People are building tiny homes with 3-D printers for hurricane refugees; autopilots fly passenger jets most of the time.
Increasingly, though, these technologies are about replacing people who are doing their work perfectly well; it’s just that machines can do the work more cheaply. Bricklayers, for instance: a sobering picture on the front page of the New York Times recently showed a bricklayer desperately racing like John Henry to match a $400,000 machine called SAM, for “semi-automated mason.”9 A pair of economists recently predicted that, by 2033, insurance underwriters would face a 99 percent chance of losing their jobs to computer programs. Sports referees faced a 98 percent risk of obsolescence, waiters a 94 percent chance, and so on. (Archaeologists were the safest, “because the job requires highly sophisticated types of pattern recognition, and doesn’t produce huge profits.”)10 Other researchers pointed out that the Rust Belt has already been so heavily automated that employment will actually drop less there than in places with big service industries. Number one: Las Vegas, which stands to lose 65 percent of its current jobs in the next two decades.11 So, if inequality worries you, just wait.
These practical losses come with practical gains, obviously: Driverless cars would make it theoretically possible to have fleets of dispatchable, roving electric vehicles that in turn could reduce traffic by 90 percent, free up city streets that would no longer need parking spaces, and save some of the lives lost each year in auto accidents. Also, you could go to a bar and have an extra beer without worry. Still, the transition will be remarkably wrenching. If you include part-timers, more Americans work as drivers than are employed in manufacturing jobs—in forty of the fifty U.S. states, “truck driver” is the single most common occupation.12 What are they going to do instead? Not become bakers—89 percent of them are expected to lose their jobs to automation by 2033, along with 83 percent of sailors. Wall Street is steadily shedding jobs because algorithms now execute 70 percent of equity trades; it’s great for those who remain, given that there’s ever more money to go into fewer pockets, but it does make you wonder if we might not be in the last era of high employment.
Tyler Cowen, described by BusinessWeek as “America’s hottest economist” and proprietor of the country’s most widely read economics blog, works in the same Koch-funded economics department at George Mason University where James Buchanan was once a star. His advice to young people is to develop a skill that can’t be automated, and that can be sold to the remaining high earners: be a maid, a personal trainer, a private tutor, a classy sex worker. “At some point it is hard to sell more physical stuff to high earners, yet there is usually just a bit more room to make them feel better. Better about the world. Better about themselves. Better about what they have achieved,” he counsels.13 The author Curtis White, in his book on robotics, concluded: “What survives of the middle class in the future will be a servant class. A class of motivators. A class of sycophants, whose jobs will depend not only on their skills but on their ability to flatter and provide pleasure for elites.”14 Kai-Fu Lee, the head of Sinovation Ventures, an AI venture capital firm, had a slightly sweeter take: “The solution to the problem of mass unemployment will involve ‘service jobs of love.’ These are jobs that AI cannot do, and that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit the doctor, mentoring at an orphanage, and serving as a sponsor at Alcoholics Anonymous.”15 Laying aside the question of just what it is you’re mentoring the orphans about—mentoring them, perhaps, to become orphan mentors in turn—one practical problem is that these don’t sound like very well-paid occupations. Lee suggests that high tax rates on the people running AI companies might suffice to make up the difference, although, as he points out, “most of the money being made from artificial intelligence will go to the United States and China,” so orphan mentors in the other 190 countries may be out of luck.
Not everyone thinks this will be a problem.
“People say everyone will be out of work. No. People will invent new jobs,” Ray Kurzweil told me.
“What will they be?”
“Oh, I don’t know. We haven’t invented them yet.”
Which is fair enough, and in truth, it’s as far as we’re likely to get with this discussion. This new technology will likely make inequality worse—perhaps engrave it in silicon and DNA. That’s worth knowing, but it doesn’t answer the question of whether we should proceed. To figure that out, we need to think through other, even deeper, practical problems that come with change at this scale and at this speed. For instance, the end of the world.
* * *
Long ago—way back in 2000—Bill Joy, then chief scientist at Sun Microsystems, wrote a remarkable essay for Wired magazine called “Why the Future Doesn’t Need Us.” Joy, who helped create the Berkeley version of the UNIX operating system, argued that the new technologies starting to emerge might go very badly wrong: fatal plagues from genetically engineered life forms, for instance, or robots that would take over and push us aside. His conclusion: “Something like extinction.”16 This was not enough to slow down the development of these new technologies—just the opposite: Joy was writing before CRISPR and back when human beings were still the best Go players on the planet—but it did establish a pattern. Some of the people who know the most about where we’re headed are the wariest and the most outspoken. In October 2018, for instance, Stephen Hawking’s posthumous set of “last predictions” was published—his greatest fear was a “new species” of genetically engineered “superhumans” who would wipe out the rest of humanity.17
Or consider tech entrepreneur Elon Musk, who described the development of artificial intelligence as “summoning the demon.” “We need to be super careful with AI,” he recently tweeted. “Potentially more dangerous than nukes.” Musk was an early investor in DeepMind, a British AI company acquired by Google in 2014. He’d put up the money, he said, precisely so he could keep an eye on the development of artificial intelligence. (Probably a good idea, given that one of the founders of the company once remarked, “I think human extinction will probably occur, and technology will likely play a part in this.”)18 “I have exposure to the very cutting-edge AI, and I think people should be really concerned about it,” Musk told the National Governors Association in the summer of 2017. “I keep sounding the alarm bell,” he continued, “but until people see robots going down the street killing people, they don’t know how to react, it seems so ethereal.”19 All the big brains were talking the same way. Hawking wrote that success in AI would be “the biggest event in human history,” but it might “also be the last, unless we learn to avoid the risks.”20 And here’s Michael Vassar, president of the Machine Intelligence Research Institute: “I definitely think people should try to develop Artificial General Intelligence with all due care. In this case all due care means much more scrupulous caution than would be necessary for dealing with Ebola or plutonium.”21
Why are people so scared? Let the Swedish philosopher Nick Bostrom explain. He’s hardly a Luddite. Indeed, he gave a speech in 1999 to a California convention of “transhumanists” that may represent the rhetorical high-water mark of the entire techno-utopian movement. Thanks to ever-increasing computer power and ever-shinier biotech, he predicted then, we would soon have “values that will strike us as being of a far higher order than those we can realize as unenhanced biological humans,” not to mention “love that is stronger, purer, and more secure than any human has yet harbored,” not to mention “orgasms … whose blissfulness vastly exceeds what any human has yet experienced.”22 But fifteen years later, ensconced in Oxford as nothing less than the director of the Future of Humanity Institute, he’d begun to worry a great deal: “In fairy tales you have genies who grant wishes,” he told a reporter for The New Yorker. “Almost universally the moral of those is that if you are not extremely careful what you wish for, then what seems like it should be a great blessing turns out to be a curse.” The problem, he and many others say, is that an intelligence greater than our own could develop “instrumental goals.”23