Earth in Human Hands
Recently I was visited in Washington by my old pal Dorion Sagan, whom I have known since I was six years old. We spent an afternoon together looking through boxes of his father Carl’s papers in the Library of Congress. In a batch of correspondence with Isaac Asimov, Dorion found a 1976 letter from Carl discussing planetary protection. He wrote:
As with all questions of interplanetary quarantine or recombinant DNA we must ask not only what is the most likely theory but what is the probability that this theory is incorrect. The potential consequences of back contamination of Earth are so severe that I would require any theory to have a probability of being incorrect of 10⁻⁶ or less to be believable. Since it is unlikely that any existing theory can have anything approaching this level of reliability I believe the only responsible reaction in the face of our level of ignorance is caution.
Dorion found the use of 10⁻⁶ hilarious, asking me, “How the hell could you ever know if a scientific theory had a probability of 10⁻⁶ of being wrong?” For decades we’ve had a running joke about our very scientific fathers, how seriously they take their own ideas and their (we think, at times) excessive faith in quantitative solutions to intractable problems. So we spent the rest of that evening saying things like “This has got to be the best vodka tonic I’ve ever tasted. There is only a possibility of 10⁻⁶ that it is not.”
All joking aside, though, the letter quoted here demonstrates not arrogance but the opposite: a kind of humility. Despite the (perhaps slightly laughable) quantitative precision, this was just Carl’s nerdy way of saying, “We had better be very certain we’re right, and there is no way we can ever have enough faith in our scientific theories to risk the whole game.” I repeatedly find myself impressed by and grateful for the job that Carl and his contemporaries, the first generation of planetary explorers, did putting in place the planetary protection protocols we still follow today. This seems relevant to the debate about METI broadcasts. We are trying to agree on policy for a proposed scientific investigation that carries some risk. The level of danger, impossible to estimate precisely, seems quite low. Yet if we’re wrong, it could change everything.
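The logic behind such a stringent threshold can be put in back-of-the-envelope form (the notation below is mine, added for illustration; it does not appear in Carl’s letter). Call B the benefit of proceeding if the reassuring theory is right, C the cost if it turns out to be wrong, and p the probability that it is wrong. Proceeding only makes sense when the expected benefit outweighs the expected harm:

\[
(1 - p)\,B > p\,C \quad\Longrightarrow\quad p < \frac{B}{B + C}.
\]

When C is a planetary catastrophe and B is an ordinary scientific payoff, the tolerable p is driven toward zero; 10⁻⁶ is just a nerd’s way of writing “vanishingly small.”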
Rise of the Machines
The next point I made in San Jose was:
If we’re really concerned about advanced intelligences of unknown motivation that might harm or destroy us, then we should take note of the fact that right up the street here in Silicon Valley there are people who are spending all their waking hours trying to develop such entities.
As I discuss in chapter 5, some people feel that we are actually close to developing machines with intellect that will surpass that of humans. As with killer aliens, the idea of killer robots is so intertwined with tacky science fiction that it has an aura of inanity. However, if machines do indeed become conscious and autonomous, we cannot really know what their motivations, or their attitudes toward us, will be. If they get to the point where they have cognitive abilities superior to ours, and they apply this to building still-smarter machines, it could cause a “singularity” or “intelligence explosion,” and then all bets are off. As with aliens, we don’t need to postulate that they will be evil in order to imagine that they could represent a grave or existential threat. They may simply have needs and motivations we cannot fathom and that differ from our own. Or they’ll be so fantastically effective at misunderstanding our instructions that, as in all those fables about genies granting three wishes, we’ll wish we could take them all back.
I tend to be among those who do not fear the development of superior machine intelligence. I don’t doubt that artificial intelligence is going to change our world in surprising and unpredictable ways. Yet I don’t think machines are going to suddenly become conscious and decide to kill us off or enslave us. Nevertheless, as with the Fermi paradox, an honest assessment must include the admission that nobody knows—because nobody understands consciousness.
Although I am not terribly worried about either possibility, smart machines or superior aliens, it does seem that the problem of dangerous machines is potentially more imminent than possible “spirits from the vasty deep” we may summon from hundreds of light-years away. Yet the questions are comparable. We can’t prove they won’t present an existential risk; so how should we proceed?
In January 2015, Max Tegmark, an MIT physicist and founder of the Future of Life Institute, organized a conference in Puerto Rico on the Future of AI: Opportunities and Challenges, at which an international, interdisciplinary group of experts gathered to address the direction of AI (artificial intelligence) research, and the prospects for enhancing promising outcomes and avoiding pitfalls of a future “intelligence explosion.”
This conference was well timed, as there seems to be a recent sea change in the AI community. A critical number of researchers have realized that they may be close enough to their goal that dangers need to be addressed and, it is hoped, managed. Many who went into the field years ago with the simple goal of manifesting “human-level AI” as soon as possible are now turning to questions about how to do so safely. Rather than simply trying to make machines as smart as possible as quickly as possible, they are considering how to push their newly created intelligences in certain desired directions. This brings us back to Asimov’s laws of robotics, and some interesting technophilosophical questions such as how we might instill values in machines, what values really are and where they come from, and how you would code them algorithmically.2
At the Puerto Rico gathering, a consensus emerged that it was time to redefine the goal of AI research away from simply making artificial brains and toward making things that are going to be beneficial to society. All participants signed a letter that has since been signed by dozens more scientists, technologists, entrepreneurs, and scholars. It notes the rapid increase in machine capabilities and the likelihood that this increase will accelerate, leading to societal issues that must be anticipated and addressed proactively with new research directions. On the plus side, the letter notes,
The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.
The letter doesn’t dwell on the negative possibilities (for instance, destroying all of human civilization), but it does state that
We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.
The letter was accompanied by a list of research priorities to be funded initially by a $10 million donation from Elon Musk. Many of these are focused on the near-term problems arising from somewhat smarter machines: possible legal issues associated with self-driving cars and the economic disruption that will be caused by machine displacement of workers across many sectors.
Near the end, this letter finally mentions the possibility of an “intelligence explosion” and the range of opinions on how likely such an event really is. It then suggests, vaguely, that more research should be done to ensure that if it does happen, there might be some way for humans to maintain control and seek a positive outcome. When you finish reading this statement, you realize that all these bright and well-intentioned experts don’t really have a clue as to whether this could happen, or how to affect the outcome if it does.
I appreciate these efforts by the AI community to develop, in tandem with machine cleverness, the wisdom to guide these growing powers. Their efforts will no doubt lead to some interesting and worthwhile research results that will help to improve AI systems. Will they be sufficient to avoid the worst possible outcomes? Not in the worst-case scenarios. Even if we really knew enough to enact rigorous guidelines, I’m not sure to what extent those competing with other companies for profits, or those working for militaries trying to make sure their autonomous weapons outperform those of their enemies, would temper their efforts if they saw such constraints as limiting their opportunity for advantage.
I am encouraged, however, that many of the best minds in the field are attempting to guide their community toward responsibility. Yet, when I think it through, my lack of concern about a dangerous “intelligence explosion” does not come from confidence in the AI community’s ability to control truly sentient machines, or infuse them with values and goals that make them benign. I just think they are overconfident about their ability to make such machines anytime soon.
Nobody is seriously proposing a ban on artificial intelligence research, perhaps because it is obvious that such a ban would be impossible to enforce. Nonetheless, at least leaders in the field are discussing the questions, and thinking about ways to foster a culture of responsibility and to encourage safe best practices among those coming closest to success.
No Choice but to Choose
What is our role in this universe? Do we know enough about planets, and about ourselves, to be the shapers of worlds? In chapter 4, I discuss the problem of geoengineering and the related question of terraforming. Are these always bad ideas even when they might prevent mass extinction or allow biospheres to flourish? With these choices, philosophical and spiritual questions about what we should do are deeply entwined with technical questions about what we can do with, or to, planets.
Climate change is here, and there are some who, unwisely, advocate that we should go for the quick techno fix. Fortunately, among those who study geoengineering and climate policy, it seems that a consensus view is developing that these are last-resort options. We are still too ignorant, lacking even the knowledge to assess the risks adequately, but we need to keep doing research, and acquire that knowledge, because sooner or later—and I hope it’s much later—it will become necessary to do some high-tech geoengineering. There is no question that if we manage to survive and become long-term actors on this planet, someday we will want to deploy more active geoengineering as a defense against the dangerous natural capriciousness of planetary climate systems.
Picture the next several millennia. Intrusive geoengineering is a bad idea now but will someday be necessary. Intelligent machines are likely to become permanently integrated into human societies and Earth systems. Active SETI is also something that can be done well only by civilizations with a multimillennial outlook. With each of these issues (geoengineering, artificial intelligence, and METI), we’ll soon know a lot more. We need to keep studying these problems, and we need to take a long view. We have a lot to learn about our climate and cognitive systems. We are on the threshold of being able to study exoplanets and learn whether inhabited worlds, and possibly technically altered worlds, are common. Our listening technology is growing in such a way that even in twenty years we’ll have significantly more evidence that bears on whether there really is a Great Silence.
If we want to become a broadcasting civilization, we can only really do so effectively if we can move far beyond brief one-off stunts and begin a project that will have global buy-in and will last for millennia. Yes, it is hard for us to imagine that now. That reflects one of our biggest problems, but we will need to think and act this way both to manage ourselves well on this world and to start to reach out effectively to others.
It so happened that when we gathered at the AAAS for the symposium on active SETI, the subject of geoengineering was in the news because just that same week a much-anticipated report on climate intervention had been released by the National Academy of Sciences.3 The analogy between METI and geoengineering was stressed by our fifth panelist, and the only nonscientist, David Tatel, a federal judge on the U.S. Court of Appeals for the District of Columbia Circuit. Tatel was appointed in 1994 by President Clinton to fill a vacancy left by Ruth Bader Ginsburg.
How did a judge and lawyer with a background in civil rights law get involved in the METI debate? Well, as I discovered when he invited me to lunch in his expansive offices in the Federal Court Building in Washington, DC, he is a man of wide-ranging and eclectic interests. He has also been blind since 1972, which is neither here nor there, except it was fascinating to notice his use of alternative and innovative communications technology to read, write, correspond, and deliver public lectures. He certainly navigates this world more effectively than most of us do, and I did wonder, though I was too shy to ask, if his experience of doing so with a fundamentally different sensory palette might give him any ideas about or insight into the challenges of communicating with extraterrestrials, who will likely have their own, differently evolved range of senses.

Judge Tatel’s entrée into the world of SETI, it turns out, came about in a roundabout way, through his father’s service in World War II. At the end of the war, Howard Tatel and some colleagues managed to capture a large German radio antenna and used it to help establish American radio astronomy. Tatel Senior then developed some innovative engineering concepts that were used to build the large telescopes of the U.S. National Radio Astronomy Observatory in Green Bank. So the telescope Frank Drake used for Project Ozma was called the Tatel Telescope, and it was at a celebration for the fiftieth anniversary of this facility that Judge Tatel met several of the scientists involved in the current heated debate over active SETI. He kept in touch with them and, through his intellectual gregariousness, got involved in enough discussions on the subject to generate an invitation from Jill Tarter to represent a nonscientist, policy voice at the symposium.
In addition to geoengineering, Judge Tatel suggested, as analogies for the policy challenges of active SETI, both recombinant DNA research and laboratory studies involving dangerous pathogens, areas where scientists have struggled to balance the value of unfettered, curiosity-driven research and the free exchange of ideas against the threat of conjuring something truly destructive and potentially unstoppable. Especially given the self-reproducing, mutating, and adapting quality of organisms, in biotechnology there is a possibility of unleashing a genie that cannot be put back in the bottle.
There is no way to entirely enforce a worldwide ban on any research, even if that were seen as desirable, but there are ways to encourage best practices. In each of these areas the potential dangers have been recognized and widely discussed by international groups of concerned professionals. These processes were accompanied by temporary suspensions of government funding to slow down the research while guidelines were developed and adopted. Through collaborative conversations, the scientific community focused its intellect and devised agreements for how to proceed.
Judge Tatel suggested that geoengineering is the closest analogy to the problem of METI broadcasts. Although biotechnology is moving rapidly in the direction of DIY genetic laboratories, where any “maker” in a garage can cook up their own organisms, it still takes a pretty sophisticated government laboratory to work on a pathogen like H5N1 (Asian-origin avian influenza, or “bird flu”), which means there is still some possibility of enforcing meaningful guidelines for research. But geoengineering is more like METI in that a “rogue” experiment can be attempted by any rich person or group.
Tatel suggested that the National Academy geoengineering report could serve as an initial model for an approach to METI, and he finished his talk by quoting from the conclusions of the report:
Planning for any deployment of albedo modification would bring unique legal, ethical, social, political and economic considerations. Open conversations about the governance of albedo modification research could help build civil society trust in research in this area. If new governance is needed, it should be developed in a deliberative process with input from a broad set of stakeholders.
It is true that anyone can buy or build a radio telescope and start broadcasting, but it remains the case that the more powerful facilities, those with the power to significantly change Earth’s visibility, are controlled by larger governmental and academic organizations that could be expected to conform to widely accepted guidelines. As radio technology progresses, it will, as has biotechnology, move in the direction of empowering individuals who wish to do their own powerful experiments.
For this reason, there may be a window of time during which we can seek consensus. As the march of technology moves in a direction that would facilitate lawlessness and rogue actors, there will be perhaps two or three decades when the major broadcasting power is still locked up in larger institutions. Whether or not Vakoch is able to start his desired, controversial Arecibo transmissions, there is nothing on the drawing board that would immediately and drastically change Earth’s galactic visibility. Nobody on Earth is yet capable of broadcasting the powerful, continuous beacons that successful METI would likely require.
Depending on whom you talk to, that is a reason why we really don’t need to worry about this now—or why right now is a crucial time, an important window of opportunity for trying to get the global conversation going before new technological capabilities inevitably complicate the situation.
This sense of a window that may soon be closing is part of what motivates David Brin in his activism on this issue. Brin used his time at the AAAS podium to urge our community to start a process of inclusive international discussions about the wisdom of active SETI. He held up the “Asilomar process” as a model. This refers to the time, in the 1970s, when there was a lot of fear of recombinant DNA research and its growing power to create and possibly release dangerous, unstoppable agents. At that time, the DNA research community agreed on a voluntary hiatus, and a gathering was held at a conference center at Asilomar State Beach, in Northern California, to discuss potential hazards and regulations. Out of this came a set of guidelines for containment of some experiments, whereby those judged to have higher risk would require additional levels of containment. There was also a class of experiment (those involving deadly pathogens that could not be effectively contained) judged so risky that by consensus they were banned. These guidelines were incorporated into the culture, laboratory practices, and facilities of the burgeoning field and industry of biotechnology. It is widely believed that by effectively self-policing, the biotechnology community avoided government regulations that would have slowed or halted progress in research.