Farsighted

by Steven Johnson


  Think back to Meadow Lake in Queens and those fish struggling to find oxygen beneath the superbloom of blue-green algae. Those fish were, in a sense, stakeholders in the decision process. They were included as a meaningful variable in part because they play an important role in the ecosystem, which ultimately sustains life for human beings, but also because many of us believe they have some intrinsic right to life as a species on this planet, whether they support human needs or not. When the early Manhattanites decided to bury Collect Pond, no one mapped out the impact pathways on the ecology of Lower Manhattan. They just thought they would get rid of an increasingly polluted lake and build some new houses.

  Skeptics will argue that, yes, there are some environmental planners out there who are concerned with wetlands wildlife, but if you look at the planet as a whole, we are trashing it at an unprecedented clip. The last two centuries have clearly been the most environmentally destructive of any in human history: for every fish we preserved in Meadow Lake, there are a thousand species we have driven to extinction. Isn’t this clear evidence that we are making worse choices in the modern age?

  But the truth is, on a species level, we have been as destructive ecologically as our technology would allow for at least twenty thousand years, maybe longer. No doubt there were some preindustrial communities who factored the “balance of nature” into their collective decisions about what to eat and where to live. But for most of human history, we have been willing to sacrifice just about any natural resource if it aided our short-term needs. Consider the list of mammals driven into extinction during the first few thousand years that humans occupied North America, from roughly 11,000 to 8000 BC: mastodons, jaguars, woolly mammoths, saber-toothed cats, and at least a dozen other species of bears, antelopes, horses, and other animals. For most of our history, our carnage has been reined in far more by our technological limitations than by our intellectual or moral ones. We’ve always churned through everything our tools have allowed us to. We just have better tools now—if “better” is the right word for it—so we can do more damage.

  The fish in Meadow Lake, on the other hand, suggest a new kind of deliberation: the decision to preserve a species even if it provides little value to us, at least in the short term. People have been burying ponds since the Stone Age gave them tools to dig with. But contemplating the impact of nitrogen runoff on an algae bloom and how that bloom might starve the fish of oxygen—that is a new way of thinking.

  The fact that some of us continue to debate whether global warming is even happening—let alone what we should do about it—shows us that we’re still not experts at this kind of thinking. Yes, it does seem ominous that the United States is currently threatening to withdraw from the Paris climate accord. But we are very early in that particular narrative; the ending is not at all clear. So far, the Paris Agreement story is really the story of two distinct decisions: 195 nations signing the accord itself, and one temperamental leader promising to withdraw in a huff. Approached from the long view, which one looks more impressive? We’ve had impetuous leaders since the birth of agriculture; truly global accords with real consequences for everyday life are a new concoction.

  The fact that we sometimes seem incompetent at these kinds of choices is a sign that we are grading on a reverse curve: we have higher standards now, so it sometimes seems as though we’re less deliberative than our ancestors. But the truth is, both the spectrum of our decisions and their time horizons have widened dramatically over the past few centuries. The Aztecs and the Greeks could peer into the future as far as their calendars and their crude astronomy would allow them. They built institutions and structures designed explicitly to last centuries. But they never contemplated decisions that addressed problems that wouldn’t arrive for another fifty years. They could see cycles and continuity on the long scale. But they couldn’t anticipate emergent problems.

  We are better predictors of the future, and our decisions are beginning to reflect that new ability. The problem is that the future is coming at us faster than ever before.

  THE LONG VIEW

  How long could our time horizons be extended? As individuals, almost all of us will find ourselves contemplating at least a few decisions that by definition extend the length of our lives: whom to marry, whether to have children, where to live, what vocation to pursue. As a society, we are actively deliberating decisions with time horizons that extend beyond a century, in climate change, automation and artificial intelligence, medicine, and urban planning. Could the horizon recede even farther?

  Consider a decision that most of us probably do not, initially, at least, have strong feelings about either way: Should we talk to intelligent life-forms living on other planets? In 2015, a dozen or so science and tech luminaries, including Elon Musk, signed a statement that answered that question with a vehement no: “Intentionally signaling other civilizations in the Milky Way Galaxy,” the statement argued, “raises concerns from all the people of Earth, about both the message and the consequences of contact. A worldwide scientific, political and humanitarian discussion must occur before any message is sent.” They argued, in effect, that an advanced alien civilization might respond to our interstellar greetings with the same graciousness that Cortés showed the Aztecs. The statement was a response to a growing movement led by a multidisciplinary group of astronomers, psychologists, anthropologists, and amateur space enthusiasts that aims to send messages specifically targeting planets in the Milky Way that are likely to support life. Instead of just scanning the skies for signs of intelligent life, the way SETI’s telescopes do, this new approach, sometimes called METI (Messaging Extraterrestrial Intelligence), actively tries to initiate contact. The METI organization, led by former SETI scientist Douglas Vakoch, has planned a series of messages to be broadcast from 2018 onward. And Yuri Milner’s Breakthrough Listen endeavor has also promised to support a “Breakthrough Message” companion project, including an open competition to design the messages to be transmitted to the stars. Think of it as a kind of intergalactic design charrette.

  If you believe that the message has a plausible chance of making contact with an alien intelligence, it’s hard not to think of it as one of the most important decisions we will ever make as a species. Are we going to be galactic introverts, huddled behind the door listening for signs of life outside? Or are we going to be extroverted conversation starters? (And if it’s the latter, what should we say?) The decision to send a message into space may not generate a meaningful outcome for a thousand years, or even a hundred thousand years, given the transit times between the correspondents. The first intentional message ever sent—the famous Arecibo message, transmitted by Frank Drake in 1974—was addressed to a cluster of stars twenty-five thousand light-years away. The laws of physics dictate the minimum time for the result of that decision to become perceptible to us: fifty thousand years. It is hard to imagine a decision confronting humanity with a longer leash on the future.
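
  The arithmetic behind that figure is simple but worth making explicit. A radio signal travels at the speed of light, so the earliest possible reply is bounded by the round-trip light-travel time, and measuring distance in light-years makes the bound immediate:

\[
t_{\min} \;=\; \frac{2d}{c} \;=\; \frac{2 \times 25{,}000 \ \text{light-years}}{c} \;=\; 50{,}000 \ \text{years.}
\]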

  The anti-METI movement is predicated on the fact that if we do ever manage to make contact with another intelligent life-form, almost by definition our new pen pals will be far more advanced than we are. (A less advanced civilization would be incapable of detecting our signal, and it would be a staggering coincidence if we happened to make contact with a civilization that was at the same level of technological sophistication as ours.) It is this asymmetry that has convinced so many future-minded thinkers that METI is a bad idea. The human history of exploitation weighs heavily on the imagination of the METI critics. Stephen Hawking, for instance, announced in a 2010 documentary series, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.” Astronomer and sci-fi author David Brin echoes the Hawking critique: “Every single case we know of a more technologically advanced culture contacting a less technologically advanced culture resulted at least in pain.”

  There is something about the METI decision that forces the mind to stretch beyond its usual limits. Using your own human intelligence, you have to imagine some radically different form of intelligence. You have to imagine time scales where a decision made in 2017 might trigger momentous consequences ten thousand years from now. The sheer magnitude of those consequences challenges our usual measures of cause and effect. If you think METI has a reasonable chance of making contact with another intelligent organism somewhere in the Milky Way, then you have to accept that this small group of astronomers and science-fiction authors and billionaire patrons may, in fact, be wrestling with a decision that could prove to be the most transformative one in the history of human civilization.

  All of which takes us back to a much more down-to-earth but no less challenging question: Who gets to decide? After many years of debate, the SETI community established an agreed-upon procedure that scientists and government agencies should follow in the event that SETI actually stumbles upon an intelligible signal from space. The protocols specifically ordain that “no response to a signal or other evidence of extraterrestrial intelligence should be sent until appropriate international consultations have taken place.” But an equivalent set of guidelines does not yet exist to govern our own interstellar outreach.

  The METI debate runs parallel to other existential decisions that we will be confronting in the coming decades, as our technological and scientific powers increase. Should we create superintelligent machines that exceed our own intellectual capabilities by such a wide margin that we cease to understand how their intelligence works? Should we “cure” death, as many Silicon Valley visionaries are proposing? Like METI, these are potentially among the most momentous decisions human beings will ever make, and yet the number of people actively participating in those decisions—so far—is minuscule.

  One of the most thoughtful participants in the debate over the METI decision, Kathryn Denning, an anthropologist at York University in Toronto, has argued that decisions like METI require a far wider sample of stakeholders: “I think the METI debate may be one of those rare topics where scientific knowledge is highly relevant to the discussion, but its connection to obvious policy is tenuous at best, because in the final analysis, it’s all about how much risk the people of Earth are willing to tolerate . . . and why exactly should astronomers, cosmologists, physicists, anthropologists, psychologists, sociologists, biologists, scifi authors, or anyone else (in no particular order) get to decide what those tolerances should be?”

  Agreements like the SETI protocols—and even the Paris climate accord—should be seen as genuine achievements in the history of human decision-making. But they are closer to norms than to actual legislation. They do not have the force of law behind them. Norms are powerful things. But as we have seen in recent years, norms can also be fragile, easily undermined by disrupters who don’t mind offending the mainstream. And they are rarely strong enough to resist the march of technological innovation.

  The fragility of norms may be most apparent in decisions that involve extinction-level risk. New technologies (like self-replicating machines) or interventions (like METI) that pose even the slightest risk to our survival as a species require much more global oversight. Creating that oversight would force us, as Denning suggests, to measure risk tolerance on a planetary level. It would require a kind of global Bad Events Table, only instead of calculating the risk magnitude of events that would unfold in a matter of seconds, as the Google algorithm does, the table would measure risk for events that might not emerge for centuries. If we don’t build institutions that can measure that risk tolerance, then by default the gamblers will always set the agenda, and the rest of us will have to live with the consequences. This same pattern applies to choices that aren’t as much about existential risk as they are about existential change. Most Americans and Europeans, when asked, say they would not like to “cure” death; they would much prefer longer, more meaningful lives to immortality. But if immortality is, in fact, within our reach technologically—and there is at least some persuasive evidence to suggest it is—we don’t necessarily have the institutions in place that are equipped to stop it. Do we want to have the option to live forever? That is a global, species-level decision if there ever was one.
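
  What would a row of that table even look like? Here is a minimal sketch, purely as a thought experiment: the event names and numbers below are hypothetical placeholders, and the hard institutional work, of course, lies in agreeing on those probabilities and magnitudes in the first place.

```python
# Illustrative sketch only: a toy long-horizon "Bad Events Table."
# All event names and numbers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class RiskRow:
    event: str
    probability_per_century: float  # chance the event occurs in any given century
    magnitude: float                # harm on an arbitrary 0-10 scale

    def chance_within(self, centuries: int) -> float:
        """Probability the event occurs at least once over the horizon,
        assuming each century is an independent draw."""
        return 1 - (1 - self.probability_per_century) ** centuries

table = [
    RiskRow("hostile reply to interstellar message", 0.001, 10.0),
    RiskRow("runaway self-replicating machines", 0.005, 9.0),
]

for row in table:
    p = row.chance_within(centuries=10)  # a thousand-year horizon
    print(f"{row.event}: {p:.1%} chance, expected harm {p * row.magnitude:.2f}")
```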

  How would we go about making decisions like these? We do have institutions like the United Nations that give us a framework for making planetary choices, and for all the limitations of its power, the fact that the UN exists at all is a measure of real progress. If our decision-making prowess improves with the growing diversity of the group making the decision, it’s hard to imagine a more farsighted institution than one that represents all the countries of the world. But, of course, the United Nations represents the citizens of those countries through very indirect means. Its decisions are hardly direct expressions of the “will of the people.” Would it be possible to conduct something equivalent to a design charrette on the scale of the planet, where stakeholders—not just political appointees—can weigh in with their own priorities and tolerance for risk?

  We invented the institution of democracy—in all its many guises—to help us decide, as a society, what our laws should be. Perhaps it is time that we took some of the lessons we have learned from small-group decision-making and applied them to the realm of mass decisions. This is not as unlikely as it sounds. After all, the rise of the Internet has enabled us to reinvent the way we communicate multiple times in my lifetime alone: from email to blogs to Facebook status updates. Why shouldn’t we take this opportunity to reinvent our decision-making tools as well?

  There is some evidence that Internet crowds can be harnessed to set priorities and suggest options with more acumen than the so-called experts, if the software tools organizing all that collective intelligence (and stupidity) are designed properly. In the weeks leading up to the 2009 inauguration, the incoming Obama administration opened up a Citizen’s Briefing Book on the web, inviting the US population to suggest priorities for the next four years—a small experiment in direct democracy inspired by the Open Government movement that was then on the rise. Ordinary citizens could suggest initiatives and also vote to support other initiatives. In the end, two of the three most popular initiatives urged Obama to radically reform our draconian drug laws and end marijuana prohibition. At the time, the results provoked titters from the media establishment: This is what happens when you open the gates to the Internet crazies; you get a horde of stoners suggesting policy that has zero chance of mainstream support. And yet by the end of Obama’s second term, that briefing book turned out to be the first glimmer of an idea whose time had come. Sentencing laws were being rewritten, cannabis was legal in half a dozen states, and a majority of Americans supported full legalization.
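
  The caveat about software tools being “designed properly” is worth unpacking. One standard design choice for this kind of crowd voting, sketched below, is to rank submissions by a confidence-adjusted approval rate (the Wilson lower bound) rather than by raw vote totals, so that a handful of early votes cannot swamp the ranking. This is a generic illustration, not the actual mechanism behind the Citizen’s Briefing Book, and the initiatives and vote counts are hypothetical.

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval on the true approval rate.
    Ranks items by how confident we are that voters actually favor them,
    rather than by raw vote totals."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    center = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - spread) / (1 + z * z / n)

# Hypothetical initiatives: (name, upvotes, downvotes)
initiatives = [
    ("Reform drug sentencing laws", 9200, 1100),
    ("End marijuana prohibition", 15000, 2400),
    ("Niche proposal with few votes", 12, 1),
]
for name, up, down in sorted(initiatives, key=lambda t: -wilson_lower_bound(t[1], t[2])):
    print(f"{wilson_lower_bound(up, down):.3f}  {name}")
```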

  In a polarized, nationalistic age, the idea of global oversight on any issue, however existential the threat it poses, may sound naive. And it may well be that technologies have their own inevitability, and we can only rein them in in the short run. Reducing our carbon footprint, by comparison, may prove to be an easier choice than stopping something like METI or immortality research, because there is an increasingly visible path for minimizing climate change that involves adopting even more advanced technology: not retreating to a preindustrial life, but moving forward into a world of carbon-neutral technology, like solar panels and electric vehicles. In our history, there is not a lot of precedent for human beings voluntarily swearing off a new technological capability—or choosing not to make contact with another society—because of some threat that might not arrive for generations. But maybe it’s time we learned how to make that kind of decision.

  SUPERINTELLIGENCE

  The development of supercomputers like Cheyenne—computers smart enough to map the impact pathways of climate change a hundred years into the future—has endowed us with two kinds of farsightedness: they let us predict future changes in our climate that help us make better decisions about our energy use and our carbon footprint today, and they suggest long-term trends in the development of artificial intelligence, trends that may pose their own existential threat to humans in the coming centuries. The upward trajectory of Moore’s law and recent advances in machine learning have convinced many scientists and technologists that we must confront a new global decision: what to do with the potential threat from “superintelligent” machines. If computers reach a level of intelligence where they can outperform humans at nuanced decisions like rendering a verdict in a complicated criminal trial, they will almost certainly have been programmed by evolutionary algorithms, where the code follows a kind of vastly accelerated version of Darwin’s natural selection. Humans will program some original base of code, and then the system will experiment with random variations at blistering speed, selecting the variants that improve the intelligence of the machine and mutating that new “species” of code. Run enough cycles and the machine may evolve an intellectual sophistication without any human programmer understanding how the machine got so smart. In recent years, a growing number of scientists and tech-sector leaders—Bill Gates, Elon Musk, Stephen Hawking—have sounded the alarm that a superintelligent AI could pose a potential “existential threat” to humanity.
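
  The process described here is, in essence, a genetic algorithm: a population of candidates, scored by a fitness function, with the fittest surviving to mutate and reproduce. A minimal sketch of that loop follows; the bitstring genome and the fitness function are toy stand-ins (counting 1s, rather than measuring anything like intelligence), since the hard part is precisely that no one knows how to write down a fitness function for general intelligence.

```python
import random

def fitness(genome: list[int]) -> int:
    """Toy stand-in for 'how smart is this candidate?': just count the 1s."""
    return sum(genome)

def mutate(genome: list[int], rate: float = 0.01) -> list[int]:
    """Random variation: flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size: int = 100, length: int = 64, generations: int = 200) -> list[int]:
    # Start from a random "base of code."
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Mutation: the survivors' mutated offspring replace the rest.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(f"best fitness after evolution: {fitness(best)}/64")
```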

  All of which suggests that we are going to confront a decision as a planet: Are we going to allow superintelligent machines, or not? It’s possible that we will “make” the decision in the same way the citizens of New York made the decision to fill Collect Pond, or the way the inventors of the industrial age decided to fill the atmosphere with carbon. In other words, we’ll make it in an entirely unstructured, bottom-up way, without any of the long-term deliberation the decision warrants. We’ll keep opting for smarter and smarter computers because in the short term, they’re better at scheduling meetings and curating workout playlists and driving our cars for us. But those choices won’t reflect the potential long-term threat posed by superintelligent machines.

 
