AFTERWORD
david brin
It’s been said that “dinosaurs are extinct because they had no space program.” Mammals might never have inherited Earth, had clever velociraptors looked up at the sky—with telescopes—and detected the fatal rock well in advance, then got organized, assertively working together to deflect doom. Of course, this truism is both obvious and a little unfair, since many dinosaur cousins did escape the Cretaceous calamity by taking to the sky, mastering flight eons before we hairy types got around to it.
Still, the metaphor makes a powerful point, for what use is intelligence if it cannot probe the future, effectively, seeking (and possibly averting) threats to our children? Blithe or willful evasion of this responsibility fills the litany of disasters that we call “human history,” as so chillingly described in recent books, like Jared Diamond’s Collapse.
Indeed, it is daunting to count ways that the universe has at its disposal to shatter the expectations and desires of living beings. Technological humans have vastly expanded not only their range of possible options, but also the number of things that might go wrong and stymie all hope. And so the central question: is it possible for a species—equipped with tools of appraisal, foresight, and situational awareness—to survey the future for dangers and gather the will to prevent them?
To be clear, we did not need high tech to begin wreaking anthropogenic damage on the world we depend upon. Pretty soon after people developed a knack for animal husbandry—protecting goats from predators, and thereby benefiting from large herds of meat on the hoof—our beloved swarms of ungulates overgrazed and disrupted local ecosystems, accelerating the spread of deserts in a post ice-age world. The next super-technology—irrigation—had similar effects, expanding human populations while ruining much of the Fertile Crescent, wherever hydrological societies failed to understand salt-accumulation and other unforeseen side effects.
One can envision this sort of thing helping to explain the “Fermi Paradox”—the mysterious lack of any clear sign (so far) of sapient civilizations among the stars. Among the hundred or so theories that I’ve catalogued for this interstellar quandary, one compelling possibility is that humans got smart exceptionally fast, allowing us to achieve science only 10,000 years or so after developing primitive pastoralism and agriculture. Quick enough to start noticing the harm that we were doing, while our homeworld still retains appreciable amounts of health and natural fecundity. Other sapients, who do it more slowly, might never notice any contrast, as they gradually degrade the nursery that engendered them.
Indeed, we can use the Fermi Paradox as a metaphysical whiteboard, on which to write the multi-spectra of our fears. If there is a pattern of behavior—say feudalism—that has repeatedly sucked in 99% of human societies, limiting our wisdom and vision and producing one tragic failure after another, might similar syndromes have thwarted aliens out there, perhaps systematically and often? Pondering these potential “great filters”—as Nick Bostrom put it—lets us view our own blunders in a fresh and disturbing light. At every major choice point, it behooves us to ask: might this be the big one that culls out most promising young races, leaving the cosmos a mostly-silent realm? Could this be the one that will trap us too, making us yet another, typical failure?
That is the central, dark rumination Bostrom attempts to cover, when he writes about potential catastrophe modes. Working with Lord Martin Rees (who contributed to this volume), Bostrom has established Oxford’s Future of Humanity Institute, dedicated to exploring some of the dour problems we had better solve, or else.
As does the Lifeboat Foundation, a loose association of scholars and pundits, scientists and enthusiasts, drawn together to discuss a related theme—is it possible for humanity to develop good habits of foresight, satiability, accountability and agility, in time to make it across the minefield just ahead? A murky tomorrow, strewn with dangers—some of them forged by nature, but others by our ever-clever selves?
To be clear, I have had my doubts about some aspects of LF, from time to time. But in this era, that is to be expected. It does not detract from the value Lifeboat contributes through its lively online arguments, as well as this volume’s core aim: getting us talking about that shadowy realm ahead.
A Pathfinding Genre
Of course, this has long been a realm explored most vigorously by serious science fiction, as in my novels Earth and Existence… and as so many other authors have done, from Harry Harrison, Frederik Pohl and Alice Sheldon through Margaret Atwood and (relentlessly) Michael Crichton. While some of the fictional calamities offered by these writers have been silly, lurid, or nonsensical, they all reflect one valuable trend—that we’re starting to care about chains of cause-and-effect!
Moreover, science fiction is the one branch of literature brave enough to admit that change happens. Change, in fact, can be a topic that’s bold, enriching, even fascinating: adding to the already rich palette of narrative storytelling that stretches back to Cro-Magnon campfires. Only science fiction goes further, asserting that yesterday’s so-called “eternal human verities” may seem boring and archaic to tomorrow’s children. If they are wise, they will face (and invent) new problems of their own, while standing on the shoulders of earlier generations, having learned from our mistakes.
Indeed, it is this core premise—that even “verities” may change—that explains why so many scholastics in stodgy university departments go to great lengths deriding science fiction. It terrifies mavens in armchair cloisters to realize that nature—even human nature—has always been in flux, and that literature might bravely face that fact, head-on.
SF pokes at the murky path ahead, exposing perils in vivid fashion, sometimes propelling millions of readers and viewers to transform attitudes, or even take action. When such tales are supremely effective (as in George Orwell’s Nineteen Eighty-Four, Soylent Green, or Dr. Strangelove), you sometimes get the most powerful of all stories—self-preventing prophecies.1
Alas, SF tales, like all popular literature and cinema, also have to satisfy commercial needs—keeping protagonists in white-knuckle jeopardy for 350 pages or 90 minutes of screen time. And this paramount requirement often simplifies, or even lobotomizes the warning message. Elsewhere I show how this leads to two iron rules of modern media: (1) thou shalt never show a public or governmental institution actually functioning well, and (2) thou shalt always portray average citizens as useless, cowardly sheep.2
There are exceptions, of course. Many of the stories chosen for this volume examine possible future (or present) failure modes, without rigidly obeying those iron rules. They admit the glimmering possibility that humans and their cultures might successfully adapt. Whether reprinted classics or original tales written specially for this volume, their light shines onto that dimly-lit and rocky trail ahead.
Optimists and Pessimists
As for this volume’s nonfiction portions, it is interesting to note the following from James Blodgett’s article on Saving the World:
“If Rees and Wells, who predict disaster soon, are right, that may be too late. However, I have learned that there is enough material in the asteroid belt to build habitats for trillions of people.”
Here we see illustrated the spectacular range of possibilities that are being reconnoitered by some very intelligent and tech-savvy modern thinkers. At one end, you have the Transhumanist Movement, whose members variously predict a coming era of extended lifespans and/or uploading of human personalities into super-sentient machines and/or redesign of the human species itself. One leader in the movement, Zoltan Istvan, is running as a 2016 U.S. Presidential candidate under the newly formed Transhumanist Party.
Even the more moderate elements of this zeitgeist still sound high on some optimism drug—for example Peter Diamandis, whose excellent book Abundance certainly makes a strong case for what Ray Kurzweil calls a “Law of Accelerating Returns,” the notion that world-changing technologies will leverage upon each other in positive-sum ways. For instance, when new methods of desalinization combine with cheaper solar energy and better ecological modeling to reverse many of our water woes, the transforming effects could be tremendous. It would be easy to ridicule them (and some do). But we’ve seen this happen before, and the lesson is two-fold:
“Sure, those fine advances may be possible! So let’s believe in ourselves and invest serious resources in making great things happen!
“At the same time, though, let us also be quicker at perceiving inevitable, unforeseen side effects. That, too, is a valuable lesson from the past.”
Indeed, you’ve got to respect these guys, for they are on the front lines, reifying the dreams that give us all reason to hope. See, for example, Peter’s great work developing X Prizes that promote solutions to tractable problems by stimulating our greatest asset, the agile industriousness of brilliant, challenged minds.3
At the opposite end are grouches who perceive only darkness and obstacles ahead, who indeed number fizzy optimism among our problems! Remember those “unforeseen side effects” I mentioned? Well, folks like Michael Crichton and Francis Fukuyama and Margaret Atwood can always be counted on to perceive those possibilities first and foremost, if not to the exclusion of all else. Indeed, so (for commercial reasons) does Hollywood.
At the extremes, zealots and curmudgeons become caricatures, discrediting their promises and warnings with finger-wagging exaggeration and tunnel vision.
On the other hand, our society’s greatest invention has to be the openness that allows bright ideas and critical warnings to flow. Like T cells zeroing in on potential opportunities and errors, they do not have to be right every time in order to serve a useful purpose! Sometimes it’s enough, just getting calmer, more pragmatic fellow citizens to lift their heads. To notice and to think.
What both ends of this spectrum seem to miss is how familiar it would all seem, to our ancestors—this juxtaposition of bright possibilities and gloom. Transcendent promises and jeremiads of doom. Just read ancient accounts and you’ll soon realize that all previous generations must have been battered with such ravings, by believers in bright visions or dark, who shared one common trait—dissatisfaction with things as they are. With the hand we’re dealt. Wide-eyed and capering outside the Temple walls, preaching either hope or despair to fascinated throngs, these were predecessors of today’s transhumanists and their bitterest detractors.
Just one essential thing seems to have changed. In earlier times, the grouches and transcendentalists could only imagine their forecasts arriving via supernatural means. Doom might be the work of angry gods. Glorious improvements might unfold as rewards for virtue, or from following the correct prescriptions or rituals. But neither could emerge from practical exercise of mundane commerce and craft!
Today though, as we moderns are undeniably picking up the very tools and skills of Creation itself, these fellows no longer just tout subjective incantations. Rather, they now talk about a coming rise or decline—heaven or hell—coming about physically and objectively, wrought by human hands.
Is it then partly a matter of personality? If it’s true that optimism or pessimism bubble up from deeper psychological forces within, then are these techie Big Thinkers erecting their towers of justification after the fact?
I don’t say this to disparage! Indeed, my own take on all of this is at least partly a function of my quirky, underlying nature—as a contrarian. As someone whose basic catechism goes “um sure, that’s interesting. But have you considered THIS inconvenient glitch in your model?”
Hence, around transhumanists, I point out cavils/dangers/side-effects and possible ways that it all might fail.
But when fate carries me near gloom artists—(especially cable TV’s merchants of fear)—I demand:
“Who are YOU to undermine confidence in our ability to take on challenges and to do what our ancestors have already done, countless times before us?
“To look ahead, catch our mistakes in the nick of time, innovate, create, negotiate, compromise, compete, cooperate… and prevail?”
Our Own Duty to the Future
So how will we do that? Again I return to James Blodgett, who wrote about how we, as individuals and citizens, bear much of the responsibility for solving problems, so that our children will inherit hope… and perhaps even pride in their heritage. It is the central theme of the “Restored United States” in my novel The Postman. It is the goal of every sincere politician who tries to actually make politics work, and of every resident who talks fellow community members into compromise, instead of screaming in each other’s faces. It is the methodology of open and flat-fair reciprocal accountability analyzed in The Transparent Society.
There are countless things that we can do, as workers, family members, neighbors and citizens. For example:
refuse the blandishments of those fear merchants who feign to be “journalists,” preaching hate-your-own-neighbors.
re-learn the citizen arts of negotiation and meeting those neighbors halfway.
spurn blatantly stupid metaphors—like a hoary, lobotomizing so-called “left-right political axis” that none of you could define if your life depended upon it, and that is unworthy of a scientific, complex and sophisticated 21st century civilization.
find ways to improve your institutions, instead of wallowing in the sanctimonious drug high of self-righteousness.
but also bypass those institutions, by acting as individuals, as ad-hoc groups, to improve what can be improved, and thereby help to prove right the guys we want to be right, like Peter Diamandis.
In another place, I talked about the simplest way to do this. A method that is so cheap and easy and lazy that none of you have any excuses. It is called proxy activism… the simple way to invest in saving the world through whatever combination of concerns you feel to be important. It is utterly straightforward. And if we all did this one little thing, the world would change, no matter what folks in Washington or on Cable News believe.4
It’s Coming, Like It or Not
I could go on. There are so many realms under this tent. And indeed, as a sci fi author, I have to admit that the problem faced by Douglas Richards—in his essay about the difficulties of sci fi—is a tough one to overcome. For if the optimists (and/or a subset of the pessimists) prove right, then accelerating progress may render moot even the sharpest and most compelling of our stories. The “singularity” will then be a daunting barrier to look past. This is one reason why I keep re-defining the near-intermediate future from 50 years ahead—as in Earth (1990)—down to 30 years—as in Existence (2012)—and so on.
Does this mean I sense the threshold, just ahead, and deem it impossible to write beyond?
Nonsense, Richards! Take heart, dear colleague. Boldly set forth across that sea! The Singularity may turn out to be a soft one, allowing human-style beings to criss-cross the stars and have adventures, as in the novels of Vernor Vinge. Or it might engender great minds who then choose to encourage human adventure, as in the novels of Iain Banks. Heck, I’ve even written stories set in worlds where men and women are effectively gods, yet have new problems of their own. And why not?
This is, after all, our greatest power. To envision that dark road ahead, filled with land mines and quicksand and snakes and deadfalls, created by both nature and by man, ready to trip any unwary species and civilization.
Only… we’re not unwary! Suspicion and worry-R-us!
What we need (and I will repeat it endlessly) is confidence.
Not arrogance! But the ability to trade criticism, learn from each other…
…and then… to boldly go.
ENDNOTES
1. http://www.davidbrin.com/1984.html
2. http://www.davidbrin.com/idiotplot.html
3. http://www.xprize.org
4. http://www.davidbrin.com/proxyactivism.html
LIFEBOAT FOUNDATION
The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity.
Lifeboat Foundation is pursuing a variety of options, including helping to accelerate the development of technologies to defend humanity such as new methods to combat viruses, effective nanotechnological defensive strategies, and even self-sustaining space colonies in case the other defensive strategies fail.
We believe that, in some situations, it might be feasible to relinquish technological capacity in the public interest (for example, we are against the U.S. government posting the recipe for the 1918 flu virus on the internet). We have some of the best minds on the planet working on programs to enable our survival. We invite you to join our cause!
LINKS
Visit our site at http://lifeboat.com. The Lifeboat Foundation is working on a prototype Friendly AI at http://lifeboat.com/ai and also has launched the world’s first bitcoin endowment fund at https://lifeboat.com/ex/bitcoin.
Join our Facebook group at https://www.facebook.com/groups/lifeboatfoundation/.
Join our LinkedIn group at http://www.linkedin.com/egis/35656/2B322944A8E3.
Read our blog at http://lifeboat.com/blog.
Follow our Twitter feed at http://twitter.com/lifeboathq.
Watch our YouTube channel at https://youtube.com/lifeboathq.
Participate in our programs at http://lifeboat.com/ex/programs.
Join our various mailing list/forums at http://lifeboat.com/ex/forums.
Read our first book The Human Race to the Future: What Could Happen—and What to Do at http://amzn.to/1uYeeAF. Interact with its author at https://www.facebook.com/groups/thehumanracetothefuture.