by Max Tegmark
If we humans eventually trigger the development of more intelligent entities through a singularity, I therefore think it’s likely that they, too, would feel self-aware, and should be viewed not as mere lifeless machines but as conscious beings like us. However, their consciousness may subjectively feel quite different from ours. For example, they’d probably lack our strong human fear of death: as long as they’ve backed themselves up, all they stand to lose are the memories they’ve accumulated since their most recent backup. The ability to readily copy information and software between AIs would probably reduce the strong sense of individuality that’s so characteristic of our human consciousness: there would be less of a distinction between you and me if we could trivially share and copy all our memories and abilities, so a group of nearby AIs may feel more like a single organism with a hive mind.
If this is true, then it can reconcile long-term survival of life with the doomsday argument from Chapter 11: what’s about to end is not life itself, but our reference class, self-aware observer moments that subjectively feel approximately like our human minds. Even if a multitude of sophisticated hive minds colonize our Universe over billions of years, we shouldn’t be any more surprised that we aren’t them than we should be that we aren’t ants.
Reactions to the singularity
People’s reactions to the possibility of a singularity vary dramatically. The friendly-AI vision has a venerable history in the science-fiction literature, undergirding Isaac Asimov’s famous three laws of robotics that were intended to ensure a harmonious relationship between robots and humans. Stories in which AIs outsmart and attack their creators have been popular as well, as in the Terminator movies. Many dismiss the singularity as “the rapture of the geeks,” and view it as a far-fetched science-fiction scenario that won’t happen, at least not for the foreseeable future. Others think that it’s likely to happen, and that if we don’t plan for it carefully, it will probably destroy not only our human species, but also everything we ever cared about, as we explored earlier. I serve as an advisor to the Machine Intelligence Research Institute (http://intelligence.org), and many of its researchers fall into this category, viewing the singularity as the most serious existential risk of our time. Some of them feel that if the friendly-AI vision of Yudkowsky and others can’t be guaranteed, then the best approach is to keep future AIs locked in under firm human control or not to develop advanced AIs at all.
Although we’ve so far focused our discussion on negative consequences of a singularity, others, such as Ray Kurzweil, feel that a singularity would be something hugely positive, indeed the best thing that could happen to humanity, solving all our current human problems.
Does the idea of humankind getting replaced by more advanced life sound appealing or appalling to you? That probably depends strongly on the circumstances, and in particular on whether you view the future beings as our descendants or our conquerors.
If parents have a child who’s smarter than them, who learns from them, and then goes out and accomplishes what they could only dream of, they’ll probably feel happy and proud even if they know they can’t live to see it all. Parents of a highly intelligent mass murderer feel differently. We might feel that we have a similar parent-child relationship with future AIs, regarding them as the heirs of our values. It will therefore make a huge difference whether future advanced life retains our most cherished goals.
Another key factor is whether the transition is gradual or abrupt. I suspect that few are disturbed by the prospect of humankind gradually evolving, over thousands of years, to become more intelligent and better adapted to our changing environment, perhaps also modifying its physical appearance in the process. On the other hand, many parents would feel ambivalent about having their dream child if they knew it would cost them their lives. If advanced future technology doesn’t replace us abruptly, but rather upgrades and enhances us gradually, eventually merging with us, then this might provide both the goal retention and the gradualism required for us to view post-singularity life-forms as our descendants. Mobile phones and the Internet have already enhanced the ability of us humans to achieve what we want, arguably without significantly eroding our core values, and singularity optimists believe that the same can be true of brain implants, thought-controlled devices and even wholesale uploading of human minds to a virtual reality.
Moreover, this could open up space, the final frontier. After all, extremely advanced life capable of spreading throughout our Universe can probably only come about in a two-step process: first intelligent beings evolve through natural selection, then they choose to pass on the torch of life by building more advanced consciousness that can further improve itself. Unshackled by the limitations of our human bodies, such advanced life can rise up and eventually inhabit much of our observable Universe, an idea long explored by science-fiction writers, AI aficionados and trans-humanist thinkers.
In summary, will there be a singularity within a few decades? And is this something we should work for or against? I think it’s fair to say that we’re nowhere near consensus on either of these two questions, but that doesn’t mean it’s rational for us to do nothing about the issue. It could be the best or worst thing ever to happen to humankind, so if there’s even a 1% chance that there’ll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least 1% of our GDP studying the issue and deciding what to do about it. So why don’t we?
Human Stupidity: A Cosmic Perspective
My career has given me a cosmic perspective in which existential risk management feels more urgent, as summarized in Figure 13.5. We professors are often forced to hand out grades, and if I were teaching Risk Management 101 and had to give us humans a midterm grade based on our existential risk management so far, you could argue that I should give a B– on the grounds that we’re muddling through and still haven’t dropped the course. From my cosmological perspective, however, I find our performance pathetic, and can’t give more than a D: the long-term potential for life is literally astronomical, yet we humans have no convincing plans for dealing with even the most urgent existential risks, and we devote a minuscule fraction of our attention and resources to developing such plans. Compared with the roughly twenty million U.S. dollars spent last year on the Union of Concerned Scientists, one of the largest organizations focused on at least some existential risks, the United States alone spent about five hundred times more on cosmetic surgery, about a thousand times more on air-conditioning for troops, about five thousand times more on cigarettes, and about thirty-five thousand times more on its military, not counting military health care, military retirement costs or interest on military debt.
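The spending comparison above is just multiplication from the roughly $20 million baseline; here is a quick back-of-envelope sketch. The baseline and the multipliers come from the text; the absolute dollar figures are merely what those ratios imply.

```python
# Back-of-envelope: annual U.S. spending implied by the text's
# ~$20 million Union of Concerned Scientists baseline and multipliers.
BASELINE = 20e6  # ~$20 million/year on the Union of Concerned Scientists

# Multipliers quoted in the text (U.S. spending relative to the baseline).
multipliers = {
    "cosmetic surgery": 500,
    "air-conditioning for troops": 1_000,
    "cigarettes": 5_000,
    "military": 35_000,
}

for item, factor in multipliers.items():
    implied = BASELINE * factor
    print(f"{item}: ~${implied / 1e9:,.0f} billion/year")
```

The implied figures (about $10 billion for cosmetic surgery up to about $700 billion for the military) are in the right ballpark for the period the book describes, which is what makes the ratio framing so striking.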
How can we humans be so shortsighted? Well, given that evolution has prepared us mainly for technologies like sticks and rocks, perhaps we shouldn’t be surprised that we’re dealing with modern technology so poorly, but rather that we’re not doing even worse. Here I am sitting in a large wood-and-stone box repeatedly pressing little black squares while staring at a glowing rectangle in front of me. I haven’t met a single living organism today, and I’ve been sitting here for hours, illuminated by a strange glowing spiral above me. The fact that I’m nonetheless feeling happy is testament to the remarkable adaptability of the brains evolution has endowed us humans with. As is the fact that I’ve learned to interpret the squiggly black patterns on my glowing rectangle as words telling a story, and that I know how to calculate the age of our Universe, even though none of these specific abilities had any survival value to my cave-dwelling ancestors. But just because we can do a lot doesn’t mean we can do everything necessary. External forces have changed our environment slowly over the past 100,000 years of human history, and evolution has gradually helped us adapt. But recently, we ourselves have changed our environment way too fast for evolution to keep up, and we’ve made it so complex that it’s hard even for the world’s leading experts to fully understand the limited aspects they focus on. So it’s no wonder that we sometimes lose sight of the big picture and prioritize short-term gratification over the long-term survival of our spaceship. For example, that glowing spiral above my head gets powered by burning coal, releasing carbon dioxide that contributes to overheating our spaceship, and now that I think of it, I really should have turned it off long ago.
Figure 13.5: The importance of managing existential risk in a reasonable way becomes more obvious in a cosmic perspective, highlighting the huge future potential that we stand to lose if we mess up and destroy our human civilization.
Human Society: A Scientific Perspective
So here we are on Spaceship Earth, heading into an asteroid belt of existential risks without a plan or even a captain. We clearly need to do something about this, but what should our goals be, and how can we best accomplish them? The what question is ethical, whereas the how question is scientific. Both are clearly crucial. To paraphrase Einstein, “science without ethics is blind; ethics without science is lame.” However (and this is a point that my friend Geoff Anders likes to emphasize), there are some ethical conclusions that we have nearly universal agreement on (such as “not having a global nuclear war is better than having one”), which we’re nonetheless doing a dismal job turning into practical goals that we effectively advance. This is why I gave us a D grade in existential-risk mitigation, and I think it’s unfair to blame this failure mainly on difficulties with ethics and the what question. Rather, I think that we should start with the problems where we humans have broad agreement on what our goals are, such as the long-term survival of our civilization, and use a scientific approach to tackling the question of how to achieve these goals (I’m using the word scientific in a broad sense of emphasizing the use of logical reasoning). I don’t feel that it’s enough to simply say things like “a change of heart on a vast scale has to be achieved”—we need more concrete strategies. So how should we pursue our goals? How can we help humanity become less shortsighted when it charts out its future course? In essence, how can we make reason play a greater role in decision making?
Changes in our human society result from a complex set of forces pushing in different directions, often working against each other. From a physics perspective, the easiest way to change a complex system is to find an instability, where the effect of pushing with a small force gets amplified into a major change. For example, we saw that a gentle nudge to an asteroid can prevent it from hitting Earth a decade later. Analogously, the easiest way for a single person to affect society is by exploiting an instability, as captured by numerous physics-based metaphors: an idea can be a “spark in a powder keg,” “spread like wildfire,” have a “domino effect,” or “snowball out of control.” For example, if you want to tackle the existential risk from killer asteroids, the hard way is to build an asteroid deflector–rocket system. The easier way is to spend much less money building an early-warning system, knowing that once you have information about an incoming asteroid, raising money for the rocket system will be easy.
I think that for making our planet a better place, many of the easiest instabilities to utilize involve spreading correct information. For reason to play a role in decision making, the relevant information needs to be in the heads of those making the decisions. As illustrated in Figure 13.6, this typically involves three steps, all of which frequently fail: the information must be created/discovered, disseminated by the discoverer and learned by the decision maker. Once discoveries have propagated around the triangle into the heads of others, they enable further discoveries, fueling the growth of human knowledge in a virtuous cycle. Some discoveries have the added advantage of making the triangle itself more efficient: the printing press and the Internet have radically facilitated both dissemination and learning, while better detectors and computers have greatly assisted researchers. Yet even today, there’s room for major improvements to all three links of the information triangle.
Scientific research and other information creation is clearly a good investment for society, as are attempts to counter censorship and other impediments to information dissemination. In terms of utilizing instabilities, however, I think that the lowest-hanging fruit is on the bottom arrow in Figure 13.6: learning. Despite spectacular success in research, I feel that our global scientific community has been nothing short of a spectacular failure when it comes to educating the public and our decision makers. Haitians burned twelve “witches” in 2010. In the United States, polls have shown that 39% of Americans consider astrology scientific, and 46% believe that our human species is less than 10,000 years old. If everyone understood the concept of “scientific concept,” these percentages would be zero. Moreover, the world would be a better place, since people with a scientific lifestyle, basing their decisions on correct information, maximize their chances of success. By making rational buying and voting decisions, they also strengthen the scientific approach to decision making in companies, organizations and governments.
Figure 13.6: Information is crucial for reason to prevail in the management of our society. When important information is discovered, it needs to be made publicly available, then learned by those to whom it’s relevant.
Why have we scientists failed so miserably? I think the answers lie mainly in psychology, sociology and economics. A scientific lifestyle requires a scientific approach to both gathering information and using information, and both have their pitfalls. You’re clearly more likely to make the right choice if you’re aware of the full spectrum of arguments before making your mind up, yet there are many reasons why people don’t get such complete information. Many lack access to it (97% of Afghans don’t have Internet, and in a 2010 poll, 92% didn’t know about the 9/11 attacks). Many are too swamped with obligations and distractions to seek it. Many seek information only from sources that confirm their preconceptions—for example, a 2012 poll showed 27% of Americans believing that Barack Obama was probably or definitely born in another country. The most valuable information can be hard to find even for those who are online and uncensored, buried in an unscientific media avalanche.
Then there’s what we do with the information we have. The core of a scientific lifestyle is to change your mind when faced with information that disagrees with your views, avoiding intellectual inertia, yet many laud leaders who stubbornly stick to their views as “strong.” Richard Feynman hailed “distrust of experts” as a cornerstone of science, yet herd mentality and blind faith in authority figures are widespread. Logic forms the basis of scientific reasoning, yet wishful thinking, irrational fears and other cognitive biases often dominate decisions.
So what can we do to promote a scientific lifestyle? The obvious answer is improving education. In some countries, having even the most rudimentary education would be a major improvement (less than half of all Pakistanis can read). By undercutting fundamentalism and intolerance, education would curtail violence and war. By empowering women, it would curb poverty and the population explosion. However, even countries that offer everybody education can make major improvements. All too often, schools resemble museums, reflecting the past rather than shaping the future. The curriculum should shift from one watered down by consensus and lobbying to skills our century needs for relationships, health, contraception, time management, critical thinking and recognizing propaganda. For youngsters, learning a global language and typing should trump long division and writing cursive. In the Internet age, my own role as a classroom teacher has changed. I’m no longer needed as a conduit of information, which my students can simply download on their own. Rather, my key role is inspiring a scientific lifestyle, curiosity and desire to learn more.
Now let’s get to the most interesting question: how can we really make a scientific lifestyle take root and flourish? Reasonable people have been making similar arguments for better education since long before I was in diapers, yet rather than improving, education and adherence to a scientific lifestyle are arguably deteriorating further in many countries, including the United States. Why? Clearly because there are powerful forces pushing in the opposite direction, and they’re pushing more effectively. Corporations concerned that a better understanding of certain scientific issues would harm their profits have an incentive to muddy the waters, as do fringe religious groups concerned that questioning their pseudo-scientific claims would erode their power.
So what can we do? The first thing we scientists need to do is get off our high horses, admit that our persuasive strategies have failed, and develop a better strategy. We have the advantage of having the better arguments, but the anti-scientific coalition has the advantage of better funding. However, and this is painfully ironic, it’s also more scientifically organized! If a company wants to change public opinion to increase its profits, it deploys scientific and highly effective marketing tools. What do people believe today? What do we want them to believe tomorrow? Which of their fears, insecurities, hopes and other emotions can we take advantage of? What’s the most cost-effective way of changing their minds? Plan a campaign. Launch. Done. Is the message oversimplified or misleading? Does it unfairly discredit the competition? That’s par for the course when marketing the latest smartphone or cigarette, so it would be naive to think that the code of conduct should be any different when this coalition fights science. Yet we scientists are often painfully naive, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. Based on what scientific argument will it make a hoot of a difference if we grumble, “We won’t stoop that low” and “People need to change” in faculty lunchrooms and recite statistics to journalists? We scientists have basically been saying, “Tanks are unethical, so let’s fight tanks with swords.”