by Michio Kaku
Kurzweil maintains that, instead of taking over, our robot creations will unlock a new world of health and prosperity. According to him, microscopic robots, or nanobots, will circulate in our blood and “destroy pathogens, correct DNA errors, eliminate toxins, and perform many other tasks to enhance our physical well-being.” He is hopeful that science will soon discover a cure for aging and firmly believes that if he lives long enough, he will live forever. He confided to me that he takes several hundred pills a day, anticipating his own immortality. But in case he doesn’t make it, he has willed his body to be preserved in liquid nitrogen at a cryonics firm.
Kurzweil also foresees a time much further into the future when robots will convert the atoms of the Earth into computers. Eventually, all the atoms of the sun and solar system would be absorbed into this grand thinking machine. He told me that when he gazes into the heavens, he sometimes imagines that he might, in due course, witness evidence of superintelligent robots rearranging the stars.
Not everyone is convinced, however, of this rosy future. Mitch Kapor, founder of Lotus Development Corporation, says that the singularity movement is “fundamentally, in my view, driven by a religious impulse. And all the frantic arm-waving can’t obscure that fact for me.” Hollywood has countered Kurzweil’s utopia with a worst-case scenario for what it might mean to create our own evolutionary successors, who might push us aside and make us go the way of the dodo bird. In the movie The Terminator, the military creates an intelligent computer network called Skynet, which monitors all of our nuclear weapons. It is designed to protect us from the threat of nuclear war. But then, Skynet becomes self-aware. The military, frightened that the machine has developed a mind of its own, tries to shut it down. Skynet, programmed to protect itself, does the only thing it can do to prevent this, and that is to destroy the human race. It proceeds to launch a devastating nuclear war, wiping out civilization. Humans are reduced to raggedy bands of misfits and guerrillas trying to defeat the awesome power of the machines.
Is Hollywood just trying to sell tickets by scaring the pants off moviegoers? Or could this really happen? This question is thorny in part because the concepts of self-awareness and consciousness are so clouded by moral, philosophical, and religious arguments that we lack a rigorous, agreed-upon framework in which to understand them. Before we continue our discussion of machine intelligence, we need to establish a clear definition of self-awareness.
SPACE-TIME THEORY OF CONSCIOUSNESS
I have proposed a theory that I call the space-time theory of consciousness. It is testable, reproducible, falsifiable, and quantifiable. It not only defines self-awareness but also allows us to quantify it on a scale.
The theory starts with the idea that animals, plants, and even machines can be conscious. Consciousness, I claim, is the process of creating a model of yourself using multiple feedback loops—for example, in space, in society, or in time—in order to carry out a goal. To measure consciousness, we simply count the number and types of feedback loops necessary for subjects to achieve a model of themselves.
The smallest unit of consciousness might be found in a thermostat or photocell, which employs a single feedback loop to create a model of itself in terms of temperature or light. A flower might have, say, ten units of consciousness, since it has ten feedback loops measuring water, temperature, the direction of gravity, sunlight, et cetera. In my theory, these loops can be grouped into levels of consciousness. Thermostats and flowers would belong to Level 0.
Level 1 consciousness includes that of reptiles, fruit flies, and mosquitoes, which generate models of themselves with regard to space. A reptile has numerous feedback loops to determine the coordinates of its prey and the location of potential mates, potential rivals, and itself.
Level 2 involves social animals. Their feedback loops relate to their pack or tribe and produce models of the complex social hierarchy within the group as expressed by emotions and gestures.
These levels roughly mimic the stages of evolution of the mammalian brain. The most ancient part of our brain is at the very back, where balance, territoriality, and instincts are processed. The brain expanded in the forward direction and developed the limbic system, the monkey brain of emotions, located in the center of the brain. This progression from the back to the front is also the way a child’s brain matures.
So, then, what is human consciousness in this scheme? What distinguishes us from plants and animals?
I theorize that humans are different from animals because we understand time. We have temporal consciousness in addition to spatial and social consciousness. The latest part of the brain to evolve is the prefrontal cortex, which lies just behind our forehead. It is constantly running simulations of the future. Animals may seem like they’re planning, for example, when they hibernate, but these behaviors are largely the result of instinct. It is not possible to teach your pet dog or cat the meaning of tomorrow, because they live in the present. Humans, however, are constantly preparing for the future and even for beyond our own life spans. We scheme and daydream—we can’t help it. Our brains are planning machines.
MRI scans have shown that when we plan to perform a task, we access and incorporate previous memories of that same task, making our plans more realistic. One theory states that animals don’t have a sophisticated memory system because they rely on instinct and therefore don’t require the ability to envision the future. In other words, the very purpose of having a memory may be to project it into the future.
Within this framework, we can now define self-awareness: the ability to put ourselves inside a simulation of the future, consistent with a goal.
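For readers who like to tinker, the bookkeeping of the theory can be captured in a few lines of code. The Python sketch below is purely illustrative: the class, the sample subjects, and their loop counts are invented for this example (following the book's thermostat and flower figures where given), not drawn from any real measurement.

```python
from dataclasses import dataclass

# Toy model of the space-time theory of consciousness:
# "units" of consciousness = number of feedback loops,
# and the level depends on what those loops model.
@dataclass
class Subject:
    name: str
    loops: list                   # one entry per feedback loop
    models_space: bool = False    # Level 1: a model of itself in space
    models_society: bool = False  # Level 2: a model of its social group
    models_time: bool = False     # Level 3: simulations of the future

    def units(self) -> int:      # count the feedback loops
        return len(self.loops)

    def level(self) -> int:      # highest kind of model it builds
        if self.models_time:
            return 3
        if self.models_society:
            return 2
        if self.models_space:
            return 1
        return 0

thermostat = Subject("thermostat", ["temperature"])
flower = Subject("flower", ["water", "temperature", "gravity", "sunlight",
                            "humidity", "nutrients", "day length", "touch",
                            "soil acidity", "CO2"])
reptile = Subject("reptile", ["prey location", "mate location",
                              "rival location"], models_space=True)
human = Subject("human", ["(many)"], models_space=True,
                models_society=True, models_time=True)

for s in (thermostat, flower, reptile, human):
    print(f"{s.name}: Level {s.level()}, {s.units()} feedback loop(s)")
```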
When we apply this theory to machines, we see that our best machines at present are on the lowest rung of Level 1 consciousness, based on their ability to locate their position in space. Most, like those built for the DARPA Robotics Challenge, can barely navigate around an empty room. Some systems can partially simulate the future, such as DeepMind’s Go-playing program AlphaGo, but only within an extremely narrow domain. If you ask it to do anything other than play Go, it freezes up.
How much further do we have to go, and what are the steps we will have to take, to achieve a self-aware machine like The Terminator’s Skynet?
CREATING SELF-AWARE MACHINES?
In order to create self-aware machines, we would have to give them an objective. Goals do not magically arise in robots and instead must be programmed into them from the outside. This condition is a tremendous barrier against machine rebellion. Take the 1921 play R.U.R., which first coined the word robot. Its plot describes robots rising up against humans because they see other robots being mistreated. For this to happen, the machines would need to have a high level of preprogramming. Robots do not feel empathy or suffering or a desire to take over the world unless they are instructed to do so.
But let us say, for the sake of argument, that someone gives our robot the aim of eliminating humanity. The computer must then create realistic simulations of the future and place itself in these plans. We now come up against the crucial problem. To be able to list possible scenarios and outcomes and evaluate how realistic they are, the robot would have to understand millions of rules of common sense—the simple laws of physics, biology, and human behavior that we take for granted. Moreover, it would have to understand causality and anticipate the consequences of its actions. Humans learn these laws from decades of experience. One reason childhood lasts so long is that there is so much subtle information to absorb about human society and the natural world. Robots, however, have not been exposed to the great majority of interactions that draw upon shared experience.
I like to think of the case of an experienced bank robber who can plan his next heist efficiently and outsmart the police because he has a large storehouse of memories of previous bank robberies and can understand the effect of each decision he makes. In contrast, to accomplish a simple action such as bringing a gun into a bank to rob it, a computer would have to analyze a complex sequence of secondary events numbering in the thousands, each one involving millions of lines of computer code. It would not intrinsically grasp cause and effect.
It is certainly possible for robots to become self-aware and to have dangerous goals, but you can see why it is so unlikely, especially in the foreseeable future. Inputting all the equations that a machine would need to destroy the human race would be an immensely difficult undertaking. The problem of killer robots can largely be eliminated by preventing anyone from programming them to have objectives harmful to humans. When self-aware robots do arrive, we must add a fail-safe chip that will shut them off if they have murderous thoughts. We can rest easy knowing that we will not be placed in zoos anytime soon, where our robot successors can throw peanuts at us through the bars and make us dance.
This means that when we explore the outer planets and the stars, we can rely on robots to help us build the infrastructure necessary to create settlements and cities on distant moons and planets, but we have to be careful that their goals are consistent with ours and that we have fail-safe mechanisms in place in case they pose a threat. Though we may face danger when robots become self-aware, that won’t happen until late in this century or early in the next, so there is time to prepare.
WHY ROBOTS RUN AMOK
There is one scenario, however, that keeps AI researchers up at night. A robot could conceivably be given an ambiguous or ill-phrased command that, if carried out, would unleash havoc.
In the movie I, Robot, there is a master computer, called VIKI, which controls the infrastructure of the city. VIKI is given the command to protect humanity. But by studying how humans treat other humans, the computer comes to the conclusion that the greatest threat to humanity is humanity itself. It mathematically determines that the only way to protect humanity is to take control over it.
Another example is the tale of King Midas. He asks the god Dionysus for the ability to turn anything into gold by touching it. This power at first seems to be a sure path to riches and glory. But then he touches his daughter, who turns to gold. His food, too, becomes inedible. He finds himself a slave of the very gift he begged for.
H. G. Wells explored a similar predicament with his short story “The Man Who Could Work Miracles.” One day, an ordinary clerk finds himself with an astonishing ability. Anything he wishes for comes true. He goes out drinking late at night with a friend, performing miracles along the way. They don’t want the night to ever end, so he innocently wishes that the Earth would stop rotating. All of a sudden, violent winds and gigantic floods descend upon them. People, buildings, and towns are hurled into space at a thousand miles per hour, the speed of the Earth’s rotation. Realizing that he has destroyed the planet, his last wish is for everything to return to normal—the way it was before he gained his power.
Here, science fiction teaches us to exercise caution. As we develop AI, we must meticulously examine every possible consequence, especially those that may not be immediately obvious. After all, our ability to do so is part of what makes us human.
QUANTUM COMPUTING
To gain a fuller picture of the future of robotics, let’s take a closer look at what goes on inside computers. Currently, most digital computers are based on silicon circuits and obey Moore’s law, which states that computer power doubles every eighteen months. But technological advancement in the past few years has begun to slow down from its frantic pace in the previous decades, and some have posited an extreme scenario in which Moore’s law collapses and seriously disrupts the world economy, which has come to depend on the nearly exponential growth of computing power. If this happens, Silicon Valley could turn into another Rust Belt. To head off this potential crisis, physicists around the world are seeking a replacement for silicon. They are working on an assortment of alternative computers, including molecular, atomic, DNA, quantum dot, optical, and protein computers, but none of them are ready for prime time.
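To see what “nearly exponential growth” means in practice, here is the simple arithmetic behind the doubling rule quoted above. The time spans are arbitrary examples, not a forecast.

```python
# Moore's law as stated above: computing power doubles every 18 months.
DOUBLING_PERIOD_MONTHS = 18

for years in (3, 15, 30):
    doublings = years * 12 / DOUBLING_PERIOD_MONTHS
    print(f"after {years:2d} years: ~{2 ** doublings:,.0f}x the power")
# after  3 years: ~4x
# after 15 years: ~1,024x
# after 30 years: ~1,048,576x
```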
There is also a wild card in the mix. As silicon transistors become smaller and smaller, they will approach the size of atoms. Currently, a standard Pentium chip may have silicon layers about twenty atoms thick. Within a decade, these chips may have layers only five atoms deep; if so, electrons may begin to leak out, as predicted by quantum theory, creating short circuits. A revolutionary type of computer will be necessary. Molecular computers, perhaps based on graphene, may replace silicon chips. But one day even these molecular computers may encounter problems with effects predicted by quantum theory.
At that point, we may have to build the ultimate computer, the quantum computer, capable of operating on the smallest transistor possible: a single atom.
Here’s how it might work. Silicon circuits contain gates that can be either open or closed to the flow of electrons. Information is stored on the basis of these open or closed circuits. Binary mathematics, which is based on a series of 1’s and 0’s, describes this process: 0 may represent a closed gate, and 1 may represent an open gate.
Now consider replacing silicon with a row of individual atoms. Atoms are like tiny magnets, with a north pole and a south pole. When atoms are placed in a magnetic field, you might expect each one to point either up or down. In reality, each atom points up and down simultaneously until a final measurement is made. In a sense, an atom can be in two states at the same time. This defies common sense, but it is the reality according to quantum mechanics. Its advantage is enormous. You can store only so much data if each magnet simply points up or down. But if each magnet is a mixture of states, you can pack far greater amounts of information onto a tiny cluster of atoms. Each “bit” of information, which can be either 1 or 0, now becomes a “qubit,” a complex mixture of 1’s and 0’s with vastly greater storage capacity.
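The difference between a bit and a qubit is easy to play with numerically. The sketch below is a standard textbook-style simulation, not tied to any actual quantum hardware: a qubit is just two complex amplitudes, measurement collapses it at random, and the final loop shows why describing many qubits classically gets out of hand so quickly.

```python
import random

# A qubit is a superposition a|0> + b|1> with |a|^2 + |b|^2 = 1.
# Measurement yields 0 with probability |a|^2, else 1.
def measure(a: complex, b: complex) -> int:
    return 0 if random.random() < abs(a) ** 2 else 1

# An equal "up and down at the same time" state, like the atom above:
a = b = 2 ** -0.5            # amplitudes 1/sqrt(2)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print(counts)                 # roughly [5000, 5000]

# The storage point: n classical bits hold one n-bit value at a time,
# but describing n qubits takes 2**n complex amplitudes.
for n in (10, 20, 30):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")
```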
The point of bringing up quantum computers is that they may hold the key to exploring the universe. In principle, they may give us the ability to exceed human intelligence. They are still a wild card: we don’t know when quantum computers will arrive or what their full potential may be. But they could prove invaluable in space exploration. Rather than simply build the settlements and cities of the future, they may take us a step further and give us the ability to do the high-level planning necessary to terraform entire planets.
Quantum computers would be vastly more potent than ordinary digital computers. A digital computer might need centuries to crack a code based on an exceptionally difficult math problem, such as factoring a number hundreds of digits long into two prime factors. But a quantum computer, calculating with a vast number of mixed quantum states, could swiftly complete the decryption. The CIA and other spy agencies are acutely aware of their promise. Among the mountains of classified material from the National Security Agency that were leaked to the press a few years ago was a top-secret document indicating that quantum computers were being carefully monitored by the agency but that no breakthrough was expected in the immediate future.
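To make the asymmetry concrete: multiplying two primes is trivial, while recovering them from the product is a search. The toy below uses trial division, the crudest possible method, and two small made-up primes; real cryptographic keys are hundreds of digits long, where even the best classical algorithms are hopeless.

```python
# Easy direction: multiply two primes. Hard direction: get them back.
def factor(n: int) -> tuple:
    d = 2
    while d * d <= n:            # trial division up to sqrt(n)
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1                   # n itself is prime

p, q = 1_000_003, 1_000_033       # two small primes (a toy "key")
N = p * q                         # one multiplication: instant
print(factor(N))                  # ~a million trial divisions: slow
# For a 600-digit N, sqrt(N) has ~300 digits; brute force is hopeless.
```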
Given the excitement and hubbub over quantum computers, when might we expect to have them?
WHY DON’T WE HAVE QUANTUM COMPUTERS?
Computing on individual atoms can be both a blessing and a curse. While atoms can store an enormous quantity of information, the most minute impurity, vibration, or disturbance could ruin a calculation. It is necessary, but notoriously difficult, to totally isolate the atoms from the outside world. They must reach a state of what is called “coherence,” in which they vibrate in unison. But the slightest interference—say, someone sneezing in the next building—could cause the atoms to vibrate randomly and independently of one another. “Decoherence” is one of the biggest problems we face in the development of quantum computers.
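Why a sneeze next door matters can be seen in a toy calculation. The usefulness of a superposition lives in the relative phase between its amplitudes; the sketch below (illustrative only, with made-up noise levels) gives each run a random phase kick and shows the average coherence dying away as the disturbance grows.

```python
import cmath
import random

# Toy decoherence: environmental noise gives each run a random phase
# kick; averaged over many runs, the coherence term decays toward 0.
def average_coherence(noise: float, runs: int = 100_000) -> float:
    total = 0
    for _ in range(runs):
        theta = random.gauss(0.0, noise)   # random phase kick (radians)
        total += cmath.exp(1j * theta)      # surviving coherence
    return abs(total / runs)

for noise in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"phase noise {noise:.1f} rad -> "
          f"coherence {average_coherence(noise):.3f}")
# noise 0.0 -> 1.000 (atoms vibrating in unison)
# noise 4.0 -> ~0.000 (decohered: random, independent vibrations)
```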
Because of this problem, quantum computers today can only perform rudimentary calculations. In fact, the world record for a quantum computer involves about twenty qubits. This may not seem so impressive, but it is truly an achievement. It may take several decades, or perhaps until late in this century, to attain a high-functioning quantum computer, but when the technology arrives, it will dramatically augment the power of AI.
ROBOTS IN THE FAR FUTURE
Considering the primitive state of automatons today, I also would not expect to see self-aware robots for a number of decades—again perhaps not until the end of the century. In the intervening years, we will likely first deploy sophisticated remote-controlled machines to continue the work of exploring space, and then, perhaps, automatons with innovative learning capabilities to begin laying the foundations for human settlements. Later will come self-replicating automatons to complete infrastructure, and then, finally, quantum-fueled conscious machines to help us establish and maintain an intergalactic civilization.
Of course, all this talk of reaching distant stars raises an important question. How are we, or our robots, supposed to get there? How accurate are the starships we see every night on TV?
Why go to the stars?
Because we are the descendants of those primates who chose to look over the next hill.
Because we won’t survive here indefinitely.
Because the stars are there, beckoning with fresh horizons.