Physics of the Future


by Michio Kaku


  Back in 1986, scientists were able to completely map the locations of all the neurons in the nervous system of the tiny worm C. elegans. This was initially heralded as a breakthrough that would allow us to decode the mystery of the brain. But knowing the precise location of its 302 nerve cells and roughly 6,000 chemical synapses did not produce any new understanding of how this worm functions, even decades later.

  In the same way, even after the human brain is finally reverse engineered, it will take many decades to understand how all the parts work and fit together. But if the brain is completely decoded by the end of the century, we will have taken a giant step toward creating humanlike robots. Then what is to prevent them from taking over?

  WHEN MACHINES BECOME CONSCIOUS

  In The Terminator movie series, the Pentagon proudly unveils Skynet, a sprawling, foolproof computer network designed to faithfully control the U.S. nuclear arsenal. It flawlessly carries out its tasks until one day in 1997, when something unexpected happens. Skynet becomes conscious. Skynet’s human handlers, shocked to realize that their creation has suddenly become sentient, try to shut it down. But they are too late. In self-defense, Skynet decides that the only way to protect itself is to destroy humanity by launching a devastating nuclear war. Three billion people are soon incinerated in countless nuclear infernos. In the aftermath, Skynet unleashes legion after legion of robotic killing machines to slaughter the remaining stragglers. Modern civilization crumbles, reduced to tiny, pathetic bands of misfits and rebels.

  Worse, in The Matrix trilogy, humans are so primitive that they don’t even realize that the machines have already taken over. Humans carry out their daily affairs, thinking everything is normal, oblivious to the fact that they are actually living in pods. Their world is a virtual reality simulation run by the robot masters. Human “existence” is just a software program, running inside a giant computer, that is fed into the brains of the humans living in these pods. The only reason the machines even bother to keep humans around is to use them as batteries.

  Hollywood, of course, makes its living by scaring the pants off its audience. But it does raise a legitimate scientific question: What happens when robots finally become as smart as us? What happens when robots wake up and become conscious? Scientists vigorously debate not whether, but when, this momentous event will happen.

  According to some experts, our robot creations will gradually rise up the evolutionary tree. Today, they are as smart as cockroaches. In the future, they will be as smart as mice, then rabbits, dogs and cats, and monkeys, until finally they rival humans. It may take decades to slowly climb this path, but these experts believe it is only a matter of time before the machines exceed us in intelligence.

  AI researchers are split on the question of when this might happen. Some say that within twenty years robots will approach the intelligence of the human brain and then leave us in the dust. In 1993, Vernor Vinge said, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended…. I’ll be surprised if this event occurs before 2005 or after 2030.”

  On the other hand, Douglas Hofstadter, author of Gödel, Escher, Bach, says, “I’d be very surprised if anything remotely like this happened in the next 100 years to 200 years.”

  When I talked to Marvin Minsky of MIT, one of the founding figures in the history of AI, he was careful to tell me that he places no timetable on when this event will happen. He believes the day will come but shies away from being the oracle and predicting the precise date. (Being the grand old man of AI, a field he helped to create almost from scratch, perhaps he has seen too many predictions fail and create a backlash.)

  A large part of the problem with these scenarios is that there is no universal consensus as to the meaning of the word consciousness. Philosophers and mathematicians have grappled with the word for centuries and have nothing to show for it. The seventeenth-century thinker Gottfried Leibniz, a co-inventor of calculus, once wrote, “If you could blow the brain up to the size of a mill and walk about inside, you would not find consciousness.” Philosopher David Chalmers has even catalogued almost 20,000 papers written on the subject, with no consensus whatsoever.

  Nowhere in science have so many devoted so much to create so little.

  Consciousness, unfortunately, is a buzzword that means different things to different people; there is no universally accepted definition of the term.

  I personally think the problem has been twofold: first a failure to clearly define consciousness, and then a failure to quantify it.

  But if I were to venture a guess, I would theorize that consciousness consists of at least three basic components:

  1. sensing and recognizing the environment

  2. self-awareness

  3. planning for the future, that is, setting goals, simulating possible outcomes, and plotting strategy

  In this approach, even simple machines and insects have some form of consciousness, which can be ranked numerically on a scale of 0 to 10. There is a continuum of consciousness, which can be quantified. A hammer cannot sense its environment, so it rates a 0 on this scale. But a thermostat can: the essence of a thermostat is that it senses the temperature of its environment and acts on that information by changing it, so it rates a 1. Hence, machines with feedback mechanisms have a primitive form of consciousness. Worms also have this ability. They can sense the presence of food, mates, and danger and act on this information, but can do little else. Insects, which can detect several parameters at once (sight, sound, smell, pressure, and so on), would rank higher still, perhaps a 2 or 3.
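  To make this rubric concrete, here is a minimal sketch in code. The Entity fields and the component weights are my own illustrative assumptions; the text fixes only the anchor points (a hammer at 0, a thermostat at 1, insects around 2 or 3, humans near the top).

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    senses: int        # how many environmental parameters it can sense and act on
    self_aware: bool   # passes something like the mirror test
    plans_ahead: bool  # simulates the future and sets its own goals

def consciousness_score(e: Entity) -> int:
    """Rank an entity on a rough 0-10 scale using the three components above."""
    score = min(e.senses, 4)            # sensing/recognition: up to 4 points
    score += 3 if e.self_aware else 0   # self-awareness: 3 points
    score += 3 if e.plans_ahead else 0  # planning for the future: 3 points
    return score

for e in [Entity("hammer", 0, False, False),      # senses nothing -> 0
          Entity("thermostat", 1, False, False),  # one feedback loop -> 1
          Entity("insect", 3, False, False),      # several senses -> 3
          Entity("human", 4, True, True)]:        # all three components -> 10
    print(e.name, consciousness_score(e))
```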

  The highest form of this sensing would be the ability to recognize and understand objects in the environment. Humans can immediately size up their environment and act accordingly, and hence rate high on this scale. This, however, is where robots score badly. Pattern recognition, as we have seen, is one of the principal roadblocks to artificial intelligence. Robots can sense their environments much better than humans can, but they do not understand or recognize what they see. On this scale of consciousness, robots score near the bottom, down among the insects, because of their lack of pattern recognition.

  The next-higher level of consciousness involves self-awareness. If you place a mirror next to most male animals, they will immediately react aggressively, even attacking the mirror. The image causes the animal to defend its territory. Many animals lack awareness of who they are. But monkeys, elephants, dolphins, and some birds quickly realize that the image in the mirror represents themselves and they cease to attack it. Humans would rank near the top on this scale, since they have a highly developed sense of who they are in relation to other animals, other humans, and the world. In addition, humans are so aware of themselves that they can talk silently to themselves, so they can evaluate a situation by thinking.

  Third, animals can be ranked by their ability to formulate plans for the future. Insects, to the best of our knowledge, do not set elaborate goals for the future. Instead, for the most part, they react to immediate situations on a moment-to-moment basis, relying on instinct and cues from the immediate environment.

  In this sense, predators are more conscious than prey. Predators have to plan ahead, by searching for places to hide, by planning to ambush, by stalking, by anticipating the flight of the prey. Prey, however, only have to run, so they rank lower on this scale.

  Furthermore, primates can improvise as they make plans for the immediate future. If they are shown a banana that is just out of reach, then they might devise strategies to grab that banana, such as using a stick. So, when faced with a specific goal (grabbing food), primates will make plans into the immediate future to achieve that goal.

  But on the whole, animals do not have a well-developed sense of the distant past or future. Apparently, there is no tomorrow in the animal kingdom. We have no evidence that they can think days into the future. (Animals will store food in preparation for the winter, but this is largely genetic: they have been programmed by their genes to react to plunging temperatures by seeking out food.)

  Humans, however, have a very well-developed sense of the future and continually make plans. We constantly run simulations of reality in our heads. In fact, we can contemplate plans far beyond our own lifetimes. We judge other humans, in fact, by their ability to predict evolving situations and formulate concrete strategies. An important part of leadership is to anticipate future situations, weigh possible outcomes, and set concrete goals accordingly.

  In other words, this form of consciousness involves predicting the future, that is, creating multiple models that approximate future events. This requires a very sophisticated understanding of common sense and the rules of nature. It means that you ask yourself “what if” repeatedly. Whether planning to rob a bank or run for president, this kind of planning means being able to run multiple simulations of possible realities in your head.

  All indications are that only humans have mastered this art in nature.

  We also see this when psychological profiles of test subjects are analyzed. Psychologists often compare the psychological profiles of adults to their profiles when they were children, and ask: What single quality predicted their success in marriage, career, wealth, and so on? When one controls for socioeconomic factors, one characteristic stands out from all the others: the ability to delay gratification. According to the long-term studies of Walter Mischel of Columbia University, and many others, children who were able to refrain from immediate gratification (eating a marshmallow given to them) and held out for a greater long-term reward (getting two marshmallows instead of one) consistently scored higher on almost every measure of future success: SAT scores, life, love, and career.

  But being able to defer gratification also reflects a higher level of awareness and consciousness: these children were able to simulate the future and realize that the delayed reward was greater. Seeing the future consequences of our actions, in other words, requires a higher level of awareness.

  AI researchers, therefore, should aim to create a robot with all three characteristics. The first is hard to achieve, since robots can sense their environment but cannot make sense of it. Self-awareness is easier to achieve. But planning for the future requires common sense, an intuitive understanding of what is possible, and concrete strategies for reaching specific goals.

  So we see that common sense is a prerequisite for the highest level of consciousness. In order for a robot to simulate reality and predict the future, it must first master millions of commonsense rules about the world around it. But common sense is not enough. Common sense is just the “rules of the game,” rather than the rules of strategy and planning.

  On this scale, we can then rank all the various robots that have been created.

  We see that Deep Blue, the chess-playing machine, would rank very low. It can beat the world champion in chess, but it cannot do anything else. It is able to run a simulation of reality, but only for playing chess; it is incapable of running simulations of any other reality. The same is true of many of the world’s largest computers. They excel at simulating the reality of one object, for example, modeling a nuclear detonation, the wind patterns around a jet airplane, or the weather. These computers can run such simulations much better than a human can. But they are also pitifully one-dimensional, and hence useless at surviving in the real world.

  Today, AI researchers are clueless about how to duplicate all these processes in a robot. Most throw up their hands and say that somehow huge networks of computers will show “emergent phenomena” in the same way that order sometimes spontaneously coalesces from chaos. When asked precisely how these emergent phenomena will create consciousness, most roll their eyes to the heavens.

  Although we do not know how to create a conscious robot, we can use this framework for measuring consciousness to imagine what a robot more advanced than us would look like.

  Such robots would excel at the third characteristic: they would be able to run complex simulations of the future far beyond ours, from more perspectives, with more detail and depth. Their simulations would be more accurate than ours, because they would have a better grasp of common sense and the rules of nature and hence would be better able to ferret out patterns. They would be able to anticipate problems that we might ignore or not even be aware of. Moreover, they would be able to set their own goals. If their goals include helping the human race, then everything is fine. But if one day they formulate goals in which humans are in the way, the consequences could be nasty.

  But this raises the next question: What happens to humans in this scenario?

  WHEN ROBOTS EXCEED HUMANS

  In one scenario, we puny humans are simply pushed aside as a relic of evolution. It is a law of evolution that fitter species arise to displace unfit ones, and perhaps humans will be lost in the shuffle, eventually winding up in zoos where our robotic creations come to stare at us. Perhaps that is our destiny: to give birth to superrobots that treat us as an embarrassingly primitive footnote in their evolution. Perhaps our role in history is to give birth to our evolutionary successors, and then to get out of their way.

  Douglas Hofstadter confided to me that this might be the natural order of things, but we should treat these superintelligent robots as we do our children, because that is what they are, in some sense. If we can care for our children, he said to me, then why can’t we also care about intelligent robots, which are also our children?

  Hans Moravec contemplates how we may feel being left in the dust by our robots: “… life may seem pointless if we are fated to spend it staring stupidly at our ultraintelligent progeny as they try to describe their ever more spectacular discoveries in baby talk that we can understand.”

  When we finally hit the fateful day when robots are smarter than us, not only will we no longer be the most intelligent beings on earth, but our creations may make copies of themselves that are even smarter than they are. This army of self-replicating robots will then create endless future generations of robots, each one smarter than the previous one. Since robots can theoretically produce ever-smarter generations in a very short period of time, this process will eventually explode exponentially, until they begin to devour the resources of the planet in their insatiable quest to become ever more intelligent.
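  To see schematically why such a runaway is often called a “singularity,” consider a simple idealization (my illustration; the text gives no formal model): suppose each robot generation is a fixed factor k > 1 smarter than its predecessor and, being smarter, builds its own successor in a geometrically shrinking time.

```latex
I_{n+1} = k\,I_n \;\Rightarrow\; I_n = k^n I_0, \qquad
T_{\text{total}} = \sum_{n=0}^{\infty} c\,r^n = \frac{c}{1-r} \quad (0 < r < 1)
```

  Intelligence then grows without bound, yet the total elapsed time converges to a finite value: unbounded growth compressed into a finite horizon, which is exactly the flavor of the singularity discussed below.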

  In one scenario, this ravenous appetite for ever-increasing intelligence will eventually ravage the resources of the entire planet, so the entire earth becomes a computer. Some envision these superintelligent robots then shooting out into space to continue their quest for more intelligence, until they reach other planets, stars, and galaxies in order to convert them into computers. But since the planets, stars, and galaxies are so incredibly far away, perhaps the computer may alter the laws of physics so its ravenous appetite can race faster than the speed of light to consume whole star systems and galaxies. Some even believe it might consume the entire universe, so that the universe becomes intelligent.

  This is the “singularity.” The word comes from the world of relativistic physics, my personal specialty, where a singularity represents a point of infinite gravity at the heart of a black hole, from which nothing can escape. Because even light cannot escape, the black hole is surrounded by a horizon beyond which we cannot see.

  The idea of an AI singularity was first mentioned in 1958, when the mathematician Stanislaw Ulam (who made a key breakthrough in the design of the hydrogen bomb) recalled a conversation with John von Neumann. Ulam wrote, “One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the human race beyond which human affairs, as we know them, could not continue.” Versions of the idea have been kicking around for decades since, but it was amplified and popularized by the science fiction writer and mathematician Vernor Vinge in his novels and essays.

  But this leaves the crucial question unanswered: When will the singularity take place? Within our lifetimes? Perhaps in the next century? Or never? We recall that the participants at the 2009 Asilomar conference put the date anywhere between 20 and 1,000 years in the future.

  One man who has become the spokesperson for the singularity is inventor and bestselling author Ray Kurzweil, who has a penchant for making predictions based on the exponential growth of technology. Kurzweil once told me that when he gazes at the distant stars at night, he thinks one ought to be able to see some cosmic evidence of the singularity happening in some distant galaxy: with the ability to devour or rearrange whole star systems, a rapidly expanding singularity should leave some footprint behind. (His detractors say that he is whipping up a near-religious fervor around the singularity. His supporters counter that, judging by his track record, he has an uncanny ability to correctly see into the future.)

  Kurzweil cut his teeth on the computer revolution by starting up companies in diverse fields involving pattern recognition, such as speech recognition technology, optical character recognition, and electronic keyboard instruments. In 1999, he wrote a best seller, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which predicted when robots will surpass us in intelligence. In 2005, he wrote The Singularity Is Near and elaborated on those predictions. The fateful day when computers surpass human intelligence will come in stages.

  By 2019, he predicts, a $1,000 personal computer will have as much raw processing power as a human brain. Soon after that, computers will leave us in the dust. By 2029, a $1,000 personal computer will be 1,000 times more powerful than a human brain. By 2045, a $1,000 computer will be a billion times more intelligent than all humans combined. Even small computers will surpass the ability of the entire human race.
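  As a rough consistency check on these figures (my arithmetic, not Kurzweil’s), the jump he predicts between 2019 and 2029, a factor of 1,000 in ten years, implies that computing power doubles roughly once a year:

```python
import math

# A 1,000x gain over the 10 years from 2019 to 2029 implies a doubling time
# of 10 / log2(1000) years -- about one year per doubling.
doubling_time = 10 / math.log2(1000)
print(f"implied doubling time: {doubling_time:.2f} years")  # ~1.00

# At that constant pace, the 16 years from 2029 to 2045 would add only
# another ~65,000x; reaching "a billion times all humans combined" by 2045
# requires the accelerating growth Kurzweil argues for, not a constant rate.
print(f"16 more years at that pace: {2 ** (16 / doubling_time):,.0f}x")
```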

 
