
Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100


by Michio Kaku


  By 2019, he predicts, a $1,000 personal computer will have as much raw power as a human brain. Soon after that, computers will leave us in the dust. By 2029, a $1,000 personal computer will be 1,000 times more powerful than a human brain. By 2045, a $1,000 computer will be a billion times more intelligent than all humans combined. Even small computers will surpass the ability of the entire human race.

  After 2045, computers become so advanced that they make copies of themselves that are ever increasing in intelligence, creating a runaway singularity. To satisfy their never-ending, ravenous appetite for computer power, they will begin to devour the earth, asteroids, planets, and stars, and even affect the cosmological history of the universe itself.

  I had the chance to visit Kurzweil in his office outside Boston. Walking through the corridor, you see the awards and honors he has received, as well as some of the musical instruments he has designed, which are used by top musicians, such as Stevie Wonder. He explained to me that there was a turning point in his life. It came when he was unexpectedly diagnosed with type II diabetes at the age of thirty-five. Suddenly, he was faced with the grim reality that he would not live long enough to see his predictions come true. His body, after years of neglect, had aged beyond his years. Rattled by this diagnosis, he attacked the problem of personal health with the same enthusiasm and energy he used for the computer revolution. (Today, he consumes more than 100 pills a day and has written books on the revolution in longevity. He expects that microscopic robots will one day be able to clean out and repair the human body so that it can live forever. His philosophy is that he would like to live long enough to see the medical breakthroughs that can prolong our life spans indefinitely. In other words, he wants to live long enough to live forever.)

  Recently, he embarked on an ambitious plan to launch the Singularity University, based in the NASA Ames laboratory in the Bay Area, which trains a cadre of scientists to prepare for the coming singularity.

  There are many variations and combinations of these various themes.

  Kurzweil himself believes, “It’s not going to be an invasion of intelligent machines coming over the horizon. We’re going to merge with this technology …. We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.”

  Any idea as controversial as the singularity is bound to unleash a backlash. Mitch Kapor, founder of Lotus Development Corporation, says that the singularity is “intelligent design for the IQ 140 people …. This proposition that we’re heading to this point at which everything is going to be just unimaginably different—it’s fundamentally, in my view, driven by a religious impulse. And all the frantic arm-waving can’t obscure that fact for me.”

  Douglas Hofstadter has said, “It’s as if you took a lot of good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.”

  No one knows how this will play out. But I think the most likely scenario is the following.

  MOST LIKELY SCENARIO: FRIENDLY AI

  First, scientists will probably take simple measures to ensure that robots are not dangerous. At the very least, scientists can put a chip in robot brains to automatically shut them off if they have murderous thoughts. In this approach, all intelligent robots will be equipped with a fail-safe mechanism that can be switched on by a human at any time, especially when a robot exhibits errant behavior. At the slightest hint that a robot is malfunctioning, any voice command will immediately shut it down.

  Or specialized hunter robots may be created whose duty is to neutralize deviant robots. These robot hunters will be specifically designed to have superior speed, strength, and coordination in order to capture errant robots. They will be designed to understand the weak points of any robotic system and how robots behave under certain conditions. Humans can also be trained in this skill. In the movie Blade Runner, a specially trained cadre of agents, one of them played by Harrison Ford, is skilled in the techniques necessary to neutralize any rogue robot.

  Since it will take many decades of hard work for robots to slowly go up the evolutionary scale, there will not be a sudden moment when humanity is caught off guard and we are all shepherded into zoos like cattle. Consciousness, as I see it, is a process that can be ranked on a scale, rather than being a sudden evolutionary event, and it will take many decades for robots to ascend this scale of consciousness. After all, it took Mother Nature millions of years to develop human consciousness. So humans will not be caught off guard one day when the Internet unexpectedly “wakes up” or robots suddenly begin to plan for themselves.

  This is the option preferred by science fiction writer Isaac Asimov, who envisioned each robot hardwired in the factory with three laws to prevent it from getting out of control. He devised his famous three laws of robotics to prevent robots from hurting humans or themselves. (Basically, the three laws state that robots cannot harm humans, they must obey humans, and they must protect themselves, in that order.)

  (Even with Asimov’s three laws, there are also problems when there are contradictions among the three laws. For example, if one creates a benevolent robot, what happens if humanity makes self-destructive choices that can endanger the human race? Then a friendly robot may feel that it has to seize control of the government to prevent humanity from harming itself. This was the problem faced by Will Smith in the movie version of I, Robot, when the central computer decides that “some humans must be sacrificed and some freedoms must be surrendered” in order to save humanity. To prevent a robot from enslaving us in order to save us, some have advocated that we must add the zeroth law of robotics: Robots cannot harm or enslave the human race.)

  But many scientists are leaning toward something called “friendly AI,” where we design our robots to be benign from the very beginning. Since we are the creators of these robots, we will design them, from the very start, to perform only useful and benevolent tasks.

  The term “friendly AI” was coined by Eliezer Yudkowsky, a founder of the Singularity Institute for Artificial Intelligence. Friendly AI is a bit different from Asimov’s laws, which are forced upon robots, perhaps against their will. (Asimov’s laws, imposed from the outside, could actually invite the robots to devise clever ways to circumvent them.) In friendly AI, by contrast, robots are free to murder and commit mayhem. There are no rules that enforce an artificial morality. Rather, these robots are designed from the very beginning to desire to help humans rather than destroy them. They choose to be benevolent.

  This has given rise to a new field called “social robotics,” which is designed to give robots the qualities that will help them integrate into human society. Scientists at Hanson Robotics, for example, have stated that one mission for their research is to design robots that “will evolve into socially intelligent beings, capable of love and earning a place in the extended human family.”

  But one problem with all these approaches is that the military is by far the largest funder of AI systems, and these military robots are specifically designed to hunt, track, and kill humans. One can easily imagine future robotic soldiers whose missions are to identify enemy humans and eliminate them with unerring efficiency. One would then have to take extraordinary precautions to guarantee that the robots don’t turn against their masters as well. Predator drone aircraft, for example, are run by remote control, so there are humans constantly directing their movements, but one day these drones may be autonomous, able to select and take out their own targets at will. A malfunction in such an autonomous plane could lead to disastrous consequences.

  In the future, however, more and more funding for robots will come from the civilian commercial sector, especially from Japan, where robots are designed to help rather than destroy. If this trend continues, then perhaps friendly AI could become a reality. In this scenario, it is the consumer sector and market forces that will eventually dominate robotics, so that there will be a vast commercial interest in investing in friendly AI.

  MERGING WITH ROBOTS

  In addition to friendly AI, there is also another option: merging with our creations. Instead of simply waiting for robots to surpass us in intelligence and power, we should try to enhance ourselves, becoming superhuman in the process. Most likely, I believe, the future will proceed with a combination of these two goals, i.e., building friendly AI and also enhancing ourselves.

  This is an option being explored by Rodney Brooks, former director of the famed MIT Artificial Intelligence Laboratory. He has been a maverick, overturning cherished but ossified ideas and injecting innovation into the field. When he entered the field, the top-down approach was dominant in most universities. But the field was stagnating. Brooks raised a few eyebrows when he called for creating an army of insectlike robots that learned via the bottom-up approach by bumping into obstacles. He did not want to create another dumb, lumbering robot that took hours to walk across the room. Instead, he built nimble “insectoids” or “bugbots” that had almost no programming at all but would quickly learn to walk and navigate around obstacles by trial and error. He envisioned the day that his robots would explore the solar system, bumping into things along the way. It was an outlandish idea, proposed in his essay “Fast, Cheap, and Out of Control,” but his approach eventually led to an array of new avenues. One by-product of his idea is the Mars Rovers now scurrying over the surface of the Red Planet. Not surprisingly, he was also the chairman of iRobot, the company that markets buglike vacuum cleaners to households across the country.

  One problem, he feels, is that workers in artificial intelligence follow fads, adopting the paradigm of the moment, rather than thinking in fresh ways. For example, he recalls, “When I was a kid, I had a book that described the brain as a telephone-switching network. Earlier books described it as a hydrodynamic system or a steam engine. Then in the 1960s, it became a digital computer. In the 1980s, it became a massively parallel digital computer. Probably there’s a kid’s book out there somewhere that says the brain is just like the World Wide Web ….”

  For example, some historians have noted that Sigmund Freud’s analysis of the mind was influenced by the coming of the steam engine. The spread of railroads through Europe in the mid- to late 1800s had a profound effect on the thinking of intellectuals. In Freud’s picture, there were flows of energy in the mind that constantly competed with other flows, much like steam coursing through the pipes of an engine. The continual interaction between the superego, the id, and the ego resembled the continual interaction between the steam pipes of a locomotive. And the fact that repressing these flows of energy could create neuroses is analogous to the way that steam power, if bottled up, can be explosive.

  Marvin Minsky admitted to me that another paradigm misguided the field for many years. Since many AI researchers are former physicists, there is something called “physics envy,” that is, the desire to find the single, unifying theme underlying all intelligence. In physics, we have the desire to follow Einstein to reduce the physical universe to a handful of unifying equations, perhaps finding an equation one inch long that can summarize the universe in a single coherent idea. Minsky believes that this envy led AI researchers to look for that single unifying theme for consciousness. Now, he believes, there is no such thing. Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness. Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task. He calls this the “society of minds”: that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years.

  Rodney Brooks was also looking for a similar paradigm, but one that had never been fully explored before. He soon realized that Mother Nature and evolution had already solved many of these problems. For example, a mosquito, with only a few hundred thousand neurons, can outperform the greatest military robotic system. Unlike our flying drones, mosquitoes, with brains smaller than the head of a pin, can independently navigate around obstacles and find food and mates. Why not learn from nature and biology? If you follow the evolutionary scale, you learn that insects and mice did not have the rules of logic programmed into their brains. It was through trial and error that they engaged the world and mastered the art of survival.

  Now he is pursuing yet another heretical idea, contained in his essay “The Merger of Flesh and Machines.” He notes that the old laboratories at MIT, which used to design silicon components for industrial and military robots, are now being cleaned out, making way for a new generation of robots made of living tissue as well as silicon and steel. He foresees an entirely new generation of robots that will marry biological and electronic systems to create entirely new architectures for robots.

  He writes, “My prediction is that by the year 2100 we will have very intelligent robots everywhere in our everyday lives. But we will not be apart from them—rather, we will be part robot and connected with the robots.”

  He sees this progressing in stages. Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions. For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf. These artificial cochleas work by connecting electronic hardware with biological “wetware,” that is, neurons. The cochlear implant has several components. A microphone is placed outside the ear. It receives sound waves, processes them, and transmits the signals by radio to the implant that is surgically placed inside the ear. The implant receives the radio messages and converts them into electrical currents that are sent down electrodes in the ear. The cochlea recognizes these electrical impulses and sends them on to the brain. These implants can use up to twenty-four electrodes and can process half a dozen frequencies, enough to recognize the human voice. Already, 150,000 people worldwide have had cochlear implants.
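
  To make that signal chain concrete, the sketch below walks through the steps just described in Python: a short frame of audio from the microphone is split into frequency bands, and the energy in each band is mapped to a stimulation current for one electrode. The sampling rate, band edges, and all function names are illustrative assumptions for this example, not the actual implant’s firmware.

```python
# Illustrative sketch of the cochlear-implant signal chain described above.
# Not actual device firmware; the sampling rate, band edges, and names are
# assumptions chosen for the example.
import numpy as np

SAMPLE_RATE = 16_000          # Hz, assumed microphone sampling rate
NUM_ELECTRODES = 24           # the text cites implants with up to 24 electrodes
BANDS = np.logspace(np.log10(200), np.log10(7000), NUM_ELECTRODES + 1)

def microphone_frame(duration_s=0.01):
    """Stand-in for the external microphone: one short frame of audio."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * 440 * t)            # a 440 Hz test tone

def band_energies(frame):
    """Speech processor: split the frame into frequency bands with an FFT
    and measure the energy in each band."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1 / SAMPLE_RATE)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(BANDS[:-1], BANDS[1:])])

def electrode_currents(energies, max_current_ua=1000.0):
    """Implant side: convert per-band energy into stimulation currents,
    here simply scaled to a maximum current in microamps."""
    if energies.max() == 0:
        return np.zeros_like(energies)
    return max_current_ua * energies / energies.max()

# One pass through the chain: microphone -> processor -> electrode currents.
currents = electrode_currents(band_energies(microphone_frame()))
print(np.round(currents, 1))
```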

  Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain. One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons. Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision. These groups, for the first time in history, have been able to restore a degree of sight to the blind. Patients have been able to see up to 50 pixels lighting up before them. Eventually, scientists should be able to scale this up so that they can see thousands of pixels.

  The patients can see fireworks, the outlines of their hands, shining objects and lights, the presence of cars and people, and the borders of objects. “At Little League games, I can see where the catcher, batter, and umpire are,” says Linda Morfoot, one of the test subjects.

  So far, thirty patients have had artificial retinas with up to sixty electrodes. But the Department of Energy’s Artificial Retina Project, based at the University of Southern California, is already planning a new system with more than 200 electrodes. A 1,000-electrode device is also being studied (but if too many electrodes are packed onto the chip, it could cause overheating of the retina). In this system, a miniature camera mounted on a blind person’s eyeglasses takes pictures and sends them wirelessly to a microprocessor, worn on a belt, that relays the information to the chip placed directly on the retina. This chip sends tiny pulses directly into the retinal nerves that are still active, thereby bypassing defective retinal cells.

  STAR WARS ROBOTIC HAND

  Using mechanical enhancements, one can also duplicate the feats of science fiction, including the robotic hand of Star Wars and the X-ray vision of Superman. In The Empire Strikes Back, Luke Skywalker has his hand chopped off by a lightsaber wielded by the evil Darth Vader, his father. No problem. Scientists in this faraway galaxy quickly create a new mechanical hand, complete with fingers that can touch and feel.
  This may sound like science fiction, yet it is already here. A significant advance was made by scientists in Italy and Sweden, who have actually made a robotic hand that can “feel.” One subject, Robin Ekenstam, a twenty-two-year-old who had his right hand amputated to remove a cancerous tumor, can now control the motion of his mechanical fingers and feel the response. Doctors connected the nerves in Ekenstam’s arm to the chips contained in his mechanical hand so that he can control the finger movements with his brain. The artificial “smart hand” has four motors and forty sensors. The motion of his mechanical fingers is then relayed to his brain so he has feedback. In this way, he is able to control and also “feel” the motion of his hand. Since feedback is one of the essential features of body motion, this could revolutionize the way we treat amputees with prosthetic limbs.
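
  The essential idea here is a closed loop: decoded nerve signals drive the motors, the hand’s sensors report the resulting pressure, and that reading is sent back to the wearer as touch. The short sketch below, a deliberately simplified model rather than the actual device’s software, illustrates one cycle of such a loop; the class and function names are hypothetical, and only the counts of motors and sensors come from the description above.

```python
# A purely illustrative model of the sensory-feedback loop described above.
# Class, function, and variable names are hypothetical; only the motor and
# sensor counts come from the text.
from dataclasses import dataclass, field

NUM_MOTORS = 4     # the text cites four motors ...
NUM_SENSORS = 40   # ... and forty sensors in the artificial hand

@dataclass
class SmartHand:
    motor_positions: list = field(default_factory=lambda: [0.0] * NUM_MOTORS)

    def drive_motors(self, commands):
        """Apply nerve-decoded grip commands (0.0 = fully open, 1.0 = closed)."""
        self.motor_positions = [max(0.0, min(1.0, c)) for c in commands]

    def read_sensors(self):
        """Return simulated pressure readings, one per sensor, that grow
        as the fingers close around an object."""
        grip = sum(self.motor_positions) / NUM_MOTORS
        return [grip] * NUM_SENSORS

def control_loop(decoded_nerve_commands, hand):
    """One cycle: intent -> motors -> sensors -> feedback toward the nerves.
    In the real system the pressure reading would be relayed back to the
    arm's nerves; here it is simply returned."""
    hand.drive_motors(decoded_nerve_commands)
    pressure = hand.read_sensors()
    return sum(pressure) / len(pressure)

hand = SmartHand()
felt = control_loop([0.8, 0.8, 0.8, 0.8], hand)   # a firm grip command
print(f"average felt pressure: {felt:.2f}")
```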

  Ekenstam says, “It’s great. I have a feeling that I have not had for a long time. Now I am getting sensation back. If I grab something tightly, then I can feel it in the fingertips, which is strange, since I don’t have them anymore.”

  One of the researchers, Christian Cipriani of the Scuola Superiore Sant’Anna, says, “First, the brain controls the mechanical hand without any muscle contractions. Second, the hand will be able to give feedback to the patient so he will be able to feel. Just like a real hand.”

  This development is significant because it means that one day humans may effortlessly control mechanical limbs as if they were flesh and bone. Instead of tediously learning how to move arms and legs of metal, people will treat these mechanical appendages as if they were real, feeling every nuance of the limbs’ movements via electronic feedback mechanisms.

 
