Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel
(Developments in computers will also have an enormous impact on the future of the job market. Futurists sometimes speculate that the only people who will have jobs decades into the future will be highly skilled computer scientists and technicians. But actually workers such as sanitation men, construction workers, firemen, police, and so forth, will also have jobs in the future because what they do involves pattern recognition. Every crime, piece of garbage, tool, and fire is different and hence cannot be managed by robots. Ironically, college-educated workers, such as low-level accountants, brokers, and tellers, may lose their jobs in the future since their work is semirepetitive and involves keeping track of numbers, a task that computers excel at.)
In addition to pattern recognition, the second problem with the development of robots is even more fundamental, and that is their lack of “common sense.” Humans know, for example,
• Water is wet.
• Mothers are older than their daughters.
• Animals do not like pain.
• You don’t come back after you die.
• Strings can pull, but not push.
• Sticks can push, but not pull.
• Time does not run backward.
But there is no line of calculus or mathematics that can express these truths. We know all of this because we have seen animals, water, and strings, and we have figured out the truth by ourselves. Children learn common sense by bumping into reality. The intuitive laws of biology and physics are learned the hard way, by interacting with the real world. But robots haven’t experienced this. They know only what has been programmed into them beforehand.
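A toy sketch makes the difficulty concrete. In the hypothetical Python fragment below (nothing like CYC’s actual machinery), common sense is just a hand-typed list of facts, and the program cannot tell the difference between a statement that is false and one that nobody thought to enter:

```python
# A toy common-sense "knowledge base": facts are hand-typed triples.
# Purely illustrative; real systems like CYC use far richer logic.

facts = {
    ("water", "is", "wet"),
    ("mother", "older_than", "daughter"),
    ("string", "can", "pull"),
    ("stick", "can", "push"),
}

def knows(subject, relation, obj):
    """The program 'knows' a fact only if someone explicitly entered it."""
    return (subject, relation, obj) in facts

print(knows("water", "is", "wet"))     # True: it was typed in
print(knows("string", "can", "push"))  # False: correctly absent (it's untrue)
print(knows("fire", "is", "hot"))      # False: true and obvious to any child,
                                       # but nobody programmed it in
```

Every obvious truth a child absorbs for free must be typed in by hand, one entry at a time.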
(As a result, the jobs of the future will also include those that require common sense, that is, artistic creativity, originality, acting talent, humor, entertainment, analysis, and leadership. These are precisely the qualities that make us uniquely human and that computers have difficulty duplicating.)
In the past, mathematicians have tried to mount a crash program that could amass all the laws of common sense once and for all. The most ambitious attempt is CYC (short for encyclopedia), the brainchild of Douglas Lenat, the head of Cycorp. Like the Manhattan Project, the $2 billion crash program that built the atomic bomb, CYC was to be the “Manhattan Project” of artificial intelligence, the final push that would achieve true artificial intelligence.
Not surprisingly, Lenat’s motto is, “Intelligence is 10 million rules.” (Lenat has a novel way in which to find new laws of common sense; he has his staff read the pages of scandalous tabloids and lurid gossip rags. Then he asks CYC if it can spot the errors in the tabloids. Actually, if Lenat succeeds in this, CYC may actually be more intelligent than most tabloid readers!)
One of the goals of CYC is to attain “breakeven,” that is, the point at which a robot will be able to understand enough so that it can digest new information on its own simply by reading magazines and books found in any library. At that point, like a baby bird leaving the nest, CYC will be able to flap its wings and take off on its own.
But since the project began in 1984, its credibility has suffered from a common problem in AI: making predictions that generate headlines but are wildly unrealistic. Lenat predicted that in ten years, by 1994, CYC would contain 30 to 50 percent of “consensus reality.” Today CYC is not even close. As the scientists of Cycorp have found out, millions and millions of lines of code need to be programmed in order for a computer to approximate the common sense of a four-year-old child. So far the latest version of the CYC program contains only a paltry 47,000 concepts and 306,000 facts. Despite Cycorp’s regularly optimistic press releases, one of Lenat’s coworkers, R. V. Guha, who left the team in 1994, was quoted as saying, “CYC is generally viewed as a failed project…. We were killing ourselves trying to create a pale shadow of what had been promised.”
In other words, attempts to program all the laws of common sense into a single computer have floundered, simply because there are so many laws of common sense. Humans learn these laws effortlessly because we tediously continue to bump into the environment throughout our lives, quietly assimilating the laws of physics and biology, but robots do not.
Microsoft founder Bill Gates admits, “It has been much harder than expected to enable computers and robots to sense their surrounding environment and to react quickly and accurately…for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures, and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot.”
Proponents of the top-down approach to artificial intelligence, however, point out that progress in this direction, although at times glacial, is happening in labs around the world. For example, for the past few years the Defense Advanced Research Projects Agency (DARPA), which often funds state-of-the-art technology projects, has sponsored a $2 million prize for the creation of a driverless vehicle that can navigate by itself around a rugged terrain in the Mojave Desert. In 2004 not a single entry in the DARPA Grand Challenge could finish the race. In fact the top car managed to travel 7.4 miles before breaking down. But in 2005 the Stanford Racing Team’s driverless car successfully navigated the grueling 132-mile course (although it took the car seven hours to do so). Four other cars also completed the race. (Some critics noted that the rules permitted the cars to use GPS navigation systems along a long deserted path; in effect, the cars could follow a predetermined road map without many obstructions, so the cars never had to recognize complex obstacles in their path. In real driving, cars have to navigate unpredictably around other cars, pedestrians, construction sites, traffic jams, and so forth.)
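The critics’ complaint is easy to make concrete. In the hypothetical sketch below, “driving” a preset GPS route reduces to steering toward the next stored coordinate; nothing in the loop recognizes obstacles, pedestrians, or other cars:

```python
import math

# Hypothetical waypoint follower, as the critics describe it: the route is
# a predetermined list of coordinates, and "perception" is just the
# distance to the next one. No obstacle recognition anywhere.

waypoints = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (200.0, 50.0)]

def drive(pos, speed=5.0, reach=2.0):
    for target in waypoints:
        while (d := math.dist(pos, target)) > reach:
            theta = math.atan2(target[1] - pos[1], target[0] - pos[0])
            step = min(speed, d)               # don't overshoot the waypoint
            pos = (pos[0] + step * math.cos(theta),
                   pos[1] + step * math.sin(theta))
        print(f"reached waypoint {target}")
    return pos

drive((0.0, 0.0))
```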
Bill Gates is cautiously optimistic that robotic machines may be the “next big thing.” He likens the field of robotics now to the personal computer field he helped to start thirty years ago. Like the PC, it may be poised to take off. “No one can say with any certainty when—or if—this industry will achieve critical mass,” he writes. “If it does, though, it may well change the world.”
(Once robots with humanlike intelligence become commercially available, there will be a huge market for them. Although true robots do not exist today, preprogrammed robots do exist and have proliferated. The International Federation of Robotics estimates that there were about 2 million of these personal robots in 2004, and that another 7 million would be installed by 2008. The Japanese Robot Association predicts that by 2025 the personal robot industry, today worth $5 billion, will be worth $50 billion per year.)
THE BOTTOM-UP APPROACH
Because of the limitations of the top-down approach to artificial intelligence, attempts have been made to use a “bottom-up” approach instead, that is, to mimic evolution and the way a baby learns. Insects, for example, do not navigate by scanning their environment and reducing the image to trillions upon trillions of pixels that they process with supercomputers. Instead insect brains are composed of “neural networks,” learning machines that slowly learn how to navigate in a hostile world by bumping into it. At MIT, walking robots were notoriously difficult to create via the top-down approach. But simple buglike mechanical creatures that bump into the environment and learn from scratch can successfully scurry around the floor within a matter of minutes.
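A minimal sketch of the bottom-up idea (hypothetical, and far simpler than any real robot): a single artificial neuron wired to two bump sensors, which starts out steering at random and corrects its own weights only when it collides with something. Nobody programs in the rule “turn away from obstacles”; the rule emerges from the bumps:

```python
import random

# Hypothetical bottom-up learner: one "neuron," two bump sensors.
# It begins with random weights and adjusts them only after collisions.

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # left, right sensor
LEARNING_RATE = 0.5

def steer(bump_left, bump_right):
    """Positive output = turn right, negative output = turn left."""
    return weights[0] * bump_left + weights[1] * bump_right

def learn_from_bump(bump_left, bump_right):
    """After a collision, nudge the weights toward turning away from it."""
    desired = 1.0 if bump_left else -1.0    # bumped on left -> turn right
    error = desired - steer(bump_left, bump_right)
    weights[0] += LEARNING_RATE * error * bump_left
    weights[1] += LEARNING_RATE * error * bump_right

# Life as a series of collisions: each bump is a lesson.
for _ in range(50):
    learn_from_bump(*random.choice([(1, 0), (0, 1)]))

print(f"left bump  -> steer {steer(1, 0):+.2f}  (turns right)")
print(f"right bump -> steer {steer(0, 1):+.2f}  (turns left)")
```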
Rodney Brooks, director of MIT’s famed Artificial Intelligence Laboratory, famous for its huge, lumbering “top-down” walking robots, became a heretic when he explored the idea of tiny “insectoid” robots that learned to walk the old-fashioned way, by stumbling and bumping into things. Instead of using elaborate computer programs to mathematically compute the precise position of their feet as they walked, his insectoids used trial and error to coordinate their leg motions using little computer power. Today many of the descendants of Brooks’s insectoid robots are on Mars gathering data for NASA, scurrying across the bleak Martian landscape with a mind of their own. Brooks believes that his insectoids are ideally suited to explore the solar system.
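That trial-and-error style of learning to walk can be caricatured in a few lines (a hypothetical sketch, not Brooks’s actual controllers): randomly jiggle the timing of six legs, keep any change that carries the robot farther, and never solve an equation of motion:

```python
import random

# Hypothetical trial-and-error gait learner: perturb the phase offsets of
# six legs at random and keep whatever walks farther. The "physical trial"
# below is a stand-in that rewards the alternating-tripod gait, in which
# neighboring legs move half a cycle apart.

def distance_walked(phases):
    ideal = [0.0, 0.5, 0.0, 0.5, 0.0, 0.5]
    return -sum((p - q) ** 2 for p, q in zip(phases, ideal))

phases = [random.random() for _ in range(6)]   # six legs, random timing
best = distance_walked(phases)

for _ in range(2000):                          # stumble, compare, repeat
    candidate = [p + random.gauss(0, 0.05) for p in phases]
    score = distance_walked(candidate)
    if score > best:                           # kept only if it walks farther
        phases, best = candidate, score

print([round(p, 2) for p in phases])           # drifts toward the tripod gait
```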
One of Brooks’s projects has been COG, an attempt to create a mechanical robot with the intelligence of a six-month-old child. On the outside COG looks like a jumble of wires, circuits, and gears, except that it has a head, eyes, and arms. No laws of intelligence have been programmed into it. Instead it is designed to focus its eyes on a human trainer, who tries to teach it simple skills. (One researcher who became pregnant made a bet as to which would learn faster, COG or her child by the age of two. The child far surpassed COG.)
For all the successes in mimicking the behavior of insects, robots using neural networks have performed miserably when their programmers have tried to duplicate in them the behavior of higher organisms like mammals. The most advanced robot using neural networks can walk across the room or swim in water, but it cannot jump and hunt like a dog in the forest, or scurry around the room like a rat. Many large neural network robots may consist of tens to perhaps hundreds of “neurons”; the human brain, however, has over 100 billion neurons. C. elegans, a very simple worm whose nervous system has been completely mapped by biologists, has just over 300 neurons, perhaps one of the simplest nervous systems found in nature. Yet there are over 7,000 synapses between those neurons. As simple as C. elegans is, its nervous system is so complex that no one has yet been able to construct a computer model of its brain. (In 1988 one computer expert predicted that by now we should have robots with about 100 million artificial neurons. Actually, a neural network with 100 neurons is considered exceptional.)
The supreme irony is that machines can effortlessly perform tasks that humans consider “hard,” such as multiplying large numbers or playing chess, but machines stumble badly when asked to perform tasks that are supremely “easy” for human beings, such as walking across a room, recognizing faces, or gossiping with a friend. The reason is that our most advanced computers are basically just adding machines. Our brain, however, is exquisitely designed by evolution to solve the mundane problems of survival, which require a whole complex architecture of thought, such as common sense and pattern recognition. Survival in the forest did not depend on calculus or chess, but on evading predators, finding mates, and adjusting to changing environments.
MIT’s Marvin Minsky, one of the original founders of AI, summarizes the problems of AI in this way: “The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There’s no machine today that can do that.”
Some believe that eventually there will be a grand synthesis between the two approaches, the top-down and bottom-up, which may provide the key to artificial intelligence and humanlike robots. After all, when a child learns, although he first relies mainly on the bottom-up approach, bumping into his surroundings, eventually he receives instruction from parents, books, and schoolteachers, and learns from the top-down approach. As an adult, we constantly blend these two approaches. A cook, for example, reads from a recipe but also constantly samples the dish as it is cooking.
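The cook’s blend of recipe and tasting can be put in code (a hypothetical sketch): a top-down rule supplies the starting point, and bottom-up feedback from experience refines it:

```python
# Hypothetical sketch of blending the two approaches, like the cook:
# a written rule gives the first guess; tasting corrects it.

salt_grams = 5.0                # top-down: the recipe says "add 5 g of salt"

def taste(salt):
    """Bottom-up feedback from sampling the dish: positive means too bland,
    negative means too salty. (In this toy world, 7 g happens to be ideal.)"""
    return 7.0 - salt

for _ in range(10):             # cook, taste, adjust, repeat
    salt_grams += 0.5 * taste(salt_grams)

print(f"{salt_grams:.2f} g")    # the recipe's 5 g has been nudged toward 7 g
```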
Hans Moravec says, “Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts,” probably within the next forty years.
EMOTIONAL ROBOTS?
One consistent theme in literature and art is the mechanical being that yearns to become human, to share in human emotions. Not content to be made of wires and cold steel, it wishes to laugh, cry, and feel all the emotional pleasures of a human being.
Pinocchio, for example, was the puppet that wanted to become a real boy. The Tin Man in The Wizard of Oz wanted to have a heart. And Data, on Star Trek, is a robot that can outperform all humans in strength and intelligence, yet still yearns to become human.
Some people have even suggested that our emotions represent the highest quality of what it means to be human. No machine will ever be able to thrill at a blazing sunset or laugh at a humorous joke, they claim. Some say that it is impossible for machines ever to have emotions, since emotions represent the pinnacle of human development.
But the scientists working on AI and trying to break down emotions paint a different picture. To them emotions, far from being the essence of humanity, are actually a by-product of evolution. Simply put, emotions are good for us. They helped us to survive in the forest, and even today they help us to navigate the dangers of life.
For example, “liking” something is very important evolutionarily, because most things are harmful to us. Of the millions of objects that we bump into every day, only a handful are beneficial to us. Hence to “like” something is to single out one of the tiny fraction of things that can help us from the millions of things that might hurt us.
Similarly, jealousy is an important emotion, because our reproductive success is vital in ensuring the survival of our genes to the next generation. (In fact, that is why there are so many emotionally charged feelings related to sex and love.)
Shame and remorse are important because they help us to learn the socialization skills necessary to function in a cooperative society. If we never say we’re sorry, eventually we will be expelled from the tribe, diminishing our chances of surviving and passing on our genes.
Loneliness, too, is an essential emotion. At first loneliness seems to be unnecessary and redundant. After all, we can function alone. But longing to be with companions is also important for our survival, since we depend on the resources of the tribe to survive.
In other words, when robots become more advanced, they, too, might be equipped with emotions. Perhaps robots will be programmed to bond with their owners or caretakers, to ensure that they don’t wind up in the garbage dump. Having such emotions would help to ease their transition into society, so that they could be helpful companions, rather than rivals of their owners.
Computer expert Hans Moravec believes that robots will be programmed with emotions such as “fear” to protect themselves. For example, if a robot’s batteries are running down, the robot “would express agitation, or even panic, with signals that humans can recognize. It would go to the neighbors and ask them to use their plug, saying, ‘Please! Please! I need this! It’s so important, it’s such a small cost! We’ll reimburse you!’”
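Such a “fear” amounts to mapping an internal state onto escalating, human-readable behavior, something like this hypothetical sketch:

```python
# Hypothetical sketch of Moravec-style "fear": the battery level is mapped
# onto increasingly urgent behavior that humans can recognize.

def battery_emotion(charge):             # charge: fraction from 0.0 to 1.0
    if charge > 0.5:
        return "calm", "continue working"
    if charge > 0.2:
        return "agitation", "search for a free power outlet"
    return "panic", "beg the neighbors for their plug"

for level in (0.9, 0.4, 0.1):
    emotion, action = battery_emotion(level)
    print(f"battery at {level:.0%}: feeling {emotion} -> {action}")
```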
Emotions are vital in decision making, as well. People who have suffered a certain kind of brain injury lack the ability to experience emotions. Their reasoning ability is intact, but they cannot express any feelings. Neurologist Dr. Antonio Damasio of the University of Iowa College of Medicine, who has studied people with these types of brain injuries, concludes that they seem “to know, but not to feel.”
Dr. Damasio finds that such individuals are often paralyzed in making the smallest decisions. Without emotions to guide them, they endlessly debate over this option or that option, leading to crippling indecision. One patient of Dr. Damasio spent half an hour trying to decide the date of his next appointment.
Scientists believe that emotions are processed in the “limbic system,” which lies deep in the center of the brain. When people suffer from a loss of communication between the neocortex (which governs rational thinking) and the limbic system, their reasoning powers are intact but they have no emotions to guide them in making decisions. Sometimes we have a “hunch” or a “gut reaction” that propels our decision making. People with injuries that affect the communication between the rational and emotional parts of the brain do not have this ability.
For example, when we go shopping we unconsciously make thousands of value judgments about almost everything we see, such as “This is too expensive, too cheap, too colorful, too silly, or just right.” For people with this type of brain injury, shopping can be a nightmare because everything seems to have the same value.
As robots become more intelligent and are able to make choices of their own, they could likewise become paralyzed with indecision. (This is reminiscent of the parable of the donkey sitting between two bales of hay that eventually dies of starvation because it cannot decide which to eat.) To aid them, robots of the future may need to have emotions hardwired into their brains. Commenting on the lack of emotions in robots, Dr. Rosalind Picard of the MIT Media Lab says, “They can’t feel what’s most important. That’s one of their biggest failings. Computers just don’t get it.”
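The donkey’s dilemma is easy to reproduce. In the hypothetical sketch below, a purely rational chooser deadlocks between two equally scored options, while a tiny, arbitrary “gut” bias commits at once:

```python
import random

# Hypothetical sketch of Buridan's donkey: rational scoring alone ties,
# so the rational chooser never commits; a small "hunch" breaks the tie.

options = ["left bale", "right bale"]
value = {"left bale": 1.0, "right bale": 1.0}     # identical rational value

def rational_choice():
    """Commit only when one option is strictly better; otherwise deadlock."""
    best = max(value.values())
    winners = [o for o in options if value[o] == best]
    return winners[0] if len(winners) == 1 else None   # None = starvation

def emotional_choice():
    """Add a tiny arbitrary 'hunch' to each score, then commit."""
    return max(options, key=lambda o: value[o] + random.uniform(0, 0.01))

print(rational_choice())    # None: paralyzed between equal options
print(emotional_choice())   # always picks something and moves on
```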
As Russian novelist Fyodor Dostoevsky wrote, “If everything on Earth were rational, nothing would happen.”
In other words, robots of the future may need emotions to set goals and to give meaning and structure to their “lives,” or else they will find themselves paralyzed with infinite possibilities.
ARE THEY CONSCIOUS?
There is no universal consensus as to whether machines can be conscious, or even as to what consciousness means; no one has yet come up with a suitable definition of it.
Marvin Minsky describes consciousness as more of a “society of minds,” that is, the thinking process in our brain is not localized but spread out, with different centers competing with one another at any given time. Consciousness may then be viewed as a sequence of thoughts and images issuing from these different, smaller “minds,” each one grabbing and competing for our attention.
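That picture can be caricatured as a handful of mini-minds bidding for attention (a hypothetical sketch, not a model of the brain):

```python
# Hypothetical caricature of Minsky's "society of minds": several small
# agents each submit a salience score; the loudest one grabs attention.

def hunger_agent(state):
    return state["hours_since_meal"] / 10, "find food"

def fear_agent(state):
    return (1.0 if state["loud_noise"] else 0.0), "freeze"

def curiosity_agent(state):
    return state["novelty"], "investigate"

agents = [hunger_agent, fear_agent, curiosity_agent]

def moment_of_consciousness(state):
    """Attention goes to whichever mini-mind bids highest at this instant."""
    return max(agent(state) for agent in agents)[1]

print(moment_of_consciousness(
    {"hours_since_meal": 6, "loud_noise": False, "novelty": 0.3}))  # find food
print(moment_of_consciousness(
    {"hours_since_meal": 2, "loud_noise": True, "novelty": 0.3}))   # freeze
```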
If this is true, perhaps “consciousness” has been overblown, perhaps there have been too many papers devoted to a subject that has been overmystified by philosophers and psychologists. Maybe defining consciousness is not so hard. As Sydney Brenner of the Salk Institute in La Jolla says, “I predict that by 2020—the year of good vision—consciousness will have disappeared as a scientific problem…. Our successors will be amazed by the amount of scientific rubbish discussed today—that is, if they have the patience to trawl through the electronic archives of obsolete journals.”