Love and Sex with Robots: The Evolution of Human-Robot Relationships
The psychological effect on computer users of interacting with an empathetic program was evaluated in an experimental study at Stanford University. The participants were asked to play casino blackjack on a Web site, in the virtual company of a computer character who was represented by a photograph of a human face. The computer character would communicate with the participants by displaying text in a speech bubble adjacent to its photograph. The participant and the computer character “sat” next to each other at the blackjack table, and both played against an invisible dealer. After each hand was completed, the computer character would react with an observation about its own performance and an observation about the participant’s performance.
Two versions of the program were used, one in which the computer character appeared to be self-centered and one in which it appeared to be empathetic. To simulate self-centeredness, the character would express a positive emotion, through its facial expression and its comments, whenever it won a hand and a negative emotion whenever it lost, but it showed no interest in whether the participant won or lost. The empathetic version displayed positive emotions when the participant won a hand and negative emotions when the participant lost.
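The two reaction policies are simple enough to express in a few lines of code. The sketch below is purely illustrative, not the Stanford study's actual software; the function names and the "win"/"lose" outcome encoding are my own assumptions.

```python
# A minimal sketch of the two reaction policies described above -- not the
# study's actual software. Names and outcome values are illustrative.

def self_centered_reaction(own_result: str, participant_result: str) -> str:
    """Reacts only to its own hand and ignores the participant's result."""
    if own_result == "win":
        return "Yes! I won that hand."
    return "Ugh, I lost again."

def empathetic_reaction(own_result: str, participant_result: str) -> str:
    """Reacts to the participant's hand rather than to its own."""
    if participant_result == "win":
        return "Nicely played -- you won!"
    return "Bad luck; I'm sorry you lost that one."

# Example: the character lost its own hand while the participant won.
print(self_centered_reaction("lose", "win"))  # -> "Ugh, I lost again."
print(empathetic_reaction("lose", "win"))     # -> "Nicely played -- you won!"
```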
The investigators found that when the computer character adopted a purely self-centered attitude, it had little or no effect on the participants’ reactions to its virtual personality. But when the computer character appeared to empathize with the users’ results at the blackjack table, the participants developed a liking for the character, a trust in it, and a perception that it cared about their wins and losses and was generally supportive. The conclusion of the study was that “just as people respond to being cared about by other people, users respond to [computer characters] that care.”3
A robot’s social competence, and therefore the way it is perceived by humans as a social being, is inextricably linked to its emotional intelligence.* We saw in chapter 3 that the design of robot dogs benefits from the canine-ethology literature. Similarly, creating an accurate and sophisticated model of human emotion is a task that benefits from the literature on human psychology, and it is unlikely to be many years before all the key elements described in that literature have been modeled and programmed. Just imagine how powerful these combined technologies will become a few decades from now—speech, vision, emotion, conversation—when each of them has been taken to a humanlike level, a level that today is only a dream for AI researchers. The resulting combination will be an emotional intelligence commensurate with that of a sophisticated human being. The effect will be sensational.
Even though computers have such a wide range of capabilities that they are already pervasive throughout many aspects of our lives, they are not yet our intellectual and emotional equals in every respect, and they are not yet at the point where human-computer friendships can develop in a way that mirrors human-human friendships. Perhaps the strongest influence on the attitudes of those who do not believe in a future populated with virtual friends is their difficulty in relating to an artifact, an object that they know is not alive in the sense we usually employ the word. I do not for a moment expect all this to change overnight, and until computer models of emotion and personality are sufficiently advanced to enable the creation of high-quality virtual minds on a par with those of humans, it seems to me inevitable that there will be many who doubt the potential of robots to be our friends. At the present time, we are happy (or at least most of us are) with the idea of robots assembling our cars, mowing our lawns, vacuuming our floors, and playing a great game of chess, but not with robots as baby-sitters or intimate friends. Yet the concept of robots as baby-sitters is, intellectually, one that ought to appeal to parents more than the idea of having a teenager or similarly inexperienced baby-sitter responsible for the safety of their infants. The fundamental difference, at the present time, between this responsibility and that of building cars or playing grandmaster-level chess is surely that robots have not yet been shown to be capable baby-sitters, whereas they have been shown to excel on the assembly line and on the chessboard. What is needed to convert the unbelievers is simply proof that robots can indeed take care of the safety of our little ones better than we can. And why not? Their smoke-detection capabilities will be better than ours, and they will never be distracted for the brief moment it can take for an infant to do itself some terrible damage or be snatched by a deranged stranger.
One example of how strong disbelief in, and a lack of acceptance of, intelligent computer technologies can change to the diametrically opposite viewpoint has been seen in the airline industry, with automatic pilots on passenger planes. When I was first an airline passenger, around 1955, we had the comfort of seeing the captain of the aircraft walking through the cabin, nodding a hello to some of the passengers and stopping to chat with others while his copilot took the controls. There was something reassuring about this humanization of the process of flying, to know that people with such obvious authority, and the nice uniforms to match, were up at the front ensuring that our takeoffs and landings were safe and negotiating the plane securely through whatever storms and around whatever mountain ranges might pose a danger. In those days, if all airline passengers had been offered the choice between having an authoritative human pilot in charge and having a computer responsible for their safety, I feel certain that the vast majority would have preferred the human. But today, fifty-plus years later, the situation is very different. Computers have been shown to be so superior to human pilots in many situations that prosecutions have been brought in the United States against pilots who did not engage the computer system to fly their aircraft when they should have done so. This about-face, from a lack of confidence in the capabilities of a computer to an insistence that the computer is superior to humans at the task, will undoubtedly occur in many other domains in which computer use is being planned or already implemented, including the domain of relationships. The time will come when, instead of a parent’s asking an adolescent child, “Why do you want to date such a schmuck?” or “Wouldn’t you feel happier about going to the high school prom with that nice boy next door?,” the gist of the conversation could be, “Which robot is taking you to the party tonight?” And as the acceptability of sociable robots becomes pervasive and they come to be treated as our peers, the question will be rewritten simply as, “Who’s taking you to the party tonight?” Whether it is a robot or a human will become almost irrelevant.
Different people will of course adapt to the emotional capacities of robots at different rates, depending largely on a combination of their attitude and their experience with robots. Those who accept that computers (and hence robots) already possess or will come to possess humanlike psychological and mental capabilities will be the first converts. But those who argue that a computer “cannot have emotions” or that robots will “never” have humanlike personalities will probably remain doubters or unbelievers for years, until well after many of their friends have accepted the concept and embraced the robot culture. Between those two camps, there will be those who are open-minded, willing to give robots a try and experience for themselves the feelings of amazement, joy, and emotional satisfaction that robots will bring. I believe that the vast majority in this category will quickly become converts, accepting the concept of robots as relationship partners for humans.
Bill Yeager suggests that this level of acceptance will not happen overnight, because the breadth and depth of the human experience currently go far beyond the virtual pets and robots made possible by the current state of artificial intelligence. As long as robots are different enough from us to be regarded as a novelty, our relationships with them will to some extent be superficial and will not even approach the relationships we have with our pets. One of the factors that cause us to develop strong bonds with our (animal) pets is that they share our impermanence, our frailties, being caught up in the same life-death cycle that we are. Yeager believes that to achieve a level of experience comparable with that of humans, robots will have to grow up with us; acquire our experiences with us; be our friends, mates, and companions; and die with us; and that they will be killed in automobile accidents, perhaps suffer from the same diseases, get university degrees, and be dumb, average, bright, and geniuses.
I take a different view. I believe that almost all of the experiential benefits that Yeager anticipates robots will need can either be designed and programmed into them or can be compensated for by other attributes that they will possess but we do not. Just as AI technologies have made it possible for a computer to play world-class chess, despite thinking in completely different ways from human grandmasters, so yet-to-be-developed AI technologies will make it possible for robots to behave as though they had enjoyed the full depth and breadth of human experience without actually having done any such thing. Some might be skeptical of the false histories that such behavior will imply, but I believe that the behavior will be sufficiently convincing to minimize the level of any such skepticism or to encourage a robot’s owner to rationalize its behavior as being perhaps influenced by a previous existence (with the same robot brain and memories but in a different robot body).
I see the resulting differences between robots and humans as being no greater than the cultural differences between peoples from different countries or even from different parts of the same country. Will robots and humans typically interact and empathize with one another any less than, say, Shetland Islanders with Londoners, or the bayou inhabitants of Louisiana with the residents of suburban Boston?
Preferring Computers to People
Many people actually prefer interacting with computers to interacting with other people. I first learned of this tendency in 1967, in the somewhat restricted domain of medical diagnosis. I was a young artificial-intelligence researcher at Glasgow University, where a small department had recently started up—the Department of Medicine in Relation to Mathematics and Computing. The head of this department, Wilfred Card, explained to me that his research into computer-aided diagnosis took him regularly to the alcoholism clinic at the Western Infirmary, one of Glasgow’s teaching hospitals. There he would ask his patients how many alcoholic beverages they usually drank each day, and his computer program would ask the same patients the same question on a different day. The statistics showed that his patients would generally confess to a significantly higher level of imbibing when typing their alcohol intake on a teletype* than when they were talking to the professor. This phenomenon, of people being more honest in their communication with computers than they are with other humans, has also been found in other situations where questions are asked by a computer, such as in the computerized interviewing of job applicants. Another example comes from a survey of students’ drug use conducted by Lee Sproull and Sara Kiesler at Carnegie Mellon University: only 3 percent of the students admitted to using drugs when the survey was conducted with pencil and paper, but when the same survey was carried out by e-mail, the figure rose to 14 percent.
A preference for interacting with a computer program that appeared sociable rather than with a person was observed a year or so after Card’s experience by Joseph Weizenbaum at MIT, when a version of his famous ELIZA program was run on a computer in a Massachusetts hospital. ELIZA’s conversational skills operated simply by turning around what a user “said” to it, so that if, for example, the user typed, “My father does not like me,” the program might reply, “Why does your father not like you?” or “I’m sorry to hear that your father doesn’t like you.”* Even though ELIZA was dumb, with no memory of the earlier parts of its conversation and with no understanding of what the user was saying to it, half of those who used it at the hospital said that they preferred interacting with ELIZA to interacting with another human being, despite having been told very firmly by the hospital staff that it was only a computer program. This preference might have arisen from the fact that the patients knew they were not being judged in any way, since they would have assumed, correctly in this case, that the program did not have any judgmental capabilities or tendencies.
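ELIZA's turn-around technique can be illustrated in a few lines of code. The following is a minimal sketch, not Weizenbaum's actual implementation; the reflection table and reply template are illustrative assumptions, and the real program used much richer keyword scripts.

```python
# A minimal ELIZA-style "turn-around" sketch, not Weizenbaum's real program.
# It swaps first- and second-person words, then echoes the statement back
# inside a canned reflective question.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first- and second-person words so a statement can be echoed back."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def eliza_reply(user_input: str) -> str:
    """Turn the user's statement around and wrap it in a reflective question."""
    statement = user_input.strip().rstrip(".!?")
    return f"Why do you say that {reflect(statement)}?"

print(eliza_reply("My father does not like me."))
# -> "Why do you say that your father does not like you?"
```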
The preference for interacting with computers rather than with humans helps to explain why computers are having an impact on social activities such as education, guidance counseling, and psychotherapy. As long ago as 1980, it was found that a computer could serve as an effective counselor and that its “clients” generally felt more at ease communicating with the computer than with a human counselor. Sherry Turkle describes this preference as an
infatuation with the challenge of simulated worlds…. Like Narcissus and his reflection, people who work with computers can easily fall in love with the worlds they have constructed or with their performances in the worlds created for them by others.4
Communicating information is by no means the only task for which people prefer to interact with a computer rather than with another human being. It was also noticed in early studies of human-computer interaction that people are generally as influenced by a statement made by a computer as they are when the same statement is made by a human, and that the more someone interacts with a computer, the more influential that computer will be in convincing the person that it is telling the truth.
I strongly suspect that the proportion of men preferring interaction with computers to interaction with people is significantly higher than the proportion of women, though I’m not aware of any quantitative psychology research in this area. Evidence from the McGill Report, for example, shows men to be more prone than women to eschewing human friendships, leaving men with more time and inclination than women to relate to computers. This bias, assuming that it does exist, suggests that men will always be more likely than women to develop emotional relationships with robots, but although this might be the case in the early years of human-robot emotional relationships, I suspect that in the longer term women will embrace the idea in steadily increasing numbers. One reason, as will be discussed in chapters 7 and 8, is that women will be extremely enthusiastic about robot sex, once the practice has received good press from the mainstream media in general and women’s magazines in particular, and in their robot sexual experiences, women will, more than men, want a measure of emotional closeness with their robot. Another scenario that I foresee as likely is that positive publicity about human-robot relationships will lead women who are in, or who have recently left, a bad relationship to realize that there’s more than one way of doing better. Yes, it would be very nice to start a relationship with a new man, but one can never be sure how it’s going to work out. I believe that having emotional relationships with robots will come to be perceived as a more dependable way to satisfy one’s emotional needs, and women will be every bit as enthusiastic as men to try this out. In today’s world, there are many women, particularly the upwardly mobile, career-minded sort, who would have more use for an undemanding robot that satisfied all of their relationship needs than they would for a man.
What is the explanation for this preference for interacting with a computer rather than with people? The feeling of privacy and the sense of safety that it brings make people more comfortable when answering a computer and hence more willing to disclose information. And some psychologists explain why people often prefer computers to people, and can develop a strong affection for computers, by describing this form of affection as an antidote to the difficulties many people face in forming satisfactory human relationships. While this is undoubtedly true in a significant proportion of cases, there are also many people who enjoy being with computers simply because computers are cool, they’re fun, and they empower us.
Robotic Psychology and Behavior
The exploration of human-robot relationships is very much a new field of research. While the creation of robots and the simulation of humanlike emotions and behaviors in them are fundamentally technological tasks, the study of relationships between humans and robots is an even newer research discipline, one that belongs within psychology. This field has been given the name “robotic psychology,” and practitioners within it are known as “robopsychologists.” Among those who have taken a lead in developing this nascent science are a husband-and-wife team at Georgetown University’s psychology department, Alexander and Elena Libin, who are also the founders of the Institute of Robotic Psychology and Robotherapy in Chevy Chase, Maryland.
The Libins define robotic psychology as “a study of compatibility between robots and humans on all levels—from neurological and sensory-motor to social orientation.”5 Their own research into human-robot communication and interaction, although still in its infancy, has already demonstrated some interesting results. They conducted experiments to investigate people’s interactions with NeCoRo, a sophisticated robotic cat covered with artificial fur, manufactured by the Omron Corporation and launched in 2001. NeCoRo stretches its body and paws, moves its tail, meows, and acts in various other catlike ways, getting angry if someone is violent to it and expressing happiness when stroked, cradled, and treated with lots of love. Additionally, NeCoRo’s software incorporates learning methods that cause the cat to become attracted to its owner day by day and to adjust its personality to that of its owner. One of the Libins’ earliest experiments was designed to investigate how biological factors such as age and sex, psychological factors such as a person’s past experiences with real pets and with technology, and cultural factors such as the social traditions that affect people’s communication styles influence the way a person interacts with such a robot.
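Omron has not published NeCoRo's internals, but the day-by-day adaptation described above can be illustrated with a toy model. The sketch below is purely hypothetical; the class, its parameters, and the update rules are all my own assumptions, not the product's actual design.

```python
# Purely illustrative sketch of owner adaptation of the kind described for
# NeCoRo. Omron has not published the cat's actual algorithms, so every
# parameter and update rule here is an assumption.

class RobotPetPersonality:
    def __init__(self, learning_rate: float = 0.05):
        self.attachment = 0.0    # grows with gentle handling, range 0..1
        self.playfulness = 0.5   # drifts toward the owner's style, range 0..1
        self.lr = learning_rate

    def observe_interaction(self, valence: float, owner_playfulness: float) -> None:
        """valence in [-1, 1]: stroking and cradling positive, rough handling negative."""
        # Attachment rises slowly with kind treatment and falls with rough treatment.
        self.attachment = min(1.0, max(0.0, self.attachment + self.lr * valence))
        # Personality drifts toward the owner's observed style (a simple moving average).
        self.playfulness += self.lr * (owner_playfulness - self.playfulness)

    def mood(self) -> str:
        """Expressed behavior follows the learned attachment level."""
        return "purrs and nuzzles" if self.attachment > 0.5 else "acts wary"

# A week of gentle, playful handling gradually warms the cat to its owner.
cat = RobotPetPersonality()
for _ in range(7):
    cat.observe_interaction(valence=1.0, owner_playfulness=0.9)
print(round(cat.attachment, 2), cat.mood())   # -> 0.35 acts wary (still early days)
```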