
The Vestigial Heart


by Carme Torras


  A robotized society is undoubtedly a complex system, and given the difficulty of predicting how it will evolve, a reasonable approach is to imagine possible future scenarios and encourage debate on their advantages and risks. This allows individuals to form knowledgeable opinions, which is better for the self-regulation and improvement of society than blindly imposing rules. The material that follows is an attempt in this direction. It is organized into six sections, each loosely related to one part of the novel, and the sections share the same structure: four questions arising from the reading of particular chapters are posed to trigger debate, followed by some hints on the academic discussion of those questions.

  This material is intended for a general audience that is curious about technological innovations and concerned about social responsibility, and it can be used in reading and discussion groups, high school classrooms, and continuing education programs. Furthermore, it can serve as a teaching aid in university courses on “robot ethics,” especially in technological areas such as computer science and engineering, but also in philosophy, psychology, political science, cognitive science, and linguistics, which all have ethics-related topics in their curricula.

  With these latter readers in mind, the book is complemented by a page on the MIT Press website organized as a practical teacher’s guide. It details the specific passages in the novel that exemplify each of the questions raised and provides an overview of the scholarly treatment of the related ethical issues, together with relevant, up-to-date references for further reading.

  1. DESIGNING THE “PERFECT” ASSISTANT

  READINGS

  Chapter 1: Alpha+ and Dr. Craft

  Chapter 5: ROBco and Leo

  QUESTIONS

  Should public trust and confidence in robots be actively fostered? If so, how?

  Is it acceptable that robots be designed to generate reliance?

  Should the possibility of deception be actively excluded in the design of robots?

  Could robots be used to control people?

  HINTS FOR DISCUSSION

  The traits attributed to a “perfect” assistant vary widely among cultures, as well as among individuals. Moreover, robot manufacturers and users may have opposing interests, for example, in relation to reliance. From an entrepreneurial viewpoint, Dr. Craft argues for highly adaptable robots that would fit their owners like a glove, covering all their needs and ideally maintaining them in a permanent state of well-being; but as a user, he wants a hypothetical assistant to stimulate him to think and behave differently than usual. Similarly, Leo presumably adheres to stricter criteria (e.g., regarding safety and maintenance) in his professional design activity than he does when tuning his own robot as a user.

  The risk of deception in the social deployment of robots is high: elderly people may come to believe that their robot assistants care about them and may delegate all decision-making to them; children may harbor the illusion that robot toys have mental states and emotions; and the general public may begin to think that robots are truly intelligent and have intentions. A generally accepted principle is that robots should not be designed in ways that impersonate human agency; instead, their machine nature should be transparent.

  Robots may reinforce certain habits and values in the user, the key questions being who decides what these should be and whom they should benefit: the user, society at large, or a particular group of people. If, for example, the user wants to follow a diet, he may himself tune the robot to distract him from eating between meals, or to act as a kind of Jiminy Cricket by reminding him how ashamed he will be later on. Similar behaviors may be programmed into robots to encourage healthy habits in their users with an eye toward reducing health care costs, but such programming can likewise be used to increase the profits of certain companies or to favor the political interests of a party or state. Even when performed in the interest of the user, nudging can be perceived as overly intrusive and annoying, thus running a high risk of angering people, especially those with bad-tempered personalities. Dr. Craft is one such user, as illustrated in the first scene of the novel, when he roars at his robot Alpha+, “Get off me, you confounded beast,” and gives it a shove as it tries to wake him up. The effect of this type of encouragement thus depends greatly on the user and the circumstances, and the need for personalization has to be taken into account during design.

  2. ROBOT APPEARANCE AND EMOTION

  READINGS

  Chapters 9 and 12: ROBbie and Celia

  Chapter 10: Leo at CraftER’s convention

  QUESTIONS

  How does robot appearance influence public acceptance?

  What are the advantages and dangers of robots simulating emotions?

  Have you heard of/experienced the “uncanny valley” effect?

  Should emotional attachment to robots be encouraged?

  HINTS FOR DISCUSSION

  Anthropomorphic appearance and simulated emotions may make robots more compelling in emergency situations, causing people to respond earlier and faster. However, a widely agreed upon guideline is that the degree of anthropomorphism and simulation should not be higher than the particular application requires. A more generic ethical consideration related to appearance is, of course, the need to avoid sexist, ableist, racist, and ethnically insensitive morphologies and expressivity in the design and programming of robots. Celia feels attached to her robot ROBbie because of its loyal, trustworthy, and predictable behavior, which is reinforced by its undeceiving machine appearance.

  Numerous studies have shown that the more anthropomorphic the robot, the more positive and empathetic the human response, until a point is reached where excessive similarity of the robot to a human causes distress and provokes a sudden repulsion; this is known as the “uncanny valley” effect. At the Disasters stand, Leo experiences such distress in front of a mechanical baby and realizes that the uncanny valley effect can doom a robot product.

  The main risk of emotional attachment to a robot is the so-called lotus-eater problem, whereby the ease of relating to a robot erodes the motivation for engaging with human beings, who are not always emotionally pleasant, leading to social isolation. In the case of children this could be especially harmful, since reduced contact with family and peers could seriously disrupt their normal development, preventing them from learning to empathize, for example. Celia likes that ROBbie behaves more “rationally” than her classmates and her adoptive mother, since it has to follow rules and can’t confuse her with nonsense. Moreover, she feels protected by the robot, which she sees as a faithful companion she can trust.

  Instead of setting up moral boundaries in the design of robots, which is the main trend today, some philosophers advocate focusing research on human-robot interactions and the ways these may enrich our emotional life, possibly in a manner different from and complementary to human-human relationships, thus enhancing human flourishing and happiness.

  3. ROBOTS IN THE WORKPLACE

  READINGS

  Chapter 13: Leo, ROBco, and the timeout device

  QUESTIONS

  Would robots primarily create or destroy jobs?

  How should work be organized to optimize human-robot collaboration?

  Do experiments on human-robot interaction require specific oversight?

  Do intellectual property laws need to be adapted for human-robot collaborations?

  HINTS FOR DISCUSSION

  Concern about job loss is not specific to robotics; it can be traced back to the agricultural and industrial revolutions and, more recently, to the Internet revolution. The standard response is that human workers are thus freed from dangerous, dirty, or dull tasks (the infamous three D’s) to undertake “higher value” jobs, mostly in the design, programming, deployment, maintenance, and use of these new technologies. However, this positive trend has a downside: the technological divide. Most displaced workers won’t be able to perform the new jobs. In developed countries, the skill shift may take at least one generation, and for underdeveloped societies, the economic gap may become insurmountable. The challenge is to devise and establish social measures for a more equitable distribution of both work and resources.

  Robot assistants designed to collaborate closely with humans raise a new concern: how to define the boundaries between human and robot labor in a shared task, so that not only is output maximized but, more importantly, the rights and dignity of professionals are preserved. An increasingly significant issue will be how to split credit for successes and responsibility for failures between the person working with the robot and its programmer. This becomes even more difficult for robots with learning capabilities, whose behavior depends both on their built-in software and on their lifelong learning experiences, which may include interactions with many other people.

  Some of these concerns and issues are exemplified by Leo, who struggles on two fronts: first, he fears his privacy and intellectual property rights may be violated by the timeout device installed by his employer; and second, he labors to teach ROBco that the two of them have different skills and that, to optimize their collaboration, they must each do what they do best and communicate on common ground.

  4. ROBOTS IN EDUCATION

  READINGS

  Chapter 14: Celia at school, viewed by her adoptive mother Lu

  Chapter 16: Celia, her classmate Xis, and her home tutor Silvana

  QUESTIONS

  Are there limits to what a robot can teach?

  Where is the boundary between helping and creating dependency?

  Who should define the values robot teachers would transmit and encourage?

  What should the relationship be between robot teachers and human teachers?

  HINTS FOR DISCUSSION

  Telepresence robots for teaching foreign languages or music, for instance, are regarded as useful classroom aids, as are educational robots for initiating young children into programming or for encouraging the teamwork that consolidates concepts from various disciplines. Arguments arise when autonomous robotic assistants are envisioned as taking over the role of human teachers in the transmission of cultural values and critical thinking. How could a machine without life experience motivate students or provide moral guidance? How will children learn to empathize and to reason, not only logically but emotionally? How will they develop respect for their elders and admiration for the achievements of great people?

  At Celia’s school, students learn to search for solutions in EDUsys rather than trying to reason for themselves, and they are subject to an extreme, mechanical form of socialization training; not surprisingly, Xis shows symptoms of suffering from a reactive attachment disorder. Of course, better ways of teaching good social behavior can be imagined. For instance, a robot could smile or display other cues that encourage the sharing of toys between playmates, and mimic expressions of disappointment whenever a child refuses to share. In a similar way, robots could nudge children to interact with other children with whom they don’t associate so as to avoid forming cliques.

  Instead of human teachers, it is EDUsys that programs everyone’s education, and it has trouble programming Celia’s because she reacts so differently from the other kids. Her creativity—an almost extinguished human trait at the time—is an important recurrent theme in the novel, highlighting the risk that technology stunts creativity in human development.

  Lu takes for granted that parents have the right to constantly monitor what their children are doing, though such surveillance may prevent children from learning to behave autonomously and impair their decision-making abilities. She further encourages dependence by telling Silvana to teach Celia and ROBbie as a team, so that the robot learns to cover up the girl’s flaws.

  This raises the issue of whether robotic teaching assistants should team up with teachers or with students. On the one hand, robots can track the progress and attitude of each child much more accurately than human teachers can, building detailed student models that are very helpful for providing personalized assistance. On the other hand, to be trusted by children, robotic assistants must not disclose their “secrets” to the teacher. Striking a balance between these two demands is difficult.

  5. HUMAN-ROBOT INTERACTION AND HUMAN DIGNITY

  READINGS

  Chapters 25 and 28: Leo and Silvana

  QUESTIONS

  Could robot decision-making undermine human freedom and dignity?

  Is it acceptable for robots to behave as emotional surrogates? If so, in what cases?

  Could robots be used as therapists for the mentally disabled?

  How adaptive/tunable should robots be? Are there limits to human enhancement by robots?

  HINTS FOR DISCUSSION

  Users would expect a robotic caregiver to have the basic interaction competencies needed to handle ethically sensitive situations. For example, to avoid eliciting feelings of objectification and loss of control, robots should not lift or move people around without consulting them. Likewise, they should always use respectful language and never intimidate users. Reacting to what she feels is a harsh piece of advice from ROBco, Silvana asks Leo whether he doesn’t find it degrading that the robot talks to him like that. Further, the useful capacity of robots to collect data about a person and transmit it for medical monitoring must be balanced against that person’s right to privacy and to control over their own life—for example, in refusing treatment. This raises questions about the extent to which the wishes of a patient or elderly person must be followed, and about the relationship between the amount of control granted to them and their state of mind.

  The idea of robot companionship seems natural to some people and almost obscene to others. Given the sometimes painful and capricious nature of human relationships, it is not surprising that some might prefer to share their life with a robot, which would behave predictably and never criticize, cheat, or betray their confidences. This may be acceptable for an adult in full command of their mental faculties, but emotional surrogates should generally be avoided in the case of vulnerable users, especially children. Note that human caregivers sometimes simulate affection to improve their patients’ well-being, and robots may likewise be allowed to do so under similar circumstances.

  There is a difference between simulating affection and showing emotionally intelligent behavior. Capturing the emotional state of the user can be very useful, although misinterpreting it may have negative consequences. Some psychologists even suggest that the illusion of emotional understanding created by a robot that makes eye contact and responds to touch may be therapeutic in some contexts. Additional virtues of robots as therapists are their endless “patience,” their capacity for repetitive action without getting “bored,” and their never showing the unintended feelings that some humans cannot repress. Robots have already had some success in helping autistic children acquire social skills.

  In sum, the challenge is to ensure that robots improve the quality of our daily lives, widen our capabilities, and increase our freedom, while preventing them from making us more dependent and emotionally weak; that is, the eternal dilemma of how to take the good without the bad. In their heated discussions, Leo defends the positive view of robots as enhancers of our physical and cognitive capabilities, while Silvana highlights the risk that relating to robots will end up replacing people’s intimate relationships.

  6. SOCIAL RESPONSIBILITY AND ROBOT MORALITY

  READINGS

  Chapter 30: Alpha+ and Dr. Craft

  QUESTIONS

  Can reliability/safety be guaranteed? How can hacking/vandalism be prevented?

  Who is responsible for the actions of robots? Should moral behavior be modifiable in robots?

  When should a society’s well-being prevail over the privacy of personal data?

  What digital divides may robotics cause?

  HINTS FOR DISCUSSION

  Autonomous robots need to make decisions in situations unforeseen by their designers. This raises not only issues of reliability and safety for users, but also the challenge of regulating automatic decision-making, particularly in ethics-sensitive contexts, and of establishing procedures to attribute responsibility for robots’ actions.

  Some argue that robots can be better moral decision makers than humans, since their rationality is not limited by jealousy, fear, or emotional blackmail. Even assuming that general ethics rules could be implemented in robots, however, questions arise as to who should decide what morality is encoded in such rules and to what extent the rules should be modifiable by the user. For instance, it is unclear whether a robot should be allowed to circumvent its user’s autonomy in order to behave more ethically toward other human beings or in the interest of society in general.

  Alpha+ says it is against the rules to abandon its PROP while he is in danger. But its PROP, Dr. Craft, is ultimately the one who decides and switches his robot off. Who is responsible for the fatal consequences? Leo feels doubly guilty, as designer of the sensory booth—a “death trap,” he calls it—and as the PROP of ROBco, the robot directly involved in the death, whereas Silvana claims that it was either an accident or a suicide.

  A robot, as a tool, is not responsible for anything, but it should always be possible to determine who is legally responsible for its actions. In the case of robots able to learn from experience, such responsibility may be shared among the designer, the manufacturer, and the user; a hacker may also be held liable if their illegal intervention can be demonstrated. For litigation purposes, it is crucial that a robot’s decision path be reconstructible. It has been suggested that robots, like airplanes, should be equipped with a nonmanipulable black box that continuously documents the significant results of the learning process and the relevant inputs. To convince Leo that he cannot be blamed for Dr. Craft’s death, ROBco reminds him that Alpha+’s record will have preserved proof that its PROP disconnected it.
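  As a purely illustrative sketch, going beyond anything specified in the novel, one minimal way such a black box might be approximated in software is a hash-chained, append-only log: each entry commits to the hash of the previous one, so any retroactive alteration of the record breaks the chain and can be detected when the decision path is reconstructed. The class name and event format below are hypothetical.

```python
import hashlib
import json
import time


class BlackBoxLog:
    """Append-only, hash-chained event log (hypothetical sketch).

    Each record stores the hash of the previous record, so tampering
    with any past entry invalidates every later hash in the chain.
    """

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, event: dict) -> None:
        # Record the event (sensor input, decision, operator command, ...)
        # together with a timestamp and the hash of the previous record.
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered."""
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True


# Hypothetical usage, echoing the novel's situation: the owner's shutdown
# command remains provable after the fact.
log = BlackBoxLog()
log.append({"type": "decision", "detail": "refused to abandon owner in danger"})
log.append({"type": "operator_command", "detail": "owner switched robot off"})
assert log.verify()
```

  A real flight-recorder-style device would also need tamper-resistant hardware and secure timestamping; the chaining shown here only makes tampering detectable, not impossible.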

 
