Futurama and Philosophy


by Shaun P. Young and Courtland Lewis


  So what to do with such a complex ethical question? Perhaps the most important lesson we might take away from this episode relates not to eating meat but rather to gluttonous consumption. That lesson starts with the over-indulgence of Popplers on the Omicronian nursery planet. Fry suggests that the Popplers are so good the crew should “bring back a couple of pockets full.” Bender thinks they should take back “a whole Bender full.” Leela disagrees with both: insisting they should take only what they need, she decrees they should “stuff the ship!”

  The absurdity of the notion that the crew would need an entire ship crammed with Popplers is obvious. Likewise, it’s revealed during the negotiations between the DOOP and Omicron Persei 8 that humanity consumed one hundred and ninety-eight billion of the Omicronian young in what appears to be a relatively short period of time, which speaks directly to the gluttonous appetites of mankind. Finally, what makes the feast at the end of the episode farcical isn’t even so much the arbitrary distinctions being made about what kind of animal it’s okay to consume, but the obscene smorgasbord of meats—including veal, suckling pig, and dolphin.

  It may very well be that, collectively, human beings will never completely stop eating other living things. After all, we can’t eat rocks, even if they are sautéed in a little mud. However, while Singer and others would argue that such a stance is like tolerating “a little bit of murder,” the creative minds behind Futurama seem focused on getting us to reevaluate why we make certain decisions about consumption and on directing us toward more ethical choices about our diets. As Waterfall might say, “Okay, that’s a start, that’s very Earth friendly.”

  12

  Kiss My Shiny Metal Autonomy

  DEBORAH PLESS

  To begin, it isn’t ever clear that robots ought to be regarded as individuals at all.

  —CHRISTOPHER GRAU

  Futurama envisions a world that has advanced beyond our limited science. It’s a world where trips to the moon are hackneyed and easy, where bikinis come in a spray-on can, and where robots roam freely along the streets, with their own jobs, apartments, soap operas, and political candidates. This is a world where we have Bender, the foul-mouthed, morally-challenged hindrance and menace to everyone at Planet Express, friend to a select few. The Futurama future is one with robots in it, but they’re not just any robots, they’re mean robots!

  To be honest, that’s a little surprising. Most films that involve robots in any real capacity focus on the moral issues surrounding their use. Robots are usually compared to humans, as in Blade Runner and A.I. Artificial Intelligence, or they’re being used for human protection, as in I, Robot, and sometimes, they’re just there to show us that we all have feelings, man: Wall-E, The Iron Giant, and Terminator 2. Bender, however, is like none of these. He’s capable of living his own life, and frequently enjoys living that life outside of the bounds of traditional human morality.

  Bender is a robot. That means that instead of being grown organically inside another of his species, he was created intentionally. Bender isn’t just a random assortment of genes and alleles; he’s programmed and designed with the full knowledge and permission of a multinational corporation. Season Two’s “Mother’s Day” reveals that MomCorp knowingly created Bender to be an alcoholic kleptomaniac. That being the case, should MomCorp not be held responsible for Bender’s illegal and sometimes unethical actions? Or should we consider Bender’s employment with Planet Express, which regularly necessitates that he do highly immoral things in order to save his fellow crew members (such as in the episode “Amazon Women in the Mood” when he has to seduce an unknown computer to secure Fry’s escape), to be the root of his actions and therefore the focal point for blame? But then, Bender does seem to be fully aware of his own actions and cognizant of their potential ramifications, as demonstrated in “Hell Is Other Robots,” when he’s sent to Robot Hell for his crimes, and is fearful of punishment, but also understanding that he deserves to be punished. So, is Bender responsible for his behavior?

  There’s no simple answer. It all depends on how much we can legitimately label Bender a capable and autonomous being. A robot, like a human, should be judged based on its ability to choose paths for itself, cognizant of the consequences for itself and others.

  What kind of an ethical system does Bender have? Is that system enough to give him autonomous decision-making ability? If it’s sufficient, then we can say that Bender is responsible for himself and his own actions. If it isn’t, we have to examine precisely what blame falls on the MomCorp and the Planet Express crew.

  Do You Hear Me, I’m Guilty

  Bender has an ethical system programmed into him. We know this because we’ve seen it. In “Mother’s Day” the Planet Express crew visits the MomCorp’s robot museum, and Leela gets to view the world through “Bender Vision.” Instead of her normal sight, she sees a world populated by potential suckers to steal from, and alcohol.

  This gives us a clear view of Bender’s internal logic. We can infer that these traits were specifically designed into every Benderbot, making MomCorp responsible for the early formation of Bender’s ethical system. As Asaro notes, robots are socio-technical systems: while they can function and even succeed in a social and inter-personal context, we can’t forget that robots are also technical, having been designed and programmed in a far more comprehensive and intentional way than could be applied to human reproduction. Bender has an ethical system provided to him by MomCorp. However, the question remains: what precisely is it, and how does it influence his ability to make moral decisions?

  According to most science fiction, the best ethical framework for a robot is utilitarianism, which dictates that all robots must act in the interest of the greater good. This concept is best seen in the development of Asimov’s Three Laws of Robotics, which laid the groundwork for most concepts of robot ethics. The Three Laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;

  2. A robot must obey orders given to it by human beings except when such orders would conflict with the First Law; and

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  Thus, for robots, the greater good is the preservation of humans, but achieving that good isn’t as simple as it sounds. Humans might need saving from any number of things, including other humans, thus leading to the moral dilemma for robots: when two humans are in danger, which one do you save? It’s unclear how a robot would solve this riddle without more data.

  In contrast to the utilitarian model featured in I, Robot, Futurama showcases a type of robot subjectivism—every robot for himself. There are some distinct benefits to the use of subjectivist robots by corporations, but these same benefits are arguably failures from a moral standpoint. For example, such robots would have no trouble carrying out immoral commands with no regard for human life, which might make them useful to corporations, criminals, or even military groups, but it’d make them bad moral agents. Self-interest is a powerful tool, so it makes sense that a corporation would knowingly create robots that care little for humanity at large. As Grau observes, “This thought naturally leads to a more general but related question: if we could program a robot to be an accurate and effective utilitarian, shouldn’t we?” As it would seem in Futurama, while we probably should, we probably won’t.

  The MomCorp is fully capable of programming a self-sacrificing Asimovian robot, as evidenced by the introduction of Robot 1-X in Season Four’s “Obsoletely Fabulous.” In strong contrast to Bender, Robot 1-X has no clear self or identity, and exists solely to serve the user. It’s a clean-energy-producing, helpful, slavish robot that can be used simply as a tool without being offended, as it lacks cognitive self-awareness. Robot 1-X is, in great part, the anti-Bender. And, ironically, it creates the ultimate argument for Bender’s actual autonomy.

  I Say, I Say, Bender Has Autonomy?

  In its simplest form, autonomy is a basic sense of moral independence. It’s the defining factor in determining who is and isn’t a separate, responsible adult in the eyes of morality. We define autonomy as moral independence and a degree of freedom from outside interference regarding moral decisions, but we judge it on a sliding scale. We start with infants, who have no autonomy and cannot even survive without intervention, and end with adults, who are capable of caring both for their own needs and the needs of others. Adults understand the consequences and potential moral implications of their actions, and so their decisions to act have greater moral weight than those of children.

  In the case of robots, autonomy is the capacity for acting independently of human control. This means that autonomous robots don’t need human intervention to know whether or not an action is right and should be done. The robot will independently determine its own actions. Anything less than such independence can’t legitimately be labeled autonomy, and therefore has no place in discussions concerning moral responsibility. In order to be designated as free subjects or ethical beings, robots need an awareness of guilt or voice of conscience that can inform them when they make wrong choices, and still retain the freedom to make those choices anyway.

  As Bert Olivier wrote in his essay “When Robots Would Really Be Human Simulacra,” freedom is found even if you choose not to be free, so long as that choice is made freely.

  To be considered autonomous, robots must have two qualities. First, they must have “capacities equal to or exceeding human beings,” and second, they need the capacity for moral responsibility. As Sparrow points out, autonomy and moral responsibility are inextricably linked. Bender’s responsibility for his actions, then, can be analyzed according to these criteria. He does have capacities “equal to or exceeding” those of humans, as we can see in nearly every episode. He’s highly rational (or capable of being so), possesses human-like emotions, and a very “human” fear of death and punishment, as we see in “Hell Is Other Robots”; but those qualities don’t necessarily ensure that Bender possesses moral responsibility.

  We don’t consider all humans to possess moral responsibility, so it makes sense to be hesitant to extend such a claim to non-humans. Human children and the mentally infirm aren’t considered to have equal moral responsibility because they aren’t deemed to be morally independent of their caregivers. Their self-awareness and ethical understanding haven’t progressed to the point (or are no longer at the point) that we’d be confident they understand the potential consequences of their actions. If Bender had been created with a child-like mind, it would explain his low impulse control, utter lack of conscientiousness, and general “id-centric” behavior.

  By these standards, Bender could be determined to be as morally responsible as a child. He’s a slave to his programming, and responsible only in the sense that he has performed immoral actions on the orders of others (namely, the Planet Express crew and MomCorp). We might say that a robot in this situation is “just following orders.” When a policy mandated by an institution is unjust, we can either blame the workers who enforce it or, more accurately, blame the institution that created it. According to philosopher Peter Asaro, this holds true for robots too.

  Bender’s self-interest, however, seems to point to a developed state of autonomy, and therefore to a degree of moral responsibility. While moral responsibility doesn’t equal legal responsibility, we must remember that to discount the agency of any thinking creature is to degrade it. As Mark Coeckelbergh observes, “Robots—including military robots—are not mere means to ends, but shape those ends.”

  Creation, Instruction, and Self-Awareness

  Who is Bender, and who shapes him? We know that MomCorp designed him in the first place, and they designed him to be immoral and hedonistic, at least by our standards. Is MomCorp, then, responsible for any damage that Bender might do?

  This is similar to the issue of driver responsibility in robotic cars. These cars, piloted by an internal computer, have a sophisticated Artificial Intelligence (AI) capable of making split-second decisions. But in the case of a car accident, whom should we blame? The human involved was not driving, and therefore cannot reasonably be held accountable, except in the sense that he bought the car. Similarly, the robot driver can’t be held accountable, because it was merely programmed with the needed codes for all conceivable circumstances: it isn’t sentient enough to be independent of its programming. Blame, therefore, must fall on the designers who created the robotic driver.

  In examining Bender’s behavior, though, we can clearly see that he’s more aware of himself and his responsibility than a robotic car might be. That brings us back to the difference between Bender and Robot 1-X. While Robot 1-X would easily be complicit in any number of crimes, simply because he’s programmed to follow his owner’s orders, Bender has a degree of sentience that allows him to override his programming at times, effectively rendering him independent of MomCorp.

  As we see in “Mother’s Day,” Bender’s able to throw off Mom’s orders to kill all humans upon realizing that he wouldn’t enjoy a solely robot society. Therefore, Bender can’t be judged as slavishly following his programming, and his actions can’t be blamed on MomCorp. As Sparrow concludes, the more that robots can be considered autonomous, the less their programmers can be held responsible for the robots’ actions.

  Similarly, Planet Express frequently presents Bender with situations in which he must act immorally to help his fellow crew members, like in Into the Wild Green Yonder when he rescues Leela from prison, or in “That’s Lobstertainment!” when he must help rig the Oscars to save Zoidberg. Bender is, however, able to ignore these situations if he so chooses. While Robot 1-X would fall under the responsibility of its programmers and owners (it is, as Asaro has noted, “primarily the people and the actions they take with respect to the technology that are ascribed . . . responsibility”), Bender himself frequently chooses the most immoral action, while making it clear that he is aware of the moral consequences.

  Ultimately, Bender must be held accountable for his own actions. His autonomy is provable. He disobeys directives in order to make self-interested choices that go against his programming, and he’s aware of his actions and their potential consequences. We know that Bender fears punishment, because of his reaction to an eternity of suffering in “Hell Is Other Robots.” If he were not afraid of suffering or punishment, then he would have no reason to escape Robot Hell.

  No Free Will, Not My Fault!

  Bender is autonomous enough to be responsible for his own actions. Now we must answer a different question: are those actions immoral? “Sophisticated robots will require a kind of functional morality, such that the machines themselves have the capacity for assessing and responding to moral considerations.” While some societies might deem some of Bender’s actions to be immoral, before we can place a moral judgment upon him we must examine whether or not those actions are objectively immoral.

  A “moral sense” is something that must develop organically in a person or robot. It must grow out of the programming and become a socio-technical aspect of the robot, one that is part and parcel of its sentience. This development is closely linked with the development of robotic learning: As noted by Sparrow, “the actions of these machines will be based on reasons, but these reasons will be responsive to the internal states—‘desires’, ‘beliefs’, and ‘values’—of the system itself.” In other words, a learning robot would need to have an internal programming that could give it actions and reasons for those actions, but it would also require the ability to ignore its programming and build a new framework.

  Moral robots will also possess the ability to learn from experience. This ability is what influences humans’ moral compasses, and determines how much we can judge ourselves to be immoral. Just as we don’t deem small children to be autonomous, we don’t deem their actions to be immoral until they have a sense of what is moral and how their actions differ from it. In other words, immorality is a matter of intent. There’s a continuum of moral agency—not all moral agents can be regarded as equal. We don’t grant children the same moral status we reserve for teenagers, nor do we allow teenagers all the same rights as adults. Instinctively, we recognize that there is a sliding scale of moral cognition and agency. Children have less freedom because they are less able to take moral responsibility for their actions. As a person’s ability to understand their own choices and their implications increases, so too does their freedom in society.

  Robots must be able to grow and develop their own morality in relation to outside stimuli. Bender does this. While we do know that MomCorp has programmed him with an original morality, Bender is capable of rewriting his own code. His friendship with Fry is evidence of this, as Bender wasn’t originally programmed with a value for friendship or the desire for it. However, we see time and time again that Bender zealously chases after Fry’s friendship and attention. Furthermore, we see on several occasions that Bender will fight against his operational morality in order to do something truly kind. In “The Cyber House Rules,” he adopts twelve children in order to scam the government, and what starts out as a money scheme develops into a (seemingly) heartfelt relationship—albeit, one with a dollop of cynicism.

  Is Bender’s morality “our” morality, and do we have any right to expect it to be? Ethical subjectivism, the belief that there’s no such thing as objectivity in ethical matters, is contradicted in Futurama—the show clearly does not depict a relativistic universe. While Bender may attain his goals using immoral actions, the narrative does not defend him, as evidenced by the very existence of Robot Hell. Therefore, there must be a higher morality against which we can be judged, one that Bender gleefully ignores.

 
