If something is sentient, then it can suffer and experience joy. If it can experience negative and positive feelings like pain and joy, then that object’s interests and well-being should be taken into account when deciding what to do. Sentience, then, makes a thing an appropriate object of moral consideration; if doing something causes another sentient thing to suffer, then that’s a reason not to do it. This is why we treat dogs differently than tomatoes.
Most of us are confident that Kif, and even Nibbler, are sentient, but many things fall into an uncertain grey area. It doesn’t seem unreasonable to suppose that spiders may have no conscious experience, and so, there may be nothing “it is like” to be a small house spider. When you observe a spider, its behavior appears simple and mechanical. If it’s true that spiders don’t feel, then it’s even more likely that plants don’t feel. After all, vegetation is much more limited than a spider in its ability to move and interact with its environment. But what about fish, birds, chipmunks, and brain slugs? These all seem more sophisticated than plants and house spiders, but how can we know for sure that they feel?
Robots and machines occupy this same grey area. We don't think our smartphones and cars are sentient, even if we often act as if they are when we get angry at them for breaking (even when it's our fault). We even doubt that present-day robots, which can walk, read stories, and play the violin, have anything remotely close to experiences (sorry, Siri). However, the robots of the thirty-first century are much more sophisticated than these present-day machines. In fact, they seem to be able to do everything humans can do (and maybe even more).
Walk Like, Talk Like, and Solve Differential Equations Like Robots
One way to determine whether a thing has mental experiences is to look and see if its behavior resembles that of other things which we know to be sentient. When a human is poked with a sharp object, or touches a hot stove, he or she displays pain behavior by trying to move away from the harmful object. So, if other creatures show the same type of avoidant behavior in response to things that are dangerous, we may have good reason to believe that those creatures can experience conscious pain as well.
Nevertheless, many have correctly objected that just because something behaves as if it can feel does not mean that it does in fact feel. An organism could display pain behavior (or fear or sadness behavior) without having any conscious mental feelings. For instance, shrimp try to avoid threatening objects, but the mechanisms which enable this behavior are quite simple. A few nerve cords and antennae allow a shrimp to navigate its environment by responding to predators and prey. More to the point, a simple robot with wheels can easily be designed to move backwards when a knife is thrust toward it. Its receptors could simply be programmed to respond to objects shaped like a knife. We wouldn't say, however, that this robot is afraid of the knife in the sense of having subjective experiences that generate certain negative feelings. For the same reason, there's a good chance that shrimp, which share similar underlying processes with the aforementioned robot, don't have conscious sentient states.
Further, many science-fiction works contain robots that are near-human duplicates in appearance and behavior. Yet, in many of these works, the robots don't display fear and pain behavior at signs of danger or in response to damage to their bodies (for example, robots in the Spielberg movie A.I. Artificial Intelligence). The background assumption here seems to be that robots can't be made to feel emotions and experience pain. Films like these show it's conceivable that a machine which is nearly behaviorally equivalent to a normal human is not sentient.
In the final analysis, the objection to using behavior as a guide to experience does have problems. We’ve entertained the idea that a robot that can act like a human, to the extent that we can’t tell it apart from a real human, may not be sentient, but this idea forces us to come to grips with why we believe other humans besides ourselves are sentient. Have you ever experienced another person’s thoughts or feelings directly? No! It’s not possible. We only have direct, first-person access to our own conscious states. It’s conceivable that all other humans besides you are just biological zombies, having no conscious, subjective mental states at all.
It may be possible to avoid this disconcerting problem by claiming that it's not behavior that matters for determining sentience but, instead, the type of stuff a thing is made out of. Fry himself uses this defense in the Hal Institute for Criminally Insane Robots when trying to prove he's not a robot: "I'm a human. See I'm all squishy and flabby" ("Insane in the Mainframe," Season Three). The cinematic robots and droids we're familiar with are composed of silicon and microchips which direct electricity in strategic ways. Biological beings, on the other hand, are made of complicated carbon molecules which combine to form tissues, organs, and bodily fluids. Humans bleed; robots don't.
Discriminating between humans and human-like machines based on their composition, however, is nothing more than a prejudice, like discriminating based on skin color. Human body parts can be replaced with synthetic ones. Not only are there prosthetic limbs, artificial hearts, and synthetic arteries, but we now even have electronic brain implants.
The science is still young, but a device that generates an electric current can be implanted in the brain to control the erratic limb movements caused by motor diseases like Parkinson's. In fact, the cells in the brain (called neurons) operate by sending electrical signals to other cells, and digital computation (the mechanism enabling the complicated behaviors of computers and many robots) also operates by sending electricity. The circuit boards are just plastic instead of organic tissue.
In the end, it's not what you're made out of that counts; it's what the stuff you're made out of does. A clock can be electrical, or it can work by mechanical gears. It can even work based on the position of the sun (a sundial). But all of them are clocks because they share the same function: keeping time. In the same way, the pattern of electricity triggered by a cut on your skin and sent to your brain through a complex set of nerves could just as easily be carried by a silicon-based circuit, and either could result in a feeling of pain.
Given these considerations, the fact, or at least the possibility, that Bender's internal parts do the same kind of thing as, say, Professor Farnsworth's internal parts, along with their outward behavioral similarities, provides very good reason for believing Bender is sentient. It may fall short of proof, but then again, we don't have proof that our best friends are sentient either.
Your Honor, I Intend to Demonstrate Beyond 0.5 Percent of a Doubt . . .
Since there's good reason to believe that Bender, Calculon, and Hedonism Bot are sentient beings, it wouldn't be morally permissible to use them as mere instruments for our own advantage. For example, it would be wrong for MomCorp to use these robots as slaves to mine dark matter on Vergon 6, assuming this would cause them distress and suffering. So, sentience gives a being at least some moral worth. Humans have moral value that grants them certain protections from harm and exploitation, and the same applies to dogs, pigs, cows, and Nibbler, since they too are likely sentient.
Nonetheless, for humans, morality is a double-edged sword: it serves to protect us from the potential harms of others, but it also prohibits us from inflicting similar harms on those same "others." This puts a bit of a damper on things. But imagine Clamps gives you a clamping. If a human or mutant did such a thing, we would surely think he or she should be punished for the wrongdoing. However, it's not immediately obvious that a robot like Clamps is responsible for his actions, at least not in the same way that mutants and humans are. This point brings out a very important distinction in ethics.
A being may be sentient, which means that it can be a recipient of harm and benefit, without itself being responsible for what it does to other sentient things. When a tiger eats a rabbit, or a human for that matter, we don't think it's morally bad or evil. We may in fact "punish" it, but the punishment is only to prevent it from doing any future harm. It'd be a mistake to say it deserves punishment. The same seems to hold for our household pets. Leela may reward and punish Nibbler for his good or bad behavior, but the rewards and punishments are used merely to reinforce his behavior. (We shouldn't forget, though, that Lord Nibbler is actually a covert protector of Earth!) When a human commits an atrocious act, incarceration prevents him or her from harming anyone else, but we also feel that he or she simply deserves the punishment because it's just and fair, even if it doesn't protect anyone from future harm.
We call the moral status that grants an object protection from harm and exploitation moral patiency. On the other hand, we call the moral status that makes an object responsible for its actions, and an appropriate object of reward, punishment, and criticism, moral agency. Agency simply means having the ability to deliberate over, and choose, which course of action to take at a given time. A moral patient is similar to a medical patient: medical patients are operated on by someone else; things are done to them. So moral patiency makes a thing a possible recipient of harm and benefit at the hands of moral agents, who can be held accountable for how they treat it.
This complicates our picture of moral worth. It seems that some things can have more moral value than others. Very young human children, non-human animals like cats and pigs, and maybe even humans who are clinically insane, are moral patients but not moral agents. They are sentient, and as moral agents we should avoid harming them when possible, but they aren't responsible for what they do. This may seem like having your cake and eating it too: free rein to do whatever you want. But historically, it's been this very lack of moral agency which has led many brilliant minds to the conclusion that only humans, and any other possible beings with similar abilities, have any moral worth at all. These thinkers use moral agency as the only mark of moral value. They argue that, since non-human animals can't reason or make choices for themselves, we can treat them however we want. This is still an issue of disagreement among philosophers. Above, a contrary view was presented which takes sentience as a guarantee of at least some moral worth. It's possible that some philosophers who disagree with this position have confused the concept of moral standing by not separating moral agency from moral patiency.
That’s Her, Officer, She Programmed Me for Evil
Are the robots of the thirty-first century (like Clamps) moral agents, or are they more like children and primitive animals, being only moral patients? Just as we looked for a characteristic which makes a creature a moral patient (sentience), we can try to do the same for moral agency. To do this, consider the Sphex Wasp—not a distant relative of the Space Bees, but the real twenty-first-century insect.
These wasps build a nest and then look for prey. After finding a victim, they paralyze it and drag it near the opening of the nest. Next, they inspect their nest before going back for the paralyzed prey and dragging it into the nest. However, if an experimenter moves the prey a few inches away while the wasp is inspecting its nest, the wasp will find the prey, bring it back near the nest opening, and then inspect the nest again. This iteration will go on and on if the experimenter keeps moving the prey. The wasp seemingly can’t escape its preprogrammed pattern of behavior—it’s a slave to its behavior.
Unlike the Sphex Wasp, humans can reason and think about how they should act. Humans have strong natural tendencies and impulses, but they're also able to use their intelligence to deliberate about, and act contrary to, these urges. In this way, it appears that humans are not deterministic machines bound by their natures. They have free will, allowing them to think about, and then choose, how to act at any point in time. It's this which makes us accountable for what we do.
Robots appear to be machines running deterministic programs. Their behavior is fixed by the programmer in advance, and they’re bound by the logic of the program—Calculon is programmed to act, Bender to bend. On the other hand, humans and mutants aren’t hardwired with circuit boards that run fixed programs. We have direct experiences of ourselves choosing how we want to act. I can go out tonight or not. I can lie to a friend or not.
Not so fast! It's a mistake to believe that humans aren't hardwired by their natures to behave in certain ways. For example, humans have a natural aversive response to spiders and a natural positive response to flowers. These natural tendencies can be overridden with learning and experience, but they're still the presets we're built with. On average, females are more selective when choosing mates, a likely result of the dynamics of natural selection. Male humans form strong coalitions against outside groups, a fact that may be the natural preset for war. Yet, we can overcome or redirect these impulses. For instance, sports may be a safer way to fulfill the coalition impulse. Much of the modern world has seen a movement away from our natural behavioral presets and mental impulses.
But keep in mind the same applies to Bender and his mechanical brethren. He expresses a pent-up need to bend when he hasn't bent in a while, but as we know, Bender spends most of his time in mischief, pursuing things outside of his natural function of bending. For example, he competes in Iron Cook to become a master chef, pursues pro wrestling, and attempts a folk music career with Beck's head. Also, Calculon may be hardwired to act, but he can also pursue love. So, the robots of Futurama demonstrate very flexible behavior and don't perform just one function like bending or acting. They can overcome their natural functions and pursue other goals if they choose. Further, they also struggle with non-programmed compulsions. Recall Bender "jacking-on" as he wrestles with his electricity addiction.
These examples illustrate that if it ever becomes possible to build something which can perform complicated tasks self-sufficiently, like doing schoolwork and going to classes, then it will inevitably be able to learn and function in ways the programmer never intended. A robot built to socially entertain house guests may inevitably have the capacity to form friendships. The programmer may not have intended this, and may even attempt to prevent it in the future, since a robot that has the ability to form friendships would likely be hard to control. With respect to evolution, we're similar. The capacity for art appreciation is probably not something that increased our ancestors' survival and therefore is not likely an evolutionary adaptation, but it may have been adaptive to find some environments more attractive than others, since some are better suited for human survival. This adaptation may be the basis of art appreciation, making art an accidental by-product.
If the charge that Futurama robots are not free, and therefore aren't moral agents, holds at all, it appears to apply to humans to the same degree. Both have natural impulses and urges, but for very different reasons: humans because of their long evolutionary past, intelligent machines because of a roboticist's design. Yet, these inclinations don't completely determine their behavior. If this is correct, the robot mafia is no more and no less responsible than the biological crew of Planet Express.
Be You Robot or Human?
Most of us would likely agree that restricting moral value to only the human species within the Futurama realm would be a type of discrimination. We could call it speciesism, since it wrongly treats the species you belong to as important for moral worth, much as racists claim that the race you belong to is important. To see exactly why this is immoral, see Greg Ahrenhoerster and Joseph Foy's chapter "Pop a Poppler" in this book.
If we, instead, included all biological beings in the Futurama universe in the moral sphere but excluded only robots, this would be a type of biologism: discrimination based on the type of stuff a thing is made out of. These would both be types of immoral discrimination because, if there's good reason to believe something is sentient and can make free choices, then it qualifies as both a moral patient and a moral agent. Since we have good reason to believe the sophisticated robots of the thirty-first century have these abilities, we shouldn't exclude them from full moral status. These robots are moral agents that deserve the full set of protections and responsibilities that humans share.
11
Pop a Poppler?
GREG AHRENHOERSTER AND JOSEPH J. FOY
What happens when the Planet Express crew is in a remote corner of outer space, their spaceship having been freshly ransacked on the Planet of the Moochers? Hungry, they touch down on a remote planet and are fortunate enough to discover a ditch full of things that look like fried shrimp.
The thing Captain Leela wears on her wrist tells them the shrimp-like things aren't poisonous, and they turn out to be delicious (though the crew later learns they're the offspring of the Omicronians, an advanced and powerful species). A hilarious twenty-two minutes of animated insights into the ethics of eating meat . . . that's what happens!
In "The Problem with Popplers" (Season Two), three very clear stances on the morality of eating meat are presented through three characters: Free Waterfall, Jr., an activist associated with Mankind for Ethical Animal Treatment (MEAT); Joseph "Fishy Joe" Gillman, CEO of a company that runs a chain of seafood restaurants; and Planet Express spaceship captain Turanga Leela.
Although the episode whimsically plays with stereotypes and travesties related to meat eating, it by no means shies away from the actual complexity of the ethics of the issue. Each character represents a side in an actual and ongoing debate about animal rights and liberation in the world today, including the market-based arguments of producers and the nutritional arguments of some consumers. This episode exemplifies what Futurama does best: making caricatures meaningful. So, let's skip the roddenberries and jump right into the flesh of the argument.
The One Called “Smelly Hippie” Is Right
Free Waterfall, Jr. takes a hardline stance, completely opposed to eating meat. As the spokesperson for a group of animal-rights protesters outside the Planet Express building, Waterfall’s position is established early in the episode. After being chastised by Professor Farnsworth for being a “penniless hippie” and for the ruckus (“Unless this is a nude love-in,” Farnsworth shouts, shaking his wrinkled fist, “get the hell off my property!”), Waterfall explains, “Popplers are living creatures, you gotta stop harvesting them for food!” He clarifies his stance by saying, “You shouldn’t eat things that feel pain.”