Futurama and Philosophy
When his errors are discovered, he readjusts his thinking in accordance with the new premise, an excellent example of the human capability to adapt to and act in accordance with new information about the world. For instance, consider the following conversation that occurs in Season Two’s “The Lesser of Two Evils”:
FRY: Well, you guys might both be losers, but I just made out with that radiator woman from the radiator planet.
LEELA: Fry, that’s a radiator.
FRY: Oh, is there a burn ward within ten feet of here?
Fry’s brain thing provides us with insight into the motivation of human action in general. We act in accordance with particular beliefs about the world, of which we’re certain enough (we must assume some level of predictability about the world if human action is to be conceived of as effective in any way). The special case of Philip J. Fry provides us with an example both of what happens when a misconception is introduced into our worldview and of the primacy of logical coherence in our conception of an external reality.
I Choose to Believe What I Was Programmed to Believe!
A worldview is a set of beliefs we tend to think of as being in some way definitive of the world in which we live. Identifying a plausible answer to the question of how we acquire these beliefs is, indeed, a huge philosophical task. The ways in which we acquire beliefs are as varied as the range of our beliefs themselves, which can be simple, like “the floor is white,” or complicated, like “the societal constructs involved in our belief formation limit the possible worldviews of someone within a particular geographical area.” The belief that “the floor is white” is based on perception. A complicated judgment, on the other hand, requires more.
Somewhere in the middle, we identify objects as something. In order to make the statement “the floor is white,” I must have already identified the white thing as a floor. As to whether this is a perception or a judgment, philosophers diverge. Aristotle (384–322 B.C.) attributes the identification of objects to perception (aisthēsis), when he claims that there’s a kind of “incidental perception” responsible for identifying something as something.
Aristotle gives the example of identifying the pale thing we actually see as Cleon’s son. It’s “incidental” because we do not actually see “Cleon’s son,” but what we do see happens to be the son of Cleon. On the other hand, perhaps it’s our judgment that’s responsible for the fact that we see something as something (that we identify the objects we see as objects, rather than a mishmash of colors and shapes). What we actually see is a particular arrangement of colors with defined boundaries, and we impose a name on it by judging that this thing we see is a unity, and a particular thing, namely, “The Professor.”
We also tend to group similar things together, and then we come up with universal terms for similar things, like “Decapodian” (a race of sapient, lobster-like creatures from the planet Decapod 10). According to David Hume (1711–1776), by grouping similar things together that aren’t identical, we introduce the possibility of mistakes in our reasoning. In An Enquiry Concerning Human Understanding Hume states: “Ambiguity, by this means, is gradually introduced into our reasonings: similar objects are readily taken to be the same: and the conclusion becomes at last very wide of the premises.”
As we group things together based on similar qualities, and our thinking about them becomes more abstract, we introduce at every step the possibility of error. If we carry these errors through our reasoning without re-examining them, they multiply at each level. Imagine, for instance, Mom’s reasoning as to why Fry would want to buy a 1000-year-old can of anchovies (in “A Fishful of Dollars,” Season One). She attributes to him the motivation of a grand scheme to destroy her robot oil business, and devises a complex plan to get the anchovies based on this reasoning (that plan being to bankrupt Fry, thereby forcing him to sell them). But her whole plan rests on a misconception; when she eventually realizes that he has no such scheme, her plan seems needlessly complicated and, overall, to have missed the point.
So, along with the various ways in which we acquire beliefs, we can also identify the vast possibility for error in those beliefs. We might be relatively certain that the floor is hard, and we will not fall through it, but we might be less certain of who will win Oscars this year. The former belief depends on sense perception, whereas the latter belief depends on our own judgment of the quality of aspects of films, as well as a prediction of the actions of other people. We might go so far as to doubt all of our beliefs, like René Descartes (1596–1650) did. But I think, in general, we can be certain enough of how the world works to get along in it.
That Explains the Boat Eggs
Where Fry goes wrong is in his interpretation of what it is he’s interacting with, and how it fits in with what he already knows. People in general go wrong pretty often, too. Out of the corner of our eye, a lamp post looks like a person, or a coffee cup blowing across the street looks like an animal, but upon further inspection, we usually correct ourselves. In this process, we bring in other beliefs to make sense of what’s actually happening. So, if we were to encounter something that seems to be a boat but somehow produces eggs, we would likely reconsider our assumption that the thing is a boat. Fry, on the other hand, integrates the eggs into his conception of a boat: he doesn’t question his original identification of the object as a boat, but instead decides that boats lay eggs. (After all, he has a boat, and eggs seem to be coming out of it!)
What’s interesting about Fry is that his logic is always comprehensible to us. We can imagine what someone would think if they held to particular fundamental beliefs and assumptions. Even if we don’t believe the same things, we can imagine what someone who does believe them would think. While the particular beliefs differ, our ways of reasoning are pretty similar from person to person. We do this a lot, in fact, when we try to think of what a particular person, or a particular kind of person, would do in a given situation. When the Professor hides a box containing Universe A, for instance, he puts it in a location where, he says, “only a crazy lobster” would think of looking for it—that is, inside a treasure chest in a fish tank (“The Farnsworth Parabox,” Season Four). So, when the box isn’t there, we infer that Zoidberg has it—and he does.
When we’re attempting to figure out how the world works, we tend to take as most certain the things we can actually observe firsthand. We say, for instance, “This happened, I was there,” as support for a belief in the fact that something happened. We notice, over time, that certain things tend to go together. Then we come to expect them. And the more we see things together, the more likely we are to believe that one is associated with the other. For instance, since in most episodes Leela is dressed in a white tank top, I have reason to believe that in any given episode that’s what she’ll be wearing. (Or, if it’s chilly out, perhaps an off-the-rack lime green affair.)
Fry makes weird associations. Look at “Mother’s Day.” When the Planet Express employees try to figure out a way of traveling without a hover car, Fry comes up with an idea:
FRY: Wait! In my time we had a way of moving things long distances without hovering.
HERMES: Impossible!
FRY: It was called . . . let me think. It was really famous—Ruth Gordon had one. The wheel!
For some reason, the concept of the wheel and Ruth Gordon go together for Fry, and it works for him. He remembers what a wheel is (sort of). But sometimes he doesn’t make the connections he should. And, so, his wheels end up being ovals. Leela, who might be inferring from theory, or who might just associate roundness with things that roll, suggests that the wheel might work better if it were round. But Fry will have none of it. He doesn’t connect the increased work required to roll something oval with the work he will have to do to pull a wagon.
Oh, I Would Dearly Love to Believe That Were True. So I Will!
The more we want to believe something, the longer we’ve believed it, and the more evidence we think we have for it, the more invested we become in the belief being true. So, when we come across something that seems to contradict that belief, we’re more likely to attempt to reconcile the new information with the belief we already hold than to question our fundamental assumptions.
In Season Two’s “The Cryonic Woman,” Michelle, Fry’s girlfriend from the past, has a lot of trouble adjusting to the year 3000, whereas Fry, not so much. She seems a lot more uncomfortable with everything that’s different from what she recalls from the past. Fry, in the series’s pilot episode (“Space Pilot 3000”), has a little bit of trouble, but when he meets Bender, he adapts rather easily. As opposed to being freaked out by the idea of humanoid robots, he wants Bender to be his friend.
BENDER: You really want a robot for a friend?
FRY: Yeah, ever since I was six.
Fry exhibits a kind of adaptability that we might expect from someone who’s used to discovering that their fundamental assumptions are misconceptions. We might attribute this to the fact that since the beginning of the series he has been adapting to living in the future, and is therefore used to finding his assumptions are inaccurate. But it seems more likely to be a result of his brain thing, itself a result of faulty inference.
In Season Three’s “Roswell that Ends Well,” Fry’s particular brand of logic leads him to the conclusion that the woman he meets, Mildred, can’t really be his grandmother (since his [assumed] grandfather is dead, yet he still exists). But the Professor sees the situation more clearly. He realizes that it’s less likely that Mildred isn’t Fry’s grandmother than it is that Fry has another grandfather. And he quickly identifies this other grandfather as Fry himself.
FARNSWORTH: What the hell have you done, Fry?
FRY: Relax! She can’t be my grandmother. I figured it all out.
FARNSWORTH: Of course she’s your grandmother, you perverted dope! Look!
MILDRED: Come back to bed, deary.
FRY: It’s impossible! I mean, if she’s my grandmother, who’s my grandfather?
FARNSWORTH: Isn’t it obvious? You are!
Once Fry has the situation worked out in his mind, he acts in accordance with what he believes to be true. He seems to believe wholeheartedly in the situation as he conceives of it, but, when new evidence arises, he is also clearly more willing than most to reconceive of a situation. An exception might be in “The Cryonic Woman,” when Fry believes he has been frozen for one thousand years and is now in the year 4000. We can see him trying to hold on to the idea that he has been transported in time, instead of just location. He seems to think he has pretty good evidence for the fact that he is in the future, instead of merely in L.A., despite the fact that, somehow, his friends are all there too.
FRY: So you’re saying these aren’t the decaying ruins of New York in the year 4000?
FARNSWORTH: You wish! You’re in Los Angeles!
FRY: But there was this gang of ten-year-olds with guns.
LEELA: Exactly, you’re in L.A.
FRY: But everyone is driving around in cars shooting at each other.
BENDER: That’s L.A. for you.
FRY: But the air is green and there’s no sign of civilization whatsoever.
BENDER: He just won’t stop with the social commentary.
FRY: And the people are all phoneys. No one reads. Everything has cilantro on it. . . .
Some of this evidence is clearly inserted for comedic effect. (Fry hasn’t actually run into any non-reading phoneys eating cilantro at this point.) But the point is that he seeks to maintain the assumption that he has been transported in time—thus allowing for the possibility of an extensive degradation of society—rather than accept that such a vast difference could be accounted for by a change of location (New New York to L.A.).
Other times, Fry’s observations are apt, but he seems to have a different focus than we might. For instance, in “The Devil’s Hands Are Idle Playthings” (Season Four), Fry makes an inference that, while true, might not be what’s immediately evident to the rest of us, based on what precedes it:
FRY: [singing] Destiny has cheated me,
By forcing me to decide upon,
The woman that I idolize,
Or the hands of an automaton,
Without these hands I can’t complete,
The opera that was captivating her,
But if I keep them,
And she marries him,
Then he probably won’t want me dating her.
In the end, Fry’s little confusions are what make him such an endearing character. We can understand why he thinks the way he does, even though we might not think the same way. Fry is working with a constantly evolving set of assumptions, which are sometimes revealed in hilarious ways. He tries to fit new information into this set of assumptions so that everything coheres, but he runs into trouble when what he thinks does not correspond to what is the case. When his co-workers point out difficulties in his thinking, he adapts and re-evaluates his assumptions. But this process is not unique to Philip J. Fry. When we consider the extent to which our beliefs may be erroneous, the difference between us and Fry is only a matter of how much and how often. Our capacity to adapt might, in fact, be inferior.
FRY: Hey, I’m starting to think you all don’t think I’m very smart.
FARNSWORTH: You can barely remember your own name, Einstein.
FRY: Einstein is a hard name to remember. (“The Duh-Vinci Code”)
Aliens, Robots, and Mutants . . . Oh My!
10
Moral and Immoral Robots
CURTIS D. VON GUNTEN
FRY: So let me get this straight. This planet is completely uninhabited?
BENDER: No, it’s inhabited by robots.
FRY: Oh, kinda like how a warehouse is inhabited by boxes.
FRY: But Bender, we’re your friends!
BENDER: Friends? That activates my hilarity unit! I’m just a machine to you. You’re no more friends with me than you are with the toaster, or the phonograph, or the electric chair!
—“Fear of a Bot Planet”
The Galaxy of the thirty-first century contains an impressive list of aliens. Much of that world’s comic appeal stems from the fact that these strange life forms act a lot like us. Yes, there are some differences—Decapodians like Zoidberg and Amphibiosans like Kif don’t share our diets or mating mechanics—but the differences pale in comparison to the overwhelming commonalities.
The aliens use language, appreciate humor, seek careers like cooking and acting, and even have competitive impulses expressed through sports like Blernsball. It might not be surprising, then, that these creatures possess the same amount of moral worth as human beings. The Futurama Galaxy is a gigantic ethical community where intelligent life is treated fairly and humanely; economies are connected through the Intergalactic Stock Exchange and political alliances are formed through the Democratic Order of Planets.
Yet, there’s another variety of “life-form” populating this galaxy which appears different from the rest. I’m referring to our favorite shiny, metal robots. One of the more extraordinary aspects of Futurama is the seamless integration of the digital, electrically circuited lives of robots with the lives of biological beings in all their squishy, bloody glory. But do these robots have any moral worth? After all, aren’t they just clockwork machines running programs designed by humans? If our answer to the question of whether Dr. Zoidberg should be treated fairly and without harm seems obvious, the question when applied to Bender or Calculon is less clear.
I’m Filled with a Large Number of Powerful Emotions
There’s a lot at stake when deciding which objects belong in the moral domain and which don’t. If you have moral status it means, at minimum, there are constraints on how you can be treated. Others are restricted from making you suffer for no reason, and when you suffer, you’re an appropriate object of others’ sympathy.
Some things clearly have moral value while others clearly don’t. If Robot Santa set a Christmas tree on fire, it’s generally believed your feelings would be misplaced if you felt bad for the tree, since the tree doesn’t have rights, doesn’t care about fairness, and certainly can’t suffer. If a thing doesn’t have moral standing, then it’s not appropriate to feel sympathy for it, and you’re free to use it however you want for your own selfish purposes, as long as in the process you don’t harm other things with moral worth. If, for instance, the burning tree harmed an innocent Neptunian, it would be appropriate to feel bad for the Neptunian. So, some objects belong in the moral sphere (Neptunians like Elzar) and some do not (like cans of Slurm). The answer to the question of what gives something moral worth, however, is less clear.
If it makes sense to say that we shouldn’t inflict unnecessary pain on Bender, then Bender must be able to experience pain. More to the point, if something can’t have any experiences at all, then there’s no way in which the life of that thing can be good or bad and become better or worse. If, on the other hand, an object can have conscious, subjective experiences, then, from its perspective, life can seem better or worse at any moment depending on whether its current mental state feels positive or negative. This characteristic of having experiences is called sentience.