Halo and Philosophy

by Luke Cuddy


  If this is right, then what we should be looking for are ways to pump our intuitions about what is conceivable—to take our intuitions beyond what they might ordinarily be in our cultural or historical situation. Science fiction is especially good at this, and if science fiction is good at intuition pumping, videogames are even better, since they incorporate the same science-fiction narratives while at the same time providing an immersive experience that makes distant possibilities more salient and plausible. Fiction woven from familiar videogame experiences (like RvB) is in a position to prime our intuition pumps even more.

  AIs and Personal Identity in RvB

  There are several points where the RvB narrative raises interesting questions about the nature of the person. In the Halo games an important role is played by AIs—the artificial intelligence agents that guide figures like Master Chief (and, of course, guide the user through the gameplay). AIs in fiction are nothing new, but the intimate role they play with the agents they are assigned to in Halo adds a twist—the AIs are an ever-present inner voice that can offer suggestions, provide data, and counsel courses of action. In RvB the tightness of this bond is highlighted, and the potential downside of an ever-present voice is noted.

  Among the characters in RvB are a group of special agents who are assigned their own personal AIs. As we will see, these AIs are problematic in several ways, and one of the special agents—Agent Washington—was temporarily driven crazy by his AI and had to have it removed. It seems Washington was at one point unable to distinguish his own thoughts from those of the AI.

  The intimate link between these special agents and their AIs (and even between Halo’s Master Chief and his AI, Cortana, for that matter) suggests new and better ways of illustrating the extended mind hypothesis advocated by David Chalmers and Andy Clark—the thesis that our minds include certain external tools that we use.43 Chalmers has been known to wave his iPhone, declaring it to be part of his extended mind. But this is not as effective an intuition pump as playing a game in which your personal AI assistant is constantly guiding you or, for that matter, imagining a scenario in which a special agent has such an AI 24/7 and the AI is having destructive psychological effects.

  The RvB narrative is also effective at illustrating new ways in which artificial intelligence agents might be realized. One of the Blue Team members, Church, turns out to be an AI. He and the team discover that he can jump into both human bodies and robots, basically reprogramming them. The scenario is not, as far as we can tell, a “ghost in the machine” scenario. Church is not a little person inside another person—a homunculus—but rather the host body is rewired (reprogrammed) for the duration of Church’s occupation of it. There are also elements where the homunculus story seems to be in effect, as when Church and another AI enter into the cavernous mind of the Blue Team member Caboose, but this doesn’t appear to be a case of take-over and reprogramming so much as visitation.

  Church is also able to appear in a ghost-like form when disembodied (in fact he even takes himself to be a ghost). How do we make sense of this? One possibility is that there’s an additional medium that is programmable but which is causally inert until it can enter into a programmable physical system like a human body or a robot. We could think of it as being like the “soul stuff” discussed by Hilary Putnam44 or as dynamic patterns of information that are (at least temporarily) self-sustaining without the usual physical or biological substrate.

  Project Freelancer

  For our new thought experiments the relevant story line revolves around Project Freelancer, a twenty-sixth-century military-industrial R&D project under the control of one Dr. Leonard Church, otherwise known as “The Director.” The goal of Project Freelancer is to equip elite soldiers with enhanced armor and dedicated AIs to assist them in the operation of the armor and presumably in other matters.

  The initial plan of Project Freelancer is to have forty-nine special agents (each code-named for one of the remaining states in the United States—Florida has gone rogue), and each is to be assigned an AI.

  As we write this, the plot line is still unfolding, but as far as we can determine, the following is the case. At some point shortly after its inception, Project Freelancer is terminated with only two AIs having been created. One is based upon The Director (this AI is sometimes known as “The Alpha” and sometimes as “Church”), and the other is based upon memories of a woman in The Director’s past (sometimes known as “Alison” and sometimes as “Tex”).

  It’s unclear at this point whether Tex/Alison was made by The Director or by The Alpha. It’s possible that the plot could evolve in such a way that The Alpha is identical to The Director. That mind-bending possibility doesn’t undermine the thought experiment we are imagining here, which only requires that they could be distinct even if psychologically continuous.

  The Director is undeterred by the cancellation of the project (or cancellation of funding for the project) and finds a way to generate AIs on the cheap. This is not as trivial as it sounds, because apparently these AIs are not replicable in the way that typical computer programs are. Exactly why is never explained, but we can speculate. Perhaps the complexity of the AIs is such that simple copying is not possible. Or it may be that they employ elements of quantum computation, in which a read/write copying mechanism would not work because the attempt to read the state of The Alpha would have the effect of altering its state (the idea being that quantum states cannot be precisely observed without changing those states).
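
  The idea behind this last speculation is a genuine result of quantum mechanics, the no-cloning theorem. Here is a minimal sketch of the standard textbook argument (our addition, not anything stated in RvB):

```latex
% Suppose a single unitary operation U could copy arbitrary unknown states:
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle
  \quad \text{for every state } \lvert\psi\rangle.
% Apply U to two states |psi> and |phi> and take the inner product of the
% two results. Because unitary operations preserve inner products:
\langle\phi\vert\psi\rangle = \langle\phi\vert\psi\rangle^{2}
% Hence the inner product must equal 0 or 1: a universal copier could only
% handle states that are identical or perfectly distinguishable, never
% arbitrary ones. An unknown quantum state can be moved, but not duplicated.
```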

  Whatever the reason for the copying problem, The Director finds that he can fracture off fragments of the original AI by torturing the AI until it undergoes a kind of psychological fission—for example, separating the memory from various emotional components. Once the AI’s personality has undergone this fission, bits of each component can be fractured off. This isn’t a copying process but a process in which new AIs are chipped away from the original. It seems that the psychological fission allows more AIs to be created, or perhaps the fission is necessary for the fracturing to be possible (we’ll say more about this in a bit).

  It may be hard to see how this could work from a scientific point of view, but we can imagine various scenarios. Suppose we thought of the AI as being like a hologram. The interesting thing about a hologram (the physical film, not what appears to us) is that every point in the hologram records light from the entirety of the original scene, so that in principle the whole scene can be reconstructed from any arbitrarily small piece of the hologram (although one loses resolution—that is, the image becomes fuzzier).

  If psychological fission is required before fracturing off AIs, then we can speculate that The Director discovered that one could not get a holographic fragment of the whole, but only fragments of its sub-components. This is certainly plausible on the assumption of the modularity of the mind as advocated by Jerry Fodor—there would be no uniform central processing but rather a number of sub-components of the mind that act in concert with each other.45 One can only get fragments from these sub-components because the sub-components are “encapsulated”—none of them carries information about the entire Alpha AI.

  The individual AIs are thus produced via fracturing, each having the distinctive features of certain aspects of The Alpha’s (Church’s) psychology. For example, The Epsilon unit contains a number of relevant memories; The Omega unit has distinctive emotions such as anger. (Given that The Epsilon unit retains the memories of being tortured, it is no surprise that things did not go well when it was assigned to Agent Washington.)

  A key element of the plot line involves one of the Freelancer agents, Agent Maine, who has gone about collecting the various AIs and incorporating them with his own. This agent is also called “The Meta.” The Meta becomes quite powerful at one point by virtue of acquiring the AIs and the enhanced armor components that they deploy. Although it is not explicitly stated, one may also speculate that The Meta is driven by a desire to recombine the components that were fractured off of The Alpha.

  Where Parfit Went Wrong

  This is a very partial description of the plot line, but we can already see many elements of the philosophical thought experiments we discussed earlier. For one thing, the fact that AIs can have different forms of embodiment argues against materialist theories of personal identity for such agents—theories that say identity is constituted by the body. At first glance this seems consistent with Locke’s argument, and we might therefore think that Locke’s theory of personal identity fits well within the story line, but it really doesn’t; in RvB the memory component is important, but it is not sufficient to secure the identity of the AI.

  Is RvB then a thought experiment that lends support to Parfit’s theory of personal identity? At first glance the answer seems to be “yes,” but there are important ways in which the RvB scenarios cause trouble for Parfit’s position. The first and most obvious problem is that for Parfit, psychological continuity is the key element in the survival of an agent. Yet in the RvB story line, when AIs are made by cracking off bits of The Alpha, there is plenty of psychological continuity between the source and the “copy” (arguably more continuity than in the split-brain cases), but little intuitive pull to think this ensures the survival of the source. For example, the various AIs extracted from The Alpha could all be combined into something—let’s say The Meta—with the psychology of The Alpha, but it doesn’t follow that we would think The Alpha could survive the destruction of its “soul stuff” simply because The Meta is still around.

  Why is this? Perhaps it’s because there’s no reason to think that the fractured-off pieces of The Alpha are significant enough chunks to ensure The Alpha’s survival. The split-brain cases discussed by Parfit are different because the continuing individuals retain substantial portions of the original brain. If smaller bits are fractured off—bits that carry the psychology of the whole (just as a bit of holographic film can carry the entire image)—we do not get the intuition that the original object (hologram or AI or person) can survive its material destruction simply because psychologically continuous fragments of it are still around. The intuitions go against Parfit in such cases.

  This is even clearer if the fragments must be recombined into something like The Meta before we get something that is psychologically continuous with the original object. Suppose that The Alpha is destroyed but that The Meta is assembled from fragments of The Alpha. We resist saying that The Alpha has survived in such a case. Once The Meta is assembled it may have the psychology of The Alpha, but something about its history—being assembled from fragments fractured off of mental modules—seems to undermine our intuition that The Alpha has survived.

  Parfit addresses cases of simple fission where two halves are rejoined, but what are the intuitions for cases where there is a massive fission of the original, not into functional halves but into more or less dysfunctional components that are then recombined into a functional whole in the form of The Meta? Again, we think the intuitions break against Parfit.

  If Parfit’s story is weak here, is there an alternative? The RvB story line suggests an account of personal identity in which it is possible (at least for AIs) to have a kind of disembodied existence that is nonetheless organized computationally. This is a possibility—discussed by Putnam—that the computational theory of mind could apply to “soul stuff” just as well as to minds. In the RvB universe, it seems that the soul stuff (or whatever substance explains the visible manifestation of the AI when disembodied) may well be the key to survival for an AI. Carve off a little bit and you may get something psychologically continuous with the source, but you won’t thereby get the survival of the source, and you won’t get something with the goals and plans of the source.

  The question here is not whether the story is an accurate picture of how things in fact stand, but whether such a scenario is conceivable and thus illuminating of our concept of personal identity. Perhaps we did not see this as a possible position before RvB. If so, then we can think of RvB as an exercise in philosophy, designed to get us to see possibilities that had previously escaped our attention.

  In a famous passage from Shakespeare (Hamlet Act 1, Scene 5), Hamlet cautions Horatio about the limits of philosophy: “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”

  The cautionary advice can be flipped on Hamlet, because if we extend the limits of our philosophical intuitions with RvB-like thought experiments, what we can dream of may well outrun the furnishings of our real world. Coming to Horatio’s defense, we might put it this way: “There are more things in Blood Gulch (and our philosophy), Hamlet, than are found in your heaven and earth.”

  8

  Enlightenment through Halo’s Possible Worlds

  LUKE CUDDY

  Consider the following scenarios from Halo 3 right before you rescue Cortana at the end of Floodgate, both of which are true, sort of:

  Scenario 1: My checkpoint begins. There’s a Flood running at me with a tentacle-like claw swinging menacingly in the air. He jumps, but I still have six rounds left on my Brute Shot, and one of them has this Flood’s name on it. But when I shoot I miss, causing me to panic and miss again. He swipes me in the face twice with his claw, taking my power down almost halfway. I fire again in a continued panic, but the Flood is right in front of me now, and the Brute Shot is too powerful: the shot kills us both.

  Scenario 2: My checkpoint begins. There’s a Flood running at me with a tentacle-like claw swinging menacingly in the air. He jumps, but I still have six rounds left on my Brute Shot, and one of them has this Flood’s name on it. A split second later his appendages and insides are splattered all over the rock wall to my right and the canyon to my left.

  How is it that I can say these are both true, when they seem to be competing instances in time? Both cannot have actually happened, right? But both scenarios are possible worlds that exist in my playing of Halo 3 (though I would prefer the second to be the actual world). Some philosophers and, in recent scientific history, scientists have suggested that possible worlds really exist. In other words, as we’ll see, they suggest that possible worlds besides the one we live in actually exist.

  Something else to consider is that the possible world of scenario 1 will never again occur. It’s suspended in time (somewhere on my Xbox?) in that specific state. One could wonder what happens to that future and the creatures in it. For those of us who’ve used emulators (software that reproduces old videogame consoles like the NES, Atari, or Genesis), possible worlds are very familiar because of save states. Save states allow you to override the constraints of the original game by saving your progress at any point. For example, in the original The Legend of Zelda, when you die, you either start at the beginning of the last dungeon you were in (minus most of your hearts) or at the beginning point in the overworld. But while running this game on an emulator, you can save it at any point, including right before a very difficult dungeon boss, or at a room in a dungeon where there are three doors. If you don’t know where these doors lead, you can create a save state in that room, then try one door in one possible world, then the other, then the last. Of course, if you try one door and it’s a dead end or too difficult, you can start that save state over and try another. It makes the game much easier, but it also illustrates possible worlds in videogames.
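
  As a rough illustration of the mechanism (a toy sketch, not any real emulator’s API; the Console class and its fields are invented for the example), a save state is simply a snapshot of the machine’s complete state, which can be restored at will:

```python
import copy

class Console:
    """A toy 'console' whose entire state is one small dictionary.
    Real emulators snapshot RAM, CPU registers, and I/O state similarly."""

    def __init__(self):
        self.state = {"hearts": 3, "room": "three_doors", "rupees": 0}

    def save_state(self):
        # Deep-copy everything: the snapshot is a frozen possible world.
        return copy.deepcopy(self.state)

    def load_state(self, snapshot):
        # Restoring discards the current timeline and resumes the saved one.
        self.state = copy.deepcopy(snapshot)

console = Console()
snapshot = console.save_state()      # the room with three doors

console.state["room"] = "door_one"   # explore one possible world...
console.load_state(snapshot)         # ...dead end; rewind
console.state["room"] = "door_two"   # ...and branch into another
```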

  In games like The Legend of Zelda on emulators, we have more time to reflect on our choices in these possible worlds, whereas in the Halo series we don’t. In Halo, we don’t care about what happens in what future; we don’t care about what puzzles of space and time are illuminated by the concept. We only care about achieving the future where we can actually move on to the next part of the level! In the context of Halo, as we will see, possible futures can potentially lead to a high emotional state, such as anger or frustration, via both single player and multiplayer—something that doesn’t occur the same way in other videogames. Halo’s ability to produce this state in its players is what makes it a ripe candidate for leading those players to enlightenment-oriented thinking in the Buddhist sense.

  Possible Worlds

  In philosophy one could argue that the notion of possible worlds goes back to Gottfried Leibniz’s idea that the actual world we live in is the best of all possible worlds. According to Leibniz, there are other possible worlds out there that we could live in, but we don’t because God instead created our world, which is, apparently, better than any other possibility. David Lewis, the twentieth-century American philosopher, wrote about possible worlds in relation to language. For instance, when you get pwned by a seven-year-old in Halo 3’s multiplayer, you might say, “Yeah, he pwned me but I could have pwned him.” If you didn’t pwn him, then how can it also be true that you could have pwned him? According to Lewis, statements like the latter can have a truth value because there actually is another world where they are true, existing in time simultaneously with our world.
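
  Lewis’s proposal can be given a precise truth condition. As a rough, textbook-style formulation of possible-worlds semantics (simplified; Lewis’s own counterpart theory adds refinements):

```latex
% "Possibly P" is true just in case P holds somewhere in logical space:
\Diamond P \text{ is true at a world } w
  \iff P \text{ is true at some possible world } w'.
% So "I could have pwned him" is true in the actual world if and only if
% there is some possible world at which "I pwned him" is true.
```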

  In the realm of science, the idea of possible worlds came to the fore with the advent of quantum physics and, eventually, string theory. Today, string theory is hotly contested but remains a frontrunner for the theory of everything—a theory that unites our understanding of the physical world at the quantum (small stuff) and Newtonian (big stuff) levels. Among other things, string theory suggests that there are, in fact, parallel worlds existing alongside our own. Although string theory has yet to be backed up by hard observable evidence, its acceptance by many members of the scientific community has made the notion of possible worlds more appealing to our culture in general.46
