In the “safe” possibility space between the fixed rules of a game, which is analogous to the real space between the limitations of Power-Knowledge, players can practice (in both senses of the term) ways of refining lived and played experience. Although the specific strategies of expansive gameplay (playing through Halo 3 without killing any enemies, or using the game’s explosive weapons in an aesthetic performance) don’t really translate into the real world, and hardly count as making life a work of art, the forms that these strategies take very much do. As noted above, imposing new rules on existence and re-purposing available tools are entirely consistent with Foucault’s notion of aesthetic self-creation.
The active contemplation (or at the very least, the implicit acknowledgment) of the rules and possibilities of a videogame that is so fundamental to engaging in expansive gameplay can be seen as a small-scale, simulated version of the problematization Foucault insists is necessary for aesthetic self-fashioning. Expansive gameplay emerges from the exploration of a game’s rules and the spaces for play that they constitute, just as practices of freedom emerge from the exploration and contemplation of the self as constituted by the rules of society. As a group of players explores the basic multiplayer modes in Halo 2 and experiments with different combinations of weapons and game modes, the rules and possibility space of the game become apparent, and are reconfigured into the expansive gameplay of Zombie.
Players’ relations to themselves and to other players, and the idea of transforming these relations, are inevitably considered in the course of expansive gameplay. As players develop ways of refining their in-game experience, expansive gameplay can (perhaps) help them to problematize real-world rules and systems and to cultivate the inward and outward-looking critical stance necessary for aesthetic self-fashioning.
This is the utopian view. The gloomier interpretation is that expansive gameplay actually replaces real-world practices of freedom, by allowing individuals to fashion and produce themselves in a straightforward, entertaining, and harmless manner that has minimal impact on their real-world experience. Expansive gameplay allows people to enjoy the illusion of liberty while their real lives remain unchallenged and unchanged.
In either case, thinking of expansive gameplay as a simulation of sorts is a productive way of understanding it as a cultural phenomenon. When expansive gameplay is considered as analogous to aesthetic self-fashioning (but on a smaller scale and at a remove from the real world), an interesting relationship between the two practices becomes apparent. The kinds of strategies and forces that can be employed by an individual constituted by and existing within a system of rules, and the way in which these practices are cultivated and produced through the active contemplation of spaces of possibility, help draw connections between the playful, “safe” exploratory practice of expansive gameplay and the more substantive transformations enabled by a rigorous, real-world practice of Foucauldian self-fashioning.
So next time you’re trying to beat a record speed run of Floodgate, blasting a jeep across Blood Gulch, or pwning n00bs in a round of Zombie, consider the relationship between expansive gameplay and Michel Foucault’s philosophy of aesthetic self-fashioning. Reflect on the structures of Power-Knowledge that determine who you are and the possibility spaces available to you, and think about ways of taking control of your lived experience, just like you’ve taken control of your played experience of Halo.
15
Would Cortana Pass the Turing Test?
SHEROL CHEN
Artificial Intelligence (AI) has its beginnings in philosophy and has been adopted by a number of communities of thought today. The Halo series suggests several layers at which better AI could be applied to interactive experiences. In Cortana, Halo gives a familiar example of Artificial Intelligence as science fiction imagines it. In fiction, there are countless examples of what AI could be. In reality, however, there are a number of less obvious applications, most of which are far from being practical.
[Figure: The AI Cortana stands aboard the Pillar of Autumn (Halo: Combat Evolved, 2001).]
So, where else do we find AI, or the lack of it, in a game like Halo? That depends on the community you ask. If you’re talking to the AI Summit at the Game Developers Conference, then the AI is the pathfinding and combat strategies of Non-Player Characters (NPCs). Game AI, experienced as alien combat strategies for you to thwart, is the only aspect of Halo where the actual practice of Artificial Intelligence is applied in the proper sense. Psychologists and cognitive scientists, meanwhile, are interested in the user-end perceptions and experiences of the challenge of First-Person Shooters (FPSs), as well as the emotional engagement and overall fun that a player has. I’ve interacted with the lead AI developer for Halo 2 and Halo 3, Damian Isla, in a few different settings, from AI conferences, to game conferences, to conferences on Disney cruise ships. Not only has Damian made many of the advancements in game AI practical, he is now actively pursuing what is not yet feasible at Moonshot Games.
[Figure: Combat visualization for NPC AI in Halo games (Halo GDC talks, 2002 and 2008).]
Making NPCs better at FPS games is a given, but current AI research in game technology also aims to create characters that exhibit unscripted intelligence, stories that can intelligently construct themselves, and mechanics that can assist or achieve the game designers’ goals for an otherwise unattainable end experience. To give insight into the pursuit of what is not yet possible, it’s important to understand the overall experience and the limits that consumers take for granted. Areas of research such as Game Studies, Expressive Intelligence, and Serious Games enable more intentional means for the application and analysis of interactive experiences and development processes. Through an understanding of the technology, we can hold realistic expectations about what is impossible, or, at the very least, not settle for less than what is currently possible. By focusing on common problems in the history of AI and their relation to believability in videogames, this chapter will ask what would make Halo, as defining as it is, more than just a well-thought-out franchise.
[Figure: A visualization of actual AI in practice today (“Handling Complexity in Halo 2 AI,” Damian Isla, GDC 2005).]
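Isla’s GDC talks describe the combat behavior of Halo’s NPCs in terms of prioritized behaviors organized into a tree, an approach now widely known as the behavior tree. As a rough illustration of the idea only, here is a minimal sketch in Python; the node types and the Grunt-like rules are invented for this example and are not Bungie’s actual code.

```python
# A minimal behavior-tree sketch, loosely in the spirit of Isla's GDC
# talks. Every name here is illustrative, not taken from Halo's code.

class Selector:
    """Try children in priority order; succeed on the first that succeeds."""
    def __init__(self, *children):
        self.children = children

    def tick(self, npc):
        return any(child.tick(npc) for child in self.children)

class Sequence:
    """Run children in order; fail as soon as one fails."""
    def __init__(self, *children):
        self.children = children

    def tick(self, npc):
        return all(child.tick(npc) for child in self.children)

class Condition:
    """Leaf node that checks a fact about the world."""
    def __init__(self, predicate):
        self.predicate = predicate

    def tick(self, npc):
        return self.predicate(npc)

class Action:
    """Leaf node that changes the NPC's state and reports success."""
    def __init__(self, effect):
        self.effect = effect

    def tick(self, npc):
        self.effect(npc)
        return True

# A Grunt-like brain: flee if the leader is dead, otherwise fight a
# visible enemy, otherwise patrol. Priority falls out of child order.
grunt_brain = Selector(
    Sequence(Condition(lambda npc: npc["leader_dead"]),
             Action(lambda npc: npc.update(state="flee"))),
    Sequence(Condition(lambda npc: npc["enemy_visible"]),
             Action(lambda npc: npc.update(state="attack"))),
    Action(lambda npc: npc.update(state="patrol")),
)

npc = {"leader_dead": False, "enemy_visible": True}
grunt_brain.tick(npc)
print(npc["state"])  # prints "attack"
```

The selector’s child ordering is what makes such a tree readable: a designer can add or reprioritize whole behaviors without rewriting a monolithic state machine, which is exactly the complexity problem Isla’s 2005 talk addresses.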
What Is Artificial Intelligence?
Cortana is a clear fictional example of an Artificially Intelligent entity. So, what isn’t Artificial Intelligence? Well, humans aren’t AI, because they’re naturally intelligent, and plasma rifles aren’t AI, because they aren’t intelligent. In the realm of science fiction, Artificial Intelligence takes the form of machines that believably emulate human behaviors. In real life, applications of AI are far less dramatically compelling. In the widely used AI textbook Artificial Intelligence: A Modern Approach, Russell and Norvig introduce four primary pursuits of AI:
Some definitions of AI, organized into four categories:

Systems that think like humans:
“The exciting new effort to make computers think . . . machines with minds, in the full and literal sense” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978)

Systems that think rationally:
“The study of mental faculties through the use of computational models” (Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act” (Winston, 1992)

Systems that act like humans:
“The art of creating machines that perform functions that require intelligence when performed by people” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better” (Rich and Knight, 1991)

Systems that act rationally:
“A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes” (Schalkoff, 1990)
“The branch of computer science that is concerned with the automation of intelligent behavior” (Luger and Stubblefield, 1993)
The textbook first distinguishes between being rational and being human, placing primary emphasis on rational AI. Human behavior and thought currently offer little practical application to today’s relevant scientific pursuits. The AI that we see in science fiction is not only a non-priority for scientists, but seemingly further from attainment than current AI research endeavors.
The study of AI as rational agent design therefore has two advantages. First, it is more general than the “laws of thought” approach, because correct inference is only a useful mechanism for achieving rationality, and not a necessary one. Second, it’s more amenable to scientific development than approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general. Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still may be far from achieving perfection (Russell and Norvig, p. 12).
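To make the textbook’s “rational agent” framing concrete, here is a minimal sketch of Russell and Norvig’s introductory two-square vacuum world, in which a simple reflex agent maps each percept directly to an action and is judged by a performance measure; the function names and the scoring loop are my own illustration.

```python
# A simple reflex agent in the two-square vacuum world, after Russell
# and Norvig's introductory example. Names and scoring are illustrative.

def reflex_vacuum_agent(percept):
    """Map the current percept (location, status) straight to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"                       # cleaning always helps the score
    return "Right" if location == "A" else "Left"

world = {"A": "Dirty", "B": "Dirty"}        # the entire environment
location, score = "A", 0

for _ in range(6):                          # a short episode
    action = reflex_vacuum_agent((location, world[location]))
    if action == "Suck":
        world[location] = "Clean"
    else:
        location = "B" if action == "Right" else "A"
    # performance measure: one point per clean square per time step
    score += sum(status == "Clean" for status in world.values())

print(score)  # the agent is "rational" insofar as it maximizes this number
```

Rationality here has nothing to do with thinking like a person; the agent is just a mapping from percepts to whichever action does best under the chosen measure.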
Now, to distinguish between thinking and acting: what it means to think humanly has its own area of academic pursuit in cognitive science and the philosophy of mind. The textbook calls this the “cognitive modeling approach.” Experientially, however, it only matters that Cortana, in her actions, exhibits believable human behavior, with no regard to how accurately her thought process matches the thinking of human beings. This test for believability, famously published by Alan Turing, is known as the “Turing Test.”
Can Machines Think?
In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing posed the question “can machines think?” He thought the answer could be tested empirically. Turing took a common party game, the imitation game, and used it to test for intelligence. The Turing Test has attracted a number of criticisms since Turing first published it,[95] such as Searle’s Chinese room[96] or French’s seagull test argument[97] (both of which will be discussed in more detail soon). Jack Copeland defends Turing by pointing out that Turing never proposed a formal definition of consciousness; rather, he was merely testing for evidence of intelligence.[98] As quoted below, Turing was not trying to contest what defines intelligence, nor was he claiming that he could create intelligence; rather, he was proposing a test of whether or not computers could feign intelligence.
I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localize it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper. (Turing 1950, p. 5)
In an earlier article,[99] Turing created a fundamental architecture for future computing, the “logical computing machine,” and established that any computable operation can be performed on such a machine. This is known today as the Turing machine, the forerunner of computers as we know them in this century. As an extension of the Turing machine, Alan Turing described the type of machine that would be interrogated in the imitation game, the digital computer: a machine intended to “carry out any operations which could be done by a human computer” (Turing 1950). Systems evaluated through the Turing Test were the first applications of artificial intelligence, or, more specifically, the first instantiations of the believable agent.
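The logical computing machine is simple enough to sketch directly: a finite transition table that reads and writes symbols on an unbounded tape, moving one cell at a time. The toy “bit-flipper” program below is my own example, not one of Turing’s.

```python
# A minimal Turing machine: finite control plus an unbounded tape.
# The transition-table format and the bit-flipper program are toy
# examples of my own, not drawn from Turing's papers.

def run_turing_machine(program, tape, state="start"):
    """Step until the machine reaches 'halt'; return the final tape."""
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")        # "_" stands for a blank cell
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Flip every bit of the input, then halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flipper, "1011"))   # prints "0100_"
```

Anything a digital computer can calculate, some table like this one can calculate too; that equivalence is why the Turing machine counts as the forerunner of the digital computer Turing describes.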
Believable Intelligence
As Copeland argues, thought experiments that counter the appropriateness of Turing’s test are moot, as they refute a claim that Turing was not trying to make. Chinese rooms and seagulls, however, provide a means of analyzing experiences in games like Halo through widely discussed thought experiments from the philosophy of mind community. First, in his own words, Turing describes the inspiration for the Turing Test, the imitation game:

The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. . . .
We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?” (Turing 1950, p. 6)
How does Cortana do it? Does she have electrodes modeled after our neurons? History tells us: maybe, but probably not. Take artificial flight. If birds are capable of natural flight, does this mean that artificial flight is most appropriately an exact physical replication of birds? We don’t, of course, rule it out as possible, but it is evidently not the only way, as the Wright brothers demonstrated. The Turing Test doesn’t ask how Cortana is able to behave humanly; it asks whether or not she behaves humanly. As far as Master Chief is concerned, the constitution doesn’t change the experience.
In 1956, John McCarthy called together a community of researchers at Dartmouth College. This summer research project gave the field of Artificial Intelligence its name and led to the creation of AI labs at Carnegie Mellon, MIT, the University of Edinburgh, and Stanford. Early conversational AI systems such as ELIZA, Parry, Hacker, Sam, and Frump were developed from research projects soon after. Current conversational agents are developed for the Loebner Prize Contest, held every year as a direct application of Turing’s test.
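ELIZA’s conversational trick was keyword pattern matching plus pronoun “reflection,” and its skeleton is easy to sketch; the handful of rules below are a toy script of my own, not Weizenbaum’s original DOCTOR script.

```python
import re

# A toy ELIZA-style responder: match a keyword pattern, "reflect"
# pronouns in the captured fragment, and fill a canned template.
# The rules here are illustrative, not Weizenbaum's actual script.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            fragment = match.group(1) if match.groups() else ""
            return template.format(reflect(fragment))
    return "Please go on."   # stock reply when no keyword matches

print(respond("I am worried about my exams"))
# prints "How long have you been worried about your exams?"
```

A system this shallow passes no serious test of understanding, yet Weizenbaum’s users famously confided in it, which is precisely the gap between acting intelligent and being intelligent that the next section takes up.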
Just Because It Acts Intelligent, Doesn’t Mean It Is
In response to Turing, Searle said that performance does not constitute intelligence. For example, imagine there is a Chinese room, as Searle describes:

Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. . . . Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese. . . . As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program. (Searle, p. 3)
Going beyond Cortana’s natural-language capabilities: as gamers, do we really care whether the Covenant forces mobilize by making decisions through a human thought process, or only that they behave in a challenging manner? Through the lens of Searle’s Chinese room, if I’m only looking for the experience of having a conversation in Chinese, then I can disregard the constitution of the interaction. Believability is a matter of behavior and function, as opposed to constitution.
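Searle’s room can be shrunk to a lookup table without losing the point: the sketch below “converses” by pure symbol matching, and nothing in it understands a word. The romanized entries are placeholder examples of my own.

```python
# Searle's Chinese room reduced to its skeleton: a rulebook mapping
# incoming symbol strings to outgoing ones. The entries are romanized
# placeholders of my own; understanding appears nowhere in the program.

RULEBOOK = {
    "ni hao ma?": "wo hen hao, xiexie.",         # "How are you?" -> "Fine, thanks."
    "ni hui shuo zhongwen ma?": "dangran hui.",  # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols):
    """Purely formal manipulation: match the squiggles, emit a reply."""
    return RULEBOOK.get(symbols, "qing zai shuo yi bian.")  # "Please say it again."

print(chinese_room("ni hao ma?"))  # behaviorally fine; semantically empty
```

To a player, the Covenant’s flanking maneuvers are this rulebook writ large, and, as Searle would insist, their challenge is real even if their “understanding” is not.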
The Seagull Test
But how do we even know we are accurately measuring intelligence through comparison? After all, an instance does not define an entirety. French’s seagull test shows another shortcoming of the Turing Test:

Consider the following parable: It so happens that the only flying animals known to the inhabitants of a large Nordic island are seagulls. Everyone on the island acknowledges, of course, that seagulls can fly. One day the two resident philosophers on the island are overheard trying to pin down what “flying” is really all about. . . . They decide to settle the question by, in effect, avoiding it. They do this by first agreeing that the only examples of objects that they are absolutely certain can fly are the seagulls that populate their island. . . . On the basis of these assumptions and their knowledge of Alan Turing’s famous article about a test for intelligence, they hit upon the Seagull Test for flight. The Seagull Test works much like the Turing Test. Our philosophers have two three-dimensional radar screens, one of which tracks a real seagull; the other will track the putative flying machine. They may run any imaginable experiment on the two objects in an attempt to determine which is the seagull and which is the machine, but they may watch them only on their radar screens. The machine will be said to have passed the Seagull Test for flight if both philosophers are indefinitely unable to distinguish the seagull from the machine.
In fact, under close scrutiny, probably only seagulls would pass the Seagull Test, and maybe only seagulls from the philosophers’ Nordic island, at that. What we have is thus not a test for flight at all, but rather a test for flight as practiced by a Nordic seagull. For the Turing Test, the implications of this metaphor are clear: an entity could conceivably be extremely intelligent but, if it did not respond to the interrogator’s questions in a thoroughly human way, it would not pass the Test. The only way, I believe, that it would have been able to respond to the questions in a perfectly human-like manner is to have experienced the world as humans have. What we have is thus not a test for intelligence at all, but rather a test for intelligence as practiced by a human being. (French, p. 2)