by Luke Cuddy
The Seagull Test tells us that believability can only be assessed against the behaviors of the natural instances being compared. With regard to Halo, whether the aim is to artificially recreate the experience of gunfights, emotional reactions, or the craft of storytelling, intelligence becomes a straightforward endeavor once it is given a purpose. Believability, in practice, aims to recreate some authored and designed experience, such as human conversation, as the Turing Test demonstrates.
Machines Don’t Have Our Background
French goes on to say that “Humans, over the course of their lives, develop certain associations of varying strength among concepts” (French, p. 2), presenting a test that, he claims, a computer would have much difficulty passing. This he calls “Associative Priming,” and he illustrates it with a hypothetical situation:

The Turing Test interrogator makes use of this phenomenon as follows: The day before the Test, she selects a set of words (and nonwords), runs the lexical decision task on the interviewees and records average recognition times. She then comes to the Test armed with the results of this initial test, asks both candidates to perform the same task she ran the day before, and records the results. Once this has been done, she identifies as the human being the candidate whose results more closely resemble the average results produced by her sample population of interviewees.
The machine would invariably fail this type of test because there is no a priori way of determining associative strengths (i.e., a measure of how easy it is for one concept to activate another) between all possible concepts. Virtually the only way a machine could determine, even on average, all of the associative strengths between human concepts is to have experienced the world as the human candidate and the interviewees had. (French, p. 4)
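To make French’s procedure concrete, consider a minimal sketch of the interrogator’s final comparison step. This is only an illustration: the stimulus words, the reaction times, and the use of Euclidean distance as the similarity measure are my own assumptions, since French specifies only the general protocol. The machine candidate’s flat response times model his point that, lacking a human’s lived experience, it cannot reproduce the priming pattern.

```python
import math

# A sketch of the scoring step in French's Associative Priming test.
# Stimuli, timings (ms), and the distance measure are illustrative assumptions.

# Average recognition times gathered from the interviewee sample the day
# before the Test, keyed by word (or nonword) stimulus.
population_avg = {"bread": 450, "butter": 430, "dog": 460, "flurp": 610}

# Each candidate's recognition times on the same lexical decision task.
candidate_a = {"bread": 455, "butter": 428, "dog": 470, "flurp": 600}  # human-like
candidate_b = {"bread": 510, "butter": 515, "dog": 505, "flurp": 512}  # flat, machine-like

def distance(times: dict, baseline: dict) -> float:
    """Euclidean distance between a candidate's times and the baseline."""
    return math.sqrt(sum((times[w] - baseline[w]) ** 2 for w in baseline))

# Identify as the human the candidate whose results more closely resemble
# the population averages.
scores = {"A": distance(candidate_a, population_avg),
          "B": distance(candidate_b, population_avg)}
print(f"{scores} -> candidate {min(scores, key=scores.get)} judged human")
```

The table of population averages is precisely what French argues cannot be constructed a priori: those numbers are a residue of lived human experience, so the machine cannot fill them in by engineering alone.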
Halo, as a virtual experience, does not (1) need to be accurately constituted like its natural counterparts, as the Chinese Room might suggest; (2) need to perform beyond the natural instance that it recreates, as the Seagull Test might suggest; or (3) need to recreate unspecified experiences with high precision, as Associative Priming might suggest. The success of the series is proof enough that these three challenges do not stifle the overall Halo experience. The points made by these three examples may define insurmountable technological roadblocks, but this by no means suggests that videogames cannot or should not perform beyond what they currently do. The implications of these three challenges point toward richer, higher-agency interactive experiences, but they are not the sole gatekeepers of greater believability. As the Wright Brothers demonstrated, we are not limited by natural conventions. Believability, as discussed in the Russell and Norvig AI textbook, is still underexplored as a science. Today, game researchers aim to build the bridge that carries the Artificial Intelligence of science toward the Artificial Intelligence of science fiction, for its expressive potential as an art form:
Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting like a human comes up primarily when AI programs have to interact with people, as when an expert system explains how it came to its diagnosis, or a natural language processing system has a dialogue with a user. These programs must behave according to certain normal conventions of human interaction in order to make themselves understood. The underlying representation and reasoning in such a system may or may not be based on a human model (Russell and Norvig, p. 12).
Researchers of game technologies have the privilege of developing and understanding the science behind the sound and scene of modern interactive experiences. We can see that there have been great advances in graphics, animation, audio, and game mechanics from Pong to Halo: Reach. The technology that drives the dramatically compelling parts of the experience, however, has made comparably less progress, and as digital storytellers we lag behind. The Chinese Room makes a good point with regard to the technology under the hood of virtual experiences: old technology can be made to do new things, despite not being inherently novel in itself.
Throughout the decades of gaming, conventions have formed around the limits and conveniences of development. Game players, perhaps unknowingly, hold many of these conventions to be universal, such as the blind introduction to a gameworld, death and respawning, and lack of agency. Those are three very typical occurrences, but there are many other examples more specific to particular games. Once the narrative (or presentation) of a game can be separated from the mechanics and technology that drive it, we can easily identify these limitations.
Advances in Artificial Intelligence not only provide more tools to build with, but also more authorial leverage and potential to build immersive experiences. Comparatively, while AI research advances at a much slower rate, developers, designers, and writers have made their own advances in the narrative, discourse, and presentation of videogames.
Innovations in Storytelling
So, what breaks your sense of presence in a story? The culture of videogame playing has developed a tolerance for the common practices and limitations of designing and producing games. We’ve stopped asking “Why?” and have come to expect the typical input arrangements, the impermanence of death, and restrictions on our own free will. I’ve found, in my personal “research” of popular games, that, despite the predictability, certain innovations in narrative stand out as notably novel.
We can break a game down into three layers: paidia, ludus, and narrative. One area that is quite nontrivial is the connection between paidia and narrative. Traditionally in game studies, paidia and ludus are understood as types of games or styles of play: paidia is simulation-like, driven by open-world exploration, while ludus is goal- and achievement-driven, defined by winning conditions. Games, however, have all aspects of play to some degree.
Gonzalo Frasca describes ideological rule sets in games as including paidia, ludus, and representation (or narrative).100 For instance, in Halo, the paidia is the open-world simulation: the movements and actions made available and the experiences that come of their consequences. The paidia, or open world, of Halo is apparent through the rules that drive its simulation of gunfights. The ludus, or winning conditions, of Halo varies with context. In the campaigns, the winning condition is to drive the story forward to completion. As a multiplayer game, the winning condition is typically to have the most kills. Finally, the representation, or narrative, of Halo lies within the context of intergalactic religion and politics. Often, your paidia is constrained so that you don’t ruin the narrative layer of the game. For example, it is common for your agency to be restricted in order to maintain the story elements—if everyone died at the beginning, there’d be no story left to tell.
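To make the layer separation concrete, here is a toy sketch of Frasca’s three-layer breakdown as a data structure, with Halo as the instance. The field names and values are my own illustrative reading of the paragraph above, not anything Frasca or Bungie specifies.

```python
from dataclasses import dataclass

# A toy model of Frasca's three layers; the field names and the Halo
# values are illustrative assumptions, not an official schema.

@dataclass
class GameRuleSet:
    paidia: list[str]       # open-ended simulation rules (what you *can* do)
    ludus: dict[str, str]   # winning conditions, which vary by context
    narrative: str          # the representational frame laid over both

halo = GameRuleSet(
    paidia=["move", "drive vehicles", "fire weapons", "explore"],
    ludus={"campaign": "drive the story forward to completion",
           "multiplayer": "finish with the most kills"},
    narrative="intergalactic religion and politics",
)

# The paidia-narrative compromise: free play is constrained wherever it
# would ruin the narrative layer (a hypothetical constraint for illustration).
def allowed(action: str, game: GameRuleSet, story_breaking: set[str]) -> bool:
    return action in game.paidia and action not in story_breaking
```

Separating the layers this way is what lets us see, in what follows, how a constraint at the paidia layer can be rewritten as a feature at the narrative layer.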
In story-driven games, the paidia and narrative layers find themselves compromising. Ultimately, the paidia-narrative relationship determines the user’s agency in a game and the overall flow and presence in a story. The question I want to pose here is: What breaks your sense of presence in a story? Three typical examples are:
1. Entering a story with no prior domain knowledge
2. The occurrence of dying and respawning
3. Having a terrible sense of agency101
In my experience, I’ve felt that creative innovations in game writing and game design (or conforming the narrative layer to overcome the limitations of paidia) were apparent in the following approaches:
Introducing a New World
In Halo, the opening cut scene briefs the characters in the story, along with the watching player, on the current situation. The player is then introduced to Master Chief, or “the new guy in town.” Not only is the player unaware of the state of the gameworld, but so is the protagonist. Because Master Chief is a soldier being suited up, the player can also learn the button controls as a feature of the story. In later games, there is an apparent expectation that the player understands some of what has transpired, and that she has some knowledge acquired from previous games.
Almost every game has some way of introducing a new user to the experience. Usually, this includes teaching the user how to play in addition to priming the user with the introductory narrative context. Early forms of this include opening cut scenes and easy challenges in the early gameplay. StarCraft campaigns, for instance, introduce you to the complex build system by composing a story around a mission-based tutorial. With very little contextual priming, the 1995 game Chrono Trigger starts with the player character, a red-spiky-haired young man, being woken up by his mother. My initial thoughts as I start this game are: Where am I? Who am I? What is going on? What am I supposed to do next? It follows that the protagonist, Crono, doesn’t have much of a personality represented throughout the game. In contrast, player characters such as Cloud from Final Fantasy VII and Phoenix from Phoenix Wright: Justice for All, with more complexly defined personalities and lives, are introduced to the player with temporary memory loss. Conveniently, the world must be explained to the player character, which coincidentally presents the information that the user needs to know. As you are discovering the world around you, you are also discovering yourself as a character recovering from amnesia.
Death in Games
Death is a common occurrence in many games, and there are pretty typical approaches to handling life and death in videogames. It is expected that if you do poorly enough, you lose your life and will be set back in some way. Super Mario games employ extra lives. Final Fantasy uses save spots. Halo uses checkpoints.
In the midst of a big dungeon, there’s always that awkward explanation of “here, and only at shiny spots like these, can you save your progress.” When you die, the game ends, but when you start again, you resume from the previously saved state. We just accept this as how things are, but Planescape: Torment takes it a step further. In Planescape, the main character is “cursed” with immortality, and when he “dies,” he wakes up in a mortuary . . . the closest mortuary to the location of death (and no, time has not stopped nor rewound). In fact, he’s lived and died so many times that his lives, recorded in a tomb, have taken many possible paths (in addition to the path you are currently on). Your living, dying, and resurrection, within play, are just an intentional feature of the story experience.
Mind Control
It goes without saying that stories in games are fairly linear. When it comes to choices, there really aren’t many that make a difference. Eventually, you will go from point A to point B to point C with nominal embellishments along the way. If I don’t accept the current quest, then the story stops until I decide otherwise. In FPS campaigns, I accomplish the mission objective and await my next orders. BioShock, like Halo, is an FPS that progresses quite linearly. You take in the presented circumstances, the interesting setting, music, and dialogue, and you go along with it. For the sake of progressing through the story, you do what you are instructed (I mean, what else would you do?). What’s different in BioShock is that you’re not meant to have a choice, because you, the player character, were genetically engineered to be mind-controlled by the trigger phrase “Would you kindly.” For the first half of the game, you simply go along with your lack of autonomy; for the second half, it is cleverly worked into the story.
And voilà, here are three instances where we have gone from typical constraint to novel feature. Until we begin to formalize and create new ways of designing games, our paidia remains a bit limited. Fortunately, in the meantime, we still have the expansiveness of our imaginations at the narrative layer’s disposal.
Would Cortana Pass the Turing Test?
Currently, technology enables us to create games in certain ways. The look, feel, and sound of games have changed drastically in the past two decades when compared to other aspects, such as character believability and narrative intelligence. Game players have accepted many of these limitations as game conventions. Designers continuously build different sorts of experiences with the tools that technology makes available. Researchers, middleware developers, and (the more adventurous) game developers aim to create new experiences by creating new toolsets to build with. Increasingly, there are university research labs that specialize in this area of study and contribution: University of Alberta, IT University of Copenhagen, University of California at Santa Cruz, Georgia Institute of Technology, North Carolina State University, and University of Southern California, just to name a few.
However far the area of Artificial Intelligence advances, new criticism comes with it—just as it did with the Turing Test, with counterpoints about Chinese rooms and seagulls. So would Cortana pass the Turing Test? Well, it’s relative to the context in which she exists, because without the new discoveries in AI that allow for Cortana’s existence, there is no new criticism to evaluate her against. For example, Cortana exists in a space age where such AI has been discovered; it’s not as though she just appeared at a research lab in the year 2011 at the University of California at Santa Cruz. Likely, Cortana would pass the Turing Test if she were accessible to people in our present day. In her time, however, there would perhaps be new discoveries of limitations or irreconcilable discrepancies between man and machine. Without knowledge of these limitations, we take one step closer to the dangerous scenarios described in sundry post-apocalyptic movies. The fear that machines will outdo us and take over, however, may be a bit far-fetched, but there is evidence that our relationships with technology have gone sour in the past.
On one hand, the humanities may take issue with falling short of what constitutes believability, and science may find the pursuit of believability a fool’s errand; however, understanding the science behind the stimuli gives developers the inspiration to aim for the sky, and gives Halo players, among others, the opportunity to know that what is possible will soon be made practical.
But why would this matter to the everyday gamer? Well, I suppose it’s a matter of whether you’d prefer to be a spectator or an influential participant in your own life, whether you’d rather be dictated to by technology or be the one to dictate technology. I wrote this chapter to let you know that this choice is available and entirely up to you.
UNSC Debriefing
Don’t Look Now, the Boogeyman’s Behind You—Or Is It the Flood?
ROGER NGIM
The movie Halloween changed the way I look at the world. No, it wasn’t Citizen Kane, 2001: A Space Odyssey or The Four Hundred Blows. It was a screaming Jamie Lee Curtis running around suburban Illinois while an escaped lunatic butchered her friends with a kitchen knife. Go ahead and laugh, but John Carpenter’s film did something interesting and irreversible to the way I experienced filmic space.
Fifteen years later, the pioneering computer game Doom had a similar effect on me, transforming my understanding of game space, which until then had been a flat, vertical plane. Like games in the Halo series, Doom is a First-Person Shooter (FPS), meaning a player can move freely through three-dimensional space and experience most of the game staring down the barrel of a gun. FPS games are such a staple of the videogame industry that we rarely give a thought to how radically the earliest entries changed the concept of gamespace. Technologically speaking, film has made similar leaps, again largely taken for granted—the invention of devices that allowed smooth tracking shots, for example.
When I was a teenager, I was obsessed with horror movies. I read a magazine for horror fans called Fangoria that was so explicit I hid it under my bed right next to—well, whatever teenage boys keep under their beds. I would study the grim photos of gaping wounds and decapitations with the intensity of a forensic scientist, searching for I-don’t-know-what—perhaps a reassuring flaw in a bloody special effect or clues as to why slasher movies held me so rapt. No doubt the scares satisfied an adrenaline craving brought on by raging hormones coursing through my adolescing body, but certain horror movies of the 1970s and 1980s, as many videogames would do later, functioned in a particular way: they frequently employed POV shots that forced the viewer to inhabit the body of the killer, an uncomfortable but sometimes exhilarating place to be. In doing so, they brought us into the movie and surrounded us with a virtual world.
Halloween wasn’t the first film to do this, but it was the first I remember seeing. The movie opens with a brutal stabbing committed by an unseen person. The perpetrator is unseen because the entire seamless sequence is shown through the eyes of a killer, mostly through the eye holes of a mask, which reduces our vision to two spots near the center of the screen (human sight really doesn’t work this way, but we get the idea). We see a hand, presumably ours, grab a knife from a drawer, and then we head up the stairs to commit the act. Only after we flee downstairs and out the front door does the point of view change. As the camera pulls back to a wide shot of the front of the house, we finally see who we are (or were, as it were): a little boy in a clown outfit holding the bloodied weapon.
At several other points in the film, the POV switches to the killer’s and we hear the sound of (our) breathing within a mask. Our potential victims glance nervously over their shoulders or obliviously go about their business as we gaze at them from inside a car or around a hedge. As shocking as the opening of Halloween is, these quieter moments had a more profound effect on me. In the theater, those shots placed me within the film as a character, creating a kind of subliminal effect of cinematic space around me—there was a left, a right, a forward, a back, an up and a down. I was the unseen menace lurking in the yard. More disturbing, once outside of the theater I realized that my experience of reality essentially was the same: I could gaze at a person who could be my victim, and I could move through space seeing the world as if operating my own Steadicam.
Fortunately, I did not grow up to become a serial killer. Any such tendencies I worked out by playing videogames.