by Tom Bissell
In our first multiplayer game, which was called “Guardian,” one had to kill the opposing team’s leader and all those who protect him. By the end, a teammate and I happened to be the only survivors (I achieved this status largely by hiding), and we encountered Bleszinski crouching behind a stack of sandbags. We decided to charge. Bleszinski popped out from cover and, with a shotgun, expertly exploded the head of my teammate before beating me to death as I rounded the edge of his hiding place. Bleszinski ended the game with twenty-one kills; I had three.
The next contest, then known as “Meatflag” but since renamed, amounted to a game of capture the flag—though the flag was, bracingly, a human being. In this match, I was simultaneously chainsawed to death by three people, a spectacle that everyone in the room claimed never to have seen before, in all their hours of play. Our final game was called “Wingman,” which is played in pairs. Bleszinski and I buddied up, and I shouted across the room to him for some general guidance. “Basically,” he said, “kill anyone who doesn’t look like you. Our foreign policy.”
I fared miserably again, and pride compelled me to point out that I had finished Gears 1 on its most challenging difficulty level. No one was listening, and Bleszinski stood up. “Now’s the fun part,” he said. “Figure out what’s a bug and what’s not a bug.” He conferred in a huddle with the other designers about what to enter into the defect-tracking database. “You tell people what you do for a living,” Bleszinski said later, “and they’re like, ‘Oh, you play video games for a living.’ No, I play a game that’s not as fun as it should be, that’s broken, until it’s no longer broken. Then I give it to other people to have fun with.”
FIVE
To learn what the video-game industry at large thought of itself and where it believed it was going, I went to Las Vegas, a city to which I had moved two years earlier for a ten-month writing fellowship. I had not expected to enjoy my time in Vegas but, to my surprise, I did. I liked the corporate diligence with which upper-tier prostitutes worked the casino bars and the recklessness with which the Bellagio’s fountains blasted the city’s most precious resource into the air a dozen times a day, often to the chorus of “Proud to Be an American.” Some days I sat on my veranda and watched the jets float in steady and low over the city’s east side, bringing in the ice-encased sushi and the Muscovite millionaires and the husky midwesterners and the collapsed-star celebrities booked for a week at the Mirage. I even liked the sense I had while living in Las Vegas that what separated me from a variety of apocalyptic ruins was nothing more than a few unwise decisions.
Las Vegas itself is as ultimately doomed as a colony of sea monkeys. One vexation is water, of which it is rapidly running out. Another is money, of which it needs around-the-clock transfusions. The city’s murder-suicide pact with its environment and itself is in-built, congenital. Constructed too shoddily, governed too erratically, enjoyed and abused by too many, Las Vegas was the world’s whore, and whores do not change. Whores collapse.
Collapsing was what Las Vegas in the winter of 2009 seemed to be doing. The first signs were small. From my rental car I noticed that a favorite restaurant had a sign that read RECESSION LUNCH SPECIAL. Laundromats, meanwhile, promised FREE SOAP. More ominously, one of Vegas’s biggest grocery store chains had gone out of business, resulting in several massive, boarded-up complexes in the middle of stadium-sized parking lots, as indelible as the funerary temples of a fallen civilization. Entire office parks had been abandoned down to their electrical outlets. Hand-lettered SAVE YOUR HOUSE signs marked every other intersection, while other signs, just below them, offered FORECLOSURE TOURS. At one stoplight a GARAGE SALE BEHIND YOU notice turned me around. I found a nervous middle-aged white woman selling her wedding dress ($100) and a small pile of individual bookcase shelves ($1). She smiled hopelessly as I considered her wall brackets ($.15) and cracked flowerpots ($.10), all set out on an old card table ($5).
This was not the Vegas I remembered, but then most of my time there was spent playing video games. A game I played only because I lived in Vegas was Ubisoft’s shooter Rainbow Six Vegas 2, one of many iterations of a series licensed out in the name of the old scribbling warhorse Tom Clancy. Rainbow Six Vegas 2 is mostly forgettable, though it is fun to fight your way through the Las Vegas Convention Center and take cover behind a bank of Las Vegas Hilton slot machines. It was also fascinating to see the latest drops of conceit wrung from Rainbow Six’s stirringly improbable vision of Mexican terrorists operating with citywide impunity upon the American mainland. The game’s story is set in 2010. While no one will be getting flash-banged in the lobby of Mandalay Bay anytime soon, driving around 2009 Las Vegas made the game’s casino gunfights and the taking of UNLV seem slightly less unimaginable.
Out at Vegas’s distant Red Rock hotel and casino, the Academy of Interactive Arts & Sciences was throwing its annual summit, known as DICE (Design Innovate Communicate Entertain), which gathers together—for the purpose of panels, networking, an awards show, and general self-celebration—the most powerful people in the video-game industry. With the Dow torpedoed, layoffs occurring in numbers that recall mass-starvation casualties, and newspapers and magazines closing by the hour (including the game-industry stalwart Electronic Gaming Monthly), DICE held out the reassurance of mingling with the dukes and (rather more infrequently) duchesses of a relatively stable kingdom—though it, too, had been bloodied. Electronic Arts, the biggest video-game publisher in the world, lost something like three-quarters of a billion dollars in 2008. Midway, creators of the sanguinary classic fighting game Mortal Kombat and one of the few surviving game developers that began in the antiquity of the Arcade Age, had been recently sold for quite a bit less than a three-bedroom home on Lake Superior. One of the still-unreleased games Midway (with Surreal Software) had spent tens of millions of dollars in recent years developing is called This Is Vegas, an open-world game in the Grand Theft Auto mode that, according to some promotional material, pits the player against “a powerful businessman” who wants to turn Vegas “into a family-friendly tourist trap.” The player, in turn, must fight, race, gamble, and party his or her “way to the top.” In today’s Las Vegas, the only thing one could hope to party his way to the top of is the unemployment line, and the game’s specter of a “family-friendly” city seemed suddenly, even cruelly, obtuse.
Upon check-in every DICE attendee received a cache of swag that included a resplendent laptop carrying case, an IGN.com water bottle (instructively labeled HANG OVER RELIEF), a handsome reading light–cum–bookmark, the latest issue of the industry trade magazine Develop (President Obama somehow made its cover, too), and a paperback copy of a self-help business book titled Super Crunchers. Shortly after receiving my gift bag, I ran into a young DICE staffer named Al, who responded to my joke about the Obama cover (“Yes Wii Can”) by reminding me of the Wii that currently occupied the White House rec room. “By 2020,” Al told me, “there is a very good chance that the president will be someone who played Super Mario Bros. on the NES.” I had to admit that this was pretty generationally stirring. The question, I said, was whether the 2020 president-elect would still be playing games. Maybe he would. The “spectacle” of games, Al told me, was on its way out. Increasingly important, he said, was “message.”
Many have wondered why a turn toward maturity has taken the video game so long. But has it? Visual mediums almost always begin in exuberant, often violent spectacle. A glance at some of the first, most popular film titles suggests how willing film’s original audiences were to delight in the containment of anarchy: The Great Train Robbery, The Escaped Lunatic, Automobile Thieves. Needless to say, a film made in 1905 was nothing like a film made twenty years later. Vulturously still cameras had given way to editing, and actors, who at the dawn of film were not considered proper actors at all, had developed an entirely new, medium-appropriate method of feigned existence. Above all, films made in the 1920s were responding to other films—their blanknesses and stillnesses and hesitations.
While films became more formally interesting, video games became more viscerally interesting. They gave you what they gave you before, only more of it, bigger and better and more prettily rendered. The generation of game designers currently at work is the first to have a comprehensive growth chart of the already accomplished. No longer content with putting better muscles on digital skeletons, game designers have a new imperative—to make gamers feel something beyond excitement.
One designer told me that the idea of designing a game with any lasting emotional power was unimaginable to him only a decade ago: “We didn’t have the ability to render characters, we didn’t know how to direct the voice acting—all these things that Hollywood does on a regular basis—because we were too busy figuring out how to make a rocket launcher.” After decades of shooting sprees, the video game has shaved, combed its hair, and made itself as culturally presentable as possible. The sorts of fundamental questions posed by Aristotle (what is dramatic motivation? what is character? what does story mean?) may have come to the video game as a kind of reverse novelty, but at least they had finally come.
At DICE, one did not look at a room inhabited by video-game luminaries and think, Artists. One did not even necessarily think, Creative types. They looked nothing like gathered musicians or writers or filmmakers, who, having freshly carved from their tender hides another album or book or movie, move woundedly about the room. But what does a “game developer” even look like? I had no idea. The “game moguls” I believed I could recognize, but only because moguls tend to resemble other moguls: human stallions of groomed, striding calm. A number of DICE’s hungrier attendees wore plush velvet dinner jackets over Warcraft T-shirts, looking like youthful businessmen employed by some disreputably edgy company. There was a lot of vaguely embarrassing sartorial showboating going on, but it was hard to begrudge anyone that. Most of these people sit in cubicle hives for months, if not years, staring at their computer screens, their medium’s governing language—with its “engines” and “builds” and “patches”—more akin to the terminology of auto manufacture than to that of a product with any flashy cultural cachet. (In actual fact, the auto and game industries have quite a bit in common. Both were the unintended result of technological breakthroughs, both made a product with unforeseen military applications, and both have been viewed as a public safety hazard.)
There was another kind of DICE attendee, however, and he was older, grayer, and ponytailed—a living reminder of the video game’s homely origins, a man made phantom by decades of cultural indifference. An industry launched by burrito-fueled grad school dropouts with wallets of maxed-out credit cards now had groupies and hemispheric influence and commanded at least fiduciary respect. Was this man relieved his medium’s day had come or sad that it had come now, so distant from the blossom of his youth? It was surely a bitter pill: The thing to which he had dedicated his life was, at long last, cool, though he himself was not, and never would be.
Like any complicated thing, however, video games are “cool” only in sum. Again and again at DICE, I struck up a conversation with someone, learned what game they had worked on, told them I loved that game, asked what specifically they had done, and was told something along the lines of, “I did the smoke for Call of Duty: World at War.” Statements such as this tended to freeze my conversational motor about as definitively as, “I was a concentration camp guard.”
Make no mistake: Individuals do not make games; guilds make games. Technology literally means “knowledge of a skill,” and a forbidding number of skills are required in modern game design. An average game today is likely to have as much writing as it does sculpture, as much probability analysis as it does resource management, as much architecture as it does music, as much physics as it does cinematography. The more technical aspects of game design are frequently done by smaller, specialist companies: I shook hands with the CEO of the company that did the lighting in Mass Effect and chatted with another man responsible for the facial animation in Grand Theft Auto IV.
“Games have gotten a lot more glamorous in the last twenty years,” one elder statesman told me ruefully. Older industry expos, he said, usually involved four hundred men, all of whom took turns unsuccessfully propositioning the one woman. At DICE there were quite a few women, all of whom, mirabile dictu, appeared fully engaged with rampant game talk. At the bar I heard the following: Man: “It’s not your typical World War II game. It’s not storming the beaches.” Woman: “Is it a stealth game, then?” Man: “More of a run-and-gun game.” Woman: “There’s stealth elements?” The industry’s woes often came up. When one man mentioned to another a mutual friend who had recently lost his job, his compeer looked down into his Pinot Noir. “Lot of movement this year,” he said grimly. Fallen comrades, imploded studios, and gobbled developers were invoked with a kind of there-but-for-the-power-up-of-God-go-we sadness.
Many had harsh words for the games press. “They don’t review for anyone but themselves,” one man told me. “Game reviewers have a huge responsibility, and they abuse it.” This man designed what are called “casual games,” which are typically released for handheld systems such as the Nintendo DS or PSP. In many cases developer royalties are attached to their reviewer-dependent Metacritic scores, and because game journalists can generally be relied upon to overpraise the industry’s attention-hoarding AAA titles (shooters, RPGs, fighting games, and everything else aimed at the eighteen-to-thirty-four male demographic—a lot of which games I myself admire), the anger from developers who worked on smaller games was understandable. Another man introduced himself to tell me that, in four months, his company would release its first game on Xbox Live Arcade, the online service that allows Xbox 360 owners access to a growing library of digitally downloaded titles. This, he argued, is the best and most sustainable model for the industry: small games, developed by a small group of people, that have a lot of replay value, and, above all, are fun. According to him, pouring tens of millions into developing AAA retail titles is part of the reason why the EAs of the world are bleeding profits. The concentration on hideously expensive titles, he said, was “wrong for the industry.” (For one brief moment I thought I had wandered into a book publishing party.)
Eventually I found myself beside Nick Ahrens, a choirboy-faced editor for Game Informer, which is one of the sharpest and most cogent magazines covering the industry. “These guys,” Ahrens said, motioning around the room, “are using their childhoods to create a business.” The strip-mining of childhood had taken video games surprisingly far, but childhood, like every natural resource, is exhaustible.
DICE’s first panel addressed the tricksy matter of “Believable Characters in Games.” As someone whose palm frequently seeks his forehead whenever video-game characters have conversations longer than eight seconds, I eagerly took my seat in the Red Rock’s Pavilion Ballroom long before the room had reached even 10 percent occupancy. The night before there had been a poker tournament, after which a good number of DICE attendees had carousingly traversed Vegas’s great indoors. Two of my three morning conversations had been like standing at the mouth of a cave filled with a three-hundred-year-old stash of whiskey, boar meat, and cigarettes.
“Believable characters” was an admirable goal for this industry to discuss publicly. It was also problematical. For one thing, the topic presupposed that “believability” was quantifiable. I wondered what, in the mind of the average game designer, believability actually amounted to. Oskar Schindler? Chewbacca? Bugs Bunny? Because video-game characters are still largely incapable of actorly nuance, they frequently resemble cartoon characters. Both are designed, animated, and artisanal—the exact sum of their many parts. But games, while often cartoonish, are not cartoons. In a cartoon, realism is not the problem because it is not the goal. In a game, frequently, the opposite is true. In a cartoon, a character is brought to life independent of the viewer. The viewer may judge it, but he or she cannot affect it. In a game, a character is more golemlike, brought to life first with the incantation of code and then by the gamer him- or herself. Unlike a cartoon character, a video-game character does not inhabit closed space; a video-game character inhabits open situations. For the situations to remain compelling, some strain of realism—however stylized, however qualified—must be in evidence. The modern video game has generally elected to submit such evidence in the form of graphical photorealism, which is a method rather than a guarantee. By mistaking realism for believability, video games have given us an interesting paradox: the so-called Uncanny Valley Problem, wherein the more lifelike nonliving things appear to be, the more cognitively unsettling they become.
The panel opened with a short presentation by Greg Short, the co-founder of Electronic Entertainment Design and Research. What EEDAR does is track industry trends, and according to Short he and his team have spent the last three years researching video games. (At this, a man sitting next to me turned to his colleague and muttered, “This can’t be a good thing.”) Short’s researchers identified fifteen thousand attributes for around eight thousand different video-game titles, a task that made the lot of Tantalus sound comparatively paradisaical. Short’s first PowerPoint slide listed the lead personas, as delineated by species. “The majority of video games,” Short said soberly, “deal with human lead characters.” (Other popular leads included “robot,” “mythical creature,” and “animal.”) In addition, the vast majority of leading characters are between the ages of eighteen and thirty-four. Not a single game EEDAR researched provided an elderly lead character, with the exception of those games that allowed variable age as part of in-game character customization, which in any event accounted for 12 percent of researched games. Short went on to explain the meaning of all this, but his point was made: (a) People like playing as people, and (b) They like playing as people who almost precisely resemble themselves. I was reminded of Anthony Burgess’s joke about his ideal reader as “a lapsed Catholic and failed musician, short-sighted, color-blind, auditorily biased, who has read the books that I have read.” Burgess was kidding. Mr. Short was not, and his presentation left something ozonically scorched in the air. I thought of all the games I had played in which I had run some twenty-something masculine nonentity through his paces. Apparently I had even more such experiences to look forward to, all thanks to EEDAR’s findings. Never in my life had I felt more depressed about the democracy of garbage that games, at their worst, were.