Bitwise
When the dust clears you are somewhere else.
Ruins
You are outside, alone, in a pile of rubble and destruction that extends as far as you can see. But that is not terribly far, as smoke and clouds of ash are everywhere, and your line of sight is interrupted every few feet by a fire, a pile of detritus, or simply a huge, black, burnt-out slag.
> call my wife
(taking your phone out of your pocket)
You have no reception.
> look at phone
You have zero bars. Either you are out of range, or the destruction is blocking the signal.
> look at date on phone
It is September 11, 2001.
Seeing the date gives you a stronger sense of identity. You are only a few years out of school, working at Microsoft. Free of the nuclear paranoia that fueled Trinity, you lived through the odd period of stasis in the 1990s after the collapse of communism, when neoliberalism was ascendant and the problems of the world were very far from the problems of the United States. Your ignorance was shattered on this particular day, and you would spend years coming to terms with the world as it really was rather than how it had seemed to be from your partial and privileged vantage.
And you recognize this to be a dream that haunted you occasionally during those years, in which you found yourself right inside the site of the demolition of the myth of being posthistorical and enlightened.
> wake up
Done.
**** You have returned ****
In 18 turns, you have achieved the rank of Mnemonist. Would you like to start over, restore a saved game position, undo your last turn, or end this session of the game?
(Type RESTART, RESTORE, UNDO, or QUIT)
> quit
Moriarty’s passage about the Nagasaki survivor chilled me like little else in computer games. In the depiction of a single, nameless character, whom the player sees first as a scarred, elderly woman, and later as an innocent child prior to the Nagasaki bombing, the collective weight of fifty years of nuclear anxiety and trauma fell on me. It hit my brother even worse. Only eight years old at the time, he took a look at the game and was seized with the fear of nuclear war. I understood his fear. There were no bad guys in Trinity. It was a portrait of a world on the precipice, having discovered the secret of its own annihilation. And my character in the game walked through it and saw the nightmare. The old woman was the crux: the terrifying power wasn’t a hypothetical, future doom. It was already here, and it already had taken victims. That passage awakened me, much as September 11, 2001, did over ten years later.
Trinity’s text was written by a human, not a computer. How did its placement in a text adventure affect its impact? Text adventures graft human language on top of algorithms. Instead of regimenting human life into numbers and equations, they refuse automation. In this regard, text adventures are the opposite of role-playing games, their complement and counterpart in the history of computer games. While RPGs put your character’s statistics at the surface, text adventures only sparsely quantify their worlds. There are no explicit statistics or dice rolls in adventure games. Ironically, while the cognitive biases of RPG designers get baked into RPG algorithms, text adventures isolate these biases at the level of human language, leaving the underlying plumbing mostly free of the quantifying models of life. Exodus and Wizardry, both of which effectively port the mechanics of Dungeons & Dragons wholesale to computers, are composed of numbers and maps and diagrams. So were computer wargames. Text adventures foreground the dialogue between creator and player. The human and computer layers are distinct.
These points apply to adventure games more generally, including those with graphics, but I wish to focus on the tradition of text adventures, as pioneered by Adventure and Zork, for two reasons. First, they attempted to deal with the algorithmic problem of understanding human language in a primitive yet ingenious way: they let players type what they wanted to do in English, rather than having them select words from a menu or push a button. Second, they showed me how computers could provide immersive and moving narrative experiences, integrating the most ineffable parts of human creativity to do so.
At the time, it was marvelous to behold that a computer could understand the text I typed, including complicated constructions with adjectives and prepositions. In this Infocom was peerless: their parser was far more sophisticated than those of most games, which had trouble getting much beyond the basic VERB OBJECT (GET GOLD, KILL TROLL, etc.) command form. But Infocom built a general-purpose parser that they employed and refined across their games. Its ability to handle and interpret a variety of expressions remains an outstanding technical achievement.
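The two-word baseline that most games never got past can be sketched in a few lines of Python. This is a toy illustration, not Infocom's parser or any real game's code; the vocabulary and the tuple representation are my own assumptions.

```python
# A toy VERB OBJECT parser of the sort most 1980s adventures used.
# Infocom's real parser went far beyond this, handling adjectives,
# prepositions, and multiple objects ("hang gown on hook").

VERBS = {"get", "kill", "look"}   # illustrative vocabulary
NOUNS = {"gold", "troll", "lamp"}

def parse(command: str):
    """Return (verb, noun) for a recognized command, else None."""
    words = command.lower().split()
    if len(words) == 1 and words[0] in VERBS:
        return (words[0], None)          # bare verb, e.g. LOOK
    if len(words) == 2 and words[0] in VERBS and words[1] in NOUNS:
        return (words[0], words[1])      # VERB OBJECT, e.g. GET GOLD
    return None                          # "I don't follow you."

print(parse("GET GOLD"))           # ('get', 'gold')
print(parse("hang gown on hook"))  # None -- beyond a two-word parser
```

Anything with a preposition falls straight through to the failure case, which is exactly the wall players kept hitting in lesser games.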
The infamous Babelfish puzzle in Infocom’s game The Hitchhiker’s Guide to the Galaxy asks the player to obtain a universal translator fish from a dispensing machine, but all sorts of devices and furniture get in the way of obtaining one. The puzzle is ridiculously hard. When I went as a kid to get my copy of Dirk Gently’s Holistic Detective Agency signed by Douglas Adams, I overheard Adams telling the person in front of me how to solve that puzzle.*3 It was that difficult, and there was no internet available for hints. (Sales of hint books were a major revenue source in the late 1980s.)
Yet despite the fiendishness of the puzzle, the parser doesn’t impede the player. The problem is figuring out what to do, not figuring out how to express it to the parser. That may not come as much comfort to players bashing their heads against the wall trying to derive the next step in the Rube Goldberg machine to obtain the Babelfish, but it is a testament to the strength of Infocom’s parser (and its playtesting). Interactive fiction guru Graham Nelson uses this particular puzzle to explain the game’s immersive interaction: “Without puzzles, or problems, or mechanisms to allow the player to receive the text a little at a time—whichever phrase you prefer—there is no interaction.” He gives a sample transcript of the beginning of the puzzle:
>examine machine
The dispenser is tall, has a button at around eye-level, and says “Babel Fish” in large letters. Anything dispensed would probably come out the slot at around knee-level. It bears a small label which reads “Another fine product of the Sirius Cybernetics Corporation.”
>press dispenser button
A single babel fish shoots out of the slot. It sails across the room and through a small hole in the wall, just under a metal hook.
>remove dressing gown
Okay, you’re no longer wearing your gown.
>hang gown on hook
The gown is now hanging from the hook, covering a tiny hole.
>push dispenser button
A single babel fish shoots out of the slot. It sails across the room and hits the dressing gown. The fish slides down the sleeve of the gown and falls to the floor, vanishing through the grating of a hitherto unnoticed drain.
>put towel over drain
The towel completely covers the drain.
This is a best-case scenario. The player is more likely to spend time examining random objects, trying to place different things on the hook and over the drain, or engaging in crazier antics like pulling the hook out of the wall or pouring things into the drain, before stumbling on the solution. Whether the puzzle is good is a subjective matter, but the parser does a passable job of indulging the player’s efforts and offering meaningful and relevant responses to the player’s attempts. The game walls off possibilities that might have worked in real life. Yet the text parsing is sufficiently good that some segment of players felt the puzzle was “fair,” the ultimate test of the quality of a game.
By 2000, an Infocom-derived parser (in the TADS language) had become sufficiently sophisticated and robust to allow for a game like Andrew D. Pontious’s Rematch, a single-move game that takes place in a pool hall, just before a car crashes through the window and (usually) kills you. Its Rube Goldberg mechanism fires in response to this single command, which wins the game:
>whisper to Nick to dare Ines to throw cueball at loudmouth
The chaos set off by this command gets you, Nick, and Ines out of the way just before the car crashes through the window. As with the Babelfish puzzle, the mechanism is diabolical to discover, requiring repeated attempts and deaths, but expressing one’s intentions never stands in the way. The parser is that strong, though when I first played it I could count the seconds as the computer ground away trying to process the “whisper” command.
An unfair puzzle requires verbiage so rigidly and arbitrarily constrained that the player is frustrated trying to hit upon it. In these cases, a player has to guess at the hidden logic in the code itself rather than the overt logic presented in the text of the game. In other words, the player has to read the author’s mind. A classic example is from the charming yet flawed 1986 game The Pawn, by Rob Steggles and Anita Sinclair, in which the player must plant cannabis: the correct command is “PLANT THE POT PLANT IN THE PLANT POT WITH THE TROWEL.” But slight variations, from “PLANT PLANT” to “PLANT PLANT IN POT” and beyond, yield completely unhelpful responses like “I don’t follow you.” Only a command that is very close to the initial, awkward phrasing works. The game trips over itself when it has to distinguish between multiple senses of “pot” and “plant.” The computer is trying to speak a foreign language, but reveals its lack of familiarity with how English functions. Unless you know the phrasing that the author wants you to use, your chances of guessing are slim indeed. In letting the limits of its code show, the puzzle fails.
What’s going on here bears no resemblance whatsoever to how humans use language. Infocom’s parser does not “understand” the player’s commands in the way that humans understand each other. But it does a good job of seeming to understand, and that feat of prestidigitation is at the heart of how computers have become more human. We buy into the tricks that programmers play to make computers seem more human, until those tricks slowly become real.
At their best, these tricks in text adventures are in the service of art. I did not feel tricked by Trinity’s parser any more than I felt tricked by Kōbō Abe’s or Virginia Woolf’s symbolism. The parser and its primitive yet skillful understanding of human language help the player identify with the protagonist of the game in a way that is different from the experience of reading fiction. Some authors of text adventure games were attracted to the possibilities of player interaction: in addition to Douglas Adams, Robert Pinsky contributed his whimsically surreal prose to Mindwheel and Thomas M. Disch wrote the self-reflexive noir of Amnesia. The degree to which text adventures’ novel yet limited interactivity was utilized varied. Sometimes it was little more than mere branching narratives of the sort employed by John Fowles’s The French Lieutenant’s Woman or Choose Your Own Adventure books. But despite text adventures’ limited understanding of language, the ability to type in a command rather than select from a list gave a greater impression of involvement and possibility. A world could be parceled out in small chunks driven by the wandering eye of the reader. When, as the player character of Trinity, I encountered the old woman and read “Her face is wrong,” I felt a visceral immediacy. I was not reading about someone encountering a survivor of Nagasaki, but felt that I myself was staring at her with a mixture of shame and curiosity. It was in part from the new modality of interaction that these games were able to achieve their emotional impact. Yet these effects were achieved in the absence of computer understanding. The meaningful content still existed separately from the mechanical code executing underneath it. It was through the medium of computers, not through computer intelligence, that authors like Brian Moriarty succeeded at their chosen art.
At their worst, the poorer Infocom games and the vast majority of non-Infocom text adventures were too circumscribed in their possibilities, too inept in their understanding, or too illogical in their construction. It was sometimes impossible to form a smooth bond between player and game.*4 When Madventure told me that I had come to a fork in the road, I needed to take the fork in order to progress. This was less interactive than irritating. In the 1980s, though, even very flawed games still held a certain magic over players, because the nature of that interaction was so new.
The question for companies like Google and Facebook became, how intelligent could computers seem to be in the absence of truly being intelligent?
*1 The line is from J. M. Barrie’s The Little White Bird. Beyond Barrie and Carroll, the game quoted Emily Dickinson, Herman Melville, and Alexander Pope.
*2 This wonderful line is borrowed verbatim from Trinity. Typically, other games obeyed the command, responding, “If you insist” (as in Starcross) or simply “Done” and ending the game.
*3 Douglas Adams exerted a greater influence on me than I realized at the time. The mix of science-fiction clichés with acidic British irony fit me quite well in my middle-school years, and served as a stepping-stone to Kurt Vonnegut, who in turn was a stepping-stone to more “serious” and “literary” fiction.
*4 Douglas Adams’s second collaboration with Infocom and Michael Bywater, Bureaucracy, made a virtue out of this fault by fighting the player at every turn, replacing the traditional player score with a measure of the protagonist’s blood pressure, which started at 120/80 and shot up every time the player was vexed by the game—say by using a word the game didn’t understand. Like most text adventure diehards, I was more amused than annoyed when my character died of a heart attack because I had made a typo.
PART III
7
BIG DATA
From the Client to the Cloud
Men build their cultures by huddling together, nervously loquacious, at the edge of an abyss.
—KENNETH BURKE
WHEN I WAS a child, code thrilled me with its elegant power to perform near miracles with a few lines of Logo. I was amazed that one could do so much with so little. Google, in its early years, performed the same sort of magic, but with data. Google amassed an unprecedented amount of data about the web, then developed the simplest and most elegant methods for analyzing it.
Everything was bigger at Google than it had been at Microsoft. When I arrived at their Mountain View campus in 2004 as a journeyman software engineer, I was surprised at how differently things worked there, and how differently I worked there. I had coded servers at Microsoft, but Microsoft’s entire business was based around the PC desktop. That focus was one of the primary reasons why Microsoft found it hard to shift into the internet age, where the locus was not a home or workplace PC but a web server or database somewhere else on the internet—what’s today known as the “cloud,” the amorphous mass of data that floats around and above us, only dimly visible. Unlike Microsoft, Google knew servers from the beginning. We worked on Linux, the free operating system based on Unix that had been designed for networked computers in the first place, rather than retrofitted for networking as Windows had been. All engineers shared a single enormous code repository, rather than having a different repository for each team, as Microsoft had had. We had machines at our disposal: thousands upon thousands of them, in massive clusters around the country and eventually around the world. These machines were always running, and they were always running my code. Automated test infrastructures made sure that my code changes didn’t break existing functionality—and automatically sent nagging emails to me, my manager, and lots of other people if they did. It was amazing.
At Microsoft, I tested the Messenger server by running a build of it on a few local machines. There were a few dozen test servers available for shared use, but it was quicker to launch the build on my own work machine. Microsoft’s first real excursion into web services had come with the purchase of Hotmail, which had its own home-brewed mechanisms unsuited to general-purpose use. Microsoft took years to build out a robust server infrastructure. Here, Google was a decade ahead of Microsoft.
It helped that I was fortunate enough to work with some of the best software engineers I’d ever met, and in particular under a technical lead, Arup, who was one of the most careful and comprehensive people I’d met in any field. He had a preternatural capacity for anticipating anything that could possibly go wrong and making sure our team handled it before it did go wrong. It made for fewer emergencies than I’d ever encountered in engineering, and remarkably low overhead, as our team of ten tended just to communicate informally rather than sitting down for long meetings. I learned that a small group of high-quality coders with a top-notch lead could accomplish vastly more in a month than some of my old teams had in a year.
The more significant difference was data, and how much of it there was at Google. While Microsoft had succeeded as a software company, Google’s lifeblood was data. Google needed software to collect, store, and manage this data, but at Google, software served data. The advent and exponential growth of the web, which was reaching hundreds of billions of pages by the mid-2000s, required that there be an organized, comprehensive system to fetch, analyze, and retrieve that data en masse and at top speed. Google was the first company to do this. In the 2000s, Google came to own data in much the way that IBM owned the mainframe, Microsoft owned the PC, and Apple owned the mobile device.*1
At Google, I was able to command a thousand machines at the push of a button and analyze billions of web pages in minutes. At Microsoft, the data carried by the Messenger servers, even at peak volume, was fairly small. A few million users sent a few million messages every minute. This was manageable on a few dozen servers. But a corpus of hundreds of billions of web pages was far beyond not just what any one human could sift through, but even what any one computer could sift through. Analyses were performed through massive parallelization and partitioning of data, in order to produce statistical breakdowns and data reductions that could then be used to return relevant search results. If I wanted, say, to know the most popular words used on web pages, then I’d divide a couple billion web pages among a thousand servers. Each server would analyze its portion of the data, and the results would then be pooled and analyzed on yet another machine. This chained process of quantitative analysis was central to Google’s operations, and it became central to my life for the years I was there. It was also beautiful.
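The partition-analyze-pool pattern described above can be sketched on a single machine in a few lines of Python. This is a minimal illustration of the idea, not Google's infrastructure; the tiny "corpus," the function names, and the three-way split are all my own stand-ins for billions of pages and a thousand servers.

```python
from collections import Counter

def analyze(pages):
    """What each 'server' does: count the words in its partition."""
    counts = Counter()
    for page in pages:
        counts.update(page.lower().split())
    return counts

def pool(partial_counts):
    """What the pooling machine does: merge every partial count."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total

# A stand-in corpus, partitioned across three hypothetical servers.
corpus = ["the web is big", "the web is fast", "data is the lifeblood"]
partitions = [corpus[0::3], corpus[1::3], corpus[2::3]]

totals = pool(analyze(p) for p in partitions)
print(totals.most_common(2))  # [('the', 3), ('is', 3)]
```

Each partition is analyzed independently, so the `analyze` calls could run on a thousand machines at once; only the small per-partition counts, not the pages themselves, travel to the machine that pools them.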