There is also the question of costs—some of the programming is outsourced to countries like India, where the wages are lower and the working conditions less regulated. “Somebody working in India and somebody working in Sweden can have completely different working conditions,” he noted, “even though they are working at the same company on the same game and the same project, maybe even the same feature.”
The grueling hours lead to high turnover at the jobs in the industry, even more so than at the programming schools. It’s a workload, Agwaze and the others said, designed for young men without families or caring responsibilities, who can dedicate their entire lives to the job. And indeed, the demographics of the industry bear this out: recent surveys of the United Kingdom’s games workforce found that the vast majority were young men. Only 14 percent were women, and as for workers of color, like Agwaze, in 2015 they made up a dismal 4 percent. In the United States, meanwhile, a 2019 study found that only 19 percent of the workforce was female, while a slightly better 32 percent identified as something other than white. When the appeal of working on games no longer trumps the desire to have a life outside of work, programmers leave and go into a different industry. Their skills might have been honed to make blockbuster games, but the same code that makes up the backbone of Red Dead Redemption can also be used to make the latest financial technology app, for more money and shorter hours. “It’s just a different planet,” Agwaze said.2
That turnover itself makes the industry less efficient than it could be: rather than trying to retain experienced workers, companies bring in more young workers like Agwaze to make up the difference. Meanwhile, senior positions sometimes go unfilled for months. It becomes a circular problem: hours stretch longer and longer as junior developers scramble to fix bugs; they get tired of the struggle and quit; and then a new person with even less practice is plugged into their spot. And the companies’ idea of how to make the job more sustainable is to put in a Ping-Pong table and give out free food. Agwaze laughed, “Let’s put a bed in there! Sleepover! Put in showers!” Studio Gobo’s website promotes “Gobo Friday Lunch,” with “Freshly cooked (free!) food by our in house chef, the only rule is you’re not allowed to sit next to the people you did last week. It’s an opportunity to relax and hang out as a team and some of our best ideas have emerged over a warm home-cooked meal.”
But, of course, it’s not home-cooked. Instead, it blurs the distinction between home and work. “I have time periods where, like, I sleep for two or three hours,” Agwaze said. “I’m just going home to bed and waking up and going back again. I don’t remember what happened. I just remember going to bed and being in the office again.” Coworkers become close friends, late shifts can take on a party atmosphere, and the feeling that everyone is part of something important often prevails. Studio Gobo’s website again: “Fun is at the heart of what we do. We know that if we want to make fun games, we also have to have fun making games.”
Yet that fun atmosphere itself is designed to entrap workers into staying longer daily, even without direct pressure from the boss. “I had a senior employee tell me, ‘Kevin, I notice that you stay long hours a lot and I think it has a bad impact on the whole team, because if you stay longer, everybody else wonders, “Do I need to stay longer?” It puts pressure on your team. Even if you want to do that, that might negatively affect everybody else.’” At the time, Agwaze said, he shrugged it off. The individual pressures—the need to build one’s CV—militated against collective concern. “I remember being like, ‘Ah, whatever. I am fine. I am doing good.’”
Agwaze’s experience was rare, though, he noted—most employers applied the opposite pressures. Crunch was endemic to the industry: over half of the workers questioned in one survey said they’d worked “at least 50 percent more hours during crunch than the standard work week of 40 hours.” The issue came to the fore in 2004 with a public “open letter” from the spouse of a developer at Electronic Arts (EA), complaining of her partner’s eighty-five-hour crunch weeks. Two class-action lawsuits followed, alleging unpaid overtime. Both were settled out of court, but the practice continued up to 2020. And it’s not clear the practice is even worth it for employers. “Crunch,” Agwaze noted, “produces bad games, a lot of average games, and some good games. Just because you crunch doesn’t mean that the game is going to be any good at all.”3
Beyond their expected loyalty to their own CV, the programmers were encouraged to consider themselves part of the family, and to work hard to pull their weight within it, even if, as Agwaze said with a sardonic laugh, “Maybe I crossed the country to start this job and I was fired in my first week after they told me I had now entered the family.” While this had never happened to him, it wasn’t an uncommon experience in the industry.
Some managers in the industry are starting to realize that they need to figure out better ways to retain experienced developers than trying to make the office feel less office-like. But the culture of the industry remains mired in the idea that putting in long hours is a mark of quality and dedication, rather than burnout and inefficiency. “They can’t even imagine it as a bad thing,” Agwaze said. “This is how it is. How can anybody believe this to be bad or wrong? This is how we need to do it.”
With the arrival of COVID-19 in Britain, Agwaze joined the masses suddenly working from home. For him, that meant an even further blurring of the lines between time on and time off the job. At first, he said, he was told he needed to keep going to the office, but when the government announced its recommendations, he was allowed to stay home. He did some rearranging in his flat: when a roommate moved out, he was able to take over their room for a workspace, and he was able to borrow a computer with a bigger monitor on which to work. “I wake up, go to the other room to the PC. Then, I work for a long while. Then, at some point, I stop working. It might be after eight hours or slightly more or slightly less. I used to pretty rigorously take an hour of lunch break at 1 p.m. sharp with other people from work, but now I’m like, ‘Did I eat anything today? No, I didn’t. I should probably eat. What’s the time? Oh, it’s 2 p.m.’”
And after all the time that he spends dedicating himself to making games, he said, he doesn’t really play them that much anymore. He laughed, “I don’t have time. I sneak one in every now and then.”
Programming, a field currently dominated by young men, was invented by a woman. Ada Lovelace was the daughter of Romantic poet Lord Byron, but her mother steered her into mathematics, “as if that were an antidote to being poetic.” Lovelace was inspired by mechanical weaving looms to design a program for Charles Babbage’s “Analytical Engine,” an early idea of a computer. Her insight was that the computer could be used not just to calculate complex equations but to handle music, graphics, words, anything that could be reduced to a code—perhaps even games. Her paper on the subject, now considered the first computer program, was published in a journal in 1843, years before anything resembling a computer had actually been built.4
These days, the tech industry—as the shorthand would have it, leaving aside the question of just what is considered “technology”—is fawned over as the main driver of innovation in the world’s major capitalist economies. Programmers are lionized in the press, their long hours held up as proof of romantic commitment to the work rather than inefficient work processes, their skills envisioned as something between God-given talent and Weberian hard work and grit. Those skilled workers are seen as geniuses the way artists used to be, gifted with superior abilities in a field inherently creative and specialized. Tech jobs are described as dream jobs, where the most skilled workers are wooed with high salaries, great benefits, stock options, and fun workplaces where you can bring your dog, get a massage, play games, and, of course, enjoy the work itself—and all of this leads to more and more work. The obsession with “innovation” is actually less than a century old, but the concept is often used to obscure the way skills become gendered and racialized, associated with a certain image of a certain kind of worker, and how that perception is reproduced along with our attitudes toward work.5
Programming was not always illustrious work, and computers were not always fancy machines. “Computer” was a job title for humans, often women, hired to crunch numbers on mechanical calculators at high volumes. Women did so in the United States during World War II, when men were being sent to the front lines and the first computing machines were being developed. The Electronic Numerical Integrator and Computer (ENIAC) was designed to replace those human computers, but its ability to perform calculations relied on human hands manually moving cables and flipping switches. At the time, the programming of the computer was considered routine work, and men were in short supply, so the University of Pennsylvania, where the ENIAC was born, recruited women with math experience to work on the machine.
In 1945, the first six women learned to be computer programmers: Jean Jennings, Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty. The women flirted with soldiers, argued about politics, and calculated differential equations to make the complicated machine work, learning its inner workings as well as any of the male engineers who’d designed and built the thing. The ENIAC—a massive, eighty-by-eight-foot mass of vacuum tubes, cables, and thousands of switches—“was a son of a bitch to program,” Jennings later commented.6
The women knew their work was difficult, skilled labor, but the male engineers still considered the programming to be closer to clerical work—women’s work, in other words—than the hardware side. Yet it was the women who stayed up late into the night, “crunching,” to make sure the ENIAC was working for its first demonstration—to which they were not invited. “People never recognized, they never acted as though we knew what we were doing,” Jennings said.7
After the war’s end, the women who had been pressed into wartime service were encouraged to return home, free up jobs for men, and start families. Yet the women who worked on the ENIAC had a special skill set that made them harder to replace. “We were like fighter pilots,” McNulty said. Instead, they stayed on and worked to design computers for nonmilitary uses, working alongside mathematics professor and navy reservist Grace Hopper. “Women are ‘naturals’ at computer programming,” Hopper told a reporter in 1967. Yet even then, as software work gained prestige, the men were taking it over.8
Male programmers deliberately sought to shift the image of the field. Men, after all, wouldn’t want to go into a field seen as women’s work. To add cachet to the work, they created professional associations, heightened educational requirements, and even instituted personality tests that identified programmers as having “disinterest in people” and disliking “activities involving close personal interaction.” People skills, like those taken advantage of in the classroom or the retail store, were for women, and apparently just got in the way of programming, a collective task being re-envisioned for solitary nerds. As Astra Taylor and Joanne McNeil wrote, the notion of the computer hacker “as an antisocial, misunderstood genius—and almost invariably a dude—emerged from these recruitment efforts.” Changing the gender profile of programming, Taylor and McNeil wrote, also had the effect of boosting its class status. Rather than work learned by doing, programming was now the purview of rarefied graduate programs at the few research universities able to afford computers of their own.9
By the time the US Department of Defense bankrolled the project that would eventually become the Internet, computing was so thoroughly masculinized that there were no women involved. Instead, the Advanced Research Projects Agency Network (ARPANET) would be, in the words of Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late. The men who built the network—funded by the DOD’s Advanced Research Projects Agency (ARPA) in order to link computer labs up around the country to share research—were “geniuses” whose commitment to their work involved a lot of one-upmanship about who could work longer hours.10
Throughout the 1960s, ARPA’s Information Processing Techniques Office funded cutting-edge research that the private sector, and even the universities, might otherwise have shied away from. Created in reaction to the USSR’s launch of the Sputnik 1 satellite, ARPA reflected the fear that the United States was falling behind, the same fear that led to an increase in the education budget and expanded public schooling. But the agency also funded plenty of research that didn’t have clear military applications. One of those projects was ARPANET.11
Making computers communicate required all sorts of new technologies. At the time, most computers didn’t speak the same language. In Hafner and Lyon’s words, “Software programs were one-of-a-kind, like original works of art.” The innovations that would make the ARPANET, and then the Internet, possible were the result of a collective process between dozens of programmers and graduate students on multiple continents. Despite the tendency to ascribe progress to the unique genius of each of these men, researchers in different countries came up with similar ideas at nearly the same time.12
These computer whizzes were building on one another’s breakthroughs, and the ARPANET would help them integrate their collective knowledge more deeply. In the obsession with the individual genius, we miss the real story, assuming that works of brilliance are the result of singular minds rather than collaboration—a notion that just happens to militate against the idea of organizing. “If you are not careful, you can con yourself into believing that you did the most important part,” programmer Paul Baran said. “But the reality is that each contribution has to follow onto previous work. Everything is tied to everything else.”13
The fetish for the tech innovator who dropped out of college may have begun, too, with the creation of the ARPANET. Bolt, Beranek and Newman, the firm given the contract to make the network a reality, was known for hiring dropouts from the Massachusetts Institute of Technology (MIT) in its hometown of Cambridge. Dropouts were smart enough to get into MIT, but without the degree, they cost less to hire. In just a few short years, the field had gone from instituting degree requirements as a class and gender barrier to entry to preferring those who cheerily tossed those requirements aside—and not long after that, to the legend of the Stanford or MIT dropout who created a company in his garage.14
There were a lot of sixteen-hour days, a lot of late nights and missed dinners, and a lot of sleeping at the desk for the programmers involved in creating the network—as well as for the graduate students who, at the various receiving sites for the ARPANET-connected computers, did much of the work of getting computers to talk to one another. They hammered out protocols, shared resources, and came up with the very first email programs collaboratively, sharing information with one another and hashing out disputes informally. The early Internet took the shape of the men who made it—it was anarchic, a place for sleepless computer nerds to express themselves, and argue for hours, whether it was about their ideas for the network or their political convictions (Defense Department money or no Defense Department money). They even figured out how to make games for it—a stripped-down version of the tabletop game Dungeons and Dragons, called Adventure, for example, was built by one of the Bolt, Beranek and Newman coders and spread widely across the Net.15
Video games were the perfect sideline for workers expected to be chained to their desks late into the night in a field where one’s sleeplessness itself was a status symbol. If the programmers played with the network as much as they did hard work on it, that was just another way that they expanded its capabilities and kept themselves interested in the work they were doing. Later theorists named this playbor, simultaneously work and play, unforced yet productive. Adventure gaming blurred the lines between work and play just as the lines between work and home were being blurred by all those long nights at the office. That the network could be used for fun made the labor that went into making it seem even more worthwhile.16
Early video-game companies capitalized on these same ideas. As Jamie Woodcock wrote in Marx at the Arcade, “companies like Atari promised ‘play-as-work’ as an alternative to the restrictive conditions of industrial or office-based Fordism.” The 1970s were, after all, the decade in which the rebellion against the Fordist factory was slowly synthesized into the neoliberal workplace. Forming a union was out. Instead, little forms of disobedience, like playing video games on the office computer, would come in and be absorbed into the workflow in the tech industry itself. Atari, which at this time developed early home consoles for playing video games on personal televisions, was the first company to prove that games could be big business. And as the computer business boomed, the tension between work and play, between fun and profits, only continued to grow.17
Programmers had been given a huge amount of freedom in the early days of the ARPANET. Coder Severo Ornstein from Bolt, Beranek and Newman had even turned up to a meeting at the Pentagon wearing an anti–Vietnam War button. But as the private sector began to get into the act (and woo away many of the academics and public employees who had been instrumental to the project), the question of how much power individual workers could be allowed to have was occurring to managers. Far from the purview of a handful of unique “wizards” and “geniuses,” the daily workings of what was now a rapidly growing “tech” industry required a lot of work from a lot of skilled but interchangeable laborers. And those laborers had to be prevented from organizing.18
Silicon Valley eclipsed Cambridge as the tech hub for many reasons, but one of them was that the nonunion atmosphere allowed companies to maintain their cherished “flexibility.” While Massachusetts had a long-established union culture, California was the wide-open frontier. Nevertheless, the 1970s and 1980s saw some attempts to unionize at tech companies from Atari to Intel, stories mostly written out of the history of tech as the industry grew.19