That’s why the pipeline is such a myth. Regardless of how many women and underrepresented minorities study computer science, the industry will never be as diverse as the audience it’s seeking to serve—a.k.a., all of us—if tech won’t create an environment where a wider range of people feel supported, welcomed, and able to thrive.
The good news is there’s actually no magic to tech. As opaque as it might seem from the outside, it’s just a skill set—one that all kinds of people can, and do, learn. There’s no reason to allow tech companies to obfuscate their work, to call it special and exempt it from our pesky ethics. Except that we’ve never demanded they do better.
But we can—and if we do, we’ll not only make things better for all the Kayas and Fatimas of the world, we’ll also make things better for ourselves, every time we pick up our phones or open a browser tab.
Chapter 3
Normal People
Are you a “Kelly,” the thirty-seven-year-old minivan mom from the Minneapolis suburbs? Or do you see yourself as a “Matt,” the millennial urban dweller who loves CrossFit and cold-brew coffee? Maybe you’re more of a “Maria,” the low-income community college student striving to stay in school while supporting her parents.
No? Well, this is how many companies think about you. From massive businesses like Walmart and Apple to fledgling startups launching new apps, organizations of all types use tools called personas—fictional representations of people who fit their target audiences—when designing their products, apps, websites, and marketing campaigns.
Personas are often meant to feel like real people—sometimes right down to Kelly’s 2014 Toyota Sienna (which she purchased with her husband while she was pregnant with their second child), or Matt’s iPhone 7 Plus (which he just replaced because he dropped his last one outside the rock-climbing gym). The specificity can be unnerving: you half expect to start hearing about a persona’s childhood chicken pox or aversion to cilantro. What does that have to do with how they use a website, again?
This level of specificity isn’t added by accident. It aims to give personas enough descriptive detail and backstory to feel relatable to the teams that use them—so that, ideally, team members think about them regularly and internalize their needs and preferences.
That’s great in theory. But when personas are created by a homogeneous team that hasn’t taken the time to understand the nuances of its audience—a team like those we saw in Chapter 2—the result is often a product that alienates its audience, rather than making people feel at home.
That’s what happened to Maggie Delano. She’s a PhD candidate at MIT and an active participant in the Quantified Self movement, a loose organization of people who are interested in tracking everything from moods to sleep patterns to exercise. One day in 2015, she decided to investigate tools for tracking something people have been monitoring for millennia: her period. Her cycle had recently been irregular, and she wanted to do a better job of tracking both her period and her moods in relation to it. So she test-drove some menstrual cycle apps, looking for one that would help her get the information she needed.
What she found wasn’t so rosy.
Most of the apps she saw were splashed with pink and floral motifs, and Delano immediately hated the gender stereotyping. But even more, she hated how often the products assumed that fertility was her primary concern—rather than, you know, asking her.
A typical example of a persona, with lots of made-up personal detail. (Eric Meyer and Sara Wachter-Boettcher)
As a “queer woman not interested in having children,” Delano found one app, Glow, particularly problematic. She wrote:
The first thing I was asked when I opened the app was what my “journey” was: The choices were avoiding pregnancy, trying to conceive, or fertility treatments. And my “journey” involves none of these. Five seconds in, I’m already trying to ignore the app’s assumptions that pregnancy is why I want to track my period. The app also assumes that I’m sexually active with someone who can get me pregnant.1
The first screen in Glow’s onboarding process. What if none of these options apply to you?
Delano’s experience with Glow might have made sense back in 2013, when Glow launched with the mission of using big data “to help get you pregnant.”2 But in 2014, the founders realized that about half of Glow’s users were actually using the app to avoid getting pregnant.3 So, with $17 million in new funding in hand, the team set out to transform Glow from a narrow, fertility-focused experience to a product that could serve all women—including, it would seem, women like Delano. “We live in a time when people are tracking everything about their bodies . . . yet it’s still uncomfortable to talk about your reproductive health, whether you’re trying to get pregnant or just wondering how ‘normal’ your period is,” the company website stated. “We believe this needs to change.”4 And the people who thought they were the ones to change it? Glow’s founding team: Max Levchin, Kevin Ho, Chris Martinez, and Ryan Ye. All men, of course—men who apparently never considered the range of real people who want to know whether their period is “normal.”
Eve by Glow, a newer app designed by the makers of Glow for young women. Except it, too, makes assumptions about what its audience cares about.
Since Delano’s article, Glow has actually updated its products and how it talks about them—repositioning Glow as an “ovulation calculator” and launching a separate app, Eve by Glow, for period tracking and sexual health. Only one problem: Eve might offer the features Delano wants—it can track her periods and her moods—but it still makes a ton of assumptions about its users, referring to them as “girls,” using slang like “hookups,” and describing sex in a way that’s centered entirely on male genitalia: a banana with a condom, a banana without a condom, or no banana. If you’re an adult woman in a relationship with anyone who’s not a man, you’re probably still going to feel left out.
WHEN “NORMAL” BECOMES NARROW
This kind of thing happens all the time: companies imagine their desired user, and then create documents like personas to describe them. But once you hand them out at a meeting or post them in the break room, personas can make it easy for teams to start designing only for that narrow profile. And it can happen even in a tech company where women are on staff, like Etsy.
Etsy is an online marketplace for buying and selling handmade goods directly from their creators—anything from letterpress greeting cards to hand-knit baby booties to shelving made from salvaged barn wood. As you might guess, it’s a great place to shop for unique gifts.
That’s precisely what Etsy wanted Erin Abler to do in January 2017, when they sent her an alert on her phone: “Move over, Cupid!” it read. “We’ve got what he wants. Shop Valentine’s Day gifts for him.”
The alert that Erin Abler received from Etsy. She doesn’t want Valentine’s Day gifts “for him”—her partner is a woman. (Erin Abler)
But, as with Maggie Delano, Abler’s partner isn’t a man. She’s not buying anything for “him” on Valentine’s Day. Apparently, Etsy’s designers and copywriters never thought about this—never considered just how many people they might alienate with this message. Abler was irritated. “‘Come on, what are the odds we’ll get a gay one?’ Uh, 100%,” she joked on Twitter.5
This sort of problem happens whenever a team becomes hyperfocused on one customer group, and forgets to consider the broader range of people whose needs could be served by its product. In Etsy’s case, that oversight resulted in leaving out tons of people—not just those in the LGBTQ community, but also those who are single and might want to buy gifts for loved ones . . . or simply not be told they ought to have a “him” to shop for. And all because the team tailored its messages to an imagined ideal user—a woman in a heterosexual relationship—without pausing to ask who might be excluded, or how it would feel for them.
That’s what we saw in Glow too. Eve by Glow works well for teen girls and young women who are sexually active with boys. Glow works well for women who are trying to get pregnant with a partner. But for everyone else, both services stop making sense—and can be so alienating that would-be users feel frustrated and delete them.
NARROW VISION, NARROW DEFAULTS
This kind of narrow thinking about who and what is normal also makes its way into the technology itself, in the form of default settings. Defaults are the standard ways a system works—such as the ringtone your phone is already set to when you take it out of the box, or the fact that the “Yes, send me your newsletter!” checkbox comes preselected in so many online shopping carts.
These settings are powerful, and not just because we might not notice that a checkbox is already selected (though you can bet marketers are relying on that). Defaults also affect how we perceive our choices, making us more likely to choose whatever is presented as default, and less likely to switch to something else. This is known as the default effect.
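To see just how small a decision a default is on the maker’s side, consider a minimal sketch in Python of a hypothetical signup flow. (The function and field names here are invented for illustration; nothing about this code comes from any particular company.)

```python
# A hypothetical signup flow. The only difference between these two
# functions is a single preset value, but at scale that one value can
# decide whether millions of people receive marketing email.

def signup_opt_out(email: str, newsletter: bool = True) -> dict:
    """The newsletter box arrives prechecked; users must act to decline."""
    return {"email": email, "newsletter": newsletter}

def signup_opt_in(email: str, newsletter: bool = False) -> dict:
    """The newsletter box arrives empty; users must act to subscribe."""
    return {"email": email, "newsletter": newsletter}

# Most people never touch the setting, so whatever the designer chose wins:
print(signup_opt_out("kelly@example.com"))  # {'email': ..., 'newsletter': True}
print(signup_opt_in("kelly@example.com"))   # {'email': ..., 'newsletter': False}
```

One changed word in the code, True instead of False, and the “choice” most users end up making flips with it.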
Between the default effect making us more likely to value preselected choices and the fact that many of us either don’t want to bother adjusting our settings or don’t know that we can, very few of us actually change the default settings on the systems we use. That’s why you’ll hear the iPhone Marimba ringtone everywhere you go (and see more than one person nearby check their bags and pockets).
People who design digital products know this, and some of them use that fact to make money—like when New York City cabs implemented touchscreens in every vehicle. The screens defaulted to show your fare and then a few options to automatically add the tip to your total: 20 percent, 25 percent, or 30 percent. Average tips went from 10 percent to 22 percent, because the majority of riders—70 percent—opted to select one of the default options, rather than doing their own calculation.6
Defaults can also be time-savers for users. One could even argue that the tipping defaults in New York taxis are just that, since they allow customers to skip the math when paying their fares (though it would be hard to convince anyone that’s all the designers had in mind). Or, if a company has primarily US customers, it might default to United States when users enter their address into a shipping form, so that most users don’t need to scroll through a big list to find their country.
Default settings can be helpful or deceptive, thoughtful or frustrating. But they’re never neutral. They’re designed. As ProPublica journalist Lena Groeger writes, “Someone, somewhere, decided what those defaults should be—and it probably wasn’t you.”7
What happens when those someones are the people we met in Chapter 2: designers and developers who’ve been told that they’re rock stars, gurus, and geniuses, and that the world is made for people like them?
In 2015, middle-school student Madeline Messer found out firsthand. Like many kids her age, Messer loves playing games on her phone, often alongside her friends. One day, she noticed a friend playing a game using a boy avatar. When Messer asked her why she wasn’t playing as a girl, her friend replied that it simply wasn’t an option: only boy characters existed in the game.
This didn’t sit well with Messer. “I started to pay attention to other apps my friends and I were playing,” she wrote in the Washington Post. “I saw that a lot of them featured boy characters, and if girl characters did exist, you were actually required to pay for them.”8
With her parents’ permission, Messer embarked on an experiment: she downloaded the top fifty “endless-runner” games from the iTunes Store and set about analyzing their default player settings. Endless runners are games where players aim to keep their characters running as long as possible, racking up as many points as they can before, eventually, they hit obstacles and are defeated.
Messer found that nine of the fifty games used nongendered characters, such as animals or objects. Of the remaining forty-one apps, all but one offered a male character—but only twenty-three offered female characters, fewer than half of the fifty games overall. Moreover, the default characters were nearly always male: almost 90 percent of the time, players could use a male character for free. Female characters, on the other hand, were included as default options only 15 percent of the time. When female characters were available for purchase, they cost an average of $7.53—nearly twenty-nine times the average cost of the original app download.
A similar default is at play whenever you sign up for a new app or create an account on a website that uses profile photos, and you’re automatically given a male avatar—the icon of a person’s silhouette used by the system to depict anyone who hasn’t uploaded a picture yet. In fact, that’s how Facebook treated profiles without an image, up until 2009 or so, when a female version was added to the mix. Today, more sites are defaulting to neutral avatars—either by making the silhouettes more abstract, and therefore less gendered, or by using some other icon to represent a user, such as their initials.
We can also see default biases in action by returning to the smartphone assistants I mentioned in Chapter 1: Apple’s Siri, Google Now, Samsung’s S Voice, and Microsoft’s Cortana. In addition to not understanding queries like “I was raped,” these services all have another thing in common: women’s voices serve as the default for each of them. As Adrienne LaFrance, writing in the Atlantic, put it, “The simplest explanation is that people are conditioned to expect women, not men, to be in administrative roles”9 (just think about who you picture when you hear the term “secretary”).
Or let’s look once more at Snapchat. In addition to the so-called “anime-inspired” filter we saw earlier, the app is known for releasing filters that purport to make you prettier, like the popular “beauty” and “flower crown” features. These filters smooth your skin, contour your face so your cheekbones pop, and . . . make you whiter.10 Why is whiter the default standard for beauty? Well, that’s a complex cultural question—but I doubt it’s one that the three white guys from Stanford who founded Snapchat ever thought about.
These might seem like small things, but default settings can add up to a big deal—both for an individual user like Messer, and for the culture at large. Just look at the requirements for formatting a paper in almost any college class: Times New Roman, 12 point. But that wasn’t the case until relatively recently—namely, the 1990s, when Microsoft Word started shipping with Times New Roman as the default font. Most people stuck to the default, and eventually, that default became the standard.
Default styles for your freshman paper comparing the portrayal of heroism in The Odyssey versus Beowulf might not matter much (“Since the beginning of time . . .” is a trite opening sentence in every font). But when default settings present one group as standard and another as “special”—such as men portrayed as more normal than women, or white people as more normal than people of color—the people who are already marginalized end up having the most difficult time finding technology that works for them.
Perhaps worse, the biases already present in our culture are quietly reinforced.
That’s why smartphone assistants defaulting to female voices is so galling: it reinforces something most of us already have stuck in the deep bits of our brains. Women are expected to be more helpful than men—for example, to stay late at work to assist a colleague (and are judged more harshly than men when they don’t do it).11 The more we rely on digital tools in everyday life, the more we bolster the message that women are society’s “helpers”—strengthening that association, rather than weakening it. Did the designers intend this? Probably not. More likely, they just never thought about it.
THE MYTHICAL MIDDLE
Try to bring up all the people design teams are leaving out—whether it’s gay people buying gifts for loved ones or women who want to play games—and many in tech will reply, “That’s just an edge case! We can’t cater to everyone!”
Edge case is a classic engineering term for scenarios that are considered extreme, rather than typical. It might make sense to avoid edge cases when you’re adding features: software that includes every “wouldn’t it be nice if . . . ?” scenario that anyone has ever thought of quickly becomes bloated and harder to use.
But when applied to people and their identities, rather than to a product’s features, the term “edge case” is problematic—because it assumes there’s such a thing as an “average” user in the first place.
It turns out there isn’t: we’re all edge cases. And I don’t mean that metaphorically, but scientifically: according to Todd Rose, who directs the Mind, Brain, & Education program at the Harvard Graduate School of Education, the concept of “average” doesn’t hold up when applied to people.
In his book The End of Average, Rose tells the story of Lt. Gilbert S. Daniels, an air force researcher who, in the 1950s, was tasked with figuring out whether fighter-plane cockpits were properly sized for the pilots using them. Daniels studied more than four thousand pilots and calculated their averages for ten physical dimensions, like shoulders, chest, waist, and hips. Then he took that profile of the “average pilot” and compared each of his four-thousand-plus subjects against it, to see how many of them fell within the middle 30 percent of those averages for all ten dimensions.
The answer was zero. Not a single one fit the mold of “average.” Rose writes:
Even more astonishing, Daniels discovered that if you picked out just three of the ten dimensions of size—say, neck circumference, thigh circumference and wrist circumference—less than 3.5 per cent of pilots would be average sized on all three dimensions. Daniels’s findings were clear and incontrovertible. There was no such thing as an average pilot. If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.12
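Some back-of-the-envelope math, sketched below in Python, shows why zero shouldn’t even be surprising. Assume, purely for the sake of the sketch, that the ten dimensions were statistically independent (in reality body measurements are positively correlated, which is part of why Daniels’s measured three-dimension figure of 3.5 percent runs a bit higher than the naive estimate):

```python
# Back-of-envelope only: if the "middle 30 percent" bands were
# independent across dimensions, the share of pilots who are average
# on all of them would shrink geometrically with each added dimension.

p_middle = 0.3   # chance of landing in the middle band on one dimension
pilots = 4000    # Daniels studied "more than four thousand" pilots

for dims in (1, 3, 10):
    share = p_middle ** dims
    print(f"{dims} dimensions: {share:.4%} of pilots, "
          f"about {share * pilots:.2f} of {pilots} pilots")

# 1 dimensions: 30.0000% of pilots, about 1200.00 of 4000 pilots
# 3 dimensions: 2.7000% of pilots, about 108.00 of 4000 pilots
# 10 dimensions: 0.0006% of pilots, about 0.02 of 4000 pilots
```

In other words, even before accounting for real-world correlations, a ten-dimension “average pilot” describes about six people in every million, so finding none among four thousand is exactly what the math predicts.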
So, what did the air force do? Instead of designing for the middle, it demanded that airplane manufacturers design for the extremes—mandating planes that fit pilots at both the smallest and the largest ends of each dimension. Pretty soon, engineers found solutions for designing across these ranges, including adjustable seats, foot pedals, and helmet straps—the kinds of inexpensive features we now take for granted.