That’s why we all need to pay a lot closer attention to the minutiae we encounter online—the form fields and menus we tend to gloss over so quickly. Because if we want tech companies to be more accountable, we need to be able to identify and articulate what’s going wrong, and put pressure on them to change (or on government to regulate their actions).
It’s never been more important that we demand this kind of accountability. Failing to design systems that reflect and represent diverse groups can alienate customers and make people feel marginalized on an individual level, and that would be reason enough for us to demand better. But there’s also a pressing societal concern here. When systems don’t allow users to express their identities, companies end up with data that doesn’t reflect the reality of their users. And as we’ll see in the coming chapters, when companies (and, increasingly, their artificial-intelligence systems) rely on that information to make choices about how their products work, they can wreak havoc—affecting everything from personal safety to political contests to prison sentences.
Chapter 5
Delighted to Death
One day in 2015, Dan Hon put his toddler, Calvin, on the scale. Calvin was two and a half years old, and he clocked in at 29.2 pounds—up 1.9 pounds from the week before, and smack in the middle of the normal range for his age. Hon didn’t think twice about it.
But his scale did. Later that week, Hon received Calvin’s “Weekly Report” from Withings, the company that makes his “smart scale” and accompanying app. It told Calvin not to be discouraged about his weight gain, and to set a goal to “shed those extra pounds.” 1
“They even have his birth date in his profile,” Hon tweeted about the incident. “But engagement still needs to send those notifications!” 2
Withings specializes in “smart” scales, meaning internet-connected devices that save your data to an account you access using an app on your smartphone or other device. In the app, you can see your weight over time, track trends, and set goals.
There’s just one problem: the only goal Withings understands is weight loss.
An update from Dan Hon’s “smart” scale, shaming his toddler son for gaining weight. (Dan Hon)
Sometimes, like in Calvin’s case, the result is comically absurd: most people would chuckle at the idea of a healthy two-year-old needing a weight goal. But in other cases, it might be downright hurtful. Like the default message that Withings sends if you weigh in at your lowest ever: “Congratulations! You’ve hit a new low weight!” the app exclaims. Hon’s family got that one too—this time, for his wife. She’d just had a baby, not met a goal. But Withings can’t tell the difference.
Have an eating disorder? Congratulations!
Just started chemo? Congratulations!
Chronically ill? Congratulations!
Withings is designed to congratulate any kind of weight loss—even if that’s not your goal. (Dan Hon)
Withings is far from the only service with this problem. Everywhere you turn online, you’ll find products that just can’t wait to congratulate, motivate, and generally “engage” you . . . no matter what you think about it.
That’s what went wrong in 2014 for Eric Meyer, whose tragic experience I described back in Chapter 1. Facebook wanted to delight its users, but instead it forced Meyer to relive the death of his daughter—putting her face in the center of his Year In Review, and surrounding her by illustrations of dancing partygoers and balloons.
As soon as Meyer and I began talking about this problem, people started sending us screenshots. So many screenshots, each more absurd than the last.
One of those came from Timehop, a service that “helps you celebrate the best moments of the past with your friends” 3 by re-sending you items that you posted to social media sites one or more years ago. This time, though, Timehop surfaced one of the worst moments of a user’s past: a message the man had sent to friends and family, letting them know where and when memorial services would be held for a young friend who had died suddenly. According to Timehop, this wasn’t a tragic memory; it was just “2010’s longest Facebook post.” 4
Was that post really the right one to dredge back up? Probably not. But to make the situation much worse, Timehop also added its own canned commentary to the top of the notification:
THIS WAS A REALLY LONG POST THAT YOU WROTE IN THE YEAR OF TWO THOUSAND AND WHATEVER 5
It was meant to be funny, of course. But in this context, it comes off as condescending and judgmental—as if the user should have written less about his friend’s death. Ouch.
In another example, a woman tried to email an airline a copy of her mother’s death certificate. When she typed the word “death,” the operating system, Apple iOS, kept suggesting that she add a cute little skull emoji to her message.
In yet another, Facebook turned all its reaction icons—the hearts, laughing faces, and the like that you can use to react to a friend’s post—into spooky icons for Halloween. It was cute, unless you wanted to react to a serious post and all you had was a sad Frankenstein.
If you don’t work in tech, you might be wondering at this point: What the hell are all these companies trying to do, exactly? Why do they care so much about shoving skulls and Frankensteins into our lives at awkward or sad moments? Why do they want us to relive funerals and tragedies? Why do we need to constantly be congratulated along the way?
Why won’t our technology just leave us alone?
CELEBRATE THE WORST TIMES
On July 9, 2016, DeRay McKesson, one of the most prominent activists in the Black Lives Matter movement, was in Baton Rouge, Louisiana. He was there to protest the death of Alton Sterling, a thirty-seven-year-old black man who had been held down by police in front of a convenience store and shot at close range just a few days before.
McKesson—perhaps best known as @deray, the Twitter handle he uses to communicate with his several hundred thousand followers—had spent the day tweeting from the protests. He posted an image of a woman holding a homemade sign: “I can’t keep calm. I have black sons,” it read. He praised the community’s efforts to de-escalate the situation with police. And he posted video of officers rushing up on protestors as they walked on the shoulder of a sidewalk-less highway.
McKesson was streaming the scene on Periscope, an app that broadcasts video directly to a live web feed, when police ran up behind him and placed him under arrest. He recorded the whole thing, including footage showing that he stayed well outside the painted line at the edge of the highway. He was charged with obstructing a highway of commerce anyway, and taken to jail with dozens of others.
McKesson had a long night in store for him. He would spend sixteen hours inside a cell, packed in like a sardine with as many as fifty other protestors.
It was also his thirty-first birthday.
I know this because, as I followed the news from Baton Rouge, I visited McKesson’s Twitter feed. But rather than seeing his latest tweets, I saw dozens of animated balloons flutter up my screen, obscuring his posts and videos.
Twitter plastered DeRay McKesson’s feed with balloons for his birthday—on the same night he was live-tweeting from a protest.
The design was meant to be cute—to celebrate a user and remind followers of their birthday. But as I watched those multicolored balloons twirl their way over pictures of police in riot gear and tweets fearing for people’s safety, I was anything but charmed. It was dissonant, uncomfortable—a surreal reminder of just how distant designers and product managers can be from the realities of their users.
These sorts of misplaced celebrations are everywhere, from Dan Hon and his “smart” scale, to Eric Meyer and his Year In Review, to the everyday reminders to wish someone a happy birthday that we all get on Facebook. They would seem almost quaint, as if from a simpler time, back when social media was just fun and games—not the primary way people communicate with friends and family and engage with current events. Almost quaint, that is, if they didn’t feel so unsettling.
WELCOME TO YOUR PAST
Another category where cute ideas turn creepy is reminders: features that are designed to encourage users to relive moments from their past, like we saw with Timehop. Timehop, at least, is specifically designed for this purpose: if you don’t want to relive the past, you have no reason to sign up for it in the first place. But other digital products have no problem tacking on “hey, remember when” features, without your ever opting in.
The worst offender is Facebook, which never misses an opportunity to “reengage” its users. Year In Review was one of the first of these features, but in the last few years, we’ve seen tons more. The most widespread is probably On This Day, which works basically just like Timehop—except that Facebook never asked if you wanted to use it. If you were on Facebook when it launched in 2015, you simply started receiving On This Day reminders—with no way to opt out. So perhaps it’s no surprise that Facebook’s help forums were filled with people begging for a way to turn off the feature, saying things like, “No one deserves to be reminded of things unwillingly” and “I NEVER WANTED THIS.” They told stories of lost children, chronic disease, and divorce. Months later, Facebook finally allowed users to filter out specific people and dates—but On This Day couldn’t simply be turned off. So frustrated users started sharing hacks they’d found to trick the feature. More than a year after the original launch, Facebook finally introduced the ability to turn off notifications from On This Day. But the feature itself? It can’t be removed, no matter what you do.
Despite these problems, Facebook keeps expanding the feature. For example, it now also sends out Friendversary updates, which tell you that you became friends with a person on this date some number of years ago. Sometimes it’s cute—until Facebook decides you need to relive your relationship with, say, an ex or an old boss.
In February 2016, on its twelfth anniversary, Facebook took this concept even further and created a fake holiday called Friends Day. To celebrate, it built minute-long videos for every user. Each video was constructed from photos plucked from the user’s account, with the goal of highlighting all the good times they’d had with their friends.
Only, human editors weren’t creating these videos, of course—not for more than a billion users. The photos were algorithmically selected, and as usual, the algorithm didn’t always get it right. “Hi Tyler,” one man’s video starts, using title cards. “Here are your friends.” He’s then shown five copies of the same photo. The result is equal parts funny and sad—like he has just that one friend. It only gets better (or worse, depending on your sense of humor) from there. Another title card comes up: “You’ve done a lot together,” followed by a series of photos of wrecked vehicles, culminating in a photo of an injured man giving the thumbs up from a hospital bed. I suppose Facebook isn’t wrong, exactly: getting in a car accident is one definition of “doing a lot together.”
The video keeps going. “You’ve shared these moments,” it says. Then the same photos display again, like we’re all stuck in some kind of tragicomic loop. By the 37-second mark, I’m absolutely losing it: a title card comes up, saying, “And remember this?”—and then displays one of the wrecked vehicles for the third time. After one last repeat of the hospital bed photo (title card: “Your friends are pretty awesome”), the video wraps with a hearty “Happy Friends Day” message from Facebook. And, of course, it’s all set to the kind of peppy tune you’d expect when you’re waiting for a conference call to start.
Then there’s Facebook Moments, which launched in the summer of 2016. This feature allows you to do things like create collections of photos taken by multiple people at a single event. But like the Friends Day video, Moments also automatically creates montages, and sets them to music. You can probably guess what went wrong: in one, Facebook created a montage of a man’s near-fatal car crash, set to an acoustic-jazz ditty. Just imagine your photos of a totaled car and scraped-up arms, taken on a day you thought you might die, set to a soft scat vocal track. Doo-be-doo-duh-duh, indeed.
It’s not just Facebook either. Google Photos launched a similar feature in 2016—resulting in a baby’s Catholic baptism video being automatically set to cheesy techno music.
These videos are truly absurd—seriously, take a break right now and go watch them. 6 And then consider this: Why are tech companies so determined to take your content out of its original context and serve it up to you in a tidy, branded package?
NEVER MISS A TERRIBLE THING
One day in September 2016, Sally Rooney felt her phone buzz. She looked at the screen and saw a notification from Tumblr: “Beep beep! #neo-nazis is here!” it read.
Rooney’s not a neo-Nazi. She’s an Irish novelist. “I just downloaded the app—I didn’t change any of the original settings, and I wasn’t following that tag or indeed any tags,” she told me. “I had a moment of paranoia wondering if I’d accidentally followed #neo-nazis, but I hadn’t.” 7
Sally Rooney’s Tumblr notification. (Sally Rooney)
Yet there Rooney was anyway, getting alerts about neo-Nazis, wrapped up in the sort of cutesy, childish language you’d expect to hear in a preschool. How did this happen? After a screenshot of the notification went viral on Twitter, a Tumblr employee told Rooney that it was probably a “what you missed” notification. Rooney had previously read posts about the rise in fascism, and the notification system had used her past behavior to predict that she might be interested in more neo-Nazi content.
Now on to the copy. As you might guess, no one at Tumblr sat down and wrote that terrible sentence. What they wrote was a text string: a piece of canned copy into which any trending topic could be inserted automatically, in this case “Beep beep! #[trending tag] is here!” (In fact, another Tumblr user shared a version of the notification he received: “Beep beep! #mental-illness is here!”)
Text strings like these are used all the time in software to tailor a message to its context—like when I log into my bank account and it says, “Hello, Sara” at the top. But in the last few years, tech companies have become obsessed with bringing more “personality” into their products, and this kind of copy is often the first place they do it—making it cute, quirky, and “fun.” I’ll even take a little blame for this. In my work as a content strategy consultant, I’ve helped lots of organizations develop a voice for their online content, and encouraged them to make their writing more human and conversational. If only I’d known that we would end up with so many inappropriate, trying-too-hard, chatty tech products.
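To make the mechanics concrete, here’s a minimal sketch, in Python, of the kind of string interpolation at work. The template wording comes from Rooney’s notification, but the code itself is a hypothetical stand-in, not Tumblr’s actual implementation:

```python
# A minimal sketch of how canned "engagement" copy gets assembled.
# Hypothetical stand-in based on the notification Rooney received;
# not Tumblr's actual code.

TEMPLATE = "Beep beep! #{tag} is here!"

def build_notification(trending_tag: str) -> str:
    """Drop whatever tag the system predicts a user will like into the canned copy."""
    return TEMPLATE.format(tag=trending_tag)

# The template has no sense of context: every tag gets the same chirpy framing.
for tag in ["cute-dogs", "neo-nazis", "mental-illness"]:
    print(build_notification(tag))
# Beep beep! #cute-dogs is here!
# Beep beep! #neo-nazis is here!
# Beep beep! #mental-illness is here!
```

The cheerfulness lives entirely in the template; nothing in the pipeline asks whether the topic being dropped into it deserves that tone.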
One of those products is Medium, the online-publishing platform launched in 2012 by Ev Williams, one of Twitter’s original founders. In the spring of 2015, Kevin M. Hoffman wrote a post on Medium about his friend Elizabeth, who had recently died of cancer. Hoffman works in tech, in web design and information architecture, and he knew Elizabeth from their time spent putting on conferences together. So he wanted to share his memorial in a place his peers, and hers, would see it. Medium was an obvious choice.
A few hours after posting his memorial, he got an email from Medium letting him know how his post was doing, and telling him that three people had recommended it. And inserted in that email was the headline he had written for his post, “In Remembrance of Elizabeth,” followed by a string of copy: “Fun fact: Shakespeare only got 2 recommends on his first Medium story.” It’s meant to be humorous—a light, cheery joke, a bit of throwaway text to brighten your day. If you’re not grieving a friend, that is. Or writing about a tragedy, or a job loss, or, I don’t know, systemic racial inequalities in the US criminal justice system. All of which are topics that you might find on Medium, a site that mixes paid journalistic pieces with free, anyone-can-be-an-author posts.
When the design and product team at Medium saw Kevin’s screenshot, they cringed too—and immediately went through their copy strings, removing the ones that might feel insensitive or inappropriate in some contexts. Because, it turns out, one of the key components of having a great personality is knowing when to express it, and when to hold back. That’s a skill most humans learn as they grow up and navigate social situations—but, sadly, seem to forget as soon as they’re tasked with making a dumb machine “sound human.”
That’s why I find Siri, Apple’s virtual assistant, so grating. As we learned in Chapter 1, when Siri doesn’t understand, or thinks you’re messing with it, it teases you. It’s marginally funny—if you’d rather get sass back from your phone than have it simply try to help. (“Hey Siri, what should I get my mom for Christmas?” “I hear the internet is good for these kinds of questions.” Sick burn, I guess . . . but wouldn’t most people rather that the assistant just looked up the phrase “Christmas gifts for moms” instead?) Maybe I’m the only one who’s just not interested in snotty comebacks from my phone, though I doubt it. But even if it works sometimes—even if it makes some people laugh uproariously—it’s hard to imagine anything less helpful during a crisis than a robot voice parroting canned humor at me.
The problem is that Siri just doesn’t know the difference—because for all its artificial intelligence, its emotional intelligence is basically nonexistent. And that’s fine, actually. Siri’s not a person; it’s a virtual assistant that can’t speak without adding awkward pauses at every comma. Why should it be good at navigating the complexity of human experience? The trouble is that Apple tries too hard, slathering a thin layer of personality on top of a system that’s nowhere near socially advanced enough for wit—and the result feels a little like someone interrupting a funeral with a fart joke.
That’s something the team at email marketing platform MailChimp learned firsthand (well, the part about personality being tricky to get right, at least; I don’t know what they think about fart jokes). A few years back, MailChimp invested tons of time in building a voice for its brand—from its jovial mascot, Freddie the chimp, to its commitment to writing legal terms and conditions in plain English. In 2011, the company published these guidelines on a public website, Voice & Tone (voiceandtone.com)—and soon, those guidelines were featured on Fast Company and popping up in countless conference talks as the way forward for online communication. You could almost say they ushered in the talk-like-a-human movement in tech products. So, in the summer of 2015, when Eric Meyer and I were researching empathetic web content and design practices, we called up MailChimp’s communications director, Kate Kiefer Lee. And what she told us surprised me: MailChimp was, slowly but surely, pulling back from its punchy, jokey voice. When I asked what had gone wrong, Kiefer Lee told me there were too many things to count.