The stoning metaphor comes up again and again when you read the commentary on episodes like these. It’s no coincidence that it’s the death penalty of choice for the ancient religions: there is no single executioner; the community carries out the punishment. No one can say who struck the fatal blow, because everyone did together.2 For a burgeoning tribe, fighting to preserve itself and its god in a hostile world, what better prescription could there be? There is strength in collective guilt, and guilt is diffused in the sharing. Extirpate the Other and make yourselves whole again.
In Justine’s case, people on three continents had assembled to destroy her. Pulling self-descriptions from just a handful of their Twitter bios you find it takes all types: Lobbyist. Communist. Hater. Aspie. Leader. Nature Enthusiast. Blogger. Gator. Dad. Writer. Imperfect Christian. Professional Shade Detector. Pop Culture Virtuoso. Daughter of the Sea, Sister to the Wind. These people had nothing in common but a target and a hashtag at hand, and they got the blood they came for. Justine lost her job. BuzzFeed put her face up on their front page with a big “LOL” over it.
The reach of social media makes the force of these gatherings immense. Within twenty-four hours of her tweet, Safiyyah had been called down in front of 7.4 million people. And 62 million saw #HasJustineLandedYet that first day.
Not everyone under the curve read the tweets or cared, but many did, and all were in some way a witness.
Sir Qwap Qwap @BeardedHistoria
Literally every one of the first 20 tweets on my home feed has #HasJustineLandedYet. I must have missed something, Tweet-fiends.
It’s worth pointing out that this fantastic volume should be an embarrassment to social media—evidence not just of its power but of how hollow that power can be. In Justine’s case, AIDS, racism, and the stubborn, shameful poverty of postcolonial Africa are all enormous problems that tweeting does absolutely nothing to solve.
We may think of human sacrifice as something from a savage past, and the physical act might now only exist in films about temples and doom, but the instinct remains within us, seemingly burned by deep time into the reaches of the animal mind. When food is scarce, lions kill their cubs. Fish eat their own eggs. In multiple human pregnancies a womb will sometimes absorb a fetus to preserve the others. To destroy the one for the many is possibly a practice as old as life itself. Now that this ritual is carried out in bits (and thankfully with no actual blood on anyone’s hands, though you get the idea, reading some of these tweets, that people view this as a bug rather than a feature), it’s become a topic we can rigorously study for the first time. Social scientists have devoted considerable energy to the question of why and how negative ideas spread, and the Internet has given them both limitless source material and a powerful tracking mechanism. Marine biologists tag sharks in the wild to understand their movements and to limit their threat to humans.3 Here it’s the words that have teeth. My three cases above aren’t precisely rumors or gossip, but mob outrage follows many of the same pathways, both neurological and person-to-person, and the science of rumors can help us understand what has happened to people like Natasha, Safiyyah, and Justine—and why.
Rumors are mentioned in our earliest texts. The archaic pantheons—Norse, Egyptian, Greek—all have a god dedicated to the dark art of gossip. The book of Proverbs treats the topic thoroughly; one verse from many cautions that “a man who lacks judgment derides his neighbor, but a man of understanding holds his tongue.” “Judge not lest you be judged” is one of the most famous phrases in the whole Bible. Several sources maintain that the Romans enshrined a goddess named “Rumor”—a winged demon with a hundred eyes and a hundred mouths who spoke only the most hurtful side of the truth. Appropriately enough, I can’t seem to confirm this.
Evolutionary biologists believe that gossip and rumors arose from our ancestors’ need to understand their surroundings through speech. The theory is, when ancient man had to figure out if x was true, language gave him a way to investigate. So he talked about it. And, true or false, word spread. Rumors—essentially group speculation over the truth of an idea—became a way to build bonds and social capital. Stories create status for those who share them, especially when they concern important individuals, because information about powerful people is a form of power itself.
But the advent of social media has changed the calculus in a couple ways. First, it gives us metrics—follower counts, retweet counts, favorites counts—to judge our status. Be the first to spread the news, get more retweets. Say something especially cutting, and your followers applaud your wit. The social capital you build by sharing information is now explicit; in fact, it’s in little numbers that increment before your very eyes. Writing in the Boston Globe, Jesse Singal was discussing the motivations of traditional person-to-person gossip but might’ve easily been talking about Twitter when he said, “To the extent people do have an agenda in spreading rumors it’s directed more at the people they’re spreading them to, rather than at the subject of the rumor.” The Internet gives people a wider audience than ever before.
The second change is that the Internet has also made everyone a public figure. High-status individuals were once chieftains, and then celebrities and presidents, but, here, the leveling scythe of technology shows its obverse edge. If anyone can become an overnight celebrity, anyone can become an overnight leper. One of my least favorite Internet-evangelist talking points is about technology “empowering” people—inevitably the most empowered of all is the speaker and his investors. But here we find some truth in the cliché—social media empowers you to the extent that it makes you worth tearing down. At the same time, it gives everyone else the tools to do it. Demon Rumor now has a million mouths.
So much of what makes the Internet useful for communication—asynchrony, anonymity, escapism, a lack of central authority—also makes it frightening. People can act however they want (and say whatever they want) without consequences, a phenomenon first studied by John Suler, a professor of psychology at Rider University. His name for it is the “online disinhibition effect.” The webcomic Penny Arcade puts it a little better:
Greater Internet Fuckwad Theory
normal person + anonymity + audience = total fuckwad
But it’s not the vitriol, nor even the anonymity, that’s unique here. The Internet hasn’t been quite the revolution in trollery you’d think. The old CB radio channels that truckers used were notoriously filled with racist diatribes and masturbation fantasy.4 Before caller ID took away that necessary additive, anonymity, the Jerky Boys were churning out fuckwaddedness for decades. People still flame one another on ham radio—as if being a ham radio operator in 2014 isn’t burn enough. No, the unique thing that the Internet brings to our long history of negativity is that we can finally constructively respond to it. In some way, Tumblr’s thighgap intervention discussed in chapter 7 is just a special case of what’s now broadly possible. We can pinpoint the speaker, the words, the moment, even the latitude and longitude of human communication. As I pointed out earlier, by 2015, Twitter users will have exchanged more words than have ever been printed. The question is how to harness the chatter.
The government has the greatest vested interest in tracking negativity. Mathematical models already exist to predict the outcome of armed conflict—how long it will last, who will win, and how many people will die—and the models of late have learned to accommodate guerrilla warfare, since that’s the shape of today’s war. But armed insurgency is often preceded by unarmed unrest—which itself is often propagated, even coordinated, through social media.5 Those nascent movements, being digitized, have attracted the attention of researchers.
Using Western movements as his test subjects, MIT’s Peter Gloor has developed software to track the ebb and flow of sentiment in a network of protestors. He calls it Condor, because that’s what projects like this always seem to be called: Condor, spirit-bird of government grants. In any event, the software first establishes a group’s central personalities by looking at its social graph—much like we portrayed a marriage as edges and nodes before, the software lays out the network, then algorithmically determines its most important dots. Next, it looks at what those dots are saying. Condor has found that while the foci of a movement are positive in their word choice, the movement is vibrant. But negative words like “hate,” “not,” “lame,” and “never” signal decline, and when, as The Economist put it, “complaints about idiots in one’s own movement or such infelicities as the theft of beer by a fellow demonstrator” begin to appear, the movement is all but over. Oh, Occupy!
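The two steps described above—find a network's hubs, then score their word choice—can be sketched in a few lines. This is not Condor itself, just a toy illustration: the names, messages, and the tiny four-word lexicon (taken from the negative words quoted above) are all invented for the example.

```python
# Toy version of the two-step analysis: (1) find a movement's most-connected
# members via degree centrality, (2) measure how negative their words are.
NEGATIVE_WORDS = {"hate", "not", "lame", "never"}  # the signals cited above

def degree_centrality(edges):
    """Fraction of the other nodes each node is directly connected to."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    n = len(neighbors)
    return {node: len(adj) / (n - 1) for node, adj in neighbors.items()}

def negativity(messages):
    """Share of a member's words drawn from the negative lexicon."""
    words = [w for msg in messages for w in msg.lower().split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w in NEGATIVE_WORDS) / len(words)

# Invented protest network and posts, purely for illustration.
edges = [("ana", "ben"), ("ana", "cal"), ("ana", "dee"),
         ("ben", "cal"), ("cal", "dee"), ("dee", "eli")]
posts = {
    "ana": ["never marching with these lame idiots again"],
    "cal": ["i hate how this is not working"],
    "eli": ["great turnout today"],
}

centrality = degree_centrality(edges)
focus = max(centrality, key=centrality.get)  # the movement's hub
print(focus, round(negativity(posts.get(focus, [])), 2))  # prints: ana 0.29
```

In the toy data the hub's posts are already more than a quarter negative words—by Condor's rule of thumb, a movement on its way out.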
As for deciphering the aims of unrest, which is where this technology can move beyond mere spying and into doing some good, similar kinds of textual analysis have been used to determine, for example, which Egyptian towns will be most upset by border incidents with Israel, and to pinpoint water insecurity in a drought-stricken countryside.
Any software that follows the thread of a thought through a network must track not only the idea but the “susceptibility” of people exposed to it. It must see what takes hold, what gets repeated, and who moves it along. Relaying someone else’s opinion isn’t unique to the Internet any more than negativity is: television and radio made “talking points” into a phrase long before AOL came along, let alone Twitter. Rush Limbaugh’s staunchest fans call themselves “Dittoheads”—but nothing makes parroting an idea simpler, or more trackable, than the Like, the Ping, the Reblog, or the Retweet button. Remember: 27.5 percent of Twitter’s 500 million tweets a day are retweets, people just passing along someone else’s thought.
Facebook’s data team investigated their version of the phenomenon, tracing the evolution of a single status update from the health-care debates in 2009 through the network:
No one should die because they cannot afford health care, and no one should go broke because they get sick. If you agree, post this as your status for the rest of the day.
This was reposted, verbatim, more than 470,000 times and also spawned 121,605 different variants, which themselves received about 800,000 more posts. Someone who didn’t quite feel that the update spoke for him would change it slightly, and versions spread outward into different social circles. When you put each version against the political bias of the people posting it (–2.0 is maximally liberal, +2.0 conservative) not only do you get an interesting look at the American political spectrum—extremes of right and left, plus a center that has opted out of the discussion—but you also see how political belief translates into words. People at the top and bottom of this list use the same framework to speak at cross-purposes:
No one should … political bias of the person posting
… die because they cannot afford health care … –0.87 more liberal
… be frozen in carbonite because they couldn’t pay Jabba the Hutt … –0.37
… die because of zombies if they cannot afford a shotgun … –0.30
… have to worry about dying tomorrow, but cancer patients do … –0.02
… be without a beer because they cannot afford one … +0.22
… die because the government is involved with health care … +0.88
… die because Obamacare rations their health care … +0.96
… go broke because government taxes and spends … +0.97 more conservative
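The mechanics behind a table like this are simple: score each wording of the meme by the average political bias of the people who posted it, then sort. A minimal sketch, with invented posts and bias values on the same –2.0 to +2.0 scale:

```python
from statistics import mean

# Each post is (variant text, poster's political bias score).
# These five posts are made up for illustration; the real data set
# had hundreds of thousands of posts across 121,605 variants.
posts = [
    ("... die because they cannot afford health care ...", -1.2),
    ("... die because they cannot afford health care ...", -0.5),
    ("... die because Obamacare rations their health care ...", +1.1),
    ("... die because Obamacare rations their health care ...", +0.8),
    ("... be without a beer because they cannot afford one ...", +0.2),
]

# Group the bias scores by variant, then average.
by_variant = {}
for text, bias in posts:
    by_variant.setdefault(text, []).append(bias)

# Sort variants from most liberal to most conservative.
spectrum = sorted((mean(biases), text) for text, biases in by_variant.items())
for score, variant in spectrum:
    print(f"{score:+.2f}  {variant}")
```

With enough posts per variant, the averages become stable enough to lay the wordings out along the political spectrum, which is exactly the shape of the table above.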
In 1950, at the dawn of the age of television, the American Political Science Association actually called for more polarization in national politics—the parties had grown too close together, the electorate didn’t have clear choices. The APSA got their wish, and in the old genie style, too, with plenty to regret about its granting. Now, sixty years later, we’re more divided than ever, and you can track this, too, through the words. The repetition of partisan speech both in Congress itself and in print (as tracked through Google Books) correlates with political gridlock, which is at an all-time high. That we’re divided might be the only thing we can, in fact, agree on.
This paradox was driven home to me when I turned to Facebook in the aftermath of Justine’s tweet. In my post was a link to an article from breitbart.com—the namesake site of Tea Party instigator Andrew Breitbart. A lot about the article was regrettable, but the author was one of the only people pointing out how out-of-proportion the reaction was. I’d always imagined uncritical outrage as a vice of the political right—I’d hear about the ridiculous “War on Christmas” or the belief that Obama was “taking people’s guns away” and think, What fools these people are to believe this stuff! Why talk about things in such extreme terms? Why look at something only in the worst possible light? But it took this incident on Twitter to make me see that people on the “left” could be just as self-righteously uninformed as anyone else. It was eye-opening, and shame on me for having them closed in the first place.
So theories aside—and the science is so new that no doubt Condor will look like Zork in a few years—this, to me, is why the data generated from outrage could ultimately be so important. It embodies (and therefore lets us study) the contradictions inherent in us all. It shows we fight hardest against those who can least fight back. And, above all, it runs to ground our age-old desire to raise ourselves up by putting other people down. Scientists have established that the drive is as old as time, but this doesn’t mean they understand it yet. As Gandhi put it, “It has always been a mystery to me how men can feel themselves honored by the humiliation of their fellow beings.”
I invite you to imagine when it will be a mystery no more. That will be the real transformation—to know not just that people are cruel, and in what amounts, and when, but why. Why we search for “nigger jokes” when a black man wins; why inspiration is hollow-eyed, stripped, and, above all, #thin; why people scream at each other about the true age of the earth. And why we seem to define ourselves as much by what we hate as by what we love.
1 If Facebook ever gets tired of that minimalist f and wants a new logo, I suggest, on a blue background: two white people arguing about what another white person said about Africa.
2 It would be interesting to see if residents of countries where stoning is still used as a real-world punishment take as much joy in the digital version.
3 In Australia these tags are outfitted with transponders that notify local beachgoers when a shark is nearby. The tags communicate to us … via Twitter.
4 And, as they do online, the users even had “handles.”
5 The Arab Spring, for example, was Twitter’s debut as a tool of global importance, and the service has also facilitated protests in Guatemala, Moldova, Russia, and Ukraine.
10.
Tall for an Asian
When I was applying to college, I had to write about myself. I’m sure you did, too. I can’t even remember the question on the application because whatever it was actually asking was beside the point. The essay was there to get me to talk about Christian Rudder, so the Admissions people could decide if they liked what they heard. As the Common Application now puts it: “The personal essay helps us become acquainted with you as a person.”
Being a sucker for melodrama even then, I wrote about how sad I would be to leave my dog behind when I went to school. We’d gotten Frosty when I was six, so he and I had grown up together. But with dog years working like they do, he’d gotten too old too fast. My family had moved around a lot, and he was that last connection to deep childhood: clubhouses, neighborhood pools, friends; I’d left them all in Houston, or Cleveland, or Louisville, but Frosty always came with me. The next move, however, I knew I’d have to make on my own.
In any event, adrift in pathos and extra-large M. C. Escher T-shirts, I completed my college application. I haven’t written many self-statements since, but involved as I am in the business of understanding people I can’t help but think back on my seventeen-year-old self and the essay he chose to write. Why talk about Frosty
and getting older? Why not talk about baseball? Or basketball? Or tennis? Or rotisserie baseball? Or any other of my diverse interests? What was it, when the prompt was “Who are you?” that made me respond like I did? And, even more important, how were other kids answering the question?
Now, twenty years later, I find myself sitting on millions of essays—billions of words—more or less written to answer that same prompt: “Who are you?” And this body of text actually allows me to do the inverse of the college application process. Instead of matching essays one at a time against a preconceived ideal (i.e., “college material”), I can mush all the essays together and see what ideals they reveal to me. There are times when a data set is so robust that if you set up your analysis right, you don’t need to ask it questions—it just tells you everything anyway. How do people describe themselves? What’s important, what’s typical, what’s atypical? When everyone else gets a turn to put down in words who they are, what identities do they sketch?
We’re going to look at broad categories here: black people, white people, Asians, females, males, and so on. A problem in studying any particular group is that you always bring your own prejudices and preconceptions along with you. What you choose to notice, remember, and transcribe is as much a matter of how you look as what’s actually there. In social science, knowledge, like water, often takes the shape of its vessel. So if we want to take all the self-statements I’ve collected and pull from them a sense of who the writers are—what makes ethnicities and sexes and orientations unique—we’ll need to develop an algorithm that takes the “us” out of it and leaves just the “them.”
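One simple, well-known way to let the “them” speak for themselves—a rough sketch only, not the actual algorithm developed in these pages—is to flag the words a group uses far more often than the whole population does. The essays below are invented placeholders; `min_ratio` is an arbitrary threshold chosen for the example.

```python
from collections import Counter

def word_rates(essays):
    """Each word's share of all words written."""
    counts = Counter(w for essay in essays for w in essay.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def distinctive(group_essays, all_essays, min_ratio=2.0):
    """Words the group uses at least min_ratio times as often as everyone."""
    group = word_rates(group_essays)
    overall = word_rates(all_essays)
    hits = [(rate / overall[w], w) for w, rate in group.items()
            if w in overall and rate >= min_ratio * overall[w]]
    return sorted(hits, reverse=True)

# Three invented self-summaries; the middle one stands in for "the group."
all_essays = [
    "i love my dog and my family",
    "i love surfing and my dog",
    "i love skiing and winter",
]
group_essays = [all_essays[1]]

print(distinctive(group_essays, all_essays))
```

The common connective tissue (“i,” “love,” “and”) washes out because everyone uses it at the same rate; what survives is the word that actually sets the group apart. Scaled up to millions of essays, that same wash-out is what takes the “us” out and leaves just the “them.”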
Dataclysm: Who We Are (When We Think No One's Looking) Page 11