The Attention Merchants
Meanwhile, life goes on. It eventually became not only acceptable but proper etiquette to use Facebook for announcing important developments in one’s life, like new relationships, the birth of a child, even someone’s death. It became widely used for the most primitive varieties of social display, whether of one’s children or things. It also supplanted the holiday card as the means for cultivating the perimeter of one’s acquaintance. The sociologist Zeynep Tufekci argued that Facebook and other social media had formed a substitute for “gossip, people-curiosity and small talk,” which, she wrote, were “the human version of social grooming in primates: an activity that is essential to forging bonds, affirming relationships, displaying bonds, and asserting and learning about hierarchies and alliances.” Significantly, she wrote, “Social grooming should be seen as both a bonding activity and a competitive activity: it is a means to improve one’s reputation and status as well as access to resources and social and practical solidarity.”15
—
In February 2012, having accumulated 845 million active monthly users, Facebook announced its long-anticipated initial public offering. It had not, however, conquered all doubts surrounding its business model, for as compared with Google—its most obvious predecessor—Facebook faced additional challenges. On the positive side, it did know more about everyone in the world than anyone else (“big data” in the vernacular of the 2010s), allowing advertisers to nanotarget at an unprecedented level. Facebook ran side ads that allowed an advertiser to home in on his quarry with criteria like location, gender, age, schools attended, stated interests, and more. Thus, the advertiser of vintage 1970s speakers could target men between fifty and sixty-five living in suburban New Jersey who listed an interest in the Kinks and Led Zeppelin. That was the good part.
The bad part for Facebook was that it didn’t quite have the metrics Google did. On Google one knew people were looking for something, and also what they were looking for. People went to Facebook for a different reason: the “social grooming” just described. In advertising jargon, if Google’s users were very close to the final stage before making a sale, Facebook users were, at best, at an initial, awareness stage.*4 Translated, that means people didn’t click on Facebook ads very often. As blogger and venture capitalist Chris Dixon wrote in 2012, “Facebook makes about 1⁄10th of Google’s revenues even though they have twice the pageviews. Some estimates put Google’s search revenues per pageviews at 100–200x Facebook’s.”16
During the 2010s, Facebook found a few answers to this problem. First, it invested heavily in developing metrics to convince advertisers that its advertisements were valuable even though people weren’t clicking on them. A well-funded “Measurement and Insights” Department endeavored to prove that Facebook was creating brand awareness, even without conscious notice of ads, let alone clicks. Its head, Brad Smallwood, would tell advertisers that “99 percent of sales generated from online branding ad campaigns were from people that saw but did not interact with ads,” claiming to prove “that it is the delivery of the marketing message to the right consumer, not the click, that creates real value for brand advertisers.” In this way, Facebook left much of the hard selling to Google, or The Huffington Post, while itself posing as heir to the MacManus tradition of branding. And over time, advertisers would agree that Facebook had game.17*5
Facebook also began letting commercial entities and even products create their own pages, and then, effectively, pay to gain lots of friends. This dovetailed with the site’s invention of the “like” button—a particularly brilliant idea that could be put to noncommercial use (like approving of a friend’s engagement or, weirdly, an announcement in memoriam) while also allowing companies to know precisely those in whom they’d instilled brand loyalty. Facebook also allowed advertisers to buy advertisements in users’ news feeds, giving its advertisements more contextual relevance. Finally, those “like” buttons also justified Facebook’s heavy investments in tracking technologies. Scattered around the web, they allowed Facebook to follow users wherever they wandered online, sending messages back to the mother ship (“She’s looking for cruises”). That would allow, for example, the Carnival Cruise Line to hit the user with cruise ads the moment she returned to Facebook, perhaps by throwing a sponsored “cruise” message into her “news feed.” Facebook relied on those tracking, or spying, technologies to improve the data it held on each and every user—a group rapidly approximating the size of the entire world’s buying public.*6
Ultimately, the public had struck a grand bargain with Facebook—not exactly unknowingly, but not with full cognizance either. Having been originally drawn to Facebook with the lure of finding friends, no one seemed to notice that this new attention merchant had inverted the industry’s usual terms of agreement. Its billions of users worldwide were simply handing over a treasure trove of detailed demographic data and exposing themselves to highly targeted advertising in return for what, exactly? The newspapers had offered reporting, CBS had offered I Love Lucy, Google helped you find your way on the information superhighway. But Facebook?
Well, Facebook gave you access to your “friends.” And much of the energy formerly devoted to blogs or other online projects was now channeled into upgrading one’s Facebook profile and, with it, the value of Facebook itself. In this way, the public became like renters willingly making extensive improvements to their landlord’s property, even as they were made to look at advertisements. Facebook’s ultimate success lay in this deeply ingenious scheme of attention arbitrage, by which it created a virtual attention plantation.
If they noticed, most seemed not to mind being used this way, especially since everyone was now on Facebook, not just the college crowd, but kids of all ages, their parents and grandparents—even some pets had their own page. At the same time, more and more people began to complain that being on the site made them unhappy. It should have been no surprise, given what we know about human nature and the way Facebook was first conceived to play on the social dynamics of anxious adolescents: watching the highlight reels of other people’s lives was bound to make you feel inadequate. Others found Facebook (like email) a compulsion in that same Skinneresque manner—usually disappointing, but rewarding occasionally enough to keep you hooked. A variety of studies—none entirely conclusive, to be sure—associated depressive symptoms with Facebook usage, one finding that “compared to browsing the Internet, Facebook is judged as less meaningful, less useful, and more of a waste of time, which then leads to a decrease in mood.” One is reminded of Marcuse’s observation that people in the industrialized West had “made their mutilation into their own liberty and satisfaction.”18
Facebook had supposedly replaced cyberspace with something more “real,” but what it created in fact was just another realm of unreality, one that, on account of looking real, was more misleading. Here was a place where friends always congratulated and celebrated; where couples did little but eat at nice restaurants, go on vacation, or announce engagements or newborns; and where children never cried or needed diaper changes or hit each other. On Facebook, all happy families were alike; the others may have each been unhappy in their own way, but they were not on Facebook. Of course, all human communication is slightly inauthentic, but in person or even on the telephone there are limits to our dissimulations. The sugared-cookie-cutter self-styling enabled by Facebook made America seem a Lake Wobegon online. In retrospect, the 1950s looked dark and angst-ridden by comparison.
* * *
*1 Zuckerberg was fortunate; a superficially similar stunt pulled off on the MIT network by a hacker named Aaron Swartz led to a federal indictment on multiple felony counts.
*2 Gates, incidentally, scored a 1590/1600 on his SAT, while Zuckerberg scored a perfect 1600.
*3 In her study, boyd quoted a white teenager who explained why she’d made the switch. “I’m not really into racism,” she said, “but I think that MySpace now is more like ghetto or whatever.” The many spammers, in boyd’s view, were like gangs, the excessive, eyesore advertising not unlike urban blight.
*4 In sales and advertising jargon, the “purchase funnel” refers to the steps consumers go through before making a purchase. Typical stages are “awareness,” “consideration,” and “conversion”—versions of the funnel have different steps.
*5 Especially after 2012 on mobile platforms, where, by good fortune, Facebook was one of the few companies putting large ads in front of consumers’ faces on iPhones and even Google’s Android phones.
*6 The foregoing is just an incomplete summary of the advertising technologies that Facebook devised over the 2010s. Others included inventions such as creating a “Lookalike Audience”—that is, allowing a company to use their existing customers to target those who are very similar, or look-alikes, based on everything that Facebook knows about them. It also, on mobile, began selling “install” and “engagement” ads that encouraged the installation or usage of apps.
CHAPTER 24
THE IMPORTANCE OF BEING MICROFAMOUS
In 2008 a young man named Rex Sorgatz moved from Seattle to New York City to seek his fortune in the New York tech business. “That was a weird time,” he later reflected, for “media and tech were clashing for the first time.” Perhaps not exactly sure what line of work he’d end up in, he printed a business card that listed everything he could do. “Rex Sorgatz,” it read, “creative technologist, strategist, entrepreneur, writer, designer, advisor, consultant.” Being recently from the West Coast, he was slightly ahead of the New Yorkers in some ways. He remembers telling people to try Twitter, but “they would just laugh, laugh, laugh at me.”1
With an odd manner, spiky hair, and a good sense of humor, Sorgatz turned out to be a pretty good fit in the New York scene, proving particularly popular with the ladies. But not long after his arrival, he noticed something weird about New York web entrepreneurs, bloggers, and associated hangers-on: most of them were trying to become famous. Well, not traditionally famous, in the manner of a Hollywood celebrity or the Queen of England. They were studiously seeking something else, which Sorgatz called “microfame” and others called “Internet famous.” Through their blogs, start-ups, cultivation of journalists, and endless rounds of parties, New York’s tech people pursued this goal with a kind of grim determination. In this, the scene was just different from the one back West, where glory belonged solely to those who wrote the best algorithm. There, people wanted to be rich. Here, everyone wanted to be Internet famous.
“When we say ‘microfamous,’ our inclination is to imagine a smaller form of celebrity, a lower life-form striving to become a mammal—the macrofamous or suprafamous, perhaps,” Sorgatz wrote with his accustomed wit. “But microfame is its own distinct species of celebrity, one in which both the subject and the ‘fans’ participate directly in the celebrity’s creation. Microfame extends beyond a creator’s body of work to include a community that leaves comments, publishes reaction videos, sends e-mails, and builds Internet reputations with links.”
That definition appeared in his 2008 article for New York magazine entitled “The Microfame Game,” which was ostensibly a guide to becoming microfamous. As Sorgatz explained, “Microfame is practically a science. It is attainable like running a marathon or acing the LSAT. All you need is a road map.” He recommended, among other things, “oversharing,” “self-publishing,” and one tip that Sorgatz may himself have followed: “separat[ing] yourself from the cacophony by being a little weird. Scratch that—really weird.”2
Meanwhile, Sorgatz himself says that he was most certainly not seeking microfame (“Oh dear God no.”). Yet having become the apparent expert on the subject, he did gain a measure of it. As he recounts, “When social media started to embed itself into people’s lives, I somehow appeared sage, so people associated it with me—for better and worse.” He had his blog readers, his Twitter followers, consulting deals aplenty, and profiles in the New York Observer and New York Times (the latter calling him a “Social Networking Butterfly”). In retrospect, he writes, “I definitely got caught up in some of the personal drama of that era, but the only thing I ever ‘wanted’ was to hang out with people who had unique ideas about the world.”
The oxymoron “microfame” is among those terms from the early 2000s, which include “blogging,” “hashtag,” and “selfie,” that would have made absolutely no sense to people from the last century. There was, once upon a time, a relatively clear line distinguishing the famous from normal people. Crossovers were extremely rare, like the rise of a star, or ephemeral, as in the case of Charles Van Doren or participants of the 1950s and 1960s show Queen for a Day. As People’s editor defined it, to be “famous,” and therefore worthy of a cover, meant having a face known to 80 percent of the public. Hence, Princess Diana was famous; Robert Redford was famous; but tech entrepreneurs, video bloggers, and those who took daily pictures of themselves were not, even if a surprising number of people recognized them.
Even the ancien régime did recognize gradations, however, and these found expression in the Ulmer Scale, created in the 1980s by a reporter named James Ulmer; he named it after himself, perhaps in his own small bid at microimmortality. The scale was designed to measure the celebrity status of actors, for the express purpose of estimating their “bankability” (i.e., how much value they added to a production just by appearing in it). In practice the scale divided famous actors into an A-list, B-list, and C-list, not unlike a bond-rating service on Wall Street. Ulmer called it a movie star “racing form.”3
In the early 2000s, the D-list entered common parlance as a loose category meant to cover a new kind of figure who was somehow famous but not in a way understood by existing metrics. As Gareth Palmer writes, those on the D-list occupied the “space between the unknown mass of ordinary people and the celebrity.”4 The D-listers had no bankability; their achievement, instead, was, as one writer put it, having “triumphed over obscurity.”5 The archetype was, of course, the reality television star, though it could include others like models, romantic partners of celebrities, or faded pop stars. To be D-listed was not necessarily flattering, for it seemed also to refer to those who didn’t know their place; whose efforts to become or remain famous were embarrassing and therefore worthy of broadcast for general amusement. Nonetheless, the very establishment of a D-list undeniably suggested that the line between famous and not was beginning to blur.
But as information technology grew more sophisticated, the D-list began to seem far too crude a measure; new tools, like powerful telescopes, could recognize faint glimmers of fame previously invisible. By the early 2000s, a Google search of someone’s name represented a significant metric of fame. Consider, say, Fred Savage, a former child star, with his 494,000 hits, versus Scarlett Johansson, actress, at 18.2 million, or the highly bankable George Clooney, 29.7 million.
But it was Twitter that would provide the first finely calibrated measurement of microfame, nanofame, and smaller trace levels. Not that this had been its founding vision exactly. Instead, its four quarreling founders, Jack Dorsey, Evan Williams, Biz Stone, and Noah Glass, had repackaged a fairly mundane idea, AOL’s “status update,” and made it easy to broadcast on the web. The first tweets were true status updates, nuggets of TMI, like “I am eating eggs for breakfast.” If it had launched later, Twitter might still be announcing breakfasts. But fortunately it arrived just when the enthusiasm for full-form blogging was beginning to wane, even though the taste for public self-expression persisted. Tweeting thus evolved to become blogging lite, a far less taxing form. With Twitter, one could post interesting links, thoughts, denouncements, cheers, and so on, just as on a blog, but with the 140-character limit it was never as much bother. At the time, much was made of the character limit as a quasi-poetical form. But in truth it was just easier. Where blogging demanded something close to professional dedication, on Twitter a sentence a day was good enough to keep a following engaged, and the famous could rely on a staffer to craft that sentence anyhow.
If there was an ingenious innovation, it was Twitter’s system of “followers”—anyone could “follow” anyone else and thereby receive their tweets, or posts, automatically. Unlike blogs, one did not need to go looking for new tweets; they just arrived. And by indicating interest, even though roughly, the follower system became the new measure of fame. Those of established celebrity amassed millions of followers, like the singer Katy Perry (83.2 million followers) or President Barack Obama (70.3 million). But Twitter was sufficiently sensitive to detect and indicate the smallest quantities. Rex Sorgatz, new in town, had his 10,000 followers. A fairly obscure tech founder whose companies never quite took off nonetheless had 80,000 Twitter followers and therefore a measure of fame within his world. And it might turn out that a given blogger was read widely enough to have roughly three times the followers of Fred Savage. But following was not genetically determined or written in stone. With ably managed utterances, one could grow one’s following, and with it one’s general sense of influence and currency in the new sector of the attention economy. Everyone felt compelled to tweet, and everyone thus submitted to being weighed in the balance: microlevels of fame could now be ascribed to print journalists, some scientists and professors, cable television pundits, minor politicians, outspoken venture capitalists—essentially anyone willing to shoot their mouth off to their micropublic. In this way, figures could remain unknown to 99 percent, 99.9 percent, or even 99.99 percent of the population and nonetheless be “famous” in the highly granular sense. Twitter thus sparked microfame, measured it, and threw fuel on the fire.