
You May Also Like


by Tom Vanderbilt


  It is not just that popularity can be self-fulfilling; it is that not being popular is even more so. In his classic 1963 book, Formal Theories of Mass Behavior, the social scientist William McPhee introduced a theory he called “double jeopardy.” He was struck, looking at things like polls of movie star appeal and the popularity of radio shows, that when some cultural product was less popular, it was not only less well known (and thus less likely to be chosen) but less chosen by those who actually knew it—hence the double jeopardy. Did this mean the pop charts worked, that the best rose to the top? Not necessarily. McPhee speculated that the “lesser known alternative is known to people who know too many competitive alternatives.” The favorites, by contrast, “become known to the kind of people who, in making choices, know little else to choose from.” In other words, the sorts of people who listen to more obscure music probably like a lot of music a little, whereas the most devoted listeners of the Top 10 tend to concentrate their love. Through sheer statistical distribution, McPhee suggested, a “natural” monopoly emerged.

  If this was already the case decades ago, why have things gotten so much more top-heavy, so much more sticky? It could be, as I discussed in chapter 3, that having the world’s music in your pocket is too overwhelming, the blank search box of what to play next too terrifying, and so people take refuge in the exceedingly familiar. Or it could be that the more we know about what people are listening to—via new routes of social media—the more we are also listening.

  This was what the network scientist Duncan Watts and colleagues found in a famous 2006 experiment. Groups of people were given the chance to download songs for free from a Web site after they had listened to and ranked the songs. When the participants could see what previous downloaders had chosen, they were more likely to follow that behavior—so “popular” songs became more popular, less popular songs became less so. These socially influenced choices were more unpredictable; it became harder to tell how a song would fare in popularity from its reported quality. When people made choices on their own, the choices were less unequal and more predictable; people were more likely to simply choose the songs they said were best. Knowing what other listeners did was not enough to completely reorder people’s musical taste. As Watts and his co-author Matthew Salganik wrote, “The ‘best’ songs never do very badly, and the ‘worst’ songs never do extremely well.” But when others’ choices were visible, there was greater chance for the less good to do better, and vice versa. “When individual decisions are subject to social influence,” they write, “markets do not simply aggregate pre-existing individual preference.” The pop chart, in other words, just like taste itself, does not operate in a vacuum.
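  The dynamic Watts and Salganik describe can be sketched in a few lines of code. In the toy market below, every song has a fixed intrinsic quality; in the "social" condition a listener's choice is also weighted by how many downloads a song has already racked up, while in the "independent" condition quality alone decides. The function names and parameters here are illustrative inventions, and the model is far simpler than the actual Music Lab experiment, but it reproduces the headline result: visible popularity makes the outcome much more unequal.

```python
import random

def run_market(n_songs=50, n_listeners=1000, social=True, seed=0):
    """Toy market: each listener picks one song to 'download'.

    Each song has a fixed intrinsic quality. In the social condition,
    a listener's choice is weighted by quality times the song's current
    download count (cumulative advantage); in the independent condition,
    quality alone drives the choice. A simplified sketch, not the
    actual Music Lab protocol.
    """
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_songs)]
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        if social:
            weights = [q * (1 + d) for q, d in zip(quality, downloads)]
        else:
            weights = quality
        choice = rng.choices(range(n_songs), weights=weights)[0]
        downloads[choice] += 1
    return quality, downloads

def gini(xs):
    """Gini coefficient: 0 = perfectly even, 1 = winner-take-all."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

_, indep = run_market(social=False)
_, soc = run_market(social=True)
print(f"independent condition, Gini: {gini(indep):.2f}")
print(f"social condition, Gini:      {gini(soc):.2f}")
```

  Run it repeatedly with different seeds and the social condition stays more concentrated on average, while which particular song ends up on top tends to vary far more than its quality alone would predict, which is the unpredictability Watts and Salganik reported.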

  The route to the top of the charts has in theory gotten more democratic, less top-down, more unpredictable: It took a viral video to help make Pharrell’s “Happy” a hit a year after the fact. But the hierarchy of popularity at the top, once established, is steeper than ever. In 2013, it was estimated that the top 1 percent of music acts took home 77 percent of all music income.

  While record companies still try to engineer popularity, Molanphy argues it is “the general public infecting each other who now decide if something is a hit.” The inescapable viral sensation “Gangnam Style,” he notes, was virtually forced onto radio, where it became the number 12 song in the United States (without even factoring in YouTube, where it was mostly played). “Nobody manipulated that into being; that was clearly the general public being charmed by this goofy video and telling each other, ‘You’ve got to watch this video.’ ” The snowball effect, he suggests, is reflected in radio. “Blurred Lines,” the most played song of 2013 in the United States, was played twice as much as the most played song of 2003.

  This is in sharp contrast to the 1970s, the period in which I did my most obsessive Top 40 listening, when it was an industry truism that, as the veteran radio consultant Sean Ross put it to me, after what could seem an unendurably long wait, “you heard your favorite song and you turned off the radio—your mission was accomplished.” Molanphy suggests that if radio then had had the access to sales and listening data that it does now, it would have played those favorite songs much more than it actually did, and a song like “Yesterday” would have spent more time on the charts. What ever-sharper, real-time data about people’s actual listening behavior do is reinforce the feedback loop ever more strongly. “We always knew that people liked the familiar,” he says. “Now we know exactly when they flip the station and, wow, if they don’t already know a song, they really flip the station.” There is an almost desperate attempt to convert, as fast as possible, the new into the familiar.

  —

  Pop songs have always been fleeting affairs. What about baby names, which are presumably more organic and enduring? Here, popularity has become more evenly distributed. As the researchers Todd Gureckis and Robert Goldstone point out, the name Robert was the “snowball smash” of 1880: Nearly one in ten baby boys was named Robert. By contrast, Jacob, 2007’s top name, only reached 1.1 percent of boys. The most popular names, they note, have lost “market share.” But something else changed over those years. At the turn of the twentieth century, the names at the top fluctuated rather randomly, because, one might imagine, more families with fathers named Robert happened to have boys that year.

  In the last few decades, however, a statistical pattern emerged in which the direction a name was headed in one year tended to predict—at a level greater than chance—where it was going the next year. If Tom was falling this year, Tom was likely to keep falling next year. Names acquired momentum. As naming lost the weight of cultural tradition, where did people look when making their choice? To each other. In 1880, even if names were freely chosen, it would have taken a while for name popularity to spread. But now, as parents-to-be visit data-heavy baby name Web sites or try out suggestive names on Facebook, they seem to be able to mystically divine where a name is headed and can latch on to a rising name (as long as it is not rising too quickly, for that is taken as a negative signal of faddishness) and stray from one that is falling. It is like trying to buy long-term stocks amid the noise of short-term volatility.

  Something similar is happening in both pop music and naming. Things have at once become more horizontal—there are ever more songs to hear, ever more possible names to choose from—and more “spiky,” as if, in the face of all that choice, people gravitate toward what others seem to be doing. Social learning has become hyper-social learning. In his famous 1930 tract, The Revolt of the Masses, the Spanish philosopher José Ortega y Gasset described how “the world had suddenly grown larger.” Thanks to modern media, he noted, “each individual habitually lives the life of the whole world.” People in Seville could follow, as he described, “what was happening to a few men near the North Pole.” We also had vastly increased access to things: “The range of possibilities opened out before the present-day purchaser has become practically limitless.” There was a “leveling” among social classes, which opened up “vital possibilities,” but also a “strange combination of power and insecurity which [had] taken up its abode in the soul of modern man.” He feels, he wrote, “lost in his own abundance.”

  Ortega’s vision seems quaint now. Simply to live in a large city like New York is to dwell among a maelstrom of options: There are said to be—by many orders of magnitude—more choices of things to buy in New York than there are recorded species on the planet. As Bentley put it to me, “By my recent count there were 3,500 different laptops on the market. How does anyone make a ‘utility-maximizing’ choice among all those?” The cost of learning which one is truly best is almost beyond the individual; there may, in fact, actually be little that separates them in terms of quality, so any one purchase over another might simply reflect random copying (here is the “neutral drift” at work again, he argues). It is better to say—here he borrows the line from When Harry Met Sally—“I’ll have what she’s having.”
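  Bentley’s “random copying” argument, that popularity differences can emerge even when nothing separates the products in quality, is easy to see in a minimal neutral-drift simulation. Everything below (the population size, the innovation rate `mu`, the function name) is an illustrative assumption rather than Bentley’s actual model: agents simply copy a randomly chosen other agent’s choice, occasionally inventing something new, and a skewed popularity distribution appears anyway.

```python
import random
from collections import Counter

def neutral_drift(n_agents=200, n_steps=300, mu=0.005, seed=1):
    """Random-copying ('neutral') model: nobody evaluates quality.

    Each step, every agent either copies the current choice of a
    randomly chosen agent (probability 1 - mu) or innovates with a
    brand-new variant (probability mu). Popularity differences emerge
    from blind copying alone. A sketch of the kind of model Bentley
    invokes, not his exact implementation.
    """
    rng = random.Random(seed)
    next_variant = n_agents
    pop = list(range(n_agents))            # everyone starts unique
    for _ in range(n_steps):
        snapshot = list(pop)               # copy from last step's state
        for i in range(n_agents):
            if rng.random() < mu:
                pop[i] = next_variant      # innovation: a new variant
                next_variant += 1
            else:
                pop[i] = rng.choice(snapshot)  # blind copying
    return Counter(pop)

counts = neutral_drift()
print("distinct variants remaining:", len(counts))
print("top variants by share:",
      [(v, round(c / 200, 2)) for v, c in counts.most_common(5)])
```

  Although no variant is better than any other, a few variants end up with a disproportionate share of the “market,” popularity produced by copying alone.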

  And boy do we know what she’s having. If, for Ortega, journalistic dispatches from explorers seemed to thrust one into a vertiginous global gyre, what would he make of the current situation, where a flurry of tweets comes even before the breaking news announcements, which then turn into wall-to-wall coverage, followed by a think piece in the next day’s newspaper? He would have to factor in social media, in which it often seems as if we were really living “the life of the whole world”; one has a peripheral, real-time awareness of any number of people’s whereabouts, achievements, status updates, via any number of platforms.

  Ortega called this “the increase of life,” even if it often seems to come with the cost of time in one’s own life, or indeed our happiness (studies suggest social media can be bad for one’s self-esteem). If media (large broadcasters creating audiences) helped define his age of mass society, social media (audiences creating ever more audiences) help define our age of mass individualism. The Internet is exponential social learning: You have ever more ways to learn what other people are doing; how many of the more than thirteen thousand reviews of the Bellagio in Las Vegas do you need to read on TripAdvisor before making a decision? There are ever more ways to learn that what you are doing is not good enough or was already done last week by someone else, that what you like or even who you like is also liked by some random person you have never met. It is social learning by proxy. Remotely seeing the perfect Instagram post of an artisanal pastry in San Francisco engenders a “frenzy” in others to consume it, not unlike Julie’s grass-in-the-ear trick.

  People have always wanted to be around other people and to learn from them. Cities have long been dynamos of social possibility, foundries of art, music, and fashion. Slang, or, if you prefer, “lexical innovation,” has always started in cities—an outgrowth of all those different, densely packed people so frequently exposed to one another. It spreads outward, in a manner not unlike infectious disease, which itself typically “takes off” in cities. If, as the noted linguist Leonard Bloomfield contended, the way a person talks is a “composite result of what he has heard before,” then language innovation would happen where the most people heard and talked to the most other people. Cities drive taste change because they offer the greatest exposure to other people, who not surprisingly are often the creative people cities seem to attract. Media, ever more global, ever more penetrating, spread language faster to more people (to cite just one example, the number of entries in Japanese “loanword” dictionaries—words “borrowed” from English—more than doubled from the 1970s to 2000).

  With the Internet, we have a kind of city of the mind, a medium that people do not just consume but inhabit, even if it often seems to replicate and extend existing cities (New Yorkers, already physically exposed to so many other people, use Twitter the most). As Bentley has argued, “Living and working online, people have perhaps never copied each other so profusely (since it usually costs nothing), so accurately, and so indiscriminately.” Things spread faster and more cheaply; more people can copy from more people.

  But how do we know what to copy and from whom? The old ways of knowing what we should like—everything from radio station programmers to restaurant guides to book critics to brands themselves—have been supplanted by Ortega’s “multitudes,” acting not en masse but as a mass of individuals, connected but apart, unified but disparate. Whom to follow? What to choose? Whom can you trust?

  This is why things have become both flatter and spikier: In an infinite realm of choice, our choices often seem to cluster by default toward those we can see others making (or away from those we sense too many are choosing). Whatever the direction, experimental work has shown that when “wise crowds” can see what others in the crowd are thinking, when there is too much “social influence,” people start to think more like one another (and not like the “ideal judges” whom we are about to visit in the next chapter). They take less information into account to make their decisions yet are more confident that what they are thinking is the truth—because more people seem to think that way. As in a high-frequency trading market, social imitation has gotten easier, faster, and more volatile; all those micro-motives of trying to be like others and yet different can intensify into explosive bursts of macro-behavior. The big waves have gotten bigger, and we know that they will come, but it is harder to tell from where in the vast and random ocean surface they will swell.

  * * *

  *1 There are also just those episodes of sheer randomness, such as the “accidental hipster,” as a friend once dubbed it, the old guy at the bus stop wearing the thrift store clothes—for him an economic necessity—that happen to be the same ones currently fetishized by distinctiveness-seeking hipsters.

  *2 Social learning can, of course, be maladaptive. Everyone “learned” to smoke from someone else; some even learned to smoke on the advice of health professionals.

  *3 Namely, “Western, Educated, Industrialized, Rich, and Democratic” countries, a construction made popular by Henrich.

  *4 In my own Brooklyn neighborhood, I often have the sense parents are rather overbrandishing their children’s names, like product placements in their own lifestyle marketing campaign.

  CHAPTER 6

  BEER, CATS, AND DIRT

  HOW DO EXPERTS DECIDE WHAT’S GOOD?

  UP TO STANDARD: WHAT MAKES THE IDEAL IDEAL

  I have been describing to you, over the last several hundred pages, how our tastes are so elusive, even to us; how they are inevitably malleable to social influence; what a fleeting grasp we have of the things we put in our mouths or before our eyes. If all this was really so messy, I began to think, it seemed worth spending some time with people who need to reasonably think about, and compellingly articulate, why they like things—or, at least, explain why certain things are not only good (and I would argue one does not generally like what one does not think is good), but better than other things. I am talking about judges in competitions. Surely they would be able to cut steely-eyed through our fog of proclivities and bring crystalline neutrality to the murky thickets of taste. What might we learn from them to bring more clarity to our own liking?

  Let us begin with a simple inquiry, about something with which most of us have at least a passing familiarity: What makes a good cat? To find out, I have traveled to Paris, where, in a small conference center in the twelfth arrondissement, the Salon International du Chat is under way. Despite its grandiose title, it seems a pretty regional affair, a medium-sized hall’s worth of blue-eyed Ragdolls and fluffy woolen Selkirk Rexes and sleekly poised European Burmese. Perhaps sensing it is getting away with something, a Seeing Eye dog leads its owner through the show aisles, but even the presence of this large hound does not measurably stir these unperturbed show cats.

  I am not here because this is a particularly important cat show, and cat shows, it must be said, are, like their owners, more low-key than dog shows. Rather, I am here because one of the judges, a Dutchman named Peter Moormann, happens to be not only a cat judge but a professor of psychology at the University of Leiden in the Netherlands. To bring it full circle, he has investigated the psychology of judges in competitions.

  Moormann, whose swept-back, flowing silver hair and sympathetic eyes give him an air of elegant Continental authority, got into cats around the same time he got into psychology. Born in colonial Indonesia to parents who were survivors of the Burma railroad and Japanese prisoner-of-war camps, he fled with his family to Holland. There, some old friends from Indonesia were raising Persian cats. Because he seemed to have an affinity for handling animals, they asked him along to a show. He steadily climbed through the cat show ranks: steward, pupil judge, judge. Meanwhile, he was a psychology student and a champion skater; first roller, then ice. (“I have always tried to combine things in life,” he said.) His dissertation was on the psychology of figure skating performance, not a hard sell in the skating-mad Netherlands. It included a chapter on “involuntary bias when judging figure skating performance,” which presumably he tried to rein in during the multiple times he was a judge on the television program Sterren Dansen op het IJs, the Dutch version of Dancing on Ice.

  As I sat next to Moormann at a folding table in the judge’s area, a procession of owners, smiling and expectant, presented their cats to him. The first thing that became apparent was that as soon as the cats were on the table, the dynamics of just who was being judged were called into question. The cats seemed to pull off an astonishing double axel of casual haughtiness: They at once looked as if they owned the place and appeared vaguely annoyed for having been deposited here, in front of this cheery Dutchman waving a feather at them. The feather is a common judge’s ploy to get the cats to, in essence, be cats. As one judge had described it to me, “You get the toy out, you want to see expression, the ears up.”

  One is hesitant to assign national characteristics to man or animal, but it is difficult to resist seeing in these French cats something of the famous and colossal disregard exhibited by French waiters, who will look upon you with an almost sympathetic glance as you wait for service, as if they were watching the playing out of an existential drama over which they have no control. Some cats glance at Moormann’s feather with the world-weary pity the garçon exhibits toward a patron trying to signal for the check.

  Moormann poked and prodded, feeling for skull deformations, scanning for “tail faults” or indistinct markings, probing for missing testicles. As with a used car, looks can be deceiving. Some breeds are even dismissed as being a “paint job,” simply daubed up with a new coat color or pattern. A lot of the judging is done by feel: the length of the cat, the muscle tone, or whether, as one judge told me, “they have a funky front end.” As he examined, Moormann occasionally issued a word of encouragement like “bonne” or “très expressif.” “All cats,” he declared, “have something you can penalize. No cat is perfect.” But neither is any judge a robot. Sitting across from him is a human owner, who has paid money, some of which has gone to bringing the judges to this show. “You almost want to make the person feel…” He searched for the word. “Happy.”

 
