You May Also Like


by Tom Vanderbilt


  —

  This raises the question of how much our tastes evolve, on the wider social level, through more or less accidental, random processes, cultural “mutations” that are not necessarily better, just different. Music is filled with moments where mistakes became innovations (for example, the rise of hip-hop “scratching,” Cher’s exaggerated use of Auto-Tune in “Believe”), innovations that ultimately shifted taste. The first use of guitar distortion on a record is, like many creation stories, a matter of historical debate. Some guitarist no doubt had a piece of equipment that malfunctioned—or maybe he simply turned it up too loud—and found some pleasure in the resulting imperfection. Then someone else liked what he heard and decided to imitate it, putting his own gloss on it and pushing the effect further along.

  And so in a couple of decades you have gone from the slight (though certainly edgy at the time) buzz in a forgotten, proto-rock song like Goree Carter’s 1949 “Rock Awhile,” to the meatier growl of the Kinks’ “You Really Got Me” (fashioned by Dave Davies’s taking a razor blade to the amp), to the full-blown howl of Jimi Hendrix (now electronically engineered via a custom fuzz box and big Marshall amps). No guitarist really knew he would like it until it happened; otherwise, he would already have been playing that way. Even Pete Townshend’s act of smashing his guitar began as a “complete accident.” As Bourdieu once wrote, “To discover something to one’s taste is to discover oneself, to discover what…[o]ne had to say and didn’t know how to say, and, consequently, didn’t know.”

  Taste change is like Wall Street’s “random walk,” or the idea that the past is a shaky guide to the future. We expect convulsive change on the pop charts, but think of something like the most common colors in home furnishings, the most popular dog breeds, or the top baby names. In any given year, there would be a certain order. But this would almost certainly have been different five years earlier, just as it is sure to be different five years down the road. Could this turnover be explained, even predicted? I do not mean in the sense of which breeds or names or colors would rise and which would fall (because, per Wall Street’s “efficient market” hypothesis, if we knew what was going to be popular, it would already be). But could the rate of change be predicted? That is the promise of what has been called the “neutral model” of cultural change.

  The idea comes from a theory in genetics, revolutionary when it was introduced in 1968, which “predicted that the vast majority of evolutionary changes at the molecular level are caused not by selection but by random drift of selectively neutral mutants.” In other words, most changes in genes just happened. They arose not from external, functional selection pressures (for example, some factor of the local environment) but on their own, as if guided by some internal logic, one whose probabilities could be estimated.

  When applied to culture, the “neutral model” says that something like a list of breed popularity will regularly shift. Some dogs will suddenly become popular—not because some breed is inherently better than another or the upper classes suddenly favor one over another. Rather, popularity shifts through “random copying,” or one person wanting a dog because she saw another person with one. This was what R. Alexander Bentley, an anthropologist at England’s University of Durham, and his co-researchers found after they sifted through many years of breed registration data. Statistically, the dog breed popularity index follows a power law: A dozen or so top dogs command a majority of the registrations in each year. But what those dogs are is subject to change, and that change seems entirely random. A dog can rise from obscurity to popularity with no dedicated promotional campaign behind it; similarly, it can fall from popularity with no apparent explanation.
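
  The mechanics of “random copying” are simple enough to sketch. What follows is a minimal toy simulation of the idea in Python, not Bentley’s actual analysis; the parameters (1,000 agents, a 1 percent innovation rate, 200 generations) are invented for illustration. Each agent copies its choice from a randomly picked member of the previous generation, except that occasionally a brand-new variant appears:

    import random
    from collections import Counter

    def neutral_copying(n_agents=1000, mu=0.01, generations=200, seed=1):
        """Neutral random-copying model: no variant is 'better' than any
        other; popularity emerges from copying plus occasional innovation."""
        rng = random.Random(seed)
        population = list(range(n_agents))  # start with all-distinct variants
        next_label = n_agents               # labels for future innovations
        history = []
        for _ in range(generations):
            new_pop = []
            for _ in range(n_agents):
                if rng.random() < mu:       # cultural "mutation": a new variant
                    new_pop.append(next_label)
                    next_label += 1
                else:                       # random copying of someone's choice
                    new_pop.append(rng.choice(population))
            population = new_pop
            history.append(Counter(population))
        return history

    history = neutral_copying()
    top10 = history[-1].most_common(10)
    share = sum(count for _, count in top10) / sum(history[-1].values())
    print(f"top 10 variants hold {share:.0%} of the final generation")

  The printed share shows how concentrated popularity becomes under pure copying (the degree depends on the innovation rate mu); comparing the most_common(10) lists from generations far apart shows the identities of the leaders churning, with no selection anywhere in the model.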

  It is not as if the top dogs became popular, for instance, because they were intrinsically better dogs. A study that looked at positive breed characteristics (good behavior, longer life, fewer genetic disorders) and breed popularity found no link between the two. Sometimes, those bred to be least healthy rise most in popularity (call it “unnatural selection”). Humans often do not even seem to pick dogs that are functionally adaptive for humans. Harold Herzog, a co-author of Bentley’s and a professor of psychology at Western Carolina University, notes that Rottweilers surged from twenty-fifth place to become one of the United States’ most popular breeds within a decade. What followed was a steep rise in the number of people killed by Rottweilers and then, not surprisingly, a sharp decline in Rottweiler registrations.

  There are certainly cases of selective pressure on dog breed popularity. One of the strongest is movies for children: After Disney’s 101 Dalmatians and The Shaggy Dog, dalmatian and sheepdog registrations rose. The bigger the box office, the bigger the breed boost (though, in certain cases, the tail might have been wagging the dog, because the breed was already on the rise, which might be why it was chosen for the film). Movie tie-in breed fads, however, notes Herzog, are “the exception, not the rule,” and they have been losing strength. After Taco Bell’s famous Chihuahua ad campaign ran, he points out, Chihuahua registrations actually plummeted; what was at least initially good for Taco Bell sales was apparently bad for the breed. What about winning the Westminster Dog Show? This, after all, was said to explain the “fabulous rise in poodle popularity” in the 1950s. If it did then, it apparently does not anymore: Westminster winners do not seem to move the needle, breed-wise, in the years after their win.

  As with Edwin Long’s Babylonian Marriage Market, whatever breeds are currently top dogs—and however much we would like to think they are there because they are somehow best—the only thing that can be predicted of future taste is that it will change. Once, in a top-floor conference room in an art college in London, I witnessed a top-secret annual meeting held by Pantone, the color company, attended by color experts. These are people who do not just see the color black but can amiably chat about the “family of black.” Their goal was to forecast which colors would be big the next year. Like movie producers in search of an ideal dog—one starting to show up at the margins but not overexposed—the colorists were attuned to what was already gaining some steam or being employed in a new way (for example, “a good navy is going to fulfill the role that black used to play”). Having found the spark, the company’s color “forecasters” piled on the fuel.

  When the company predicted orange for the summer of 2011, for example, “you can look at what’s out in the marketplace—this red orange, or flame orange,” an executive at Firmenich, the flavor and fragrance company, later told me. It lurked on the new Camaro, the Sony Vaio computer, and Hugo Boss’s new Orange line. “You’re connecting dots here that are traceable,” the executive said. Like surfers, the forecasters were catching a wave that had already begun, and as with the complicated physics that explain those “rogue waves” that surge “out of nowhere,” orange, in all likelihood, was simply coughed up from an ocean of color possibility. Like rogue waves, popularity tends to be nonlinear: Once it gets going, it gets bigger than you would have predicted from its initial conditions (rogue waves “steal” energy from surrounding waves; popular dogs “steal” momentum from other dogs).

  —

  What makes the neutral model so compelling, suggests Bentley, is that it provides a way of thinking, at the wider “population level,” about why things like tastes just seem to come and go. Statistically, the rate of turnover on quite distinct indices of popularity—ranging from the Billboard Hot 100 to baby names to which “keywords” appear in academic papers in a given year—seems to look the same, as if there were some natural law of churn.
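
  One way to make that rate of churn concrete: pick a list size, compare the top list at successive time steps, and count the new entries (roughly the “turnover” statistic used in this literature). A small helper along those lines, reusing the history produced by the toy model sketched earlier; the function name and the top-10 cutoff are illustrative:

    def turnover(history, top_y=10):
        """Average number of new entries in the top-y list per generation,
        a simple measure of churn."""
        changes, prev = [], None
        for counts in history:
            current = {variant for variant, _ in counts.most_common(top_y)}
            if prev is not None:
                changes.append(len(current - prev))  # entries absent last time
            prev = current
        return sum(changes) / len(changes)

    print(f"new entries in the top 10 per generation: {turnover(history):.2f}")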

  With baby names, Bentley argues that even as the populations of countries grow, as new names are created (and others disappear), and as specific names rise and fall in popularity, the overall statistical shape of name popularity changes little, because of the way people randomly copy names from each other. Remember that from its origin in genetics, the neutral model says that genes cannot be under selection. They cannot be chosen for an “adaptive” reason, where one is “intrinsically” better than another. Are baby names, as Bentley argues, really “value-neutral cultural traits chosen proportionally from the population of existing names, created by ‘mutation’ and lost through sampling”?

  Baby names have long fascinated taste researchers. As Stanley Lieberson, a sociologist at Harvard University, has pointed out, names, unlike many other fashions, are generally for life. No advertisers are cajoling you into a particular name, and they are “value-neutral,” in terms of actual money. “It costs no more in dollars and cents to name a daughter Lauren or Elizabeth,” he writes, “than it does to name her Crystal or Tammy.” Names, notes Lieberson, were once largely bound up in tradition and social strictures; one took a family name or a name inspired by one’s religion—sometimes to the point where the naming pool was beginning to get a bit small. In the genetic model, they were strongly selected, particularly for boys (in nineteenth-century England, for example, a consistent flurry of Williams and Johns and Henrys). But in the late nineteenth century, names, like so many aspects of culture, were becoming increasingly based on individual choice: “on whether parents like or dislike the name.”

  Names went from tradition to fashion. And fashion, argues Lieberson, is driven by two large and distinct forces. The first is external factors, big societal ripple effects, like the way the name Jacqueline began to rise in the United States in 1961, thanks to the prominence of the famous First Lady. Such external explanations often do not hold up, however. A rise in biblical names, Lieberson has found, corresponded to a decline in church attendance; what’s more, the least religious people were using the names.

  More important, he suggests, are the “internal mechanisms” that drive taste changes even “in the absence of external shifts.” In what he calls the “ratchet effect,” some new, small taste change is introduced (like a simple change of letter in a name, from Jenny to Jenna). A mutation, as the genetic model would have it. Another taste change subtly expands on that, typically in a similar direction. So skirt lengths or hair get a bit longer and then a bit longer, until reaching some point of disutility—or just ridiculousness. It echoes Loewy: most advanced, yet acceptable.

  The attempt may be made to pin a taste change, after the fact, on some social factor (X became popular because Y). But it is often hard to escape the sense of sheer randomness and copying. Lieberson notes that boys’ names ending in n became popular in the second half of the twentieth century (possibly hitting their zenith in 1975, when Jason, dramatically less popular decades earlier, hit number 2). They then began to decline. What happened? It is not as if an n sound confers any more intrinsic worth or that we are biologically programmed to prefer n names—for why would its ascent have stopped? Rather, it is as if people, like those warblers, were hearing a sound, presumably liked the way it sounded, and so took it on themselves. One statistical analysis of a century’s worth of names found that, even after a name’s past popularity was taken into account, it was more likely to be used when a sound it contained had been popular the previous year.

  Once a sound is introduced to the naming picture, it opens the doors to “errors,” imitations that are slightly off: The popular 1970s moniker Jennifer, writes Lieberson, “generates interest” in a number of similar-sounding names (like Jessica). The event that kick-starts a popular sound can literally be a matter of chance: A study of naming patterns in the wake of hurricanes—whose names are randomly drawn from a list—found an increase in names sharing the first letter of the named hurricane. The bigger the hurricane, the bigger the increase (up to a point), simply because the “phoneme” was thrust in the air. This is not so different from the way a “genre” book that hits the New York Times best-seller list can boost sales for non-best-selling books of the same genre, as if once people had read one, they were subtly influenced to read others.

  You are probably protesting by now that this makes it sound as if we are all mindless drones marching in lockstep, naming our kids based on something we overheard at the grocery store or on the Weather Channel, doing things without any conscious thought. Indeed, critics of the neutral model insist there is almost always some kind of biased selection going on. Most common is popularity itself; what is popular gets reproduced because it is popular.

  But there is an opposing selection force as well: when people begin to not do something (choose a name, retweet a tweet) because they sense too many other people are doing it. Economists call this “nonfunctional demand,” or everything driving (or reducing) demand that has nothing to do with “the qualities inherent in the commodity.”

  While neutral drift says one choice is not somehow better than another, names often do have some intrinsic value. As one study showed, certain so-called racial names (for example, Latonya or Tremayne) were less likely to get callbacks on job interviews; another analysis found that having a German name after World War I made it harder to get a seat on the New York Stock Exchange (and fewer kids were named Wilhelm and Otto). Or they have perceived intrinsic value, like social cachet.*4 Names that appear to be neutrally distributed throughout the culture could be under some kind of “weak” selection pressure. Perhaps one parent, having read the novel We Need to Talk About Kevin, a mother’s tale of a violent son, decides not to give her child that name (thus reducing the chance someone else will copy her) because of a negative connotation that only a few may be aware of.

  When I raised this subject with Bentley, he insisted that this was precisely the value of the neutral model: If culture change viewed at the big, population-wide level looks as if random copying were driving everything, then that noisy statistical wallpaper makes an easier backdrop against which to see when selection pressures really are at work. Seen from above, a crowded, rush-hour highway looks as if every driver were essentially copying the others; the highway seems to drift along neutrally. But look closer, and one driver may be following another too closely, applying “selective pressure” that then influences the driver ahead. Taste is like traffic, actually—a large complex system with basic parameters and rules, a noisy feedback chamber where one does what others do and vice versa, in a way that is almost impossible to predict beyond knowing that at the end of the day a certain number of cars will have traveled down a stretch of road, just as a certain number of new songs will enter the Hot 100.

  All this leads to one last question. If taste moves along via imitative social learning, whether random or not, whether “biased” or not, what happens when people—thanks to the Internet—have ever more opportunity to see, in ever finer detail, what other people are doing?

  —

  When I was a teenager in the 1980s, I tuned one day, by accident, to a station on the far left of the dial and discovered a show playing punk rock and other eclectic forms of music. I felt as if I had walked into a private conversation being conducted in another language: Here were songs I had never heard (my tastes were admittedly quite conventional) that sounded little like anything I had heard before.

  As I quickly became a fan of this strange cacophony, I realized how time-consuming the pursuit was: long hours spent tracking down obscure albums in obscure record stores in obscure parts of town, driving to sweaty all-ages shows in not-quite-up-to-code social halls, talking to the few other kids in my school who knew what I was even talking about, never having a sense of how many people in other towns might like this same music. The whole time, I nursed a conviction that if only more people knew about this music, it would become more popular (leaving aside the awkward question, per optimal distinctiveness, of whether my own liking for it would decline because more people liked it).

  Things are now incredibly different. The Internet means that one click can access most of the world’s music; via chat rooms and other forums, fans of the most rarefied genres can find each other; technology has blown open distribution bottlenecks, making it cheaper and easier for anyone to put a recording out into the world. As the Echo Nest showed, entire genres could spring up virtually overnight and find fans.

  In theory, my teenage hope had come alive: There was little, physically, preventing anyone from listening to anything. Music was horizontal: It took no more effort to listen to something obscure than to something popular. Perhaps, as I had imagined, the formerly less popular would become more popular, at the expense of the already popular, which would decline in importance as more people found more things on the “long tail” to listen to. At the very least, the hits on the radio, the ones you quickly grew tired of hearing so often, would turn over faster because of the sheer increase in new material.

  This is not necessarily how it turned out, as I learned in speaking to Chris Molanphy, a music critic and obsessive analyst of the pop charts. “There was this big theory that all this sort of democracy in action, this capturing of people’s taste, was going to lead to more turnover, not less,” he said. “In fact, if you watch the chart, it’s totally the opposite. The big have gotten bigger.” It is true that music sales as a whole declined in the new digital environment, but it was the albums further down the charts—from positions 200 to 800—that fared worst. Hit songs, meanwhile, gobbled up even more of the overall music market than they did before the Internet. The curving long-tail chart, as he put it, now looks more like a right angle. “It’s kind of like once the nation has decided that we’re all interested in ‘Fancy’ by Iggy Azalea or ‘Happy’ by Pharrell”—to name two pop hits of 2014—“we’re all listening to it.”

  He calls these “snowball smashes”: They gather momentum and pick up everything in their wake. With more momentum comes more staying power. The song “Radioactive” by Imagine Dragons lingered on the Hot 100, “Billboard’s flagship pop chart,” for two years. By contrast, a song like the Beatles’ “Yesterday”—as Molanphy notes, “the most covered song of all time”—lasted a mere eleven weeks on the charts.

 
