The Better Angels of Our Nature: Why Violence Has Declined

by Steven Pinker

  Whether you call it herd behavior, the cultural echo chamber, the rich get richer, or the Matthew Effect, our tendency to go with the crowd can lead to an outcome that is collectively undesirable. But the cultural products in these examples—buggy software, mediocre novels, 1970s fashion—are fairly innocuous. Can the propagation of conformity through social networks actually lead people to sign on to ideologies they don’t find compelling and carry out acts they think are downright wrong? Ever since the rise of Hitler, a debate has raged between two positions that seem equally unacceptable: that Hitler single-handedly duped an innocent nation, and that the Germans would have carried out the Holocaust without him. Careful analyses of social dynamics show that neither explanation is exactly right, but that it’s easier for a fanatical ideology to take over a population than common sense would allow.

  There is a maddening phenomenon of social dynamics variously called pluralistic ignorance, the spiral of silence, and the Abilene paradox, after an anecdote in which a Texan family takes an unpleasant trip to Abilene one hot afternoon because each member thinks the others want to go.274 People may endorse a practice or opinion they deplore because they mistakenly think that everyone else favors it. A classic example is the value that college students place on drinking till they puke. In many surveys it turns out that every student, questioned privately, thinks that binge drinking is a terrible idea, but each is convinced that his peers think it’s cool. Other surveys have suggested that gay-bashing by young toughs, racial segregation in the American South, honor killings of unchaste women in Islamic societies, and tolerance of the terrorist group ETA among Basque citizens of France and Spain may owe their longevity to spirals of silence.275 The supporters of each of these forms of group violence did not think it was a good idea so much as they thought that everyone else thought it was a good idea.

  Can pluralistic ignorance explain how extreme ideologies may take root among people who ought to know better? Social psychologists have long known that it can happen with simple judgments of fact. In another hall-of-fame experiment, Solomon Asch placed his participants in a dilemma right out of the movie Gaslight.276 Seated around a table with seven other participants (as usual, stooges), they were asked to indicate which of three very different lines had the same length as a target line, an easy call. The six stooges who answered before the participant each gave a patently wrong answer. When their turn came, three-quarters of the real participants defied their own eyeballs and went with the crowd on at least one trial.

  But it takes more than the public endorsement of a private falsehood to set off the madness of crowds. Pluralistic ignorance is a house of cards. As the story of the Emperor’s New Clothes makes clear, all it takes is one little boy to break the spiral of silence, and a false consensus will implode. Once the emperor’s nakedness became common knowledge, pluralistic ignorance was no longer possible. The sociologist Michael Macy suggests that for pluralistic ignorance to be robust against little boys and other truth-tellers, it needs an additional ingredient: enforcement.277 People not only avow a preposterous belief that they think everyone else avows, but they punish those who fail to avow it, largely out of the belief—also false—that everyone else wants it enforced. Macy and his colleagues speculate that false conformity and false enforcement can reinforce each other, creating a vicious circle that can entrap a population into an ideology that few of them accept individually.

  Why would someone punish a heretic who disavows a belief that the person himself or herself rejects? Macy et al. speculate that it’s to prove their sincerity—to show other enforcers that they are not endorsing a party line out of expedience but believe it in their hearts. That shields them from punishments by their fellows—who may, paradoxically, only be punishing heretics out of fear that they will be punished if they don’t.

  The suggestion that unsupportable ideologies can levitate in midair by vicious circles of punishment of those who fail to punish has some history behind it. During witch hunts and purges, people get caught up in cycles of preemptive denunciation. Everyone tries to out a hidden heretic before the heretic outs him. Signs of heartfelt conviction become a precious commodity. Solzhenitsyn recounted a party conference in Moscow that ended with a tribute to Stalin. Everyone stood and clapped wildly for three minutes, then four, then five . . . and then no one dared to be the first to stop. After eleven minutes of increasingly stinging palms, a factory director on the platform finally sat down, followed by the rest of the grateful assembly. He was arrested that evening and sent to the gulag for ten years.278 People in totalitarian regimes have to cultivate thoroughgoing thought control lest their true feelings betray them. Jung Chang, a former Red Guard and then a historian and memoirist of life under Mao, wrote that on seeing a poster that praised Mao’s mother for giving money to the poor, she found herself quashing the heretical thought that the great leader’s parents had been rich peasants, the kind of people now denounced as class enemies. Years later, when she heard a public announcement that Mao had died, she had to muster every ounce of thespian ability to pretend to cry.279

  To show that a spiral of insincere enforcement can ensconce an unpopular belief, Macy, together with his collaborators Damon Centola and Robb Willer, first had to show that the theory was not just plausible but mathematically sound. It’s easy to prove that pluralistic ignorance, once it is in place, is a stable equilibrium, because no one has an incentive to be the only deviant in a population of enforcers. The trick is to show how a society can get there from here. Hans Christian Andersen had his readers suspend disbelief in his whimsical premise that an emperor could be hoodwinked into parading around naked; Asch paid his stooges to lie. But how could a false consensus entrench itself in a more realistic world?
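
  The equilibrium half of that claim can be stated in one line. In toy notation of my own (not the authors’): suppose complying with the norm costs a skeptic c, and each enforcing neighbor punishes a visible deviant by p. A lone deviant surrounded by k enforcers then conforms whenever c < k × p, an inequality that is easy to satisfy once enforcers are numerous, even if each punishment is mild. Since no one gains by deviating alone, universal false compliance sustains itself; the puzzle is how a population gets there to begin with.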

  In a computer, the three sociologists simulated a little society consisting of two kinds of agents.280 There were true believers, who always comply with a norm and denounce noncompliant neighbors if those neighbors grow too numerous. And there were private but pusillanimous skeptics, who comply with a norm if a few of their neighbors are enforcing it, and enforce the norm themselves if a lot of their neighbors are enforcing it. If these skeptics aren’t bullied into conforming, they can go the other way and enforce skepticism among their conforming neighbors. Macy and his collaborators found that unpopular norms can become entrenched in some, but not all, patterns of social connectedness. If the true believers are scattered throughout the population and everyone can interact with everyone else, the population is immune to being taken over by an unpopular belief. But if the true believers are clustered within a neighborhood, they can enforce the norm among their more skeptical neighbors, who, overestimating the degree of compliance around them and eager to prove that they do not deserve to be sanctioned, enforce the norm against each other and against their neighbors. This can set off cascades of false compliance and false enforcement that saturate the entire society.
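
  The logic of the simulation is simple enough to reconstruct in miniature. The following sketch in Python is my own toy version, not the authors’ published code; the grid size, the eight-cell neighborhood, and the two thresholds (comply at one enforcing neighbor, enforce at two) are illustrative assumptions chosen to make the clustered-versus-scattered contrast visible.

    GRID = 40          # a 40 x 40 lattice of agents, wrapping at the edges
    T_COMPLY = 1       # a skeptic complies if >= 1 neighbor is enforcing
    T_ENFORCE = 2      # a skeptic enforces if >= 2 neighbors are enforcing
    STEPS = 60

    def neighbors(i, j):
        """The eight surrounding cells (Moore neighborhood) on a torus."""
        return [((i + di) % GRID, (j + dj) % GRID)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

    def run(clustered):
        # Sixty-four true believers: one tight 8 x 8 block, or scattered
        # evenly so that no two are adjacent.
        if clustered:
            believers = {(i, j) for i in range(8) for j in range(8)}
        else:
            believers = {(5 * i, 5 * j) for i in range(8) for j in range(8)}
        enforcing = set(believers)          # believers always enforce
        complying = set(believers)
        for _ in range(STEPS):
            new_comp, new_enf = set(believers), set(believers)
            for i in range(GRID):
                for j in range(GRID):
                    if (i, j) in believers:
                        continue
                    pressure = sum(n in enforcing for n in neighbors(i, j))
                    if pressure >= T_COMPLY:
                        new_comp.add((i, j))   # knuckle under
                    if pressure >= T_ENFORCE:
                        new_enf.add((i, j))    # punish deviants to look sincere
            complying, enforcing = new_comp, new_enf
        return len(complying), len(enforcing)

    for clustered in (True, False):
        c, e = run(clustered)
        label = "clustered" if clustered else "scattered"
        print(f"{label} believers: {c}/{GRID * GRID} comply, {e} enforce")

  Run as written, the clustered block sets off a wave of enforcement that saturates the entire grid, while the same sixty-four believers scattered evenly win compliance only from their immediate neighbors and recruit no enforcers at all.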

  The analogy to real societies is not far-fetched. James Payne documented a common sequence in the takeover of Germany, Italy, and Japan by fascist ideologies in the 20th century. In each case a small group of fanatics embraced a “naïve, vigorous ideology that justifies extreme measures, including violence,” recruited gangs of thugs willing to carry out the violence, and intimidated growing segments of the rest of the populations into acquiescence.281

  Macy and his collaborators played with another phenomenon that was first discovered by Milgram: the fact that every member of a large population is connected to everyone else by a short chain of mutual acquaintances—six degrees of separation, according to the popular meme.282 They laced their virtual society with a few random long-distance connections, which allowed agents to be in touch with other agents with fewer degrees of separation. Agents could thereby sample the compliance of agents in other neighborhoods, disabuse themselves of a false consensus, and resist the pressure to comply or enforce. The opening up of neighborhoods by long-distance channels dissipated the enforcement of the fanatics and prevented them from intimidating enough conformists into setting off a wave that could swamp the society. One is tempted toward the moral that open societies with freedom of speech and movement and well-developed channels of communication are less likely to fall under the sway of delusional ideologies.
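
  Continuing the toy sketch above (again my illustration, not the published model), the long-distance connections can be added by letting each tie be rewired, with some probability, to a random agent anywhere on the grid:

    import random

    REWIRE = 0.2   # assumed probability that a tie is long-distance

    def sampled_contacts(i, j):
        """Like neighbors(), but each local tie is replaced, with
        probability REWIRE, by a contact drawn from the whole grid."""
        contacts = []
        for n in neighbors(i, j):
            if random.random() < REWIRE:
                contacts.append((random.randrange(GRID),
                                 random.randrange(GRID)))
            else:
                contacts.append(n)
        return contacts

  Substituting sampled_contacts for neighbors in run() reproduces the qualitative effect: skeptics at the edge of the believers’ cluster now sample some distant agents, who are mostly not enforcing, so the local pressure more often falls short of the enforcement threshold; push REWIRE high enough and the wave dies out before it can swamp the grid.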

  Macy, Willer, and Ko Kuwabara then wanted to show the false-consensus effect in real people—that is, to see if people could be cowed into criticizing other people whom they actually agreed with if they feared that everyone else would look down on them for expressing their true beliefs.283 The sociologists mischievously chose two domains where they suspected that opinions are shaped more by a terror of appearing unsophisticated than by standards of objective merit: wine-tasting and academic scholarship.

  In the wine-tasting study, Macy et al. first whipped their participants into a self-conscious lather by telling them they were part of a group that had been selected for its sophistication in appreciating fine art. The group would now take part in the “centuries-old tradition” (in fact, concocted by the experimenters) called a Dutch Round. A circle of wine enthusiasts first evaluate a set of wines, and then evaluate one another’s wine-judging abilities. Each participant was given three cups of wine and asked to grade them on bouquet, flavor, aftertaste, robustness, and overall quality. In fact, the three cups had been poured from the same bottle, and one was spiked with vinegar. As in the Asch experiment, the participants, before being asked for their own judgments, witnessed the judgments of four stooges, who rated the vinegary sample higher than one of the unadulterated samples, and rated the other one best of all. Not surprisingly, about half the participants defied their own taste buds and went with the consensus.

  Then a sixth participant, also a stooge, rated the wines accurately. Now it was time for the participants to evaluate one another, which some did confidentially and others did publicly. The participants who gave their ratings confidentially respected the accuracy of the honest stooge and gave him high marks, even if they themselves had been browbeaten into conforming. But those who had to offer their ratings publicly compounded their hypocrisy by downgrading the honest rater.

  The experiment on academic writing was similar, but with an additional measure at the end. The participants, all undergraduates, were told they had been selected as part of an elite group of promising scholars. They had been assembled, they learned, to take part in the venerable tradition called the Bloomsbury Literary Roundtable, in which readers publicly evaluate a text and then evaluate each other’s evaluation skills. They were given a short passage to read by Robert Nelson, Ph.D., a MacArthur “genius grant” recipient and Albert W. Newcombe Professor of Philosophy at Harvard University. (There is no such professor or professorship.) The passage, called “Differential Topology and Homology,” had been excerpted from Alan Sokal’s “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” The essay was in fact the centerpiece of the famous Sokal Hoax, in which the physicist had written a mass of gobbledygook and, confirming his worst suspicions about scholarly standards in the postmodernist humanities, got it published in the prestigious journal Social Text.284

  The participants, to their credit, were not impressed by the essay when they rated it in private. But when they rated it in public after seeing four stooges give it glowing evaluations, they gave it high evaluations too. And when they then rated their fellow raters, including an honest sixth one who gave the essay the low rating it deserved, they gave him high marks in private but low marks in public. Once again the sociologists had demonstrated that people not only endorse an opinion they do not hold if they mistakenly believe everyone else holds it, but they falsely condemn someone else who fails to endorse the opinion. The extra step in this experiment was that Macy et al. got a new group of participants to rate whether the first batch of participants had sincerely believed that the nonsensical essay was good. The new raters judged that the ones who condemned the honest rater were more sincere in their misguided belief than the ones who chose not to condemn him. It confirms Macy’s suspicion that enforcement of a belief is perceived as a sign of sincerity, which in turn supports the idea that people enforce beliefs they don’t personally hold to make themselves look sincere. And that, in turn, supports their model of pluralistic ignorance, in which a society can be taken over by a belief system that the majority of its members do not hold individually.

  It’s one thing to say that a sour wine has an excellent bouquet or that academic balderdash is logically coherent. It’s quite another to confiscate the last bit of flour from a starving Ukrainian peasant or to line up Jews at the edge of a pit and shoot them. How could ordinary people, even if they were acquiescing to what they thought was a popular ideology, overcome their own consciences and perpetrate such atrocities?

  The answer harks back to the Moralization Gap. Perpetrators always have at their disposal a set of self-exculpatory stratagems that they can use to reframe their actions as provoked, justified, involuntary, or inconsequential. In the examples I mentioned in introducing the Moralization Gap, perpetrators rationalize a harm they committed out of self-interested motives (reneging on a promise, robbing or raping a victim). But people also rationalize harms they have been pressured into committing in the service of someone else’s motives. They can edit their beliefs to make the action seem justifiable to themselves, the better to justify it to others. This process is called cognitive dissonance reduction, and it is a major tactic of self-deception.285 Social psychologists like Milgram, Zimbardo, Baumeister, Leon Festinger, Albert Bandura, and Herbert Kelman have documented that people have many ways of reducing the dissonance between the regrettable things they sometimes do and their ideal of themselves as moral agents.286

  One of them is euphemism—the reframing of a harm in words that somehow make it feel less immoral. In his 1946 essay “Politics and the English Language,” George Orwell famously exposed the way governments could cloak atrocities in bureaucratese:

In our time, political speech and writing are largely the defense of the indefensible. Things like the continuance of British rule in India, the Russian purges and deportations, the dropping of the atom bombs on Japan, can indeed be defended, but only by arguments which are too brutal for most people to face, and which do not square with the professed aims of the political parties. Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenseless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements. Such phraseology is needed if one wants to name things without calling up mental pictures of them.287

  Orwell was wrong about one thing: that political euphemism was a phenomenon of his time. A century and a half before Orwell, Edmund Burke complained about the euphemisms emanating from revolutionary France:

The whole compass of the language is tried to find sinonimies and circumlocutions for massacres and murder. Things are never called by their common names. Massacre is sometimes called agitation, sometimes effervescence, sometimes excess; sometimes too continued an exercise of a revolutionary power.288

  Recent decades have seen, to take just a few examples, collateral damage (from the 1970s), ethnic cleansing (from the 1990s), and extraordinary rendition (from the 2000s).

  Euphemisms can be effective for several reasons. Words that are literal synonyms may contrast in their emotional coloring, like slender and skinny, fat and Rubenesque, or an obscene word and its genteel synonym. In The Stuff of Thought I argued that most euphemisms work more insidiously: not by triggering reactions to the words themselves but by engaging different conceptual interpretations of the state of the world.289 For example, a euphemism can confer plausible deniability on what is essentially a lie. A listener unfamiliar with the facts could understand transfer of population to imply moving vans and train tickets. A choice of words can also imply different motives and hence different ethical valuations. Collateral damage implies that a harm was an unintended by-product rather than a desired end, and that makes a legitimate moral difference. One could almost use collateral damage with a straight face to describe the hapless worker on the side track who was sacrificed to prevent the runaway trolley from killing five workers on the main one. All of these phenomena—emotional connotation, plausible deniability, and the ascription of motives—can be exploited to alter the way an action is construed.

  A second mechanism of moral disengagement is gradualism. People can slide into barbarities a baby step at a time that they would never undertake in a single plunge, because at no point does it feel like they are doing anything terribly different from the current norm.290 An infamous historical example is the Nazis’ euthanizing of the handicapped and mentally retarded and their disenfranchisement, harassment, ghettoization, and deportation of the Jews, which culminated in the events referred to by the ultimate euphemism, the Final Solution. Another example is the phasing of decisions in the conduct of war. Material assistance to an ally can morph into military advisors and then into escalating numbers of soldiers, particularly in a war of attrition. The bombing of factories can shade into the bombing of factories near neighborhoods, which can shade into the bombing of neighborhoods. It’s unlikely that any participant in the Milgram experiment would have zapped the victim with a 450-volt shock on the first trial; the participants were led to that level in an escalating series, starting with a mild buzz. Milgram’s experiment was what game theorists call an Escalation game, which is similar to a War of Attrition.291 If the participant withdraws from the experiment as the shocks get more severe, he forfeits whatever satisfaction he might have enjoyed from carrying out his responsibilities and advancing the cause of science and thus would have nothing to show for the anxiety he has suffered and the pain he has caused the victim. At each increment, it always seems to pay to stick it out one trial longer and hope that the experimenter will announce that the study is complete.
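
  A toy calculation shows why the escalation trap is so sticky. The numbers below are invented for illustration and are not Milgram’s data: suppose finishing the experiment is worth V to the participant, each further shock costs c in distress, and at every trial the participant perceives a chance p that the next trial will be the last.

    V = 20.0   # assumed value of completing the experiment
    c = 1.0    # assumed distress cost of one more shock
    p = 0.25   # assumed perceived chance the next trial is the last

    sunk = 0.0
    for trial in range(1, 31):
        one_more = p * V - c          # myopic expected gain of continuing
        sunk += c
        if trial in (10, 20, 30):
            print(f"trial {trial}: one more trial looks worth {one_more:+.1f}, "
                  f"distress sunk so far {sunk:.0f}")

  With these numbers, continuing always shows a positive expected gain (+4.0), yet by the thirtieth trial the accumulated distress has outrun the value of finishing, which is the signature of a War of Attrition: a run of locally reasonable choices that adds up to a ruinous total.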

 
