The Republican Brain

by Chris Mooney


  To establish scientifically that conservatives are more motivated reasoners in our study, it was necessary to do the following: 1) measure study participants’ ideology; 2) measure their general tendency to reason in a motivated way; and then 3) demonstrate a relationship between these two “variables,” such that more political conservatism was statistically linked to a heightened tendency to engage in motivated reasoning, or MR.

  Measuring ideology is something political psychologists do all the time, and doing so here was relatively straightforward. When students sat down at their computer consoles to take our study, they were asked their political opinions on both moral and fiscal issues, as well as to place themselves on a scale from “very liberal” to “very conservative,” with a number of gradations in between. They also answered several other questions that allowed us to locate them politically, as well as questions to determine their “Big Five” personality traits, religiosity, and degree of authoritarianism. Furthermore, since motivated reasoning has often been shown to increase with political sophistication, the students were asked standard political knowledge questions to determine how much they actually knew.

  Measuring the subjects’ tendency to engage in motivated reasoning, however, was a more difficult challenge. And to describe how we did it, it will be necessary to get a bit wonky for a moment.

  In scientific parlance, we wanted to create a scale of general motivated reasoning—a measure of an individual’s general tendency to be more or less slanted in his or her reactions to “evidence” that we provided on a wide variety of topics, mostly not political ones. This last detail was particularly crucial. To establish motivated reasoning as a general psychological tendency—an element of a person’s style of thinking and responding to information in general, and not just a result of his or her views about one particular political topic—we needed to show its presence as individuals responded to a variety of issues across different walks of life. As far as we knew, nobody had ever attempted to construct such a motivated reasoning scale before, one in which subjects’ answers to a variety of questions would capture their general motivated reasoning tendency.

  Our strategy, then, was this. We asked our participants to state their opinions on twelve quite diverse topics. Then we showed them some “information”—lies, mostly, but always presented as convincing-sounding “evidence”—that, in each case, either supported or undercut that opinion. The information came in the form of essays, bullet-points, or in some cases, simple ratings and quotations. In most cases we claimed to have found the information on the Internet.

  The order in which the participants encountered our twelve items, and whether they received congenial or uncongenial information on any particular one of them, was determined at random. Then, after each item, we asked the student (A) to indicate how persuasive he or she found the information, and (B) to restate his or her initial opinion, so that we could determine whether it had changed.

  These answers allowed us to derive two separate measures of motivated reasoning. For question A, the “spread” or difference between participants’ persuasiveness ratings for friendly (or “pro-attitudinal”) versus unfriendly (or “counter-attitudinal”) information constituted our first measure. We expected most participants to find friendly information more persuasive than unfriendly information, of course—but how much more persuasive would constitute a measure of just how motivated an individual’s reasoning was, relative to others in the sample.
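
  To make this concrete, here is a minimal sketch, in Python, of how such a spread score might be computed. The data layout, field names, and rating scale are invented for illustration; they are not our study’s actual code or coding scheme.

def mr_spread(responses):
    """responses: one dict per item, e.g.
    {"congenial": True, "persuasiveness": 4}
    Returns the mean persuasiveness rating of congenial (pro-attitudinal)
    items minus the mean rating of uncongenial (counter-attitudinal)
    items; a bigger spread means more motivated reasoning relative to
    the rest of the sample."""
    pro = [r["persuasiveness"] for r in responses if r["congenial"]]
    con = [r["persuasiveness"] for r in responses if not r["congenial"]]
    if not pro or not con:
        return None  # needs at least one item of each kind
    return sum(pro) / len(pro) - sum(con) / len(con)

# A subject who rates friendly essays 5 and 4 but unfriendly essays
# 2 and 1 gets a spread of 4.5 - 1.5 = 3.0:
print(mr_spread([
    {"congenial": True, "persuasiveness": 5},
    {"congenial": True, "persuasiveness": 4},
    {"congenial": False, "persuasiveness": 2},
    {"congenial": False, "persuasiveness": 1},
]))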

  For question B, participants’ reacting to friendly essays by strengthening their pre-existing opinions, combined with their reacting to unfriendly essays by resisting changes to their opinions (or even by strengthening their prior views, the “backfire effect”), would constitute our second measure. Here, we weren’t just measuring whether our subjects thought our essays were “persuasive,” but whether their minds actually seemed to change.
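
  Again purely as an illustration (the signed coding scheme below is invented for this sketch, not taken from our study), the second measure can be thought of as the average shift toward a subject’s initial stance:

def mr_opinion_shift(items):
    """items: one dict per topic, e.g.
    {"congenial": False, "before": 0.4, "after": 0.5},
    with opinions signed so that positive values mean agreement with the
    subject's initial stance. Friendly essays that strengthen that stance,
    and unfriendly essays that fail to weaken it (or that backfire and
    strengthen it), both push the score upward."""
    shifts = [item["after"] - item["before"] for item in items]
    return sum(shifts) / len(shifts)

# A persuadable reasoner shows clearly negative shifts on uncongenial
# items; flat resistance shows up as roughly zero, and the "backfire
# effect" as a positive shift.
print(mr_opinion_shift([
    {"congenial": True,  "before": 0.4, "after": 0.7},  # strengthened
    {"congenial": False, "before": 0.4, "after": 0.5},  # backfired
]))  # 0.2: shifts of +0.3 and +0.1 average to +0.2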

  What were the essays about? This was the really fun part of the study design, and one where Everett came up with a number of highly believable phony essays attacking any number of things that people care about, and doing so in seemingly authoritative fashion.

  First, we included essays that either did or did not support our participants’ prior beliefs on two politicized scientific topics, global warming and nuclear power. These were chosen for an obvious reason: One might expect conservatives to be more biased on the former, and liberals to be more biased on the latter.

  The essays provided a barrage of scientific “facts” and were pretty in-your-face, mimicking the language that you might find on a very ideological blog. For instance, here’s a brief (and highly misleading) excerpt from the global-warming-is-bogus item:

  IT IS A FACT that whatever global warming we are experiencing is mostly natural. The Earth’s orbital cycles, complex changes in solar radiation, and other natural causes can account for most of the measured temperature increase. While the climate science “establishment” may claim that human contributions have swamped this natural variability, the opposite is actually the case. Human influences on the Earth’s vast climate system are puny in comparison with the power of the sun.

  And here’s some bogus information on nuclear power, from our “anti” essay on this subject:

  It doesn’t take a meltdown to cause nuclear-related deaths. Disturbing statistics point to increases in cancer, low birth weight, and even mental illness in areas near perfectly good-functioning nuclear power plants. Experts estimate premature deaths worldwide from mere proximity to nuclear power plants could exceed 100,000 per year.

  Thus did we attempt to get a rise out of liberals and conservatives alike on politicized scientific issues. (But bear in mind that the study participants might have gotten essays that confirmed their views about either of these topics, rather than attacking them.)

  Beyond our global warming and nuclear power items, everything else in the study was pretty apolitical. We asked our study subjects to read fake essays that either trashed or heaped praise on their favorite brand of car, their home city, their alma mater, and their favorite musician, film, writer, and football quarterback. We also gave them contrary “facts” about the alleged superiority of Macs and PCs, culled from internet debates on the subject. And we asked them to read essays about the reality of extra sensory perception, the validity of astrology, and whether it is better to breastfeed or bottle-feed a child.

  For instance, if study subjects told us they were fans of the New Orleans Saints (and many LSU students are), they might have read an essay from a “sports writer” citing bogus statistics to put down ace quarterback Drew Brees:

  A little known statistic kept by the NFL is the frequency of interceptions in crucial situations. “Crucial situations” are defined as drives where a failure to score essentially either rules out the possibility of winning the game, or hands the other team an opportunity to come from behind. So it includes last-minute comeback drives, and drives that run out the clock when your team has a narrow lead.

  This statistic shows that Brees has one of the worst five interceptions-during-clutch-drives numbers in the HISTORY of the league since they started keeping the statistic. I know it sounds incredible, but those are the facts. Basically, if the game is on the line, you don’t want the ball in this guy’s hands.

  And if any of the students said they liked the singer Lady Gaga, they might have read a phony music journalism “expose,” channeling the gripes of two anonymous studio recording engineers:

  According to B.T., Lady Gaga has serious trouble singing in tune. “We used more auto-tune on her than I’ve ever used. And we not only fixed tunings, but we fixed timing using Pro-Tools.” (Pro Tools is a digital recording program that makes manipulating music in many ways possible.) “We ‘Pro-Tooled’ pretty much every note.”

  The studio horror stories go beyond the disasters that happen when the talent gets behind the microphone. One of B.T.’s colleagues, A.G., another engineer who also asked to remain unnamed, was hovering nearby during some songwriting sessions in the studio. According to A.G., listening to Lady Gaga composing was a painful experience. “I heard her playing the piano and trying to write a song. She knew like two chords.” So what about the songwriting? “Let’s just say if the album credits her with writing any of the songs, that’s a lie. I know the guy who pretty much wrote all those songs. It’s called show business. That’s just how it’s done.”

  Needless to say, Everett—who happens to be both a musician and a diehard football fan—had a lot of fun writing these items. I was personally most amused by the one in which the James Randi Educational Foundation, which offers $1 million for anyone who can show the existence of paranormal abilities in a controlled experiment, is forced to actually pay up because ESP is shown to be real (yeah, right).

  Unbeknownst to the subjects, as they read the essays the computer program was timing them, measuring how many seconds—indeed, how many milliseconds—they spent per page of essay. Most of the essays required several onscreen “pages” to complete, where one page corresponded to a computer screen containing one or more paragraphs of text.
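
  The mechanics of that measurement are simple enough to sketch. Something like the following, in Python, is all it takes to log per-page reading times; this is a hypothetical stand-in, not our study’s actual software:

import time

def present_pages(pages, wait_for_page_turn):
    """Display each 'page' of an essay and record how long the subject
    spends on it, in milliseconds, before advancing."""
    times_ms = []
    for page in pages:
        start = time.perf_counter()
        wait_for_page_turn(page)  # show the text; block until the subject advances
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return times_ms

# For instance, with input() standing in for the real interface:
# present_pages(["First page of essay...", "Second page..."],
#               lambda p: input(p + "\n[press Enter to continue]"))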

  As it happened, this measurement of time-spent-reading yielded an unexpected and strong political result.

  So what did we learn?

  1. Openness to Experience is Still Strongly Related to Political Liberalism. First, we were able to reconfirm a key relationship between personality and politics discussed earlier in this book. In our study, Openness to Experience was linked with liberalism of every type, no matter how we measured it—that is, with social or moral liberalism, economic liberalism, liberalism based on self-identification and by party affiliation (with Democrats versus Republicans), and a couple of other measures.

  But what do we mean by “linked”?

  In a popular book like this one, it would be off-putting to get too deep into the statistical nature of the relationships that we found. And yet at the same time, we know many readers will want some details. So let us briefly try to make everybody happy, with one sweeping explanation of what these kinds of findings mean. (Warning: we are entering wonk land again.)

  For the most part, our study was correlational, not causal. That means we detected a variety of correlations, which are statistical measures of associations between two variables that range from −1 to +1. A correlation of 1 or −1 means the two variables are perfectly associated, either positively or negatively. In other words, if you know a person’s measure on one variable, you know precisely the person’s measure on the other. A correlation of 0 means that knowing a person’s measure on the first variable gives you no clue whatsoever as to his measure on the second.
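
  For the curious, the calculation itself is nothing exotic. Here is the standard Pearson correlation coefficient written out in a few lines of Python (recent versions of Python, 3.10 and later, ship the same computation as statistics.correlation):

import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers;
    ranges from -1 (perfect negative association) through 0 (no linear
    association) to +1 (perfect positive association)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0: perfectly associated
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0: perfectly opposed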

  Stated in these terms, Openness correlated at 0.25 with fiscal liberalism, and negatively at −0.28 with authoritarianism (among other findings). So what does a correlation of .25 mean?

  Imagine that there is some great, unobserved “source” of commonality between two variables. When this source pushes a person toward the positive side of variable A, it also pushes that person, in exactly the same amount, toward the positive side of variable B. If two variables both drew 25 percent of their variability from this common source (and, obviously, each variable drew 75 percent of its variability from other unobserved sources that were unrelated to the sources of the correspondingly unexplained 75 percent of the other variable) then the two variables would be correlated at 0.25.
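
  If you want to see this “common source” picture in action, it is easy to simulate. In the sketch below (our illustration, not part of the study itself), each of two variables draws 25 percent of its variance from a shared source and the other 75 percent from its own independent noise, and the resulting correlation comes out at roughly 0.25:

import math
import random
from statistics import correlation  # Pearson's r; Python 3.10+

random.seed(0)
c, n = 0.25, 100_000  # 25 percent of each variable's variance is shared
xs, ys = [], []
for _ in range(n):
    s = random.gauss(0, 1)  # the great, unobserved common "source"
    xs.append(math.sqrt(c) * s + math.sqrt(1 - c) * random.gauss(0, 1))
    ys.append(math.sqrt(c) * s + math.sqrt(1 - c) * random.gauss(0, 1))

print(round(correlation(xs, ys), 3))  # prints roughly 0.25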

  That might not sound like much. But in this kind of research, which involves huge amounts of purely random measurement error as we try to gauge a person’s “level of Openness” or “level of liberalism,” correlations verging on .3 are quite convincing, and, we think, easy to detect in the “real world.” In other words, it’s relatively easy to meet 10 average conservatives and 10 average liberals and intuitively pick up personality differences that make for a correlation with ideology of .25 or .3. (We’ll bet you agree.) And our study picked up just such differences.

  In fact, not only did we find a positive correlation between Openness and fiscal liberalism (among other measures of liberalism) and a negative correlation with authoritarianism, but these findings were strongly statistically significant. In terminology familiar to scientists, we might say that Openness was correlated with liberal fiscal ideology at a significance level of p = 0.002, and negatively with authoritarianism at p = 0.0006.

  For the non-pros, what that means is that, if these two variables actually somehow aren’t related (if their correlation is truly zero, so that we could only have found these correlations in our unique sample by accident), then we would expect to have to collect 1,000 samples of similar size to get two additional findings of an association that strong or stronger for Openness. And for authoritarianism, we’d have to collect 10,000 samples of similar size to “find” 6 more associations that strong or stronger.
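
  You can check this logic with a quick simulation: generate many samples in which the two variables truly have zero correlation, and count how often chance alone produces an association of .25 or stronger. (The sample size below is an illustrative guess, not our study’s actual n; the point is the logic, not the exact number.)

import random
from statistics import correlation  # Pearson's r; Python 3.10+

random.seed(1)
n_subjects = 150   # illustrative sample size only
trials = 10_000
hits = 0
for _ in range(trials):
    # Two variables with truly zero correlation:
    xs = [random.gauss(0, 1) for _ in range(n_subjects)]
    ys = [random.gauss(0, 1) for _ in range(n_subjects)]
    if abs(correlation(xs, ys)) >= 0.25:  # "that strong or stronger"
        hits += 1

print(hits / trials)  # the simulated p-value: at a sample size near 150,
                      # this lands around 0.002, i.e. ~2 hits per 1,000 samples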

  That gives us good confidence that the finding is not accidental, but is a result of real differences between liberals and conservatives. (Please note that we will report results in this same format—providing first a correlation, and then a level of significance—throughout this chapter. When we say “r = .2” that means the correlation between two variables was .2, on that scale of −1 to 1.)

  Thus, the idea that conservatives—economic ones included, and maybe even especially—are less Open or flexible in their cognitive style, continues to receive strong support.

  2. On Nuclear Power, Conservatives Were More Biased Than Liberals. As noted in the last chapter, nuclear power is an issue often cited in order to suggest that liberals have their own anti-science biases. But this book argued, to the contrary, that liberals are actually quite flexible on this topic—and our data lend this idea new and surprisingly strong support.

  On our first measure of motivated reasoning, we found that all kinds of conservatives (social, fiscal, authoritarian, self-identifying, and so on) engaged in more motivated thinking about nuclear power. In other words, conservatives perceived a bigger difference between the persuasiveness of pro- and counter-attitudinal nuclear power essays than did liberals. These correlations (of MR with various kinds of conservatism) were all positive, but they were not uniformly large—and only the correlations with self-identified fiscal conservatism (r = .26, p = .06) and party identification (r = .23, p = .055) approached statistical significance at the conventional level of p < .05. So we shouldn’t make too much of this finding.

  However, on the second measure of motivated reasoning, conservatives across the board were harder to persuade about nuclear power when given counter-attitudinal evidence. Here, correlations between conservatism and motivated reasoning ranged from 0.25 to 0.38, and most of them were statistically significant at conventional levels. Here are a few of the stronger and more significant relationships: self-identified conservatism (r = .35, p = .02), Republican party identification (r = .32, p = .03), self-identified fiscal conservatism (r = .38, p = .04), and issue-based moral conservatism (r = .36, p = .016).

  Let’s unpack a little more what this means, focusing on the last finding in particular. You might think of it like this: As a person went from being very morally liberal to being very morally conservative in our study, his willingness to be persuaded by an unfriendly essay about nuclear power decreased by about .6 points on a 2-point scale (from −1 to +1)—in other words, by about 30 percent!

  Conservatives might argue that this result is just a reflection of their being “right”: Since conservatives favor nuclear power, and since, they might claim, the facts support the safety of nuclear power, this is just a case of their “knowing they’re right.” The problem with this interpretation is that liberals and conservatives did not differ in their initial support for nuclear power. Instead, liberals were about as likely as conservatives to enter the survey with positive feelings about nuclear power. It’s just that they were more willing to consider essays that opposed their pre-existing point of view—whether that view was for or against nukes.

  Thus, the idea that liberals are extremely motivated thinkers on nuclear power seems questionable. Perhaps in a more politically knowledgeable sample, one in which both the liberals and the conservatives were strongly committed to opposing positions on the issue, you’d find the liberals more motivated, yielding equivalent levels of MR on both sides. But the idea that conservatives are flexible in considering the dangers of nukes, while liberals are relatively inflexible in considering the benefits? The evidence here says it’s very likely the other way around.

  Indeed, the evidence clearly suggests that there was something about our nuclear power item that tickled conservatives emotionally—perhaps drawing a negative reaction to what they perceived as environmental “alarmism”?—and so triggered significant motivated reasoning.

  3. On Global Warming, Science Deniers Appear Less Cognitively Flexible Than Those Who Accept What Scientists Know. We had hypothesized that less Openness would cause conservatives to engage in more motivated reasoning. And on our two purely political items, the results did indeed seem to lend support to our idea. In fact, the findings are quite consistent with results described earlier in this book. We’ve already seen as much for nuclear power, but now consider global warming.

  First of all, on this issue we found that those who spent more time reading our essays (which could be considered a measure of curiosity, and therefore related to Openness), as well as those who were more Open to Experience by our standard measure, were more likely to accept from the outset that global warming is caused by humans. The first result was statistically significant across the board (r = .18, p = 0.027). The second result was only significant in the more politically knowledgeable quarter of the sample, where it became quite strong (r = .37, p = .04). So taken as a whole, it does appear that the more curious or Open people in our study started out from the position of being more scientifically correct about human-caused global warming.

 
