
  learn anything from respondents but hopes to encourage awareness and indignation in relation to the problems noted by the questions. People who work in marketing are

  not happy with this pseudo-polling because they believe it will encourage resistance

  to answering their own information-seeking polls.

  The ways in which polls can and do mislead can be divided into groups:

  1. The problem of randomness in sampling procedure;

2. Effects resulting from who it is that does the polling;

3. Mathematically determinable ranges of error built into the theory of polling, and

  reports that ignore those ranges;

  4. Bias or incompetence in the wording of the questions and their contexts, includ-

  ing the order in which different questions are asked;

  5. Lying respondents;

  6. Dishonest survey collectors;

  7. Biased or incompetent interpretation of answers;

  8. Fluctuation of opinion;

  9. Deliberate attempts to skew the results in some way; and

  10. The use of totally unscientific “polls” carried out simply to persuade, not to deter-

  mine public opinion.

1. Randomness. The theory behind polling is that one can get an idea of the composition of a whole population by examining a small sample taken from that population. Provided the sample is absolutely random and reasonably large (a few hundred, at least, for a population in the millions), one has a fair chance that the composition of the sample will closely match that of the population as

  a whole. Randomness is obviously a requirement. It takes little effort to see how mis-

  leading a sample taken entirely, say, from Montreal, with its high proportion of French

  speakers, would be for inferring the proportion of French speakers in the population

  of Canada as a whole.
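A quick simulation makes the danger concrete. The following Python sketch uses invented figures for illustration, not census data: it assumes roughly a fifth of the national population speaks French, versus about two-thirds of the Montreal group, and compares an estimate drawn from a random national sample with one drawn only from Montreal.

    import random

    random.seed(0)

    # Toy population: 1 = French speaker, 0 = not. The proportions are
    # illustrative assumptions, not census figures: about 22 per cent
    # French speakers nationally, about 65 per cent in Montreal.
    canada = [1] * 220_000 + [0] * 780_000
    montreal = [1] * 65_000 + [0] * 35_000

    def estimate(population, n=1000):
        """Estimate the French-speaking share from a simple random sample."""
        sample = random.sample(population, n)
        return sum(sample) / n

    print("random national sample:", estimate(canada))    # close to 0.22
    print("Montreal-only sample:  ", estimate(montreal))  # close to 0.65

A random sample of only 1,000 recovers the national share quite closely; the Montreal-only sample, however large, cannot.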

  A famous example of error through biased (non-random) sampling occurred

  during the 1936 election in the United States. The Literary Digest mailed a survey

  to names taken from directories of automobile owners and telephone subscribers.

  On the strength of the two million responses, a prediction was made that the new

  president would be Alf Landon, who would win 370 electoral votes, while Franklin

  Delano Roosevelt would win only 161. In fact, Roosevelt won the election handily,

  and the discredited magazine folded shortly after. The sampling was taken from a

  higher proportion of the wealthier part of the population, but Roosevelt was sup-

  ported by the not-so-wealthy, whose opinion was not measured adequately in the

  polling. Today telephones are so widely distributed that there is no longer the same

  bias toward the rich. Omitting unlisted numbers through use of telephone directories

  could be a source of bias, but such numbers can be reached by random dialing. It is


well-known that biases against stay-at-homes are created when polling is done on the streets and that owners of vicious dogs chained near the front entrance to a house are less likely to be represented in a door-to-door poll.

  Today, a new problem has arisen regarding the demographic and related political

  differences between cellphone and landline users. A recent study found that a quarter

  of US households have only a cellphone.39 To combat possible bias some pollsters

  include both cellphone and landline users in their sampling. In the British Columbia

  election on May 15, 2013, most pollsters were spectacularly wrong in their predic-

  tions, giving a large margin to the left of centre New Democratic Party when the more

  centrist Liberal Party led by Christy Clark won a majority of seats. Curiously, the

  one polling group correctly predicting the Liberal win, Forum Research, used only

  landline contact. Perhaps the tilt to the elderly with landlines matched the general

  propensity of older people to exercise their vote more frequently in relative terms than

  the younger demographic. No doubt pollsters will be examining their methodology

  carefully to determine exactly where it was flawed.40

  People who do not want to answer questions do not get represented. Michael

  Wheeler reports a pollster’s estimate that upwards of 20 per cent of those polled refuse

to answer.41 Informal discussions with students who have worked for pollsters suggest

  to me that today the percentage is higher. Seniors appear to be more willing to answer,

  not being pressed for time, but also in some cases just welcoming the chance to talk to

  someone. That could be another source of bias.

  2. Interviewer effects. Biases can also be created by whoever it is that does the polling. Studies have indicated that when African Americans ask the questions, African

  American respondents are prepared to speak their minds more freely about racism

  than they would to white pollsters.42 Since interviewers for Gallup (a major polling

  firm) are almost all women, this may produce a bias regarding opinions expressed to

  them on women’s issues. Opinion polls taken prior to the 1980 referendum on Quebec

  independence tended to overestimate support for the Oui side in favour of seces-

  sion. Some speculated that badges for the pollster, the Institut Québécois d’Opinion

Publique, resembled something official, suggesting a tie-in to the ruling pro-independence party, the Parti Québécois (PQ). This may have caused respondents to be more reti-

  cent to proclaim their support for federalism and the Non side. As the Ottawa Citizen reported, “The poll showed the Oui forces with 40.4 per cent of the vote compared to 36.5 per cent for the Non side but there was a steep 23.1 per cent who were undecided or would not say how they would vote.”43 The outcome showed that the undecideds

  were overwhelmingly supporters of the Non.

  How pollsters divide up the undecideds can make an important difference to the

  assessment of public opinion. Barry Kiefl, director of research for the CBC, noted

  that, “Some pollsters assume the undecided will split the same way as decided voters,

  others weight some of the undecided by the direction in which they are ‘leaning’ or


  according to the party they voted for in an election. Some polls exclude those who indicate they’re unlikely to vote, others include all eligible voters.” Kiefl pointed out

  that the procedures can “completely alter results, especially in a campaign during

  which there have been major shifts in opinion.”44
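To see how much these editorial choices can matter, here is a minimal sketch in Python. The raw shares and the "leaning" split are hypothetical numbers chosen for illustration, not figures from any poll Kiefl discussed; the two rules are simplified versions of the ones he mentions.

    # Hypothetical raw poll shares: 40% Party A, 37% Party B, 23% undecided.
    decided = {"A": 0.40, "B": 0.37}
    undecided = 0.23

    # Rule 1: split the undecided in the same proportion as decided voters.
    total = sum(decided.values())
    proportional = {p: s + undecided * s / total for p, s in decided.items()}

    # Rule 2: allocate the undecided by stated "leaning" (assume, for the
    # sake of the example, that 70% of them lean toward Party B).
    leaning = {"A": 0.30, "B": 0.70}
    by_leaning = {p: decided[p] + undecided * leaning[p] for p in decided}

    print(proportional)  # A ≈ 0.52, B ≈ 0.48
    print(by_leaning)    # A ≈ 0.47, B ≈ 0.53 (the reported leader flips)

Identical raw data, two defensible allocation rules, and two opposite headlines.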

  3. Mathematical Limitations. To get a useful glimpse of what is involved in polling

  theory (actual polling theory is much more complicated), we might compare it to a

  huge urn containing hundreds of thousands of black and white balls and nothing else.

  Assuming we don’t know how many of each there are, how sound an estimate could

  we get of the composition as a whole if we took samples? If we took 100 balls and

found that 45 were white and 55 were black, what is the probability that the composition of the contents of the whole urn is in exactly that proportion? If we already know the composition of the urn, we can consider what happens when we take samples and what the chances are of a match between the composition of the sample and that of the urn as a whole.

Obviously, the larger the sample, the greater the likelihood of exact matching. But

  if we can be content with rough matching, we can get a close approximation with a

  relatively small sample. Suppose, to make things simple, that the composition of the

  urn is half white balls and half black balls. What would happen if we took a sample of

  two and guessed that this represented exactly the composition of the urn as a whole?

  Assuming that the balls are well stirred and that each ball is picked at random, there

  will be an exactly even chance of picking either a white ball or a black ball first. The

  same holds true for the second pick. The result is one of four different, but equally

  likely outcomes: WW, WB, BW, BB. In two of these cases (WB and BW) there is

  complete and exact matching, but in the other two the prediction would be 50 per

  cent wrong. In other words, one predicts that the whole urn is white because one took

  out two white balls, or black if two black balls were picked. If three balls are taken, the

  sequences become equally probable: WWW, WBW, BWW, BBW or WWB, WBB,

  BWB, BBB. Out of the eight equiprobable results, two are wrong by a margin of 50

  per cent. Of the other six, none is an exact match since the prediction based on them

  is one-third of one colour and two-thirds of the other. The error in the prediction

  amounts then to 50 per cent less 33 ⅓ per cent, or 16 ⅔ per cent for one set of balls,

  and 66 ⅔ per cent less 50 per cent, or 16 ⅔ per cent, for the other set. Being out by 16

  ⅔ per cent six times out of eight is still far off the mark, but it is better than being out

by 50 per cent. If we take a sample of four, the equiprobable outcomes number sixteen, or 2^4. A sample of size N would give 2^N equiprobable outcomes, and it is easy, although tedious, to calculate the likely compositions of increasingly large samples in the way already done. Thus,

  two variables need to be taken into account. First, there is the proximity of equiprob-

  able outcomes to the known composition, and, secondly, there is the proportion of

  equiprobable outcomes that meet this degree of proximity. So, in the sample of three,

  16 ⅔ per cent measures the closeness, but “six times out of eight” measures, equally


  importantly, the proportion in which these outcomes are to be found among the total outcomes in the sample.
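The enumeration just described can be checked mechanically. The following Python sketch assumes the 50/50 urn of the example, lists every equally likely sample of a given size, and tallies how far each sample's white-ball share falls from the true 50 per cent.

    from itertools import product
    from collections import Counter

    def sample_errors(n):
        """Tally, over all 2**n equally likely samples from a 50/50 urn,
        how far each sample proportion lies from the true 50 per cent
        (error in percentage points -> number of samples)."""
        errors = Counter()
        for outcome in product("WB", repeat=n):
            share_white = outcome.count("W") / n
            errors[round(abs(share_white - 0.5) * 100, 1)] += 1
        return errors

    for n in (2, 3, 4):
        print(n, dict(sample_errors(n)))
    # n=2: {50.0: 2, 0.0: 2}           (half exact, half off by 50 points)
    # n=3: {50.0: 2, 16.7: 6}          (six of eight off by 16 2/3 points)
    # n=4: {50.0: 2, 25.0: 8, 0.0: 6}  (sixteen equally likely samples)

The two variables appear directly in the output: the sizes of the errors, and the proportion of samples that land at each one.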

  For this reason, polling results must always take account of those two variables.

  By convention, “19 times out of 20” has been treated as an acceptably close frequency

  and so is not always stated. A Gallup poll comes with statements such as the follow-

  ing: “The study in Canada was conducted with a random sample of 741 adults in

  mid-September, with personal interviews in homes across the nation. A sample of this

  size produces results accurate within a 4 percentage point margin of error, 19 out of

  20 times.”45 The larger the sample, the smaller the range of error. Curiously, the math-

  ematical outcome of the analysis indicates that, when we deal with large populations,

  the important variable is sample size. The same sample size gives roughly the same

  range of error whether we deal with Canada or the United States, which has a popu-

  lation ten times greater. To make this result intuitively plausible, some compare the

  operation with taking samples from different pots of soup. We sample a soup by taking

  a spoonful or two; provided the soup is well stirred, it doesn’t make much difference

  to the accuracy of our tasting whether the spoonful is from a small pot or a large vat.
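The "19 out of 20 times" in such statements is simply a 95 per cent confidence level, and the quoted margin can be roughly reproduced from the sample size alone. The Python sketch below assumes the standard simple-random-sampling formula for a proportion, 1.96 × √(p(1 − p)/n), with the worst case p = 0.5; real polling firms adjust for design effects, so this is only an approximation, not Gallup's own calculation.

    from math import sqrt

    def margin_of_error(n, p=0.5, population=None, z=1.96):
        """Approximate 95% margin of error for a proportion estimated
        from a simple random sample of size n."""
        m = z * sqrt(p * (1 - p) / n)
        if population is not None:
            # Finite-population correction: negligible whenever the sample
            # is a tiny fraction of the population.
            m *= sqrt((population - n) / (population - 1))
        return m

    print(round(margin_of_error(741) * 100, 1))                        # ~3.6 points
    print(round(margin_of_error(741, population=35_000_000) * 100, 3))
    print(round(margin_of_error(741, population=330_000_000) * 100, 3))
    # The last two figures are effectively identical: it is the sample
    # size, not the population size, that drives the margin of error.

The 741-person Gallup sample comes out at roughly 3.6 points, close to the stated four, and the answer barely changes whether the population is Canada's or one ten times larger.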

  Polling theory is much more complex, but we are now able to understand the

  potential for deception. For one thing, a given poll could be the 1-in-20 “rogue pol ”

  that is way off the mark. Secondly, the poll gives us information about the probability

  of something being within a certain range; it does not get more specific than that.

  So, if one poll tells us that 36 per cent of Canadians favour Jean Chrétien as prime

  minister, and another poll later tells us that 38 per cent favour him as prime minister,

  and the range of error of the poll is plus or minus four percentage points, we cannot

  assume that his popularity has increased. The range of error in the first case takes us

  from 32 per cent to 40 per cent and in the second case from 34 per cent to 42 per cent.

  Quite possibly, the first poll was measuring a true figure of 40 per cent, and the second

  poll was measuring a true figure of 34 per cent, both figures being within the stated

  ranges. In other words, a real drop in support might have appeared as an increase in

  support. That is why, when newspapers trumpet a supposed increase in support, when

  the “increase” is within the range of sampling error, readers should take the trumpet-

  ing with a grain of salt.
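The arithmetic is easy to check. A minimal Python sketch using the figures from the text (36 and 38 per cent, plus or minus four points):

    def interval(point, margin=4.0):
        """Range of true values consistent with a poll reading, 19 times out of 20."""
        return (point - margin, point + margin)

    first, second = interval(36.0), interval(38.0)
    print(first, second)  # (32.0, 40.0) (34.0, 42.0)

    # Every value from 34 to 40 is consistent with both polls, so the pair
    # of readings can reflect a rise, no change at all, or (as in the
    # 40-then-34 scenario above) a genuine drop in support.
    overlap = (max(first[0], second[0]), min(first[1], second[1]))
    print("consistent with both polls:", overlap)  # (34.0, 40.0)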

  Another important observation is the increasing unreliability of polls when sub-

  sets of populations are considered. A 1990 Globe and Mail and CBC poll of 2,259

  respondents spread evenly across Canada (except for the Northwest Territories and

  the Yukon) gave a stated range of error of plus or minus 2.2 percentage points. But if

  we then start talking about Quebec, we refer to the sample from that province alone,

  which of course is smaller than the total sample for Canada including Quebec. This

will entail a considerable increase in the range of error. As worked out by the

  pollsters, the relevant range became 4.3 percentage points for Quebec, 4.4 percentage

  points for Ontario, 4.8 percentage points for the Prairies, 4.9 percentage points for

  British Columbia, and 4.8 percentage points for Atlantic Canada.46
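The widening margins follow directly from the smaller regional samples. The Python sketch below backs out the approximate regional sample sizes implied by the published margins, using the same simple formula as above; the pollsters' own calculations evidently differ somewhat (the simple formula gives about 2.1 points, not 2.2, for the full 2,259), so these are rough estimates only.

    def implied_sample_size(margin_points, p=0.5, z=1.96):
        """Back out the sample size implied by a stated 95% margin of error."""
        m = margin_points / 100
        return round((z / m) ** 2 * p * (1 - p))

    margins = {"Quebec": 4.3, "Ontario": 4.4, "Prairies": 4.8,
               "British Columbia": 4.9, "Atlantic Canada": 4.8}

    for region, margin in margins.items():
        print(f"{region}: ±{margin} points, about {implied_sample_size(margin)} respondents")
    # Each regional subsample works out to roughly 400 to 520 respondents,
    # which is why the error range roughly doubles once the national
    # sample of 2,259 is divided up.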


  Newspapers do not always spell out the change in error range and, thus, can mislead the public as to the true state of opinion. The exclusion of the Yukon and the

  Northwest Territories, perhaps explainable by cost factors, skews the overall results

very slightly but, more importantly, leaves readers blind to possibly significant differences of opinion in those areas.

  Just before the October 30, 1995 referendum on Quebec sovereignty, a mass

  rally was held in Montreal to express support from the rest of Canada for a “No”

vote. Those who were at the rally felt that it boosted support against sovereignty, but newspaper reports claimed otherwise, on the grounds that surveys of Quebec opinion

  before and after the rally showed a drop of one percentage point. As pollster Michael

  Marzolini pointed out, the surveys were of only 400 Quebecers, thus giving a mar-

  gin of error of five percentage points. Also, no question had been asked about the

  rally. His own firm, Pollara, which he identified as the Liberal Party’s polling firm,

  asked strictly about the rally and, in his own words, “found that the rally actually won

  over almost 10 per cent of voters to the federalist side, based on a sample of 1,000

  Quebecers.”47

  4. Wording and Context of the Question. Of all the ways in which opinion polls can

  be used to shape public opinion, perhaps th
e most important is the wording of the

question. “Are you in favour of nuclear power and the reduction of coal-fired, polluting, ecologically harmful power stations?” can be expected to elicit a more favourable

  response to the nuclear industry than a question such as “Do you favour nuclear power

  despite its high cost, the problems of nuclear waste disposal, and the remote possi-

  bility of meltdown?” A report on the poll might emphasize that people were “for”

  or “against” nuclear power without spelling out the full question. In any responsible

report on a poll, therefore, the full question should always be given. Readers should be

  given the opportunity to test how they might have responded to the question and for

  what reasons. Journalistic integrity requires that the wording and the methodology be

  presented for the readers’ inspection, even if only at the end of a story.

Sentences that appear closely synonymous often elicit very different answers from respondents, in ways that seem inconsistent. Part of the expla-

  nation may have to do with the context of the question and with the kinds of sentences

  with which it is contrasted. For example, the Legal Research Institute at the University

  of Manitoba found respondents from Montreal, Toronto, and Winnipeg answered

  in the proportion of roughly 93 per cent in the affirmative to the question “I must

  always obey the law.” Yet, four questions later, the very same respondents answered

  affirmatively in the proportion of about 47 per cent to the question “There are situ-

  ations when it is right not to obey the law.”48 That means 40 per cent are upholding

  both sides of a contradictory pair of sentences. Anyone who felt “I must always obey

  the law” would have to deny that “There are situations when it is right not to obey the

  law,” for how can one be obligated to do the opposite of what it is right to do?
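The 40 per cent figure is just the minimum overlap forced by the two results: if 93 per cent affirm the first statement and 47 per cent affirm the second, at least 93 + 47 − 100 = 40 per cent must have affirmed both. A one-line check in Python:

    # Minimum share (in percentage points) of respondents who must have
    # affirmed both statements, given the share affirming each
    # (an inclusion-exclusion bound).
    always_obey, sometimes_right_to_disobey = 93, 47
    min_both = max(0, always_obey + sometimes_right_to_disobey - 100)
    print(min_both)  # 40: at least 40 per cent upheld both statements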


  A clue to the resolution of the seeming inconsistency comes from the context of the two questions. The first was contrasted with “It is all right to break the law as long

  as you don’t get caught” (answered affirmatively only by 7 per cent). The second ques-

  tion was contrasted with “Disobedience of the law can never be tolerated” (answered

 
