
Trust Us, We're Experts PA


by Sheldon Rampton


  What both the IPA’s list and Peter Sandman’s 12 points have in common is that they focus on emotional issues rather than the public’s rational concerns. This is indeed a pattern that is common to propagandists in general. The modern-day propagandists who work in advertising and public relations can tell you endless stories that “prove” how easily news and public opinion can be manipulated by irrational appeals. This is just the way people are, they say. This is how the media works. And indeed, only someone who is blind to history would deny that emotional and irrational appeals have frequently succeeded in manipulating the public. This, however, is only a partial truth about human nature. People are complicated creatures with multifaceted personalities. The poet Ezra Pound, for example, was simultaneously a sensitive artist and a vulgar, anti-Semitic shill for the Nazis. A lot of the way we behave depends upon which parts of our personality express themselves. If you appeal to someone’s better nature, you will get a different result than if you appeal to the same person’s worst impulses. In a world full of propaganda, it is hardly surprising that some of the worst appeals succeed. What propagandists can’t tell you, however, is whether and to what degree the public’s irrationality is a self-fulfilling prophecy of their own creation. That is a question that perhaps you can answer better than they can, by learning to tell the difference between communication strategies that treat you like a child and strategies that treat you like an adult.

  Growing Up Guided

  The difference between the world of a child and the world of an adult can largely be described in terms of control, competence, and responsibility. When you were a child, you had little control over decisions that affected you. You were expected to eat what you were given, go to school at the assigned time, go to sleep at a designated bedtime, and so forth. Adults made the decisions because it was assumed that you lacked the capacity to decide for yourself. Even the decisions you did make were not necessarily binding, and it was your parents, not you, who were responsible for the consequences of your mistakes.

  As an adult, you are responsible for all these decisions and more. The responsibilities of adults in fact extend beyond their actual areas of competence, which explains a lot about the way the world works. If you want to build an addition to your home, you hire a contractor. To take care of your health, you hire a physician; for legal matters, an attorney. You buy shoes from a company with expertise in manufacturing footwear. In all of these situations, the fact that you yourself lack expertise is not much of a problem, because you know what you want, and the expert’s job is simply to fulfill your wishes. In the words of the philosopher Georg Hegel, “We do not need to be shoemakers to know if the shoes fit, and just as little have we any need to be professionals to acquire knowledge of matters of universal interest.”

  With regard to decisions about public issues, expertise in terms of skill, knowledge, or experience is often less important than basic questions of values. Is abortion wrong? Is it moral to deny medical care to a child whose parents have no health insurance? Should murderers be put to death? Is it acceptable to perform medical experiments on human beings without their consent? There are no scientific answers to these questions, or to the thousands more like them. They can only be answered by asking ourselves what we believe and what we value. In addressing these questions, finding knowledgeable experts is actually less important than finding experts who share our values. This doesn’t mean that knowledge is unimportant. Knowledge matters, whether you are deciding about abortion or hiring someone to remodel your kitchen. But the contractors who remodel your kitchen don’t get to tell you what color to paint the walls or whether you should have wood versus linoleum floors. Their advice is limited to letting you know how much each option will cost. In a democracy, that’s the kind of deference we should expect from experts on public policy. And a contractor who spends a lot of time studying ways to minimize your outrage is probably not someone you really want to hire.

  When hiring a contractor, you can turn to a state licensing board or the Better Business Bureau to see if someone has valid credentials and a reputation for doing honest work. There is no such system for accrediting public policy experts. However, if someone makes claims of a scientific nature you can ask what kind of education, licensing, and other credentials they possess in the field for which they are claiming expertise. It is also worth asking how experts rank among their peers, although you should bear in mind that every profession has its blind spots and tends to “circle the wagons” against outside criticisms. To judge from the literature of the American Medical Association, for example, you would think that malpractice lawsuits are a bigger problem than actual medical malpractice. As a rule of thumb, you should assume that specialists in any field are given to underestimating harm for which their own profession is responsible.

  Expertise is justifiably linked in the public’s mind to talent, skill, education, and experience. There are also a number of stereotypical attributes that are unjustifiably linked to expertise, and it is important to avoid relying on them. These stereotypes include age, wealth, maleness, whiteness, self-confidence, credentials, specialization, and techno-elitism. When evaluating a speaker’s message, it is worth asking yourself if you are giving him extra points for having gray hair, a deep voice, an impressive-sounding degree, and a distinguished-looking business suit.

  Scientific Uncertainties

  Our society’s esteem for science actually tends to encourage the very unscientific notion that science is a source of infallible truths. In fact, all science is uncertain to some degree. Nature is complex, and research is difficult. The most that science can tell us about a given question is that there is a strong probability that such-and-such an answer is true. To understand scientific information, therefore, it helps to understand something about the statistical techniques that scientists use to quantify uncertainty. One of the classic journalistic textbooks on the subject is News and Numbers: A Guide to Reporting Statistical Claims and Controversies in Health and Other Fields, by the late Victor Cohn, a former science editor at the Washington Post.

  Scientists live with uncertainty by measuring probability. An accepted numerical expression is the P value, a statistical calculation of the probability that a given result could have occurred just by chance. A P value of .05 or less—the conventionally accepted cutoff for “statistical significance”—means there are probably only five or fewer chances in 100 that a result reported in a scientific study could have happened by chance alone. When studying health risks, statistical significance is often impossible to achieve. If something kills one in 1,000 people, you would actually have to study several thousand people in order to achieve a P value of .05 or less, and even then the possibility of other confounding factors might call your result into question. “A condition that affects one person in hundreds of thousands may never be recognized or associated with a particular cause,” Cohn says. “It is probable and perhaps inevitable that a large yet scattered number of environmentally or industrially caused illnesses remain forever undetected as environmental illnesses, because they remain only a fraction of the vastly greater normal case load.”6
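
  To see why, it helps to run Cohn’s arithmetic. Below is a minimal sketch in Python of the standard normal-approximation formula for sizing a two-group comparison of proportions. The specific numbers (a baseline death rate of 1 in 1,000, an exposure that doubles it, the conventional .05 significance cutoff, 80 percent power) are illustrative assumptions, not figures from Cohn’s book.

```python
from statistics import NormalDist

# Illustrative assumptions (not from Cohn's book): a hazard that raises
# a baseline death rate of 1 in 1,000 to 2 in 1,000 among the exposed.
p1, p2 = 0.001, 0.002        # baseline risk, exposed risk
alpha, power = 0.05, 0.80    # significance cutoff, desired power

z = NormalDist().inv_cdf     # standard normal quantile function
z_alpha = z(1 - alpha / 2)   # two-sided test at the .05 level
z_beta = z(power)

# Standard sample-size formula for comparing two proportions.
p_bar = (p1 + p2) / 2
n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
      + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
     / (p1 - p2) ** 2)

print(f"Subjects needed per group: {n:,.0f}")  # roughly 23,500
```

  Even for a risk large enough to double the death rate, the formula calls for tens of thousands of subjects per group, which is why, as Cohn suggests, rarer harms can remain statistically invisible.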

  If you find any of these concepts difficult to grasp, you can take comfort in the fact that you are not alone. “Every major study of statistical presentations in the medical literature has found very high error rates, even among the best journals,” says Thomas Lang, medical editing manager at the Cleveland Clinic Foundation and coauthor of How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. “Many of those errors were serious enough to call the authors’ findings into question.”

  There are some specific guidelines to consider when evaluating scientific information. Cohn recommends that when someone tells you they’ve done a study you should ask, “What kind? How confident can you be in the results? Were there any possible flaws in the study?” The last question is particularly important, he says, because the answer may tell you whether you are dealing with an honest investigator or a salesperson who is trying to convince you of a particular point of view. “An honest researcher will almost always report flaws,” Cohn says. “A dishonest one may claim perfection.” Other questions to ask include:

  • What kind of study protocol was used? Is enough information offered to satisfy you that the research method is sound in its design and that its conclusions are reliable?

  • Why was the study performed?

  • What is the study’s statistical significance and margin of error?

  • Was it submitted to independent peer review? Has it been published in a reputable scientific journal? (Bear in mind, however, that authors can pay to have scientific findings published, even in some peer-reviewed journals.)

  • Are the results consistent with the results from other studies performed by other researchers?

  • Is there a consensus among people in the same field?

  • Who disagrees with you, and why?

  Asking some of these questions may seem daunting. Scientific studies are laden with jargon of the trade that makes it difficult for outsiders to understand—words like “chi-square,” “allele,” “epizootic,” and so forth. Don’t let the language put you off. Often you can find a friendly scientist at your local university who is willing to translate things into plain English. University scientists are trained and paid to be educators, and many of them are happy to assist an intelligent, motivated person with questions. Above all, don’t be afraid to ask, and don’t let the incomprehensible stuff intimidate you. If someone wants you to believe something, the burden of proof should be on them to explain it to you in language that you can understand. If something is too complicated to explain, maybe it’s also too complicated to be safe.

  The Precautionary Principle

  Given the uncertainties inherent to science (and to all human endeavors), we are strong believers in the importance of the precautionary principle, which we discussed in Chapter 6. Throughout this book, we have also stressed the importance of democracy in making decisions about technology and its impact upon people’s lives. The reason that democracy matters in science and scientifically influenced policy is precisely that uncertainty exists and that different people reach different conclusions about important issues. Debate and compromise are the processes through which people resolve these differences. When a new technology is introduced, such as nuclear power or genetic engineering, some people will focus entirely on the potential benefits of the new technology while ignoring the dangers. Others will focus on the dangers and ignore the potential benefits, while other people fill in the continuum of opinion between these two poles. In an ideal decision-making process, the interplay of debate over differing views will hold the “reckless innovators” in check but enable beneficial innovations to move forward after the concerns of the “fearmongers” have been thoroughly vetted in scientific and public forums. This process may slow the pace of introduction of new technologies, which indeed is part of the point of having a democratic decision-making process.

  By training and enculturation, most experts in the employ of government and industry are technophiles, skilled and enthusiastic about the deployment of technologies that possess increasingly awesome power. Like the Sorcerer’s Apprentice, they are enchanted with the possibilities of this power, but often lack the wisdom necessary to perceive its dangers. It was a government expert, Atomic Energy Commission chairman Lewis L. Strauss, who promised the National Association of Science Writers in 1954 that atomic energy would bring “electrical energy too cheap to meter” within the space of a single generation.7 Turn to the back issues of Popular Science magazine, and you will find other prophecies so bold, so optimistic, and so wrong that you would be better off turning for insight to the Psychic Friends Network. If these prophecies had been correct, we should by now be jet-packing to work, living in bubble-domed cities beneath the ocean, colonizing the moon and Mars. The cure for cancer, like prosperity, is always said to be just around the corner, yet somehow we never actually turn that corner. Predictions regarding computers are notorious for their rhetorical excess. “In from three to eight years, we will have a machine with the general intelligence of an average human being,” MIT computer scientist Marvin Minsky predicted in 1970. “I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months, it will be at a genius level, and a few months after that, its power will be incalculable.”8 Expert predictions of this sort have been appearing regularly ever since, although the day when computers will be able to grease your car (let alone read Shakespeare) keeps getting pushed back.

  The views of these techno-optimists deserve to be part of the decision-making process, but they should not be allowed to crowd out the views and concerns of the skeptics—the people who are likely to experience the harmful effects of new technologies and who deserve to play a role in deciding when and how they should be introduced. Just as war is too important to leave to the generals, science and technology are too important to leave in the hands of the experts.

  Opponents of the precautionary principle have caricatured it as a rule that “demands precautionary action even in the absence of evidence that a health or environmental hazard exists” and says “if we don’t know something we mustn’t wait for studies to give answers.” This is not at all its intent. It is a guide for policy decisions in cases where knowledge is incomplete regarding risks that are serious or irreversible and that are unproven but plausible in the light of existing scientific knowledge. No one is suggesting that the precautionary principle should be invoked regarding purely fanciful risks. There are legitimate debates over whether a risk is plausible enough to warrant the precautionary principle. There are also reasonable debates over how to implement the precautionary principle. However, groups that seek to discredit the principle itself as “unscientific” are engaged in propaganda, not science.

  Follow the Money

  When you hire a contractor or an attorney, they work for you because you are the one who pays for their services. The PR experts who work behind the scenes and the visible experts who appear on the public stage to “educate” you about various issues are not working for you. They answer to a client whose interests and values may even run contrary to your own. Experts don’t appear out of nowhere. They work for someone, and if they are trying to influence the outcome of issues that affect you, then you deserve to know who is paying their bills.

  Not everyone agrees with this position. Jeff Stier is the associate director of the American Council on Science and Health (ACSH), which we described in Chapter 9. Stier goes so far as to claim that “today’s conventional wisdom in favor of disclosing corporate funding of research is a ‘new McCarthyism.’ ” Standards of public disclosure, he says, should mirror the standards followed in a court of law, where “evidence is admissible only if the probative value of that evidence exceeds its prejudicial effect.” To disclose funding, he says, can have a “prejudicial effect” if it “unfairly taints studies that are scientifically solid.” Rather than judging a study by its funding source, he says, you should simply ask whether its “hypothesis, methodology and conclusion” measure up to “rigorous scientific standards.”9 When we asked him for a list of ACSH’s corporate and foundation donors, he used these arguments to justify his refusal. With all due respect, we think Stier’s argument is an excuse to avoid scrutiny. Even in a court of law, expert witnesses are required to disclose what they are being paid for their testimony.

  Some people, including the editors of leading scientific journals, raise more subtle questions about funding disclosure. The problem, they say, is knowing where to draw the line. If someone received a small grant 20 years ago from a pharmaceutical company to study a specific drug, should they have to disclose that fact whenever they comment about an entirely different drug manufactured by the same company? And what about non-financial factors that create bias? Nonprofit organizations also gain something by publishing their concerns. They may have an ideological ax to grind, and publicity may even bring indirect financial benefits by helping attract new members and contributions. Elizabeth Whelan of ACSH made these points during a letter exchange with Ned Groth of the Consumers Union. “You seem to believe that while commercial agendas are suspect, ideological agendas are not,” Whelan complained. “This is a purely specious distinction. . . . A foundation’s pursuit of an ideological agenda—perhaps one characterized by a desire for social change, redistribution of income, expanded regulatory control over the private sector, and general promotion of a coercive utopia—must be viewed with at least as much skepticism and suspicion as a corporation’s pursuit of legitimate commercial interests.”10

  There is a certain amount of truth to Whelan’s line of reasoning. Nevertheless, corporate funding is particularly important to track, for the following reasons:

  • Corporations are consistently driven by a clear and self-evident bias—namely, the desire to maximize profits, whereas assessing “ideological bias” in nonprofit foundations is itself subjective and ideological.

 
