Bullshit and Philosophy


by Reisch, George A. ; Hardcastle, Gary L.


  Confirmation bias helps to explain our dogged resistance to changing our beliefs.36 But it may appear that confirmation bias must play only a negligible role in the initial formation of new beliefs. As such, it may appear to be of little aid to the propagandist or the bullshit artist in gaining initial leverage over our beliefs. Though there is a certain truth to this, we should not underestimate the extent to which confirmation bias aids the spread of bull. One has only to consider the rise of information cocoons like Fox News, right-wing talk radio, Air America, and the fragmented and unruly blogosphere. Information cocoons systematically promote a certain narrow range of views and outlooks and systematically misrepresent or exclude alternative points of view and competing sources of evidence. That more and more Americans self-consciously seek their news and information from information cocoons is the direct result of confirmation bias run rampant. Though the creators of such cocoons are merely responding to our own self-generated demand, they are nonetheless able to exert great influence over public discourse through their highly skilled management of such cocoons. Once an information consumer’s confirmation bias has led her to give herself over to the managers of an information cocoon, she has, I suggest, made herself easy pickings for the propagandist, the spinner, and the bullshit artist.

  There are, to be sure, a host of foibles of the mind that more directly and immediately affect the initial formation of our beliefs—and preferences—rather than just the dogged maintenance of them. I have in mind our susceptibility to framing effects on the formation of beliefs and preferences.37 Imagine that the US government is preparing for an outbreak of the Avian flu. Suppose that without intervention the disease is expected to kill, say, six thousand people. Two alternative programs to combat the disease have been proposed. The exact scientific estimates of the consequences of each program are as follows:

  If program A is adopted, two thousand people will be saved.

  If program B is adopted, there is a one-third chance that six thousand people will be saved and a two-thirds chance that no one will be saved.

  When experimental subjects are asked to choose between these programs, seventy-two percent choose program A, while twenty-eight percent choose program B.

  Notice that the “expected return” in lives saved of the two programs is identical. So why are subjects not indifferent between the two programs? Because we tend to be risk averse when we choose between outcomes all of which have a positive expected return. That is, people tend to prefer a sure thing to a risky thing of equal or greater expected return when both expected returns are positive. This means that people tend to prefer the certainty of saving two thousand lives over an alternative that risks losing more lives, even if that alternative involves the possibility that more lives will be saved. In preferring A to B, people assign disproportionately greater weight to the two thousand additional lives that might be lost than to the additional four thousand lives that might be saved by pursuing program B.

  Now consider an alternative scenario that is typically presented to a different set of experimental subjects. As before, the government is preparing for an outbreak of the Avian flu. Two programs are being contemplated in response to the outbreak. The exact scientific estimates of the effectiveness of the programs look like this:

  If program C is adopted, four thousand people will die.

  If program D is adopted, there is a one-third chance that no one will die and a two-thirds chance that six thousand people will die.

  Presented with a choice between programs C and D, seventy-eight percent of experimental subjects will choose program D, while twenty-two percent choose program C. Again, the expected return, this time in lives lost, is identical for the two programs. And again, we might wonder why subjects should prefer plan D to plan C. The answer is that people tend to be risk seeking with respect to losses. This means that people tend to prefer pursuing the chance that no one will die—even if pursuing that chance means running the risk of more deaths—to the certainty that fewer will die. In preferring D to C, people are, in effect, assigning disproportionately less weight to the two thousand additional lives that might be lost than to the four thousand additional lives that might be saved by pursuing plan D over plan C.

  What is striking about these results is the fact that program C and program A are identical programs. They are merely described differently—one in terms of lives lost, the other in terms of lives saved. If we pursue program A, two thousand people will be saved. But that just means that four thousand will die who otherwise might not have. Exactly this set of outcomes is envisioned by program C. Similarly, programs B and D also envision the same exact outcomes, with just the same probabilities. But B describes those outcomes in terms of lives saved, while D describes those outcomes in terms of lives lost. It seems painfully obvious that whatever rational basis there can be for preferring A to B, or vice versa, obtains equally well for the choice between C and D. But over and over again, experimenters find the choice between equivalent scenarios to be highly sensitive to the way in which the choice is framed.
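The equivalence of the four programs is a matter of simple arithmetic, and it can be checked directly. The following Python sketch uses the program labels and figures from the text above (the representation of each program as a list of probability-weighted outcomes is my own illustrative device):

```python
# Each program is a list of (probability, deaths) outcomes,
# out of the six thousand people expected to die without intervention.
AT_RISK = 6000

programs = {
    "A": [(1.0, AT_RISK - 2000)],                    # 2,000 saved for certain
    "B": [(1/3, 0), (2/3, AT_RISK)],                 # 1/3: all saved; 2/3: none saved
    "C": [(1.0, 4000)],                              # 4,000 die for certain
    "D": [(1/3, 0), (2/3, AT_RISK)],                 # 1/3: no one dies; 2/3: all die
}

def expected_deaths(outcomes):
    """Probability-weighted number of deaths for a program."""
    return sum(p * d for p, d in outcomes)

for name, outcomes in programs.items():
    print(name, expected_deaths(outcomes))
```

All four programs carry an expected toll of four thousand deaths (equivalently, two thousand lives saved); A and C are literally the same list of outcomes once "saved" is translated into "die," as are B and D.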

  Our sensitivity to the way a set of alternatives is “framed,” together with our insensitivity to that which is invariant across different ways of framing the same set of alternatives, provides powerful leverage for purveyors of spin, propaganda, and bull. To take a not altogether fanciful example, imagine two politicians, Smith and Jones. Smith wants to convince the voters that program A (that is, C) ought to be pursued. She wants to do so because program A will be highly beneficial to a pharmaceutical company that has made significant contributions to her campaign. On the other hand, because a certain medical supply company will benefit greatly from program D (that is, B), Jones wants to convince the voters that program D ought to be pursued. Smith knows that if she succeeds in framing the choice in terms of potential lives saved, she has a better chance of swaying the voters. Jones knows that if she succeeds in framing the issue in terms of potential lives lost, her arguments have a better chance of swaying the voters. Neither has an incentive to point out the frame-invariant regularities. Both have an incentive for exploiting our susceptibility to framing effects. To that extent, they co-operate in jointly misleading the voter into thinking that he has been subject to a real debate about competing options fairly and dispassionately considered. In reality, he has been no more than fodder in a war over the framing of the issues.38

  Another kind of framing effect has to do with simple if-then reasoning. Human beings have a complex understanding of the causal structure of the world, more so than any other creature on this planet. It would not be unreasonable to expect that we as a species must be rather adept at simple if-then reasoning. Not just our understanding of the causal structure of the physical world, but all of social life would seem to be founded on our capacity for if-then reasoning. But, surprisingly, we are not as adept at such reasoning as one might antecedently have expected. Consider the so-called Wason selection task. That task tests for the ability to falsify conditional hypotheses. Here is a typical experimental set-up. Subjects are given four cards. They are told that each card has a number on one side and a letter on the other. They are asked to name those cards and only those cards which should be turned over in order to determine whether the following rule is true or false of these four cards:

  If a card has the letter D on one side, it has the number 3 on the other

  D

  A

  3

  7

  Applying straightforward propositional logic, the correct cards are the D card and the 7 card. If a D is on the other side of the 7, then the rule is falsified. If anything other than a 3 is on the other side of the D card, the rule is likewise falsified.
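The selection logic can be made explicit in a short sketch. A card is worth turning over just in case some possible hidden face would yield a counterexample to “if D, then 3”—that is, a D paired with a non-3. (Python here; the pools of candidate letters and numbers beyond those shown on the cards are illustrative assumptions.)

```python
# Which cards could possibly falsify "if D on one side, then 3 on the other"?
LETTERS = ["A", "B", "C", "D"]   # illustrative pool of hidden letter faces
NUMBERS = [1, 3, 5, 7]           # illustrative pool of hidden number faces

def can_falsify(visible):
    """True if some hidden face would pair a D with a non-3."""
    if visible in LETTERS:       # we see a letter, so the hidden face is a number
        return visible == "D" and any(n != 3 for n in NUMBERS)
    else:                        # we see a number, so the hidden face is a letter
        return visible != 3 and "D" in LETTERS

cards = ["D", "A", 3, 7]
print([c for c in cards if can_falsify(c)])   # prints ['D', 7]
```

Only the D card and the 7 card can reveal a counterexample, which is why they, and only they, need to be turned over.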

  Subjects perform remarkably poorly on this task. Typically, less than twenty-five percent of subjects give the correct choice. Indeed, in some versions of Wason’s original experiment the figure was as low as five percent. The most frequent choices are that only the D card need be turned over or that the D card together with the 3 card should be turned over. The 7 card is seldom chosen by subjects. Moreover, subjects are remarkably resistant to training on this task. If shown the correct response for a particular run, they get the point, but they seem to lack the ability to generalize to new runs of essentially the same task.

  Notice that turning over the 3 card cannot falsify the rule. Whatever is on the other side of the 3 card is consistent with the rule. So there is a weak sense in which the 3 card might be thought to “confirm” the rule. Perhaps that is why subjects tend to turn it over. So we may be seeing our old friend confirmation bias rearing its head again.

  The persistent inability of subjects to perform well on this and other tests that would seem to require little more than a certain minimal logical acumen has tempted many to conclude that human cognition is irredeemably irrational. But that conclusion is hasty and crude. For one thing, whatever can be said for the rational powers of this or that individual mind, our amazing cognitive achievements as a species suggest that human cognition, taken as a whole, must be one of natural selection’s most consequential innovations. Only the first advent of sexual reproduction, I suspect, was more consequential. I do not mean to deny that most of us probably are destined for some degree of cognitive mediocrity. But the real key to our cognitive success as a species rests, I conjecture, on our evolved capacity for culture. Where cultural mechanisms function to spread the benefits of one or more individual’s cognitive innovations and successes to others, it is not necessary that everyone be an Einstein, Newton, or Leonardo. In effect, our shared capacity for culture enables the many to free-ride on the cognitive achievements of the few. This is a fortunate fact indeed and another testament to natural selection’s sheer brilliance at mind-design.

  Even granting that many or most of us may be cognitive free-riders on the astounding cognitive achievements of the few, it would still be a mistake to conclude too hastily that human minds are irredeemably irrational. Sometimes, in fact, our sensitivity to framing effects works to give our minds a greater semblance of rationality. For example, performance on the Wason selection task is known to improve dramatically when the conditional in question is “re-framed” in terms of something like a social contract. You are a bartender. Your task is to see that there is no underage drinking. That is, you must see to it that the following conditional is true: If someone is drinking beer, then she must be at least 21 years old. Which cards should you turn over?

  drinking beer

  drinking coke

  25 years old

  16 years old

  From a purely logical point of view, this problem has exactly the same structure as the earlier one. Nonetheless, subjects perform significantly better on the second version of the task than on the first.
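That structural identity can be exhibited by re-running the falsification test from the abstract version with the bartender’s cards substituted for letters and numbers (a sketch; the pools of drinks and ages are illustrative assumptions):

```python
# Which cards could possibly violate "if drinking beer, then at least 21"?
DRINKS = ["beer", "coke"]   # illustrative pool of hidden drink faces
AGES = [16, 25]             # illustrative pool of hidden age faces

def can_violate(visible):
    """True if some hidden face would pair beer with an underage drinker."""
    if visible in DRINKS:   # we see a drink, so the hidden face is an age
        return visible == "beer" and any(a < 21 for a in AGES)
    else:                   # we see an age, so the hidden face is a drink
        return visible < 21 and "beer" in DRINKS

cards = ["beer", "coke", 25, 16]
print([c for c in cards if can_violate(c)])   # prints ['beer', 16]
```

The beer drinker plays the role of the D card and the 16-year-old the role of the 7 card; the logic is unchanged, yet subjects who stumble on the abstract version sail through this one.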

  Some evolutionary psychologists have concluded on the basis of this sort of data that natural selection has endowed the human mind with a special purpose “cheater detection” module.39 Since the making and enforcing of social contracts of varying scope and complexity is no doubt a core human competence, it would not be at all surprising if we were somehow naturally and specially adapted to be able to determine swiftly and reliably whether a contract was being respected or violated. Still, it is striking that we are apparently unable to generalize, to transport what works in a given problem domain to different but structurally similar problem domains. The evolutionary psychologist concludes, partly on the basis of such inability, that our minds are not general-purpose problem solving machines. Rather, they were specifically adapted to solve specific cognitive problems that were of recurring significance in the environments in which we evolved. Often those recurring problems came with what we might call built-in frames that enabled certain structures in our mind to quickly and effortlessly recognize the kind of reasoning that had to be applied. In his book, How the Mind Works (New York: Norton, 1997), Steven Pinker puts it nicely:

  No organism needs content-free algorithms applicable to any problem no matter how esoteric. Our ancestors encountered certain problems for hundreds of millions of years—recognizing objects, making tools, learning the local language, finding a mate, predicting an animal’s movements, finding their way—and encountered certain other problems never—putting a man on the moon, growing better popcorn, proving Fermat’s last theorem. The knowledge that solves a familiar kind of problem is often irrelevant to any other one. The effect of slant on luminance is useful in calculating shape but not in assessing the fidelity of a potential mate. The effects of lying on tone of voice help with fidelity but not with shape. Natural selection does not care about the ideals of liberal education and should have no qualms about building parochial inference modules that exploit eons-old regularities in their own subject matters. (p. 304)

  If this is right, then it is not altogether surprising that at least some framing effects actually improve the functioning of the human mind. And that conclusion provides some grounds for hope that if we could always but frame matters rightly, much cognitive detritus might well be swept away. Once again, we see that for good or for ill, he who controls the frame may well control all.

  Reclaiming the Public Square

  Our all-too-brief examination of just a few of the many cognitive foibles of the human mind supports both a bleak conclusion and a more hopeful one. The hopeful conclusion is that our minds appear to be finely tuned instruments, well adapted for solving the plethora of recurrent cognitive challenges that were endemic in the information processing environments of our hunter-gatherer progenitors. To the extent that contemporary information processing environments match those in which we were designed to function, our cognitive capacities serve us well. Unfortunately, the modern world subjects human cognition to stresses and strains unlike anything encountered on the ancient savannah. We are bombarded with information and misinformation in a dizzying variety, often intentionally framed in ways unsuited for our natural cognitive capacities. The mismatch between our cognitive capacities and the informational environments in which we now find ourselves partly explains both why there is so much bull, spin, and propaganda about, and why we are so often taken in by it.

  Now it bears stressing that the fundamental cognitive architecture of the human mind was fixed eons ago on the ancient savannah. So my claim is not that contemporary humans, as such, are any more or less susceptible to bullshit and other forms of misrepresentation than humans have ever been. Our minds are as they have always been. Only our circumstances have changed. Nor do I wish to deny the evident powers and achievements of the evolved human mind. The long march of human history has decisively established what a wondrous instrument the human mind is. It has scaled great cognitive heights. It has peered deeply into the innermost secrets of the natural world; it has given rise to cultures and to social formations complex and various; and it has even plumbed the depths of its own operations.

  Lest I be accused of nostalgia for some bygone cognitive order, let me stress that I am fully aware that in every age and epoch, the mind has produced a profuse abundance of cognitive detritus. In every age of humankind, superstition, illusion, and falsehood of every variety have existed alongside the highest art and deepest knowledge that the age has mustered. Moreover, we are blessed to live at a time when human beings collectively have scaled greater cognitive heights than humans ever have before. We see far more deeply into the workings of everything natural and human. So how could it possibly be that there is more cognitive detritus about in our own times?

  The answer is, I think, twofold. First, the masters of bullshit, propaganda, and spin have paradoxically been aided by our improved understanding of the workings of the human mind. In our times, the masters of the dark arts are astute students of the enduring foibles of the human mind. Second, the means of public representation and persuasion available to the masters of the dark arts have a vastly greater reach and efficacy than they have ever had. Consequently, in our own times, the masters of the dark arts are vastly more effective than their predecessors could have dreamt of being.
  I don’t mean to say that those who seek a hearing for sweet reason in the public square have no weapons of their own. The battle must be waged on at least two different fronts. First, it must be waged in the trenches of education. We must seek to instill in our children a distaste for all dogma, an enduring suspicion of all easy and comforting falsehoods. We must instill in them an insatiable appetite for unyielding argument, a propensity to seek out and confront even the most disquieting evidence, even if doing so would undermine their or our most cherished beliefs. They must learn never to take at face value frames that are merely given. They must learn the skills of re-framing, the habit of asking after that which is invariant across alternative frames. If our children are educated in this way, their minds will provide far less fertile ground for the spread of bullshit.

  Though such a mind-by-mind slog in the trenches of education is necessary, it will not suffice. In addition, we must reconfigure the very means of public representation and persuasion. In our times, a narrow, self-serving elite, interested mostly in its own power, wealth, and prestige, enjoys a certain privileged access to the means of public representation and persuasion. We must seek to diminish that access by all the ways and means available to us—via the fragmented and unregulated internet, via politics, in still unoccupied small niches of the mass media. The purveyors of institutional and official bullshit will of course not yield easily. They are powerful, clever, and determined. Moreover, experience bears ample witness to the fact that good discourse does not spontaneously drive out bad. Neither, however, will bad discourse wither on its own. If bullshit is to be driven from the public square, only those who seek more than bullshit can drive it out. So let the battle be joined.

 
