The analysis of inadequate evidence divides into two main categories: causal fallacies and missing evidence.
Oversimplification and post hoc were covered in previous chapters, and confusion of necessary with sufficient was covered above. Other causal fallacies we should know about include neglect of a common cause, confusion of cause and effect, the less the better, the more the better, the ubiquitous gambler’s fallacy, and, last but not least, the psychological fallacy.
An argument that neglects a common cause is inadequate.
Two seemingly related events may not be causally related at all, but they may relate to a third item that is their common cause. Two events associated in time do not imply cause and effect because they could relate to something else. That’s what we learned in the post hoc fallacy. Here we have the same thing. Two events associated in any way do not imply cause and effect because they might better relate to something else. Because lightning seems to precede thunder, many observers were led to believe that lightning causes thunder. It turns out that lightning and thunder are both caused by the sudden intra-atmospheric discharge of electricity. Because light travels faster than sound, the light from the discharge arrives before the sound, even though both were generated at the same time by the same electric discharge.
Another example: “Alcoholics tend to be undernourished. Poor diet must contribute to alcoholism.” More likely alcoholics eat poorly because they are too busy drinking. In other words, the malnutrition and the alcoholism relate to a common cause—the addictive effects of ethyl alcohol.
A third example: “Business executives have very large vocabularies. Therefore, if you want to have a successful business career, study words.” Here, business executives are linked with large vocabularies, and the vocabulary is asserted as the cause of business success. More likely, business success and a large vocabulary are both effects of common factors, including college education, extensive reading, high IQ, and so forth. Since both items relate to a third, unmentioned set of items, the evidence is inadequate to support a causal connection between success and vocabulary, and therefore also inadequate to support the prediction that studying words would make for success in business.
A last example: “I wish I had a giant practice like yours. But my patients don’t love me the way your patients love you.”
“Just return their calls.”
The two items, a big practice and the love the patients have for their doctor, relate to a third factor, which is more likely to be the controlling one, as the second physician reveals: he returns patients’ calls.
An argument that confuses cause with effect does not provide adequate evidence for a conclusion.
When I was a Boy Scout at summer camp, every Sunday we enjoyed fried chicken, which was the only decent food served all week. Sunday was visitors’ day, and my parents were always impressed at how well we seemed to eat at camp that day. To argue that my parents seemed always to know when we had a good meal and only visited on that day would be to miss the point of cause and effect and to get things backward. The camp served a good meal on visitors’ day to impress the parents.
Scene in the Houston unemployment office: “No wonder these people can’t get jobs. They are so irritable!” Reversing the cause and effect gives a more plausible explanation. The unemployed are irritable because they have no jobs.
Or: “The reason Bill is so irritable is that the customers haven’t been giving him tips lately.” It is more likely that the irritability caused the decline in tips than vice versa.
Or: “The homeless are homeless because they have no homes.” I’ll leave this tautology, which also neglects a common cause, for you to work out. Hint: The homeless are not homeless because they have no homes. The homeless are homeless for another reason. What is that reason?
The less the better and (the closely related) the more the better fallacies are both inadequate.
This was covered partially but deserves elaboration. Less is not necessarily better. Stress is bad, but no stress is bad, too. In high doses, vitamin B6 (pyridoxine) is toxic to nerves. But without small amounts of B6, the nerves can’t function. Too much is bad, and too little is bad. What is needed is the correct amount, no more and no less. Therefore, arguments based on extrapolation of less and more without evidence are inadequate.
Take this statement, for example: “Fat is bad. It causes heart attacks and strokes. Therefore, no fat at all is best.” Without dietary fat, the fat-soluble vitamins A, D, E, and K can’t be absorbed. Since these vitamins are essential to life, a diet without fat would result in serious illnesses.
The more the better fallacy is committed more often than the less the better fallacy, largely because in many cases the effects of things do increase as we increase their quantity. But keep in mind that a pinch of salt may be fine, while twenty pinches can ruin the taste. Much of “more is better” is an overgeneralization and an oversimplification, proving that a fallacy may overlap several areas of logical interest and bear dual citizenship in the country of boo-boo, blunder, miscalculation, and error.
Beyond a certain point, a drug’s benefits often do not increase with dose, but its side effects may.
Always ask for evidence that increasing the dose will increase the benefit without increasing the side effects. Colistin is a great antibiotic for various severe kidney infections, but in high doses colistin causes kidney failure, and excessive intake may even lead to death. The right dose conforms to the reality principle. The wrong dose does not.
The gambler’s fallacy is both inadequate and irrelevant.
The gambler’s fallacy is a humdinger and if you remember anything from this book, please remember this. The defective reasoning is so common that I believe there is an epidemic of gambling out there accompanied by an epidemic of the gambler’s fallacy.
That a chance event has had a run does not significantly alter the probability of its occurrence in the future. Those who think the probability is altered commit the gambler’s fallacy. The fallacy is named after gamblers who erroneously think that the chances of winning are better or significantly improved because of a certain run of events in the past: “I can’t lose because I’m hot”; “My luck has got to change because I’ve been losing all night.” Both these people are unaware that a chance event, such as the outcome of a coin toss, a roll of dice, or the spin of a roulette wheel, is totally independent of all the tosses, rolls, or spins that preceded it.
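A quick simulation makes the independence concrete. The sketch below is my illustration, not the author’s: it assumes a fair coin, generates a long run of tosses, and measures how often heads follows a streak of three heads. If streaks mattered, the measured frequency would drift away from fifty-fifty; it stays put.

```python
import random

# A minimal sketch, assuming a fair coin: does the next toss become
# more likely to be tails after a run of three heads? Independence
# says no; the frequency should stay near 0.5.
random.seed(42)  # fixed seed so the sketch is reproducible

tosses = [random.choice("HT") for _ in range(1_000_000)]

# Collect the toss that immediately follows every run of three heads.
after_run = [tosses[i + 3]
             for i in range(len(tosses) - 3)
             if tosses[i:i + 3] == ["H", "H", "H"]]

heads_fraction = after_run.count("H") / len(after_run)
print(f"P(heads | just saw three heads) is about {heads_fraction:.3f}")
# Prints a value very close to 0.5: the run changes nothing.
```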
“Honey, let’s try again. Since we’ve had three girls in a row, the next one has to be a boy.” Probably not. In fact, the chance of having a boy is almost exactly the chance of having a girl, namely, one chance in two, that is, fifty-fifty. One cannot infer a greater probability of having a boy from the chance events of the past because the evidence supporting such a claim is not only inadequate but also nonexistent.
Take, for example, “I have been playing the Texas state lottery every week for five years. I have to win soon.” The implicit premise represents a faulty causal analysis of chance events and provides no support for the conclusion. The chances of winning any particular lottery do not improve as a result of past disappointments.
Or this: “I haven’t caught a bluefish in the last fifteen times I have been fishing. Surely, I’ll catch one today.” Don’t hold your breath.
Or this: “The market has to turn around soon because we have had three down years in a row, and that hasn’t happened since the 1940s.” I have been hearing that for a while. Whether and when the market turns depends not on the duration of the losing streak already experienced, not on the past history of the Dow, but on a host of other realities, including government policy, interest rates, energy costs, CEO psychology, war, and so forth. It is an oversimplification to conclude that a market in decline for three years must soon turn around. Those who felt that way about the Japanese stock market have been caught in a decline that has lasted over a decade and is likely to continue because the fundamentals that caused the decline have not been corrected.
The psychological fallacy is inadequate justification.
Any conclusion must be supported by evidence and reasons. After that, we can go on to an explanation by citing what we think are probable causes. That explanation can support the conclusion, just as the discovery of a motive can support a conclusion about the reason for a crime. But an explanation per se cannot justify an action. That someone hates his mother-in-law doesn’t justify killing her. To justify an action, we must establish the (moral) grounds for believing that the action was right. Ultimately, this justification must appeal to moral principles, self-defense being one such justification for homicide. Thus, moral justification must be radically separated from explanation. No explanation in and of itself justifies a conclusion.
Example: “Why did you [stab that woman to death], son?” asked Frank O’Connor, the district attorney of Queens, New York.
“She wouldn’t let go of the pocketbook.”[6]
This kid certainly has a reasonable explanation of why he stabbed the woman. It is an explanation that we understand and that we believe is true. But that doesn’t justify the killing. In fact, the law takes a rather dim view of murders committed during a felony. Texas law considers such crimes capital offenses punishable by death.
Psychological explanations are not justifications.
It is true that the explanation of an act might give us the psychodynamics, the psychological forces, emotions, habits, unconscious drives, purposes, attitudes, and so on, that drove someone to commit the act. The daunting question, then, is, “Does such an explanation justify the act?” In general, the answer to this question is no. Explaining things just doesn’t justify them any more than disclosure of a conflict of interest justifies that conflict of interest. Disclosure and conflict of interest are two different things. Explanation and justification are two different things. Never the twain shall meet. When an explanation is offered as a justification, we are led from the truth to error and therefore commit the psychological fallacy.
Andrea Yates, nurse, honor student, and mother, killed her five children by drowning them in the bathtub. Multiple psychiatrists and psychologists took the stand and explained the complex delusional beliefs that led Andrea to commit this act. The big question, however, was not whether she had reasons for doing what she did (she obviously had reasons—they were crazy, but they were there), but whether her act was morally justified. The case did not turn on why she killed the kids but on the jury finding that Yates knew at the time what she was doing and that what she was doing was wrong. The jury found that her act was not morally justified and sentenced her to life in prison. In so doing, the jury understood the psychological explanation for the crime but did not think that the psychological explanation justified, on moral grounds, the killing of five children. In reaching this conclusion, the jury followed Texas law, which requires that if the person knew what she was doing and that it was wrong, then, regardless of the explanation, including well-grounded psychological explanations, the act is a crime and punishable as such.
Missing evidence is inadequate.
To reach a conclusion on the basis of inadequate or missing evidence is a mistake and will lead away from truth toward error. In this connection, we have already discussed arguing from ignorance, contrary-to-fact hypotheses, the fallacy of groupthink and popular wisdom, partial selection of evidence, and special pleading. There remain some other things we should mention: insufficient evidence, omission of key evidence, the fallacy of impossible precision, and evidence taken out of context.
Insufficient evidence is inadequate.
To justify a conclusion there must be enough evidence. If there is not enough, the evidence is inadequate and the conclusion is not reasonable.
All contradictions cancel themselves, resulting in zero evidence, and zero evidence is insufficient to support any conclusion. Therefore, a contradiction is always inadequate to justify a conclusion.
Note on the refrigerator door:
I hate you, Mommy.
Love,
Jimmy
Well, which is it? Does he hate his mother, or does he love her? Both? Neither? We don’t know. The evidence is contradictory. If he hates her, why did he close his note with the words Love, Jimmy? If he loves her, why did he say he hated her? The two statements contradict each other and therefore provide no evidence for either conclusion.
What about the man who says, “I don’t mind blacks moving into the neighborhood. I just don’t want them on my block.” Is he prejudiced or not? If he is prejudiced, why does he say he doesn’t mind? If he is not prejudiced, why does he say he does mind?
The falsity of these statements arises from the denial of the very statement made. In standard form, it might look like this:
S and not-S
where S is any statement and not-S is the denial of that same statement.
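A minimal sketch (my illustration, in Python rather than the book’s notation) checks both possible truth values of S and confirms that the compound S and not-S comes out false either way, which is why it can carry no evidential weight:

```python
# A minimal sketch: evaluate "S and not-S" for both possible truth
# values of S. The compound is false either way -- a contradiction
# can never be true, so it supplies no evidence at all.
for S in (True, False):
    print(f"S={S}, not-S={not S}, (S and not-S)={S and (not S)}")

# S=True, not-S=False, (S and not-S)=False
# S=False, not-S=True, (S and not-S)=False
```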
“Experience teaches that men learn absolutely nothing from experience.” This quotation, allegedly from George Bernard Shaw, boils down to an implied contradiction. Can you see why?
With this understanding in hand, we are now prepared to answer that immortal question, “What happens when an irresistible force runs up against an immovable object?”
The answer is nothing.
The answer is nothing because an irresistible force and an immovable object cannot exist at the same time in the same place. They cannot coexist because they contradict each other: an irresistible force is incompatible with an object that can resist any force. This is the equivalent of saying, “There is a force F and an object O such that F can move O and F cannot move O.”
Or: “I have no problems with hippies. I just don’t approve of their lifestyle.”
Contradictions are easy to spot, and they flag in a most dramatic way the absence of evidence. Absence of evidence may indeed not be evidence of absence. But absence of evidence fails to meet the evidence requirement of the uniform field theory. When there is no evidence, we just don’t know. When we don’t know, we cannot reach a conclusion about where the truth is or what it is.
Most times, however, the evidence is not absent. Most times the evidence is merely inconsistent or insufficient to reach a conclusion. When the evidence is weak, skimpy, or deficient in number, kind, or weight, we must reserve judgment and not jump to hasty, unwarranted conclusions. Rushes to judgment, hasty decisions, and premature actions are rarely necessary, especially when the issues are complex, and rushed decisions often result in disaster.
N of one is often inadequate evidence to reach a general conclusion.
“The Italian butcher cheated me on that chuck chop that I bought. When I got home, it weighed 0.8 pounds, not the 1.0 pound I paid for. All Italians are cheats.” The evidence (only one case, N=1) is too small to conclude that all Italians are cheats. There is simply not enough data to justify that overly general conclusion. The evidence is relevant because the most likely explanation is that the Italian butcher did cheat. But to conclude from a sample of one instance that all Italians are cheats doesn’t follow. One might conclude that that particular Italian butcher is a cheat. The evidence appears strongly in favor of that conclusion. Certainly if he tried to cheat the next time around, the conclusion would be even more firmly established. But there still wouldn’t be enough evidence to implicate all Italian butchers, butchers in general, much less all Italians. The flaw has to do with insufficiency of data. The quantity of evidence is just too limited and the sample too small to constitute evidence sufficient to lead to the particular conclusion about all Italians.
Another example: “My ex and I never got along. It was so bad, I don’t see why anyone would want to get married.” One experience with marriage convinced him that marriage is no good for him, for his friends, or for anyone else. Complete evaluation of the pros and cons of marriage requires much more evidence than the experience of one couple. Perhaps their marriage hit the shoals for reasons other than those that relate to the institution of marriage per se. Perhaps the marriage failed because of flaws in the wife, in the husband, or in both. Perhaps the mother-in-law was at fault.
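One way to see why a single case proves so little is to look at how uncertainty shrinks with sample size. The sketch below is illustrative only: it assumes the standard normal-approximation confidence interval for a proportion, which is admittedly crude at tiny samples, but that crudeness is exactly the point.

```python
import math

# A rough sketch, assuming the standard normal-approximation
# confidence interval for a proportion:
#     margin of error ~ 1.96 * sqrt(p(1 - p) / n)
# With n = 1 the uncertainty swamps the estimate entirely;
# only a larger sample narrows it.
def margin_of_error(p_hat: float, n: int) -> float:
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (1, 10, 100, 1000):
    # Worst-case uncertainty occurs at p_hat = 0.5.
    print(f"n = {n:4}: estimate 0.50 +/- {margin_of_error(0.5, n):.2f}")

# n =    1: estimate 0.50 +/- 0.98   (tells us essentially nothing)
# n = 1000: estimate 0.50 +/- 0.03
```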
Unrepresentative data is partially selected and insufficient.
Closely related to insufficient evidence is the error of attributing to a larger group some opinion found in an unrepresentative or biased sample: “A recent survey shows that 98 percent of people support private ownership of machine guns.”
The survey might have shown that if it were taken among the licensed machine gun dealers of the United States. But it would be a mistake to conclude that because this group of people feels that way, most people share this opinion. Every day I am bombarded by opinion data gathered by a political party or by an advocacy group that tells me stuff I know is highly suspect. If one were interested in campus opinions about football, one would not survey just the members of the varsity club. Nor would one survey just the nonathletes.
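A hedged sketch with invented numbers shows how badly a biased sample can mislead. Nothing here comes from an actual survey: I simply posit a population in which 30 percent hold some opinion and a small subgroup in which 98 percent do.

```python
import random

# A purely illustrative sketch with invented numbers: 30% of a
# population holds some opinion, but within a small self-interested
# subgroup (think licensed machine gun dealers) 98% hold it.
# Surveying only the subgroup misstates the population badly.
random.seed(0)

population = [random.random() < 0.30 for _ in range(100_000)]
subgroup = [random.random() < 0.98 for _ in range(500)]

fair_sample = random.sample(population, 1_000)
print(f"random sample of the population: {sum(fair_sample) / len(fair_sample):.0%}")
print(f"survey of the subgroup only:     {sum(subgroup) / len(subgroup):.0%}")
# The first figure lands near 30%; the second near 98%. The second
# is a fact about the subgroup, not about people in general.
```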
This book is about seeking the truth. It is not about winning arguments. To get to the truth, we must consider all the evidence and omit none. If evidence that is crucial to the support of the conclusion, or that definitively proves the conclusion wrong, is omitted from consideration, we cannot get to the heart of the matter at the core of the truth. To omit crucial evidence from consideration is not unlike preparing a mixed drink and leaving out the alcohol. You miss the point entirely.