Believing
When subjects arrived at the laboratory, they were told only that they would be communicating via a Teletype machine for an hour, that they could discuss whatever they wished, and that they would be paid for participating in the study, irrespective of what they typed.
The software of the remotely located computer consisted of approximately one hundred rules for developing responses to the typed input of subjects. For example, if a subject typed a sentence in which the word if appeared, such as, “I plan to go to the beach tomorrow, if it doesn’t rain,” the computer rule was: “Disregard all words in the sentence prior to the word if, repeat the word if and the words that follow, and add the phrase tell me more.” For this example, the computer reply was, “If it doesn’t rain, tell me more.” None of the computer software rules were more complex than the if rule. Computer replies simulated human typing speed at approximately thirty words per minute.
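The study's original software is not reproduced here, but a minimal sketch in modern Python can illustrate how a keyword rule of this kind might work; the function name and implementation details are illustrative assumptions, not the study's actual code.

```python
# Illustrative sketch only (not the study's original software): a single
# keyword rule of the kind described above, written in modern Python.

def if_rule(sentence: str) -> str | None:
    """Apply the 'if' rule: discard everything before the word 'if',
    echo 'if' and the words that follow, and append 'tell me more'."""
    words = sentence.split()
    lowered = [w.strip(",.!?").lower() for w in words]
    if "if" not in lowered:
        return None  # rule does not apply; another rule would be tried
    start = lowered.index("if")
    tail = " ".join(words[start:]).rstrip(".!?")
    return tail[0].upper() + tail[1:] + ", tell me more."


print(if_rule("I plan to go to the beach tomorrow, if it doesn't rain."))
# -> If it doesn't rain, tell me more.
```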
Described out of context, the software rules seem embarrassingly simple. Nonetheless, subjects experienced the computer’s replies to what they typed as if they were communicating with another person. After an hour-long test period and anywhere from thirty to sixty subject-computer exchanges, the subjects were asked a variety of questions, including “Do you think you were communicating with a person or a computer?” Ninety percent of the subjects answered “A person.” When subsequently asked, “Is it possible that you were communicating with a computer?” over 80 percent answered “No.”
The research focus then shifted to identifying the smallest number of software rules that would lead at least 50 percent of subjects to conclude that they were communicating with a computer or, failing that, at least not with a human being. A new group of subjects participated in these studies. Over time, the software rules were systematically degraded. The result was computer-generated replies that omitted key words (usually verbs), contained striking grammatical errors, and often made no sense. At each stage of degradation, the majority of subjects continued to believe that they were communicating with another person, not a computer. When asked if they could be communicating with a computer, over half of the subjects still said “No.” Some subjects volunteered explanations as to why: “Someone is trying to convince me that he is a computer,” and “Computers aren’t that stupid.” Those subjects who believed that they were communicating with a computer commented: “People don’t talk like that,” “The replies didn’t feel human,” and “The replies were stiff.”6
One feature of the studies wasn’t part of the research design. As noted, the computer software was written to simulate average human typing speed. This it did, except at moments of computer malfunction. When this occurred, and before discontinuing operation, the computer sent the following message at a rate of 180 words per minute: “CTSS [Computer Time-Sharing System] is shutting down.” Because the computer system functioned most of the time, only a subset of subjects received this message. Those who did were asked if they thought the message suggested they were communicating with a computer. None thought it did. They offered a variety of reasons for their responses: “The person at the other Teletype must be going to lunch or the bathroom,” or “That person types amazingly fast.” Such responses are consistent with Michael Shermer’s concept of agenticity: that is, “the tendency to infuse patterns with meaning, intention, and agency.”7
There are many ways to interpret the results of the studies. It was possible that another person was typing replies. There was no foolproof strategy that subjects could adopt to disconfirm this possibility. Even the “CTSS is shutting down” message could be explained as a Teletype system failure affecting both a subject and a hypothetical person at a remotely located site. Determining who or what was typing replies was further complicated by the absence of speech inflections and nonverbal gestures, which so influence face-to-face communication, TV, and radio. In short, no matter what subjects typed, they couldn’t be certain they were not interacting with a person. Given this, subjects accommodated the computer’s replies to the belief they brought to the experiment: verbal exchanges take place between human beings.
In fairness to the subjects, there was no reason for them to suspect that they would be communicating with a machine. Verbal exchanges, whether face-to-face, via the telephone, over a short-wave radio, through Morse code, or via sign language, all involve other people. Books and articles are primarily communications from authors to readers. Even for TV, although there is seldom direct verbal feedback from viewing audiences, it is clear that what is said is intended for other humans. From this perspective, the results of the experiment are not surprising.
Also in fairness to the subjects, these studies were conducted in the 1960s. At the time, there were few precedents for machines that communicated on their own verbally or via written word. Of course, it was possible to visit an amusement arcade, insert a dime into a machine, and have one’s fortune told by a voice emanating from the machine. But everyone except young children knew that it wasn’t the machine that was talking, but a recorded voice. Times have changed. In the decades since the 1960s, science fiction, TV dramas, movies, and industry have introduced their audiences and customers to machines that talk, make decisions, and, at times, initiate action. If the same studies were repeated today, different responses would be likely.
Still, the study’s findings do not differ in principle from those of similar studies. For example, subjects can be provided with incorrect formulas that lead them to believe that spheres are 50 percent larger than they actually are. They are then asked to compare the formula-predicted volumes of actual spheres with their own measurements. The discrepancy between the theoretical and actual volumes leads to doubt, discomfort, adjustment of measurements, and ad hoc explanations for the discrepancy. Subjects rarely abandon their belief in the incorrect formula in favor of their own measurements.8 Formulas too can become beliefs and acquire authority.
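For concreteness (the formula actually supplied in the study is not given here, so the inflated version below is purely an illustrative assumption): the true volume of a sphere of radius $r$ is

$$V = \frac{4}{3}\pi r^{3},$$

whereas an inflated formula such as $V_{\text{wrong}} = 2\pi r^{3}$ predicts volumes one and a half times too large, since $2\pi r^{3} \div \frac{4}{3}\pi r^{3} = 1.5$. A subject measuring a real sphere would therefore recover only two-thirds of the predicted volume, exactly the kind of discrepancy that invites adjusted measurements rather than abandonment of the formula.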
In both the computer and the formula-sphere studies, there is a bias that favors retaining the views that are brought to novel situations. This is particularly so when prior experience has not required revision of one’s beliefs: it’s humans who produce words and sentences, and it’s formulas that give correct answers. There is also the possibility that, even when dealing with machines, people attribute humanlike features to them and, in turn, respond as if their behavior is human generated. That is, relating to machines may be modeled after human-human interactions. Farmers frequently talk about their tractors as if they are human—“She isn’t going to go to work today”—and in offices filled with computers and homes with electronic devices, it is not unusual to hear statements such as “Hurry up, damn it,” “I just had you fixed,” and “I didn’t ask for that program.” Is this different from attributing humanlike personalities to pets or supernatural entities? Or, from another perspective, does the study hint at one attraction of mathematics, that it permits an escape from anthropomorphism?
The computer study may simulate conversations in which people use only a few reply rules and yet are able to create a semblance of understanding. A familiar example is giving instructions to someone who replies with “I understand,” “No, I won’t do that,” and so forth, only to find out later that the instructions were misunderstood. Similar events take place between teacher and student, atheist and believer, husband and wife, parent and child, and doctor and patient. One may sense that there is understanding during the interaction when, in reality, it didn’t exist.
Where are the divides in these examples? For thirty-eight of the forty people who were interviewed about their beliefs, the divides for their strongly held beliefs were narrow or nonexistent. The same point holds for those who view the Bradshaw paintings as products of bleeding bird beaks. The fluctuating and unpredictable permutations of events that make up daily life had minimal effect on what they believed. For the majority of subjects in both the computer study and the wrong-formula study, divides also were narrow even though in both studies there was evidence that could have widened them. Interpreted this way, the findings are consistent with those who have argued that people are “bound to believe”9 and “can’t help it.”10
I’M HEARING THINGS
The computer study is far from an isolated instance of seeing what one believes. In 1973, in a now-classic study, the psychologist David Rosenhan and his colleagues entered mental hospitals and reported to hospital staff that they were suffering from auditory hallucinations.11 An auditory hallucination is a symptom nearly always associated with the clinical diagnosis of schizophrenia. Eight people participated in the experiment. None had a history of mental illness. All were admitted to various hospitals as inpatients. Seven were given a diagnosis of schizophrenia, one a diagnosis of manic-depression. Hospital records indicated that all were cooperative during their hospitalization, which is unusual for people with the diagnosis of schizophrenia. It was left to the experimenters to figure out how to arrange their discharge from the hospitals. After varying periods of hospitalization, all the experimenters were discharged with the diagnosis of schizophrenia in remission.
The study is of interest for two reasons. First, similar to the computer study, the clinicians had no foolproof way of disproving the reported hallucinations. Auditory hallucinations are audible only to those who hear them, not to others, such as medical personnel who might be conducting clinical evaluations. Also similar to the computer study, medical personnel apparently didn’t expect that the experimenters might not fully explain their behavior. It is understandable why subjects in the computer study believed they were communicating with another person—at the time, few people had Teletype machines connected to computers. But not suspecting deception is surprising in mental-health settings: medical lore, and particularly psychiatric lore, stresses that patients deceive and provide incomplete and distorted histories.
Second, although in principle the several types of schizophrenia require the presence of a number of different clinical signs and symptoms to establish a diagnosis, in practice the presence of a single sign or symptom may be sufficient to initiate a diagnosis. What appears to have happened in the Rosenhan study is that the experimenters’ claims of auditory hallucinations were sufficient to initiate among the clinicians a diagnosis of schizophrenia. Once the diagnosis—essentially a belief—was initiated, there was no certain way to disprove it, although the good behavior of the experimenters should have been suggestive. Divide narrowing likely was a contributing factor.
What is to be made of these examples other than that people see what they believe?
I live in the country, where the majority of land is devoted to farming or grazing. The population of the nearest town is approximately one thousand. There is regional TV, radio, and a newspaper, but much of the critical local news is communicated via the local “bush telegraph”—neighbors talk with one another. One morning, a neighbor dropped by to discuss the spread of West Nile virus in our area. As is the norm with such meetings, a wide range of topics came up, and that morning was no exception. After a time, he asked what I was doing. I told him of my work on a book about belief. A few moments from our discussion follow:
“Is your book about belief in God?” he asked.
“No, it’s more general, about why and how people believe.”
“Do you believe in God?” was his next question.
“No.”
“That’s unfortunate,” he replied.
“But you do believe in God, correct?” I asked.
“Yes, all my life.”
“Would you tell me why?”
“Because he watches over me.”
“There is evidence for that?”
“You’re a scientist. I’m not. We probably view things differently. For me, the Bible tells of real events. Then there’re reports by people who have communicated with God, like Joan of Arc. I believe that Christ existed and that he is the son of God. He watches over me and influences my decisions.”
“In what ways?”
“It’s simple, really. I have thoughts that he puts in my head and they lead to decisions that I wouldn’t make without them.”
“For example?”
“Last year, I suddenly had the thought that I should pray for a good harvest. I never had that thought before. No one suggested it to me. But I prayed anyway.”
“And?”
“We had an unusually good harvest.”
“Let me be sure I understand what you’re saying: your prayer was responsible for your good harvest? God acted on your behalf because of the prayer?”
“Yes.”
“But what if your harvest had been bad?”
“It would mean that I had sinned and God was not going to answer my prayer.”
“But he is watching over you anyway?”
“Yes, he always does.”
LINGERING CONTROVERSIES
The centuries-old controversy between science and religion is inviting to study because the proponents of differing views have possessed some of mankind’s finest brains. A continuing question in these controversies is this: Do science and religion differ in the ways they develop beliefs and interpret evidence?
At first glance, this might seem to be a non-question. There are self-evident differences. Religion is based on belief in authority while science is based on the belief that evidence meets specific requirements for interpretation or that evidence can be found. But it’s not that straightforward.
A close look at the controversy reveals the human tendency to create artificial categories and then assume that they are separate and unique. I was no exception. I had long harbored the belief that religion and science are distinct and separate endeavors. The twain doesn’t meet. But once I began examining the details of the controversy, my categories fell apart.
EVIDENCE AGAIN
Evidence is important. At times, it confirms what people believe. At other times, it alters beliefs, much as it did following Christopher Columbus’s voyages and the discovery of the Americas. At still other times, it is the goal of scientific experiments, as when chemists seek to identify the structure of a molecule. In short, some of what people believe and don’t believe hinges on evidence. Some also hinges on their perception of the divides separating beliefs and evidence. And some beliefs can’t be disproved. Nowhere are these points more relevant than in the now centuries-old discourse between religion and science.
Is there a god or a higher power? No and yes. “No” for atheists. They reject the possibility because they believe there is an absence of justifying evidence. Also, perhaps, the idea may seem implausible. “Yes” for the majority of the world’s adults whose confidence has its source in “that state of mind by which it assents to propositions, not by reason of their intrinsic evidence, but because of authority.”1 Authority alone may be both proof and explanation. If it is accepted, there is seldom a divide, or at least it is very narrow. Perhaps, too, there are spontaneously created beliefs about gods and evil forces. Studies suggest that such beliefs are an inbuilt feature of human nature—that is, beliefs are products of the brain on its own.2 If so, religion may serve to embellish and manage them.
DOMAIN OVERLAP OR SEPARATENESS?
It might be reasoned that acceptance of religious authority would settle matters dealing with evidence and belief as they apply to religion and science. They simply are not the same. Or, as Stephen Jay Gould puts it, science and theology are “nonoverlapping magisteria.” Gould defines magisterium as “a domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution.”3 His view would separate the discourse of science from that of religion, with science addressing the empirical realm and theology addressing the realm of meaning.
There are many beliefs that are consistent with Gould’s view. For example, the evangelist Billy Graham asserts that “it is impossible for us who were created for eternity ever to find anything in this world to satisfy our souls.”4 Said a bit differently, mystical thinking operates outside the usual rules of evidence and logic, and its options differ from those we experience on earth in daily life.5 The two domains thus would appear to have their own ways of selecting, valuing, and interpreting information. Yet the view of nonoverlapping magisteria is not without ambiguities and critics, some of whom detect far less separation.6
If science’s current view of evidence is taken as the benchmark, it is easy enough to agree with Gould. There is no scientifically acceptable evidence for or against God, a higher power, or an afterlife7—in effect, there is nothing to report one way or another. It’s worth adding that no potentially informative experiments are on the drawing board. Viewed this way, there is an immediate and clear answer to the title of this chapter: yes, religion is an exception to science.
But an exception to what about science: to its current evidence requirements, to scientific method, to its explanation and reasoning, to its knowledge, or possibly to something else? Answers to these questions require a closer look at the methods and practices of science and religion, particularly as they bear on authority and possible areas of conceptual and methodological overlap. Nonoverlapping magisteria may be ripe for revision.
SCIENTIFIC METHOD AND A BIT OF HISTORY
A convenient place to begin the inquiry is by noting that the modern scientific method is barely four and a half centuries old. The 1543 publication of Copernicus’s description of how the Earth and other planets in the solar system revolve around the Sun is usually taken as its inception date.8 Why the modern method took so long to develop is an interesting question in its own right. Clearly there were many very competent scientists as far back as Babylonian times (circa 1750 BCE) and no doubt well before.9 Part of the answer is found in the modus operandi of scientists prior to 1543. They focused primarily on proving their ideas. This they did with the information they had, often with ingenious reasoning, but without a well-articulated or generally accepted methodology to systematically assess their explanations and evidence. This was the case, for example, among many of the pseudoscientists.