The Story of Psychology
In Milgram’s opinion, his series of experiments went far to explain how so many otherwise normal Germans, Austrians, and Poles could have operated death camps or, at least, accepted the mass murder of the Jews, Gypsies, and other despised groups. (Adolf Eichmann said, when he was on trial in Israel, that he found his role in liquidating millions of Jews distasteful but that he had to carry out the orders of authority.)
Milgram validated his interpretation of the results by varying the script in a number of ways. In one variation, a phone call would summon the researcher away before he said anything to the teacher about the importance of continuing to ever higher shock levels; his place would be taken by a volunteer (another confederate) who seemed to hit on the idea of increasing the shocks as far as needed and kept telling the teacher to continue. But he was a substitute, not the real authority; in this version of the experiment only 20 percent of the teachers went all the way. Milgram also varied the composition of the team. Instead of an affable, pudgy, middle-aged learner and a trim, stern, young researcher, he reversed the personality types. In that condition, the proportion of teachers going all the way decreased but only to 50 percent. Apparently, the roles of authority and victim, not the personalities of the persons who played the parts, were the crucial factor.
A disturbing adjunct to Milgram’s results was his investigation of how people thought they would behave in the situation. He described the experimental set-up in detail to groups of college students, behavioral scientists, psychiatrists, and laymen, and asked them at what level of shock people like themselves would refuse to go on. Despite the differences in their backgrounds, all groups said people like themselves would defy the experimenter and break off at about 150 volts when the victim asked to be released. Milgram also asked a group of undergraduates at what level one should disobey; again the average answer was at about 150 volts. Thus, neither people’s expectations of how they would behave nor their moral views of how they should behave had anything to do with how they actually behaved in an authority-dominated situation.
Milgram’s obedience study attracted immense attention and won the 1964 award of the American Association for the Advancement of Science for sociopsychological research. (In 1984, when Milgram died of a heart attack at fifty-one, Roger Brown called him “perhaps the most gifted experimentalist in the social psychology of our time.”) Within a decade or so, 130 similar studies had been undertaken, including a number in other countries. Most of them confirmed and enlarged Milgram’s findings, and for some years his procedure, or variations of it, was the principal one used in studies of obedience.35 But for more than two decades no researcher has used such methods, or would dare to, as a result of historical developments we’ll look at shortly.
The Bystander Effect
In March 1964, a murder in Kew Gardens, in New York City’s borough of Queens, made the front page of the New York Times and shocked the nation, although there was nothing memorable about the victim, murderer, or method. Kitty Genovese, a young bar manager on her way home at 3 A.M., was stabbed to death by Winston Moseley, a business-machine operator who did not know her, and who had previously killed two other women. What made the crime big news was that the attack lasted half an hour (Moseley stabbed Genovese, left, came back a few minutes later and stabbed her again, left again, and returned to attack her once more), during which time she repeatedly screamed and called for help, and was heard and seen by thirty-eight people looking out the windows of their apartments. Not one tried to defend her, came to help when she lay bleeding, or even telephoned the police. (One finally did call—after she was dead.)
News commentators and other pundits interpreted the inaction of the thirty-eight witnesses as evidence of the alienation and inhumanity of modern city dwellers, especially New Yorkers. But two young social psychologists living in the city, neither one a native New Yorker, were troubled by these glib condemnations.36 John Darley, an assistant professor at New York University, and Bibb Latané, an instructor at Columbia University who had been a student of Stanley Schachter’s, met at a party soon after the murder and found that they had something in common. Though unlike in many ways—Darley was a dark-haired, urbane, Ivy League type; Latané a lanky, thatch-haired fellow with a Southern country-boy accent and manner—they both felt, as social psychologists, that there had to be a better explanation of the witnesses’ inactivity.
They talked about it for hours that night and had a joint flash of inspiration. As Latané recalls:
The newspapers, TV, everybody, was carrying on about the fact that thirty-eight people witnessed the crime and nobody did anything, as if that were far harder to understand than if one or two had witnessed it and done nothing. And we suddenly had an insight: maybe it was the very fact that there were thirty-eight that accounted for their inactivity. It’s an old trick in social psychology to turn a phenomenon around and see if what you thought was the effect was actually the cause. Maybe each of the thirty-eight knew that a lot of other people were watching—and that was why they did nothing.37
Late though it was, the two immediately began designing an experiment to test their hypothesis. Many weeks later, after much planning and preparation, they launched an extended investigation of the responses of bystanders, under varied circumstances, to an emergency.
In the study, seventy-two NYU students in introductory psychology courses took part in an unspecified experiment in order to fulfill a class requirement. Each arriving participant was told by Darley, Latané, or a research assistant that the experiment involved a discussion of the personal problems of urban university students. The session was to be conducted in two-person, three-person, or six-person groups. To minimize embarrassment when revealing personal matters, they would be in separate cubicles and would communicate over an intercom system, taking turns and talking in an arranged sequence.
Whether the naïve participant was supposedly talking to only one other person or to two or five others—supposedly, because in fact everything he heard others say was a tape-recorded script—the first voice was always that of a male student who told of difficulty adjusting to life in New York and to his studies, and confided that under stress he was prone to epileptic seizures. The voice was that of Richard Nisbett, then a graduate student at Columbia University and today a professor at the University of Michigan, who in tryouts had proved the best actor. The second time it was his turn to talk, he started to sound disordered and incoherent; he stammered and panted, said that he had “one of these things coming on,” started choking and pleading for help, gasped, “I’m gonna die—er-er—help—er-er—seizure-er,” and, after more choking sounds, fell silent.
Of the participants who thought that they and the epileptic were the only ones talking to each other, 85 percent popped out of their cubicles to report the attack even before the victim fell silent; of those who thought four other people were also hearing the attack, only 31 percent did so. Later, when the students were asked whether the presence of others had influenced their response, they said no; they had been genuinely unaware of its powerful effect on them.
Darley and Latané now had a convincing sociopsychological explanation of the Kew Gardens phenomenon, which they called “the social inhibition of bystander intervention in emergencies,” or, more simply, “the bystander effect.” As they had hypothesized, it was the presence of other witnesses to an emergency that made for passivity in a bystander. The explanation of the bystander effect, they said, “may lie more in the bystander’s response to other observers than in presumed personality deficiencies of ‘apathetic’ individuals.”38
They suggested later that three processes underlie the bystander effect: hesitancy to act in front of others until one knows whether helping or other action is appropriate; the feeling that the inactive others understand the situation and that nothing need be done; and, most important, “diffusion of responsibility”—the feeling that, since others know of the emergency, one’s own obligation to act is lessened.39 A number of later experiments by Latané and Darley, and by other researchers, confirmed that, depending on whether bystanders can see other bystanders, are seen by them, or merely know that there are others, one or another of these three processes is at work.
The Darley and Latané experiment aroused widespread interest and generated a crop of offspring. Over the next dozen years, fifty-six studies conducted in thirty laboratories presented apparent emergencies to a total of nearly six thousand naïve subjects who were alone or in the presence of one, several, or many others. (Conclusion: The more bystanders, the greater the bystander effect.) The staged emergencies were of many kinds: a crash in the next room followed by the sound of a female moaning; a decently dressed young man with a cane (or, alternatively, a dirty young man smelling of whiskey) collapsing in a subway car and struggling unsuccessfully to rise; a staged theft of books; the experimenter himself fainting; and many others. In forty-eight of the fifty-six studies, the bystander effect was clearly demonstrated; overall, about half the people who were alone when an emergency occurred offered help, as opposed to 22 percent of those who saw or heard emergencies in the presence of others.40 Since there is less than one chance in fifty-one million that this aggregate result is accidental, the bystander effect is one of the best-established hypotheses of social psychology. And having been so thoroughly established and the effects of so many conditions having been separately measured, it has ceased in recent years to be the subject of much research and become, in effect, another closed case.
However, research on helping behavior in general—the social and psychological factors that either favor or inhibit nonemergency altruistic acts—continued to grow in volume until the 1980s and has only lately leveled off. Helping behavior is part of prosocial behavior, which, during the idealistic 1960s, began to replace social psychology’s postwar obsession with aggressive behavior, and it remains an important area of research in the discipline.
A Note on Deceptive Research: One factor common to most of the closed cases dealt with above—and to a great many other research projects in social psychology—is the use of elaborately contrived deceptive scenarios. There is almost nothing of the sort in experimental research on personality, development, or most other fields of present-day psychology, but for many years deceptive experimentation was the essence of social psychological research.
In the years following the Nuremberg Trials, criticism of experimentation with human subjects without their knowledge and consent was on the rise, and deceptive experimentation by biomedical researchers and social psychologists came under heavy attack. The Milgram obedience experiment drew particularly intense fire, not only because it inflicted suffering on people without forewarning them and obtaining their consent, but because it might have done them lasting psychological harm by showing them a detestable side of themselves. Milgram, professing to be “totally astonished” by the criticism, asked a sample of his former subjects how they felt about the experience, and reported that 84 percent said they were glad they had taken part in the experiment, 15 percent were neutral, and only 1 percent regretted having participated.41
But in the era of expanding civil rights, the objections on ethical grounds to research of this sort triumphed. In 1971 the Department of Health, Education, and Welfare adopted regulations governing eligibility for research grants that sharply curtailed the freedom of social psychologists and biomedical researchers to conduct experiments with naïve subjects. In 1974 it tightened the rules still further; the right of persons to have nothing done to them without their informed consent was so strictly construed as to put an end not only to Milgram-type procedures but to many relatively painless and benign experiments relying on deception, and social psychologists abandoned a number of interesting topics that seemed no longer researchable.
Protests by the scientific community mounted all through the 1970s, and in 1981 the Department of Health and Human Services (successor to DHEW) eased the restrictions somewhat, allowing minor deception or withholding of information in experiments with human beings provided there was “minimum risk to the subject,” the research “could not practicably be carried out” otherwise, and the benefit to humanity would outweigh the risk to the subjects.42 “Risk-benefit” calculations, made by review boards before a research proposal is considered eligible for a grant, have permitted deceptive research—though not of the Milgram obedience sort—to continue to the present. Deception is still used in about half of all social psychology experiments but in relatively harmless forms and contexts.43
Still, many ethicists regard even innocuous deception as an unjustifiable invasion of human rights; they also claim it is unnecessary, since research can use nonexperimental methods, such as questionnaires, survey research, observation of natural situations, interviews, and so on. But while these methods are practical in many areas of psychology, they are less so, and sometimes are quite impractical, in social psychology.
For one thing, the evidence produced by such methods is largely correlational, and a correlation between factor X and factor Y means only that they are related in some way; it does not prove that one is the cause of the other. This is particularly true of sociopsychological phenomena, which involve a multiplicity of simultaneous factors, any of which may seem to be a cause of the effect under study but may actually be only a concurrent effect of some other cause. The experimental method, however, isolates a single factor, the “independent variable,” and modifies it (for instance, by changing the number of bystanders present during an emergency). If this produces a change in the “dependent variable,” the behavior being studied, one has rigorous proof of cause and effect. Such experimentation is comparable to a chemical experiment in which a single reagent is added to a solution and produces a measurable effect. As Elliot Aronson and two co-authors said in their classic Handbook of Social Psychology, “The experiment is unexcelled in its ability to provide unambiguous evidence about causation, to permit control over extraneous variables, and to allow for analytic exploration of the dimensions and parameters of a complex phenomenon.”44
For another thing, no matter how rigorously the experimenter controls and manipulates the experimental variables, he or she cannot control the multiple variables inside the human head unless the subjects are deceived. If the subjects know that the investigator wants to see how they react to the sound of someone falling off a ladder in an adjoining room, they are almost sure to behave more admirably than they otherwise might. If they know that the investigator’s interest is not in increasing memory through punishment but in seeing at what point they refuse to inflict pain on another person, they are very likely to behave more nobly than they would if ignorant of the real purpose. And so, for many kinds of sociopsychological research, deceptive experimentation is a necessity.
Many social psychologists formerly prized it not just for this valid reason but for a less valid one. Carefully crafted deceptive experimentation was a challenge; the clever and intricate scenario was highly regarded, prestigious, and exciting. Deceptive research was in part a game, a magic show, a theatrical performance; Aronson has likened the thrill felt by the experimenter to that felt by a playwright who successfully recreates a piece of ordinary life.45 (Aronson and a colleague once even designed an experiment in which the naïve subject was led to believe that she was the confederate playing a part in a cover story. In fact, her role as confederate was the actual cover story and the purportedly naïve subject was the actual confederate.46) In the 1960s and 1970s, by which time most undergraduates had heard about deceptive research, it was an achievement to be able still to mislead one’s subjects and later debrief them.
During the 1980s and 1990s, however, the vogue for artful, ingenious, and daring deceptive experiments waned, although deceptive research remains a major device in the social psychologists’ toolbox. Today most social psychologists are more prudent and cautious than were Festinger, Zimbardo, Milgram, Darley, and Latané, and yet the special quality of deceptive experimentation appeals to a certain kind of researcher. When one meets and talks to practitioners of such research, one gets the impression that they are a competitive, nosy, waggish, daring, stunt-loving, and exuberant lot, quite unlike such sobersides as Wundt, Pavlov, Binet, and Piaget.
Ongoing Inquiries
Of the wide variety of topics in the vast, amorphous field of social psychology, some, as we have seen, are closed cases; others have been actively and continuously investigated for many decades; and many others have come to the fore more recently. The currently ongoing inquiries, though they cover a wide range of subjects, have one characteristic in common: relevance to human welfare. Nearly all are issues not only of scientific interest but of profound potential for the improvement of the human condition. We will look closely at two examples and briefly at a handful of others.
Conflict Resolution
Over half a century ago social psychologists became interested in determining which factors promote cooperation rather than competition and whether people function more effectively in one kind of milieu than another. After a while, they redefined their subject as “conflict resolution” and their concern as the outcome when people compete, or when they cooperate, to achieve their goals.
Morton Deutsch, now a professor emeritus at Teachers College, Columbia University, was long the doyen of conflict-resolution research. He suspects that his interest in the subject may have its roots in his childhood.47 The fourth and youngest son of Polish-Jewish immigrants, he was always the underdog at home, an experience he transmuted into the lifelong study of social justice and methods for the peaceful resolution of conflict.
It took him a while to discover that this was his real interest. He became fascinated by psychology as a high school student when he read Freud and responded strongly to descriptions of emotional processes he had felt going on in himself, and in college he planned to become a clinical psychologist. But the social ferment of the 1930s and the upheavals of World War II gave him an even stronger interest in the study of social problems. After the war he sought out Kurt Lewin, whose magnetic personality and exciting ideas, particularly about social issues, convinced Deutsch to become a social psychologist. For his doctoral dissertation he studied conflict resolution, and continued to work in that area throughout his long career. The subject was congenial to his personality: unlike many other social psychologists, he is soft-spoken, kindly, and peace-loving, and as an experimenter relied largely on the use of games that involved neither deception nor discomfort for the participants.