No Two Alike
Such differences in personality and intelligence are, in part, heritable. If the compliers differ in personality and intelligence from the noncompliers, their children will also tend to differ in these ways. And the ways in which they will tend to differ are precisely those that affect how well they will do in school. Parents who conscientiously follow the instructions of a Cowan or a Forgatch are likely, for genetic reasons alone, to have children who conscientiously follow the instructions of a teacher. Their children are likely to do well in school, with or without an intervention, just as the patients who complied with the drug regimen had lower fatality rates, with or without the drug.
Medical researchers deal with the problem of compliance bias by analyzing data on an “intention to treat” basis. This means that the division into treatment and control groups has to take place before the experiment begins, and the analysis of results has to compare the two groups as originally constituted. You can’t allow patients to switch groups because that destroys the randomization. The path analysis used by Cowan and Cowan and by Forgatch and DeGarmo is the equivalent of allowing the parents to decide for themselves whether to be in the treatment group or the control group.
One of the researchers whose work Philip Cowan advised me to look at was Rex Forehand. Forehand is part of the old guard of the intervention field; he started running parent-training programs back in the 1970s. Cowan’s tip led Joan and me to a review article coauthored by Forehand: an overview of the results of two decades of interventions aimed at changing parents’ behavior. Forehand and his coauthor reported that parents found the training to be effective; their children behaved better at home. “However,” the authors admitted, “research has been unable to show that child behavior is modified at school.”46
Children’s behavior at home is, in part, a response to the behavior of their parents. Anything that changes the behavior of the parents—a parent-training intervention, a divorce, a serious illness—can change the way the kids behave at home, no question about it. So perhaps I spoke too strongly when I said, in chapter 3, to run the other way if you see an advice-giver coming toward you. If you are having trouble getting your kid to listen to you, there are advice-givers who might be able to help. But beware of advice-givers who make more sweeping claims.
Because children discriminate sharply between situations, the way to improve their behavior in school is not by modifying their parents’ behavior but by modifying their environment at school. School-based interventions, aimed at changing the behavior of whole classrooms of kids, can accomplish things such as reducing aggressiveness and bullying on the playground. Unfortunately, as Walter Mischel and I would predict, they produce no improvement in the way the children behave at home.47
As I said, all the currently popular theories of personality development are based on the assumption that learned behaviors or learned associations transfer readily and automatically from one situation to another—in particular, from the home to other social contexts. If this assumption is false, why do so many people believe it?
The most obvious explanation is that they’ve mistaken the effects of genes for the effects of learning. They’ve noticed that some children are aggressive or conscientious or timid both at home and in school, and attributed the consistencies in behavior to the generalization of behavior acquired at home. Evidence that most of these consistencies are due to genetic influences on behavior has turned up only recently—too recently to be incorporated into existing theories of personality development.
But there is another reason why theorists—even those who are well acquainted with the behavioral genetic evidence—make the erroneous assumption about learning and generalization. It’s the fundamental attribution error. Psychologists, like other human beings, are prone to seeing a sample of someone’s behavior as an indication of how that individual will behave in other situations. This bias makes developmentalists assume that they can find out what makes children tick by observing how they behave with their parents, or even by asking their parents. Their research methods rely heavily on observations of the children at home or in the presence of their parents, or questionnaires about the children’s behavior filled out by their parents. Sometimes the questionnaires are filled out by the children themselves, but the researchers usually administer these tests in the children’s homes. As Mischel pointed out, how children respond to a questionnaire depends in part on where they are when they check off their answers. Most developmentalists carry out their research in profound ignorance of the implications of the person-situation controversy. When they discover that mothers’ judgments agree poorly, or don’t agree at all, with judgments made by the children’s teachers, the mothers’ judgments may be dismissed as inaccurate, instead of being seen as evidence that children discriminate sharply between situations and behave differently in different settings.48
Like the rest of us, professors of psychology are predisposed to attribute a sample of behavior—even a hopelessly inadequate sample—to the enduring characteristics of the individual who is doing the behaving. But, as I pointed out in chapter 1, we humans were provided with no built-in explanation for these presumed enduring characteristics: the explanation is provided by the culture. When cultures change, so does the explanation. Alice James, in the nineteenth century, attributed her enduring characteristics to heredity; John Cheever, in the twentieth, attributed his to childhood experiences within his family.
Though talking about heredity is more acceptable now than it was in the 1960s and ’70s, our culture still overwhelmingly favors environmental explanations of behavior.49 The erroneous assumption I’ve been attacking in this chapter is the result of two biases working together: a built-in bias that causes us to expect consistencies of behavior across contexts, and a cultural bias that causes us to attribute the consistencies to learning.
These biases can explain, for example, the widespread belief that birth order has enduring effects on personality. Firstborns and laterborns do behave differently in the presence of their parents and siblings. When people notice these differences they assume, incorrectly, that they are good predictors of how firstborns and laterborns will behave in other contexts—that a firstborn who bosses around his younger siblings will be bossy in the workplace as well. Popular stereotypes of firstborns and laterborns are based on the way they behave in the context of their families. The reason we have these stereotypes is that our observations of firstborn and laterborn behavior are based, perforce, on people whose birth orders we know, which generally means people we’ve seen in a family context. The people we’ve seen only at work or in other public places have contributed little or no data to our stereotypes, because we are unlikely to know whether they are firstborns or laterborns.
The two biases also influence the work and the beliefs of professionals—clinical psychologists and psychiatrists—who administer psychotherapy. Traditional psychotherapy, which seeks the source of patients’ current unhappiness in the history of their early interactions with their parents and siblings, is like a factory that makes sausages out of sausages. By encouraging their patients to relive their childhood experiences with parents and siblings, the psychotherapists are tapping into the feelings associated with those relationships. What their patients say under these conditions is likely to reinforce the therapists’ belief in the power of family relationships to shape (and perhaps to damage) a child’s personality. Everything about traditional psychotherapy, including the homelike setting and the therapist’s role as a substitute parent, is designed to put the patients back into the context of the family they grew up in and evoke the feelings and thoughts associated with that context.50
What’s wrong with that? Nothing, I suppose, except that it doesn’t work. Psychotherapy, like other forms of medical intervention, is now expected to be “evidence-based,” and the evidence doesn’t support the view that talking about childhood experiences has therapeutic value. Research has shown that the effective forms of therapy are those that focus on people’s current problems, rather than their ancient history.51 The basic premises of psychoanalysis—that every psychological disorder has its roots in the experiences of infancy and childhood, and that reconstructing these experiences is an essential part of psychotherapy—are being publicly questioned and sometimes publicly repudiated. Joel Paris, head of the Department of Psychiatry at McGill University, has done that in a book titled Myths of Childhood.52 Alan Stone, a trained psychoanalyst and professor of psychiatry at Harvard, has changed the way he does psychotherapy and has given this explanation for the change:
Our problem is that, in light of the scientific evidence now available to us, these basic premises [of psychoanalysis] may all be incorrect. Our critics may be right. Developmental experience may have very little to do with most forms of psychopathology, and we have no reason to assume that a careful historical reconstruction of these developmental events will have a therapeutic effect…. If there is no important connection between childhood events and adult psychopathology, then Freudian theories lose much of their explanatory power.53
Stone’s new focus in psychotherapy is “almost entirely on the here and now, on problem-solving, and on helping patients find new strategies and new ways of interacting with the important people in their lives.”54 He means the important people in their current lives.
If you ever took a course in introductory psychology, it’s likely that your textbook explained the term generalization by describing the famous experiment in which the behaviorist John B. Watson produced “conditioned fear” in an infant known as Little Albert. The story goes something like this. Watson made Albert afraid of a white rat—or, in some versions, a rabbit—by making a loud, unpleasant noise (produced by banging on a metal bar) whenever Albert reached for the animal, and the fear subsequently generalized to other furry things. The list of feared objects varies from one account to another, but it usually includes an assortment of animals and a Santa Claus beard, fleshed out with optional items such as Albert’s mother’s coat, a teddy bear, and the luxuriant hair on Watson’s own head.
Everyone who has gone back to the original report of the Little Albert experiment—a 1920 paper by Watson and his graduate student Rosalie Rayner (soon to become his second wife)—has concluded that the stories in the textbooks are, to say the least, exaggerated. The Little Albert experiment involved one (1) child. The procedure was a shambles, the results were ambiguous, and the report was unclear. Albert was a stoic infant and a thumbsucker; in order to get any reaction at all from him, the experimenters had to yank his thumb out of his mouth. They didn’t just condition him to fear a rat: they gave training trials with a dog and a rabbit as well. When they tested the infant in a different room he showed little or no reaction to the animals, so they gave additional training trials in that room, to “freshen the reaction.” They did everything they could to frighten the child—when he failed to react to the sight of the rat they put it on his arm and then on his chest—but Albert’s response to the animals continued to be ambivalent. He gurgled at the rat; he reached out and felt the rabbit’s ear.55
Despite the fact that the Little Albert story has been repeatedly debunked,56 and despite the fact that later researchers were unable to produce conditioned fear in other infants using Watson’s method, the story persists; it continues to be cited as a textbook example of generalization. What few people realize is that the purpose of Watson and Rayner’s experiment was not to extend Pavlov’s work—they didn’t even mention Pavlov—but to put Freud out of business. Watson didn’t use the word generalize; he used the word transfer. Freud had talked about “transference” and “displacement,” and Watson wanted to show that he could explain these things without invoking the unconscious or anything else in the Freudian armamentarium.57 Watson and Rayner’s article ended with a sneer at Freud:
The Freudians twenty years from now, unless their hypotheses change, when they come to analyze Albert’s fear of a seal skin coat—assuming that he comes to analysis at that age—will probably tease from him the recital of a dream which upon their analysis will show that Albert at three years of age attempted to play with the pubic hair of the mother and was scolded violently for it.58
Of course, Watson expected his readers to understand that it wasn’t Albert’s mother’s pubic hair that caused the trouble: it was Watson himself, banging on a steel bar “four feet in length and three-fourths of an inch in diameter.” I wonder what Freud would have made of that.
Watson thought he was pretty clever, but notice that he accepted two of the basic assumptions of Freudian theory: that personality is shaped by the experiences of infancy and early childhood (Little Albert was eleven months old at the start of the experiment) and that learned associations transfer readily from one stimulus to another. Watson’s theory that it all boils down to conditioning didn’t pan out, but the theorists who came after him were just as willing to accept his assumption that learning transfers or generalizes. Few bothered to state it out loud, fewer still to question it.
The reason why so many theorists believe in the importance of the early years and in the generalization of learning is that they’re unaware that the continuities in behavior from the early years to adulthood, and from one situation to another, are due largely to genetic influences on behavior. They see an infant who is afraid of rats and rabbits turning into a timid child and later into a timid adult, and they attribute his timidity to something that happened to him in infancy. They are wrong; it was something that happened to him at conception. The timid infant who becomes a timid adult has, in all likelihood, inherited a predisposition to be timid.
My concern, however, is not with heredity; this chapter was about how young humans learn from their experiences. I’ve shown that the assumptions that underlie popular theories of personality development—that learned behaviors transfer readily from one situation to another, that children learn things at home which they automatically carry with them to other settings, that their experiences with their parents will color their subsequent interactions with other social partners—are incorrect. Generalization occurs (it has to), but it doesn’t happen in this senseless, automatic way. Whether a learned behavior or emotion will be generalized from one situation to another depends on whether the situations are regarded as equivalent or as different. A child who initially regards two situations as different can learn through experience that in some relevant way they are equivalent. A baby learns very quickly that Mommy in a blue shirt and Mommy in a red shirt are in all important respects the same. But Mommy in a blue shirt and the babysitter in a blue shirt continue to be seen as different. Babies are predisposed from birth to make certain kinds of distinctions.
Over the past hundred years, a good deal has been written about personality development. Most of it was wrong. For far too long, psychologists have been constructing theories of personality on the same shaky foundation used by their predecessors. My purpose in this chapter was to clear away the rubble and get down to bedrock. Now I can start to build a new theory.
6
The Modular Mind
THERE IS ANOTHER Dr. Watson who is even more famous than the one who tormented Little Albert. He’s the physician who serves as sidekick and chronicler to Sherlock Holmes. In a story called “The Five Orange Pips,” Holmes said this to Watson:
Now let us consider the situation and see what may be deduced from it…. I think that it is quite clear that there must be more than one of them. A single man could not have carried out [these] two deaths in such a way as to deceive a coroner’s jury. There must have been several in it, and they must have been men of resource and determination.1
Holmes was using the word “deduced” loosely. To a philosopher the word has a specific meaning: reasoning from given premises to something that follows inexorably from those premises. An example is the old syllogism: All men are mortal, Socrates is a man, therefore Socrates is mortal. If the first two statements are true, then the third has to be true. Deduction is infallible.
But Sherlock Holmes wasn’t infallible—he occasionally made mistakes—and the way he solved his mysteries was not deduction. Philosophers use the word “abduction” for the reasoning process Holmes actually used.2 Abduction in the philosophers’ sense (not to be confused with the way it is used by folks who believe they’ve been borrowed by extraterrestrials) means finding the hypothesis that provides the best explanation for the data. “Data! Data! Data!” Holmes exclaimed impatiently to Watson. “I can’t make bricks without clay.”3
The hypothesis that wins does so on the basis of probability, not infallibility. The less probable hypotheses are eliminated. This is how Sherlock Holmes reasoned. Ideally, it is how scientists and medical diagnosticians reason. Ideally, it is how you and I reason. It can go wrong if there is an insufficiency of clay or if some hypotheses are given privileged status.
What Holmes abduced in the case of “The Five Orange Pips” is that sometimes the best hypothesis isn’t the simplest. Sometimes there is more than one perpetrator.
A famous rule in science is called “Occam’s razor”: entities should not be multiplied needlessly. Much has been written about the harm done to psychology by Locke’s blank slate and Rousseau’s noble savage,4 but what about the harm done by Occam’s razor? The simplest explanation is not necessarily the correct one.
I didn’t begin to see the solution to the mystery of individuality until four years after The Nurture Assumption was published. What got in my way was the confusion between socialization and personality development. Most psychologists, including those who study children and those who administer psychotherapy to adults, think of them as basically the same process—a learning process that has long-term effects on behavior. I thought so too. I was slow in realizing that there must be at least two processes, with different goals, and slower still (I’m no Sherlock Holmes!) in working out the implications. The goal of socialization is to fit children to their society. Socialization makes children more similar to their peers. But, as the behavioral genetic data showed, something other than genes makes children differ from one another in personality. Whatever it is, it can’t be socialization.