The Story of Psychology


by Morton Hunt


  A few psychologists believe they can, although no single overall scheme seems dominant. But the general view, to judge from a sampling of top textbooks, is that the three major theories—the James-Lange, the Cannon-Bard, and the cognitive appraisal (Schachter, Lazarus, and others)—all have grains of truth. But so do a number of the variants and developments of them that we have seen.90 Not a simple answer, to be sure.

  To hark back to the question asked at the beginning of this chapter— Why do we do what we do?—at this time there is no one integrated theory, no overall design, to what has become a theoretical patchwork quilt. Those who must have a simple, easily understood answer will not find it in psychology. At least, not yet.

  SIXTEEN

  The Cognitivists

  Revolution

  In 1960, George A. Miller, though youthful and somewhat pixieish in appearance at forty, was a professor of psychology at Harvard and assured of his prestigious post and comfortable style of living for the rest of his career. Yet that year he felt compelled, despite deep misgivings, to reveal his true colors even if it meant giving up his place at Harvard.

  His revelation would not be about radical politics or radical sex, both on the rise at that time, but about his interest in the mind.

  The mind? What could be subversive or disreputable about that? Wasn’t it the core concern of psychology?

  No, not then, nor had it been since the beginning of the behaviorist dominion over American psychology four decades earlier. To behaviorists, the mind, invisible, nonmaterial, and conjectural, was an obsolete metaphysical concept that no experimental psychologist concerned about his career and reputation would talk about, much less devote himself to.

  But Miller had become a covert mentalist over the years. Born and raised in Charleston, West Virginia, as a freshman in college he had been uninterested in and even a trifle hostile toward psychology; in a memoir he says, tongue in cheek (a frequent mode of his), that he saw drawings of the brain and other organs in a psychology textbook and, “raised by Christian Scientists, I had been trained to avoid materia medica, and I could recognize the devil when I saw him.”1

  Either education or infatuation changed his outlook. In his junior year at the University of Alabama, Miller, smitten with a girl (whom he later married), went to the informal seminars in psychology she was attending, given by Professor Donald Ramsdell at his home. Miller made such an impression on Ramsdell that a couple of years later, when he completed a master’s in speech and communication, Ramsdell offered him a job teaching psychology to undergraduates, although Miller had never had a formal course in the subject. By then married and a father, Miller needed the job and took it; a year of teaching psychology made a convert of him.

  He went to Harvard for graduate studies, received a solid grounding in behaviorist psychology, and so distinguished himself that after earning his doctorate he was made an instructor. For the next fourteen years, first at Harvard and then at the Massachusetts Institute of Technology, he conducted experimental studies in speech and communication. Despite his behaviorist training, this work, unlike rat-based research, forced him willy-nilly to think about human memory and other higher mental processes. He drifted still closer to mentalism after attending a summer seminar at Stanford, where he worked closely with the psycholinguist Noam Chomsky, and spending a sabbatical year at the Center for Advanced Study in the Behavioral Sciences at Palo Alto, where he was exposed to new ways of doing research on thinking, especially the simulation of thought processes by computer programs.

  In the fall of 1960 Miller returned to Harvard a changed man. As he tells it in his memoir:

  I realized I was acutely unhappy with the narrow conception of psychology that defined the Harvard department. I had just spent a year romping wildly in the sunshine. The prospect of going back to a world bounded at one end by psychophysics and at the other by operant conditioning was simply intolerable. I decided that either Harvard would have to let me create something resembling the interactive excitement of the Stanford Center or else I was going to leave.

  Miller confided in his friend and colleague, the social psychologist Jerome Bruner, about his discontent and the dream of a new center devoted to the study of mental processes. Bruner shared both his feelings and his vision. Together they approached McGeorge Bundy, provost of the university, won his approval, and with funding from the Carnegie Corporation established the Harvard Center for Cognitive Studies. Naming it that made Miller feel like a declared apostate:

  To me, even as late as 1960, using “cognitive” was an act of defiance. It was less outrageous for Jerry [Bruner], of course; social psychologists were never swept away by behaviorism the way experimental psychologists had been. But for someone raised to respect reductionistic science, “cognitive psychology” made a definite statement. It meant that I was interested in the mind—I came out of the closet.

  And became a leader of the movement that radically changed the focus and methods of psychology and has guided it ever since.

  George Miller’s coming-out typifies what was happening to experimental psychologists in the 1960s. At first a few, then many, and soon a majority abandoned rats, mazes, electric grids, and food-dispensing levers in favor of research on the higher mental processes of human beings. Within the decade, the movement had assumed such proportions as to earn the name “the cognitive revolution.”

  Many forces had been building toward it. During the two previous decades, Gestaltists, personality researchers, developmentalists, and social psychologists were all, in their different ways, exploring mental processes. Coincidentally, a series of developments in several other scientific fields (some of which we have already heard about, some of which we will hear about shortly) were producing knowledge of other kinds about how the mind works. Specifically:

  —Neuroscientists, using microelectrode probes and other new techniques, were observing the neural events and cellular interconnections involved in mental processes.

  —Logicians and mathematicians were developing information theory and using it to account for both the capabilities and limitations of human communication.

  —Anthropologists, analyzing the thought patterns of people in other cultures, were discovering which mental processes vary among cultures, and which are universal and therefore possibly innate.

  —Psycholinguists, studying language acquisition and use, were learning how the mind acquires and manipulates the intricate symbol system we call language.

  —Computer scientists, a new hybrid (part mathematician, part logician, part engineer), were contributing a brand-new theoretical model of thinking, and designing machinery that seemed to think.

  By the late 1970s, cognitive psychology and these related fields came to be known as the cognitive sciences; a number of enthusiasts called them, collectively, “cognitive science” and regarded it as a new and distinctive field.2 In the 1980s and early 1990s they expected it to replace the field of psychology; instead, standard psychology morphed, absorbing the new ideas of cognitive science. Today, most departments of psychology include many cognitive science topics, and the relatively few separate departments of cognitive science that exist include many or most classical psychology topics.3 The bottom line: The cognitive revolution was more than a remarkable broadening and deepening of psychology; it was the extraordinary—indeed, wholly improbable—simultaneous development in six sciences of new knowledge bearing on mental processes.

  Computer science had by far the greatest impact on psychology. This new field was the product of intense research during World War II, when Allied forces urgently needed calculating machines that could rapidly handle large sets of numbers to direct antiaircraft guns, operate navigation equipment, and the like. But even very high-speed calculating machines needed to be told by a human operator, after each calculation, what to do next, which severely limited their speed and introduced inaccuracies. By the late 1940s, mathematicians and engineers were starting to provide the machines with sets of instructions (programs) stored in their electronic memories. Now the machines could swiftly and accurately guide their own operations, carry out lengthy sequences of operations, and make decisions about what needed to be done next. The calculating machines had become computers.

  At first, computers dealt only with numerical problems. But as the mathematicians John von Neumann and Claude Shannon and other computer experts soon pointed out, any symbol can represent another kind of symbol. A number can stand for a letter and a series of numbers for a word, and mathematical computations can represent relationships expressed by language. For instance, = can stand for “is the same as,” ≠ for “is not the same as,” > for “more than” or “too much.” Given a set of rules by which to turn words into numbers and algebraic relationships and then back into words, a computer can perform operations analogous to some kinds of human reasoning.4
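
  To make the idea concrete, here is a minimal sketch in Python (a modern illustration of the principle, not how the early machines were actually programmed): letters become numbers, a word becomes a series of numbers, and a relation such as “is the same as” becomes a purely numerical comparison.

```python
# A minimal sketch of the point above: any symbol can stand for another kind
# of symbol.  Letters become numbers, a word becomes a series of numbers, and
# the relation "is the same as" becomes an arithmetic comparison.

def encode(word):
    """Turn a word into a list of numbers (here, Unicode code points)."""
    return [ord(ch) for ch in word]

def decode(numbers):
    """Turn the numbers back into a word."""
    return "".join(chr(n) for n in numbers)

def is_same(a, b):
    """The relation 'is the same as,' expressed as a purely numerical test."""
    return encode(a) == encode(b)

print(encode("cat"))           # [99, 97, 116]
print(decode([99, 97, 116]))   # cat
print(is_same("cat", "cat"))   # True
print(is_same("cat", "dog"))   # False
```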

  In 1948 the idea that the computer might in some ways function like a mind—at the time this seemed more like science fiction than science—was first broached by von Neumann and the neurophysiologist Warren McCulloch at a California Institute of Technology conference, “Cerebral Mechanisms in Behavior.”

  That notion captivated Herbert Simon, then a young professor of political science at the Carnegie Institute (now Carnegie-Mellon University).5 “Professor of political science” hardly describes him, however. Simon, the son of an electrical engineer, was so bright that he was skipped ahead in school and was considerably younger than his friends and classmates. Add to that his being unathletic and growing up in Wisconsin keenly aware of his Jewishness, and it is not surprising that he solaced himself by becoming an exceptional student. In college he liked to think of himself as an intellectual, but in fact his interests were freakishly wide-ranging; although he became a political scientist, he was interested and self-taught in mathematics, economics (for which he was awarded a Nobel Prize in 1978), administration, logic, psychology, and computer science.

  In 1954, Simon and a brilliant young graduate student of his, Allen Newell, discovered that they shared passionate interests in computers and thinking (both men later earned degrees in psychology), and in creating a computer program that would think. For a first attempt, they chose a very limited kind of thinking, namely, proving theorems in formal logic, an entirely symbolic and almost algebraic process. Simon’s task was to work out proofs of theorems while “dissecting as minutely as possible, not only the proof steps, but the cues that led me to each one.” Then the two men together tried to incorporate this information in a flow diagram that they could turn into a computer program.

  After a year and a half of work, Simon and Newell electrified the audience at a 1956 symposium on information theory at MIT with a description of their intellectual offspring, Logic Theorist. Running on JOHNNIAC, a gigantic, primitive, vacuum-tube computer, it was able to prove a number of theorems in formal logic in anywhere from under a minute to fifteen minutes per proof.6 (On a modern computer it would do the same thing in virtually the blink of an eye.) Logic Theorist, the first artificial intelligence program, wasn’t very intelligent; it could prove only logic theorems—at about the same speed as an average college student—and only if they were presented in algebra-like symbols. Still, as the first computer program that did something like thinking, it was a breathtaking achievement. (George Miller was at the presentation; he regards that day as the birthday of cognitive science, even though it took him another four years to declare his apostasy from behaviorism.7)

  By the end of the following year, 1957, Newell, Simon, and a colleague, Clifford Shaw, had created a much cleverer program, General Problem Solver (GPS), which incorporated a number of broad principles common to many intellectual tasks, including proving theorems in geometry, solving cryptarithmetic problems, and playing chess. GPS would make a first move or probe to begin determining the “problem space” (the area containing all possible moves between its initial state and the desired goal), look at the result to see whether the move had brought it closer to the goal, concoct possible next moves and test them to see which one would advance it toward the goal, back up to the last decision point if the train of reasoning veered off course, and start again in another direction. A simple problem that GPS solved easily early in its career went as follows (the problem was presented not in these words, which GPS could not understand, but in mathematical symbols):

  A heavy father and two young sons have to cross a swift river in a deep wood. They find an abandoned boat that can be rowed across, but will sink if overloaded. Each young son weighs 100 pounds. Two sons weigh as much as the father, and more than 200 pounds is too much for the boat. How do the father and the sons cross the river?8

  The solution, though simple, requires a seeming retreat in order to advance. The two sons get in and row across; one debarks and the other rows back and lands; the father rows across and gets out; the son on that side rows back, picks up his brother, and returns to the far shore. GPS, in devising and testing this solution, was doing something akin to human problem solving. By means of the same heuristic—a broad stratagem of exploration and evaluation—it was able to solve similar but far more difficult problems.
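
  The kind of search GPS performed can be suggested with a short Python sketch. This is only an illustration of the same broad stratagem under simple assumptions (a state records who is on the near bank and where the boat is, and a breadth-first search stands in for GPS’s means-ends heuristics); it is not Newell, Shaw, and Simon’s actual program.

```python
from collections import deque
from itertools import combinations

# A sketch of the river-crossing problem as a state-space search.  Assumptions
# of this illustration: a state records who is on the near bank and which bank
# the boat is on; a legal move carries one or two people whose combined weight
# does not exceed the boat's limit; breadth-first search (not GPS's actual
# heuristics) finds the shortest sequence of crossings.

WEIGHTS = {"father": 200, "son1": 100, "son2": 100}
CAPACITY = 200  # more than 200 pounds sinks the boat

def solve():
    start = (frozenset(WEIGHTS), "near")   # everyone on the near bank, boat near
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (near, boat), path = frontier.popleft()
        if not near:                       # nobody left on the near bank: solved
            return path
        bank = near if boat == "near" else frozenset(WEIGHTS) - near
        for size in (1, 2):                # one or two rowers per trip
            for crew in combinations(sorted(bank), size):
                if sum(WEIGHTS[p] for p in crew) > CAPACITY:
                    continue               # overloaded: the boat would sink
                crew_set = frozenset(crew)
                new_near = near - crew_set if boat == "near" else near | crew_set
                state = (new_near, "far" if boat == "near" else "near")
                if state not in seen:      # avoid revisiting explored dead ends
                    seen.add(state)
                    frontier.append((state, path + [(boat, list(crew))]))
    return None

for step, (side, crew) in enumerate(solve(), 1):
    direction = "cross over" if side == "near" else "row back"
    print(f"Trip {step} ({direction}): {' and '.join(crew)}")
```

  Run as written, the search recovers the five-trip solution described above: the two sons cross, one rows back, the father crosses, the other son rows back, and the two sons cross again.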

  Two basic features of GPS and later artificial intelligence (AI) programs brought about a metamorphosis in cognitive psychology by giving psychologists a more detailed and workable conception of mental processes than any they had previously had, plus a practical way to investigate them.9

  The first of those features is representation: the use of symbols to stand for other symbols or events. In GPS, numbers stand for words or relationships, and in the hardware (the actual computer) operated by GPS, groups of transistors, acting as binary switches that are either on or off, stand for those numbers. By analogy, cognitive psychologists could conceive of the images, words, and other symbols stored in the mind as representations of external events, and of the brain’s neural responses as representations of those images, symbols, and thoughts. A representation, in other words, corresponds to the thing it represents without being at all similar to it. But this was actually an old discovery in new form; Descartes and Fermat discovered long ago that algebraic equations can be represented by lines drawn on a graph.

  The second feature is information processing: the transforming and manipulating of data by the program in order to achieve a goal. In the case of GPS, incoming information—the feedback of each step—was evaluated as to where it had led, used to determine the next step, stored in memory, retrieved if needed again, and so on. By analogy, cognitive psychologists could conceive of the mind as an information-processing program that transforms perceptions and other incoming data into mental representations and, step by step, evaluates them, uses them to determine what to do next in the attempt to reach its goal, adds them to memory, and retrieves them for use again as needed.

  The information-processing (IP) or “computational” model of thinking has been the guiding metaphor of cognitive psychology ever since the 1960s, and has enabled researchers and theorists to explore the inner universe of the mind as never before.

  One specimen of such an exploration will exemplify how the IP model enables cognitive psychologists to ascertain what takes place in the mind. In a 1967 experiment, a research team headed by Michael Posner asked its subjects to say aloud, as fast as possible, whether two letters projected on a screen had the same or different names. When the subjects saw this

  AA

  they almost instantly said “Same,” and when they saw this

  Aa

  they again almost instantly said “Same.” But the researchers, using a highly accurate timer, measured a minuscule difference. On average, subjects replied to AA in 549 milliseconds and to Aa in 623 milliseconds. A tiny difference, to be sure—but a statistically significant one.10 What could account for it?

  The IP model envisions any simple cognitive process as a series of step-by-step actions performed on the data. The following simple flow diagram, typical of many drawn by cognitive psychologists, symbolizes what goes on when we see and recognize something:

  FIGURE 39

  A typical information-processing diagram

  That accounts for the reaction-time difference in the experiment. If an image proceeds directly from the first “processing” box to “consciousness,” it does so in less time than when it must pass through two or three boxes. In order to identify the letters in AA as having the same name, subjects had to perform only visual pattern recognition on the visual image; to identify those in Aa as having the same name, they had to locate the name of each letter in memory and then see whether they were the same—additional processing that took 74 milliseconds more, a tiny but consequential difference, and strong evidence of how the mind performed this little task. In a follow-up experiment subjects had to say whether AU were both vowels, and in another whether SC were both consonants; the AU response took somewhat longer than AA or Aa had, and SC much longer (nearly a second). Again, these longer reaction times indicated that more steps of mental processing were required.11 Thus even trifling experiments based on the IP model can reveal something of what goes on in the mind.
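
  The additive-stages logic behind that reasoning can be written out in a few lines of Python. The stage names follow the flow-diagram idea above; the 549-millisecond baseline and the 74-millisecond cost of name retrieval come from the figures reported in the text, while the category-retrieval duration is an assumed value chosen only for illustration, not one of Posner’s measurements.

```python
# A toy additive-stages model of the letter-matching task: each extra
# processing stage adds time before the answer reaches consciousness.
# 549 ms and +74 ms echo the figures reported above; the +150 ms for
# category retrieval is an assumed, illustrative value.

STAGE_MS = {
    "visual pattern match": 549,   # sufficient to answer "AA"
    "name retrieval":        74,   # extra step needed for "Aa"
    "category retrieval":   150,   # assumed extra step for vowel/consonant judgments
}

TASKS = {
    "AA (same shape)":   ["visual pattern match"],
    "Aa (same name)":    ["visual pattern match", "name retrieval"],
    "AU (both vowels?)": ["visual pattern match", "name retrieval", "category retrieval"],
}

for task, stages in TASKS.items():
    predicted = sum(STAGE_MS[stage] for stage in stages)
    print(f"{task}: {predicted} ms  ({' -> '.join(stages)})")
```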

  To be sure, the finding is an inference from results, not a direct observation of the process. But contrary to behaviorist dogma, inference of an unseen process from results is considered legitimate in the “hard” sciences. Geologists infer the events of the past from sediment layers, cosmologists the formation and development of the universe from the ancient light of distant galaxies, physicists the characteristics of short-lived atomic particles from tracks they leave in a cloud chamber or emulsion, and biologists the evolutionary path that led to Homo sapiens from fossils. So, too, with the interior universe of the mind: psychologists cannot voyage into it, but they can deduce how it works from the track, so to speak, made by an invisible thought process.

 
