Neurones and Psychons
In 1895, Sigmund Freud abandoned his Project for a Scientific Psychology, in which he hoped to bind the neurons of the brain to psychological states and solve the mind-body divide. The manuscript was lost, discovered, and finally published in 1950. Freud stated his goal clearly in his introduction: “The intention is to furnish a psychology that shall be a natural science: that is to represent psychical qualities as quantitatively determinate states of specifiable material particles, thus making those processes perspicuous and free from contradiction.” Freud specified that the “material particles” under examination were “neurones.”218
The years Freud spent working as a “hard scientist” are generally left out of popular references to him and his theories. I will never forget the time I gently reminded a science journalist I met at a literary festival that Freud had, after all, been a neurologist. He stared at me in disbelief. What I thought was a reminder turned out to be wholly new information to him. The facts are that Freud worked as a neurobiologist at the Institute of Physiology at the University of Vienna under Ernst Wilhelm von Brücke, specifically on the structure of nerve cells in the lamprey and the river crayfish, and he published scientific papers on the subject that stand as contributions to the literature. He also worked at the Institute of Brain Anatomy under the famous psychiatrist Theodor Meynert, where he studied the human nervous system. His Project was an attempt to bind his knowledge of the dynamic nervous system to psychic qualities and describe an economics of mental energy.
Just shy of a half century after Freud decided not to pursue his neurobiological Project, another psychiatrist, this one American, Warren McCulloch, published a paper with a young, brilliant logician, Walter Pitts, in which they purported to have solved the mind-body problem through a model of working brain physiology. Like Freud, McCulloch hoped to understand neurons and neural nets as the avenue to human psychology in general and psychiatric illness in particular. Rather than atoms or genes, he proposed “psychons” as the fundamental unit of the human mind. Unlike Freud, who gave up on the Project for reasons that remain a subject of considerable debate, McCulloch did not abandon his psychon theory. The Bulletin of Mathematical Biophysics published the now landmark McCulloch-Pitts paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity.”219 The title alone alerts the reader to the tantalizing idea that neurons are in some way bearing ideas. Finally, the neuron, with its axon and dendrites, and a thought in the human mind such as, Are there lemons in the refrigerator? will be united as one. The pulsing electrical activity of the wrinkled organ inside the human skull will be securely bound to the mental world. How did they do it? Through the logic of binary neurons:
The psychon is no less than the activity of a single neuron. Since that activity is inherently propositional, all psychic events have an intentional, semiotic character. The “all-or-none” law of these activities, and the conformity of their relations to those of the logic of propositions, insure that the relations of psychons are those of the two-valued logic of propositions. Thus in psychology, introspective, behavioristic, or physiological, the fundamental relations are those of two-valued logic.220
Neurons obey binary logic. Therefore all of our psychological states, whatever they are, obey the same logic.
McCulloch did an internship in organic neurology at Bellevue Hospital in New York, was fascinated by neurological disorders such as Parkinson’s, but was also attracted by logic and mathematics, Whitehead and Russell’s Principia Mathematica in particular. He worked in Joannes Dusser de Barenne’s Laboratory of Neurophysiology at Yale and later moved to the University of Illinois at Chicago, where he met the members of the Committee on Mathematical Biology, led by Nicolas Rashevsky. Rashevsky’s dream was to exploit the mathematical techniques of theoretical physics for biology. By the early 1940s, McCulloch had read Turing’s 1936 paper on computable numbers, which he later declared had sent him in the “right direction.”221
The right direction, described in the McCulloch-Pitts paper, was to create a greatly simplified model of neuronal processes. It is important to stress that the authors knew their neurons were idealizations that left out the complexity of real neurons. Their model of neural nets wasn’t meant to be an exact replica of actual neuronal systems. They wanted to show how something like a neuronal system could explain the human mind. Starting from the simple idea that neurons are inhibited or excited, fire or don’t fire, are on or off, the authors reduced them to binary, digital abstractions. Neuronal activity therefore followed the binary essence of Boolean logic: “Because of the ‘all-or-none’ character of nervous activity, neural events and the relations among them can be treated by means of propositional logic.” A proposition is a claim that may be expressed in a sentence such as, Jane is smoking a fat Cuban cigar. Every proposition or statement is either true or false. Propositions are the atoms or bricks of a logical argument that follow one another in a systematic way. If one of the bricks is “false,” the whole building collapses. Propositional logic is more complex than this, but for a broad understanding of what McCulloch and Pitts were up to, it will suffice. By mathematical means, McCulloch and Pitts mapped propositional psychological content—via true or false binary logic—onto their simplified neurons, which was enough to justify the word “semiotic,” the study of signs. The mind’s form can be reduced to propositional logic.
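The binary neuron at the heart of the paper is simple enough to sketch in a few lines of code. What follows is an illustration of the general idea, not the authors’ own notation: each unit sums its binary excitatory inputs against a fixed threshold, and any active inhibitory input silences the unit outright, which is already enough to realize the Boolean propositions AND, OR, and NOT. The function and variable names here are my own.

```python
# Illustrative sketch of a McCulloch-Pitts unit (names are mine, not the paper's):
# binary inputs, a firing threshold, and absolute inhibition, in which a single
# active inhibitory input vetoes firing regardless of excitation.

def mp_neuron(excitatory, inhibitory, threshold):
    """Return 1 (fire) or 0 (silent) given lists of binary inputs."""
    if any(inhibitory):  # absolute inhibition: one inhibitory pulse silences the cell
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Boolean propositions fall out as choices of threshold over two inputs:
def AND(a, b):
    return mp_neuron([a, b], [], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [], threshold=1)

def NOT(a):
    # a constantly excited unit that fires unless vetoed by its one inhibitory input
    return mp_neuron([1], [a], threshold=1)
```

Chaining such units into a net lets any expression of two-valued propositional logic be computed, which is the formal sense in which the paper claims that ideas are “immanent in nervous activity.”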
Even though a neuron can be described as on or off, in an active rather than an inhibited state, which is analogous to a logical proposition that can be declared true or false, how exactly do you conflate one with the other? The idea is that the same law is at work, that human physiology and human psychology are under the sway of a universal binary reality. Although crucial to the McCulloch-Pitts thesis, this has not been borne out by research. Discussing the paper in his book An Introduction to Neural Networks, James A. Anderson writes, “Our current understanding of neuron function suggests that neurons are not devices realizing the propositions of formal logic.”222 The scientists Walter J. Freeman and Rafael Núñez are more adamant: “Contrary to widespread beliefs among computer scientists and Cognitivists, action potentials [changes in the membrane of a neuron that lead to the transmission of an electrical impulse] are not binary digits, and neurons do not perform Boolean algebra.”223 So how did the brain as a computational device become a truism? After all, the first digital computer, ENIAC, appeared in 1946, a decade after Turing’s imaginary machine. Computation had been understood as one human activity among others. Doing an arithmetic problem is computing; daydreaming is not. How did the idea that everything the brain does is computable become so widespread, the dogma of CTM?
Many neuroscientists I have met, who do not subscribe to a Boolean model of neurons or believe they can be understood via propositional logic or Turing machines, routinely use the word “computation” to describe what the brain does. At the end of his insightful critique of the McCulloch-Pitts paper, Gualtiero Piccinini speaks to its legacy and the fact that CTM became a model for human mental processes.
But in spite of the difficulties, both empirical and conceptual, with McCulloch and Pitts’s way of ascribing computations to the brain, the computational theory of mind and brain took on a life of its own. McCulloch and Pitts’s views—that neural nets perform computations (in the sense of computability theory) and that neural computations explain mental phenomena—stuck and became the mainstream theory of brain and mind. It may be time to rethink the extent to which those views are justified in light of current knowledge of neural mechanisms.224
I may be the first to link Freud’s Project and McCulloch and Pitts’s paper. McCulloch was extremely hostile to psychoanalysis, so pairing him with Freud may smack of the outrageous. Freud did not connect neural activity to mathematics or to propositional logic. He addressed the excited and resistant character of neurons as biological entities, however, and hoped to explain perception and memory with two classes of them, as well as create an overall description of energy release and conservation in the brain. Freud’s desire to root psychic phenomena in neural synaptic processes and design a working scientific model for mental life, including mental illnesses, however, was similar to McCulloch’s. Both Freud and McCulloch wanted to close the explanatory gap, to make mental and physical not two but one.
There are neuroscientists who are more impressed by the prescience of Freud’s theory of mind than by Pitts and McCulloch’s logical calculus.225 Karl Pribram (1919–2015), a scientist who was not shy about using mathematics as a tool in his own work, argued that the Project both used the neurological knowledge of the day (neurons were still controversial) to great effect and anticipated future science. In fact, Pribram taught Freud’s Project as if it were his own theory in the early 1960s to colleagues and students, who received it with enthusiasm. Only at the very end of the lecture did he reveal that he was teaching Freud, not himself. His audience reacted with disbelief. “Why this reluctance to believe?” Pribram asked in a lecture delivered on the one hundredth anniversary of the Project. “Why is Freud considered so differently from Pavlov or Hebb?” He answers his own question: “I believe the answer is simple. Pavlov and Hebb couched their neuropsychological speculations in neuroscientific terminology—voilà, they are neuroscientists. Freud, by contrast, couched his terminology in psychological, subjective terms.”226 Freud’s language is no doubt part of the reason for his separate status, but I believe it is more complicated than that. Unlike Pavlov and his famous dogs or Donald Hebb, the mid-twentieth-century neuroscientist who has gone down in history for the law named after him—neurons that fire together, wire together—Freud himself became subject to an all-or-nothing, right-or-wrong, true-or-false thinking in the wider culture. This has always bewildered me. Why is it necessary to take all of Freud rather than accepting aspects of his thought and rejecting others, as one does with every other thinker?227
No one remembers the psychon. It has gone the way of craniometry (the skull measuring that Broca, among many others, practiced, most often to determine racial and sexual differences). Computation as a model for the mind continues to thrive. Why all the continued confidence in computation, despite general agreement that the McCulloch-Pitts paper did not solve the mind-body problem? In her massive Mind as Machine: A History of Cognitive Science, Margaret Boden puts her finger on the answer. After noting that the material embodiment of the McCulloch-Pitts neuron was irrelevant to its function, even though the authors of the paper did not say so “explicitly,” Boden writes, “In sum, the abstractness . . . of McCulloch and Pitts’ networks was significant. It licensed von Neumann to design electronic versions of them . . . It permitted computer scientists, including AI workers, who followed him to consider software independently of hardware. It enabled psychologists to focus on mental (computational) processes even while largely ignorant of the brain”228 (my italics). The word inside Boden’s parenthetical synonym for “mental”—“computational”—places her squarely in the computational cognitive camp.
John von Neumann, mathematician, physicist, and inventor, took the McCulloch-Pitts neuron as a starting point for his cellular automata and self-organizing systems. What matters in this discussion is not how von Neumann managed these remarkable feats but that in order to manage them he, too, had to employ a model that simplified organisms into a form that was more easily manipulated. Like Pitts and McCulloch, von Neumann was keenly aware of the distinction between model and living organism. In his 1951 paper, “The General and Logical Theory of Automata,” von Neumann wrote, “The living organisms are very complex—part digital and part analogy mechanisms. The computing machines, at least in their recent forms . . . are purely digital. Thus I must ask you to accept this oversimplification of the system . . . I shall consider the living organisms as if they were purely digital automata.”229 The simulation requires simplification.
It would be foolish to argue against simplification or reduction as a scientific tool. Like a Matisse cutout of a dancer that seems to describe the music of the human body itself, a simplified model may reveal some essential quality of what is being studied. In science and in art, the boiled-down may tell more than an immense, lush, baroque, and more unwieldy description of the same object or story.
Here, for example, is Robert Herrick’s perfect poem, “Upon Prue, His Maid.”
In this little urn is laid
Prudence Baldwin, once my maid,
From whose happy spark here let
Spring the purple violet.230
Such reductions serve as vehicles of discovery. On the other hand, some simplifications risk eliminating what matters most. This is the dilemma that faces artist and scientist alike. What to leave in and what to take out? In a 2011 paper, Peter beim Graben and James Wright contemplated the legacy of the McCulloch-Pitts model in terms of its importance for neurobiology. “Ideally, such observation models need to be simple enough to be tractable analytically and/or numerically, yet complicated enough to retain physiological realism. Sadly, we do not know which physiological properties are truly the essential, nor even whether such a distinction can be made.”231 In other words, the computational neural nets that became so vital to cognitive psychology and to artificial intelligence are treated with far more pessimism among those who are not entirely ignorant of that still mysterious organ: the brain.
GOFAI vs. Know-How
As is true of every discipline, the story of artificial intelligence has not been free of conflict. In the days of the Macy conferences, for example, many and diverse disciplines and points of view were represented. John Dewey was on the Macy board. Warren McCulloch and the anthropologists Gregory Bateson and Margaret Mead belonged to the core conference group, as did the strong-willed and articulate psychiatrist-psychoanalyst Lawrence Kubie. I find it amusing that on one occasion Kubie did his best to discuss the unconscious with an uncomprehending Walter Pitts, who compared it to “a vermiform appendix” that “performs no function” but “becomes diseased with extreme ease.”232 Norbert Wiener, John von Neumann, the philosopher Susanne Langer, Claude Shannon, and the psychologist Erik Erikson all participated in or were guests at the conferences. Extraordinary thinkers were brought together in one place. Cybernetics was interdisciplinary by definition. It addressed systems and control in a wholly abstract, dematerialized way that could be applied to anything. Furthermore, it was not reductionist. It emphasized the relations between and among the various parts of any dynamic system. The movement of information and feedback, both positive and negative, were key to the system’s self-organization, an organization that was in no way dependent on the matter in which it was instantiated. Without this precedent, Pinker could not claim that concepts such as “information,” “computation,” and “feedback” describe “the deepest understanding of what life is, how it works and what forms it is likely to take elsewhere in the universe.” Cybernetics and linked theories, such as systems theory, have had strikingly diverse applications. They have come to roost in everything from cellular structures to corporations to tourism and family therapy.
And yet, the interdisciplinary character of cybernetics made definitions of the concepts involved all the more important. There was much intense discussion at the Macy conferences about digital (or discrete) processes versus analog (or continuous) ones, and there was no general agreement on how to understand the distinction. Gregory Bateson observed, “It would be a good thing to tidy up our vocabulary.” Tidying up vocabulary may indeed be one of the most difficult aspects of doing any science. J. C. R. Licklider, a psychologist who did research that would lead to the creation of the Internet, wanted to know how the analog/digital distinction related to an actual nervous system. Von Neumann admitted, “Present use of the terms analogical and digital in science is not completely uniform.” He also said that in “almost all parts of physics the underlying reality is analogical. The digital procedure is usually a human artifact for the sake of description.”233 Just as the physician explained to his medical students that the “mechanical steps” he outlined to describe labor and birth were a way to divide up “a natural continuum,” von Neumann viewed the digital as the scientist’s descriptive tool and the analogical as its referent. In his later theory of automata paper, von Neumann would characterize living organisms as both digital and analog. The ideal neurons of McCulloch and Pitts functioned digitally. Their hope was that, despite simplification, they nevertheless resembled an actual nervous system, although that correspondence, as I have shown, has had a dubious legacy.
I do not think these problems have vanished. Cybernetics has led to all kinds of interesting thoughts about complex nonlinear systems. Without cybernetics, it seems unlikely that chaos theory, for example, would have been applied to explain the unpredictability of systems of varying kinds, from weather patterns to economics. The difficulties are nevertheless the same: How exactly does the model relate to its multiple referents? When is simplification good and when is it bad? What does it actually mean to apply the same model to living and nonliving systems, to cellular structures and to machines? In the early days of artificial intelligence, these questions were not settled, but after an initial period of many flowers blooming, the field of artificial intelligence settled for some time into what is now called GOFAI: good, old-fashioned artificial intelligence.
CTM is crucial to GOFAI. The definition of computational theory of mind in the glossary of The Cambridge Handbook of Artificial Intelligence makes this explicit. The prose is not pretty, but it is worth quoting with some explanatory comments of my own:
A Woman Looking at Men Looking at Women: Essays on Art, Sex, and the Mind Page 31