If we return to my moment at the conference and my triumphant but also guilty response after critiquing the bad paper, Nagel would argue that even if my experience could be perfectly described in terms of the physical processes of my brain and nervous system from a third-person point of view, it would leave out something important—mine-ness. In his Psychology, William James describes this “for me” quality of conscious life. The Latin word for it is ipseity. At the very end of his essay, Nagel suggests that it might be possible to devise a new method of phenomenology. Phenomenology, a philosophical tradition that began in the early twentieth century with the German philosopher Edmund Husserl, seeks to investigate and explain conscious experience itself. Husserl, who read William James, understood that every experience presupposes a subject. Every perspective has an owner. When Simone de Beauvoir called on Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty as proponents of the idea that the body is a situation, she was referring to a philosophical tradition to which she belonged: phenomenology.
Husserl was profoundly interested in logic and mathematics, and he wrestled with Frege, but he criticized scientific formulations that left out lived experience and relied exclusively on an ideal mathematics in the tradition of Galileo. Nagel’s “objective” phenomenology of the future is one he argues should “not [be] dependent on empathy or the imagination.”199 I would say this is not possible, that empathy and the imagination cannot be siphoned out of phenomenology, and that the desire to do so demonstrates a prejudice against feeling, which is part of a long rationalist tradition that denigrated the passions. Husserl faced the same problem. He did not advocate a purely subjective or solipsistic theory of consciousness—the idea that each of us, human or bat, is forever stuck in his or her own body’s perspective and can never get out of it. In his late writings, in particular, Husserl offered an idea of transcendental intersubjectivity. What is this? Intersubjectivity refers to our knowing and relating to other people in the world, our being with and understanding them, one subject or person to another, and how we make a shared world through these relations. Reading Husserl is not like reading Descartes, Nagel, or James. Husserl is knotty and difficult. I can say, however, that Husserl’s idea of intersubjectivity necessarily involves empathy, and that for Husserl empathy is an avenue into another person.200
In Mind and Cosmos, Nagel suggests a broad teleological view of nature that includes mind as a possible explanation, one that resonates with Aristotle’s ideas of nature moving toward an end. Although Nagel is not religious, this idea brought him far too close to God for many, which is why he was criticized so severely. He stepped on a paradigm just as sacred to some in science as the Trinity is to Christianity. Nagel is right that subjective conscious experience, the mine-ness or ipseity of being, remains a problem in much scientific thought. Even if we could explicate every aspect of the physical brain in all its complexity, the first-person point of view, the experience of being awake and aware and thinking or asleep and dreaming, will be missing from that account. Consciousness has become a philosophical and scientific monster.
The Wet Brain
But let us ask from a third-person point of view whether the brain, the actual wet organ of neurons and synapses and chemicals, is a digital computational device or even like one. Of course, if that ethereal commodity, information, is superior to or deeper than biology, or if psychology can be severed from biology altogether, if there really are two substances, body and mind, then the question becomes less urgent. But I am interested in the brain that is located inside the mammalian skull inside an animal’s body, and I am also interested in why computational theory of mind lost its status as a hypothesis in cognitive science and became so widely accepted that it was and still is treated as a fact by many. Isn’t this exactly what Goethe warned against?
It is important to understand that despite huge strides in brain science in the last half century, vast amounts of accumulated data, and myriad speculations on how the brain-mind or mind-brain—depending on your emphasis—might work, we do not know how it works. Nevertheless, saying there is no consensual theoretical model for how the brain works is not at all the same as saying scientists know nothing. Much more is known now than was known fifty years ago, but the modular information-processing machine mind, as it is conceived in various sciences and in some philosophy, is neither a fact nor a scientific theory, and the evolutionary psychological brand of massive modularity is even more controversial. Although evolutionary psychologists may refer to behavioral genetics and neuroscience to support their model of how the mind works, the massively modular model of the mind is founded not on biology but rather on the hypothesis that our minds are composed of discrete modules and the mind as a whole behaves like or as a computer, an idea that rose out of logic and cybernetics and may have less to do with actual organic processes than with idealized models of how a system works in any material, human or machine. Georg Northoff describes this theory as one in which the brain does not constitute mental states. “Instead,” he writes, “any kind of device—a brain, a computer, or some machine—can, in principle, run the program required to produce mental states.” Northoff calls this position “the denigration of the brain.”201
Are there modules or something like modules in the wet brain? Advocates of versions of locationism and antilocationism are still with us, some emphasizing specific regions of the brain that are correlated with certain states or functions, others leaning toward more connective models. For example, a good deal of research has been done on the brain’s visual cortex and its various parts, each of which is recruited for different “jobs” involved in seeing an object in the world—shape, motion, color, location, etc. The visual cortex was long understood as a good example of a self-contained sensory mode in the brain, its visual faculty. Empirical evidence, especially in the last decade, suggests something more complex. It turns out that the visual cortex takes in not only visual stimuli but auditory stimuli as well and that there is considerable interaction between the auditory cortex and the visual cortex. This is usually referred to as cross-modal interaction, one sensory mode communicating with another. Studies have shown that what we hear affects what we see. Similarly, what we hear can affect our tactile sensations. In a 2010 paper, Ladan Shams and Robyn Kim write, “Therefore, visual processing does not appear to take place in a module independently of other sensory processes. It appears to interact vigorously with other sensory modalities in a wide variety of domains.”202
Cross-modal research goes on, and it lends support to a hypothesis about human development that says we may be born without radically distinct sensory perceptions, that for an infant the senses blur and then separate as she grows. This would help explain the many forms of synesthesia people experience, hearing colors, seeing letters and numbers as colors, or feeling sounds.203 Synesthetes retain cross-modal experiences that other people lose. But to one degree or another intermodal perception is part of all of our conscious lives. Metaphor jumps across the senses all the time. I am feeling blue. Listen to that thin sweet sound. What a sad color. Or diverse lines lifted from the inimitable Emily Dickinson: “’Twas such an evening bright and stiff,” “And a Green Chill upon the Heat,” and “They have a little Odor—that to me / Is metre—nay—’tis melody—”204
We know that parts of the brain are relatively mature at birth—the brain stem, for example, part of MacLean’s reptilian brain. It controls breathing, heart rate, body temperature, and other autonomic functions and is, in evolutionary terms, an ancient part of the larger organ, one we share with many other animals, including frogs. To think of this aspect of brain function as an automaton is, in fact, apt. The mammalian neocortex, however, the most recently evolved part of our brain and other animal brains, appears to be strikingly plastic, probably most plastic in human beings.
The human cortex develops enormously after birth, and there is considerable agreement that it develops in part through experience.205 But does that mean we can establish a “relation” between nature and nurture as separable entities, that M’s feeling of entitlement, for example, can be found in his nature (vigorous genes from good stock), not his nurture (beloved boy of adoring parents)? Does it even make logical sense? What is nature and what is nurture in this case? Would we say that the natural cortex is the brain at birth, a brain already shaped before birth through the uterine environment, and that its synaptic development after birth is nurture because its dynamic organic form cannot be understood without the person’s experience? No, because genetic dispositions are at work, too, and experience affects gene expression or suppression.
Isn’t the experience of the organism that affects synaptic connections in the brain inseparable from its nature? Doesn’t this thought mirror the role of the gene in context, but extrapolated to the level of the whole organism? Without the environment, which includes food and air, parents who rock you, yell at you, touch you, and talk to you, as well as all kinds of entanglements with the world and with others—in short, without experience—there is no recognizable human being, but that does not mean we have no heritable traits or genetic story. But that story, in terms of gene suppression, as we have seen, may be influenced by what happens to an animal. Without this dynamic development, a narrative that includes an array of influences, there will never be a philosopher sitting in his or her room alone thinking.
A remarkable form of plasticity can be seen in the visual cortex of people who are born blind.206 The visual cortex is recruited for other senses—hearing and touch—but also, it seems, for cognitive abilities, such as language.207 Furthermore, an infant can lose an entire hemisphere of its brain and grow up to be a “normal” person. Pinker discusses plasticity at some length in The Blank Slate but insists that it “does not show either that learning is crucial in shaping the brain or that genes fail to shape the brain.”208 Plasticity no doubt involves still uncertain genetic factors, but many neuroscientists do believe learning is crucial to “shaping” the brain’s cortex, including a scientist whose research Steven Pinker draws upon in one of his more recent books, The Better Angels of Our Nature, the scientist who coauthored the paper with Mark Solms on the id and ego: Jaak Panksepp.
In “The Seven Sins of Evolutionary Psychology” (2000), Jaak and Jules Panksepp argue that the plastic cortex suggests not a host of “genetically-guided modules” but an experience-dependent “general purpose cognitive-linguistic-cultural ‘playground’ for regulating the basic affective and motivational tendencies that are organized elsewhere.”209 The “elsewhere” they refer to is the subcortical affective part of the brain, which Solms and Panksepp locate as the seat of a primitive consciousness that is older than the neocortex in evolutionary terms and that binds us anatomically to other mammals, including rats. Jaak and Jules Panksepp are referring particularly to brain regions involved in emotion, which are less plastic. They write, “We believe that some currently fashionable versions of evolutionary psychology are treading rather close to neurologically implausible views of the human mind.”210 Fifteen years after they published their paper, I would say those views are looking even more implausible. The Panksepps note that this may be especially true for language development and that it is not known whether our language capacity emerges from genetic influences or from a reconfiguration of adaptations. Stephen Jay Gould believed that language might simply be an accidental by-product of our large human brains, what he called an “exaptation.”
In other words, no one doubts that human beings learn to talk and use symbols in ways mice and frogs never do. This must involve some native capacity, but exactly how that works remains mysterious and open to multiple explanations. The evolutionary biologist Terrence Deacon has argued that brain plasticity is itself an adaptation. He and many others do not accept Pinker’s language “instinct” or module, an idea that Pinker bases on Noam Chomsky’s innovative theory of generative grammar. There has been a strong move against Chomsky’s idea of an innate language “organ” in contemporary linguistics.211 I am not a person who thinks recent ideas are always the best. On the contrary, my reading in the history of science has given me at times a rather jaded perspective on the notion of progress. It is pretty much universally accepted, however, that there is a critical period for learning language. If a young child is deprived of language stimuli between birth and age ten (the time window changes depending on the research), no amount of teaching will make up for the deficit.
Notably, Chomsky referred to his early work on syntactic theory as “Cartesian linguistics.” He relied on logical, mathematical processes for his explication of a universal grammar, one that necessarily left out the meaning of and the context for language. As Michael Tomasello points out in a review of Pinker’s The Language Instinct, the Chomsky model is founded on a “mathematical approach,” which “yields from the outset structures that are characterized as abstract and unchanging Platonic forms.”212 The desire is to get to the bottom of things, to strip away the flotsam and jetsam and uncover an essence, one that can be described in purely logical terms. The disagreements about language development are intense, ongoing, and unresolved. At the very least, there is reason to suspect our minds are not wholly modular or determined by natural selection, which is not the same as saying that natural selection has not played a role in our present reality or that human beings are born blank slates, something even John Locke, who is credited with the phrase, did not believe. Moreover, if cortical plasticity is as pervasive as it appears to be, drawing firm distinctions between nature and nurture begins to look rather odd.
Unnatural Wonders
CTM would not exist without the extraordinary mathematician Alan Turing, the originator of the modern computer. In 1935–36, Turing worked on answering a mathematical question posed by David Hilbert that had long been unsolved, and in order to answer it he invented an imaginary basic computing machine, which stored information and executed a finite number of operations with the information it had stored. The information in this machine was fed to it on a tape marked with discrete symbols, either a 0 or a 1. In his biography of Turing, Andrew Hodges beautifully explicates for nonmathematicians the importance of Turing’s machine. Hodges refers to a book Turing loved as a child called Natural Wonders Every Child Should Know, in which the brain is described as “a machine, a telephone exchange or an office system.”213 This was commonplace. The English physician William Harvey (1578–1657) used the metaphor of a hydraulic system for the heart and blood circulation to great effect. Henri Bergson used the telephone switchboard as a brain metaphor. Freud called upon the telephone receiver as an analogy for the analyst.
Hodges continues, “What he [Turing] had done was to combine such a naive mechanistic picture of the mind with the precise logic of pure mathematics. His machines—soon to be called Turing machines—offered a bridge, a connection between abstract symbols, and the physical world.”214 Just around the same time, Alonzo Church also answered Hilbert’s question, but by another route altogether. What is now called the Church-Turing thesis maintains that the Turing machine can solve any computable calculation if it has enough tape and enough time. Turing published “On Computable Numbers” in 1936, and although he was disappointed in the response to it at the time, it would change the scientific landscape, including the idea of mind in psychology. Turing’s ambitions went far beyond making useful machines. His ambition was to build a brain. He wrote, “We may hope that machines will eventually compete with men in all purely intellectual fields.”215 He wanted to invent a machine that would think for itself, one that would not depend on a programmer.
In “Intelligent Machinery,” a text published after his death, Turing noted that language development in children was not due to an innate English- or French-speaking brain area and that a person’s “linguistic [brain] parts” develop through “different training.” Turing concluded, “There are large parts of the brain, chiefly in the cortex, whose function is largely indeterminate.” In children, this flexibility is much greater than in the adult. It all depends on learning—“on the training in childhood.” Turing winds up articulating the fundamental insight of behaviorism, albeit in mechanical terms: “All of this suggests that the cortex of the infant is an unorganized machine, which can be organized by suitable interfering training. This organizing might result in the modification of the machine into a universal machine or something like it.”216 Turing’s knowledge of biology was not extensive. His knowledge of mathematics was, and it rested on a fundamental assumption that certain mental operations are computable and therefore can be computed on a universal machine.
Computable operations are procedures that move forward according to a set of logical rules, step-by-step, without missing a beat—an algorithm. As Hobbes argued, one step determines the next. Because Turing’s machine is capable of imitating such procedures, it can imitate rational processes and, because a computer program is entirely legible, it follows that human reason is similarly legible. Note how closely Turing’s mental machinery resembles Watson and Crick’s central dogma, the neat sequential processing of a symbolic code, a biological algorithm that established a unilateral flow of information from DNA to RNA through transcription and then to proteins through translation.
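The step-by-step character of such a procedure can be made concrete in a brief sketch. The following is not Turing's own formalism but a minimal illustration in Python, with an invented rule table: a Turing machine reduces to a finite table of rules read against a tape of 0s and 1s, where each step is fully determined by the current state and the symbol under the head—the "one step determines the next" property described above.

```python
# Minimal Turing machine sketch (illustrative only, not Turing's notation).
# A rule table maps (state, symbol) -> (symbol to write, move, next state).
# Each step is fully determined by the current state and tape symbol.

def run(rules, tape, state="start", blank=" "):
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else blank
        write, move, state = rules[(state, symbol)]  # the single determined step
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1  # this example only ever moves right
    return "".join(tape).strip()

# A hypothetical example machine: flip every bit, halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run(flip_bits, "0110"))  # prints "1001"
```

However trivial the rule table, the scheme is the point: because every step follows mechanically from the last, any procedure expressible this way can, given enough tape and time, be carried out by one universal machine.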
Turing, however, was well aware that people’s mental lives would not be easily duplicated in machines. The human computer that had inspired the machine computer indulged in sensual pleasures impossible for the machine. In “Intelligent Machinery,” Turing also speculated on the body question. The intelligent machine he dreamed of “would still have no contact with food, sex, sport and many other things of interest to the human being.” Therefore it seemed best to explore what “can be done with a ‘brain’ which is more or less without a body, providing at most organs of sight, speech and hearing.”217 Turing thought that mathematics, cryptography, and human languages all seemed suitable to such research.
A Woman Looking at Men Looking at Women