I Am a Strange Loop
columns in the visual cortex
area 19 of the visual cortex
the entire visual cortex
the left hemisphere
Although these are all legitimate and important objects of neurological study, to me this list betrays a limited point of view. Saying that studying the brain is limited to the study of physical entities such as these would be like saying that literary criticism must focus on paper and bookbinding, ink and its chemistry, page sizes and margin widths, typefaces and paragraph lengths, and so forth. But what about the high abstractions that are the heart of literature — plot and character, style and point of view, irony and humor, allusion and metaphor, empathy and distance, and so on? Where did these crucial essences disappear in the list of topics for literary critics?
My point is simple: abstractions are central, whether in the study of literature or in the study of the brain. Accordingly, I herewith propose a list of abstractions that “researchers of the brain” should be just as concerned with:
the concept “dog”
the associative link between the concepts “dog” and “bark”
object files (as proposed by Anne Treisman)
frames (as proposed by Marvin Minsky)
memory organization packets (as proposed by Roger Schank)
long-term memory and short-term memory
episodic memory and melodic memory
analogical bridges (as proposed by my own research group)
mental spaces (as proposed by Gilles Fauconnier)
memes (as proposed by Richard Dawkins)
the ego, id, and superego (as proposed by Sigmund Freud)
the grammar of one’s native language
sense of humor
“I”
I could extend this list arbitrarily. It is merely suggestive, intended to convey my thesis that the term “brain structure” should include items of this general sort. It goes without saying that some of the above-listed theoretical notions are unlikely to have lasting validity, while others may be increasingly confirmed by various types of research. Just as the notion of “gene” as an invisible entity that enabled the passing-on of traits from parents to progeny was proposed and studied scientifically long before any physical object could be identified as an actual carrier of such traits, and just as the notion of “atoms” as the building blocks of all physical objects was proposed and studied scientifically long before individual atoms were isolated and internally probed, so any of the notions listed above might legitimately be considered as invisible structures for brain researchers to try to pinpoint physically in the human brain.
Although I’m convinced that finding the exact physical incarnation of any such structure in “the human brain” (is there only one?) would be an amazing stride forward, I nonetheless don’t see why physical mapping should constitute the be-all and end-all of neurological inquiry. Why couldn’t the establishment of various sorts of precise relationships among the above-listed kinds of entities, prior to (or after) physical identification, be just as validly considered brain research? This is how scientific research on genes and atoms went on for many decades before genes and atoms were confirmed as physical objects and their inner structure was probed.
A Simple Analogy between Heart and Brain
I wish to offer a simple but crucial analogy between the study of the brain and the study of the heart. In our day, we all take for granted that bodies and their organs are made of cells. Thus a heart is made of many billions of cells. But concentrating on a heart at that microscopic scale, though obviously important, risks missing the big picture, which is that a heart is a pump. Analogously, a brain is a thinking machine, and if we’re interested in understanding what thinking is, we don’t want to focus on the trees (or their leaves!) at the expense of the forest. The big picture will become clear only when we focus on the brain’s large-scale architecture, rather than doing ever more fine-grained analyses of its building blocks.
At some point a billion years or so ago, natural selection, in its usual random-walk fashion, bumped into cells that contracted rhythmically, and little beings possessing such cells did well for themselves because the cells’ contractions helped send useful stuff here and there inside the being itself. Thus, by accident, were pumps born, and in the abstract design space of all such proto-pumps, nature favored designs that were more efficient. The inner workings of the pulsating cells making up those pumps had been found, in essence, and the cells’ innards thus ceased being the crucial variables that were selected for. It was a brand-new game, in which rival architectures of hearts became the chief contenders for selection by nature, and on that new level, ever more complex patterns quickly evolved.
For this reason, heart surgeons don’t think about the details of heart cells but concentrate instead on large architectural structures in the heart, just as car buyers don’t think about the physics of protons and neutrons or the chemistry of alloys, but concentrate instead on high abstractions such as comfort, safety, fuel efficiency, maneuverability, sexiness, and so forth. And thus, to close out my heart–brain analogy, the bottom line is simply that the microscopic level may well be — or rather, almost certainly is — the wrong level in the brain on which to look, if we are seeking to explain such enormously abstract phenomena as concepts, ideas, prototypes, stereotypes, analogies, abstraction, remembering, forgetting, confusing, comparing, creativity, consciousness, sympathy, empathy, and the like.
Can Toilet Paper Think?
Simple though this analogy is, its bottom line seems sadly to sail right by many philosophers, brain researchers, psychologists, and others interested in the relationship between brain and mind. For instance, consider the case of John Searle, a philosopher who has spent much of his career heaping scorn on artificial-intelligence research and computational models of thinking, taking special delight in mocking Turing machines.
A momentary digression… Turing machines are extremely simple idealized computers whose memory consists of an infinitely long (i.e., arbitrarily extensible) “tape” of so-called “cells”, each of which is just a square that either is blank or has a dot inside it. A Turing machine comes with a movable “head”, which looks at any one square at a time, and can “read” the cell (i.e., tell if it has a dot or not) and “write” on it (i.e., put a dot there, or erase a dot). Lastly, a Turing machine has, stored in its “head”, a fixed list of instructions telling the head under which conditions to move left one cell or right one cell, or to make a new dot or to erase an old dot. Though the basic operations of all Turing machines are supremely trivial, any computation of any sort can be carried out by an appropriate Turing machine (numbers being represented by adjacent dot-filled cells, so that “•••” flanked by blanks would represent the integer 3).
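To make this digression concrete, here is a minimal sketch (in Python, and purely illustrative rather than drawn from Hofstadter's text) of such a machine: the tape is an arbitrarily extensible row of cells holding a blank or a dot, the head reads and writes one cell at a time, and a fixed instruction table, keyed on the machine's internal state and the symbol under the head, says what to write, which way to move, and which state to enter next. The function name run_turing_machine and the tiny add_one program are hypothetical choices made only for this example.

```python
# A minimal, illustrative Turing-machine simulator (not taken from the book).
# The tape is a row of cells that are blank (0) or dotted (1); the head reads
# and writes one cell at a time; a fixed instruction table, keyed on
# (state, symbol), tells the head what to write, which way to move, and which
# state to enter next.

from collections import defaultdict

def run_turing_machine(program, tape, start_state="start", halt_state="halt",
                       max_steps=10_000):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state),
    with symbols 0/1 and move -1 (left) or +1 (right)."""
    cells = defaultdict(int, enumerate(tape))   # arbitrarily extensible "tape"
    head, state = 0, start_state
    for _ in range(max_steps):
        if state == halt_state:
            break
        write, move, state = program[(state, cells[head])]
        cells[head] = write
        head += move
    return [cells[i] for i in range(min(cells), max(cells) + 1)]

# Hypothetical example: the integer n is written as n adjacent dot-filled
# cells, and this tiny program scans right past the dots and appends one
# more, computing n + 1.
add_one = {
    ("start", 1): (1, +1, "start"),   # still over the dots: keep them, move right
    ("start", 0): (1, +1, "halt"),    # first blank cell: write a dot and halt
}

print(run_turing_machine(add_one, [1, 1, 1]))   # [1, 1, 1, 1], i.e. three dots become four
```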
Back now to philosopher John Searle. He has gotten a lot of mileage out of the fact that a Turing machine is an abstract machine, and therefore could, in principle, be built out of any materials whatsoever. In a ploy that, in my opinion, should fool only third-graders but that unfortunately takes in great multitudes of his professional colleagues, he pokes merciless fun at the idea that thinking could ever be implemented in a system made of such far-fetched physical substrates as toilet paper and pebbles (the tape would be an infinite roll of toilet paper, and a pebble on a square of paper would act as the dot in a cell), or Tinkertoys, or a vast assemblage of beer cans and ping-pong balls bashing together.
In his vivid writings, Searle gives the appearance of tossing off these humorous images light-heartedly and spontaneously, but in fact he is carefully and premeditatedly instilling in his readers a profound prejudice, or perhaps merely profiting from a preexistent prejudice. After all, it does sound preposterous to propose “thinking toilet paper” (no matter how long the roll might be, and regardless of whether pebbles are thrown in for good measure), or “thinking beer cans”, “thinking Tinkertoys”, and so forth. The light-hearted, apparently spontaneous images that Searle puts up for mockery are in reality skillfully calculated to make his readers scoff at such notions without giving them further thought — and sadly, they often work.
The Terribly Thirsty Beer Can
Indeed, Searle goes very far in his attempt to ridicule the systems that he portrays in this humorous fashion. For example, to ridicule the notion that a gigantic system of interacting beer cans might “have experiences” (yet another term for consciousness), he takes thirst as the experience in question, and then, in what seems like a casual allusion to something obvious to everyone, he drops the idea that in such a system there would have to be one particular can that would “pop up” (whatever that might mean, since he conveniently leaves out all description of how these beer cans might interact) on which the English words “I am thirsty” are written. The popping-up of this single beer can (a micro-element of a vast system, and thus comparable to, say, one neuron or one synapse in a brain) is meant to constitute the system’s experience of thirst. In fact, Searle has chosen this silly image very deliberately, because he knows that no one would attribute to it the slightest amount of plausibility. How could a metallic beer can possibly experience thirst? And how would its “popping up” constitute thirst? And why should the words “I am thirsty” written on a beer can be taken any more seriously than the words “I want to be washed” scribbled on a truck caked in mud?
The sad truth is that this image is the most ludicrous possible distortion of computer-based research aimed at understanding how cognition and sensation take place in minds. It could be criticized in any number of ways, but the key sleight of hand that I would like to focus on here is how Searle casually states that the experience claimed for this beer-can brain model is localized to one single beer can, and how he carefully avoids any suggestion that one might instead seek the system’s experience of thirst in a more complex, more global, high-level property of the beer cans’ configuration.
When one seriously tries to think of how a beer-can model of thinking or sensation might be implemented, the “thinking” and the “feeling”, no matter how superficial they might be, would not be localized phenomena associated with a single beer can. They would be vast processes involving millions or billions or trillions of beer cans, and the state of “experiencing thirst” would not reside in three English words pre-painted on the side of a single beer can that popped up, but in a very intricate pattern involving huge numbers of beer cans. In short, Searle is merely mocking a trivial target of his own invention. No serious modeler of mental processes would ever propose the idea of one lonely beer can (or neuron) for each sensation or concept, and so Searle’s cheap shot misses the mark by a wide margin.
It’s also worth noting that Searle’s image of the “single beer can as thirst-experiencer” is but a distorted replay of a long-discredited idea in neurology — that of the “grandmother cell”. This is the idea that your visual recognition of your grandmother would take place if and only if one special cell in your brain were activated, that cell constituting your brain’s physical representation of your grandmother. What significant difference is there between a grandmother cell and a thirst can? None at all. And yet, because John Searle has a gift for catchy imagery, his specious ideas have, over the years, had a great deal of impact on many professional colleagues, graduate students, and lay people.
It’s not my aim here to attack Searle in detail (that would take a whole dreary chapter), but to point out how widespread is the tacit assumption that the level of the most primordial physical components of a brain must also be the level at which the brain’s most complex and elusive mental properties reside. Just as many aspects of a mineral (its density, its color, its magnetism or lack thereof, its optical reflectivity, its thermal and electrical conductivity, its elasticity, its heat capacity, how fast sound spreads through it, and on and on) are properties that come from how its billions of atomic constituents interact and form high-level patterns, so mental properties of the brain reside not on the level of a single tiny constituent but on the level of vast abstract patterns involving those constituents.
Dealing with brains as multi-level systems is essential if we are to make even the slightest progress in analyzing elusive mental phenomena such as perception, concepts, thinking, consciousness, “I”, free will, and so forth. Trying to localize a concept or a sensation or a memory (etc.) down to a single neuron makes no sense at all. Even localization to a higher level of structure, such as a column in the cerebral cortex (these are small structures containing on the order of forty neurons, and they exhibit a more complex collective behavior than single neurons do), makes no sense when it comes to aspects of thinking like analogy-making or the spontaneous bubbling-up of episodes from long ago.
Levels and Forces in the Brain
I once saw a book whose title was “Molecular Gods: How Molecules Determine Our Behavior”. Although I didn’t buy it, its title stimulated many thoughts in my brain. (What is a thought in a brain? Is a thought really inside a brain? Is a thought made of molecules?) Indeed, the very fact that I soon placed the book back up on the shelf is a perfect example of the kinds of thoughts that its title triggered in my brain. What exactly determined my behavior that day (e.g., my interest in the book, my pondering about its title, my decision not to buy it)? Was it some molecules inside my brain that made me reshelve it? Or was it some ideas in my brain? What is the proper way to talk about what was going on in my head as I first flipped through that book and then put it back?
At the time, I was reading books by many different writers on the brain, and in one of them I came across a chapter by the neuropsychologist Roger Sperry, which not only was written with a special zest but also expressed a point of view that resonated strongly with my own intuitions. I would like to quote here a short passage from Sperry’s essay “Mind, Brain, and Humanist Values”, which I find particularly provocative.
In my own hypothetical brain model, conscious awareness does get representation as a very real causal agent and rates an important place in the causal sequence and chain of control in brain events, in which it appears as an active, operational force….
To put it very simply, it comes down to the issue of who pushes whom around in the population of causal forces that occupy the cranium. It is a matter, in other words, of straightening out the peck-order hierarchy among intracranial control agents. There exists within the cranium a whole world of diverse causal forces; what is more, there are forces within forces within forces, as in no other cubic half-foot of universe that we know….
To make a long story short, if one keeps climbing upward in the chain of command within the brain, one finds at the very top those over-all organizational forces and dynamic properties of the large patterns of cerebral excitation that are correlated with mental states or psychic activity…. Near the apex of this command system in the brain…. we find ideas.
Man over the chimpanzee has ideas and ideals. In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and, thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet, including the emergence of the living cell.
Who Shoves Whom Around Inside the Cranium?
Yes, reader, I ask you: Who shoves whom around in the tangled megaganglion that is your brain, and who shoves whom around in “this teetering bulb of dread and dream” that is mine? (The marvelously evocative phrase in quotes, serving also as this chapter’s title, is taken from “The Floor” by American poet Russell Edson.)
Sperry’s pecking-order query puts its finger on what we need to know about ourselves — or, more pointedly, about our selves. What was really going on in that fine brain on that fine day when, allegedly, something calling itself “I” did something called “deciding”, after which a jointed appendage moved in a fluid fashion and a book found itself back where it had been just a few seconds before? Was there truly something referable-to as “I” that was “shoving around” various physical brain structures, resulting in the sending of certain carefully coordinated messages through nerve fibers and the consequent moving of shoulder, elbow, wrist, and fingers in a certain complex pattern that left the book upright in its original spot — or, contrariwise, were there merely myriads of microscopic physical processes (quantum-mechanical collisions involving electrons, photons, gluons, quarks, and so forth) taking place in that localized region of the spatiotemporal continuum that poet Edson dubbed a “teetering bulb”?
Do dreads and dreams, hopes and griefs, ideas and beliefs, interests and doubts, infatuations and envies, memories and ambitions, bouts of nostalgia and floods of empathy, flashes of guilt and sparks of genius, play any role in the world of physical objects? Do such pure abstractions have causal powers? Can they shove massive things around, or are they just impotent fictions? Can a blurry, intangible “I” dictate to concrete physical objects such as electrons or muscles (or for that matter, books) what to do?
Have religious beliefs caused any wars, or have all wars just been caused by the interactions of quintillions (to underestimate the truth absurdly) of infinitesimal particles according to the laws of physics? Does fire cause smoke? Do cars cause smog? Do drones cause boredom? Do jokes cause laughter? Do smiles cause swoons? Does love cause marriage? Or, in the end, are there just myriads of particles pushing each other around according to the laws of physics — leaving, in the end, no room for selves or souls, dreads or dreams, love or marriage, smiles or swoons, jokes or laughter, drones or boredom, cars or smog, or even smoke or fire?