Now you might say that this whole book buys into the cold, glazed-eyes, zombie vision of human beings, since it posits that the “I” is, when all is said and done, an illusion, a sleight of mind, a trick that a brain plays on itself, a hallucination hallucinated by a hallucination. That would mean that we all are unconscious but we all believe we are conscious and we all act conscious. All right, fine. I agree that that’s a fair characterization of my views. But the swarm of zombie-fearing philosophers all want our inner existence to be richer than that. They claim that they can easily conceive of a cold, icy universe populated solely by nightmarishly hollow zombies, yet not distinguishable in any objective way from our own universe; at the same time, they insist that such is not the universe we live in. According to them, we humans don’t just act conscious or claim to be conscious; we truly are conscious, and that’s another matter entirely. Therefore Hofstadter and Parfit are wrong, and David Chalmers is right.
Well, I think Dan Dennett’s criticism of such philosophers hits the nail on the head. Dan asserts that these thinkers, despite their solemn promises, are not conceiving of a world identical to ours but populated by zombies. They don’t even seem to try very hard to do so. They are like SL #642, who, when imagining what a strange loop would say on looking at a brilliant purple flower, chose the dehumanizing verb “drone” to describe how it would talk, and likened its voice to a mechanical-sounding recorded voice in a hated phone menu tree. SL #642 has a stereotype of a strange loop as soul-less, and that prejudice rides roughshod over the image of perfectly natural, normal human behavior. Likewise, philosophers who fear zombies fear them because they fear the mechanical drone, the glazed eyes, and the frigid inhumanity that would surely pervade a world of mere zombies — even if, only a moment before, they signed off on the idea that such a world would be indistinguishable from our world.
Consciousness Is Not a Power Moonroof
In debates about consciousness, one of the most frequently asked questions goes something like this: “What is it about consciousness that helps us survive? Why couldn’t we have had all this cognitive apparatus but simply been machines that don’t feel anything or have any experience?” As I hear it, this question is basically asking, “Why did consciousness get added on to brains that reached a certain level of complexity? Why was consciousness thrown into the bargain as a kind of bonus? What extra evolutionary good does the possession of consciousness contribute, if any?”
To ask this question is to make the tacit assumption that there could be brains of any desired level of complexity that are not conscious. It is to buy into the distinction between Machines Q and Z sitting side by side on the old oaken table in Room 641, carrying out identical operations but one of them doing so with feeling and the other doing so without feeling. It assumes that consciousness is some kind of orderable “extra feature” that some models, even the fanciest ones, might or might not have, much as a fancy car can be ordered with or without a DVD player or a power moonroof.
But consciousness is not a power moonroof (you can quote me on that). Consciousness is not an optional feature that one can order independently of how the brain is built. You cannot order a car with a two-cylinder motor and then tell the dealer, “Also, please throw in Racecar Power® for me.” (To be sure, nothing will keep you from placing such an order, but don’t hold your breath for it to arrive.) Nor does it make sense to order a car with a hot sixteen-cylinder motor and then to ask, “Excuse me, but how much more would I have to throw in if I also want to get Racecar Power®?”
Like my fatuous notion of optional “Racecar Power®”, which in reality is nothing but the upper end of a continuous spectrum of horsepower levels that engines automatically possess as a result of their design, consciousness is nothing but the upper end of a spectrum of self-perception levels that brains automatically possess as a result of their design. Fancy 100-huneker-and-higher racecar brains like yours and mine have a lot of self-perception and hence a lot of consciousness, while very primitive wind-up rubber-band brains like those of mosquitoes have essentially none of it, and lastly, middle-level brains, with just a handful of hunekers (like that of a two-year-old, or a pet cat or dog), come with a modicum of it.
Consciousness is not an add-on option when one has a 100-huneker brain; it is an inevitable emergent consequence of the fact that the system has a sufficiently sophisticated repertoire of categories. Like Gödel’s strange loop, which arises automatically in any sufficiently powerful formal system of number theory, the strange loop of selfhood will automatically arise in any sufficiently sophisticated repertoire of categories, and once you’ve got self, you’ve got consciousness. Élan mental is not needed.
Liphosophy
Philosophers who believe that consciousness comes from something over and above physical law are dualists. They believe we inhabit a world like that of magical realism, in which there are two types of entities: magical entities, which possess élan mental, and ordinary entities, which lack it. More specifically, a magical entity has a nonphysical soul, which is to say, it is imbued with exactly one “dollop of consciousness” (a dollop being the standard unit of élan mental), while ordinary entities have no such dollop. (Dave Chalmers believes in two types of universe rather than two types of entity in a single universe, but to me it’s a similar dichotomy, since we can consider various universes to be entities inside a greater “meta-verse”.) Now I should like to be very sure, dear reader, that you and I are on the same page about this dichotomy between magical and ordinary entities, so to make it maximally clear, I shall now parody it, albeit ever so gently.
Imagine a philosophical school called “liphosophy” whose disciples, known as “liphosophers”, believe in an elusive — in fact, undetectable — and yet terribly important nonphysical quality called Leafpilishness (always with a capital “L”) and who also believe that there are certain special entities in our universe that are imbued with this happy quality. Now, not too surprisingly, the entities thus blessed are what you and I would tend to call “leaf piles” (with all the blurriness that any such phrase entails). If you or I caught a glimpse of such a thing and were in the right mood, we might exclaim, “Well, what do you know — a leaf pile!” Such an enthusiastic outburst would more than suffice for you and me, I suspect. We would not be likely to dwell much further on the situation.
But for a liphosopher, it would lead to the further thought, “Aha! So there’s another one of those rare entities imbued with one dollop of Leafpilishness, that mystical, nonphysical, other-worldly, but very real aura that doesn’t ever attach itself to haystacks, reams of paper, or portions of French fries, but only to piles of leaves! If it weren’t for Leafpilishness, a leaf pile would be nothing but a motley heap of tree debris, but thanks to Leafpilishness, all such motley heaps become Leafpilish! And since each dollop of Leafpilishness is different from every other one, that means that each leaf pile on Earth is imbued with a totally unique identity! What an amazing and profound phenomenon is Leafpilishness!”
No matter what your opinion is on consciousness, reader, I suspect you would scratch your head at the tenets of liphosophy. It would be unnatural if you didn’t wonder, “What is this nutty Capitalized Essence all about? What follows from having this invisible, undetectable aura?” You would also be likely to wonder, “Who or what agent in nature decides which entities in the physical world will receive dollops of Leafpilishness?”
Such musings might lead you to posing other hard questions, such as: What exactly constitutes a leaf pile? How many leaves, and of what size, does it take to make a leaf pile? Which leaves belong to it, and which ones do not? Is “belonging” to a given leaf pile always a black-and-white matter? What about the air between the leaves? What about the dirt on a leaf? What if the leaves are dry, and a few (or half, or most) of them have been crushed into tiny pieces? What if there are two neighboring leaf piles that share a few leaves between them? Is it 100 percent clear at all times where the borders of a leaf pile are? In short, how does Mother Nature figure out in a perfectly black-and-white fashion what things are worthy recipients of dollops of Leafpilishness?
If you were in a yet more philosophical mood, you might ask yourself questions such as: What would happen if, through some freak accident or bizarre mistake, a dollop of Leafpilishness got attached to, say, a leaf pile with an ant crawling in it (that is, to the compound entity consisting of leaf pile plus ant)? Or to just the upper two-thirds of a leaf pile? Or to a pile of seaweed? Or to a child’s crumbly sand castle on the beach? Or to the San Francisco Zoo? Or to the Andromeda galaxy? Or to my dentist appointment next week? What would happen if two dollops of Leafpilishness accidentally got attached to just one leaf pile? (Or zero dollops, yielding a “zombie” leaf pile?) What dreadful or marvelous consequences would ensue?
I suspect, reader, that you would not take seriously a liphosopher who argued that Leafpilishness was a central and mystical aspect of the cosmos, that it transcended physical law, that items possessing Leafpilishness were inherently different from all other items in the universe, and that each and every leaf pile had a unique identity — thanks not to its unique internal composition but rather to the particular dollop of Leafpilishness that had been doled out to it from who knows where. I hope you would join me in saying, “Liphosophy is a motley belief pile!” and in paying it no heed.
Consciousness: A Capitalized Essence
So much for liphosophers. Now let’s turn to philosophers who see consciousness as an elusive — in fact, undetectable — and yet terribly important nonphysical aspect of the universe. In order to distinguish this notion of consciousness from the one I’ve been talking about all through this book, I’m going to capitalize it: “Consciousness”. Whenever you see this word capitalized, just think of the nonphysical essence called élan mental, or else make an analogy to Racecar Power® or Leafpilishness; either way, you won’t be far off.
At this point, I have to admit that I have a rather feeble imagination for Capitalized Essences. In trying to picture in my mind a physical object imbued with a nonphysical essence (such as Leafpilishness or élan mental), I inadvertently fall back on imagery derived from the purely physical world. Thus for me, the attempt to imagine a “dollop of Consciousness” or a “nonphysical soul” inevitably brings to mind a translucent, glowing swirl of haze floating within and perhaps a little bit around the physical object that it inhabits. Mind you, I know all too well that this is most wrong, since the phenomenon is, by definition, not a physical one. But as I said, my imagination is feeble, and I need this kind of physical crutch to help it out.
In any case, the idea of a sharp dichotomy between objects imbued with dollops of Consciousness and those deprived of such leads to all sorts of puzzling riddles, such as the following:
Which physical entities possess Consciousness, and which ones do not? Does a whole human body possess Consciousness? Or is it just the human’s brain that is Conscious? Or could it be that only a certain part of the brain is Conscious? What are the exact boundaries of a Conscious physical entity? What organizational or chemical property of a physical structure is it that graces it with the right to be invaded by a dollop of Consciousness?
What mechanism in nature makes the elusive elixir of Consciousness glom onto some physical entities and spurn others? What wondrous pattern-recognition algorithm does Consciousness possess so as to infallibly recognize just the proper kinds of physical objects that deserve it, so it can then bestow itself onto them?
How does Consciousness know to do this? Does it somehow go around the physical world in search of candidate objects to glom onto? Or does it shine a metaphorical flashlight down at the world and examine it piece by piece, occasionally saying to itself, “Aha! So there’s an entity that deserves one standard-size dollop of me!”
How does Consciousness get attached to some specific physical structure and not accidentally onto nearby pieces of matter? What kind of “glue” is used to make this attachment? Can the “glue” possibly wear out and the Consciousness accidentally fall off or transfer onto something else?
How is your Consciousness different from my Consciousness? Did our respective dollops come with different serial numbers or “flavors”, thus establishing the watertight breach between us? If your dollop of Consciousness had been attached to my brain and vice versa, would you be writing this and I reading it?
How does Consciousness coexist with physical law? That is, how does a dollop of Consciousness push material stuff around without coming into sharp conflict with the fact that physical law alone would suffice to determine the behavior of those things?
A Sliding Scale of Élan Mental
Now some readers might say that I am not giving élan mental (a.k.a. Consciousness) enough respect. They might say that there are gradations in the dispensation of this essence, so that some entities receive a good deal of it while others get rather little or none of it. It’s not just all-or-nothing; rather, the amount of Consciousness attached to any given physical structure is not precisely one dollop but can be any number of dollops (including fractional amounts). That’s progress!
And yet, for such readers, I would still have numerous questions, such as the following:
How is it determined exactly how many dollops (or fractional dollops) of Consciousness get attached to a given physical entity? Where are these dollops stored in the meantime? In other words, where is the Central Consciousness Bank?
Once a certain portion of Consciousness has been dished out to a recipient entity (Ronald Reagan, a chess-playing computer, a cockroach, a sperm, a sunflower, a thermostat, a leaf pile, a stone, the city of Cairo), is it a permanent allotment, or is the size of the allotment variable, depending on what physical events take place involving the recipient? If the recipient is in some way altered, does its allotment (or part of it) revert to the Central Consciousness Bank, or does it just float around forevermore, no longer attached to a physical anchor? And if it floats around unattached, does it retain traces of the recipient to which it was once attached?
What about people with Alzheimer’s disease and other forms of dementia — are they still “just as Conscious” as they always were, until the moment of their death? What makes something be “the same entity” over long periods of time, anyway? Who or what decreed that the changing pattern that over several decades was variously known as “Ronnie Reagan”, “Ronald Reagan”, “Governor Reagan”, “President Reagan”, and “Ex-President Reagan” was “one single entity”? And if it truly, objectively, indisputably was one single entity no matter how ephemeral and wispy it became, then mightn’t that entity still exist?
And what about Consciousness for fetuses (or for their growing brains, even when they consist of just two neurons)? What about for cows (or their brains)? What about for goldfish (or their brains)? What about for viruses?
As I hope these lists of enigmas make clear, the questions entailed by a Capitalized Essence called “Consciousness” or élan mental abound and multiply without end. Belief in dualism leads to a hopelessly vast and murky pit of mysteries.
Semantic Quibbling in Universe Z
There is one last matter I wish to deal with, and that has to do with Dave Chalmers’ famous zombie twin in Universe Z. Recall that this Dave sincerely believes what it is saying when it claims that it enjoys ice cream and purple flowers, but it is in fact telling falsities, since it enjoys nothing at all, since it feels nothing at all — no more than the gears in a Ferris wheel feel something as they mesh and churn. Well, what bothers me here is the uncritical willingness to say that this utterly feelingless Dave believes certain things, and that it even believes them sincerely. Isn’t sincere belief a variety of feeling? Do the gears in a Ferris wheel sincerely believe anything? I would hope you would say no. Does the float-ball in a flush toilet sincerely believe anything? Once again, I would hope you would say no.
So suppose we backed off on the sincerity bit, and merely said that Universe Z’s Dave believes the falsities that it is uttering about its enjoyment of this and that. Well, once again, could it not be argued that belief is a kind of feeling? I’m not going to make the argument here, because that’s not my point. My point is that, like so many distinctions in this complex world of ours, the apparent distinction between phenomena that do involve feelings and phenomena that do not is anything but black and white.
If I asked you to write down a list of terms that slide gradually from fully emotional and sentient to fully emotionless and unsentient, I think you could probably quite easily do so. In fact, let’s give it a quick try right here. Here are a few verbs that come to my mind, listed roughly in descending order of emotionality and sentience: agonize, exult, suffer, enjoy, desire, listen, hear, taste, perceive, notice, consider, reason, argue, claim, believe, remember, forget, know, calculate, utter, register, react, bounce, turn, move, stop. I won’t claim that my extremely short list of verbs is impeccably ordered; I simply threw it together in an attempt to show that there is unquestionably a spectrum, a set of shades of gray, concerning words that do and that do not suggest the presence of feelings behind the scenes. The tricky question then is: Which of these verbs (and comparable adjectives, adverbs, nouns, pronouns, etc.) would we be willing to apply to Dave’s zombie twin in Universe Z? Is there some precise cutoff line beyond which certain words are disallowed? Who would determine that cutoff line?