The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning


by Daniel Bor


  DESCARTES AND THE MIND-BODY DUALITY

  The seventeenth-century philosopher René Descartes is the landmark father figure of the philosophy of mind. In his most famous work, Meditations on First Philosophy, Descartes contemplated the possibility that a “malicious demon of the utmost power and cunning” had deceived him about the existence of all external things, including his own body. This is essentially also the premise of the film The Matrix, in which the hero, Neo, goes about his daily life believing that he is living in a twentieth-century U.S. city, only to be woken up from this extended dream to realize that it was a simulated reality that devious computers had been generating and wiring into his brain.

  Descartes recognized, though, that no matter how malicious this demon was, there was one realm of thought that was certain, impervious to the demon’s illusions: his own existence as a thinking being. Like Neo, you may believe in blissful ignorance that you have the same body you’ve always had, as that is what the computers feed into your senses. But the one act beyond the power of these evil computers is to fool you about your own existence. There are two options: If you do believe you exist, then logically you must exist—at least as some kind of conscious being—since the act of believing requires the existence of a conscious being to believe it. Alternatively, if you try, somehow, to believe you don’t exist, then the very act of doubting confirms your existence again, since doubt also requires a conscious being to perform the doubting, as it were. Therefore, just by the act of thinking (with doubt as one example), you know that there must be a conscious entity around, and you also know that it is you!

  In the Meditations, Descartes articulated this idea as: “I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.” But he put it more famously and succinctly in Discourse on the Method as “Cogito ergo sum” (I think, therefore I am).

  One of Descartes’ main arguments for justifying the mind-body duality was intimately bound to these views on doubt. The argument was deceptively simple and superficially persuasive: Because we can so effectively doubt the existence of our own bodies, but can never doubt the existence of our own minds, the mind is completely distinct from and independent of the body (a modern spin on this argument might substitute “brain” for “body”).

  The brilliant philosopher, mathematician, and logician Gottfried Wilhelm von Leibniz, who was born around the time of Descartes’ death, was quick to criticize Descartes’ argument. Leibniz pointed out that all Descartes had actually shown was that he could contemplate that his conscious mind was distinct from his body. He certainly hadn’t proven anything. This critique can be illustrated by a slight twist on a well-known example. Say I happen to be walking the streets of Metropolis and from a distance I see a tall, well-built man with thick, ugly glasses hurrying into a telephone cubicle in an alley. My friend tells me that he’s the Daily Planet reporter, Clark Kent. Suddenly, on the other side of the street, five gunmen descend on a security van, looking to steal hundreds of thousands of dollars in cash. I’m terrified and excited at the same time, and believe they’ll get away with it. Then I feel a momentary swirling wind, and miraculously, as if from nowhere, Superman flies past me toward the criminals. He has them disarmed and tied up in ropes in the blink of an eye. My friend looks at me, his head cocked sideways, and asks: “Do you . . . do you think there’s any chance that this Clark Kent guy is the same person as Superman?” I laugh at how ludicrous that suggestion is, and quickly retort: “Listen, I definitely know who Superman is—I’ve seen him fly around loads of times. I’ve even interviewed him twice for my magazine. I barely know this Clark Kent guy, and besides, from my fuzzy glimpse of him a minute ago, he even looks different because he wears glasses. Therefore I’m certain that Superman and Clark Kent are two completely distinct people.” My friend nods, impressed at my watertight logic, and I feel a warm, comforting sense of smugness at yet another example of my superior intellect.

  The Superman observer is making two mistakes here: First, he’s assuming that his own level of knowledge of Superman/Clark Kent is an actual characteristic of Superman/Clark Kent; second, he’s assuming that superficial differences between Superman and Clark Kent must mean they are different people rather than two versions of the same person. But if he were a decent, professional reporter, seeking out definitive evidence like a bloodhound, his conclusions on Clark and Superman would be very different. If he studied Clark’s bizarrely frequent visits to telephone booths and knew that Superman always popped out of the very same booths moments later, if he found out that Clark always wore a Superman costume under his normal clothes, if he saw what Clark looked like without glasses, and so on—it would be blindingly obvious they were actually one and the same person.

  Descartes’ argument, essentially based on his ignorance of the brain, is underpinned by a similar unwillingness to explore the evidence. To comprehensively test his claim, just as the bystander should be studying every detail of both Clark Kent and Superman, we would need to know everything about our brains and our awareness. If there are instances when consciousness radically alters, but brain activity is unchanged, then we can start talking about independence of brain and mind—but not until. As it is, all brain-scanning experiments to date have shown that even the subtlest of changes in consciousness are clearly marked by alterations in brain activity. The alternative perspective, then, that consciousness is a physical, brain-based process, is eminently more plausible than the belief that consciousness is independent of the physical world.

  But Descartes also claimed that our minds are necessarily private, subjective, and unobservable by others. It’s worth lingering on this point. When I look out at the vast ocean, hear the pulsing murmur of the waves, and feel a sense of peace and contentment, no one else will ever experience precisely what I experience at that moment. In an absolute sense, it seems that I really am trapped, alone, inside my head, and there’s nothing science can do to change this. To extend Descartes’ assertion in the modern world, brain scanners may capture an approximation of my consciousness, but could they ever, even in principle, enable someone else perfectly to experience what I just experienced? This question reflects the abiding mystery of subjectivity, which remains the inspiration for modern attempts to demonstrate the independence of mind and brain.

  Finally, it is worth pointing out that Descartes, like everyone else who thought about such things until a century or so ago, assumed that the mental realm simply meant everything he was conscious of. Descartes would probably have viewed the concept of unconscious thoughts as an oxymoron, and certainly would never have accepted that our unconscious minds could influence our consciousness, as we all now largely assume. For the record, whenever I use the term “mind” from now on, or discuss “mental states,” I’m including all cognitive processing, conscious or not.

  Although Descartes had contemporary critics who essentially believed that the mind was the physical brain (most notably the English philosopher Thomas Hobbes), Descartes’ mind-brain duality was largely accepted, even by philosophers, for centuries.

  MODERNITY ARRIVES AND GHOSTS LEAVE

  Despite Descartes’ prominence, beginning in the mid-nineteenth century within the medical and fledgling neuroscience communities there was mounting evidence that a dualistic position was simply untenable. The most famous neurological case of this period was that of Phineas Gage. Gage was a foreman working in railroad construction in Vermont. One day, while he was helping to clear a volume of rock using explosives inserted into a hole, the gunpowder exploded prematurely. The tamping iron he was using shot out of the hole like a bullet. The frightening piece of metal was 3 centimeters wide, over a meter long, and weighed roughly 6 kilograms. It penetrated his left cheek, shattering the bone, then shot through his left frontal lobe, probably destroying much of the front part of the brain (see Figure 1). Finally it shot out through the top of his skull, eventually landing 25 meters away. Although obviously in shock and losing blood, amazingly, Gage remained conscious at the time and could speak within a few minutes. He was even able to walk unaided, and he managed to sit upright on the short cart journey to the physician. He eventually made a remarkable recovery, with one prominent exception.

  A landmark paper by his physician, John Harlow, described how, before the accident, Gage was well balanced, smart, sociable, responsible, and respected. Afterward, however, Gage became immature, regularly profane, disrespectful, capricious, and seemingly unable to follow through with most of the dizzying numbers of plans he kept conceiving. In short, his previous friends believed he was “no longer Gage.” Shockwaves rippled through society with the story that a man’s personality could be so radically altered by brain damage. Although there was considerable controversy about the behavioral details of this case, in the decades to follow, dozens of similar instances were reported where brain damage led to changes in personality or intellect. Science was slowly beginning to turn the conceptual tanker around toward the idea that the mind simply was the brain.

  It wasn’t until the middle of the twentieth century, though, that the most famous and biting attack on Descartes’ dualistic position was mounted. It came from an English philosopher named Gilbert Ryle. In his seminal 1949 work, The Concept of Mind, Ryle described Descartes’ position as a “philosopher’s myth.” Ryle pointed out that Descartes, in positing the independence of mind and body, was making a basic “category mistake.” As an example of a category mistake, let’s say a foreign friend visits me in Cambridge, and wants a tour of the university. I show her St. John’s College, where I was based as a PhD student, with its beautiful covered Bridge of Sighs over the boats punting on the river Cam, and its majestic New Court, which rather resembles a wedding cake. I then take her through various other departments and colleges, but after a while she grows impatient and asks, “Okay okay, I’ve seen where members of the college live, where scientists carry out research and all that, but I thought you were going to show me Cambridge University!” What my friend fails to understand is that all these buildings and people make up the university. They are not independent of it in any way, but are subcategories of a larger category called Cambridge University.

  For Ryle, Descartes was making exactly the same kind of category mistake with the mind and brain. Descartes was perhaps aware that various brain regions contributed to sensory processing, but he nevertheless believed that the brain had nothing to do with our mental life. Instead, as Ryle put it, Descartes believed in “the dogma of the Ghost in the Machine.” For Descartes, this mind of mine is some mysterious ghost living inside the biological machine that is my brain. But there is no need for a ghost in the machine. The machine of the brain is all that’s required for a conscious mind to exist.

  Before I leave discussing Ryle, I would like to return to a subtlety of the analogy, because I think it highlights something very interesting. Although, of course, my foreign friend was wrong to assume that the colleges and research departments were irrelevant to the concept of Cambridge University, perhaps she wasn’t entirely wrong. If I said “Cambridge University equals every building, student, and staff member now or ever attached to Cambridge University,” some people might argue that I was being too reductionist or unromantic in turning an eight-hundred-year-old institution into a set of components. There is meaning living within the phrase “Cambridge University” that cannot be entirely captured by a mere list of its component parts. There is the image that the population has of the university, which, for instance, is exploited in literature and films, and which carries with it perhaps a traditional, formerly aristocratic aura. The students themselves interact in ways that make the university embody something very academically minded, perhaps even a little nerdy, which is an atmosphere that cannot easily be explained by examining the university’s parts. In short, there are emergent properties to the concept “Cambridge University” that will never really be captured by a shopping list of items that Ryle would class as subcategories of the university.

  In many domains, surprisingly sophisticated forms of knowledge can materialize out of the intricate combination of ideas at a lower level, and these can very much seem to be greater than the sum of those subterranean parts. One clear example of this is money. Studying the atomic properties of credit cards and coins will not generate very much understanding about how the world’s financial system works. Instead, many economic rules can be seen as emerging from the social interaction of people wishing to buy and sell goods and services. They can scale up to fiendishly complicated levels, with few people being able to predict the 2008 credit crunch, and hardly anyone understanding exactly why it happened. Another fascinating example is that of ants. A single ant is a very stupid animal indeed, capable of only the most rudimentary learning. One might assume that if you have a colony of stupid ants, all you have is millions of stupid creatures. But something almost magical happens when the ants interact (largely by chemical signaling)—they develop incredibly complex behaviors. These include farming (humans weren’t the first species to farm, by about 50 million years), complex nest-building, an intricate division of labor among seemingly identical animals, and about the only known non-mammal example of teaching, as one ant guides another to a food source, even pausing every so often to let its student catch up. This has led some to suggest that an ant colony should not be seen as a collection of ants, but more as a superorganism. Perhaps our global community of humans, now so tightly interconnected via the Internet, is another such superorganism.

  Emergent properties are not the sole realm of animate objects. The laws of gravity are relatively simple, and yet the stunning spiral shape of our Milky Way arises out of them. The equations for fractals tend also to be just a handful of characters in length, despite generating shapes of seemingly infinite and quite unexpected complexity. (See Figure 2 for various illustrations of emergentism.)
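  The fractal point can be made concrete. The best-known example, not named in the text but consistent with it, is the Mandelbrot set, whose generating rule is simply “repeat z → z² + c.” A minimal sketch, with rendering choices that are my own illustration:

```python
# The Mandelbrot set: iterate z -> z*z + c starting from z = 0, and ask
# whether the orbit stays bounded. The rule is a handful of characters,
# yet the boundary it traces is endlessly intricate.

def escapes(c: complex, max_iter: int = 50) -> bool:
    """Return True if the orbit of 0 under z -> z*z + c escapes to infinity."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2, the orbit is guaranteed to diverge
            return True
    return False

# Coarse ASCII rendering: '#' marks points whose orbits stay bounded.
for im in range(12, -13, -2):
    row = ""
    for re in range(-40, 21):
        row += "#" if not escapes(complex(re / 20, im / 10)) else " "
    print(row)
```

  Running this prints a rough silhouette of the familiar cardioid-and-bulb shape, an emergent structure nowhere visible in the one-line rule that produces it.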

  When you have an object such as the human brain, which is the most complex lump of matter in the known universe, there is a good chance that various emergent properties will materialize there as well. I’m not for one moment proposing Descartes’ immaterial ghost, and shudder at its unscientific and religious connotations. But within a scientific, physical-based framework, I endorse and will discuss the idea that the brain is much more than the sum of its parts, and that consciousness may be its most shining, fascinating product.

  THE IMPENETRABILITY OF “WHAT IT IS LIKE”

  By the middle of the twentieth century, philosophy had largely caught up with neuroscience in believing that the mind was equal to the brain, and that any thought or feeling was really a collection of brain cells firing away. In fact, with the advent of computers, and the acknowledgment that we were nothing more than biological machines, the equation of mind with brain soon mutated into the equation of mind with computer program. We happened, by the quirks of evolution, to have been lumbered with this particularly wrinkly, jelly-like computer to substantiate the program of our minds, but it didn’t have to be this way; we could, in principle, have just the same thoughts with a “brain” made of silicon chips.

  This theory of “mind as a computer, which accidentally equals brain,” is the most widely discussed philosophical position about the mind held today. It is also the view that almost all neuroscientists assume by default. But that hasn’t stopped some modern philosophers from attacking it from almost every angle.

  The first doubt comes from the suggestion that the mind can be entirely reduced to the brain (or computer, or whatever other physical object one would care to mention). Descartes opposed the possibility of this reduction, assuming that there was something intrinsically subjective and nonphysical about the mental world. In 1974, Thomas Nagel, in one of the most famous philosophy papers of the past hundred years (“What Is It Like to Be a Bat?”), echoed Descartes’ position in modern form. Nagel accepted that thoughts could be characterized according to their ability to cause other thoughts and behavior, and he was certainly not opposed outright to the idea that minds were simply brains. But he did think there could be a problem with this view. If you and I hear Shostakovich’s Tenth Symphony, I can make a great stab at imagining what it was like for you to hear the music. Of course, I may be entirely wrong in my imagination, but I can at least generate a plausible guess as to what you experienced. I might even have a good go at imagining what my cat experiences when she hears the doorbell ring. We have similar ears mechanistically, and our brains’ primary hearing centers also aren’t entirely dissimilar. But if I try to imagine what a bat “see/hears” when it uses echolocation to navigate, then I have no idea where to begin. Assuming that a bat is conscious, then our two consciousnesses seem totally incompatible. I can gain absolutely no knowledge about bat consciousness—at least not in the realm of echolocation. And if I can’t even imagine what it is like to sense with echolocation, what hope is there that I can get a foothold using any of science’s tools?

  This “what is it like” aspect of thought, Nagel claimed, was the essence of consciousness, and it posed a problem, in particular, for those wishing to reduce consciousness to a physical process in the brain. Nagel believed that if some animal was conscious, then it had to have a “what is it like?” aspect to it. Nagel did not state that it was impossible for us to understand what it was like to be a bat, although he did suggest that this barrier was a fundamental problem for science, one that it had to face with radically different, novel approaches.

  Australian philosopher Frank Jackson took this position one step further and argued that it actually is impossible for science to explain mental states using only physical processes. His argument revolved around a thought experiment, which went something like the following.

 
