by Daniel Bor
Imagine that the year is 2412 and evil philosophers control the planet. One small coven of hellish thinkers calling themselves the Descartes Brigade hatches an idea for a cruel but potent experiment. The world’s most celebrated scientist couple have just had a beautiful daughter called Mary, but at the precise moment she’s born, the Descartes Brigade kidnaps her and locks her up in a windowless black and white room. They bleach her skin white, and even cosmetically alter her irises to be black, along with her hair. They feed her black and white foods through a small black hatch in the white wall, and as she grows up they entertain and teach her on black and white laptops and monitors. The physical sciences have been completed by this stage, and it is possible to know everything about physics, chemistry, and biology, not least the brain sciences. Mary has little else to do, and anyway, like her parents, she has an aptitude for and love of science, so she takes it upon herself to learn this completed science. By the age of thirty, after decades of diligent study, she knows absolutely everything about the physical world (stupendously implausible, of course, but let’s for the moment assume it’s possible), from the nature of all the subatomic particles to the activity of every brain cell that represents color vision in humans. The members of the Descartes Brigade at this stage know that their plan is coming to fruition and, finally, they unlock the door to Mary’s prison, letting her wander outside for the first time in her life. Dazzled by what she sees, overwhelmed, overjoyed, she stumbles into a nearby garden and bends down to stare at a red rose. As she views the scarlet color she exclaims, in shock: “Before being released I knew every physical detail of how the brain generates consciousness, but now I know something new: I know what it is like to see red. 
This extra knowledge is something that the physical sciences could never capture. Therefore there is something nonphysical about consciousness!” At this stage, she collapses and suffers a terrible nervous breakdown, but the philosophers of the Descartes Brigade callously rejoice. They believe that their poor guinea pig, Mary, has helped them show that consciousness is at least partly nonphysical. The evil philosophy gang members end their pamphlet by boldly proclaiming that Descartes was right all along!
Although this argument has indeed been influential, it is not as watertight as it might at first appear. In some ways it suffers from the same problems as Descartes’ argument. Descartes made the mistake of overreaching with his level of knowledge (because he didn’t know the existence of his body with as much certainty as he knew the existence of his mind, he leaped to the conclusion that his body was distinct from his mind, even though he never actually established that this was the case). Here Jackson was similarly overreaching by making strong assumptions about what a complete physical understanding of the universe would entail. Lord Kelvin, one of the greatest physicists of the nineteenth century, is reputed to have proclaimed, as recently as 1900, that “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” The timing of this claim was somewhat comical: That same year, Max Planck initiated one physics revolution by introducing quantum mechanics to the world. Then, five years later, Albert Einstein followed with a set of his own revolutionary theories, including special relativity and the equivalence of energy and mass.
We have absolutely no idea what this “completed physics” will look like in four hundred years’ time. In fact, startling revolutions could turn up at any moment to thoroughly embarrass anyone clinging to scientific dogmatism. A twenty-fifth-century portrait of the universe may well be far more bizarre than superstring theory or quantum mechanics, and it would be rather pointless to speculate on the details. It would also be foolhardy in the extreme to assume with certainty, as Jackson’s argument above seems to do, that such future physical scientific wisdom could not include a complete explanation of consciousness.
Indeed, this is not the only argument that can be raised against Jackson’s thought experiment. Suppose that Mary, bunkered down in her black and white room, actually has a wistful fascination for the flora and fauna that lie just outside her philosophers’ jail. So when she’s not studying the physical properties of neuronal circuitry, her hobby involves learning about the garden just outside. She hacks into a robot in the neighborhood, which happens to have some decent cameras for eyes. She steers it to her nearby garden and by this means finds out exactly what wavelengths of light are being emitted from the rose bush over the next few days. She therefore discovers what shade of red the rose is. Just to be certain, she practices more hacking skills on some nearby twenty-fifth-century flying brain scanners, which just happen to be a popular tool of the Big Brother government of the time. By this means, she sees the brain activity of all the passersby as they glance at the roses, and she infers easily that this activity corresponds to the experience of seeing red in every case. So before Mary ever leaves the room, she has incontrovertible, highly detailed knowledge from a range of sources that the roses in the garden near her room are indeed red. It’s important to emphasize, therefore, that when Mary finally is released from her room, she doesn’t necessarily suddenly discover that the rose is red—she could already know this. All she actually knows now that she did not know before is “what it is like to see” red. And this is a very strange kind of knowledge indeed. So what has she actually learned, if anything? Some philosophers suspect that she has not learned any new information whatsoever.
Because of this suspicion, a few critics of Jackson’s thought experiment have argued that “knowing what it is like to see color” is really an ability to gather knowledge rather than knowledge itself. (In fact, Frank Jackson himself should be included on this list, because he has since rejected his former argument, largely in favor of this idea.) Our color vision allows us to learn many useful things, such as when fruits are ripe, or, in more modern times, when a traffic light indicates that we should stop the car. But this information relates to knowing that something is red. Knowing what it is like to see red is more abstract, and perhaps could best be described as an ability to recognize red if we came across it in the future; in other words, a rather specific ability to gather color information directly, without recourse to external machines such as cameras.
Nowadays, we have multiple ways in which to acquire color information using natural and artificial technologies, such as our eyes or a digital camera. But whether the source is our eyes or some fabulous feat of modern technology, all that really matters is that the information is acquired, rather than the way it is acquired. The information, the knowing that the rose is red, is independent of the tool by which that information is acquired. By contrast, “knowing what it is like” is dependent on the tool used to gather the information—in this case, Mary’s eyes. Consequently, it isn’t “knowing” at all—it is merely the ability to use the specific tool of our eyes in order to acquire knowledge. So it’s at least plausible that Mary didn’t, after all, know anything new when she was finally let out of her monochrome prison and saw the red rose. Therefore, consciousness can still be a purely physical event.
CAN A PROGRAM HAVE FEELINGS?
The other aspect of the standard model of consciousness is that it’s not only a physical process, carried out by the brain, but also a computational one. Modern philosophers have taken issue with this stance as well, attempting to argue that there are unique characteristics of consciousness in its natural biological form which mean it could never be converted into some silicon equivalent.
One prominent attack on the computational view of consciousness revisits the “what is it like” aspect of awareness, which includes all of our emotions and senses. The argument claims that the existence of this vital aspect of experience proves that consciousness cannot be captured by a computer. You and I both know that strawberries are red and blueberries blue, but what if my inner experiences of reds and blues are your experiences of blues and reds? Arguments along these lines assume it’s quite conceivable that we would neither behave nor think differently when faced with a fruit salad. Consequently, the software equivalent of our minds could pick any old values to represent the reds and blues—or even entirely omit this bothersome bit of code, and by extension the rest of our color vision and all other senses and emotions—without weakening the fidelity of the program. But if this defining facet of our consciousness is a mere irrelevance to its computational equivalent, then that is a step too far, and computers simply cannot represent consciousness.
However, when the scientific details are taken into account, there is something ridiculous in the idea that you can simply swap red with blue, and leave all thoughts and behavior otherwise unchanged. Our perception of something as “red” is generated not just from the wavelength our eyes pick up, but also the vividness of the color, its comparison with the surrounding colors, its brightness, the meanings and categories of colors, and so on—and all this interacts with our other senses and feelings in an incredibly complex network of information (just think of “the blues” as a form of music depicting a class of emotion). All this perfectly mirrors the architecture of the brain, which is an inordinately dense web of connectivity, such that changing one region may modify the function of many others.
Consequently, my red cannot be your blue because there is no single, independent class of experience as “red.” The truth, instead, is that all examples of “what it is like” that you care to pick, from “burgundy” to “melancholy,” represent rich information about ourselves and the outside world unique to this moment, crucially not in isolation, but as a network of links between many strands of knowledge, and in comparison with all the other myriad forms of experience we are capable of. In this way, far from being irrelevant, our senses and feelings, although undeniably complex, serve a vital computational role in helping us understand and interact with the world.3
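The gist of this counterargument can be made concrete with a toy sketch. In the fragment below, every concept and association is invented purely for illustration: each “experience” is defined only by its web of links to other concepts, so swapping the labels “red” and “blue” produces a detectably different network rather than an unobservable inversion.

```python
# Toy sketch: an "experience" defined only by its associations.
# All concepts and links here are invented for illustration.
associations = {
    "red": {"ripe fruit", "stop signal", "warmth", "roses"},
    "blue": {"sky", "cold", "melancholy", "the blues (music)"},
}

def swap(network, a, b):
    """Swap the labels a and b while each label keeps its old links."""
    swapped = dict(network)
    swapped[a], swapped[b] = network[b], network[a]
    return swapped

inverted = swap(associations, "red", "blue")

# After the swap, "red" no longer predicts ripe fruit or stop signals,
# so the inversion shows up in thought and behavior rather than
# remaining private and undetectable.
print(associations["red"] == inverted["red"])  # False
```

The point of the sketch is only that “red” has no free-standing value to swap: relabel it, and every relation it participates in changes with it.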
CAN A LAPTOP REALLY UNDERSTAND CHINESE?
The most famous defense of the idea that there is something special and nonprogrammable about our biological form of consciousness is the Chinese Room argument, first proposed by John Searle in 1980. The main purpose of this thought experiment was to demonstrate the impenetrability not of feeling, but of meaning. Searle was keen to prove that a human brain could not be reduced to a set of computer instructions or rules.
To describe the thought experiment, we need to turn to another gang of philosophers from the year 2412, Turing’s Nemesis. Restless and rebellious, these philosophers are prowling the streets of New York with an aggressive itch for a dialectic fight. Soon, they come across a group of Chinese tourists and decide to play a mischievous trick on them. They show the Chinese group a plain white room, which is entirely empty, except for a table, a chair, blank pieces of paper, and a pencil. They allow the Chinese to inspect every nook and cranny of the simple space. The only features to note, apart from a door and a naked lightbulb in the ceiling, are two letterboxes on either side of the windowless room, linking it with the outside world. One box is labeled IN and the other OUT.
The ringleader of Turing’s Nemesis, a thoroughly devious person, melodramatically explains to the Chinese group that he reveres their culture above all others and believes everyone else in the world does, too. In fact, he’s willing to bet a month’s wages that these Chinese people can pick any random sucker on the street, place him in this room, and that person will show that he worships their culture as well, because he will be able to fluently speak their language via the interchange of words on paper through the letterboxes. The exchanges will take place with the Chinese people on the outside and the random subject inside the room. The Chinese are quick to take up this bet (even in 2412, although quite a few non-Chinese people speak Mandarin, only a small proportion can write in the language).
The Chinese tourists take their time and pick a young Caucasian man. He does not seem particularly bright. He looks a little bewildered as they stop him on the street and pull him over. The ringleader of the philosophy gang accepts the man and helps him into the room. Out of sight, though, just as the ringleader shuts the door, he hands the man a thick book. He whispers to him that if he follows the simple guidelines in the book, there’s a week’s worth of wages in it for just a few hours of work. This book is effectively a series of conversion tables, with clear instructions for how to turn any combination of Chinese characters into another set of Chinese characters.
The man in the room then spends the next few hours accepting pieces of paper with Chinese writing through the IN box. The paper has fresh Chinese sentences from a member of the Chinese group outside. Each time the man trapped in the room receives a piece of paper, he looks up the squiggles in the book, and then converts these squiggles into other squiggles, according to the rules of the book. He then puts what he’s written into the OUT box—as instructed. He is so ignorant that he doesn’t even know he’s dealing in Chinese characters; nevertheless, every time he sends them his new piece of paper, the Chinese are amazed that the answer is articulate and grammatically perfect, as if he were a native Mandarin speaker. Though the young man does not know it, he is sending back entirely coherent, even erudite, answers to their questions. It appears to the Chinese that they are having a conversation with him. The Chinese observe in virtual shock that he seems, totally contrary to first impressions, rather charming and intelligent. Amazed and impressed, the Chinese reluctantly pay the bet and walk away, at least able to take home the consolation of a glow of pride at the universal popularity of their culture.
With the Chinese group out of the way, the Turing’s Nemesis philosophers decide to keep their human guinea pig locked in the room a couple of hours longer. One of the Turing’s Nemesis members does in fact speak and read Chinese, and he translates each of the paper questions originally asked of the man in the room into English. He sends each question into the room in turn. The written answers, this time in English, come quite a bit faster. Although they aren’t nearly as well articulated as they were in Mandarin, they are somewhat similar to the Mandarin responses he had copied from the book. This time, however, the man actually understands everything that’s asked of him, and understands every answer he gives.
Now, claims the Chinese Room argument, if the mind were merely a program, with all its “if this, then that” rules and whatnot, it could be represented by this special book. The book contains all the rules of how a human would understand and speak Mandarin, as if a real person were in the room. But absolutely nowhere in this special room is there consciousness or even meaning, at least where Mandarin is concerned. The main controller in the room, the young man, has absolutely no understanding of Chinese—he’s just manipulating symbols according to rules. And the book itself cannot be said to be conscious—it’s only a book after all, and without someone to carry out the rules and words in the book, how can the book have any meaning? Imagine if almost all life on the planet went extinct, but somehow this book survived. On its own, without anyone to read it, it’s a meaningless physical artifact.
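The rule book at the heart of the argument can be caricatured as a lookup table. The sketch below is a deliberately minimal toy (the entries are invented, and a real “book” would need rules covering every possible conversation): the controller blindly maps incoming symbol strings to outgoing ones, consulting no meaning anywhere.

```python
# Minimal sketch of the "book" as a pure symbol-to-symbol lookup table.
# The entries are invented placeholders; the controller needs no
# understanding of what any of the characters mean.
rule_book = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你喜欢纽约吗？": "是的，纽约很美。",  # "Do you like New York?" -> "Yes, it's beautiful."
}

def controller(incoming: str) -> str:
    """Blindly convert squiggles into squiggles, as the man in the room does."""
    return rule_book.get(incoming, "？")  # no meaning is consulted anywhere

print(controller("你好吗？"))  # prints "我很好，谢谢。"
```

Whether anything in this exchange amounts to understanding, rather than mere table lookup, is exactly what the thought experiment asks.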
The point of all this is that, when the rules of the book are used to write Chinese, there is no consciousness or meaning in the room, but when English is written later on, and a human is involved, there is consciousness and meaning. The difference, according to Searle, is that the book is a collection of rules, but there is something greater in the human that gives us consciousness and meaning. Therefore meaning, and ultimately consciousness, are not simply programs or sets of rules—something more is required, something mysteriously unique to our organic brains, which mere silicon chips could never capture. And so no computer will ever have the capacity for consciousness and true meaning—only brains are capable of this, not as biological computers, with our minds as the software, but something altogether more alien. Searle summarized this argument by stating that “syntax is not semantics.”
This argument—like all of the others I’ve described—may appear to be an unbreakable diamond of deductive reasoning, but it is in fact merely an appeal to our intuitions. Searle wants, even begs, us to be dismissive of the idea that some small book of rules could contain meaning and awareness. He enhances his plea by including the controller in the room, who is blindly manipulating the written characters even though he is otherwise entirely conscious. Perhaps most of us would indeed agree that intuitively there is no meaning or awareness of Mandarin anywhere in that room. But that’s our gut feeling, not anything concrete or convincing.
When you start examining the details, however, you find the analogy has flaws. It turns out that there are two tricks to this Chinese Room thought experiment that Searle has used, like a good magician, to lead our attention away from his sleight of hand.
The first, more obviously misleading feature is the fact that a fully aware man is in the room. He understands the English instructions and is conscious of everything else around him, but he is painfully ignorant of the critical details of the thought experiment—namely, the meaning of the Chinese characters he is receiving and posting. So we center our attention on the man’s ignorance, and automatically extend this specific void of awareness to the whole room. In actual fact, the man is an irrelevance to the question. He is performing what in a brain would not necessarily be the conscious roles anyway—that of the first stages of sensory input and the last aspects of motor output. If there is any understanding or meaning of the Mandarin characters to be found in that room, it is in the rules of that book and not in the man. The man could easily be replaced by some utterly dumb, definitely nonconscious robot that feeds the input to the book, or a computerized equivalent of it, and takes the output to the OUT slot. So let’s leave the human in the room out of the argument and move on to the second trick.
And to understand the second trick, we must pose a fundamental question: Does that book understand Mandarin Chinese or not?