The Big Picture


by Sean M. Carroll

The Chinese Room thought experiment forces those of us who think consciousness is purely physical to confront what a dramatic claim we are making. Even if we don’t purport to have a fully fleshed-out understanding of consciousness, we should try to be clear about what kinds of things could possibly qualify as “conscious.” In the Chinese Room, that question is raised about a pile of papers and an instruction book, but really those are just colorful ways of talking about the information and processing inside a computer. If we believe “consciousness” is just a way of talking about underlying physical events, what kind of uncomfortable situations does that commit us to?

The one system we generally agree is conscious is a human being—mostly the brain, but we can include the rest of the body if you like. A human can be thought of as a configuration of several trillion cells. If the physical world is all there is, we have to think that consciousness results from the particular motions and interactions of all those cells, with one another, and with the outside world. It is not supposed to be the fact that cells are “cells” that matters, only how they interact with one another, the dynamic patterns they carve out in space as they move through time. That’s the consciousness version of multiple realizability, sometimes called substrate independence—many different substances could embody the patterns of conscious thought.

And if that’s true, then all kinds of things could be conscious.

Imagine that we take one neuron in your brain, and study what it does until we have it absolutely figured out. We know precisely what signals it will send out in response to any conceivable signals that might be coming in. Then, without making any other changes to you, we remove that neuron and replace it with an artificial machine that behaves in precisely the same way, as far as inputs and outputs are concerned. A “neuristor,” as in Heinlein’s self-aware computer, Mike. But unlike Mike, you are almost entirely made of your ordinary biological cells, except for this one replacement neuristor. Are you still conscious?

Most people would answer yes, a person with one neuron replaced by an equivalently behaving neuristor is still conscious. So what if we replace two neurons? Or a few hundred million? By hypothesis, all of your external actions will be unaltered—at least, if the world is wholly physical and your brain isn’t affected by interactions with any immaterial soul substance that communicates with organic neurons but not with neuristors. A person with every single one of their neurons replaced by artificial machines that interact in the same way would indisputably pass the Turing test. Would it qualify as being conscious?

We can’t prove that such an automated thinking machine would be conscious. It’s logically possible that a phase transition occurs somewhere along the way as we gradually replace neurons one by one, even if we can’t predict exactly when it would happen. But we have neither evidence nor reason to believe that there is any such phase transition. Following Turing, if a cyborg hybrid of neurons and neuristors behaves in exactly the same way as an ordinary human brain would, we should attribute to it consciousness and all that goes along with it.
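The replacement argument leans on a simple idea: what matters is the input-output mapping, not what implements it. A toy sketch in Python (the threshold “neurons” here are invented for illustration, nothing biological):

```python
# Toy illustration of substrate independence: two "neurons" with different
# internals but identical input/output behavior are interchangeable as far
# as anything downstream can tell.

def biological_neuron(inputs):
    """'Organic' version: fires (returns 1) when summed input exceeds 0.5."""
    return 1 if sum(inputs) > 0.5 else 0

def neuristor(inputs):
    """'Artificial' replacement: different internals (an explicit loop),
    but exactly the same input/output mapping."""
    total = 0.0
    for signal in inputs:
        total += signal
    return int(total > 0.5)

# A downstream observer that only sees inputs and outputs cannot distinguish them.
test_signals = [[0.1, 0.2], [0.3, 0.4], [1.0], [0.0], [0.5, 0.01]]
assert all(biological_neuron(s) == neuristor(s) for s in test_signals)
```

Swapping one implementation for the other, one unit at a time, never changes the system’s external behavior—which is exactly the premise of the thought experiment.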

Even before John Searle presented the Chinese Room experiment, philosopher Ned Block discussed the possibility of simulating a brain using the entire population of China. (Why everyone picks China for these thought experiments is left as an exercise.) There are many more neurons in the brain than there are people in China or even the whole world, but by thought-experiment standards that’s not much of an obstacle. Would a collection of people running around sending messages to one another, in perfect mimicry of the electrochemical signals in a human connectome, qualify as “conscious”? Is there any sense in which that population of people—collectively, not as individuals—would possess inner experiences and understanding?

Imagine mapping a person’s connectome, not only at one moment in time but as it develops through life. Then—since we’re already committed to hopelessly impractical thought experiments—imagine that we record absolutely every time a signal crosses a synapse in that person’s lifetime. Store all of that information on a hard drive, or write it down on (a ridiculously large number of) pieces of paper. Would that record of a person’s mental processes itself be “conscious”? Do we actually need development through time, or would a static representation of the evolution of the physical state of a person’s brain manage to capture the essence of consciousness?

•

These examples are fanciful but illustrative. Yes, reproducing the processes of the brain with some completely different kind of substance (whether neuristors or people) should certainly count as consciousness. But no, printing things out onto a static representation of those processes should not.

From a poetic-naturalism perspective, when we talk about consciousness we’re not discovering some fundamental kind of stuff out there in the universe. It’s not like searching for the virus that causes a known disease, where we know perfectly well what kind of thing we are looking for and merely want to detect it with our instruments so that we can describe what it is. Like “entropy” and “heat,” the concepts of “consciousness” and “understanding” are ones that we invent in order to give ourselves more useful and efficient descriptions of the world. We should judge a conception of what consciousness really is on the basis of whether it provides a useful way of talking about the world—one that accurately fits the data and offers insight into what is going on.

A form of multiple realizability must be true at some level. Like the Ship of Theseus, most of the individual atoms and many of the cells in any human body are replaced by equivalent copies each year. Not every one—the atoms in your tooth enamel are thought to be essentially permanent, for example. But who “you” are is defined by the pattern that your atoms form and the actions that they collectively take, not their specific identities as individual particles. It seems reasonable that consciousness would have the same property.

And if we are creating a definition of consciousness, surely “how the system behaves over time” has to play a crucial role. If any element of consciousness is absolutely necessary, it should be the ability to have thoughts. That unmistakably involves evolution through time. The presence of consciousness also implies something about apprehending the outside world and interacting with it appropriately. A system that simply sits still, maintaining the same configuration at every moment of time, cannot be thought of as conscious, no matter how complex it may be or whatever it may represent. A printout of what our brain does wouldn’t qualify.

Imagine you were trying to develop an effective theory of how human beings behave, but without any recourse to their inner mental states. That is, you are playing the role of an old-time behaviorist: person receives input, person behaves accordingly, without any unobservable nonsense about an inner life.

If you wanted to make a good theory, you would end up reinventing the idea of inner mental states. Part of the reason is straightforward: the sensory input might be hearing someone ask, “How are you feeling?” and the induced reaction might be “I’m a little gloomy at the moment, to be honest.” The easiest way to account for such behavior is to imagine that there is a mental state labeled “gloomy,” and that our subject is in that state at the moment.

But there’s also another reason. Even when an individual behaves in ways that do not overtly refer to their inner mental state, real human behavior is extremely complex. It’s not like two billiard balls coming together on a pool table, where you can reliably predict what will happen with relatively little information (angle of impact, spin, velocities, and so on). Two different people, or even the same person in slightly different circumstances, can react very differently to the same input. The best way to explain that is by invoking internal variables—there is something going on inside the person’s head, and we had better take it into account if we want to correctly predict how they will behave. (When someone you know well is behaving strangely, remember: it might not be about you.)
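The behaviorist’s predicament can be made concrete with a toy model in Python (the “mood” variable and the canned replies are invented for illustration): the same stimulus produces different responses on different occasions, and only a model with a hidden internal variable can fit both observations.

```python
# Toy behaviorist puzzle: identical input, different output. A model with no
# inner state cannot account for the data; adding a hidden "mood" variable does.

def stateless_model(question):
    """Pure input -> output mapping: exactly one answer per question, always."""
    return {"How are you feeling?": "Fine."}.get(question, "...")

class StatefulModel:
    """Same inputs, but responses route through an internal variable."""
    def __init__(self, mood="fine"):
        self.mood = mood  # the hidden state the behaviorist tried to do without
    def respond(self, question):
        if question == "How are you feeling?":
            if self.mood == "gloomy":
                return "I'm a little gloomy at the moment, to be honest."
            return "Fine."
        return "..."

# Observed data: the same stimulus elicits different behavior on different days.
assert StatefulModel("fine").respond("How are you feeling?") == "Fine."
assert StatefulModel("gloomy").respond("How are you feeling?") \
    == "I'm a little gloomy at the moment, to be honest."
# The stateless model can only ever reproduce one of the observed responses.
assert stateless_model("How are you feeling?") == "Fine."
```

The internal variable is doing exactly the work that “inner mental states” do in the argument above: it is the cheapest theory that predicts the observed variety of behavior.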


If we weren’t familiar with consciousness already, in other words, we’d have to invent it. The fact that people experience inner states as well as outer stimuli is absolutely central to who they are and how they behave. Inner lives aren’t divorced from outer actions.

Daniel Dennett has made essentially this point with what he calls the intentional stance. There are many circumstances in which it is useful to speak as if certain things have attitudes or intentions. We therefore, quite sensibly, speak that way—we attribute intentionality to all sorts of things, because that’s part of a theory that provides a good account of the thing’s behavior. Talking “as if” is the only thing we ever do, as there is no metaphysically distinct “aboutness” connecting different parts of the physical world, just relationships between different pieces of matter. Just as when we discussed the emergence of “purpose” in chapter 35, we can think of intentions and attitudes and conscious states as concepts that play essential roles in a higher-level emergent theory describing the same underlying physical reality.

What Turing was trying to capture in his imitation game was the idea that what matters about thinking is how a system would respond to stimuli, for example, to questions presented to it by typing on a terminal. A complete video and audio recording of the life of a human being wouldn’t be “conscious,” even if it precisely captured everything that person had done to date, because the recording wouldn’t be able to extrapolate that behavior into the future. We couldn’t ask it questions or interact with it.

Many of the computer programs that have attempted to pass cut-rate versions of the Turing test have been souped-up chat bots—simple systems that can spit out preprogrammed sentences to a variety of possible questions. It is easy to fool them, not only because they don’t have the kind of detailed contextual knowledge of the outside world that any normal person would have, but because they don’t have memories even of the conversation they have been having, much less ways to integrate such memories into the rest of the discussion. In order to do so, they would have to have inner mental states that depended on their entire histories in an integrated way, as well as the ability to conjure up hypothetical future situations, all along distinguishing the past from the future, themselves from their environment, and reality from imagination. As Turing suggested, a program that was really good enough to convincingly sustain human-level interactions would have to be actually thinking.
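The gap between a canned chatbot and one whose replies depend on its whole history can be sketched in a few lines of Python (both bots and their stock responses are invented for illustration):

```python
# Toy contrast: a stateless chatbot versus one with conversational memory.

class StatelessBot:
    """Spits out preprogrammed responses; retains nothing between turns."""
    def reply(self, message):
        if "name" in message:
            return "I'm a bot."
        return "Interesting, tell me more."

class MemoryBot:
    """Keeps an inner state that depends on its entire history of inputs."""
    def __init__(self):
        self.history = []
    def reply(self, message):
        self.history.append(message)
        if "earlier" in message and len(self.history) > 1:
            return f"Earlier you said: {self.history[0]!r}"
        if "name" in message:
            return "I'm a bot."
        return "Interesting, tell me more."

stateless, memoryful = StatelessBot(), MemoryBot()
for bot in (stateless, memoryful):
    bot.reply("I love astronomy.")
# Only the bot with memory can refer back to the conversation so far.
print(stateless.reply("What did I say earlier?"))  # generic canned reply
print(memoryful.reply("What did I say earlier?"))  # recalls the first message
```

The stateless bot is trivially unmasked by any question about the conversation itself; integrating memories, hypotheticals, and a self/environment distinction is where the passage above locates the real difficulty.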

•

Cynthia Breazeal, a roboticist at MIT, leads a group that has constructed a number of experiments in “social robotics.” One of their most charming efforts was a robot puppet named Leonardo, who had a body created by Stan Winston Studio, a special-effects team that had worked on such Hollywood blockbusters as The Terminator and Jurassic Park. Equipped with more than sixty miniature motors that enabled a rich palette of movement and facial expressions, Leonardo bore more than a passing resemblance to Gizmo from the Steven Spielberg-produced film Gremlins.

The ability to have facial expressions, it turns out, is enormously useful in talking to human beings. Brains work better when they’re inside bodies. Leonardo interacted with the researchers in Breazeal’s lab, both reading their expressions and exhibiting his own. He was also programmed with a theory of mind—he kept track of not only his own knowledge (from what his video-camera eyes picked up happening in front of him) but also the knowledge of other people (from what he saw them doing). Leonardo’s actions were not all preprogrammed; he learned new behaviors through interacting with humans, mimicking gestures and responses he witnessed in others. Without knowing anything about his programming, anyone watching Leonardo in action could easily tell whether he was happy, sad, afraid, or confused, just by observing his expressions.

One illustrative experiment with Leonardo was a type of false-belief task: checking that a subject understands that a different person might hold a certain belief even if that belief is not true. (Humans seem to develop this capacity around the age of four years old; younger children labor under the misconception that everyone has the same beliefs.) Leonardo watches one

 
