The Big Picture


by Sean M. Carroll

39

What Thinks?


In Robert A. Heinlein’s novel The Moon Is a Harsh Mistress, colonists on the moon revolt against the Lunar Authority back on Earth. Their cause would have been essentially hopeless if it hadn’t been for the aid of Mike, a centralized computer that controlled all major automated functions in most of the Lunar cities. Mike wasn’t just an important piece of machinery—he had, without anyone planning it, become self-aware. As the novel’s narrator puts it,

   Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors.

   And woke up.

The narrator, Manuel O’Kelly Davis, is a computer technician who doesn’t spend much time wondering about the origin or deeper meanings of Mike’s emergence into consciousness. There’s a revolution to be won, and presumably self-awareness is just the kind of thing that happens when thinking devices become sufficiently large and complex.

The reality would probably be a bit more complicated. A human brain has a lot of neurons in it; but those neurons aren’t just connected up randomly. There is structure to the connectome, developed gradually through the course of natural selection. There is structure in a computer architecture as well, both hardware and software, but it seems unlikely that the kind of structure a computer has would hit upon self-awareness essentially by accident.

And what if it did? How would we know that a computer was actually “thinking,” as opposed to mindlessly pushing numbers around? (Is there a difference?)

  •

These issues were addressed in part by British mathematician and computer scientist Alan Turing back in 1950. Turing proposed what he called the imitation game, which is now more commonly known as the Turing test. With admirable directness, Turing opened his paper by stating, “I propose to consider the question, ‘Can machines think?’” But he immediately decided that this kind of question was subject to endless squabbling over definitions. In the best scientific tradition, he therefore tossed it out and replaced it with a more operational query: Can a machine converse with a person in such a way as to make the person believe that the machine was also a person? (The best philosophical tradition would have dived into the definitional squabbling with gusto.) Turing put forward the ability to pass as human in such a test as a reasonable criterion for what it means to “think.”

The Turing test has entered our cultural lexicon, and we regularly read news stories about this or that program that has finally passed the test. It might not be hard to believe, surrounded as we are by machines that send us email, drive our cars, and even talk to us. In truth, no computer has come close to passing a real Turing test. The competitions we read about in news reports are invariably set up to prevent interlocutors from really challenging a computer in the way Turing envisioned. We will very likely get there at some point, but contemporary machines do not “think” in Turing’s sense.

When and if we do manage to construct a machine that can pass the Turing test to almost everyone’s satisfaction, we will still be debating whether that machine truly thinks in the same sense that a human being does. The issue is consciousness, and the closely related issue of “understanding.” No matter how clever a computer became at carrying on conversations, can it truly understand what it’s saying? If the discussions turn to aesthetics or emotions, could a piece of software running on a silicon chip experience beauty or feel grief as a human can?

Turing anticipated this, and in fact labeled it the argument from consciousness. He quite properly identified the issue as a distinction between a third-person perspective (what others see me doing) and a first-person perspective (how I see and think of myself). The argument from consciousness seemed, to Turing, to ultimately be solipsistic: you could never know that anyone was conscious unless you actually were that person. How do you know that everyone else in the world is actually conscious at all, other than by how they behave? Turing was anticipating the idea of a philosophical zombie—someone who looks and acts just like a regular person but has no inner experience, or qualia.

Turing thought that the way to make progress was to focus on questions that could be objectively answered by watching what happens in the world, rather than taking refuge in talk of personal experiences that are necessarily hidden from external observation. With a bit of charming optimism, he concluded that anyone who thought about things carefully would ultimately come to agree with him: “Most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position.”

But it’s possible to insist that thinking and consciousness cannot be judged from the outside while at the same time accepting that other people probably are conscious. Someone might think: “I know that I’m conscious, and other people are basically like me, so they’re probably conscious as well. Computers, however, are not like me, so I can be more skeptical.” I don’t think this is the right attitude, but it’s a logically consistent one. The question then becomes, are computers really so different? Is the kind of thinking done in my brain really qualitatively distinct from what happens inside a computer? Heinlein’s protagonist didn’t think so: “Can’t see it matters whether paths are protein or platinum.”

  •

The Chinese Room is a thought experiment, proposed by American philosopher John Searle, that attempts to highlight how the Turing test might fall short of capturing what we really mean by “thinking” or “understanding.” Searle asks us to imagine a person locked in a room with huge stacks of paper, each of which contains some Chinese writing. There is also a slot in the wall of the room, through which pieces of paper can be passed, and a set of instructions in the form of a lookup table. The person speaks and reads English, but doesn’t understand any Chinese. When a piece of paper with some Chinese writing comes into the room through the slot, the person inside can consult the instructions, which will indicate one of the existing pieces of paper. The person then passes that paper back out through the slot.

Unbeknownst to our test subject, the pieces of paper that come into the room are perfectly sensible questions written in Chinese, and the pieces of paper that they are instructed to send out in return are perfectly sensible Chinese answers—ones that a regular thinking person might give. To a Chinese-speaking person outside the room, it looks for all the world as if they are asking questions of a Chinese speaker inside the room, who in turn is answering them in Chinese.
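In computational terms, the room’s instruction book is just a lookup table from input strings to prepared output strings. Here is a minimal sketch of that idea in Python (the entries and the function name are invented for illustration, not drawn from Searle’s paper):

    # A toy "Chinese Room": the operator blindly matches each incoming slip
    # against an instruction book (a lookup table) and passes back whichever
    # prepared slip it points to. The entries are invented placeholders.
    INSTRUCTION_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "Nice today."
    }

    def room_operator(slip: str) -> str:
        """Return the prepared reply; the operator never interprets the symbols."""
        return INSTRUCTION_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room_operator("你好吗？"))  # from outside, this looks like conversation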

But surely we agree, Searle argues, that there isn’t actually anyone in the room who understands Chinese. There’s just an English-speaking person, some large stacks of paper, and an exhaustive set of instructions. The room seems able to pass the Turing test (in Chinese), but no real understanding is present. Searle’s original target was research in artificial intelligence, which he felt would never be able to achieve a truly human level of thinking. In the terms of his analogy, a computer that tries to pass the Turing test is like the person in the Chinese room: it might be able to push symbols around to give the illusion of understanding, but no real comprehension is present.

Searle’s thought experiment has generated an enormous amount of commentary, much of it aimed at refuting his point. The simplest refutation succeeds pretty well: of course the person in the room can’t be said to understand Chinese; it’s the combined system of “person plus set of instructions” that understands Chinese. Like Turing with the argument from consciousness, Searle saw this argument coming, and addressed it in his original paper. He was not very impressed:

   The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.


Like many such thought-experiment journeys, the first step of the Chinese Room—the existence of some bits of paper and an instruction manual that could mimic human conversation—is a doozy. If the instruction manual literally indicated a single answer for every question that might be asked, it would never pass the Turing test against a marginally competent human interlocutor. Consider questions like “How are you doing?,” “Why do you say that?,” or “Could you tell me more?” Real human conversations don’t simply proceed on a sentence-to-sentence basis; they depend on context and what has gone before. At a minimum, the “slips of paper” would have to include a way to store memories, as well as a system for processing information that would integrate those memories into the ongoing conversation. It’s not impossible to imagine such a thing, but it would be a lot more complex than a pile of papers and an instruction book.
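As a rough illustration of the minimum that would be needed, here is a small Python sketch (with invented example phrases, not anything from the book) contrasting the context-free lookup above with a responder that stores the conversation so far and folds it into its answers:

    # Sketch: a responder that keeps conversational memory, in contrast to a
    # flat question -> answer table. All phrases are invented examples.
    class StatefulResponder:
        def __init__(self):
            self.history = []  # past questions: the "memory" the slips of paper would need

        def reply(self, question: str) -> str:
            self.history.append(question)
            if question == "Could you tell me more?":
                if len(self.history) < 2:
                    return "More about what? We have only just started talking."
                # Integrate stored context: refer back to the previous question.
                return f"You asked {self.history[-2]!r} earlier; let me expand on that."
            return "That is an interesting question."

    bot = StatefulResponder()
    print(bot.reply("How are you doing?"))
    print(bot.reply("Could you tell me more?"))  # the answer depends on what came before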

In Searle’s view, it doesn’t matter what parts of the setup we include in what we call the “system”; none of it will ever achieve understanding in the true sense. But the Chinese Room experiment doesn’t provide a convincing argument for that conclusion. It does illustrate the view that “understanding” is a concept that transcends mere physical correlation between input and output, and requires something extra: a sense in which what goes on in the system is truly “about” the subject matter at hand. To a poetic naturalist, “aboutness” isn’t an extra metaphysical quality that information can have; it’s simply a convenient way of talking about correlations between different parts of the physical world.

To take the Chinese Room as an argument that machines cannot think begs the question rather than addressing it. It constructs a particular version of a machine that purports to be thinking, and says, “Surely you don’t think there’s any real understanding going on here, do you?” The best answer is “Why not?”
  29

  If the world is purely physical, then what we mean by “understanding”

  30

  is a way of talking about a particular kind of correlation between informa-

  31

  tion located in one system (as instantiated in some particular arrangement

  32

  of matter) and conditions in the external world. Nothing in the Chinese

  33

  Room example indicates that we shouldn’t think that way, unless you are

  34

  already convinced we shouldn’t.

  35S

  That’s not to downplay the difficulty in clarifying what we mean by

  36N

  “understanding.” A textbook on quantum field theory contains

  340

  Big Picture - UK final proofs.indd 340

  20/07/2016 10:02:54

  W h At t h I n K S ?

  information about quantum field theory, but it doesn’t itself “understand”

  01

  the subject. A book can’t answer questions that we put to it, neither can it

  02

  do calculations using the tools of field theory. Understanding is necessarily

  03

  a more dynamic and process- oriented concept than the mere presence of

  04

  information, and the hard work of defining it carefully is well worth doing.

  05

  But as Turing suggested, there’s no reason why that hard work can’t be

  06

  carried out at a purely operational level— referring to how things actu-

  07

  ally behave, rather than invoking inaccessible properties (“understand-

  08

  ing,” “consciousness”) that are labeled as unobservable to outsiders from the

  09

  start.

  10

  Searle’s original target with his thought experiment wasn’t the problem

  11

  of consciousness (what it means to be aware and experiencing), but the

  12

  problems of cognition and intentionality (what it means to think and to

  13

  understand). The issues are closely related, however, and Searle himself later

  14

  considered the argument to have demonstrated that a computer program

  15

  can’t be conscious. The extension is straightforward enough: if you think

  16

  the system inside the room doesn’t really “understand,” you probably don’t

  17

  think it’s aware and experiencing either.

  18

  19

  •

  20

 

‹ Prev