Broca's Brain

by Carl Sagan


  There is much about the Jovian magnetosphere and radio emissions that we still do not understand. The details of the decameter emissions are still deeply mysterious. Why are there localized sources of decameter emission on Jupiter, probably less than 100 kilometers in size? What are these emission sources? Why do the decameter emission regions rotate about the planet with a very high time precision—better than seven significant figures—but with a period different from the rotation periods of visible features in the Jovian clouds? Why do the decameter bursts have a very intricate (submillisecond) fine structure? Why are the decameter sources beamed—that is, not emitting in all directions equally? Why are the decameter sources intermittent—that is, not “on” all the time?

  These mysterious properties of the Jovian decameter emission are reminiscent of the properties of pulsars. Typical pulsars have magnetic fields a trillion times larger than Jupiter’s; they rotate 100,000 times faster; they are a thousandth as old; they are a thousand times more massive. The boundary of the Jovian magnetosphere moves at less than one thousandth of the speed of the light cone of a pulsar. Nevertheless, it is possible that Jupiter is a kind of pulsar that failed, a local and quite unprepossessing model of the rapidly rotating neutron stars, which are one end product of stellar evolution. Major insights into the still baffling problems of pulsar emission mechanisms and magnetosphere geometries may follow from close-up spacecraft observation of Jovian decameter emission—for example, by NASA’s Voyager and Galileo missions.

  EXPERIMENTAL ASTROPHYSICS is developing rapidly. In another few decades at the very latest, we should see direct experimental investigation of the interstellar medium: the heliopause—the boundary between the region dominated by the solar wind and that dominated by the interstellar plasma—is estimated to lie at not much more than 100 astronomical units (9.3 billion miles) from the Earth. (Now, if there were only a local solar system quasar and a backyard black hole—nothing fancy, you understand, just little baby ones—we might with in situ spacecraft measurements check out the greater body of modern astrophysical speculation.)

  If we can judge by past experience, each future venture in experimental spacecraft astrophysics will find that (a) a major school of astrophysicists was entirely right; (b) no one agreed on which school it was that was right until the spacecraft results were in; and (c) an entire new corpus of still more fascinating and fundamental problems was unveiled by the space vehicle results.

  * With the sole exception of the meteorites (see Chapter 15).

  * I have discussed these successful inferences and their spacecraft confirmations in Chapters 12, 16 and 17 of The Cosmic Connection.

  CHAPTER 20

  IN DEFENSE OF ROBOTS

  Thou com’st in such a questionable shape
  That I will speak to thee …

  WILLIAM SHAKESPEARE, Hamlet, Act I, Scene 4

  THE WORD “ROBOT,” first introduced by the Czech writer Karel Čapek, is derived from the Slavic root for “worker.” But it signifies a machine rather than a human worker. Robots, especially robots in space, have often received derogatory notices in the press. We read that a human being was necessary to make the terminal landing adjustments on Apollo 11, without which the first manned lunar landing would have ended in disaster; that a mobile robot on the Martian surface could never be as clever as astronauts in selecting samples to be returned to Earth-bound geologists; and that machines could never have repaired, as men did, the Skylab sunshade, so vital for the continuance of the Skylab mission.

  But all these comparisons turn out, naturally enough, to have been written by humans. I wonder whether a small self-congratulatory element, a whiff of human chauvinism, has not crept into these judgments. Just as whites can sometimes detect racism and men can occasionally discern sexism, I wonder whether we cannot here glimpse some comparable affliction of the human spirit—a disease that as yet has no name. The word “anthropocentrism” does not mean quite the same thing. The word “humanism” has been pre-empted by other and more benign activities of our kind. From the analogy with sexism and racism I suppose the name for this malady is “speciesism”—the prejudice that there are no beings so fine, so capable, so reliable as human beings.

  This is a prejudice because it is, at the very least, a prejudgment, a conclusion drawn before all the facts are in. Such comparisons of men and machines in space are comparisons of smart men and dumb machines. We have not asked what sorts of machines could have been built for the $30-or-so billion that the Apollo and Skylab missions cost.

  Each human being is a superbly constructed, astonishingly compact, self-ambulatory computer—capable on occasion of independent decision making and real control of his or her environment. And, as the old joke goes, these computers can be constructed by unskilled labor. But there are serious limitations to employing human beings in certain environments. Without a great deal of protection, human beings would be inconvenienced on the ocean floor, the surface of Venus, the deep interior of Jupiter, or even on long space missions. Perhaps the only interesting result of Skylab that could not have been obtained by machines is that human beings in space for a period of months undergo a serious loss of bone calcium and phosphorus—which seems to imply that human beings may be incapacitated under 0 g for missions of six to nine months or longer. But the minimum interplanetary voyages have characteristic times of a year or two. Because we value human beings highly, we are reluctant to send them on very risky missions. If we do send human beings to exotic environments, we must also send along their food, their air, their water, amenities for entertainment and waste recycling, and companions. By comparison, machines require no elaborate life-support systems, no entertainment, no companionship, and we do not yet feel any strong ethical prohibitions against sending machines on one-way, or suicide, missions.

  Certainly, for simple missions, machines have proved themselves many times over. Unmanned vehicles have performed the first photography of the whole Earth and of the far side of the Moon; the first landings on the Moon, Mars and Venus; and the first thorough orbital reconnaissance of another planet, in the Mariner 9 and Viking missions to Mars. Here on Earth it is increasingly common for high-technology manufacturing—for example, chemical and pharmaceutical plants—to be performed largely or entirely under computer control. In all these activities machines are able, to some extent, to sense errors, to correct mistakes, to alert human controllers some great distance away about perceived problems.

  The powerful abilities of computing machines to do arithmetic—hundreds of millions of times faster than unaided human beings—are legendary. But what about really difficult matters? Can machines in any sense think through a new problem? Can they make decisions of the branched-contingency tree variety which we think of as characteristically human? (That is, I ask Question 1; if the answer is A, I ask Question 2; but if the answer is B, I ask Question 3; and so on.) Some decades ago the English mathematician A. M. Turing described what would be necessary for him to believe in machine intelligence. The condition was simply that he could be in teletype communication with a machine and be unable to tell that it was not a human being. Turing imagined a conversation between a man and a machine of the following quality:

  INTERROGATOR: In the first line of your sonnet which reads “Shall I compare thee to a Summer’s day,” would not “a Spring day” do as well or better?

  WITNESS: It wouldn’t scan.

  INTERROGATOR: How about “a Winter’s day”? That would scan all right.

  WITNESS: Yes, but nobody wants to be compared to a Winter’s day.

  INTERROGATOR: Would you say Mr. Pickwick reminded you of Christmas?

  WITNESS: In a way.

  INTERROGATOR: Yet Christmas is a Winter’s day, and I do not think Mr. Pickwick would mind the comparison.

  WITNESS: I don’t think you’re serious. By a Winter’s day one means a typical Winter’s day, rather than a special one like Christmas.
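  The branched-contingency questioning described a few paragraphs above (ask Question 1; if the answer is A, ask Question 2; if it is B, ask Question 3) amounts to walking a small decision tree. Below is a minimal sketch of that idea in Python; the particular questions and branches are invented for illustration and come from neither Turing nor the text.

  # A minimal sketch of branched-contingency questioning: each answer
  # determines which question is asked next. The questions are hypothetical.
  question_tree = {
      "q1": {"text": "Do you write poetry?", "yes": "q2", "no": "q3"},
      "q2": {"text": "Would 'a Spring day' scan in your sonnet?", "yes": None, "no": None},
      "q3": {"text": "Do you prefer chess problems?", "yes": None, "no": None},
  }

  def interrogate(tree, start="q1"):
      """Walk the tree, letting each answer select the next question."""
      node = start
      while node is not None:
          answer = input(tree[node]["text"] + " (yes/no) ").strip().lower()
          node = tree[node]["yes" if answer.startswith("y") else "no"]

  interrogate(question_tree)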

  No device of this sophistication has yet been built, although I am not sure how many humans would pass Turing’s human test. But the amount of effort and money put into artificial intelligence has been quite limited, and there are only about a half-dozen major centers of such activity in the world. One of the more striking results obtained in a very limited universe of discourse—that of children’s blocks—has come from the work of Terry Winograd, then at the Massachusetts Institute of Technology. Here is a dialogue between man and machine, the machine having been programmed by Winograd:

  Q: What does the box contain?

  A: The blue pyramid and the blue box.

  Q: How many blocks are not in the box?

  A: Four of them.

  Q: Is at least one of them narrower than the one which I told you to pick up?

  A: Yes, the red cube.

  Q: Is there a steeple?

  A: Sorry, I don’t know the word “steeple.”

  Q: A steeple is a stack which contains two green cubes and a pyramid, for example.

  A: I understand.

  Q: Are there any steeples now?

  A: No.

  Q: Build one.

  The machine does. It reaches out its mechanical hand, moves the cubes and builds the structure that has just been described.

  Q: When did you pick up the pyramid?

  A: While I was stacking up the red cube, the large red block and the large green cube.

  Q: Why?

  A: To get rid of it.

  Q: Why did you clear off that cube?

  A: To put it on the large green cube.
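  Winograd’s program could answer such questions because it kept an explicit internal model of its tiny world and consulted that model rather than the English words alone. The following is a minimal sketch of the idea in Python; the objects and their properties are invented for illustration and do not reproduce Winograd’s actual scene or program.

  # A toy "blocks world": a limited universe of discourse that a program can
  # reason about exhaustively. The scene below is hypothetical.
  scene = {
      "blue pyramid":     {"kind": "pyramid", "in_box": True},
      "blue box":         {"kind": "box",     "in_box": True},
      "red cube":         {"kind": "block",   "in_box": False},
      "green cube":       {"kind": "block",   "in_box": False},
      "large red block":  {"kind": "block",   "in_box": False},
      "large green cube": {"kind": "block",   "in_box": False},
  }

  def box_contents():
      return [name for name, props in scene.items() if props["in_box"]]

  def blocks_not_in_box():
      return [name for name, props in scene.items()
              if props["kind"] == "block" and not props["in_box"]]

  print(box_contents())            # ['blue pyramid', 'blue box']
  print(len(blocks_not_in_box()))  # 4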

  As another example, a machine psychiatrist has been developed by Joseph Weizenbaum, also at MIT. This is a much simpler program than Winograd’s, since it involves nondirective psychotherapy, which, generally speaking, requires extremely simple-minded computer programs. Here is a striking dialogue between a human being and a computer psychotherapist. (There is certainly a selection effect here. Many other such machine/human psychiatric dialogues are not nearly so striking. But I suppose most human/human psychiatric dialogues are not very striking either.) In this interaction, in order to spell out clearly the respective roles of therapist and patient, the computer therapist types upper case while the patient types lower case:

  men are all alike

  IN WHAT WAY

  they’re always bugging us about something or other

  CAN YOU THINK OF A SPECIFIC EXAMPLE

  well, my boy friend made me come here

  YOUR BOY FRIEND MADE YOU COME HERE

  he noticed that i’m depressed much of the time

  I AM SORRY TO HEAR YOU ARE DEPRESSED

  he’s right. I am unhappy

  DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

  i need some help, that much is clear

  WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

  perhaps i could learn to get along with my mother

  TELL ME MORE ABOUT YOUR FAMILY

  my mother takes care of me

  WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU

  my father

  YOUR FATHER

  you are like my father in some ways

  WHAT RESEMBLANCE DO YOU SEE

  you are not very aggressive, but i don’t think you want me to notice that

  WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE

  you don’t argue with me

  WHY DO YOU THINK I DON’T ARGUE WITH YOU

  you are afraid of me

  DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU

  it pleases my father to think i am afraid of him

  WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER

  bullies

  DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOY FRIEND MADE YOU COME HERE

  This astonishing—one is very tempted to say “perceptive”—response from the computer is, of course, preprogrammed. But, then, so are the responses of human psychotherapists. In a time when more and more people in our society seem to be in need of psychiatric counseling, and when time-sharing of computers is widespread, I can even imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we are able to talk to an attentive, tested and largely nondirective psychotherapist. Ensuring the confidentiality of the psychiatric dialogue is one of several important steps still to be worked out.
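  Responses of this kind are typically produced by matching the patient’s sentence against a list of patterns and echoing a fragment back with first- and second-person words swapped. The sketch below illustrates that general approach in Python; the patterns, reflections and canned replies are invented for illustration and are not Weizenbaum’s actual rules.

  import re
  import random

  # Swap person so the reply points back at the speaker (illustrative table).
  REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

  # Each rule pairs a pattern with one or more reply templates (illustrative).
  RULES = [
      (r"i need (.*)", ["WHAT WOULD IT MEAN TO YOU IF YOU GOT {0}"]),
      (r"(.*) made me (.*)", ["{0} MADE YOU {1}"]),
      (r"i am (.*)", ["I AM SORRY TO HEAR YOU ARE {0}"]),
      (r"(.*)", ["TELL ME MORE", "IN WHAT WAY"]),
  ]

  def reflect(fragment):
      return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

  def respond(statement):
      text = statement.lower().strip()
      for pattern, replies in RULES:
          match = re.match(pattern, text)
          if match:
              reply = random.choice(replies)
              return reply.format(*(reflect(g) for g in match.groups())).upper()

  print(respond("my boy friend made me come here"))
  # YOUR BOY FRIEND MADE YOU COME HERE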

  ANOTHER SIGN of the intellectual accomplishments of machines is in games. Even exceptionally simple computers—those that can be wired by a bright ten-year-old—can be programmed to play perfect tic-tac-toe. Some computers can play world-class checkers. Chess is of course a much more complicated game than tic-tac-toe or checkers. Here programming a machine to win is more difficult, and novel strategies have been used, including several rather successful attempts to have a computer learn from its own experience in playing previous chess games. Computers can learn, for example, empirically the rule that it is better in the beginning game to control the center of the chessboard than the periphery. The ten best chess players in the world still have nothing to fear from any present computer. But the situation is changing. Recently a computer for the first time did well enough to enter the Minnesota State Chess Open. This may be the first time that a non-human has entered a major sporting event on the planet Earth (and I cannot help but wonder if robot golfers and designated hitters may be attempted sometime in the next decade, to say nothing of dolphins in free-style competition). The computer did not win the Chess Open, but this is the first time one has done well enough to enter such a competition. Chess-playing computers are improving extremely rapidly.
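  Perfect play at tic-tac-toe is within reach of even a very small machine because the entire game tree can be searched before each move. Below is a deliberately naive Python sketch of that exhaustive (minimax) search, included only to make the idea concrete; it is not how any particular machine of the period was actually wired or programmed.

  # Perfect tic-tac-toe by exhaustive game-tree (minimax) search.
  # Board: list of 9 cells holding "X", "O" or None.
  LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
           (0, 3, 6), (1, 4, 7), (2, 5, 8),
           (0, 4, 8), (2, 4, 6)]

  def winner(board):
      for a, b, c in LINES:
          if board[a] and board[a] == board[b] == board[c]:
              return board[a]
      return None

  def minimax(board, player):
      """Return (score, move) for `player`: +1 means X wins, -1 O wins, 0 a draw."""
      won = winner(board)
      if won:
          return (1 if won == "X" else -1), None
      moves = [i for i, cell in enumerate(board) if cell is None]
      if not moves:
          return 0, None
      best = None
      for move in moves:
          board[move] = player
          score, _ = minimax(board, "O" if player == "X" else "X")
          board[move] = None
          if best is None:
              best = (score, move)
          elif player == "X" and score > best[0]:
              best = (score, move)
          elif player == "O" and score < best[0]:
              best = (score, move)
      return best

  score, move = minimax([None] * 9, "X")
  print("Opening move:", move, "guaranteed outcome:", score)  # 0 means a draw with best play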

  I have heard machines demeaned (often with a just audible sigh of relief) for the fact that chess is an area where human beings are still superior. This reminds me very much of the old joke in which a stranger remarks with wonder on the accomplishments of a checker-playing dog. The dog’s owner replies, “Oh, it’s not all that remarkable. He loses two games out of three.” A machine that plays chess in the middle range of human expertise is a very capable machine; even if there are thousands of better human chess players, there are millions who are worse. To play chess requires strategy, foresight, analytical powers, and the ability to cross-correlate large numbers of variables and to learn from experience. These are excellent qualities in those whose job it is to discover and explore, as well as those who watch the baby and walk the dog.

  With this as a more or less representative set of examples of the state of development of machine intelligence, I think it is clear that a major effort over the next decade could produce much more sophisticated examples. This is also the opinion of most of the workers in machine intelligence.

  In thinking about this next generation of machine intelligence, it is important to distinguish between self-controlled and remotely controlled robots. A self-controlled robot has its intelligence within it; a remotely controlled robot has its intelligence at some other place, and its successful operation depends upon close communication between its central computer and itself. There are, of course, intermediate cases where the machine may be partly self-activated and partly remotely controlled. It is this mix of remote and in situ control that seems to offer the highest efficiency for the near future.

  For example, we can imagine a machine designed for the mining of the ocean floor. There are enormous quantities of manganese nodules littering the abyssal depths. They were once thought to have been produced by meteorite infall on Earth, but are now believed to be formed occasionally in vast manganese fountains produced by the internal tectonic activity of the Earth. Many other scarce and industrially valuable minerals are likewise to be found on the deep ocean bottom. We have the capability today to design devices that systematically swim over or crawl upon the ocean floor; that are able to perform spectrometric and other chemical examinations of the surface material; that can automatically radio back to ship or land all findings; and that can mark the locales of especially valuable deposits—for example, by low-frequency radio-homing devices. The radio beacon will then direct great mining machines to the appropriate locales. The present state of the art in deep-sea submersibles and in spacecraft environmental sensors is clearly compatible with the development of such devices. Similar remarks can be made for off-shore oil drilling, for coal and other subterranean mineral mining, and so on. The likely economic returns from such devices would pay not only for their development, but for the entire space program many times over.

  When the machines are faced with particularly difficult situations, they can be programmed to recognize that the situations are beyond their abilities and to inquire of human operators—working in safe and pleasant environments—what to do next. The examples just given are of devices that are largely self-controlled. The reverse also is possible, and a great deal of very preliminary work along these lines has been performed in the remote handling of highly radioactive materials in laboratories of the U.S. Department of Energy. Here I imagine a human being who is connected by radio link with a mobile machine. The operator is in Manila, say; the machine in the Mindanao Deep. The operator is attached to an array of electronic relays, which transmits and amplifies his movements to the machine and which can, conversely, carry what the machine finds back to his senses. So when the operator turns his head to the left, the television cameras on the machine turn left, and the operator sees on a great hemispherical television screen around him the scene the machine’s searchlights and cameras have revealed. When the operator in Manila takes a few strides forward in his wired suit, the machine in the abyssal depths ambles a few feet forward. When the operator reaches out his hand, the mechanical arm of the machine likewise extends itself; and the precision of the man/machine interaction is such that precise manipulation of material at the ocean bottom by the machine’s fingers is possible. With such devices, human beings can enter environments otherwise closed to them forever.
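  The division of labor described above, in which a machine acts on its own in routine situations and consults a distant human operator when a situation exceeds its abilities, can be summarized as a simple control loop. The Python sketch below is purely illustrative; the threshold, names and signatures are assumptions rather than any actual mission software.

  # Hypothetical mixed in situ / remote control: act autonomously when confident,
  # otherwise radio the situation to the human operator and follow the reply.
  ROUTINE_CONFIDENCE = 0.8   # assumed threshold below which the machine defers

  def handle(situation, classify, act, ask_operator):
      """classify() returns (proposed_action, confidence); act() executes locally;
      ask_operator() sends the situation over the radio link and waits for a reply."""
      action, confidence = classify(situation)
      if confidence >= ROUTINE_CONFIDENCE:
          act(action)                    # self-controlled: decide and act in situ
      else:
          act(ask_operator(situation))   # remotely controlled: the human decides

  # Trivial stand-ins for the sensors and the radio link:
  handle("unfamiliar mineral deposit",
         classify=lambda s: ("mark location with radio beacon", 0.4),
         act=print,
         ask_operator=lambda s: "hold position and await the survey ship")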

 
