The Dragons of Eden

by Carl Sagan


  A non-directive psychotherapy program developed by Joseph Weizenbaum of the Massachusetts Institute of Technology is a good example of the present serviceability of psychiatric computer programs (which are, incidentally, far from the most difficult or most elaborate programs in existence). After the psychotherapy program was developed, a human psychiatrist was brought in to relate in lower-case type (via a typewriter keyboard on an interactive console) with the program, which prints out in upper case. The exchange was probably intended as heuristic but may just possibly have gotten out of hand—on the human side. The conversation begins with the human psychiatrist saying,

  doctor, i have terrible news

  YOU SAY YOU HAVE TERRIBLE NEWS

  yes, there is a computer program at m.i.t. that’ll ruin us

  WHY DO YOU SAY THAT

  i have seen it working

  YOU SAY YOU HAVE SEEN IT WORKING

  it’s frightening—you must see it

  YOU THINK I MUST SEE IT

  you sound depressed to me

  IN YOUR FANTASY PERHAPS WE SOUND DEPRESSED TO EACH OTHER

  yes—perhaps

  I UNDERSTAND

  i do too

  YOU SAY YOU DO TOO

  we are both in the same boat with respect to that computer

  LET’S CONTINUE TO TALK ABOUT YOU—SHALL WE.

  you are a comfort

  WHAT MAKES YOU THINK I AM A COMFORT

  at a time like this you want to talk about me

  WHY DO YOU THINK I WANT TO TALK ABOUT YOU

  because you share my feelings about that damned computer

  WHAT DOES THAT SUGGEST TO YOU

  that, of course, you hate computers too

  YOU THINK I HATE COMPUTERS TOO

  why not, they’re hardly human, are they

  PERHAPS THAT’S WHAT WE ARE HERE TO FIND OUT

  no computer could converse as you can, for example

  OH

  i can’t understand how anyone could be fooled by a mere machine

  CAN’T YOU REALLY

  no. after all, there is nothing creative about a computer program

  YOU ARE BEING NEGATIVE

  i am convinced i am correct

  ARE YOU WORRIED TO HAVE BEEN CONVINCED YOU ARE CORRECT

  no, not in the least, not at all. not even a little.

  YOU ARE BEING NEGATIVE

  you said you agreed with me, didn’t you

  IS THAT WHAT YOU HEARD ME SAY

  why don’t you try to dissuade me—computers diddle with numbers, that’s all

  YOU WISH ME TO TRY TO DISSUADE YOU

  you can’t

  YOU THINK I CAN’T

  for example, could a computer in a conversation like this raise a new topic

  YOU SEEM TO HAVE AVOIDED SPEAKING OF YOUR PARENTS ALTOGETHER

  precisely as you just did

  YOU THINK I JUST DID

  yes, and that’s what shows the difference between a man and a computer

  I SEE

  The computer psychiatrist lacks sophistication, training, sensitivity, human intuition; it is mechanical (of course!), inflexible and relatively unresponsive to emotional nuances and nonverbal cues. And yet it has produced a conversation more intelligent than many. Its response on “a new topic” is stunning. But that response is very likely only a fluke. The program is undoubtedly designed to pay attention to words such as “mother,” “father,” “parent,” and so on; after the computer’s clock has ticked away so many minutes, if these words have not been introduced, the program is designed to come up with “You seem to have avoided …” Emerging at just the moment it did, the remark gives an eerie impression of insight.
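
  For readers curious about the mechanics, here is a minimal sketch, in modern Python, of how such a keyword-driven responder can work. The patterns, the pronoun-reflection table and the "parents" rule are illustrative guesses, not Weizenbaum's actual script, and the original's elapsed-time trigger is simplified here to a count of conversational turns.

    # A toy, non-directive responder in the spirit of the program described above.
    import re
    import random

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "i"}

    RULES = [
        (r"i have (.*)", ["YOU SAY YOU HAVE {0}"]),
        (r"i am (.*)", ["HOW LONG HAVE YOU BEEN {0}"]),
        (r"you (.*)", ["WHY DO YOU THINK I {0}"]),
        (r"because (.*)", ["WHAT DOES THAT SUGGEST TO YOU"]),
        (r"(.*)", ["PLEASE GO ON", "I SEE", "WHAT DOES THAT SUGGEST TO YOU"]),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so the reply points back at the speaker.
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(sentence, turn, family_mentioned):
        # Hypothetical version of the "avoided speaking of your parents" rule:
        # after enough turns without a family word, change the subject.
        if turn > 5 and not family_mentioned:
            return "YOU SEEM TO HAVE AVOIDED SPEAKING OF YOUR PARENTS ALTOGETHER"
        for pattern, replies in RULES:
            match = re.match(pattern, sentence.lower())
            if match:
                reply = random.choice(replies)
                return reply.format(*[reflect(g) for g in match.groups()]).upper()

    print(respond("i have terrible news", turn=1, family_mentioned=False))
    # -> YOU SAY YOU HAVE TERRIBLE NEWS

Even a script this small reproduces the flavor of the exchange quoted above, which is the point: the appearance of attentive listening can be had very cheaply.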

  But what is the game of psychotherapy if not a very complex, learned set of responses to human situations? Is not the psychiatrist also preprogrammed to give certain responses? Non-directive psychotherapy clearly requires very simple computer programs, and the appearance of insight requires only slightly more sophisticated programs. I do not intend these remarks to disparage the psychiatric profession in any way, but rather to augur the coming of machine intelligence. Computers are by no means yet at a high enough level of development to recommend the widespread use of computer psychotherapy. But it does not seem to me a forlorn hope that we may one day have extremely patient, widely available and, at least for certain problems, adequately competent computer therapists. Some programs already in existence are given high marks by patients because the therapist is perceived as unbiased and extremely generous with his or her or its time.

  Computers are now being developed in the United States that will be able to detect and diagnose their own malfunctions. When systematic performance errors are found, the faulty components will be automatically bypassed or replaced. Internal consistency will be tested by repeated operation and through standard programs whose consequences are known independently; repair will be accomplished chiefly by redundant components. There are already in existence programs—e.g., in chess-playing computers—capable of learning from experience and from other computers. As time goes on, the computer appears to become increasingly intelligent. Once the programs are so complex that their inventors cannot quickly predict all possible responses, the machines will have the appearance of, if not intelligence, at least free will. Even the computer on the Viking Mars lander, which has a memory of only 18,000 words, is at this point of complexity: we do not in all cases know what the computer will do with a given command. If we knew, we would say it is “only” or “merely” a computer. When we do not know, we begin to wonder if it is truly intelligent.
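
  A minimal sketch of that self-checking idea, with invented components and test values rather than any real machine's diagnostics: each redundant unit is run on a standard problem whose answer is known independently, units that fail are bypassed, and the surviving units vote.

    # Toy illustration of known-answer testing plus redundant components.
    def known_answer_test(unit):
        # A standard problem whose result is known independently of the unit.
        return unit(12, 34) == 46

    def make_voter(units):
        healthy = [u for u in units if known_answer_test(u)]  # bypass faulty units

        def compute(a, b):
            # Majority vote among the remaining redundant components.
            results = [u(a, b) for u in healthy]
            return max(set(results), key=results.count)

        return compute

    good_adder = lambda a, b: a + b
    faulty_adder = lambda a, b: a + b + 1   # a systematic performance error

    adder = make_voter([good_adder, good_adder, faulty_adder])
    print(adder(2, 3))   # 5 -- the faulty component has been bypassed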

  The situation is very much like the commentary that has echoed over the centuries after a famous animal story told both by Plutarch and by Pliny: A dog, following the scent of its master, was observed to come to a triple fork in the road. It ran down the leftmost prong, sniffing; then stopped and returned to follow the middle prong for a short distance, again sniffing and then turning back. Finally, with no sniffing at all, it raced joyously down the right-hand prong of the forked road.

  Montaigne, commenting on this story, argued that it showed clear canine syllogistic reasoning: My master has gone down one of these roads. It is not the left-hand road; it is not the middle road; therefore it must be the right-hand road. There is no need for me to corroborate this conclusion by smell—the conclusion follows by straightforward logic.

  The possibility that reasoning at all like this might exist in the animals, although perhaps less clearly articulated, was troubling to many, and long before Montaigne, St. Thomas Aquinas attempted unsuccessfully to deal with the story. He cited it as a cautionary example of how the appearance of intelligence can exist where no intelligence is in fact present. Aquinas did not, however, offer a satisfactory alternative explanation of the dog’s behavior. In human split-brain patients, it is quite clear that fairly elaborate logical analysis can proceed surrounded by verbal incompetence.

  We are at a similar point in the consideration of machine intelligence. Machines are just passing over an important threshold: the threshold at which, to some extent at least, they give an unbiased human being the impression of intelligence. Because of a kind of human chauvinism or anthropocentrism, many humans are reluctant to admit this possibility. But I think it is inevitable. To me it is not in the least demeaning that consciousness and intelligence are the result of “mere” matter sufficiently complexly arranged; on the contrary, it is an exalting tribute to the subtlety of matter and the laws of Nature.

  It by no means follows that computers will in the immediate future exhibit human creativity, subtlety, sensitivity or wisdom. A classic and probably apocryphal illustration is in the field of machine translation of human languages: a language—say, English—is input and the text is output in another language—say, Chinese. After the completion of an advanced translation program, so the story goes, a delegation which included a U.S. senator was proudly taken through a demonstration of the computer system. The senator was asked to produce an English phrase for translation and promptly suggested, “Out of sight, out of mind.” The machine dutifully whirred and winked and generated a piece of paper on which were printed a few Chinese characters. But the senator could not read Chinese. So, to complete the test, the program was run in reverse, the Chinese characters input and an English phrase output. The visitors crowded around the new piece of paper, which to their initial puzzlement read: “Invisible idiot.”

  Existing programs are only marginally competent even on matters of this not very high degree of subtlety. It would be folly to entrust major decisions to computers at our present level of development—not because the computers are not intelligent to a degree, but because, in the case of most complex problems, they will not have been given all relevant information. The reliance on computers in determining American policy and military actions during the Vietnam war is an excellent example of the flagrant misuse of these machines. But in reasonably restricted contexts the human use of artificial intelligence seems to be one of the two practicable major advances in human intelligence available in the near future. (The other is enrichment of the preschool and school learning environments of children.)

  Those who have not grown up with computers generally find them more frightening than those who have. The legendary manic computer biller who will not take no—or even yes—for an answer, and who can be satisfied only by receiving a check for zero dollars and zero cents is not to be considered representative of the entire tribe; it is a feeble-minded computer to begin with, and its mistakes are those of its human programmers. The growing use in North America of integrated circuits and small computers for aircraft safety, teaching machines, cardiac pacemakers, electronic games, smoke-actuated fire alarms and automated factories, to name only a few uses, has helped greatly to reduce the sense of strangeness with which so novel an invention is usually invested. There are some 200,000 digital computers in the world today; in another decade, there are likely to be tens of millions. In another generation, I think that computers will be treated as a perfectly natural—or at least commonplace—aspect of our lives.

  Consider, for example, the development of small, pocket computers. I have in my laboratory a desk-sized computer purchased with a research grant in the late 1960s for $4,900. I also have another product of the same manufacturer, a computer that fits into the palm of my hand, which was purchased in 1975. The new computer does everything that the old computer did, including programming capability and several addressable memories. But it cost $145, and is getting cheaper at a breathtaking rate. That represents quite a spectacular advance, both in miniaturization and in cost reduction, in a period of six or seven years. In fact, the present limit on the size of hand-held computers is the requirement that the buttons be large enough for our somewhat gross and clumsy human fingers to press. Otherwise, such computers could easily be built no larger than my fingernail. Indeed, ENIAC, the first large electronic digital computer, constructed in 1946, contained 18,000 vacuum tubes and occupied a large room. The same computational ability resides today in a silicon chip microcomputer the size of the smallest joint of my little finger.
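
  Taking the two quoted prices at face value, the arithmetic of that advance is roughly

    \frac{\$4900}{\$145} \approx 34, \qquad 34^{1/7} \approx 1.66

  that is, the price fell about thirty-four-fold overall, or by roughly 40 percent a year over seven years, quite apart from the reduction in size.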

  The speed of transmission of information in the circuitry of such computers is the velocity of light. Human neural transmission is one million times slower. That in nonarithmetic operations the small and slow human brain can still do so much better than the large and fast electronic computer is an impressive tribute to how cleverly the brain is packaged and programmed—features brought about, of course, by natural selection. Those who possessed poorly programmed brains eventually did not live long enough to reproduce.
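
  As a rough check of that ratio, assuming fast myelinated nerve fibers conduct at something like 100 meters per second:

    \frac{c}{v_{\text{nerve}}} \approx \frac{3 \times 10^{8}\ \text{m/s}}{10^{2}\ \text{m/s}} = 3 \times 10^{6}

  which is indeed on the order of a million.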

  Computer graphics have now reached a state of sophistication that permits important and novel kinds of learning experiences in arts and sciences, and in both cerebral hemispheres. There are individuals, many of them analytically extremely gifted, who are impoverished in their abilities to perceive and imagine spatial relations, particularly three-dimensional geometry. We now have computer programs that can gradually build up complex geometrical forms before our eyes and rotate them on a television screen connected to the computer.

  At Cornell University, such a system has been designed by Donald Greenberg of the School of Architecture. With this system it is possible to draw a set of regularly spaced lines which the computer interprets as contour intervals. Then, by touching our light pen to any of a number of possible instructions on the screen, we command the construction of elaborate three-dimensional images which can be made larger or smaller, stretched in a given direction, rotated, joined to other objects or have designated parts excised. (See figures on this page.) This is an extraordinary tool for improving our ability to visualize three-dimensional forms—a skill extremely useful in graphic arts, in science and in technology. It also represents an excellent example of cooperation between the two cerebral hemispheres: the computer, which is a supreme construction of the left hemisphere, teaches us pattern recognition, which is a characteristic function of the right hemisphere.
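
  The core of any such system is small: a wireframe of three-dimensional points is rotated and then projected in perspective onto the flat screen. Here is a toy sketch in Python of that core, using a cube rather than anything drawn with a light pen; it illustrates the idea and is not the Cornell program itself.

    # Rotate a 3-D wireframe cube and project it in perspective onto a 2-D screen.
    import math

    # Vertices of a unit cube centred on the origin, and the edges joining them.
    VERTS = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    EDGES = [(i, j) for i in range(8) for j in range(i + 1, 8)
             if sum(a != b for a, b in zip(VERTS[i], VERTS[j])) == 1]

    def rotate_y(p, angle):
        x, y, z = p
        c, s = math.cos(angle), math.sin(angle)
        return (c * x + s * z, y, -s * x + c * z)

    def project(p, viewer_distance=4.0):
        # Simple perspective projection onto the screen plane.
        x, y, z = p
        scale = viewer_distance / (viewer_distance - z)
        return (x * scale, y * scale)

    rotated = [rotate_y(v, math.radians(30)) for v in VERTS]
    screen = [project(v) for v in rotated]
    for i, j in EDGES:
        print(f"draw line {screen[i]} -> {screen[j]}")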

  There are other computer programs that exhibit two- and three-dimensional projections of four-dimensional objects. As the four-dimensional objects turn, or our perspective changes, not only do we see new parts of the four-dimensional objects; we also seem to see the synthesis and destruction of entire geometrical subunits. The effect is eerie and instructive and helps to make four-dimensional geometry much less mysterious; we are not nearly so baffled as I imagine a mythical two-dimensional creature would be on encountering the typical projection (two squares with the corners connected) of a three-dimensional cube on a flat surface. The classical artistic problem of perspective—the projection of three-dimensional objects onto two-dimensional canvases—is enormously clarified by computer graphics; the computer is obviously also a major tool in the quite practical problem of picturing an architect’s design of a building, made in two dimensions, from all vantage points in three dimensions.
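
  The same machinery extends to four dimensions: the sketch below (again an illustration, not any particular program) builds the sixteen vertices of a hypercube, turns it in one of its planes, and projects it down to three and then two dimensions, which is essentially the operation that produces those shifting shadows.

    # Project a rotating 4-D hypercube down to 3-D and then to 2-D.
    import itertools
    import math

    VERTS4 = list(itertools.product((-1, 1), repeat=4))   # 16 vertices

    def rotate_xw(p, angle):
        x, y, z, w = p
        c, s = math.cos(angle), math.sin(angle)
        return (c * x - s * w, y, z, s * x + c * w)

    def project_4d_to_3d(p, distance=3.0):
        x, y, z, w = p
        scale = distance / (distance - w)
        return (x * scale, y * scale, z * scale)

    def project_3d_to_2d(p, distance=5.0):
        x, y, z = p
        scale = distance / (distance - z)
        return (x * scale, y * scale)

    for v in VERTS4:
        p2 = project_3d_to_2d(project_4d_to_3d(rotate_xw(v, math.radians(20))))
        print(p2)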

  Computer graphics are now being extended into the area of play. There is a popular game, sometimes called Pong, which simulates on a television screen a perfectly elastic ball bouncing between two surfaces. Each player is given a dial that permits him to intercept the ball with a movable “racket.” Points are scored if the motion of the ball is not intercepted by the racket. The game is very interesting. There is a clear learning experience involved which depends exclusively on Newton’s second law for linear motion. As a result of Pong, the player can gain a deep intuitive understanding of the simplest Newtonian physics—a better understanding even than that provided by billiards, where the collisions are far from perfectly elastic and where the spinning of the pool balls interposes more complicated physics.
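
  The physics underneath Pong is equally compact: between bounces the ball coasts at constant velocity, and each collision with a wall or racket simply reverses one component of that velocity, since the bounce is perfectly elastic. A minimal sketch, with an arbitrary court size and time step:

    # Elastic ball bouncing inside a rectangular court, as in Pong.
    def step(pos, vel, dt, width=1.0, height=1.0):
        x, y = pos
        vx, vy = vel
        x, y = x + vx * dt, y + vy * dt          # straight-line motion between bounces
        if x < 0 or x > width:                   # racket or side wall: reverse horizontal velocity
            vx = -vx
            x = max(0.0, min(width, x))
        if y < 0 or y > height:                  # top or bottom wall: reverse vertical velocity
            vy = -vy
            y = max(0.0, min(height, y))
        return (x, y), (vx, vy)

    pos, vel = (0.5, 0.5), (0.31, 0.17)
    for _ in range(10):
        pos, vel = step(pos, vel, dt=0.1)
        print(f"ball at ({pos[0]:.2f}, {pos[1]:.2f})")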

  Example of a simple computer graphics routine. Each figure was created solely by drawing free-hand contours with a “light pen” on a television screen. The computer converted this into perspective drawings in elevation from any view angle—directly from the side of this free-form sculpture at left and at an angle at right. The tower was “webbed” automatically, and is tilted toward the reader in the right-hand diagram. In addition to a full capability for rotation and zoom, the observer can request with his “light pen” orthogonal, perspective, or stereoscopic dynamic images (Program WIRE by Marc Levoy, Laboratory of Computer Graphics, Cornell University).

  This sort of information gathering is precisely what we call play. And the important function of play is thus revealed: it permits us to gain, without any particular future application in mind, a holistic understanding of the world, which is both a complement of and a preparation for later analytical activities. But computers permit play in environments otherwise totally inaccessible to the average student.

  A still more interesting example is provided by the game Space War, whose development and delights have been chronicled by Stewart Brand. In Space War, each side controls one or more “space vehicles” which can fire missiles at the other. The motions of both the spacecraft and the missiles are governed by certain rules—for example, an inverse square gravitational field set up by a nearby “planet.” To destroy the spaceship of your opponent you must develop an understanding of Newtonian gravitation that is simultaneously intuitive and concrete. Those of us who do not frequently engage in interplanetary space flight do not readily evolve a right-hemisphere comprehension of Newtonian gravitation. Space War can fill that gap.
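
  The rule doing the work in Space War is the inverse square law itself. The sketch below integrates the motion of a single craft around a fixed “planet” at the origin; the constants and starting conditions are arbitrary, chosen only to illustrate the computation the player's intuition is being trained against.

    # Numerical integration of motion in an inverse-square gravitational field.
    import math

    GM = 1.0          # gravitational parameter of the planet (G times its mass)

    def step(pos, vel, dt):
        x, y = pos
        r = math.hypot(x, y)
        # Acceleration from the inverse-square law, directed toward the planet.
        ax, ay = -GM * x / r**3, -GM * y / r**3
        vx, vy = vel[0] + ax * dt, vel[1] + ay * dt   # semi-implicit Euler step
        return (x + vx * dt, y + vy * dt), (vx, vy)

    pos, vel = (1.0, 0.0), (0.0, 1.0)   # roughly a circular orbit for GM = 1
    for _ in range(5):
        pos, vel = step(pos, vel, dt=0.05)
        print(f"craft at ({pos[0]:.3f}, {pos[1]:.3f})")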

  The two games, Pong and Space War, suggest a gradual elaboration of computer graphics so that we gain an experiential and intuitive understanding of the laws of physics. The laws of physics are almost always stated in analytical and algebraic—that is to say, left-hemisphere—terms; for example, Newton’s second law is written F = ma, and the inverse square law of gravitation as F = GMm/r². These analytical representations are extremely useful, and it is certainly interesting that the universe is made in such a way that the motion of objects can be described by such relatively simple laws. But these laws are nothing more than abstractions from experience. Fundamentally they are mnemonic devices. They permit us to remember in a simple way a great range of cases that would individually be much more difficult to remember—at least in the sense of memory as understood by the left hemisphere. Computer graphics gives the prospective physical or biological scientist a wide range of experience with the cases his laws of nature summarize; but its most important function may be to permit those who are not scientists to grasp in an intuitive but nevertheless deep manner what the laws of nature are about.

  There are many non-graphical interactive computer programs which are extremely powerful teaching tools. The programs can be devised by first-rate teachers, and the student has, in a curious sense, a much more personal, one-to-one relationship with the teacher than in the usual classroom setting; he may also be as slow as he wishes without fear of embarrassment. Dartmouth College employs computer learning techniques in a very broad array of courses. For example, a student can gain a deep insight into the statistics of Mendelian genetics in an hour with the computer rather than spend a year crossing fruit flies in the laboratory. Another student can examine the statistical likelihood of becoming pregnant were she to use various birth control methods. (This program has built into it a one-in-ten-billion chance of a woman’s becoming pregnant when strictly celibate, to allow for contingencies beyond present medical knowledge.)
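
  The Mendelian example gives a flavor of how little code such a lesson needs. The sketch below (an illustration, not Dartmouth's program) crosses two heterozygous parents at random many thousands of times and lets the familiar 3:1 ratio of dominant to recessive phenotypes emerge from chance alone.

    # Monte Carlo cross of two Aa parents: the 3:1 phenotype ratio appears statistically.
    import random

    def cross(parent1="Aa", parent2="Aa"):
        # Each parent contributes one allele at random.
        return random.choice(parent1) + random.choice(parent2)

    trials = 10_000
    dominant = sum("A" in cross() for _ in range(trials))
    print(f"dominant phenotype: {dominant / trials:.3f}  (expected about 0.75)")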

 
