Murray Leinster


by Billiee J. Stallings; Jo-an J. Evans


  “It was a swell world,” I says, homesick for the dear dead days-before-yesterday. “We was playin’ happy with our toys like little innocent children until somethin’ happened. Like a guy named Joe come in and squashed all our mud pies.” Then it hit me. I got the whole thing in one flash. There ain’t nothing in the tank set-up to start relays closin’. Relays are closed exclusive by logics, to get the information the keys are punched for. Nothin’ but a logic coulda cooked up the relay patterns that constituted logics service. Humans wouldn’t ha’ been able to figure it out! Only a logic could integrate all the stuff that woulda made all the other logics work like this...

  There was one answer. I drove into a restaurant and went over to a pay-logic an’ dropped in a coin.

  “Can a logic be modified,” I spell out, “to cooperate in long-term planning which human brains are too limited in scope to do?” The screen sputters. Then it says:

  “Definitely yes.”

  “How great will the modifications be?” I punch.

  “Microscopically slight. Changes in dimensions,” says the screen. “Even modern precision gauges are not exact enough to check them, however. They can only come about under present manufacturing methods by an extremely improbable accident, which has only happened once.”

  “How can one get hold of that one accident which can do this highly necessary work?” I punch.

  The screen sputters. Sweat broke out on me. I ain’t got it figured out close, yet, but what I’m scared of is that whatever is Joe will be suspicious. But what I’m askin’ is strictly logical. And logics can’t lie. They gotta be accurate. They can’t help it.

  “A complete logic capable of the work required,” says the screen, “is now in ordinary family use in —”

  And it gives me the Korlanovitch address and do I go over there! Do I go over there fast! I pull up the Maintenance car in front of the place, and I take the extra logic outta the back, and I stagger up the Korlanovitch flat and I ring the bell. A kid answers the door.

  “I’m from Logics Maintenance,” I tell the kid. “An inspection record has shown that your logic is apt to break down any minute. I come to put in a new one before it does.”

  The kid says “O.K.!” real bright and runs back to the livin’-room where Joe — I got the habit of callin’ him Joe later, through just meditatin’ about him — is runnin’ somethin’ the kids wanna look at. I hook in the other logic an’ turn it on, conscientious making sure it works. Then I say:

  “Now kiddies, you punch this one for what you want. I’m gonna take the old one away before it breaks down.”

  And I glance at the screen. The kiddies have apparently said they wanna look at some real cannibals. So the screen is presenting a anthropological expedition scientific record film of the fertility dance of the Huba-Jouba tribe of West Africa. It is supposed to be restricted to anthropological professors an’ post-graduate medical students. But there ain’t any censor blocks workin’ any more and it’s on. The kids are much interested. Me, bein’ a old married man, I blush.

  I disconnect Joe. Careful. I turn to the other logic and punch keys for Maintenance. I do not get a services flash. I get Maintenance. I feel very good. I report that I am goin’ home because I fell down a flight of steps an’ hurt my leg.

  I add, inspired:

  “An’ say, I was carryin’ the logic I replaced an’ it’s all busted. I left it for the dustman to pick up.”

  “If you don’t turn ’em in,” says Stock, “you gotta pay for ’em.”

  “Cheap at the price,” I say.


  I go home. Laurine ain’t called. I put Joe down in the cellar, careful. If I turned him in, he’d be inspected an’ his parts salvaged even if I busted somethin’ on him. Whatever part was off-normal might be used again and everything start all over. I can’t risk it. I pay for him and leave him be.

  That’s what happened. You might say I saved civilization an’ not be far wrong.

  I know I ain’t goin’ to take a chance on havin’ Joe in action again. Not while Laurine is livin’. An’ there are other reasons. With all the nuts who wanna change the world to their own line o’ thinkin’, an’ the ones that wanna bump people off, an’ generally solve their problems — Yeah! Problems are bad, but I figure I better let sleepin’ problems lie.

  But on the other hand, if Joe could be tamed, somehow, and got to work just reasonable — He could make me a coupla million dollars, easy. But even if I got sense enough not to get rich, an’ if I get retired and just loaf around fishin’ an’ lyin’ to other old duffers about what a great guy I used to be — Maybe I’ll like it, but maybe I won’t. And after all, if I get fed up with bein’ old and confined strictly to thinking — why I could hook Joe in long enough to ask: “How can a old guy not stay old?” Joe’ll be able to find out. An’ he’ll tell me.

  That couldn’t be allowed out general, of course. You gotta make room for kids to grow up. But it’s a pretty good world, now Joe’s turned off. Maybe I’ll turn him on long enough to learn how to stay in it. But on the other hand, maybe —

  Appendix B.

  “To Build a Robot Brain”

  “To Build a Robot Brain” was published in Astounding Science-Fiction in April 1954. In this essay, Will plays with the idea of how far scientists can go in developing computers that could function as well as human brains. It was selected for this biography because, as an essay, it shows how Will’s mind worked, how he developed an idea and tried to explain it to his audience, as in conversation. The last sentence also gives a glimpse of his inner self and his core beliefs.

  To Build a Robot Brain

  by Murray Leinster

  The technician will use the tools, and assemble the parts. Before that, the physicist-engineer will design the parts. But even before that, the philosopher has to design the concept.

  Not too long ago a man I’ll call Casey got scared nearly to death by a thinking machine. This is not fiction, you understand. This is honest-to-Hannah fact.

  You’d recognize the name of the machine if I told you. It’s one of those big computers with an all-capital-name like a government agency in Washington. It is a honey of a device, with some thousands of vacuum tubes, relays, special devices to prepare tape for it to read, and an electric typewriter to type out its answers.

  It handles letters as well as numbers, and you can feed it lists of names, for example, and it will sort them out alphabetically and make its answer-typewriter write them out in proper sequence. Also it calculates ballistic data and how to make wings for jet planes, and tabulates percentages on presidential elections, and little things like that.

  But it nearly scared Casey to death.

  It was two o’clock in the morning and the machine was running silently as usual. The whole building in which it was set up was empty of people. Maybe a watchman or two on other floors, but nobody but Casey right here on the job.

  Light bulbs glowed at one spot and another, with plenty of darkness in between.

  The thinking machine didn’t even hum. There was no sign of activity anywhere about it, except small indicator-lights on the monitor panel, which turned on and off in a sort of meditative fashion. The spool of metal tape feeding to the computer was turning slowly. Now and again it paused in its movement. That was when the memory banks were being consulted for instructions on memory-data. At such moments the machine was doing exactly what a man does when he scratches his head.

  Casey — and I repeat that this is history, not fiction — leaned back in his comfortable chair. There was a two-spool problem being run through. Somebody else had prepared the tape. Casey was simply there. He hadn’t a thing to do. So, on stand-by watch over the most intellectual machine in creation, Casey was reading a comic book.

  Suddenly there was uproar. Against all precedent, the electric output typewriter was clicking furiously before the problem was solved. A loudspeaker made a din. The thinking machine was working the typewriter and had turned on the loudspeaker alarm to call Casey on the run. He got to the typewriter in a hurry.

  Its keys still clicked. They stopped indignantly, as he read: “Casey, you blank-blanked son-of-a so-and-so, you forgot to change the spool to Number Two.” Casey’s hair stood on end, and he wanted to run. He thought for a moment that the machine had come alive on him and was bawling him out.

  Two seconds later he was hopping mad, of course. As soon as he thought, he knew what had happened. The man who’d prepared the two spools of tape had known Casey would run the problem through. So, at the end of the first tape, he’d zestfully included instructions for the machine to blast the loudspeaker and type that abuse to Casey, before the normal signal for change-of-spools came on.

  When those instructions-on-tape took effect, Casey’s tranquil ease was shattered.

  For a moment though, it had seemed even to Casey that the machine had a personality and reactions of its own. It hadn’t. But most of us are inclined to think that machines have minds of their own, and practically all of us expect that presently we will have actually thinking machines. As of now, the people who handle this machine say that it can only do half of the things a human brain can do — remember, recall, associate these instructions with that action, integrate numerals, and so on. Half of what a human brain can do is rather remarkable, but Casey’s fellow-workers tend to restrain the use of the word “thinking” to the things an electronic computer cannot do.

  Still, what with the progress of science and all, most of us assume that presently we will have robots to do all the heavy labor of the world. Perhaps the most eagerly awaited robots are robot minds to do that especially heavy labor known as thought. But up to now nobody seems to have estimated the problems to be faced in designing a truly thinking machine. Not in print at any rate. The basic principles for the operation of robot minds do not seem to be stated. Here goes.

  It looks rather promising at the beginning. A baby starts out with a mind that is blank of information and ideas. It receives sense-perceptions of this and that. After some tens of thousands of days, during which its eyes and ears and fingers and sensory equipment generally feed data to it, the formerly blank mind has a reasonably coherent idea of the universe around it. In fact, a baby starts out as a potentially rational animal, and with nothing but constant information to help, winds up an adult with occasional flashes of reasonableness.

  A thinking machine should be able to duplicate that, with greater ease and more efficiency. A machine that is to think about science doesn’t need all the data a human needs for living. A machine doesn’t need to know what will happen if he drinks boiler-makers, because it won’t drink. It needn’t know the difference between Republicans and Democrats. It won’t vote. A great deal of painfully learned information can be skipped by a machine which has no gender.

  So a robot’s brain can work to splendid advantage with only the education needed for its specialty.

  We don’t have to duplicate interests to make a useful machine. It has to be able to take in information — the computer just referred to does just that, and so does a human baby — and make use of it. The computer that scared Casey takes its information from dots of magnetism on a metal tape. It would seem that if one feeds specialized information to a thinking machine — a robot brain — with specialized interests, it should reason merrily away. A computer is “interested” only in numerals and letters. Make a brain to handle other thoughts, and it should reason with a speed and precision no man could duplicate. Given a process for thinking instead of computation, it seems that we should be able to make a high-speed, high-precision brilliant brain.

  The process for thinking looks practical enough. With symbolic logic one can reduce any problem to graphic statement and the processes of logic are beautifully adaptable to robot operations.

  Take a routine logical operation. “James is a man. A man is a rational animal. Therefore James is a rational animal.”

  Put symbolically, it reads:

  J)M

  M)RA

  That’s the problem only: “The idea of ‘James’ implies or includes the idea of ‘Man.’ The idea ‘Man’ implies or includes the idea ‘Rational Animal.’” Such a problem can be fed to a perfectly practical machine-brain. It will cancel the identical terms, and come up with:

  J)M

  M)RA

  J)RA

  “The idea of ‘James’ implies or includes the idea ‘Rational Animal.’” In short, ‘James’ is a rational animal. Nothing could be clearer, and Aristotle himself couldn’t do better. It is certainly within the capacity of a machine to do so. We could use numbers instead of letters, to stand for our terms, like a sort of algebra used hind-end-foremost. So:

  Let 5 — James

  6 — Man

  7 — Rational Animal

  We get:

  5)6

  6)7

  5)7

  Here numerals — familiarly used in machines — are used in a mechanical duplication of thought. It works. Obviously, a machine can be made to perform logical operations — which is to say, to think.
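  The cancellation Leinster describes can be sketched in a few lines of modern Python (an editorial illustration; nothing here appears in the 1954 essay). Each “implies or includes” statement is treated as an ordered pair, and pairs sharing a middle term are cancelled against one another until nothing new can be derived:

```python
# Editorial sketch of the essay's cancellation step: from 5)6 and 6)7,
# cancel the shared middle term to derive 5)7.

def chain(implications):
    """Derive every A)C obtainable by cancelling matched middle terms."""
    derived = set(implications)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))  # cancel the shared term b
                    changed = True
    return derived

facts = {("James", "Man"), ("Man", "Rational Animal")}
print(chain(facts))  # derives ("James", "Rational Animal")
```

  Feeding the syllogism’s two premises in as pairs yields the pair for the conclusion, exactly as the J)M, M)RA, J)RA layout above has it.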


  You might contemplate this lovely set-up for a while. If you care to gloat over it, go ahead. Have your fun. But there is a slight objection that can be raised, which ultimately produces a small chilly sensation in the midsection of one’s enthusiasm. This is a thinking process that a machine can perform. But it does not necessarily give a right answer under normal operating conditions with a man working the machine.

  Mr. Will Durant exemplifies the catch in his book, “The Story of Philosophy.” His raising of the point will do as well as any. Using the same logical process, only with the name “Socrates” instead of “James,” he arrives at the same result: “Socrates is a rational animal.” But then he triumphantly points out that this particular Socrates might be insane, in which case no logic would make him rational.

  The objection is not quite right, of course. When we say that James or Socrates is a man, and that man is a rational animal, we use the term “man” with the same value in both statements. We reassert the equality of meaning when we let the two cancel mathematically or logically or otherwise. Mr. Durant didn’t think of that. His argument would be expressed by somebody using the numerical expression above, with 5s and 6s and 7s, and then crying gleefully: “April Fool! One of those sixes wasn’t a six, but only five and seven-eighths! So your system of thinking doesn’t work!” It is a way of saying that a method is wrong if it isn’t proof against cheating. I think one can drop the objection — qua objection — in the wastebasket.

  But one cannot dismiss the objection that if a robot brain has to depend on the honesty or the reasonableness of the human who gives it information, then the answers are going to depend on the man and not the machine. This is true of mathematical computers, but people do not have opinions about numbers. They are neither dishonest nor unreasonable when they ask for the result of the integration of numerals. But they do cheat when they ask questions about matters of general interest — which is exactly why we want a thinking machine, a robot brain, to be able to answer.

  A thinking machine has a highly special requirement for utility. It has to have sense. It has to be presented with the problem, not merely with symbols plus instructions to do such-and-such with them. That is where a computer falls short of being a thinking machine. It does not do anything better or more brilliantly than a human brain. It simply and exclusively does it faster. But a real robot brain will need to be smarter than mere men, or there is no point in making one.

  To dodge the difficulty of depending on a man to tell it what to do and what with, a true thinking machine needs to understand a problem presented to it, so that it can tell whether it has adequate data for a solution. Make a machine that can tell you when it needs information, and what kind, and that means you have, at least, a rudimentary thinker right away.

  But if it depends on men to provide it with information, it will be slow! And also it will accept any data given it. It can hardly tell that a man says six when it really is five and seven-eighths. So such a machine will be slow and no more accurate in its answers than the man-provided information. For accuracy alone — not to mention speed — a useful robot brain will need to hunt up the information to solve any problem presented to it. The only useful kind of robot brain will accept a problem, devise its own method of solution, seek out the data needed for the solution, and then produce the answer.
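  The loop Leinster outlines — accept a problem, check whether the data for a solution is on hand, and ask for exactly what is missing rather than guess — can likewise be sketched as modern code (again an editorial illustration, with hypothetical names; it stands in for the data-seeking the essay imagines):

```python
# Editorial sketch: a "rudimentary thinker" that knows when it lacks data.
# It either answers the problem or names the missing information.

def solve(problem, required, known):
    """Return an answer, or a request naming the missing data."""
    missing = [key for key in required if key not in known]
    if missing:
        return "need data: " + ", ".join(missing)  # ask, don't guess
    return problem(known)

area = lambda data: data["length"] * data["width"]

print(solve(area, ["length", "width"], {"length": 3}))
# prints "need data: width"
print(solve(area, ["length", "width"], {"length": 3, "width": 4}))
# prints 12
```

  A machine that can tell you it still needs the width before computing an area is, in the essay’s terms, already a rudimentary thinker.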


  And in theory, at this point, that looks possible. A robot brain could use photocells for eyes, microphones for ears, and all sorts of artificial sensory organs to gather information. As a matter of fact, our most accurate information comes from artificial sensory devices. Microscopes are sharper than eyes and microphones than ears. Spectroscopes can gather information our senses balk at, calipers make measurements we can’t approach, and in case of need, a robot brain might use an electron microscope to get accurate information otherwise unobtainable.

  A robot brain could, then, have information of a much higher degree of accuracy than we human brains can attain. Its information would not be slanted by prejudice, distorted by personal errors of observation, or tied in knots by emotional associations. A robot brain that gathers its own information should be vastly better informed than any man could possibly be. It should think with strictly accurate logical processes. It should think sounder, faster, more sanely. A robot brain like this is exactly what we want — and do we need it!

  But I suggest a slight pause here for a deflation announcement.

 
