The Turing Option


by Harry Harrison


  “I’ll want a say in the decorating.”

  “You pick it out—we’ll pick up the tab. Electronic kitchen, Jacuzzi bath—anything you want. The army engineers will install it.”

  “Offer accepted. When do I get the catalogs?”

  “I have them in my office right now.”

  “Ben—you’re terrible. How did you know I would go along with this plan?”

  “I didn’t know—just hoped. And when you look at it from all sides it really turns out to be the only safe thing to do.”

  “Can I see the catalogs now?”

  “Of course. In this building, room 412. I’ll call my assistant and have her dig them out.”

  Shelly started for the door—then spun about. “I’m sorry, Brian. I should have asked you first if you need me.”

  “I think it’s a great idea. In any case I have some other things to do today away from the lab. What do you say we meet there at nine A.M. tomorrow?”

  “Right.”

  Brian waited until the door had closed before he turned to Ben, chewed his lip in silence before he managed to speak. “I still haven’t told her about the CPU implant in my brain. And she hasn’t asked me about that session where it produced the clue about the theft. Has she mentioned it to you?”

  “No—and I don’t think she will. Shelly is a very private person and I think she extends the same privacy to others. Is it important?”

  “Only to me. What I told you before about feeling like a freak—”

  “You’re not, and you know it. I doubt if the topic will come up again.”

  “I’ll tell her about it, someday. Just not now. Particularly since I have arranged some lengthy sessions with Dr. Snaresbrook.” He glanced at his watch. “The first one will be starting soon. The main reason I am doing this is that I am determined to speed up the AI work.”

  “How?”

  “I want to improve my approach to the research. Right now all that I am doing is going through the material from the backup data bank we brought back from Mexico. But these are mostly notes and questions about work in progress. What I need to do is locate the real memories and the results of the research based upon them. At the present time it has been slow and infuriating work.”

  “In what way?”

  “I was, am, are …” Brian smiled wryly. “I guess there is no correct syntax to express it. What I mean is the me that made those notes was a sloppy note maker. You know how, when you write a note to yourself, you mostly scribble a couple of words that will remind you of the whole idea. But that particular me no longer exists, so my old notes don’t remind me of anything. So I’ve started working with Dr. Snaresbrook to see if we can use the CPU implant to link the notes to additional disconnected memories that are still in my brain. It took me ten years to develop AI the first time—and I’m afraid it will take that long again if I don’t have some help. I must get those lost memories back.”

  “Are there any results of your accessing these memories?”

  “Early days yet. We are still trying to find a way to make connections that I can reliably activate at will. The CPU is a machine—and I’m not—and we interface badly at the best of times. It is like a bad phone connection at other times. You know, both people talking at once and nothing coming across. Or I just simply cannot make sense of what is getting through. Have to stop all input and go back to square A. Frustrating, I can tell you. But I’m going to lick it. It can only improve. I hope.”

  Ben walked Brian over to the Megalobe clinic and left him outside Dr. Snaresbrook’s office. He watched him enter, stood there for some time, deep in thought. There was plenty to think about.

  The session went well. Brian could access the CPU at will now, use it to extract specific memories. The system was functioning better—although sometimes he would retrieve fragments of knowledge that were hard to comprehend. It was as though they came as suggestions from someone else rather than from his own memories. Occasionally, when he accessed a memory of his earlier, adult self, he would find himself losing track of his own thoughts. When he regained control he found it hard to recall how it had felt. How strange, he thought to himself. Am I maintaining two personalities? Can a single mind have room for two personalities at once—one old, the other new?

  The probing certainly was saving a great deal of time in his research and, as the novelty began to wear off, Brian’s thoughts returned to the most serious problems that still beset him on the AI. All the different bugs that led to failures—to breakdowns in which the machine would end up at one extreme of behavior or another.

  “Brian—are you there?”

  “What—?”

  “Welcome back. I asked you the same question three times. You were wandering, weren’t you?”

  “Sorry. It just seems so intractable and there is nothing in the notes to help me out. What I need is to have a part of my mind that is watching itself without the rest of the mind knowing what is happening. Something that would help keep the system’s control circuitry in balance. That’s not particularly hard when the system itself is stable, not changing or learning very much—but nothing seems to work when the system learns new ways to learn. What I need is some system, some sort of separate submind that can maintain a measure of control.”

  “Sounds very Freudian.”

  “I beg your pardon?”

  “Like the theories of Sigmund Freud.”

  “I don’t recall anyone with that name in any AI research.”

  “Easy enough to see why. He was a psychiatrist working in the 1890s, before there were any computers. When he first proposed his theories—about how the mind is made of a number of different agencies—he gave them names like id, ego, superego, censor and so on. It is understood that every normal person is constantly dealing, unconsciously, with all sorts of conflicts, contradictions, and incompatible goals. That’s why I thought you might get some feedback if you were to study Freud’s theories of mind.”

  “Sounds fine to me. Let’s do it now, download all the Freudian theories into my memory banks.”

  Snaresbrook was concerned. As a scientist, she still regarded the use of the implant computer as an experimental study—but Brian had already absorbed it as a natural part of his lifestyle. No more poring over printed texts for him. Get it all into memory in an instant, then deal with it later.

  He did not go back to his room, but paced the floor, while in his mind he dipped first into one part of the text, then another, making links and changing them—then gasped out loud.

  “This has to be it—really it! A theory that fits my problem perfectly. The superego appears to be a sort of goal-learning mechanism that probably evolved on top of the imprinting mechanisms that evolved earlier. You know, the systems discovered by Konrad Lorenz, that are used to hold many infant animals within a safe sphere of nurture and protection. These produce a relatively permanent, stable goal system in the child. Once a child introjects a mother or father image, that structure can remain there for the rest of that child’s life. But how can we provide my AI with a superego? Consider this—we should be able to download a functioning superego for my AI if we can find some way of downloading enough of the details of my own unconscious value structure. And why not? Activate each of my K-lines and nemes, sense and record the emotional values associated with them. Use that data to first build a representation of my conscious self-image. Then add my self-ideal—what the superego says I ought to be. If we can download that, we might be much further on the way toward being able to stabilize and regulate our machine intelligence.”

  “Let’s do it,” Snaresbrook said. “Even if no one has proven yet that the thing exists. We’ll simply assume that you do indeed have a perfectly fine one inside your head. And we are perhaps the first people ever to be in a position to find it. Look at what we have been doing for months now, searching out and downloading your matrix of memories and thought processes. Now we may as well push a little further—only backward instead of forward in time. We can try to do more backtracking toward your infancy, and see if we can find some nemes and attached memories that might correspond to your earliest value systems.”

  “And you think that you can do this?”

  “I don’t see any reason why not—unless what we’re seeking just doesn’t exist. In any case the search will probably involve locating another few hundred thousand old K-lines and nemes. But cautiously. There might be some serious dangers here, in giving you access to such deeply buried activities. I’ll first want to work up a way to do this by using an external computer, while disabling your own internal connection machine for a while. That way, we’ll have a record of the structures we discover in external form, which might be used in improving Robin. This will prevent the experiments from affecting you until we’re more sure of ourselves.”

  “Well, then—let’s give it a try.”

  25

  May 31, 2024

  “Brian Delaney—have you been working here all night? You promised it would just be a few minutes more when I left you here last night. And that was at ten o’clock.” Shelly stamped into the lab radiating displeasure.

  Brian rubbed his fingers over rough revelatory whiskers, blinked through red-rimmed guilty eyes. Equivocated.

  “What makes you think that?”

  Shelly flared her nostrils. “Well, just looking at you reveals more than enough evidence. You look terrible. In addition to that I tried to phone you and there was no answer. As you can imagine I was more than a little concerned.”

  Brian grabbed at his belt where he kept his phone—it was gone. “I must have put it down somewhere, didn’t hear it ring.”

  She took out her own phone and hit the memory key to dial his number. There was a distant buzzing. She tracked it down beside the coffeemaker. Returned it to him in stony silence.

  “Thanks.”

  “It should be near you at all times. I had to go looking for your bodyguards—they told me you were still here.”

  “Traitors,” he muttered.

  “They’re as concerned as I am. Nothing is so important that you have to ruin your health for it.”

  “Something is, Shelly, that’s just the point. You remember when you left last night, the trouble we were having with the new manager program? No matter what we did yesterday the system would simply curl up and die. So then I started it out with a very simple program of sorting out colored blocks, then complicated it with blocks of different shapes as well as colors. The next time I looked, the manager program was still running—but all the other parts of the program seemed to have shut down. So I recorded what happened when I tried it again, and this time installed a natural language trace program to record all the manager’s commands to the other subunits. This slowed things down enough for me to discover what was going on. Let’s look at what happened.”

  He turned on the recording he had made during the night. The screen showed the AI rapidly sorting colored blocks, then slowing—then barely moving until it finally stopped completely. The deep bass voice of Robin 3 poured rapidly from the speaker.

  “ … K-line 8997, response needed to input 10983—you are too slow—respond immediately—inhibiting. Selecting subproblem 384. Response accepted from K-4093, inhibiting slower responses from K-3724 and K-2314. Selecting subproblem 385. Responses from K-2615 and K-1488 are in conflict—inhibiting both. Selecting …”

  Brian switched it off. “Did you understand that?”

  “Not really. Except that the program was busy inhibiting things.”

  “Yes, and that was its problem. It was supposed to learn from experience, by rewarding successful subunits and inhibiting the ones that failed. But the manager’s threshold for success had been set so high that it would accept only perfect and instant compliance. So it was rewarding only the units that responded quickly, and disconnecting the slower ones—even if what they were trying to do might have been better in the end.”

  “I see. And that started a domino effect because as each subunit was inhibited, that weakened other units’ connection to it?”

  “Exactly. And then the responses of those other units became slower until they got inhibited in turn. Before long the manager program had killed them all off.”

  “What a horrible thought! You are saying, really, that it committed suicide.”

  “Not at all.” His voice was hoarse, fatigue abraded his temper. “When you say that, you are just being anthropomorphic. A machine is not a person. What on earth is horrible about one circuit disconnecting another circuit? Christ—there’s nothing here but a bunch of electronic components and software. Since there are no human beings involved nothing horrible can possibly occur, that’s pretty obvious—”

  “Don’t speak to me that way or use that tone of voice!”

  Brian’s face reddened with anger, then he dropped his eyes. “I’m sorry, I take that back. I’m a little tired, I think.”

  “You think—I know. Apology accepted. And I agree, I was being anthropomorphic. It wasn’t what you said to me—it was how you said it. Now let’s stop snapping at each other and get some fresh air. And get you to bed.”

  “All right—but let me look at this first.”

  Brian went straight to the terminal and proceeded to retrace the robot’s internal computations. Chart after chart appeared on the screen. Eventually he nodded gloomily. “Another bug of course. It only showed up after I fixed the last one. You remember, I set things up to suppress excessive inhibition, so that the robot would not spontaneously shut itself down. But now it goes to the opposite extreme. It doesn’t know when it ought to stop.

  “This AI seems to be pretty good at answering straightforward questions, but only when the answer can be found with a little shallow reasoning. But you saw what happened when it didn’t know the answer. It began random searching, lost its way, didn’t know when to stop. You might say that it didn’t know what it didn’t know.”

  “It seemed to me that it simply went mad.”

  “Yes, you could say that. We have lots of words for human-mind bugs—paranoias, catatonias, phobias, neuroses, irrationalities. I suppose we’ll need new sets of words for all the new bugs that our robots will have. And we have no reason to expect that any new version should work the first time it’s turned on. In this case, what happened was that it tried to use all of its Expert Systems together on the same problem. The manager wasn’t strong enough to suppress the inappropriate ones. All those jumbles of words showed that it was grasping at any and every association that might conceivably have guided it toward the problem it needed to solve—no matter how unlikely on the face of it. It also showed that when one approach failed, the thing didn’t know when to give up. Even if this AI worked there is no rule that it had to be sane on our terms.”

  Brian rubbed his bristly jaw and looked at the now silent machine. “Let’s look more closely here.” He pointed to the chart on the machine. “You can see right here what happened this time. In Rob-3.1 there was too much inhibition, so everything shut down. So I changed these parameters and now there’s not enough inhibition.”

  “So what’s the solution?”

  “The answer is that there is no answer. No, I don’t mean anything mystical. I mean that the manager here has to have more knowledge. Precisely because there’s no magic, no general answer. There’s no simple fix that will work in all cases—because all cases are different. And once you recognize that, everything is much clearer! This manager must be knowledge-based. And then it can learn what to do!”

  “Then you’re saying that we must make a manager to learn which strategy to use in each situation, by remembering what worked in the past?”

  “Exactly. Instead of trying to find a fixed formula that always works, let’s make it learn from experience, case by case. Because we want a machine that’s intelligent on its own, so that we don’t have to hang around forever, fixing it whenever anything goes wrong. Instead we must give it some ways to learn to fix new bugs as soon as they come up. By itself, without our help.”

  “So now I know just what to do. Remember when it seemed stuck in a loop, repeating the same things about the color red? It was easy for us to see that it wasn’t making any progress. It couldn’t see that it was stuck, precisely because of being stuck. It couldn’t jump out of that loop to see what it was doing on a larger scale. We can fix that by adding a recorder to remember the history of what it has been doing recently. And also a clock that interrupts the program frequently, so that it can look at that recording to see if it has been repeating itself.”

  “Or even better we could add a second processor that is always running at the same time, looking at the first one. A B-brain watching an A-brain.”

  “And perhaps even a C-brain to see if the B-brain has got stuck. Damn! I just remembered that one of my old notes said, ‘Use the B-brain here to suppress looping.’ I certainly wish I had written clearer notes the first time around. I better get started on designing that B-brain.”

  “But you’d better not do it now! In your present state, you’ll just make it worse.”

  “You’re right. Bedtime. I’ll get there, don’t worry—but I want to get something to eat first.”

  “I’ll go with you, have a coffee.”

  Brian let them out and blinked at the bright sunshine. “That sounds as though you don’t trust me.”

  “I don’t. Not after last night!”

  Shelly sipped at her coffee while Brian worked his way through a Texas breakfast—steak, eggs and flapjacks. He couldn’t quite finish it all, sighed and pushed the plate away. Except for two guards just off duty, sitting at a table on the far wall, they were alone in the mess hall.

  “I’m feeling slightly less inhuman,” he said. “More coffee?”

  “I’ve had more than enough, thank you. Do you think that you can fix your screw-loose AI?”

  “No. I was getting so annoyed at the thing that I’ve wiped its memory. We will have to rewrite some of the program before we load it again. Which will take a couple of hours. Even LAMA-5’s assembler takes a long time on a system this large. And this time I’m going to make a backup copy before we run the new version.”

 
