This inherent capability of 79 meant that the computer could, when necessary, reject a single solution to a problem. It did this by peering into its own future requirements to ponder a multiplicity of choices. In essence, 79 could make a selection based on the "human prerogative" of choosing the lesser of two evils. It did just this in respect to its own caretaking.
Since 79 enjoyed the benefits of such deduction—I insist this is reasoning—there existed minimal difficulties in permitting the machine to attend to its "sustenance and repair." Not only how to button its pants, but to replace buttons that had broken off. The breeder reactor was an excellent example. If 79
required additional power to operate simultaneously an unusually large number of subsystems, or to activate new systems, or even to increase the cooling capacity of its components, it sensed this need through the instruments located throughout its structure. It judged also the specific quantity of its power requirements, and transmitted this data to its mechanical nerve center—the functions that are performed for the human body by the autonomic centers of the brain. That is, the nonconscious controls of body temperature, rate of digestion, rate of respiration, and so forth. In a sense, the mechanical nerve center of 79 was autonomic. After all, the computer did respond to deficiencies or requirements of its system, and trying to draw a hard line between the flesh system and the electronic-mechanical one becomes a classic instance of splitting semantic hairs.
It required only seconds for 79 to diagnose a problem and to institute corrective action. Robot devices within the complex could switch into any area of the system. They would jolt a failed module with an electrical charge that unplugged the unit. Robot clamps removed it from place, slid in the new module, and jolted the replacement to lock the plug-in. Immediately the maintenance center tested the unit, evaluated its performance, and commanded the robot servo-units to return to standby.
Ever since the assembly of 79 had begun, thousands of plug-in modules had been manufactured and stored where they could be handled by the servomechanisms of the cybernetics complex. Thus 79
was diagnostician, doctor, and surgeon all at the same time. Those parts removed from the system for improper assembly or parts failure were carried to a disposal department where, in effect, 79 rejected the faulty unit.
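The maintenance cycle just described, diagnose, unplug, replace, test, dispose, can be imagined as a short program. Everything below (the `Module` and `MaintenanceCenter` names, the method structure) is my own invention for illustration; the novel specifies no such code.

```python
# A hypothetical sketch of 79's self-maintenance cycle: scan installed
# modules, send failures to disposal, slide in a stored spare, and test it.
# All class and method names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Module:
    slot: str
    healthy: bool = True

class MaintenanceCenter:
    def __init__(self, spares):
        self.spares = list(spares)   # pre-manufactured plug-in modules
        self.disposal = []           # faulty units rejected by the system

    def diagnose_and_repair(self, installed):
        """Diagnose each slot; swap out failed modules and test replacements."""
        for slot, module in installed.items():
            if not module.healthy:
                self.disposal.append(module)       # remove the failed unit
                replacement = self.spares.pop()    # slide in the new module
                replacement.slot = slot
                if self.self_test(replacement):    # evaluate its performance
                    installed[slot] = replacement
        return installed

    def self_test(self, module):
        # Stand-in for the immediate post-replacement evaluation.
        return module.healthy
```

The point of the sketch is only the shape of the loop: the faulty unit goes to disposal, the spare is installed and verified, and the robots return to standby with no human in the circuit.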
It didn't take long for the staff psychologists to mark a red flag on the operations board of 79's disposal and repair shops. As predicted, the technicians there became sullen, and finally vented their emotions in destructive rage. In effect they wanted to kick in the teeth of the machine that required them only to pick up the rejected modules of the system. No one wants to be a garbage man for a superbrain that couldn't care less.
Within three or four years, we predicted, working only with the raw ingredients of its mechanical and electronic systems, 79 would be almost wholly free of human servicing requirements. Beyond that?
Well, it was only a matter of time and mechanics for such a system to be linked with automated factories so that it could produce its own parts, have the materials delivered, and function without human counsel or assistance.
I keep returning to the unique security system built into the great mountainous complex of Project 79. Because of the element of human failure, the decision had been made early in the program to entrust the computer itself with a major portion of its own security defenses. This was one area where I fell immediately into conflict with the project heads—including Tom Smythe.
Soon after I arrived I reviewed the many projects already under way with 79. I was astonished to discover that even as Chief Programmer I was refused access to several working studies—on the grounds that I lacked the all-important "need to know" of the work under way. In particular, a program identified only as DOD 6194—against which I would stumble more than once in the coming months—brought on heated words. I insisted that as Chief Programmer I must be cognizant of any effort involving 79; otherwise there could exist interference of which I knew nothing, and that could affect the accuracy of my own work.
But it was a stone wall I couldn't tear down or go over. The decision on this fracturing of knowledge, Smythe told me, had the full backing of the White House.
I insisted that putting Project 79 in charge of its own security was a grave error. There is a random factor in the human being—total unpredictability is a precious asset to Homo sapiens—that could never be included within the rationalization capabilities of any computer. Obviously, every possible element of the computer should be utilized for the purposes of its own security. But the bulk of that responsibility?

Jesus, no!
You might as well give a six-year-old a loaded .38 and tell him to defend the household. He'll never do the job because he can't possibly recognize the many potential dangers.
Notwithstanding my arguments, the security system had been programmed long before my arrival, and I couldn't do anything about it. I was more concerned with the computer's internal security system developing electronic suspicions to the extent that it would interfere with my work than I was about skulking saboteurs trying to muck up the works. Because 79's security consciousness was a constant thorn in my side.
No human being could penetrate the critical areas of 79 without being passed through by the computer itself. This is where the personal identifications programs, the medical processing, came into use. Let's say I wanted to reach the subsystem within the complex where incoming voltage was broken down to the specific values required to operate Quadrant 4A96 of the memory cells. Before I could enter the area, I had to program the request, identify all personnel involved, justify their presence, and estimate the time required for the survey and, if necessary, alterations to the system. We couldn't move until 79 evaluated all aspects of the request and then signified entry clearance.
If we listed among the technical crew to enter Quadrant 4A96 a janitor, for example, 79 would deny permission for that man to penetrate the Quadrant. And 79 had the means to enforce that denial.
A lethal means.
Before he could reach the Quadrant the man must pass through a single-file corridor where 79 ran its security check. Computer systems examined fingerprints, ID plates, and retinal pattern. If the intruder reached this far, he dared not attempt further passage. Alarms went off immediately, and automatic systems sealed off the area to permit human security forces to apprehend and remove the intruder. But if the intruder decided to rush ahead, he still must force the restricting corridor. He might have explosives, of course, but if he moved more than six feet in either direction the entire corridor came alive—with death.
Laser beams crisscrossed the corridor in a pattern no human being could have survived.
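The clearance procedure the narrator describes, list the crew, justify the entry, and let the computer pass or deny the request, might be sketched as follows. The quadrant designation comes from the text; the authorization table, role names, and function signature are all assumptions of mine.

```python
# Hypothetical sketch of 79's entry-clearance check. Access is granted
# only if every listed person holds a role authorized for the quadrant;
# a single unauthorized member (the janitor) blocks that individual.
# The authorization table and role names are invented for illustration.

AUTHORIZED = {
    "4A96": {"chief_programmer", "power_technician"},
}

def request_entry(quadrant, crew):
    """crew: list of (name, role) pairs. Returns (granted, rejected_names)."""
    allowed = AUTHORIZED.get(quadrant, set())
    rejected = [name for name, role in crew if role not in allowed]
    return (not rejected, rejected)
```

In the novel the denial is backed by physical enforcement; the sketch captures only the evaluation step, the part where 79 weighs the request before signifying clearance.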
Sometimes, thinking about these elaborate computer systems, you couldn't help wondering who was telling whom what to do.
10
Ever since electronic computers were first spawned, cyberneticists have stood off alarmists decrying man's apparent willingness to delegate human responsibilities to artificial brains. The controversy, at times quite heated, revolves about questionable intrinsic responsibilities and a semantics morass. First to be argued was the degree of intelligence in the advanced cybernetics system. Second, to confound the first, there were a thousand shouted theories as to just what was intelligence. And, finally, there loomed the question of whether man would, or even could, create an artificial entity superior to himself.
Except for coffee klatsches at which we vented our theories and counterarguments, these questions, in respect to Project 79, were academic. For the very good reason that we had no intention of creating a catch-all, do-anything intelligence. 79 could not tap-dance or sing Madama Butterfly or do possibly a million other things. It couldn't, and if it really had any degree of intelligence I'm quite certain it wouldn't bother.
Its sole justification was to think.
The idea that man must always enjoy a native superiority to the computer—the superiority called hunch—was so much rubbish. In the area so cherished both by the humanists and the cyberneticists, the famed neurophysiologist Professor W. G. Walter sounded what must have seemed heresy to the humanists. "What would really be useful," claimed the professor, "is a sort of hunch generator. The hunch capacity of the mortal doctor or scientist is just being strained to the breaking point; there is too much data to examine and analyze for each patient. The robot could be fed this data and produce for us some diagnostic hunch based on the patient's pulse rate, age, and electrocardiogram."
Heresy.
Nevertheless, quite true.
A hunch is an element of human thought, a capacity of intelligence. It is not mystical. Those who misunderstood cybernetics cloaked the mental activity we call hunch with such mysticism, as if it were a holy property reserved only for the mind of man.
Well, perhaps not heresy. But certainly nonsense.
The computer represents the augmentation of human brain power. It's as simple as that. Essentially it achieves this goal through the compression of time, making up, as an extension of man's intellect, for the distressingly brief number of years during which man is intellectually productive.
A hunch is admissive of an inferiority in data. In lieu of final objective data, a man shrugs his shoulders, and guesses. It's an intellectual flip of the coin, no matter how refined the process.
And the hunch will be with us for a long time to come because there will always be a shortage of data. So the hunch is for the machine, as well as for man, an integral element of intellectual conduct.
That's why Project 79 had a Heuristics Division, under the able direction of Dr. Selig Albracht.
Heuristics? In cybernetics language this is the ability to "play hunches." Where there wasn't a specific, hard solution to a problem, heuristics considered every factor conceivable that affected the problem, and permitted you to make the best of what might be just a bad bargain of data. Because of the computer's vast memory, its ability to cross-check and cross-feed, and to forget nothing it knew, all within the span of seconds, it gave you a fourteen-karat hunch with which to commit yourself.
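As a toy illustration of heuristics in this sense, weighing every known factor and committing to the best available answer despite gaps in the data, one might write something like the following. The factor names and weights are invented; the novel describes the idea, not an algorithm.

```python
# Toy sketch of heuristic choice under incomplete data: score each option
# on whatever factors are known (a missing factor simply contributes
# nothing) and commit to the highest score -- a hunch, not a proof.
# The factors and weights are invented for illustration.

def hunch(options, weights):
    """options: {name: {factor: value}}, possibly with factors missing."""
    def score(factors):
        return sum(weights.get(f, 0.0) * v for f, v in factors.items())
    return max(options, key=lambda name: score(options[name]))
```

The "fourteen-karat" quality the narrator claims comes not from the scoring rule, which is trivial, but from the breadth of cross-checked memory feeding it: the machine forgets nothing it knows, and weighs all of it in seconds.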
Early in the developing science of cybernetics, the conclusion was drawn that the artificial brain enjoyed its greatest potential if it were patterned, physically, after the structure of the human brain. This conclusion was another element of intense controversy, for, with all its apparent advantages, the so-called nerve-net system inevitably must produce a machine of crippling bulk and complexity.
The initial attempts to create a realistic artificial intelligence sounded a call for assistance from other sciences. One of these, with which I was to become deeply involved, was the burgeoning field of bionics.
Which interested me personally as well as involved my work.
The liaison assigned to my office from the Project 79 Bionics Division was sensationally packaged.
Her name was Kim Renee Michele, and there were times when I found it difficult to concentrate upon our work together in bionics. Both the word and Kim's proximity impressed upon me ideas of a more direct biological nature than called for in our work. Somewhat unscientifically, Kim hit me with all the effect of an avalanche. Unfortunately for my own biological urges, Kim never mixed immediate business with potential pleasure. . . .
Dr. Howard Vollmer rested secure in his ivory tower as the world's leading bionics scientist. As the director of the Bionics Division of Project 79, he worked closely with my office. No, that's not quite accurate. We worked closely with his division. I took advantage of every opportunity to spend time with Dr. Vollmer. The elderly scientist was more than brilliant; he was singular in his ability to make of bionics a meaningful force in cybernetics. There was another reason. Kim Michele usually was present during Vollmer's meetings with what he insensitively called "oafish young scientists."
Dr. Vollmer rarely was voluble; when those infrequent occasions came to pass they were memorable not only for their duration but also for Vollmer's ability to strike at the heart of a situation. I came away from one of our early meetings with the hope for more.
"In essence, Mr. Rand," Dr. Vollmer had begun after making a steeple of his fingertips and peering owlishly at me, "we are involved here in a think factory. Umm, yes, that expression is applicable. A think factory. Um-hum . . . but this demands that we, as much as our electronic playmates, must think. Eh? We have a sense of, umm, free-wheeling programming." He blinked rapidly at me, going through a series of rubbery facial expressions.
I started to reply but decided instead to make a careful study of the physical attributes of Kim. I had the distinct impression that Dr. Vollmer was aware of his fearsome reputation. Or perhaps he wasn't and just didn't care.
"What we are attempting in Project 79, Mr. Rand, and especially with your communications with the computer, when the time is right, is to enter the future without stumbling. We must equip the human race—um, yes, that is it concisely. To arm, prepare, equip the race with the means to think and to anticipate so swiftly that when an event other than that of our own making directly concerns us, possibly even our survival, we need not be caught, as in the popular idiom, flatfooted."
For several moments, immersed in a train of thought, he wandered away. Immediately I began some calculations of my own, my eyes openly admiring Kim's figure. Wonderful, just wonderful what she did to that blouse. . . . Obviously she received the message of my thoughtless staring, for I received a sudden glare and a delightful blush.
Vollmer went right on. "We're really on a threshold, you know," he said with a careful look at me.
"It may prove epochal if we're successful."
"I know—"
He motioned me to silence. He had the floor, and my place was to sit at his feet, and listen.
"We are creating a brain, Mr. Rand. The brain. We are building and shaping blocks of neurons.
And though I detest specious contests with my colleagues, you will of course encounter their fallacious arguments—with which I am unconcerned. Therefore, having the advantage of the bionics approach, you will better thread your way through the inevitable conflicts of any program in its making."
Having, in his own mind, demolished his colleagues, he proceeded in the full confidence to which he was so accustomed.
"Where was I? Um, yes; we have an intelligence. You will note that I do not refer to an artificial intelligence?" I had noted that—immediately, in fact. "I do not call this an artificial intelligence, Mr. Rand,"
he continued, "because there is nothing artificial in nature. There is only rearrangement. Therefore"—he said this as a pronouncement—"we have an intelligence. . . .
"Man has been trapped within a cranial prison for a hundred thousand years. Too long, too long, I daresay. Do you know that your brain is no larger, no more capable than that of some distant ancestor who ate the fleas from his mate's hide while they both squatted miserably in some cave? It is time for a change; it is long overdue for a change."
His hand slapped sharply against his desk. "The only measure of intelligence," he said slowly, "is determined by what you do with that intelligence. Well, we are doing something. Now, physically, Mr.
Rand, our 79 is a reproduction, umm, in only a dim sense, I suppose, of the biological nervous system.
Kim, here," offhandedly he waved at her, "assures me that through your office we may have extraordinarily intimate communication with that system. Umm, kaff?"
What the devil do you answer to "umm, kaff"? I never did respond; Dr. Vollmer rattled on without further pause.
In truth, Dr. Vollmer's preoccupation accented the need for understanding, clearly and concisely, just what we were to attempt. This was my task—to comprehend the whole and to work with it to produce meaningful man-to-machine communications. In so doing, I would be neither architect nor mason.
I was a communicator.
Baby-Sitter.
Many of the scientists with whom I worked exhibited insatiable greed in their search for absence of size. Halted finally even with microminiaturization, they turned to the nuclear physicists about them blandly to request working blocks of superdense matter within which nuclear structure was condensed. And that was quite a request! In effect they were being asked to duplicate the conditions within stars where matter is compressed to densities most of us could never imagine.
Dr. Vollmer's staff worked not only with manipulated electronic neurons but also with experimental force blocks of disturbed and rearranged atoms. Wildly violent forces were restrained within fields of electromagnetic energy. Deep within the interior of such laboratories, shielded by massive steel walls ... I tried to envision an artificial intelligence groping for comprehension of its own existence. An intelligence of unknown capacity, of atoms compressed, stripped of their electrons, where subnuclear forces perform neural functions of sorting, comparing, rejecting, seeking, weighing, analyzing, computing ... a flickering of raw intelligence among and within living spaces of neurons. A world forever beyond the world of men where the elemental matter of the universe works for man.
Connected, linked, tied in, communicating with us ... the core of 79. Which on the basis of available information could make meaningful decisions.
This is what I sought to bring forth from the entity created by so many others. A device—not an artificial device, as Dr. Vollmer insisted, but one in which the basic building blocks of matter were rearranged by man—that could perform the functions of deductions, that would be capable of achieving a decision-making capability so akin to that of the skilled mind that it would be virtually impossible to distinguish one from the other.
If we could hurdle the critical moments of bringing 79 to life, if we could impart to the man-created brain awareness and its first glimmerings of comprehension ... if we could do this without stumbling, then we would be at the final moment, what is so often referred to as "that moment of truth."
When a mind, man or machine, is involved in seeking the solution to a problem, that mind must know when the problem is solved.