Gold


by Isaac Asimov


  In 1752, the French satirist Voltaire wrote Micromegas, in which visitors from Saturn and Sirius observe the Earth, but this cannot be taken literally. The visitors are merely Voltaire’s device for having Earth viewed with apparent objectivity from without in order to have its follies and contradictions made plain.

  But then in 1877, there was the discovery of thin, dark markings on Mars. These were interpreted by some as “canals,” and the American astronomer Percival Lowell was convinced that they were artificial waterways built by intelligent beings trying to use the ice of the polar caps to maintain agriculture on their increasingly desiccated planet. He wrote books on the subject in the 1890s that created quite a stir.

  The British science fiction writer Herbert George Wells proceeded to make use of the notion and, in 1898, published The War of the Worlds, the first significant tale of the invasion and attempted conquest of Earth by more advanced intelligences from another world (in this case, Mars). I have always thought that Wells, in addition to wanting to write an exciting story with an unprecedented plot, was also bitterly satirizing Europe. At the time he wrote, Europeans (the British, particularly) had just completed dividing up Africa without any regard for the people living there. Why not show the British how it would feel to have advanced intelligences treat them as callously as they were treating the Africans?

  Wells’s novel created a new subgenre—tales of alien invasion. The manner in which Wells made the Martians unpitying exploiters of humanity (for the sake of excitement and, I believe, satire); the memory, perhaps, of the Mongol invasion; the feeling of guilt over the European despoliation of all the other continents; combined to make it conventional to have the alien invaders unfeeling conquerors, for the most part.

  Actually, we have no reason to think this would be so. As far as we know, no invaders from without have ever reached Earth and, for a variety of reasons, it might be argued that none ever will. However, if they do come, there is no a priori reason to suspect they won’t come in friendship and curiosity, to teach and to learn.

  Yet such is the power of humanity’s own shameful history and the conventions of fiction that very few people would be willing to consider alien invaders coming in peace as a real possibility. In fact, when plaques and recordings were placed on rocket probes designed to leave the solar system and go wandering off into interstellar space, in order that alien intelligences (if any) might find them someday, millions of years in the future, and that they might thus learn that Earthmen had once existed—there were those who thought it a dangerous process. Why advertise our existence? Why encourage ferocious aliens to come here in order to ravage and destroy?

  Here, then, in this collection, are stories of alien invasion. We have selected a variety of contemporary treatments of the problem, some a matter of excitement, some thoughtfully philosophic, some even funny. They view the possibility from all angles and stretch our minds on the matter, as good science fiction should.

  The Science Fiction Blowgun

  In science fiction, experience seems to show that long stories have an advantage over short ones. The longer the story, all things being equal, the more memorable.

  There is reason to this. The longer the story, the more the author can spread himself. If the story is long enough, he can indulge himself in plot and subplot with intricate interconnections. He can engage in leisurely description, in careful character delineation, in thoughtful homilies and philosophical discussions. He can play tricks on the reader, hiding important information, misleading and misdirecting, then bringing back forgotten themes and characters at the moment of greatest effect.

  But in every worthwhile story, however long, there is a point. The writer may not consciously put it there, but it will be there. The reader may not consciously search for it, but he’ll miss it if it isn’t there. If the point is obtuse, blunt, trivial or non-existent, the story suffers and the reader will react with a deadly, “So what?”

  Long, complicated stories can have the point well hidden under cloaking layers of material. Academic people, for whom the search for the point is particularly exciting, can whip their students to the hunt, and works of literature that are particularly deep and rich can elicit scholarly theses without number that will deal with the identification and explanations of points and subpoints.

  But now let’s work toward the other extreme. As a story grows shorter and shorter, all the fancy embroidery that length makes possible must go. In the short story, there can be no subplots; there is no time for philosophy; what description and character delineation there is must be accomplished with concision.

  The point, however, must remain. Since it cannot be economized on, its weight looms larger in the lesser overall bulk of the short story.

  Finally, in the short short story, everything is eliminated but the point. The short short story reduces itself to the point alone and presents that point to you like a bare needle fired from a blowgun; a needle that can tickle or sting and leave its effect buried within you for a long time.

  Here, then, are some points made against the background and with the technique of science fiction. A hundred of them, to be exact, each from the science fiction blowgun of a master (to be modest, there are also a couple of my own stories), and each with a one-line introductory blurb by myself.

  Now, since it would make no sense to have an introduction longer than the stories it introduces, and having made my point—I’ll stop.

  The Robot Chronicles

  What is a robot? We might define it most briefly and comprehensively as “an artificial object that resembles a human being.”

  When we think of resemblance, we think of it, first, in terms of appearance. A robot looks like a human being.

  It could, for instance, be covered with a soft material that resembles human skin. It could have hair, and eyes, and a voice, and all the features and appurtenances of a human being, so that it would, as far as outward appearance is concerned, be indistinguishable from a human being.

  This, however, is not really essential. In fact, the robot, as it appears in science fiction, is almost always constructed of metal, and has only a stylized resemblance to a human being.

  Suppose, then, we forget about appearance and consider only what it can do. We think of robots as capable of performing tasks more rapidly or more efficiently than human beings. But in that case any machine is a robot. A sewing machine can sew faster than a human being, a pneumatic drill can penetrate a hard surface faster than an unaided human being can, a television set can detect and organize radio waves as we cannot, and so on.

  We must apply the term robot, then, to a machine that is more specialized than an ordinary device. A robot is a computerized machine that is capable of performing tasks of a kind that are too complex for any living mind other than that of a man, and of a kind that no non-computerized machine is capable of performing.

  In other words, to put it as briefly as possible:

  robot = machine + computer

  Clearly, then, a true robot was impossible before the invention of the computer in the 1940s, and was not practical (in the sense of being compact enough and cheap enough to be put to everyday use) until the invention of the microchip in the 1970s.

  Nevertheless, the concept of the robot—an artificial device that mimics the actions and, possibly, the appearance of a human being—is old, probably as old as the human imagination.

  The ancients, lacking computers, had to think of some other way of instilling quasi-human abilities into artificial objects, and they made use of vague supernatural forces and depended on godlike abilities beyond the reach of mere men.

  Thus, in the eighteenth book of Homer’s Iliad, Hephaistos, the Greek god of the forge, is described as having for helpers, “a couple of maids…made of gold exactly like living girls; they have sense in their heads, they can speak and use their muscles, they can spin and weave and do their work.” Surely, these are robots.

  Again, the island of Crete, at the time of its greatest power, was supposed to possess a bronze giant named Talos that ceaselessly patrolled its shores to fight off the approach of any enemy.

  Throughout ancient and medieval times, learned men were supposed to have created artificially living things through the secret arts they had learned or uncovered—arts by which they made use of the powers of the divine or the demonic.

  The medieval robot-story that is most familiar to us today is that of Rabbi Loew of sixteenth-century Prague. He is supposed to have formed an artificial human being—a robot—out of clay, just as God had formed Adam out of clay. A clay object, however much it might resemble a human being, is “an unformed substance” (the Hebrew word for it is “golem”), since it lacks the attributes of life. Rabbi Loew, however, gave his golem the attributes of life by making use of the sacred name of God, and set the robot to work protecting the lives of Jews against their persecutors.

  There was, however, always a certain nervousness about human beings involving themselves with knowledge that properly belongs to gods or demons. There was the feeling that this was dangerous, that the forces might escape human control. This attitude is most familiar to us in the legend of the “sorcerer’s apprentice,” the young fellow who knew enough magic to start a process going but not enough to stop it when it had outlived its usefulness.

  The ancients were intelligent enough to see this possibility and be frightened by it. In the Hebrew myth of Adam and Eve, the sin they committed was that of gaining knowledge (eating of the fruit of the tree of knowledge of good and evil; i.e., knowledge of everything), and for that they were ejected from Eden and, according to Christian theologians, infected all of humanity with that “original sin.”

  In the Greek myths, it was the Titan Prometheus who supplied fire (and therefore technology) to human beings, and for that he was dreadfully punished by the infuriated Zeus, the chief god.

  In early modern times, mechanical clocks were perfected, and the small mechanisms that ran them (“clockwork”)—the springs, gears, escapements, ratchets, and so on—could also be used to run other devices.

  The 1700s was the golden age of “automatons.” These were devices that could, given a source of power such as a wound spring or compressed air, carry out a complicated series of activities. Toy soldiers were built that would march; toy ducks that would quack, bathe, drink water, eat grain and void it; toy boys that could dip a pen into ink and write a letter (always the same letter, of course). Such automata were put on display and proved extremely popular (and, sometimes, profitable to the owners).

  It was a dead-end sort of thing, of course, but it kept alive the thought of mechanical devices that might do more than clockwork tricks, that might be more nearly alive.

  What’s more, science was advancing rapidly, and in 1791, the Italian anatomist Luigi Galvani found that under the influence of an electric spark, dead muscles could be made to twitch and contract as though they were alive. Was it possible that electricity was the secret of life?

  The thought naturally arose that artificial life could be brought into being by strictly scientific principles rather than by reliance on gods or demons. This thought led to a book that some people consider the first piece of modern science fiction—Frankenstein, by Mary Shelley, published in 1818.

  In this book, Victor Frankenstein, an anatomist, collects fragments of freshly dead bodies and, by the use of new scientific discoveries (not specified in the book), brings the whole to life, creating something that is referred to only as the “Monster” in the book. (In the movie, the life principle was electricity.)

  However, the switch from the supernatural to science did not eliminate the fear of the danger inherent in knowledge. In the medieval legend of Rabbi Loew’s golem, that monster went out of control and the rabbi had to withdraw the divine name and destroy him. In the modern tale of Frankenstein, the hero was not so lucky. He abandoned the Monster in fear, and the Monster—with an anger that the book all but justifies—in revenge killed those Frankenstein loved and, eventually, Frankenstein himself.

  This proved a central theme in the science fiction stories that have appeared since Frankenstein. The creation of robots was looked upon as the prime example of the overweening arrogance of humanity, of its attempt to take on, through misdirected science, the mantle of the divine. The creation of human life, with a soul, was the sole prerogative of God. For a human being to attempt such a creation was to produce a soulless travesty that inevitably became as dangerous as the golem and as the Monster. The fashioning of a robot was, therefore, its own eventual punishment, and the lesson, “there are some things that humanity is not meant to know,” was preached over and over again.

  No one used the word “robot,” however, until 1920 (the year, coincidentally, in which I was born). In that year, a Czech playwright, Karel Capek, wrote the play R.U.R., about an Englishman, Rossum, who manufactured artificial human beings in quantity. These were intended to do the arduous labor of the world so that real human beings could live lives of leisure and comfort.

  Capek called these artificial human beings “robots,” which is a Czech word for “forced workers,” or “slaves.” In fact, the title of the play stands for “Rossum’s Universal Robots,” the name of the hero’s firm.

  In this play, however, what I call “the Frankenstein complex” was made several notches more intense. Where Mary Shelley’s Monster destroyed only Frankenstein and his family, Capek’s robots were presented as gaining emotion and then, resenting their slavery, wiping out the human species.

  The play was produced in 1921 and was sufficiently popular (though when I read it, my purely personal opinion was that it was dreadful) to force the word “robot” into universal use. The name for an artificial human being is now “robot” in every language, as far as I know.

  Through the 1920s and 1930s, R.U.R. helped reinforce the Frankenstein complex, and (with some notable exceptions such as Lester del Rey’s “Helen O’Loy” and Eando Binder’s “Adam Link” series) the hordes of clanking, murderous robots continued to be reproduced in story after story.

  I was an ardent science fiction reader in the 1930s and I became tired of the ever-repeated robot plot. I didn’t see robots that way. I saw them as machines—advanced machines—but machines. They might be dangerous but surely safety factors would be built in. The safety factors might be faulty or inadequate or might fail under unexpected types of stresses; but such failures could always yield experience that could be used to improve the models.

  After all, all devices have their dangers. The discovery of speech introduced communication—and lies. The discovery of fire introduced cooking—and arson. The discovery of the compass improved navigation—and destroyed civilizations in Mexico and Peru. The automobile is marvelously useful—and kills Americans by the tens of thousands each year. Medical advances have saved lives by the millions—and intensified the population explosion.

  In every case, the dangers and misuses could be used to demonstrate that “there are some things humanity was not meant to know,” but surely we cannot be expected to divest ourselves of all knowledge and return to the status of the australopithecines. Even from the theological standpoint, one might argue that God would never have given human beings brains to reason with if He hadn’t intended those brains to be used to devise new things, to make wise use of them, to install safety factors to prevent unwise use—and to do the best we can within the limitations of our imperfections.

  So, in 1939, at the age of nineteen, I determined to write a robot story about a robot that was wisely used, that was not dangerous, and that did the job it was supposed to do. Since I needed a power source, I introduced the “positronic brain.” This was just gobbledygook, but it represented some unknown power source that was useful, versatile, speedy, and compact—like the as-yet uninvented computer.

  The story was eventually named “Robbie,” and it did not appear immediately, but I proceeded to write other stories along the same line—in consultation with my editor, John W. Campbell, Jr., who was much taken with this idea of mine—and eventually they were all printed.

  Campbell urged me to make my ideas as to the robot safeguards explicit rather than implicit, and I did this in my fourth robot story, “Runaround,” which appeared in the March 1942 issue of Astounding Science Fiction. In that issue, on page 100, in the first column, about one-third of the way down (I just happen to remember) one of my characters says to another, “Now, look, let’s start with the Three Fundamental Rules of Robotics.”

  This, as it turned out, was the very first known use of the word “robotics” in print, a word that is the now-accepted and widely used term for the science and technology of the construction, maintenance, and use of robots. The Oxford English Dictionary, in the 3rd Supplementary Volume, gives me credit for the invention of the word.

  I did not know I was inventing the word, of course. In my youthful innocence, I thought that was the word and hadn’t the faintest notion it had never been used before.

  “The Three Fundamental Rules of Robotics” mentioned at this point eventually became known as “Asimov’s Three Laws of Robotics,” and here they are:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  Those laws, as it turned out (and as I could not possibly have foreseen), proved to be the most famous, the most frequently quoted, and the most influential sentences I ever wrote. (And I did it when I was twenty-one, which makes me wonder if I’ve done anything since to continue to justify my existence.)

 
