Frankenstein and Philosophy
This question of morality and moral judgment is quite tricky. I leave it to you to decide how blameworthy our monster, our android, and perhaps, even, the creators, are. They all seem to make free choices, as we all do, but judgment doesn’t seem to make sense if you don’t consider the context—and here, as in life, the situation is very complex indeed!
_________________
1. Personal interview, 2013.
12
Who’s to Blame?
JAI GALLIOTT
The plot of Mary Shelley’s Frankenstein (1818) is quite different from the various watered-down Hollywood movie adaptations. Reading the original novel, we can see that the experience of the two main characters, Victor Frankenstein and his nameless monster, offers us a number of insights regarding the nature of moral responsibility and the way we blame others. Shelley’s plot has Victor working alone to create his unsightly creature. When it comes to life, he is horrified by its morbid appearance and abandons it. The poor creature, left all alone to fend for itself in the big bad world, develops a serious attitude problem and goes on a killing spree.
It’s a simple plot: scientist creates monster—monster runs berserk—justice is done to the scientist at the hands of his own creation. But, while the plot may be simple, it gives structure to a deep and eye-opening story concerning the consequences of somewhat well-intentioned scientists, engineers, and doctors failing to respect the boundaries of their disciplines and the level of moral responsibility they ought to take for their work when it all goes wrong.
Rather than the mute or grunting monster of the movies, Shelley’s creature is quite articulate and does its best to make Victor aware of his apparent responsibilities. “How dare you sport thus with life?” he asks angrily. He goes on to tell Victor:
Do your duty towards me, and I will do mine towards you and the rest of mankind. . . . Be not equitable to every other and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed.
With yellow skin that barely covered the work of the muscles and arteries below it, the creature may not have been blessed with good looks, but we can see that it was certainly no dummy. Shelley portrays it as an intelligent being that gradually comes to understand its relationship with Victor and humanity at large. She clearly intends it to be seen as a creature capable of reason.
This is a morally significant difference because, although Frankenstein is framed in terms of the primitive electrical, mechanical, and medical technology of the early nineteenth century in which Shelley wrote, it addresses what is essentially the same problem raised by the rise of intelligent machines, a rise that began in the late twentieth century and will continue well into the twenty-first as scientists and engineers build and enhance these systems for deployment in fields such as policing and warfare. Shelley was truly ahead of her time in writing about this modern Prometheus.
Twenty-First Century Frankennovation
Depicted on the ceiling of the Sistine Chapel in Rome is God, the Creator, extending his fingertips down to touch those of Adam. The Bible would have us believe that in doing so, he imparts to him the gift of life. In the millennia since the Bible was put together, various mad scientists have been attempting to play God by replicating this act of creation.
They go about their “frankennovation”—which is my word for acts of creation/animation/innovation—in different ways. Owing to some damn good writing in Frankenstein, the details of the monster’s creation are kept vague and pass without much fanfare. We’re only told that, after much effort and fatigue, Victor somehow infuses a spark of being into the otherwise lifeless corpse lying at his feet and, with a convulsive motion, it comes to life.
From this limited information, it seems that Shelley had read about experiments with “animal electricity,” or Galvanic electricity, in which two different metals are connected together and touched to the nerve endings of various body parts, causing convulsions. This would also explain the neck bolts on Frankenstein’s monster.
Groundbreaking in its time, this sort of science is now all but dead. A distinguishing feature of more modern—and also seemingly more serious—attempts at frankennovation is the fact that the thing or creature being animated is far less human in nature. Some post-Frankenstein stories of creation are so far removed from the original that they abandon the concept of a human-looking monster altogether. The 1950s and 1960s were flooded with tales and films of big, green, radioactive monsters and giant mutated insects, and in the 1970s we saw super-scary advanced computers and out-of-control artificial intelligence. These not only frightened audiences, but also provoked scientists to think about how far they could push this sort of technology and what it might be good for. In the 1980s, a general obsession with technology and enormous advancements in electronics and robotics started to make the possibility of Frankenstein 2.0 much more real.
The possibility of real-world robotic monsters and cyborg killer machines has also been mirrored in cinema for quite some time, the best known example being the Terminator franchise, which depicts a virtually indestructible killing machine from the future. However, this is not just the stuff of science fiction. A number of philosophers and self-dubbed “futurists” such as Rodney Brooks, Ray Kurzweil, and Hans Moravec believe we are very quickly approaching a moment—called the Singularity—when computers will become not just intelligent, but more intelligent than humans.
They believe that mankind may well be on the threshold of creating a digital monster that is capable of thinking, having self-awareness, and perhaps demonstrating moral reasoning skills. The definitions of words such as “thinking” and “reasoning” are, of course, open to broad interpretation, and the question of whether machines are ultimately capable of these things plagued even the oldest robe-wearing philosophers and theologians. Either way, it’s wise to start thinking about the problems that will arise if, or when, Frankensteinian robots come knocking on your door.
If the Pentagon’s mad scientists have their way this might be sooner rather than later. The dawn of the twenty-first century has been called the decade of the military robot. Big remotely-operated and semi-autonomous robots called “unmanned aerial vehicles” already rain down Hellfire missiles on suspected terrorists as Zeus did lightning bolts, and a small but influential group of scholars-turned-intellectual-warriors is grappling with what some believe will be the next must-have piece of military kit: lethal autonomous robots.
At the center of the debate and the engineering effort is the Georgia Tech Professor Ronald Arkin, a professional roboticist and consultant for the United States Department of Defense, which spends billions of dollars each year on robotics research and development. He has devised algorithms for an “ethical governor” that would allow a robot to function in lieu of a human warfighter.
This governor acts as a suppressor of automatically generated lethal action, so in much the same way that a mechanical governor would shut down a steam engine running too hot, the ethical governor shuts down the autonomous machine when it is about to do something unethical, such as shooting good guys. He thinks that not only will his robot not present any problems for just war theory, which sets the rules governing the resort to war and its conduct, but that it would ultimately surpass its requirements, writing that “I am convinced that they [killer robots] can perform more ethically than human soldiers are capable of.”
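To make the governor idea concrete, here is a minimal toy sketch in Python of a veto mechanism of this kind. To be clear, this is not Arkin’s actual algorithm; the class, the rules, and the thresholds are invented purely for illustration of the general principle that lethal action defaults to “off” and is released only when every constraint is satisfied.

```python
# Toy sketch of an "ethical governor" in the spirit of Arkin's proposal.
# NOT his actual algorithm: all names, rules, and thresholds here are
# invented for illustration only.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool          # identified as a lawful combatant?
    near_protected_site: bool   # e.g., near a hospital or school
    expected_collateral: int    # estimated civilian casualties

def governor_permits(target: Target, proportionality_limit: int = 0) -> bool:
    """Suppress any automatically generated lethal action that violates a
    constraint; only an action that passes every check is released."""
    if not target.is_combatant:
        return False  # discrimination: never fire on noncombatants
    if target.near_protected_site:
        return False  # protected sites are off-limits
    if target.expected_collateral > proportionality_limit:
        return False  # fails the proportionality check
    return True

# The governor vetoes rather than initiates: like the mechanical governor
# shutting down an overheating steam engine, it can only stop the machine.
assert governor_permits(Target(True, False, 0)) is True
assert governor_permits(Target(False, False, 0)) is False
```

The design point the sketch captures is that the governor sits between decision and trigger: it cannot generate a lethal action on its own, only suppress one that the rest of the system has proposed.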
I am not so sure. We need to ask ourselves: if human beings do eventually succeed in creating this robotic military capability, in much the same way that Victor Frankenstein eventually succeeded in discovering the cause of the generation of life and bestowing it upon lifeless matter, will it also lead to the creature turning upon its creators, in the manner of the Frankenstein story? Will it then turn on the public, either justly or unjustly? There’s no clear answer. Some would say that robots have no reason to harm humans, while others would have us believe that there is no reason for them not to, especially given the traditional human response to unfamiliar threats: kill, kill, and kill. One question we must therefore answer is this: if our creations do follow the typical plot and go berserk, who are we going to hold morally responsible?
How Responsible Is the Creator?
When you think of science, you probably think of well-defined methods, hypotheses and conclusions, applications, and benefits. All of these are supposed to be good for us, of course. With each new discovery, the human race takes one step further away from the other primates on the evolutionary chain. Right? We all enjoy it when innovation results in immense gains for all involved, whether in the form of a new smartphone or something more advanced, but this isn’t always the case. For every good invention or innovation there is a series of mistakes, and no one wants those on their plate.
Just as in the case of Victor Frankenstein and his creation gone wrong, a mistake was made which the creator had to acknowledge and attempt to correct. The only problem is that he didn’t. The infamous Frankenstein used science to help him build a human being, but when his experiment failed and he got himself a monster, he wouldn’t take responsibility for his creation (outside of trying to kill it). Upon sighting his hideous creation for the first time, he ran off with his tail between his legs. In the period following, he didn’t really stop to think about, or show any compassion for, the monster he had let roam free. Trying to kill the creature was the easy way to act; Victor simply gave in to his natural revulsion. What he was unwilling to do was take responsibility for raising and loving his creation.
Science is essentially about understanding nature and existence. It incorporates all the things around us and attempts to find out how each interacts with the others. It is also about improving what already exists or has ceased to exist. It’s about pushing the boundaries of science when it’s safe to do so. Unfortunately, scientists and engineers often go about pushing the boundaries and frankennovating even when they don’t fully grasp the complexity of their actions and haven’t done the responsible thing and put the necessary safety countermeasures in place. When Victor decided to introduce a new creature into the world, he didn’t fully understand the nature and consequences of his experimentation and he failed to think forward and consult with anyone or answer any questions about the potential results of his scientific endeavors.
Without any real oversight, save for some gentle probing from his inquisitive friend Henry Clerval, Victor plays an indirect role in causing four unwarranted deaths and endangering the lives of many more, and he knows it. Throughout the book, the weight of his remorse for his role in the murders of William, Justine, dear Clerval, and even his beloved wife, Elizabeth, adversely affects his mental and physical health. While it is understandable that the monster ran away (I would’ve done the same), Victor had a moral obligation to ensure that it didn’t get away uncontrolled and that safety was the number one priority.
This obligation also extends to today’s professionals working on those machines that have the potential to become tomorrow’s killer robots. I say “potential” because even those machines that are intended to be non-lethal can later be adapted for lethal use or otherwise go berserk and end up killing people. This is exactly what happened with a robotic cannon recently tested by the South African National Defence Force. The advanced weapon, capable of selecting targets automatically, mysteriously started firing uncontrollably into the crowd of spectators, killing nine people and wounding fourteen others.
Unlike the actual act of creating the monster, which involved Victor Frankenstein alone, the creation of killer robots is a more distributed effort, involving many creators. Take the South African case. If we assume that the firing incident was a mechanical problem, with the gun jamming and exploding before discharging its rounds uncontrollably, it’s very tempting to blame the engineers who designed the weapon or the manufacturer who put it together. Others might suggest that a computer and its operating code were to blame, in which case it might be insisted that the programmer be held responsible.
After all, modern product liability rules are rather stringent and call for “due care” to be taken. These people are also inherently more knowledgeable about their products than anyone else and are expected to anticipate and design out potential harms. But should we really hold Frankenstein and his contemporaries wholly responsible for their creations’ destructive results? Well, no.
The Educator and Industry?
While these creators are, in a sense, their own worst enemies and undeniably hold some level of responsibility for their acts and the consequences that follow, science and innovation rarely take place in a vacuum, even in the case of recluses like Victor Frankenstein. Scientists and engineers are educated folks who have typically been imbued with important knowledge from a very early age and mastered their respective disciplines at the figurative knee of intellectual giants.
Shelley indicates early on that, in Victor’s case, much of the blame rests with his teachers, starting with his father, who quickly dismisses his interest in arcane knowledge without properly explaining why experimenting with it is potentially dangerous. When the young Frankenstein begins to enjoy studying alchemy, the medieval forerunner of chemistry, his father says, “Ah! Cornelius Agrippa! Do not waste your time upon this; it is sad trash.” Long after, Victor comes to realize that if, instead of making this remark, his father had taken the pains to explain that much of Agrippa’s science had been denounced and exploded, and that a modern system of science had been introduced which possessed much greater and somewhat safer powers than the ancient, he might well have thrown the book aside and not succumbed to the fatal theoretical impulse that led to his demise.
At the University of Ingolstadt, Professor Krempe, like Victor’s father, dismisses the writings of his favorite alchemists as “nonsense,” while Professor Waldman, without the same contempt, tells him that while these alchemists often promised impossibilities, they did indeed perform miracles in “penetrating into the recesses of nature and showing how she works in her hiding-places. They discovered how the blood circulates and the nature of the air we breathe.” With these words, Victor reluctantly turns to the study of more modern science, but his educators fail to recognize that whatever interest he has is actually fueled by the impossibilities promised in alchemy. He continues to seek out dangerous knowledge to untangle the deepest mysteries of creation and combines it with new, practical methods of science that eventually yield his monstrous creation. It is up to mentors and educators to clearly identify dangerous knowledge or pursuits and to disapprove of, or otherwise caution, those who go ahead regardless. Those who do not must share responsibility for any ensuing mess.
Unfortunately, in modern times, ethics, safety, and wisdom often clash with financial considerations within funding-focused universities. Government and military organizations fund a significant amount of, and perhaps even most of, the cutting-edge electronics and robotics research that has resulted in the design, manufacture, and deployment of military robots such as the controversial “Predator” drone from the United States. While the Defense Advanced Research Projects Agency (DARPA)—which is the modern-day equivalent of Frankenstein’s lab and home to the US military’s best scientists—continues to research and innovate, America has budgeted over seven billion dollars to purchase killer robots for this year alone.
Scientists, engineers, and programmers face a difficult choice between accepting funding from the military-industrial complex, along with the responsibility that ought to come with it, and paying a high personal price by risking the loss of their positions. Academic institutions must also be held responsible, as many of them, like the University of North Dakota, now offer degree programs that train students to design, build, and operate robots that may be used to kill and maim.
Monster or Machine? Let’s Blame the Bot
The final possible place that additional responsibility might rest is with the creations themselves. Perhaps we should try monsters for their murderous crimes and hold military robots responsible for the deaths of noncombatants? Upon first consideration, it seems ridiculous to take seriously the idea that either monster or machine should—or even could—be held responsible for its actions. It’s not particularly hard to see how they could be causally responsible for particular harms, such as the deaths of innocent people, but it is another thing altogether to say that they are morally responsible.
The flip side of this argument is to say that they should be considered rational beings. Contrary to the many movie and stage versions of Frankenstein, the monster, as depicted by Mary Shelley, is a sensitive and emotional creature who longs to spend his life with another thinking being like himself. The monster is a highly intelligent and eloquent speaker capable of accelerated learning. Almost immediately after his creation, he has figured out how to get dressed and, early on in the novel, knows how to speak German and French. By the end, he is speaking English and quoting John Milton’s epic poem Paradise Lost. Intellectually speaking, he’s the envy of any soccer mom! Sure, he had a rough upbringing, but does this necessarily absolve him of all responsibility? No, not if he is the rational being and intentional killer that Shelley makes of him.