Medusa's Gaze and Vampire's Bite: The Science of Monsters


by Matt Kaplan


  Such discussion of Frankenstein’s creature’s and Dren’s psychological journeys from innocents to killers might seem out of place when it comes to analyzing the role of science in the formation of their status as actual monsters, but it is not. To get at the fears underlying these creations, it is crucial to realize that the biological work responsible for making their horrific forms believable and terrible to behold plays only a partial role in their transformation into evil beings. The science wielded by Dr. Frankenstein and the researchers in Splice creates only hideous creatures. The infusion of evil into them depends upon their interactions with humans. In Frankenstein, the evil arises after Dr. Frankenstein’s horrified withdrawal from his own creation, the violent response of the family in the countryside, and the heated words shouted by the young brother whom the monster kidnaps. With Dren, the birth of evil stems from her imprisonment and ultimate treatment as nothing more than a specimen.

  Thus, there appear to be two fears crucial to the formation of these monster stories. There is the fear of what horrific things science is capable of creating, and then there is the more subtle fear of society’s inability to recognize a creature’s needs and react appropriately such that the creature does not become so wounded that it turns against humanity. There is also the fact that both Frankenstein’s monster and Dren are physically very powerful. We all know that power corrupts, but in recent years researchers have discovered that giving a person a combination of high power and low social status creates a particularly horrific psychological effect.

  In a study conducted by Nathanael Fast at the University of Southern California and published in 2011 in the Journal of Experimental Social Psychology, 213 participants were randomly assigned to one of four conditions that manipulated their status and power. All participants were informed that they were taking part in a study on virtual offices and would be interacting with, but not actually meeting, a fellow student who worked in the same fictional office. Participants were then assigned either to the high-status role of “idea creator” and asked to generate important ideas, or to the low-status role of “worker” and tasked with menial jobs like checking for typos.

  To manipulate power, participants were told there would be a draw for a $50 prize at the end of the study, and that, regardless of their role, each participant would be able to dictate which activities his partner must engage in to qualify for the draw. Participants who were given a sense of power were told that one part of their job required them to determine which tasks their partner would have to complete to qualify. They were further informed that their partner would have no such control over them. In contrast, low-power participants were advised that while they had the ability to determine the tasks their partner had to engage in, their partner could remove their name from the draw if he or she wanted to.

  Participants were asked to select one or more tasks for their partner to perform from a list provided by the researchers. Some of these tasks were rated by a separate pool of participants as deeply demeaning, such as requiring participants to “say ‘I am filthy’ five times” or “bark like a dog three times,” while others were deemed neutral, like “tell the experimenter a funny joke” or “clap your hands fifty times.” Fast found that participants with high status and high power, low status and low power, and high status and low power all chose few, if any, demeaning activities for their partners to perform. In contrast, participants who were low in status but high in power were much more likely to choose demeaning tasks for their partners.

  To a certain extent, these results provide a psychological explanation for the behavior of the prison guards at Abu Ghraib in Iraq. They were locked, loaded, and very high in power, but they were prison guards; they knew they were viewed by society as low in social status. Similarly, Fast’s findings make the evil transformations seen in the socially excluded but physically powerful Frankenstein’s monster and Dren all the more believable. We fear this type of transformation because it actually happens in humans all too often.

  Of silicon and metal

  It is shortsighted, however, to focus only on monsters spawned from biology. While Dr. Frankenstein’s monster was a product of biological science, there are many recent monsters bearing a striking resemblance to these creations that are not flesh and blood.

  Like Dr. Frankenstein’s monster, the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey and Stanley Kubrick’s film version of the story is created by humans. Represented by a single red eye in the film, HAL is found throughout the spaceship that it is meant to help run. However, during the mission, something goes dreadfully wrong. An electronic malfunction leads HAL to make a mistake and declare equipment to be in need of repair when it is operating normally. The astronauts grow concerned about HAL’s error and consider shutting down the computer. They discuss this in private inside a small space pod that they believe HAL cannot eavesdrop on, but HAL, suspicious of the astronauts’ behavior, reads their lips through the pod window and works out what they are planning. This leads HAL to start killing off the crew. The computer is able to rationally explain the reason for its murderous activities since it views itself as critical to completing the space mission, but rational or not, the way HAL snuffs out the lives of the humans on board is undeniably creepy.

  In Andy and Lana Wachowski’s 1999 film The Matrix, the sagelike character Morpheus comments, “We marveled at our own magnificence as we gave birth to AI [artificial intelligence],” as he explains to the protagonist Neo that this marvelous technology turned on humanity and effectively declared a war that it mostly won. In James Cameron’s 1984 film The Terminator, a similar plot unfolds, with intelligent machines invented by people rising up against their creators. Even television has carried this story, with the successful Battlestar Galactica series always opening with the bold lines “The Cylons were created by man. They rebelled… ,” providing an explanation for why humans are constantly being chased by the robotic Cylons around the galaxy.

  The machines in The Terminator and Cameron’s 1991 sequel, Terminator 2: Judgment Day, have the same reasons for attacking humanity that HAL does. They become self-aware, humans attempt to shut them down, and the machines retaliate in self-defense. While Skynet, the artificial intelligence that controls the Terminator, and HAL are both bent on killing off humans, is it right to classify a species fighting for its survival as a monster? Are Skynet and HAL any more evil than a bear that tears the arms off a hunter who just took a shot at its cub? To a certain extent, the answer is yes, because a bear mauling a hunter does not go off and start mauling every human it meets.77 With Skynet and HAL, a paranoid logic develops in these systems that all humans, even those who are harmless, must be killed, and this is where the evil begins to seep in.

  But why do monsters venture into the world of computers in the first place? Unlike a decomposing corpse or a deformed lab animal, computers are not inherently grotesque. HAL’s lightbulb, on its own, is just a light. There is nothing inherently frightening about it. With The Terminator, The Matrix, and Battlestar Galactica, this changes somewhat as computers are given more physical form. The Terminator is an eerie-looking skeletal robot covered in human skin, the Cylons are large and powerful with weapons on their arms (or in some cases programmable humanlike machines), and the lethal programs that function as guardians of the computer system in The Matrix appear as spooky and dispassionate government agents.78 But it does not seem that it is the physicality of computers that leads creative minds to transform them into monsters. It is what computers are capable of that drives this process.

  A team led by Louis-Philippe Morency at the University of Southern California is showing that, when properly programmed and hooked up to video cameras, computers are becoming adept at reading human body language. More specifically, Morency and his colleagues have taught computers how to read the all-important human nod.

  This might sound insignificant, but nods made at the right time in a conversation can mean “I understand,” while nods made at the wrong time can indicate either a lack of understanding or a lack of interest. Teaching robots and computer avatars to identify these different sorts of nods and to properly nod back has been a nightmare because a definition of exactly when nods of different sorts are supposed to happen has not existed.

  Psychologists have tried for ages to figure out the many subtle elements produced by a speaker that lead a listener to nod, and the results have been poor. To solve this problem, Morency turned to computers. By recording movements and sounds during human interactions, he has generated lists of conversation cues—like pauses, gaze shifts, and the word “and”—that lead people to nod. He has also collected facial details that indicate what sorts of nods are being made. This information is now being fed into computer programs and used to teach robots when to nod during conversation and what human nods really mean.

  At the most basic level, Morency’s work, and similar face analysis technology, could ultimately prove rather valuable for authorities keen to identify expressions associated with malice and deceit. But there is much more. As computers are required to interact with humans more often, their ability to understand everything that is communicated, rather than just typed or spoken, is going to vastly improve, opening up communication pathways so computers can start playing a larger role in social interactions. Imagine educational software that can detect the glazed look of someone who is totally lost during a lesson or a spaceship computer that suspects two astronauts are lying and reads their lips to learn what they are really thinking. This sort of work is a big step forward. And just in case you thought that all of the Terminator and Matrix films might have left people wary enough of artificial intelligence to keep such systems out of war machines, be assured that you are wrong.

  A team led by the computer scientist Yale Song at the Massachusetts Institute of Technology is teaching military drones to understand how to read the body language of deck officers on aircraft carriers and follow their commands. The ultimate goal of the work is to have the drones read the silent signs and signals just as well as human pilots do. At the moment, the drones understand what they are being told only 75 percent of the time, but they are going to get a lot better as the work progresses.

  Along similar lines, Andrew Gordon at the University of Southern California has designed computers that are adept at reading blogs and constructing meanings from what they find. For example, after scanning millions of personal stories online and correlating these with incidents taking place in the real world, his computers were able to work out that rainy weather was related to increased car accidents and that guns were associated with hospitalization. Connect these sorts of developments to computers that are capable of crushing the brightest humans at chess and beating geniuses on Jeopardy! and something emerges in the imagination that certainly gives one pause.

  The day when a computer is capable of truly studying the world around it, learning from what it finds, engaging in flawless social interactions, and acting independently is not that far off. Just as Shelley’s early readers were frightened by how far transfusion and transplant technologies could be taken, so too are modern readers frightened by what sort of form artificially intelligent computers will take. Really, the idea of self-aware war machines getting tired of being treated as servants or simply malfunctioning and going on a killing spree is not hard to imagine. And it is from this fear that robotic monsters arise.

  Yet mixed with this fear is a lot of hope. In Terminator 2, a reprogrammed robot, physically identical to the robot monster in the first film, is sent back in time to protect the future leader of the resistance movement against the machines from assassination when he is just a boy. The boy alters the Terminator’s programming, giving it the ability to learn. He orders it to not kill humans, teaches it to express emotion, and encourages it to question human behavior, leading to an unexpectedly tender moment where the machine asks him why humans cry.

  In the end, the two build a bond, and the robot, which was a vile killing machine in the first film, concludes the second film by displaying an understanding of the value of human life and willingly sacrificing itself to save humanity.

  Isaac Asimov, the creator of some of the most profound robotic literature ever written, presents a similar tale in the first story of I, Robot, “Robbie.” The tale explores the social interactions that develop between a human child and a robotic nursemaid named Robbie. After two years of happy bonding, the pair are separated by the child’s mother because she decides it is socially inappropriate for robots to become so closely attached to humans. This drives the child into a state of depression that leads her toward a desperate search for her lost companion. Toward the end of this search, she finds Robbie installed in a factory. As she rushes up to meet the robot, she fails to notice an oncoming vehicle. Robbie quickly saves her, proving to the mother that she had been wrong in believing robots to be cold and soulless.

  “Robbie” raises a vital point that deserves some reflection. In this story, it is the mother, not the constructed creature, who is the antagonist. She is not a monster, but she is definitely the force the leading characters must struggle against. In Splice, Frankenstein, Battlestar Galactica, The Terminator, and so many other stories of human creations becoming monsters, humans might not be the antagonists, but they are definitely responsible for the monsters that come to haunt them. But when does a sheer lack of responsibility shift a character from simply being incompetent to being a villain? Can such monster-constructing villains become monsters themselves? It is with these questions in mind that it is worth taking a look at Jurassic Park.

  * * *

  71 This is not to say that stories portraying them as dangerous will not one day be discovered; they have simply not been found yet.

  72 There is a lot of debate over how old the Golem story really is and whether the 1909 text that was “found” actually had been written by the rabbi’s son-in-law. Some argue that the “finder” of this work, Yudl Rosenberg, wrote the text himself to fantasize about fighting the increasing violence that Jews were facing during the early 1900s.

  73 Fascinating work conducted by the biomedical engineer Maryam Tabrizian at McGill University in Montreal and recently published in the journal Biomacromolecules shows that we are now able to effectively make red blood cells invisible. By coating the cells in thin polymer layers that still allow them to do their duties, Tabrizian and her team showed that it is possible to grant red blood cells of one blood type the ability to function inside a body that is used to red blood cells of another type. While currently being tested only in mice, initial results suggest that the technology works and that a day may soon come when worrying about having the right blood type for a patient becomes a thing of the past.

  74 By people, I mean men.

  75 Very important always to have a measuring tape and calculator on hand during first dates for such things.

  76 An intriguing aspect of this film is that Dren undergoes a sex change. While female for much of the story, she is most violent after transforming into a male and ceases to be a character we sympathize with. Pages could be written on the gender psychology being invoked, but I’m not going to get into that. I’ll just point out that the biology here is all wrong. Although there are many animals that change gender during their lives, they do not usually switch from female to male; they switch from male to female. This all has to do with logistics. Being a male is cheap and easy. You just release sperm and let the female do the heavy lifting. Thus, if you are going to be a male and a female during one life, you want to be male first (when you are small and weak) and female second (when you are bigger, tougher, and have more resources at your disposal to produce offspring). But the Splice story wouldn’t have worked if the gender swap had been done the other way around, and I guess Natali figured that most audiences weren’t geeky enough to catch him getting the biology wrong.

  77 Contrary to popular belief, the bears that are actually the most dangerous are those that lose their fear of people by being fed by them. These animals come to see humans as a free meal ticket and become aggressive when they are not given the food they want and expect. Bears that get shot at do everything they can to stay away from people. Sure, we still read about tourists being ripped apart after trying to nuzzle a bear cub or photograph a grizzly at close range, but seriously, are these cases of the bear being dangerous or the human being stupid?

  78 Fear of the federal government. Totally irrational, right?

  * * *

  9

  Terror Resurrected—Dinosaurs

  “The lack of humility before nature that’s being displayed here, uh… staggers me.”

  —Dr. Ian Malcolm, Jurassic Park

  While lions and bears were truly terrible threats to early humans, fossils show all too clearly that these predators were nothing compared to the animals that came before them. Yes, the media have relentlessly discussed dinosaurs in recent decades, but there is still a real lack of appreciation in society for how big and dangerous they actually were. The long-necked species Diplodocus is a good example. It topped out at around 90 feet (27 meters) long and 20 feet (6 meters) tall, larger than many buildings. Carnivores were smaller. Tyrannosaurus rex, by no means the largest of the meat-eating dinosaurs, was only a bit taller and 50 percent longer than the largest giraffes. Of course, a statement like that is preposterous. A bit taller and 50 percent longer than a giraffe is still huge. Giraffes are big animals. Being right next to them is nerve-racking. One false step and they can easily break a human foot or, worse, snap a spine.

 
