The Computers of Star Trek


by Lois H. Gresh


  In 1956, Dartmouth College in New Hampshire hosted a conference that launched AI research. It was organized by John McCarthy, who coined the term “artificial intelligence.” In addition to McCarthy, Simon, Newell, and Logic Theorist (we must list the first recognized AI program as a conference participant), the attendees included Marvin Minsky, who in 1951 with Dean Edmonds had built a neural-networking machine from vacuum tubes and B-24 bomber parts. Their machine was called Snarc.

  FIGURE 5.1 Decision Tree. A very simple decision tree that helps determine why your monitor isn’t displaying anything. The real logic for the tree would be far more complex. Decision trees for expert systems—diagnostics and problem solving—are often ten or twenty pages long. One of the authors of this book wrote hundreds of pages of computer diagnostic decision trees in the 1980s. The real decision tree to diagnose a monitor malfunction was perhaps five pages long.

  As far back as this 1956 conference, artificial intelligence had two definitions. One was top-down: make decisions in a yes-no, if-then, true-false manner—deduce what’s wrong by elimination. The other was quite different, later to be called bottom-up: in addition to yes-no, if-then, true-false thinking, AI should also use induction and many of the subtle nuances of human thought.
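
  To make the top-down idea concrete, here is a minimal sketch, in Python, of the kind of monitor-diagnosis decision tree Figure 5.1 gestures at. The questions, their order, and the conclusions are our own invented stand-ins; as the figure caption notes, a real diagnostic tree runs to many pages.

```python
# A toy top-down diagnostic: every step is a yes-no, if-then decision.
# The questions and conclusions are invented for illustration; a real
# expert-system tree would cover far more branches.

def diagnose_monitor(power_light_on, cable_connected, brightness_turned_up):
    if not power_light_on:
        return "Check that the monitor is plugged in and switched on."
    if not cable_connected:
        return "Reseat the video cable between the monitor and the computer."
    if not brightness_turned_up:
        return "Turn the brightness and contrast controls back up."
    return "The monitor looks fine; suspect the video card or the computer."

# Example: the power light is on, but the cable has worked loose.
print(diagnose_monitor(power_light_on=True,
                       cable_connected=False,
                       brightness_turned_up=True))
```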

  The main problem with the top-down approach is that it requires an enormous database to store all the possible yes-no, if-then, true-false facts a computer would have to consider during deduction. Searching that database and arriving at conclusions would take an extremely long time; the machine would have to make its way through mazes upon mazes of logic circuits. This is not at all the way humans think. An astonishing number of thoughts blaze through the human brain all at the same time. In computer lingo, our brains are massively parallel processors.

  What top-down AI brings to the table are symbolic methods of representing some of our thought processes in machines. Put more simply, top-down AI codes known human behaviors and thought patterns into computer symbols and instructions.

  Perhaps the greatest boost to the top-down philosophy was the defeat of world chess champion Garry Kasparov by the IBM supercomputer Deep Blue. Though not artificially intelligent, Deep Blue used a sophisticated IF-THEN program in a convincing display of machine over man.

  Chess, however, is a game with a rigid set of rules. Players have no hidden moves or resources, and every piece is either on a square or not, taken or not, moveable in well-defined ways or not. There are no rules governing every situation in the real world, and we almost never have complete information. Humans use common sense, intuition, humor, and a wide range of emotions to arrive at conclusions. Love, passion, greed, anger: how do you code these into if-then statements?

  A great example of top-down thinking is Data’s inability to understand jokes and other human emotions. It takes Data six years to comprehend one of Geordi’s jokes. When O’Brien is upset, Data asks if he wants a drink, a pillow, or some nice music. Data goes through a long list of “comfort” options, none of which makes sense to O’Brien. This is why the top-down approach is inadequate. We can’t program all possibilities into a computer.

  From the very beginning of AI research, there were scientists who questioned the top-down approach. Rather than trying to endow the computer with explicit rules for every conceivable situation, these researchers felt it was more logical to work AI in the other direction—to take a bottom-up approach. That is, figure out how to give a computer a foundation of intrinsic capabilities, then let it learn as a child would, on its own, groping its way through the world, making its own connections and conclusions. After all, the human brain is pretty small and doesn’t weigh much, and is not endowed at birth with a massive database having full archives about the situations it will face.

  Top-down AI uses inflexible rules and massive databases to draw conclusions, to “think.” Bottom-up AI learns from what it does, devises its own rules, creates its own data and conclusions—it adapts and grows in knowledge based on the network environment in which it lives.
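
  By way of contrast, here is a minimal sketch of bottom-up learning: a single artificial neuron (a perceptron, a distant cousin of Minsky and Edmonds’s Snarc) that is handed no rules at all, only examples, and nudges its own weights until its answers come out right. The training data, epoch count, and learning rate are arbitrary choices for illustration.

```python
# A one-neuron "bottom-up" learner: no rules are written by the programmer.
# It starts out ignorant and adjusts its own weights from experience.

def train_perceptron(examples, epochs=20, rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output            # learn from each mistake
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error
    return weights, bias

# Teach it logical AND purely from examples, never from an explicit rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(examples)
print(weights, bias)
```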

  Rodney Brooks, a computer scientist at MIT, is one of bottom-up AI’s strongest advocates. He believes that AI requires an intellectual springboard similar to animal evolution, that is, an artificially intelligent creature must first learn to survive and prosper in its environment before it can tackle such things as reasoning, intuition, and common sense. It took billions of years for microbes to evolve into vertebrates. It took hundreds of millions of years to move from early vertebrates to modern birds and mammals. It took only a few hundred thousand years for humans to evolve to their present condition. So the argument goes: The foundation takes forever, yet human reasoning and abstract thought take a flash of time.1

  Therefore, current research emphasizes “survival” skills such as robotic mobility and vision. Robots must have visual sensors and rudimentary intelligence to avoid obstacles and to lift and sort objects.
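
  A rough sketch of that survival-first style of control, using sensor names and thresholds we have invented for illustration: each step maps raw sensor readings straight to a motor action, with no database of rules and no stored model of the world.

```python
# A reactive, survival-first control loop. The sensors, distances (in
# meters), and actions are invented for illustration.

def control_step(front_distance, left_distance, right_distance):
    if front_distance < 0.3:        # about to collide: back off
        return "reverse"
    if front_distance < 1.0:        # obstacle ahead: steer toward open space
        return "turn_left" if left_distance > right_distance else "turn_right"
    return "go_forward"             # nothing in the way: keep moving

# One simulated moment: a wall looms ahead, with more room to the right.
print(control_step(front_distance=0.8, left_distance=0.5, right_distance=2.0))
```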

  How are the two approaches different? Captain Kirk, searching desperately for clues to a murder, instructs the ship’s computer to identify similar crimes taking place on other planets over the course of the past several hundred years. Meanwhile, Jack the Ripper’s essence invades the ship’s computer and takes control. Spock issues a “class A compulsory directive” to the computer, instructing it to “compute to the last digit, the value of pi.” The computer churns and grinds, doing nothing but calculating the infinite value of pi (“Wolf in the Fold,” TOS). Both actions, searching a huge database for a limited set of attributes as well as devoting its entire processing capability to calculating a linear sequence of numbers, mark this as a top-down machine.

  Some years later, the Enterprise-D is caught in an asteroid field by a booby-trapped derelict spaceship. Any use of the Enterprise engines is dangerous. Geordi has the computer call up a simulation of Dr. Leah Brahms, who designed the starship’s propulsion unit. Within a short time, Geordi and Leah are working together to solve the problem that threatens the crew’s existence (“Booby Trap,” TNG). The Leah simulation actually reasons and reaches conclusions about a novel situation, much as a human would do. The simulation is so human-like that Geordi grows quite attached to it, causing himself considerable embarrassment when the real Leah Brahms shows up a few months later.

  In the original series, the computers were all top-down machines. That was the generally accepted theory during the filming of the show. By the time of The Next Generation, bottom-up AI had become widely accepted. Thus the Enterprise-D computer seems much more capable than its predecessor. But perhaps not capable enough.

  A great deal of the Star Trek universe revolves around the concept of artificial intelligence. Without it, the computers of the twenty-fourth century wouldn’t be that much different from what we have today. The ship’s computer wouldn’t be able to answer questions, replicators and transporters wouldn’t work, and Data wouldn’t be nearly as interesting. Nor would Vic Fontaine be able to give Odo advice about women.

  Let’s take a more specific look at the similarities and differences between the human brain and the computer. This will give us a basis for analyzing Data, the holosuites and holodecks, Professor Moriarty, and other facets of bottom-up AI in Star Trek.

  First the similarities:

  The brain and the computer have some obvious things in common. The brain simultaneously daydreams, calculates overdue invoices that customers haven’t paid, wonders if it’s in love, wonders when the lunch guest will finally arrive at the office or whether the guest is lost, worries about Mom, and so forth. The computer simultaneously prints a chapter of this book, saves the chapter in case the power blows, downloads a file from the Internet, calculates overdue invoices that customers haven’t paid, and so forth.

  Similar, and yet different. The brain daydreams, creates, and worries; the computer does none of those things.

  The brain accepts inputs from the eyes, skin, and blood. The computer accepts inputs from the keyboard, voice instructions, and data feeds. The brain issues output to the eyes, skin, and blood; the computer to the screen, networking cables, and data feeds.

  Both are very complex. Both have components of hardware and software, though of different materials and composition. But although we can build a working computer, we can’t build a human brain. Despite their similarities the two are very different.

  The basic circuitry in computers relies on the TRUE-FALSE, ON-OFF popping of micro-switches. Neurons in our brain also have TRUE-FALSE, ON-OFF states: excited and inhibited. When the voltage across a membrane rises sharply, the neuron is excited and releases chemicals (neurotransmitters) that latch onto receptors of other neurons. When the voltage drops sharply, the neuron is inhibited. Seems awfully similar to the binary ON-OFF states of the digital computer, doesn’t it?

  But if we look more closely at neural processes, we see a huge difference. Neurons actually behave in an analog rather than a digital manner.k Events leading to neural excitement build up, as if climbing a hill—this is a feature of analog signals. In addition, ions may cross the cell membrane even if neurotransmitters aren’t received, and these ions may excite the neuron anyway. Sometimes, a neuron oscillates between intense and minor excitement levels without any outside stimulation. The more a neuron excites itself, the more prone it will be to outside stimulation.
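
  Here is a deliberately simplified sketch of that difference; the threshold, leak rate, and input values are arbitrary, chosen only to show how analog excitement builds toward a firing point while a digital switch is simply on or off.

```python
# A digital switch is either ON or OFF; the toy neuron below is analog.
# Each input nudges its voltage up, the voltage leaks back down over time,
# and the neuron "fires" only when its excitement crosses a threshold.
# All constants are arbitrary, for illustration only.

def digital_switch(signal):
    return signal >= 1                            # ON or OFF, nothing in between

def run_analog_neuron(inputs, threshold=1.0, leak=0.9):
    voltage = 0.0
    fired = []
    for stimulus in inputs:
        voltage = voltage * leak + stimulus       # excitement builds and decays
        fired.append(voltage >= threshold)
        if voltage >= threshold:
            voltage = 0.0                         # reset after firing
    return fired

# Small, repeated nudges eventually push the neuron over its threshold.
print(digital_switch(0.4))                        # False: below ON
print(run_analog_neuron([0.4, 0.4, 0.4, 0.4]))    # [False, False, True, False]
```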

  In a computer, the shape of the motherboard—large rectangle, small rectangle, oblong, oval (we’ve never seen an oblong or oval motherboard, but it’s an interesting concept)—has no effect on how the computer functions. Positioning components close together, shortening circuit travel, and the choice of the actual components: these conditions certainly affect the processing speed and power of the computer. However, most motherboards are rectangles, and the actual shape really doesn’t have some radical influence, such as popping an ON to OFF or making a NOR into an XOR.

  The neuron, however, is quite different. There are approximately fifty neuron shapes that can change the state of the neuron from excited to inhibited, or vice-versa. For example, an incoming signal becomes weaker as it traverses a really long dendrite to the neuron body. A signal that travels along a short dendrite will be much more powerful when it hits the neuron body. In addition, it takes a higher dose of neurotransmitter to excite a fat neuron than to excite a small one.

  Also, the brain uses a finite set of neurons to perform a flexible number of tasks in parallel. Neurons may interact in overlapping, multiple networks within the brain; a single neuron simultaneously communicates with many others in many neural networks. And by intercommunicating constantly across these multiple networks, neurons learn to adapt and respond to their environments. We liken the brain to a muscle: the more you use it, the stronger it becomes.

  The more you do trigonometry problems, for example, the better you’ll be at them twenty years from now.l

  How do we build such properties into a computer?

  The ultimate result of bottom-up AI is what we think of as “alife,” literally artificial life. In this type of computer intelligence, digital organisms (entities, nodes, or units) not only adapt to their environment but reproduce, feed, and compete for resources. Their offspring evolve naturally over generations to become increasingly suited to their environments. Remember the nanite episode (“Evolution”) of The Next Generation, in which microscopic computer creatures infiltrated the ship? It isn’t really that farfetched. Such digital creatures exist today in prototype form.

  Some alife creatures use genetic algorithms that affect their life expectancy. The creatures have genomes to define what they’re like, how they act, what they do. To reproduce, alife creatures cross-breed, and sometimes, as with biological life, the genomes are accidentally mutated, creating a next digital generation that is quite different from its parents.m
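
  A minimal sketch of that genetic machinery, with a placeholder genome and fitness measure of our own choosing: parents cross-breed by splicing their bit-string genomes, and an occasional mutation flips a bit, so a child can differ from both of its parents.

```python
import random

# Toy alife genetics: genomes are lists of bits, children are made by
# crossover plus the occasional random mutation. Genome length, mutation
# rate, and the "fitness" measure are placeholder choices.

def crossover(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))    # splice the two genomes
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome, rate=0.05):
    return [1 - gene if random.random() < rate else gene for gene in genome]

def fitness(genome):
    return sum(genome)                          # placeholder: more 1s, "fitter"

random.seed(1)
parent_a = [random.randint(0, 1) for _ in range(8)]
parent_b = [random.randint(0, 1) for _ in range(8)]
child = mutate(crossover(parent_a, parent_b))
print(parent_a, parent_b, child, fitness(child))
```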

  Some alife creatures grow through what might be thought of as digital embryonics. Such a creature exists in silicon, which is divided into cells—where a row and column intersect as on a sheet of graph paper. Each cell contains a genome that’s defined in random access memory. At the beginning of its life, the creature is the only individual in the silicon environment. This organism has a certain number of cells, just as we do at birth. Each cell has a special function, though the creature can have many cells that do the same thing. (For example, we have many skin cells and many nerve cells.) Which genes of the digital organism’s cell will be functional depends on a cell’s row and column—its location—in the creature.

  When the alife world begins, only one cell contains the entire genome of the organism. The first cell divides, just as it would in a biological embryo. Now there are two digital cells that each contain the entire genome of the organism. Soon, the entire digital creature exists, born digitally in a manner based on biology. By combining digital embryonics with evolutionary algorithms, we have the potential to grow truly complex, novel alife environments.
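
  And a toy version of digital embryonics, again with an invented genome and grid size: every cell carries the complete genome, but which gene a cell actually expresses depends only on its row and column, so a single starting cell can divide into a creature whose parts do different jobs.

```python
# A toy digital embryo: the "silicon" is a grid, every cell carries the
# whole genome, and which gene a cell expresses depends on its position.
# The genome contents and grid size are invented for illustration.

GENOME = ["skin", "nerve", "muscle", "skin"]     # the full genome, in every cell

def expressed_gene(row, col):
    # Position in the creature decides which gene is active.
    return GENOME[(row + col) % len(GENOME)]

def grow(rows, cols):
    # "Divide" from a single cell until the whole creature exists.
    creature = {}
    for row in range(rows):
        for col in range(cols):
            creature[(row, col)] = expressed_gene(row, col)
    return creature

creature = grow(2, 3)
print(creature[(0, 0)], creature[(1, 2)])        # different cells, different roles
```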

  Aside from Wesley Crusher’s experiment that swamped the Enterprise with nanites (“Evolution,” TNG), Star Trek features only a few alife creatures. For example, the exocomp servomechanisms on planet Tyrus VIIA, which Data protects at risk to his own life when he realizes that they have achieved self-awareness, were created by an evolutionary process (“The Quality of Life,” TNG). But Star Trek characters such as Data and Leah Brahms clearly are not alife. For example, they don’t possess such features as cellular division and reproduction. They did not evolve.

  More common than alife is the simple form of AI built into today’s robots. Back in 1969, a robot named Shakey was able to move around seven rooms that contained obstacles made of varying geometric shapes. Shakey received commands—such as “Bring me a box”—from a computer console. Then he rode around on his little wheels, scooted past the obstacles, snaked through the rooms, scooped up a box, and returned it to some central location.

  The authors dream about going to MIT to play with the robots. We’ve read of insect robots, and even cooler, robots that wander around the laboratories and annoy people. Just reading about these robots makes us drool. Herbert the robot is extremely Borglike. He steals stuff from the offices of MIT professors. He has twenty-four microprocessors, thirty infrared sensors, a hand to pick stuff up, an arm, and an astonishing optical system. Then there’s the six-legged giant insect called Genghis, propelled by twelve motors, maneuvering around the halls using twelve force sensors, six heat sensors, and two sensory whiskers(!). At MIT and other universities, there are many other Borglike robots wandering around already. Research is underway to construct robots with dual arms, plus speech and hearing skills. This is an intensely exciting part of modern life. We’ll return to some of these issues, and others (such as vision in an android), in the chapter dealing with Data.

  For now, let’s return to the issue of artificial intelligence. Let’s have some fun. We’ll consider several idle thoughts and how a top-down AI would react compared to a bottom-up AI (see Table 5.1).

  As you can see, logic doesn’t necessarily produce correct answers. People infer things, and we make mistakes. Logic yields conclusions based on premises that we assume are true. However, if the premises are false, then the conclusions are false. Data’s sometimes distressing dealings with human behavior clearly show that linear logic isn’t always correct. Mr. Spock discovered the same truth years before Data.
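
  A tiny illustration of that garbage-in, garbage-out property of deduction; the premises below are deliberately false and entirely our own.

```python
# Deduction is only as good as its premises (these premises are invented,
# and the first one is false: Data, famously, lacks emotions).
premises = {
    "all_androids_have_emotions": True,   # asserted as true, but false
    "data_is_an_android": True,
}

if premises["all_androids_have_emotions"] and premises["data_is_an_android"]:
    print("Conclusion: Data has emotions.")   # valid logic, wrong answer
```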

  Perhaps the most astonishing artificially intelligent creatures in the Star Trek universe are the living holograms. Professor Moriarty (“Elementary, Dear Data,” “Ship in a Bottle,” TNG) is the result of the ship’s computer trying to come up with a villain smarter than Data. Though he never achieves independence from the holodeck, Moriarty still appears to have achieved sentience. Even more interesting are the inhabitants of Yadera II, an entire village consisting of holographic images so sophisticated that they think themselves normal beings (“Shadowplay,” DS9). Odo and Dax repair the holographic generator on the planet so that life can continue without interruption.

  There’s no question that AI exists in the Trek future. Yet in some ways the AI of 300 years from now seems extremely primitive. Why doesn’t the computer on the Enterprise talk directly to the crew? Why does anyone need to tap a communicator badge? When Captain Picard is on the holodeck and a message arrives for him from Starfleet Command, why doesn’t the computer tell him directly about the transmission? When Geordi or Rom needs to repair some damage inside a Jefferies tube, why doesn’t the computer give him instructions (much as Spock tells Dr. McCoy how to reconnect his brain in the classic adventure “Spock’s Brain,” TOS)? Better still, why doesn’t the computer simply make the repairs itself? When the Kazon attack Voyager, why doesn’t the ship’s computer, filled with ten thousand attack scenarios, give Captain Janeway some advice on what to do?

  TABLE 5.1

  We suspect the creators of Star Trek may have felt that making the computer too powerful might worry their audience. Just as with space battles and space navigation, people prefer to think that they, not machines, are still in charge. After all, it’s only a small step from AI to some equivalent of Kirk’s computer nemesis, Landru, controlling a world—in ways that are more sophisticated but no less insidious. In the real future, three centuries from now, we suspect that the computers will be running the starships, and the crew, if present at all, will be merely along for the ride. This is a vision of space travel totally rejected by Star Trek.

  6

  Data

  In “Inheritance” (TNG), the Enterprise-D travels to the Atrean star system to help scientists infuse energy into an unstable planet core. (What? Never mind. We’re computer scientists, not geophysicists.) While working on the project, Data meets Dr. Juliana Tainer, who reveals that she was once married to Noonien Soong, Data’s creator. Having worked with Soong on Data’s creation, Tainer is in effect his mother.

 
