The Computers of Star Trek
Consider the Dominion blockade that Sisko must somehow force his way past. This conflict isn’t a naval battle or even a dogfight between jet planes. It’s war in space.
According to Star Trek technical lore, phasers have a range of 300,000 kilometers, and their power fades significantly over long distances. As coherent energy beams, they obey the inverse square law, so the farther the target, the less effect the beam will have. Suppose the twelve hundred Dominion ships were deployed in a square, thirty-five ships to a side. A formation tight enough to blast any vessel trying to get through would make the square approximately 10 million kilometers on a side. That’s a pretty big blockade. But to starships moving at full impulse (75,000 kilometers per second), it’s nowhere near big enough. Why fight when you can go around? A Federation ship could fly the entire length of this blockade in 133 seconds. Not much of a detour. It’s as if the German High Command had tried to stop the invasion of Normandy by building a ten-foot-high wall the length of a tennis court on Omaha Beach.
Nor does Sisko’s fleet have to travel merely at impulse speed. Why not just accelerate to Warp 2 (10 times the speed of light), zip around the blockade in 3.3 seconds, and head off to Deep Space Nine? Even easier, why not just fly at Warp 2 or better between the enemy ships? Phaser beams propagate at the speed of light. A ship traveling faster than light would be gone before the enemy even knew it was there, and the phaser beam would never catch up.
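The arithmetic behind these figures is easy to check. Here is a minimal sketch, assuming the chapter's round numbers: full impulse at one-quarter light speed, Warp 2 at ten times light speed, and a blockade 10 million kilometers on a side.

```python
# Back-of-the-envelope check of the blockade figures above.
# All values are the chapter's round numbers, not official Trek specs.
C = 300_000          # speed of light, km/s
IMPULSE = 0.25 * C   # full impulse: 75,000 km/s
WARP2 = 10 * C       # Warp 2, taken here as 10 times light speed

blockade_side = 10_000_000   # km, the square formation's side length

impulse_time = blockade_side / IMPULSE   # time to fly the blockade's length
warp2_time = blockade_side / WARP2       # the same trip at Warp 2

print(round(impulse_time, 1))   # -> 133.3 seconds
print(round(warp2_time, 1))     # -> 3.3 seconds
```

Either way, the wall is far too short for the speeds involved.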
Engaging fleet against fleet in outer space makes little sense. It’s reminiscent of those stylized Revolutionary War battles in which opposing armies knelt in straight lines to fire at each other across a green meadow. But sending a fleet in a group to save the station made no sense anyway. A more intelligent strategy would have six hundred ships approaching the station from six hundred directions. With the ships traveling at different warp speeds, coming in on many different paths, some of them cloaked, the dynamics of the battle would tax the most elaborate defense strategy. The entire Dominion fleet would have a hard time coping with such an attack. And no human mind could choreograph it—but a computer could.
Battles between opposing fleets make no sense unless one of the fleets is guarding a location, such as a planet or space station. Even then, the human element in such a battle would be insignificant. Computers will fight the wars in space, not men. Human reflexes are too slow. In space war, there’s no time to issue commands like “Raise shields” or “Fire on my mark.” If you report that “they’re powering weapons,” the news is already too late by the time the words are out of your mouth. Talking doesn’t work when events are moving at nanosecond speed.
Suppose we’re on a routine exploration mission. The ship has just emerged from warp drive at the edge of an unknown solar system. The long-range sensors detect life signs on the fourth planet, and you, as captain, order the ship to approach the world at full impulse power (1/4 the speed of light, 75,000 km/sec). Being cautious, you put the ship on yellow alert. Shields are immediately raised and phasers armed.
As the ship approaches the green and blue world, an enemy ship swings out from behind its moon, approximately 300,000 kilometers away, the farthest range for phaser attack. It instantly attacks. The next few ticks of the clock are filled with action.
Phasers operate at the speed of light. From 300,000 kilometers away, it takes the fire from the enemy ship one second to strike our shields. The shields flare but hold. Reacting in milliseconds to the energy burst, our ship’s computer takes control of the helm and accelerates the ship in evasive maneuvers. At the same time, the computer’s artificial-intelligence battle program goes into action.
The enemy vessel is moving at impulse speed, 1/4 the speed of light. Though the signals detected by the sensors travel at light speed, there’s no way to track the attackers. If the ship is 150,000 kilometers away, it would take the sensors a half-second to detect its position, then another half-second for the phaser fire to reach its target—a total of one second. During that second, the enemy will have traveled another 75,000 kilometers, probably not in a straight line. These are ships that accelerate to ten times the speed of light in the time it takes to fade to a commercial; they can literally turn on a dime. (To prevent the crew from being squashed to jelly by the accelerations involved in such maneuvers, they have something called “inertial dampers.”) In theory, the enemy could be anywhere within a sphere of radius 75,000 kilometers—a volume of nearly 1.8 quadrillion cubic kilometers, or a space big enough to hold more than 1,600 Earth-size planets. In this situation the idea of having weapons “locked on target,” as they so often are in Star Trek space battles, is meaningless. The ships are moving too fast, over too huge a volume of space, for sensors to do any good.
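The size of that uncertainty sphere follows from the ordinary sphere-volume formula. A quick sketch; Earth's mean radius of roughly 6,371 kilometers is the one figure below not taken from the text:

```python
from math import pi

R_UNCERTAINTY = 75_000   # km the enemy can cover in one second at full impulse
R_EARTH = 6_371          # Earth's mean radius in km (approximate)

def sphere_volume(r):
    """Volume of a sphere of radius r, in the same cubic units as r."""
    return 4 / 3 * pi * r ** 3

volume = sphere_volume(R_UNCERTAINTY)
earths = volume / sphere_volume(R_EARTH)

print(f"{volume:.2e}")   # -> 1.77e+15 cubic kilometers
print(round(earths))     # -> 1631 Earth-size planets
```

One second of sensor lag, in other words, buys the enemy a hiding place sixteen hundred Earths wide.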
One reason computers must handle the battle is that people can’t react in milliseconds. In space war, there’s no time to hesitate, no time to blink, no time to sweat. But there’s another reason that has nothing to do with speed.
Controlled by computer, our ship’s phaser bank spreads an array of beams 150,000 kilometers ahead of the enemy’s last known position. The battle has become a guessing game. With the helm completely under the computer’s control, the ship continually veers from its original course, trying to maneuver the enemy into a position where its options are reduced. In the meantime, the attackers aren’t waiting for us to act. Less than a second after the first exchange of phaser fire, they shoot again but miss. Our computer, programmed with thousands of combat simulations, has analyzed and compared the situation to similar encounters. An artificial-intelligence program has extrapolated the course the enemy expected us to take and avoids it. It’s a battle between two computers. Humans don’t matter. If anything, they’re a danger.
People are too predictable. They tend to react in certain ways to danger. That’s why boxers study films of their opponent’s fights. Habits developed over years are difficult to break. A computer programmed to change course randomly won’t always resort to “Attack Pattern Omega” when the ship is fired on. Any predictable reaction to an attack, any pattern or tendency, would be instantly detected by an enemy computer programmed to spot just such behavior and use it to direct phaser fire. The safest path is a random one, and only computers can act (almost) randomly.
Our phasers fire again, spreading another wide array in hopes of catching the adversary as it shifts position. Another hit. The enemy’s computer isn’t programmed as well as ours. It follows a fairly unsophisticated battle plan. Their shields flare, then go dead. A moment later, their ship explodes. In space battles, there is no chance to surrender.
The entire fight lasts less than five seconds. No chance to yell “Shields up!” In space, once a battle begins, there is no time for talking. Sorry, but human reflexes can’t react to beams traveling at the speed of light. No one can steer a spaceship moving at 75,000 kilometers per second and successfully avoid phaser fire traveling at light speed. No human can analyze thousands of attack possibilities and choose the best one in less than a millisecond. Only computers are capable of managing battles in interstellar space.
This is not to say that the human element would never be present in space war. When faced with overwhelming odds (such as the battle with the Dominion fleet), the logical choice for the ship’s computer would be not to engage the enemy. Only Sisko’s determination that the Federation break the blockade compels them to attack. Despite having control of the helm and weapons, the computer is still subservient to the captain’s commands. If he demands attack, the ship attacks, calculating the best possible actions under desperate circumstances. Perhaps the frequently used “Attack Pattern Omega” isn’t a specific formation but merely a command telling the computer to fight on no matter how overwhelming the situation.
Of course, battles managed by humans are much more interesting, and the writers of Star Trek aren’t the only ones to sacrifice believability for spectacle. Down the cineplex aisle, on a movie screen far away, Star Wars is no more believable.
Remember the stirring space battle scene right after the Millennium Falcon escapes from the Death Star? The fast-paced episode where Luke and Han destroy several attacking enemy fighters? We’re looking at a level of technology not too different from Star Trek, so it’s reasonable to suppose the attackers are flying at roughly impulse speed, somewhere in the neighborhood of 75,000 kilometers per second. Their ray guns are firing some type of energy beam that travels at 300,000 kilometers per second. Yet Luke and Han are swinging their futuristic ack-ack guns with human reflexes, using human eyes, squeezing the triggers with fingers that operate on millisecond, not nanosecond, timescales. This fight, shown at aerial dogfight speeds, could never happen in outer space.
Worse, consider the climactic attack on the Death Star. Why is Luke piloting the ship and firing the guns, instead of R2D2? The robot’s reflexes are infinitely faster than the human pilot’s. More to the point, exactly how long does Luke spend flying in that trench leading to the access tunnel? Some minutes, that’s for sure, based on the number of conversations he has with Han Solo and Obi Wan Kenobi. The Death Star has been described as being the size of a small moon. At most it has a radius of 2,000 kilometers, giving it a maximum circumference of somewhat over 12,000 kilometers. If Luke’s flying at 75,000 kilometers per second, he’d circle the Death Star six times every second. Obviously, he’s traveling a lot slower. But then how does he dodge those ray cannons shooting laser beams that travel at light speed? The universe of Star Wars is even less logical than the universe of Star Trek.
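The Death Star numbers check out the same way. A quick sketch, using only the chapter's own figures (a 2,000-kilometer radius and full impulse at 75,000 km/s):

```python
from math import pi

IMPULSE = 75_000       # km/s, full impulse as assumed throughout this chapter
R_DEATH_STAR = 2_000   # km, a generous radius for a "small moon"

circumference = 2 * pi * R_DEATH_STAR
orbits_per_second = IMPULSE / circumference

print(round(circumference))       # -> 12566 km, "somewhat over 12,000"
print(round(orbits_per_second))   # -> about 6 circuits every second
```

At full impulse, the trench run would be over before a single line of dialogue.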
Humans are always shown in control of space battles for the simple reason that people find the concept of humans being out-thought or out-maneuvered by a machine distasteful. We’re back to the original series’ mistrust of computers, though better disguised. One of the basic mantras of this belief is that computers can’t compete with humans because machines are incapable of original thought. Dare we observe that in a future of artificially intelligent computers, instantly remembering ten thousand battle scenarios might even the odds?
How close are we to this Star Trek future? In 1995, the Army Medical Department Center and School opened a $7.3 million Battle Simulation Center at Camp Bullis, Texas. The 13,000-square-foot, high-tech facility is designed to use computer-based scenarios to teach medical staffs how to plan and carry out medical missions during major wartime campaigns. Computers simulate battlefield environments and train participants on the best ways to treat casualties and use supplies.
The Battle Simulation Center is merely one of the many projects that form a part of the U.S. Army’s STRICOM, which stands for Simulation, Training and Instrumentation Command. This high-tech branch of the Army is working on developing new warfighting concepts using simulation technology. One area of STRICOM is devoted entirely to Inter-Vehicle Embedded Simulation Technology (INVEST), which would enable fighting vehicles and stations to use common, reusable simulation components and scenarios. One of the goals of the system is to enable “direct-fire” or “line-of-sight” interactions between live and virtual systems.1 Project STRICOM a hundred years into the future, maybe much less, and you have the battle scenarios described in this chapter.
Battles in space are going to be machine against machine. Humans aboard ship are going to be spectators, nothing more. Besides, if we take the lessons of the previous chapter to heart, it’s quite probable war in space will involve one ship trying to infect its opponent with a computer virus. Why waste resources on photon torpedoes when a simple subspace transmission can cripple or destroy the enemy in milliseconds?
5
Artificial Intelligence
AI, or artificial intelligence, is a common term in the Star Trek universe. Yet it’s rarely explained or even documented. In many ways it seems as much technobabble as “dilithium crystals.” However, if we take a closer look at the computers of Trek we can deduce quite a bit about their AI abilities from the way they act.
Landru is a massive computer that has ruled Beta III for hundreds of years (“Return of the Archons,” TOS). Landru acts to protect and preserve the culture of the world. It is self-aware and destroys what it considers threats to society, including busybody space travelers. In fact, it is so protective that it has insulated the planet from all outside influences or change for centuries, reducing its human population to childlike servitude.
Landru is an artificially intelligent machine. It thinks and analyzes information, but only in a very basic way. It views the world in terms of yes and no, true or false, black or white. There is no “maybe” or adaptability in its programs. The complex idea of harm has been narrowed down to the simple, linear concept of physical harm—and the opposite idea, good, has been equated with physical safety. Landru is another anachronism blown up to gigantic speed and power, although in this case the parody is clearly intentional. It is a creation of the 1960s, when artificial intelligence was viewed primarily as the reduction of all thought processes to a series of if/then questions. This reasoning style was inadequate to deal with ambiguity or conflicting values.
Is AI the strict logic of Landru, or something entirely different?
By definition, artificial intelligence has to do with the ability of computers to think independently. Of course, the concept revolves around the basic question of how we define intelligence. Machine intelligence has always been a compromise between what we understood of our own thought processes and what we could program a machine to do.
Norbert Wiener, one of the greatest scientists of this century, was among the first to note the similarities between human thought and machine operation in the science of cybernetics that he helped found. Cybernetics is named after the Greek word for helmsman. Typically, a helmsman steers his ship in a fixed direction: toward a star or a point on land, or along a given compass heading. Whenever waves or wind throw the ship off this heading, the helmsman brings it back on course. This process, in which deviations result in corrections back to a set point, is called negative feedback. (The opposite, positive feedback, occurs when deviations from a set point result in further deviations. An arms race is the classic example.) The most famous example of negative feedback is a thermostat. It measures a room’s temperature, then turns the heat on or off to keep the room at a desired temperature. Wiener theorized that all intelligent behavior could be traced to feedback mechanisms. Since feedback processes could be expressed as algorithms, this meant that theoretically, intelligence could be built into a machine.
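The thermostat's negative-feedback loop is simple enough to sketch in a few lines of code. This is an illustrative toy model, not any real device's control logic; the gain constant and update rule are invented for the example:

```python
def thermostat_correction(temperature, set_point, gain=0.5):
    """Negative feedback: the correction always opposes the deviation."""
    error = set_point - temperature
    return gain * error

# A cold room drifts back toward its 20-degree set point.
temp = 10.0
for _ in range(20):
    temp += thermostat_correction(temp, set_point=20.0)

print(round(temp, 2))   # -> 20.0: the deviation is corrected away
```

Flip the sign of the correction and you get positive feedback: each deviation feeds the next, and the "room" runs away from its set point instead of settling on it.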
This simple way of looking at human logic and applying it to machines provided a foundation for computer-science theory. Early artificial intelligence attempted to reduce our thought processes to purely logical steps and then encode the steps for use by a computer.
As noted in Chapter 1, a computer functions at its lowest level by switching between two states: binary one for TRUE, and zero for FALSE. Circuits are made from combinations of ones and zeros. This fact about circuits carried some inherent limitations: It meant that computers could calculate only through long chains of yes-no, true-false statements of the form “if A is true, go to step B; if A is false, go to step C.” Statements had to be entirely true or entirely false. A statement that was 60 percent true was vastly more difficult to deal with. (When Lotfi Zadeh began introducing partially true statements into computer science in the 1970s and 1980s—for example, “The sky is cloudy”—many logicians argued that this was not an allowable subject. The field of logic that deals with partially true statements is called fuzzy logic.) Ambiguity, error, and partial information were much more difficult to handle. Computers, whose original function, after all, was to compute, were much better equipped to deal with the clean, well-lighted world of mathematical calculation than with the much messier real world. It took some years before computer scientists grasped just how wide the chasm was between these worlds. Moreover, binary logic was best suited to manipulating symbols, which could always be represented as strings of ones and zeros. Geometric and spatial problems were much more difficult. And cases where a symbol could have more than one meaning provoked frequent errors.
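Zadeh's fuzzy logic replaces the strict TRUE/FALSE of binary circuits with degrees of truth between 0 and 1. A minimal sketch of his standard operators (AND as minimum, OR as maximum, NOT as complement); the example statements and their truth values are invented for illustration:

```python
# Degrees of truth for two partially true statements.
cloudy = 0.6   # "the sky is cloudy" is 60 percent true
windy = 0.3    # "it is windy" is 30 percent true

fuzzy_and = min(cloudy, windy)     # cloudy AND windy -> 0.3
fuzzy_or = max(cloudy, windy)      # cloudy OR windy  -> 0.6
fuzzy_not = round(1 - cloudy, 1)   # NOT cloudy       -> 0.4

print(fuzzy_and, fuzzy_or, fuzzy_not)
```

When every truth value is exactly 0 or 1, these operators collapse back into ordinary Boolean logic, which is why fuzzy logic is a generalization rather than a replacement.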
This older school of AI is what we call the top-down approach—the heuristic IF-THEN method of applying intelligence to computers. Very methodical, very Spocklike, very much like the Emergency Holographic Medical Doctor on Voyager, and corresponding to the way computers think on the original series.
A breakthrough decade for top-down AI was the 1950s. Herbert Simon, who later won a Nobel Prize for economics, and Allen Newell, a physicist and mathematician, designed a top-down program called Logic Theorist. Although the program’s outward goal was to produce proofs of logic theorems, its real purpose was to help the researchers figure out how people reach conclusions by making correct guesses.
Logic Theorist was a top-down method because it used decision trees, making its way down various branches until arriving at either a correct or an incorrect solution.
A decision tree is a simple and very common software model. Suppose your monitor isn’t displaying anything—that is, your computer screen seems to be dead. Figure 5.1 is a tiny decision tree that might help deduce the cause of the problem.
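In code, such a tree is just a chain of yes/no branches. The questions and diagnoses below are illustrative stand-ins, not the contents of the book's Figure 5.1:

```python
def diagnose_dead_screen(plugged_in, powered_on, computer_running):
    """Walk a tiny decision tree, one yes/no branch at a time."""
    if not plugged_in:
        return "Plug the monitor in."
    if not powered_on:
        return "Turn the monitor on."
    if not computer_running:
        return "The computer itself is off or has crashed."
    return "Check the video cable and the video card."

print(diagnose_dead_screen(plugged_in=True, powered_on=False,
                           computer_running=True))
# -> "Turn the monitor on."
```

Every path through the tree is a fixed sequence of entirely-true-or-entirely-false tests, which is exactly the top-down, IF-THEN style of reasoning described above.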
Using this approach, Logic Theorist created an original proof of a mathematical theorem, and Simon and Newell were so impressed that they tried to list the program as coauthor of a technical paper. Sadly, the AI didn’t land its publishing credential. The journal in question rejected the manuscript.
In “The Changeling” (TOS), a top-down computer traveling through space, Nomad, beams onto the Enterprise. It scans a drawing of the solar system and instantly knows that Kirk and his crew are from Earth. An insane robot with artificial intelligence, Nomad mistakenly thinks that Kirk is “The Creator,” its God. According to Spock, a brilliant scientist named Jackson Roykirk created Nomad, hoping to build a “perfect thinking machine, capable of independent logic.” But somehow Nomad’s programming changed, and the machine is destroying what it perceives to be imperfect life-forms. Spock eventually concludes that “Nomad almost renders as a life-form. Its reaction to emotion [like anger] is unpredictable.”