According to the Star Trek: The Next Generation Technical Manual, 1 the Enterprise contains four main holodecks, located on Deck 11. Twenty smaller holographic units (probably similar to the holosuites in Quark’s bar) are on Decks 12 and 33. The holodecks use holograms and replicator technology to create realistic and believable simulations.
The holodeck creates illusions in a variety of ways. The gridlike walls can generate images of immense distance, such as the ocean in Generations or the crowded vastness of a nineteenth-century London cityscape in “Elementary, Dear Data” (TNG). Holograms are routinely projected onto the deck for scenery, creating everything from landscapes to ancient fortresses. Most of these background features, and the characters moving through them in non-active roles, have no need for physical form and are obviously mere projections. These effects are merely extensions of today’s virtual reality programs.
What’s virtual reality? It’s a computer-generated world in which we move and interact with objects, other real people, and virtual reality people. It’s a place that isn’t really there but that offers the powerful illusion of existence.
Virtual reality today comes in two flavors. One surrounds you with three-dimensional objects and scenes so that you feel you’re walking through them, visually immersed in the virtual world. This effect requires equipment: virtual-reality goggles, for instance, or specially equipped rooms.
The second type of virtual reality appears before you on a two-dimensional screen, such as your computer monitor. The computer graphics and programming are so well done that a full three-dimensional world comes alive on your two-dimensional screen. Many computer games are forms of virtual reality. They aren’t known as virtual reality games, though, simply as three-dimensional games with some built-in artificial intelligence. Yet when we play them, we’re there. Much of the basic programming for this kind of on-screen VR is the same as for the more elaborate kind. The computer doesn’t care whether the virtual space it constructs is an image on a screen or a three-dimensional holographic projection.
On-screen virtual reality also exists as a result of a special programming language called the Virtual Reality Modeling Language, or VRML 2.0. Uses for interactive VRML worlds include business applications, such as walking people through the internals of equipment, showing them how to fix a machine from different angles, letting them walk through an on-line shopping store, exploring battle simulations, and cruising through the layout of a new public sports arena.
If we wanted to build a virtual world, we might begin as God did, with a light source. We supply numbers defining both direction and intensity.
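In VRML 2.0, for example, such a light might be written like this (the direction vector and exact values here are only illustrative):

    DirectionalLight {
      direction  0 -1 0    # light rays travel parallel to this x y z vector (here, straight down)
      intensity  1         # brightness of the light
      color      1 1 1     # RGB value: full red, full green, full blue
    }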
The DirectionalLight is like a stage floodlight for a virtual reality scene. It illuminates the scene with light rays parallel to the direction, a vector supplied by x, y, and z coordinates.
The intensity is the brightness of the light, and the color is the Red-Green-Blue (RGB) value that defines the light’s color. In the RGB example of 1 1 1, each 1 represents a hexadecimal code of ff, meaning Full Red, Full Green, and Full Blue. With 1 1 1, the total color combination is white. Therefore, our light is bright white in this example.
As a caveat, you might notice that the color value makes intensity somewhat redundant. Light emission is approximately equal to intensity times color, so with the color turned to maximum white, what’s the point of reducing intensity? You can just as easily reduce the color from full white to something less intense.
We might want to specify background textures or images for the ceiling (such as a sky), ground (some grass perhaps), and a wraparound world (perhaps a forest that encircles us as we move through the scenes). Or, for fast loading and easier lighting, we can just specify background colors in gradients.
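One hedged way to write such a gradient background in VRML 2.0, with illustrative colors and angles:

    Background {
      groundAngle [ 1.2, 1.4, 1.57 ]             # cutoff angles in radians, measured up from straight down
      groundColor [ 0 0.8 0, 0.3 0.6 0.2,
                    0.5 0.6 0.3, 0.6 0.7 0.5 ]   # four RGB colors, from straight down out to the horizon
      skyAngle    [ 1.2, 1.4, 1.57 ]             # cutoff angles, measured down from straight up
      skyColor    [ 0 0.2 0.7, 0.2 0.4 0.8,
                    0.6 0.7 0.9, 0.8 0.85 0.95 ] # four RGB colors, from straight up out to the horizon
    }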
The groundAngle supplies a cutoff angle for each groundColor. In the example, we have four groundColor values separated by commas. Each groundColor is an RGB value, and the first (0 0.8 0) is what we might see if looking straight down. So there’s one more groundColor than groundAngle.
The colors for the sky are specified in the same way: one more skyColor than skyAngle, with the first skyColor being the RGB value we see when looking straight up.
These are very simple examples. Rather than supply colors for the ground and sky, we can instead designate background images for the entire virtual reality scene: front, back, right, left, top, and bottom. Using this second method, we essentially define a cube of images, which together define a panorama surrounding our virtual reality world. We can place clouds in the sky, or on the floor. We can place mountains in the distance, or on the ceiling.
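In VRML 2.0 this is done by giving the Background node image files instead of color lists; a minimal sketch, with hypothetical file names:

    Background {
      frontUrl  "forest_front.jpg"
      backUrl   "forest_back.jpg"
      leftUrl   "forest_left.jpg"
      rightUrl  "forest_right.jpg"
      topUrl    "clouds.jpg"        # the ceiling of the cube
      bottomUrl "grass.jpg"         # the floor of the cube
    }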
At this point in coding a VR world, we have to move beyond the easy steps of defining the sky and ground. We have to create the objects that will fill the world, and we must make the objects interact and move.
Understanding virtual reality code requires a basic grasp of object-oriented programming (OOP). A full treatment is way beyond the scope of this book, but to get a feeling for the holodecks, which are virtual reality worlds, we have to start somewhere.
Think of OOP as a hierarchy of objects. Each object describes a “thing”: what it looks like, what it does, the data it uses. We might define various VR objects and some of the components that enable them to interact.
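For example, here is a hedged VRML 2.0 sketch of a prototype called Snippet, a simple brick; the field names and default values are invented for illustration:

    PROTO Snippet [
      exposedField SFColor snippetColor    0.7 0.3 0.2   # brick red by default
      exposedField SFVec3f snippetPosition 0 0 0         # where this brick sits in the world
      exposedField SFVec3f snippetSize     1 1 1         # scale factors for the basic brick shape
    ]
    {
      Transform {
        translation IS snippetPosition
        scale       IS snippetSize
        children [
          Shape {
            appearance Appearance {
              material Material { diffuseColor IS snippetColor }
            }
            geometry Box { size 2 1 1 }    # the brick's basic proportions
          }
        ]
      }
    }

Each Snippet { } statement in the world file then produces another brick; Snippet { snippetColor 0.4 0.4 0.4 snippetPosition 3 0 0 } would place a gray one a few meters away.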
PROTO Snippet defines an object called Snippet that we can use repeatedly in the program without consuming extra resources. Snippet itself is a simple three-dimensional brick.
Each exposedField can be accessed from other parts of the program, for example, to change the color of each Snippet we create. An exposedField implicitly knows how to handle two event types: an incoming set_ event that changes the field value, and an outgoing _changed event that sends the exposedField’s changed value to another node. In the example, code can change the color, position, and size of the Snippet object.
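Assuming a brick instance named Brick (DEF Brick Snippet { }), a second instance named OtherBrick, and a ColorInterpolator named Fader, all hypothetical, either implicit event name can appear in a ROUTE statement:

    ROUTE Fader.value_changed        TO Brick.set_snippetColor        # push a new color into the brick
    ROUTE Brick.snippetColor_changed TO OtherBrick.set_snippetColor   # echo the change to a second brick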
Simple geometric constructions enable us to code the appearance of each Snippet. Thus, much of the VR world can be built up from variations on specific aspects of fundamentally identical parts.
Suppose, to make the metaphor tangible, this particular Snippet is an actual brick—or a representation of one—and you see this Snippet resting on the ground. In the VR world, you might see 45 Snippet blocks on the ground. Or 100 of them. Or only one.
Each looks like a real brick. Each has a different color. Each feels real to your touch in the VR world. It’s all programming code, all created from one tiny Snippet defined in a VR software language.
You’re immersed in this Snippet-filled VR world, just as Trek characters are immersed in holodeck adventures. In reality, you’re sitting on a chair in your living room. But your brain’s immersed in a fantasy world, the Snippet world on your computer screen (or delivered directly into your brain through your eyes via goggles).
Perhaps you pick up the brick and hurl it at a huge spider web obstructing your entrance to the cave of Dr. Cruelman. In reality, you’re still sitting on a chair in your living room. Only in virtual reality are you throwing the brick at a spider web.
The PlaneSensor notices that you moved the Snippet brick. The ROUTE statements and vrmlscript enable the code to move the Snippet brick on the computer screen. It seems to you, in real time, that you lifted and hurled the brick. No pause. No frame jitter. You continue to play the adventure game.
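A hedged sketch of that wiring, reusing the hypothetical Snippet prototype: a PlaneSensor turns your pointer drag into translation events, and a ROUTE feeds them into the brick's position.

    Group {
      children [
        DEF BrickSensor PlaneSensor { }    # converts pointer drags over sibling geometry into translations
        DEF Brick       Snippet { }        # the brick you grab
      ]
    }
    ROUTE BrickSensor.translation_changed TO Brick.set_snippetPosition

A genuine throw, with an arc and a crash into the web, would need a Script to compute the motion, but ROUTE statements remain the glue.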
Perhaps a spider flies into the scene, angry that you destroyed its web. In the real world, spiders can’t fly. In virtual reality worlds, objects and creatures can do anything we program them to do.
The VR spider might be an object composed of many parts: legs, hair, eyeballs, mouth, ears (we can do anything we want in code), a tail, and pinchers. Perhaps our VR spider has dragon-fire breath, as well. Each part of the spider can be programmed to move in any way our imagination dictates. The dragon-fire breath can spray from its tail or eyeballs. Perhaps when we throw a VR brick at the web, the spider sprays fire from whatever body part is closest to us.
In general, we can program living creatures in VR worlds to do anything we want. The only limitation is our imagination. We can code one prototype spider that defines basic parts of spiders in our VR world. From the prototype, we can then create many other spiders, each of which inherits the basic spider’s properties, then adds to the mix by moving different ways, spraying bombs as well as fire, smiling sweetly and throwing flowers rather than fire, and so forth.
We route actions from one object to another. An action, such as throwing a brick at the spider web, triggers another action, such as the spider flying on-scene and hurling fireballs at us.
For more complex events, the code might be written in vrmlscript, JavaScript, or Java. For example, you throw the brick at the spider web, and I want my coded spider to do three actions and another spider to do four actions; plus, I want these two spiders’ actions to trigger an attack from six spider colonies, which live in giant webs on islands in my VR world. While I can route one event to multiple additional events, when things get this complicated, simple routing statements may misfire during program execution. Using programming languages that offer more sophistication, when you throw that brick at the web, I can trigger complex, even artificially intelligent actions from creatures, settings, and objects anywhere in the world I’ve created.
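A minimal sketch of that fan-out, with every node and event name invented for illustration: a Script node hears that the brick moved and simultaneously starts the clocks that drive each spider's attack animation.

    DEF WebWatcher Script {
      eventIn  SFVec3f brickMoved        # fired whenever the brick's position changes
      eventOut SFTime  attackOne         # wakes the first spider
      eventOut SFTime  attackTwo         # wakes the second spider
      eventOut SFTime  attackColonies    # alerts the six island colonies
      url "vrmlscript:
        function brickMoved(value, timestamp) {
          attackOne      = timestamp;
          attackTwo      = timestamp;
          attackColonies = timestamp;
        }"
    }
    ROUTE Brick.snippetPosition_changed TO WebWatcher.brickMoved
    ROUTE WebWatcher.attackOne      TO SpiderOneClock.set_startTime
    ROUTE WebWatcher.attackTwo      TO SpiderTwoClock.set_startTime
    ROUTE WebWatcher.attackColonies TO ColonyClock.set_startTime

Here SpiderOneClock, SpiderTwoClock, and ColonyClock stand in for TimeSensors that would drive the spiders' animations.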
Even for the twenty-fourth century, the holodeck simulations are extremely sophisticated VR programs. All the edges are perfectly done. There are no walls jutting out in the wrong place and no arches that don’t close correctly. In Trek, the holodeck geometries never become really bizarre in either color or structure. They always maintain their real-life appearance, texture, and touch.
It might be interesting if the holosuite software offered totally imaginative worlds—places constructed like Escher drawings, for example. Today, most virtual reality and three-dimensional games are based on totally fantastic constructs.
So why don’t holodecks have similar worlds? At minimum, during system malfunctions, Escher-type constructions and other gross abnormalities would occur. Holodeck characters wouldn’t always become evil: they would disintegrate, turn into other creatures, or, most likely, cease to exist. Holodeck architectures could turn into holodeck characters, and vice versa. People could go insane inside a holodeck during a system malfunction.
The technology to immerse people in virtual reality worlds began with head-mounted devices that presented three-dimensional views. Sensors picked up hand and head movements, and fed that information into software, which then altered the three-dimensional worldview for the user. Back in the late 1960s, people were already dabbling with this kind of research, though the views were only simple wireframe models.
While a wireframe shows us the corners and lines—the entire grid—of objects, more complex rendering methods show textures, patterns, colors, shine, and shadow. Wireframes today are often used to create initial three-dimensional objects, but once we’re satisfied with how our model looks, we produce fully rendered final versions.
Today’s virtual worlds have become so lifelike that people can become disoriented, thoroughly immersed in their virtual adventures. Still, such virtual fun often requires the user to wear headsets, hand and arm gear that looks like hospital tubes, and other special equipment. Someday these won’t be necessary, but not yet.
Ideally, immersion means that you don’t know the difference between the physical world and the virtual world in which you’re playing. The simultaneous perceptions of what you see and what your body feels are tightly matched. Even a slight disconnection shatters the illusion that the virtual world is real.
In the future, artificial intelligence combined with virtual reality will enable us to create and enter virtual worlds populated by very lifelike creatures, humans, and plants. Real people will enter these worlds and meet their inhabitants. Much as on the holodecks.
Still, there are a few logical problems with the portrayal of virtual reality holodecks in Star Trek.
We wonder, for example, how the ship’s computer stores enough object templates for all of the world’s variations and scenes on the holodecks. Every scene, every object, from a twig to an ocean ripple to a character’s facial mannerisms—everything appears instantly on the holodeck from all angles, with varying lighting quality, possessing unique textures, even retaining correct dimensions at all distances. This is extraordinary virtual reality programming. Faraway objects are never hollow. They aren’t in fog. They are always perfectly clear. Every eye blink, every wrinkle in every piece of clothing as characters move is consistent: Absolutely everything in the holodeck is perfectly coordinated at all times. Of course, the ship’s computer has a huge amount of storage, as calculated earlier. But people in Star Trek can program new adventures for the holodecks, and store and later replay many versions of these adventures. They play a seemingly endless variety of game levels. With unlimited adventures, the holodeck seemingly requires unlimited storage space.
Besides the storage problem, why are the templates, even if stored, retrieved, and displayed, never shown too rapidly or too slowly? A character might jerk, wobble, pass accidentally through a wall, or dip his feet through some rocks. Lips might fall out of sync with words. Leaves might flutter incorrectly. These types of slipups occur in some of today’s best three-dimensional artificially intelligent games.
Yet the holodeck never seems to make mistakes.
The worlds of the holodeck are beyond anything possible today, perhaps even three hundred years from now. Even the greatest computer programs can’t function at a speed fast enough to simulate such complex worlds. In virtual reality, there are always program glitches, yet we don’t see these frame-skipping glitches and three-dimensional mind-destroying vision-klunking problems in any holodeck simulation. The virtual reality is always seamless. And when holodeck programs do malfunction, they always go off into some artificially intelligent routine that places the real people in danger instead of merely displaying fuzzy pictures or disjointed frames. If virtual reality programming is this sophisticated in three hundred years, then a malfunctioning holodeck adventure would just shut down.
Adding to the holodeck’s complexity, replicator technology is routinely used to create inanimate objects to further the illusion of reality. For example, food and drink are served at holodeck bars. There’s also water for swimming and snow for throwing snowballs. While many crewmembers enter the holodeck already dressed for their interactive novels, the holodeck can create the proper clothing for participants, as it does in First Contact.
The holodeck has treadmill-style force fields so crewmembers can walk and run for long periods. The holographic images keep this illusion of movement believable. The computer program controlling the holodeck operates enough of these specialized force fields that different people can actually feel that they are traveling in opposite directions. The code necessary to maintain such an illusion is obviously quite complex. But it isn’t impossible.
Even today we can code repulsion-type forces into virtual objects. Programming statements enable us to ensure that certain objects never collide, that virtual reality characters don’t pass through walls. We can make objects attract one another. We can make objects attract and repulse, given changes in position, distance, and size.
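VRML 2.0, for example, offers a Collision node that keeps the viewer from walking through whatever it encloses; a minimal sketch:

    Collision {
      collide TRUE           # the browser stops the viewer at this geometry
      children [
        Shape {
          appearance Appearance { material Material { } }
          geometry Box { size 10 4 0.5 }    # a solid cave wall
        }
      ]
    }

Attraction and repulsion between objects themselves are not built in; they would be computed in Script code that watches positions and nudges objects toward or away from one another.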
Particle systems, another aspect of three-dimensional animation coding, use forces such as gravity and repulsion to simulate blizzards, fireworks, and explosions. For example, we might spray fire from a volcano, then apply a gravitation and repulsion force, making the fire fall at what appears to be a graceful and natural pace.
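VRML 2.0 has no built-in particle system, but the core trick, applying a gravity step to each particle on every clock tick, can be sketched with a TimeSensor and a Script (all names and constants below are illustrative):

    DEF Spark Transform {
      children Shape { geometry Sphere { radius 0.05 } }    # one glowing ember
    }
    DEF Physics Script {
      eventIn  SFTime  tick
      eventOut SFVec3f newPosition
      field    SFVec3f position 0 5 0     # start above the volcano's mouth
      field    SFVec3f velocity 0 2 0     # initial upward speed
      url "vrmlscript:
        function tick(now, timestamp) {
          velocity.y = velocity.y - 9.8 * 0.04;    // gravity slows the rise, then pulls the spark down
          position.x = position.x + velocity.x * 0.04;
          position.y = position.y + velocity.y * 0.04;
          position.z = position.z + velocity.z * 0.04;
          newPosition = position;
        }"
    }
    DEF Clock TimeSensor { loop TRUE cycleInterval 0.04 }
    ROUTE Clock.cycleTime TO Physics.tick
    ROUTE Physics.newPosition TO Spark.set_translation

A full effect would spawn many such sparks with varied starting velocities, but each one obeys the same simple force rules.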
When we observe the holodeck, we see branches moving, leaves blowing in the wind. Clouds move across the sky. The holodecks are constructed to make everything look natural, with complex systems simulating a natural environment. This requires tons of computing power. But such programs also require an interface for touch, to feel breezes blowing. Do the crewmembers have chips embedded in their fingers so they can feel leaves, weapons, and other objects? Such notions are never mentioned. So how do people feel things and pick up holographic items in the holodeck? What does Captain Picard feel while riding a holographic horse?
In the Star Trek universe, important characters who directly interact with crewmembers are made of replicated matter guided by beams of force operating at molecular levels. However, according to the doctor in Voyager, the matter is not made of molecules, but rather of molecule-sized magnetic bubbles, which can be manipulated by the computer. These creations are artificially intelligent marionettes whose every motion is controlled by the holodeck’s computer system. They’re complete with touch, warmth, body sensations, kissing, smiles from the lovers, and violence from the killers. The holodeck magnetic bubble matter that makes up these puppets is described as partially stable stuff that can’t exist in material form outside the holodeck.
The holodeck computer is connected with the ship’s computer, and thus has access to the vast amounts of information stored in the computer core. The holodeck is capable of creating artificially intelligent imaginary characters (such as Professor Moriarty in “Elementary, Dear Data,” TNG) or artificially intelligent versions of real people programmed with their own personalities, such as Dr. Leah Brahms in the Next Generation episode “Booby Trap,” or Dr. Zimmerman in Voyager. The holodeck can even be used to create artificially intelligent versions of real people with altered personalities.
Transporter and replicator technology are fascinating topics but, as we’ve noted earlier, appear to be impossible by the laws of physics. Magnetic bubbles the size of molecules fall into the realm of physics as well. None of these topics involve computer technology other than in secondary areas such as memory storage. The artificial intelligence exhibited by the holodeck creations is our main concern.
For the holodeck to create a truly believable environment, two types of interaction are necessary. One is interaction among the virtual reality characters themselves; the other is interaction between these holodeck beings and real people.