After the motorcycle ride, Rheingold found himself riding in a convertible with a young blond woman, listening to a pop song playing on the car radio. In the simulation’s finale, he watched a belly dancer perform “with a come-hither look,” a scene that originally featured the smell of cheap perfume, released through a vent near the viewer’s nose whenever the woman drew close.
Though Heilig’s first demos seemed to position Sensorama as a goofy diversion, they were intended to catch the attention of investors who might fund development and marketing of the device for more serious applications. In Sensorama’s patent filing, Heilig described the invention as a tool that armies might use to train soldiers without subjecting them to the hazards of warfare, or that businesses could rely on to teach employees how to use heavy machinery. He even suggested that Sensorama could help solve the growing problem of overcrowding in American schoolrooms: “As a result of this situation, there has developed an increased demand for teaching devices which will relieve, if not supplant, the teacher’s burden . . . accordingly, it is an object of the present invention to provide an apparatus to simulate a desired experience by developing sensations in a plurality of the senses.”
Heilig’s invention was ahead of its time, and he struggled to find investors or customers who understood its potential. In a bid to find funding to continue his research, Heilig pitched Sensorama to vehicle manufacturers including the Ford Motor Company and International Harvester, positioning the device as an interactive showroom display that could entice customers with the simulated experience of driving a new sedan or tractor. Neither company was interested. Eventually, Heilig put his machines into amusement arcades located in tourist destinations like Times Square and Universal Studios. At first the scheme worked, and the machines started earning income—but their complicated design meant they broke easily and spent more time out of order than they did collecting quarters.
Sensorama might have been doomed by its delicate construction and the high cost of producing 3-D movies, but Heilig didn’t give up on his vision. In 1969, he patented a room-sized version of the device, called the Experience Theater, meant to provide the same multisensory experience to an audience of dozens or hundreds of people. The patent describes a huge concave screen, chairs that move and rumble, a system of blowers for simulating wind and distributing aromas, and special polarized glasses that allowed the audience to view a movie in three dimensions.
Though the Experience Theater also failed to reach production, it might sound familiar to anyone who’s ever visited an amusement park and gone on a motion simulator. Modern Disney park rides like Star Tours or Soarin’ look an awful lot like Heilig’s invention, even though his work was completed decades before those attractions opened. In fact, while very little of Heilig’s work ever came to fruition, time has shown the prescience of his vision—it led the way toward modern virtual reality and inspired many other inventors to chase their own dreams of immersive, interactive environments.
* * *
While Morton Heilig was trying to find ways to make filmed pictures look more like the real world, other researchers were trying to teach computers to create convincing artificial environments.
In 1961, electrical engineer Ivan Sutherland began developing Sketchpad, the first interactive computer graphics program, as part of his doctoral thesis at the Massachusetts Institute of Technology. The system allowed users to draw with a light pen on a cathode ray tube, and the digital images they created could be stored, duplicated, or manipulated on the screen. Sketchpad made it possible to visualize and create highly precise diagrams, so Sutherland essentially invented the modern industrial art of computer-aided design.
That alone would have been enough to rank him among the forefathers of virtual reality. But then, in 1965, while he was working as a professor at Harvard University, Sutherland published a paper titled “The Ultimate Display,” which predicted a whole series of innovations that would become critical to the future of VR, including head-mounted displays, eye tracking, motion sensors, and gesture-based controls. Through the use of these technologies, Sutherland imagined creating simulations so convincing that they were indistinguishable from real life—or better yet, simulations of things that don’t actually exist in real life.
“A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world,” he wrote. “For instance, imagine a triangle so built that whichever corner of it you look at becomes rounded. What would such a triangle look like? . . . There is no reason why the objects displayed by a computer have to follow the ordinary rules of physical reality with which we are familiar. The kinesthetic display might be used to simulate the motions of a negative mass. The user of one of today’s visual displays can easily make solid objects transparent . . . Concepts which never before had any visual representation can be shown.”
Decades before the term virtual reality would actually come into usage, Sutherland envisioned exactly that: immersive computer-generated displays, artificial environments with their own rules, and virtual spaces where users could see and do impossible things. Sutherland even imagined an extension of this technology where the virtual became physical—an idea that would enter popular culture twenty-two years later as the holodeck on Star Trek: The Next Generation.
“The ultimate display would, of course, be a room within which the computer can control the existence of matter,” Sutherland wrote. “A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked.”
In 1966, Sutherland started building hardware that would let users take a trip down the rabbit hole. His head-mounted display consisted of an unwieldy helmet with miniature CRT monitors on either side, and binocular lenses pointed at semi-transparent mirrors that reflected each screen. When a user strapped on the device and peered through the goggles, they could see simple wire-frame graphics overlaid on the real world. Two sensors, one mechanical and the other ultrasonic, measured the position of the wearer’s head and the direction of their gaze, allowing the computer to update its graphics as the user moved and looked around the room. And because all this gear made the headset unbearably heavy, it was suspended on wires from a rig attached to the ceiling—a setup that earned it the nickname the “Sword of Damocles,” after the legendary weapon that dangled precariously over a Sicilian king’s throne.
In his first experiments with the hardware, Sutherland produced a simple wire-frame cube that appeared to float in midair in front of the viewer, who could walk around it and examine it from any side. Later experiments put the user inside the cube, drawing a square “room” around them with walls, windows, and a door. “Even with this relatively crude system, the three-dimensional illusion was real,” Sutherland reported.
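The geometry behind that illusion is simple enough to sketch in a few lines of modern code. What follows is a hypothetical illustration in Python, not Sutherland’s actual software: it assumes a simplified head pose (a position on the floor plane plus a yaw angle, roughly the kind of measurement the mechanical and ultrasonic sensors supplied) and a pinhole projection.

```python
import math

# A minimal sketch of head-tracked projection, assuming a yaw-only head
# rotation and a pinhole camera; names and numbers are illustrative and
# are not drawn from Sutherland's system.

def project_point(x, y, z, head_x, head_z, head_yaw, focal=1.0):
    """Transform a world-space point into the viewer's frame and project
    it onto the 2-D display plane. The tracker supplies the head position
    (head_x, head_z) and gaze direction (head_yaw); the computer must
    redraw the scene from this pose every time the wearer moves."""
    # Translate the world so the head sits at the origin...
    dx, dz = x - head_x, z - head_z
    # ...then rotate it by the opposite of the head's yaw.
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    view_x = c * dx - s * dz
    view_z = s * dx + c * dz
    if view_z <= 0.0:
        return None  # the point is behind the viewer, so it is not drawn
    # Pinhole projection: more distant points shrink toward the center.
    return (focal * view_x / view_z, focal * y / view_z)

# A wire-frame cube is just eight corners and twelve edges; projecting
# both endpoints of each edge and ruling lines between them re-creates
# the floating cube a wearer could walk around and inspect from any side.
corners = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (4, 6)]
for corner in corners:
    print(project_point(*corner, head_x=0.0, head_z=0.0, head_yaw=0.0))
```

Redrawing every edge from the freshly measured pose, over and over, is what made the cube appear to hang motionless in space while the wearer walked around it.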
* * *
Meanwhile, academics weren’t the only people interested in the potential applications of virtual reality. The US military was quick to realize that the technology could be put to use on a battlefield, and in 1967 a twenty-three-year-old second lieutenant at Wright-Patterson Air Force Base in Greene County, Ohio, started a project that would eventually earn him the nickname “the grandfather of virtual reality.”
Thomas A. Furness III was born in 1943 and raised in Enka, North Carolina, a tiny factory town that was about as different as possible from the high-tech worlds he’d later help invent. But young Tom took an early interest in science and engineering, and by the time he was in grade school he’d taught himself how to build and repair all kinds of electronic gadgets. His teachers would let him tinker in the classroom while other students covered material Furness had learned long ago; once a week, they’d even hand over the class for “Tommy Time,” and the precocious boy would spend half an hour instructing his own schoolmates and showing off his latest projects. When he was fourteen, the Soviets launched Sputnik into orbit, and Furness decided he wanted to build rockets and travel into space. His junior-year science fair project, a working rocket telemetry system, won an award sponsored by the US Navy at the North Carolina state science fair. When he graduated from high school, he enrolled at Duke University, joined the Air Force Reserve Officer Training Corps, and earned a degree in electrical engineering.
After college, the Air Force sent Furness to Dayton, where engineers were working on ways to improve the high-tech combat aircraft the United States desperately needed as it escalated the war in Vietnam, and assigned him to the Armstrong Aerospace Medical Research Laboratory, or AAMRL.
“My job was to figure out, how in the world do we interface humans with machines?” Furness said. “We had a problem of complexity . . . we had aircraft that had fifty computers on board, and just one operator. How in the world was he going to make sense of all that?”
Furness realized that following the invention of the digital computer, American engineers had “improved” the nation’s fighter jets with new systems to the point that their operators were completely overwhelmed, sealed in cockpits so full of hardware they could barely see out the windows, and faced with systems so complex they could never process all the information at hand. When one pilot gave him a facetious drawing of the “pilot of the future”—a man with six arms so he could handle all the controls—Furness resolved to completely rethink the interface between aircraft and operator.
At first, Furness focused on creating helmet-mounted displays, like Ivan Sutherland’s “Sword of Damocles,” that could superimpose computer graphics onto a pilot’s view of the real world. By tracking head movements, the system could also sense where the pilot was looking, and dynamically update to show information relevant to the system in question or the task at hand. Early prototypes showed promise, so in the 1970s, Furness decided to take the idea even further and develop an entirely virtual cockpit, where pilots could be completely immersed in a computer-generated version of the world.
Furness’s project, known as the Visually Coupled Airborne Systems Simulator, or VCASS, was completed in September of 1982. The centerpiece of the system, a headset nicknamed the “Darth Vader helmet,” consisted of two CRT displays—one for each eye—hanging from a platform and connected to eight mainframe computers running computer graphics software. VCASS filled several rooms and used so much electricity that Furness joked he “had to tell Dayton Power and Light” whenever he was going to power up the system.
Once it was running, VCASS’s stereoscopic displays would show a simple, cartoonish landscape, like an early 3-D video game, that surrounded the user and would update instantly when the user looked around or turned his head. Real-time information on the aircraft’s systems appeared in the virtual environment as needed, and simple icons on the landscape represented everything from airports to enemy fighter jets. The system included eye-tracking hardware, motion sensors, and even a voice input system: pilots could fire a missile by simply speaking a command to the computer.
The Darth Vader helmet offered a radical new way for fighter pilots to control their aircraft. But what Furness created was much more than just a new avionics system: it was a profound new interface between man and machine. Inside VCASS, humans could interact with computers using the same senses, skills, and instincts they used in their everyday life. It was simple, immersive, and easy to understand—so easy that Furness said totally untrained users, including his high school–aged daughters, would strap into the system and learn how to fly a fighter jet in no time at all.
“It was amazing,” Furness said. “We found that instead of looking at a screen . . . it was like we were pulled into another world. The transformation was remarkable. When we did that, all the abilities that we used in the real world could actually now be used in this computer-generated world, which meant that users had much more power, much more bandwidth to and from the brain.”
* * *
Pilots flying inside one of Furness’s rigs might seem to exist inside a virtual reality, but their interactions with the aircraft were still very physical—they had to steer with a control yoke, push pedals to move the rudder, and press all kinds of buttons and switches to perform tasks ranging from turning on the radio to firing an air-to-air missile. What virtual reality still needed was a way to interact with completely virtual objects—to see something in the simulation that didn’t actually exist, and then be able to grab, manipulate, or move it.
Early efforts focused on developing data gloves, high-tech handwear that could track the exact movements of its wearer’s fingers. The first such device was developed in 1977 by University of Illinois scientists Daniel J. Sandin and Thomas DeFanti. It was named the Sayre Glove after a colleague, Richard Sayre, who had given them the idea for the project. The glove was made with light-conducting tubes running along the top of each finger, a light source on one end, and a photocell on the other. When the user bent a finger, it reduced the amount of light that reached the sensor, allowing a computer to approximate how far the finger was flexed. It was lightweight, easy to produce, and inexpensive, but not terribly accurate—really only useful for flipping virtual switches or moving virtual sliders.
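The glove’s sensing principle is easy to model. Here is a minimal sketch in Python with made-up calibration numbers; the function and its values are illustrative and are not drawn from Sandin and DeFanti’s hardware.

```python
# A minimal sketch of the Sayre Glove's light-attenuation trick, assuming
# hypothetical calibration constants; not the original system's code.

def flex_from_photocell(reading, bright, dark):
    """Map a photocell reading to an approximate finger flex
    (0.0 = straight, 1.0 = fully bent). `bright` is the reading with the
    finger straight, `dark` the reading when fully bent; both would come
    from a per-finger calibration step."""
    # Bending the finger kinks the light-conducting tube, so less light
    # reaches the photocell; interpolate linearly between the extremes.
    flex = (bright - reading) / (bright - dark)
    return max(0.0, min(1.0, flex))  # clamp to the valid range

# Example: with calibration bright=1.00 and dark=0.20, a reading of 0.60
# maps to a flex of 0.5 -- a half-bent finger, about the resolution needed
# to flip a virtual switch or drag a virtual slider.
print(flex_from_photocell(0.60, bright=1.00, dark=0.20))  # prints 0.5
```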
In 1982, Thomas Zimmerman, a scientist working at the Atari Research Center in Sunnyvale, California, picked up the research and started building a data glove for his own pet project. At the time, Atari was one of the world’s biggest electronic entertainment and video game companies, and Zimmerman had an idea for a new gadget that would allow people to play “air guitar” and really make music—a glove that could track its user’s hand and finger positions as they pretended to work the frets and strum the strings of an imaginary instrument. Connect it to an electronic synthesizer, and pretending to play would produce real live music.
While he was working on the project, Zimmerman became friends with another coworker at the Atari Research Center, an engineer named Jaron Lanier who shared his passion for music. In 1983, they teamed up to improve the design of the data glove, and over the next two years made so much progress that they would go on to found a new company that gave birth to the modern virtual reality industry: VPL Research.
Chapter 3
CONSOLE COWBOYS
Jaron Zepel Lanier, the man who coined the phrase virtual reality, was born in 1960 in New York City, but when he was just a baby his parents fled bohemian Greenwich Village and moved off the grid to El Paso, Texas, along the US border with Mexico. His mother and father, Jewish survivors of a concentration camp in Austria and a pogrom in Ukraine, didn’t trust the government and wanted to “live as obscurely as possible.” Lanier’s mother died in a car crash when he was only nine, and not long after, a series of illnesses forced him to spend nearly a year in a hospital. When he emerged and returned to school, he was overweight, socially isolated, and a favorite target of local bullies. The few friends he made were outsiders and “oddballs”—including a radar technician from nearby Fort Bliss whom he met in the aisles of a RadioShack while browsing through drawers of transistors and capacitors. The young soldier took the boy under his wing and taught him basic electronics.
After Lanier’s home was destroyed in a fire, his father moved them to the remote village of Mesilla, in the high desert of New Mexico. Young Jaron and his dad lived in a tent and spent their free time building a huge house consisting of multiple interconnected geodesic domes. “The overall form reminded me a little of the Starship Enterprise,” Lanier wrote. It took seven years to complete, but in the process, Lanier developed a taste for creating weird and wonderful environments.
Over time, Lanier made more unusual friends, including his neighbor Clyde Tombaugh, a sixtysomething astronomer who had discovered the dwarf planet Pluto in 1930 and worked as the head of optics research at the White Sands Missile Range. Tombaugh taught him how to grind lenses and mirrors, helped him build a telescope, and introduced him to the high-tech computers at the nearby army lab.
Precocious, brilliant, and restless, Lanier dropped out of school at age fourteen and started taking classes at New Mexico State. (He hadn’t graduated from high school or been admitted to the university, but because of his intelligence and sheer bravado, no one ever bothered to turn him away.) He took music classes, learned about composition and orchestration, and started to think about using computers to make music of his own. He also studied programming and computer graphics and, when he discovered Ivan Sutherland’s research into the Ultimate Display and the Sword of Damocles headset, fell in love with the idea of creating virtual worlds.
“Reading about Ivan’s work was challenging for me, because each sentence took me by storm,” Lanier wrote in his book Dawn of the New Everything: Encounters with Reality and Virtual Reality. “You would eventually be able to make any place and be in it via this device . . . plus, other people could be in there with you. . . .
“I was fifteen years old and vibrating with excitement. I had to tell someone, anyone. I would find myself running out the library door so that I didn’t have to keep quiet; rushing up to strangers on the sidewalk . . . ‘You have to look at this! We’ll be able to put each other in dreams using computers! Anything you can imagine! It’s not just going to be in our heads anymore!’”
A few years later—following stints as a goat farmer, art student, and midwife—Lanier moved to Silicon Valley, found work composing music and sound effects for the nascent video game industry, and eventually programmed games of his own. In 1983, he made an interactive music-making game called Moondust for the Commodore 64 home computer, and the sales produced enough profit for him to set up a garage workshop and get to work on another pet project.