Japanese consumer electronics giant Sony was one of the first to officially throw down the gauntlet. In March of 2014—just a week before Facebook announced its plans to acquire Oculus—Sony revealed it was in the early stages of developing its own VR headset, code-named Project Morpheus, which would function as an accessory for its PlayStation 4 video game console. “We see it first and foremost as another way of building vibrancy and value into the PlayStation ecosystem,” Andrew House, then president and CEO of Sony Computer Entertainment, told me a few months later. “It’s unusual for us to come out and talk about a product this early in the process, but by opening up to the development community and not doing everything behind closed doors, it’s got a much better chance of succeeding. And the wave of interest we’re getting is incredible.”
Google entered the fray in June of 2014 when it simultaneously announced and released a surprisingly low-tech VR solution at the end of a keynote address during its annual developers conference: a kit made of corrugated cardboard with two built-in lenses, which could be folded and taped to form a box just big enough to hold a smartphone. The device, called Google Cardboard, began its development only six months earlier as one of the company’s “20% time” projects, where employees are encouraged to pursue interests outside of their day job. The idea’s origins, however, probably dated back to 2012, when USC professor Mark Bolas released a similar DIY kit called the FOV2GO viewer.
Google didn’t intend for Cardboard to be a stand-alone product; the design was released to the world under an open-source license, free for anyone to produce, distribute, and either give away or sell for a couple of dollars. Instead, Google hoped the viewer would serve as a gateway to its other products, like smartphones running its Android operating system, or as a VR interface for applications like YouTube and Google Maps. The announcement also served, according to some tech industry observers, as a way to mock Facebook’s $2 billion acquisition of Oculus. It seemed like Google founders Sergey Brin and Larry Page were telling their Silicon Valley neighbor Mark Zuckerberg: “Good luck selling $300 headsets. We’re going to give ours away.”
On March 1, 2015, Taiwanese consumer electronics company HTC unveiled another high-end headset, the Vive, the result of a collaboration with video game and software developer Valve Corporation, based in Bellevue, Washington. “We believe that virtual reality will totally transform the way that we interact with the world,” then HTC chief executive Peter Chou said at an event announcing the device. “It will become a mainstream experience for general consumers . . . the possibilities are limitless.”
At the time, HTC was mostly known as a smartphone manufacturer, but was expanding its product line to include portable electronics like fitness trackers and digital cameras. Meanwhile, Valve was one of the video game industry’s most respected software companies—its very first product, the 1998 first-person shooter Half-Life, is widely regarded as one of the most influential games ever—and Steam, its digital distribution platform for PC gaming, had dominated the industry’s digital sales channel since its launch in 2003 as a platform for the company’s own titles.
* * *
Valve had worked on virtual reality for years, and in many ways was directly responsible for the growth and success of the Oculus Rift. The project that would become the Vive began in earnest in 2011, after programmer Michael Abrash—an alumnus of companies including Microsoft and id Software—joined Valve and began working with a team building wearable displays. By 2012, the group had developed an ungainly prototype that amounted to an LCD screen attached to head-mounted digital cameras. The device kept track of its user’s orientation and movement by scanning the room, identifying a series of visual markers (basically large square bar codes) mounted on the walls, and working out where the device was located in relationship to these landmarks. Valve’s engineers loved the way the system allowed for “room-scale” VR, where users could walk around their physical space and have that movement reflected in the virtual world. But they weren’t eager to go all in on a system that required consumers to post weird bar codes all over their room.
So Valve kept investigating the technology, encouraged other companies to come up with better solutions, and supported some of the most promising projects. When Oculus’s Kickstarter campaign launched in August 2012, a widely distributed promotional video for the Rift featured endorsements from both Abrash (“I’m really looking forward to getting a chance to program with it and see what we can do”) and Valve CEO Gabe Newell (“It looks incredibly exciting, if anybody’s going to tackle this set of hard problems, we think that Palmer’s going to do it”). Newell’s recommendation, in particular, lent instant credibility to the project: a headline on the video game news site Kotaku proclaimed that “John Carmack and Gabe Newell Are Very Excited About This Virtual Reality Gaming Headset.”
Meanwhile, Valve continued to develop its own hardware and software. By January 2013, the company had developed a monocular prototype called the Telescope. When users looked through a lens and moved the device around, it made it appear as if they were gazing into a computer-generated world. In March, Valve released a free VR Mode update to its game Team Fortress 2, allowing the multiplayer first-person shooter to run on virtual reality headsets, including the upcoming Rift development kit. And in April, the company’s engineers completed their first prototype for a low-persistence, head-mounted display with built-in positional tracking—an oversize and ugly piece of hardware that was never meant to leave the lab, but represented a big first step toward a commercially viable virtual reality headset.
Valve’s team continued to tinker over the spring and summer, and by fall they had something they were ready to show off—a new system that combined optical tracking, motion sensors, high-quality optics, and low-latency displays into a demo called the Room. Starting in September, engineers and developers from around the tech world made the trek to Bellevue for a tour of eighteen virtual spaces ranging from urban street scenes to the surface of Mars. Inside Valve’s room-scale VR environment, the simulations seemed especially immersive and real; since users could move around the actual room, it made them feel present in the virtual one. The demos played to the strengths of the system—users had to walk around obstacles, duck under pipes, and dodge swinging mechanical arms. Witnesses called the experience “absolutely incredible,” “world changing,” and “lightyears ahead of the original Oculus Dev Kit.”
That last claim was of particular concern to one early visitor to the Room—Oculus VR CEO Brendan Iribe. In September 2013, a year after the Rift Kickstarter had ended, the Oculus management team was working on finalizing its prototypes and getting development kits out the door. But when Michael Abrash called Iribe and invited him to visit Valve to try out the demo, Iribe dropped what he was doing and jumped on a plane to Washington.
“Abrash called and said, ‘Brendan, we have something I think you’re going to want to see,’” Iribe said. “When Abrash calls and says that, you go.”
In Bellevue, Abrash and Valve lead VR engineer Atman Binstock led Iribe to a small room where the walls were covered with square bar codes—the fiducial markers that allowed the system to track its location in space. As they entered, Abrash made an offhand comment to Iribe.
“[He] said, ‘You know, no one’s gotten sick yet, I’m curious to see what you think,’” Iribe remembers. “I said, ‘Not even you?’ He said, ‘No.’ Michael and I are the most sensitive guys out there. Literally a few head turns and we’re out. So I was intrigued but very skeptical. Because it had still been hard for me to experience VR for long periods of time—or even short periods of time, to be honest.” Abrash strapped Iribe into a clunky 3-D-printed headset with exposed circuit boards and dangling wires, and then started the demo.
“Then came this game-changing moment, a moment that I will absolutely never forget, when I knew VR was really going to work,” Iribe said. “It was going to work for more than just enthusiasts and nerds like us. It was going to work for the entire world. As I looked around, I felt great, and I felt like I was there. All of a sudden, the switch in the back of my head flipped. Instead of thinking, ‘Wow, this is a really neat VR demo’ I was in it, and I believed I was there . . . that’s the magic of presence. I hadn’t felt it before until that moment, and it felt great.”
Iribe left the room with a new sense of purpose. “The bar had been set, this was it,” he said. “This was what we had to deliver for the consumer VR to really work. You had to feel like you were there comfortably. You had to get presence.”
He also left knowing that while Oculus was far closer to a consumer-ready product, Valve’s technology offered a better experience, and that he’d need their help in order to catch up. So over the following months, the two companies kept talking and sharing their knowledge. Valve set up one of their demo spaces, known as the Valve Room, at Oculus headquarters. In January 2014, the two companies announced that they would collaborate to “drive PC VR forward,” and Valve said it had no plans to release its own VR hardware. In March, at Valve’s own developers conference, Abrash even urged the company’s developers to check out the Rift and said Oculus was the company most likely to ship a viable VR headset within the next two years.
Then Facebook started talking to Brendan Iribe about an acquisition, and communication between Oculus and Valve came to a sudden halt. On March 11, Atman Binstock left Valve to lead a new Oculus R&D team based in Seattle. On March 25, Facebook went public with the details of the $2 billion deal. And three days later, Michael Abrash left Valve to work as chief scientist at Oculus.
Now that it had access to Facebook’s wallet, Oculus could poach whatever talent it needed, build its own teams, and throw money at whatever technical problems popped up between the release of the first Rift development kit and an eventual consumer version of the product. But Valve now found itself without key personnel or a hardware partner. Its years of research had produced impressive technology, but the company had no clear route to commercializing what it had discovered.
And that’s where HTC came in. The Taiwanese electronics company was looking for ways to expand its business beyond smartphones, and its Future Development Lab had tinkered with virtual reality hardware enough to realize the technology could be the next big thing. So the company reached out to Valve, and sometime in the spring of 2014, HTC cofounder Cher Wang sat down with Gabe Newell and worked out a deal.
* * *
Less than a year later, on March 5, 2015, just a few days after HTC and Valve officially announced their virtual reality headset, I got to try the Vive at the annual Game Developers Conference (GDC) in San Francisco, California. While Oculus had dominated the news in the period following the announcement of the Facebook deal, HTC and Valve had been busy building a high-end VR system to compete with the Rift—and now thousands of professional video game designers were practically beating down the doors of the showroom to get their first look at it.
My demo took place in a small room constructed in the south hall of the Moscone Center, a corner of the convention complex hidden away from the public somewhere deep under Howard Street, behind multiple security checks and a phalanx of public relations agents. The space was cold, heavily air-conditioned so that the heat produced by the high-end gaming PC in a corner didn’t turn it into a sauna. A tangle of cords ran from the PC to a collection of strange objects made from plastic, glass, and nylon. After I entered, an attendant started picking through the inventory and strapping pieces on my body: the large black Vive headset, its surface dotted with indentations that had sensors at the center; two SteamVR controllers, each the size of a TV remote but with fewer buttons and a round touch pad. All of this was connected to a harness that snapped around my waist and tethered cables going to my head, my hands, and the PC in the corner. When the attendant pulled the headset down over my eyes, I was momentarily in darkness. I felt the cold air in the room, heard the hum of the PC, gripped the controls, and felt like a space monkey ready to blast off into orbit.
When HTC announced the Vive, they billed the device as the first VR headset to offer a full room-scale experience—a headset that let users “get up, walk around and explore your virtual space, inspect objects from every angle and truly interact with your surroundings.” The hardware was similar to what Valve originally developed for the Room, but where that system used a camera on the headset to see its position relative to markers on the walls, the Vive’s tracking system, code-named Lighthouse, turned the idea inside out.
Instead of cameras, the system used projectors—two little boxes attached to walls in the demo room, each of which contained an array of LEDs and two scanning lasers. In order to track the location of the Vive hardware, the Lighthouse boxes first emitted a quick burst of infrared light, and then the lasers swept across the room, one from side to side and one from floor to ceiling. When tiny sensors on the Vive headset detected the infrared flash, they started counting the microseconds until they saw the beam from the lasers. Because the computer knew exactly how long each laser sweep took to reach each sensor, it could calculate the angle of every sensor relative to the base stations and build a model of where the headset sat in the room. Repeat this process sixty times a second, and the Lighthouse could track a user’s precise position, orientation, and movement.
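The timing arithmetic behind that sweep can be sketched in a few lines of Python. This is only a rough illustration of the principle, not Valve’s actual tracking code; the 60 Hz sweep rate and all of the names below are assumptions made for the sake of the example.

```python
# Illustrative sketch only: not Valve's firmware. Assumes each laser
# completes one full 360-degree sweep of the room 60 times per second.
ROTOR_HZ = 60.0

def sweep_angle_deg(dt_seconds):
    """Angle the laser has rotated between the sync flash and the
    instant a sensor sees the beam, in degrees."""
    return dt_seconds * ROTOR_HZ * 360.0

# A sensor hit 1/240 of a second after the flash sits a quarter turn
# into the sweep. One such angle per laser (horizontal and vertical)
# defines a ray from the base station toward that sensor; with many
# sensors at known spots on the headset, the full pose can be solved.
horizontal = sweep_angle_deg(1 / 240)  # a quarter turn, about 90 degrees
vertical = sweep_angle_deg(1 / 480)    # an eighth of a turn, about 45 degrees
```

The key design point is that the headset only needs cheap photodiodes and a timer; all the expensive optics live in the base stations, which is what let the system scale to tracking controllers as well.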
And because the Lighthouse flooded an entire room with light, it could track more than just a headset. Each of the controllers terminated in a chunky hexagon of black plastic that was studded with the same IR sensors embedded in the headset. As a result, users could control a simulation either by pushing buttons on each controller or by simply pointing and gesturing. The system would always know where its users’ hands were and what they were doing with them.
The value of the Lighthouse system was apparent from the very start of my Vive demonstration. It began in a simple virtual room, a sort of blandly high-tech space that could have been torn from any of a dozen cyberpunk novels and movies. But even though the room itself was uninteresting, what was inside it instantly grabbed my attention: two virtual versions of the SteamVR controller floating right where my body’s own proprioceptors told me they should be. For the first time, I could feel physical objects in my hands and see them in virtual reality.
When I moved my arms, the virtual controllers moved with me. Then I moved my right thumb to press down on one of the controllers’ touch pads, and something unexpected happened: a tiny red balloon inflated out of the end of the device and then, with a pop, floated off into the air in front of me. I knew it was a simulated object, but I had created it with my own hands, so I reached forward to touch it, out of instinct—and batted it away off the end of the controller. I pressed the button again, made another balloon, and then punched that one, bouncing it off into the distance.
It was a simple thing, but the effect was profound: I’d felt present in VR before but had never felt so physically connected. I could touch and manipulate virtual objects the same way I did in the real world—with my hands, with movement. The controllers bridged realities and turned me from a passive observer into an active participant.
Another part of the demo drew me in even further. When it started, I found myself standing in a dark, featureless space. The controllers were still in front of me but had changed appearance slightly. The one in my right hand now had a point on the end, like a stylus, and the left displayed a small color wheel, like you might see in a computer painting program or an art textbook.
I knew what to do. I pointed the stylus at the wheel, clicked a button, and selected a color. Then, squeezing the trigger, I waved my hand through the air and painted a bright floating ribbon. The room was my canvas, and my hands were the palette and brushes. I drew circles, then a smiling face, then wrote my name in the air, all in different colors. Before long, I’d surrounded myself with little virtual sketches, and I stood at the center of a gallery of my own making, its walls made of paint, all just an arm’s length away.
As I admired my handiwork, I heard a disembodied voice break in from outside the simulation. The HTC attendant who was running the demo had been watching my progress on a computer monitor. “Why don’t you walk a few steps straight ahead,” he said, “and then take a look back at what you’ve created?”
In my excitement over the SteamVR controllers, I’d forgotten the real point of the Lighthouse system was to allow users to move around in virtual reality; I didn’t have to stay in one place to do all my painting. So I stepped forward, walked through the curtain of painted colors and then into the dark, featureless space where I’d started. After a few paces I turned around and looked back at my creation. A dome of glowing graffiti sat there in the nothingness, centered on the point where I’d been standing.
Now that I realized I could move around, I started experimenting with different objects in the virtual space. I drew a bright yellow orb and then, next to it, a smaller black one; when I walked around them, it looked like the moon eclipsing the sun. I drew a long line across the room, as straight as I could, and then looked at it from the endpoint; the narrow strip of paint almost disappeared in my view, as if I were peering down the length of a wire.
As I scrambled around the virtual space, looking around, above, and below my creations, I forgot that I was actually in a small demo room in a convention center. At one point, while jumping through a virtual hoop, I would have crashed right into a wall if I hadn’t inadvertently triggered a safety mechanism called the Chaperone system—a glowing grid, like a virtual fence, that appears in the virtual world as you approach the boundaries of your safe play area. When the Chaperone faded up into view, I stopped and let out a small gasp—in part because I remembered where I was, but also because the bright grid against the dark space looked an awful lot like the inside of a Star Trek holodeck.
Defying Reality Page 11