Defying Reality
I asked her if she thought VR porn might even have a therapeutic benefit.
“Definitely. It’s kind of a form of exposure therapy. If you’re terrified to talk to the opposite sex and you talk to them in a virtual space, you can practice the ‘being face-to-face with a girl’ part, and then experience the rest of it later.
“We actually made a dating simulator that was sort of like social education,” she said. “It starts off in a coffee shop, and a version of me is in front of you, and she introduces herself, and then you have the opportunity to respond with questions or comments, and then she’ll respond to your question or comment based on how much of a conversation you’ve had together, so if you are too forward too fast, she is out of there. She does not have time for your nonsense, and it sort of teaches people that you need to be a real person, and don’t go up to a girl you don’t really know and just start talking about your dick, because she won’t like that.”
* * *
—
Why were so many adult enterprises making the leap into VR? Obviously it was for the money—according to Piper Jaffray, VR porn was set to grow into a $1 billion industry by 2020. But there were other reasons to push the tech aside from its commercial appeal. VR makes piracy harder, since the movies must be downloaded and played on specialized hardware; that means the videos can’t be redistributed on the countless “tube” websites that plague the industry, pirating other companies’ videos and surrounding them with ads.
In fact, while most adult entertainment companies worried about other sites taking and distributing their content, the biggest problem facing producers of VR porn was that not enough partners were willing to take the plunge. At the time, the vast majority of VR software and content was sold directly through the companies that produce VR hardware; if a consumer owned Sony’s PlayStation VR headset, they could really only buy content through the PlayStation Store. And as of 2017, none of the major digital distribution services allowed adult content in their stores. Naughty America subscribers had to download files to their computer or phone and then “sideload” them into a VR video player in order to get them to work.
“A lot of these big companies are fearful of getting associated with porn,” said Paul. “I think there’s concern about minors accessing the content, but we’ve had pay-for-view on cable systems for years, so it’s not like that problem can’t be solved technologically. There’s a way you can do verification to avoid that. So I think a lot of it is political.”
The big content platforms might have done well to open up their doors. The VHS videotape standard famously eclipsed Betamax in part because Sony wouldn’t allow porn companies to license its Betamax technology. If Oculus, HTC, Samsung, or Sony became the first to relent and allow adult content in their stores, it could be a big selling point for their hardware. “If you look at the history of technology, anytime anyone’s ever bet against porn, they’ve lost,” said Paul. “Of course we want adoption to happen faster just because it’s our business. But it’ll happen, it’s just a matter of when.”
In the interim, Naughty America didn’t mind waiting for the rest of the world to come around. “There’s always demand for our product,” said Paul. “Sometimes we joke that we’re like a utility, like a water company. We just have to make sure that we monetize it in a way that we can reinvest and keep producing.”
Chapter 11
MAGICAL THINKING
The Plantation Pointe complex was utterly pedestrian: boxy, off-white buildings, a sprawling parking lot, a few scattered palm trees. It was not the sort of place you’d expect to find a seventy-five-foot-tall war machine from another planet.
But there it was—an All Terrain Armored Transport, better known as an AT-AT, somehow transported straight from the Galactic Empire’s motor pool to the Miami metropolitan area. It was stomping around outside while I watched through an open door, staying safe inside.
Or so I thought. Suddenly the whole building seemed to shake, and then the ceiling exploded. Dust and debris rained down all around me. The AT-AT had shot a hole in the roof, and when I looked up through it, I could see the combat quadruped’s massive head against the bright blue sky. It peered down at me, and its twin heavy laser cannons pivoted and fired.
Then in a flash of light, everything vanished. The AT-AT was gone, the hole in the roof was gone, even the door was gone. What I’d perceived as an exterior wall was just a floor-to-ceiling curtain. It had all been an illusion, conjured into being through the lenses of a “mixed reality” headset—the arcane invention of a start-up called Magic Leap.
Like any good magician, founder and CEO Rony Abovitz kept his cards close to his chest. Magic Leap had operated in extreme secrecy since it was founded in 2011. Only a few people got to see its technology, even fewer knew how it worked, and all of them were buried under so many nondisclosure agreements that they could barely admit the company existed.
Yet massive amounts of money were flowing down to Dania Beach, Florida, a town of 30,000 just south of Fort Lauderdale. Magic Leap had raised nearly $1.4 billion in venture capital, including $794 million in February 2016, reportedly the largest Series C round in history. Seemingly every blue-chip tech investor had a chunk of the company, including Andreessen Horowitz, Kleiner Perkins, Google, JPMorgan, Fidelity, and Alibaba—plus, there was backing from less conventional sources, such as Warner Bros. and Legendary Entertainment, the maker of films like Godzilla and Jurassic World. By the time I visited, the five-year-old company was valued at over $4.5 billion.
That cascade of money sparked strange rumors within tech circles. Magic Leap was doing something with holograms or lasers, or had invented some reality-warping machine the size of a building that would never, could never, be commercialized. The lack of hard information further fueled the whispers. Magic Leap had never released a product. It had never given a public demonstration of a product, never announced a product, never explained the proprietary “mixed reality lightfield” technology that powered its product.
But the company was finally coming out of the shadows. When I showed up in September 2016 to write a Forbes cover story about Magic Leap, Rony Abovitz told me that the company had spent over a billion dollars perfecting a prototype and had begun constructing manufacturing lines in Florida, ahead of a release of a consumer version of its technology. When it arrived, it could usher in a new era of computing, a next-generation interface we’d use for decades to follow. “We are building a new kind of contextual computer,” Abovitz said. “We’re doing something really, really different.”
Inside Magic Leap’s headquarters, even the office equipment did the impossible. Viewed through the company’s headset, a virtual high-definition television hanging on a wall seemed perfectly normal—until it vanished. A moment later it reappeared in the middle of the room, levitating in midair. When I walked over to it and looked at it from different angles, it appeared solid; when I moved around the room, it stayed in place, just floating there.
Magic Leap’s innovation wasn’t just a virtual display, it was a disruption machine. The technology could affect every business that uses screens or computers and many that don’t. It could kill the $120 billion market for flat-panel displays and shake the $1 trillion global consumer electronics business to its core. The applications were profound: throw out your PC, your laptop, and your mobile phone, because the computing power you need would be in your glasses, and they could make a display appear anywhere, at any size you like.
For that matter, they could make anything appear, like directions to your next meeting drawn in bright yellow arrows along the roads of your town. You could see what a couch you were thinking of buying looked like in your living room, from every conceivable angle, under every lighting condition, without bringing it home. Even the least mechanically inclined would be able to repair their automobiles, with an interactive program highlighting exactly which part needed to be replaced and alerting them if they did it wrong. And Magic Leap was positioned to profit from every interaction: not just from the hardware and software it would sell but also, one imagined, from the torrent of data it could collect, analyze—and resell.
“It’s hard to think of an area that doesn’t completely change,” Abovitz said.
* * *
—
Virtual reality, as we already know, is an immersive computer-generated simulation. VR headsets block outside stimuli, use sight and sound to mask the real world, and make you believe you’re in a different environment. But instead of using computer simulations to replace reality, what if we just made changes to it? What if we used high-definition 3-D computer graphics to drop an alien into your living room, make your carpet look like lava, or delete the couch from existence?
The people who study simulation for a living tend to describe this as computer-mediated reality, and the people trying to sell it to the public usually brand their products as augmented reality. But whatever you call it, the idea of a system that overlays digital content onto the physical world has been around for over a century. In 1901, children’s book author L. Frank Baum wrote about a pair of electric spectacles in an illustrated novel called The Master Key: An Electrical Fairy Tale. When the book’s protagonist wears the glasses, everyone he meets is “marked upon the forehead with a letter indicating his or her character”—G for good, E for evil, W for wise, F for foolish. (He uses the device, called a Character Marker, to convince the King of England that one of his ministers is conspiring against him.)
Baum was impressively ahead of his time—other electric gadgets in the novel foretell the invention of Taser guns and live television—but it didn’t take the world long to catch up with him. By the mid-twentieth century, experiments in immersive theater (like Morton Heilig’s Sensorama) and the first head-mounted computer displays (like Ivan Sutherland’s Sword of Damocles) were providing proof of concept for computer-mediated reality. In the 1970s and ’80s, Thomas Furness’s work building high-tech cockpits for the US Air Force moved the needle even closer.
Another big step forward occurred in 1990, after engineers at Boeing were tasked with replacing the sprawling wiring diagrams that cluttered the aerospace company’s factory. Workers needed reference examples when they were assembling complicated aircraft parts, but the systems were so complex they had to be set up on sheets of plywood the size of billboards, and the process was so painstaking it caused slowdowns every time the workers advanced a step in the manufacturing process. To solve the problem, researchers David Mizell and Tom Caudell proposed a head-mounted display that overlaid digitized schematics onto a reusable board, showing the wearer exactly where to lay out wires and slashing the amount of time it took to set up each diagram. Caudell called the system augmented reality, and the terminology stuck.
Boeing never deployed the system, but it influenced the work of other researchers. In 1991, a group of Columbia University computer scientists led by professor Steve Feiner started development of a prototype they called KARMA, or Knowledge-based Augmented Reality for Maintenance Assistance. “A well-designed virtual reality can make it possible for one or more suitably outfitted users to experience a world that does not exist,” Feiner and his team wrote about the project in an academic journal. “There are many situations, however, in which we would like to interact with the surrounding real world. An augmented reality can make this possible by presenting a virtual world that enriches, rather than replaces, the real world . . . this approach annotates reality to provide valuable information, such as descriptions of important features or instructions for performing complementary tasks.”
The KARMA prototype, built in Columbia’s Computer Graphics and User Interfaces Lab, was an extraordinary tech support system for an ordinary office laser printer. Feiner’s team attached ultrasonic tracking devices to key components of the printer so KARMA could monitor its exact location and orientation. Users would wear a custom head-mounted display with a see-through screen that displayed low-resolution computer graphics. When they looked at the printer, KARMA would draw red lines around the printer’s paper tray, fainter “ghosted” lines showing where the tray would be if it was opened, and a pulsing arrow indicating that the user should pull it out. “We have developed a preliminary set of rules that allow us to augment the user’s view of the world with additional information that supports the performance of simple tasks,” Feiner wrote.
Other labs focused on developing augmented reality systems that allowed users to complete tasks at a distance. Researchers had noted that it was difficult to operate remote-controlled devices accurately, since the operator worked on a time delay and lacked normal visual and physical feedback. It’s harder to drive a car via remote than to operate it from the driver’s seat, for instance. So in 1992, Louis Rosenberg, a Stanford University graduate student working at the US Air Force’s Armstrong Laboratory, designed a simple task—guide a robot to put a peg in a hole—and built a special AR rig to complete it.
The user strapped into an upper-body exoskeleton connected to a pair of remote-controlled robotic arms. When the user moved his or her arms, the robots matched that movement. Users also wore a headset that made it appear as though the robot arms and the pegboard were directly in front of them. Finally, Rosenberg installed a series of fixtures in front of the user that allowed them to hold an actual peg and insert it into an actual hole, simulating what they wanted the robot arms to do at a distance. The system gave users an illusion of presence at the pegboard and tactile feedback on the task as they completed it. As a result, they were able to complete the job with less difficulty, and Rosenberg discovered that an AR system could significantly improve operator performance.
Over the next few years, augmented reality went prime time. In 1995, a company called Princeton Video Image inserted computer-generated advertisements into the live television broadcast of a Minor League Baseball game, making them appear to be billboards on the backstop behind home plate. The practice quickly became commonplace at all types of live sporting events. In January 1996, the Fox network’s broadcast of the National Hockey League All-Star Game debuted a more controversial piece of AR technology, called FoxTrax, which gave the puck a blue glow and a comet’s tail so it was easier to see on TV. Hockey purists hated the cartoonish effect; it was abandoned after Fox lost the rights to broadcast NHL games two years later, but is still remembered as one of the worst innovations in sports history.
Other AR implementations were embraced by fans. Stanley Honey, an engineer who led the FoxTrax initiative while working as executive vice president of technology for News Corp., spun that technology out into a new start-up called Sportvision and, in September 1998, during an ESPN broadcast of a National Football League game, debuted a new system called 1st & Ten. Viewers tuning in to watch the Cincinnati Bengals play the Baltimore Ravens were surprised to see a yellow first-down line that seemed to be painted across the grass of Ravens Stadium, and even disappeared underneath players who crossed it, but somehow moved downfield as play progressed.
Sportvision achieved the effect by placing sensors on ESPN’s cameras that captured a variety of data, from the attitude of the tripod to how much the lens was zoomed, and used that to create a computer model of where the field was and where the first down line should appear—just like KARMA knew where the laser printer was and where its paper tray should be. A complex color-matching system made sure the system drew the line only over green grass, and not on top of players or the ball. Later that year, other networks followed suit with similar systems, and today augmented reality displays are commonplace in professional sports, ranging from Major League Baseball’s pitch-tracking system to virtual lane markers in Olympic swimming and skating events.
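The color-matching step can be illustrated with a toy sketch. This is not Sportvision’s actual code—their system worked on live broadcast video with calibrated color palettes—but it shows the basic idea: composite the virtual line onto a frame only where the underlying pixel looks like grass, so players and the ball naturally occlude it. The `looks_like_grass` thresholds here are invented for illustration.

```python
def looks_like_grass(pixel):
    """Crude green-dominance test; a real system would use a calibrated
    palette of field colors sampled before the broadcast."""
    r, g, b = pixel
    return g > 100 and g > r + 30 and g > b + 30

def composite_line(frame, line_mask, line_color=(255, 220, 0)):
    """Overlay a yellow line onto a frame of (r, g, b) pixels.

    frame     -- 2-D list of (r, g, b) tuples
    line_mask -- 2-D list of booleans marking where the virtual line goes
    A masked pixel is recolored only if it looks like grass; anything
    else (jerseys, the ball) is left untouched, so it occludes the line.
    """
    return [
        [line_color if mask and looks_like_grass(pixel) else pixel
         for pixel, mask in zip(frame_row, mask_row)]
        for frame_row, mask_row in zip(frame, line_mask)
    ]

# A one-row "frame": a grass pixel next to a red jersey pixel, with the
# virtual first-down line crossing both.
frame = [[(40, 160, 50), (200, 30, 30)]]
mask = [[True, True]]
result = composite_line(frame, mask)
# The grass pixel takes the line color; the jersey pixel occludes it.
```

The same keying logic explains why the line "disappears underneath" players: the compositor never draws over non-grass colors, so a player standing on the line simply shows through.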
AR went mobile in 1997, when Steve Feiner’s team at Columbia hacked together a computer and a GPS system, and loaded it into a backpack wired to a head-mounted display. They called it the Touring Machine, and when users wore the rig and walked around campus, the system superimposed building names onto their view of the world, and flagged points of historical interest. In 2000, a team working out of the Wearable Computer Lab at the University of South Australia took a similar system and turned it into the first augmented reality video game, ARQuake. Based on a popular first-person shooter, the game was heavily modified to run on a head-mounted display and receive control information from expensive surveying-grade positioning sensors. As players walked around in the headset, it overlaid monsters and obstacles onto their physical environment.
But AR didn’t hit the mainstream until camera phones became commonplace, since they combined the ability to record live video with the computing power necessary to process it. In 2008, an Austrian company called Wikitude launched AR Travel Guide, the first location-based augmented reality app. “Users may hold the phone’s camera against a spectacular mountain range and see the names and heights displayed,” the company bragged on its website. In 2009, the magazine Esquire published an issue with several augmented reality features and billed it as “the first-ever living, breathing, moving, talking magazine.” The cover featured the actor Robert Downey Jr. sitting astride a QR code, and when users scanned it with their phones, he jumped off the page to describe the tech and show a clip from his upcoming movie Sherlock Holmes.
Google dipped its toes into AR in 2013 when it released Project Glass, a pair of spectacles that made a virtual computer screen appear to float in front of the user. Fewer than 10,000 units were distributed before privacy and safety concerns led Google to shelve the project. But in the summer of 2016, a Google spin-off company had better luck. In July, mobile application developer Niantic released Pokémon Go, a game that used smartphone cameras to make animated monsters appear to exist in the real world—or at least on the screen of a phone. The app debuted on top of the download charts and surpassed $500 million in revenue in just over sixty days.