by Kevin Kelly
The dumbest objects we can imagine today can be vastly improved by outfitting them with sensors and making them interactive. We had an old standard thermostat running the furnace in our home. During a remodel we upgraded to a Nest smart thermostat, designed by a team of ex-Apple execs and recently bought by Google. The Nest is aware of our presence. It senses whether we are home, awake or asleep, or on vacation. Its brain, connected to the cloud, anticipates our routines, and over time builds up a pattern of our lives so it can warm up the house (or cool it down) just a few minutes before we arrive home from work, turning it down after we leave, except on vacations or on weekends, when it adapts to our schedule. If it senses we are unexpectedly home, it adjusts itself. All this watching and interaction optimizes our fuel bill.
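How might such pattern building work? Here is a minimal sketch of the routine-learning idea, assuming a simple average of observed arrival times and a 15-minute preheat lead; it illustrates the concept, not Nest's actual algorithm, and every name in it is my own.

```python
from collections import defaultdict

# arrivals: weekday (0 = Monday) -> list of observed arrival times,
# expressed as minutes after midnight.
arrivals = defaultdict(list)

def log_arrival(weekday, minute_of_day):
    """Record one observed coming-home event from the presence sensor."""
    arrivals[weekday].append(minute_of_day)

def preheat_time(weekday, lead=15):
    """Minute of day to start heating, or None before any pattern exists."""
    times = arrivals[weekday]
    if not times:
        return None
    # Average the observed arrivals and start heating `lead` minutes early.
    return round(sum(times) / len(times)) - lead

# Two Mondays of arriving home around 18:00 ...
log_arrival(0, 18 * 60 + 5)
log_arrival(0, 17 * 60 + 55)
# ... so next Monday, start heating at minute 1065, i.e., 17:45.
print(preheat_time(0))
```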
One consequence of increased interaction between us and our artifacts is a celebration of an artifact’s embodiment. The more interactive it is, the more it should sound and feel beautiful. Since we might spend hours holding it, craftsmanship matters. Apple was the first to recognize that this appetite applies to interactive goods. The gold trim on the Apple Watch is there to be felt. We end up caressing an iPad, stroking its magic surface, gazing into it for hours, days, weeks. The satin touch of a device’s surface, the liquidity of its flickers, the presence or lack of its warmth, the quality of its build, the temperature of its glow will come to mean a great deal to us.
What could be more intimate and interactive than wearing something that responds to us? Computers have been on a steady march toward us. At first computers were housed in distant air-conditioned basements, then they moved to nearby small rooms, then they crept closer to us perched on our desks, then they hopped onto our laps, and recently they snuck into our pockets. The next obvious step for computers is to lie against our skin. We call those wearables.
We can wear special spectacles that reveal an augmented reality. Wearing such a transparent computer (an early prototype was Google Glass) empowers us to see the invisible bits that overlay the physical world. We can inspect a cereal box in the grocery store and, as the young boy suggested, simply click it within our wearable to read its meta-information. Apple’s watch is a wearable computer, part health monitor, but mostly a handy portal to the cloud. The super-mega-processing power of the entire internet and World Wide Web is funneled through that little square on your wrist.

But wearables in particular mean smart clothes. Of course, itsy-bitsy chips can be woven into a shirt so that the shirt can alert a smart washing machine to its preferred washing cycles, but wearables are more about the wearer. Experimental smart fabrics such as those from Project Jacquard (funded by Google) have conductive threads and thin flexible sensors woven into them. They will be sewn into a shirt you interact with. You use the fingers of one hand to swipe the sleeve of your other arm the way you’d swipe an iPad, and for the same reason: to bring up something on a screen or in your spectacles. A smart shirt like the Squid, a prototype from Northeastern University, can feel—in fact measure—your posture, recording it in a quantified way, and then actuate “muscles” in the shirt that contract precisely to hold you in the proper posture, much as a coach would. David Eagleman, a neuroscientist at Baylor College, in Texas, invented a supersmart wearable vest that translates one sense into another. The Sensory Substitution Vest takes audio from tiny microphones in the vest and translates those sound waves into a grid of vibrations that can be felt by a deaf person wearing it. Over a matter of months, the deaf person’s brain reconfigures itself to “hear” the vest vibrations as sound, so by wearing this interacting cloth, the deaf can hear.
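The vest's core translation step is conceptually simple. Here is a minimal sketch of the sensory-substitution idea, assuming a hypothetical 4-by-8 grid of motors and a plain frequency-band-to-motor mapping; it illustrates the concept, not Eagleman's actual signal chain.

```python
import numpy as np

GRID_ROWS, GRID_COLS = 4, 8     # 32 hypothetical motors on the vest
SAMPLE_RATE = 16_000            # Hz, an assumed microphone rate

def audio_to_vibration(frame):
    """Map one short audio frame to per-motor drive levels in [0, 1]."""
    # Energy spectrum of the windowed frame.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # One frequency band per motor; each band's mean energy drives it.
    bands = np.array_split(spectrum, GRID_ROWS * GRID_COLS)
    energy = np.array([band.mean() for band in bands])
    return (energy / (energy.max() + 1e-9)).reshape(GRID_ROWS, GRID_COLS)

# Example: a pure 440 Hz tone excites only the low-frequency motors.
t = np.arange(512) / SAMPLE_RATE
print(audio_to_vibration(np.sin(2 * np.pi * 440 * t)).round(2))
```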
You may have seen this coming, but the only way to get closer than wearables over our skin is to go under our skin. Jack into our heads. Directly connect the computer to the brain. Surgical brain implants really do work for the blind, the deaf, and the paralyzed, enabling the handicapped to interact with technology using only their minds. One experimental brain jack allowed a quadriplegic woman to use her mind to control a robotic arm to pick up a coffee bottle and bring it to her lips so she could drink from it. But these severely invasive procedures have not yet been tried as enhancements for healthy people. Noninvasive brain controllers have already been built for ordinary work and play, and they do work. I tried several lightweight brain-machine interfaces (BMIs), and I was able to control a personal computer simply by thinking about it. The apparatus generally consists of a hat of sensors, akin to a minimal bicycle helmet, with a long cable to the PC. You place it on your head and its many sensor pads sit on your scalp. The pads pick up brain waves, and with some biofeedback training you can generate signals at will. These signals can be programmed to perform operations such as “Open program,” “Move mouse,” and “Select this.” You can learn to “type.” It’s still crude, but the technology is improving every year.
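The software side of such an interface can be pictured as a thin dispatch layer. Here is a minimal sketch, assuming a classifier (not shown) that turns scalp signals into labeled commands with a confidence score; every label, handler, and threshold here is hypothetical.

```python
from typing import Callable

# Hypothetical labels a biofeedback-trained classifier might emit,
# mapped to desktop actions (stubbed out as prints).
COMMANDS: dict[str, Callable[[], None]] = {
    "open_program": lambda: print("launching application"),
    "move_mouse":   lambda: print("nudging cursor"),
    "select_this":  lambda: print("click"),
}

def dispatch(label: str, confidence: float, threshold: float = 0.8) -> None:
    """Act only on confident classifications; brainwave signals are
    noisy, and a misfired command is worse than a missed one."""
    if confidence >= threshold and label in COMMANDS:
        COMMANDS[label]()

# Example: a stream of (label, confidence) pairs from the sensor cap.
for label, conf in [("move_mouse", 0.91), ("select_this", 0.55),
                    ("select_this", 0.86)]:
    dispatch(label, conf)
```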
In the coming decades we’ll keep expanding what we interact with. The expansion follows three thrusts.
1. More senses
We will keep adding new sensors and senses to the things we make. Of course, everything will get eyes (vision is almost free), and hearing, but one by one we can add superhuman senses such as GPS location sensing, heat detection, X-ray vision, diverse molecule sensitivity, or smell. These permit our creations to respond to us, to interact with us, and to adapt themselves to our uses. Interactivity, by definition, is two-way, so this sensing elevates our interactions with technology.
2. More intimacy
The zone of interaction will continue to march closer to us. Technology will get closer to us than a watch or a pocket phone. Interacting will be more intimate. It will always be on, everywhere. Intimate technology is a wide-open frontier. We think technology has saturated our private space, but we will look back in 20 years and realize it was still far away in 2016.
3. More immersion
Maximum interaction demands that we leap into the technology itself. That’s what VR allows us to do. Computation so close that we are inside it. From within a technologically created world, we interact with each other in new ways (virtual reality) or interact with the physical world in a new way (augmented reality). Technology becomes a second skin.
Recently I joined some drone hobbyists who meet in a nearby park on Sundays to race their small quadcopters. With flags and foam arches they map out a course over the grass for their drones to race around. The only way to fly drones at this speed is to get inside them. The hobbyists mount tiny eyes at the front of their drones and wear VR goggles to peer through them for what is called a first-person view (FPV). They are now the drone. As a visitor I don an extra set of goggles that piggybacks on their camera signals, so I find myself sitting in the same pilot’s seat, seeing what each pilot sees. The drones dart in, out, and around the course obstacles, chasing each other’s tails, bumping into other drones, in scenes reminiscent of a Star Wars pod race. One young guy who’s been flying radio-controlled model airplanes since he was a boy said that being able to immerse himself in the drone and fly from inside was the most sensual experience of his life. He said there was almost nothing more pleasurable than actually, really free flying. There was no virtuality. The flying experience was real.
* * *
The convergence of maximum interaction plus maximum presence is found these days in free-range video games. For the past several years I’ve been watching my teenage son play console video games. I am not twitchy enough myself to survive more than four minutes in a game’s alterworld, but I find I can spend an hour just watching the big screen as my son encounters dangers, shoots at bad guys, or explores unknown territories and dark buildings. Like a lot of kids his age, he’s played the classic shooter games like Call of Duty, Halo, and Uncharted 2, which have scripted scenes of engagement. However, my favorite game as a voyeur is the now dated game Red Dead Redemption. This is set in the vast empty country of the cowboy West. Its virtual world is so huge that players spend a lot of time on their horses exploring the canyons and settlements, searching for clues, and wandering the land on vague errands. I’m happy to ride alongside as we pass through frontier towns in pursuit of his quests. It’s a movie you can roam in. The game’s open-ended architecture is similar to the very popular Grand Theft Auto, but it’s a lot less violent. Neither of us knows what will happen or how things will play out.
There are no prohibitions about where you can go in this virtual place. Want to ride to the river? Fine. Want to chase a train down the tracks? Fine. How about ride up alongside the train and then hop on and ride inside the train? OK! Or bushwhack across sagebrush wilderness from one town to the next? You can ride away from a woman yelling for help or—your choice—stop to help her. Each act has consequences. She may need help or she may be bait for a bandit. One reviewer, speaking of the interactive free will in the game, said: “I’m sincerely and pleasantly surprised that I can shoot my own horse in the back of the head while I’m riding him, and even skin him afterward.” The freedom to move in any direction in a seamless virtual world rendered with the same degree of fidelity as a Hollywood blockbuster is intoxicating.
It’s all in the interactive details. Dawns in the territory of Red Dead Redemption are glorious, as the horizon glows and heats up. Weather forces itself on the land, which you sense. The sandy yellow soil darkens with appropriate wet splotches as the rain blows down in bursts. Mist sometimes drifts in to cover a town with realistic veiling, obscuring shadowy figures. The pink tint of each mesa fades with the clock. Textures pile up. The scorched wood, the dry brush, the shaggy bark—every pebble or twig—is rendered in exquisite minutiae at all scales, casting perfect overlapping shadows that make a little painting. These nonessential finishes are surprisingly satisfying. The wholesale extravagance is compelling.
The game lives in a big world. A typical player might take around 15 hours to zoom through it once, while a power player intent on achieving all the game rewards would need 40 to 50 hours to complete it. At every step you can choose any direction to take the next step, and the next, and the next, and yet the grass under your feet is perfectly formed and every blade detailed, as if its authors anticipated you would tread on this microscopic bit of the map. At any of a billion spots you can inspect the details closely and be rewarded, but most of this beauty will never be seen. This warm bath of freely given abundance triggers a strong conviction that this is “natural,” that this world has always been, and that it is good. The overall feeling inside one of these immaculately detailed, stunningly interactive worlds stretching to the horizons is of being immersed in completeness. Your logic knows this can’t be true, but as on the plank over the pit, the rest of you believes it. This realism is just waiting for the full immersion of VR interaction. At the moment, the spatial richness of these game worlds must be viewed in 2-D.
Cheap, abundant VR will be an experience factory. We’ll use it to visit environments too dangerous to risk in the flesh, such as war zones, deep seas, or volcanoes. Or we’ll use it for experiences we can’t easily get to as humans—to visit the inside of a stomach, the surface of a comet. Or to swap genders, or become a lobster. Or to cheaply experience something expensive, like a flyby of the Himalayas. But experiences are generally not sustainable. We enjoy travel experiences in part because we are only visiting briefly. VR, at least in the beginning, is likely to be an experience we dip in and out of. Its presence is so strong we may want it only in small, measured doses. But we have no limit on the kind of interacting we crave.
These massive video games are pioneering new ways of interacting. The total interactive freedom suggested by unlimited horizons is illusory in these kinds of games. Players, or the audience, are assigned tasks to accomplish and given motivations to stay till the end. Actions in the game are channeled funnel-like to meet the next bottleneck of the overall narrative, so the game eventually reveals a destiny, but your choices as a player still matter in what kinds of points you accumulate. There’s a tilt in the overall world, so no matter how many explorations you make, you tend to drift over time toward an inevitable incident. When the balance between an ordained narrative and freewill interaction is tweaked just right, it creates the perception of great “game play”—a sweet feeling of being part of something large that is moving forward (the game’s narrative) while you still get to steer (the game’s play).
The games’ designers tweak the balance, but the invisible force that nudges players in certain directions is an artificial intelligence. Most of the action in open-ended games like Red Dead Redemption, especially the interactions of supporting characters, is already animated by AI. When you halt at a random homestead and chat with the cowhand, his responses are plausible because in his heart beats an AI. AI is seeping into VR and AR in other ways as well. It will be used to “see” and map the physical world you are really standing in so that it can transport you to a synthetic world. That includes mapping your physical body’s motion. An AI can watch you as you sit, stand, move around in, say, your office without the need of special tracking equipment, then mirror that in the virtual world. An AI can read your route through the synthetic environment and calculate interferences needed to herd you in certain directions, as a minor god might do.
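That herding can be sketched as simple steering arithmetic: blend the player's chosen direction with a weak pull toward the next scripted beat. A toy illustration with an assumed blend weight, not how any actual game engine does it.

```python
import math

def nudge(player, goal, intent, pull=0.15):
    """Return a movement vector: mostly the player's own intent, plus a
    gentle drift toward the scripted destination. The 0.15 weight is an
    illustrative assumption; real games would tune this constantly."""
    to_goal = (goal[0] - player[0], goal[1] - player[1])
    dist = math.hypot(*to_goal) or 1.0
    drift = (to_goal[0] / dist, to_goal[1] / dist)  # unit vector to goal
    return (intent[0] * (1 - pull) + drift[0] * pull,
            intent[1] * (1 - pull) + drift[1] * pull)

# The player heads due east; the next story beat lies to the north,
# so the result leans slightly northward: (0.85, 0.15).
print(nudge(player=(0, 0), goal=(0, 100), intent=(1, 0)))
```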
Implicit in VR is the fact that everything—without exception—that occurs in VR is tracked. The virtual world is defined as a world under total surveillance, since nothing happens in VR without being tracked first. That makes it easy to gamify behavior—awarding points, or upping levels, or scoring powers, etc.—to keep it fun. However, today the physical world is so decked out with sensors and interfaces that it has become a parallel tracking world. Think of our sensor-filled real world as a nonvirtual virtual reality that we spend most of our day in. As we are tracked by our surroundings, and indeed as we track our quantified selves, we can use the same interaction techniques that we use in VR. We’ll communicate with our appliances and vehicles using the same VR gestures. We can use the same gamifications to create incentives, to nudge participants in preferred directions in real life. You might go through your day racking up points for brushing your teeth properly, walking 10,000 steps, or driving safely, since these will all be tracked. Instead of getting A-pluses on daily quizzes, you level up. You get points for picking up litter or recycling. Ordinary life, not just virtual worlds, can be gamified.
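What might that daily scorekeeping look like? A minimal sketch, with event names, point values, and level thresholds that are purely illustrative assumptions.

```python
# Points awarded per sensed daily event (all values hypothetical).
POINTS = {"brushed_teeth": 5, "walked_10000_steps": 20,
          "drove_safely": 10, "picked_up_litter": 15}
LEVEL_SIZE = 50  # points per level, an arbitrary choice

def score(events):
    """Return (total points, level) for a day's tracked events."""
    total = sum(POINTS.get(e, 0) for e in events)
    return total, total // LEVEL_SIZE + 1

# 55 points in a day puts this person at level 2.
print(score(["brushed_teeth", "walked_10000_steps", "drove_safely",
             "picked_up_litter", "brushed_teeth"]))
```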
The first technological platform to disrupt a society within the lifespan of a human individual was the personal computer. Mobile phones were the second platform, and they revolutionized everything in only a few decades. The next disrupting platform—now arriving—is VR. Here is how a day plugged into virtual and augmented realities may unfold in the very near future.
I am in VR, but I don’t need a headset. The surprising thing that few people expected way back in 2016 is that you don’t need to wear goggles, or even a pair of glasses, in order to get a basic “good enough” augmented reality. A 3-D image projects directly into my eyes from tiny light sources that peek from the corners of my rooms, all without the need of something in front of my face. The quality is good enough for most applications, of which there are tens of thousands.
The very first app I got was the ID overlay. It recognizes people’s faces and then displays their name, association, and connection to me, if any. Now that I am used to this, I can’t roam outside without it. My friends say some quasi-legal ID apps provide a lot more immediate information about strangers, but you need to be wearing gear that keeps what you see private—otherwise you’ll get tagged for rude behavior.
I wear a pair of AR glasses outside to get a sort of X-ray view of my world. I use them first to find good connectivity. The warmer the colors in the world, the closer I am to heavy-duty bandwidth. With AR on I can summon earlier historical views layered on top of whatever place I am looking at, a nifty trick I used extensively in Rome. There, a fully 3-D, life-size, intact Colosseum appeared synchronized over the ruins as I clambered through them. It’s an unforgettable experience. It also shows me comments left by other visitors, virtually “nailed” to different spots in the city and viewable only from that very place. I left a few notes in spots for others to discover as well. The app reveals all the underground service pipes and cables beneath the street, which I find nerdly fascinating. One of the weirder apps I found is one that will float the dollar value—in big red numbers—over everything you look at. Almost any subject I care about has an overlay app that displays it as an apparition. A fair amount of public art is now 3-D mirages. The plaza in our town square hosts an elaborate rotating 3-D projection that is refreshed twice a year, like a museum art show. Most of the buildings downtown are reskinned with alternative facades inside AR, each facade commissioned by an architect or artist. The city looks different each time I walk through it.