We’re used to the idea of being able to shop from home, from work, or from just about anywhere—e-commerce has been around in some form or other since the mid-nineties—but we’re also used to the idea that shopping online means certain trade-offs. In exchange for the convenience of not having to go out, we accept that we won’t have the benefit of sensory certainty, of being able to touch items, try them on, or consider them at scale alongside other items.
But with augmented reality shopping experiences, retailers have huge opportunities, in a range of contexts, to empower the shopper with contextual experiences that are as good as, or nearly as good as, being there in person. From the comfort of a home environment, a shopper can, say, use a smartphone app to visualize a piece of furniture in the living room. The sensory experience may still excel in the dedicated retail environment of the store, but the trade-offs are changing.
Virtual Reality and Stories of Place
Society feels increasingly divided, increasingly fractured. There needs to be a way to overcome our differences.
Part of the problem, it seems, is that we have trouble empathizing when we haven’t experienced something ourselves. This is where virtual reality offers an interesting advantage: it presents compelling options for journalism and storytelling. In fact, one of the most promising ways we can use technology for social justice is through virtual reality.
Virtual reality is a powerful tool for fostering empathy. Consider Chris Milk, who gave a compelling example in his TED talk.40 He made the interactive video for Arcade Fire called “The Wilderness Downtown,” which prompted viewers for their childhood address and then showed an array of video windows overlaid against one another. In the video a young man runs through streets as images pulled from Google Maps Street View show the area and, ultimately, the house that matches the address each viewer entered. It was effective, genius even, and it began to demonstrate the innovative opportunities of storytelling in a format that uses the ideas of home and place interactively to achieve resonance.
Later in his talk, Milk reveals a film of a twelve-year-old Syrian girl named Sidra, who’s living in a refugee camp in Jordan. A photo or video of a girl in a refugee camp would certainly be enough to generate compassion in some people, and the most empathetic people may even begin to have a sense of what she must be going through. But with an immersive 360-degree virtual reality view, the ability for people to observe her conditions and surroundings allows them to picture themselves there in an unprecedented way.
By observing her in place—as if we share her location—we can better connect with her humanity. But the place must come alive for us first.
Through similar techniques, researchers in Japan have been working to create a virtual reality experience that simulates a tsunami, with the hope of educating people living in areas at risk. The goal is for them to learn how to respond and have a better chance of survival. In other words, that effort could connect people with their own place in a more adaptive way.
A person wearing virtual reality goggles (or some other hardware as it evolves) can be immersed in at least two senses—sight and hearing—almost as if they’re there. The sense of place is far more complete than with a fixed two-dimensional representation. As a result, the narrative possibilities are enormous and potentially transformative.
360-Degree Video
If you’ve ever been somewhere like the Grand Canyon or Niagara Falls and tried to take a snapshot to remember it by, you know the impossibility of trying to depict a large-scale place in a photograph. Even a video that pans the landscape can seem underwhelming. As a firsthand observer, you have the benefit of moving your head and looking around, maybe looking back and forth and up and down, as your brain tries to process the information it’s taking in.
Enter 360-degree video. It’s fully surrounding video, shot with a specialized omnidirectional camera or with several cameras capturing footage in all directions simultaneously. When viewing, you can control the direction of view by panning and scrolling. If you’re wearing goggles or headgear, you can spin around to see the imagery as if it were around you. It’s visually immersive. (Many 360-degree videos are shot with surround sound, too, so the immersion can extend to the audio as well.)
In March 2015, YouTube began supporting 360-degree videos—after all, Google owns YouTube, and with YouTube support, Google’s Cardboard headset viewer would become a fine way to experience them. So would an Oculus headset, and since Facebook owns Oculus, Facebook launched support for them in September 2015.
Like many emerging technologies, the full usefulness of the format is still being discovered. One 360-degree video that became popular in January 2016 showed the persistent lights of Times Square during the Jonas blizzard that blanketed the mid-Atlantic and Northeastern United States in as much as three feet of snow. That kind of newsworthy event—something that almost has to be seen to be believed—is probably a natural fit for such an immersive video experience. And this is probably just the beginning.
Even Twitter announced support for 360-degree video in June 2016. In a way, this step proves the technology’s relevance as a trend since Twitter, unlike Google/YouTube and Facebook, doesn’t have a VR headset product to promote. But you can still view 360-degree videos on the Twitter platform, and by dragging the image or tilting your phone, you can control the display as the video plays. Here again, the video format and the platform make for an interesting pair, particularly since breaking news happens more and more often on Twitter. But even cleverly branded content may recommend itself here. At least, it may be worth a try.
As for creating them, some of the earliest 360 videos were created with GoPro cameras and a Freedom 360 camera rig, which allows for spherical 360 video capture. These rugged cameras developed a reputation among outdoors enthusiasts and extreme athletes for being able to capture a thrilling video as the wearer skied down a steep slope or jumped off a bike ramp. They grew in popularity as demand grew for greater interactivity and immersion in video.
It’s hard to say if GoPros were partly responsible for the increase in appetite or if they simply met up with an already growing appetite in the marketplace.
On the higher end of the capture experience, there are dedicated cameras and rigs, such as Giroptic, and you can certainly expect more to enter the market. On the lower end of capture, you can simply take a panoramic photo with your phone’s camera; and if you post it to Facebook, it will show up within a 360-degree viewer.
This medium is full of exciting opportunities, and the stories you can tell are rich and compelling. The ability to communicate the experience of a place through digital channels keeps getting easier.
Camera Drones: “Augmented” Perspective
While not directly related to augmented reality or virtual reality, drones are an interesting aside and footnote to this discussion because they offer a sort of augmented take on the experience of a place that can alter how we perceive the place. While they used to be priced out of range for most private individuals, their prices have decreased. More of them have taken to the skies, capturing stunning aerial photography of cityscapes, fireworks, landmarks, and more.
The potential is high for interesting metadata, too: Every photo created by a digital camera produces a set of attributes as output known as EXIF (Exchangeable Image File Format) data, such as exposure, flash, and more. Drone photo output has the potential to capture not only these image attributes but also embedded information about location, relative position, and so on; a variety of aftermarket software solutions offer this. In this way, drones can produce a data stream describing a perspective of place that could have creative and helpful uses in the future.
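To make the idea concrete, here is a minimal sketch, using the Pillow imaging library in Python, of reading those attributes out of a photo. The file name is hypothetical, and the GPS block appears only if the camera or drone recorded one.

```python
# Minimal sketch: read EXIF attributes (exposure, flash, GPS, etc.) from a photo.
# "aerial_shot.jpg" is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

image = Image.open("aerial_shot.jpg")
exif = image._getexif() or {}

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)
    if name == "GPSInfo":
        # The GPS block is itself a dictionary of numbered sub-tags.
        gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
        print("GPS:", gps)      # latitude, longitude, altitude, and so on
    else:
        print(name, value)      # exposure, flash, timestamp, and so on
```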
Overall, a drone is just another way to experience place. In a simplistic example, you could stand on a rooftop and use your phone’s camera to take a photo, or you could fly a drone to capture the skyline view. Whereas augmented reality experiences can involve the use of a camera to create a virtual layer “over top,” figuratively speaking, of your physical surroundings, a drone is the use of a camera to create an augmented (or altered) perspective of your physical surroundings, often from literally “over top” of those surroundings. Moreover, not only is the output different because of its vantage point, but how you experience the capture of the moment differs, too.
CHAPTER NINE
Algorithms and AI
The category of developments that has facilitated a great deal of the convergence of online and offline, and in which a huge amount of opportunity has yet to play out, is algorithms and AI. Strictly speaking, these two concepts don’t necessarily belong in the same category, but I’m grouping them because today’s algorithms influence tomorrow’s artificial intelligence, and together they shape human experience.
So even beyond the consideration of experience taking place in physical or digital space, and beyond the convergence of those layers, when we’re talking about human experience at all anymore, we have to understand that more and more of it relies on algorithms and AI and the rich data model that underpins everything.
Since part of our exploration in this book is into the metaphors that describe experience, it’s of interest to note that there is a whole set of metaphors in the field of AI that anthropomorphize machines, such as machine “learning.” And of course “intelligence.”
Human Bias
Algorithms are just sets of rules. For now, until machines develop their own algorithms, we are the ones who determine those rules. I hope it is easy to see where there’s a path from quietly holding our own biases to unintentionally encoding machines with them.
Here’s the other tricky part: Artificial intelligence has to start somewhere, with some set of rules, some set of givens.
What we are then asking machines to do is process the world and learn about it from a starting point that, by some necessity, includes our own hidden intentions and prejudices.
Because it has to, at some level. When you deconstruct far enough, you can see how simply presenting world maps as north-is-up and south-is-down carries some bias. How the common phrasing of “his and hers” or “he and she” carries bias, as the male pronoun by default comes first. How black=dark=bad or evil, and white=light=pure or good. There are plenty of cultural assumptions that are widespread and nearly universal—the legacy flaws in our own human programming. That we build data models that contain these biases is a shame; that we risk passing these biases over to learning machines is nearly tragic.
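As a toy illustration of that last point, consider the sketch below. The hiring records are invented, but they show how a system that “learns” only from past human decisions will faithfully reproduce whatever prejudice those decisions contain.

```python
# A hypothetical sketch of bias passing from human decisions into a "learning" machine.
# The hiring records are invented; the point is that the model's only teacher is
# history, so a skewed history yields a skewed model.
from collections import defaultdict

# (years_of_experience, group, hired) -- past human decisions
history = [
    (5, "A", True), (5, "B", False),
    (3, "A", True), (3, "B", False),
    (7, "A", True), (7, "B", True),
    (2, "A", False), (2, "B", False),
]

# "Train": estimate the hire rate per group from historical outcomes.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for _, group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_rate(group):
    hired, total = counts[group]
    return hired / total

# Two equally qualified candidates now get different scores, not because of
# merit but because the training data was skewed.
print(predicted_hire_rate("A"))  # 0.75
print(predicted_hire_rate("B"))  # 0.25
```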
Yet it happens all the time, and some of the inherited biases produce results with relatively minor consequences, such as the movie ratings on Fandango.com skewing higher than those elsewhere, as discussed in the earlier section, “Fair Isn't Always Fair.” But some of the results have consequences with real financial costs, or even costs to social justice and equality. Fresno police mine social media to assign residents a “threat score.”41 Researchers at Carnegie Mellon University found that women weren’t seeing ads for higher-paying jobs as often as men on Google’s ad network.42 Analysis of Uber wait times in D.C. showed longer waits in neighborhoods with more people of color.
It’s clear that algorithms and the platforms they power can have some inherently unfair design tics. We have to be aware of bias and privilege when we think about machine learning, because we need to recognize what we’re teaching machines to learn. We also have to be careful what we ask technology to do for us, or we risk encoding nuances of hidden intention and bias in the machines we ask to go on learning from that framework.
All in all, if we’re designing experiences, we’re accountable for how we model fairness in the data we collect through human interactions, as well as how the algorithms we build make decisions against that human data.
Filter Bubble
In 2011, Eli Pariser hit a nerve with his TED Talk and the book it drew from, The Filter Bubble: What the Internet Is Hiding from You. The upshot? That personalization algorithms for online content are shaping what we consume (and to some extent, what we are able to consume) so that we are less and less exposed to divergent ideas.43
Pariser’s talk cites how Netflix, among other commerce examples, applies personalization. That’s what got me thinking about how this algorithmic content evolution relates to online marketing, e-commerce, and our ethical responsibilities as marketers. Oddly, even though many of my friends who shared this video are themselves thinkers about digital marketing and online social sharing, there seems to be very little introspection about what the “filter bubble” effect means in terms of online marketing and ethics.
In my digital analytics agency, we encouraged our e-commerce clients to test behavioral targeting. There’s typically a great deal of convenience that this kind of targeting affords the customer. For example, Amazon knows that I tend to buy vegan cookbooks, so it tends to show me the latest and best-rated related books in my browse path. I welcome this because I get exposed to books I somehow might have missed but will almost certainly like. If I were in a bookstore, these might be shelved together anyway, but metadata and personalization can help make those recommendations even better. On Netflix, too, there’s a good chance that showing me personalized suggestions will save me time and delight me, even as it reinforces my longevity with the site and ensures my subscription payments for months to come.
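For readers who want a feel for the mechanics, here is a minimal, hypothetical sketch of item-to-item recommendation by co-purchase, the rough idea behind “customers who bought this also bought” suggestions. It is not Amazon’s or Netflix’s actual algorithm, and the order data is invented.

```python
# Hypothetical sketch: recommend items that are frequently bought together.
from collections import Counter
from itertools import combinations

# Invented purchase histories, one set of items per order.
orders = [
    {"vegan cookbook", "chia seeds"},
    {"vegan cookbook", "blender"},
    {"chia seeds", "blender"},
    {"vegan cookbook", "chia seeds", "blender"},
    {"power drill", "ladder"},
]

# Count how often each pair of items appears in the same order.
co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1

def recommend(item, n=3):
    """Rank other items by how often they were bought alongside `item`."""
    scores = Counter()
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(n)]

print(recommend("vegan cookbook"))  # e.g. ['chia seeds', 'blender']
```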
This selfish/selfless balance is the new normal in marketing optimization. It’s what I personally am passionate about: using data to create better customer experiences and, simultaneously, generate incremental profits. It’s what we do with our clients, and their KPIs speak for themselves.
But are we contributing to this insular and narcissistic phenomenon where the more time individuals spend online, the more mirrors are set up around them so that they can no longer see diverse behavior, but rather increasingly similar likenesses of themselves? Perhaps. After all, one of the keys to the work we do is an emphasis on relevance. As I think of it, relevance is a form of respect. It shows customers that we respect their time and effort enough not to make them scour the site for what they’re after.
Chris Brogan and other digital thought leaders have spoken about social news as a serendipity engine. (Serendipity, incidentally, has long been my favorite word and a beloved concept.) In earlier iterations of social news, you got what you got. So, too, in early e-commerce. As the availability of information has accelerated, though, and personalization algorithms have evolved, some of that serendipity has been traded for distillation and, yes, relevance. So, sure, from an editorial perspective, in the video Pariser is justified in saying that Mark Zuckerberg’s example of “a squirrel dying in front of your house” is not as important as “people dying in Africa.” But in commerce, the dilemma of moral or ethical priority is not nearly so clear-cut. Perhaps the personalization of search and social news makes it less likely that you’ll happen upon something random and wonderful, but the continued explosion of long-tail content and commerce means there’s randomness even within niches. While the “filter bubbles” Pariser describes might obscure your view of the randomness and chaos of the web, in general, personalization does help uncover hidden gems within customers’ interests.
Because the other side of all this tailoring and customization is that the long tail is getting longer in every area, and the realization that we’re not going to be able to see most of what’s out there is starting to sink in. So personalized content and merchandising is as much a response to information overload as it is to data availability. Going back to my earlier example, if I landed on Amazon’s home page and it made no effort to customize the content for me, it’s likely I’d have little idea of the breadth and depth of its catalog as it related to the semi-obscure offerings that appeal to me. Would I think to search for chia seeds, one of my recent purchases at the site, if it hadn’t been made clear to me that Amazon carried food as well as books (and tools and shoes and sporting equipment . . . and, and, and)?
After all, relevance and targeting are not new phenomena in marketing. We study demographic and psychographic information to understand customer profiles so that we can tailor our advertising placements, our message, and our follow-through for optimal results. What’s newer is the ability to adjust whole experiences on the fly based on behavioral performance. Imagine if you walked into a store—let’s use Nordstrom as an example, since it’s famous for its quality concierge service. As you looked around and your attention landed on an object, the other objects around you shifted. Would you feel more catered to or more pandered to? Or perhaps both? In the context of Nordstrom, where it has been established that it’s trying to improve your shopping experience, perhaps it would only seem like another level of superior customer service. If you had the same experience in, say, Walmart, chances are a savvy shopper might feel manipulated.
As a marketer, I see my job as creating meaningful connections between company and customer. (Note that I don’t say that my job is to convert customers: I’m an advocate of empathy-oriented optimization as opposed to conversion. The latter as a single KPI is too narrow and shortsighted.) As a data-driven, technology-savvy marketer, I know that behavioral similarities among visitors, and ultimately customers, often lead to clues, validated through analysis and testing, that can improve the customer experience overall—and, in turn, increase profit. That this also occasionally means limiting a customer’s view of the site and creating an insulated experience is not only an acceptable side effect, it’s intentional. That’s what customer behavior dictates. Customers become overwhelmed when presented with too much choice, and since niche options abound online, that means that if I’m HomeDepot.com and a customer comes in from a search for power tools, I’d best show top-selling power tools and not home appliances or ladders.
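In code, that kind of intent-based merchandising can be as simple as the hypothetical sketch below: map the search category a visitor arrived from to a curated list of top sellers, and fall back to general best sellers otherwise. The category names and products are invented, not any retailer’s actual data or logic.

```python
# Hypothetical sketch of intent-based landing-page merchandising.
TOP_SELLERS = {
    "power tools": ["cordless drill", "circular saw", "impact driver"],
    "appliances": ["refrigerator", "dishwasher", "washing machine"],
}

DEFAULT_PICKS = ["gift card", "tool set", "ladder"]  # shown when intent is unknown

def landing_page_products(search_category: str) -> list[str]:
    """Show products matching the visitor's inferred intent, falling back to
    general best sellers when we know nothing about them."""
    return TOP_SELLERS.get(search_category.lower(), DEFAULT_PICKS)

print(landing_page_products("Power Tools"))   # curated power-tool picks
print(landing_page_products("garden hoses"))  # generic fallback
```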