Pixels and Place


by Kate O'Neill


  People trying to build coalitions in neighborhoods rely on the tried-and-true but slow, time-intensive method of going door to door to build community in the offline sense. The dynamics of community are like that: top-down community gets pushback, whereas bottom-up community takes patience and effort. But online, that model of approaching people one by one often doesn’t translate. What would it mean to try to build community in a one-to-one sense using online tools?

  So here we’ll step back and examine what community is and what neighborhoods are, and see if we glean something valuable in the process.

  What Community Is, Offline and Online

  What does community mean for, say, a college campus? What are the dynamics that affect the way people relate to each other?

  To some extent, it has to do with the aesthetics of a place—with the identity the people there share, as well as their shared culture.

  What does community mean for a city like, say, Nashville? Or New York City? Or Omaha?

  There are many ways to think about the meaning of a place, such as its identity and the experiences one associates with it. But the one that rings truest to me is that part of the meaning of any given place is in the community we create there.

  By “there,” I mean not only a place, but also the context of a place—such as a moment in time, and the people who were part of your community then. Like “school.”

  It’s about connecting with the people who can own that experience, who will connect with the community there. Because community is meaning writ large.


  Neighborhood Memory

  A refreshingly human take on pixels and place is the project undertaken by 596 Acres, a public land advocacy group in New York City that taps into the wonderful concept of “neighborhood memory,” which is critical to understanding the community and culture of a place. Through signs posted on vacant lots, a phone number, and a website, 596 Acres is tracking the stories and knowledge of people who’ve lived in a neighborhood and capturing them in digital form. But the integration of offline—meaning, in this case, local human interaction—and online is key:

  “‘You can’t make it all work with a website. You might need a website to understand the situation so you can help the people that live in your city to solve problems,’ Segal says. ‘But without a local advocacy organization the thing doesn’t work.’”22

  There is a wonderful insight about place and community in how neighborhoods “remember” their culture through storytelling and cultural artifacts. In this case, digital tools help to preserve the sense of a place. In some online communities, the reverse applies: a sense of place, of sorts, needs to be designed intentionally to give digital tools that kind of significance.

  And consider sharing economy/experience economy brands like Airbnb, Uber, and Lyft—multisided marketplaces built around physical experiences that by definition need to design, at some level, both for community and for cross-channel, connected consumption. Place is inherent in the location-specific nature of the service; what these brands need to foster is a sense of significance in the community of that place.

  Protecting Community Members

  One of the elements of fostering community surely comes down to safety: how safe you make community members feel about being in the space, whether it’s a physical space or a digital one. When we think about online platforms like Twitter or Facebook, we may find valuable lessons in communities’ attempts to reclaim safe space.

  Safety and Online Community: Twitter and #GamerGate

  In an integrated approach to experience design for any placemaking or digital strategy, the dynamics of how interactions will play out need to be thought through. Will anonymity lead to marginalization? How can that be counterbalanced?

  In the case of Twitter—a format that encourages quick, brief thoughts and lends itself to anonymity—major controversies have arisen around tensions between users or groups of users and their detractors and harassers. The #GamerGate debacle was one such spiraling meltdown involving the gaming industry that resulted in death threats and other harassment, most notably toward women. Twitter also indexes high for usage among African Americans, and as a result, a culture known as #BlackTwitter has emerged; but outspoken members of this group are often targets of harassment as well. As a result of these and other issues, Twitter is constantly being criticized by some of its most prominent users for not doing enough to protect its community.

  In December 2015, Twitter announced that it had hired Jeffrey Siminoff, who had previously headed up diversity at Apple, to be their vice president of diversity and inclusion. On the surface that sounds positive, but the fact that he’s a white man drew fire from some on Twitter, predominantly women and people of color who felt that hiring a white man for the role showed a lack of nuanced understanding on Twitter’s part about what the real issues were.

  The ease with which accounts can be created on Twitter has been an asset to many people developing projects and corresponding accounts with a niche focus. People regularly create accounts to tweet on behalf of a dog or to automate tweets from a bot.

  But those same interaction mechanics—the ability to spin up numerous sock-puppet accounts and spew hateful rhetoric at a person until they block each one—may have helped turn some members of the Twitter community into targets.

  CHAPTER SIX

  The Ethics of Connected Experiences

  In many ways, we’ve become the frog in the pot, ignoring the rising temperature of the water we’re in. The services many of us enjoy using every day ask to track our activity or examine our conversations.

  As the saying goes, if you’re not paying for the product, you are the product.

  Because the data involved in connected experiences also includes our physical location, we’re vulnerable in physical ways. For example, in the days after the incredibly popular augmented reality game Pokémon GO launched, muggers reportedly used the game to lure players to a location in order to rob them.

  Does that mean we shouldn’t experiment with integrated experiences? Not at all. We simply must do so with an awareness of the very real impact our decisions may have on the people who interact with us.

  Fair Isn't Always Fair

  Imagine walking up to an airplane ticket counter, buying a ticket to Chicago, and being charged $359. Then, as you start to walk away from the counter, you hear the next person ask for a ticket to Chicago and be charged $279. Suppose you go back to the counter and ask the ticket agent why you had to pay more than the other customer, and suppose the agent tells you it is because you are wearing blue and the other person is wearing red—which is the airline’s preferred color.

  It’s an absurd example, but only in its particulars. In its fundamentals, it’s not unlike a lot of the disparity that happens in pricing all the time. Some of that disparity has to do with algorithms that have been encoded with certain biases, whether intentionally or unintentionally (see also Chapter Nine, “Algorithms and AI”), while some of it is overzealous analysis and optimization run amok.

  Staples and prices based on proximity to competitors

  A 2012 Wall Street Journal article and investigation showed that Staples was serving different prices to different users, and it seemed to have to do with location. More to the point, it seemed to be based on proximity to a competitor’s store. If you were closer to a rival store when you pulled up the Staples website, you were more likely to see a discounted price. According to the article:

  Offering different prices to different people is legal, with a few exceptions for race-based discrimination and other sensitive situations. Several companies pointed out that their online price-tweaking simply mirrors the real world. Regular shops routinely adjust their prices to account for local demand, competition, store location and so on. Nobody is surprised if, say, a gallon of gas is cheaper at the same chain, one town over.

  But price-changing online isn't popular among shoppers. Some 76% of American adults have said it would bother them to find out that other people paid a lower price for the same product, according to the Annenberg Public Policy Center at the University of Pennsylvania.23

  In fact, marketers often experiment with price elasticity testing. For past employers and clients, I’ve been part of running quite a few A/B experiments on price, with the aim of determining which price point is most effective at producing the most sales. It takes some calibration to get the price point right for a new offering. Should this widget be $12.99 or $15? Okay, more people are buying at the $12.99 price point, but is it because of the lower price or the .99 format of the number? Let’s try it at $12.99 and $11 to be sure. Okay, more people are still taking the $12.99 price, so we can either stop there and run with that price or run a few more tests to see whether $11.99, $12.99, or $13.99 works best. You get the idea.
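  To make that concrete, here is a minimal sketch of how such a test might be read, using made-up visitor and purchase counts (the numbers and names are illustrative, not from any real experiment):

```python
# Minimal sketch of reading an A/B price test; all counts are hypothetical.
# A lower price can win on conversions but still lose on revenue, so we
# compare both conversion rate and revenue per visitor.

def summarize(price, visitors, purchases):
    conversion = purchases / visitors
    revenue_per_visitor = conversion * price
    return conversion, revenue_per_visitor

variants = {
    "A ($12.99)": (12.99, 5000, 410),  # hypothetical counts
    "B ($15.00)": (15.00, 5000, 330),
}

for name, (price, visitors, purchases) in variants.items():
    conv, rpv = summarize(price, visitors, purchases)
    print(f"{name}: {conv:.1%} conversion, ${rpv:.2f} revenue per visitor")
```

  In this made-up run, the $12.99 variant wins on both measures, which is the kind of signal that would prompt the follow-on tests described above.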

  But what this means is that some companies are taking those results and saying, “People who came in on Macintosh computers keep buying at the higher price points, so let’s identify them when they visit the website and push them a higher-priced offer since they’re more likely to take it.” Pushing out different pricing to different segments of customers? That’s crossing an ethical line.

  Fandango’s exceptionally high ratings

  Not all of the weirdness has to do with pricing, either. Online ratings tend to skew toward the high side of the distribution anyway, but the user-provided movie ratings on Fandango.com skewed very high indeed.

  The data analysis website FiveThirtyEight explored the topic in 2015, pointing out oddities in the presentation of the ratings: user ratings were always rounded up, rather than rounded to the nearest value, resulting in an exaggerated overall average rating. FiveThirtyEight also noted that Fandango, whose business is selling movie tickets to consumers, has a sales motive.

  Several sites have built popular rating systems: Rotten Tomatoes, Metacritic and IMDb each have their own way of aggregating film reviews. And while the sites have different criteria for picking and combining reviews, they have all built systems with similar values: They use the full continuum of their ratings scale, try to maintain consistency, and attempt to limit deliberate interference in their ratings.

  These rating systems aren’t perfect, but they’re sound enough to be useful.

  All that cannot be said of Fandango, an NBCUniversal subsidiary that uses a five-star rating system in which almost no movie gets fewer than three stars, according to a FiveThirtyEight analysis.24
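  To see concretely what rounding up, rather than rounding to the nearest value, does to a displayed score, here is a small sketch (assuming a half-star display scale, as FiveThirtyEight described):

```python
import math

def round_up_half(rating):
    # Always round UP to the next half star: 4.1 -> 4.5
    return math.ceil(rating * 2) / 2

def round_nearest_half(rating):
    # Conventional rounding to the NEAREST half star: 4.1 -> 4.0
    return round(rating * 2) / 2

for r in (4.1, 4.6, 3.05):
    print(f"{r}: displayed {round_up_half(r)} vs. {round_nearest_half(r)}")
```

  Across thousands of displayed ratings, that small but systematic upward bias adds up.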

  Overly high ratings may not have as direct an impact on people as prices skewed on seemingly arbitrary factors (such as what kind of computer you use), but the overall takeaway here is that using data to drive experience comes with a certain amount of risk. As designers of experiences, we need to be responsible not only for what data we collect and use but also for how we collect and use it. In other words, we need to protect consumer privacy, and we also need to monitor the methods by which we use consumer data to target experiences, so that they don’t cross the line of what’s ethical and fair.

  Moreover, as we consider the implications of this kind of inequity for connected experiences, it becomes clear that companies can use additional data sets and hardware like beacons to target us based on things like our location data—and that starts to get even more dicey. So when we are the ones instigating that targeting through integrated experiences, we must be mindful of whether we’ve gone beyond offering relevance and into offering disproportionate opportunity based on arbitrary factors.

  The Ethical Burden of Too Much Data

  Our objective in tracking data should be quite simple: to use it responsibly to align our brand purpose and objectives with our customers’ or users’ motivations and needs, and to gauge our effectiveness. Our data stores become needlessly complex and ethically burdensome when we collect data we have no application for, or whose use can in no way benefit the end user.

  If our data collection exceeds that function, we increase our risks: the risks of exposure to breaches, and the risks of misguidedly using consumer data in a way that can only be regarded as manipulative, greedy, or simply untrustworthy.

  The Ethical Burden of Data Collection

  For example, an article in the Independent reports that Uber collects battery-level information from users calling for rides. As a result, the company knows when your phone is about to die.25 It also knows that you’re more likely to pay “surge” fares in that situation. In the article, the company representative is quick to say that Uber is not using the data; but nonetheless, that’s a lot of what we might call “ethical burden” from collecting data that doesn’t readily align the company’s motives with its customers’ motives. If Uber is going to collect it, it should be looking for interpretations that align those motives—for example, developing a means of processing the ride in the background or expediting the response to a call.

  Either of those objectives for collecting the data, if understood by customers, would foster delight, ultimately resulting in greater customer loyalty and word-of-mouth recommendations, which could lower their cost of new customer acquisition. When the company instead evaluates whether they could get away with surge charges in that scenario, they misalign: They only stand to benefit themselves and alienate customers. Even if they don’t use the data, it has already become ethically burdensome.
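  As a purely hypothetical sketch of what that alignment could look like in code (every name here is invented for illustration; this is not Uber’s actual app logic):

```python
from dataclasses import dataclass

# Hypothetical sketch: using battery data FOR the rider, not against them.
# All names are invented for illustration; this is not any real ride API.

LOW_BATTERY = 0.15  # assumed threshold for "phone about to die"

@dataclass
class RideRequest:
    rider_id: str
    persist_server_side: bool = False  # ride survives a dead phone
    dispatch_priority: str = "normal"

def handle_request(request: RideRequest, battery_level: float) -> RideRequest:
    if battery_level < LOW_BATTERY:
        request.persist_server_side = True       # process ride in background
        request.dispatch_priority = "expedited"  # respond to the call faster
    # Deliberately absent: any fare adjustment based on battery level.
    return request
```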

  So, does Uber deserve to have access to that data? Do any of our projects deserve the access we have? What are we doing to safeguard ourselves?

  Access to user data is certainly now critical to contemporary marketing. The success of the user experience and the ability to improve it over time relies on the underpinnings of the data model, and how thoughtfully and intentionally it is created.

  After all, another way to look at these developments is in terms of depth of data. Not too long ago, if you wanted to make informed online marketing decisions, you were happy to be working off of search data. All you had was words. Well, words plus clicks. But mostly words.

  Here’s an example. If someone searched for the phrase “common songs,” you’d have to disambiguate among multiple interpretations of that query. Most probably, they wanted to find songs by the artist Common. But they also could have meant songs that are commonly known, songs that everyone could sing along to, such as “Happy Birthday” or “This Land Is Your Land.”

  As time has gone on, the data available to decode those types of ambiguities has grown more sophisticated: beyond search terms and clicks, there are now website visit histories, social graphs, social statuses, and more. Software built to offer ads and content to website visitors has gotten better at inferring what might be successful. Marketers don’t necessarily have access to the breadth of this data up front, but they can design campaigns around conditional use of these characteristics.
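  A hypothetical sketch of that kind of conditional design, with invented attribute and offer names, might look like this:

```python
# Hypothetical sketch of conditional campaign rules: the marketer never sees
# each visitor's raw data but declares which offer to serve when certain
# attributes are present. All attribute and offer names are invented.

def choose_offer(visitor: dict) -> str:
    if visitor.get("repeat_visitor"):
        return "loyalty_discount"
    if visitor.get("referrer_type") == "search":
        return "keyword_matched_landing_page"
    return "generic_offer"

print(choose_offer({"referrer_type": "search"}))  # keyword_matched_landing_page
print(choose_offer({"repeat_visitor": True}))     # loyalty_discount
print(choose_offer({}))                           # generic_offer
```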

  Now with all this place-related data, you have movement, locations, and time to consider. If you’re a marketer and you want to successfully make relevant and timely offers, you have to anticipate the place-related data. Anticipating it requires having enough empathy to hypothesize what might be happening as a person encounters triggers that make them want to buy or research a solution like yours. You have to be able to think about their context—their surroundings, their possible setup and equipment at the moment, their emotional state—as well as their outward indications, such as search terms, click path, and so on. And you have their location, which may or may not tell you anything directly; people may be doing their banking from within a retail store, their shopping on a sailboat, their news reading on the bus, or anything else.

  So if we’re going to be intentional about this convergence, we need to look at the different metaphors we use to talk about and represent our physical and digital worlds. We can examine what associations we’re making with different experiences, and how we actually experience them.

  Is There an Ethical Limit to How Much Data We Collect?

  Overall, this does raise an interesting question: if there is an ethical burden placed on any company that has control of data and insights about customers, and since the function of a company is to sustain itself and be profitable (meaning it will never act principally in the consumers’ interest), does that mean that companies should limit, whether voluntarily or through legislation, the amount of data they collect?

  Some in the data security space talk about a model known as “focused collection,” which is advocated by the White House’s Consumer Data Privacy Bill of Rights. “Focused collection” prescribes collecting only what’s needed from the consumer for the operations of the business.
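  One way to picture focused collection is as an intake filter tied to a declared business purpose. In this sketch, the purposes and field names are illustrative assumptions, not drawn from any real system:

```python
# Minimal sketch of "focused collection": each declared business purpose
# lists the only fields it may keep; everything else is dropped at intake.
# Purposes and field names are illustrative assumptions.

ALLOWED_FIELDS = {
    "ride_dispatch": {"pickup_location", "destination", "payment_token"},
    "receipt_email": {"email_address"},
}

def collect(purpose: str, raw_event: dict) -> dict:
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in raw_event.items() if k in allowed}

event = {
    "pickup_location": "corner of 5th and Main",
    "destination": "airport",
    "battery_level": 0.14,  # no declared purpose needs this, so it is dropped
}
print(collect("ride_dispatch", event))  # battery_level never gets stored
```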

  This is sensible, and if companies undertake it voluntarily, it’s great. It also means reduced risk for the company collecting the data: with less sensitive data on hand, they’re proportionally less vulnerable to attack and liability if their servers are compromised.

 
