Understanding Context

by Andrew Hinton

  That’s where the simple affording information stops, because as soon as we add a door to the doorway, things get a lot more complex. Even though the door is physical, there are many mediating factors involved in how we perceive its function. As Norman points out, we have to know whether the door opens inward or outward; whether it moves sideways, up, or down; whether we need to pull or twist something to open it; or whether it’s automatic, and what behavior will trip its sensors.

  In Gibson’s terms, a door is a compound invariant—a collection of invariants that present a combined, learned function of “opening” and “closing” doors. A specific door is a solid cluster of objects that works the same way each time, following its own physical laws. Even if it doesn’t work like any other door, it persistently stays true to its own behavior.

  Similar to how language means what it does because of conventional patterns of meaning, most doors fit conventional patterns or genres of door function. That is, even simple doors require learning and convention. We learn after a while that certain form factors in doors indicate that they work in one way versus another, not unlike the mailbox discussed in Chapter 7. Even if everything about the door is visible—its hinges, its latch mechanism—it still requires our having learned how those things function for us to put together the clues into the higher-order, mechanical affordance of door use. All of this speaks to whether we understand we are in a context in which we can go through an opening or not, and what physical actions will cause the events we need in that context.

  Then, there is the nested context of the door: is it in a building where people are conditioned to avoid walking through unknown doors, such as in a highly secure office complex? Is it in a school where students learn a pattern of where to go from class to class each year? I still remember how it felt to start a new grade in school and be granted access to new rooms in new classes: the entire school felt as if it shifted under my feet, and old doors became clutter, whereas new doors became the new shape of places for what “school” meant to me...at least until the next summer.

  No door is an island, so to speak. It’s part of a larger construct of symbols, social meaning, and cultural expectation.

  I had a recent experience with a door that reminded me of Norman’s examples. In this instance, I nearly smacked my face into the glass entrance of an office supply store, because I didn’t pick up on the “Pull” label next to the door handle. Here’s a picture of the door in question.

  Figure 11-5. A door leading into a retail store[238]

  There were a lot of contextual elements that contributed to my embarrassing encounter:

  As in Norman’s examples, the handles were not shaped in a conventionally distinctive way to indicate whether they better afforded pulling or pushing. In fact, they looked a lot like the handles that one normally pushes.

  The doors are transparent glass, so I was already looking inside the store, trying to spot the department I was there to visit, barely paying attention to the door itself.

  The glass also allowed me to see the handle on the other side; and since most doors with the same-shaped handle on both sides open both ways, my perceptual system didn’t bother to loop more explicitly and prompt me to consider any other possibility. As always, my body satisficed.

  The sign wasn’t invisible to me—but my perception picked it up as clutter rather than as its intended, semantic meaning. It was just an object between me and where I was going; an aberrant protrusion of gray into the glass. One simpler set of information rode along on my “loops of least resistance” to override another, more complex set of information.

  Also note how little difference there is between the capital letters spelling “PUSH” and “PULL.” In terms of raw physical information, this situation was relying on the narrow difference between “SH” and “LL”—on a label that was the same color as the door’s aluminum. (A quick sketch below quantifies just how narrow.)

  I was having a conversation with my wife and daughter, who were with me, so I was verbally preoccupied. Even though our cognitive abilities can take in lots of intrinsic, physical information at once, we have a difficult time picking up clear information from more than one semantic interaction at the same time.

  I was the first to reach the door, and by the time I did so, the sign was actually below my field of vision. So when the door didn’t budge, the sign was of no help to me. Of course, my daughter’s barely stifled laughter and exclamation, “The sign says ‘Pull,’ Dad!” helped to clue me in.
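  About that narrowness: a trivial, illustrative Python check of the two labels shows how little raw information distinguishes them.

```python
# "PUSH" and "PULL" differ in only two of their four letters, and only
# at the tail end of the word -- a thin signal to hang an action on.
a, b = "PUSH", "PULL"
differing = sum(1 for x, y in zip(a, b) if x != y)
print(f"{differing} of {len(a)} characters differ")  # 2 of 4 characters differ
```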

  Beyond rationalizing my clumsiness, this detailed look shows how we can take a simple situation and do a rigorous analysis of environmental information to think through the cognitive scenario. We should always bring such a “close reading” approach to answering the central question we’re exploring in this book: will this environment be perceived and understood, in a real situation, with real people? The reluctance and lack of patience in design work to do this kind of analysis are precisely why so many designs still have contextual confusion.

  This door isn’t an object on its own, but a system of invariants, nested in an environment of other invariants, from simple intrinsically physical information to higher-order, complex, and signified semantic function. And, it’s nested within events involving people, some of whom are in an embodied state to comprehend “PULL” and some who aren’t. Context isn’t just one thing for everyone; it is shaped in part by the actions and perceptual state of the agent. From my perspective in this scenario, there was no clear line where affordance ended and signification began.

  We see similar issues in the simulated objects and surfaces of digital places. For example, all of us have experienced receiving marketing content via email and deciding we want to unsubscribe from it. Most of these emails provide an easy way to turn off the subscription with only a click or two. As with most doors we encounter, we approach this interaction with expectations driven by the invariants of convention and prior experience.

  So, when my wife tried to unsubscribe from the deluge of emails she was receiving from Fab.com, she assumed it would work like the others. Tap or click “unsubscribe” in the email, then possibly verify the request at a web page. But she kept getting the emails. Take a look at Figure 11-6 and see if you can spot the problem. Notice the big, red button that would normally signify the invariant for “Yes, let me out of this!”—but here, it actually means “No, I decided to stay!”

  The interaction presents a series of steps that conventionally end with unsubscribing. A big red button at the end of most transactions means: Yes, complete this irreversible action. But in this case, it does the opposite, confounding what the user has learned from invariants in the past. This interaction was also nested within a smartphone’s display, rendering the view with tiny text that’s almost unreadable. So, not unlike the door into the retail shop, the text wasn’t doing much good here, and was easily trumped for a typical, satisficing user, relying on their cognitive “loop of least resistance.”

  Figure 11-6. The “dark pattern” of accidentally resubscribing to Fab.com takes advantage of learned invariants

  It’s similar to a technique used in so-called phishing scams, which trick users into providing information they would not otherwise offer. Phishing is named that way—after “fishing”—because, like a hungry fish biting a baited hook, a user often acts based on learned invariants without explicitly considering all the environmental factors at hand.

  When an interface takes advantage of our cognitive shortcuts, against our wishes, we tend to call that a “dark pattern”—a sort of “dark side of the force” usage of a design pattern. Whether the designers at Fab.com did this consciously or not, the effect is the same. It uses our forward motion through the environment against us rather than meeting the embodied expectations we bring to the invariants of our context.
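  To make the inversion concrete, here is a minimal sketch in Python (a hypothetical model, not Fab.com’s actual interface code) of how the pattern turns a learned invariant against a satisficing user:

```python
# Two versions of an unsubscribe confirmation step. Conventionally, visual
# prominence maps to "complete the action you came here for"; the dark
# pattern keeps the prominence but swaps the action behind it.

CONVENTIONAL_DIALOG = {
    "prompt": "Are you sure you want to unsubscribe?",
    "primary_button": {"label": "Unsubscribe", "style": "big-red"},
    "secondary_link": {"label": "Keep my subscription", "style": "small-gray"},
}

DARK_PATTERN_DIALOG = {
    "prompt": "Are you sure you want to unsubscribe?",
    "primary_button": {"label": "Keep my subscription", "style": "big-red"},
    "secondary_link": {"label": "Unsubscribe", "style": "small-gray"},
}

def satisficing_click(dialog):
    """A hurried user clicks whatever is most visually prominent, without
    reading the tiny label -- the cognitive loop of least resistance."""
    return dialog["primary_button"]["label"]

print(satisficing_click(CONVENTIONAL_DIALOG))   # Unsubscribe (intent satisfied)
print(satisficing_click(DARK_PATTERN_DIALOG))   # Keep my subscription (intent betrayed)
```

  Nothing about the mechanics is broken; only the learned mapping between prominence and meaning has been inverted.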

  Ducks, Rabbits, and Calendars

  Semantic information gives us the remarkable superpowers of symbols, but at the cost of disconnecting language from the physical environment. The less contextual information we have, the more complicated signification becomes, whether with visual or textual semantic information. In Philosophical Investigations, Ludwig Wittgenstein famously considers a line drawing that could look like either a duck or a rabbit, and uses it as an example of how language works.[239] He refers to the figure in numerous places throughout Investigations. In one instance, he discusses how, if we place the picture among other duck pictures, it looks more like a duck, and more like a rabbit among rabbit pictures.[240]

  Figure 11-7. From Jastrow’s “The Mind’s Eye,” 1899[241]

  Wittgenstein also explains that when we see such a figure, we don’t usually say, “I see it as a duck,” or “I see it as a rabbit.” Instead, we say, “I see a duck,” or “I see a rabbit.” That is, in our natural manner of interacting with language, we don’t step back and distinguish between seeing something and seeing a representation as that something.

  An optical illusion or visual trick such as the duck-rabbit works because it’s an incomplete representation, constrained by its medium. In nature, we wouldn’t confuse a duck for a rabbit; there would be enough physical information that we could pick up through active perception to tell one from the other.

  But an optical illusion like this is not the physical world: it is a representation—a display—that leaves out the information we evolved to pick up when perceiving actual surfaces and objects. This is a quality that semantic information has generally; whether words, pictures, or gestures, it introduces ambiguities into our environment much more easily than physical information does. Of course, this picture could be expanded to finish the drawing of the animal, and that would make it clearer what sort of animal it is. Like Groucho’s elephant in pajamas, though, this would spoil the “trick.” Most information environments aren’t jokes or optical tricks, however; they’re meant to be understood.

  Because semantic information is part of our environment, our cognition tries to use it in the same satisficing way we use floors or walls or stones lying on the ground. We try working with it and making our way through it as if it were physical. When we see a link in a website that says “Poetry,” in the moment of action, we don’t typically think to ourselves, “I am going to click a link that means poetry and it will take me to things that represent publications containing poems.” We take the action expecting to then see and interact with objects containing poems. We “go there” to “look at the books.” We reify and conflate, as if it were a passageway into a place. When we look at a particular book on a bookstore’s website, we treat it as if it were a book we were looking at on a physical shelf at a local bookshop, if the website’s design affords us the convenience to do so.

  But the same sort of ambiguity that we see with the duck-rabbit can creep into our software structures. Recall from my airport scenario how I had assumed that my coworker could see my travel information in my calendar. This has to do, in part, with how Google Calendar uses the word “calendar” ambiguously, but it also has to do with how a calendar isn’t just one display object anymore, but an abstraction that is instantiated in many different contexts.

  Figure 11-8 displays some of these instantiations:

  A. The icon that represents a calendar in the Google Apps navigation menu.

  B. Within the Web view, a calendar-like interface that takes up most of the screen, from the left column to the right edge.

  C. Also in the Web view, the lists of “My calendars” and “Other calendars.” These are actually calendar feeds, but they are named here as “calendars.”

  D. In TripIt’s web interface, the “Calendar Feed” I use to create the published, calendar-API version of my TripIt itineraries.

  E. On its “Home Screen” interface, my iPhone’s “Calendar” app icon, which also represents the idea of a singular calendar-object.

  F. My calendar as shown when opened on my iPhone. It shows some of the same information as the Google Web view, but not all of it. Some is color-coded differently as well. It doesn’t explicitly differentiate the “feeds” other than by color and by displaying the source of an event in its event-detail view.

  Figure 11-8. The various instances of “calendar” from the airport scenario in Chapter 1

  Example B shows that I have a Project Status Meeting scheduled in the middle of my flight to San Diego. That’s because the scheduler didn’t know I was on the flight: the flight’s “calendar” is a feed generated by TripIt, and isn’t visible to those who share my Google Apps calendar.
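  Concretely, a published feed like TripIt’s is just an iCalendar (.ics) document that calendar clients poll over the Web. Here is a minimal Python sketch of what subscribing amounts to (the feed URL is a placeholder, not a real TripIt address):

```python
# Reading a published iCalendar (.ics) feed. The URL is a placeholder.
import urllib.request

FEED_URL = "https://example.com/feeds/ahtripit.ics"  # hypothetical feed address

ics_text = urllib.request.urlopen(FEED_URL).read().decode("utf-8")

# An .ics document is plain text: VEVENT blocks carrying SUMMARY,
# DTSTART, and similar properties. Pull out just the event summaries.
summaries = [line.split(":", 1)[1]
             for line in ics_text.splitlines()
             if line.startswith("SUMMARY")]

print(summaries)  # e.g., ['Flight to San Diego']
```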

  Did I understand how all this worked? Yes, when I thought about it explicitly. But it had been many months since I had set up the TripIt feed, so I had forgotten the rules for access permissions and was in too much of a hurry to think about them. In the satisficing actions we take in an everyday environment, we don’t always take the conscious, explicit effort required to disambiguate all the different meanings of something. This is especially true if the environment’s language conflates many functions into one semantic object—in this case, the word “Calendar.”
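  A minimal sketch of the architecture behind that forgotten rule (hypothetical class names and a deliberately simplified model, not Google’s actual data model): sharing a calendar exposes the calendar’s own events, while a subscribed feed is a read-only overlay that each person adds individually.

```python
# Why a subscribed feed doesn't travel with a shared calendar: the feed is
# an overlay on *my* view, not part of the calendar data I share.

class CalendarFeed:
    """A published, read-only stream of events (e.g., a TripIt .ics feed)."""
    def __init__(self, name, events):
        self.name, self.events = name, events

class Calendar:
    def __init__(self, owner):
        self.owner = owner
        self.own_events = []     # visible to anyone I grant access to
        self.subscriptions = []  # overlays visible only in my own views

    def my_view(self):
        events = list(self.own_events)
        for feed in self.subscriptions:
            events.extend(feed.events)
        return events

    def shared_view(self):
        return list(self.own_events)  # subscribed feeds are left behind

mine = Calendar("andrew")
mine.own_events.append("Project Status Meeting")
mine.subscriptions.append(CalendarFeed("AHTripit", ["Flight to San Diego"]))

print(mine.my_view())      # ['Project Status Meeting', 'Flight to San Diego']
print(mine.shared_view())  # ['Project Status Meeting'] -- the flight is invisible
```

  Both the owned events and the overlay are presented under the single label “Calendar,” which is exactly where the conflation creeps in.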

  There are many examples in Google’s applications suite, and in its other products, where it goes to great lengths to provide these contextual cues. However, the more complex the contextual angles and facets of an environment become, the more the design has to strike a balance between clarifying context and cluttering the interface. Users look at a calendar to see dates and to reference or create events; comprehending the entire rule-based environment is peripheral to that main purpose, even though it is at times a crucial aspect of the application.

  In this case, I can hardly fault the design decisions behind Google Calendar; it provides at least some cues: a differently textured background color (faint stripes) indicates a subscribed feed versus an item that is actually part of my Google calendar data (a convention not necessarily followed by other calendar client applications, however). Additionally, when I click the flight event, it’s clear that it is part of the “AHTripit” calendar (see Figure 11-9), and that I could “copy to my calendar” if I wanted.

  From an engineering perspective, everything works as it should; the system has a coherent logic that allows it to function with consistent rules. Even the interactive moment-by-moment mechanisms that I tap, click, or manipulate in these bits of software are fairly understandable. Where we find ourselves most muddled is in the information architecture of how the objects and places—and the rules that govern them—are represented with semantic information.

  Figure 11-9. Google Calendar on the Web allows me to see what “Calendar” the event is part of, and gives a one-click method to add it to my present “calendar”

  When I look at a predigital calendar, like the sort that hangs on a wall in the family kitchen, I know what I am seeing exists only in that place and time. But digital technology gives us the flexibility to create calendars that exist in many different forms. In a sense, there is no single calendar, no canonical object. It’s an aggregate, a reification; when we ask, “Will the real calendar stand up?” either they all stand, or none of them do.

  Semantic information is so second nature to humans that we simply overlook how deeply it forms and informs our experience. We can’t expect end users, consumers, customers, and travelers to ponder the nature of signs, or spend time giving a close-reading analysis to all the stuff they have to work with every day. Design has to attend to this hard, detailed work so that users don’t have to.

  Design has traditionally been centered on objects and physical environments. There is no “language design” discipline—it’s instead called “writing.” There’s nothing wrong with that, but we have to come to grips with the reality that language is a more important material for design than ever, especially with the arrival of pervasive, ambient digital systems. This distributed, decentered experience of “calendar” wouldn’t be possible without it, so our next focus will be on what it is about digital information that disrupts and destabilizes the physical and semantic modes.
