Understanding Context


by Andrew Hinton



  Chapter 11. Making Things Make Sense

  Thoughts exchanged by one and another are not the same in one room as in another.

  —LOUIS KAHN

  Language and “Sensemaking”

  WE CAN ACCOMPLISH A LOT OF PHYSICAL ACTIVITY WITHOUT HAVING TO CONSCIOUSLY MAKE EXPLICIT SENSE OF IT. We just do it. But sensemaking is a special sort of activity that brings another level of coherence with which we knit together our experiences, think about them, and understand them at a more abstract level.

  When we consciously try to make sense of our experience, it is an expressly linguistic activity.[225] Like perception itself, language is enacted; it is something we “do.”[226] We communicate with each other to develop a mutual understanding of our shared environment. Likewise, as individuals, we engage in a dialogue with ourselves about the environment and our choices in it, putting the mirror of language in front of us to “reflect” on our actions.

  The term sensemaking generally refers to how people gain meaning from experience. More specifically, it has been the term of art for several interdisciplinary streams of research and writing, starting in the 1970s, including human-computer interaction, organizational studies, and information science. Much of the academic work on sensemaking has been about how people close the gap between their learned experience and a newly encountered technology or corporate environment.

  When we study nature and strive to understand its complexity, we use language to create and reflect on that knowledge. The same goes for human-made environments, which are largely made of language to begin with. As one seminal article on sensemaking puts it, “When we say that meanings materialize, we mean that sensemaking is, importantly, an issue of language, talk, and communication. Situations, organizations, and environments are talked into existence.”[227]

  Our conscious engagement with context requires our use of language. When we experience new situations—whether a new software application or a new job at an unfamiliar company—we use language as an organ of understanding, calibrating our action to find equilibrium in our new surroundings. This activity involves the whole environment, including other people and the semantic information woven into the physical.

  Like basic perception of the physical environment, this sensemaking activity happens at varying levels between explicit and tacit consciousness. Somewhere between fully conscious thinking and mindless action, we perform a nascent kind of interpretation but without explicit labeling, because no name has yet emerged for the new thing we encounter.[228] It’s in the ongoing perception-action cycle that we have to exercise conscious attention and begin making explicit, thoughtful sense of the experience.

  We can perhaps think of this as a dimension added to our earlier diagram showing the perception-action loop (refer to Figure 11-1). As we make sense of our environmental context, the many “loops” of cognition that make up the overall loop are cycling not just among brain, body, and environment, but at different levels of conscious attention across the explicit/tacit spectrum.

  In each thread of sensemaking, there’s a building of understanding that takes place. Tacit, passive awareness transitions to purposeful consideration, which then (for humans) leans on the infrastructure of language to figure out “what is going on?”—which is then followed by “what do I do next?” This then spurs action that might go through the cycle all over again.

  We carve understanding out of the raw stream of experience by eventually naming and conceptually “fixing” the elements of our environment. This creates conceptual invariants around which we orient our sensemaking. Eventually, these newly identified invariants become part of the “common currency” for social engagement.[229] In other words, this individual cycle is also woven into the fabric of social sensemaking, where we all collectively add to the environment for one another, causing shared conceptual structures to emerge in our common culture.

  Figure 11-1. Explicit and Tacit spectrum over the perception-action loop

  There’s an awful lot going on in the preceding paragraphs, so here’s an example to illustrate: imagine pulling up to a fast-food restaurant’s drive-through menu. You’re hungry and just want something to eat, but you’re also trying to eat healthier fare lately, so you’re looking for better nutritional choices. Your hungry body (and emotional brain) is getting in the way of your more explicit, health-goal-driven thoughts, all while you’re trying to figure out what food the menu represents and an attendant squawks, “What’s your order?” through a scratchy audio speaker.

  A lot of your decision-making is being driven tacitly: your hungry body and the emotional centers of your brain are body-slamming your ability to thoroughly parse and understand the menu, while another hungry driver revs his engine behind you.

  The convoluted menu doesn’t help much: clever labeling of a “Prime Deluxe” and a “Premium Choice Grill” doesn’t provide much distinguishing information. And the most-popular options (because they’re the most tasty and least healthy) have the biggest pictures and easy-to-order numbering schemes—like a magician forcing a card to you on stage—nudging you further toward taking the quick option rather than having to make a more difficult decision.

  To avoid slipping into the path of least resistance, you have to begin reading the menu aloud to yourself, working hard to find the specific trigger terms you’re looking for—“salad” or “heart-healthy,” or whatever—and doing the math of calorie counts. Otherwise, you know you’ll just give up and say, “Gimme a number three,” and then drive away with a sloppy burger and a bag of greasy fries.

  It’s challenging to make sense of the environment well enough to make a different choice and avoid the less-thoughtful, default “grooves” provided by the menu (and the stressful pressure to “order now” while others wait behind you). You have to stop and think, calculate, and drag your brain into explicitly reflecting with language.

  This insight about how people make sense of their experience is crucial to designing context in any environment. Users gain coherent understanding of what they’re doing, where, and with whom, through individual and communal activity and communication. What something or somewhere “is” depends on all those factors, not just the discrete interaction of one person with one object or place. Nothing we make for others to use is experienced in a vacuum; it will always be shaped and colored by its surrounding circumstances.

  Even the labels we put on products can alter our physical experience of them. In a well-known series of experiments, neuroscientists had subjects sample wine with differently priced labels—from cheap to expensive. The subjects didn’t know it was the same wine, regardless of price. Even though the semantic information specifying price was the only difference between the wines, functional Magnetic Resonance Imaging (fMRI) brain scans showed significant differences in activation of pleasure centers in the subjects’ brains—they literally enjoyed the “expensive” wine more than the “cheap” one, even though there was no physical difference in the wines.[230] Again, our perceptual systems don’t spend a lot of time parsing semantic from physical information. These subjects took price as a face-value framing for the wine, knowing nothing else about it. It’s another example of how our immediate experience of the environment is a deeply intermingled mixture of signification, affordance, cultural conditioning, and interpretation. Similar studies confirm that the aesthetic styling of websites can strongly affect users’ opinions of their value.[231]

  In the airport example from Chapter 1, much of my activity was nudged and controlled by the semantic information that surrounded me, in concert with the social context: I tended to gravitate toward actions that were similar to what others around me were doing; and most of what I did was almost completely tacitly driven, until I had to stop and think about it explicitly. And even then, I found myself asking another person about what to do, and talking to myself about the labels and physical layout I was trying to understand. There was no separating the language from the physical layout of the airport; they were both intermingled as “environment” for my perception-and-action.

  Physical and Semantic Intersections

  In a sense, ever since we started naming places together, we’ve been living in a shared “augmented reality,” in which language, places, and objects are impossible to separate. Semantic and physical information are now so intertwined in human life that we hardly notice. Consider just a few ways that they work together:

  Identification

  We name things and people so that we can talk about them and remember what and who they are. Recall that labels give us the ability to move things around in our heads (or “on paper”), piling, separating, juxtaposing. Anything of shared human importance has a name.

  Clarification

  Sometimes, we can already tell what something is, but we still need more context. I can ascertain that an egg is an egg, but information on the carton informs me if it’s organic or cage-free as well as when the eggs will expire.

  Orientation

  A typical door has clear physical information for affordance, but without more information we don’t know why we would use that door rather than another. Especially in built environments where manufactured surfaces can all look nearly alike, there’s little or no differentiation (unlike in nature) to distinguish one layout from another. So, we need supplemental orientation to tell us “long gray hallway A goes to the cafeteria, and long gray hallway B goes to the garage.” Likewise, stairways clearly go upward, but their destination is often obscured. A wider view of our stairs example from Chapter 4 reveals that a step has the label “Poetry Room” painted on it (Figure 11-2). The label adds orienting information about where we will be after we climb the stairs—semantic scaffolding that signifies the context of the physical affordance.

  Figure 11-2. The wooden stairs in the famous City Lights Bookstore in San Francisco, this time showing a label[232]

  Instruction

  Even if we know what something is, we often need help knowing how to use it. In recent years, many public bathrooms have been outfitted with automated fixtures. The invariants that many of us grew up around in bathrooms have been scrambled, so we have to figure out each new bathroom anew. In one, the sink might be automated, whereas the toilet is not, and in the next it can be the opposite. Anything that doesn’t have clearly intrinsic affordances requires some kind of instruction, even if it comes from instructing ourselves through trial and error. Instructions are an example of how all these sorts of semantic-and-physical intersections can overlap and work together. Sometimes, instructions help identify, clarify, and orient us all at once.

  Digital Intersections

  As we are called upon to create more physically integrated user experiences, these intersections between language, objects, and places are increasingly critical for us to design carefully. In software, it can be even more challenging, because the objects and places we simulate with graphical interfaces can so easily break the expectations we’ve learned from the physical environment. Recall from Chapter 1 how hyperlinks introduced an unprecedented flexibility in connecting digital places but also introduced new opportunities for contextual confusion.

  On the City Lights Bookstore website in Figure 11-3, we see labels, but no physical information other than what is simulated with graphical elements (color blocks, lines, arrow-triangles, negative space) and layout (spatial relationship of elements to one another) telling us that the label “POETRY” is something we can touch, and that it will take us to what we expect to be another place, about poetry. The function of a hyperlink is learned through experience and established through convention, like language itself.

  Figure 11-3. The Poetry link on the City Lights Bookstore website

  If we walked up the stairs of the store, only to find that there was no “Poetry Room” but instead some other sort of room, or no room at all, we’d be disoriented. Similarly, tapping or clicking the hyperlink takes us to a place that we expect to fulfill the promise of the label. In a digital interface, however, so much of the information is semantic that the interface has to be designed with great care to reduce ambiguity, because the meanings of the labels and the subtle hints of visual layout are all we have to work with as guiding structure for users.

  In the built environment of cities, we’ve created such complex structures that we struggle to rely on the shapes of surfaces alone to give us contextual clues about where to go. So, the field of architectural wayfinding has expanded over the years to be almost exclusively about using semantic information to supplement the physical. The way icons and text help us get around in a city or building can make a huge difference in our lives. For example, in hospitals, research shows that good wayfinding promotes better healing, medical care, and even improved fiscal health of the organization.[233]

  We can look at any modern city intersection and see how much semantic information is required to supplement human life there. In the image from Taipei City shown in Figure 11-4, nearly every surface has semantic markings, from the advertising to the street signs, traffic signals, and even the arrows, crosswalk markings, and street boundary lines painted on the city’s streets.

  When we say city, we are talking about all of these modalities, all at once. In fact, it’s hard to say that language is merely scaffolding here, because in some instances the buildings are there to support the cultural activity of language-use to begin with. That is, the language environment came first, and the built environment emerged to support its growth and evolution. Language is more our home medium than steel and concrete. We’ve been speaking sentences longer than we’ve been building roads.

  To understand and improve these environments, we should know how to distinguish physical from semantic, but we should not forget that the denizens of such a city can’t be expected to parse them. They are intermingled in what information architect Marsha Haverty suggests is a “phase space”—just as water can undergo a phase transition from solid (ice) to liquid (water) to gas (steam), information can move across a similar spectrum.[234]

  Figure 11-4. Taipei City, Nanyang Street, in 2013[235]

  But, unlike water, which is categorically and empirically in one state or another, semantic information adds the contextual slipperiness of language to the distinction. No matter the perceiver’s umwelt (uniquely perceived environment), steam will have the properties of steam. But the word “tripe” in reference to some Chinese cuisines, where it is a staple protein, has a radically different meaning compared to a context in which “tripe” is an insult.

  Physical and Semantic Confusion

  Just because the modes intersect doesn’t mean it always works well. Sometimes, the semantic information we encounter is actually contradictory to the physical information at hand. In The Image of the City, author Kevin Lynch explains that, even when a city street keeps going in a physically continuous direction, if its name changes along the way, it is still experienced as multiple, fragmented places.[236]

  We tend to lean on language as a supplement to the otherwise confusing parts of our physical environment. Sometimes the language we add can be helpful, but often it goes unnoticed or only adds confusion.

  In a legendary example from his work, Don Norman explains some of the problems with doors, and why the semantic information we often add to them is a crutch we use to correct for poor physical design. This portion is from the revised 2013 edition of The Design of Everyday Things:

  How can such a simple thing as a door be so confusing? A door would seem to be about as simple a device as possible. There is not much you can do to a door: you can open it or shut it. Suppose you are in an office building, walking down a corridor. You come to a door. How does it open? Should you push or pull, on the left or the right? Maybe the door slides. If so, in which direction? I have seen doors that slide to the left, to the right, and even up into the ceiling. The design of the door should indicate how to work it without any need for signs, certainly without any need for trial and error.[237]

  The shape of door handles and how they indicate the proper operation of the door has been a touchstone of Norman’s influential ideas since the first edition of The Design of Everyday Things in the 1980s. The example is a great one for teaching designers that the affordances of a designed object should be intrinsically coherent as to how they should be used, especially basic objects such as hammers, kitchen sinks, and doors.

  Yet, there’s more to a door than we might assume. Physically, there is a doorway—the opening itself—which is intrinsically meaningful to our bodies. It is directly perceived as an opening in the middle of a solid wall surface, providing a medium through which we can walk. No signification—in the sense of something that means something else—is required.
