Understanding Context


by Andrew Hinton


  Of course, if our perception didn’t retain any information at all, we’d be poorly suited for survival. There is absolutely some form of retention and recall going on, and that can mean we have brain-centered experiences, thoughts, and recollections.

  Our friend J.J. Gibson allowed that we can have “internal loops more or less contained within the nervous system. There is no doubt but what the brain alone can generate is experience of a sort.”[95] The difference between this allowance and the mainstream conception of memory is that embodied cognition flips the model, making the body the center of how memory works. The internal experience of remembering is more like a byproduct of aggregated, residual perception. Keep adding up these residual perceptions, and eventually you have an internal life of a “mind” where prior perception and thought (what scientists call “off-line” experience) can be considered, inhabited, manipulated.[96] But it doesn’t begin in the mind—it begins in the body.

  Learning and Remembering versus Memory

  Memory is more verb than noun. It’s more useful to think of memory as a dynamic that emerges from many different cognitive systems, one that is always in process. We’re not accessing a memory so much as picking up perceptual experience and reconstructing what it means to us. Learning and memory are inseparable and are enmeshed with adaptive perception and action. As Eleanor Gibson succinctly put it, “We perceive to learn, as well as learn to perceive.”[97]

  In fact, there is no truly stable memory to be retrieved, because the act of remembering actually changes the content of what is remembered, in a process called memory reconsolidation.[98] Each time we recall a past experience, we alter it in some way, influenced by current circumstances; when we recall the experience again later, it’s now the version that we reconstructed, interpreted yet again. It’s not unusual to remember something that happened to you only to find it happened in a different way, or even to some other person whose story you’ve heard over the years.[99]

  Learning and Remembering Are Entangled with Environment

  Some memories of our past have more to do with photographs we’ve seen and stories we’ve heard from relatives than some original representation stored in a brain-cabinet.[100] We naturally interpret environmental cues as part of our actual memory, to the point that we can actually be fooled into thinking things happened to us that never did; for example, in one study, subjects were convinced they had ridden in a hot-air balloon because of manipulated photographs.[101] Our perception relies heavily on our current environment to inform what we think is true of past events.[102]

  The structure of our physical environment is used by our brains to off-load some of the work of retaining prior experience, even when the content of memory isn’t about our surroundings. One recent study tested subjects in both virtual and physical connected-room environments, and found “subjects forgot more after walking through a doorway compared to moving the same distance across a room, suggesting that the doorway or ‘event boundary’ impedes one’s ability to retrieve thoughts or decisions made in a different room.” Additionally, returning to the original room after passing through several other rooms didn’t improve memory of the original information.[103] Memory doesn’t just sit on a shelf ready to be accurately accessed again; it’s always in flux, intermingled with our surroundings. Borrowing terms Don Norman often uses, “knowledge in the world” has a strong effect on whatever exists “in the head”—even what we think of as head-based knowledge.[104]

  This makes sense, if we recall that our brains evolved to support our bodies, not the other way around. What else would memory have mainly evolved for other than recalling just enough about our surroundings to help us survive? Something like factual accuracy is an artificial idea we’ve invented in our culture. But organisms don’t separate fact from interpretation; they just retain what is needed to get by, without clear lines between invention, environment, and remembering.

  In digital interfaces, this principle is still at work. When using a search engine such as Google, the way the environment responds to our actions tacitly teaches us how to use that environment. When Google changes its results to reflect your search habits, learning from how you search for information, it’s also simultaneously teaching you how to search Google, providing auto-suggested queries and prioritized results that create a sort of environmental feedback loop.[105]
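
  To make that feedback loop concrete, here is a minimal Python sketch of the dynamic; it is an illustration only, not Google’s actual system, and every name and ranking rule in it is invented. Each query the user submits reweights the suggestions the tool offers back, and those suggestions in turn shape what the user types next.

    from collections import Counter

    class AutosuggestLoop:
        def __init__(self, corpus_queries):
            # Global popularity stands in for everyone else's behavior.
            self.global_counts = Counter(corpus_queries)
            # This user's own history, which the loop feeds back on.
            self.user_counts = Counter()

        def record(self, query):
            # The user acts on the environment...
            self.user_counts[query.lower()] += 1

        def suggest(self, prefix, limit=5):
            # ...and the environment responds, biased toward past actions.
            prefix = prefix.lower()
            candidates = {q for q in (*self.global_counts, *self.user_counts)
                          if q.startswith(prefix)}
            # Personal history outweighs global popularity, so the tool
            # keeps teaching the user what it has learned from the user.
            return sorted(candidates,
                          key=lambda q: self.user_counts[q] * 10
                          + self.global_counts[q],
                          reverse=True)[:limit]

    loop = AutosuggestLoop(["context design", "contact lenses", "context menu"])
    loop.record("context design")
    print(loop.suggest("cont"))  # "context design" now ranks first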

  Environment, and Explicit versus Implicit Memory

  Sometimes we consciously, purposefully work at remembering information and experiences. One might memorize a poem or tell oneself to remember to take out the trash tonight. One might also try hard to recall the name of a friend, or where one was on New Year’s Eve two years ago. This intentional act of consciously working to remember something is explicit memory. It can be something we worked to remember on purpose, or something we’ve simply retained without much effort but are trying to pull up from the foggy depths of our minds and reconstruct in an explicit way.

  Implicit (or in our model, “tacit”) memory is essentially the opposite: it’s the stuff we don’t have to think about intentionally. Recalling when a parent helped us learn how to ride a bike would be explicit memory. But implicit memory (specifically, procedural memory) would be how our bodies just know how to ride a bike from previous experience.

  What is important for context is that both of these sorts of memory depend on environmental interaction. Most of what we remember in our environment is learned tacitly, through repeated exposure to patterns of affordance, through action. The procedural “muscle memory” we employ when riding a bike exists only because we made our bodies ride bikes enough in the past that the ability to calibrate our body position was ingrained in us through repeated, physical activity.

  Other tacit learning can happen almost immediately if the experience causes a high spike in fear or some other emotional response. (This involves the brain’s amygdala flooding the nervous system with hormones that mark the experience with sense-impressions of what the environment was like during the trauma.)[106] But this is a highly unreliable memory resource when it comes to specific facts; for evolution’s purposes, it has to be only accurate enough to keep us from accidentally traipsing into another lion’s den. It did not evolve to verify whether another place has lions, or exactly what the lions looked like, or that the eucalyptus you smelled nearby during your early lion encounter isn’t actually as dangerous as the lions themselves. These effects are blunt instruments that can have negative consequences; for example, they can cause us to react inappropriately to safe situations, a dynamic we see manifest in post-traumatic stress disorders.

  Explicit learning can result in accurately remembering a great deal of information, but it’s a special case, and it always involves purposefully re-exposing ourselves to information until it “sticks,” or using some environmentally tied mnemonic technique.

  One example has to do with learning to type on a keyboard: how we have to explicitly think about where the keys are until we’ve done enough typing that we can do it by touch. A common argument goes that the knowledge of the keyboard has gone from our bodies into our heads. Saying “into our heads” might lead us to think there’s a sort of representational map of the keyboard in the typist’s brain, but it turns out that’s not the case. In a recent study, skilled touch typists averaging 72 words per minute were unable to map more than an average of about 15 keys when asked to do so outside of the act of typing. If asked to type something, they can hit the right keys just fine, but it’s their fingers that seem to “know” where to go. There’s no explicit, readily retrieved representation in brain-storage. The body satisficed; it went straight to an embodied facility that translates words into “fingers making letters appear” without going to the trouble of constructing a conceptual map.

  Likewise, when we get a little stuck trying to recall a phone number, we tend to do one of two things: we try to say it aloud to ourselves in sequence, as if recalling what it feels like in our mouths and ears, or we reach for a phone to type it out, because our bodies seem to know which buttons to press (and in what order) better than our brains can remember the symbols alone.

  Otherwise, we jot the number down someplace (on a napkin, a note, or the back of a hand). We use the environment to help us remember things all the time, even when we don’t realize it. Of course, we’re now a lot worse at remembering phone numbers because we seldom have to dial them with our fingers—we just tap a name in our phone’s contact list. As always, cognition satisfices.

  What Does All This Mean for Design?

  Regardless of the differences in one theoretical perspective or another, the overall lesson is clear: we can’t rely on an ability to invoke specific sorts of memory in users. We can’t assume they will accurately retain anything from prior experience, and we especially can’t expect them to explicitly memorize how to use a product. Even for the rare cases in which specialists are required to learn a complex system through repeated use, the system should do as much work as possible toward making its affordances clear without requiring memory. Perception satisfices, so it tacitly makes use of the environment around it directly as much as possible.

  In Don Norman’s conceptual model of “knowledge in the head” versus “knowledge in the world,” he explains that we should always try to “provide meaningful structures...make memory unnecessary: put the required information in the world.”[107] The environment is such a major player in how our brains function that “everything works just fine unless the environment changes so that the combined knowledge (between head and world) is no longer sufficient: this can lead to havoc.”[108] If you’ve ever visited a country in which they drive on the opposite side of the road, or you’ve moved the furniture around in your bedroom only to bruise yourself on a chair in the dark until you get used to the new arrangement, you know about this havoc firsthand.

  Of course, at a certain point of scale or complexity, it’s impossible to put all the knowledge in the world so that it can all be perceived at once. This is why we historically rely on extensive menus in software; users can uncover for themselves what actions are available without the screen being overwhelmed with buttons. It’s why an online retailer has to provide summary categories and search functions—you can’t see the entire inventory in one glance. And when software actions are happening beyond our perception, we simply don’t know about them unless the environment presents us with detectable information.
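
  As a rough illustration, the following Python sketch shows that menu pattern in miniature; the categories and actions are hypothetical, invented for the example. Only a handful of summary categories are perceivable at first, and the rest of the inventory surfaces in response to action rather than demanding memory.

    # Hypothetical storefront categories; nothing here is a real API.
    MENU = {
        "Account": ["Change password", "Update email", "Close account"],
        "Orders": ["Track shipment", "Return item", "View history"],
        "Support": ["Contact us", "FAQ"],
    }

    def show_top_level():
        # Only summary categories are "in the world" at first glance...
        print("Choose a category:")
        for name, actions in MENU.items():
            print(f"  {name} ({len(actions)} actions)")

    def show_category(name):
        # ...and detail appears only after the user acts, so nothing must
        # be memorized and no screen holds every button at once.
        for action in MENU.get(name, []):
            print(f"  - {action}")

    show_top_level()
    show_category("Orders")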

  This is why one of the most complex things to design in a device such as a smartphone is the notifications capability. In my iPhone’s current iOS version, there are at least four different ways I can set various apps to alert me of events happening beyond my immediate view. We’ve created a world for ourselves in which we can’t perceive much (or most) of what matters to us without these notification mechanisms.

  As Jakob Nielsen explains, “Learning is hard work, and users don’t want to do it. That’s why they learn as little as possible about your design and then stay at a low level of expertise for years. The learning curve flattens quickly and barely moves thereafter.”[109] With so much to learn, and such a low motivation and ability to learn it all, we have to rely more heavily on the conventions and implicit, structural affordances that users carry over from the physical world.

  In the physical world, most important changes in the environment have perceivable signs that we learn to interpret: storm clouds or cold winds mean bad weather approaching; blooming flowers and longer days mean a warm season is coming; and if my neighbors can see what I’m doing in my house, it should be obvious to me that a window is uncovered or a wall has gone missing.

  Software can disrupt these assumptions we’ve learned about how our environment works. When Beacon was launched, many users of Facebook had already become used to the structures of the platform as well as the structures implicit in how their browsers worked. If they were on a website in one browser window, it didn’t share places and objects with a different website in a separate browser window. The only constant was the browser itself, plus whatever plug-ins and things were part of its function. Beacon broke this environmental convention, disrupting expectations from past experience by creating a conduit that automatically gleaned information from another context and published it without explicit approval from the user.

  So, what makes an environment easier to learn often has to do with whether its affording structures meet the expectations of its inhabitants, or whether they do a good enough job of signaling disruptions of convention and teaching new expectations. Next, we’ll look at the building blocks of environments and how we perceive them, which will give us some ideas about how to create understandable environments with language and software.

  The PORT Elevator System

  At a conference I attended in 2012, the other attendees and I encountered a new elevator system that the conference hotel had installed only a few months earlier.[110] Instead of calling for service by using a conventional set of Up and Down buttons, the PORT elevator system requires a guest to use a digital touch-screen to select a destination floor, as shown in Figure 5-3. The screen then displays which elevator the guest should use to get to that floor, requiring the guest to find that elevator and wait for it to arrive. Upon entering the elevator, the guest will find there are no floor-selection buttons inside. The elevator already knows the floors at which it should stop.

  Technically, this is a brilliantly engineered system that corrects the inefficiencies of conventional elevator usage by calculating the logistics of which elevator will get each guest to his destination most quickly.
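
  To give a rough sense of that engineering, here is a toy Python sketch of destination dispatch; it is an assumption-laden stand-in, not Schindler’s actual PORT algorithm. The essential inversion is that the system, rather than the guest, chooses the car, and the car already knows its stops.

    from dataclasses import dataclass, field

    @dataclass
    class Car:
        name: str
        floor: int                               # current position
        stops: set = field(default_factory=set)  # floors already assigned

        def cost(self, origin):
            # Crude estimate: distance to the guest plus detours queued.
            return abs(self.floor - origin) + len(self.stops)

    def assign(cars, origin, destination):
        # Pick the car that can serve this trip most cheaply...
        best = min(cars, key=lambda car: car.cost(origin))
        # ...and register both stops. This is why there is no button
        # panel inside: the car already knows where it will stop.
        best.stops.update({origin, destination})
        return best.name  # what the lobby touch-screen displays

    cars = [Car("A", floor=1), Car("B", floor=9), Car("C", floor=4)]
    print(assign(cars, origin=3, destination=12))  # prints "C" here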

  However, when attendees (including myself) encountered this system, there was widespread confusion and annoyance. Why?

  People grow up learning how to use elevators in a particular way. You push a button to go up or down, watch for the first elevator that’s going in your direction to open its doors, get in, and then select your floor. These are rehearsed, bodily patterns of use that become ingrained in our behavior. That is, we off-load the “thinking” about elevator usage to bodily, passively enacted habit. Unfortunately, these ingrained behaviors severely break the intended scenario for using the PORT elevators.

  Figure 5-3. Part of an instruction booklet from the Schindler elevator company, explaining how to use its new PORT elevator system

  The touch-screen design assumes the guests will keep watching the screen to see which elevator they should use. But people are used to looking away immediately after pressing the up or down button, so they tend to look away in this case too—meaning they might never see which elevator they are assigned.

  People habitually step into whichever elevator opens first. In using the PORT system, however, chances are that the elevator that opens first or closest to you is actually not the elevator for your selected destination.

  After entering the elevator, guests realize there’s no button panel and they have no control over floor choice. Even for people who follow the directions, discovering a lack of a button panel can be a surreal, upsetting surprise.

  Throughout the event, we noticed hotel staff hovering around the elevators to explain them to guests—essentially acting as real-time translators between the unfamiliar system and people’s learned expectations.

  The PORT system is an apt example of how an excellent engineering solution can go very wrong when not taking into account how people really behave in an environment. Remember the perception-action loop: just as people behave in any environment, they will tend to act first and think later. Requiring them to think before acting in this context is a recipe for confusion.

  This is another example of how environment controls action. It doesn’t mean that this new system is a failure; it just tricked its users by presenting affording information that they were used to perceiving and acting upon without thought, and then pulled the rug out from under those assumptions. Once people learn it as a new convention, it will result in more efficient and pleasant elevator experiences for everyone. There just needs to be an improved set of environmental structures to help “nudge” people toward stopping and thinking explicitly as they learn the new system, before using it improperly.[111]
