Understanding Context


by Andrew Hinton


  Semantic

  This is information people create for the purpose of communicating meaning to other people. I’ll often refer to this as “language.” For our discussion, this mode includes all sorts of communication such as gestures, signs, graphics, and of course speech and writing. It’s more fluid than physical information and harder to pin down, but it still creates environmental structure for us. It overlaps the Physical mode because much of the human environment depends on complementary qualities of both of these modes, such as the signage and maps positioned in physical locations and written on physical surfaces in an airport.

  Digital

  This is the “information technology” sort of information by which computers operate, and communicate with other computers. Even though humans created it (or created the computers that also create it), it’s not natively readable by people. That’s because it works by stripping out people-centric context so that machines can talk among one another with low error rates, as quickly and efficiently as possible. It overlaps the Semantic mode, because it’s abstract and made of encoded semantic information. But even though it isn’t literally physical, it does exist in physical infrastructure, and it does affect our physical environment more and more every day.

  I should mention: like many other models I’ll share, this one isn’t meant to be taken as mathematically or logically exact. Simple models can sometimes work best when they are clear enough to point us in the right direction but skip the complexities of precision. So, for example, the overlapping parts of the modes are there to evoke how they are seldom mutually exclusive, and actually influence one another.

  Starting from the Bottom

  I began with the Physical mode for a reason. Context is about whole environments; otherwise, we are considering parts of an environment out of context. And when we take an environmental view, we have to begin from first principles about how people understand the environment, whether there are digital networks or gadgets in it or not. To that end, Figure 3-7 presents another informal model, showing the layers involved in the discussion moving forward.

  Figure 3-7. Pace layers of information

  These are based on a concept known as pace layers—where the lower-level layer changes more slowly over time than the next, and so on. I’ve adapted the approach so that it also implies how one layer builds on another.[17]

  Perception and cognition change very slowly for all organisms, including humans, and these abilities had to evolve long ago in order to have a functioning animal to begin with. Perception here means the core faculties of how a body responds to the environment. This is the sort of perception lizards use to climb surfaces and eat bugs or that humans use to walk around or duck a stray football.

  Spoken language is next; as we will see, it has been around for a very long time for our species, long enough to at times be hard to separate from the older perception and cognition capabilities of our bodies and brains (as mentioned earlier, I’m lumping gestures in with speech, for simplicity’s sake). Even though particular languages can change a lot over centuries, the essential characteristics of spoken language change much more slowly.

  Written/graphical language is the way we use physical objects—up until very recently, the surfaces of those objects—to encode verbal language for communicating beyond the present moment. Although spoken language is more of an emergent property of our species, writing is more of a technology, albeit an ancient one. Writing is also a way of encoding information, which makes it a precursor to digital code.

  Information organization and design arose as identifiable areas of expertise and effort because we had so much stuff written down, and because writing and drawing enabled greater complexity than is possible in mere speech. The ability to freeze speech on a surface and relate it to other frozen speech on the same surface opened up the ability to have maps, diagrams, broadsides, folios, all of which required organization and layout. Our methods for organizing and designing written information have also been precursors to how we’ve designed and organized digital information for computing.

  Last, there’s information technology, which is quite recent, and (as I’m defining it here) depended on the invention of digital software. We’ve seen this mode change rapidly in our own lifetimes, and it’s the layer that has most disrupted our experience of the other two modes, in the shortest time. It didn’t happen on its own, however; the ideas behind it originated in writing, linguistic theory, and other influences from further down the model.

  If we place the three modes of information on top of these layers as demonstrated in Figure 3-8, it gives a rough idea of how these models relate to each other.

  Figure 3-8. Modes and layers combined

  In reality, the boundaries are much more diffuse and intermingled, but the main idea is hopefully clear: the ways in which we use and perceive information have evolved over time; some aspects are foundational and more stable, whereas other aspects are more variable and quick to change.

  In my experience, most technological design work begins with information technology first and only later addresses information organization, design, and the other communicative challenges lower down. Yet, starting with technology takes a lot for granted. It assumes X means X, and Y means Y; or that here is here, and there is there. What happens when we can no longer trust those assumptions? The best way to untangle the many knotted strands that create and shape context is to understand how the world makes sense to us in the first place—with bodies, surfaces, and objects—and build the rest of our understanding from that foundation.

  * * *

  [10] Wikimedia Commons: http://bit.ly/1uDL7m6

  [11] Using the term agent gives us the ability to include nonpersons, such as software or other systems that try to determine context. It’s also the term used most often in the scholarly literature for this element.

  [12] McCullough, Malcolm. Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing. Cambridge, MA: MIT Press, 2004: 48, Kindle edition.

  [13] Dourish, Paul. “What We Talk About When We Talk About Context.” Personal and Ubiquitous Computing. London: Springer-Verlag, February 2004; 8(1):19–30.

  [14] Based on a search for “information” in books from 1800 to 2000, using Google’s Ngram Viewer (https://books.google.com/ngrams/).

  [15] Wurman, Richard Saul. Information Anxiety. New York: Doubleday, 1989: 38.

  [16] I especially recommend Bates, M. “Fundamental Forms of Information.” Journal of the American Society for Information Science and Technology 2006; 57(8):1033–45, and ongoing work on a taxonomy of information by Sabrina Golonka, (http://bit.ly/1ySrrik and http://bit.ly/1CM2ti6).

  [17] Borrowed and adapted from the work of Stewart Brand, particularly in How Buildings Learn, who adapted his approach from a concept called shearing layers created by architect Frank Duffy.

  Part II. Physical Information

  The Roots of Context

  THE PRODUCTS AND SERVICES WE DESIGN ARE PART OF A GREATER ENVIRONMENT, but they have the capacity to change that environment as well as the behaviors of people who use them. Smartphones influence user behavior in a different way than older cell phones, which in turn changed behavior from when only phone booths and land lines were available. Obviously, right? But, did you know that the separation between the environment, the object, and the user is mostly artificial and that they’re all part of one dynamic system? Have you also noticed that when we use software, our perception seems to expect that environment to behave according to the same laws we rely on in the physical world?

  The mechanics of physical life shape the way we understand abstractions such as language, social systems, and software. That’s why we’re going to spend some significant quality time together looking at what I’m calling Physical Information, the mode at the bottom of the diagram shown in Figure II-1.

  Figure II-1. Physical Information

  Part II introduces some essential theories about perception and action. It explores what affordance really is, with special attention to how it was originally conceived by its creator, James J. Gibson. It also covers how the environment influences behavior and how memory and learning work, and it offers models for breaking down the elements of any environment. Finally, this part shows how physical-information principles translate into more complex parts of our world, such as social culture and organizations.

  Chapter 4. Perception, Cognition, and Affordance

  In the Universe, there are things that are known, and things that are unknown, and in between there are doors.

  —WILLIAM BLAKE

  Information of a Different Sort

  IF WE ARE TO KNOW HOW USERS UNDERSTAND THE CONTEXT OF OBJECTS, people, and places, we need to stipulate what we mean by understand in the first place. The way people understand things is through cognition, which is the process by which we acquire knowledge and understanding through thought, experience, and our senses. Cognition isn’t an abstraction. It’s bound up in the very structures of our bodies and physical surroundings.

  When a spider quickly and gracefully traverses the intricacies of a web, or a bird like the green bee-eater on this book’s cover catches an insect in flight, these creatures are relying on their bodies to form a kind of coupling with their environments—a natural, intuitive dance wherein environment and creature work together as a system. These wonderfully evolved, coupled systems result in complex, advanced behavior, yet with no large brains in sight.

  It turns out that we humans, who evolved on the same planet among the same essential structures as spiders and birds, also rely on this kind of body-to-environment coupling. Our most basic actions—the sort we hardly notice we do—work because our bodies are able to perceive and act among the structures of our environment with little or no thought required.

  When I see users tapping and clicking pages or screens to learn how a product works, ignoring and dismissing pop-ups with important alerts because they want to get at the information underneath, or keeping their smartphones with them from room to room in their homes, I wonder why these behaviors occur. Often they don’t seem very logical, or at least they show a tendency to act first and think about the logic of the action later. Even though these interfaces and gadgets aren’t natural objects and surfaces, users try using them as if they were.

  This theory about the body-environment relationship originates in a field called ecological psychology, which posits that creatures directly perceive and act in the world by their bodies’ ability to detect information about the structures in the environment. This information is what I will be calling physical information—a mode of information that is at work when bodies and environments do this coupled, dynamic dance of action and perception.

  Ecological psychology is sometimes referred to as Gibsonian psychology because the theory started with a scientist named James J. Gibson, whose theory of information uses neither the colloquial meaning of information nor the definition we get from information science.[18] Gibson explains his usage in a key passage of his landmark work, The Ecological Approach to Visual Perception:

  Information, as the term is used in this book (but not in other books), refers to specification of the observer’s environment, not to specification of the observer’s receptors or sense organs....[For discussing perception, the term] information cannot have its familiar dictionary meaning of knowledge communicated to a receiver. This is unfortunate, and I would use another term if I could. The only recourse is to ask the reader to remember that picking up information is not to be thought of as a case of communicating. The world does not speak to the observer. Animals and humans communicate with cries, gestures, speech, pictures, writing and television, but we cannot hope to understand perception in terms of these channels; it is quite the other way around. Words and pictures convey information, carry it, or transmit it, but the information in the sea of energy around each of us, luminous or mechanical or chemical energy, is not conveyed. It is simply there. The assumption that information can be transmitted and the assumption that it can be stored are appropriate for the theory of communication, not for the theory of perception.[19]

  Gibson often found himself having to appropriate or invent terms in order to have language he could use to express ideas that the contemporaneous language didn’t accommodate.[20] He’s asking readers to set aside their existing meaning of information and look at it in a different way when trying to understand how perception works. For him, “To perceive is to be aware of the surfaces of the environment and of oneself in it.”[21] In other words, perception is about the agent figuring out the elements of its surroundings and understanding how the agent itself is one of those elements. And information is what organisms perceive in the environment that informs the possibilities for action.

  Even this usage of “perception” is more specific than we might be used to: it’s about core perceptual faculties, not statements such as “my perception of the painting is that it is pretty” or “the audience perceives her to be very talented.” Those are cultural, social layers that we might refer to as perception, but not the sort of perception we will mainly be discussing in Part II.

  Even though we humans might now be using advanced technology with voice recognition and backlit touch-screen displays, we still depend on the same bodies and brains our ancestors used thousands of years ago to act in their environments, no matter how digitally enhanced ours has become. Just as with the field and the stone wall presented in Chapter 3, even without language or digital technology, the world is full of structures that inform bodies about what actions those structures afford.

  I’ll be drawing from Gibson’s work substantially, especially in this part of the book, because I find that it provides an invaluable starting point for rethinking (and more deeply grasping) how users perceive and understand their environments. Gibson’s ideas have also found a more recent home as a significant influence in a theoretical perspective called embodied cognition.

  James J. Gibson

  James J. (“JJ”) Gibson (1904–1979) was an American experimental psychologist, author, and theorist. He and his wife, Eleanor J. Gibson (1910–2002), a major scientific figure in her own right, developed an extensive theoretical body of work on what they called ecological perception and learning.

  Figure 4-1. James J. Gibson and Eleanor Gibson[22]

  James Gibson developed his theories partly during research funded by the United States Air Force around the time of World War II while studying how pilots orient themselves during flight.[23] Gibson realized that his insight would mean overturning more than a century of established scientific research to get to the bottom of the problem, and insisted that a “fresh start” was required.[24] What resulted was decades of work dedicated to changing the way science understood perception.

  Gibson particularly subscribed to the perspective of American Pragmatism, and the radical empiricism developed by William James.[25] As a radical empiricist himself, Gibson insisted on understanding perception based on the facts of the natural world, versus cultural assumptions or artificially contrived experiments.

  Eleanor Gibson made major contributions to the science of childhood cognitive development as well as how people in general learn new knowledge. Her work has been foundational to later social science and psychology work on education and communities of practice. She was awarded the National Medal of Science in 1992. A famous experiment she created was the Visual Cliff, in which infants were placed on a wooden table whose surface was extended by a long portion of plate glass. She discovered that infants reacted to the perceived drop-off with caution or anxiety, but she also observed how they would adapt to their perceptions by patting the glass and learning about their environment through action.[26]

  Most of the references to “Gibsonian psychology” in this book are specifically to James Gibson’s work, but it’s important to remember that this amazing couple jointly established some of the most important insights in psychological science in the twentieth century.

  A Mainstream View of Cognition

  Since roughly the mid-twentieth century, conventional cognitive science has held that cognition is primarily (or exclusively) a brain function, and that the body is mainly an input-output mechanism that does not constitute a significant part of cognitive work. I’ll be referring to this perspective as mainstream or disembodied cognition, though the literature also calls it by other names, such as representationalism or cognitivism.

  According to this view, cognition works something like the diagram in Figure 4-2.

  Figure 4-2. The mainstream model for cognition[27]

  The process happens through inputs and outputs, with the brain as the “central processing unit”:

  The brain gathers data about the world through the body’s senses.

 
