AS PARENTS INVEST in the latest academic software and teachers consider how to weave the Internet into lesson plans for the new school year, it is a good moment to reflect upon the changing world in which youths are being educated. In a word, it is digital, with computer notebooks displacing spiral notebooks, and Web-based blogs, articles, and e-mails shaping how we read and communicate. Parents, teachers, and scholars are beginning to question how our immersion in this increasingly digital world will shape the next generation’s relationship to reading, learning, and to knowledge itself.
As a cognitive neuroscientist and scholar of reading, I am particularly concerned with the plight of the reading brain as it encounters this technologically rich society. Literacy is so deeply entwined in our lives that we often fail to realize that the act of reading is a miracle that is evolving under our fingertips. Over the last five thousand years, the acquisition of reading transformed the neural circuitry of the brain and the intellectual development of the species. Yet the reading brain is slowly becoming endangered—an unforeseen consequence of the transition to a digital epoch that is affecting every aspect of our lives, including the intellectual development of each new reader. Three unexpected sources can help us negotiate the historical transition we face as we move from one prevailing mode of communication to another: Socrates, modern cognitive neuroscience, and Proust.
Similarly poised between two modes of communication, one oral and one written, Socrates argued against the acquisition of literacy. His arguments are as prescient today as they were futile then. At the core of Socrates’ arguments lay his concerns for the young. He believed that the seeming permanence of the written word would delude them into thinking they had accessed the heart of knowledge, rather than simply decoded it. To Socrates, only the arduous process of probing, analyzing, and ultimately internalizing knowledge would enable the young to develop a lifelong approach to thinking that would lead them ultimately to wisdom, virtue, and “friendship with [their] god.” To Socrates, only the examined word and the “examined life” were worth pursuing, and literacy short-circuited both.
How many children today are becoming Socrates’ nightmare, decoders of information who have neither the time nor the motivation to think beneath or beyond their Googled universes? Will they become so accustomed to immediate access to escalating on-screen information that they will fail to probe beyond the information given to the deeper layers of insight, imagination, and knowledge that have led us to this stage of human thought? Or, will the new demands of information technologies to multitask, integrate, and prioritize vast amounts of information help to develop equally, if not more, valuable skills that will increase human intellectual capacities, quality of life, and collective wisdom as a species?
There is surprisingly little research that directly confronts these questions, but knowledge from the neurosciences about how the brain learns to read and how it learns to think about what it reads can aid our efforts. We know, for example, that no human being was born to read. We can do so only because of our brain’s protean capacity to rearrange itself to learn something new. Using neuroimaging to scan the brains of novice readers allows us to observe how a new neural circuitry is fashioned from some of its original structures. In the process, that brain is transformed in ways we are only now beginning to fully appreciate. More specifically, in the expert reading brain, the first milliseconds of decoding have become virtually automatic within that circuit. It is this automaticity that allows us the precious milliseconds we need to go beyond the decoded text to think new thoughts of our own—the heart of the reading process.
Perhaps no one was more eloquent about the true purpose of reading than French novelist Marcel Proust, who wrote: “that which is the end of their [the authors’] wisdom is but the beginning of ours.” The act of going beyond the text to think new thoughts is a developmental, learnable approach toward knowledge.
Within this context, there should be a developmental perspective on our transition to a digital culture. Our already biliterate children, who move nimbly between print and screen, need to develop an expert reading brain before they become totally immersed in the digital world. Neuroscience shows us the profound miracle of an expert reading brain that uses untold areas across all four lobes and both hemispheres to comprehend sophisticated text and to think new thoughts that go beyond the text.
Children need to have both time to think and the motivation to think for themselves, to develop an expert reading brain, before the digital mode dominates their reading. The immediacy and volume of information should not be confused with true knowledge. As technological visionary Edward Tenner cautioned, “It would be a shame if the very intellect that produced the digital revolution could be destroyed by it.” Socrates, Proust, and the images of the expert reading brain help us to think more deliberately about the choices we possess as our next generation moves toward the next great epoch in our intellectual development.
< James Gee >
learning theory, video games, and popular culture
Originally published in Kirsten Drotner and Sonia Livingstone, eds., The International Handbook of Children, Media, and Culture (2008), pp. 200–203.
JAMES PAUL GEE is the Mary Lou Fulton Presidential Professor of Literacy Studies in the Department of English at Arizona State University. His books include Social Linguistics and Literacies (Fourth Edition, 2011) and What Video Games Have to Teach Us About Learning and Literacy (Second Edition, 2007). More information can be found at www.jamespaulgee.com.
>>> action-and-goal-directed preparations for, and simulations of, embodied experience
VIDEO GAMES don’t just carry the potential to replicate a sophisticated scientific way of thinking. They actually externalize the way in which the human mind works and thinks in a better fashion than any other technology we have.
Throughout history, scholars have tended to view the human mind through the lens of a technology they thought worked like the mind. Locke and Hume, for example, argued that the mind was like a blank slate on which experience wrote ideas, taking the technology of literacy as their guide. Much later, modern cognitive scientists argued that the mind worked like a digital computer, calculating generalizations and deductions via a logic-like rule system (Newell and Simon, 1972). More recently, some cognitive scientists, inspired by distributed parallel-processing computers and complex adaptive networks, have argued that the mind works by storing records of actual experiences and constructing intricate patterns of connections among them (Clark, 1989; Gee, 1992). So we get different pictures of the mind: mind as a slate waiting to be written on, mind as software, mind as a network of connections.
Human societies get better through history at building technologies that more closely capture some of what the human mind can do and getting these technologies to do mental work publicly. Writing, digital computers, and networks each allow us to externalize some functions of the mind. Though they are not commonly thought of in these terms, video games are a new technology in this same line. They are a new tool with which to think about the mind and through which we can externalize some of its functions. Video games of the sort I am concerned with are what I would call “action-and-goal-directed preparations for, and simulations of, embodied experience.” A mouthful, indeed, but an important one, and one connected intimately to the nature of human thinking; so, let us see what it means.
Let me first briefly summarize some recent research in cognitive science, the science that studies how the mind works (Bransford et al., 2000). Consider, for instance, the following remarks (in the quotes, the word “comprehension” means “understanding words, actions, events, or things”):

. . . comprehension is grounded in perceptual simulations that prepare agents for situated action. (Barsalou, 1999a: 77)

. . . to a particular person, the meaning of an object, event, or sentence is what that person can do with the object, event, or sentence. (Glenberg, 1997: 3)
What these remarks mean is this: human understanding is not primarily a matter of storing general concepts in the head or applying abstract rules to experience. Rather, humans think and understand best when they can imagine (simulate) an experience in such a way that the simulation prepares them for actions they need and want to take in order to accomplish their goals (Clark, 1997; Barsalou, 1999b; Glenberg and Robertson, 1999).
Let us take weddings as an example, though we could just as well have taken war, love, inertia, democracy, or anything. You don’t understand the word or the idea of weddings by meditating on some general definition of weddings. Rather, you have had experiences of weddings, in real life and through texts and media. On the basis of these experiences, you can simulate different wedding scenarios in your mind. You construct these simulations differently for different occasions, based on what actions you need to take to accomplish specific goals in specific situations. You can move around as a character in the mental simulation as yourself, imagining your role in the wedding, or you can “play” other characters at the wedding (e.g., the minister), imagining what it is like to be that person.
You build your simulations to understand and make sense of things, but also to help you prepare for action in the world. You can act in the simulation and test out what consequences follow, before you act in the real world. You can role-play another person in the simulation and try to see what motivates their actions or might follow from them before you respond in the real world. So I am arguing that the mind is a simulator, but one that purposefully builds simulations to prepare for specific actions and to achieve specific goals (i.e., they are built around win states).
Video games turn out to be the perfect metaphor for what this view of the mind amounts to, just as slates and computers were good metaphors for earlier views of the mind. Video games usually involve a visual and auditory world in which the player manipulates a virtual character (or characters). They often come with editors or other sorts of software with which the player can make changes to the game world or even build a new game world (much as the mind can edit its previous experiences to form simulations of things not directly experienced). The player can make a new landscape, a new set of buildings, or new characters. The player can set up the world so that certain sorts of action are allowed or disallowed. The player is building a new world, but is doing so by using and modifying the original visual images (really the code for them) that came with the game. One simple example of this is the way in which players can build new skateboard parks in a game like Tony Hawk’s Pro Skater. The player must place ramps, trees, grass, poles, and other things in space in such a way that players can manipulate their virtual characters to skate the park in a fun and challenging way.
Even when players are not modifying games, they play them with goals in mind, the achievement of which counts as their “win state.” Players must carefully consider the design of the world and consider how it will or will not facilitate specific actions they want to take to accomplish their goals. One technical way that psychologists have talked about this sort of situation is through the notion of “affordances” (Gibson, 1979). An affordance is a feature of the world (real or virtual) that will allow for a certain action to be taken, but only if it is matched by an ability in an actor who has the wherewithal to carry out such an action. For example, in the massively multiplayer game World of Warcraft stags can be killed and skinned (for making leather), but only by characters who have learned the skinning skill. So a stag is an affordance for skinning for such a player, but not for one who has no such skill. The large spiders in the game are not an affordance for skinning for any player, since they cannot be skinned at all. Affordances are relationships between the world and actors.
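The point that an affordance is a relationship, rather than a property of the world or of the actor alone, can be made concrete in a few lines of code. The following Python sketch is purely illustrative; the class names, the skinning rule, and the data model are assumptions invented for the example, not anything drawn from World of Warcraft’s actual implementation:

    from dataclasses import dataclass, field

    @dataclass
    class Actor:
        name: str
        skills: set = field(default_factory=set)

    @dataclass
    class Feature:
        name: str
        # actions this feature of the world permits, each gated by a required skill
        action_requirements: dict = field(default_factory=dict)

    def affords(feature: Feature, actor: Actor, action: str) -> bool:
        """An affordance exists only when the world permits the action
        AND the actor has the matching ability (hypothetical model)."""
        required = feature.action_requirements.get(action)
        return required is not None and required in actor.skills

    stag = Feature("stag", {"skin": "skinning"})
    spider = Feature("spider")              # cannot be skinned at all
    hunter = Actor("hunter", {"skinning"})
    mage = Actor("mage")

    print(affords(stag, hunter, "skin"))    # True: world and ability match
    print(affords(stag, mage, "skin"))      # False: no skinning skill
    print(affords(spider, hunter, "skin"))  # False: the world disallows it

Note that the affords check lives in neither class; it tests the match between them, which is exactly the relational character of affordances described above.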
Playing World of Warcraft, or any other video game, is all about such affordances. The player must learn to see the game world—designed by the developers, but set in motion by the players, and, thus, co-designed by them—in terms of such affordances (Gee, 2005). Broadly speaking, players must think in terms of: “What are the features of this world that can enable the actions I am capable of carrying out and that I want to carry out in order to achieve my goals?”
The view of the mind I have sketched argues, as far as I am concerned, that the mind works rather like a video game. For humans, effective thinking is more like running a simulation in our heads within which we have a surrogate actor than it is about forming abstract generalizations cut off from experiential realities. Effective thinking is about perceiving the world such that the human actor sees how the world, at a specific time and place (as it is given, but also modifiable), can afford the opportunity for actions that will lead to a successful accomplishment of the actor’s goals. Generalizations are formed, when they are, bottom up from experience and imagination of experience. Video games externalize the search for affordances, for a match between character (actor) and world, but this is just the heart and soul of effective human thinking and learning in any situation. They are, thus, a natural tool for teaching and learning.
As a game player you learn to see the world of each different game you play in a quite different way. But in each case you see the world in terms of how it will afford the sorts of embodied actions you (and your virtual character, your surrogate body in the game) need to take to accomplish your goals (to win in the short and long run). For example, you see the world in Full Spectrum Warrior as routes (for your squad) between cover (e.g., corner to corner, house to house), because this prepares you for the actions you need to take, namely attacking without being vulnerable to attack yourself. You see the world of Thief: Deadly Shadows in terms of light and dark, illumination and shadows, because this prepares you for the different actions you need to take in this world, namely hiding, disappearing into the shadows, sneaking, and otherwise moving unseen to your goal.
While commercial video games often stress a match between worlds and characters like soldiers or thieves, there is no reason why other types of game could not let players experience such a match between the world and the way a particular type of scientist, for instance, sees and acts on the world (Gee, 2004). Such games would involve facing the sorts of problems and challenges that type of scientist does, and living and playing by the rules that type of scientist uses. Winning would mean just what it does to a scientist: feeling a sense of accomplishment through the production of knowledge to solve deep problems.
I have argued for the importance of video games as “action-and-goal-directed preparations for, and simulations of, embodied experience.” They are the new technological arena—just as were literacy and computers earlier—around which we can study the mind and externalize some of its most important features to improve human thinking and learning....
< Jakob Nielsen >
usability of websites for teenagers
Originally published in Jakob Nielsen’s Alertbox (January 31, 2005).
JAKOB NIELSEN, PH.D., is a principal of Nielsen Norman Group (www.nngroup.com). Noted as “the world’s leading expert on Web usability” by U.S. News & World Report and “the next best thing to a true time machine” by USA Today, he is the author of Designing Web Usability: The Practice of Simplicity (1999) and Eyetracking Web Usability (2009). From 1994 to 1998, Nielsen was a Sun Microsystems Distinguished Engineer. He holds 79 U.S. patents, mainly on ways of making the Internet easier to use. His website is www.useit.com.
IT’S ALMOST A CLICHÉ to say that teenagers live a wired lifestyle, but they do. Teens in our study reported using the Internet for:
• School assignments
• Hobbies or other special interests
• Entertainment (including music and games)
• News
• Learning about health issues that they’re too embarrassed to talk about
• E-commerce
And, even when they don’t make actual purchases online, teens use websites to do product research and to build gift wish lists for the credit-card-carrying adults in their lives.
>>> user research
We conducted a series of usability studies to determine how website designs can better cater to teenagers. We systematically tested twenty-three websites, asking teenagers to visit the sites, perform given tasks, and think out loud. We also asked test participants to perform Web-wide tasks using any website they wanted. This gave us data about a wider range of sites, along with insight into how teens decide which sites to use. Finally, we interviewed the participants about how and when they use the Web and asked them to show us their favorite sites.
In all, thirty-eight users between the ages of thirteen and seventeen participated in the tests. Most sessions were conducted in the U.S.; we also ran a few tests in Australia to assess the international applicability of the findings. We found no major differences here: factors that make websites easy or difficult for teens to use were the same in both countries, as were the design characteristics that appealed to teens.
The only big difference between the two nations confirmed a stereotype about Australians: they are nuts about sports. When asked to show us their favorite sites, almost every Australian teen nominated a team site from the Australian Football League. An Australian teen also praised Google for offering a feature to search only Australian sites. Localizing websites and offering country-specific content and services is good advice that applies across age groups.
Within the U.S., we conducted studies in rural Colorado and in three California locations ranging from affluent suburbs to disadvantaged urban areas. We tested a roughly equivalent number of boys and girls.
>>> focus on web usability
Teenagers are heavy users of a broad range of technology products, including music download services and MP3 players, chat and instant messaging, e-mail, mobile phones and SMS texting, online diary services, and much more. Nonetheless, we focused our research on teens’ use of websites for two reasons:
• There are many existing reports about how teens use computer-mediated communication, mobile devices, and other non-Web technologies. Such studies are not always conducted using proper usability methodology, and they tend to rely too much on surveys of self-reported behavior rather than direct observation of actual behavior. Still, this area has been well covered by other researchers.