
The Digital Divide: Writings for and Against Facebook, YouTube, Texting, and the Age of Social Networking


by Mark Bauerlein


  After initial difficulty finding people who had not yet used computers, we were able to recruit three volunteers in their mid-fifties and sixties who were new to computer technology yet willing to give it a try. To compare the brain activity of these three computer-naive volunteers, we also recruited three computer-savvy volunteers of comparable age, gender, and socioeconomic background. For our experimental activity, we chose searching on Google for specific and accurate information on a variety of topics, ranging from the health benefits of eating chocolate to planning a trip to the Galápagos.

  Next, we had to figure out a way to do MRI scanning on the volunteers while they used the Internet. Because the study subjects had to be inside the long, narrow tube of an MRI scanner during the experiment, there would be no space for a computer, keyboard, or mouse. To re-create the Google-search experience inside the scanner, the volunteers wore a pair of special goggles that presented images of website pages designed to simulate the conditions of a typical Internet search session. The system allowed the volunteers to navigate the simulated computer screen and make choices to advance their search simply by pressing one finger on a conveniently placed keypad.

  To make sure that the functional MRI scanner was measuring the neural circuitry that controls Internet searches, we needed to factor out other sources of brain stimulation. To do this, we added a control task that involved the study subjects reading pages of a book projected through the specialized goggles during the MRI. This task allowed us to subtract from the MRI measurements any nonspecific brain activations arising from simply reading text, focusing on a visual image, or concentrating. We wanted to observe and measure only the brain’s activity from those mental tasks required for Internet searching, such as scanning for targeted key words, rapidly choosing from among several alternatives, going back to a previous page if a particular search choice was not helpful, and so forth. We alternated this control task—simply reading a simulated page of text—with the Internet searching task. We also controlled for nonspecific brain stimulations caused by the photos and drawings that are typically displayed on an Internet page.
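  In computational terms, this control-task design is a subtraction, or contrast, analysis: activation measured during reading is subtracted, voxel by voxel, from activation measured during searching, so that whatever the two conditions share cancels out. The short Python sketch below illustrates only the idea; the array names, shapes, random stand-in data, and threshold are assumptions for demonstration, not the study's actual analysis pipeline.

# A minimal sketch of the subtraction logic, assuming NumPy and made-up
# array shapes; illustrative only, not the study's actual fMRI pipeline.
import numpy as np

# Hypothetical per-voxel activation maps for one subject, averaged over
# task blocks: shape (x, y, z).
search_map = np.random.rand(64, 64, 30)   # stand-in for the search condition
reading_map = np.random.rand(64, 64, 30)  # stand-in for the reading control

# Subtracting the control map cancels activity common to both conditions
# (reading text, focusing on an image, concentrating), leaving a contrast
# map in which positive values mark voxels more active during searching.
contrast = search_map - reading_map

# Keep only voxels whose difference clears an arbitrary, illustrative
# threshold, mimicking the isolation of search-specific activation.
threshold = 0.5
search_specific = contrast > threshold
print(f"{search_specific.sum()} voxels exceed the illustrative threshold")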

  Finally, to determine whether we could train the brains of Internet-naive volunteers, after the first scanning session we asked each volunteer to search the Internet for an hour each day for five days. We gave the computer-savvy volunteers the same assignment and repeated the functional MRI scans on both groups after the five days of search-engine training.

  As we had predicted, the brains of computer-savvy and computer-naive subjects did not show any difference when they were reading the simulated book text; both groups had years of experience in this mental task, and their brains were quite familiar with reading books. By contrast, the two groups showed distinctly different patterns of neural activation when searching on Google. During the baseline scanning session, the computer-savvy subjects used a specific network in the left front part of the brain, known as the dorsolateral prefrontal cortex. The Internet-naive subjects showed minimal, if any, activation in this region.

  One of our concerns in designing the study was that five days might not be enough time to observe any changes, but previous research suggested that even Digital Immigrants can train their brains relatively quickly. Our initial hypothesis turned out to be correct. After just five days of practice, the exact same neural circuitry in the front part of the brain became active in the Internet-naive subjects. Five hours on the Internet, and the naive subjects had already rewired their brains.

  This particular area of the brain controls our ability to make decisions and integrate complex information. It also controls our mental process of integrating sensations and thoughts, as well as working memory, which is our ability to keep information in mind for a very short time—just long enough to manage an Internet search task or dial a phone number after getting it from directory assistance.

  The computer-savvy volunteers activated the same frontal brain region at baseline and had a similar level of activation during their second session, suggesting that for a typical computer-savvy individual, the neural circuit training occurs relatively early and then remains stable. But these initial findings raise several unanswered questions. If our brains are so sensitive to just an hour a day of computer exposure, what happens when we spend more time? What about the brains of young people, whose neural circuitry is even more malleable and plastic? What happens to their brains when they spend their average eight hours daily with their high-tech toys and devices?

  >>> techno-brain burnout

  In today’s digital age, we keep our smartphones at our hips and their earpieces attached to our ears. A laptop is always within reach, and there’s no need to fret if we can’t find a landline—there’s always Wi-Fi (wireless Internet access, available wherever there is a hotspot) to keep us connected. As technology enables us to cram more and more work into our days, it seems as if we create more and more work to do.

  Our high-tech revolution has plunged us into a state of continuous partial attention, which software executive Linda Stone describes as continually staying busy—keeping tabs on everything while never truly focusing on anything. Continuous partial attention differs from multitasking, wherein we have a purpose for each task and we are trying to improve efficiency and productivity. Instead, when our minds partially attend, and do so continuously, we scan for an opportunity for any type of contact at any given moment. We virtually chat as our text messages flow, and we keep tabs on active buddy lists (friends and other screen names in an instant message program); everything, everywhere is connected through our peripheral attention. Although having all our pals online from moment to moment seems intimate, we risk losing personal touch with our real-life relationships and may experience an artificial sense of intimacy compared with when we shut down our devices and devote our attention to one individual at a time. But still, many people report that if they’re suddenly cut off from someone’s buddy list, they take it personally—deeply personally.

  When paying continuous partial attention, people may place their brains in a heightened state of stress. They no longer have time to reflect, contemplate, or make thoughtful decisions. Instead, they exist in a sense of constant crisis—on alert for a new contact or bit of exciting news or information at any moment. Once people get used to this state, they tend to thrive on the perpetual connectivity. It feeds their egos and sense of self-worth, and it becomes irresistible.

  Neuroimaging studies suggest that this sense of self-worth may protect the size of the hippocampus—that seahorse-shaped brain region in the medial (inward-facing) temporal lobe, which allows us to learn and remember new information. Dr. Sonia Lupien and associates at McGill University studied hippocampal size in healthy younger and older adult volunteers. Measures of self-esteem correlated significantly with hippocampal size, regardless of age. They also found that the more people felt in control of their lives, the larger the hippocampus.

  But at some point, the sense of control and self-worth we feel when we maintain continuous partial attention tends to break down—our brains were not built to maintain such monitoring for extended time periods. Eventually, the endless hours of unrelenting digital connectivity can create a unique type of brain strain. Many people who have been working on the Internet for several hours without a break report making frequent errors in their work. Upon signing off, they notice feeling spaced out, fatigued, irritable, and distracted, as if they are in a “digital fog.” This new form of mental stress, what I term techno-brain burnout, is threatening to become an epidemic.

  Under this kind of stress, our brains instinctively signal the adrenal gland to secrete cortisol and adrenaline. In the short run, these stress hormones boost energy levels and augment memory, but over time they actually impair cognition, lead to depression, and alter the neural circuitry in the hippocampus, amygdala, and prefrontal cortex—the brain regions that control mood and thought. Chronic and prolonged techno-brain burnout can even reshape the underlying brain structure.

  Dr. Sara Mednick and colleagues at Harvard University experimentally induced a mild form of techno-brain burnout in research volunteers; they were then able to reduce its impact through power naps and by varying mental assignments. Their study subjects performed a visual task: reporting the direction of three lines in the lower left corner of a computer screen. The volunteers’ scores worsened over time, but their performance improved if the scientists alternated the visual task between the lower left and lower right corners of the computer screen. This result suggests that brain burnout may be relieved by varying the location of the mental task.

  The investigators also found that the performance of study subjects improved if they took a quick twenty- to thirty-minute nap. The neural networks involved in the task were apparently refreshed during rest; however, optimum refreshment and reinvigoration for the task occurred when naps lasted up to sixty minutes—the amount of time it takes for rapid eye movement (REM) sleep to kick in.

  >>> the new, improved brain

  Young adults have created computer-based social networks through sites like MySpace and Facebook, chat rooms, instant messaging, videoconferencing, and e-mail. Children and teenagers are cyber-savvy too. A fourteen-year-old girl can chat with ten of her friends at one time with the stroke of a computer key and find out all the news about who broke up with whom in seconds—no need for ten phone calls or, heaven forbid, actually waiting to talk in person the next day at school.

  These Digital Natives have defined a new culture of communication—no longer dictated by time, place, or even how they look at the moment, unless they’re video chatting or posting photographs of themselves on MySpace. Even baby boomers who still prefer communicating the traditional way—in person—have become adept at e-mail and instant messaging. Both generations—one eager, one often reluctant—are rapidly developing these technological skills and the corresponding neural networks that control them, even if it’s only to survive in the ever-changing professional world.

  Almost all Digital Immigrants will eventually become more technologically savvy, which will bridge the brain gap to some extent. And, as the next few decades pass, the workforce will be made up of mostly Digital Natives; thus, the brain gap as we now know it will cease to exist. Of course, people will always live in a world in which they will meet friends, date, have families, go on job interviews, and interact in the traditional face-to-face way. However, those who are most fit in these social skills will have an adaptive advantage. For now, scientific evidence suggests that the consequences of early and prolonged technological exposure of a young brain may in some cases never be reversed, but early brain alterations can be managed, social skills learned and honed, and the brain gap bridged.

  Whether we’re Digital Natives or Immigrants, altering our neural networks and synaptic connections through activities such as e-mail, video games, Googling (verb: to use the Google search engine to obtain information on the Internet [from Wikipedia, the free encyclopedia]), or other technological experiences does sharpen some cognitive abilities. We can learn to react more quickly to visual stimuli and improve many forms of attention, particularly the ability to notice images in our peripheral vision. We develop a better ability to sift through large amounts of information rapidly and decide what’s important and what isn’t—our mental filters basically learn how to shift into overdrive. In this way, we are able to cope with the massive amounts of information appearing and disappearing on our mental screens from moment to moment.

  Initially, the daily blitz of data that bombards us can create a form of attention deficit, but our brains are able to adapt in a way that promotes rapid information processing. According to Professor Pam Briggs of Northumbria University in the United Kingdom, Web surfers looking for information on health spend two seconds or less on any particular website before moving on to the next one. She found that when study subjects did stop and focus on a particular site, that site contained data relevant to the search, whereas those they skipped over contained almost nothing relevant to the search. This study indicates that our brains learn to swiftly focus attention, analyze information, and almost instantaneously decide on a go or no-go action. Rather than simply catching “digital ADD,” many of us are developing neural circuitry that is customized for rapid and incisive spurts of directed concentration.

  While the brains of today’s Digital Natives are wiring up for rapid-fire cybersearches, the neural circuits that control the more traditional learning methods are neglected and gradually diminished. The pathways for human interaction and communication weaken as customary one-on-one people skills atrophy. Our UCLA research team and other scientists have shown that we can intentionally alter brain wiring and reinvigorate some of these dwindling neural pathways, even while the newly evolved technology circuits bring our brains to extraordinary levels of potential.

  Although the digital evolution of our brains increases social isolation and diminishes the spontaneity of interpersonal relationships, it may well be increasing our intelligence in the way we currently measure and define IQ. Average IQ scores are steadily rising with the advancing digital culture, and the ability to multitask without errors is improving. Neuroscientist Paul Kearney at Unitec in New Zealand reported that some computer games can actually improve cognitive ability and multitasking skills. He found that volunteers who played the games eight hours each week improved multitasking skills by two and a half times.

  Other research at the University of Rochester has shown that video game playing can improve peripheral vision as well. As the modern brain continues to evolve, some attention skills improve, mental response times sharpen, and the performance of many brain tasks becomes more efficient. These new brain proficiencies will be even greater in future generations and alter our current understanding and definition of intelligence.

  section two

  social life, personal life, school

  <Sherry Turkle>

  identity crisis

  Excerpted from Life on the Screen (pp. 255–62).

  SHERRY TURKLE is Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society at MIT. Her books include The Second Self: Computers and the Human Spirit (1984), Life on the Screen: Identity in the Age of the Internet (1995), and Alone Together: Why We Expect More from Technology and Less from Each Other (2011).

  EVERY ERA CONSTRUCTS its own metaphors for psychological well-being. Not so long ago, stability was socially valued and culturally reinforced. Rigid gender roles, repetitive labor, the expectation of being in one kind of job or remaining in one town over a lifetime, all of these made consistency central to definitions of health. But these stable social worlds have broken down. In our time, health is described in terms of fluidity rather than stability. What matters most now is the ability to adapt and change—to new jobs, new career directions, new gender roles, new technologies.

  In Flexible Bodies, the anthropologist Emily Martin argues that the language of the immune system provides us with metaphors for the self and its boundaries.1 In the past, the immune system was described as a private fortress, a firm, stable wall that protected within from without. Now we talk about the immune system as flexible and permeable. It can only be healthy if adaptable.

  The new metaphors of health as flexibility apply not only to human mental and physical spheres, but also to the bodies of corporations, governments, and businesses. These institutions function in rapidly changing circumstances; they too are coming to view their fitness in terms of their flexibility. Martin describes the cultural spaces where we learn the new virtues of change over solidity. In addition to advertising, entertainment, and education, her examples include corporate workshops where people learn wilderness camping, high-wire walking, and zip-line jumping. She refers to all of these as flexibility practicums.

  In her study of the culture of flexibility, Martin does not discuss virtual communities, but these provide excellent examples of what she is talking about. In these environments, people either explicitly play roles (as in MUDs—multiuser domains) or more subtly shape their online selves. Adults learn about being multiple and fluid—and so do children. “I don’t play so many different people online—only three,” says June, an eleven-year-old who uses her mother’s Internet account to play in MUDs. During our conversation, I learn that in the course of a year in RL, she moves among three households—that of her biological mother and stepfather, her biological father and stepmother, and a much-loved “first stepfather,” her mother’s second husband. She refers to her mother’s third and current husband as “second stepfather.” June recounts that in each of these three households the rules are somewhat different and so is she. Online switches among personae seem quite natural. Indeed, for her, they are a kind of practice. Martin would call them practicums.

  >>> “logins r us”

  On a WELL (Whole Earth ’Lectronic Link) discussion group about online personae (subtitled “boon or bête-noire”), participants shared a sense that their virtual identities were evocative objects for thinking about the self. For several, experiences in virtual space compelled them to pay greater attention to what they take for granted in the real. “The persona thing intrigues me,” said one. “It’s a chance for all of us who aren’t actors to play [with] masks. And think about the masks we wear every day.”2

  In this way, online personae have something in common with the self that emerges in a psychoanalytic encounter. It, too, is significantly virtual, constructed within the space of the analysis, where its slightest shifts can come under the most intense scrutiny.3

 
