
The Digital Divide


by Mark Bauerlein


  After spending a day cruising the greater Tampa Bay area, I find myself back at the Wales homestead, sitting with the family as they watch a video of Wales’s daughter delivering a presentation on Germany for a first-grade enrichment class. Wales is learning German, in part because the German Wikipedia is the second largest after English, in part because “I’m a geek.” Daughter Kira stands in front of a board, wearing a dirndl and reciting facts about Germany. Asked where she did her research, she cops to using Wikipedia for part of the project. Wales smiles sheepishly; the Wikipedia revolution has penetrated even his own small bungalow.

  People who don’t “get” Wikipedia, or who get it and recoil in horror, tend to be from an older generation literally and figuratively: the Seigenthalers and Britannica editors of the world. People who get it are younger and hipper: the Irene McGees and Jeff Bezoses. But the people who really matter are the Kiras, who will never remember a time without Wikipedia (or perhaps Wikia), the people for whom open-source, self-governed, spontaneously ordered online community projects don’t seem insane, scary, or even particularly remarkable. If Wales has his way—and if Wikipedia is any indication, he will—such projects will just be another reason the Internet doesn’t suck.

 

  judgment: of molly’s gaze and taylor’s watch

  why more is less in a split-screen world

  Excerpted from Distracted (pp. 71–95). This is an abridged version of the original chapter.

  Columnist and author MAGGIE JACKSON’S recent book is Distracted: The Erosion of Attention and the Coming Dark Age (2008). Jackson has published in The Boston Globe, The New York Times, BusinessWeek, Utne, and Gastronomica. She is a graduate of Yale University and the London School of Economics. For more information, see www.maggie-jackson.com.

  MOLLY WAS BUSY. A cherubic, dark-haired fourteen-month-old still unsteady on her feet, she hung on to a bookcase with one hand and doggedly yanked toys off the shelves. One, two, three brightly colored plastic blocks dropped to the floor. A teddy bear got a fierce hug before being hurled aside. Then abruptly, she stood stock-still and swiveled her head toward a big television set at one end of the room, entranced by the image of a singing, swaying Baby Elmo. “She’s being pulled away,” whispered Dan Anderson, a psychology professor who was videotaping Molly and her mother from an adjoining room of his laboratory. “What’s happening is that she’s being pulled away by the TV all the time, rather than making a behavioral decision to watch TV.”1 As Anderson and two graduate students observed through an enormous one-way wall mirror and two video monitors, Molly stood entranced for a few seconds, took a step toward the screen and tumbled over. Her young mother, sitting on the floor nearby, turned her attention from the television in time to catch Molly before her head hit the floor. Anderson didn’t react. He was tuned to the back-and-forth in the room: Molly turning from the toy to the TV to her mother; her mother watching her baby, but mostly the video, which was being developed by Sesame Street for the exploding under-two market. This was rich fodder for a man who’s spent his life studying children’s attention to television.

  A congenial University of Massachusetts professor with a melodic voice, Anderson resembles a character in a fairy tale—perhaps the gentle wizard who shows the lost child the way home. First in viewers’ homes and now in his lab in a working-class neighborhood of Springfield, Massachusetts, he studies heart rate, eye tracking, and an array of other measures to understand what happens when we watch television. People aren’t as glued to the tube as they might think, Anderson has found. On average, both children and adults look at and away from a set up to one hundred and fifty times an hour.2 Only if a look lasts fifteen seconds or longer are we likely to watch for up to ten minutes at a stretch—a phenomenon called “attentional inertia.”3 When a show is either too fast and stimulating or too calm and slow, our attention slips away. Television attracts us because its content can challenge our cognition. But foremost, its quick cuts and rapid imagery are designed to keep tugging at our natural inclination to orient toward the shiny, the bright, the mobile—whatever’s eye-catching in our environment. It’s ingenious: entertainment that hooks us by appealing to our very instincts for survival. This is why very young viewers like Molly are entranced by the plethora of new “educational” shows and DVDs aimed at them, even though they understand little and likely learn little from this fare.4 Push and pull, back and forth, television is in essence an interruption machine, the most powerful attention slicer yet invented. Just step into the room with the enticing glow, and life changes.

  This was the intriguing discovery that Anderson made while exploring the gaze of the tiniest watchers, the final frontier of TV viewership. In all the years that he and others sought to probe the question of how much we attend to television, no one thought to ask how television changed off-screen life during an on-air moment. (The point of most such research, after all, was to measure the watching, the more of it the better, in the industry’s view.) But Anderson and his research team recently discovered that television influences family life even when kids don’t seem to be watching. When a game show is on, children ages one to three play with toys for half the amount of time and show up to 25 percent less focus in their play than they do when the TV is off.5 In other words, they exhibit key characteristics—abbreviated and less focused play—of attention-deficient children.6 They begin to look like junior multitaskers, moving from toy to toy, forgetting what they were doing when they were interrupted by an interesting snippet of the show. Not surprisingly, parents in turn are distracted, interacting 20 percent less with their kids and relating passively—“That’s nice, dear,” or “Don’t bother me, I’m watching TV”—when they do. Consider that more than half of children ages eight to eighteen live in homes where the TV is on most of the time.7 Factor in the screens in the doctor’s office, airport, elevator, classroom, backseat—and don’t forget that many, if not most, are splintered by the wiggling, blinking crawl. Then, zoom out and remember that television is just one element in a daily deluge of split focus. Wherever Molly’s gaze falls, wherever she turns, whomever she talks to, she’ll likely experience divided attention. She’s being groomed for a multitasking, interrupt-driven world. And she doesn’t need Elmo to teach her that.

  If the virtual gives us a limitless array of alternative spaces to inhabit, then multitasking seems to hand us a new way to reap time. Cyberspace allowed us to conquer distance and, seemingly, the limitations of our earthly selves. It has broken down the doors of perception. Now, we’re adopting split focus as a cognitive booster rocket, the upgrade we need to survive in our multilayered new spaces. How else can we cope with an era of unprecedented simultaneity, a place we’ve hurtled into without any “way of getting our bearings,” as Marshall McLuhan noted in 1967?8 Multitasking is the answer, the sword in the stone. Why not do two (or more) things per moment when before you would have done one? “It’s a multitasking world out there. Your news should be the same. CNN Pipeline—multiple simultaneous news streams straight to your desktop.” I am watching this ad on a huge screen at the Detroit airport one May evening after my flight home is canceled. Travelers all around me move restlessly between PDA, iPod, laptop, cell phone, and ubiquitous TV screens. “CNN Pipeline,” the ad concludes. “Ride your world.” Rev up your engines, Molly, it’s a big universe out there.

  Now working parents spend a quarter of their waking hours multitasking.9 Grafted to our cell phones, we drive like drunks; even if it kills us, we get in that call. Instant messaging’s disjointed, pause-button flavor makes it the perfect multitasking communications medium. More than half of instant-message users say they always Web surf, watch TV, talk on the phone, or play computer games while IM’ing.10 Teens say, duh, that’s the attraction: face-to-face is better, but with IM, you get more done!11 Joichi Ito’s “hecklebot,” which publicly displays the “back-channel chat” or wireless banter of conference attendees, may be just the pipe dream of a subversive venture capitalist for now, but it captures the tenor of the attentional turf wars erupting in meeting rooms, conference symposia, and college classes.12 “Did he really say that?” instant-messages an audience member to fellow IM’ers in the room. “Wow? He did,” someone responds.13 This parallel channel adds a new layer to the surfing and e-mail checking already rife at live-time events . . . Bosses, speakers, and professors respond with threats and electronic blackouts to wrest people’s primary focus back to the front of the room. Audiences ignore them, asserting the right to split their focus. Are these just bumps along the road to progress? Can we time-splice our way to unlimited productivity? Certainly, the disjunction between TV news anchors and the crawl “captures the way we live now: faster than ever, wishing we had eyes in the back of our heads,” notes media critic Caryn James.14 The inventor of the Kaiserpanorama, a nineteenth-century slide show, put it more simply. In an age of technological wonders, merely traveling to far-off places won’t be enough, wrote August Fuhrmann. Next, we’ll want to penetrate the unknown, do the impossible—instantaneously. “The more we have, the more we want,” he wrote.15

  ANOTHER DAY IN THE LAB, and this time I was the baby. Sitting in a cramped booth in a basement laboratory at the University of Michigan in Ann Arbor, my head was cradled in a chin rest and capped by a headset. My right hand rested on a set of four metal keys. On the table in front of me, two eyeball-shaped video cams sat atop a computer screen, monitoring me as I struggled to correctly respond to beeps, squeaks, and colored words—red, blue, yellow, green—appearing on the screen. “Beep,” I heard and tried to recall if that was supposed to be sound one, two, or three. The lone word red appeared on the screen, and I thankfully remembered to press the corresponding pinkie finger key. Two practice rounds and then paired tones and colors flew at me simultaneously, even though I seemed to sense just one and then, after a long pause, the other. The colors I could handle, but sometimes I didn’t even hear the tones. I pictured Adam and Jonathan, the two graduate students in the next booth, rolling their eyes as they ran this test taker through her paces. I pressed on, trying to concentrate. It felt like gritting my teeth, except in my brain.

  David Meyer, head of the University of Michigan’s Brain, Cognition, and Action Lab, was my guide that day to the burgeoning realm of cognitive neuroscience research into multitasking.16 Considered by many of his peers to be one of the greatest experimental psychologists of our time, Meyer looks more like an outdoorsman than a brilliant scientist. Lanky and tall, he has a chiseled face and a down-home way of talking, with faint traces of a Kentucky accent. Blessed with the ability to translate brain science into plain English, he’s become a media darling in recent years, the one to call for a quote on the latest multitasking research. He’s more than generous with his time and willing to endure the interruptions of press calls. He’s also a driven man.

  Dressed in a faded T-shirt and blue jeans, he’d dragged himself into the lab this stifling May morning despite a painful stomach ailment. Now sixty-four, he’s made a point in recent years, even at a cost to his time for other research, of warning anyone who will listen about the dangers of multitasking. It’s an unrecognized scourge, he believes, akin to cigarette smoking a generation ago. Is he riding a hobbyhorse, perhaps overreacting a trifle? Certainly, his “call a spade a spade” demeanor has raised eyebrows in the button-down scientific community. He writes lengthy scientific papers and speeches when snappy four-page reports are increasingly in fashion. He refuses to lard his work with superficial, pandering citations to big names in the field. At the same time, Meyer is a renaissance scientist, respected for his achievements in areas from computational modeling of the brain to “semantic priming”—or the automatic spread of mental and neural activity in response to processing the meaning of words. Is he a provocative maverick devoted to a peripheral pet cause or a prophetic visionary who can help save us from ourselves?

  Certainly, it’s ironic that Meyer is best known in the public sphere for his work in an area of study that long was a backwater in attention research. By the time Wilhelm Wundt established the first psychology lab at the University of Leipzig in 1879, a generation of scientists had spent years studying how humans perceive the world, especially visually.17 The discovery that we interpret daily life via our senses, not just digest it objectively, paved the way for endless attempts to rigorously measure how a human responds to environmental stimuli and to what extent the waters of perception are influenced by the forces of “memory, desire, will, anticipation and immediate experience,” as delineated by art historian Jonathan Crary.18 Part of the vision of William James stems from the fact that he never underestimated the complexity of such processes. Yet however crucial and enigmatic, the “input-output” transactions that fascinated early psychological researchers entail only one slice of the pie of cognition.

  It wasn’t until after World War II that scientists began to see that studying how we switch mental gears, especially under pressure, can illuminate the higher workings of the mind. Arthur T. Jersild had carried out the first systematic study of task-switching in 1927 for his dissertation by timing students solving long lists of similar or differing math problems. Then he abandoned the topic, never to return.19 Later, postwar British scientists began tackling task switching as part of their groundbreaking research into higher cognitive processing. A parallel line of research probed dual tasking, or our limited capacity to carry out two tasks literally at the same time.20 By the 1990s, an explosion of research into multitasking had ignited, inspired by the work of Alan Allport, Gordon Logan, David Meyer, Stephen Monsell, and Harold Pashler, among others—and by the demands of life today. It’s not a coincidence that such research has blossomed in an era when human work has become increasingly wedded to the rhythms of the most complex, intelligent machines ever to appear on earth. (In part, Meyer is known for his work with computer scientist and cognitive psychologist David Kieras, using computers to model the brain’s cognitive architecture, including the “mechanics” of task switching.)21 The question of how our brains compare with artificial information processors, and how well they can keep up, underlies much of our fascination with multitasking. We’re all air traffic controllers now.

  Back in the booth, I tackled a different experiment, this one measuring the speed at which I could alternate between two complex visual activities. Although the first experiment tested my ability to respond to simultaneous stimuli, both effectively measure task switching, for we can do very few things exactly at the same time. Reading e-mail while talking on the phone actually involves reading and then chatting, chatting and then reading. Cell phoning while driving demands similar cognitive switching. In this second test, when a zero popped up in one of four spaces in a line on the screen, I was to press a corresponding key with my finger. A zero in the first spot meant that I should press my index finger key. A zero in the second place prompted my second finger, and so on. Tap, tap. I got it. Easy. That was the compatible round. Next, I was supposed to hit keys that did not correspond with the zeros in the old lineup. When I saw a zero at the end of the line, I was to strike my index finger key. There was a pattern, but I barely grasped it before I had to begin alternating between compatible and incompatible cues, depending on whether the zeros were green or red. I panicked, blindly hitting any key in the harder, incompatible rounds and was thoroughly relieved when it ended. Yes, William James, there’s a whole lot more going on here than just simple inputs and outputs. My brief cerebral tussle in the test lab, in fact, neatly exemplifies the age-old, inescapable tug-of-war we experience each waking minute of our life as we struggle to stay tuned to and yet make sense of our world.

  To understand multitasking, first consider the lowly neuron, especially in the three-dozen regions of the brain that deal with vision—arguably the most crucial attentional sense. They lead something of a dog-eat-dog life. Neurons in the retina initially transmit an object’s simple, restricted characteristics, such as its color and position, while more lofty neurons in the cortex and other areas code the object’s complex or abstract features, such as its meaning. (Is this a face or a toaster, my neighbor or my mother?) This hierarchy of neurons must work in concert, firing up a coordinated “perceptual coherence field” in scientist Steven Yantis’s words, to meaningfully represent the object in the brain.22 But with so much to process and so little time, multiple neuron groups often compete to represent sensory information to the brain for possible subsequent encoding into memory. What is the key to making meaning from this jumble? Attention. Paying attention, whether deliberately or involuntarily, highlights one coherence field and suppresses activity from “losing” neuron groups, forcing our perception of the object they are representing to fade away. Attention is so crucial to how we see the world that people with damage to areas usually in the brain’s right parietal lobe—a region key to certain forms of attention—can completely fail to notice objects in their view even though their vision is perfect. Such patients with “visual neglect” will eat only the food on the left side of their plate or dress just the left side of their bodies.23 They literally have blind spots, no-go zones for their attention. And yet even for those of us with healthy brains, focus itself creates a kind of blindness. When we shine our attentional spotlight on an object, the rest of the scene doesn’t go blank, but its suppression is truly dramatic. “The intuition that we open our eyes and see all that is before us has long been known to be an illusion,” notes Yantis.24

  We aren’t built, however, to tune out life. Our survival lies in the tricky push and pull between focusing and thus drawing meaning from the world, and staying alert to changes in our environment. This is the real tug-of-war. As much as we try to focus on pursuing our goals, at heart we are biased to remain alert to shifts—especially abrupt ones—in our environment. Babies and children are especially at the mercy of their environments, since it takes many years and much training for them to develop the brain capacity to carry out complex, goal-oriented behaviors, including multitasking. Older toddlers whose mothers constantly direct them—splicing and controlling the focus of their attention—show damaged goal-setting and independence skills a year later.25 Even as adults, our “top-down” goal-oriented powers of attention constantly grapple with our essentially more powerful “bottom-up,” stimulus-driven networks.26 Pausing along the trail to consider whether a plant was edible, our ancestors had to tune out their environment long enough to assess the would-be food. But they had to be better wired to almost unthinkingly notice the panther in the tree above—or they would have died out rapidly. We are born to be interrupt-driven, to give, in Linda Stone’s term, “continuous partial attention”27 to our environment, and we must painstakingly learn and keep striving to retain the ever-difficult art of focus. Otherwise, in a sense, we cede control to the environment, argues physicist Alan Lightman in an essay titled “The World Is Too Much with Me.” After realizing that gradually and unconsciously he had subdivided his day “into smaller and smaller units of ‘efficient’ time use,” he realized that he was losing his capacity to dream, imagine, question, explore, and, in effect, nurture an inner self. He was, in a sense, becoming a “prisoner of the world.”28

 
