
The Digital Divide: Writings for and Against Facebook, YouTube, Texting, and the Age of Social Networking


by Mark Bauerlein


  We aren’t built, however, to tune out life. Our survival lies in the tricky push and pull between focusing and thus drawing meaning from the world, and staying alert to changes in our environment. This is the real tug-of-war. As much as we try to focus on pursuing our goals, at heart we are biased to remain alert to shifts—especially abrupt ones—in our environment. Babies and children are especially at the mercy of their environments, since it takes many years and much training for them to develop the brain capacity to carry out complex, goal-oriented behaviors, including multitasking. Older toddlers whose mothers constantly direct them—splicing and controlling the focus of their attention—show damaged goal-setting and independence skills a year later.25 Even as adults, our “top-down” goal-oriented powers of attention constantly grapple with our essentially more powerful “bottom-up,” stimulus-driven networks.26 Pausing along the trail to consider whether a plant was edible, our ancestors had to tune out their environment long enough to assess the would-be food. But they had to be better wired to almost unthinkingly notice the panther in the tree above—or they would have died out rapidly. We are born to be interrupt-driven, to give, in Linda Stone’s term, “continuous partial attention”27 to our environment, and we must painstakingly learn and keep striving to retain the ever-difficult art of focus. Otherwise, in a sense, we cede control to the environment, argues physicist Alan Lightman in an essay titled “The World Is Too Much with Me.” After realizing that gradually and unconsciously he had subdivided his day “into smaller and smaller units of ‘efficient’ time use,” he saw that he was losing his capacity to dream, imagine, question, explore, and, in effect, nurture an inner self. He was, in a sense, becoming a “prisoner of the world.”28

  When we multitask, we are like swimmers diving into a state of focus, resurfacing to switch gears or reassess the environment, then diving again to resume focus. This is a speeded-up version of the push and pull we do all day. But no matter how practiced we are at either of the tasks we are undertaking, the back-and-forth produces “switch costs,” as the brain takes time to change goals, remember the rules needed for the new task, and block out cognitive interference from the previous, still-vivid activity.29 “Training can help overcome some of the inefficiencies by giving you more optimal strategies for multitasking,” says Meyer, “but except in rare circumstances, you can train until you’re blue in the face and you’d never be as good as if you just focused on one thing at a time. Period. That’s the bottom line.” Moreover, the more complex the tasks, the steeper the switch costs. When I had to consider both tones and colors in the first experiment, I began responding almost twice as slowly to the easier color tasks as I also tried to concentrate on getting the hard-to-hear tones right. Perhaps recalling which finger key corresponded to which color word, or in Meyer’s words “rule activation,” inhibited my performance. Perhaps my brain was slowed by “passive proactive interference”; in other words, it was still tied up with the work of distinguishing the tones, a sticky business for someone whose hearing has been eroded by years of city living. Similar trade-offs occurred during the second experiment. I slowed down in doing the easy compatible work, while trying like mad to speed up my responses to the infuriatingly illogical second round of zeros. Predictably, the accuracy of my responses often suffered. These lab rat exercises and millisecond “costs” may seem abstract. Sure, an instant of inattentional blindness or a delayed reaction in noticing a darting child makes an enormous difference in a car flying down the road. The split-focus moment literally may result in shattered lives. But scale up and out of the lab and ask, how much does this matter off the road or away from the radar screen? Is multitasking as much of a scourge as Meyer believes?

  Perhaps the cumulative, fractional “switch costs,” the cognitive profit-loss columns of our split-screen life, are not the only problem. These are inefficiencies, surely a danger in some circumstances and a sin in this capitalist society, which we undoubtedly will try to shave away by sharpening our multitasking skills. More important, perhaps in this time-splicing era we’re missing something immeasurable, something that nevertheless was very much with me as I struggled to act like a good test monkey in Meyer’s lab. How do we switch gears in a complex environment? Talk to a cognitive neuroscientist or an experimental psychologist such as Meyer and chances are, within minutes, he or she will stress the limitations of our highest form of attention—the executive system that directs judgment, planning, and self-control. Executive attention is a precious commodity. Relying on multitasking as a way of life, we chop up our opportunities and abilities to make big-picture sense of the world and pursue our long-term goals. In the name of efficiency, we are diluting some of the essential qualities that make us human....

  Now, most of us are information-age workers . . . relentlessly driving ourselves to do more, ever faster. This relentless quest for productivity drives the nascent but rapidly burgeoning field of “interruption science,” which involves the study of the pivot point of multitasking. For multitasking is essentially the juggling of interruptions, the moment when we choose to or are driven to switch from one task to another. And so to dissect and map these moments of broken time is to shed light on how we live today. What emerges, in the jargon of leading interruption scientist Gloria Mark, is a portrait of “work fragmentation.” We spend a great deal of our days trying to piece our thoughts and our projects back together, and the result is often an accumulation of broken pieces with a raggedy coherence all its own. After studying workers at two West Coast high-tech firms for more than one thousand hours over the course of a year, Mark sifted the data—and was appalled. The fragmentation of work life, she says, was “far worse than I could ever have imagined.”30

  Workers on average spend just eleven minutes on a project before switching to another, and while focusing on a project, typically change tasks every three minutes, Mark’s research shows.31 For example, employees might work on a budget project for eleven minutes but flip between related e-mails, Web surfing, and phone calls during that time. This isn’t necessarily all bad. Modern life does demand nimble perception . . . and interruptions often usher in a needed break, a bit of useful information, or a eureka thought. Yet as well as coping with a high number of interruptions, workers have a tough time getting back on track once they are disconnected. Unlike in psychology labs, where test takers are cued to return to a previous task, workers have to retrieve a lost trail of work or thought themselves when interrupted. Once distracted, we take about twenty-five minutes to return to an interrupted task and usually plunge into two other work projects in the interim, Mark found.32 This is partly because it’s difficult to remember cognitive threads in a complex, ever-shifting environment and partly because of the nature of the information we are juggling today. The meaning of a panther’s presence is readily apparent in a glance. But a ping or a beep doesn’t actually tell much about the nature of the information. “It is difficult to know whether an e-mail message is worth interrupting your work for unless you open and read it—at which point you have, of course, interrupted yourself,” notes science writer Clive Thompson. “Our software tools were essentially designed to compete with one another for our attention, like needy toddlers.”33 Even brief interruptions can be as disruptive as lengthy ones, if they involve tasks that are either complex in nature or similar to the original work (thus muddying recall of the main work), Donald Broadbent has found.34 In total, interruptions take up 2.1 hours of an average knowledge worker’s day and cost the U.S. economy $588 billion a year, one research firm estimated.35 Workers find the constant hunt for the lost thread “very detrimental,” Mark reports dryly. . . .

  Mary Czerwinski, an energetic Microsoft researcher, designs a kind of high-tech “wallpaper” to better our age. Czerwinski is the manager of the Visualization and Interaction Research Group in the company’s thought ghetto, Microsoft Research Labs. She originally wrote her dissertation on task switching, spent time helping NASA determine how best to interrupt busy astronauts, and now develops ways for computer users to cure that uncertainty rap—the necessity to unveil an interruption to size up its importance—mainly by bringing our information into the open, so to speak. Czerwinski and Gary Starkweather, inventor of the laser printer, are developing a forty-two-inch computer screen so that workers can see their projects, files, or Web pages all at once. That’s three-feet-plus of LCD sensurround, a geek’s heaven. Moreover, within this big-screen universe, Czerwinski and her team are figuring out new ways to make interruptions instantly visible. A program called Scalable Fabric offers a peripheral zone where minimized but still visible windows are color-coded and wired to signal shifts in their status. A new e-mail, for example, might glow green in a partly visible in-box. Another project creates a round, radar screen–type window at the side of the screen, where floating dots represent pertinent information.36 Czerwinski is, in effect, decorating the walls of cyberspace with our thoughts, plans, conversations, and ideas. Can the “pensieve”—the misty fountain that conjures up the stored memories of Harry Potter’s sage headmaster, Albus Dumbledore—be far behind?

  Working memory is the Achilles’ heel of multitasking, and so it is the focus of Czerwinski’s work. The “lost thread” syndrome that bedevils multitaskers stems from the fact that we have a remarkably limited cerebral storehouse for information used in the daily tasks of life. (Even a wizard, it seems, is a forgetful creature.) “Out of sight, out of mind” is all too true, mainly because, for survival purposes, we need to have only the most pertinent current information on our mind’s front burner. Our working memory is a bit like a digital news crawl slithering across Times Square: constantly updated, never more than a snippet, no looking back. Nearly a half-century ago, memory researchers Margaret and Lloyd Peterson found that people forget unrelated letters and words within just a few seconds once they are distracted or pulled away to another task.37 In his classic 1956 paper “The Magical Number Seven, Plus or Minus Two,” George Miller hypothesized that people could hold about seven pieces of information, such as a telephone number, in their short-term verbal working memory. The seven bits, however, could also be made up of “chunks” of longer, more complex, related information pieces, noted Miller, a founder of cognitive psychology. Recent evidence, in fact, suggests that Miller was overly optimistic and that people can hold between one and four chunks of information in mind.38 Moreover, when your working memory is full, you are more likely to be distracted. This is one reason why viewers remember 10 percent fewer facts related to a news story when the screen is cluttered by a crawl.39

  When I first talked to Czerwinski by telephone, she began the conference call by teasing a PR person on the line for failing to send out an advance reminder of the appointment.40 “When I don’t get a meeting reminder, you might as well hang it up,” she said. To Czerwinski, the solution to the “lost thread” syndrome is simple: use technology to augment our memories. Of course, this is not entirely new. The alphabet, Post-it note, PDA, and now Czerwinski’s innovations represent a long line of human efforts to bolster our working memories. But while multiple streams of color-coded, blinking, at-a-glance reminders will undoubtedly jog our memories, they run the risk of doing so by snowing us even more, Czerwinski admits. Bigger screens lead to lost cursors, more open windows, time-consuming hunts for the right information, and “more complex multitasking behavior,” she observes. I would add that simultaneous data streams flatten content, making prioritization all the harder. The crawl, for instance, effectively puts a grade-B headline on a par with a top news story read by the anchor. Thirty shifting color-coded screen windows vying for our attention make trivia bleed into top-priority work. “Better task management mechanisms become a necessity” is Czerwinski’s crisp conclusion. In other words, we need computers that sense when we are busy and then decide when and how to interrupt us. The digital gatekeeper will provide the fix.

  And that’s exactly the vein of research being mined by bevies of scientists around the country. “It’s ridiculous that my own computer can’t figure out whether I’m in front of it, but a public toilet can,” says Roel Vertegaal of Queen’s University in Ontario, referring to automatic flushers. Vertegaal is developing a desktop gadget—shaped like a black cat with bulging eyes—that puts through calls if a worker makes eye contact with it. Ignored, the “eyePROXY” channels the interruption to voice mail. An MIT prototype mouse pad heats up to catch your attention, a ploy we might grow to loathe on a hot summer day. IBM software is up to 87 percent accurate in tracking conversations, keystrokes, and other computer activity to assess a person’s interruptibility.41 The king of the mind-reading computer ware, however, is Czerwinski’s colleague and close collaborator, Eric Horvitz. For nearly a decade, he’s been building artificial intelligence platforms that study you—your e-mail or telephone habits, how much time you spend in silence, even the urgency of your messages. “If we could just give our computers and phones some understanding of the limits of human attention and memory, it would make them seem a lot more thoughtful and courteous,” says Horvitz of his latest prototype, aptly named “BusyBody.”42 Artificial intelligence pioneer John McCarthy has another adjective to describe such programming: annoying. “I feel that [an attentive interface] would end up training me,” says McCarthy, a professor emeritus at Stanford.43 Long before “attentive-user interfaces” were born, French philosopher Paul Virilio had similar qualms about the unacknowledged power of the personal computer itself, which he dubbed a “vision machine” because, he said, it paves the way for the “automation of perception.”44 Recall David Byrne’s impish observation that PowerPoint “tells you how to think.”

  Is hitching ourselves to the machine the answer? Will increasingly intelligent computers allow us to overcome our limitations of memory and attention and enable us to multitask better and faster in a Taylor-inspired hunt for ever-greater heights of efficiency? “Maybe it’s our human nature to squeeze this extra bit of productivity out of ourselves, or perhaps it’s our curious nature, ‘can we do more?’ ” asks Czerwinski. Or are we turning over “the whole responsibility of the details of our daily lives to machines and their drivers,” as Morris feared, and beginning to outsource our capacity for sense-making to the computer? To value a split-focus life augmented by the machine is above all to squeeze out potential time and space for reflection, which is the real sword in the stone needed to thrive in a complex, ever-shifting new world. To breed children for a world of split focus is to raise generations who will have ceded cognitive control of their days. Children today, asserts educator Jane Healy, need to learn to respond to the pace of the world but also to reason and problem-solve within this new era. “Perhaps most importantly, they need to learn what it feels like to be in charge of one’s own brain, actively pursuing a mental or physical trail, inhibiting responses to the lure of distractions,” writes Healy.45

  Ironically, multitasking researcher Arthur Jersild foresaw this dilemma generations ago. Inspired by Taylor’s and other time management and piecework theories, Jersild quietly published his pioneering dissertation on task switching. Then he went on to become a developmental psychologist known for urging schools to foster self-awareness in children. His views were unusual. At the time, educators didn’t consider children self-perceptive and, in any case, they felt that emotional issues were the purview of the family. In a 1967 oral history given upon his retirement from Columbia, Jersild argued that children must be taught to “see themselves as capable, if they are; to be aware of their strengths; to try themselves out if they seem hesitant; . . . to prove to themselves that they have certain powers.”46 Jersild, the sixth of ten children of a strict Midwestern Danish immigrant minister, was kindly, sensitive, and doggedly self-sufficient himself. At age fourteen, while boarding as a farmhand in South Dakota in 1916, he stood in the cornfield he was weeding, raised a fist to the sky, and vowed, “I am getting out of here!” By the end of his career, he had come full circle from his early concerns about how fast workers could do piecework to worrying that the education system placed too high a premium on speed and not enough on reflection. “It is essential for conceptual thought that a person give himself time to size up a situation, check the immediate impulse to act, and take in what’s there,” said Jersild. “Listening is part of it, but contemplation and reflection would go deeper.” The apt name of his dissertation was “Mental Set and Shift.”

  Depending too heavily on multitasking to navigate a complex environment and on technology as our guide carries a final risk: the derailing of the painstaking work of adding to our storehouses of knowledge. That’s because anything that we want to learn must be entered into our long-term memory stores, cognitive work that can take days and even months to accomplish. Attention helps us to understand and make sense of the world and is crucial as a first step to creating memory. But more than simply attending is necessary. “We must also process it at an abstract, schematic, conceptual level,” note researchers Scott Brown and Fergus Craik. This involves both rote repetition and “elaborative rehearsal,” or meaningfully relating it to other information, preferably not too quickly.47 To build memory is to construct a treasure trove of experience, wisdom, and pertinent information. If attention makes us human, then long-term memory makes each of us an individual. Without it, we are faceless, hence our morbid fascination with amnesia of all kinds. Building these stores of memory takes time and will. When we divide our attention while trying to encode or retrieve memories, we do so about as well as if we were drunk or sleep deprived. In the opening scene of Alan Lightman’s chilling novel The Diagnosis, successful executive Bill Chalmers loses his memory while taking the subway to work.48 Rapidly, he disintegrates into a lost soul who recalls only his company’s motto: “The Maximum Information in the Minimum Time.” He regains his memory only to contract a mysterious, numbing illness that ultimately reveals the emptiness of his life. Like the alienated protagonists of film and literature from The Matrix to The Magic Mountain, Chalmers is a prisoner of the modern world. A culture of divided attention fuels more than perpetual searching for lost threads and loose ends. It stokes a culture of forgetting, the marker of a dark age. It fuels a mental shift of which we are not even aware. That’s what we’re unwittingly teaching baby Molly as her budding gaze meets the world.

 
