I Live in the Future & Here's How It Works: Why Your World, Work, and Brain Are Being Creatively Disrupted


by Nick Bilton


  In the early days of Apples, Dells, and IBM PCs, the computer took several minutes just to load, and each individual program took a while to open as well; you could handle only one or two functions at a time. As processors got smarter and faster, your computer began to “multitask,” giving you the concept of windows—multiple functions churning away in various boxes on the screen. In the meantime, though, each of us also got better at doing a few different things at the same time.

  Anyone who remembers the early days of the Web went through a similar experience as well; just connecting to the Internet took several minutes. There were passwords, strange fax machine–like noises and a few clicks of the mouse, and then interminable delays as the “World Wide Wait” slowly dripped into view. People kept themselves occupied by picking up a book or a magazine that sat close by, playing solitaire on the computer, or simply staring off into space, letting their minds wander.

  Gradually, as computers became faster, they enabled us to perform multiple tasks simultaneously. Rather than wait two or three seconds for someone to reply to an instant message, I can read a few more words from the article in my browser or play another few seconds of that video game I started earlier. We have adapted to a world where information moves very quickly and in many different forms, from the television, to the radio, to the computer, to the mobile phone. As technologies change and we become more adept in using them, our brains will adapt too.

  The Great Multitasking Debate

  Whether this intensified jumping from task to task is a good thing is a subject of intense debate. It might make us smarter, faster, and more agile. Or it might, in the opinion of some researchers, simply make us more stupid and prone to destructive errors. We could become like the characters in Kurt Vonnegut’s short story “Harrison Bergeron,” in which every twenty seconds or so “mental handicap” transmitters send out distracting noises to keep people “from taking unfair advantage of their brains.” In the story, sounds from gunshots to crashing cars keep the characters from completing a thought or a conversation and possibly gaining an advantage over another person. In real life, e-mail, tweets, and telephones keep us from finishing a sentence or getting our work done.

  José Saramago, the late Portuguese novelist and playwright who won the Nobel Prize for literature in 1998, offers a different analogy in his novel Blindness.1 Saramago’s story opens in a world just like the one you know today. People are living their lives, building a career, driving to work, starting and raising families, attending to errands and meetings. Then a person sitting in traffic in a car instantly becomes blind. The blind man is rushed to a doctor, who in turn soon becomes blind.

  Quickly and efficiently, blindness spreads through society as an airborne virus. The government mobilizes and begins to quarantine anyone showing signs of blindness. As people are corralled and moved into hospital-like prisons, one group isn’t fazed by this epidemic: those who were blind before the epidemic started. They take over camps full of the newly blind, who are inhibited by the new way they are forced to navigate the world. The already blind become the leaders of the new sightless society.

  The blind, once at a terrific disadvantage in a sighted world, now have a terrific advantage. For them, blindness is nothing new. They know how to get around, how to cope, how to navigate a world that no one can see anymore.

  In my mind’s eye, I see my ricochet work style, and that of young workers who text on their phones, type on a computer, share videos and images, listen to music, and talk all at once, as being much like that of the blind in Saramago’s work. Our new way of working, once a disability, now has the potential to be clearly valuable. Already, you often see job descriptions with prerequisites such as “must have the ability to multitask,” which essentially translates to “Can you do ten things at once?” A quick search of the word “multitask” on the online job board Monster.com generates thousands of results asking for people who can manage X, Y, and Z at the same time.

  To me, it seems feasible that the members of a generation growing up doing their homework while simultaneously engaging in a number of other activities will come into the workforce and engage with their office duties in the same way. It is not unlike earlier generations coming into the workforce and displacing the pen with newfangled typewriters and then displacing the typewriter with the personal computer. But is this just wishful thinking? Is our jumping from task to task truly efficient—an unappreciated ability in the modern world—or merely engaging enough to lead us to think we are accomplishing plenty when in fact we are mostly spinning our wheels?

  The result matters a lot today, when we are wirelessly connected to anywhere in the world. Every year, the gizmos we carry in our pockets can do more and more things, encouraging us to take advantage of them not just when we have a break but while we’re walking down the street or driving. The temptation is to jump at every beep and buzz of the cell phone and every ding of the computer inbox, to answer every communication right away, and, for many of my generation, to search for the answer to every question that pops randomly into our heads.

  But we already know there are some huge risks with this impulsive behavior, particularly when we combine just about any cognitive task with driving, which requires alertness and quick reaction times. Although I tend to engage in multiple activities when I work, I would never do that while operating a vehicle.2 As my colleague Matt Richtel at the Times wrote in 2009, the Virginia Tech Transportation Institute put video cameras in the cabs of long-haul truckers and watched over eighteen months as the drivers talked and texted their way from location to location. The findings: Texters’ risk of a collision was twenty-three times greater than that of those who were simply driving. Another study of college students in a driving simulator found that the young people were eight times more likely to crash while they were typing on their phones.

  Is this a problem of learning and practice? Does it matter if you are young or old, tech-savvy or geek-averse? Is it possible that we could build gray and white matter so that we can efficiently handle these various tasks safely at one time? Or is our wiring such that we truly cannot master several cognitive tasks in tandem? If so, will we need to schedule them the way we do with the gym or television shows, setting aside, say, Twitter time outside of our work or our driving in order to devote our attention to them?

  Yet just because we can’t text and drive, does that mean we can’t chat with friends online or text while doing homework or other tasks? Does it also mean that we can’t consume truly multimedia storytelling, watching videos, interacting with graphics or images, leaving comments for friends, and still absorbing the information in a thorough way? I know this is the way I work, and quite successfully.

  To find out if I was the exception to the rule, I went on my own quest, consulting leading neuroscientists and cognitive psychologists on the human potential for multitasking. I hoped that by corralling the work of these scientific experts, I could share their knowledge as it applies to the changing media landscape and whether we will have to change the way we tell and consume stories. So I asked them, Sure, we can walk and chew gum at the same time, but can we productively type, talk, and read all at the same time? And does it make us more effective or creative?

  The Cocktail-Party Problem

  The thorny problem of multitasking has been a workplace challenge for some time, I learned, dating back more than half a century, when commercial air traffic started to increase rapidly. In the early 1950s, air traffic controllers faced a serious problem. Airplane traffic was soaring and the controllers were handling a growing number of planes in the sky. But many control towers, sometimes with several people managing multiple planes, operated on a single loudspeaker. Information about individual planes came in all at once, a cacophony of crucial information that was hard to decipher. Pilots would begin their descent to airports and announce their flight patterns by radio to the control tower. But the messages from individual pilots merged, and controllers
had to decipher this blended jumble of monotone voices while trying to guide planes in for safe landings. It was more and more difficult for controllers to follow a single plane amid the alphabet soup of call letters coming in.

  “North Tower, this is Boeing 737 Alpha with a Mercer Departure at Alpha Niner Delta. Altitude 400 feet, moving at 383 knots.” This type of jumble came from multiple planes, sometimes at the same time. There was a lot of information about one flight for a controller to absorb, and, worse, the potential for disaster was enormous.

  In the 1950s, when Colin Cherry, a well-known British cognitive psychologist, heard about this problem, he began to wonder how people generally distinguish among multiple voices, such as the individual voices at a party. A field of research developed around what came to be known as the cocktail-party problem.3

  It’s a great question: How do people at a noisy cocktail party hear their own names called out by a friend or easily converse with one person while ignoring the noisy discussion of surrounding guests? The question that Cherry and other researchers explored was: If you can hear your name being called out and engage in discussion at a noisy cocktail party, why can’t an air traffic controller distinguish between two audio inputs simultaneously?

  To study the cocktail-party problem, Cherry devised a number of different audio tests. For the first set, he recorded one person reading two different texts and played both recordings at the same time to individual listeners to see if they could tell one from the other. The subjects were asked to attend to one of the messages and separate the two different topics they heard. The outcome showed, Cherry wrote in the 1950s, that although “the results were a Babel, nevertheless the messages may be separated.” The people were able to focus with one ear and allow the other ear to push the competing content aside—perhaps much the way a parent carries on a conversation with one ear while constantly listening with the other ear to children’s play (or fights) in another room.

  Cherry performed numerous variations on this test, using different languages, phrases, and accents to determine when pairs of voices were distinguishable and when they weren’t. In another series of tests, he put headphones on people so that he could direct one message to the right ear and another to the left ear. As the tests progressed, he gradually varied the inputs while encouraging the participants to ignore one ear and focus on the other, just as they would at a cocktail party.

  At first he tried a barrage of wacky ideas, such as playing audio into the left ear in German spoken by an Englishman. In this test, subjects were asked to decipher what they heard. Cherry later experimented with accents, switched between male and female voices, and even reversed playback of prerecorded audio. Certain attributes went completely unnoticed. Some were noticed quickly by most participants.

  He theorized that certain factors help us differentiate between multiple sounds, including the direction the voices come from and the visibility of people’s lips. Other distinctions included things as simple as a male or female voice, subject matter, accents, and a difference in pitch.

  Cherry didn’t figure out the inner workings of the brain and how it is capable of paying attention at a cocktail party while filtering out the unimportant pieces of chatter. Instead, he figured out how we filter this information: a variety of factors help us distinguish and sort a tremendous amount of auditory input. Seeing someone’s lips move is a perfect example. Accents, pitch, and the direction of the voice all play crucial roles in deciding what our brain will process. Although Cherry found that it was impossible for most participants to consume two conversations simultaneously, he found that the brain is partially capable of paying attention to other audio inputs even if it doesn’t process and remember all the information.

  Some years later, as research progressed in this area, key experiments found that people are more capable of understanding multiple sound inputs when the inputs are drastically simplified.4 For example, if people hear the word “bread” in one ear and a somewhat expected word, such as “knife” (making up “bread knife”), in the opposite ear, they can comprehend through both ears. But if they hear “bread” in one ear and something completely off topic, such as “carburetor,” in the other, they’ll be much less likely to comprehend or retain the two parallel streams. These later experiments, which were performed by the psychologist Donald Broadbent, showed that “messages containing little information can be dealt with simultaneously, while those with high information content may not.” Or as Broadbent said in one of his research papers that was cited by MIT computer scientists, “the statement ‘one cannot do two tasks at once’ depends on what is meant by ‘task.’ ”

  Research around the cocktail-party problem was initially aimed not at solving the mystery of multitasking but at helping computers understand sounds, a goal that still hasn’t been fully achieved. Sixty years later, researchers are still trying to completely understand the cocktail-party problem and what is actually happening in our brains as we hear multiple sounds. Even in 2005, a paper in the MIT Press journal Neural Computation noted that “it seems fair to say that a complete understanding of the cocktail party phenomenon is still missing and the story is far from complete; the enigma about the marvelous auditory perception capability of human beings remains a mystery.”5

  What doesn’t remain a mystery, and what research about the cocktail-party problem tells us, is that our brains somehow can discern multiple inputs at once. Our ability to multitask is not a binary question of yes or no. It depends drastically on the task at hand. The fact that we can’t drive a car and send a text message safely at the same time doesn’t mean that we can’t engage in multiple conversations in chat windows online or even consume a new kind of book that includes audio, video, and commentary. As these studies show, if the content relates, its parts can be consumed at the same time and might even be able to tell a more engaging story.

  Blink. Don’t Blink.

  The cocktail-party problem was first researched nearly sixty years ago. Brain research has since been catapulted into the mainstream, and there have been many thousands of new studies and findings on the inner workings of the brain. To understand the multitasking debate, especially when it comes to storytelling, I found I needed a better understanding of how the brain works. I was told over and over by many neuroscientists that first and foremost researchers still don’t know a whole lot about what goes on between a person’s ears. As Richard Haier, who performed the Tetris studies, noted: “One [thing about the brain], it’s really exciting to do brain research, and, two, we don’t know anything about the brain.” One neuroscientist pointed out that we still don’t understand how the brain can tell the hand to pick up a glass of water and bring it to the lips.

  That said, we are starting to understand small bits and pieces of the brain and how this applies to the future of storytelling. The following studies help paint a better picture of how our brains work in some of these scenarios.

  In the early 1990s, Jane Raymond, a professor of psychology at Bangor University in Wales, wanted to understand how our eyes and brains work together and how well they process information. Working with other researchers, she discovered that human brains can move only so fast before they simply miss something. At certain speeds, the brain just doesn’t process information sent by the eyes.

  Raymond and her colleagues named the phenomenon the attentional blink.6 These blinks aren’t information that is missed by the eyes as they send messages to the brain. Rather, the brain itself actually appears to blink.

  Raymond used a testing process called RSVP, for “rapid serial visual presentation,” which shows shapes or letters in rapid succession, fast enough that the images change up to ten times in a second. She found that at certain rapid speeds, the brain misses the next image. It doesn’t even register the event. It’s as if the brain were actually blinking.

  Researchers in neuroscience labs around the world have studied attentional blink over the last two decades to try to understand the significance of brains missing bits of information and seeing certain content only when it is delivered at a limited tempo. One key conclusion is that some tasks truly limit our brains’ ability to do two things at once—though they may be able to do two things in very rapid succession, so quickly that you could hardly tell that the actions didn’t happen simultaneously.

  Paul Dux, a cognitive psychologist now based at the University of Queensland in Australia, wanted in particular to know whether we could train our brains to move faster, just as video games can improve our response times and awareness.

  Dux describes the brain as a “pretty advanced processing system between our ears,” capable of doing amazing things—even tasks that a computer may never be able to perform. At the same time, he notes, we have severe impairments. “If you’re driving a car and trying to talk on your cell phone at the same time, you simply can’t do that successfully,” he says. “We also find it very difficult to tend to two visual tasks, or interact with more than a couple of objects at a time.”

  Most of the time, he has concluded, “you just can’t perform multiple tasks, even if they’re very, very simple.”

  But, he wondered, maybe we’ve just never been asked to perform these kinds of multiple tasks at once. Basing his hypothesis on past research, he asked, “If we practiced multitasking, could we become more capable? Could we improve our abilities?”

  Working with another neuroscientist, René Marois of Vanderbilt University, Dux asked participants to try to perform two very simple tasks simultaneously.7 For example, they showed participants one of two colored disks on a computer screen. The subjects were asked to press a button with the right hand’s index finger when they saw one color and with the middle finger when they saw the other. While the participants were paying attention to the colored disks on the screen, they were also asked to listen to differently pitched sounds and notify the scientists when they heard a high or low pitch.

 
