The Digital Divide

by Mark Bauerlein


  The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.

  When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.

  The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.

  Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.

  About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.

  More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

  Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

  Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

  The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.

  Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

  Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

  Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

  The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.

  Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).

  The arrival of Gutenberg’s printing press, in the fifteenth century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.

  So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.

  If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:

  I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedrallike” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”

  As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”

  I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

  Gary Small and Gigi Vorgan

  Your Brain Is Evolving Right Now

  Excerpted from iBrain (pp. 1–22).

  GARY SMALL is the Parlow-Solomon Professor on Aging at the David Geffen School of Medicine at UCLA and Director of the UCLA Center on Aging. He has written more than 500 scientific works. Scientific American magazine named him one of the world’s top innovators in science and technology. He is also the author or coauthor of five popular books, including The Memory Bible (2003) and iBrain: Surviving the Technological Alteration of the Modern Mind (2008). More information at www.DrGarySmall.com.

  GIGI VORGAN wrote, produced, and appeared in numerous feature films and television projects before joining her husband, Dr. Gary Small, to cowrite The Memory Bible. She also coauthored with him The Memory Prescription (2005), The Longevity Bible (2007), The Naked Lady Who Stood on Her Head: A Psychiatrist’s Stories of His Most Bizarre Cases (2010), and iBrain: Surviving the Technological Alteration of the Modern Mind. Contact: gigi@vorgan.com.

  THE CURRENT EXPLOSION of digital technology not only is changing the way we live and communicate but is rapidly and profoundly altering our brains. Daily exposure to high technology—computers, smartphones, video games, search engines like Google and Yahoo—stimulates brain cell alteration and neurotransmitter release, gradually strengthening new neural pathways in our brains while weakening old ones. Because of the current technological revolution, our brains are evolving right now—at a speed like never before.

  Besides influencing how we think, digital technology is altering how we feel, how we behave, and the way in which our brains function. Although we are unaware of these changes in our neural circuitry or brain wiring, these alterations can become permanent with repetition. This evolutionary brain process has rapidly emerged over a single generation and may represent one of the most unexpected yet pivotal advances in human history. Perhaps not since Early Man first discovered how to use a tool has the human brain been affected so quickly and so dramatically.

  Television had a fundamental impact on our lives in the past century, and today the average person’s brain continues to have extensive daily exposure to TV. Scientists at the University of California, Berkeley, recently found that on average Americans spend nearly three hours each day watching television or movies, which is more time than they spend on all leisure physical activities combined. But in the current digital environment, the Internet is replacing television as the prime source of brain stimulation. Seven out of ten American homes are wired for high-speed Internet. We rely on the Internet and digital technology for entertainment, political discussion, and even social reform, as well as for communication with friends and coworkers.

  As the brain evolves and shifts its focus toward new technological skills, it drifts away from fundamental social skills, such as reading facial expressions during conversation or grasping the emotional context of a subtle gesture. A Stanford University study found that for every hour we spend on our computers, traditional face-to-face interaction time with other people drops by nearly thirty minutes. As the brain’s neural circuitry controlling human contact weakens, our social interactions may become awkward, and we may misinterpret, or even miss, subtle nonverbal messages. Imagine how this continued slipping of social skills might affect an international summit meeting ten years from now, when a misread facial cue or a misunderstood gesture could make the difference between escalating military conflict and peace.

  The high-tech revolution is redefining not only how we communicate but how we reach and influence people, exert political and social change, and even glimpse into the private lives of coworkers, neighbors, celebrities, and politicians. An unknown innovator can become an overnight media magnet as news of his discovery speeds across the Internet. A cell phone video camera can capture a momentary misstep of a public figure and in minutes it becomes the most downloaded video on YouTube. Internet social networks like MySpace and Facebook have exceeded a hundred million users, emerging as the new marketing giants of the digital age and dwarfing traditional outlets such as newspapers and magazines.

  Young minds tend to be the most exposed, as well as the most sensitive, to the impact of digital technology. Today’s young people in their teens and twenties, who have been dubbed Digital Natives, have never known a world without computers, twenty-four-hour TV news, Internet, and cell phones—with their video, music, cameras, and text messaging. Many of these Natives rarely enter a library, let alone look something up in a traditional encyclopedia; they use Google, Yahoo, and other online search engines. The neural networks in the brains of these Digital Natives differ dramatically from those of Digital Immigrants: people—including all baby boomers—who came to the digital/computer age as adults but whose basic brain wiring was laid down during a time when direct social interaction was the norm. The extent of their early technological communication and entertainment involved the radio, telephone, and TV.

  As a consequence of this overwhelming and early high-tech stimulation of the Digital Native’s brain, we are witnessing the beginning of a deeply divided brain gap between younger and older minds—in just one generation. What used to be simply a generation gap that separated young people’s values, music, and habits from those of their parents has now become a huge divide resulting in two separate cultures. The brains of the younger generation are digitally hardwired from toddlerhood, often at the expense of neural circuitry that controls one-on-one people skills. Individuals of the older generation face a world in which their brains must adapt to high technology or they’ll be left behind—politically, socially, and economically.

 
