Is the Internet Changing the Way You Think?

by John Brockman


  Today, most people recognize that they are using the Internet only when they are interacting with a computer screen. They are less likely to appreciate that they are using the Internet while talking on the telephone, watching television, or flying on an airplane. Some air travelers may have recently gotten a glimpse of the truth, for example, upon learning that their flights were grounded due to a router failure in Salt Lake City, but for most of them this was just another inscrutable annoyance. Most people long ago gave up trying to understand how technical systems work. This is a part of how the Internet is changing the way we think.

  I want to be clear that I am not complaining about technical ignorance. In an Internet-connected world, it is almost impossible to keep track of how systems actually function. Your telephone conversation may be delivered over analog lines one day and by the Internet the next. Your airplane route may be chosen by a computer, a human being, or (most likely) some combination of both. Don’t bother asking, because any answer you get is likely to be wrong.

  Soon no human will know the answer. More and more decisions are made by the emergent interaction of multiple communicating systems, and these component systems themselves are constantly adapting, changing the way they work. This is the real impact of the Internet: By allowing adaptive complex systems to interoperate, the Internet has changed the way we make decisions. More and more, it is not individual humans who decide but an entangled, adaptive network of humans and machines.

  To understand how the Internet encourages this interweaving of complex systems, you need to appreciate how it has changed the nature of computer programming. Back in the twentieth century, programmers could exercise absolute control within a bounded world with precisely defined rules. They were able to tell their computers exactly what to do. Today, programming usually involves linking together complex systems developed by others without understanding exactly how they work. In fact, depending on the internal workings of other systems is considered poor programming practice, because those internals are expected to change.

  Consider, as a simple example, a program that needs to know the time of day. In the unconnected world, computers often asked the operator to type in the time when they were powered on. They then kept track of passing time by counting ticks of an internal clock. Programmers often had to write their own program to do this, but in any case they understood exactly how it worked. Once computers became connected through the Internet, it made more sense for computers to find out the time by asking one another, so something called Network Time Protocol was invented. Most programmers are aware that it exists, but few understand it in detail. Instead, they call a library routine—a routine that queries the operating system, which automatically invokes the Network Time Protocol when required.
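
  As a concrete illustration of how little of this the programmer ever sees, the following sketch (in Python, which the essay itself does not mention) is roughly all the code involved: one library call, with the operating system and the Network Time Protocol hidden beneath it.

```python
# Minimal sketch: the programmer asks a library for the current time and never
# touches NTP directly. (Python is used here purely for illustration.)
from datetime import datetime, timezone

now = datetime.now(timezone.utc)  # one call; the operating system's clock answers
print(now.isoformat())            # behind the scenes, the OS may keep that clock synchronized via NTP
```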

  It would take me too long to explain the workings of Network Time Protocol and how it corrects for variable network delays and takes advantage of a partially layered hierarchy of network-connected clocks to find the time. Suffice it to say that it’s complicated. Besides, I would be describing version 3 of the protocol, and your operating system is probably already using version 4. Even if you’re a programmer, there’s no need for you to bother to understand how it works.
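
  For readers who do want a taste of it, the central trick for correcting network delay can be sketched in a few lines. This is a simplified illustration rather than the protocol itself: a client and a server exchange four timestamps, and, assuming the delay is roughly the same in each direction, the client can estimate both the round-trip delay and how far off its own clock is.

```python
# Simplified sketch of the timestamp exchange at the heart of NTP (not the full
# protocol, which adds filtering, a hierarchy of clocks, and much more).
# t1: client sends its request     t2: server receives it
# t3: server sends its reply       t4: client receives the reply
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    delay = (t4 - t1) - (t3 - t2)           # round trip minus time spent at the server
    offset = ((t2 - t1) + (t3 - t4)) / 2    # clock error, assuming symmetric network delay
    return offset, delay

# Example: the client clock runs 0.5 s behind the server; each network leg takes 0.1 s.
print(offset_and_delay(100.0, 100.6, 100.6, 100.2))  # -> (0.5, 0.2)
```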

  Now consider a program that is directing delivery trucks to restock stores. It needs to know not just the time of day but also the locations of the trucks in the fleet, the maps of the streets, the coordinates of the warehouses, the current traffic patterns, and the inventories of the stores. Fortunately, the program can keep track of all of this changing information by connecting to other computers through the Internet. The program can also offer services to other company systems, which need to track the location of the packages, pay the drivers, and schedule maintenance of the trucks. All these systems will depend on one another to provide information, without having to understand exactly how the information is computed. These communicating systems are being constantly improved and extended, evolving in time.
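
  A hypothetical sketch of what such a program might look like from the inside (every service name, address, and response field below is invented for illustration): it asks other systems questions over the network and combines their answers, without any knowledge of how those answers are computed.

```python
# Hypothetical sketch: all URLs, service names, and response fields are invented.
import json
import urllib.request

def ask(url: str) -> dict:
    """Ask another system a question; how it computes its answer is its own business."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def plan_restock(store_id: str) -> dict:
    inventory = ask(f"https://inventory.example.com/stores/{store_id}")  # invented endpoint
    trucks    = ask("https://fleet.example.com/trucks/available")        # invented endpoint
    traffic   = ask("https://traffic.example.com/conditions")            # invented endpoint
    # Each upstream system keeps changing on its own schedule; this program depends
    # only on the questions it can ask, not on how the answers are produced.
    return {
        "store": store_id,
        "truck": trucks["trucks"][0]["id"],
        "items": inventory["low_stock"],
        "route_note": traffic["summary"],
    }
```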

  Now multiply this picture a millionfold, to include not just one fleet of trucks but all the airplanes, gas pipelines, hospitals, factories, oil refineries, mines, and power plants, not to mention the salespeople, advertisers, media distributors, insurance companies, regulators, financiers, and stock traders. You will begin to perceive the entangled system that makes so many of our day-to-day decisions. Although we created it, we did not exactly design it. It evolved. Our relationship to it is similar to our relationship to our biological ecosystem. We are codependent and not entirely in control.

  We have embodied our rationality within our machines and delegated to them many of our choices—and thereby created a world beyond our understanding. Our current century began on a note of uncertainty, as we worried about how our machines would handle the transition to the new millennium. Now we are attending to a financial crisis caused by the banking system having miscomputed risks, and to a debate on global warming in which experts argue not so much about the data as about what the computers predict from the data. We have linked our destinies not only to one another across the globe but also to our technology. If the theme of the Enlightenment was independence, ours is interdependence. We are now all connected, humans and machines. Welcome to the dawn of the Entanglement.

  The Bookless Library

  Nicholas Carr

  Author, The Shallows: What the Internet Is Doing to Our Brains

  As the school year began last September, Cushing Academy, an elite Massachusetts prep school that has been around since Civil War days, announced that it was emptying its library of books. In place of the thousands of volumes that had once crowded the building’s shelves, the school was installing, it said, “state-of-the-art computers with high-definition screens for research and reading,” as well as “monitors that provide students with real-time interactive data and news feeds from around the world.” Cushing’s bookless library would become, boasted headmaster James Tracy, “a model for the twenty-first-century school.”

  The story gained little traction in the press—it came and went as quickly as a tweet—but to me it felt like a cultural milestone. A library without books would have seemed unthinkable just twenty years ago; today the news seems almost overdue. I’ve made scores of visits to libraries over the last couple of years. Every time, I’ve seen a greater number of people peering into computer screens than thumbing through pages. The primary role played by libraries today seems to have already shifted from providing access to printed works to providing access to the Internet. There is every reason to believe that the trend will only accelerate.

  “When I look at books, I see an outdated technology,” Tracy told a reporter from the Boston Globe. His charges would seem to agree. A sixteen-year-old student at the school took the disappearance of the library books in stride. “When you hear the word ‘library,’ you think of books,” she said. “But very few students actually read them.”

  What makes it easy for an educational institution like Cushing to jettison its books is the assumption that the words in books are the same whether they’re printed on paper or formed of pixels on a screen. A word is a word is a word. “If I look outside my window and I see my student reading Chaucer under a tree,” said Tracy, giving voice to this common view, “it is utterly immaterial to me whether they’re doing so by way of a Kindle or by way of a paperback.” The medium, in other words, doesn’t matter.

  But Tracy is wrong. The medium does matter. It matters greatly. The experience of reading words on a networked computer, whether it’s a PC, an iPhone, or a Kindle, is very different from the experience of reading those same words in a book. As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. It doesn’t shield us from environmental distractions; it adds to them. The words on a computer screen exist in a welter of contending stimuli.

  The human brain, science tells us, adapts readily to its environment. The adaptation occurs at a deep biological level, in the way our nerve cells or neurons connect. The technologies we think with, including the media we use to gather, store, and share information, are critical elements of our intellectual environment, and they play important roles in shaping our modes of thought. That fact not only has been proved in the laboratory but also is evident from even a cursory glance at the course of intellectual history. It may be immaterial to Tracy whether a student reads from a book or a screen, but it is not immaterial to that student’s mind.

  My own reading and thinking habits have shifted dramatically since I first logged on to the Web fifteen years ago or so. I now do the bulk of my reading and researching online. And my brain has changed as a result. Even as I’ve become more adept at navigating the rapids of the Net, I have experienced a steady decay in my ability to sustain my attention. As I explained in the Atlantic in 2008, “What the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.”* Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.

  There are as many human brains as there are human beings. I expect, therefore, that reactions to the Net’s influence, and hence to this year’s Edge question, will span many points of view. Some people will find in the busy interactivity of the networked screen an intellectual environment ideally suited to their mental proclivities. Others will see a catastrophic erosion in the ability of human beings to engage in calmer, more meditative modes of thought. A great many likely will be somewhere between the extremes, thankful for the Net’s riches but worried about its long-term effects on the depth of individual intellect and collective culture.

  My own experience leads me to believe that what we stand to lose will be at least as great as what we stand to gain. I feel sorry for the kids at Cushing Academy.

  The Invisible College

  Clay Shirky

  Social and technology network topology researcher; adjunct professor, New York University Graduate School of Interactive Telecommunications Program (ITP); author, Cognitive Surplus

  The Internet has been in use by a majority of citizens in the developed world for less than a decade, but we can already see some characteristic advantages (dramatically improved access to information, very large-scale collaborations) and disadvantages (interruption-driven thought, endless distractions). It’s tempting to try to judge the relative value of the network on the way we think by deciding whether access to Wikipedia outweighs access to tentacle porn or the other way around.

  It is our misfortune to live through the largest increase in expressive capability in the history of the human race—a misfortune because surplus is always more dangerous than scarcity. Scarcity means that valuable things become more valuable, a conceptually easy change to integrate. Surplus means that previously valuable things stop being valuable, which freaks people out.

  To make a historical analogy with the last major spread of new publishing technology, you could earn a living in 1500 simply by knowing how to read and write. The spread of those abilities in the subsequent century had the curious property of making literacy both more essential and less professional; literacy became critical at the same time as the scribes lost their jobs.

  The same thing is happening with publishing. In the twentieth century, the mere fact of owning the apparatus to make something public—whether a printing press or a TV tower—made you a person of considerable importance. Today, though, publishing, in the sense of making things public, is becoming similarly deprofessionalized. YouTube is now in the position of having to stop eight-year-olds from becoming global publishers of video. The mere fact of being able to publish to a global audience is the new literacy—formerly valuable, now so widely available that you can’t make any money with the basic capability anymore.

  This shock of inclusion, where professional media give way to participation by 2 billion amateurs (a threshold we will cross this year), means that the average quality of public thought has collapsed; when anyone can say anything anytime, how could it not? If the only consequence of this influx of amateurs is the destruction of existing models for producing high-quality material, we would be at the beginning of another Dark Ages.

  So it falls to us to make sure that that isn’t the only consequence.

  To the question “How is the Internet changing the way you think?” the right answer is “Too soon to tell.” This isn’t because we can’t yet see some of the obvious effects but because the deep changes will be manifested only when new cultural norms shape what the technology makes possible.

  To return to the press analogy, printing was a necessary but not sufficient input to the scientific revolution. The Invisible College, the group of natural philosophers who drove the original revolution in chemistry in the mid-1600s, were strongly critical of the alchemists, their intellectual forebears, who for centuries had made only fitful progress. By contrast, the Invisible College put chemistry on a sound scientific footing in a matter of a couple of decades, one of the most important intellectual transitions in the history of science. In the 1600s, though, a chemist and an alchemist used the same tools and had access to the same background. What did the Invisible College have that the alchemists didn’t?

  They had a culture of sharing. The problem with the alchemists wasn’t that they failed to turn lead into gold; the problem was that they failed uninformatively. Alchemists were obscurantists, recording their work by hand and rarely showing it to anyone but disciples. In contrast, members of the Invisible College shared their work, describing and disputing their methods and conclusions so that they all might benefit from both successes and failures and build on one another’s work.

  The chemists were, to use the avant-garde playwright Richard Foreman’s phrase, “pancake people.” They abandoned the spiritual depths of alchemy for a continual and continually incomplete grappling with what was real, a task so daunting that no one person could take it on alone. Though the history of science we learn as schoolchildren is often marked by the trope of the lone genius, science has always been a networked operation. In this, we can see a precursor to what’s possible for us today. The Invisible College didn’t just use the printing press as raw capability but created a culture that used the press to support the transparency and argumentation that science relies on. We have the same opportunity.

  As we know from arXiv.org, the twentieth-century model of publishing is inadequate to the kind of sharing possible today. As we know from Wikipedia, post hoc peer review can support astonishing creations of shared value. As we know from the search for Mersenne primes, whole branches of mathematical exploration are now best taken on by groups. As we know from open-source efforts such as Linux, collaboration between loosely joined parties can work at scales and over time frames previously unimagined. As we know from NASA clickworkers, groups of amateurs can sometimes replace single experts. As we know from www.patientslikeme.com, patient involvement accelerates medical research. And so on.

  The beneficiaries of the system in which making things public was a privileged activity—academics, politicians, reporters, doctors—will complain about the way the new abundance of public thought upends the old order, but those complaints are like keening at a wake: The change they are protesting is already in the past. The real action is elsewhere.

  The Internet’s primary effect on how we think will reveal itself only when it affects the cultural milieu of thought, not just the behavior of individual users. The members of the Invisible College did not live to see the full flowering of the scientific method, and we will not live to see what use humanity makes of a medium for sharing that is cheap, instant, and global (both in the sense of “comes from everyone” and in the sense of “goes everywhere”). We are, however, the people who are setting the earliest patterns for this medium. Our fate won’t matter much, but the norms we set will.

  Given what we have today, the Internet might be seen as the Invisible High School, with a modicum of educational material in an ocean of narcissism and social obsessions. We could, however, also use it as an Invisible College, the communicative backbone of real intellectual and civic change. To do this will require more than technology. It will require us to adopt norms of open sharing and participation, fitted to a world in which publishing has become the new literacy.

  Net Gain

  Richard Dawkins

  Evolutionary biologist; emeritus Professor of the Public Understanding of Science, Oxford; author, The Greatest Show on Earth

  If, forty years ago, the Edge question had been “What do you anticipate will most radically change the way you think during the next forty years?” my mind would have flown instantly to a then-recent article in Scientific American (September 1966) about Project MAC. Nothing to do with the Apple Mac, which it long predated, Project MAC was an MIT-based cooperative enterprise in pioneering computer science. It included the circle of AI innovators surrounding Marvin Minsky, but, oddly, that was not the part that captured my imagination. What really excited me, as a user of the large mainframe computers that were all you could get in those days, was something that nowadays would seem utterly commonplace: the then-astonishing fact that up to thirty people, from all around the MIT campus and even from their homes, could simultaneously log on to the same computer, simultaneously communicate with it and with each other. Mirabile dictu, the coauthors of a paper could work on it simultaneously, drawing upon a shared database in the computer, even though they might be miles apart. In principle, they could be on opposite sides of the globe.

 
