When We Are No More
A recent study on the “Google Effects on Memory” shows that we are adapting quite nimbly to having so much information at our fingertips. We are able to distinguish between information that is available online and information that is not, which we therefore need to remember ourselves. “Just as we learn … who knows what in our families and offices, we are learning what the computer ‘knows’ and when we should attend to where we have stored information in our computer-based memories.” Like graduate students who learn how to consume a dozen or more books a week in preparation for a life of intensive learning, we do not remember the content of stored information so much as where information is stored and how to find it. The authors shrewdly point out that “we have become dependent on [computers] to the same degree we are dependent on all the knowledge we gain from our friends and co-workers—and lose if they are out of touch. The experience of losing our Internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.” Outsourcing more and more knowledge to computers will be no better or worse for us personally and collectively than putting ink on paper. What is important in the digital age, as it was in the print era, is that we maintain an equilibrium between managing our personal memory and assuming responsibility for collective memory. In the twenty-first century that means building libraries and archives online that are free and open to the public to complement those on the ground.
How do we master memory in the digital age of abundance? It will start with retooling literacy for the digital age and updating public policies to ensure investment in long-term institutions capable of securing memory into the future. It will not happen in one generation, or even two. But it is time to lay the foundations and imagine the memory systems that will be in place—and by which we will be remembered and judged—when we are no more.
LITERACY IN THE DIGITAL AGE
In April 2010, the Library of Congress announced that Twitter would provide “public tweets from the company’s inception through the date of the agreement, an archive of tweets from 2006 through April, 2010. Additionally, the Library and Twitter agreed that Twitter would provide all public tweets on an ongoing basis under the same terms” for the purpose of archiving them as historical sources. As such, Twitter is valuable less as a source of individual biographical information than in the aggregate, as a database for large-scale analysis. One of the issues raised when the Twitter archive was donated to the Library of Congress was the matter of who owned the data and had the right to decide what to do with it. Many people were unpleasantly surprised to realize that they did not control their own Twitter streams. Some people also questioned the value of Twitter and its place in the national library, alongside the papers of George Washington and George Gershwin. A year later, though, headlines were ablaze with stories of the Arab Spring. Reports highlighted the role that social media platforms including Twitter played in helping to galvanize the attention of the world and organize protesters on the ground. It became increasingly clear that, mined at scale, social media data are a valuable source of information about social behavior, political actions, public health trends, and much else. We rely today on large-scale data analysis for most activities, from weather forecasting and figuring the odds on the Super Bowl to making tax assessments based on the census and maintaining our air traffic control systems.
Large-scale data analysis demands a lot of data, and a lot of the data that come from us are used without our permission or even knowledge. Search engines know where a search originates, keep records of the searches, use those data for purposes never revealed to the searcher, and hand them over to authorities when required. We may find it extremely convenient, when we make purchases on the web or search for information, to have our machine “remember” where we search and what our shopping preferences are. We are less enchanted to realize that those data can be accessed by intelligence agencies to trace our political affiliations and networks of friends, and exploited by rogue hackers for identity theft or fraud. These practices are of concern to both citizens and consumers, but they also threaten to freeze the advancement of knowledge that may benefit us all. The oceanographer James McCarthy warned in an address as president of the American Association for the Advancement of Science:
More important than technical approaches [to data collection and mining] will be public discussion about how to rewrite the rules of data collection, ownership, and privacy to deal with a sea change in how much of our lives can be observed, and by whom. Until these issues are resolved, they are likely to be the limiting factor in realizing the potential of these new data to advance our scientific understanding of society and human behavior, and to improve our daily lives.
Literacy in the digital age means achieving autonomy over our choices of what we read and publish, who uses our data and how, and what happens to them over our lifetime and beyond. Literacy online, as in print, begins by learning to read with appropriate skepticism, being able to assess whether something we see is trustworthy or not, and being responsible in our use of data, our own and other people’s. It means learning to interpret search results and how to select among them, knowing which links are “sponsored” by a company that pays for placement at the top and which are not, understanding the basics of how computers encode information and display it, how data are created, collected, and used, and how to ensure the privacy and appropriate use of our data. We should be able to identify the source of information we read and evaluate its truth value, authority, and authenticity. We should be alert to a document’s inconsistent use of tense, spotty noun/verb agreement, inexplicable changes in font, and redundancies as the signature of a text that has been cut and pasted from other sources. These are fundamental skills of reading and writing on the screen as on the page, to be introduced in early education and kept up to date over the decades.
Above all, literacy is about making informed choices about how to spend our time, the only asset in our abundant information economy that is truly scarce. We no longer have to seek information; it seeks us. It follows us wherever we go and, like a pack of yapping dogs, it begs for our attention. We ourselves need to reset the filters that control the flow of information. A well-kept digital brain attic has plenty of room for short-term tasks, but the pathways to long-term memories are always kept clear. Digital autonomy looks very different from the print model. It is not just about lowering barriers to access or outsmarting censors and copyright owners. It is about filtering data for value, creating for ourselves a time for deep absorption, and setting our machines to be on call when we need them but not to intrude on our privacy.
The quest for digital filters began almost immediately after the Internet and World Wide Web entered the public sphere. In the last century, everyone who went online experienced some degree of digital vertigo. The first filters to appear were search engines that promised to sort and organize information for us according to its potential value in answering the specific question we asked. After a short but intense period of competition between several commercial search engines (AltaVista, Lycos, Yahoo!, Ask Jeeves, among others), one company, Google, emerged as the predominant force, offering “to organize the world’s information and make it universally accessible and useful.” Soon after, social media sites like Facebook, LinkedIn, and others (many, like Myspace, now roadkill on the information superhighway) began offering to manage all our social and/or professional activities. Internet commerce sites from Amazon to Zappos compete to monopolize our commercial transactions by offering everything to everybody everywhere ASAP. These commercial sites are designed to drive traffic to certain products and do so by bombarding people with distracting advertisements to get their attention and promising the irresistible reward of instant gratification. By extracting invaluable information from our use data, they create algorithms that predict our desires and streamline production facilities that offer to fulfill them even before we can articulate them—the “if you like this, you will like that” magic of user-driven algorithms. On the one hand, these shortcuts to gratification work for us because they save us so much time. On the other hand, we end up not with more freedom of choice but less, and the results can be easily gamed without our knowledge. The trade-off between choice and convenience is always there. A digitally literate person will be able to recognize when the trade-off arises and decide between choice and convenience for themselves. They will know, for example, what a cookie is and choose whether to allow cookies or turn them off.
In countries lacking free markets, Internet filters are imposed by political regimes. They govern which versions of the past and present realities are available to citizens. Where political powers try to control their population by controlling what they know, think, and believe, there is a lively business in censorship. Controlling the means of distributing information, be it censoring books or blocking IP addresses, is a critical first step. But really serious regimes need much more than censorship to be effective. They need to invent false pasts as well to calibrate expectations of the future. Circulating false information is a well-honed strategy of persuasion, used by political campaigns to smear opponents and by regimes against their enemies. In the Cold War, this came to be known as “disinformation.” Such tactics are deployed in the digital age to wage ideological combat both at home and abroad, just as they were five hundred years ago. Among the earliest uses of printing presses was the production of both antipapal and anti-Lutheran propaganda. Ideological combat provided very good business for struggling early-stage start-up publishing houses.
In market economies, commercialization of “free” communication channels such as Facebook and Twitter sparks debate about a host of economic, political, and social challenges. Overlooked, however, are the potentially serious long-term implications for memory, both individual and collective. The long-term future of collective memory is not the business of commercial companies. By necessity they have a short time horizon, and we cannot expect them to invest adequately in preserving their information assets for the benefit of future generations when these assets no longer produce enough income to pay for their own care and feeding in data archives. The problem for collective memory is not commerce’s narrow focus on quarterly returns, deleterious as that is for any long-term planning. It is that commercial companies come and go. When they are gone, so, too, are all their information assets. Unlike institutions established to serve the public trust, commercial companies have no responsibilities to future generations. The simple solution for preserving commercially owned digital content is for companies to arrange for handoffs of their significant knowledge assets to public institutions. The donation by Twitter of its archive to the Library of Congress is a signal example of the partnership between private and public institutions that will need to become the norm in the digital age.
AVOIDING COLLECTIVE AMNESIA
The abilities to reflect on one’s own behavior, to transcend instinctual reactions, and to make thoughtful, well-reasoned choices are all enabled by a mind with deep temporal perception. Psychologist Daniel Kahneman has dubbed this capacity to move beyond purely instinctual reactions and make conscious choices “slow thinking,” as opposed to the instinctual reactions of “fast thinking.” This distinction between instinctual, unconscious actions and those taken with deliberation is fundamental not only to decision making—the context in which Kahneman discusses it—but also to creating healthy, long-term memories. Organizations are very good at slowing down human thought and reaction time to open up space for deliberation. “Organizations are better than individuals when it comes to avoiding errors, because they naturally think more slowly and have power to impose orderly procedures … An organization is a factory that manufactures judgments and decisions.” Libraries, archives, and museums are necessary for the stewardship of our long-term memory because they are conservative—their job is to conserve—because they proceed in an orderly fashion to make judgments, and because they hold themselves accountable to their publics.
Who has the right to preserve digital content on behalf of the public—present and future? Do we own our personal data—biomedical, demographic, political—and can we control their use? If certain categories of data are private, then are the metadata—the data about those data—private as well? These are not theoretical questions. Today, most of our personal digital memory is not under our control. Whether it is personal data on a commercially owned social media site, e-mails that we send through a commercial service provider, our shopping behaviors, our music libraries, our photo streams, or even the documents on our hard drives written in Word or Pages—they will be inaccessible to us, unreadable, in only a few years. A web-based wedding site will barely outlast the honeymoon and certainly will not be around to share with children and grandchildren in fifty years unless provisions to archive it are made by the wedding party now. A digital condolence book will disappear from the Internet long before the memories of those who still mourn the loss of that person dim. The documents on our hard drives will be indecipherable in a decade. We view our Facebook pages and LinkedIn profiles as intimate parts of ourselves and our identities, but they are also corporate assets. The fundamental purpose of recording our memories—to ensure they live on beyond our brief decades on Earth—will be lost in the ephemeral digital landscape if we do not become our own data managers. The skills to control our personal information over the course of our lives are essential to digital literacy and citizenship.
The marketplace of ideas is now conducted chiefly online. If something cannot be found online, chances are it will disappear from the public mind. The surest way to keep the records of the past readily accessible to the public is to migrate them to digital form, whether that means converting movies shot on Technicolor film to digital formats or digitizing eighteenth-century genealogical records and posting them online. These analog sources still need to be preserved in their original physical formats. But digitization widens access to them, opening them to many new publics.
Libraries, archives, and museums are making many of their collections available to the public online, but this vital service is hampered by chronic underfunding. The value we get from scanning is far greater than digital access alone. We have discovered that a book, manuscript, map, or painting with one value in artifactual form can have multiple novel values when converted into data. One example of digitally augmented value is the corpus of seemingly mundane ships’ logs, written in various crabbed hands over the course of centuries, each documenting the details of overseas voyages. Of course, they have long been important for people writing the history of exploration, trade, and all matters maritime. Recent conversion of these logbooks into digital form gives them new value as a database and makes invaluable historical evidence about the climate and marine ecologies accessible to computer analysis. Who could have guessed the value of the logs’ detailed information about weather, ocean currents, and schools of now-rare fish that once were abundant? Historical information about the climate is difficult to come by and worth its virtual weight in gold for those trying to understand long-term patterns of stability and fluctuation in oceanic and atmospheric conditions. That information has been lying dormant in old documents and logbooks for centuries, accessible only to people who visit the archives to study them. Until now, it has been impossible to read them at scale, to understand what the entire corpus can tell us about long-term trends.
The forensic shift of the nineteenth century led to assembling large-scale collections of unimaginably diverse objects, often fragile and unwieldy, that hold valuable information. Medical history museums have extensive collections of human and animal tissue samples that document the history of disease. Natural history museums have drawers full of birds, bugs, and bones that now can be sampled for DNA to yield information about genetic relationships, often rewriting the genealogical trees of life. The Avian Phylogenomics Project is sequencing the genomes of forty-eight birds from forty-five avian species to construct a family tree for birds. Over 60 percent of the tissues to be sampled will come from natural history museum collections. Institutions once derided as antiquated warehouses full of stuffed specimens turn out to be the Fort Knox of the genetic research era. Some astronomical observatories hold collections of glass plate negatives of the night skies taken in the nineteenth and early twentieth centuries. Here we find records of unique celestial events, such as supernovas and asteroids. One series of images led to a discovery that could never have been anticipated at the time the plates were exposed: the hitherto unknown source of energy we now know as a quasar, first verified in 1962 by studying seventy-year-old glass plate negatives.
Digital information processing not only allows new uses of old sources. It also rescues fragments of memory once thought irretrievably lost. Technologies created at great expense to advance one field of knowledge can yield unintended benefits for others. Take the case of the Large Hadron Collider, an atom smasher that in 2012 detected the Higgs boson, a subatomic particle predicted to exist but never before observed. The imaging technology that recorded traces of the elusive Higgs was adapted to recover the authentic voices of Alexander Graham Bell and others, trapped on unplayable recordings, by “visualizing sound,” capturing the transient acoustic waves left on the materials that “record” them.
Sound recording is a recent technology; the first recording was made in 1860. Despite its youth, in many ways audio is far more vulnerable to decay and loss than parchment manuscripts that have survived for two thousand years. The materials required to fix acoustic vibrations onto physical substrates and the elaborately engineered playback systems required to create sound waves from these substrates are uniquely fragile. Sound waves can be represented three-dimensionally by grooves incised on a flat disc, wave patterns impressed on cylinders coated in wax, or rolls wrapped in tin foil stamped with wave patterns. The problem is that to hear the sounds, a stylus needs to travel across the grooves, and each time it does so, it wears down the disc of lacquer, plastic, aluminum, or shellac, or the cylinder covered in wax or foil. Some are so worn they cannot be played. Others have little deterioration of their grooves but are broken into pieces. What we need in order to hear sounds engraved on objects too fragile to subject to physical replay is a way to capture the information on the surface without making physical contact with it.