Unsuspecting Souls

by Barry Sanders


  In 2008, the Science Museum in London built two replicas of the mathematician Charles Babbage’s Difference Engine No. 2 from Babbage’s own 1847 designs. Babbage designed the machine, which weighs five tons and has eight thousand parts, as a gear-driven calculator that tabulates polynomial functions using Newton’s method of divided differences. Dissatisfied with his first efforts, Babbage went on to develop the more refined Analytical Engine, a programmable mechanical computer. Many technology wonks consider it the real prototype of the modern computer. J. David Bolter, in his book Turing’s Man: Western Culture in the Computer Age, places “Babbage and his protégés, among them the Countess of Lovelace, Byron’s daughter, [as] genuine visionaries. In their writing we often find expressions of a world view fully a century ahead of its time. If the Analytical Machine had been built, it would indeed have been the first computer . . . ”
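
  For readers curious about the principle behind the machine, the sketch below shows, in Python, the method of differences that the engine mechanized: a table of a polynomial produced by nothing more than repeated addition. The function name and the example polynomial are my own illustrations, not details taken from the book or from Babbage’s plans.

```python
# A minimal sketch (not from the book) of the method of finite differences
# that the Difference Engine mechanized: tabulating a polynomial using only
# repeated addition, the one operation a train of gears performs reliably.

def difference_engine_table(initial_column, steps):
    """initial_column holds f(0) and its successive finite differences,
    e.g. [f(0), delta_f(0), delta2_f(0)].  Each step records f and then
    adds every higher-order difference into the order just below it."""
    cols = list(initial_column)
    values = []
    for _ in range(steps):
        values.append(cols[0])
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]  # lower orders absorb the higher ones
    return values

# Example: f(x) = x^2 has f(0) = 0, first difference 1, constant second difference 2.
print(difference_engine_table([0, 1, 2], 6))  # [0, 1, 4, 9, 16, 25]
```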

  MUCH LIKE OUR COMPUTERS, we operate only with great difficulty in situations of ambiguity, or opt out of those situations entirely. We need sanctions for our every action. Even in the case of torture, we now know that several memoranda from the Justice Department gave wide latitude to the CIA to use interrogation methods that might otherwise be prohibited under international law. So while the Geneva Conventions prohibit “outrages upon personal dignity,” for example, the Justice Department made clear, in several memoranda, that it had not drawn a precise line in deciding which interrogation methods would violate that standard. Those memos sound like Orwell wrote them, except this statement comes from a deputy assistant attorney general: “The fact that an act is undertaken to prevent a threatened terrorist attack, rather than for the purpose of humiliation or abuse, would be relevant to a reasonable observer in measuring the outrageousness of the act.” Newspeak or Uniquack—who knows?

  The text as metaphor for organizing our interior lives has been replaced by the computer screen, a medium on which we write with letters of light, which we can make totally disappear with a touch of a delete key. The computer has brought us to the completion of a cycle of disembodiment, of disappearance at perhaps a deeper level. Not only have we become ghosts, but, writing with light, we have also turned into ghostwriters. One of the most permanent of our activities, writing, is now a tenuous and tentative activity.

  As a people, we have grown accustomed to the act of deletion; we use it with ease. Most insidiously, deletion has wormed its way into the language as one of its principal metaphors. Imagine anyone, after the experience of Hiroshima and Nagasaki, saying that he or she would even contemplate eliminating an entire nation—wiping it, as people say in an effort to make the reality more remote, “off the map.” But that’s just what Hillary Clinton said about Iran during the presidential campaign in April of 2008. Responding to a question asking how she would respond as president if Iran were to use nuclear weapons (which it does not have) on Israel (which does have them), she proudly declared that she would “totally obliterate them.” I understand that answer as genocide on the largest scale—the eradication of every man, woman, and child in the nation, to say nothing of its flora and fauna. The computer does not have a deterrence button—too bad for us—only one marked delete.

  We can understand, quite graphically, the extent to which the computer has taken over our “lives” by looking back to the end of 1999. The technological revolution unmoored the text from the book, and with it went the familiar metaphors of literacy: reading character, as Freud did in the nineteenth century; or reading minds, as Houdini did during that same time; or reading movies, as contemporary audiences do. Ivan Illich points out that the “first to use writing no longer as a metaphor but as an explanatory analogy was . . . a physicist, the Jewish emigrant Erwin Schroedinger. . . . He suggested that genetic substance could best be understood as a stable text whose occasional variations had to be interpreted as textual variations. As a physicist, Schroedinger stepped completely beyond his domain formulating this biological model.”10 Maybe more than any other person, Schroedinger understood those new, post-literate people and had just the right scientific background to comment on what the future held for them.

  Those metaphors of literacy slowly became disembedded from the culture during the course of the twentieth century. Such a rupture could take place because the book, as the crucible for carrying knowledge in the culture, was slowly dying out, only to be replaced by the computer and all its affiliates—Internet, websites, blogs, chat rooms, threads, and on and on. The deeply embedded metaphor for carrying knowledge—the book—gave way over the century to various kinds of screens—movie, television, computer, cell phone, and computer game consoles. In the process, people found themselves reduced to programs and data.

  In the midst of that low-level and constant hum of machines, a very terrible scare marked the final years of the last century, a scare that revealed just how dependent the general population had become on computers, and just what kind of beings we had now become because of them. A small technological “glitch” threatened to make the entirety of data on which we construct our everyday lives evaporate into the ether. That disappearing act, dubbed Y2K by the technology community, was to commence at another monumental, liminal moment, midnight, January 1, 2000—the beginning of both the new century and the new millennium. Experts attributed the problem to a small matter in the way computers inventory time.

  Let me first mention an earlier scare about the very same issue—about telling time—from the preceding century. We can think of Y2K as an electronic version of a catastrophe that had threatened life in the nineteenth century. Recall that on November 18, 1883, America’s railroads put the entire nation on standard time. Jack Beatty, in his book Age of Betrayal, probably without knowing about Peter Schlemihl, recasts the event this way: “On November 18, 1883, America’s railroad corporations stopped time.” The Indianapolis Daily Sentinel pointed to a certain natural absurdity in that temporal shift: “The sun is no longer to boss the job. . . . The sun will be requested to rise and set by railroad time. The planets must, in the future, make their circuits by such timetables as railroad magnates arrange.”

  Where people for centuries had experienced time mainly with the sun, they would now measure, let’s say, high noon by a railroad schedule. The New York Herald observed that standard public time “goes beyond the public pursuits of men and enters into their private lives as part of themselves.” The new time severed whatever relationship people had with the heavens. The rising and setting of the sun had determined local times for centuries; now, by tampering with the local sun, the railroads threatened to upset the natural world.

  On November 17—the day before the proposed change—The New York Times reported that city jewelers “were busy answering questions from the curious, who seemed to think that the change in time would . . . create a sensation . . . some sort of disaster, the nature of which would not be exactly entertained.” Jack Beatty relates the story of the following day, November 18, “The Day of Two Noons,” as the New York World, The Washington Post, and the important Boston Evening Transcript called it: “The master clock at Chicago’s West Side Union Depot was stopped at 12:00, waiting for the railroad-decreed noon.” People huddled around the large clock; they talked and they debated, but mostly they waited. Finally, the long-awaited news: At precisely nine minutes and thirty-two seconds after high noon, a telegraph message arrived announcing the start of 12:00 in public standard railroad time. In matters of telling time, the sun had stopped dead in its tracks across the sky, which is to say, I suppose, the Earth stood stock still.

  In those nine minutes and thirty-two seconds “out of time,” people feared the absolute worst. Schlemihl could not stop time; but technology seemed to have pulled it off. What would be the fallout? Ordinary nineteenth-century Americans had the Bible, Joshua 10:12-14, to frighten them: “So the sun stood still, and the moon stopped, till the nation avenged itself of its enemies, as it is written in the Book of Jashar. The sun stopped in the middle of the sky and delayed going down for about a full day.” God gave Joshua a gift of time at the battle of Gibeon. Believers know it as the longest day ever. And here it was, right in their midst, except big and powerful industry was intervening, not God. On top of that, the railroads had chosen to make the change on a Sunday. What could it all mean? More to the point, what would it all mean?

  But of course nothing overtly catastrophic ensued. No disasters followed. People argued over the power of railroad magnates, but business went on as usual. For a brief time people asked, “Have you the new time?” But no one recorded any obvious major tragedies. In fact, it took until 1918 and an act of Congress for the federal government to cede time to the country’s railroads.

  But a severing did occur. Anyone in Chicago who was used to looking into the sky and judging high noon would find himself or herself forever off by nine minutes and thirty-two seconds. In the scheme of the universe, that does not seem like a big thing, but it does represent a shift—a small but significant shift—away from one’s own place in nature and toward the mechanical and the commercial.
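
  The figure of nine minutes and thirty-two seconds is itself simple arithmetic: local solar time shifts by four minutes for every degree of longitude, and Chicago lies a bit more than two degrees east of the 90th meridian on which Central standard time is reckoned. The rough check below, in Python, uses an approximate longitude of my own choosing rather than any figure from the book.

```python
# A back-of-the-envelope check of the Chicago offset: the Earth turns 360 degrees
# in 24 hours, so local solar time changes by 4 minutes per degree of longitude.

CENTRAL_MERIDIAN = 90.0    # degrees west; the meridian defining Central standard time
CHICAGO_LONGITUDE = 87.62  # degrees west (approximate)

offset = (CENTRAL_MERIDIAN - CHICAGO_LONGITUDE) * 4  # minutes of solar time
minutes, seconds = int(offset), round((offset % 1) * 60)
print(f"{minutes} min {seconds} s")  # about 9 min 31 s, close to the reported figure
```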

  Only one person commented, and then only indirectly, on the major change. An editor of a local newspaper in Nebraska had this to say about the incursion of the railroad into people’s lives, both in terms of time and space: “In a quarter of a century, they have made the people of the country homogeneous, breaking through the peculiarities and provincialisms which marked separate and unmingling sections.”11 Train travel did not represent the only erasure of the human being, but it certainly exerted a major force in eroding the differences among individuals.

  After all, the railroad hauled people—one person much the same as the next—from one place to another, through those various zones of localized time. Take a load of people from New York and set them down in Chicago, say, reset their watches, and, in a snap, they all suddenly emerge as Chicagoans. Only the accent—or lack of accent—would give any of them away.

  Michael O’Malley opens his chapter on public standard time with the following paragraph. It so thoroughly summarizes the thrust of the age that I want to quote it in full. It also serves as a transition to Y2K, for it prepared the way for transforming time into bits: “American astronomers, drawing on technological innovations like the telegraph, had created a new understanding of time by 1880. The observatories made time a product. They captured an apparently natural phenomenon, distilled what they understood to be its essence—order—and then offered this essence for sale in tidy, attractive packages.”12

  When the ball fell on the year 1999, many experts predicted, the world would witness the worst of all manmade disasters. Unlike the temporal change of the nineteenth century, this one, labeled in appropriate machine language Y2K, went essentially unquestioned. America’s entire financial, governmental, and social system would utterly and totally disappear into that quintessential nineteenth-century stuff, the ether. Few experts were willing to predict the extent of the devastation, but most critics described it as if a huge bomb might go off in our midst, sending shock waves and shrapnel through populations all over the world. No one—not a single soul—would be exempt. No one would get a free ride.

  The reason for the crisis exposes the very nature of computing. From the late 1960s on, computer software stored the year in dates with only two digits rather than four, in order to save space on what was then expensive memory and disk storage. As the nineties came to a close, computer experts realized that their programs might read double zero not as the year 2000 but as the year 1900. What an amazing irony, as if the universe were sending a message, telling us that we were forever stuck at the juncture between the close of the nineteenth century and the opening of the twentieth; that we were condemned to repeat that century because we had learned absolutely nothing from it. The film Groundhog Day captured the dilemma perfectly.
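
  The mechanics of the glitch are easy to reproduce. The sketch below, in Python, illustrates the kind of two-digit date handling described here and one common remediation; the function names and the windowing cutoff are illustrative assumptions of mine, not details from the book.

```python
# A minimal illustration of the Y2K misreading: a two-digit year, stored to save
# space and naively prefixed with "19", rolls over from 1999 to 1900 instead of 2000.

def legacy_year(two_digit: str) -> int:
    """How many older programs interpreted a stored two-digit year."""
    return 1900 + int(two_digit)

def windowed_year(two_digit: str, pivot: int = 70) -> int:
    """One common fix ("windowing"): years below the pivot are read as 20xx."""
    y = int(two_digit)
    return (2000 if y < pivot else 1900) + y

print(legacy_year("99"), legacy_year("00"))      # 1999 1900  <- the rollover glitch
print(windowed_year("99"), windowed_year("00"))  # 1999 2000  <- after remediation
```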

  If that were to happen, we would inhabit two places at once: finding ourselves standing in the twenty-first century, but with all of the dates clearly pointing to the twentieth or even the nineteenth. We would be dead in this century, and alive only in the nineteenth. No matter how loud we screamed, “Look, I am alive,” we could not prove that we were indeed alive. We would be in possession of no records to prove even who we actually were. Only when someone observed us would we then come alive. Through our own inept computing designs, and our own foolhardy reliance on numbers, we would have made a seer out of Erwin Schroedinger, for we would all be fully Draculated.

  Of course, we could not tolerate such a frustrating state of affairs. Computers made it a certainty that as a people we could not tolerate such ambiguity. Computers dictated that we needed but one definition, one place, one time. Schroedinger be damned! And so IT companies around the world spent billions and billions of dollars to correct the glitch. Systems application companies worked feverishly to develop compliant software to combat the problem. The race was on to try to beat the countdown to the end of the millennium. If we hoped for victory, then the technocrats of the world would have to stop time in its tracks. Of course, the poorer nations could not afford the fix, and moved into the future only with the mercy and generosity of the more wealthy countries and the aid granted by IT companies. The fear was real and the fear was widespread.

  But the most potent fear by far was the loss of what went by the name of personal identity: Every number on which we found ourselves, the argument went, would disappear—gone would be social security numbers, bank account numbers, savings account numbers, driver’s license numbers, cholesterol and blood pressure levels. Our bank deposits would be gone. The numbers of our bank accounts would no longer exist. All of our contracts, records, transcripts—everything that had been digitized would be erased just as the ball dropped in Times Square. We would no longer know who we were, for on one very crucial and decidedly deep level, at least, we are nothing but a string of integers.

  The ball fell, the numbers turned over, and, well, nothing happened except for the new morning—all foreplay and no climax. The sun did not stop in the sky and the crisis did not end on January 1, 2001. For Y2K was but a prelude, a brief prelude, to a more recent and terrifying loss of self, total identity theft—or what I choose to call electronic cloning. The self had fallen into the flimsiest of constructions—of little or no use any longer. Our identities can be eradicated in an instant with a simple hack of any corporate computing system, or even of our own personal computers. On the Internet, we can become anything we want; we can use any name we choose; we can construct any identity we prefer. The self is as slippery and elusive as one wishes it to be.

  The most ordinary people take on the wildest new identities on the Internet and construct entirely fanciful lives on the World Wide Web. Such is the nature of the postmodern, protean self. In the digital age, we can deconstruct the self with utter ease, and reconstruct it with that same ease. We can split ourselves into many different voices without fearing the punishment of the insane asylum. As the old joke goes, “On the Internet, no one knows you’re a dog.”

  It does little good to say, “This is who I am,” and then offer up a name, an address, a zip code, as if we were the product of a self-interrogation. In a sense, we all now carry fake IDs. Besides, you can find me, or a facsimile of me, on MySpace or Facebook. How do we, then, talk about who we are, about what makes us human in this, the twenty-first century, and about what place we occupy in the grander scheme of things? How do we stop the disappearing act that began in the nineteenth century and continues to rush on through our own time?

  In another century, Poe warned that people were being buried alive; Emerson countered, No, they just needed to take deeper breaths to find out that they were very much alive; he pointed to the ultimate power of the self. Oscar Wilde exposed society’s criminal underbelly; the Reverend Henry Ward Beecher revealed society’s magnificent soul. Oliver Wendell Holmes condemned people’s greed; Darwin celebrated humankind’s roots. Some warned about losing humanness, while others showed the way back to power and strength and autonomy. In the end, however, very few people listened. They took the subway to Coney; they had a drink; they watched a movie; they turned on TV; they played a video game. They had fun. The questions linger, and they constitute our legacy: What is essential? What is of meaning? Of permanence? What allows us, as human beings, not merely to endure, but to prevail?

  I certainly offer no definitive answer; that would be presumptuous and even arrogant. In the end, people must find their own solutions. They first have to think that a problem exists. But framing the question correctly can sometimes prove helpful. The answer will undoubtedly be different for each person. It may be love, friendship, family, but each response rests on a powerful belief in real, live people. We can find no solution to any of our grand contemporary problems—war, terrorism, global warming—without first participating in this most basic project: the recovery of the human being. By lifting the veil, we can, hopefully, become more than spectators at a deposition.

  Any recovery of the vitality of our own being, I believe, now requires slipping out from a system that does not admit of ambiguity, a system that wants to keep us tightly and solidly defined. People struggle, it seems, to regain some of that ambiguity on the very machine that robs them of it—by spinning new selves on the web. There is tremendous irony here, for they are being forced into that kind of splintering just to feel whole and alive—trying to feel more real on a virtual canvas called the monitor. This is part of the current appeal of the work of Patricia Highsmith, and in particular of her novel The Talented Mr. Ripley. To assume another identity with such audacity, as young Tom Ripley does, takes a great act of bravery and a great exercise of will. One has to assert one’s entire being in just being.

 
