The Modern Mind


by Peter Watson


  The most original response to the culture wars was David Denby’s excellent Great Books, published in 1996. Denby, film critic of New York magazine and a contributing editor to the New Yorker, attended Columbia University in 1961, when he took two foundation courses, ‘Literature Humanities’ and ‘Contemporary Civilization.’51 In the autumn of 1991, he had the idea of sending himself back to Columbia to do the same courses. He wanted to see how they had changed, how they were now taught, and what effects they had on himself and the young freshmen attending Columbia in the 1990s. He had been a film critic since 1969, he said, and though he still loved his job, he was tired of the ‘society of the spectacle,’ the secondhand, continuously ironic world of the media: ‘The media give information, but information, in the 1990s, has become transitory and unstable. Once in place, it immediately gets pulled apart…. No one’s information is ever quite adequate, which is one reason among many that Americans now seem half-mad with anxiety and restlessness. Like many others, I was jaded yet still hungry; I was cast into the modern state of living-in-the-media, a state of excitement needled with disgust.’52 Denby takes us through the great books he liked (Homer, Plato, Virgil, the Bible, Dante, Rousseau, Shakespeare, Hume and Mill, Marx, Conrad, de Beauvoir, Woolf), leaving out what didn’t engage him (Galileo, Goethe, Darwin, Freud, Arendt, Habermas). His book is notable for some fine passages describing his own reactions to the Great Books, for the way he occasionally related them to movies, and for the way he fears for his son, Max, overwhelmed by tawdry and trivial media, against which these older voices cannot compete. He notes that minority students sometimes rebel against the ‘White, European’ nature of the books, but such rebellion, when it occurs, is heavily tinged with embarrassment and sorrow as much as with anger. And this was his main point, in conclusion: that students, whether white, black, Latino, or Asian, ‘rarely arrive at college as habitual readers,’ that few of them have more than a nominal connection with the past: ‘The vast majority of white students do not know the intellectual tradition that is allegedly theirs any better than black or brown ones do.’ The worlds of Homer, Dante, Boccaccio, Rousseau, and Marx are now so strange, so different, that he came to a surprising conclusion: ‘The core-curriculum courses jar so many student habits, violate so many contemporary pieties, and challenge so many forms of laziness that so far from serving a reactionary function, they are actually the most radical courses in the undergraduate curriculum.’53 Denby found that in fact the Great Books he (re)studied were capable of individual and idiosyncratic interpretation, not necessarily the interpretation the cultural right would wish, but that didn’t matter – the students grasped that ‘they dramatise the utmost any of us is capable of in love, suffering and knowledge.’ And, perhaps the best thing one can say about it, the Western canon can be used to attack the Western canon. ‘What [non-whites] absorb of the older “white” culture they will remake as their own; it cannot hurt them.’54

  For Denby, a much greater danger came from the media. ‘Most high schools can’t begin to compete against a torrent of imagery and sound that makes every moment but the present seem quaint, bloodless, or dead.’55 In fact, he said, the modern world has turned itself upside down. On his first time round, in 1961, the immediacy of pop had been liberating, a wonderful antidote to the stifling classroom; but now ‘the movies have declined; pop has become a field of conformity and complacency, while the traditional high culture, by means of its very strangeness and difficulty, strikes students as odd. They may even be shocked by it…. The [great] books are less a conquering army than a kingdom of untameable beasts, at war with one another and with readers.’56

  In 1999 Harold Bloom returned to his first love. In Shakespeare: The Invention of the Human, Bloom argued that the great poet ‘invented us,’ that ‘personality, in our sense, is a Shakespearean invention.’57 Before Shakespeare, Bloom claims, characters did not grow and develop. ‘In Shakespeare, characters develop rather than unfold, and they develop because they reconceive themselves. Sometimes this comes about because they overhear themselves talking, whether to themselves or to others. Self-overhearing is the royal road to individuation.’58 Bloom’s book is deeply unfashionable, not only in its message but in the way it is written. It is an act of worship. He freely concedes that Bardolatry is and has been ‘a secular religion’ for some two hundred years, and he enjoys being in that tradition because he believes that the very successes of Shakespeare transcend all ways of approaching him: he is simply too brilliant, too intelligent, to be cut down to size, as the feminists, cultural materialists, and Marxists would like to do. ‘Shakespeare, through Hamlet, has made us skeptics in our relationships with anyone, because we have learned to doubt articulateness in the realm of affection…. Our ability to laugh at ourselves as readily as we do at others owes much to Falstaff…. Cleopatra [is the character] through whom the playwright taught us how complex eros is, and how impossible it is to divorce acting the part of being in love and the reality of being in love…. Mutability is incessant in her passional existence, and it excludes sincerity as being irrelevant to eros.’59 ‘When we are wholly human, and know ourselves, we become most like either Hamlet or Falstaff.’60

  There is something magnificent about this ‘Bloom in love,’ dismissing his critics and opponents without even naming them. It is all very unscientific, but that is Bloom’s point: this is what art should seek to emulate, these are the feelings great art exists for. Individuation may have been one of the great issues of the century, but Shakespeare got there first, and has still not been equalled. He is the one man worth worshipping, and we are, if we will only see it, surrounded by his works.

  One more distinguished combatant joined the Blooms on the barricades, an academic Boadicea whose broadsides went wider even than theirs: Gertrude Himmelfarb, the historian wife of Irving Kristol, founder (with Daniel Bell) of the Public Interest. In On Looking into the Abyss (1994), Himmelfarb, professor emeritus of history at the Graduate School of the City University of New York, attacked postmodernism in whatever guise it raised its head, from literary theory to philosophy to history.61 Her argument against literary theory was that the theory itself had displaced literature as the object of study and in the process taken away the ‘profound spiritual and emotional’ experience that comes with reading great works, the ‘dread beasts,’ as she put it, ‘lurking at the bottom of the “Abyss.” ’62 As a result, she said, ‘The beasts of modernism have mutated into the beasts of postmodernism – relativism into nihilism, amorality into immorality, irrationality into insanity, sexual deviancy into polymorphous perversity.’63 She loathed the ‘boa-deconstructors’ like Derrida and Paul de Man and what they had done to literature, thinking their aim more political than literary (they would have agreed). She attacked the Annales school: she admired Fernand Braudel’s fortitude in producing his first great book in a prisoner-of-war camp, from memory, but thought his concept of la longue durée gave him a fatally skewed perspective on such events as, say, the Holocaust. She thought that the new enemy of liberalism had become – well, liberalism itself. Liberalism was now so liberal, she argued, that it absolved postmodern historians, as they saw it, from any duty to the truth. ‘Postmodernists deny not only absolute truth but contingent, partial, incremental truth…. In the jargon of the school, truth is “totalising,” “hegemonic,” “logocentric,” “phallocentric,” “autocratic,” “tyrannical.” ’64 She turned on Richard Rorty for arguing there is no ‘essential’ truth or reality, and on Stanley Fish for arguing that the demise of objectivity ‘relieves me of the obligation to be right.’65 But her chief point was that ‘postmodernism entices us with the siren call of liberation and creativity,’ whereas there is a tendency for ‘absolute liberty to subvert the very liberty it seeks to preserve.’66 In particular, and dangerously, she saw about her a tendency to downplay the importance and horror of the Holocaust, to argue that it was something ‘structural,’ rather than a personal horror for which real individuals were responsible, which need not have happened, and which needed to be understood, and reunderstood, by every generation. She tellingly quotes the dedication of David Abraham’s book The Collapse of the Weimar Republic, published in 1981: ‘For my parents – who at Auschwitz and elsewhere suffered the worst consequences of what I can merely write about.’ In Himmelfarb’s view, the reader is invited to think that the author’s parents perished in the camps, but they did not. This curious phraseology was later examined by Natalie Zemon Davis, an historian, who concluded that Abraham’s work had been designed to show that the Holocaust was not the work of devils ‘but of historical forces and actors.’67 This was too much for Himmelfarb, a relativising of evil that was beyond reason. It epitomised the postmodern predicament: the perfect example of where too much liberty has brought us.

  There is a sense in which the culture wars are a kind of background radiation left over from the Big Bang of the Russian Revolution. At exactly the time that political Marxism was being dismantled, along with the Berlin Wall, postmodernism achieved its greatest triumphs. For the time being at least, the advocates of local knowledge have the edge. Gertrude Himmelfarb’s warning, however timely, and however sympathetic one finds it, is rather like trying to put a genie back into a bottle.

  42

  DEEP ORDER

  In 1986 Dan Lynch, an ex-student from UCLA, started a trade fair for computer hardware and software, known as Interop. Until then the number of people linked together via computer networks was limited to a few hundred ‘hardcore’ scientists and academics. Between 1988 and 1989, however, Interop took off: hitherto a fair for specialists, it was from then on attended by many more people, all of whom suddenly seemed to realise that this new way of communicating – via remote computer terminals that gave access to very many databases, situated across the world and known as the Internet – was a phenomenon that promised intellectual satisfaction and commercial rewards in more or less equal measure. Vint Cerf, a self-confessed ‘nerd’, from California, a man who set aside several days each year to re-read The Lord of the Rings, and one of a handful of people who could be called a father of the Internet, visited Lynch’s fair, and he certainly noticed a huge change. Until that point the Internet had been, at some level, an experiment. No more.1

  Different people place the origins of the Internet at different times. The earliest accounts put it in the mind of Vannevar Bush, as long ago as 1945. Bush, the man who had played such a prominent role in the building of the atomic bomb, envisaged a machine that would allow the entire compendium of human knowledge to be ‘accessed’. But it was not until the Russians surprised the world with the launch of the Sputnik in October 1957 that the first faltering steps were taken toward the Net as we now know it. The launch of a satellite, as was discussed in chapter 27, raised the spectre of associated technologies: in order to put such an object in space, Russia had developed rockets capable of reaching America with sufficient accuracy to do huge damage if fitted with nuclear warheads. This realisation galvanised America, and among the research projects introduced as a result of this change in the rules of engagement was one designed to explore how the United States’ command and control system – military and political – could be dispersed around the country, so that should she be attacked in one area, America would still be able to function elsewhere. Several new agencies were set up to consider different aspects of the situation, including the National Aeronautics and Space Administration (NASA) and the Advanced Research Projects Agency, or ARPA.2 It was this outfit which was charged with investigating the safety of command and control structures after a nuclear strike. ARPA was given a staff of about seventy, an appropriation of $520 million, and a budget plan of $2 billion.3

  At that stage computers were no longer new, but they were still huge and expensive (one at Harvard at the time was fifty feet long and eight feet high). Among the specialists recruited by ARPA was Joseph Licklider, a tall, laconic psychologist from Missouri, who in 1960 had published a paper on ‘man-computer symbiosis’ in which he looked forward to an integrated arrangement of computers, which he named, ironically, an ‘intergalactic network.’ That was some way off. The first breakthrough came in the early 1960s, with the idea of ‘packet-switching,’ developed by Paul Baran.4 An immigrant from Poland, Baran took his idea from the brain, which can sometimes recover from disease by switching the messages it sends to new routes. Baran’s idea was to divide a message into smaller packets and then send them by different routes to their destination. This, he found, could not only speed up transmission but avoid the total loss of information where one line is faulty. In this way technology was conceived that reassembled the message packets when they arrived, and tested the network for the quickest routes. This same idea occurred almost simultaneously to Donald Davies, working at the National Physical Laboratory in Britain – in fact, packet-switching was his term. The new hardware was accompanied by new software, drawing on a branch of mathematics known as queuing theory, designed to prevent the buildup of packets at intermediate nodes by finding the most suitable alternative routes.5
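
  The principle Baran and Davies hit upon can be sketched in a few lines of modern code. The fragment below is purely illustrative, with an invented packet format and function names, not ARPANET’s actual design: a message is cut into numbered packets, the packets may arrive in any order, and the sequence numbers allow them to be reassembled at the far end.

```python
# Illustrative sketch only: packet-switching reduced to its essentials.
# Packet format, routes and names are hypothetical, not ARPANET's design.
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int          # position of this fragment in the original message
    total: int        # how many packets make up the whole message
    payload: str      # this packet's fragment of the message

def split_into_packets(message: str, size: int = 8) -> list[Packet]:
    fragments = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(seq=i, total=len(fragments), payload=f)
            for i, f in enumerate(fragments)]

def send_over_network(packets: list[Packet]) -> list[Packet]:
    # Each packet may travel by a different route and arrive in any order;
    # shuffling stands in for the varying delays of different paths.
    in_transit = packets[:]
    random.shuffle(in_transit)
    return in_transit

def reassemble(received: list[Packet]) -> str:
    ordered = sorted(received, key=lambda p: p.seq)
    assert len(ordered) == ordered[0].total, "a packet was lost; request a resend"
    return "".join(p.payload for p in ordered)

if __name__ == "__main__":
    msg = "Packets may arrive out of order, yet the message survives."
    print(reassemble(send_over_network(split_into_packets(msg))))
```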

  In 1968 the first ‘network’ was set up, consisting of just four sites: UCLA, Stanford Research Institute (SRI), the University of Utah, and the University of California at Santa Barbara.6 The technological breakthrough that enabled this to proceed was the conception of the so-called interface message processor, or IMP, whose task it was to send bits of information to a specified location. In other words, instead of ‘host’ computers being interconnected, the IMPs would be instead, and each IMP would be connected to a host.7 The computers might be different pieces of hardware, using different software, but the IMPs spoke a common language and could recognise destinations. The contract to construct the IMPs was given by ARPA to a small consulting firm in Cambridge, Massachusetts, called Bolt Beranek and Newman (BBN) and they delivered the first processor in September 1969, at UCLA, and the second in October, at SRI. It was now possible, for the first time, for two disparate computers to ‘talk’ to each other. Four nodes were up and running by January 1970, all on the West Coast of America. The first on the East Coast, at BBN’s own headquarters, was installed in March. The ARPANET, as it came to be called, now crossed the continent.8 By the end of 1970 there were fifteen nodes, all at universities or think tanks.
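
  The division of labour the IMPs introduced can be caricatured in a few lines. This is a toy sketch, not the real IMP software, and every name in it is invented: the point is only that each host, whatever its hardware or software, talks to its local IMP, and only the IMPs need to know how to reach one another.

```python
# Toy illustration of the IMP idea: hosts differ, but each talks only to its
# local IMP, and the IMPs share one common forwarding scheme. Names invented.

class IMP:
    """An interface message processor: knows its peers, not the hosts' internals."""
    network: dict[str, "IMP"] = {}   # shared directory of IMPs by site name

    def __init__(self, site: str):
        self.site = site
        self.inbox: list[tuple[str, str]] = []
        IMP.network[site] = self

    def send(self, destination: str, message: str) -> None:
        # In the real ARPANET, hop-by-hop routing between IMPs was the hard
        # part; here we simply look the destination IMP up in a shared table.
        IMP.network[destination].inbox.append((self.site, message))

class Host:
    """A host computer: any hardware, any software, so long as it has an IMP."""
    def __init__(self, name: str, imp: IMP):
        self.name, self.imp = name, imp

    def send(self, destination_site: str, message: str) -> None:
        self.imp.send(destination_site, message)

ucla, sri = IMP("UCLA"), IMP("SRI")
Host("Sigma-7", ucla).send("SRI", "LO")   # echoing the famous truncated 'LOGIN'
print(sri.inbox)                          # [('UCLA', 'LO')]
```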

  By the end of 1972 there were three cross-country lines in operation and clusters of IMPs in four geographic areas – Boston, Washington D.C., San Francisco and Los Angeles – with, in all, more than forty nodes. By now ARPANET was usually known as just the Net, and although its role was still strictly defence-oriented, more informal uses had also been found: chess games, quizzes, the Associated Press wire service. It wasn’t far from there to personal messages, and one day in 1972, e-mail was born when Ray Tomlinson, an engineer at BBN, devised a program for computer addresses, the most salient feature of which was a device to separate the name of the user from the machine the user was on. Tomlinson needed a character that could never be found in any user’s name and, looking at the keyboard, he happened upon the ‘@’ sign.9 It was perfect: it meant ‘at’ and had no other use. This development was so natural that the practice just took off among the ARPANET community. A 1973 survey showed that there were 50 IMPs on the Net and that three-quarters of all traffic was e-mail.
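
  Tomlinson’s convention survives unchanged. Because ‘@’ can never appear in a user’s name, an address splits unambiguously into user and machine, as this small illustrative fragment shows (the addresses themselves are invented for the example):

```python
# Tomlinson's convention in miniature: the '@' sign, never legal in a user
# name, splits an address unambiguously into user and machine.

def parse_address(address: str) -> tuple[str, str]:
    user, _, machine = address.partition("@")
    if not user or not machine:
        raise ValueError(f"not a valid address: {address!r}")
    return user, machine

for addr in ["tomlinson@bbn-tenexa", "licklider@mit-dms"]:   # invented examples
    user, machine = parse_address(addr)
    print(f"user {user!r} on machine {machine!r}")
```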

  By 1975 the Net community had grown to more than a thousand, but the next real breakthrough was Vint Cerf’s idea, as he sat in the lobby of a San Francisco hotel, waiting for a conference to begin. By then, ARPANET was no longer the only computer network: other countries had their own nets, and other scientific-commercial groups in America had begun theirs. Cerf began to consider joining them all together, via a series of what he referred to as gateways, to create what some people called the Catenet, for Concatenated Network, and what others called the Internet.10 This required not more machinery but the design of TCP, or transmission-control protocol, a universal language. In October 1977 Cerf and his colleagues demonstrated the first system to give access to more than one network. The Internet as we now know it was born.
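
  The idea behind those gateways can be sketched in a deliberately simplified form. In the fragment below the framing formats are invented, and the network names only loosely echo the packet-radio, ARPANET and satellite links joined in the 1977 demonstration; what it illustrates is the principle that a common envelope passes from network to network untouched, while each gateway merely swaps the local wrapping.

```python
# Illustrative only: the gateway idea behind Cerf's 'Catenet'. A common
# envelope (a dict standing in for a TCP header plus data) is carried
# unchanged, while each network wraps it in its own local framing.

def wrap_for_network(envelope: dict, network: str) -> dict:
    return {"network": network, "frame": envelope}

def gateway(frame: dict, next_network: str) -> dict:
    # A gateway joins two networks: unwrap one network's local framing,
    # then re-wrap the untouched envelope for the next network.
    envelope = frame["frame"]
    return wrap_for_network(envelope, next_network)

envelope = {"src": "host-a", "dst": "host-z", "seq": 1, "data": "hello"}
frame = wrap_for_network(envelope, "packet-radio-net")
frame = gateway(frame, "arpanet")        # crosses into a second network
frame = gateway(frame, "satellite-net")  # and into a third
assert frame["frame"] == envelope        # the end-to-end envelope is untouched
```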

  Growth of the Net soon accelerated. It was no longer purely a defence exercise, but in 1979 it was still largely confined to about 120 universities and other academic and scientific institutions. The main initiatives, therefore, were now taken over from ARPA by the National Science Foundation, which set up the Computer Science Research Network, or CSNET, and in 1985 created a ‘backbone’ of five supercomputer centres scattered around the United States, and a dozen or so regional networks.11 These supercomputers were both the brains and the batteries of the network, a massive reservoir of memory designed to soak up all the information users could throw at it and prevent gridlock. Universities paid $20,000 to $50,000 a year in connection charges. More and more people could now see the potential of the Internet, and in January 1986 a grand summit was held on the West Coast to put order into e-mail addressing, creating seven domains, or ‘Frodos.’ These were universities (edu), government (gov), companies (com), military (mil), nonprofit organisations (org), network service providers (net), and international treaty entities (int). It was this new order that, as much as anything, helped the phenomenal growth of the Internet between 1988 and 1989 which was seen at Dan Lynch’s Interop. The final twist came in 1990 when the World Wide Web was created by researchers at CERN, the European Laboratory for Particle Physics near Geneva.12 This used a special protocol, HTTP, devised by Tim Berners-Lee, and made the Internet much easier to browse, or navigate. Mosaic, the first truly popular browser, devised at the University of Illinois, followed in 1993. It is only since then that the Internet has been commercially available and easy to use.
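
  Those seven domains of 1986 still anchor Internet addressing. As a purely illustrative fragment (the hostnames are invented, and country-code domains came later), a few lines of Python are enough to classify a host by its final label:

```python
# The seven top-level domains agreed in January 1986, used to classify a
# hostname by its final label. Hostnames in the example are invented.

DOMAINS_1986 = {
    "edu": "universities",
    "gov": "government",
    "com": "companies",
    "mil": "military",
    "org": "nonprofit organisations",
    "net": "network service providers",
    "int": "international treaty entities",
}

def classify(hostname: str) -> str:
    top_level = hostname.rsplit(".", 1)[-1].lower()
    return DOMAINS_1986.get(top_level, "outside the 1986 scheme")

for host in ["columbia.edu", "arpa.mil", "cern.ch"]:
    print(f"{host}: {classify(host)}")
```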

 
