
Modern Mind: An Intellectual History of the 20th Century


by Peter Watson


  Saint Thomas Aquinas believed, along with all Christians, that everyone had the potentiality to act in a reasoned way, which would lead to a moral life, but that only education in a certain order – logic, mathematics, physics – could bring about full realisation of those potentialities. There was, for him, no difference between being rational and being moral. The Scottish enlightenment, on the other hand, turned back to an emphasis on the passions, David Hume distinguishing between the calm passions and the violent passions, which take priority over reason. ‘Truth in itself according to Hume … is not an object of desire. But how then are we to explain the pursuit of truth in philosophy? Hume’s answer is that the pleasure of philosophy and of intellectual inquiry more generally “consists chiefly in the action of the mind, and the exercise of the genius and understanding in the discovery or comprehension of any truth.” Philosophy, so it turns out, is like the hunting of woodcocks or plovers; in both activities the passion finds its satisfaction in the pleasures of the chase.’ For Hume, then, reason cannot motivate us.46 ‘And the passions, which do motivate us, are themselves neither reasonable nor unreasonable…. Passions are thus incapable of truth or falsity.’47 Hume himself said, ‘Reason is, and ought only to be, the slave of the passions and can never pretend to any other office than to serve and obey them.’48

  In the modern liberal society, on the other hand, MacIntyre tells us there is a rival concept of reason and of justice, based on different assumptions, namely that people are individuals and nothing more: ‘In Aristotelian practical reasoning it is the individual qua citizen who reasons; in Thomistic practical reasoning it is the individual qua enquirer into his or her good and the good of his or her community; in Humean practical reasoning it is the individual qua propertied or unpropertied participant in a society of a particular kind of mutuality and reciprocity; but in the practical reasoning of liberal modernity it is the individual qua individual who reasons.’49 MacIntyre’s conclusion is that our concepts of reasoning (and justice) are just one tradition among several. He offers no concept of evolution in these matters, and neither Darwin nor Richard Dawkins is mentioned in his book. Instead, MacIntyre thinks we continue to deform our relationship with the past by coarse translations of the classics (even when done by some scholars), which do not treat ancient words according to their ancient meanings but instead offer crude modern near-equivalents. Quoting Barthes, he says that to understand the past, we need to include all the signs and other semiological clues that the ancients themselves would have had, to arrive at what Clifford Geertz (who is referred to in MacIntyre’s book) would call a ‘thick description’ of their conceptions of reason and justice. The result of the liberal conception of reason, he says, has some consequences that might be seen as disappointing: ‘What the student is in consequence generally confronted with … is an apparent inconclusiveness in all argument outside the natural sciences, an inconclusiveness which seems to abandon him or her to his or her pre-rational preferences. So the student characteristically emerges from a liberal education with a set of skills, a set of preferences, and little else, someone whose education has been as much a process of deprivation as of enrichment.’50

  The title of David Harvey’s book The Condition of Postmodernity is strikingly similar to Lyotard’s Postmodern Condition. First published in 1980, it was reissued in 1989 in a much revised version, taking into account the many developments in postmodernism during that decade.51 Contrasting postmodernity with modernity, Harvey begins by quoting an editorial in the architectural magazine Precis 6: ‘Generally perceived as positivistic, technocentric, and rationalistic, universal modernism has been identified with the belief in linear progress, absolute truths, the rational planning of ideal social orders, and the standardisation of knowledge and production. Postmodernism, by way of contrast, privileges “heterogeneity and differences as liberative forces in the redefinition of cultural discourse.” Fragmentation, indeterminacy, and intense distrust of all universal or “totalising” discourses (to use the favoured phrase) are the hallmark of postmodernist thought. The rediscovery of pragmatism in philosophy (e.g., Rorty, 1979), the shift of ideas about the philosophy of science wrought by Kuhn (1962) and Feyerabend (1975), Foucault’s emphasis on discontinuity and difference in history and his privileging of “polymorphous correlations in place of simple or complex causality,” new developments in mathematics emphasising indeterminacy (catastrophe and chaos theory, fractal geometry), the re-emergence of concern in ethics, politics and anthropology for the validity and dignity of “the other,” all indicate a widespread and profound shift in “the structure of feeling.” What all these examples have in common is a rejection of “metanarratives” (large-scale theoretical interpretations purportedly of universal application).’52 Harvey moves beyond this summing-up, however, to make four contributions of his own. In the first place, he describes postmodernism in architecture (the form, probably, where most people encounter it); most valuably, he looks at the political and economic conditions that brought about postmodernism and sustain it; he looks at the effect of postmodernism on our conceptions of space and time (he is a geographer, after all); and he offers a critique of postmodernism, something that was badly needed.

  In the field of architecture and urban design, Harvey tells us that postmodernism signifies a break with the modernist idea that planning and development should focus on ‘large-scale, metropolitan-wide, technologically rational and efficient urban plans, backed by absolutely no-frills architecture (the austere “functionalist” surfaces of “international style” modernism). Postmodernism cultivates, instead, a conception of the urban fabric as necessarily fragmented, a “palimpsest” of past forms superimposed upon each other, and a “collage” of current uses, many of which may be ephemeral.’ Harvey put the beginning of postmodernism in architecture as early as 1961, with Jane Jacobs’s Death and Life of Great American Cities (see chapter 30), one of the ‘most influential anti-modernist tracts’ with its concept of ‘the great blight of dullness’ brought on by the international style, which was too static for cities, where processes are of the essence.53 Cities, Jacobs argued, need organised complexity, one important ingredient of which, typically absent in the international style, is diversity. Postmodernism in architecture, in the city, Harvey says, essentially meets the new economic, social, and political conditions prevalent since about 1973, the time of the oil crisis and when the major reserve currencies left the gold standard. A whole series of trends, he says, favoured a more diverse, fragmented, intimate yet anonymous society, essentially composed of much smaller units of diverse character. For Harvey the twentieth century can be conveniently divided into the Fordist years – broadly speaking 1913 to 1973 – and the years of ‘flexible accumulation.’ Fordism, which included the ideas enshrined in Frederick Winslow Taylor’s Principles of Scientific Management (1911), was for Harvey a whole way of life, bringing mass production, standardisation of product, and mass consumption:54 ‘The progress of Fordism internationally meant the formation of global mass markets and the absorption of the mass of the world’s population, outside the communist world, into the global dynamics of a new kind of capitalism.’55 Politically, it rested on notions of mass economic democracy welded together through a balance of special-interest forces.56 The restructuring of oil prices, coming on top of war, brought about a major recession, which helped catalyse the breakup of Fordism, and the regime of ‘flexible accumulation’ began.57

  The adjustment to this new reality, according to Harvey, had two main elements. Flexible accumulation ‘is marked by a direct confrontation with the rigidities of Fordism. It rests on flexibility with respect to labour processes, labour markets, products and patterns of consumption. It is characterised by the emergence of entirely new sectors of production, new ways of providing financial services, new markets, and, above all, greatly intensified rates of commercial, technological, and organisational innovation.’58 Second, there has been a further round of space-time compression, emphasising the ephemeral, the transient, the always-changing. ‘The relatively stable aesthetic of Fordist modernism has given way to all the ferment, instability, and fleeting qualities of a postmodernist aesthetic that celebrates difference, ephemerality, spectacle, fashion, and the commodification of cultural forms.’59 This whole approach, for Harvey, culminated in the 1985 exhibition at the Pompidou Centre in Paris, which had Lyotard as one of its consultants. It was called The Immaterial.

  Harvey, as was said earlier, was not uncritical of postmodernism. Elements of nihilism are encouraged, he believes, and there is a return to narrow and sectarian politics ‘in which respect for others gets mutilated in the fires of competition between the fragments.’60 Travel, even imaginary travel, need not broaden the mind, but only confirms prejudices. Above all, he asks, how can we advance if knowledge and meaning are reduced ‘to a rubble of signifiers’?61 His verdict on the postmodern condition was not wholly flattering: ‘confidence in the association between scientific and moral judgements has collapsed, aesthetics has triumphed over ethics as a prime focus of social and intellectual concern, images dominate narratives, ephemerality and fragmentation take precedence over eternal truths and unified politics, and explanations have shifted from the realm of material and political-economic groundings towards a consideration of autonomous cultural and political practices.’62

  * This terminology recalls exactly the title of Colin MacInnes’s 1958 novel, Mr Love and Justice.

  39

  ‘THE BEST IDEA, EVER’

  Narborough is a small village about ten miles south of Leicester, in the British East Midlands. Late on the evening of 21 November 1983 a fifteen-year-old girl, Lynda Mann, was sexually assaulted and strangled, her body left in a field not too far from her home. A manhunt was launched, but the investigation revealed nothing. Interest in the case died down until the summer of 1986, when on 2 August the body of another fifteen-year-old, Dawn Ashworth, was discovered in a thicket of blackthorn bushes, also near Narborough. She too had been strangled, after being sexually assaulted.

  The manhunt this time soon produced a suspect, Richard Buckland, a porter in a nearby hospital.1 He was arrested exactly one week after Dawn’s body was found, following his confession. The similarities in the victims’ ages, the method of killing, and the proximity to Narborough naturally made the police wonder whether Richard Buckland might also be responsible for the death of Lynda Mann, and with this in mind they called upon the services of a scientist who had just developed a new technique, which had become known to police and public alike as ‘genetic fingerprinting.’2 This advance was the brainchild of Professor Alec Jeffreys of Leicester University. Like so many scientific discoveries, Jeffreys’s breakthrough came in the course of his investigation of something else – he was looking to identify the myoglobin gene, which governs the tissues that carry oxygen from the blood to the muscles. Jeffreys was in fact using the myoglobin gene to look for ‘markers,’ characteristic formations of DNA that would identify, say, certain families and would help scientists see how populations varied genetically from village to village, and country to country. What Jeffreys found was that on this gene one section of DNA was repeated over and over again. He soon found that the same observation – repeated sections – was being made in other experiments, investigating other chromosomes. What he realised, and no one else did, was that there seemed to be a widespread weakness in DNA that caused this pointless duplication to take place. As Walter Bodmer and Robin McKie describe it, the process is analogous to a stutterer who repeatedly stammers over the same letter. Moreover, this weakness differed from person to person. The crucial repeated segment was about fifteen base pairs long, and Jeffreys set about identifying it in such a way that it could be seen by eye, without the aid of a microscope. He first froze the blood sample, then thawed it, which broke down the membranes of the red blood cells, but not those of the white cells that contain DNA. With the remains of the red blood cells washed away, an enzyme called proteinase K was added, exploding the white cells and freeing the DNA coils. These were then treated with another enzyme, known as HinfI, which separates out the ribbons of DNA that contain the repeated sequences. Finally, by a process known as electrophoresis, the DNA fragments were sorted into bands of different length and transferred to nylon sheets, where radioactive or luminescent techniques obtained images unique to individuals.3
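  For readers who want a concrete feel for why fragment lengths can tell individuals apart, a minimal sketch follows. The repeat motif, the repeat counts, and the sample names are invented for illustration; they are not Jeffreys’s actual sequences or protocol.

```python
# Toy sketch of fragment-length comparison (illustrative assumptions only:
# the motif, repeat counts, and samples are invented, not Jeffreys's data).

MOTIF = "AGAGGTGGGCAGGTG"  # a hypothetical ~15-base repeat unit

def make_locus(repeat_count: int) -> str:
    """Build a stretch of DNA: fixed flanking sequence around a tandem repeat."""
    return "TTACG" + MOTIF * repeat_count + "GGATC"

def fragment_length(dna: str) -> int:
    """Stand-in for gel electrophoresis: all we observe is the fragment's length."""
    return len(dna)

# Three hypothetical samples with different repeat counts at the same locus.
samples = {"suspect_1": 7, "suspect_2": 12, "crime_scene": 12}

profile = {name: fragment_length(make_locus(n)) for name, n in samples.items()}
print(profile)  # {'suspect_1': 115, 'suspect_2': 190, 'crime_scene': 190}

# Only suspect_2's fragment runs to the same position as the crime-scene sample.
match = [s for s in ("suspect_1", "suspect_2") if profile[s] == profile["crime_scene"]]
print("matching sample(s):", match)
```

  Because different people carry different numbers of copies of the same short motif, the fragments containing the repeat region come out at different lengths, and it is only those lengths, read off the gel, that the comparison needs.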

  Jeffreys was called in to try this technique with Richard Buckland. He was sent samples of semen taken from the bodies of both Lynda Mann and Dawn Ashworth, together with a few cubic centimetres of Buckland’s blood. Jeffreys later described the episode as one of the tensest moments of his life. Until that point he had used his technique simply to test whether immigrants who came to Britain and were admitted on the basis of a law that allowed entry only to close relatives of those already living in the country really were as close as they claimed. A double murder case would clearly attract far more attention. When he went into his lab late one night to get the results, because he couldn’t bear hanging on until the next morning, he got a shock. He lifted the film from its developing fluid, and could immediately see that the semen taken from Lynda and Dawn came from the same man – but that killer wasn’t Richard Buckland.4 The police were infuriated when he told them. Buckland had confessed. To the police mind, that meant the new technique had to be flawed. Jeffreys was dismayed, but when an independent test by Home Office forensic experts confirmed his findings, the police were forced to think again, and Buckland was eventually acquitted, the first person ever to benefit in this way from DNA testing. Once they had adjusted to the surprising result, the police mounted a campaign to test the DNA of all the men in the Narborough area. Despite 4,000 men coming forward, no match was obtained, not until Ian Kelly, a baker who lived some distance from Narborough, revealed to friends that he had taken the test on behalf of a friend, Colin Pitchfork, who did live in the vicinity of the village. Worried by this deception, one of Kelly’s friends alerted the police. Pitchfork was arrested and DNA-tested. The friend had been right to be worried: tests showed that Pitchfork’s DNA matched the semen found on Lynda and Dawn. In January 1988, Pitchfork became the first person to be convicted after genetic fingerprinting. He went to prison for life.5

  DNA fingerprinting was the most visible aspect of the revolution in molecular biology. Throughout the late 1980s it came into widespread use, for testing immigrants and men in paternity suits, as well as in rape cases. Its practical successes, so soon after the structure of the double helix had been identified, underlined the new intellectual climate initiated by techniques to clone and sequence genetic material. In tandem with these practical developments, a great deal of theorising about genetics revised and refined our understanding of evolution. In particular, much light was thrown on the stages of evolutionary progress, working forward from the moment life had been created, and on the philosophical implications of evolution.

  In 1985 a Glasgow-based chemist, A. G. Cairns-Smith, published Seven Clues to the Origin of Life.6 Cairns-Smith is in some ways a maverick, and this book gave a totally different view of how life began to the one most biologists preferred. The traditional view about the origins of life had been summed up by a series of experiments carried out in the 1950s by S. L. Miller and H. C. Urey. They had assumed a primitive atmosphere on early Earth, consisting of ammonia, methane, and steam (but no oxygen – we shall come back to that). Into this early atmosphere they had introduced ‘lightning’ in the form of electrical discharges, and produced a ‘rich brew’ of organic chemicals, much richer than had been expected, including quite a large yield of amino acids, the building blocks of proteins. Somehow, from this rich brew, the ‘molecules of life’ formed. Graham Cairns-Smith thought this view nonsense because DNA molecules are extremely complicated, too complicated architecturally and in an engineering sense to have been produced accidentally, as the Miller-Urey reactions demanded. In one celebrated part of his book, he calculated that for nucleotides to have been invented, something like 140 operations would have needed to evolve at the same time, and that the chances of this having occurred were one in 10^109. Since this is more than the number of electrons in the universe, calculated as 10^80, Cairns-Smith argued that there has simply not been enough time, or that the universe is not big enough, for nucleotides to have evolved in this way.7
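  The force of that arithmetic is easy to reproduce: many independent, individually unlikely steps multiply into a vanishingly small joint probability. The per-step probability in the sketch below is an assumption chosen only to show how a number like one in 10^109 can arise; it is not Cairns-Smith’s own model.

```python
import math

# Illustrative arithmetic (assumed per-step probability, not Cairns-Smith's model):
# if each of 140 independent steps succeeds with probability 1/6, the chance
# that all of them succeed together is (1/6) ** 140.

steps = 140
p_step = 1 / 6

log10_joint = steps * math.log10(p_step)                      # log10 of the joint probability
print(f"joint probability is roughly 10^{log10_joint:.0f}")   # ~ 10^-109

# The commonly quoted count of electrons in the observable universe is ~10^80,
# so these odds fall short of "one chance per electron" by ~29 orders of magnitude.
print(f"orders of magnitude beyond 10^80: {abs(log10_joint) - 80:.0f}")
```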

  His own version was startlingly different. He argued that evolution arrived before life as we know it, that there were chemical ‘organisms’ on earth before biochemical ones, and that they provided the architecture that made complex molecules like DNA possible. Looking about him, he saw that there are, in nature, several structures that, in effect, grow and reproduce – the crystal structures in certain clays, which form when water reaches saturation point. These crystals grow, sometimes break up into smaller units, and continue growing again, a process that can be called reproduction.8 Such crystals form different shapes – long columns, say, or flat mats – and since these have formed because they are suited to their micro-environments, they may be said to be adapted and to have evolved. No less important, the mats of crystal can form into layers that differ in ionisation, and it was between these layers, Cairns-Smith believed, that amino acids may have formed, in minute amounts, created by the action of sunlight, in effect photosynthesis. This process would have incorporated carbon atoms into inorganic organisms – there are many substances, such as titanium dioxide, that under sunshine can fix nitrogen into ammonia. By the same process, under ultraviolet light, certain iron salts dissolved in water can fix carbon dioxide into formic acid. The crystal structure of the clays was related to their outward appearance (their phenotype), all of which would have been taken over by carbon-based structures.9 As Linus Pauling’s epic work showed, carbon is amazingly symmetrical and stable, and this is how (and why), Cairns-Smith said, inorganic reproducing organisms were taken over by organic ones.
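  A toy simulation, with wholly invented parameters, can convey the grow-break-vary cycle Cairns-Smith had in mind: nothing organic is involved, yet variants that grow and split faster tend to leave more descendants. This is only a sketch of the general idea, not a model he gives.

```python
import random

random.seed(1)

# Toy model (all parameters invented for illustration). Each "crystal" carries a
# heritable growth rate; crystals grow, break in two once they reach a threshold
# size, and the break occasionally shifts the growth rate slightly. Because the
# micro-environment only holds so many crystals, lineages that grow and break
# faster leave more descendants: growth, variation, and selection without
# anything biochemical involved.

def cycle(population, capacity=200):
    next_gen = []
    for size, rate in population:
        size += rate                                             # growth
        if size >= 10:                                           # break into two pieces
            varied = max(0.1, rate + random.uniform(-0.2, 0.2))  # slight "mutation"
            next_gen.append((size / 2, rate))
            next_gen.append((size / 2, varied))
        else:
            next_gen.append((size, rate))
    if len(next_gen) > capacity:                                 # limited room: random culling
        next_gen = random.sample(next_gen, capacity)
    return next_gen

population = [(1.0, 1.0) for _ in range(20)]                     # identical starting crystals
for _ in range(200):
    population = cycle(population)

rates = [rate for _, rate in population]
print(f"mean growth rate after 200 cycles: {sum(rates) / len(rates):.2f}")
```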

 
