
The Modern Mind


by Peter Watson


  Before long, however, Louis convinced himself that the new skull was actually midway between australopithecines and modern Homo sapiens and so he called the new find Zinjanthropus boisei – Zinj being the ancient Arabic word for the coast of East Africa, anthropos denoting the fossil’s humanlike qualities, and boisei after Charles Boise, the American who had funded so many of their expeditions.86 Because he was so complete, so old and so strange, Zinj made the Leakeys famous. The discovery was front-page news across the world, and Louis became the star of conferences in Europe, North America, and Africa. At these conferences, Leakey’s interpretation of Zinj met some resistance from other scholars who thought that Leakey’s new skull, despite its great size, was not all that different from other australopithecines found elsewhere. Time would prove these critics right and Leakey wrong. But while Leakey was arguing his case with others about what the huge, flat skull meant, two scientists elsewhere produced a completely unexpected twist on the whole matter. A year after the discovery of Zinj, Leakey wrote an article for the National Geographic magazine, ‘Finding the World’s Earliest Man,’ in which he put Zinjanthropus at 600,000 years old.87 As it turned out, he was way off.

  Until the middle of the century, the main dating technique for fossils was the traditional archaeological device of stratigraphy, analysing sedimentation layers. Using this technique, Leakey calculated that Olduvai dated from the early Pleistocene, generally believed to be the time when giant animals such as the mammoth lived on earth alongside man, extending from 600,000 years ago until around 10,000 years ago. Since 1947, a new method of dating, the carbon-14 technique, had been available. C14 dating depends on the fact that plants take carbon dioxide out of the air, a small proportion of which is radioactive, having been bombarded by cosmic rays from space. Photosynthesis converts this CO2 into radioactive plant tissue, which is maintained as a constant proportion until the plant (or the organism that has eaten the plant) dies, at which point the uptake of radioactive carbon stops. Radioactive carbon is known to have a half-life of roughly 5,700 years, and so, if the proportion of radioactive carbon in an ancient object is compared with the proportion of radioactive carbon in contemporary objects, it is possible to calculate how much time has elapsed since the organism’s death. With its relatively short half-life, however, C14 is only useful for artefacts up to roughly 40,000 years old. Shortly after Leakey’s National Geographic article appeared, two geophysicists from the University of California at Berkeley, Jack Evernden and Garniss Curtis, announced that they had dated some volcanic ash from Bed I of Olduvai – where Zinj had been found – using the potassium-argon (K/Ar) method. In principle, this method is analogous to C14 dating but uses the rate at which the unstable radioactive potassium isotope potassium-40 (K40) decays to stable argon-40 (Ar40). This can be compared with the known abundance of K40 in natural potassium, and an object’s age calculated from the half-life. Because the half-life of K40 is about 1.3 billion years, this method is much more suitable for geological material.88
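  Both techniques reduce to the same arithmetic: compare how much of the radioactive parent isotope survives in a sample with how much a fresh, modern sample would contain, then convert that fraction into elapsed time using the known half-life. The following minimal sketch (in Python; the function name and the sample ratios are illustrative assumptions, not measurements from Olduvai) shows how figures of the kind quoted here fall out of that relation:

    import math

    def age_from_ratio(remaining_fraction, half_life_years):
        """Elapsed time implied by exponential decay: N/N0 = (1/2) ** (t / half_life)."""
        return -half_life_years * math.log2(remaining_fraction)

    # Carbon-14: half-life of roughly 5,700 years.
    print(age_from_ratio(0.5, 5_700))    # half the C14 remaining -> ~5,700 years
    print(age_from_ratio(0.01, 5_700))   # ~1% remaining -> ~37,900 years, near the ~40,000-year practical limit

    # Potassium-40: half-life of about 1.3 billion years, so even a tiny amount
    # of decay corresponds to a geologically useful span of time.
    print(age_from_ratio(0.999, 1.3e9))  # 0.1% decayed -> ~1.9 million years

  The short half-life of C14 is precisely why it runs out of usefulness after a few tens of thousands of years, while the enormous half-life of K40 suits volcanic deposits millions of years old; in practice the K/Ar method measures the argon that has accumulated rather than the potassium that remains, but the underlying decay arithmetic is the same.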

  Using the new method, the Berkeley geophysicists came up with the startling news that Bed I at Olduvai was not 600,000 but 1.75 million years old.89 This was a revelation, the very first clue that early man was much, much older than anyone suspected. This, as much as the actual discovery of Zinj, made Olduvai Gorge famous. In the years that followed, many more skulls and skeletons of early hominids would be found in East Africa, sparking bitter controversy about how, and when, early man developed. But the ‘bone rush’ in the Rift Valley really dates from the fantastic publicity surrounding the discovery of Zinj and its great antiquity. This eventually produced the breathtakingly audacious idea – almost exactly one hundred years after Darwin – that man originated in Africa and then spread out to populate the globe.

  *

  Each of these episodes was important in itself, albeit in very different ways, and transformed our understanding of the natural world. But besides the advances in knowledge that at least four of them share and to which we shall return (Lysenko was eventually overthrown in the mid-1960s), they all have in common that they show science to be an untidy, emotional, obsessive, all-too-human activity. Far from being a calm, reflective, solely rational enterprise, carried out by dispassionate scientists only interested in the truth, science is revealed as not so very different from other walks of life. If this seems an unexceptional thing to say now, at the end of the century, that is a measure of how views have changed since these advances were made, in the 1940s and 1950s. Early on in that same decade, Claude Lévi-Strauss had expressed the general feeling of the time: ‘Philosophers cannot insulate themselves against science,’ he said. ‘Not only has it enlarged and transformed our vision of life and the universe enormously: it has also revolutionised the rules by which the intellect operates.’90 This mindset was underlined by Karl Popper in The Logic of Scientific Discovery, published in English in 1959, in which he set out his view that the scientist encounters the world – nature – essentially as a stranger, and that what sets the scientific enterprise apart from everything else is that it only entertains knowledge or experience that is capable of falsification. For Popper this is what distinguished science from religion, say, or metaphysics: revelation, or faith, or intuition has no part, at least no central role; rather, knowledge increases incrementally, but that knowledge is never ‘finished’ in the sense that anything is ‘knowable’ as true for all time.91 But Popper, like Lévi-Strauss, focused only on the rationalism of science, the logic by which it attempted – and often managed – to move forward. The whole penumbra of activities – the context, the rivalry, the ambition and hidden agendas of the participants in these dramas (for dramas they often were) – was left out of the account, as somehow inappropriate and irrelevant, a sideshow to the main event. At the time no one thought this odd. Michael Polanyi, as we have seen, had raised doubts back in 1946, but it was left to a historian of science rather than a philosopher to produce the book that changed for all time how science was perceived. This was Thomas Kuhn, whose Structure of Scientific Revolutions appeared in 1962.

  Kuhn, a physicist turned historian of science at MIT, was interested in the way major changes in science come about. He was developing his ideas in the 1950s and so did not use the examples just given, but instead looked at much earlier episodes from history, such as the Copernican revolution, the discovery of oxygen, the discovery of X rays, and Einstein’s ideas about relativity. Kuhn’s chief argument was that science consists mainly of relatively stable periods, when nothing much of interest goes on and scientists working within a particular ‘paradigm’ conduct experiments that flesh out this or that aspect of the paradigm. In this mode, scientists are not especially sceptical people – rather, they are in a sort of mental straitjacket as laid down by the paradigm or theory they are following. Amid this set of circumstances, however, Kuhn observed that a number of anomalies will occur. To begin with, there is an attempt to incorporate the anomalies into the prevailing paradigm, and these attempts will be more or less successful. Sooner or later, however, the anomalies grow so great that a crisis looms within whatever branch of science it may be – and then one or more scientists will develop a totally new paradigm that better explains the anomalies. A scientific revolution will have taken place.92 Kuhn also noted that science is often a collaborative exercise; in the discovery of oxygen, for example, it is actually very difficult to say precisely whether Joseph Priestley or Antoine-Laurent Lavoisier was primarily responsible: without the work of either, oxygen would not have been understood in exactly the way it was. Kuhn also observed that revolutions in science are often initiated by young people or those on the edge of the discipline, not fully trained – and therefore not fully schooled – in a particular way of thought. He therefore stressed the sociology and social psychology of science as a factor in both the advancement of knowledge and the reception of new knowledge by other scientists. Echoing an observation of Max Planck, Kuhn found that the bulk of scientists never change their minds – a new theory wins because adherents of the old theory simply die out, and the new theory is favoured by the new generation.93 In fact, Kuhn makes it clear several times that he sees scientific revolutions as a form of evolution, with the better – ‘fitter’ – ideas surviving while the less successful become extinct. The view that science is more ordered than is in fact the case, Kuhn said, is aided by the scientific textbook.94 Other disciplines use textbooks, but it is in science that they are most popular, reflecting the fact that many young scientists get their information predigested (and therefore repackaged), rather than by reading the original literature. So, very often scientists do not – or did not then – learn about discoveries at first hand, in the way that someone interested in literature reads the original books themselves as well as textbooks of literary criticism. (In this, Kuhn was echoing one of F. R. Leavis’s main criticisms of C. P. Snow.)

  Much was made of Kuhn’s book, especially by nonscientists and antiscientists, so it is necessary to emphasise that he was not seeking to pull the rug out from under the feet of science. Kuhn always maintained that science produced, as Lévi-Strauss said, a special kind of knowledge, a knowledge that worked in a distinctive way and very well.95 Some of the uses to which his book was put would not have met with his approval. Kuhn’s legacy is a reconceptualisation of science: not so much a culture, as Snow said, as a tradition in which many scientists serve their apprenticeship, one which predetermines the types of question science finds interesting and the way it seeks answers to problems. Thus the scientific tradition is nowhere near as rational as is generally thought. Not all scientists find this view convincing, and obviously there is much scope for disagreement as to what is or is not a paradigm, and what is and is not normal science. But for historians of science, and many in the humanities, Kuhn’s work has been very liberating, allowing scientific knowledge to be regarded as somehow more tentative than before.

  28

  MIND MINUS METAPHYSICS

  At the end of 1959 the film director Alfred Hitchcock was producing a movie in absolute secrecy. Around the lot at Revue Studios, part of Universal Pictures in Los Angeles, the film was known on the clapper board and in company designation by its codename, ‘Wimpy.’ When it was ready, Hitchcock wrote to film critics in the press, begging them not to give away the ending and announcing at the same time that no member of the public would be allowed in to the film after it had started.

  Psycho was a screen ‘first’ in many different ways. Hitherto Hitchcock had directed top-quality murder stories, set in exotic locations and usually made in Technicolor. In deliberate contrast, Psycho was cheap in appearance, filmed in black and white, and focused on an area of sleaze.1 There were unprecedented scenes of violence. Most arresting of all, however, was the treatment of madness. The film was actually based on the real-life case of Ed Gein, a ‘cannibalistic Wisconsin killer’ whose terrible deeds also inspired The Texas Chain Saw Massacre and Deranged. In Psycho, Hitchcock – fashionably enough – pinpointed the source of Norman Bates’s homicidal mania in his narrow and inadequate family and sexual history.2

  The film starred Anthony Perkins and Janet Leigh, both of whom worked for Hitchcock for well below their usual fee in order to gain experience with a master storyteller (Leigh’s character was actually killed off halfway through the film, another innovation). The film is rich in visual symbolism meant to signify madness, schizophrenia in particular. Apart from the gothic setting in a gingerbread-house motel on a stormy night, each of the characters has something to hide – whether it is an illicit affair, stolen cash, a concealed identity, or an undiscovered murder. Mirrors are widely used to alter images, which are elsewhere sliced in two to suggest the reversal of reality and the cutting, split world of the violently insane.3 Anthony Perkins, who pretends he is in thrall to his mother when in reality he has killed her long ago, spends his time ‘stuffing birds’ (nightbirds, like owls, which also watch him). All this tension builds to what became the most famous scene in the film, the senseless slashing of Janet Leigh in the shower, where ‘the knife functions as a penis, penetrating the body in a symbolic rape’ and the audience watches – horrified and enthralled – as blood gurgles down the drain of the shower.4 Psycho is in fact a brilliant example of a device that would become much debased as time passed – the manipulation of the cinema audience so that, to an extent, it understands, or at least experiences, the conflicting emotions wrapped up in a schizophrenic personality. Hitchcock is at his most cunning when he has the murderer, Perkins/Bates, dispose of Janet Leigh’s body by sinking it in a car in a swamp. As the car is disappearing in the mud, it suddenly stops. Involuntarily, the audience wills the car to disappear – and for a moment is complicit in the crime.5

  The film received a critical pasting when it was released, partly because the critics hated being dictated to over what they could and could not reveal. ‘I remember the terrible panning we got when Psycho opened,’ Hitchcock said. ‘It was a critical disaster.’ But the public felt otherwise, and although the movie cost only $800,000 to make, Hitchcock alone eventually recouped more than $20 million. In no time the movie became a cult. ‘My films went from being failures to masterpieces without ever being successes,’ said Hitchcock.6

  Attempts to understand the mentally ill as if their sickness were a maladaptation, a pathology of logic or philosophy rather than a physical disease, have a long history and are at the root of the psychoanalytic school of psychiatry. In the same year as Hitchcock’s film, a psychoanalytic book appeared in Britain that also quickly achieved cult status. Its author was a young psychiatrist from Glasgow in Scotland who described himself as an existentialist and went on to become a fashionable poet. This idiosyncratic career path was mirrored in his theories about mental illness. In The Divided Self, Ronald D. Laing applied Sartre’s existentialism to frankly psychotic schizophrenics in an attempt to understand why they went mad. Laing was one of the leaders of a school of thought (David Cooper and Aaron Esterson were others) which argued that schizophrenia was not an organic illness, despite evidence even then that it ran in families and was therefore to some extent inherited, but represented a patient’s private response to the environment in which he or she was raised. Laing and his colleagues believed in an entity they labelled the ‘schizophrenogenic’ – or schizophrenia-producing – family. In The Divided Self and subsequent books, Laing argued that investigation of the backgrounds of schizophrenics showed that they had several things in common, the chief of which was a family, in particular a mother, who behaved in such a way that the person’s sense of self became separated from his or her sense of body, so that life became a series of ‘games’ which threatened to engulf the patient.7

  The efficacy of Laing’s theories, and their success or otherwise in generating treatment, will be returned to in just a moment, but Laing was important in more than the merely clinical sense: insofar as his approach represented an attempt to align existential philosophy with Freudian psychology, his theories were part of an important crossover that took place between about 1948 and the mid-1960s. This period saw the death of metaphysics as it had been understood in the nineteenth century. It was philosophers who laid it to rest, and ironically, one of the chief culprits was the Waynflete Professor of Metaphysical Philosophy at Oxford University, Gilbert Ryle. In The Concept of Mind, published in 1949, Ryle delivered a withering attack on the traditional, Cartesian concept of duality, which claimed an essential difference between mental and physical events.8 Using a careful analysis of language, Ryle gave what he himself conceded was a largely behaviourist view of man. There is no inner life, Ryle said, in the sense that a ‘mind’ exists independently of our actions, thoughts, and behaviours. When we ‘itch’ to do something, we don’t really itch in the sense that we itch if a mosquito bites us; when we ‘see’ things ‘in our mind’s eye,’ we don’t see them in the way that we see a green leaf. This is all a sloppy use of language, he says, and most of his book is devoted to going beyond this sloppiness. To be conscious, to have a sense of self, is not a byproduct of the mind; it is the mind in action. The mind does not, as it were, ‘overhear’ us having our thoughts; having the thoughts is the mind in action.9 In short, there is no ghost in the machine – only the machine. Ryle examined the will, imagination, intellect, and emotions in this way, demolishing at every turn the traditional Cartesian duality, ending with a short chapter on psychology and behaviourism. He took psychology to be more like medicine – an agglomeration of loosely connected inquiries and techniques – than a proper science as generally understood.10 In the end, Ryle’s book was more important for the way it killed off the old Cartesian duality than for anything it did for psychology.

  While Ryle was developing his ideas in Oxford, Ludwig Wittgenstein was pursuing a more or less parallel course in Cambridge. After he had published Tractatus Logico-Philosophicus in 1921, Wittgenstein abandoned philosophy for a decade, but he returned in 1929 to Cambridge, where at first he proceeded to dismantle the philosophy of the Tractatus, influential though that had been, and replace it with a view that was in some respects diametrically opposite. Throughout the 1930s and the 1940s he published nothing, feeling ‘estranged’ from contemporary Western civilisation, preferring to exert his influence through teaching (the ‘deck-chair’ seminars that Turing had attended).11 Wittgenstein’s second masterpiece, Philosophical Investigations, was published in 1953, after his death from cancer in 1951, aged sixty-two.12 His new view took Ryle’s ideas much further. Essentially, Wittgenstein thought that many philosophical problems are false problems, mainly because we are misled by language. All around us, says P. M. S. Hacker, who wrote a four-volume commentary on Philosophical Investigations, are grammatical similarities that mask profound logical differences. ‘Philosophical questions are frequently not so much questions in search of an answer as questions in search of a sense. “Philosophy is a struggle against the bewitchment of our understanding by means of language.”’ For example, ‘the verb “to exist” looks no different from such verbs as “to eat” or “to drink”, but while it makes sense to ask how many people in College don’t eat meat or drink wine, it makes no sense to ask how many people in College don’t exist.’13

 
