by Peter Watson
The second area of disagreement arose in 1979, in a paper by Gould and Lewontin in the Proceedings of the Royal Society, entitled ‘The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme.’58 The central point of this paper, which explains the strange architectural reference, is that a spandrel, the tapering triangular space formed by the intersection of two rounded arches at a right angle, isn’t really a design feature. Gould and Lewontin had seen these features at San Marco in Venice and concluded that they were inevitable by-products of other, more important features – i.e., the arches. Though harmonious, they were not really ‘adaptations’ to the structure, but simply what was left when the main design was put in place. Gould and Lewontin thought there were parallels to be drawn with regard to biology: not all features seen in nature were direct adaptations, and to assume that they were, they said, was Panglossian. Instead, there were biological spandrels that were also by-products. As with punctuated equilibrium, Gould and Lewontin thought that the spandrel approach was a radical revision of Darwinism. A claim was even made for language being a biological spandrel, an emergent phenomenon that came about by accident, in the course of the brain’s development in other directions. This was too much, and too important, to be left alone by Dawkins, Dennett, and others. It was shown that even in architecture a spandrel isn’t inevitable – there are other ways of treating what happens where two arches meet at right angles – and again, like punctuated equilibrium, the idea of language as a spandrel, a by-product of some other set of adaptations, has not really stood the test of time.
The third area where Gould differed from his colleagues came in 1989 in his book Wonderful Life.59 This was a reexamination and retelling of the story of the Burgess Shale, a fossil-rich rock formation in British Columbia, Canada, which has been well known to geologists and palaeontologists since the turn of the century. The lesson that Gould drew from these studies was that an explosion of life forms occurred in the Cambrian period, ‘far surpassing in variety of bodily forms today’s entire animal kingdom. Most of these forms were wiped out in mass extinctions; but one of the survivors was the ancestor of the vertebrates, and of the human race.’ Gould went on to say that if the ‘tape’ of evolution were to be run again, it need not turn out in the same way – a different set of survivors would be here now. This was a notable heresy, and once again prevailing scientific opinion has turned against Gould. As we saw in the section on Dennett and Kauffman, only a certain number of design solutions exist to any problem, and the general feeling now is that, if one could run evolution all over again, something very like humans would result. Even Gould’s account of the Burgess Shale has been attacked. In The Crucible of Creation, published in 1998, Simon Conway Morris, part of the palaeontological group from Cambridge that has spent decades studying the Shale, concluded that in fact the vast army of trilobites does fit with accepted notions of evolution; comparisons can be made with living animal families, although we may have made mistakes with certain groupings.60
One might think that the repeated rebuffs which Gould received to his attempts to reshape classical Darwinism would have dampened his enthusiasm. Not a bit of it. And in any case, the fourth area where he, Lewontin, and others have differed from their neo-Darwinist colleagues has had a somewhat different history. Between 1981 and 1991, Gould and Lewontin published three books that challenged in general the way ‘the doctrine of DNA,’ as Lewontin put it, had been used, again to quote Lewontin, to ‘justify inequalities within and between societies and to claim that those inequalities can never be changed.’ In The Mismeasure of Man (1981), Gould looked at the history of the controversy over IQ, what it means, and how it is related to class and race.61 In 1984 Lewontin and two others, Steven Rose and Leon J. Kamin, published Not in Our Genes: Biology, Ideology and Human Nature, in which they rooted much biology in a bourgeois political mentality of the nineteenth century, arguing that the quantification of such things as IQ is crude and that attempts to describe mental illness purely as a biochemical condition avoid certain politically inconvenient facts.62 Lewontin took this further in 1991 in The Doctrine of DNA, where he argued that the doctrine of DNA fits perfectly into the prevailing ideology, which assumes that the link between cause and effect is simple, mainly one-to-one; that for the present DNA research holds out no prospect of a cure for the major illnesses that affect mankind – for example, cancer, heart disease and stroke – and that the whole edifice is designed more to reward scientists than to help science, or patients. Most subversive of all, he writes, ‘It has been clear since the first discoveries in molecular biology that “genetic engineering,” the creation to order of genetically altered organisms, has an immense possibility for producing private profit…. No prominent molecular biologist of my acquaintance is without a financial stake in the biotechnology business.’63 He believes that human nature, as described by evolutionary biologists such as E. O. Wilson, is a ‘made-up story,’ designed to fit the theories the theorists already hold.
Given the approach of Gould and Lewontin in particular, it comes as no surprise to find them fully embroiled in yet another (but very familiar) biological controversy, which erupted in 1994 with the publication of Richard J. Herrnstein and Charles Murray’s The Bell Curve: Intelligence and Class Structure in American Life.64
Ten years in the making, The Bell Curve advanced a twofold argument. In some places, it is straight out of Michael Young’s Rise of the Meritocracy, though Herrnstein and Murray are no satirists but in deadly earnest. In the twentieth century, they say, as more and more colleges have opened up to the general population, as IQ tests have improved and been shown to be better predictors of job performance than other indicators (such as college grades, interviews, or biographical data), and as the social environment has become more uniform for most of the population, a ‘cognitive elite’ has begun to emerge in society. Three phenomena result from this sorting process, and mean that it will accelerate in the future: the cognitive elite is getting richer, at a time when everybody else is having to struggle to stay even; the elite is increasingly segregated physically from everyone else, especially at work and in the neighbourhoods they inhabit; and the cognitive elite is increasingly likely to intermarry.65 Herrnstein and Murray also analysed afresh the results of the National Longitudinal Survey of Youth (NLSY), a database of about 4 million Americans drawn from a population that was born in the 1960s. This enabled them to say, for example, that low intelligence is a stronger precursor of poverty than coming from a low socioeconomic status background, that students who drop out of school come almost entirely from the bottom quartile of the IQ distribution (i.e., the lowest 25 percent), and that low-IQ people are more likely to divorce early on in married life and to have illegitimate children. They found that low-IQ parents are more likely to be on welfare and to have low-birthweight children, and that low-IQ men are more likely to be in prison. Then there was the racial issue. Herrnstein and Murray spend a lot of time prefacing their remarks by saying that a high IQ does not necessarily make someone admirable or the kind of person to be cherished, and they concede that the racial differences in IQ are diminishing. But, after controlling for education and poverty, they still find that people of Asian stock in America outperform ‘whites,’ who outperform blacks on tests of IQ.66 They also find that recent immigrants to America have a lower IQ score than native-born Americans. And finally, they voice their concerns that the IQ level of America is declining. This is due partly, they say, to a dysgenic trend – people of lower IQ are having more children – but that is not the only reason. In practice, the American schooling system has been ‘dumbed down’ to meet the needs of average and below-average students, which means that the performance of the average students has not, contrary to popular opinion, been adversely affected. It is the brighter students who have been most affected, their SAT (Scholastic Aptitude Test) scores dropping by 41 percent between 1972 and 1993. They also blame parents, who seem no longer to want their children to work hard, and television, which has replaced newsprint as a source of information, and the telephone, which has replaced letter writing as a form of self-expression.67 Further, they express their view that affirmative-action programs have not helped disadvantaged people, and indeed have made their situation worse. But it is the emergence of the cognitive elite, this ‘invisible migration,’ the ‘secession of the successful,’ and the blending of the interests of the affluent with those of the cognitive elite that Herrnstein and Murray see as the most important, and pessimistic, of their findings. This elite, they say, will fear the ‘underclass’ that is emerging, and will in effect control it with ‘kindness’ (which is basically what Murray’s rival, J. K. Galbraith, had said in The Culture of Contentment). They will provide welfare for the underclass so long as it is out of sight and out of mind. They hint, though, that such measures are likely to fail: ‘racism will re-emerge in a new and more virulent form.’68
Herrnstein and Murray are traditionalists. They would like to see a return to old-fashioned families, small communities, and the familiar forms of education, where pupils are taught history, literature, arts, ethics, and the sciences in such a way as to be able to weigh, analyse, and evaluate arguments according to exacting standards.69 For them, the IQ test not only works – it is a watershed in human society. Allied to the politics of democracy and the homogenising successes of modern capitalism, the IQ aids what R. A. Fisher called runaway evolution, promoting the rapid layering of society, divided according to IQ – which, of course, is mainly inherited. We are indeed witnessing the rise of the meritocracy.
The Bell Curve provoked a major controversy on both sides of the Atlantic. This was no surprise: throughout the century, white people, those on the ‘right’ side of the divide Herrnstein and Murray were describing, had concluded that whole segments of the population were dumb. What sort of reaction did they expect? Many people countered the claims of Herrnstein and Murray, with at least six other books being produced in 1995 or 1996 to examine (and in many cases refute) the arguments of The Bell Curve. Stephen Jay Gould’s The Mismeasure of Man was reissued in 1996 with an extra chapter giving his response to The Bell Curve. His main point was that this was a debate that needed technical expertise. Too many of the reviewers who had joined the debate (and the book provoked nearly two hundred reviews or associated articles) did not feel themselves competent to judge the statistics, for example. Gould did, and dismissed them. In particular, he attacked Herrnstein and Murray’s habit of giving the form of the statistical association but not its strength. When this was examined, he said, the links they had found always explained less than 20 percent of the variance, ‘usually less than 10 percent and often less than 5 percent. What this means in English is that you cannot predict what a given person will do from his IQ score.’70 This was the conclusion Christopher Jencks had arrived at thirty years before.
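Gould’s distinction between the form and the strength of a statistical association rests on a standard piece of statistics: the proportion of variance in an outcome that a correlation ‘explains’ is the square of the correlation coefficient. As a purely illustrative calculation, with a correlation of 0.4 assumed here rather than taken from Gould or from The Bell Curve:

\[ r = 0.4 \quad\Longrightarrow\quad r^{2} = 0.4^{2} = 0.16 \]

In other words, even a seemingly respectable correlation of 0.4 between IQ and some outcome accounts for only 16 percent of the variance in that outcome, leaving the remaining 84 percent to other factors, which is why an individual’s score predicts so little about what he or she will actually do.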
By the time The Bell Curve rumpus erupted, the infrastructure was in place for a biological project capable of generating controversy on an even bigger scale. This was the scramble to map the human genome: to describe exactly all the nucleotides that constitute man’s inheritance, a map that, in time, will offer at least the possibility of interfering in our genetic makeup.
Interest in this idea grew throughout the 1980s. Indeed, it could be said that the Human Genome Project (HGP), as it came to be called, had been simmering since Victor McKusick, a doctor at Johns Hopkins in Baltimore, began collecting a comprehensive record, ‘Mendelian Inheritance in Man,’ a list of all known genetic diseases, first published in 1966.71 But then, as research progressed, first one scientist and then another began to see sense in mapping the entire genome. On 7 March 1986, in Science, Renato Dulbecco, Nobel Prize-winning president of the Salk Institute, startled his colleagues by asserting that the war on cancer would be won more quickly if geneticists were to sequence the human genome.72 Various U.S. government departments, including the Department of Energy and the National Institutes of Health, became interested at this point, as did scientists in Italy, the United Kingdom, Russia, Japan, and France (in roughly that order; Germany lagged behind, owing to the controversial role biology had played in Nazi times). A major conference, organised by the Howard Hughes Medical Institute, was held in Washington in July 1986 to bring together the various interested parties, and this had two effects. In February 1988 the U.S. National Research Council issued its report, Mapping and Sequencing the Human Genome, which recommended a concerted research program with a budget of $200 million a year.73 James Watson, appropriately enough, was appointed associate director of the NIH later that year, with special responsibility for human genome research. And in April 1988, HUGO, the Human Genome Organisation, was founded: a consortium of international scientists intended to spread the load of research and keep duplication to a minimum, the aim being to finalise the mapping as early as possible in the twenty-first century. The experience of the Human Genome Project has not been especially happy. In April 1992 James Watson resigned his position over an application by certain NIH scientists to patent their sequences. Watson, like many others, felt that the human genome should belong to everyone.74
The genome project came on stream in 1988–89. This was precisely the time when communism was collapsing in the Soviet Union and the Berlin Wall was coming down. A new era was beginning politically, and in the intellectual field too. For HUGO was not the only major innovation introduced in 1988. That year also saw the birth of the Internet.
Whereas James Watson took a leading role in the genome project, his former colleague and co-discoverer of the double helix, Francis Crick, took a similar position in what is perhaps the hottest topic in biology as we enter the twenty-first century: consciousness studies. In 1994 Crick published The Astonishing Hypothesis, which advocated a research assault on this final mystery.75 Consciousness studies naturally overlap with neurological studies, where there have been many advances in identifying different structures of the brain, such as language centres, and where MRI, magnetic resonance imaging, can show which areas are being used when people are merely thinking about the meaning of words. But the study of consciousness itself is still as much a matter for philosophers as for biologists. As John Maddox put it in his 1998 book, What Remains to Be Discovered, ‘No amount of introspection can enable a person to discover just which set of neurons in which part of his or her head is executing some thought-process. Such information seems to be hidden from the human user.’76
It should be said that some people think there is nothing to explain as regards consciousness. They believe it is an ‘emergent property’ that automatically arises when you put a ‘bag of neurons’ together. Others think this view absurd. A good explanation of an emergent property is given by John Searle, Mills Professor of Philosophy at the University of California, Berkeley, regarding the liquidity of water: the behaviour of the H2O molecules explains liquidity, but the individual molecules are not liquid. At the moment, the problem with consciousness is that our understanding is so rudimentary that we don’t even know how to talk about it – even after the ‘Decade of the Brain,’ which was adopted by the U.S. Congress on 1 January 1990.77 This inaugurated many innovations and meetings that underlined the new fashion for consciousness studies. For example, the first international symposium on the science of consciousness was held at the University of Arizona in Tucson in April 1994, attended by no fewer than a thousand delegates.78 In that same year the first issue of the Journal of Consciousness Studies was published, with a bibliography of more than 1,000 recent articles. At the same time a whole raft of books about consciousness appeared, of which the most important were: Neural Darwinism: The Theory of Neuronal Group Selection, by Gerald Edelman (1987), The Remembered Present: A Biological Theory of Consciousness, by Edelman (1989), The Emperor’s New Mind, by Roger Penrose (1989), The Problem of Consciousness, by Colin McGinn (1991), Consciousness Explained, by Daniel Dennett (1991), The Rediscovery of the Mind, by John Searle (1992), Bright Air, Brilliant Fire, by Edelman (1992), The Astonishing Hypothesis, by Francis Crick (1994), Shadows of the Mind: A Search for the Missing Science of Consciousness, by Roger Penrose (1994), and The Conscious Mind: In Search of a Fundamental Theory, by David Chalmers (1996). Other journals on consciousness were also started, and there were two international symposia on the subject at Jesus College, Cambridge, published as Nature’s Imagination (1994) and Consciousness and Human Identity (1998), both edited by John Cornwell.
Thus consciousness has been very much the flavour of the decade, and it is fair to say that those involved in the subject fall into four camps. There are those, like the British philosopher Colin McGinn, who argue that consciousness is resistant to explanation in principle and for all time.79 Philosophers we have met before – such as Thomas Nagel and Hilary Putnam – also add that at present (and maybe for all time) science cannot account for qualia, the first-person phenomenal experience that we understand as consciousness. Then there are two types of reductionist. Those like Daniel Dennett, who claim not only that consciousness can be explained by science but that the construction of an artificially intelligent machine that will be conscious is not far off, may be called the ‘hard’ reductionists.80 The soft reductionists, typified by John Searle, believe that consciousness does depend on the physical properties of the brain but think we are nowhere near solving just how these processes work, and dismiss the very idea that machines will ever be conscious.81 Finally, there are those like Roger Penrose who believe that a new kind of dualism is needed, that in effect a whole new set of physical laws may apply inside the brain, laws that account for consciousness.82 Penrose’s particular contribution is the idea that quantum effects operate within tiny structures, known as microtubules, within the nerve cells of the brain to produce – in some as yet unspecified way – the phenomena we recognise as consciousness.83 Penrose actually thinks that we live in three worlds – the physical, the mental, and the mathematical: ‘The physical world grounds the mental world, which in turn grounds the mathematical world and the mathematical world is the ground of the physical world and so on around the circle.’84 Many people who find this tantalising nonetheless don’t feel Penrose has proved anything. His speculation is enticing and original, but it is still speculation.