The Unnatural Nature of Science


by Lewis Wolpert


  We may like to see ourselves as naturally rational and logical, but there is a lot of good evidence that this is not always so. While everyday thinking can show some adherence to logical rules, that adherence is affected by the content of the problem, and so the formal rules break down. This can be illustrated by what is now recognized as a classic and seminal experiment. Imagine you are presented with four cards, each of which has a letter on one side and a number on the other. The four cards when placed on the table show A, J, 2 and 7. Your task is to decide which cards should be turned over in order to determine the truth or falsity of the following statement: ‘If there is a vowel on one side of the card then there is an even number on the other side.’ Most people correctly turn over the card bearing the A, and some turn the card with 2 on it. Few choose the card with 7, even though this is a logical choice – for if there were a vowel on the other side of the 7 the rule would be falsified. Turning over the J or the 2 tells one nothing. Whatever is on the other side of the 2 will not provide useful information, since whether it is a vowel or a consonant will not determine the validity of the rule. This experiment also shows the preference that people – including scientists – have for trying to confirm hypotheses rather than for trying to refute them.
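
  As a minimal illustrative sketch, not part of the original text, the following Python fragment spells out the logic of the card task: a card is worth turning over only if some possible hidden face could falsify the rule ‘if vowel, then even number’.

```python
# A minimal sketch, not from the book: for each visible face, ask whether any
# possible hidden face could falsify the rule
# "if there is a vowel on one side, then there is an even number on the other".

VOWELS = set("AEIOU")

def could_falsify(visible):
    """Return True if some hidden face of this card could break the rule."""
    if visible.isalpha():
        # The hidden face is a number; only a vowel card can be broken (by an odd number behind it).
        return visible in VOWELS
    # The hidden face is a letter; only an odd number can be broken (by a vowel behind it).
    return int(visible) % 2 != 0

for card in ["A", "J", "2", "7"]:
    print(card, "turn over" if could_falsify(card) else "irrelevant")
# A: turn over, J: irrelevant, 2: irrelevant, 7: turn over
```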

  One area of day-to-day thinking which has been shown to be particularly prone to errors is that which involves probabilities and judgements which have to be made on the basis of uncertain information. Many scientific investigations have to be done under precisely such conditions, and the scientist has somehow to become free from the all-too-common errors.

  Children have a limited understanding of chance: they believe that outcomes of games based on chance can be influenced by practice, intelligence and effort. Adults, too, have difficulty with probabilities and the nature of chance. If you are playing roulette and red has come up five times running, is the chance of black greater on the next spin? The answer is ‘no’, and the contrary expectation is known as the ‘gambler’s fallacy’. Again, if, in spinning a coin, heads has come down ten times running, the probability of a tail or a head at the next spin is still 0.5 – evens. The coin has no memory. Given an evenly balanced coin, many people believe that a sequence H-T-H-T-H-T is much more likely than H-H-H-H-H-H, whereas in fact both are equally likely.
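
  As a minimal sketch, not part of the original text, the point about the coin can be made explicit: every particular sequence of six flips of a fair coin has the same probability.

```python
# A minimal sketch, not from the book: for a fair coin, every specific sequence
# of six flips has the same probability, (1/2)**6 = 1/64, regardless of pattern.

p_alternating = 0.5 ** 6   # H-T-H-T-H-T
p_all_heads = 0.5 ** 6     # H-H-H-H-H-H
print(p_alternating, p_all_heads)   # both 0.015625, i.e. 1 in 64
```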

  Correct probability judgements are often counter-intuitive. Striking coincidences often lead to ideas of supernatural forces at play. For example, to hear that a woman had won the New Jersey lottery twice in four months seemed remarkable, and the odds against it were claimed to be 17 trillion (17 × 10¹²) to 1. But further analysis showed that the chance that such an event could happen to someone, somewhere, in the United States was about one in thirty, because so many people take lottery tickets. Another example is that it only requires twenty-three people to be together in a room for the probability of two of them having the same birthday to be one in two.
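
  The birthday figure can be checked directly. The following is a minimal sketch, not from the original text, assuming 365 equally likely birthdays and ignoring leap years.

```python
# A minimal sketch, not from the book, of the birthday calculation: the chance
# that at least two of n people share a birthday, assuming 365 equally likely
# birthdays and ignoring leap years.

def p_shared_birthday(n):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 3))   # about 0.507: just over one in two
```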

  There was, a little while ago, a spate of articles in newspapers in the USA which suggested a link between teenage suicide and a game called ‘Dungeons & Dragons’. It was said that the game could become an obsession and lead to a loss of a sense of reality. Evidence to support this claim was that twenty-eight teenagers who often played the game had committed suicide. However, the game had sold millions of copies, and probably as many as 3 million teenagers played it. Since the annual suicide rate for teenagers is about twelve per 100,000, the number of expected suicides in a teenage population of 3 million is about 360. So, finding twenty-eight such suicides has little or no significance on its own.
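
  The arithmetic behind that conclusion is simple enough to set out explicitly; the sketch below is not from the original text, but uses only the figures it gives.

```python
# A minimal sketch, not from the book, of the expected-value arithmetic behind
# the 'Dungeons & Dragons' example. The 3 million players and the rate of
# 12 per 100,000 are the text's own figures.

annual_rate = 12 / 100_000      # teenage suicide rate per person per year
players = 3_000_000             # estimated teenage players
expected_suicides = annual_rate * players
print(expected_suicides)        # 360.0, far more than the 28 cases reported
```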

  These examples of failure to appreciate the nature of probabilities and statistical thinking are particularly important when it comes to assessment of risk. It is, for example, rarely appreciated that it is almost impossible to ensure that a drug does not cause a death rate of, say, one in 100,000. Indeed, the basis for clinical trials is rarely appreciated. In order to show the efficacy of a particular drug or medical treatment, it is essential to follow a rigorous procedure for the selection of a sample group, some of whom will be treated and some of whom will not. The assignment to the treated or non-treated group must be random, and wherever possible doctors themselves should not be aware of who is being given which treatment. Moreover, the results will require a careful statistical analysis. Such expensive trials are essential, but a 1 in 100,000 death rate due to the drug would require an enormous sample. Anecdotal collections of cases in which cures of, for example, cancer are claimed can be very misleading.
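
  One standard way to see why the sample must be enormous, offered here as an illustrative sketch rather than anything from the text, is the statistical ‘rule of three’: if no drug-related deaths are observed among n treated subjects, the 95 per cent upper confidence bound on the death rate is roughly 3/n.

```python
# A minimal sketch, not from the book, using the statistical "rule of three":
# if zero drug-related deaths are seen in n treated subjects, the 95 per cent
# upper bound on the true death rate is roughly 3/n. To rule out a death rate
# of 1 in 100,000, n must therefore be of the order of 300,000.

target_rate = 1 / 100_000
n_required = 3 / target_rate
print(int(n_required))   # 300000 treated subjects, with no deaths observed
```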

  An important class of error is based on what is known as representativeness – that is, the degree to which one event is representative of another is judged by how closely they resemble one another. For example, experimental subjects were given descriptions of men taken from a group that comprised 70 per cent lawyers and 30 per cent engineers and were asked to assess the profession of each man described. Even though the subjects knew the composition of the group, and thus should have seen that the probability of being a lawyer was more than twice that of being an engineer, they nevertheless consistently judged a description to refer to an engineer if it contained even the slightest hint, no matter how unconvincing, of something that fitted their stereotyped image of an engineer. They ignored the probabilities involved in selecting a single case from a population of known composition. And this tendency was even more pronounced when assessing the reliability of small samples. Subjects are, for example, very bad at judging the likelihood that more than 60 per cent of the babies born on a given day will be boys in a large as against a small maternity hospital. They usually think that there will be no difference, whereas in fact, with a small sample, the fluctuations in the percentage of boys at a small hospital are very much greater, because each birth represents a greater percentage of the total. In fact most of us have a poor intuitive understanding of the importance of chance where small numbers are involved.
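
  The maternity-hospital effect can be made concrete with a short simulation. This is a minimal sketch, not from the original text, and the hospital sizes of 15 and 45 births a day are illustrative assumptions echoing the classic version of the problem.

```python
# A minimal sketch, not from the book, of the maternity-hospital example.
# The hospital sizes (15 and 45 births a day) are illustrative assumptions.

import random

def fraction_of_days_over_60pc_boys(births_per_day, days=50_000):
    """Simulate `days` days and count how often boys exceed 60 per cent of births."""
    over = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            over += 1
    return over / days

random.seed(0)
print("small hospital:", fraction_of_days_over_60pc_boys(15))   # roughly 0.15
print("large hospital:", fraction_of_days_over_60pc_boys(45))   # roughly 0.07
```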

  Representativeness also results in people having much greater confidence in their ability to predict than is in fact warranted. A superficial match between, for example, the input and the outcome generates a confidence which ignores all those factors that would limit the validity of the prediction. For example, staff at medical schools select students and believe in their ability to select correctly. But they can later judge only those students whom they have selected: they cannot compare them with those whom they rejected. This is well illustrated by psychologists’ confidence in their ability to select the best candidates at interview, even though they know of the extensive literature showing quite conclusively how unreliable interviews are. They cannot restrain their conviction of their own reliability.

  Another example is where people judge frequency according to a method which depends on the information available to them – that is to say, they estimate frequency in terms of the examples that come to mind. Thus most people believe that there are more words beginning with the letter R than there are words which have R as the third letter, because words beginning with R are easier to think of. Similarly, they give a much lower estimate for 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 than for 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1, and in both cases it is far too low. Typical answers are around 500, whereas the correct answer is 40,320. The plausibility of the scenarios that come to mind serves as an indication of the likelihood of an event. If no reasonable scenario comes to mind, the event is deemed impossible or highly unlikely; if, however, many scenarios come to mind, the event in question appears probable. Even physicians tend to have distorted ideas about the dangers of various diseases that are frequently referred to in medical journals, irrespective of their true incidence.

  We tend to generalize from our own experience, and so there is a tendency to believe illusory correlations ranging from ‘fat people are jolly’ to ‘if you wash your car it will rain soon afterwards’ and all sorts of theories about illness. Even psychologists have been known to find correlations between projective tests when none were later shown to exist. However, simple associations are probably very useful in everyday life.

  There is in general a preference for simple rather than complex explanations. It is possible to understand such a predisposition in evolutionary terms. For primitive humans it would have been an evolutionary advantage to learn about the environment rapidly and to infer causal relationships. Selection for a brain that could directly appreciate probabilistic events and counter-intuitive results would seem to be extremely unlikely in a hostile environment where rapid and immediate judgements are required. And the use of tools and the development of technologies such as metalworking and agriculture do not require scientific thinking. But to do science it is necessary to be rigorous and to break out of many of the modes of thought imposed by the natural thinking associated with ‘common sense’.

  2

  Technology is not Science

  Much of modern technology is based on science, but this recent association conceals crucial differences, and the failure to distinguish between science and technology has played a major role in obscuring the nature of science. To put it briefly, science produces ideas, whereas technology results in the production of usable objects. Technology – by which I mean the practical arts – is very much older than science. Unaided by science, technology gave rise to the crafts of primitive man, such as agriculture and metalworking, the Chinese triumphs of engineering, Renaissance cathedrals, and even the steam engine. Not until the nineteenth century did science have an impact on technology. In human evolution the ability to make tools, and so control the environment, was a great advantage, but the ability to do science was almost entirely irrelevant.

  For some historians, science began whenever and wherever humans tried to solve the innumerable problems of dealing with the environment. For them, technology, starting with toolmaking, is problem-solving and hence science. In fact the crafts associated with agriculture, animal domestication, metalworking, dyeing and glass-making were present thousands of years before the appearance of what we think of as science. In The Savage Mind, the French anthropologist Claude Lévi-Strauss argues that ‘Each of these techniques assumes centuries of active and methodical observation, of bold hypothesis, tested by means of endlessly repeated experiments.’ Put in this way, he makes it sound like a formula for doing science and makes it seem that primitive technology involved mental processes very similar to those of science. But did this early development of technology involve bold hypotheses?

  Lévi-Strauss has no doubt that neolithic or early historical man was heir to a long scientific tradition. If he is right, then there is a paradox, as he forcefully points out. If neolithic culture was inspired by ‘scientific’ thinking similar to our own, it is impossible to understand how several thousand years of stagnation intervened between the neolithic revolution and modern science. For Lévi-Strauss there is only one solution to the paradox, namely that there are two distinct modes of scientific thought, two strategic levels at which nature is accessible to scientific enquiry: one roughly adapted to perception and imagination; the other at a remove from it. The ‘science of the concrete … was not less scientific and its results no less genuine. They were secured ten thousand years earlier and still remain at the basis of our own civilization.’ But, as I will try to show, Lévi-Strauss’s two modes of thought are, in fact, science and technology – and technology requires no understanding or theory of the kinds provided by science.

  Agriculture was already in progress at about 7000 BC, when man passed from hunting and gathering to food-producing. Cattle were probably domesticated at this time, but there is no reason to believe that the farmers had any more understanding of the science involved in agriculture than most Third World farmers have today. They relied on their experience and learned from their mistakes. Of course there was inventiveness, but this inventiveness was of the same kind involved in primitive toolmaking: it was an acquired skill, based on learning and closely linked to common sense. There is no reason to distinguish such inventiveness from an extension of the ability of chimpanzees to manipulate their environment to achieve a particular goal. Classic examples of such behaviour include their ability to join two sticks together to get bananas from a hook too high up for them to reach with their hands. This in no way lessens the achievements of early technology, but it does help distinguish it from science.

  By 3500 BC there was already a high degree of competence in metalworking, and by 3000 BC Mesopotamian craftsmen mixed copper and tin in varying proportions to produce different sorts of bronze. The kilns must have produced temperatures of over 1,000°C. In the case of glassworking, there is a text from around 1600 BC found near Baghdad which gives a description of how to make a green glaze. Essentially it is a recipe. It begins, ‘Take a mina of zuku glass together with ten shekels of lead, fifteen shekels of copper …,’ and it continues with detailed instructions as to how to proceed: ‘Dip the pot in this glaze, then lift it out, fire it and leave it to cool. Inspect the result: if the glaze resembles marble, all is well. Put it back in the kiln again …’ Mixed in with such practical injunctions there were also ritual ‘magical’ actions. For example, from the seventh century BC there are instructions that the glass-furnace must be built at an auspicious time, a shrine must be installed and the deities placated. ‘When laying out the ground-plan for the glass-furnace, find out a favourable day in a lucky month for such work … Do not allow any stranger to enter the building … Offer the due libations to the gods daily.’

  Copper-making was well developed on the coast of Peru as early as 500 BC, many hundreds of years before the arrival of the Spaniards. Evidence from furnaces dating from around AD 1000 suggests that smelting was associated with solemn rituals and offerings to deities.

  The technological achievement of the ancient cultures was enormous, and Lévi-Strauss is right to pose the question of how it was achieved. But whatever process was involved, it was not based on science. There is no evidence of any theorizing about the processes involved in the technology nor about the reasons why it worked: for example, it was enough to know that adding charcoal to the molten mixture would accelerate the smelting of iron. Metalworking was an essentially practical craft based on common sense. The goals of the ordinary person in those times were practical ends such as sowing and hunting, and that practical orientation does not serve pure knowledge. Our brains have been selected to help us survive in a complex environment; the generation of scientific ideas plays no role in this process.

  As technology became more advanced and resulted in more complicated inventions like the telescope, compass and steam engine, it might be thought that science, which was by then itself quite advanced, would have made significant contributions to these inventions, even if it played no role in early primitive technology. This is not the case. As will now be shown, science did almost nothing to aid technology until the nineteenth century, when it had an impact on synthetic-dye production and electrical power.

  Galileo understood quite clearly that the technology of his time, the early seventeenth century, was not based on science. The inventor of eyeglasses and the telescope is unknown, and Galileo comments on this: ‘We are certain the first inventor of the telescope was a simple spectacle-maker who, handling by chance different forms of glasses, looked, also by chance, through two of them, one convex, one concave, held at different distances from the eye; saw and noted the unexpected result; and thus found the instrument.’ Galileo himself improved the telescope by trial and error, aided by his skill as an instrument-maker, and not by his understanding of optics.

  Francis Bacon, unlike his contemporary Galileo, was confused about the relation between science and technology, and he drew no real distinction between them. ‘Science also must be known by works … The improvement of man’s mind and the improvement of his lot are one and the same thing.’ Science and technology are here conflated. (Compare this with Archimedes’ contempt for the practical, described in the next chapter.) The three inventions which he identified as the source of great changes in Renaissance Europe – printing, gunpowder and the magnetic compass – were Chinese imports and owed nothing to science; nevertheless, he believed that scientific accomplishments would transform human activity through technological change, though he did not have a single example to support his case.

  The history of technology is largely an anonymous one, with few honoured names – again, unlike science. Neither learning nor literacy was relevant. Who, for example, was the unknown genius who realized that a thin piece of metal coiled into a spiral could be made to drive a machine as it unwound? Spring-driven clocks were being made early in the fifteenth century. Other crucial inventions were machines for cutting the teeth in wheels to make gears. Both the screw and the gear were invented by the Greeks – Archimedes had used a spiral screw for raising water in the third century BC – but the ability to make both reliably, in metal, required the construction of special and ingenious machines that in the fifteenth century gave rise to the metalworking lathe.

  The wheel also illustrates a nice absence of relation between technology and science, for why does a wheel make it easier to move a load? The answer is moderately subtle: the wheel reduces the friction between the object moved and the ground. Most of the work required to move an object over a surface is needed to overcome friction between the object and the surface. By using a wheel, the friction is reduced both by the smoothness of the axle and by the replacement of sliding with rolling motion at the ground. But that understanding, based on science, is completely unnecessary for either the invention of the wheel or the appreciation of its usefulness.
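
  The size of the saving can be illustrated with a rough calculation. This is a minimal sketch, not from the original text, and both friction coefficients below are assumed values chosen only for illustration.

```python
# A minimal sketch, not from the book, comparing the horizontal force needed to
# drag a load with the force needed to roll it on wheels. Both coefficients are
# illustrative assumptions, not figures from the text.

g = 9.8                  # gravitational acceleration, m/s^2
mass = 100.0             # kg load
mu_sliding = 0.5         # assumed coefficient of sliding friction
c_rolling = 0.03         # assumed coefficient of rolling resistance

force_to_drag = mu_sliding * mass * g
force_to_roll = c_rolling * mass * g
print(force_to_drag, force_to_roll)   # about 490 N versus about 29 N
```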

 
