The Modern Mind


by Peter Watson


  Paul Fussell, in The Great War and Modern Memory, gives one of the most clear-eyed and harrowing accounts of World War I. He notes that the toll on human life even at the beginning of the war was so horrific that the height requirement for the British army was swiftly reduced from five feet eight in August 1914 to five feet five on 11 October.9 By 5 November, after thirty thousand casualties in October, men had to be only five feet three to get in. Lord Kitchener, secretary of state for war, asked at the end of October for 300,000 volunteers. By early 1916 there were no longer enough volunteers to replace those who had already been killed or wounded, and Britain’s first conscript army was installed, ‘an event which could be said to mark the beginning of the modern world.’10 General Douglas Haig, commander in chief of the British forces, and his staff devoted the first half of that year to devising a massive offensive.

  World War I had begun as a conflict between Austria-Hungary and Serbia, following the assassination of the Archduke Franz Ferdinand. But Germany had allied itself with Austria-Hungary, forming the Central Powers, and Serbia had appealed to Russia. Germany mobilised in response, followed by Britain and France, who asked Germany to respect the neutrality of Belgium. In early August 1914 Russia invaded East Prussia on the same day that Germany occupied Luxembourg. Two days later, on 4 August, Germany declared war on France, and Britain declared war on Germany. Almost without meaning to, the world tumbled into a general conflict.

  After six months’ preparation, the Battle of the Somme got under way at seven-thirty on the morning of 1 July 1916. For the preceding week, Haig had ordered the bombardment of the German trenches, with a million and a half shells fired from 1,500 guns. This may well rank as the most unimaginative military manoeuvre of all time – it certainly lacked any element of surprise. As Fussell shows, ‘by 7.31’ the Germans had moved their guns out of the dugouts where they had successfully withstood the previous week’s bombardment and set up on higher ground (the British had no idea how well dug in the Germans were). Out of the 110,000 British troops who attacked that morning along the thirteen-mile front of the Somme, no fewer than 60,000 were killed or wounded on the first day, still a record. ‘Over 20,000 lay dead between the lines, and it was days before the wounded in No Man’s Land stopped crying out.’11 Lack of imagination was only one cause of the disaster. It may be too much to lay the blame on social Darwinist thinking, but the British General Staff did hold the view that the new conscripts were a low form of life (mainly from the Midlands), too simple and too animal to obey any but the most obvious instructions.12 That is one reason why the attack was carried out in daylight and in a straight line, the staff feeling the men would be confused if they had to attack at night, or by zigzagging from cover to cover. Although the British by then had the tank, only thirty-two were used ‘because the cavalry preferred horses.’ The disaster of the Somme was almost paralleled by the attack on Vimy Ridge in April 1917. Part of the infamous Ypres Salient, this was a raised area of ground surrounded on three sides by German forces. The attack lasted five days, gained 7,000 yards, and cost 160,000 killed and wounded – more than twenty casualties for each yard of ground that was won.13

  Passchendaele was supposed to be an attack aimed at the German submarine bases on the Belgian coast. Once again the ground was ‘prepared’ by artillery fire – 4 million shells over ten days. Amid heavy rain, the only effect was to churn up the mud into a quagmire that impeded the assault forces. Those who weren’t killed by gun- or shell-fire either died of cold or literally drowned in the mud. British losses numbered 370,000. Throughout the war, some 7,000 officers and men were killed or wounded every day: this was called ‘wastage.’14 By the end of the war, half the British army was aged under nineteen.15 No wonder people talked about a ‘lost generation.’

  The most brutally direct effects of the war lay in medicine and psychology. Major advances were made in cosmetic surgery and in the understanding of vitamins, which would eventually lead to our current concern with a healthy diet. But the advances of most immediate importance were in blood physiology, while the most contentious innovation was the IQ – Intelligence Quotient – test. The war also helped win a much greater acceptance afterwards for psychiatry, including psychoanalysis.*

  It has been estimated that of some 56 million men called to arms in World War I, around 26 million were casualties.16 The nature of the injuries sustained was different from that of other wars insofar as high explosives were much more powerful and much more frequently used than before. This meant more wounds of torn rather than punctured flesh, and many more dismemberments, thanks to the machine gun’s ‘rapid rattle.’ Gunshot wounds to the face were also much more common because of the exigencies of trench warfare; very often the head was the only target for riflemen and gunners in the opposing dugouts (steel helmets were not introduced until the end of 1915). This was also the first major conflict in which bombs and bullets rained down from the skies. As the war raged on, airmen began to fear fire most of all. Given all this, the unprecedented nature of the challenge to medical science is readily appreciated. Men were disfigured beyond recognition, and the modern science of cosmetic surgery evolved to meet this dreadful set of circumstances. Hippocrates rightly remarked that war is the proper school for surgeons.

  Whether a wound disfigured a lot or a little, it was invariably accompanied by the loss of blood. A much greater understanding of blood was the second important medical advance of the war. Before 1914, blood transfusion was virtually unknown. By the end of hostilities, it was almost routine.17 William Harvey had discovered the circulation of the blood in 1616, but it was not until 1907 that a doctor in Prague, Jan Jansky, showed that all human blood could be divided into four groups, O, A, B, and AB, distributed among European populations in fairly stable proportions.18 This identification of blood groups showed why, in the past, so many transfusions hadn’t worked, and patients had died. But there remained the problem of clotting: blood taken from a donor would clot in a matter of moments if it was not immediately transferred to a recipient.19 The answer to this problem was also found in 1914, when two separate researchers in New York and Buenos Aires announced, quite independently of each other and almost at the same time, that a 0.2 percent solution of sodium citrate acted as an efficient anticoagulant and that it was virtually harmless to the patient.20 Richard Lewisohn, the New York end of this duo, perfected the dosage, and two years later, in the killing fields of France, it had become a routine method for treating haemorrhage.21 Kenneth Walker, who was one of the pioneers of blood transfusion, wrote in his memoirs, ‘News of my arrival spread rapidly in the trenches and had an excellent effect on the morale of the raiding party. “There’s a bloke arrived from G.H.Q. who pumps blood into you and brings you back to life even after you’re dead,” was very gratifying news for those who were about to gamble with their lives.’22

  Mental testing, which led to the concept of the IQ, was a French idea, brainchild of the Nice-born psychologist Alfred Binet. At the beginning of the century Freudian psychology was by no means the only science of behaviour. The Italo-French school of craniometry and stigmata was also popular. This reflected the belief, championed by the Italian Cesare Lombroso and the Frenchman Paul Broca, that intelligence was linked to brain size and that personality – in particular personality defects, notably criminality – was related to facial or other bodily features, what Lombroso called ‘stigmata.’

  Binet, a professor at the Sorbonne, failed to confirm Broca’s results. In 1904 he was asked by France’s Minister of Public Education to carry out a study to develop a technique that would help identify those children in France’s schools who were falling behind the others and who therefore needed some form of special education. Disillusioned with craniometry, Binet drew up a series of very short tasks associated with everyday life, such as counting coins or judging which of two faces was ‘prettier.’ He did not test for the obvious skills taught at school – math and reading, for example – because the teachers already knew which children failed on those skills.23 Throughout his studies, Binet was very practical, and he did not invest his tests with any mystical powers.24 In fact, he went so far as to say that it didn’t matter what the tests were, so long as there were a lot of them and they were as different from one another as could be. What he wanted to be able to do was arrive at a single score that gave a true reflection of a pupil’s ability, irrespective of how good his or her school was and what kind of help he or she received at home.

  Three versions of Binet’s scale were published between 1905 and 1911, but it was the 1908 version that led to the concept of the so-called IQ.25 His idea was to attach an age level to each task: by definition, at that age a normal child should be able to fulfil the task without error. Overall, therefore, the test produced a rounded ‘mental age’ of the child, which could be compared with his or her actual age. To begin with, Binet simply subtracted the ‘mental age’ from the chronological age to get a score. But this was a crude measure, in that a child who was two years behind, say, at age six, was more retarded than a child who was two years behind at eleven. Accordingly, in 1912 the German psychologist W. Stern suggested that mental age should be divided by chronological age, a calculation that produced the intelligence quotient.26 It was never Binet’s intention to use the IQ for normal children or adults; on the contrary, he was worried by any attempt to do so. However, by World War I, his idea had been taken to America and had completely changed character.
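  To make Stern’s point concrete (the figures here are illustrative, not his own), compare two children who are each two years behind:

\[
\frac{\text{mental age}}{\text{chronological age}} = \frac{4}{6} \approx 0.67 \ \text{at six,} \qquad \frac{9}{11} \approx 0.82 \ \text{at eleven.}
\]

The same two-year lag swallows a third of the younger child’s mental development but less than a fifth of the older child’s, and the quotient registers that difference where simple subtraction does not.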

  The first populariser of Binet’s scales in America was H. H. Goddard, the contentious director of research at the Vineland Training School for Feebleminded Girls and Boys in New Jersey.27 Goddard was a much fiercer Darwinian than Binet, and after his innovations mental testing would never be the same again.28 In those days, there were two technical terms employed in psychology that are not always used in the same way now. An ‘idiot’ was someone who could not master full speech, so had difficulty following instructions, and was judged to have a mental age of not more than three. An ‘imbecile,’ meanwhile, was someone who could not master written language and was considered to have a mental age somewhere between three and seven. Goddard’s first innovation was to coin a new term – ‘moron,’ from the Greek, meaning foolish – to denote the feebleminded individuals who were just below normal intelligence.29 Between 1912 and the outbreak of war Goddard carried out a number of experiments in which he concluded, alarmingly – or absurdly – that between 50 and 80 percent of ordinary Americans had mental ages of eleven or less and were therefore morons. Goddard was alarmed because, for him, the moron was the chief threat to society. This was because idiots and imbeciles were obvious, could be locked up without too much public concern, and were in any case extremely unlikely to reproduce. On the other hand, for Goddard, morons could never be leaders or even really think for themselves; they were workers, drones who had to be told what to do. There were a lot of them, and most would reproduce to manufacture more of their own kind. Goddard’s real worry was immigration, and in one extraordinary set of studies where he was allowed to test the immigrants then arriving at Ellis Island, he managed to show to his own satisfaction (and again, alarm) that as many as four-fifths of Hungarians, Italians, and Russians were ‘moronic.’30

  Goddard’s approach was taken up by Lewis Terman, who amalgamated it with that of Charles Spearman, an English army officer who had studied under the famous German psychologist Wilhelm Wundt at Leipzig and fought in the Boer War. Until Spearman, most of the practitioners of the young science of psychology were interested in people at the extremes of the intelligence scale – the very dull or the very bright. But Spearman was interested in the tendency of those people who were good at one mental task to be good at others. In time this led him to the concept of intelligence as made up of a ‘general’ ability, or g, which he believed underlay many activities. On top of g, said Spearman, there were a number of specific abilities, such as mathematical, musical, and spatial ability. This became known as the two-factor theory of intelligence.31

  By the outbreak of World War I, Terman had moved to California. There, attached to Stanford University, he refined the tests devised by Binet and his other predecessors, making the ‘Stanford-Binet’ tests less a diagnosis of people in need of special education and more an examination of ‘higher,’ more complex cognitive functioning, ranging over a wider spread of abilities. Tasks included such things as size of vocabulary, orientation in space and time, ability to detect absurdities, knowledge of familiar things, and eye–hand coordination.32 Under Terman, therefore, the IQ became a general concept that could be applied to anyone and everyone. Terman also had the idea of multiplying Stern’s calculation of the IQ (mental age divided by chronological age) by 100, to get rid of the decimal point. By definition, therefore, an average IQ became 100, and it was this round figure that, as much as anything, caused ‘IQ’ to catch on in the public’s imagination.
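  In the form Terman fixed it, the calculation runs as follows (the worked figure is illustrative):

\[
\text{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100, \qquad \text{e.g.} \ \frac{12}{10} \times 100 = 120 \ \text{for a ten-year-old with a mental age of twelve.}
\]

A child whose mental age exactly matches his or her chronological age scores 100 whatever that age is, which is why 100 marks the average by definition.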

  It was at this point that world events – and the psychologist Robert Yerkes – intervened.33 Yerkes was nearly forty when the war started, and by some accounts a frustrated man.34 He had been on the faculty at Harvard since the beginning of the century, but it rankled with him that his discipline still wasn’t accepted as a science. Often, for example, in universities psychology was part of the philosophy department. And so, with Europe already at war, and with America preparing to enter, Yerkes had his one big idea – that psychologists should use mental testing to help assess recruits.35 It was not forgotten that the British had been shocked during the Boer War to find out how poorly their recruits rated on tests of physical health; the eugenicists had been complaining for years that the quality of American immigrants was declining; here was a chance to kill two birds with one stone – assess a huge number of people to gain some idea of what the average mental age really was, and see how immigrants compared, so that they too might be best used in the coming war effort. Yerkes saw immediately that, in theory at least, the U.S. armed services could benefit enormously from psychological testing: it could not only weed out the weaker men but also identify those who would make the best commanders, operators of complex equipment, signals officers, and so forth. This ambitious goal required an extraordinary broadening of available intelligence-testing technology in two ways – there would have to be group testing, and the tests would have to identify high flyers as well as the inadequate rump. Although the navy turned down Yerkes’s initiative, the army adopted it – and never regretted it. He was made a colonel, and he would later proclaim that mental testing ‘had helped to win the war.’ This was, as we shall see, an exaggeration.36

  It is not clear how much use the army made of Yerkes’s tests. The long-term significance of the military involvement lay in the fact that, over the course of the war, Yerkes, Terman, and another colleague named C. C. Brigham carried out tests on no fewer than 1.75 million individuals.37 When this unprecedented mass of material had been sifted (after the war), three main results emerged. The first was that the average mental age of recruits was thirteen. This sounds pretty surprising to us at this end of the century: a nation could scarcely hope to survive in the modern world if its average mental age really was thirteen. But in the eugenicist climate of the time, most people preferred the ‘doom’ scenario to the alternative view, that the tests were simply wrong. The second major result was that European immigrants could be graded by their country of origin, with (surprise, surprise) darker people from the southern and eastern parts of the continent scoring worse than those fairer souls from the north and west. Third, the Negro was at the bottom, with a mental age of ten and a half.38

  Shortly after World War I, Terman collaborated with Yerkes to introduce the National Intelligence Tests, constructed on the army model and designed to measure the intelligence of groups of schoolchildren. The market had been primed by the army project’s publicity, and intelligence testing soon became big business. With royalties from the sales of his tests, Terman became a wealthy as well as a prominent psychologist. And then, in the 1920s, when a fresh wave of xenophobia and the eugenic conscience hit America, the wartime IQ results came in very handy. They played their part in restricting immigration, with what results we shall see.39

  The last medical beneficiary of World War I was psychoanalysis. After the assassination of the archduke in Sarajevo, Freud himself was at first optimistic about a quick and painless victory by the Central Powers. Gradually, however, like others he was forced to change his mind.40 At that stage he had no idea that the war would affect the fortunes of psychoanalysis so much. For example, although America was one of the half-dozen or so foreign countries that had a psychoanalytic association, the discipline was still regarded in many quarters as a fringe medical speciality, on a level with faith healing or yoga. The situation was not much different in Britain. When The Psychopathology of Everyday Life was published in translation in Britain in the first winter of the war, the book was viciously attacked in the review pages of the British Medical Journal, where psychoanalysis was described as ‘abounding nonsense’ and ‘a virulent pathogenic microbe.’ At other times, British doctors referred slightingly to Freud’s ‘dirty doctrines.’41

  What caused a change in the views of the medical profession was the fact that, on both sides in the war, a growing number of casualties were suffering from shell shock (or combat fatigue, or battle neurosis, to use the terms now favoured). There had been cases of men breaking down in earlier wars, but their numbers had been far fewer than those with physical injuries. What seemed to be crucially different this time was the character of hostilities – static trench warfare with heavy bombardment, and vast conscript armies which contained large numbers of men unsuited for war.42 Psychiatrists quickly realised that in the huge civilian armies of World War I there were many men who would not normally have become soldiers, who were unfit for the strain, and that their ‘civilian’ neuroses would express themselves under the terror of bombardment. Doctors also learned to distinguish such men from those who had more resilient psyches but through fatigue had come to the end of their tether. The intense scrutiny of the men on the stage in the theatre of war revealed to psychology much that would not have been made evident in years and years of peace. As Rawlings Rees noted, ‘The considerable incidence of battle neurosis in the war of 1914–18 shook psychiatry, and medicine as a whole, not a little.’ But it also helped make psychiatry respectable.43 What had been the mysteries of a small group of men and women was now more widely seen as a valuable aid to restoring some normality to a generation that had gone almost insane with the horror of it all. An analysis of 1,043,653 British casualties revealed that neuroses accounted for 34 percent.44

 
