Serving the Reich


by Philip Ball


  In this respect, the lesson is not that the German physicists, as a group, failed to offer sufficient opposition to Hitler. That conclusion is hard to deny, but it is a brave person who asserts without hesitation that he or she would have done better, shown better judgement, been braver, had a clearer view of where choices would lead. Rather than simply accusing them of being morally wanting, Haberer draws a more valid and much more general judgement:

  The failure of scientists has lain in their moral obtuseness, in their incapacity to define, delineate or even to recognize the nature of the problem of responsibility. Characteristically, responsibility has been recognized only in its narrower sense. Scientists have been willing to be held responsible for the calibre of their scientific work; or when acting in administrative positions for their performance in terms of the formal responsibilities attached to their positions. Beyond this methodological and bureaucratic responsibility scientists have not, at least until very recently, ventured.

  For the choices they made, I do not judge Debye, Planck or even Heisenberg as harshly as some have done. But it is very hard indeed to see how they can be exempted from the failure that Haberer here describes. In this, they were representative of most scientists of their times.

  A new dialogue

  If the community of science today does not wholly escape these charges either, we would nevertheless be mistaken to suppose that nothing has changed. The Manhattan Project and the nuclear arms race that followed played a big part in cultivating a recognition of wider responsibility. So too have many other episodes since then, among them environmental despoliation and climate change, thalidomide, the link between smoking and cancer, genetic engineering, Chernobyl, AIDS, embryo research and synthetic biology. It is unfair to suggest that science continues doggedly to insist on its abstract purity and detachment from morality.

  Scientists’ acceptance of responsibility in these and other instances has, however, sometimes emerged only under duress. The public backlash against genetically modified organisms in the 1990s, for example, forced researchers in this field to address the need for dialogue, or in the jargon of our day, ‘public engagement’. It is also true that some of this ‘engagement’ is prompted more by a desire to avoid overly restrictive and poorly informed regulation than by a profound wish to develop principles of good conduct. But it would be churlish and cynical to suppose that this is as far as the matter goes.

  The launch of the Bulletin of the Atomic Scientists in 1945 by some of the researchers involved in the Manhattan Project was the first indication after the war that science was ready to acknowledge its social and ethical obligations. The magazine was an explicit attempt to counteract political abuses of nuclear physics and to alert the wider world to the dangers of the new knowledge. In 1947 it introduced an iconic symbol to convey these perils: the Doomsday Clock, on which the proximity of the hands to midnight illustrates the scientists’ consensus on the danger of global nuclear apocalypse. Today the Bulletin has broadened its focus to other potentially catastrophic dangers of science and technology, in particular climate change and new technologies in the life sciences. In 2007 the Doomsday Clock was moved from seven to five minutes from midnight in response both to the existence of many thousands of nuclear weapons in an ever-growing club of nations and to the destruction of human habitats from climate change.

  The readiness of nuclear scientists to shoulder their onerous responsibilities was also signalled by a gathering of scientists in 1957 at a meeting in the village of Pugwash in Nova Scotia, Canada, to discuss the proliferation of nuclear arms and the escalation of tensions between the Soviet Union and the West. That meeting, sponsored by the Canadian banker and philanthropist Cyrus Eaton, was triggered by the release two years earlier of a manifesto written by Bertrand Russell and Albert Einstein, calling for scientists to ‘assemble in conference to appraise the perils that have arisen as a result of the development of weapons of mass destruction’ and appealing for peaceful reconciliation of East and West. The manifesto was signed by, among others, Max Born, Percy Bridgman, Frédéric Joliot-Curie, Linus Pauling and Joseph Rotblat. The Pugwash meeting was the first in an ongoing series of conferences on ‘science and world affairs’, focusing in particular on nuclear weapons, chemical and biological warfare, and international diplomacy.

  In 1995 Rotblat and the Pugwash organization were jointly awarded the Nobel Peace Prize. In his address, Rotblat condemned the ‘disgraceful role played by a few scientists . . . in fuelling the arms race’. He quoted with approval the words of anatomist Solly Zuckerman, chief scientific adviser to the British government from 1964 to 1971:

  When it comes to nuclear weapons . . . it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.

  Rotblat called on his fellow scientists to relinquish the myth of ‘apolitical science’ and to face the dilemmas that their research creates:

  You are doing fundamental work, pushing forward the frontiers of knowledge, but often you do it without giving much thought to the impact of your work on society. Precepts such as ‘science is neutral’ or ‘science has nothing to do with politics’ still prevail. They are remnants of the ivory tower mentality, although the ivory tower was finally demolished by the Hiroshima bomb.

  Rotblat’s words belie any notion that scientists refuse to embrace moral questions, but at the same time they illustrate that even in recent times such an acceptance of responsibility is not the norm.

  Another important acknowledgement of the scientist’s ethical duties occurred in 1975, when many leading biologists gathered at the Asilomar Conference Center in Monterey, California, along with members of the press and US government, to discuss the implications of new techniques in genetic engineering: the ability to excise genes and insert them into DNA. Such methods are now one of the dominant influences on molecular biology, being central not only to the creation of genetically modified organisms for research, agriculture and breeding, but also to new forms of medicine (gene therapies), cloning, and genomic profiling. As one attendee, the Nobel laureate biochemist Paul Berg, has put it, ‘Looking back now, this unique conference marked the beginning of an exceptional era for science and for the public discussion of science policy.’ Scientists had become aware that, while genetic engineering created extraordinary opportunities in medicine, industry and fundamental research, it also had serious risks. Some felt, according to Berg, that ‘unfettered pursuit of this research might engender unforeseen and damaging consequences for human health and the earth’s ecosystems’—and that as a consequence there should be a voluntary moratorium on certain avenues of research.

  The Asilomar conference did not recommend such a moratorium, but instead led to the imposition of strict guidelines on the new genetic technologies. This ‘cautious permissiveness’ seems now to have been a wise position, for the worst fears about public health hazards have not materialized despite the many millions of experiments that have used the techniques. In Berg’s view, Asilomar was a success not only because it made the right decision for science but also because of its impact on the image of how science is done:

  First and foremost, we gained the public’s trust, for it was the very scientists who were most involved in the work and had every incentive to be left free to pursue their dream that called attention to the risks inherent in the experiments they were doing. Aside from [the] unprecedented nature of that action, the scientists’ call for a temporary halt to the experiments that most concerned them and the assumption of responsibility for assessing and dealing with those risks was widely acclaimed as laudable ethical behavior.

  This is a matter of public relations, but not just that: an increasingly suspicious public (and that suspicion, the tarnishing of science’s halo, began with Hiroshima) will not be easily fooled by scientists going through the motions of a societal duty to which they aren’t genuinely committed. However, although Asilomar demonstrated a commendable readiness to consider consequences and accept inconvenient conclusions, Berg doubts whether the same approach will work today for some of the ethical issues raised by genetic and biomedical research, such as embryo research and stem-cell technology. It is one thing to evaluate objective health risks, even though this alone is hard enough in the face of unknown consequences and the vagaries of public risk perception. But when science confronts deeply held social and religious values, it is far from clear that a consensus can ever be reached, even by compromise. Society has to find some way of accommodating irreconcilably different views. It is neither science’s duty nor its prerogative to resolve such questions. But we should hope that it continues to cultivate a community in which an awareness that they must be confronted is found not just in a few unusually thoughtful individuals.

  Science and democracy

  While German National Socialism cannot stand proxy for every autocracy in the modern world, the fate of science under its auspices challenges some preconceptions about the relationship of research and political democracy. Many Western scientists cleave to the idea that science can only truly flourish in a wholly free society (forgetting that it did not arise in one). This was Samuel Goudsmit’s agenda in his attacks on Heisenberg, whom he wanted to push into admitting—without strong justification—that the Germans failed to make the bomb because Nazi interference in German science had left it too enervated. The attitude is evident too in the common perception among scientists that the Nazi leaders rejected aspects of modern theoretical physics on the ideological grounds that they were ‘Jewish’. As we have seen, the National Socialists were more pragmatic than that—they lost interest in ‘Aryan physics’ when it became evident that it was merely a distraction from, and perhaps a hindrance to, useful technologies.

  This attitude that only democracies can and will nurture science is unduly and perhaps dangerously self-congratulatory. The work of historians of science, such as Yakov Rabkin and Elena Mirskaya, dispels that illusion: ‘The history of science in totalitarian societies’, they say, ‘makes associations between science and freedom appear tenuous at best.’ Not only have such regimes often been quite generous in their support of science, but the scientific attitude of detached objectivity can be and has been adopted by these regimes to legitimize regarding their own citizens in the same way. Mark Walker and a group of other eminent science historians agree that ‘no single ideology, including liberal democracy, has historically proven more effective than another in driving science or leading to intended results’. The project commissioned by the Max Planck Society in the late 1990s to look into its (that is, the KWG’s) murky past concluded that the society as a whole did not simply ‘survive the swastika’ but was successful in pushing its own agenda and in some ways flourished under Hitler. Indeed, it deemed that ‘the Kaiser Wilhelm Society was an integral part of the National Socialist system of domination that subjugated people inside and outside Germany and culminated in genocide and war’. Challenging the idea that science and mathematics are inherently democratic, historian Herbert Mehrtens argues that ‘they will adapt to political and social changes as long as there is the chance to preserve existence’. After examining closely the history of mathematics in Nazi Germany, Mehrtens concludes that ‘I cannot find any reason why mathematics, and any other science, should not find a perfect partner in technocratic fascism.’

  The common suggestion that a non-democratic country might ape the innovation of the democratic West but can never match its scientific creativity is an arrogant delusion. Even during the height of the Cold War, when state oppression in the Soviet Union was more extreme than it was in pre-war Germany, Soviet scientists were capable of inventive and effective scientific research. And today Chinese scientists are increasingly proving that, even in the face of the rote learning of China’s traditional education system, democracies have no monopoly on creativity. This should surprise no one. Most Chinese scientists today enjoy considerably more state support, personal liberty and freedom from demagoguery than most German scientists had done under Nazi rule, yet even the latter were perfectly able to conduct vibrant and productive science, not least the work that led Hahn and Strassmann to discover nuclear fission in 1938.

  Many scientists believe that dictatorships will inevitably constrain science by imposing an arbitrary ideology on what may or may not be discovered and taught. That has certainly happened: one of the most notorious ideological distortions of science was Stalin’s suppression of Darwinian genetics in favour of the agriculturally disastrous Lamarckian views on heredity propagated by Trofim Lysenko between the 1920s and the 1960s, which Lysenko had couched in a politically expedient Marxist framework. But this sort of interference is rare. We have seen how ‘Aryan physics’ enjoyed rather little support from the Nazi government, largely because those leaders who recognized the value of science did not believe it could deliver the goods. ‘No political regime has ever tried consistently and comprehensively to impose ideologically correct science on its scientists’, say Walker and colleagues—in part because ‘the military potential of science and scientists outweighs and overrules attempts to purify science ideologically’. Hitler was prepared to slacken the shackles of anti-Semitism for pragmatic ends during the war, and not even Stalin risked the politicization of nuclear physics. ‘Stalin left his nuclear physicists alone’, says historian Tony Judt. ‘[He] may well have been mad but he was not stupid.’

  That science and democratic values are uncoupled is true not only for the institutions and practices of science but also for its intellectual content. Some scientists cling to the belief that one cannot possibly be a good scientist unless one is also a good citizen of the world, a liberal democrat, able to approach nature with heart and mind open. While neither Stark nor Lenard was perhaps as distinguished scientifically as their Nobel Prizes implied, we shouldn’t make the mistake of imagining that something fundamental to their obnoxious political and social sympathies precluded them from continuing to function as scientists. The case is even more fraught for Pascual Jordan, one of the key figures in Bohr’s circle as the Copenhagen interpretation of quantum mechanics was taking shape. Jordan concluded from the apparent demolition of objectivity by quantum theory that the ‘liquidation’ of the Enlightenment by the Third Reich was inevitable. What is more, according to the historian M. Norton Wise, Jordan’s Nazi-leaning ideological views informed his physics, helping him to formulate aspects of quantum theory that were of genuine value and utility. ‘It is necessary to make this point explicitly’, says Wise,

  because we live with the tenacious myth that the acquisition of fundamental knowledge had to cease when scientists embraced Hitler. No real seekers after truth could also be pursuing Nazi political interests nor using those interests in the pursuit of knowledge. But of course they could, and did.

  It is the other side of the same counterfeit coin to imagine that political interference in science happens only in dictatorships. Some is—or should be—unavoidable: science and technology need regulation, for example to ensure that certain ethical standards and responsibilities are met, and there is no obvious or consensual position on how far such constraints should extend: what to one professor is a reasonable demand might be repressive meddling to another. Funding, which can make or break a discipline, is highly politicized. Paul Forman’s study of research in quantum electronic technologies in the post-war United States showed that, simply by how they choose to support particular research, governments ‘can profoundly influence how scientists work—the questions they investigate, the methods they use, how they present their results’.

  Democratically elected politicians have shown a readiness to challenge the autonomy, authority, integrity and validity of science. Not only do they sometimes find it expedient to ignore inconvenient advice from scientists—most egregiously, George W. Bush’s resistance to the scientific consensus on climate change, although one might also adduce the persistent refusal of some Western governments to heed medical advice on drug-abuse policy—but they are not above rigging the evidence. Bush’s Committee on Bioethics was chosen to engineer the advice on embryo research and stem-cell technology into a form that would play best to his constituency, while in 2007 the US House of Representatives Committee on Oversight and Government Reform concluded that ‘the Bush Administration has engaged in a systematic effort to manipulate climate change science and mislead policymakers and the public about the dangers of global warming’. Turkey, a Muslim democracy, has recently brought its Academy of Sciences and its scientific funding agency under direct state control, a move that some say was prompted by a feeling in the government that the scientific community was too secular and liberal. Of course, one can argue in the mode of Churchill that democracy is the least bad of political systems for guarding against such meddling. That may well be true. But the cosy assumption that democracy guarantees good science and totalitarianism kills it finds little support in history. Moreover, if it is to engage with and sometimes oppose its political leaders, science needs the support of the rest of society. Scientists need legal protection from exclusion and persecution, a fact made all too evident in the politicization of climate science and biomedical research in the US, where some institutions have refused to defend individuals against litigation or intimidation from well-funded religious or climate-sceptic organizations.
