Anti-social behaviour orders, in the meantime, were defended by Home Office minister Hazel Blears on the grounds that they helped to ‘rebuild confidence in communities’.70 Yet perhaps the ‘confidence’ enjoyed by a ‘community’ that threatened to transfer certain of its members into a community of prison inmates for being sarcastic or sitting down on a riverbank was not an untrammelled moral good. And one might have been forgiven for continuing to lack ‘confidence’ that ‘community’ itself was anything more than a promiscuous term of Unspeak.
In exposing the tricksiness of ‘community’ talk a decade earlier, Stephen Holmes, professor of political science and law at the University of Chicago, had pointed out: ‘Members of the Ku Klux Klan, too, have “a commonality of shared self-understanding”. Shared self-conceptions or aspirations or allegiances are not thereby intrinsically admirable.’71 Indeed: just as being anti-social is not intrinsically wrong unless you accept the extra hidden premise that society is already how it should be, so being social is not intrinsically a virtue: there may be ‘communities’ of neo-Nazis, or plotters of terrorism, or sexual torturers of children, who are not made admirable simply by grouping together. Theorists of ‘community’, Holmes argued, ‘surreptitiously import moral approval into ostensibly descriptive categories such as “group loyalty”, “collective aims”, and “social bonds”.’72 The surreptitious importing of moral approval (or disapproval) into a supposedly neutral name is, of course, a primary characteristic of Unspeak.
So it had always gone with the most blatant political uses of ‘community’ to try to engineer a positive response to a policy. Even before Clinton, Blair, and Etzioni, Margaret Thatcher’s government, which was not known for indulging in the rhetoric of social cohesion, had attempted to defuse widespread loathing of its new ‘poll tax’ by renaming it the ‘community charge’.73 The mental-health policy christened ‘care in the community’, meanwhile – first announced, aptly, in 1984 – had resulted in the evacuation of ‘large numbers of mentally ill people from institutions, without providing an adequate alternative’:74 in other words, the phrase chose not to acknowledge the large-scale release of patients from asylums, and instead promised ‘care’ that was not sufficiently forthcoming. The cosy ideal of the caring ‘community’ was in marked contrast to the subsequent neglect or even persecution of those thrown on its mercies.
Later on, a second-class cadre of policemen was introduced by Blair’s Home Secretary, David Blunkett, under the name ‘community support officers’: they would contribute, he said, to the ‘endeavour […] to face down the antisocial and thuggish behaviour that bedevils our streets, parks and open spaces’.75 CSOs were much cheaper than real policemen, went out on the beat after only four weeks’ initial training, and were advised to avoid ‘dangerous situations’, according to an undercover reporter who trained as such an officer.76 Walking away was a novel way to ‘face down’. But never mind: the very repetition of their name – ‘community support officers’ – sounded pleasantly reassuring in Parliament and on TV. Everyone loves support, just as everyone loves a community. And CSOs, furthermore, were introduced by the Police Reform Act,77 so they must be a good thing.
The faith community
Happily the word ‘community’ is so semantically promiscuous that George W. Bush could carry right on using it when he took over from Clinton, as with his programme named ‘Faith-Based and Community Initiatives’, which involved giving religious organisations access to federal welfare funds. ‘As we improve our communities, we improve our nation,’ Bush explained, in a direct echo of Clinton’s language.78 Bush even managed to refer to ‘the faith community’, which was strange given the historical fact that different religions, and different factions of the same religion, had not always worked together harmoniously. But since Bush had ‘faith’, and held weekly Bible-study sessions in the White House,79 there was at least one ‘faith community’ centred on him.
It was a curious feature of political rhetoric, indeed, that nearly all politicians appealed implicitly to something like a ‘faith community’ in the vocabulary they used to express their opinions. Rather than saying ‘I think’ that something is the case, Bush and Blair, for example, consistently preferred to say ‘I believe’. What does this apparently trivial decision accomplish? Well, it encases the speaker in an armour of faith. If you ‘think’ something, you may just be mistaken. Moreover, to say ‘I think’ implies the kind of cold ratiocination that may dent a politician’s likeability rating in polls. If he claims to ‘believe’ something, on the other hand, whether he is right or wrong on the issue in question, he is automatically virtuous, because he is at least sincere: he believes. Thus George W. Bush said in July 2005: ‘I believe that we will succeed in Iraq’,80 which expressed a noble optimism and could not in principle be refuted by any subsequent events. Even if success in Iraq (howsoever defined) was not forthcoming, that could not dent the purity of Bush’s faith. Opposition politicians played the same game. The phrase ‘I believe’ was used thirty times by both Bush and John Kerry in their third Presidential debate in 2004, compared with thirty-two instances of ‘I think’.81 Thus the debate was a contest of faith as much as of reason.
Similarly, Tony Blair, facing a hostile television audience and attempting to justify the upcoming Iraq war, said in early 2003: ‘I think it’s my job as Prime Minister, even if frankly I might be more popular if I didn’t say this to you or said I’m having nothing to do with George Bush, I think it’s my duty to tell it to you if I really believe it, and I do really believe it. I may be wrong in believing it but I do believe it.’82 Notice how he moves from saying ‘I think’ to insisting more powerfully that ‘I believe’. And so faith trumps truth: ‘I may be wrong’, but who cares if I am wrong or not as long as ‘I do believe it’?
You can also, usefully, say you ‘believe’ something for which you intend to offer no evidence, as when Blair claimed: ‘I believe the vast majority of those on the centre-left now believe in the new personalised concept of public services.’83 Here he even offered two ‘beliefs’ for the price of one. Not only did he ‘believe’ that a ‘vast majority’ of ill-defined people agreed with him, he ‘believed’ that those people ‘believed in’ his policy. The idea that a policy is something to be ‘believed in’, rather than to be argued about and rationally accepted or rejected, converts politics itself into faith. People who ‘believe in’ a policy might indeed be described as a ‘faith community’.
What is the opposite of a ‘faith community’? Reporter Ron Suskind learned the answer when in summer 2002 he met a ‘senior adviser to Bush’, who provided him with what has become a justly famous explanation of US policy in that era:
The aide said that guys like me were ‘in what we call the reality-based community,’ which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality.’ I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ‘That’s not the way the world really works anymore,’ he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out.’84
This may fruitfully be read in parallel with the words of Hannah Arendt, who defined totalitarian thinkers specifically as having ‘extreme contempt for facts as such, for in their opinion fact depends entirely on the power of the man who can fabricate it’.85 In this aide’s words, the word ‘community’ maximises its potential to be patronising, even pitying, contrasted as it is with the chest-beating of ‘empire’. The ‘reality-based community’ is pictured as embattled, timid, probably gathering in ‘huddles’, like the Afghans found by one intrepid journalist in Britain.
But if the Bush administration was not ‘reality-based’, on what exactly did it base its acts? On, we must suppose, faith. And indeed, the Bush administration did not lack for impressive displays of ‘faith’ in contradiction of ascertainable realities. Another realm in which ‘the judicious study of discernible reality’ was held in contempt, for example, was that of science.
3 Nature
Climate change
Sometimes there are smoking guns. In the arena of science, there is plain evidence of political groups and corporate interests mulling over the right Unspeak strategies. US pollster Frank Luntz, for example, has produced a range of documents advising Republicans on the kind of persuasive political language they should use. One such memo, leaked in 2003, treats the language of environmental policy. A section headed ‘Winning the Global Warming Debate’ reads in part:
The terminology in the upcoming environmental debate needs refinement […] It’s time for us to start talking about ‘climate change’ instead of global warming […]
1. ‘Climate change’ is less frightening than ‘global warming.’ As one focus group participant noted, climate change ‘sounds like you’re going from Pittsburgh to Fort Lauderdale.’ While global warming has catastrophic connotations attached to it, climate change suggests a more controllable and less emotional challenge.1
This is an example of what Luntz, elsewhere in the document, called the strategy of ‘redefining labels’. According to this view, ‘global warming’ sounds sinister and menacing: it may conjure a picture of a red-hot planet Earth, swathed in hellfire. ‘Climate change’, by contrast, is what happens when you go on holiday, or switch on the air-conditioner at home, or the ‘climate control’ in your sports-utility vehicle. Notice also that ‘climate change’ modestly takes no position on the direction or quality of any possible change. It might get warmer, but then again it might get cooler, avoiding droughts; or rainier, which would be nice for the garden; or we might just have a picturesque dusting of snow every Christmas.
In fact this rhetorical contest had already been won in the international arena more than a decade earlier. In the late 1980s, the scientific talk had been of ‘global warming’ due to increased atmospheric concentrations of greenhouse gases. The UN convened panels and working groups to discuss the problem, and for a time the phrases ‘global warming’ and ‘climate change’ (as in the name of the Intergovernmental Panel on Climate Change (IPCC), set up in 1988) coexisted. But within a few years, the phrase ‘climate change’ had replaced ‘global warming’ as the official general name for the phenomenon.
We can pinpoint quite precisely when this took place. In December 1988, the UN General Assembly passed a resolution entitled ‘Protection of global climate for present and future generations of mankind’, which mentioned both ‘global warming’ and ‘climate change’.2 A year later, the UN passed another, identically named resolution. It continued to talk of ‘climate change’, but now all mention of ‘global warming’ had mysteriously disappeared.3 By the time of the 1992 UN Framework Convention on Climate Change, the substitution had been entrenched: the convention mentions even the idea of ‘warming’ only once, and ‘global warming’ nowhere at all.
Why did this shift happen? Because states with oil interests, including Saudi Arabia and the US (then as now the world’s biggest contributor to global warming), had specifically lobbied for the elimination of the phrase ‘global warming’ in agreements.4 The mention of ‘warming’ seemed to enshrine as fact the theory that burning fossil fuels was largely to blame for the heating of the planet. ‘Climate change’ instead gestured vaguely at an unspecific problem, without pointing the finger of blame directly at any particular industry. For the same reason, Saudi Arabia had demanded that treaties refer only to ‘greenhouse gases’ and not specifically to ‘carbon dioxide’ – the gas, emitted when fossil fuels were burned, that was the major contributor to global warming.5 The Soviet Union and China joined in continuing Saudi efforts during 1990 to ‘weaken’ the language of assessments, and the US contributed an amendment saying that ‘information’ was ‘inadequate’.6 Meanwhile, the Reagan and Thatcher governments had opposed any mention of the concept of ‘populations’ in the global-warming agreements,7 the effect of which was to suppress discussion of the fact that global warming would impinge directly on the lives of large numbers of people.
It should be noted that there is also a scientific reason to prefer the term ‘climate change’ in some contexts. If the melting of polar ice caps releases too much water into the oceans, the warm Gulf Stream current (part of what is called the thermohaline circulation) could be turned off, making Britain’s weather much colder.8 Thus drastic local cooling can occur; depending on the interaction of vastly complex systems, a local climate might either heat up or get colder. A particular area, therefore, can ‘change’ in either direction; but always as a result of the mean temperature of the planet having increased – in other words, because of global warming.
Some people even thought that ‘global warming’ was not frightening enough – it sounded ‘too cosy’, according to one newspaper letter-writer.9 (On one day of IPCC negotiations in August 1990, in fact, the US delegation had floated the idea of replacing the term ‘climate change’ with the highly specific ‘global warming at the surface of the Earth’, which sounded minimally catastrophic.)10 Jeremy Leggett, former professor of earth sciences at Imperial College, London and chief scientist at Greenpeace during the 1990s, goes further: ‘I have never considered global warming a scary enough term. If I could have designed the language, I’d have gone for global overheating, climate chaos, or maybe climate meltdown.’11 But even if ‘global warming’ is not scary enough, polling by the Brookings Institution in the US confirmed Frank Luntz’s assessment that ‘climate change’ was even less frightening.12
‘Climate change’ was unarguably more vague. Besides its ability to point in both directions, the noun form of ‘change’ also seemed to be agnostic as to whether any change was anthropogenic – whether it was man’s fault at all. After all, the climate had ‘changed’ many times over the earth’s geological history, for example during several ice ages, before humans were around. This indeed is how the IPCC defined ‘climate change’: any variation in the climate, attributable to either ‘natural internal processes or external forcings’.13 But in a subtle disagreement on terminology that was not reflected in public discourse, the UN Framework Convention on Climate Change used ‘climate change’ specifically to mean change caused by man; natural change was instead called ‘climate variability’.14 The effect was that UNFCCC reports allowed themselves to mean ‘global warming’ when they said ‘climate change’, without actually using the troublemaking former term, while making it possible for their use of ‘climate change’ to be interpreted in the agnostic fashion in which everyone else meant it.
In the phrase ‘global warming’, after all, the word ‘warming’ implies an agent doing the warming. And once you accept that human beings might be the cause of the problem, again you will eye sceptically those with an interest in burning coal, oil, and gas. Thus the preference for the term that seems to assign no blame, ‘climate change’, works to support the notion, eagerly propagated by the Bush administration, that there is a controversy about whether there is warming, and if there is, whether humankind is to blame at all. Luntz’s memo, indeed, encouraged an appeal to uncertainty: ‘Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly. Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate.’ That first sentence is wonderfully revealing: it means, essentially, that if the public knew the truth, they would not accept your policies. So you need to hide the truth from them.
Curiously enough, this was also the strategy of oil companies. In 1998, for example, the American Petroleum Institute, working with Exxon, Chevron, and Southern, created an internal ‘Global Climate Science Communications Action Plan’, a programme for a multi-million-dollar media offensive. In it, they noted that if people were told that there was scientific uncertainty, they were more likely to oppose US ratification of the Kyoto Protocol, according to which more than 140 countries of the world had agreed to timetables and mechanisms for reducing overall greenhouse-gas emissions. ‘When informed that “some scientists believe there is not enough evidence to suggest that (what is called global climate change) is a long-term change due to human behavior and activities,” 58 percent of those surveyed said they were more likely to oppose the Kyoto treaty.’15 The API’s ‘Action Plan’ thus concludes that ‘Victory Will Be Achieved When […] recognition of uncertainties becomes part of the “conventional wisdom”.’ Obediently, George W. Bush cited the ‘incomplete’ state of scientific knowledge when announcing in April 2001 his country’s withdrawal from the Kyoto negotiations.16 Of course scientific knowledge is always incomplete: that is what drives research. But that is a long way from saying it is wrong, or even controversial.
Industry’s attempt to publicise the fictional debate on global warming was, however, abetted by structural biases in the media, on both sides of the Atlantic: ‘The problem lies with the media’s obsession with entertaining us with a good bust-up between two warring sides,’ said Fiona Fox, director of the Science Media Centre. […] ‘This means the public gets a manufactured debate. The Today programme [on BBC Radio 4] is a classic example. It seems incapable of running a piece on new climate research without asking those in the denial lobby […] to come along to spice up the action.’17
Some of those in the ‘denial lobby’ turned out to be financially motivated. Stephen Whittle at BBC News recalls that, on one occasion, ‘The BBC got rather caught by some American bloke appearing on [a news] programme to discuss these things and who wished to cast a large question mark over the whole question of global warming. It turned out that he was actually being paid by an oil company.’18 Other deniers had a rather shaky grasp of science. The British pundit Peter Hitchens, for example, wrote: ‘The greenhouse effect probably doesn’t exist. There is as yet no evidence for it.’ To this, George Monbiot offered the following unimprovable riposte: ‘Perhaps Mr Hitchens would care to explain why our climate differs from that of the moon.’19 The greenhouse effect described how the earth’s atmosphere trapped heat.20 The dumping by human beings of carbon dioxide and other pollutants in the atmosphere did not create the greenhouse effect, but it did accelerate it. Hitchens had also appealed to one set of data much cherished by a handful of sceptics with more expertise: that although measurements of the earth’s surface temperature showed clear rises in the previous few decades, there seemed to be no such rise in atmospheric temperatures.21 This objection was finally laid to rest in 2005, when three articles in the journal Science showed that the placement and functioning of weather-balloon sensors in the 1970s had resulted in erroneous data, and that satellite readings had been misinterpreted: in fact, the atmosphere too had warmed.22