Professional training also leads too many scientists to ‘bury our leads’, as American journalists would put it, rather than finding effective ways to communicate complex ideas. Being straightforward and understandable is a challenge given the strong scientific tradition of full disclosure, which makes us lead with our caveats, not our conclusions. But what I call the ‘double ethical bind’ – be effective in public communication even if that means there isn’t enough space or time to present all of the caveats – is not unbridgeable. It calls for the scientist to develop a hierarchy of products: sound-bites on the evening news to get our findings onto the public agenda; short but meatier articles in semi-popular journals like Scientific American; more in-depth websites; and full-length books in which that smaller fraction of the public or policy worlds that actually wants the details can find how the processes work and how the state of the art has evolved. Yes, it is very time-consuming to produce detailed websites and long books. But it is also necessary for those in complex systems science fields like climate science: we must be effective in public messaging, where communicating all the details is not feasible, while the longer backup materials honestly separate the components of the science that are well established from those best characterised as competing explanations and from those which are still speculative.
The Royal Society and my own National Academy of Sciences (if less boldly, I think) have moved into this realm with clear statements of the potential risks of climate change. An evolving series of pronouncements includes the joint statement of 2001 of the Royal Society with fifteen other national science academies on the science of climate change.3 The statement of June 2005 on global response to climate change by the science academies of the G8 nations and of China, India and Brazil stressed that the scientific understanding of climate change is now sufficiently clear to justify prompt action.4 There followed the May 2007 statement on sustainability, energy efficiency and climate protection of the national science academies of the same countries plus Mexico and South Africa,5 and most recently the June 2009 joint statement calling for the transformation of the G8+5 nations’ energy strategies.6 In addition, I always push at our annual US National Academy membership meetings for us to be more publicly oriented, but change comes slowly. I am glad that our new NAS President, Ralph Cicerone, is committed to communicating quality science in the public interest. It is also encouraging that President Obama’s new science adviser, John Holdren, is more in the mould of former UK government adviser and Royal Society President Lord May than some previous US science advisers, who tended to carry the administration’s message to the science community rather than, as with May or Holdren, the other way around.
Along with climate projections, scientists also have to explain how systems science gets done. We cannot usually do traditional ‘falsification’ controlled experiments. What we can do is assess where the preponderance of evidence lies, and assign confidence levels to various conclusions. Over decades, the community as a whole can ‘falsify’ earlier collective conclusions – like the sporadic suggestions in the early 1970s that the world would cool. But in systems science it sometimes takes a score of years even to discover that certain data were not collected or analysed correctly, alongside the continuing identification of new data, and such discoveries are rarely made by individuals; they are made by teams and even assessment groups.
BACK TO BAYES
When I first got involved in discussing the range of outcomes in climate change, I didn’t understand Bayesian versus frequentist statistics, but in fact that was the heart of the matter – how to deal with objectivity and subjectivity in modelling and in projections.
As Bill Bryson mentions in the Introduction, the English clergyman and mathematician Thomas Bayes (circa 1702–61) formulated an approach to probability now called Bayesian inference. His key theorem was published posthumously in 1764. In essence, it expresses how our knowledge base – and prejudices – establish an a priori probability for something (that is, a prior belief in what will happen based on as much data and theory as is available). As we further study the system, obtaining more data and devising better theories, we amend our prior belief and establish a new, a posteriori probability – after the fact. This is called Bayesian updating. Over time, we keep revising our prior assumptions until eventually the facts converge on the real probability.
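For readers who like to see the machinery, Bayesian updating is little more than multiplying a prior by a likelihood and renormalising. Below is a minimal sketch in Python; the three hypotheses and all of the numbers are invented purely for illustration, not drawn from any real climate analysis.

```python
# A minimal sketch of Bayesian updating over a discrete set of hypotheses.
# All numbers are invented for illustration; nothing here is real climate data.

def bayes_update(priors, likelihoods):
    """Return posteriors P(H | data) from priors P(H) and likelihoods P(data | H)."""
    unnormalised = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# A priori belief over three hypotheses about a system's sensitivity.
priors = [0.3, 0.5, 0.2]        # P(low), P(medium), P(high)

# How probable the newly observed data would be under each hypothesis.
likelihoods = [0.1, 0.4, 0.7]   # P(data | low), P(data | medium), P(data | high)

posteriors = bayes_update(priors, likelihoods)
print(posteriors)               # approximately [0.08, 0.54, 0.38]
```

The posterior from one round of evidence becomes the prior for the next, which is exactly the sense in which beliefs are ‘updated’ as data and theory improve.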
Since we cannot do experiments on the future, prediction is wholly a Bayesian exercise. This is precisely why the Intergovernmental Panel on Climate Change produces new assessments every six years or so, since new data and improved theory allow us to update our prior assumptions and increase our confidence in the projected conclusions.
That confidence still falls short of certainty for most aspects of the problem. For example, there is only maybe a fifty-fifty chance of sea levels rising many metres in centuries to come. The conclusion cannot be objective, since the future is yet to come. However, we can use current measurements of ice sheet melting. We can compare them with 125,000 years ago, when the Earth was a degree or two warmer than now and sea levels were four to six metres (thirteen to twenty feet) higher. Because that ancient natural warming had a different cause (changed orbital dynamics of the Earth around the Sun) from recent and near-future warming, caused primarily by anthropogenic greenhouse gas increases, we can’t say with high confidence that a few degrees of warming from greenhouse gases will also cause a four-to-six-metre rise in sea levels. But it undoubtedly indicates an uncomfortable Bayesian probability of something similar happening in the next few centuries. This indeed was the conclusion of the Synthesis Report of the IPCC’s Fourth Assessment in 2007, for exactly those reasons.
Some statisticians and scientists are leery of Bayesian methods. They prefer to stick only with empirical data and well-validated models. But what do you do when you don’t have such data? One example is found in clinical trials of cancer treatments, a subject in which I have had a very personal interest. The ‘gold standard’ is a double-blind trial in which half the patients receive a placebo and the other half receive the drug being tested, and neither the patients nor the researchers know who got what. After five or ten years, if there is a statistically significant difference between the recovery rates of the drug and placebo groups, the trial is declared successful. But the trial isn’t designed to pinpoint individual differences. Even if we knew the odds of recovery for the average person under different treatments, there is a wide spread in individual responses. So medicine should try to tailor treatments to the individual’s idiosyncrasies. That makes some doctors – and many insurance companies – nervous. Likewise, some scientists and many policy-makers are nervous about Bayesian inferences based on the best assessment of experts, preferring hard statistics. But as there are no hard statistics on the future, Bayesian methods are all we have. They are certainly better than making no assessment at all and hoping that everything will work out fine without treatment. If we care about the future, we have to learn to engage with subjective analyses and updating – there is no alternative, other than to wait for Laboratory Earth to perform the experiment for us, with all living things on the planet along for the ride.
CHANGING THE CULTURE OF SCIENCE
While we have refined our models, it has also taken decades to develop the right approach to these scientific realities, and to find the language to convey them properly to policy-makers. In the global climate policy discussion, the most important assessments have been produced by the Intergovernmental Panel on Climate Change, in an extraordinary exercise which involves thousands of scientists reviewing the latest evidence. Ever since the IPCC was founded in 1988, I have pushed hard for a cultural change in the assessments. As I have said, overcoming uncertainties, the traditional approach of what the philosopher Thomas Kuhn7 called ‘normal science’, will take an unforeseeably long time. Climate systems science demands a shift to managing uncertainties instead.
That means we scientists, and policy-makers, grappling with climate change impacts are dealing with risk management. As the sea level rise example indicates, outcomes cannot be assessed with high confidence in many important cases, but the probable range can often be estimated.
Risk-management framing is a judgment about acceptable and unacceptable risks. That makes it a value judgment. As with the Bayesian approach to probability, many traditional scientists are uncomfortable with that. I am one of them, but I am more uncomfortable ignoring the problems altogether because they don’t fit neatly into our paradigm of ‘objective’ falsifiable research based on already known empirical data.
Systems science also alerts us to the possibility of ‘surprises’ in future global climate – perhaps extreme outcomes or tipping points which lead to unusually rapid changes of state. By definition, very little in climate science is more uncertain than the possibility of ‘surprises’. But it is nevertheless a real one. Even so, it took several long rounds of assessment just to get the IPCC to mention surprises, let alone discuss formal subjective probabilistic treatment of such potentially irreversible, large changes.
John Houghton, former director of the UK Meteorological Office and the IPCC Working Group I leader for the first three assessment reports, was initially very reluctant to get into the surprises tangle. I recall a very clear exchange at a climate meeting at Oxford University in 1993.8 Houghton thought the public discussion about ‘surprises’ was too speculative and would be abused by the media. ‘Aren’t you just a little bit worried that some will take this surprises/abrupt change issue and take it too far?’ he asked. ‘I am, John; we have to frame it very carefully,’ I replied. ‘But I am at least equally worried that if we don’t tell the political world the full range of what might happen that could materially affect them, we have not done our jobs fully and are substituting our values on how to take risks for those of society – the right level to decide such questions.’9
In the end, despite the worry that discussions of surprises and non-linearities could be taken out of context by extreme elements in the press and NGOs, we were able to include a small section on the need for both more formal and subjective treatments of uncertainties and outright surprises in the IPCC Second Assessment Report (SAR) in 1995.10 Chapter 11, ‘Advancing Our Understanding’, was about what to do later, and so was not directly assessed in the more politically sensitive conclusions of the report. Thus, John did not object to the few sentences on those topics in that chapter. As a result, the very last sentence of the IPCC Working Group I 1995 Summary for Policy Makers (SPM)11 addresses the abrupt non-linearity issue: ‘When rapidly forced, non-linear systems are especially subject to unexpected behaviour.’ By simply noting that, it made much more in-depth assessment possible in subsequent IPCC reports.
A LANGUAGE FOR RISK
Now we had licence to pursue in more depth the risk assessment of uncertain-probability but high-consequence possibilities. But how should we go about it? The basics are that scientists can help policy-makers by laying out the elements of risk, classically defined as consequence × probability. In other words, what can happen, and what are the odds of it happening?
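As a toy illustration of that classical definition – with wholly invented probabilities and damage figures – the calculation is just a product, summed over the possible outcomes:

```python
# Risk = consequence x probability, summed over outcomes.
# The scenarios and numbers below are invented for illustration only.

scenarios = [
    # (name, probability, consequence in arbitrary damage units)
    ("modest warming",  0.6,   1.0),
    ("serious warming", 0.3,  10.0),
    ("abrupt change",   0.1, 100.0),   # low probability, high consequence
]

for name, prob, consequence in scenarios:
    print(f"{name}: risk = {prob * consequence:.1f}")

expected_risk = sum(p * c for _, p, c in scenarios)
print(f"total expected risk = {expected_risk:.1f}")   # 13.6, dominated by the rare extreme
```

Note how the total is dominated by the rare but extreme outcome; that is precisely why low-probability, high-consequence possibilities cannot simply be dropped from an honest assessment.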
The plethora of uncertainties inherent in climate change projections clearly makes risk assessment difficult. The inertia in the climate and socio-economic systems, and the fact that greenhouse gas emissions will continue to rise in the absence of strong mitigation policies (or unexpected events like a prolonged recession), mean that delay has real costs; yet globally most policy-makers have been reluctant to make long-term investments beyond their expected terms in office. But that is changing, both in some regions like the EU and even in the US. These kinds of decision-makers are increasingly wary of making what is known as a Type II error – fiddling while the Earth burns. A Type I error is a false positive, which in this case would mean taking action against climate change that subsequently proved relatively needless. Scientists are often leery of making a Type I error when data are scarce, for fear of misleading society into unnecessary actions and being blamed for undue alarm. A Type II error is a false negative, which in this case would mean assuming it is preferable to do little or nothing until there is less uncertainty, and subsequently finding that serious climate change ensues unabated, with much more damage than if precautionary policies had been undertaken to adapt to and mitigate the effects. So it appears that many scientists are Type I error avoiders and our future-oriented decision-makers Type II error avoiders. A less charitable interpretation of those reluctant to invest in precautionary adaptation and mitigation measures is that they know the really adverse outcomes will likely occur when current decision-makers are no longer in office and not likely to be held accountable. The short-term incentive is to delay action and pass the risks and the recriminations on to the next generation. None of this is scientific risk assessment; these are value judgments on where and how to take risks and make investments in policy hedges – in short, risk management. But risk management is put on a much firmer scientific basis when the managers are schooled in the best risk assessments that state-of-the-art science can produce.
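The trade-off between the two error types can be made concrete with a small simulation. In this sketch (all numbers invented), a decision-maker acts whenever a noisy estimate of the warming trend crosses a threshold:

```python
import random

# Illustrative sketch with invented numbers. A Type I error (false positive) is
# acting when the true trend is harmless; a Type II error (false negative) is
# failing to act when the true trend is serious.

random.seed(1)
THRESHOLD = 1.0   # act if the noisy estimate exceeds this (degrees C)
NOISE = 0.8       # standard deviation of measurement and model noise

def decide_to_act(true_trend):
    estimate = true_trend + random.gauss(0, NOISE)
    return estimate > THRESHOLD

trials = 10_000
type_1 = sum(decide_to_act(0.2) for _ in range(trials)) / trials       # harmless trend
type_2 = sum(not decide_to_act(1.5) for _ in range(trials)) / trials   # serious trend

print(f"Type I rate (needless action):             {type_1:.1%}")
print(f"Type II rate (fiddling while Earth burns): {type_2:.1%}")
# Raising THRESHOLD trades Type I errors for Type II errors, and vice versa.
```

Raising the threshold makes needless action rarer but inaction in the face of real change more common; where to set it is a value judgment about which error society can better afford, not a purely scientific question.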
To help decision-makers, the IPCC produced a Guidance Paper on Uncertainties in 2000,12 which was a foundation for the 2007 Fourth Assessment Report.13 I prepared the original draft with Richard Moss, now a Senior Scientist at the Joint Global Change Research Institute, after convening a meeting in 1996 in which about two dozen IPCC lead authors met with decision analysts to fashion a better way to treat uncertainties in scientific assessments. The final guidance eventually agreed within the IPCC was a quantitative scale. We would define ‘very low confidence’ as below a 5 per cent chance; ‘low confidence’ as less than one-in-three; ‘medium confidence’, one-in-three to two-in-three; ‘high confidence’, above two-thirds; and ‘very high confidence’, above 95 per cent.
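In code, the agreed scale is a straightforward mapping from an assessed probability to a confidence phrase. This sketch follows the cut-offs just described; how the exact boundary values are assigned to categories is my own choice for illustration:

```python
def confidence_label(probability):
    """Map an assessed probability (0 to 1) to the IPCC confidence language."""
    if probability < 0.05:
        return "very low confidence"
    if probability < 1 / 3:
        return "low confidence"
    if probability <= 2 / 3:
        return "medium confidence"
    if probability <= 0.95:
        return "high confidence"
    return "very high confidence"

for p in (0.02, 0.20, 0.50, 0.80, 0.97):
    print(f"{p:.2f} -> {confidence_label(p)}")
```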
It took a long time to negotiate those numbers and those words in the Third Assessment Report cycle. Some people still felt that a quantitative scale could not be applied to issues that were too speculative, and that it was ‘too subjective’ for real scientists to indulge in ‘speculating on probabilities not directly measured’. One critic said, ‘Assigning confidence by group discussions, even if informed by the available evidence, was like doing seat-of-the-pants statistics over a good beer.’ He never answered my response: ‘Would you and your colleagues think you’d do that subjective estimation less credibly than your Minister of the Treasury or the President of the US Chamber of Commerce?’
So we had two things we wanted everyone to use: a set of numbers defining the probability ranges for words such as ‘likely’, and a set of qualitative phrases for our confidence in the results, ranging from ‘well established’, where there were a lot of data and a lot of agreement between theory and data, to ‘speculative’, where there was little data and not much agreement. We had ‘established but incomplete’ and ‘competing explanations’ for the intermediate cases.
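The qualitative scale can be sketched the same way, as a two-way classification on the amount of evidence and the degree of agreement between theory and data. Assigning the two intermediate phrases to particular cells is my own reading of the scheme:

```python
def state_of_knowledge(much_evidence, high_agreement):
    """Map (amount of evidence, theory-data agreement) to a qualitative phrase."""
    if much_evidence and high_agreement:
        return "well established"
    if much_evidence:                 # plenty of data, but theories disagree
        return "competing explanations"
    if high_agreement:                # theories agree, but data are sparse
        return "established but incomplete"
    return "speculative"              # little data and little agreement

print(state_of_knowledge(True, True))    # well established
print(state_of_knowledge(False, False))  # speculative
```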
And then for the next two years Richard and I became what a journalist later called ‘the uncertainty cops’. I read three thousand pages of draft material for the IPCC’s Third Assessment Report. People did not always use uncertainty terms according to our simple rules. For instance, they would say that because of uncertainties, we can’t be ‘definitive’. I wrote back, ‘What is the probability of a “definitive”?’ Early drafts would put the range of outcomes anywhere from a one to five degrees Celsius change in temperature. And then they would say in parentheses ‘medium confidence’. That was completely incorrect. It was ‘very high confidence’, because they were talking about the fact that between one and five degrees was a very, very likely place to arrive. But people didn’t want to say ‘very high confidence’ because nobody felt very confident about the state of the science at the level of pinning it down to, say, one degree. So Richard or I would help them to rewrite, and say that we have ‘low confidence’ in specific forecasts to a precision of a half degree, but we have ‘high confidence’ that the range is one to five degrees. Simple things like that were needed to achieve consistency of message.
Meanwhile the political chicanery of ideologists and special interests was shamelessly exploiting systems uncertainty by misframing the climate debate as bipolar – ‘the end of the world’ versus ‘it’s good for you’. The media compliantly carried it in that frame much of the time, too. But those were, and still are in my view, the two lowest-probability outcomes. The confusion that bipolar framing has engendered creates in the public at large a sense that ‘if the experts don’t know the answers, how can I, a mere lay citizen, fathom this complex situation?’ To this, industry-funded pressure groups added the old trick of recruiting non-climate scientists who are sceptical of anthropogenic climate change to serve as counterweights to mainstream climate scientists. This spreads doubt and confusion among those who don’t look up the credentials of the apparently contending scientists – and that, unfortunately, includes most of the public and too much of the media. The framing of the climate problem as ‘unproved’, ‘lacking a consensus’ and ‘too uncertain for preventive policy’ has been advanced strategically by the defenders of the status quo. It is very similar to the tactics of the Tobacco Institute and its three-decade record of distortion, which helped stall policy action against the tobacco industry despite the horrendous health consequences and, eventually, billions of dollars in successful lawsuits against big tobacco.