The Signal and the Noise

by Nate Silver


  Reassuringly, the differences between the various records are fairly modest71 (figure 12-5). All six show both 1998 and 2010 as having been among the three warmest years on record, and all six show a clear long-term warming trend, especially since the 1950s when atmospheric CO2 concentrations began to increase at a faster rate. For purposes of evaluating the climate forecasts, I’ve simply averaged the six temperature records together.

  James Hansen’s Predictions

  One of the more forthright early efforts to forecast temperature rise came in 1981, when Hansen and six other scientists published a paper in the esteemed journal Science.72 These predictions, which were based on relatively simple statistical estimates of the effects of CO2 and other atmospheric gases rather than a fully fledged simulation model, have done quite well. In fact, they very slightly underestimated the amount of global warming observed through 2011.73

  Hansen is better known, however, for his 1988 congressional testimony as well as a related 1988 paper74 that he published in the Journal of Geophysical Research. This set of predictions did rely on a three-dimensional physical model of the atmosphere.

  Hansen told Congress that Washington could expect to experience more frequent “hot summers.” In his paper, he defined a hot summer as one in which average temperatures in Washington were in the top one-third of the summers observed from 1950 through 1980. He said that by the 1990s, Washington could expect to experience these summers 55 to 70 percent of the time, or roughly twice their 33 percent baseline rate.

  In fact, Hansen’s prediction proved to be highly prescient for Washington, DC. In the 1990s, six of the ten summers75 qualified as hot (figure 12-6), right in line with his prediction. About the same rate persisted in the 2000s and Washington experienced a record heat wave in 2012.

  In his paper, Hansen had also made these predictions for three other cities: Omaha, Memphis, and New York. These results were more mixed and illustrate the regional variability of the climate. Just 1 out of 10 summers in Omaha in the 1990s qualified as “hot” by Hansen’s standard, well below the historical average rate of 33 percent. But 8 out of 10 summers in New York did, according to observations at LaGuardia Airport.

  Overall, the predictions for the four cities were reasonably good, but were toward the lower end of Hansen’s range. His global temperature predictions are harder to evaluate because they articulated a plethora of scenarios that relied on different assumptions, but they were also somewhat too high.76 Even the most conservative scenario somewhat overestimated the warming experienced through 2011.

  The IPCC’s 1990 Predictions

  The IPCC’s 1990 forecasts represented the first true effort at international consensus predictions in the field and therefore received an especially large amount of attention. These predictions were less specific than Hansen’s, although when they did go into detail they tended to get things mostly right. For instance, they predicted that land surfaces would warm more quickly than water surfaces, especially in the winter, and that there would be an especially substantial increase in temperature in the Arctic and other northerly latitudes. Both of these predictions have turned out to be correct.

  The headline forecast, however, was that of the global temperature rise. Here, the IPCC’s prediction left more to be desired.

  The IPCC’s temperature forecast, unlike Hansen’s, took the form of a range of possible outcomes. At the high end of the range was a catastrophic temperature increase of 5°C over the course of the next one hundred years. At the low end was a more modest increase of 2°C per century, with a 3°C increase representing the most likely case.77

  In fact, the actual temperature increase has been on a slower pace since the report was published (figure 12-7). Temperatures increased by an average of 0.015°C per year from the time the IPCC forecast was issued in 1990 through 2011, or at a rate of 1.5°C per century. This is about half the IPCC’s most likely case, of 3°C warming per century, and also slightly less than the low end of their range at 2°C. The IPCC’s 1990 forecast also overestimated the amount of sea-level rise.78

  This represents a strike against the IPCC’s forecasts, although we should consider one important qualification.

  The IPCC forecasts were predicated on a “business-as-usual” case that assumed that there would be no success at all in mitigating carbon emissions.79 This scenario implied that the amount of atmospheric CO2 would increase to about four hundred parts per million (ppm) by 2010.80 In fact, some limited efforts to reduce carbon emissions were made, especially in the European Union,81 and this projection was somewhat too pessimistic; CO2 levels had risen to about 390 ppm as of 2010.82 In other words, the error in the forecast in part reflected scenario uncertainty—which turns more on political and economic questions than on scientific ones—and the IPCC’s deliberately pessimistic assumptions about carbon mitigation efforts.*

  Nevertheless, the IPCC later acknowledged their predictions had been too aggressive. When they issued their next forecast, in 1995, the range attached to their business-as-usual case had been revised considerably lower: warming at a rate of about 1.8°C per century.83 This version of the forecasts has done quite well relative to the actual temperature trend.84 Still, that represents a fairly dramatic shift. It is right to correct a forecast when you think it might be wrong rather than persist in a quixotic fight to the death for it. But this is evidence of the uncertainties inherent in predicting the climate.

  The score you assign to these early forecasting efforts overall might depend on whether you are grading on a curve. The IPCC’s forecast miss in 1990 is partly explained by scenario uncertainty. But this defense would be more persuasive if the IPCC had not substantially changed its forecast just five years later. On the other hand, their 1995 temperature forecasts have gotten things about right, and the relatively few specific predictions they made beyond global temperature rise (such as ice shrinkage in the Arctic85) have done quite well. If you hold forecasters to a high standard, the IPCC might deserve a low but not failing grade. If instead you have come to understand that the history of prediction is fraught with failure, they look more decent by comparison.

  Uncertainty in forecasts is not necessarily a reason not to act—the Yale economist William Nordhaus has argued instead that it is precisely the uncertainty in climate forecasts that compels action,86 since the high-warming scenarios could be quite bad. Meanwhile, our government spends hundreds of billions of dollars on economic stimulus programs, or initiates wars in the Middle East, under the pretense of what are probably far more speculative forecasts than are pertinent in climate science.87

  The Lessons of “Global Cooling”

  Still, climate scientists put their credibility on the line every time they make a prediction. And in contrast to other fields, in which poor predictions are quickly forgotten, errors in forecasts about the climate are remembered for decades.

  One common claim among climate critics is that there once had been predictions of global cooling and possibly a new ice age. Indeed, there were a few published articles that projected a cooling trend in the 1970s. They rested on a reasonable-enough theory: that the cooling trend produced by sulfur emissions would outweigh the warming trend produced by carbon emissions.

  These predictions were refuted in the majority of the scientific literature.88 This was less true in the news media. A Newsweek story in 1975 imagined that the River Thames and the Hudson River might freeze over and stated that there would be a “drastic decline” in food production89—implications drawn by the writer of the piece but not any of the scientists he spoke with.

  If the media can draw false equivalences between “skeptics” and “believers” in the climate science debate, it can also sometimes cherry-pick the most outlandish climate change claims even when they have been repudiated by the bulk of a scientist’s peers.

  “The thing is, many people are going around talking as if they looked at the data. I guarantee that nobody ever has,” Schmidt told me after New York’s October 2011 snowstorm, which various media outlets portrayed as evidence either for or against global warming.

  Schmidt received numerous calls from reporters asking him what October blizzards in New York implied about global warming. He told them he wasn’t sure; the models didn’t go into that kind of detail. But some of his colleagues were less cautious, and the more dramatic their claims, the more likely they were to be quoted in the newspaper.

  The question of sulfur emissions, the basis for those global cooling forecasts in the 1970s, may help to explain why the IPCC’s 1990 forecast went awry and why the panel substantially lowered their range of temperature predictions in 1995. The Mount Pinatubo eruption in 1991 burped sulfur into the atmosphere, and its effects were consistent with climate models.90 But it nevertheless underscored that the interactions between greenhouse gases and other atmospheric constituents, like sulfur, can be challenging to model and can introduce error into the system.

  Sulfur emissions from manmade sources peaked in the early 1970s before declining91 (figure 12-8), partly because of policies like the Clean Air Act, signed into law by President Nixon in 1970 to combat acid rain and air pollution. Some of the warming trend during the 1980s and 1990s probably reflected this decrease in sulfur, since SO2 emissions counteract the greenhouse effect.

  Since about 2000, however, sulfur emissions have increased again, largely as the result of increased industrial activity in China,92 which has little environmental regulation and a lot of dirty coal-fired power plants. Although the negative contribution of sulfur emissions to global warming is not as strong as the positive contribution from carbon—otherwise those global cooling theories might have proved to be true!—this may have provided something of a brake on warming.

  A Simple Climate Forecast

  So suppose that you have good reason to be skeptical of a forecast—for instance, because it purports to make fairly precise predictions about a very complex process like the climate, or because it would take years to verify the forecast’s accuracy.

  Sophomoric forecasters sometimes make the mistake of assuming that just because something is hard to model they may as well ignore it. Good forecasters always have a backup plan—a reasonable baseline case that they can default to if they have reason to worry their model is failing. (In a presidential election, your default prediction might be that the incumbent will win—that will do quite a bit better than just picking between the candidates at random.)

  What is the baseline in the case of the climate? If the critique of global warming forecasts is that they are unrealistically complex, the alternative would be a simpler forecast, one grounded in strong theoretical assumptions but with fewer bells and whistles.

  Suppose, for instance, that you had attempted to make a climate forecast based on an extremely simple statistical model: one that looked solely at CO2 levels and temperatures, and extrapolated a prediction from these variables alone, ignoring sulfur and ENSO and sunspots and everything else. This wouldn’t require a supercomputer; it could be calculated in a few microseconds on a laptop. How accurate would such a prediction have been?

  In fact, it would have been very accurate—quite a bit better, actually, than the IPCC’s forecast. If you had placed the temperature record from 1850 through 1989 into a simple linear regression equation, along with the level of CO2 as measured in Antarctic ice cores93 and at the Mauna Loa Observatory in Hawaii, it would have predicted a global temperature increase at the rate of 1.5°C per century from 1990 through today, exactly in line with the actual figure (figure 12-9).
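  As an illustration of how little machinery such a forecast requires, here is a minimal sketch in Python of the approach described above: an ordinary least-squares regression of temperature anomaly on atmospheric CO2, fit on pre-1990 data and then extrapolated forward. The numbers in the arrays are illustrative placeholders, not the actual ice-core and Mauna Loa record, so the output will not reproduce the book’s figures exactly.

    # A minimal sketch of the "simple model": regress temperature anomaly on CO2
    # concentration and extrapolate. The data below are illustrative placeholders,
    # not the actual ice-core / Mauna Loa record used in the book.
    import numpy as np

    # Hypothetical pre-1990 training data: CO2 (ppm) and temperature anomaly (deg C)
    co2_ppm = np.array([285.0, 290.0, 295.0, 300.0, 310.0, 316.0, 325.0, 338.0, 353.0])
    temp_anomaly = np.array([-0.35, -0.30, -0.28, -0.25, -0.15, -0.05, 0.00, 0.10, 0.25])

    # Ordinary least squares: anomaly = slope * CO2 + intercept
    slope, intercept = np.polyfit(co2_ppm, temp_anomaly, 1)

    # Extrapolate to a later CO2 level (roughly the 2010 value of about 390 ppm)
    predicted = slope * 390.0 + intercept
    print(f"{slope:.4f} deg C per ppm; predicted anomaly at 390 ppm: {predicted:.2f} deg C")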

  Another technique, only slightly more complicated, would be to use estimates that were widely available at the time about the overall relationship between CO2 and temperatures. The common currency of any global warming forecast is a value that represents the effect on temperatures from a doubling (that is, a 100 percent increase) in atmospheric CO2. There has long been some agreement about this doubling value.94 From forecasts like those made by the British engineer G. S. Callendar in 193895 that relied on simple chemical equations, to those produced by today’s supercomputers, estimates have congregated96 between 2°C and 3°C of warming from a doubling of CO2.

  Given the actual rate of increase in atmospheric CO2, that simple conversion would have implied temperature rise at a rate of between 1.1°C and 1.7°C per century from 1990 through the present day. The actual warming pace of 0.015°C per year or 1.5°C per century fits snugly within that interval.
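  The arithmetic behind that conversion is simple enough to sketch. One common convention applies the sensitivity to the base-2 logarithm of the ratio of CO2 concentrations; the Python below shows that calculation with approximate 1990 and 2010 CO2 values plugged in. Because the result depends on which CO2 series, time window, and lag assumptions are used, this sketch lands in the same neighborhood as, but does not exactly reproduce, the range quoted above.

    # Back-of-the-envelope sketch of the doubling-value conversion, using the
    # common convention that warming scales with log2 of the CO2 ratio. The ppm
    # values are rough approximations, so the output is illustrative only.
    import math

    sensitivity_range = (2.0, 3.0)      # deg C of warming per doubling of CO2
    co2_start, co2_end = 354.0, 390.0   # approximate ppm, circa 1990 and 2010
    years = 2010 - 1990

    doublings = math.log2(co2_end / co2_start)   # fraction of a doubling realized
    for s in sensitivity_range:
        warming = s * doublings                  # implied warming over the window
        print(f"{s:.0f} deg C per doubling implies {warming / years * 100:.1f} deg C per century")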

  James Hansen’s 1981 forecasts, which relied on an approach much like this, did quite a bit better at predicting current temperatures than his 1988 forecast, which relied on a simulation model of the climate.

  The Armstrong and Green critique of model complexity thus looks pretty good here. But the success of the more basic forecasting methods suggests that Armstrong’s critique may have won the battle but not the war. He is asking some good questions about model complexity, and the fact that the simple models do pretty well in predicting the climate is one piece of evidence in favor of his position that simpler models are preferable. However, since the simple methods correctly predicted a temperature increase in line with the rise in CO2, they are also evidence in favor of the greenhouse-effect hypothesis.

  Armstrong’s no-change forecast, by contrast, leaves some of the most basic scientific questions unanswered. The forecast took its baseline from 2007, a year that was not exceptionally warm but was nevertheless warmer than all but one year in the twentieth century. Is there a plausible hypothesis that explains why 2007 was warmer than 1987 or 1947 or 1907—other than through changes in atmospheric composition? One of the most tangible contributions of climate models, in fact, is that they find it impossible to replicate the current climate unless they account for the increased atmospheric concentration of CO2 and other greenhouse gases.97

  Armstrong told me he made the no-change forecast because he did not think there were good Bayesian priors for any alternative assumption; the no-change forecast, he has found, has been a good default in the other areas that he has studied. This would be a more persuasive case if he had applied the same rigor to climate forecasting that he had to other areas he has studied. Instead, as Armstrong told a congressional panel in 2011,98 “I actually try not to learn a lot about climate change. I am the forecasting guy.”

  This book advises you to be wary of forecasters who say that the science is not very important to their jobs, or scientists who say that forecasting is not very important to their jobs! These activities are essentially and intimately related. A forecaster who says he doesn’t care about the science is like the cook who says he doesn’t care about food. What distinguishes science, and what makes a forecast scientific, is that it is concerned with the objective world. What makes forecasts fail is when our concern only extends as far as the method, maxim, or model.

  An Inconvenient Truth About the Temperature Record

  But if Armstrong’s critique is so off the mark, what should we make of his proposed bet with Gore? It has not been a failed forecast at all; on the contrary, it has gone quite successfully. Since Armstrong made the bet in 2007, temperatures have varied considerably from month to month but not in any consistent pattern; 2011 was a slightly cooler year than 2007, for instance.

  And this has been true for longer than four years: one inconvenient truth is that global temperatures did not increase at all in the decade between 2001 and 2011 (figure 12-10). In fact they declined, although imperceptibly.99

  This type of framing can sometimes be made in bad faith. For instance, if you take 1998, a year with record-high temperatures associated with the ENSO cycle, as your starting point, it will be easier to identify a cooling “trend.” Conversely, the decadal “trend” from 2008 through 2018 will very probably be toward warming once it is calculated, since 2008 was a relatively cool year. Statistics of this sort are akin to when the stadium scoreboard optimistically mentions that the shortstop has eight hits in his last nineteen at-bats against left-handed relief pitchers—ignoring the fact that he is batting .190 for the season.100
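  How much these decadal “trends” depend on the starting year is easy to demonstrate. The Python sketch below fits ten-year least-squares trends from different starting points to a synthetic temperature series; the series is invented purely for illustration, with a steady underlying warming trend, an artificially hot year standing in for the 1998 ENSO spike, and a relatively cool 2008.

    # Illustration of how the choice of starting year changes a decadal "trend".
    # The series is synthetic: a steady 0.015 deg C/year warming trend, plus an
    # artificially hot 1998 (an ENSO-like spike) and a relatively cool 2008.
    import numpy as np

    years = np.arange(1990, 2019)
    temps = 0.015 * (years - 1990)       # underlying warming trend
    temps[years == 1998] += 0.30         # ENSO-like spike
    temps[years == 2008] -= 0.15         # relatively cool year

    for start in (1998, 2001, 2008):
        window = (years >= start) & (years <= start + 9)
        slope = np.polyfit(years[window], temps[window], 1)[0]
        print(f"{start}-{start + 9}: {slope * 100:+.2f} deg C per century")

  With these made-up numbers, the window starting in 1998 shows a slight “cooling” trend even though the underlying series warms throughout, while the windows starting in 2001 and 2008 show warming.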

  Yet global warming does not progress at a steady pace. Instead, the history of temperature rise is one of a clear long-term increase punctuated by periods of sideways or even negative trends. In addition to the decade between 2001 and 2011, for instance, there would have been little sign of warming between 1894 and 1913, or 1937 and 1956, or 1966 and 1977 (figure 12-11)—even though CO2 concentrations were increasing all the while. This problem bears some resemblance to that faced by financial analysts: over the very long run, the stock market essentially always moves upward. But this tells you almost nothing about how it will behave in the next day, week, or year.

  It might be possible to explain some of the recent sideways trend directly from the science; increased sulfur emissions in China might have played some role, for instance. And it should be remembered that although temperatures did not rise from 2001 through 2011, they were still much warmer than in any prior decade.

  Nevertheless, this book encourages readers to think carefully about the signal and the noise and to seek out forecasts that couch their predictions in percentage or probabilistic terms. They are a more honest representation of the limits of our predictive abilities. When a prediction about a complex phenomenon is expressed with a great deal of confidence, it may be a sign that the forecaster has not thought through the problem carefully, has overfit his statistical model, or is more interested in making a name for himself than in getting at the truth.

  Neither Armstrong nor Schmidt was willing to hedge very much on their predictions about the temperature trend. “We did some simulations from 1850 up to 2007,” Armstrong told me. “When we looked one hundred years ahead it was virtually certain that I would win that bet.”101 Schmidt, meanwhile, was willing to offer attractive odds to anyone betting against his position that temperatures would continue to increase. “I could easily give you odds on the next decade being warmer than this decade,” he told me. “You want 100-to-1 odds, I’d give it to you.”

 
