
The Resilient Earth: Science, Global Warming and the Fate of Humanity


by Doug L. Hoffman and Allen Simmons


  This ratio can be measured very accurately using a mass spectrometer. Though accuracies of better than 1% can be achieved with modern instruments,405 the quantities involved are often close to the limits of measurability to begin with. Great care must also be taken to avoid contaminating the samples, since contact with the modern atmosphere or even the investigator's breath can bias the results.

  The δ18O ratio is expressed in parts per 1,000 compared with a reference sample called Standard Mean Ocean Water (SMOW). The more negative the value of δ18O, the lower the implied temperature when the snow fell. There are numerous other factors that can affect δ18O, such as altitude,406 atmospheric circulation patterns,407 and the timing of storms.408 These complications add further uncertainty to the derived temperatures. Uncertainties generally quoted by studies reaching back to the start of the Holocene are in the range of ±3.0°C.409 Even for more recent time frames, data quoted by the IPCC show temperature uncertainties that exceed the measured temperature increase for the last century. In fact, the IPCC's projected increase falls within the uncertainty range of the data on which its predictions are based.
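
  The relationship itself is simple enough to show in a few lines. The Python sketch below converts a measured 18O/16O ratio into δ18O in parts per 1,000; the sample ratio is made up, and the SMOW reference ratio is a commonly quoted value used here purely for illustration.

```python
# Illustrative only: convert an 18O/16O ratio into delta-18O (per mil)
# relative to the SMOW reference. The sample ratio below is made up.

R_SMOW = 0.0020052          # commonly quoted 18O/16O ratio of Standard Mean Ocean Water

def delta_18O(r_sample, r_standard=R_SMOW):
    """Return delta-18O in parts per thousand (per mil)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical polar ice sample depleted in 18O relative to SMOW:
r_ice = 0.0019351
print(f"delta-18O = {delta_18O(r_ice):.1f} per mil")   # roughly -35 per mil
```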

  Illustration 127: Snow compacting into firn, and finally glacial ice.

  To determine historic atmospheric CO2 levels, readings are taken from gas bubbles trapped in glacial ice. As snow layers build up year by year, they compact into progressively denser layers of firn. Firn is old snow found on top of glaciers, granular and compact but not yet converted into solid ice. Also called névé, it is a transitional stage between snow and glacial ice. Until the firn has become solid ice, the gaps between the ice crystals are still in contact with the atmosphere. The gas ratios do not become fixed until the gaps become bubbles in ice—a process that can take centuries.

  The trapped gas bubbles are not the same age as the ice that encases them. The ice crystals formed when the snow fell, while the gas bubbles were sealed in later, when the firn finally turned to ice. The time lag involved could be a few hundred years or more than a thousand. Just how long the bubble-trapping process took cannot be accurately known, but the lag that is assumed can change the apparent relationship between CO2 levels and temperature. A short lag shows CO2 leading, a longer lag has temperature leading. Scientists have concluded that the latter is the case. From studies of Antarctic ice cores going back half a million years, CO2 lags temperature by an average of 1,300 years ±1,000 years.410 Samples taken from around the end of the last glacial period indicate that CO2 levels did not begin to rise until after the warming began.
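
  The role of the assumed lag can be made concrete with a little synthetic data. In the sketch below (entirely artificial series, not ice core data), a "CO2" record that truly follows "temperature" by 800 years appears instead to lead it once its gas age is misassigned by 1,200 years.

```python
# Illustrative sketch with synthetic data: how the assumed gas-age/ice-age
# offset changes whether CO2 appears to lag or lead temperature.
import numpy as np

rng = np.random.default_rng(0)
step = 100                       # years between samples
n = 400                          # 40,000-year synthetic record

# Synthetic "temperature": a slowly varying AR(1) series.
temp = np.zeros(n)
for i in range(1, n):
    temp[i] = 0.9 * temp[i - 1] + rng.standard_normal()

# A "CO2" series that truly follows temperature by 800 years (8 samples), plus noise.
true_lag = 8
co2 = np.roll(temp, true_lag) + 0.1 * rng.standard_normal(n)

def best_lag(x, y, max_lag=20):
    """Lag (in samples) at which y best correlates with x.
    Positive means y lags x; negative means y appears to lead x."""
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(x[max_lag:-max_lag],
                         np.roll(y, -k)[max_lag:-max_lag])[0, 1] for k in lags]
    return list(lags)[int(np.argmax(corrs))]

# With the gas age assigned correctly, CO2 lags temperature by about 800 years.
print("apparent CO2 lag:", best_lag(temp, co2) * step, "years")

# If the bubble-trapping delay is underestimated by 1,200 years, the CO2 record
# is placed too early on the timescale and now appears to lead temperature.
co2_misdated = np.roll(co2, -12)            # shift 1,200 years earlier
print("with misdated gas age:", best_lag(temp, co2_misdated) * step, "years")
```

  Nothing about the underlying series changes in the second case; only the assumed timescale does, yet the apparent cause-and-effect ordering reverses.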

  There is uncertainty in the dates, in the temperature readings, and in the CO2 levels, as well as in the temporal relationships among all three. How a particular study chooses to resolve these uncertainties affects the interpretation of its results. The location of the ice may also give a distorted view of global average temperature. We know that the circumpolar current keeps Antarctica thermally isolated from the rest of the world, and historical records show that temperature differences between the tropics, temperate latitudes, and the poles have varied over time. This is not to say that ice core data is bad, or even wrong in a scientific sense. It simply means that there are open questions as to what such data mean. As always with proxy data, the levels of uncertainty must be taken into account when interpreting results.

  Ice cores have been taken from Antarctica, Greenland, and mountain glaciers around the world. Since Antarctica has been covered by glacial ice longer than any other place on Earth, it is quite popular with scientists looking for samples (see Illustration 128). The oldest continuous ice core ever taken was retrieved from Dome Concordia in Antarctica. Initial dating confirmed that the 10,500 ft (3,200 meter) core dated back at least 750,000 years.411 For comparison, the oldest Greenland ice cores date back only around 125,000 years.412

  Illustration 128: Antarctic ice core locations, source USGS.

  The annual layers in the Dome Concordia core are too thin to count visually, so the researchers are forced to rely on markers such as dust, gas composition and electrical conductivity. By matching layers to events that have already been dated, such as volcanic eruptions or the onset of ice ages, the entire ice core can be aligned with the past. The core was drilled by the European Project for Ice Coring in Antarctica (EPICA) and is being analyzed at the Alfred Wegener Institute in Bremerhaven, Germany.
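
  In its simplest form, this kind of dating amounts to interpolating ages between independently dated tie points. The sketch below uses hypothetical depths and ages; real age-depth models must also account for layer thinning and ice flow, so plain linear interpolation is only a crude stand-in.

```python
# Illustrative sketch: assigning ages to core depths by interpolating
# between independently dated tie points (e.g., known volcanic ash layers).
# All depths and ages below are hypothetical.
import numpy as np

# Depth (m) and age (years before present) of dated marker layers.
tie_depth = np.array([0.0, 420.0, 1150.0, 2100.0, 3000.0])
tie_age   = np.array([0.0, 11700.0, 130000.0, 430000.0, 740000.0])

def age_at_depth(depth_m):
    """Estimate age by linear interpolation between the dated tie points."""
    return np.interp(depth_m, tie_depth, tie_age)

for d in (300.0, 1500.0, 2800.0):
    print(f"depth {d:6.0f} m  ->  about {age_at_depth(d):8.0f} years BP")
```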

  In results published in Science, a high-resolution deuterium profile is now available for the entire EPICA Dome C ice core. This profile allowed the construction of a climate record extending back 800,000 years before the present. The ice core has provided temperature data covering 11 glacial periods and their corresponding interglacials. The authors used an atmospheric GCM to calculate an improved temperature record for the entire interval, finding that warm intervals were as much as 8°F (4.5°C) warmer, and cold intervals as much as 18°F (10°C) colder, than pre-anthropogenic Holocene values.413
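
  For a sense of what an isotope-to-temperature conversion involves, the crudest possible approach is a fixed linear slope between the deuterium anomaly and temperature. The EPICA authors used an atmospheric GCM rather than anything this simple; the slope and the sample value below are assumptions made purely for illustration.

```python
# Purely illustrative: the simplest way to turn a deuterium anomaly into a
# temperature anomaly is a fixed linear slope. The slope and delta-D value
# below are hypothetical; the published record was built with a GCM.
SLOPE_PERMIL_PER_DEGC = 6.0    # assumed isotope-temperature slope (per mil per °C)

def temp_anomaly_from_dD(delta_dD_permil):
    """Temperature anomaly (°C) implied by a deuterium anomaly (per mil)."""
    return delta_dD_permil / SLOPE_PERMIL_PER_DEGC

# A hypothetical glacial sample 54 per mil lighter than the Holocene mean:
print(f"{temp_anomaly_from_dD(-54.0):.1f} °C relative to the Holocene")   # -9.0 °C
```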

  Living with Error and Uncertainty

  Errors can be divided into two general categories: systematic and random. Systematic errors shift all measurements in a consistent direction. They may be due to incorrect calibration of equipment, consistently improper use of equipment, or failure to properly account for some external effect. The time lag in temperature data from an ascending radiosonde and the offset between CO2 and proxy temperature in ice core samples are examples of systematic error. Large systematic errors can often be identified and measured, allowing corrections to be applied to the experimental data, but small systematic errors will always be present. For instance, no instrument can ever be calibrated perfectly.

  Random errors are errors which vary from one measurement to the next. They cause data measurements to fluctuate about some average value. Random errors can occur for a variety of reasons.

  Lack of sensitivity—an instrument may not be able to respond to or to indicate a sufficiently small change. Most instruments come with resolution limitations noted by their manufacturers.

  Noise—extraneous disturbances that cannot be quantified or accounted for. Short-duration local climate variation, for example, can distort long-term climate trends.

  Random fluctuation due to statistical processes—such as the average rate of radioactive decay. Such randomness is an inherent property of the phenomenon being measured.

  Imprecise definition—the property being measured is not defined or understood precisely enough for successive measurements to capture exactly the same quantity.

  Random errors displace measurements in arbitrary ways, whereas systematic errors displace measurements in a single direction. Some systematic error can be taken into account and substantially eliminated. Random errors are unavoidable and must be lived with.
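
  The distinction is easy to demonstrate with a small simulation (all numbers hypothetical): averaging many readings beats down the random error, but leaves a systematic calibration offset untouched.

```python
# Illustrative sketch: averaging repeated readings reduces random error
# but does nothing about a systematic (calibration) offset.
import numpy as np

rng = np.random.default_rng(1)

true_value = 15.0          # the quantity being measured (hypothetical units)
systematic_offset = 0.4    # e.g., a miscalibrated instrument reads 0.4 high
random_sigma = 0.5         # spread of the measurement-to-measurement random error

for n in (10, 100, 10_000):
    readings = true_value + systematic_offset + random_sigma * rng.standard_normal(n)
    error = readings.mean() - true_value
    print(f"n = {n:6d}  mean error = {error:+.3f}")

# As n grows, the mean error settles near +0.4: the random part averages
# away, the systematic part does not.
```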

  As previously stated, proxy data is secondhand data, not actual measurements of the property being evaluated. Proxies are used because direct measurements are not available. All proxy measurements contain errors from at least two sources: uncertainty inherent in the measurements themselves and error in interpreting the data. Errors in measurement involve not only the difficulties in taking the proxy readings but also uncertainty in dating the samples used to take the readings. The farther back in time we go, the more uncertainty enters the process. For the Holocene, ice core data has an uncertainty of about 300 years; for the early Pleistocene, the uncertainty can be as much as 9,000 years.

  Errors in ice core temperature proxies also become larger with age. They are affected by the hydrological conditions at the time the sample precipitated, conditions which become harder to ascertain with age. Ice core samples from the Holocene are quoted with uncertainties ranging from ±0.8°C to as much as ±4.0°C.414,415 Even data collected from satellites and direct measurement can contain significant errors. The accuracy of the combined ship and satellite data set, the Reynolds Optimum Interpolation Sea-Surface Temperature maps, is about ±0.3°C. Experimental uncertainty is simply a fact of life.

  Another type of error comes from how one type of measurement is translated into another. Scientists will tell you that this translation requires judgment. In other words, guesswork. In order to remove errors—and to provide more continuous historical coverage—combining multiple proxies into a single record has become standard procedure.
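
  One simple way to build such a composite, sketched below with synthetic records, is to standardize each proxy over a period common to all of them and then average whatever standardized values exist in each year. Actual reconstructions use considerably more elaborate statistical machinery; the records, units, and coverage here are invented for illustration.

```python
# Illustrative sketch of one simple compositing approach (not any particular
# team's method): standardize each proxy over a common period, then average
# whatever records are available in each year.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1400, 2001)

# Three synthetic "proxy" records with different coverage, units, and noise,
# all loosely tracking the same underlying signal.
signal = 0.002 * (years - 1400) + 0.3 * np.sin(2 * np.pi * (years - 1400) / 70)
proxies = {
    "tree rings": (years >= 1400,
                   5.0 + 2.0 * signal + 0.5 * rng.standard_normal(years.size)),
    "ice core":   (years >= 1500,
                   -30.0 + 4.0 * signal + 0.8 * rng.standard_normal(years.size)),
    "coral":      (years >= 1700,
                   0.1 + 1.0 * signal + 0.3 * rng.standard_normal(years.size)),
}

# Standardize each record over the period common to all (1700-2000),
# then average the standardized values that exist in each year.
common = years >= 1700
stack = np.full((len(proxies), years.size), np.nan)
for row, (name, (has_data, values)) in enumerate(proxies.items()):
    z = (values - values[common].mean()) / values[common].std()
    stack[row, has_data] = z[has_data]

composite = np.nanmean(stack, axis=0)
print("composite covers", years[0], "to", years[-1],
      "; value in 2000 =", round(composite[-1], 2))
```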

  At the end of the 20th century, three teams of researchers carried out the time-consuming and painstaking task of combining multiple proxy records into uninterrupted climate records for the recent past. In 1998, the three teams reported their results in separate journal articles: Briffa et al., in Nature, vol. 393, p. 350; Jones et al., in Holocene, vol. 8, p. 445; and Mann et al., in Nature, vol. 392, p. 779. This leads the discussion back to the questions of methodology and interpretation surrounding the Mann/IPCC hockey stick temperature history.

  The Hockey Stick Revisited

  Michael Mann's findings were arguably the single most influential factor in convincing the public that human-caused global warming is real. To construct the hockey stick plot, Mann, Raymond S. Bradley of the University of Massachusetts Amherst and Malcolm K. Hughes of the University of Arizona analyzed paleoclimate data sets from tree rings, ice cores and coral. Joining proxy data with thermometer readings from the recent past, they created a “reconstruction” of Northern Hemisphere temperatures going back 600 years. A year later, in 1999, they had extended their analysis to cover the last 1,000 years. In 2001, Mann's revised climate history became the official view of the IPCC, superseding previously accepted historical climate records. The IPCC placed the hockey-stick chart in the Summary for Policymakers section of the panel's Third Assessment Report, thrusting it into the public eye. But Mann's work had also attracted critics, particularly two Canadians, Ross McKitrick and Steve McIntyre. The same Steve McIntyre who recently discovered the Y2K discontinuity in NASA's GISS temperature records.

  McIntyre, a businessman involved in financing speculative mineral exploration, became interested in Mann's hockey stick because of the public debate about global warming. Not a scientist, but very familiar with statistical methods and promotional material, he was struck by how similar Mann's graph looked to a typical “sales pitch” chart. McIntyre attempted to reproduce the hockey stick graph from the data and methodology presented in the 1998 paper by Mann, Bradley and Hughes (MBH98), but could not. So began a statistical odyssey that would last for several years and end up in one of the most rancorous public debates in modern science.

  Eventually, McIntyre made contact with Ross McKitrick, an Associate Professor of Economics at the University of Guelph. Together, they began a correspondence with Mann et al. that eventually led to a running debate in a number of scientific journals. The tortuous history of this public squabble has been well documented416 and will not be recounted here. What will be presented are the arguments voiced by McKitrick and McIntyre, as stated in the abstract of their 2005 paper:

  The “hockey stick” shaped temperature reconstruction of Mann et al. [1998, 1999] has been widely applied. However it has not been previously noted in print that, prior to their principal components (PCs) analysis on tree ring networks, they carried out an unusual data transformation which strongly affects the resulting PCs. Their method, when tested on persistent red noise, nearly always produces a hockey stick shaped first principal component (PC1) and overstates the first eigenvalue. In the controversial 15th century period, the MBH98 method effectively selects only one species (bristlecone pine) into the critical North American PC1, making it implausible to describe it as the “dominant pattern of variance.” Through Monte Carlo analysis, we show that MBH98 benchmarks for significance of the Reduction of Error (RE) statistic are substantially under-stated and, using a range of cross-validation statistics, we show that the MBH98 15th century reconstruction lacks statistical significance.417
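
  The core of the complaint, the effect of "short centering" on persistent red noise, can be illustrated with a small simulation. The sketch below is a simplified demonstration of the general phenomenon, not a reproduction of the MBH98 procedure or of McIntyre and McKitrick's analysis; the record length, number of series, and noise persistence are assumptions chosen for illustration.

```python
# Simplified illustration of the effect McIntyre and McKitrick describe:
# principal components of persistent red noise, centered either over the
# full record or only over a recent "calibration" window (short centering).
import numpy as np

rng = np.random.default_rng(3)
n_years, n_series, calib, trials = 580, 70, 79, 50

def red_noise():
    """Independent AR(1) series (phi = 0.9) standing in for trendless proxies."""
    x = np.zeros((n_years, n_series))
    for t in range(1, n_years):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal(n_series)
    return x

def blade_index(data, short_centered):
    """How hockey-stick-like PC1 is: the gap between its calibration-window
    mean and its earlier mean, in units of PC1's own standard deviation."""
    mean = data[-calib:].mean(axis=0) if short_centered else data.mean(axis=0)
    u, s, _ = np.linalg.svd(data - mean, full_matrices=False)
    pc1 = u[:, 0] * s[0]
    return abs(pc1[-calib:].mean() - pc1[:-calib].mean()) / pc1.std()

for label, flag in (("full centering ", False), ("short centering", True)):
    score = np.mean([blade_index(red_noise(), flag) for _ in range(trials)])
    print(f"{label}: average blade index = {score:.2f}")

# Short centering consistently yields a much larger blade index: PC1 computed
# from pure noise acquires a flat "shaft" plus a bend in the recent window.
```

  Averaging over many noise realizations simply keeps the comparison stable from run to run; the point is the systematic difference between the two centering choices, not any single simulated curve.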

  Eventually, the argument came to a full boil, bringing both the scientific authorities and the US Congress into the fray. First to weigh in was the American National Academy of Sciences (NAS), a society of distinguished scholars engaged in scientific and engineering research, chartered by the American government. The NAS was signed into being by President Abraham Lincoln in 1863, with a mandate to “investigate, examine, experiment, and report upon any subject of science or art” whenever called upon to do so by any department of the government.418

  Under its legislatively established mandate, the NAS was asked to investigate the veracity of Mann's work. A panel of experts was convened and, after due deliberation, a report was issued. Mann's supporters have claimed that the report fully exonerated his conclusions. Others found that the report, written in typically careful scientific style, was more like “damning him with faint praise.” Here are the actual conclusions from the NAS report:

  Based on the analyses presented in the original papers by Mann et al. and this newer supporting evidence, the committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium. The substantial uncertainties currently present in the quantitative assessment of large-scale surface temperature changes prior to about A.D. 1600 lower our confidence in this conclusion compared to the high level of confidence we place in the Little Ice Age cooling and 20th century warming. Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods, and because not all of the available proxies record temperature information on such short timescales.419

  In short, it is seemingly a valid statement (i.e., “plausible”420) to say that things are warmer now than in the Medieval Warm Period. A stronger statement would have said the panel concurred or agreed with Mann's conclusion, not that it was plausible. The panel expressed a “high level of confidence” for cooling during the Little Ice Age and warming in the 20th century, conclusions that were not in dispute. But the claims made by Mann et al. that 1998 was the hottest year “in at least a millennium” are questionable due to lack of supporting data and flaws in their work. As we have seen, 1998 was not even the hottest year in the last century. An even blunter analysis of Mann's work is offered by the Wegman Report.

  This 2006 report was the result of an ad hoc committee of independent statisticians who were asked by a congressional committee to assess the statistical information presented in the Mann papers. Dr. Edward Wegman, a prominent statistics professor at George Mason University and chairman of the National Academy of Sciences’ (NAS) Committee on Applied and Theoretical Statistics, headed the panel of experts. Few statisticians in the world have qualifications to rival his. Also included on the committee were Dr. David Scott of Rice University, Dr. Yasmin Said of The Johns Hopkins University, Denise Reeves of MITRE Corp. and John T. Rigsby of the Naval Surface Warfare Center.

  Wegman found that Mann et al. made basic errors that “may be easily overlooked by someone not trained in statistical methodology.” Further, “We note that there is no evidence that Dr. Mann or any of the other authors in paleoclimate studies have had significant interactions with mainstream statisticians.” Instead, this small group of climate scientists was working on its own, largely in isolation, and without the academic scrutiny needed to ferret out false assumptions. When Wegman corrected Mann's statistical mistakes, the hockey stick disappeared. Wegman's committee found Mann's conclusions unsupportable, adding that “The paucity of data in the more remote past makes the hottest-in-a-millennium claims essentially unverifiable.” In the words of the report:

  It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community. Additionally, we judge that the sharing of research materials, data and results was haphazardly and grudgingly done. In this case we judge that there was too much reliance on peer review, which was not necessarily independent. Moreover, the work has been sufficiently politicized that this community can hardly reassess their public positions without losing credibility. Overall, our committee believes that Dr. Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.421

  Mann's supporters rushed to criticize the Wegman report, some even claiming that the report was invalid because it was not peer-reviewed. They missed the point—the Wegman report was a peer-review. In fact, Wegman's analysis of the peer-review process within the close-knit climatological community calls the validity of that process into question.

  More troubling is how widespread the misapplication of statistical techniques is in the broader climate-change and meteorological community. An indication of the depth of the problem is evident in the American Meteorological Society's Committee on Probability and Statistics. Again quoting Dr. Wegman: “I believe it is amazing for a committee whose focus is on statistics and probability that of the nine members only two are also members of the American Statistical Association, the premier statistical association in the United States, and one of those is a recent PhD with an assistant-professor appointment in a medical school.”

  While Wegman's advice—to use trained statisticians in studies reliant on statistics—may seem obvious, Mann's supporters and the IPCC faithful find the suggestion objectionable. Mann's hockey stick graph may be wrong, many experts now acknowledge, but they assert that he nevertheless came to the right conclusion.

 
