Earth's climate system is definitely affected by external forcing factors, most obviously radiant energy from the Sun. As we have seen, physicists suspect that complex interactions involving cosmic radiation, the solar system's path through the galaxy, and the Sun's magnetic field have a significant impact on low-level cloud formation. This, in turn, affects Earth's albedo, one of the most sensitive regulators of climate. Models that cannot account for cloud cover cannot begin to address the effects of this external forcing factor. Yet many climate scientists reject the possibility that these factors are important and refuse to consider them.
The concept of a safety factor comes from engineering, where more robust systems are built by designing in margins of error—“over-engineering” them. Bridges are built to support more than their expected load, aircraft wings are designed to safely bend far beyond the range expected in normal flight, and scuba tanks are constructed so they won't burst even if they are filled to pressures higher than their rated level. But environmental systems are not designed; they are created by the interactions of natural forces.
In environmental modeling, the equivalent of a factor of safety is the “worst-case scenario” prediction. Unfortunately, “where relationships between variables are highly nonlinear, and a clear understanding of the controlling variables does not exist, as in many natural and environmental systems, the factor of safety concept is inapplicable.”430 As we have seen, climate models are highly nonlinear and many relationships among the controlling variables remain unknown.
Even addressing these factors may not be sufficient to ensure an accurate model. Dr. Haff has stated, “sources of uncertainty that are unimportant or that can be controlled at small scales and over short times become important in large-scale applications and over long time scales.”431 These factors, combined with the nonlinear nature of natural systems, which leads to emergent (unexpected) behavior, may mean climate systems cannot be modeled at all.
Computational Error
Even if the data used to feed a model were totally accurate, error would still arise because of the nature of computers themselves. Computers represent real numbers by approximations called floating-point numbers. In nature, there are no artificial restrictions on the values of quantities, but in a computer a value is represented by a limited number of digits. This causes two types of error: representational error and roundoff error.
Representational error can be readily demonstrated with a pocket calculator. The value 1/3 cannot be accurately represented by the finite number of digits available to a calculator. Entering a 1 and dividing by 3 will yield a result of 0.33333... to some number of digits. This is because there is no exact representation for 1/3 using the decimal, or base 10, number system. This problem is not unique to base 10; all number systems have values they cannot represent exactly.
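Computers face the same limitation in base 2, where a value as innocuous-looking as 0.1 has no exact representation. The short Perl fragment below is only an illustration written for this discussion; the exact digits printed depend on the machine, but the stored value of 0.1 is never exactly one tenth.

#!/usr/bin/perl
use strict;
use warnings;

# 0.1 has no exact representation in base 2, so the value the
# computer actually stores differs slightly from one tenth.
printf("0.1 is stored as %.20f\n", 0.1);
printf("1/3 is stored as %.20f\n", 1 / 3);

# The tiny representational errors show up in ordinary arithmetic.
print(0.1 + 0.2 == 0.3 ? "0.1 + 0.2 equals 0.3\n"
                       : "0.1 + 0.2 does not equal 0.3\n");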
The other type of error, roundoff, is familiar from daily life in the form of sales tax. If a locale has a 7% sales tax and a purchase totals $0.50, the added tax should be $0.035 but, since currency isn't available for half a cent, the tax added is only $0.03, dropping the lowest digit. The tax value was truncated, or rounded down, in order to fit the available number of digits. In computers, multiplication and division often result in more digits than can be maintained by the machine's limited representation. Arithmetic operations can be ordered to minimize the error introduced by roundoff, but some error will still creep into the calculations.
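The sales tax example can be mimicked in a few lines of Perl. The listing below is merely a sketch written for this discussion, with arbitrary variable names; it simply truncates the computed tax to whole cents.

#!/usr/bin/perl
use strict;
use warnings;

my $price = 0.50;
my $tax   = $price * 0.07;           # 0.035 in exact arithmetic
my $cents = int($tax * 100) / 100;   # truncate to whole cents, as the cash register does

printf("exact tax:   %.4f\n", $tax);     # prints 0.0350
printf("tax charged: %.2f\n", $cents);   # prints 0.03 -- the half cent is dropped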
After the uncertainties present in data are combined with errors introduced by representation and roundoff, a third type of computational error arises—propagated error. What this means is that errors present in values are passed on through calculations, propagated into the resulting values. In this book, values have often been presented as a number followed by another number prefaced with the symbol “±” which is read “plus or minus.” This notation is used to express uncertainty or error present in a value: 10±2 represents a range of values from 8 to 12, centered on a value of 10.
When numbers containing error ranges are used in calculations, rules exist that describe how the error they contain is passed along. The simplest rule is for addition: the two error ranges are added to yield the error range for the result. For example, adding 10±2 to 8±3 gives a result of 18±5. There are other, more complicated rules for multiplication and division, but the concept is the same. When dealing with complicated equations and functions, like sines and cosines, how error propagates is determined using partial derivatives. The mathematics of error propagation rapidly becomes very complex and, as seen in the MD example related above, errors can build up until the model is overwhelmed.
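As a sketch of the idea, the two small Perl functions below apply the worst-case rules just described: the ranges add for a sum and, to first order, the relative ranges add for a product. They are illustrations written for this discussion, not routines taken from any climate model.

#!/usr/bin/perl
use strict;
use warnings;

# Worst-case propagation for a sum: the error ranges simply add.
sub add_with_error {
    my ($a, $da, $b, $db) = @_;
    return ($a + $b, $da + $db);
}

# First-order worst-case propagation for a product: the relative error ranges add.
sub mul_with_error {
    my ($a, $da, $b, $db) = @_;
    my $product = $a * $b;
    return ($product, abs($product) * ($da / abs($a) + $db / abs($b)));
}

my ($sum,  $dsum)  = add_with_error(10, 2, 8, 3);
my ($prod, $dprod) = mul_with_error(10, 2, 8, 3);

printf("(10 +/- 2) + (8 +/- 3) = %g +/- %g\n", $sum,  $dsum);    # 18 +/- 5
printf("(10 +/- 2) * (8 +/- 3) = %g +/- %g\n", $prod, $dprod);   # 80 +/- 46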
To demonstrate how these sources of computational error can overwhelm the results of even simple calculations, consider the following example. This example cannot be explained without using equations, but understanding them is not essential—only understanding the final result is important. Having said that, let n be a positive integer and let x = 1/n. Instructing a computer to compute x=(n+1)*x-1 should not change the value of x, since (n+1)*(1/n)-1 works out to exactly 1/n. Source code for this program, written in the Perl language, is given below.
This program generates a table of numbers, with n varying from 1 to 10. The value of x is calculated for each value of n. Then the equation x=(n+1)*x-1 is computed first ten and then thirty times. Each time the equation is computed, the newly calculated value of x replaces the old value, so that the new value of x from each step is used as the input value of x for the next step. This process is called iteration. If the computer's internal representation of x were exact, the values produced by the repeated computation would not change.
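The listing below is a reconstruction of the program from this description. The original listing followed the same steps, though details such as variable names and output formatting may have differed.

#!/usr/bin/perl
use strict;
use warnings;

printf("%3s %12s %16s %22s\n", "n", "x = 1/n", "after 10", "after 30");

for my $n (1 .. 10) {
    my $x = 1 / $n;                  # initial value of x

    my $x10 = $x;
    for (1 .. 10) {                  # ten iterations of x = (n+1)*x - 1
        $x10 = ($n + 1) * $x10 - 1;
    }

    my $x30 = $x;
    for (1 .. 30) {                  # thirty iterations of the same equation
        $x30 = ($n + 1) * $x30 - 1;
    }

    printf("%3d %12.8f %16.8g %22.8g\n", $n, $x, $x10, $x30);
}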
The output from running the example program is shown in Text 2. The value of n is given in the first column and the initial value of x is in the second column. The values of x after 10 and 30 iterations are given in columns three and four, respectively. Notice that for some values of n the computed values of x do remain unchanged, but for others the results diverge—slightly after 10 iterations, then wildly after 30.
The reason that the x values for 1, 2, 4, and 8 do not diverge is that computers use binary arithmetic, representing numbers in base 2. The rows of results that did not diverge began with values of n that are integer powers of 2. The reciprocals of these numbers are represented inside the computer exactly, and the iterative computation does not change the resulting values of x. The other numbers cannot be represented exactly, so the computation blows up. The same thing happens in any computer program that performs iterative computations—like climate simulation models.
If all of these complicating factors—data errors, incomplete and erroneous models, non-linear model response, roundoff and representational error, and error propagation—are not daunting enough, different computer hardware can introduce different amounts of error for different arithmetic operations. This means that running a model on a Sun computer can yield different results than running it on an Intel-based computer or an SGI computer.432 To say the least, computer modeling is not an exact science.
Modeling Earth's Climate
Earth's climate is far too complex for the human intellect to fully encompass—no set of equations can capture it completely. A normal procedure in science is to isolate parts of a more complex system in hopes of understanding the smaller, simpler pieces. Computer scientists call this type of approach “divide and conquer,” breaking down a problem into sub-problems until they become simple enough to be solved directly. The fundamental assumption of this problem-solving approach is that the parts, when reassembled into a whole, will accurately reflect the original problem. This is often an invalid assumption—it certainly is when dealing with Earth's climate.
Modeling inaccuracy occurs because many of the processes that influence climate also influence each other. We saw examples of positive feedback mechanisms in earlier chapters:
Increasing CO2 levels cause increasing temperatures, resulting in the release of more CO2.
A colder climate increases the area covered with highly reflective ice and snow, causing Earth to cool, causing more ice and snow.
Melting sea ice, which is highly reflective, exposes more open ocean, which has a much lower albedo, causing greater warming.
Illustration 130: Flow diagram of the climate system illustrating the massive and complicated physical processes and multiple feedback loops. Source: Robock, 1985.
But we have also seen examples of negative feedback, where warming creates more water vapor, which increases snowfall in colder regions, leading to a cooler climate. Sometimes the same natural response can produce contradictory effects. For example, expanding forests absorb CO2, cooling the planet, but they also decrease Earth's albedo, leading to greater warming. So, are forests good or bad with respect to global warming? Illustration 130 shows some of the feedback relationships in Earth's climate system.
A simpler view of the main climate factors is shown as a block diagram in Illustration 131. The actions of feedback linkages are represented by the heavy black arrows. Even this simplified view demonstrates how complex and confusing Earth's climate mechanisms are.
Illustration 131: Interacting loops of Earth's climate system. After Pittock.
We have also seen that some mechanisms do not respond gradually, but are instead non-linear. The great ocean conveyor belt, the global circulation of water that redistributes heat around the globe, is known to have been disrupted by glacial melt water during the early years of the Holocene warming. The results of these disruptions were several brief, sharp returns to colder conditions. Scientists fear this might happen again if the climate continues to warm, but they have no way of predicting such an event.
Many climate mechanisms have sensitive dependence on initial conditions, the so-called “butterfly effect.” Over time, systems that show this type of response become unpredictable. This concept is often illustrated by a butterfly flapping its wings in one area of the world, causing a storm to occur much later, in another part of the world. Such nonlinear systems are central to chaos theory, exhibiting fantastically complex and unpredictable behavior. According to the IPCC:
“The physical climate system is highly complex, has aspects that are inherently chaotic, and involves non-linear feedbacks operating on a wide variety of time scales. Our empirical knowledge of how these operate ranges from being good on decadal time scales, moderate over time scales of 100-1000 years, to being quite limited at 10000 years and longer.”433
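Sensitive dependence is easy to demonstrate with a toy calculation. The short Perl program below iterates the logistic map, a standard textbook example of chaos that has nothing to do with any particular climate model; two starting values that differ by only a billionth soon follow completely different paths.

#!/usr/bin/perl
use strict;
use warnings;

# The logistic map x -> r*x*(1 - x) with r = 4 is a standard chaotic system.
my $r = 4.0;
my $a = 0.400000000;    # first starting value
my $b = 0.400000001;    # second starting value, differing by a billionth

for my $step (1 .. 50) {
    $a = $r * $a * (1 - $a);
    $b = $r * $b * (1 - $b);
    printf("step %2d: %.9f %.9f difference %.2e\n", $step, $a, $b, abs($a - $b))
        if $step % 10 == 0;
}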
From the shaky foundation of incomplete theory, with input and defining data from an uncertain and error-laden climate history, climate scientists have boldly constructed GCM computer programs, confidently basing their public pronouncements on the models' dubious predictions.
GISS modelE
The GCM used to provide predictions for the IPCC's latest report is called GISS modelE, specifically model III, a version of modelE frozen in 2004. ModelE is actually a composite of an atmospheric model and four different ocean circulation models, along with other factors. ModelE is driven by ten measured or estimated climate forcings. A large team of NASA investigators, led by Hansen and Ruedy, produced an in-depth, eye-opening study of modelE's strengths and weaknesses in their 2006 paper titled “Climate Simulations for 1880-2003 with GISS Model E.”
Illustration 132: Growth of GHG, the primary forcing for modelE. Source: Hansen et al., 2007.
As Jeff Kiehl noted, “models produce a lot more information than we have observations for, and this is not a satisfactory situation.” This has not stopped the IPCC, and others, from using models to produce a deluge of data with which to inundate the public and the media. Some of the illustrations in the Hansen report are revealing. For instance, Figure 1 from the report shows the relative strengths and growth in the primary forcing used to drive modelE, well-mixed greenhouse gases (Illustration 132). It should come as no surprise that CO2 concentration dominates.
When all the driving forcings are viewed together, the result is still clear—CO2 is the main driving factor of modelE. This is shown in Figure 5 from the report (Illustration 133). Notice that there is a conspicuous lack of forcing variability during the 1940s, a period when global temperature hit a peak and then went into decline. “The model's fit with peak warmth near 1940 depends in part on unforced temperature fluctuations,” states the report, adding “It may be fruitless to search for an external forcing to produce peak warmth around 1940.” This is an admission that modelE is incapable of accurately reproducing the observed temperature fluctuations of the past 120 years. Yet we are asked to accept that the IPCC's projections for the next 100 years, using modelE, are accurate.
Illustration 133: Net forcings used to drive modelE. Source: Hansen et al., 2007.
They also report that the “greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds.” This statement reinforces the concerns of others regarding the inability of current models to accurately represent cloud cover and cloud formation mechanisms due to weak theory and even weaker experimental data. Recall that the IPCC's report listed understanding of aerosols' role in climate regulation at only 10% (Illustration 5, page 24).
Illustration 134: Results from modelE, taken from Hansen et al., 2007, Figure 6.
Graphs of output from modelE illustrate the model's problem areas. This is shown in Figure 6 from the Hansen report, reproduced in Illustration 134. Atmospheric temperatures, shown in the three graphs on the left, are mostly in agreement with historical measurements but become less accurate in the most recent two decades. It is interesting that these values become more inaccurate just when CO2 levels start rising. Surface temperature, shown in the upper right-hand graph, fares significantly worse. The model does not accurately reproduce the variability of recorded surface temperature, and there are decade-spanning periods where it consistently underestimates or overestimates surface temperatures. But worst of all are the predictions for ocean ice coverage, which overestimate sea ice for one hundred years and then underestimate it for the past twenty years. Still, the true accuracy of the model is hard to discern from the data plots.
A better idea of the accuracy of modelE's backcast predictions for the past 120 years can be found in the Hansen report's written narrative. This fifty-page report contains a frank assessment of the accuracy of the IPCC's favorite GCM. Here are some of the major problems cited in the study:
The atmospheric model produces polar temperatures as much as 5-10°C too cold in the lower stratosphere during winter and the model produces sudden stratospheric warming at only one quarter of the observed frequency.
A 25% regional deficiency in summer stratus cloud cover off the west coasts of the continents—resulting in excessive absorption of solar radiation of as much as 50 W/m2.
A net deficiency in solar radiation absorbed over tropical regions of 20 W/m2.
Sea level air pressure is too high by 4-8 hPa434 during winter in the Arctic and 2-4 hPa too low year-round in the tropics.
A 20% shortfall in rain over the Amazon basin.
A 25% deficiency in summer cloud cover in the western United States and central Asia, causing summer temperatures to be ~5°C too high in these regions.
Global sea ice cover is deemed realistic but the distribution of sea ice is not, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere.
Too much sea ice remains in the Arctic in summer, affecting climate feedback rates.
Unrealistically weak tropical El Niño-like variability.
Despite all these shortcomings, modelE “fares about as well as the typical global model in the verisimilitude of its climatology” in IPCC model comparisons.435 We have already mentioned the study of 108 different models that found predicted temperature increases of 0.29-15.6°F (0.16-8.7°C),436 so perhaps modelE does fare about as well as other models.
In all, Hansen's report is a blunt admission that, after 30 years of development and more than $50 billion in funded research, current climate models are just not up to the job. In fact, if the predictions of the Third Assessment Report are compared with the predictions in Assessment Report 4, the IPCC's efforts are moving in reverse—the new predictions in AR4 have a wider range of uncertainty than those in the TAR (1.1-6.4°C vs 1.5-5.8°C).
Modeling Invalidated
In a famous paper entitled "Ground-water models cannot be validated," Leonard F. Konikow and John D. Bredehoeft state that some natural systems cannot be accurately modeled. They write, "testing the predictive capability of a model is best characterized as calibration or history matching; it is only a limited demonstration of the reliability of the model."437 If testing a model cannot prove its ability to provide accurate predictions, then there is no way climate modeling, as a technique, can be trusted. They summarize their findings by saying the emphasis in trying to understand natural processes should shift "away from building false confidence into model predictions."
Konikow and Bredehoeft are not members of the scientific fringe. Their paper won the Meinzer prize from the Geological Society of America, the highest honor in the field of hydrogeology in the US.438 They are not the only modeling experts to warn about the fundamental problems of modeling complex natural processes.
Hendrik Tennekes, Professor of Aeronautical Engineering at Pennsylvania State University and the former director of research at the Royal Dutch Meteorological Institute, was a pioneer in multi-model weather forecasting. Though a strong proponent of scientific modeling, he has challenged the use of unproven scientific models to predict the future course of global warming. Pointing to the complexity of Earth's climate and the incomplete nature of climate models, Tennekes has said, “if I try to look at climate modeling from this perspective, I’m almost fainting.”439