The Future of Everything: The Science of Prediction

by David Orrell


  IRRATIONAL NUMBERS

  One of the more popular theories was that the doubling time of the exponential growth changed, so it was fastest at short times, then progressively slowed. A couple of months later, a workshop on atmospheric predictability was held at the Naval Research Laboratory in Monterey, California. During one session, the recalculated error plot was put up to determine the doubling time. As I watched, several representatives from the top weather centres of the United States and Europe formed a “consensus” that the doubling time must be about a day or two, because it is faster than a day at the beginning and more than that at the end. I protested that the graph was not an exponential curve, so it did not have a doubling time. The errors grew with the square root of time. But it was like Hippasus telling the Pythagoreans, who believed only in rational numbers, about the existence of the square root of two. I was lucky they didn’t take me out to sea and throw me overboard, as the Pythagoreans were rumoured to have done. The published meeting report stated that though discussion was “very animated,” the conclusion was that “predictability limitations are not an artifact of the numerical model.”67 So according to the “consensus,” the model was effectively flawless.
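
  To see how square-root growth can masquerade as exponential growth with a changing doubling time, consider a minimal sketch in Python (the error amplitude and lead times are invented for illustration). If the error grows as E(t) = a√t, the doubling time that an exponential fit would report at lead time t works out to T_d(t) = 2t ln 2: shorter than a day at short leads and longer than a day at long leads, which is exactly the pattern the “consensus” read as a changing exponential rate.

```python
import numpy as np

# Sketch with invented numbers: if forecast error grows as
# E(t) = a * sqrt(t), then d(ln E)/dt = 1/(2t), so the "local
# doubling time" an exponential fit would report at lead time t is
#   T_d(t) = ln(2) / (d ln E / dt) = 2 * t * ln(2).
a = 0.5                                    # arbitrary error amplitude
t = np.array([0.25, 0.5, 1.0, 2.0, 5.0])   # lead time in days

E = a * np.sqrt(t)        # square-root error growth
T_d = 2 * t * np.log(2)   # apparent doubling time at each lead

for ti, Ei, Tdi in zip(t, E, T_d):
    print(f"lead {ti:4.2f} d   error {Ei:5.3f}   "
          f"apparent doubling time {Tdi:5.2f} d")
```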

  This all reminded me of the Greeks’ attempts to replicate the motion of the planets by incorporating more and more epicycles into their model. The “consensus” opinion could perhaps explain why the error curves didn’t look exponential, but it would have been a coincidence that they varied so closely with the square root of time. According to Ockham’s Razor, the simplest explanation is usually the best, and in this case, the simplest was that the error was caused by the model.

  The more closely we look at the assumption that chaos lies behind forecast error, the weaker it seems. Even the techniques used to measure the doubling time of errors appear to have been chosen to produce a fast result. The usual method is to make small perturbations to the initial condition and see how the error grows (as in figure 4.5 for the Lorenz system). Experiments with weather models, however, were performed using a technique known as lagged forecasts, which appeared to give doubling times as fast as a day.68 It turned out that the rapid growth was a result of the special type of perturbation used, which was large in all variables except those being measured. As the error propagated from the other variables, it appeared to grow rapidly. But when the experiment was repeated in a global metric, which took all errors into account, the rapid growth disappeared.69
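
  The effect of the metric can be reproduced in a minimal sketch, using the Lorenz system as a stand-in for a weather model (this illustrates the metric effect only, not the lagged-forecast procedure itself, and the perturbation size is arbitrary). The perturbation is placed in every variable except the one being measured, so the error in the measured variable starts near zero and appears to grow explosively, while the full-state error does not.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz system as a toy "weather model" (illustrative only)
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

s0 = np.array([1.0, 1.0, 20.0])
pert = np.array([0.0, 1.0, 1.0])   # large in y and z, zero in x (the measured variable)

t_eval = np.linspace(0.0, 2.0, 201)
ref = solve_ivp(lorenz, (0.0, 2.0), s0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
per = solve_ivp(lorenz, (0.0, 2.0), s0 + pert, t_eval=t_eval, rtol=1e-9, atol=1e-9)

err = per.y - ref.y
local_err = np.abs(err[0])                 # error in x alone: starts at zero
global_err = np.linalg.norm(err, axis=0)   # error over all variables at once

for i in (0, 25, 50, 100, 200):
    print(f"t={t_eval[i]:4.2f}  x-error={local_err[i]:8.4f}  "
          f"full-state error={global_err[i]:8.4f}")
```

  The x-error shoots up from zero as error leaks in from the perturbed variables; measured globally, the same experiment shows no such burst.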

  The way the analysis is typically prepared is also questionable. In 1941, the Nazi meteorologist Franz Baur told Hitler that the upcoming winter in the Soviet Union would be mild or normal. Hitler decided to launch an invasion, but his unprepared army was frozen in its tracks by a winter of almost unprecedented severity. When informed of the conditions, Baur said, “The observations must be wrong.”70 A similarly confident attitude is adopted in the analysis procedure, which adjusts atmospheric observations to better fit the model predictions from a few hours before.71 In other words, the model is favoured over reality. The practice is defensible to a point, because it helps smooth the observations. However, errors are then measured against the analysis rather than the original observations. Because the analysis is made to be more like the model, this reduces published forecast errors, thereby demonstrating the accuracy of the model. The GCM has become the embodiment of the Pythagorean ideal of an ordered, rational universe, and weather research centres often seem concerned more with protecting it than with making accurate predictions of the future.
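
  A toy calculation shows how verifying against the analysis flatters the forecast (the weights and error sizes below are invented, and real data assimilation weights observations far more carefully). Because the analysis is a blend of the observations and the model’s own short-range forecast, the forecast-minus-analysis error is smaller than the forecast-minus-observation error by construction.

```python
import numpy as np

# Toy verification example with invented numbers (not a real
# assimilation scheme): the "analysis" nudges observations toward
# the model forecast, which shrinks the error score by construction.
rng = np.random.default_rng(0)
n = 100_000

truth = rng.normal(0.0, 1.0, n)                # true state
obs = truth + rng.normal(0.0, 0.3, n)          # noisy observations
forecast = truth + rng.normal(0.5, 0.5, n)     # biased, noisy model forecast

w = 0.6                                        # weight given to the model
analysis = w * forecast + (1.0 - w) * obs      # observations adjusted toward the model

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("forecast error vs observations:", round(rmse(forecast, obs), 3))
print("forecast error vs analysis:    ", round(rmse(forecast, analysis), 3))
# The second score is smaller by exactly the factor (1 - w).
```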

  BAD WEATHER

  In January 2003, the American Meteorological Society (AMS) held its annual meeting at the downtown convention centre in Seattle, where I then lived. Thousands of meteorologists descended on the city from research centres all over the world, clutching laptops, briefcases, and tubes containing poster presentations. Scanning the local newspaper, I saw the meeting announced with a quote, supplied by the AMS media advisory, which claimed that “numerical weather prediction is widely regarded to be among the foremost scientific accomplishments of the 20th century.” On the back page was the weather forecast. Typically for Seattle in January, it appeared to consist of random selections from the words “rain,” “drizzle,” “cloudy spells,” and “sun breaks.”

  To most people, our less-than-awesome ability to predict the weather doesn’t quite rank with, say, putting a man on the moon or discovering DNA. It is true that enormous progress has been made in understanding how the weather works, which is laudable, but despite massive increases in research effort and computer power, this progress has not carried over into accurate predictions. If anything, weather forecasting has been one of the great underachievements in science; there’s a huge gulf between what seemed attainable early in the last century and what is possible today. This reflects not on the quality of the scientists—many brilliant people have worked or are working in the area—but on the complexity of the problem.

  It is nonetheless remarkable that even though GCMs had been around for fifty years, the topic of model error was still viewed as “new, uncharted territory.” When I first performed a search to see what else had been written on the subject, I found that only a handful of research papers contained the words “model error” in the abstract, and none of them had made a serious, concerted effort to measure it.72 So why was there so little work being done in this area, and why were meteorologists so eager to adopt chaos as the cause of forecast error?

  Part of the reason is institutional. Aristotle once said that man is a political animal, and scientists are no exception. As the engineer Bart Kosko puts it: “Career science, like career politics, depends as much on career maneuvering, posturing, and politics as it depends on research and the pursuit of truth. . . . Politics lies behind literature citations and omissions, academic promotions, government appointments, contract and grant awards . . . and most of all, where the political currents focus into a laserlike beam, in the peer-review process of technical journal articles—when the office door closes and the lone anonymous scientist reads and ranks his competitors’ work.”73 It is hard to publish papers that demonstrate that models don’t work, or that expensive strategies are poorly conceived—especially when the people reviewing the manuscripts are the ones who developed the models and strategies in the first place. The peer-review process has many benefits, but it can easily slide into a kind of self-censoring avoidance mechanism.

  Another problem is that scientists, like anyone else, become attached to fashions and fads. In the 1980s, as Stephen Wolfram observed, signs of chaos were detected not just in weather forecasting but “in all sorts of mechanical, electrical, fluid and other systems, and there emerged a widespread conviction that such chaos must be the source of all important randomness in nature.”74 But while some mathematical equations show sensitivity to initial conditions, “none of those typically investigated have any close connection to realistic descriptions of fluid flow.”75

  Perhaps a deeper problem, though, is that investigating model error—calculating the drift, finding shadow orbits—is like exploring the shadow side of science. Instead of saying what we can do, trumpeting our superior intelligence over brute nature, it shows what we cannot do. It demonstrates ignorance instead of knowledge, mystery instead of clarity, darkness instead of light. By embracing chaos as the cause of forecast error, meteorologists could maintain the illusion that models were essentially perfect.76 The same temptation exists in other branches of science. As we’ll see in Chapter 6, economists managed a similar trick in the 1960s, when they blamed unpredictability on random events external to the economy.

  Of course, the fact that model error is large now does not imply that the models are in some obvious way flawed, or that they will not continue to improve. The models do an excellent job of capturing the atmospheric dynamics, to the extent that this is achievable using equations. Further advances and adjustments, coupled with better methods to measure model error, will yield continued improvements. But increasing the resolution of the models offers diminishing returns.77 GCMs will be better in a hundred years’ time, but there is no reason to anticipate the arrival of perfect one-week weather forecasts.

  Meteorology’s greatest contribution has probably been to provide warnings of storms or other short-term phenomena, or more recently of fragilities in the climate system that we may inadvertently be affecting. Atmospheric scientists have often led the way in pointing out the dangers of human impacts on the environment. We would have had no idea about the decline in the ozone layer or the rise in atmospheric carbon dioxide if no one had gone out and measured them, little idea of their importance to the planet if no one had modelled it. For such purposes, simple models that do not attempt to capture the full system in all its glory are often effective. (The danger of global warming, for example, was first identified by the Swedish physical chemist Svante Arrhenius over a hundred years ago.) The developing area of complex systems research may not directly improve weather predictions, but it will build on our understanding by offering new conceptual approaches.

  SUN SCREEN

  Human effects on the environment have a way of coming back at us. A famous example was the discovery that artificially synthesized chemical compounds known as chlorofluorocarbons, or CFCs, were destroying the ozone layer. CFCs were introduced in the 1930s as non-toxic cooling agents for refrigerators and air-conditioning units. In the middle part of the twentieth century, air-conditioning grew enormously in popularity. This increasing demand led to the mass-production of CFCs, and their slow release into the environment.

  In the 1970s, the British scientist James Lovelock used his new invention, the electron-capture gas chromatograph, to detect the presence of aerosols in the atmosphere. He found that CFCs were present at surprisingly high concentrations. The chemists Sherwood Rowland and Mario Molina came to the conclusion that CFCs could degrade ozone (a form of oxygen) in the upper atmosphere. This effect was alarming because the ozone layer, as it is known, acts as a protective screen against DNA-destroying ultraviolet radiation. The ozone had accumulated in the atmosphere as a by-product of billions of years of photosynthesis by ocean-living organisms, but CFCs were eroding it in mere decades. Chlorofluorocarbons are also potent greenhouse gases.

  Rowland and Molina were at first ignored by the companies responsible for making the compounds, but their concerns were borne out by the discovery of an “ozone hole” above Antarctica. Helped along by a degree of media excitement, this led to the banning of CFCs. Even if atmospheric modelling can’t predict the future, it can sometimes help avert disaster.

  We can summarize with the following points, many of which also apply in modified form to the other systems studied in this book:

  The ocean/atmosphere system is complex and based on local interactions. Structure exists over all scales, from microscopic to global.

  Features such as clouds or storms can be viewed as emergent properties. Their behaviour cannot be computed from first principles.

  GCMs use parameterizations to approximate these properties. The difference between the equations and reality results in model errors that limit prediction (a toy sketch of this effect follows the list).

  Increasing model complexity does not necessarily lead to error reduction. More parameters need to be approximated.

  As a result, improvements in prediction accuracy, especially for key metrics such as precipitation, have lagged far behind improvements in computers, observation systems, and scientific effort.

  It is still possible to make general warnings. These can often be made with simple models that do not attempt a detailed simulation of the entire climate system.
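
  As a toy illustration of the parameterization point above (the equations are invented and bear no resemblance to a real GCM): when an unresolved fast process enters the dynamics nonlinearly, replacing it with its average value biases the forcing, and the model drifts away from the truth even when started from a perfect initial condition.

```python
import numpy as np

# Invented toy system, not a real GCM: the fast process enters
# through a squared term, so mean(fast**2) != mean(fast)**2 and the
# parameterized model is systematically biased.
dt, steps = 0.01, 1501

x_true = x_model = 0.0
for n in range(steps):
    t = n * dt
    fast = 0.5 + 0.5 * np.sin(25.0 * t)           # unresolved fast variable
    x_true += dt * (-0.5 * x_true + fast ** 2)    # true nonlinear forcing
    x_model += dt * (-0.5 * x_model + 0.5 ** 2)   # parameterized with mean(fast)
    if n % 300 == 0:
        print(f"t={t:5.2f}  true={x_true:5.3f}  model={x_model:5.3f}  "
              f"drift={x_true - x_model:6.3f}")
```

  The drift settles at a constant offset: no amount of tuning the initial condition removes it, because the error lies in the model equations themselves.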

  If all that was at stake was the short-term weather, then the longer-term accuracy of weather models would be of no great concern, and the cause of error would be academic. The problem, however, is that if forecasts for next week are wrong because of seemingly unresolvable model errors, we have no reason to think that we’ll be able to predict, decades in advance, the even more complex effects of global warming on the biosphere.

  This assertion may appear to be completely negative. Perhaps paradoxically, though, I believe the opposite to be true. The realization that the earth system is inherently unpredictable—coupled with a deeper understanding of and respect for complex systems—may turn out to be highly liberating.

  In the next chapter, we look at a more personal kind of prediction: that of the destiny that is written in our genes. Just as weather prediction is “only” fluid flow, the reading of genetic information is “only” biochemistry. Again, though, we will see that something seems to get lost in translation.

  5 IT’S IN THE GENES

  PREDICTING OUR HEALTH

  JODY: Scorpion wants to cross a river, but he can’t swim. Goes to the frog, who can, and asks for a ride. Frog says, “If I give you a ride on my back, you’ll go and sting me.” Scorpion replies, “It would not be in my interest to sting you since as I’ll be on your back we both would drown.” Frog thinks about this logic for a while and accepts the deal. Takes the scorpion on his back. Braves the waters. Halfway over feels a burning spear in his side and realizes the scorpion has stung him after all. And as they both sink beneath the waves the frog cries out, “Why did you sting me, Mr. Scorpion, for now we both will drown?” Scorpion replies, “I can’t help it, it’s in my nature.”

  FERGUS: So what’s that supposed to mean?

  JODY: Means what it says. The scorpion does what is in his nature.

  —Neil Jordan, The Crying Game

  . . . CGCGGTGCTCACGCCAGTAATCCCAACACTT . . .

  —Sequence of human DNA (base pairs 100,000–100,030 of chromosome 16), from the Human Genome Project1

  GOOD BREEDING

  To prophets of different persuasions, anything from tea leaves to animal entrails to segments of the Bible can form a kind of text that can be probed for glimpses of the future. In Kepler’s day, a child’s destiny was thought to be determined—or at least strongly influenced—by the positions of the sun, the moon, the planets, and the stars at the time of birth. New parents could pay an astrologer (such as Kepler or Tycho) for a detailed horoscope and get the weather for the year thrown in as well. Now, we have the option of consulting medical experts, who scan stretches of DNA instead of the heavens for hidden portents. With procedures such as amniocentesis, an expectant mother can have her unborn fetus tested for a range of genetic diseases. And with the completion of the Human Genome Project, there is the promise that DNA will give up information about a broad range of physical and mental traits. How strong will the child be? How smart? How long-lived? In this chapter, we look at genetic prediction: how scientists use biochemistry and statistics to determine the effect of genes and inheritance on an individual’s health.

  Humans have always been fascinated by the question of how traits are inherited. Aristotle was of the opinion that semen contained particles of blood, known as pangenes, which were passed on from generation to generation. In The Eumenides, the playwright Aeschylus has Apollo say, “The mother is no parent of that which is called her child, but only nurse of the new-planted seed that grows. The parent is he who mounts.”2 A popular theory in the seventeenth century was that the sperm or the egg contained a minuscule homunculus—a tiny version of a human being that grew into an embryo. This theory was stunningly confirmed in 1677, when the Dutch naturalist Antonie van Leeuwenhoek observed sperm under the microscope and saw what he believed was a tiny human being. (Microscopes have improved since then, though we still tend to see what we are looking for.)

  The story of modern genetic prediction begins with two men, both born in 1822: the Victorian polymath Sir Francis Galton, and the Austrian monk Gregor Mendel. There are essentially two ways to make scientific predictions. The first is to look for statistical patterns in past data and predict that they will continue. This was the technique used by Sir Gilbert Walker to detect the El Niño pattern (though there, prediction proved more difficult) and is especially popular in finance. The scientist may then work backwards to find a causal explanation, but this is at the risk of imposing a story that seems plausible but is oversimplified or wrong. The second method is to use mathematical models derived from physical principles. Mendel’s work was based on studying the simplest traits; although little known or appreciated in its time—Galton learned of it late and paid it no great attention—it eventually led to a kind of physics for life. Galton, who was concerned with complex traits such as human intelligence, was the pioneer of the data-driven approach, and his work is an excellent example of both its uses and its risks.

  Among Galton’s many contributions to society, which include the newspaper weather map, fingerprinting, and a description of how to make the perfect cup of tea,3 we can count the coining of the phrase “nature and nurture.” He used the phrase in the title of his 1874 book, English Men of Science: Their Nature and Nurture, and might have been inspired by Shakespeare’s Tempest, in which Prospero describes Caliban as “a devil, a born Devil, on whose nature nurture can never stick.”

  Galton’s thesis was that human ability was a function of nature rather than nurture, that it was in the blood. In a two-year experiment in collaboration with his cousin Charles Darwin, he tried to prove this by injecting the blood of lop-eared rabbits into grey rabbits, and vice versa, to see how it affected their progeny.4 (It didn’t.) He did argue that exceptional ability could be inherited by the usual means, however, and demonstrated it by showing that the children of particularly eminent people were more likely to themselves be eminent. (Here, eminence was measured by mention in a biographical handbook, and by whether or not someone’s obituary appeared in the London Times.5)

 
