Modeling of Atmospheric Chemistry


by Guy P Brasseur


  dCi/dt = Pi(C) – Li(C)   (1.2)

  where d/dt = ∂/∂t + v•∇ is the total derivative. From a mathematical standpoint, the Lagrangian approach reduces the continuity equation to a 1-D ordinary differential equation applied to points (0-D packets of mass) moving with the flow. The trajectory of each point still needs to be computed. The Eulerian approach is generally preferred in 3-D models because it guarantees a well-defined concentration field over the whole domain. In addition, Eulerian models deal better with nonlinear chemistry and mass conservation. On the other hand, Lagrangian models often have lower numerical transport errors and are better suited for tracking the transport of pollution plumes, as a large number of points can be released at the location of the pollution source. They are also generally the better choice for describing the source influence function contributing to observations made at a particular location (receptor-oriented modeling). In that case, a large number of points can be released at the location of the observations and transported backward in time in the Lagrangian framework.

  Figure 1.8 Global atmospheric distribution of trace species concentrations computed using Eulerian and Lagrangian model representations. Concentrations are represented by colors (high values in blue, low values in orange). The Eulerian framework uses a fixed computational grid (b) while the Lagrangian framework uses an ensemble of points moving with the airflow (c). The figure also shows portraits of the Swiss mathematician and physicist Leonhard Paul Euler (1707–1783, a) and of the French mathematician Joseph Louis, Comte de Lagrange (1736–1813, d).

  We gave in Section 1.6 a brief history of the growing complexity of atmospheric chemistry models leading to the current generation of global 3-D models. At the other end of the complexity spectrum, 0-D models remain attractive as simple tools for improving our understanding of processes. These models solve the continuity equation as dCi/dt = Pi(C) – Li(C), without consideration of spatial dimensions. They are called box models and are often appropriate to compute chemical steady-state concentrations of short-lived species, for which the effect of transport is negligible, or to compute the global budgets of long-lived chemical species for which uniform mixing within the domain can be assumed. Other simple models frequently used in atmospheric research include Gaussian plume models, which simulate the fate of chemicals emitted from a point source and their mixing with the surrounding background, and 1-D models, which simulate vertical mixing of chemicals in the atmospheric boundary layer (lowest few kilometers) assuming horizontal uniformity. Two-dimensional models (latitude–altitude) are also still used for stratospheric applications where longitudinal concentration gradients are generally small.
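  To make the box-model formulation concrete, here is a minimal sketch in Python that integrates dCi/dt = Pi(C) – Li(C) for a single species with a constant production rate and a first-order loss. The rate values are hypothetical placeholders, not real kinetics.

```python
# Minimal 0-D box model: integrate dC/dt = P - k*C for one species.
# P and k below are hypothetical placeholders, not real kinetics.
from scipy.integrate import solve_ivp

P = 1.0e5   # production rate, molecules cm^-3 s^-1 (assumed)
k = 1.0e-4  # first-order loss rate constant, s^-1 (assumed)

def dCdt(t, C):
    # Continuity equation without transport: production minus loss
    return P - k * C

# Integrate for five days starting from a clean atmosphere (C = 0)
sol = solve_ivp(dCdt, (0.0, 5 * 86400.0), [0.0], max_step=3600.0)

print(f"C after 5 days : {sol.y[0, -1]:.3e} molecules cm^-3")
print(f"steady state   : {P / k:.3e} molecules cm^-3")  # where P = L(C)
```

  After a few chemical lifetimes 1/k (about three hours here), the solution settles at the steady state C = P/k, the balance that box models exploit for short-lived species.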

  1.8 Models as Components of Observing Systems

  The usefulness of a model is often evaluated by its ability to reproduce observations, but this can be misleading for two reasons. First, the observations themselves have errors, and models can in fact help to identify bad observations by demonstrating their inconsistency with independent knowledge. Second, the observations sample only a small domain of the space simulated by the model and may not be particularly relevant for testing the model predictions of interest. It is important to evaluate the model in the context of the problem to which it is applied. It is also important to establish whether the goal of the evaluation is to diagnose errors in the model physics or in the data used as input to the model.

  A broad class of modeling applications involves the use of observations to quantify the variables driving the system (state variables) when these variables cannot be directly observed. Here, the observations probe the manifestations of the system (observation variables) while the model physics provide a prediction of the observation variables as a function of the state variables. Inverting the model then yields a prediction of the state variables for given values of the observation variables. One can think of this as fitting the model to observations in order to infer values for the state variables. Because of errors in the model and in the observations, the best that can be achieved in this manner is an optimal estimate for the state variables. Such an analysis is called inverse modeling and requires careful consideration of errors in the model, errors in the observations, and compatibility of results with prior knowledge. Model and observations are inseparable partners in inverse modeling. Having a very precise model is useless if the observations are not precise; having very precise observations is useless if the model is not precise.
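  As an illustration, the sketch below implements the simplest textbook case of such an optimal estimate: a linear forward model y = Kx + ε with a Gaussian prior on the state x and Gaussian observation errors, solved by Bayesian least squares. All matrices and values are invented for illustration; real inverse problems are far larger and often nonlinear.

```python
# Optimal estimate for a linear forward model y = K x + error, with
# Gaussian prior (x_a, S_a) and observation error covariance S_o.
# All numbers are illustrative placeholders.
import numpy as np

K   = np.array([[1.0, 0.5],
                [0.2, 1.0],
                [0.8, 0.3]])        # forward-model Jacobian (3 obs, 2 states)
x_a = np.array([1.0, 2.0])          # prior state estimate
S_a = np.diag([0.25, 0.25])         # prior error covariance
S_o = np.diag([0.01, 0.01, 0.01])   # observation error covariance
y   = np.array([2.1, 2.3, 1.5])     # observations

# Minimize (y - Kx)^T S_o^-1 (y - Kx) + (x - x_a)^T S_a^-1 (x - x_a)
S_o_inv = np.linalg.inv(S_o)
S_post  = np.linalg.inv(K.T @ S_o_inv @ K + np.linalg.inv(S_a))
x_hat   = x_a + S_post @ K.T @ S_o_inv @ (y - K @ x_a)

print("optimal state estimate:", x_hat)
print("posterior covariance:\n", S_post)
```

  The posterior covariance quantifies how much the observations reduced the prior uncertainty, which is the error accounting the paragraph above calls for.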

  This partnership between models and observations leads to the concept of observing system as the concerted combination of models and observations to address targeted monitoring or scientific goals. This concept has gained momentum with the dramatic growth in atmospheric observations, in particular from satellites generating massive amounts of data that are complicated to interpret. A model provides a continuous field of concentrations that can serve as a common platform for examining the complementarity and consistency of observations taken from different instruments at different locations, on different schedules, and for different species. Formal integration of the model and observational data can take the form of chemical data assimilation to produce optimized fields of concentrations, or inverse modeling to optimize state variables that are not directly observed.

  Following on this concept of an observing system integrating observations and models, one may use models to compare the value of different observational data sets for addressing a particular problem, and to propose new observations that would be of particular value for that problem. Observing system simulation experiments (OSSEs) are now commonly conducted to quantify the benefit of a new source of observations (as from a proposed satellite) for addressing a quantifiable monitoring or scientific objective. Observing system simulation experiments use a CTM to generate a “true” synthetic atmosphere to be sampled by the ensemble of existing and proposed observing instruments. Pseudo-observations are made of that atmosphere along the instrument sampling paths and sampling schedules, with random error added following the instrument specifications. A second independent CTM is then used to invert these pseudo-observations and assess their value toward meeting the objective. A well-designed OSSE can tell us whether a proposed instrument will add significant information to the existing observing system.
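  The pseudo-observation step of an OSSE can be sketched as below: sample a synthetic "true" field along an instrument track and perturb the samples with random error following the instrument specification. The truth field, the sampling track, and the 5% error specification are all invented for illustration.

```python
# OSSE pseudo-observation sketch: sample a synthetic "truth" field along a
# hypothetical satellite track and add random instrument error.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic "true" atmosphere on a 46 x 72 latitude-longitude grid
lats = np.linspace(-90.0, 90.0, 46)
lons = np.linspace(-180.0, 180.0, 72)
truth = 50.0 + 30.0 * np.cos(np.deg2rad(lats))[:, None] * np.ones(lons.size)

# Hypothetical sampling track: 200 scenes at random locations
track_lat = rng.uniform(-80.0, 80.0, size=200)
track_lon = rng.uniform(-180.0, 180.0, size=200)
ilat = np.abs(lats[:, None] - track_lat).argmin(axis=0)   # nearest grid row
ilon = np.abs(lons[:, None] - track_lon).argmin(axis=0)   # nearest grid col

# Perturb with the (assumed) instrument spec: 5% relative error, 1-sigma
samples = truth[ilat, ilon]
pseudo_obs = samples * (1.0 + 0.05 * rng.standard_normal(samples.size))

print(f"truth mean {samples.mean():.2f}, pseudo-obs mean {pseudo_obs.mean():.2f}")
```

  A second, independent model would then ingest these pseudo-observations in place of real data to test whether the proposed instrument adds useful information.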

  1.9 High-Performance Computing

  Modeling of atmospheric chemistry is a grand computational challenge. It requires solution of a large system of coupled 4-D partial differential equations (1.1). Relevant temporal and spatial scales range over many orders of magnitude. Atmospheric chemistry modeling is a prime application of high-performance computing, which refers to mathematical or logical operations performed on supercomputers at the frontline of processing capacity, many orders of magnitude more powerful than desktop computers. The early supercomputers developed in the 1960s and 1970s by Seymour Cray spurred the development of weather and climate models. An important breakthrough in the 1980s was the development of vector processors able to run mathematical operations on multiple data elements simultaneously. The costs of these specialized vector platforms remained high, so in the 1990s the computer industry turned to high-performance machines built from mass-produced, less expensive components. This was accomplished by linking large numbers of scalar microprocessors in massively parallel architectures. Box 1.1 defines some of the relevant computing terminology.

  Box 1.1 High-Performance Computing Terminology

  Basic arithmetic operations are performed by processing elements or processors. Each processing element may have its own local memory, or can share a memory with other processors. The speed at which data transfer between memory and processor takes place is called the memory bandwidth. Computers with slow shared-memory bandwidth rely on small, fast-access local memories, called cache memory, that hold data temporarily. A computational node is a collection of processors with their shared memories. If a processor on one node can directly access a memory area on another node, the system is said to have shared memory. If messages have to be exchanged across the network to share data between nodes, the computer is said to have distributed memory. A cluster is a collection of nodes linked by a local high-speed network. When applications are performed in parallel, individual nodes are responsible only for a fraction of the calculations. Central to fully exploiting massively parallel architectures is the ability to divide and synchronize the computational burden effectively among the individual processors and nodes. The efficiency of the multi-node computation depends on the optimal use of the different processors, the memory bandwidth, and the bandwidth of the connection between nodes.

  The effective use of supercomputers requires advanced programming. Fortran remains the language of choice because Fortran compilers generate faster code than those available for most other languages. Programming for parallel architectures may use the Message Passing Interface (MPI) protocol for loosely connected clusters with distributed memory and/or the Open Multi-Processing (OpenMP) protocol for shared-memory nodes. Massively parallel architectures require the use of distributed memory. Grid computing refers to a network of loosely coupled, heterogeneous, and geographically dispersed computers, offering a flexible and inexpensive resource to access a large number of processors or large amounts of data.
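  As a minimal illustration of the distributed-memory, message-passing style, the sketch below uses mpi4py, the Python bindings to MPI (Python is chosen here for brevity; operational models typically use Fortran or C). Each rank advances its own latitude band of a tracer field and the root rank gathers the assembled result; the grid sizes and the "chemistry" step are placeholders.

```python
# Distributed-memory sketch with MPI (via mpi4py): each rank owns one
# latitude band of a global tracer field. Run with, e.g.:
#   mpiexec -n 4 python mpi_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index
size = comm.Get_size()   # total number of processes

nlat, nlon = 64, 128     # global grid (nlat assumed divisible by size)
local = np.full((nlat // size, nlon), float(rank))  # this rank's band

local *= 0.99            # placeholder for a local chemistry time step

bands = comm.gather(local, root=0)   # collect every band on rank 0
if rank == 0:
    field = np.vstack(bands)
    print("assembled global field:", field.shape)
```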

  (Adapted in part from Washington and Parkinson, 2005)

  The speed of a computer is measured by the number of floating point operations performed per second (called flops). The peak performance of the Cray-1 installed in the 1970s at the Los Alamos National Laboratory (New Mexico) was 250 megaflops (10⁶ flops), while the performance of the Cray-2 installed in 1985 at the Lawrence Livermore National Laboratory (California) was 3.9 gigaflops (10⁹ flops). The Earth Simulator introduced in Yokohama (Japan) in 2002, the largest computer in the world until 2004, provided 36 teraflops (10¹² flops). This machine included 5120 vector processors distributed among 640 nodes. It was surpassed in 2004 by the IBM Blue Gene platform at the Lawrence Livermore National Laboratory, with a performance that reached nearly 500 teraflops at the end of 2007. The performance of leading-edge supercomputers exceeded tens of petaflops (10¹⁵ flops) in 2015, and is predicted to be close to exaflops (10¹⁸ flops) by 2018. Enabling models to scale efficiently on such powerful platforms is a major engineering challenge.

  References

  Cadle R. D. and Allen E. R. (1970) Atmospheric photochemistry, Science, 167, 243–249.

  Fabry C. and Buisson H. (1913) L’absorption de l’ultraviolet par l’ozone et la limite du spectre solaire, J. Phys. Rad., 3, 196–206.

  Gershenfeld N. (1999) The Nature of Mathematical Modeling, Cambridge University Press, Cambridge.

  Kaufmann W. J. and Smarr L. L. (1993) Supercomputing and the Transformation of Science, Scientific American Library, New York.

  Lakoff G. and Johnson M. (1980) Metaphors We Live By, University of Chicago Press, Chicago, IL.

  Lorenz E. (1963) Deterministic nonperiodic flow, J. Atmos. Sci., 20, 131–141.

  Lorenz E. (1982) Atmospheric predictability experiments with a large numerical model, Tellus, 34, 505–513.

  Lovelock J. E. (1989) Geophysiology, the science of Gaia, Rev. Geophys., 27, 2, 215–222, doi: 10.1029/RG027i002p00215.

  Müller P. and von Storch H. (2004) Computer Modelling in Atmospheric and Oceanic Sciences: Building Knowledge, Springer-Verlag, Berlin.

  Walliser B. (2002) Les modèles économiques. In Enquête sur le concept de modèle (Pascal Nouvel, ed.), Presses Universitaires de France, Paris.

  Washington W. M. and Parkinson C. L. (2005) An Introduction to Three-Dimensional Climate Modeling, University Science Books, Sausalito, CA.

  2 Atmospheric Structure and Dynamics

  2.1 Introduction

  The atmosphere surrounding the Earth is a thin layer of gases retained by gravity (Figure 2.1). Table 2.1 lists the most abundant atmospheric gases. Concentrations are expressed as mole fractions, commonly called mixing ratios. The principal constituents are molecular nitrogen (N2), molecular oxygen (O2), and argon (Ar). Their mixing ratios are controlled by interactions with geochemical reservoirs below the Earth’s surface on very long timescales. Water vapor is present at highly variable mixing ratios (10⁻⁶–10⁻² mol mol⁻¹), determined by evaporation from the Earth’s surface and precipitation. In addition to these major constituents, the atmosphere contains a very large number of trace gases with mixing ratios lower than 10⁻³ mol mol⁻¹, including carbon dioxide (CO2), methane (CH4), ozone (O3), and many others. It also contains solid and liquid aerosol particles, typically 0.01–10 μm in size and present at concentrations of 10¹–10⁴ particles cm⁻³. These trace gases and aerosol particles do not contribute significantly to atmospheric mass, but are of central interest for environmental issues and for atmospheric reactivity.

  Figure 2.1 The Earth’s atmosphere seen from space, with the Sun just below the horizon. Air molecules scatter solar radiation far more efficiently in the blue than in the red. The red sunset color represents solar radiation transmitted through the lower atmosphere. The blue color represents solar radiation scattered by the upper atmosphere. Cloud structures are visible in the lowest layers.

  Table 2.1 Mixing ratios of gases in dry airᵃ

  Gas                      Mixing ratio (mol mol⁻¹)
  Nitrogen (N2)            0.78
  Oxygen (O2)              0.21
  Argon (Ar)               0.0093
  Carbon dioxide (CO2)     400 × 10⁻⁶
  Neon (Ne)                18 × 10⁻⁶
  Ozone (O3)               0.01–10 × 10⁻⁶
  Helium (He)              5.2 × 10⁻⁶
  Methane (CH4)            1.8 × 10⁻⁶
  Krypton (Kr)             1.1 × 10⁻⁶
  Hydrogen (H2)            500 × 10⁻⁹
  Nitrous oxide (N2O)      330 × 10⁻⁹

  ᵃ Excluding water vapor.

  The mean atmospheric pressure at the Earth’s surface is 984 hPa, which combined with the Earth’s radius of 6378 km yields a total mass for the atmosphere of 5.14 × 10¹⁸ kg. As we will see, atmospheric pressure decreases quasi-exponentially with height: 50% of total atmospheric mass is found below 5.6 km altitude and 90% below 16 km. Atmospheric pressures are sufficiently low for the ideal gas law to be obeyed within 1% under all conditions. The global mean surface air temperature is 288 K, and the corresponding air density is 1.2 kg m⁻³ or 2.5 × 10¹⁹ molecules cm⁻³; air density also decreases quasi-exponentially with height.
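  The quoted mass follows from hydrostatic balance: the weight of the atmosphere per unit surface area equals the mean surface pressure, so M = 4πR²p/g. A quick check in Python:

```python
# Check of the quoted atmospheric mass from hydrostatic balance:
# surface pressure = (total weight) / (surface area), so M = 4*pi*R^2*p/g.
import math

p = 984e2        # mean surface pressure, Pa
R = 6378e3       # Earth radius, m
g = 9.81         # surface gravity, m s^-2

M = 4.0 * math.pi * R**2 * p / g
print(f"M = {M:.2e} kg")   # ~5.1e18 kg, matching the quoted 5.14e18 kg
```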

  We present in this chapter a general overview of the structure and dynamics of the atmosphere to serve as a foundation for atmospheric chemistry models. More detailed considerations on atmospheric physics and dynamics can be found in meteorological textbooks such as Gill (1982), Pedlosky (1987), Andrews et al. (1987), Zdunkowski and Bott (2003), Green (2004), Vallis (2006), Martin (2006), Mak (2011), and Holton and Hakim (2013).

  2.2 Global Energy Budget

  The main source of energy for the Earth system is solar radiation. The Sun emits radiation as a blackbody of effective temperature TS = 5800 K. The corresponding blackbody energy flux is σTS⁴, where σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan-Boltzmann constant. This radiation extends over all wavelengths but peaks in the visible at 0.5 μm. The solar energy flux intercepted by the Earth’s disk (surface perpendicular to the incoming radiation) is 1365 W m⁻². This quantity is called the solar constant and is denoted S. Thus, the mean solar radiation flux received by the terrestrial sphere is S/4 = 341 W m⁻². A fraction α = 30% of this energy is reflected back to space by clouds and the Earth’s surface; this is called the planetary albedo. The remaining energy is absorbed by the Earth–atmosphere system. This energy input is compensated by blackbody emission of radiation by the Earth at an effective temperature TE. At steady state, the balance between solar heating and terrestrial cooling is given by

  (1 – α) S/4 = σTE⁴   (2.1)

  The mean effective temperature deduced from this equation is TE = 255 K. It is the temperature of the Earth that would be deduced by an observer in space from measurement of the emitted terrestrial radiation. The corresponding wavelengths of terrestrial emission are in the infrared (IR), peaking at 10 μm. The effective temperature is 33 K lower than the observed mean surface temperature, because most of the terrestrial radiation emitted to space originates from the atmosphere aloft, where clouds and greenhouse gases such as water vapor and CO2 absorb IR radiation emitted from below and re-emit it at a colder temperature. This is the essence of the greenhouse effect.
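  Solving (2.1) for TE with the values quoted above reproduces the 255 K effective temperature:

```python
# Effective temperature from the radiative balance (2.1):
# (1 - alpha) * S / 4 = sigma * T_E**4
S     = 1365.0   # solar constant, W m^-2
alpha = 0.30     # planetary albedo
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T_E = ((1.0 - alpha) * S / (4.0 * sigma)) ** 0.25
print(f"T_E = {T_E:.0f} K")   # ~255 K, 33 K below the 288 K surface mean
```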

  Figure 2.2 presents a more detailed description of the energy exchanges in the atmosphere. Of the energy emitted by the Earth’s surface (396 W m⁻²), only 40 W m⁻² is directly radiated to space, while the difference (356 W m⁻²) is absorbed by atmospheric constituents. Thus, the global heat budget of the atmosphere must include the energy inputs resulting from (1) the absorption of infrared radiation by clouds and greenhouse gases (356 W m⁻²), (2) the latent heat released in the atmosphere by condensation of water (80 W m⁻²), (3) the sensible heat from vertical transport of air heated by the surface (17 W m⁻²), and (4) the absorption of solar radiation by clouds, aerosols, and atmospheric gases (78 W m⁻²). Of this total atmospheric heat input (532 W m⁻²), 199 W m⁻² is radiated to space by greenhouse gases and clouds, while 333 W m⁻² is radiated to the surface and absorbed. This greenhouse heating of the surface (333 W m⁻²) is larger than the heating from direct solar radiation (161 W m⁻²). At the top of the atmosphere, the incoming solar energy of 341 W m⁻² is balanced by the reflected solar radiation of 102 W m⁻² (corresponding to a planetary albedo of 0.30, with 23 W m⁻² reflected by the surface and 79 W m⁻² by clouds, aerosols, and atmospheric gases) and by the IR terrestrial emission of 239 W m⁻². Note that the system as described here for the 2000–2004 period is slightly out of balance because of anthropogenic greenhouse gases: a net energy flux of 0.9 W m⁻² is absorbed by the surface, producing a gradual warming.

 
