Modeling of Atmospheric Chemistry


by Guy P Brasseur


  4.7.3 Zero-Dimensional Models

  Zero-dimensional models do not account for the effects of transport, so the continuity equations collapse to ordinary differential equations of the form dρi/dt = si. Computing the chemical evolution of an ensemble of coupled species involves solving a system of coupled ODEs. Such models have three general classes of applications:

  Global box models for long-lived and therefore well-mixed species, such as CO2 or methane, where long-term trends in total atmospheric mass can be computed from a balance between sources and sinks.

  Chemical evolution of an ensemble of short-lived species (such as a family of radicals) where the chemical source and sink terms are much larger than the transport terms. This is particularly useful to interpret field observations of radical chemistry. The concentrations of radicals are assumed to be in chemical steady state, while concentrations of longer-lived species are specified from observations or from climatology. Diurnal variation forced by photochemistry can be incorporated in the steady-state assumption by using periodic boundary conditions over the 24-hour diurnal cycle (Figure 4.10).

  Diagnostic studies to understand the nonlinear evolution of a chemical system. This is done, for example, to interpret the results of laboratory chamber experiments, or to understand the cascade of oxidation products resulting from the atmospheric oxidation of hydrocarbons.
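  As a minimal illustration of the first class of applications, the sketch below integrates a single-species box model dρ/dt = P − Lρ toward its steady state ρ = P/L. The production and loss values are hypothetical, chosen only to make the example concrete; they are not taken from the text.

```python
# Minimal zero-dimensional box model sketch: one species with a constant
# production rate P and a first-order loss L, so that d(rho)/dt = P - L*rho.
P = 100.0   # production rate (molecules cm^-3 s^-1), hypothetical value
L = 1e-3    # first-order loss frequency (s^-1), hypothetical value
dt = 10.0   # time step (s)
rho = 0.0   # initial concentration

for _ in range(100000):          # integrate far beyond the lifetime 1/L
    rho += dt * (P - L * rho)    # explicit Euler step

steady_state = P / L             # analytic balance between source and sink
print(rho, steady_state)         # the integration converges to P/L
```

  The same source-minus-sink balance, applied to the total atmospheric mass of a well-mixed species, is what a global box model for CO2 or methane computes.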

  Figure 4.10 Diurnal evolution of nitrogen species calculated by a zero-dimensional box model for May 10 at 65° N at 20 km (a) and 40 km (b).

  From Brasseur and Solomon (2005).

  4.8 Numerical Frameworks for Eulerian Models

  Eulerian models derive the concentrations of chemical species at fixed locations on the Earth’s sphere by numerical solution of the Eulerian chemical continuity equations (Section 4.2). Here, we present the general numerical frameworks used to solve these equations, including finite difference, finite volume, spectral, and finite element methods. In the finite difference method, each unknown function is described by its values at a set of discrete grid points, while in the finite volume method, the functions are represented by their values averaged over specified intervals called grid cells or gridboxes. In the spectral and finite element methods, the functions are expanded as linear combinations of orthogonal basis functions, and the coefficients appearing in these expansions become the unknowns. We use the generic symbol Ψ to represent non-negative scalar functions, which can be viewed as the concentration (density or mixing ratio) of chemical species.

  4.8.1 Finite Difference (Grid Point) Methods

  In the finite difference method, the partial derivatives in space ∂Ψ/∂x and time ∂Ψ/∂t of a function Ψ(x, t) are approximated by finite difference analogs ΔΨ/Δx and ΔΨ/Δt in which the increments Δ are finite rather than infinitesimal. In this approach, a differential equation is replaced by a system of algebraic equations that can be solved by numerical methods. The solution of the system is obtained at a finite number (J) of grid points: x1, x2, …, xj–1, xj, xj+1, … xJ for discrete time levels t1, t2, … tn–1, tn, … For simplicity, we first assume that the dependent variable Ψ is a function of variable x only (1-D problem) and we consider that the points are uniformly spaced, so that the distance Δx between points is constant. The finite difference forms are the same when the independent variable is time; in this case Δt represents the model time step.

  The spatial resolution associated with a 1-D grid of constant spacing Δx can be estimated by noting that, if one decomposes function Ψ(x) into several sinusoids of varying amplitudes, wavelengths, and phases (Fourier analysis), the smallest resolvable wavelength is λmin = 2 Δx. The resolution of the model is therefore twice the grid spacing. The maximum resolvable wavenumber is kmax = 2π/λmin = π/Δx. Processes that occur at scales smaller than λmin are referred to as unresolved or subgrid-scale processes; they are usually parameterized in terms of the resolved model variables (see Chapter 8).
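  The 2Δx resolution limit can be illustrated numerically: on a uniform grid, a wave of wavelength 2Δx is the shortest that retains any structure (alternating signs), while a wave of wavelength Δx is aliased to a constant. The sketch below is an illustration of this point, not an example from the text.

```python
import math

# Sampling two sinusoids on a uniform grid of spacing dx = 1 (illustrative).
dx = 1.0
grid = [j * dx for j in range(16)]

# Wavelength 2*dx: the shortest resolvable wave; the grid sees +1, -1, +1, ...
shortest = [math.cos(2 * math.pi * x / (2 * dx)) for x in grid]

# Wavelength dx: below the limit; every sample equals 1, i.e., the wave is
# aliased onto the constant (k = 0) mode and is invisible to the model.
aliased = [math.cos(2 * math.pi * x / dx) for x in grid]

print(shortest[:4], aliased[:4])
```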

  To derive finite difference approximations to the derivatives of a function Ψ(x) at point xj, we expand the function as a Taylor series around that point:

  Ψj+1 = Ψ(xj + Δx) = Ψj + Δx (∂Ψ/∂x)j + (Δx²/2) (∂²Ψ/∂x²)j + (Δx³/6) (∂³Ψ/∂x³)j + …

  (4.177)

  Similarly, we can write

  Ψj−1 = Ψ(xj − Δx) = Ψj − Δx (∂Ψ/∂x)j + (Δx²/2) (∂²Ψ/∂x²)j − (Δx³/6) (∂³Ψ/∂x³)j + …

  (4.178)

  Function Ψ(x) takes the values Ψj–1, Ψj, and Ψj+1 at points xj–1 = xj – Δx, xj, and xj+1 = xj + Δx, respectively. An approximate value of the first-order derivative ∂Ψ/∂x at point xj is derived from (4.177):

  (∂Ψ/∂x)j ≈ (Ψj+1 − Ψj)/Δx

  (4.179)

  The scheme is referred to as a forward difference scheme because the derivative is approximated by using information at points xj and xj + Δx. The error made by adopting this approximation, called the truncation error, is

  ε = (Ψj+1 − Ψj)/Δx − (∂Ψ/∂x)j = (Δx/2) (∂²Ψ/∂x²)j + (Δx²/6) (∂³Ψ/∂x³)j + …

  (4.180)

  From (4.178), one derives the backward scheme that uses information at points xj and xj – Δx:

  (∂Ψ/∂x)j ≈ (Ψj − Ψj−1)/Δx

  (4.181)

  with the truncation error

  ε = (Ψj − Ψj−1)/Δx − (∂Ψ/∂x)j = −(Δx/2) (∂²Ψ/∂x²)j + (Δx²/6) (∂³Ψ/∂x³)j + …

  (4.182)

  These two algorithms are said to be first-order accurate because the largest term in the truncation error is proportional to Δx. By subtracting (4.178) from (4.177) we obtain

  Ψj+1 − Ψj−1 = 2Δx (∂Ψ/∂x)j + (Δx³/3) (∂³Ψ/∂x³)j + …

  (4.183)

  One deduces a second-order approximation (or central scheme) for the first derivative at point xj

  (∂Ψ/∂x)j ≈ (Ψj+1 − Ψj−1)/(2Δx)

  (4.184)

  with the largest-magnitude term in the truncation error

  ε = (Δx²/6) (∂³Ψ/∂x³)j

  (4.185)

  being proportional to Δx². Higher-order approximations to the derivative can similarly be obtained by addition and subtraction of the Taylor expansions to cancel error terms to higher order (Table 4.1). For example, if one multiplies the centered difference (4.183) obtained for a spacing Δx by a factor of 4/3 and the similar expression derived for a grid spacing of 2Δx by a factor of 1/3, and subtracts the two expressions, the second-order error disappears; one obtains the fourth-order accurate scheme

  (∂Ψ/∂x)j ≈ (−Ψj+2 + 8Ψj+1 − 8Ψj−1 + Ψj−2)/(12Δx)

  (4.186)

  Table 4.1 Numerical approximations to the partial derivative ∂Ψ/∂x

  Approximation  Order  Expression

  Forward        1      (Ψj+1 − Ψj)/Δx
                 2      (−3Ψj + 4Ψj+1 − Ψj+2)/(2Δx)
                 3      (−11Ψj + 18Ψj+1 − 9Ψj+2 + 2Ψj+3)/(6Δx)
                 4      (−25Ψj + 48Ψj+1 − 36Ψj+2 + 16Ψj+3 − 3Ψj+4)/(12Δx)

  Backward       1      (Ψj − Ψj−1)/Δx
                 2      (3Ψj − 4Ψj−1 + Ψj−2)/(2Δx)
                 3      (11Ψj − 18Ψj−1 + 9Ψj−2 − 2Ψj−3)/(6Δx)
                 4      (25Ψj − 48Ψj−1 + 36Ψj−2 − 16Ψj−3 + 3Ψj−4)/(12Δx)

  Centered       2      (Ψj+1 − Ψj−1)/(2Δx)
                 4      (−Ψj+2 + 8Ψj+1 − 8Ψj−1 + Ψj−2)/(12Δx)

  Figure 4.11 shows a graphical representation of the first derivative of a continuous function Ψ(x) at point xj as the slope of the tangent to the function at that particular point (point B). It is approximated by the chords AB, BC, and AC in the case of the backward, forward, and central difference approximations, respectively. Higher-order formulations involve more than two grid points around point B.
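  The stated orders of accuracy can be verified empirically by halving Δx and observing how fast the error of each approximation shrinks. The sketch below is illustrative; the test function sin x and the evaluation point are our choices, not taken from the text.

```python
import math

# Empirical check of truncation-error orders for the forward (4.179),
# centered (4.184), and fourth-order centered (4.186) approximations to
# dPsi/dx, using Psi(x) = sin(x) whose exact derivative is cos(x).
def errors(dx, x=1.0):
    exact = math.cos(x)
    fwd = (math.sin(x + dx) - math.sin(x)) / dx
    cen = (math.sin(x + dx) - math.sin(x - dx)) / (2 * dx)
    cen4 = (-math.sin(x + 2 * dx) + 8 * math.sin(x + dx)
            - 8 * math.sin(x - dx) + math.sin(x - 2 * dx)) / (12 * dx)
    return [abs(fwd - exact), abs(cen - exact), abs(cen4 - exact)]

# Halving dx should reduce the errors by about 2, 4, and 16, respectively,
# confirming first-, second-, and fourth-order accuracy.
ratios = [a / b for a, b in zip(errors(0.1), errors(0.05))]
print(ratios)
```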

  Figure 4.11 Finite difference approximation to derivative dΨ/dx: forward, backward, and central approximations to slope of tangent at point B.

  By adding equations (4.178) and (4.177), we find the following expression

  Ψj+1 + Ψj−1 = 2Ψj + Δx² (∂²Ψ/∂x²)j + (Δx⁴/12) (∂⁴Ψ/∂x⁴)j + …

  (4.187)

  from which we deduce a second-order accurate expression for the second derivative at point xj

  (∂²Ψ/∂x²)j ≈ (Ψj+1 − 2Ψj + Ψj−1)/Δx²

  (4.188)

  Approximations for the second derivative are summarized in Table 4.2.

  Table 4.2 Numerical approximations to the partial derivative ∂²Ψ/∂x²

  Approximation  Order  Expression

  Centered       2      (Ψj+1 − 2Ψj + Ψj−1)/Δx²
  Centered       4      (−Ψj+2 + 16Ψj+1 − 30Ψj + 16Ψj−1 − Ψj−2)/(12Δx²)

  Ordinary Differential Equations (ODEs)

  To illustrate the use of the finite-difference method, we consider a system of initial-value ODEs that describes the evolution with time t of a vector-valued function Ψ(t):

  dΨ/dt = F(Ψ, t)

  (4.189)

  with an initial condition Ψ(t0) = Ψ0. The specified forcing function F is dependent on function Ψ and time t. In atmospheric problems, such equations describe the rate of change of different physical quantities (mass, energy, momentum, concentration of chemical species, etc.) in response to known forcing factors.

  From the first-order forward approximation of the derivative, and with F evaluated at time tn, we approximate (4.189) by

  (Ψn+1 − Ψn)/Δt = F(Ψn, tn)

  (4.190)

  where Ψn denotes an estimate of function Ψ at time tn and where the time interval Δt = tn+1 – tn is the time step of the numerical method. From the knowledge of Ψ at time tn, we obtain the solution for Ψ at time tn+1 as

  Ψn + 1 = Ψn + Δt Fn

  (4.191)

  where Fn stands for F(Ψn, tn). This is the explicit Euler forward method. It is called “explicit” because the right-hand side depends solely on the known value of Ψ at time tn. Thus Ψn+1 is readily calculated, and one can march in this manner forward from time step to time step.
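  The forward march of (4.191) can be sketched on the scalar test problem dΨ/dt = −Ψ with Ψ(0) = 1, whose exact solution is e^−t. The step size below is illustrative.

```python
import math

# Explicit Euler (4.191) applied to dPsi/dt = -Psi, Psi(0) = 1.
# Each step uses only the known value Psi_n, so the march is trivial.
dt = 0.01
psi = 1.0
for _ in range(100):         # march from t = 0 to t = 1 in 100 steps
    psi += dt * (-psi)       # Psi_{n+1} = Psi_n + dt * F_n

print(psi, math.exp(-1.0))   # the two agree to first order in dt
```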

  An alternative approximation to (4.189) is to evaluate F at time tn+1. This is the implicit Euler backward algorithm:

  Ψn + 1 = Ψn + Δt Fn + 1

  (4.192)

  This algebraic equation is more difficult to solve since F is expressed as a function of the unknown solution at time tn+1. In this latter case, the solution Ψn+1 is generally determined by adopting a functional iteration process. In the Newton iterative procedure, for example, one solves a linearized version of the system

  G(Ψn + 1) = Ψn + 1 − Ψn − Δt Fn + 1 = 0

  (4.193)

  leading to the iterative relations

  J(Ψn+1(r)) ΔΨ(r) = −G(Ψn+1(r)),  Ψn+1(r+1) = Ψn+1(r) + ΔΨ(r)

  with an initial guess, e.g., Ψn+1(0) = Ψn. In these expressions,

  J = ∂G/∂Ψ = I − Δt (∂F/∂Ψ)

  (4.194)

  is the Jacobian matrix of G, I is the identity matrix, and (r) is the iteration index. The iteration is interrupted when the norm of the correction ||ΔΨ(r)|| becomes smaller than a user-prescribed tolerance.
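  The Newton procedure can be sketched for a scalar case, where the Jacobian reduces to an ordinary derivative. The nonlinear right-hand side F(Ψ) = −Ψ² below is our illustrative choice, not an example from the text.

```python
# Newton iteration for one implicit Euler step (4.192)-(4.193) applied to
# the scalar problem dPsi/dt = F(Psi) = -Psi**2 (illustrative choice).
def implicit_euler_step(psi_n, dt, tol=1e-12, max_iter=50):
    psi = psi_n                        # initial guess: the previous value
    for _ in range(max_iter):
        g = psi - psi_n + dt * psi**2  # G(Psi) = Psi - Psi_n - dt*F(Psi)
        jac = 1.0 + 2.0 * dt * psi     # dG/dPsi = 1 - dt*(dF/dPsi)
        delta = -g / jac               # Newton correction
        psi += delta
        if abs(delta) < tol:           # stop when the correction is small
            break
    return psi

psi = implicit_euler_step(1.0, 0.1)
print(psi)   # satisfies Psi = Psi_n + dt*F(Psi), close to the exact 1/(1+t)
```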

  A second-order algorithm is the implicit trapezoidal scheme

  Ψn+1 = Ψn + (Δt/2) (Fn + Fn+1)

  (4.195)

  The three algorithms presented above are single-step methods because the value of function Ψ at time tn+1 is calculated only as a function of Ψ at the previous time tn. Higher-order multi-step methods provide the solution Ψn+1 at time tn+1 by using information from s previous steps tn, tn–1, tn–2, …, tn–s+1:

  Ψn+1 = a0 Ψn + a1 Ψn−1 + … + as−1 Ψn−s+1 + Δt (b−1 Fn+1 + b0 Fn + b1 Fn−1 + … + bs−1 Fn−s+1)

  (4.196)

  The choice of coefficients a0, a1, …, as–1 and b–1, b0, b1, …, bs–1 defines the particular algorithm. If b–1 = 0, the method is explicit; otherwise, it is implicit. For s = 1 (single-step method) and a0 = 1, the algorithm is a single-step Euler method (explicit if b0 = 1 and b–1 = 0, and implicit if b0 = 0 and b–1 = 1). For b–1 = b0 = 0.5, one obtains the single-step trapezoidal method.

  An example of a two-step algorithm (s = 2) is the leapfrog method

  Ψn + 1 = Ψn − 1 + 2Δt Fn

  (4.197)

  in which all parameters in expression (4.196) are chosen to be zero except a1 = 1 and b0 = 2. This explicit method is second-order accurate. A difficulty with the leapfrog algorithm is that the solutions obtained on the even time levels n, n + 2, n + 4, …, and the odd time levels n + 1, n + 3, n + 5, … are only weakly coupled (Figure 4.12). This can lead to the appearance of a “computational mode” that gradually amplifies when treating nonlinear problems. The computational mode can be damped by applying a second-order time filter (see Section 4.15). Another corrective approach is to apply a different algorithm periodically over the course of the integration, exchanging the roles of the odd and even time levels.
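  The computational mode can be demonstrated directly: applied to the decaying problem dΨ/dt = −Ψ, the leapfrog scheme carries a second root of magnitude greater than one, so the numerical solution eventually oscillates and grows even when the scheme is started from exact values. The step size below is illustrative.

```python
import math

# Leapfrog (4.197) on dPsi/dt = -Psi, Psi(0) = 1. The scheme needs two
# starting levels; we seed the second with the exact value exp(-dt).
dt = 0.1
psi_prev = 1.0               # Psi at t = 0
psi = math.exp(-dt)          # Psi at t = dt (exact start)
for _ in range(1000):
    # Psi_{n+1} = Psi_{n-1} + 2*dt*F_n, with F_n = -Psi_n
    psi_prev, psi = psi, psi_prev + 2 * dt * (-psi)

# The exact solution at t ~ 100 is exp(-100) ~ 0, but the computational
# mode (root of magnitude > 1 for a decaying problem) has taken over.
print(psi)
```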

  Figure 4.12 The leapfrog method. Numerical integration of the solution at tn–1, tn+1, tn+3, … time levels (in yellow) and tn–2, tn, tn+2, … time levels (in red).

  A classic multi-step algorithm is the explicit Adams–Bashforth method, in which the coefficients of expression (4.196) are defined as a0 = 1, and a1, a2, …, as–1 = 0. A value of zero is imposed for coefficient b–1, defining the explicit nature of the method. Coefficients b0, b1, …, bs–1 are chosen so that the method has order s. For s = 1, the algorithm is the explicit Euler method. The two- and three-step Adams–Bashforth algorithms are expressed by

  Ψn+1 = Ψn + (Δt/2) (3Fn − Fn−1)

  (4.198)

  and

  Ψn+1 = Ψn + (Δt/12) (23Fn − 16Fn−1 + 5Fn−2)

  (4.199)

  respectively.
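  A sketch of the two-step Adams–Bashforth method applied to dΨ/dt = −Ψ follows; bootstrapping the second starting level from the exact solution is our illustrative choice.

```python
import math

# Two-step Adams-Bashforth (4.198) on dPsi/dt = F(Psi) = -Psi, Psi(0) = 1:
# Psi_{n+1} = Psi_n + (dt/2) * (3*F_n - F_{n-1}).
dt = 0.01
psi_prev = 1.0               # Psi at t = 0
psi = math.exp(-dt)          # bootstrap the second level with the exact value
for _ in range(99):          # advance from t = dt to t = 1
    psi, psi_prev = psi + dt / 2 * (3 * (-psi) - (-psi_prev)), psi

print(psi, math.exp(-1.0))   # agree to second order in dt
```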

  The Adams–Moulton method is an implicit multi-step algorithm. The values adopted for coefficients aj are the same as in the Adams–Bashforth method (all values of ai equal to zero, except a0 = 1). In this implicit case, the restriction on b–1 is removed, and the values of coefficients bj are chosen so that the method reaches order s + 1. The algorithm for s = 1 is the trapezoidal rule. For s = 2 and s = 3, it is expressed respectively as

  Ψn+1 = Ψn + (Δt/12) (5Fn+1 + 8Fn − Fn−1)

  (4.200)

  Ψn+1 = Ψn + (Δt/24) (9Fn+1 + 19Fn − 5Fn−1 + Fn−2)

  (4.201)

  Finally, the implicit backward differentiation formula (BDF) method, also called Gear’s method, is defined by setting all coefficients b0, b1, …, bs–1 = 0 and specifying a non-zero value only for b–1. The resulting algorithm is generally written in the form

  Ψn+1 = a0 Ψn + a1 Ψn−1 + … + as−1 Ψn−s+1 + Δt b−1 Fn+1

  (4.202)

  For example, the two- and three-step BDFs are

  Ψn+1 = (4/3) Ψn − (1/3) Ψn−1 + (2/3) Δt Fn+1

  (4.203)

  Ψn+1 = (18/11) Ψn − (9/11) Ψn−1 + (2/11) Ψn−2 + (6/11) Δt Fn+1

  (4.204)

  Higher-order expressions are provided in Section 6.4.3.
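  The value of the BDF's implicitness can be sketched on a stiff scalar problem: for a linear right-hand side F = λΨ the implicit step of (4.203) can be solved in closed form, and the solution decays even when Δt|λ| is far beyond any explicit stability limit. The values of λ and Δt below are illustrative.

```python
import math

# Two-step BDF (4.203) on the stiff problem dPsi/dt = lam*Psi, lam = -1000:
# Psi_{n+1} = (4*Psi_n - Psi_{n-1})/3 + (2/3)*dt*lam*Psi_{n+1},
# which rearranges to Psi_{n+1} = (4*Psi_n - Psi_{n-1})/(3 - 2*dt*lam).
lam = -1000.0
dt = 0.1                     # dt*|lam| = 100, far beyond explicit limits
psi_prev = 1.0               # Psi at t = 0
psi = math.exp(lam * dt)     # second starting level from the exact solution
for _ in range(20):
    psi, psi_prev = (4 * psi - psi_prev) / (3 - 2 * dt * lam), psi

print(psi)                   # decays toward 0 despite the large time step
```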

  The choice of a particular algorithm to solve ODEs is determined first by the requirement that the method provide stable solutions for relatively large time steps. Accuracy is another important requirement. In the case of nonlinear initial value problems, no general numerical analysis is available for assessing the conditions under which a specific algorithm provides stable solutions. We consider therefore a simple prototype linear problem

  dΨ/dt = β Ψ,  Ψ(t0) = Ψ0,  with β = λ + iω

  (4.205)

  for which different numerical schemes can be evaluated relative to a known analytic solution. The analysis of this linear problem provides guidance for the choice of integration algorithms to be adopted for more complex nonlinear problems. To keep our analysis simple, we assume that Ψ(t) is a non-negative scalar function, and we examine the numerical stability of different algorithms discussed above by assuming that solution Ψn+1 at time tn+1 is expressed as a function of the solution Ψn at time tn by

  Ψn + 1 = R(z) Ψn

  (4.206)

  where the function R(z) is the so-called stability function and z = βΔt is a complex variable, with β = λ + iω. The modulus of R(z) determines the amplitude of the solution as time evolves. We consider here a problem whose solution decays with time, and assume therefore that λ < 0. Thus, in our discussion, only the left half of the complex plane, with axes x = λΔt and y = ωΔt, is of particular interest. The argument of R(z) provides information on the phase of the solution, and is of relevance when treating fluid dynamical equations such as, for example, the momentum or transport equations. When considering chemical kinetics equations, the variable z = λΔt is real, and R(z) is therefore a real function.

  The stability function R(z) associated with different algorithms is generally expressed as the ratio between two polynomials. This function approximates its analytic analog (exact solution of the prototype equation)

  R(z) = ez

  (4.207)

  A numerical method is said to be A-stable if the amplitude of the stability function |R(z)| ≤ 1 for −∞ ≤ λ ≤ 0. The area of stability covers therefore the entire left-half area of the complex plane for which λ ≤ 0. An A-stable integration algorithm is stable for any value of the time step Δt. Any other type of stability introduces restrictions on z and therefore on the time step. The A-stability condition is met for the implicit (backward) Euler and the trapezoidal methods, whose stability functions are

  R(z) = 1/(1 − z)  and  R(z) = (1 + z/2)/(1 − z/2)

  respectively. It is not met in the case of the explicit (forward) Euler method, for which

  R(z) = 1 + z

  In this case, the stability region in the complex plane is limited to a circle of radius 1 centered at the point (λΔt = –1, ωΔt = 0). In the case of a chemical kinetics equation (i.e., z is a real variable), the method is unstable unless the time step is smaller than 2/|λ|. For a coupled system of several chemical species, the time step must then be comparable to or smaller than the lifetime of the shortest-lived species. This makes the method inapplicable for stiff chemical kinetics systems of ODEs, as is typical of atmospheric chemistry problems, where the lifetimes of different species in the system vary over many orders of magnitude. The Adams–Bashforth and Adams–Moulton methods are not A-stable either. The one- and two-step BDF (Gear’s) methods are A-stable, but the A-stability property is lost when the order of accuracy of the method becomes higher than 2 (this limit is called Dahlquist’s second barrier). The values of z that violate the A-stability condition, however, are located only in a shallow area to the left of the imaginary axis in the complex domain (Durran, 2010). Gear’s method is stable for accuracy orders up to 6 if z is real and negative. This algorithm is therefore particularly appropriate for treating stiff chemical kinetics systems. See Chapter 6 for further discussion.
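  The contrast between the three single-step schemes can be made concrete by evaluating their stability functions at a stiff real value of z; the choice z = −100 below is illustrative.

```python
# Stability functions R(z) of three single-step schemes at a stiff real
# value z = lam*dt = -100 (decaying problem, lam < 0).
z = -100.0

R_explicit = 1 + z                        # forward Euler: |R| >> 1, unstable
R_implicit = 1 / (1 - z)                  # backward Euler: |R| < 1 for any dt
R_trapezoid = (1 + z / 2) / (1 - z / 2)   # trapezoidal: |R| < 1, but near -1

print(R_explicit, R_implicit, R_trapezoid)
```

  The explicit scheme amplifies the solution enormously at each step, while both implicit schemes damp it; the trapezoidal value close to −1 anticipates the L-stability discussion below.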

  A-stability may not be a sufficient condition for properly treating systems in which different components decay at very different rates. One introduces therefore the concept of L-stability: a method is said to be L-stable in the context of the prototype equation adopted here, if it is A-stable and, in addition, if |R(z)| → 0 for large time steps (Δt → ∞). L-stable algorithms are particularly useful for treating stiff systems. By adopting relatively large time steps, L-stable methods retain the slowly varying contributions of the solution (which are generally of most interest), while ignoring its rapidly decaying components (which are generally of little or no interest). The implicit Euler method is L-stable since |R(z)| tends to zero for large time steps. On the contrary, the trapezoidal algorithm, which is A-stable, is not L-stable. Since no multi-step method is A-stable for orders higher than 2, stricto sensu only low-order BDF (Gear’s) algorithms are L-stable.
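  The practical difference between A- and L-stability can be sketched by repeatedly applying the stability functions of the backward Euler and trapezoidal schemes to a stiff decaying problem with a large time step; the numbers below are illustrative.

```python
# L-stability in practice: dPsi/dt = -1000*Psi with a large step dt = 0.1,
# advanced 10 steps by multiplying with each scheme's stability function.
lam, dt, n = -1000.0, 0.1, 10
z = lam * dt                          # z = -100

be, tr = 1.0, 1.0
for _ in range(n):
    be *= 1 / (1 - z)                 # backward Euler: R -> 0 as z -> -inf
    tr *= (1 + z / 2) / (1 - z / 2)   # trapezoidal: R -> -1 as z -> -inf

# The exact solution is essentially 0 after one step. Backward Euler
# damps the transient immediately; the trapezoidal result still carries a
# slowly decaying, sign-alternating remnant of it.
print(be, tr)
```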

  A simple geometric interpretation of the numerical methods for ODEs presented above can be provided by integrating the scalar version of (4.189) over a time step Δt = tn+1 – tn:

  Ψ(tn+1) = Ψ(tn) + ∫[tn, tn+1] F(Ψ(t), t) dt
