Modeling of Atmospheric Chemistry


by Guy P Brasseur


  dψ/dt = p(t, ψ) − L(t, ψ) ψ

  (6.29)

  where the vector p and diagonal matrix L are functions of the unknown concentrations ψ. We denote p_k(t_n, ψ^n) as the production rate and ℓ_k(t_n, ψ^n) as the loss coefficient for species k at time t_n, and write p_k^n and ℓ_k^n for brevity. The loss rate of species k is generally a linear function of its concentration, hence the utility of separating out the loss coefficient.

  6.3.1 Exponential Approximation

  The exponential method, one of the earliest methods used to treat chemical processes in atmospheric models, is motivated by the form of (6.29), which has a trivial exponential solution if p and L are constant. Assuming that p and L can be approximated as constant over the time step Δt, we obtain the following explicit expression for each species k:

  ψ_k^{n+1} = ψ_k^n exp(−ℓ_k^n Δt) + (p_k^n / ℓ_k^n) [1 − exp(−ℓ_k^n Δt)]

  (6.30)

  The solution provided by this first-order algorithm is positive for any integration time step, and the algorithm does not require any matrix manipulation. However, the method does not conserve mass and it requires small time steps to be accurate. The accuracy can be improved by considering the implicit form of the exponential approximation

  ψ_k^{n+1} = ψ_k^n exp(−ℓ_k^{n+1} Δt) + (p_k^{n+1} / ℓ_k^{n+1}) [1 − exp(−ℓ_k^{n+1} Δt)]

  (6.31)

  This equation can be easily solved by an iteration procedure, starting from the initial iterate ψ^{n+1,(0)} = ψ^n. The number of iterations required to ensure a given level of accuracy may differ among the chemical species within the system.
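
  As an illustration, the minimal sketch below implements the explicit step (6.30) and the fixed-iteration implicit step (6.31) for a single species; the production and loss functions are hypothetical placeholders that merely make the example self-contained.

    import numpy as np

    def p(t, psi):    # hypothetical production rate (molec cm^-3 s^-1)
        return 1.0e4
    def ell(t, psi):  # hypothetical first-order loss coefficient (s^-1)
        return 1.0e-2 * (1.0 + 0.1 * np.sin(t / 3600.0))

    def exp_explicit(t_n, psi_n, dt):
        # one explicit exponential step (6.30): p and l frozen at (t_n, psi^n)
        l_n, p_n = ell(t_n, psi_n), p(t_n, psi_n)
        e = np.exp(-l_n * dt)
        return psi_n * e + (p_n / l_n) * (1.0 - e)

    def exp_implicit(t_n, psi_n, dt, n_iter=5):
        # iterated implicit step (6.31): p and l evaluated at (t_{n+1}, psi^{n+1}),
        # starting from the initial iterate psi^{n+1,(0)} = psi^n
        psi = psi_n
        for _ in range(n_iter):
            l1, p1 = ell(t_n + dt, psi), p(t_n + dt, psi)
            e = np.exp(-l1 * dt)
            psi = psi_n * e + (p1 / l1) * (1.0 - e)
        return psi

    psi0 = 1.0e6
    print(exp_explicit(0.0, psi0, 60.0), exp_implicit(0.0, psi0, 60.0))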

  An extrapolated form of the exponential approximation proposed by Jay et al. (1995) provides a second-order accurate algorithm. Omitting the index k, we compute a first estimate ψ_1^{n+1} of the solution at time t_{n+1} following (6.30)

  ψ_1^{n+1} = ψ^n exp(−ℓ^n Δt) + (p^n / ℓ^n) [1 − exp(−ℓ^n Δt)]

  (6.32)

  A second estimate ψ_2^{n+1} of ψ^{n+1} is derived by a two-step integration using time step Δt/2 (the superscript n+1/2 denotes evaluation at time t_n + Δt/2 with the intermediate value ψ^{n+1/2}):

  ψ^{n+1/2} = ψ^n exp(−ℓ^n Δt/2) + (p^n / ℓ^n) [1 − exp(−ℓ^n Δt/2)]

  (6.33)

  ψ_2^{n+1} = ψ^{n+1/2} exp(−ℓ^{n+1/2} Δt/2) + (p^{n+1/2} / ℓ^{n+1/2}) [1 − exp(−ℓ^{n+1/2} Δt/2)]

  (6.34)

  The solution at time t_{n+1} is then given by the extrapolation relation (6.28):

  ψ^{n+1} = 2 ψ_2^{n+1} − ψ_1^{n+1}

  (6.35)
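
  A sketch of the extrapolated step (6.32)-(6.35), again with hypothetical rate functions chosen only for illustration:

    import numpy as np

    def p(t, psi):    # hypothetical production rate
        return 1.0e4
    def ell(t, psi):  # hypothetical loss coefficient
        return 1.0e-2

    def exp_step(t, psi, dt):
        # explicit exponential step (6.30)
        l, pr = ell(t, psi), p(t, psi)
        e = np.exp(-l * dt)
        return psi * e + (pr / l) * (1.0 - e)

    def exp_extrapolated(t, psi, dt):
        # second-order extrapolated step of Jay et al. (1995)
        psi1 = exp_step(t, psi, dt)                    # full step (6.32)
        half = exp_step(t, psi, 0.5 * dt)              # first half step (6.33)
        psi2 = exp_step(t + 0.5 * dt, half, 0.5 * dt)  # second half step (6.34)
        return 2.0 * psi2 - psi1                       # extrapolation (6.35)

    print(exp_extrapolated(0.0, 1.0e6, 60.0))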

  6.3.2 Quasi Steady-State Approximation

  The computation of exponential functions in the algorithms described in Section 6.3.1 requires substantial computer time. In a scheme proposed by Hesstvedt et al. (1978), called the Quasi Steady-State Approximation (QSSA), classification of the species by lifetime reduces the number of exponential function evaluations. The species are separated according to their e-folding time (chemical lifetime τ_k = 1/ℓ_k), and different algorithms are applied:

  For long-lived species with τ_k > 100 Δt, a fully explicit Euler forward algorithm:

  ψ_k^{n+1} = ψ_k^n + (p_k^n − ℓ_k^n ψ_k^n) Δt

  (6.36)

  For intermediate-lived species with 0.1 Δt < τ_k < 100 Δt, an exponential approximation:

  ψ_k^{n+1} = ψ_k^n exp(−ℓ_k^n Δt) + (p_k^n / ℓ_k^n) [1 − exp(−ℓ_k^n Δt)]

  (6.37)

  For short-lived species with τ_k < 0.1 Δt, a steady-state value:

  ψ_k^{n+1} = p_k^n / ℓ_k^n

  (6.38)

  This method is more efficient than the pure exponential solver. Its accuracy is highly dependent on the choice of the integration time step.
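
  The classification can be implemented compactly with array masks, as in the minimal sketch below; the production rates, loss coefficients, and time step are hypothetical values chosen to exercise the three lifetime classes.

    import numpy as np

    def qssa_step(p_n, l_n, psi_n, dt):
        # one QSSA step (Hesstvedt et al., 1978) for arrays of production
        # rates, loss coefficients, and concentrations at time t_n
        tau = 1.0 / l_n                   # chemical lifetimes
        psi = np.empty_like(psi_n)
        is_long = tau > 100.0 * dt        # Euler forward (6.36)
        is_short = tau < 0.1 * dt         # steady state (6.38)
        is_mid = ~(is_long | is_short)    # exponential approximation (6.37)
        psi[is_long] = psi_n[is_long] + (p_n[is_long] - l_n[is_long] * psi_n[is_long]) * dt
        e = np.exp(-l_n[is_mid] * dt)
        psi[is_mid] = psi_n[is_mid] * e + (p_n[is_mid] / l_n[is_mid]) * (1.0 - e)
        psi[is_short] = p_n[is_short] / l_n[is_short]
        return psi

    # hypothetical values spanning the three lifetime classes (dt = 60 s)
    p_n = np.array([1.0, 1.0e3, 1.0e6])
    l_n = np.array([1.0e-7, 1.0e-3, 1.0e1])
    psi_n = np.array([1.0e7, 1.0e6, 1.0e5])
    print(qssa_step(p_n, l_n, psi_n, 60.0))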

  6.3.3 Extrapolation Technique (ET)

  As described in Section 6.2.3, the extrapolation method combines the solutions obtained by a low-order algorithm with different time steps using Richardson’s recurrence formula. In the extrapolation technique proposed by Dabdub and Seinfeld (1995), also called the ET solver, the numerical algorithm is a predictor-corrector scheme. The predictor, which calculates a first estimate ψ_k* of the solution at time t_{n+1}, is provided by the explicit exponential formula (6.30)

  ψ_k* = ψ_k^n exp(−ℓ_k^n Δt) + (p_k^n / ℓ_k^n) [1 − exp(−ℓ_k^n Δt)]

  (6.39)

  The formula for the corrector is chosen according to the lifetime τ_k = 1/ℓ_k of chemical species k. In what follows, p_k* ≡ p_k(t_{n+1}, ψ_k*) and ℓ_k* ≡ ℓ_k(t_{n+1}, ψ_k*) denote values evaluated with the predictor estimate.

  For long-lived species (τ_k > 100 Δt), one adopts the trapezoidal rule

  ψ_k^{n+1} = ψ_k^n + (Δt/2) [(p_k^n − ℓ_k^n ψ_k^n) + (p_k* − ℓ_k* ψ_k*)]

  (6.40)

  For intermediate species (0.1 Δt < τ_k < 100 Δt), the corrector uses an exponential form

  ψ_k^{n+1} = ψ_k^n exp(−ℓ̄_k Δt) + (p_k* / ℓ̄_k) [1 − exp(−ℓ̄_k Δt)]

  (6.41)

  where the averaged loss coefficient ℓ̄_k is defined as

  ℓ̄_k = (ℓ_k^n + ℓ_k*) / 2

  (6.42)

  For short-lived species (τ_k < 0.1 Δt)

  ψ_k^{n+1} = p_k* / ℓ_k*

  (6.43)

  The correctors can be iterated until the relative difference between successive approximations becomes smaller than a user-imposed tolerance.
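
  A single-species sketch of the ET predictor-corrector logic, using the corrector branches written above, is given below; the rate functions, tolerance, and iteration cap are hypothetical.

    import numpy as np

    def p(t, psi):    # hypothetical production rate
        return 1.0e4 * (1.0 + 0.5 * np.cos(t / 3600.0))
    def ell(t, psi):  # hypothetical loss coefficient
        return 2.0e-4

    def et_step(t, psi_n, dt, tol=1.0e-6, max_iter=20):
        # predictor (6.39): explicit exponential formula (6.30)
        l_n, p_n = ell(t, psi_n), p(t, psi_n)
        e = np.exp(-l_n * dt)
        psi = psi_n * e + (p_n / l_n) * (1.0 - e)
        # corrector branch chosen from the lifetime tau = 1/l, iterated
        # until the relative change drops below the tolerance
        for _ in range(max_iter):
            l_s, p_s = ell(t + dt, psi), p(t + dt, psi)
            tau = 1.0 / l_s
            if tau > 100.0 * dt:          # trapezoidal rule (6.40)
                new = psi_n + 0.5 * dt * ((p_n - l_n * psi_n) + (p_s - l_s * psi))
            elif tau > 0.1 * dt:          # exponential corrector (6.41)-(6.42)
                l_bar = 0.5 * (l_n + l_s)
                eb = np.exp(-l_bar * dt)
                new = psi_n * eb + (p_s / l_bar) * (1.0 - eb)
            else:                         # steady state (6.43)
                new = p_s / l_s
            if abs(new - psi) <= tol * abs(new):
                return new
            psi = new
        return psi

    print(et_step(0.0, 1.0e7, 60.0))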

  6.3.4 CHEMEQ Solver

  In the CHEMEQ solver proposed by Young and Boris (1977), as implemented by Saylor and Ford (1995), a distinction is again made among chemical species according to their lifetime τ_k = 1/ℓ_k. The corrector formulas are derived from the implicit trapezoidal rule, but applied in an explicit way. Omitting the index k, quantities with superscript * (p*, ℓ*, τ* = 1/ℓ*) are evaluated at (t_{n+1}, ψ*), where ψ* is the predictor estimate.

  For long-lived species (τ_k > 5 Δt), we use:

  Predictor:

  ψ* = ψ^n + (p^n − ℓ^n ψ^n) Δt

  (6.44)

  Corrector:

  ψ^{n+1} = ψ^n + (Δt/2) [(p^n − ℓ^n ψ^n) + (p* − ℓ* ψ*)]

  (6.45)

  For intermediate species (0.2 Δt < τ_k < 5 Δt), we use the more accurate asymptotic integration formula:

  Predictor:

  ψ* = [ψ^n (2τ^n − Δt) + 2 p^n τ^n Δt] / (2τ^n + Δt)

  (6.46)

  Corrector:

  ψ^{n+1} = [ψ^n (2τ̄ − Δt) + (p^n + p*) τ̄ Δt] / (2τ̄ + Δt),   with τ̄ = (τ^n + τ*) / 2

  (6.47)

  For short-lived species (τ_k < 0.2 Δt), steady state is assumed:

  ψ^{n+1} = p* / ℓ*

  (6.48)

  Iterations on the corrector are performed until convergence is reached.
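
  The following sketch implements one CHEMEQ step under the forms (6.44)-(6.48) written above; the rate functions passed to it are hypothetical.

    import numpy as np

    def chemeq_step(t, psi_n, dt, p, ell, tol=1.0e-6, max_iter=20):
        # one CHEMEQ step (Young and Boris, 1977) for a single species
        l_n, p_n = ell(t, psi_n), p(t, psi_n)
        tau_n = 1.0 / l_n
        if tau_n > 5.0 * dt:     # long-lived: Euler predictor, trapezoidal corrector
            psi = psi_n + (p_n - l_n * psi_n) * dt                    # (6.44)
            for _ in range(max_iter):
                l_s, p_s = ell(t + dt, psi), p(t + dt, psi)
                new = psi_n + 0.5 * dt * ((p_n - l_n * psi_n)
                                          + (p_s - l_s * psi))        # (6.45)
                if abs(new - psi) <= tol * abs(new):
                    return new
                psi = new
        elif tau_n > 0.2 * dt:   # intermediate: asymptotic formulas
            psi = (psi_n * (2.0 * tau_n - dt)
                   + 2.0 * p_n * tau_n * dt) / (2.0 * tau_n + dt)     # (6.46)
            for _ in range(max_iter):
                l_s, p_s = ell(t + dt, psi), p(t + dt, psi)
                tau_b = 0.5 * (tau_n + 1.0 / l_s)
                new = (psi_n * (2.0 * tau_b - dt)
                       + (p_n + p_s) * tau_b * dt) / (2.0 * tau_b + dt)  # (6.47)
                if abs(new - psi) <= tol * abs(new):
                    return new
                psi = new
        else:                    # short-lived: steady state (6.48)
            psi = p(t + dt, psi_n) / ell(t + dt, psi_n)
        return psi

    # hypothetical constant rates
    print(chemeq_step(0.0, 1.0e7, 60.0, lambda t, y: 1.0e4, lambda t, y: 1.0e-4))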

  6.3.5 TWOSTEP Method

  The TWOSTEP method (Verwer, 1994) is based on the second-order backward differentiation formula (see Table 4.2):

  ψ^{n+1} = (4/3) ψ^n − (1/3) ψ^{n−1} + (2/3) Δt s(t_{n+1}, ψ^{n+1})

  (6.49)

  or

  ψ^{n+1} = Ψ + γ Δt s^{n+1}

  (6.50)

  with

  Ψ = (4 ψ^n − ψ^{n−1}) / 3,   γ = 2/3

  (6.51)

  Here s^{n+1} again represents the source term s(t_{n+1}, ψ^{n+1}). This algorithm is a two-step method: the solution at time t_{n+1} is expressed as a function of the solutions ψ^n and ψ^{n−1} at times t_n and t_{n−1}. When the source term is replaced by the production rate p and the loss coefficient matrix L, the solution becomes

  ψ^{n+1} = [I + γ Δt L(t_{n+1}, ψ^{n+1})]^{−1} [Ψ + γ Δt p(t_{n+1}, ψ^{n+1})]

  (6.52)

  with I being the identity matrix. The value of ψ^{n+1} can be obtained by applying an iterative procedure provided, for example, by the Jacobi or Gauss–Seidel method (see Box 6.2).

  Box 6.2 Solutions of Linear Algebraic Equations: LU Decomposition, Jacobi and Gauss–Seidel Iteration

  Different numerical methods are available to solve a system of N linear equations

  Ay = b

  (I)

  In the LU decomposition method, matrix A with elements a_{i,j} is decomposed into the product of two matrices L and U

  A = L U

  (II)

  where L, the lower triangular matrix, includes non-zero elements l_{i,j} only on the diagonal and below, and U, the upper triangular matrix, includes non-zero elements u_{i,j} only on the diagonal and above. If A, L, and U are 3 × 3 matrices, equation (II) is written

  ⎡ l_{11}    0       0    ⎤ ⎡ u_{11}  u_{12}  u_{13} ⎤   ⎡ a_{11}  a_{12}  a_{13} ⎤
  ⎢ l_{21}  l_{22}    0    ⎥ ⎢   0     u_{22}  u_{23} ⎥ = ⎢ a_{21}  a_{22}  a_{23} ⎥
  ⎣ l_{31}  l_{32}  l_{33} ⎦ ⎣   0       0     u_{33} ⎦   ⎣ a_{31}  a_{32}  a_{33} ⎦

  System (I) becomes

  L U y = b

  (III)

  Its solution is found by solving sequentially the two triangular systems

  L z = b,   U y = z

  (IV)

  first by forward substitution

  z_1 = b_1 / l_{1,1},   z_i = (b_i − Σ_{j=1}^{i−1} l_{i,j} z_j) / l_{i,i},   i = 2, …, N

  and then by back-substitution

  y_N = z_N / u_{N,N},   y_i = (z_i − Σ_{j=i+1}^{N} u_{i,j} y_j) / u_{i,i},   i = N−1, …, 1

  The decomposition of matrix A into triangular matrices L and U is performed by deriving the values of l_{i,j} and u_{i,j} from the N² equations of system (II). Since this system includes N² + N unknowns, N of them can be specified: For example, the diagonal elements in one of the triangular matrices can be set equal to 1.
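
  In practice, library routines carry out the factorization and the two substitution sweeps. A sketch using SciPy’s lu_factor and lu_solve on a hypothetical 3 × 3 system:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # hypothetical 3 x 3 system A y = b
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])

    lu, piv = lu_factor(A)       # decompose A = LU (with partial pivoting)
    y = lu_solve((lu, piv), b)   # forward substitution, then back-substitution
    print(y, np.allclose(A @ y, b))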

  The Jacobi and Gauss–Seidel methods can be described as follows (Press et al., 2007). We first split matrix A into its diagonal part D, its lower triangular part L (with zeros on the diagonal), and its upper triangular part U (also with zeros on the diagonal). Thus, we write

  A = L + D + U

  (V)

  In the Jacobi iteration method, we write for iteration step (r + 1)

  D y^{(r+1)} = −(L + U) y^{(r)} + b

  (VI)

  y^{(r+1)} = y^{(r)} − D^{−1} [A y^{(r)} − b]

  (VII)

  The value of y^{(r+1)} can easily be derived since D is a diagonal matrix. The method converges slowly and is most effective when matrix A is dominated by its diagonal terms.

  In the Gauss–Seidel method, iteration (r + 1) is expressed by

  (L + D) y^{(r+1)} = −U y^{(r)} + b

  (VIII)

  or

  y^{(r+1)} = y^{(r)} − (L + D)^{−1} [A y^{(r)} − b]

  (IX)

  This method leads to an algorithm in which the updated values of the individual components of vector y are used to derive the solutions for the subsequent components of the same vector. In the successive over-relaxation method, these iterations can be accelerated by multiplying the correction vector [A y^{(r)} − b] in (IX) by an over-relaxation parameter whose value is generally chosen between 1 and 2. In this range of values, the method is convergent.
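
  Both iterations are simple to code. The sketch below implements (VII) and (IX) directly; the diagonally dominant test system is hypothetical.

    import numpy as np

    def jacobi(A, b, y0, n_iter=50):
        # Jacobi iteration (VII): y(r+1) = y(r) - D^-1 [A y(r) - b]
        d = np.diag(A)
        y = y0.copy()
        for _ in range(n_iter):
            y = y - (A @ y - b) / d
        return y

    def gauss_seidel(A, b, y0, n_iter=50):
        # Gauss-Seidel iteration (IX): updated components are used immediately
        y = y0.copy()
        for _ in range(n_iter):
            for i in range(len(b)):
                y[i] = (b[i] - A[i, :i] @ y[:i] - A[i, i + 1:] @ y[i + 1:]) / A[i, i]
        return y

    # hypothetical diagonally dominant test system
    A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(A, b, np.zeros(3)), gauss_seidel(A, b, np.zeros(3)))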

  The TWOSTEP method is second-order accurate and does not require matrix manipulation. The solution is always positive and approaches its steady-state value for large time steps. Mass is not fully conserved by the Jacobi and Gauss–Seidel iterative procedures (Box 6.2).
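
  A minimal sketch of one TWOSTEP update follows; because L is diagonal, the inversion in (6.52) reduces to an elementwise division, and a fixed number of Jacobi-type sweeps stands in for a convergence test. The two-species rate functions are hypothetical.

    import numpy as np

    def twostep(psi_nm1, psi_n, t_np1, dt, p, ell, n_iter=5):
        # one TWOSTEP update, eqs. (6.50)-(6.52), for a diagonal loss matrix L
        g = 2.0 / 3.0
        Psi = (4.0 * psi_n - psi_nm1) / 3.0    # (6.51)
        psi = psi_n.copy()                     # initial iterate
        for _ in range(n_iter):                # Jacobi-type sweeps
            psi = (Psi + g * dt * p(t_np1, psi)) / (1.0 + g * dt * ell(t_np1, psi))
        return psi

    # hypothetical pair: production of species 1 depends on species 0
    p = lambda t, y: np.array([5.0e3, 1.0e-4 * y[0]])
    ell = lambda t, y: np.array([1.0e-4, 2.0e-3])
    y0 = np.array([1.0e7, 1.0e5])
    # startup approximation for the first step: psi^{n-1} = psi^n
    print(twostep(y0, y0, 60.0, 60.0, p, ell))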

  6.4 Implicit Solvers

  We now examine a few frequently used implicit integrators. As indicated earlier, implicit solvers are robust for solving stiff systems. They require, however, computationally expensive matrix manipulations. Information on the stability of these methods is provided in Appendix E.

  6.4.1 Backward Euler

  In the backward Euler method,

  ψ^{n+1} = ψ^n + s(t_{n+1}, ψ^{n+1}) Δt

  (6.53)

  the solution ψ^{n+1} is obtained by determining the roots of the K-valued vector function

  g(ψ^{n+1}) ≡ ψ^{n+1} − ψ^n − s(t_{n+1}, ψ^{n+1}) Δt = 0

  (6.54)

  where K is the number of species in the system. The solution can be obtained with the Newton–Raphson iteration method. In this case, the function g evaluated at the iterate ψ^{n+1,(r+1)} is expanded as a Taylor series about the previous estimate ψ^{n+1,(r)} of the solution. Thus

  g(ψ^{n+1,(r+1)}) = g(ψ^{n+1,(r)}) + J(ψ^{n+1,(r)}) [ψ^{n+1,(r+1)} − ψ^{n+1,(r)}] + (higher-order terms)

  (6.55)

  Here J is the Jacobian matrix whose elements are given by J_{i,j} = ∂g_i/∂ψ_j. Neglecting the higher-order terms, the value ψ^{n+1,(r+1)} for which g(ψ^{n+1,(r+1)}) = 0 is derived from

  ψ^{n+1,(r+1)} = ψ^{n+1,(r)} − J^{−1}(ψ^{n+1,(r)}) g(ψ^{n+1,(r)})

  (6.56)

  with ψ^{n+1,(0)} = ψ^n as the initial iterate. The iteration proceeds until convergence is reached to within a user-prescribed tolerance. The Newton–Raphson iteration conserves mass when the analytic form of the Jacobian matrix is used and recalculated at each iteration; this property may be lost, however, when approximations of the Jacobian are used.

  The backward Euler method requires repeated construction and inversion of the Jacobian matrix. Inversion can be sped up by noting that the matrix is usually sparse (i.e., contains many zero elements), since many pairs of chemical species are not directly coupled. Various methods exist for computationally efficient inversion of sparse matrices (see, for example, Press et al., 2007). When the interactions between groups of species are weak, the Jacobian matrix can be broken into smaller matrices, enabling more efficient solution (Hertel et al., 1993; Sandilands and McConnell, 1997).
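
  The following minimal sketch implements one backward Euler step solved by Newton–Raphson, following (6.54)-(6.56); the two-species mechanism, its rate constants, and the convergence tolerance are hypothetical choices for illustration.

    import numpy as np

    def backward_euler_newton(psi_n, t_np1, dt, s, jac_s, tol=1.0e-8, max_iter=20):
        # one backward Euler step (6.53) solved by Newton-Raphson (6.54)-(6.56)
        psi = psi_n.copy()                        # initial iterate psi^{n+1,(0)} = psi^n
        I = np.eye(len(psi_n))
        for _ in range(max_iter):
            g = psi - psi_n - s(t_np1, psi) * dt  # residual (6.54)
            J = I - jac_s(t_np1, psi) * dt        # Jacobian of g
            dpsi = np.linalg.solve(J, -g)         # Newton update (6.56)
            psi = psi + dpsi
            if np.max(np.abs(dpsi)) <= tol * np.max(np.abs(psi)):
                break
        return psi

    # hypothetical mechanism: A -> B (k1), B -> loss (k2)
    k1, k2 = 1.0e-2, 1.0e-3
    s = lambda t, y: np.array([-k1 * y[0], k1 * y[0] - k2 * y[1]])
    jac = lambda t, y: np.array([[-k1, 0.0], [k1, -k2]])
    print(backward_euler_newton(np.array([1.0e8, 0.0]), 60.0, 60.0, s, jac))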

  In another approach proposed by Shimazaki (1985), the chemical source term s is linearized as follows:

  s = p − Lψ

  (6.57)

  where p is a vector of production rates and L is a diagonal matrix of loss rate coefficients. We then write

  ψ^{n+1} = ψ^n + Δt [p(t_{n+1}, ψ^{n+1}) − L(t_{n+1}, ψ^{n+1}) ψ^{n+1}]

  (6.58)

  The solution can be obtained by applying an iterative procedure:

  ψ^{n+1,(r+1)} = [I + Δt L^{n+1,(r)}]^{−1} [ψ^n + Δt p^{n+1,(r)}]

  (6.59)

  where p^{n+1,(r)} = p(t_{n+1}, ψ^{n+1,(r)}) and L^{n+1,(r)} = L(t_{n+1}, ψ^{n+1,(r)}). I denotes the identity matrix, and (r) represents the iteration index (r = 0, 1, 2, …). The initial iterate is ψ^{n+1,(0)} = ψ^n. Convergence restrictions on the adopted time step Δt depend on the functional forms of the vector p and the matrix L. Convergence may be difficult when the chemical coupling between the different species in the system is strong. Linearization affects mass conservation, but the situation is improved when quadratic terms such as kψ_1ψ_2 are linearized as k(ψ_1^{n+1}ψ_2^n + ψ_1^nψ_2^{n+1})/2 and linear terms such as kψ are expressed as k(ψ^{n+1} + ψ^n)/2 (Ramaroson et al., 1992).
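
  A sketch of the iteration (6.59) for a diagonal loss matrix, with hypothetical coupled production terms:

    import numpy as np

    def shimazaki_step(psi_n, t_np1, dt, p, ell, n_iter=10):
        # iterative solution (6.59) of the linearized step (6.58); with a
        # diagonal loss matrix L the inversion is an elementwise division
        psi = psi_n.copy()          # initial iterate psi^{n+1,(0)} = psi^n
        for _ in range(n_iter):
            psi = (psi_n + dt * p(t_np1, psi)) / (1.0 + dt * ell(t_np1, psi))
        return psi

    # hypothetical coupled pair: each species is produced by the other
    p = lambda t, y: np.array([1.0e3 + 1.0e-3 * y[1], 2.0e-3 * y[0]])
    ell = lambda t, y: np.array([5.0e-3, 1.0e-2])
    print(shimazaki_step(np.array([1.0e6, 1.0e5]), 60.0, 60.0, p, ell))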

  A particularly useful feature of the backward Euler method (6.53) is its flexibility in the choice of chemical constraints applied to the system of coupled species. This makes it attractive for the analysis of chemical mechanisms using box models, where computational requirements are not a concern. The functions g(ψ^{n+1}) that are used to define the solution system do not necessarily need to be finite-difference forms of the chemical kinetic equations. They can be any constraint that we choose. For example, the steady-state solution of the system is obtained by using

  g(ψ^{n+1}) = p^{n+1} − L^{n+1} ψ^{n+1} = 0

  (6.60)

  Individual constraints can also be applied to any particular species or group of species. For example, we might want to impose conservation of the sum ψ_T of the concentrations of a family of species j = 1, …, q:

  Σ_{j=1}^{q} ψ_j^{n+1} − ψ_T = 0

  (6.61)

  Here, (6.61) replaces the kinetic equation for one chosen member of the chemical family. This allows chemical cycling within the family while holding constant the total concentration of the family (see Box 3.1). For example, one may impose a fixed concentration of NOx ≡ NO + NO2 in the model while allowing the individual concentrations of NO and NO2 to change. This is done by replacing the kinetic equation for NO2 by the NOx mass conservation equation ([NO] + [NO2] = [NOx], where [NOx] is imposed). The chemical kinetic equation is retained for NO. Other potentially useful constraints that can be expressed by the form gk(ψn+1) = 0 include chemical equilibria between species, charge balance for aqueous-phase ion chemistry, etc.
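
  For illustration, here is a minimal sketch of such a constrained residual for the NOx example: the NO row keeps its backward Euler finite-difference form, while the NO2 row is replaced by the conservation constraint. The two rate coefficients are hypothetical, and SciPy’s fsolve stands in for the Newton–Raphson iteration described above.

    import numpy as np
    from scipy.optimize import fsolve

    k_a, k_b = 1.0e-2, 5.0e-3   # hypothetical NO2 photolysis and NO oxidation rates

    def s(psi):
        no, no2 = psi
        return np.array([k_a * no2 - k_b * no,     # d[NO]/dt
                         k_b * no - k_a * no2])    # d[NO2]/dt

    def g(psi, psi_n, dt, nox_total):
        # backward Euler residual (6.54), with the NO2 row replaced
        # by the NOx mass-conservation constraint
        r = psi - psi_n - s(psi) * dt
        r[1] = psi[0] + psi[1] - nox_total
        return r

    psi_n = np.array([6.0e10, 4.0e10])             # hypothetical [NO], [NO2]
    psi = fsolve(g, psi_n, args=(psi_n, 60.0, psi_n.sum()))
    print(psi, psi.sum())                          # NOx total is preserved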

  6.4.2 Rosenbrock Solvers

  The Rosenbrock methods (Rosenbrock, 1963), which can be regarded as a generalization of the Runge–Kutta methods, are non-iterative implicit algorithms that are particularly well adapted to stiff systems. If we apply only one Newton–Raphson iteration to the fully implicit algorithm with ψ^n as the initial iterate, we obtain

  ψ^{n+1} = ψ^n + k Δt

  (6.62)

  and solve

  k = s(ψ^n) + J k Δt

  (6.63)

  where J is the Jacobian matrix of the chemical source function s. The idea behind the Rosenbrock methods (Hairer and Wanner, 1996) is to derive stable integration formulas that generalize expressions (6.62) and (6.63) and use s stages to achieve a high order of accuracy. An s-stage Rosenbrock method applied to an autonomous problem dψ/dt = s(ψ) seeks a solution of the form

  ψ^{n+1} = ψ^n + Δt Σ_{i=1}^{s} b_i k_i

  (6.64)

  with s linear equations

  k_i = s(ψ^n + Δt Σ_{j=1}^{i−1} α_{ij} k_j) + Δt J Σ_{j=1}^{i} γ_{ij} k_j,   i = 1, …, s

  (6.65)

  that can be rearranged as

  (I − γ_{ii} Δt J) k_i = s(ψ^n + Δt Σ_{j=1}^{i−1} α_{ij} k_j) + Δt J Σ_{j=1}^{i−1} γ_{ij} k_j

  (6.66)

  where I is again the identity matrix. The method-specific coefficients b_i, α_{ij}, and γ_{ij} are fixed constants independent of the problem, chosen to obtain a desired order of accuracy and to ensure stability for stiff problems. Equation (6.66) can be solved successively for k_1, k_2, …, k_s, using, for example, an LU decomposition process or, when possible, a suitable sparse matrix procedure. A comprehensive treatment of the Rosenbrock methods is provided by Hairer and Wanner (1996). See also Rosenbrock (1963) and Press et al. (2007).

  For a non-autonomous system dψ/dt = s(t, ψ), the definition of k_i in expression (6.65) is changed to

  k_i = s(t_n + α_i Δt, ψ^n + Δt Σ_{j=1}^{i−1} α_{ij} k_j) + γ_i Δt (∂s/∂t)(t_n, ψ^n) + Δt J Σ_{j=1}^{i} γ_{ij} k_j,   with α_i = Σ_{j=1}^{i−1} α_{ij} and γ_i = Σ_{j=1}^{i} γ_{ij}

  The Rosenbrock solvers are one-step algorithms. Like fully implicit methods, they conserve mass during the integration if the true analytic form of the Jacobian is used. Positivity of the solution is not guaranteed. The Rosenbrock solvers, like the Runge–Kutta solvers, form successive results that approximate the solution at intermediate time levels. A disadvantage of the Rosenbrock solvers is that they require an evaluation of the Jacobian at each time step, several matrix–vector multiplications, and the solution of a linear system at each stage. The cost of the method can be reduced, however (Sandu et al., 1997a), by keeping the Jacobian unchanged during several time steps of the integration, and by approximating the Jacobian by a matrix of higher sparsity (though this option does not preserve mass conservation).

  An example of a Rosenbrock method is the second-order ROS2 solver (s = 2):

  ψ^{n+1} = ψ^n + (3/2) Δt k_1 + (1/2) Δt k_2

  (6.67)

  with

  (I − γ Δt J) k_1 = s(ψ^n)
  (I − γ Δt J) k_2 = s(ψ^n + Δt k_1) − 2 k_1

  (6.68)

  To maximize stability, the value γ = 1 + 1/√2 is recommended (Verwer et al., 1999).
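
  A minimal sketch of one ROS2 step (6.67)-(6.68) for an autonomous system follows; the stiff linear test problem is hypothetical.

    import numpy as np

    GAMMA = 1.0 + 1.0 / np.sqrt(2.0)   # stability-maximizing value

    def ros2_step(psi_n, dt, s, jac_s):
        # one ROS2 step (6.67)-(6.68) for an autonomous system dpsi/dt = s(psi)
        J = jac_s(psi_n)
        M = np.eye(len(psi_n)) - GAMMA * dt * J
        k1 = np.linalg.solve(M, s(psi_n))
        k2 = np.linalg.solve(M, s(psi_n + dt * k1) - 2.0 * k1)
        return psi_n + 1.5 * dt * k1 + 0.5 * dt * k2

    # hypothetical stiff linear pair
    s = lambda y: np.array([-1.0e3 * y[0] + y[1], 1.0e3 * y[0] - y[1]])
    jac = lambda y: np.array([[-1.0e3, 1.0], [1.0e3, -1.0]])
    print(ros2_step(np.array([1.0, 0.0]), 0.1, s, jac))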

  Another Rosenbrock algorithm that is accurate for stiff systems is the RODAS3 solver (Sandu et al., 1997a) for which s = 4. The solution is given by

  (6.69)

  with

  (6.70)

  Verwer et al. (1999), who compared ROS2 and RODAS3, suggest that RODAS3 is less stable when large fixed time steps are used, and that ROS2 generally performs with higher stability for nonlinear atmospheric kinetics problems.

  6.4.3 Gear Solver

  Most of the algorithms discussed in the previous sections are single-step methods, in which the solution ψ^{n+1} at time t_{n+1} is calculated as a function of the solution ψ^n at time t_n. In a multi-step method, the solution ψ^{n+1} is derived as a function of the solutions ψ^n, ψ^{n−1}, ψ^{n−2}, … at previous time levels t_n, t_{n−1}, t_{n−2}, …. A general formulation for a multi-step algorithm of order s is given by Byrne and Hindmarsh (1975):

  ψ^{n+1} = Σ_{k=0}^{s} α_k ψ^{n−k} + Δt Σ_{k=−1}^{s} γ_k s(t_{n−k}, ψ^{n−k})

  (6.71)

  where α_k and γ_k are method-specific constants selected to ensure stability. Single-step methods correspond to the particular case of s = 0 and α_0 = 1.

  Different explicit and implicit multi-step methods are available to solve ordinary differential equations; they are briefly discussed in Chapter 4. A widely used multi-step method that is particularly well adapted to stiff problems is the implicit Gear solver (Gear, 1971), also called the backward differentiation formula (BDF) method. Here s can be as high as 6, only γ_{−1} is non-zero among the γ coefficients, and the α_k are selected by stability and accuracy considerations. Thus, we write the BDF

  ψ^{n+1} = Σ_{k=0}^{s} α_k ψ^{n−k} + γ_{−1} Δt s(t_{n+1}, ψ^{n+1})
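
  In practice one rarely hand-codes a Gear solver; mature implementations are available in standard libraries. As a sketch, SciPy’s solve_ivp offers a variable-order BDF method in the spirit of Gear (1971); the stiff two-species toy mechanism below is hypothetical.

    import numpy as np
    from scipy.integrate import solve_ivp

    # hypothetical stiff toy mechanism: fast interconversion plus slow loss
    def s(t, y):
        return np.array([-1.0e4 * y[0] + 1.0e4 * y[1],
                         1.0e4 * y[0] - 1.0e4 * y[1] - 1.0e-3 * y[1]])

    sol = solve_ivp(s, (0.0, 3600.0), np.array([1.0e8, 0.0]),
                    method="BDF", rtol=1.0e-6, atol=1.0)
    print(sol.y[:, -1])   # concentrations after one hour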
