Modeling of Atmospheric Chemistry


by Guy P. Brasseur


  ‖c‖ = ‖a × b‖ = ‖a‖‖b‖sinθ

  The components of the vector product are

$$c_x = a_y b_z - a_z b_y, \qquad c_y = a_z b_x - a_x b_z, \qquad c_z = a_x b_y - a_y b_x$$
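These component formulas are easy to check numerically; a minimal sketch using NumPy (the vectors are arbitrary examples, not from the text):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Component formula for c = a x b, as written above.
c = np.array([a[1]*b[2] - a[2]*b[1],
              a[2]*b[0] - a[0]*b[2],
              a[0]*b[1] - a[1]*b[0]])
assert np.allclose(c, np.cross(a, b))

# ||c|| = ||a|| ||b|| sin(theta), with theta the angle between a and b.
cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
sin_theta = np.sqrt(1.0 - cos_theta**2)
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * sin_theta)
```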

The following rules are satisfied by the vector product (m is a scalar):

$$\mathbf{a} \times \mathbf{b} = -\,\mathbf{b} \times \mathbf{a}$$

$$(m\mathbf{a}) \times \mathbf{b} = \mathbf{a} \times (m\mathbf{b}) = m\,(\mathbf{a} \times \mathbf{b})$$

$$\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = \mathbf{a} \times \mathbf{b} + \mathbf{a} \times \mathbf{c}$$

$$\mathbf{a} \times \mathbf{a} = 0$$

Triple Products

The scalar triple product and the vector triple product satisfy

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b})$$

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}\,(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}\,(\mathbf{a} \cdot \mathbf{b})$$

  Derivative of a Vector

The derivative da/dξ of vector a (ax, ay, az) with respect to independent variable ξ is obtained by computing the derivative of each component. For example, in a Cartesian frame,

$$\frac{d\mathbf{a}}{d\xi} = \frac{da_x}{d\xi}\,\mathbf{i} + \frac{da_y}{d\xi}\,\mathbf{j} + \frac{da_z}{d\xi}\,\mathbf{k}$$

We have

$$\frac{d}{d\xi}(\mathbf{a} + \mathbf{b}) = \frac{d\mathbf{a}}{d\xi} + \frac{d\mathbf{b}}{d\xi}, \qquad \frac{d}{d\xi}(\mathbf{a} \cdot \mathbf{b}) = \frac{d\mathbf{a}}{d\xi} \cdot \mathbf{b} + \mathbf{a} \cdot \frac{d\mathbf{b}}{d\xi}, \qquad \frac{d}{d\xi}(\mathbf{a} \times \mathbf{b}) = \frac{d\mathbf{a}}{d\xi} \times \mathbf{b} + \mathbf{a} \times \frac{d\mathbf{b}}{d\xi}$$

  E.3 Matrices

A matrix of size (m × n) is a rectangular table of elements arranged as m rows and n columns. An (m × n) matrix with elements ai,j (i = 1, …, m; j = 1, …, n) is written as

$$\mathbf{A} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}$$

(3)

  A matrix with the same number of rows and columns (m = n) is called a square matrix. A matrix with zero elements below (above) the diagonal ai,i is an upper (lower) triangular matrix. A matrix with non-zero elements on the main diagonal and zero off-diagonal elements is called a diagonal matrix. A matrix with non-zero elements only on the main diagonal and on the first diagonals below and above the main diagonal is called a tridiagonal matrix. Matrices that include a large number of zero elements (often encountered in chemical modeling) are referred to as sparse matrices.
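These structured forms map directly onto the storage schemes of numerical libraries. As a small sketch (using SciPy's sparse-matrix utilities; the matrix values are arbitrary), a tridiagonal matrix can be assembled and held in sparse storage as follows:

```python
import numpy as np
from scipy.sparse import diags

n = 5
lower = -1.0 * np.ones(n - 1)   # elements just below the main diagonal
main = 2.0 * np.ones(n)         # main diagonal elements
upper = -1.0 * np.ones(n - 1)   # elements just above the main diagonal

# Tridiagonal matrix assembled from its three diagonals; only the
# non-zero entries are stored, which is what makes sparse storage
# attractive for the large systems encountered in chemical modeling.
T = diags([lower, main, upper], offsets=[-1, 0, 1], format="csr")
print(T.toarray())
```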

  Matrices of the same size (m × n) can be added or subtracted element by element. If ai,j are the elements of matrix A and bi,j the elements of matrix B, the elements of matrix C = A + B are

$$c_{i,j} = a_{i,j} + b_{i,j}$$

  The multiplication of a matrix A by a scalar γ is a matrix B whose elements are bi,j = γ ai,j.

Matrices can be multiplied when the number of columns (equal to n) of the first matrix (A) is equal to the number of rows of the second matrix (B). If C = AB, the elements of C are

$$c_{i,j} = \sum_{k=1}^{n} a_{i,k}\, b_{k,j}$$

  Matrix multiplication satisfies several rules:

  (AB)C = A(BC)

  (A + B)C = AC + BC

  A(B + C) = AB + AC

  Matrix multiplication is not commutative

  AB ≠ BA
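These rules, and the failure of commutativity, can be illustrated with a quick NumPy check (the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 5.0]])

# Associativity and distributivity hold to rounding error...
assert np.allclose((A @ B) @ C, A @ (B @ C))
assert np.allclose((A + B) @ C, A @ C + B @ C)

# ...but commutativity does not: AB and BA generally differ.
print(A @ B)
print(B @ A)
```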

  The trace of a square matrix A is the sum of its diagonal elements.

The transpose AT of an (m × n) matrix A is the (n × m) matrix in which the rows have been turned into the columns and the columns into the rows. For example, the transpose of the (m × n) matrix A in (3) is

$$\mathbf{A}^{T} = \begin{pmatrix} a_{1,1} & a_{2,1} & \cdots & a_{m,1} \\ a_{1,2} & a_{2,2} & \cdots & a_{m,2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1,n} & a_{2,n} & \cdots & a_{m,n} \end{pmatrix}$$

  An (n × n) square matrix A is said to be symmetric if A = AT such that ai,j = aj,i. It is said to be antisymmetric if A = –AT with ai,j = –aj,i.

  If the matrix includes complex elements, the complex conjugate matrix A* of matrix A is a matrix whose elements are the complex conjugates of the elements of matrix A.

For a complex matrix A with elements ai,j, the adjoint A† with elements a†i,j is the complex conjugate of the transpose AT (with elements aj,i):

$$a^{\dagger}_{i,j} = a^{*}_{j,i}$$

  The adjoint of a real matrix is thus its transpose.

The identity matrix I of size n (denoted In) is an (n × n) square matrix in which all diagonal elements are equal to 1 and all off-diagonal elements are 0. For example (n = 2),

$$\mathbf{I}_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

  The multiplication of a matrix A by the identity matrix I is

  IA = A

The inverse matrix of a square matrix A is a matrix of the same size, denoted A⁻¹, that satisfies the relation

$$\mathbf{A}^{-1}\mathbf{A} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$$

  The inverse of a diagonal matrix is a diagonal matrix of reciprocal elements.

  Determinants

The determinant of a square matrix A of size n with elements ai,j is denoted det(A) or |A| and is calculated as

$$\det(\mathbf{A}) = \sum_{K} \varepsilon(k_1, \ldots, k_n)\; a_{1,k_1}\, a_{2,k_2} \cdots a_{n,k_n}$$

(4)

The Levi-Civita symbol ε(k1, …, kn) denotes the permutation sign function defined by

$$\varepsilon(k_1, \ldots, k_n) = (-1)^{\pi(K)}$$

where π(K) is the number of pairwise exchanges needed to reorder the sequence of distinct indices K = {k1, k2, …, kn}, each ranging from 1 to n, into the ascending order [1, 2, …, n]. Thus, for example, ε(1, 2, 3) = +1 and ε(2, 1, 3, 4) = −1. The summation in (4) is performed over all n! permutations K of the indices 1, 2, …, n.
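As an illustration, the determinant formula (4) can be implemented directly by summing over all permutations; a sketch in Python (practical codes use LU decomposition instead, since this brute-force sum costs O(n · n!)):

```python
import numpy as np
from itertools import permutations

def perm_sign(k):
    """Sign (-1)**pi(K) of a permutation k of (0, ..., n-1),
    computed by counting inversions (pairs out of ascending order)."""
    inversions = sum(1 for i in range(len(k)) for j in range(i + 1, len(k))
                     if k[i] > k[j])
    return -1 if inversions % 2 else 1

def det_levi_civita(A):
    """Determinant via equation (4): sum over all permutations K of
    sign(K) * a[0, k0] * a[1, k1] * ... * a[n-1, k(n-1)]."""
    n = A.shape[0]
    return sum(perm_sign(k) * np.prod([A[i, k[i]] for i in range(n)])
               for k in permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
assert np.isclose(det_levi_civita(A), np.linalg.det(A))
```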

  For example, the determinant of a matrix of size n = 2 is

$$\det(\mathbf{A}) = a_{1,1}\,a_{2,2} - a_{1,2}\,a_{2,1}$$

  and for a matrix of size n = 3:

$$\det(\mathbf{A}) = a_{1,1}\,(a_{2,2}\,a_{3,3} - a_{2,3}\,a_{3,2}) + a_{1,2}\,(a_{2,3}\,a_{3,1} - a_{2,1}\,a_{3,3}) + a_{1,3}\,(a_{2,1}\,a_{3,2} - a_{2,2}\,a_{3,1})$$

  A matrix whose determinant is equal to zero is said to be singular or degenerate. A matrix whose determinant is non-zero is said to be non-singular or of full rank.

  Important properties of determinants are

det(AB) = det(A) det(B) = det(BA)

det(AT) = det(A)

det(A⁻¹) = 1/det(A)

The determinant of a triangular or diagonal matrix T with elements ti,j is the product of its diagonal elements:

$$\det(\mathbf{T}) = \prod_{i=1}^{n} t_{i,i}$$

  Linear Systems

A system of m linear equations with n independent variables

$$\begin{aligned} a_{1,1}\,x_1 + a_{1,2}\,x_2 + \cdots + a_{1,n}\,x_n &= b_1 \\ a_{2,1}\,x_1 + a_{2,2}\,x_2 + \cdots + a_{2,n}\,x_n &= b_2 \\ &\;\;\vdots \\ a_{m,1}\,x_1 + a_{m,2}\,x_2 + \cdots + a_{m,n}\,x_n &= b_m \end{aligned}$$

(5)

can be represented in matrix form by

$$\begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}$$

or

Ax = b

  Here, A is an m × n matrix, x is a vector of dimension n, and b is a vector of dimension m. If m = n, the number of unknowns in system (5) is equal to the number of equations, and matrix A is a square matrix. If A is also non-singular (det(A) ≠ 0) then the system can be solved as

$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$$

A generic expression for the inverse of a non-singular matrix of size n × n is given by

$$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}\,\mathbf{B}^{T}$$

In this expression, BT, called the adjugate of A, is the transpose of the (n × n) matrix B whose coefficients bi,j, referred to as the cofactors of elements ai,j, are given by

$$b_{i,j} = (-1)^{i+j}\,\det\!\left(\mathbf{A}^{(i,j)}\right)$$

where matrix A(i,j), of size (n − 1) × (n − 1) and called the (i, j) minor matrix, is the same as matrix A with row i and column j removed. In practical applications, more efficient algorithms are adopted to seek the solution of linear systems. Among these are direct methods (e.g., Gauss–Jordan elimination, LU decomposition) and iterative methods (e.g., Jacobi, Gauss–Seidel, least-squares, conjugate gradient methods). See Press et al. (2007) and Co (2013) for more details. The Thomas algorithm (see Box 4.4), sketched below, is used to solve tridiagonal systems.
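For reference, here is a minimal sketch of the Thomas algorithm for tridiagonal systems (forward elimination followed by back substitution; the variable names are ours, and the book's own presentation is in Box 4.4):

```python
import numpy as np

def thomas(lower, main, upper, b):
    """Solve a tridiagonal system T x = b, where `lower`, `main`, and
    `upper` hold the sub-, main, and super-diagonal of T. A forward
    sweep eliminates the sub-diagonal; back substitution recovers x.
    Assumes the system is well conditioned (no pivoting)."""
    n = len(main)
    c = np.array(upper, dtype=float)   # super-diagonal (unchanged)
    d = np.array(b, dtype=float)       # modified right-hand side
    m = np.array(main, dtype=float)    # modified main diagonal

    # Forward elimination: subtract a multiple of each row from the next.
    for i in range(1, n):
        w = lower[i - 1] / m[i - 1]
        m[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]

    # Back substitution.
    x = np.empty(n)
    x[-1] = d[-1] / m[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / m[i]
    return x

# Example: the tridiagonal matrix with diagonals (-1, 2, -1), size 4.
lower = [-1.0, -1.0, -1.0]
main = [2.0, 2.0, 2.0, 2.0]
upper = [-1.0, -1.0, -1.0]
b = [1.0, 0.0, 0.0, 1.0]
x = thomas(lower, main, upper, b)
A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
assert np.allclose(A @ x, b)
```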

  Jacobian Matrix

Let x = (x1, x2, …, xn)T be a vector of dimension n and f = (f1, f2, …, fm)T be a vector-valued function of dimension m whose elements fi are scalar-valued functions of x. The Jacobian matrix J(f) of f, defined by

$$\mathbf{J}(\mathbf{f}) = \frac{\partial \mathbf{f}}{\partial \mathbf{x}} = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{pmatrix}$$

is the m × n matrix whose elements ji,j = ∂fi/∂xj are the partial derivatives of fi with respect to the vector elements xj.

  ji,j represents the sensitivity of function fi to variable xj. The Jacobian generalizes the notion of gradient to describe the sensitivity to a vector. The m components of f are assumed to be continuous over the entire domain under consideration. The Jacobian matrix is often used as the linearized expression of a mathematical model. In Chapter 11, the Jacobian is denoted K to avoid confusion with the usual notation for the cost function (J).
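In numerical practice the Jacobian is often approximated column by column by finite differences; a minimal sketch (the test function and step size are illustrative assumptions):

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Approximate J(f) at x by one-sided finite differences:
    column j is [f(x + eps*e_j) - f(x)] / eps, i.e., the sensitivity
    of each component of f to variable x_j."""
    f0 = np.asarray(f(x))
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp)) - f0) / eps
    return J

# Example: f(x, y) = (x*y, x + y**2); the exact Jacobian is
# [[y, x], [1, 2*y]].
f = lambda v: np.array([v[0] * v[1], v[0] + v[1] ** 2])
x = np.array([2.0, 3.0])
print(jacobian_fd(f, x))   # close to [[3, 2], [1, 6]]
```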

The Hessian H(f) of a scalar-valued function f is a square matrix whose elements hi,j are the second-order partial derivatives of function f:

$$h_{i,j} = \frac{\partial^2 f}{\partial x_i\,\partial x_j}$$

  It describes the local curvature of a function of many variables. The Hessian matrix is related to the Jacobian matrix by

  H(f) = J(∇f)
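The relation H(f) = J(∇f) can be verified symbolically; a small sketch with SymPy (the test function is an arbitrary example):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 * y + sp.sin(y)

# Gradient of f as a column vector of first derivatives.
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# The Jacobian of the gradient coincides with the Hessian computed
# directly, i.e., H(f) = J(grad f).
assert grad_f.jacobian([x, y]) == sp.hessian(f, [x, y])
print(sp.hessian(f, [x, y]))
```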

  Eigenvalues and Eigenvectors

  An eigenvector e of a square matrix A (n × n) is a vector that satisfies

  Ae = λe

  (6)

where the scalar λ (real or complex) is the eigenvalue corresponding to the eigenvector. It is often useful to know the eigenvectors of a matrix because they represent the vectors for which multiplication by the matrix modifies the amplitude without changing the direction. If e is an eigenvector of A, then any scalar multiple of e must also be an eigenvector, as seen from (6). It follows that (6) must yield an infinite number of solutions for e and hence that

$$\det(\mathbf{A} - \lambda\,\mathbf{I}_n) = 0$$

  (7)

The eigenvalues of matrix A are thus the solutions to (7); substituting them into (6) yields the corresponding eigenvectors e1, e2, …

One can show that the trace of a matrix is equal to the sum of its eigenvalues, and that its determinant is equal to the product of its eigenvalues.
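Both properties are easy to illustrate numerically; a quick NumPy sketch (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigenvalues.sum())
assert np.isclose(np.linalg.det(A), eigenvalues.prod())

# Each eigenvector satisfies A e = lambda e.
for lam, e in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ e, lam * e)
```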

  E.4 Vector Operators

  The Gradient Operator

If f(x, y, z) is a scalar field at position (x, y, z) in three dimensions, the gradient of f, denoted grad(f) or ∇f, is defined in Cartesian coordinates by

$$\nabla f = \frac{\partial f}{\partial x}\,\mathbf{i} + \frac{\partial f}{\partial y}\,\mathbf{j} + \frac{\partial f}{\partial z}\,\mathbf{k}$$

where, as above, i, j, and k represent the basis of unit vectors in the coordinate system, and ∇ (nabla symbol) is the vector differential operator. At any point P, the gradient of a scalar-valued function f is a vector that points in the direction of the greatest change of f at point P. In spherical coordinates (Figure E.1), the gradient is expressed by

$$\nabla f = \frac{\partial f}{\partial r}\,\mathbf{i}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\mathbf{i}_\theta + \frac{1}{r\sin\theta}\frac{\partial f}{\partial \varphi}\,\mathbf{i}_\varphi$$

where ir, iθ, and iφ are unit vectors pointing along the coordinate directions, φ is the azimuth angle, and θ the zenith angle.

Figure E.1 Definition of spherical coordinates. Point P is defined by the radial distance r, the azimuthal angle φ, and the zenith angle θ. If i, j, and k are the unit vectors along rectangular coordinates x, y, and z, the unit vectors in spherical coordinates ir, iφ, and iθ are defined by ir = r/‖r‖ = (x i + y j + z k)/(x² + y² + z²)^(1/2), iφ = k × ir, and iθ = iφ × ir.

  The gradient of a vector-valued function f(x) is its Jacobian matrix J = ∂f/∂x. This is often used in a steepest-descent algorithm to find the minimum of f.

  Divergence of a Vector Field

If f = (fx, fy, fz)T is a vector function in 3-D Euclidean space, its divergence is a scalar field defined in Cartesian coordinates by

$$\nabla \cdot \mathbf{f} = \frac{\partial f_x}{\partial x} + \frac{\partial f_y}{\partial y} + \frac{\partial f_z}{\partial z}$$

The divergence of a vector field represents the flux generation per unit volume at each point of the field. In spherical coordinates, the divergence of a vector f(fr, fθ, fφ) is written

$$\nabla \cdot \mathbf{f} = \frac{1}{r^2}\frac{\partial (r^2 f_r)}{\partial r} + \frac{1}{r\sin\theta}\frac{\partial (f_\theta \sin\theta)}{\partial \theta} + \frac{1}{r\sin\theta}\frac{\partial f_\varphi}{\partial \varphi}$$

  Curl of a Vector Field

The curl of a vector field f = (fx, fy, fz)T is a vector defined as the cross product of the operator ∇ with vector f. In Cartesian coordinates,

$$\nabla \times \mathbf{f} = \left(\frac{\partial f_z}{\partial y} - \frac{\partial f_y}{\partial z}\right)\mathbf{i} + \left(\frac{\partial f_x}{\partial z} - \frac{\partial f_z}{\partial x}\right)\mathbf{j} + \left(\frac{\partial f_y}{\partial x} - \frac{\partial f_x}{\partial y}\right)\mathbf{k}$$

The curl of a vector field f represents the vorticity or circulation per unit area of the field. In spherical coordinates, it is expressed by

$$\nabla \times \mathbf{f} = \frac{1}{r\sin\theta}\left(\frac{\partial (f_\varphi \sin\theta)}{\partial \theta} - \frac{\partial f_\theta}{\partial \varphi}\right)\mathbf{i}_r + \left(\frac{1}{r\sin\theta}\frac{\partial f_r}{\partial \varphi} - \frac{1}{r}\frac{\partial (r f_\varphi)}{\partial r}\right)\mathbf{i}_\theta + \frac{1}{r}\left(\frac{\partial (r f_\theta)}{\partial r} - \frac{\partial f_r}{\partial \theta}\right)\mathbf{i}_\varphi$$

  Laplacian of a Scalar Field

The Laplacian Δf of a scalar field f is the divergence of its gradient:

  Δf = ∇ · (∇f)

It is a scalar field. In Cartesian coordinates, it is expressed as

$$\Delta f = \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}$$

In spherical coordinates, it is written

$$\Delta f = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\,\frac{\partial f}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 f}{\partial \varphi^2}$$

  Important Relations

If f is a scalar field and f a vector field (written in bold), we have

$$\nabla \times (\nabla f) = 0$$

$$\nabla \cdot (\nabla \times \mathbf{f}) = 0$$

$$\nabla \times (\nabla \times \mathbf{f}) = \nabla(\nabla \cdot \mathbf{f}) - \nabla^2 \mathbf{f}$$

$$\nabla \cdot (f\,\mathbf{f}) = f\,(\nabla \cdot \mathbf{f}) + (\nabla f) \cdot \mathbf{f}$$

$$\nabla \times (f\,\mathbf{f}) = f\,(\nabla \times \mathbf{f}) + (\nabla f) \times \mathbf{f}$$
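The first two identities can be checked symbolically; a short sketch using SymPy's vector module (the fields are arbitrary examples):

```python
from sympy import sin, exp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D("N")

# An arbitrary scalar field f and vector field F.
f = N.x**2 * N.y + sin(N.z)
F = N.x * N.y * N.i + exp(N.z) * N.j + N.y**2 * N.k

# Curl of a gradient vanishes; divergence of a curl vanishes.
print(curl(gradient(f)))     # zero vector
print(divergence(curl(F)))   # 0
```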

  Scalar and Vector Potentials

  Any vector field f whose curl is equal to zero can be expressed as the gradient of a scalar potential V

  f = − ∇V

  Any vector field f whose divergence is equal to zero can be expressed as the curl of a vector field a

  f = ∇ × a

  Integration Theorems

If f = (fx, fy, fz)T is a vector field defined in a given region, Γ a curve in this region, and dr = (dx, dy, dz)T an elementary displacement along this curve, the circulation C between points P1 and P2 is given by the line integral

$$C = \int_{P_1}^{P_2} \mathbf{f} \cdot d\mathbf{r}$$

For a vector f whose curl equals zero, the circulation can be expressed as a function of the potential V, and

$$C = \int_{P_1}^{P_2} \mathbf{f} \cdot d\mathbf{r} = -\int_{P_1}^{P_2} \nabla V \cdot d\mathbf{r} = V(P_1) - V(P_2)$$

In this case, the circulation is thus independent of the path between points P1 and P2. For a closed curve, the circulation of a vector whose curl is equal to zero is itself equal to zero.

The Kelvin–Stokes theorem, also called the curl theorem, relates the surface integral of the curl of vector f over a surface A in 3-D Euclidean space to the line integral of the vector field over its boundary Γ:

$$\iint_A (\nabla \times \mathbf{f}) \cdot \mathbf{n}\; dA = \oint_\Gamma \mathbf{f} \cdot d\mathbf{r}$$

where n is the unit vector normal to the surface, oriented consistently with the direction of travel along Γ by the right-hand rule. The Gauss–Ostrogradsky theorem, also called the divergence theorem, states that the outward flux of a vector field f through a closed surface A is equal to the volume integral of the divergence of f over the volume V inside the surface:

$$\oiint_A \mathbf{f} \cdot \mathbf{n}\; dA = \iiint_V (\nabla \cdot \mathbf{f})\; dV$$

The 2-D version (in a plane) of the divergence theorem, called Green's theorem, is expressed as

$$\oint_\Gamma \mathbf{f} \cdot \mathbf{n}\; ds = \iint_A \left(\frac{\partial f_x}{\partial x} + \frac{\partial f_y}{\partial y}\right) dx\, dy$$

where n is here the outward-pointing unit normal to the closed curve Γ bounding the planar region A.

  E.5 Differential Equations

  A differential equation is a mathematical expression that relates some function to its derivatives. The order of a differential equation is the highest order of the derivative(s) found in the equation. We consider here two types of differential equations: ordinary differential equations (ODEs) and partial differential equations (PDEs).

  Ordinary Differential Equations

An ODE prescribes a function and its derivative(s) relative to a single independent variable. A simple example of an initial value ODE for a vector-valued function y(t) is the first-order equation

$$\frac{d\mathbf{y}}{dt} = \mathbf{f}(\mathbf{y}, t)$$

which describes the rate at which the vector-valued function y(t) varies with time t, for an applied forcing f(y, t). The system is said to be autonomous if function f is not explicitly dependent on time [f = f(y)] and non-autonomous otherwise [f = f(y, t)]. The solution y(t) is expressed for a specified initial value y(t0) by

$$\mathbf{y}(t) = \mathbf{y}(t_0) + \int_{t_0}^{t} \mathbf{f}\big(\mathbf{y}(t'),\, t'\big)\; dt'$$
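The simplest numerical treatment of this initial value problem replaces the integral by repeated small steps; a minimal forward-Euler sketch (the test equation and step size are illustrative assumptions, not a recommended solver):

```python
import numpy as np

def euler(f, y0, t0, t_end, dt):
    """Integrate dy/dt = f(y, t) by the forward Euler method:
    y(t + dt) ~ y(t) + dt * f(y, t)."""
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end:
        y = y + dt * f(y, t)
        t += dt
    return y

# Test on the autonomous problem dy/dt = -y, y(0) = 1, whose exact
# solution is y(t) = exp(-t).
y1 = euler(lambda y, t: -y, [1.0], 0.0, 1.0, 1e-4)
print(y1, np.exp(-1.0))   # the two values agree to ~1e-4
```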

A linear ODE of order N, written here for a scalar function y(x), is expressed as

$$a_N(x)\,\frac{d^N y}{dx^N} + a_{N-1}(x)\,\frac{d^{N-1} y}{dx^{N-1}} + \cdots + a_1(x)\,\frac{dy}{dx} + a_0(x)\,y = f(x)$$

  It is said to be homogeneous if the forcing term f(x) = 0 and inhomogeneous otherwise. The solution of this equation requires that N independent conditions be specified. These can be, for example, values of function y and its N – 1 first derivatives at one end of the interval under consideration (e.g., at point x = 0). These conditions are referred to as initial conditions when the independent variable is time. Another option is to provide conditions at each boundary of the interval.

  Partial Differential Equations

A PDE prescribes a function and its partial derivatives relative to several independent variables. If y is a dependent variable and x(x1, x2, …, xn) are n independent variables, the general form of a first-order partial differential equation is

$$F\!\left(x_1, \ldots, x_n,\; y,\; \frac{\partial y}{\partial x_1}, \ldots, \frac{\partial y}{\partial x_n}\right) = 0$$

  One distinguishes between different forms of first-order PDEs:

Quasi-linear equation (linear in the first derivatives, with coefficients that may depend on y):

$$\sum_{i=1}^{n} a_i(x_1, \ldots, x_n, y)\,\frac{\partial y}{\partial x_i} = c(x_1, \ldots, x_n, y)$$

Linear equation (coefficients independent of y):

$$\sum_{i=1}^{n} a_i(x_1, \ldots, x_n)\,\frac{\partial y}{\partial x_i} = c(x_1, \ldots, x_n, y)$$

Strictly linear equation (linear in y and its derivatives):

$$\sum_{i=1}^{n} a_i(x_1, \ldots, x_n)\,\frac{\partial y}{\partial x_i} + b(x_1, \ldots, x_n)\,y = c(x_1, \ldots, x_n)$$

The second-order linear PDE for a field ψ(x, y)

$$A\,\frac{\partial^2 \psi}{\partial x^2} + B\,\frac{\partial^2 \psi}{\partial x\,\partial y} + C\,\frac{\partial^2 \psi}{\partial y^2} + D\,\frac{\partial \psi}{\partial x} + E\,\frac{\partial \psi}{\partial y} + F\,\psi + G = 0$$

  is said to be

  elliptic if B2 − 4AC < 0

  parabolic if B2 − 4AC = 0

  hyperbolic if B2 − 4AC > 0

by formal analogy with the conic sections (ellipse, parabola, and hyperbola) obtained under the same conditions from the quadratic equation

$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$$
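As a small illustration of this classification (the helper function is ours; the examples anticipate the equations discussed next):

```python
def classify_pde(A, B, C):
    """Classify a second-order linear PDE with principal-part
    coefficients A, B, C by the sign of the discriminant B^2 - 4AC."""
    disc = B**2 - 4*A*C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

print(classify_pde(1, 0, 1))    # Laplace equation: elliptic
print(classify_pde(1, 0, 0))    # diffusion equation: parabolic
print(classify_pde(1, 0, -1))   # wave equation (c = 1): hyperbolic
```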

  An example of an elliptic PDE is the Laplace equation

$$\nabla^2 \psi = 0$$

describing a time-independent “boundary value” problem. The determination of the solution requires that a condition be prescribed at each point of the boundary of the domain in which this equation is to be solved. This condition can be a specified value of the function ψ (Dirichlet or first-type condition) or a specified value of the normal derivative of ψ (Neumann or second-type condition).

An example of a parabolic PDE is the diffusion equation

$$\frac{\partial \psi}{\partial t} = K\,\frac{\partial^2 \psi}{\partial x^2}$$

where K > 0 is the diffusion coefficient.

  The solution of this linear parabolic PDE requires that an initial condition and a condition at each point of the boundary of the spatial domain be specified.

Finally, the wave equation

$$\frac{\partial^2 \psi}{\partial t^2} = c^2\,\frac{\partial^2 \psi}{\partial x^2}$$

is an example of a hyperbolic PDE. The advection equation

$$\frac{\partial \psi}{\partial t} + c\,\frac{\partial \psi}{\partial x} = 0$$

can also be classified as a hyperbolic PDE since its solution satisfies the wave equation. The solution of the linear advection equation requires that initial conditions be specified along with boundary conditions at the upstream boundary of the spatial domain.

  Another characterization of PDEs that is more useful from a computational point of view is to distinguish between initial value problems (such as the advection and the diffusion equations) in which the equations are integrated forward in time from a specified initial condition, and boundary value problems (such as the Laplace equation) in which the correct solution must be found everywhere at once (Press et al., 2007).

Different analytical methods are available to solve PDEs (see, e.g., Durran, 2010; Co, 2013), including the method of characteristics, discussed in different textbooks, and the use of the Laplace or Fourier transforms (see Section E.6). Numerical approaches to solve the advection and diffusion equations are presented in Chapters 7 and 8.

  E.6 Transforms

  Orthogonal Transforms

  We consider a linear transformation of n-element vector x to a new n-element vector y:

  y = M x

where M is an n × n matrix. The transformation is said to be orthogonal if matrix M is orthogonal, i.e., if its inverse M⁻¹ is equal to its transpose MT:

$$\mathbf{M}^{-1} = \mathbf{M}^{T}, \qquad \text{i.e.,} \qquad \mathbf{M}^{T}\mathbf{M} = \mathbf{M}\mathbf{M}^{T} = \mathbf{I}_n$$
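A 2-D rotation is the classic example of an orthogonal transformation; a short numerical check (the rotation angle is arbitrary):

```python
import numpy as np

theta = 0.7   # arbitrary rotation angle (radians)
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: the transpose is the inverse, so M^T M = I.
assert np.allclose(M.T @ M, np.eye(2))

# Orthogonal transforms preserve vector norms.
x = np.array([3.0, 4.0])
y = M @ x
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
```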

 
