
X and the City: Modeling Aspects of Urban Life


by Adam, John A.


  so the probability of the infection spreading from this encounter is proportional to NM, and this is just N(K − N). Furthermore, it is reasonable to suppose that the rate of change of N is proportional to this probability, that is,

  dN/dt = kN(K − N), k > 0,

  which is just the form of equation (15.8) with some obvious notational changes.
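The rate equation described above — dN/dt proportional to N(K − N) — can be sketched numerically. Here is a minimal forward-Euler integration; the constant k and the values chosen (K = 1000, k = 0.001) are illustrative, not taken from the book.

```python
# Euler integration of dN/dt = k*N*(K - N), the logistic form derived above.
# k and K are illustrative values, not taken from the book.

def logistic_steps(n0, k, K, dt, steps):
    """Advance N(t) with forward-Euler steps of size dt."""
    n = n0
    history = [n]
    for _ in range(steps):
        n += k * n * (K - n) * dt
        history.append(n)
    return history

traj = logistic_steps(n0=1.0, k=0.001, K=1000.0, dt=0.1, steps=2000)
print(f"start: {traj[0]:.1f}, end: {traj[-1]:.1f}")  # N rises toward K = 1000
```

The characteristic S-shaped growth appears because the rate is small both when N is small (few infected) and when N is near K (few left to infect).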

  Appendix 9

  A MINUSCULE INTRODUCTION TO FRACTALS

  The person most often associated with the discovery of fractals (and rightly so) is the mathematician Benoit B. Mandelbrot. Indeed, one doesn’t need to be the proverbial rocket scientist to see why the Mandelbrot set is so named (for details of this amazing set, just Google the name!). The mathematics underlying the structure of fractals (geometric measure theory), however, had been developed long before the “computer revolution” made possible the visualization of such complicated mathematical objects. In the 1960s Mandelbrot, in a paper entitled “How Long Is the Coast of Britain?”, drew attention to some interesting and very surprising results that had been published posthumously by the English meteorologist Lewis Fry Richardson. Richardson had noticed that the measured length of the west coast of Britain depended heavily on the scale of the map used to make those measurements: a map with scale 1:10,000,000 (1 cm being equivalent to 100 km) has less detail than a map with scale 1:100,000 (1 cm equivalent to 1 km). The more detailed map, with more “nooks and crannies,” gives a larger value for the coastline. Alternatively, one can imagine measuring a given map with smaller and smaller measuring units, or even walking around the coastline with smaller and smaller graduations on our meter rule. Of course this presumes that at such small scales we can meaningfully define the coastline, but naturally this process cannot be continued indefinitely due to the atomic structure of matter. This is completely unlike the “continuum” mathematical models to which we have referred in this book, and in which there is no smallest scale.

  Richardson also investigated the behavior with scale for other geographical regions: the Australian coast, the South African coast, the German Land Frontier (1900), and the Portuguese Land Frontier. For the west coast of Britain in particular, he found the following relationship between the total length s in km and the numerical value a of the measuring unit (in km, so a is dimensionless):

  s = s1 a^(−0.22),

  where s1 is the length when a = 1. Clearly, as a is reduced, s increases! If the measuring unit were one meter instead of one km, the value of s would increase by a factor of about 4.6 according to this model. Clearly, the concept of length in this context is rather amorphous; is there a better way of describing the coastline? Can we measure the “crinkliness” or “roughness” or “degree of meander” or some other such quantity? Mandelbrot showed that the answer to this question is yes, and the answer is intimately connected with a generalization of our familiar concept of “dimension.” That is the so-called topological dimension, expressed in the natural numbers 0, 1, 2, 3, . . . (there is no reason to stop at three, by the way). It turns out that the concept of fractal dimension used by Mandelbrot (the Hausdorff–Besicovitch dimension), being a ratio of logarithms, is not generally an integer. Mandelbrot defines a fractal as “a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension.”
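Richardson's scaling law is easy to check with a few lines of code. The exponent 0.22 used here is inferred from the statement above that shrinking the measuring unit from 1 km to 1 m multiplies the measured length by about 4.6, i.e. 1000^x ≈ 4.6 gives x ≈ 0.22.

```python
# Richardson's empirical scaling s(a) = s1 * a**(-0.22) for the west coast
# of Britain; the exponent is inferred from the factor-of-4.6 statement.

def coastline_length(a_km, s1=1.0, exponent=0.22):
    """Measured length (in units of s1) with a measuring unit of a_km km."""
    return s1 * a_km ** (-exponent)

ratio = coastline_length(0.001) / coastline_length(1.0)  # 1 m unit vs 1 km unit
print(f"length ratio: {ratio:.2f}")  # about 4.6
```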

  Consider the measurement of a continuous curve by a “measuring rod” of length a. Suppose that it fits N times along the length of the curve, so that the measured length L = Na. Obviously then, N = L/a is a function of a, that is, N = N(a). Thus if a = 1, N(1) = L. Similarly, if a = 1/2, N(1/2) = 2L, N(1/3) = 3L, and so on. For fractal curves, N = La^(−D), where D > 1 in general, and it is called the fractal dimension. This means that making the scale three times as large (i.e., making a one third of its previous size) may lead to the measuring rod fitting around the curve more than three times the previous amount. This is because if N(1) = L as before,

  N(1/3) = L(1/3)^(−D) = 3^D L > 3L when D > 1.

  In what follows we shall take unit length, L = 1 when a = 1, without loss of generality. Given that N = a^(−D) it follows that

  D = −ln N/ln a = ln N/ln(1/a).

  More precisely, Mandelbrot used the definition of the fractal dimension as

  D = lim (a → 0) ln N(a)/ln(1/a).

  If this has the same value at each step, then the former definition is perfectly general. Let us apply the definition to what has come to be called the Koch snowflake curve. The basic iteration step is to take each line segment or side of an equilateral triangle, remove the middle third, and replace it by two sides of an equilateral triangle (each side of which is equal in length to the middle third), so now it is a “Star of David.” Each time this procedure is carried out the previous line segment is increased in length by a factor 4/3. Thus a = 1/3 and N = 4 (there now being four smaller line segments in place of the original one), so

  D = ln 4/ln 3 ≈ 1.26.

  The limiting snowflake curve (a → 0) thus “intrudes” a little into the second dimension; this intrusion is indicated by the degree of “meander” as expressed by the fractal dimension D. This curve is everywhere continuous but nowhere differentiable! The existence of such curves, continuous but without tangents, was first demonstrated well over a century ago by the German mathematician Karl Weierstrass (1815–1897), and this horrified many of his peers. Physicists, however, were more welcoming; Ludwig Boltzmann (1844–1906) wrote to Felix Klein (1849–1925) in January 1898 with the comment that such functions might well have been invented by physicists because there are problems in statistical mechanics “that absolutely necessitate the use of non-differentiable functions.” He had in mind, no doubt, Brownian motion (this is the constant and highly erratic movement of tiny particles (e.g., pollen) suspended in a liquid or a gas, or on the surface of a liquid).

  Consider next the box-fractal: this is a square, in which the basic iteration is to divide it into 9 identical smaller squares and remove the middle square from each side, leaving 5 of the original 9. It is readily seen that a = 1/3 and N = 5, so

  D = ln 5/ln 3 ≈ 1.46.

  The Sierpinski triangle can be generated from any triangle: the basic iteration is to join the midpoints of the sides with line segments and remove the middle triangle. Now a = 1/2 and N = 3, so

  D = ln 3/ln 2 ≈ 1.58.

  These fractals penetrate increasingly more into the second dimension. We will mention two more at this juncture: the Menger sponge and Cantor dust. For the former, we do in three dimensions what was done in two for the box fractal. Divide a cube into 27 identical cubes and “push out” the middle ones in each face (and the central one). Now it follows that a = 1/3 and N = 20 (seven smaller cubes having been removed in the basic iterative step), so in the limit of the requisite infinite number of iterations

  D = ln 20/ln 3 ≈ 2.73

  (intruding well into the third dimension). Now for Cantor dust: what is that? Take any line segment and remove the middle third; this is the basic iteration. In the limit of an infinity of such iterations for which a = 1/3 and N = 2, it follows that

  D = ln 2/ln 3 ≈ 0.63,

  which is obviously less than one. Quite amazing.
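All five of the self-similar sets above have dimension D = ln N/ln(1/a), where N is the number of pieces produced by the basic iteration and a is the scaling factor of each piece. A few lines of code compute them all at once:

```python
from math import log

# Dimension D = ln N / ln(1/a) for each of the self-similar sets above:
# the basic iteration produces N copies, each scaled down by a factor a.
fractals = {
    "Koch snowflake": (4, 1 / 3),
    "box fractal": (5, 1 / 3),
    "Sierpinski triangle": (3, 1 / 2),
    "Menger sponge": (20, 1 / 3),
    "Cantor dust": (2, 1 / 3),
}

dims = {name: log(N) / log(1 / a) for name, (N, a) in fractals.items()}
for name, D in dims.items():
    print(f"{name:20s} D = {D:.4f}")
```

Note how the Cantor dust, alone among these, has a dimension below its "parent" dimension of one: infinitely many points, yet total length zero.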

  Appendix 10

  RANDOM WALKS AND THE DIFFUSION EQUATION

  Consider the motion of non-interacting “point particles” along the x-axis only, starting at time t = 0 and x = 0 and executing a random walk. The particles can be whatever we wish them to be: molecules, pollen, cars, inebriated people, white-tailed deer, rabbits, viruses, or anything else as long as there are sufficiently many for us to be able to assume a continuous distribution, and yet not so dense that they interact and interfere with one another (though this may seem rather like “having our cake and eating it”!). In its simplest description the particle motion is subject to the following constraints or “rules”:

  1. Every τ seconds, each particle moves to the left or the right at speed v, covering a distance δ = vτ, so that its displacement in each step is ±δ. We consider all these parameters to be constants, but in reality they will depend on the size of the particle and the medium in which it moves.

  2. The probability of a particle moving to the left is ½, and that of moving to the right is also ½. Thus they have no “memory” of previous steps (just like the toss of a fair coin); successive steps are statistically independent, so the walk is not biased.

  3. As mentioned above, each particle moves independently of the others (valid if the density of particles in the medium is sufficiently dilute).
  Some consequences of these assumptions are that (i) the particles go nowhere on the average and (ii) that their root-mean-square displacement is proportional, not to the time elapsed, but to the square root of the time elapsed. Let’s see why this is so. With N particles, suppose that the position of the ith particle after the nth step is denoted by xi(n). From rule 1 it follows that xi(n) − xi(n − 1) = ±δ. From rule 2, the + sign will apply to about half the particles and the − sign will apply to the other half if N is sufficiently large; in practice this will be the case for molecules. Then the mean displacement of the particles after the nth step is

  ⟨x(n)⟩ = (1/N) Σ xi(n) = (1/N) Σ [xi(n − 1) ± δ] ≈ ⟨x(n − 1)⟩.

  That is, the mean position does not change from step to step, and since they all started at the origin, the mean position is still the origin! This means that the spread of the particles is symmetric about the origin.

  Let’s now consider the mean-square displacement ⟨x²(n)⟩. It is clear that

  xi²(n) = [xi(n − 1) ± δ]² = xi²(n − 1) ± 2δ xi(n − 1) + δ²,

  so that, the cross term averaging to zero,

  ⟨x²(n)⟩ = ⟨x²(n − 1)⟩ + δ².

  Since xi(0) = 0 for all particles i,

  ⟨x²(n)⟩ = nδ².

  Thus the mean-square displacement increases as the step number n, and the root-mean-square displacement increases as the square root of the step number n. But the particles execute n steps in a time t = nτ so n is proportional to t. Thus the spreading is proportional to √t. And as we have shown in Chapter 19 for the case of a time-independent smokestack plume or a line of traffic, the spreading is proportional, in the same fashion, to the square root of distance from the source.
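Both consequences — mean displacement near zero, mean-square displacement growing like nδ² — are easy to confirm with a quick Monte Carlo simulation (the particle count and step count here are arbitrary choices for illustration):

```python
import random

# Monte Carlo check of the random-walk results derived above: after n steps
# of size delta, the mean displacement stays near 0 while the mean-square
# displacement grows like n * delta**2.
random.seed(1)
num_particles, n_steps, delta = 20000, 100, 1.0

positions = [0.0] * num_particles
for _ in range(n_steps):
    positions = [x + random.choice((-delta, delta)) for x in positions]

mean = sum(positions) / num_particles
mean_sq = sum(x * x for x in positions) / num_particles
print(f"mean = {mean:.3f}, mean square = {mean_sq:.1f} "
      f"(theory: {n_steps * delta**2})")
```

With 20,000 particles the sample mean hovers near zero and the sample mean square comes out close to the theoretical value nδ² = 100.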

  Incidentally, there is a very nice application of these ideas, extended to three dimensions, in the book by Ehrlich (1993). In an essay entitled “How slow can light go?” he points out that, barring exotic places like neutron stars and black holes, it’s probably in the center of stars that light travels most slowly. This is because the denser the medium is, the slower is the speed of light.

  In fact, it takes light generated in the central core of our star, the sun, about 100,000 years to reach the surface! (Do not confuse this with the 8.3 minutes it takes for light to reach Earth from the sun’s surface.) The reason for this long time is that the sun is extremely opaque, especially in its deep interior. This means that light travels only a short distance (the path length) before being absorbed and re-emitted (usually in a different direction). The path of a typical photon of light traces out a random walk in three dimensions, much like the two-dimensional path of a highly inebriated person walking away from a lamppost. After a random walk consisting of N steps of length (say) 1 ft each, it would be very unlikely for the person to end up N feet away from the lamppost. In fact the average distance staggered away from the lamppost turns out to be √N feet.

  In the sun, the corresponding path length is about 1 mm, not 1 foot. This is the average distance light travels before being absorbed, based on estimates of the density and temperature inside the sun. The radius of the sun is about 700,000 km, or 7 × 10^5 km, or approximately 10^12 mm, rounding up for simplicity to the nearest power of ten. Now watch carefully: there’s nothing up my sleeve. We require this distance, the radius of the sun, to be the value of √N mm, the distance from the center traveled by our randomly “walking” photon. This means that N ≈ (10^12)² = 10^24, and 10^24 steps of 1 mm is a total path length of 10^18 km. Now 1 light year is about 10^13 km (check it for yourself!), so the path is about 10^5 light years long, which by definition takes 10^5, or 100,000, years to traverse! Good gracious me!
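The arithmetic of this estimate is a nice one-liner exercise. The sketch below follows the rounding used above (solar radius taken as 10^12 mm) and uses the standard value of a light year, ~9.46 × 10^12 km:

```python
# Order-of-magnitude version of Ehrlich's estimate: a photon random-walking
# in ~1 mm steps needs sqrt(N) * step = solar radius to escape.
step_mm = 1.0                   # mean free path, ~1 mm
radius_mm = 1e12                # solar radius, rounded up as in the text
N = (radius_mm / step_mm) ** 2  # steps so that sqrt(N) * step = radius
path_km = N * step_mm * 1e-6    # total path length walked, in km
light_year_km = 9.46e12         # one light year, ~1e13 km
years = path_km / light_year_km # light covers 1 light year per year
print(f"N = {N:.0e} steps, escape time = {years:.0f} years")  # ~1e5 years
```

Using the un-rounded radius (7 × 10^11 mm) instead gives roughly 50,000 years; either way, the "100,000 years" figure is an order-of-magnitude statement.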

  THE DIFFUSION EQUATION

  Now we formulate the one-dimensional diffusion equation in a very similar fashion. Basically, we wish to be able to describe the “density” of a distribution of “particles” along a straight line (the x-direction) as a function of time (t). N(x, t) will denote the number of particles at location x and time t. How many particles at time t will move across unit area perpendicular to the x-direction from x to x + δ? By the time t + τ (i.e., during the next time step), half of the particles at x will have moved to the location x + δ and half of those located at x + δ will have moved to x (see Figure A10.1). This means that the net number moving from x to x + δ is [N(x, t) − N(x + δ, t)]/2. The total number of these particles per unit time and per unit area is called the net flux. We’ll call this Fx and rearrange it as

  Fx = −(δ²/2τ)(1/δ)[N(x + δ, t)/δ − N(x, t)/δ].

  Figure A10.1. Schematic for the flux argument leading to equation (A10.4).

  The quantity δ²/2τ has dimensions of (distance)²/(time), and this will be called the diffusion coefficient D. The first quotient in square brackets is the number of particles per unit volume at x + δ and time t. This is just the concentration, denoted by C(x + δ, t). Similarly, the second term is C(x, t). This means that we can rewrite the net flux as

  Fx = −D[C(x + δ, t) − C(x, t)]/δ.

  At this point we are in a position to carry out a familiar procedure: taking the limit of this quotient as δ → 0. If this limit exists, we can write

  Fx = −D ∂C/∂x.

  Physically this means that the net flux is proportional to the concentration gradient, and it is directed opposite to it. We can think of this by imagining instead that C is temperature; the flow of heat will be from the higher temperature region toward the lower one—in a direction opposite to the gradient of the temperature. If on the other hand C is the concentration of sugar in my tea, the flow of sugar molecules is toward regions of lower concentration.

  Let’s take this one stage farther and consider a little slab of thickness δ and area A perpendicular to the x-axis (see Figure A10.2). In a time τ, the number of particles entering from the left is Fx(x)Aτ, while Fx(x + δ)Aτ leave from the right (assuming no particles are created or destroyed). This means that the number of particles per unit volume in the slab must increase at a rate given by the expression

  ∂C/∂t ≈ [Fx(x, t) − Fx(x + δ, t)]/δ.

  Figure A10.2. Schematic for the flux argument leading to equation (A10.6).

  In the limit τ → 0, δ → 0 we obtain

  ∂C/∂t = −∂Fx/∂x = D ∂²C/∂x².

  There is another mechanism that must be included in any realistic discussion of pollution: wind. As with the discussion in Chapter 19 we shall consider the effects of a wind with constant speed U in the x-direction only (even when a higher-dimensional diffusion equation is used). The rate at which particles enter the slab is approximately UC(x, t)A, and the rate at which they leave is approximately UC(x + δ, t)A, so the wind’s contribution to the right-hand side of the diffusion equation is approximately

  U[C(x, t) − C(x + δ, t)]/δ.

  And so in the usual limit, equation (A10.7) is generalized slightly to become

  ∂C/∂t = D ∂²C/∂x² − U ∂C/∂x.
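The generalized diffusion equation with wind, ∂C/∂t = D ∂²C/∂x² − U ∂C/∂x, can be explored with a minimal explicit finite-difference sketch. The grid size, D, and U below are illustrative choices (kept small enough for the explicit scheme to be stable), not values from the book; the boundary is taken periodic so that total mass is conserved exactly.

```python
# Explicit finite-difference sketch of dC/dt = D*d2C/dx2 - U*dC/dx
# on a ring of grid points (periodic boundary). Illustrative parameters only.

def step(C, D, U, dx, dt):
    """One time step: central differences for diffusion and advection."""
    n = len(C)
    new = []
    for i in range(n):
        left, right = C[i - 1], C[(i + 1) % n]
        diff = D * (right - 2 * C[i] + left) / dx**2  # diffusion term
        adv = -U * (right - left) / (2 * dx)          # wind (advection) term
        new.append(C[i] + dt * (diff + adv))
    return new

# Start with all mass in one cell; it should spread out and drift downwind,
# while the total mass stays constant.
C = [0.0] * 50
C[10] = 1.0
for _ in range(200):
    C = step(C, D=0.1, U=0.5, dx=1.0, dt=0.5)
print(f"total mass: {sum(C):.6f}, peak: {max(C):.4f}")
```

The peak concentration drops as the plume spreads (√t-spreading again), while the wind term simply translates the profile downstream without changing its shape.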

  Appendix 11

  RAINBOW/HALO DETAILS

  My heart leaps up when I behold a rainbow in the sky

  —William Wordsworth

  In Chapter 22 a contrast was drawn between some meteorological optical effects—rainbows and some ice crystal halos—observed during the day, from those potentially observable from nearby light sources at night. This Appendix summarizes some of the salient features of rainbows and one of the common ice crystal halos.

  So what is a rainbow, and what causes it? A rainbow is sunlight, displaced by reflection and dispersed by refraction in raindrops, seen by an observer with his or her back to the sun. The primary rainbow, which is the lower and brighter of two that may be seen, is formed from two refractions and one reflection in myriads of raindrops (see Figure A11.1). It can be seen and photographed, but it is not located at a specific place, only in a particular set of directions. Obviously the raindrops causing it are located in a specific region in front of the observer. The path for the secondary rainbow is similar, but involves one more internal reflection. In principle, an unlimited number of higher-order rainbows exist from a single drop, but light loss at each reflection limits the number of visible rainbows to two. Claims have been made concerning observations of a tertiary bow (and even a quaternary bow); however, such a bow would occur around the sun and be very difficult to observe, quite apart from its intrinsic faintness. Nevertheless, in 2011, significant photographic evidence for such bows was published in a reputable scientific journal (see Großmann et al. (2011) and Theusner (2011)); it will be exciting to see what further research is carried out in this area.

  Returning to the primary bow, note that while each drop produces its individual primary rainbow, what is seen by an observer is the cumulative set of images from myriads of drops, some contributing to the red region of the bow, others to
the orange, yellow, green, and so forth. Although each drop is falling, there are numerous drops to replace each one as it falls through a particular location, and so the rainbow, for the period that it lasts, is for each observer effectively a continuum of colors produced by a near-continuum of drops.

  Let’s start with an examination of the basic geometry for a light ray entering a spherical droplet. From Figure A11.1 note that after two refractions and one reflection the light ray shown contributing to the rainbow has undergone a total deviation of D(i) radians, where

  D(i) = π + 2i − 4r,

  in terms of the angles of incidence (i) and refraction (r), respectively. The latter is a function of the former, this relationship being expressed in terms of Snell’s law of refraction,

  sin i = n sin r,

  where n is the refractive index of the water.

  Figure A11.1. Ray path inside a spherical drop in the formation of a primary rainbow from sunlight.
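The rainbow angle emerges from this geometry by minimizing the deviation D(i) = π + 2i − 4r(i) over the angle of incidence, with r(i) given by Snell's law. A brute-force numerical scan (using the standard value n = 4/3 for water) locates the minimum-deviation ray:

```python
from math import asin, sin, pi, degrees

# Deviation D(i) = pi + 2i - 4r for the primary bow, with Snell's law
# sin i = n sin r; scanning i locates the minimum-deviation (rainbow) ray.
def deviation(i, n=4 / 3):
    r = asin(sin(i) / n)           # angle of refraction from Snell's law
    return pi + 2 * i - 4 * r

# scan angles of incidence from just above 0 up to (just below) 90 degrees
angles = [k * (pi / 2) / 9000 for k in range(1, 9000)]
i_min = min(angles, key=deviation)
print(f"minimum deviation = {degrees(deviation(i_min)):.1f} deg "
      f"at i = {degrees(i_min):.1f} deg")   # about 138 deg at i ~ 59 deg
```

The minimum deviation of about 138° means the returning light is concentrated at 180° − 138° ≈ 42° from the antisolar point: the familiar angular radius of the primary bow.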

 
