
How Big is Big and How Small is Small


by Timothy Paul Smith


  Figure 4.3 The number of hurricanes in the North Atlantic (2003–2012) as a function of windspeed.

  One other similarity between earthquakes and hurricanes is that as the severity of the destruction increases, the frequency of the occurrence decreases (see Figure 4.3). Thank goodness!

  ***

  While on the subject of geology, one other curious scale worth mentioning is the Mohs scale of mineral hardness. This is the scale on which diamond, the hardest natural substance, is a 10 and talc is a 1. Friedrich Mohs (1773–1839) developed his scale of mineral hardness in about 1812. What made Mohs’ scale so useful is its simplicity. He started with ten readily available minerals and arranged them in order of hardness. How can you tell which of two stones is harder? You simply try to scratch one with the other, and the harder stone always wins. Quartz cannot scratch topaz, because topaz is harder. On the Mohs scale quartz is a 7 and topaz is an 8. Since the ten standards are relatively common, a geologist heading into the field needs only to carry a pocketful of standards to determine the hardness of a new, unknown sample with a scratch test.
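
  The comparison test lends itself to a simple procedure: scratch the unknown sample against successive standards until one of them resists. Below is a minimal sketch of that idea (mine, not the book's); the function scratches(a, b), which reports whether mineral a scratches mineral b, is a hypothetical stand-in for the field test.

```python
# Illustrative sketch: bracketing an unknown sample on the Mohs scale by
# scratch-testing it against the ten standard minerals, softest first.
MOHS_STANDARDS = ["talc", "gypsum", "calcite", "fluorite", "apatite",
                  "feldspar", "quartz", "topaz", "corundum", "diamond"]

def rate_hardness(sample, scratches):
    """Return (low, high) Mohs bounds for `sample` from pairwise scratch tests.

    `scratches(a, b)` is assumed to return True if mineral `a` scratches `b`.
    """
    low, high = 0, 11                     # unknown could lie anywhere at first
    for value, standard in enumerate(MOHS_STANDARDS, start=1):
        if scratches(sample, standard):   # sample scratches the standard,
            low = value                   # so it is at least this hard
        else:                             # the standard resists,
            high = value                  # so the sample is no harder than this
            break
    return low, high
```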

  Is it the best, most objective and quantifiable scale? No, not really. Ideally, hardness would be a measure of how much pressure is needed to make a standard scratch. Alternatively, we could drag a diamond point with a standard pressure across the face of our sample and then measure the size of the scratch. This is the technique behind a sclerometer. We can use a sclerometer to make an absolute measurement of hardness. Not only do we know that quartz (Mohs 7) is harder than talc (Mohs 1), but we now know that it is 100 times harder. A diamond, that perfect lattice of carbon atoms that makes nature’s hardest substance, is a Mohs 10. More than that, it is 1500 times harder than talc, or 15 times harder than quartz.

  The second hardest mineral on Mohs’ scale is corundum. This oxide of aluminum (Al2O3) comes to us in several forms including rubies, sapphires and emery. Emery, of course, is that grit found on emery cloth, which is used to “sand” steel. Since it can scratch nearly everything, it is not surprising to find it high up on the hardness scale. At the other end is talc, which we usually encounter as a powder and find hard to imagine as a stone. Soapstone is primarily talc, valued because its softness allows it to be easily carved.

  Figure 4.4 The Mohs scale versus the absolute hardness of minerals. The Mohs scale was designed to accommodate a simple technique, but in fact is close to the logarithm of absolute hardness.

  What makes the Mohs scale so interesting for us is that as you go up the scale, from talc to gypsum, calcite, fluorite and apatite, the hardness just about doubles with each step. The ascent in hardness continues through feldspar, quartz, topaz, corundum and finally to diamond. If hardness were to increase by a factor of 2, or even 2.2, with each step, the scale would clearly be logarithmic. It does not rise with perfectly tuned steps (see Figure 4.4)—the step between gypsum (2) and calcite (3) is a factor of 4.5 on an absolute hardness scale—but it is close. It is not exactly logarithmic because Mohs’ objective was to create a simple test that could be used in the field. The fact that it is almost logarithmic is accidental, except that nature seems to like logarithmic scales. On the practical side, there are a great many more soft rocks than hard rocks, so Mohs partitioned his scale where the field is most dense. Finally, the Mohs scale is based on a standard collection of rocks and not on an ideal; it is based on the technique of comparison and not on an absolute scale.
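
  A quick back-of-the-envelope check of that "about doubles" claim, using only the ratios quoted above (quartz roughly 100 times harder than talc, diamond roughly 1500 times), gives a per-step factor close to 2; this is my sketch, not data from Figure 4.4.

```python
# Implied constant factor per Mohs step, from the ratios quoted in the text:
# quartz (Mohs 7) is ~100x harder than talc (Mohs 1), diamond (Mohs 10) ~1500x.
quartz_factor = 100 ** (1 / (7 - 1))      # six steps from talc to quartz
diamond_factor = 1500 ** (1 / (10 - 1))   # nine steps from talc to diamond

print(f"talc to quartz:  {quartz_factor:.2f} per step")    # about 2.15
print(f"talc to diamond: {diamond_factor:.2f} per step")   # about 2.25
```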

  ***

  One other scale that was originally based upon comparing samples to standards is the star magnitude or brightness scale. Curiously enough it was the word “magnitude” for stars that inspired Richter to use the same word for earthquakes.

  Our earliest encounter with stellar magnitude is in Claudius Ptolemy’s catalog of stars from about 140 AD. Ptolemy was part of the intellectual Greek society in Alexandria, Egypt, at the time of the Roman Empire. It was a community known for its library and built upon the great Greek academic tradition. Ptolemy’s Almagest, with the position and magnitude of over a thousand stars, is the oldest catalog of stellar brightness that we still have, but it is based upon the lost works of Hipparchus (~190–120 BC), who ranked stars into six different magnitudes of brightness (Figure 4.5 is based on data from the Almagest). In this system the brightest stars are first magnitude stars. Stars that are half as bright as first magnitude are second magnitude. Stars that are half as bright as second magnitude, or a quarter of the brightness of a first magnitude, are labeled third magnitude. And so on down to sixth magnitude stars, which are about the dimmest things you can see with the unaided eye. So this scale, with each step in magnitude being a factor of two in brightness, is a logarithmic scale.

  I think it is worth our effort to understand stellar magnitude because later in this book the stellar magnitude scale will be one of the major tools used for determining the distances to stars, and therefore the size of the universe.

  Ideally the magnitude of a star, as determined by Hipparchus or recorded in the Almagest, is determined by just looking at the star. In practice it is hard to judge brightness and half-brightness; the naked eye does not determine these things precisely. If every step really were a factor of two, then a sixth magnitude star would have one thirty-second of the light of a first magnitude star, or about 3% of the brightness. But by modern measurements, the stars the Almagest rated sixth magnitude have about 1% of the light of a first magnitude star. The eye is actually more sensitive than Hipparchus or Ptolemy appreciated.
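
  The arithmetic behind that mismatch is short enough to check directly; the 2.5 figure that falls out at the end is not in the paragraph above but anticipates Pogson's ratio, introduced below.

```python
# Five steps separate first and sixth magnitude. With a factor of two per step,
# a sixth magnitude star keeps 1/2**5 of the light of a first magnitude star:
print(1 / 2**5)        # 0.03125, i.e. about 3%

# Modern measurements put the Almagest's sixth-magnitude stars nearer 1%.
# The step factor that spreads a ratio of 100 over five steps is the fifth
# root of 100 -- Pogson's ratio, which we will meet shortly:
print(100 ** (1 / 5))  # 2.511886...
```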

  In the Almagest the stars Rigel, Sirius and Vega were all rated first magnitude, even though their intensity, by modern measurements, varies by a factor of over four. This seems to suggest that the Greeks did not know how to rate the stars. In fact the technique that they used was to recognize a few standards and then compare new stars to nearby standards. For example, if Ptolemy called Polaris, the North Star, a 3, then you could decide that Electra, a star in the Pleiades, was dimmer and so could be rated a 5. The art of rating stars is then a lot like Mohs’ scale, where new samples are compared to old and established samples.

  Figure 4.5 The stars of Orion as reported in the Almagest.

  The science of measuring starlight, or any light, is called photometry, and by the middle of the nineteenth century people were making much more precise measurements. This was the age of the telescope and the eye, before electronic sensors or even photographic plates. So astronomers used a clever bit of optics called a heliometer. The heliometer was originally designed to measure the Sun’s diameter as it varied over the year (as the Earth moved), but it had a use far beyond the Sun. The heliometer could take two images—two views of the sky—and with internal mirrors put them next to each other. The technique for measuring star brightness is to always start with the same star: Vega, with magnitude 0, and therefore even brighter than magnitude 1. With the heliometer one could view Vega and the star one wanted to rate, say Polaris, at the same time. It is very clear that Polaris is dimmer, but by how much? What one does now is cover up part of the telescope that is pointing towards Vega until both stars appear to have the same brightness. In this case one needs to obstruct three quarters of the light from Vega to get them to match. So Vega, magnitude 0, has four times the brightness of Polaris, and therefore Polaris has a magnitude of 2 by modern measurements.
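
  A minimal sketch of that bookkeeping, using the factor-of-two-per-magnitude convention the comparison above relies on (Pogson's refinement of the step size comes shortly); the function name is mine.

```python
import math

def magnitude_difference(obstructed_fraction):
    """Magnitude gap implied by having to dim the reference star by
    `obstructed_fraction` to match the target, with a factor of two in
    brightness per magnitude (the convention used in the text above)."""
    brightness_ratio = 1.0 / (1.0 - obstructed_fraction)   # reference / target
    return math.log2(brightness_ratio)

# Blocking three quarters of Vega's light matches it to Polaris, so Vega is
# four times brighter and Polaris sits two magnitudes below Vega's 0:
print(magnitude_difference(0.75))   # 2.0
```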

  By the middle of the nineteenth century measurements like this were becoming quite good and systematic and they revealed a deep problem. Stars that the Almagest said were magnitude 6 were really much fainter than 3% of the brightness of a first-magnitude star. Astronomers either needed to relabel these dim objects as magnitude 8, or adopt some other way of describing brightness.

  Norman Robert Pogson (1829–1891), as an assistant at the Radcliffe Observatory in Oxford, noted that the first-magnitude stars are about 100 times brighter than the sixth-magnitude stars and that stars could keep their traditional magnitude designation if we allowed the difference between magnitudes to be a factor of about 2.5 instead of 2. More precisely, we can take the factor of 100 and divide it into steps of 2.511886… = the fifth root of one hundred. Today we call this number the Pogson ratio (P0). This also gives us a prescription for calculating magnitude based on an absolute luminosity (L), the number of photons per second:

  m = −2.5 log10(L/L0)

where L0 is the luminosity of our magnitude-zero standard, Vega.

  This would be all well and good, if Vega were truly a star of steady brightness.
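
  To make the prescription concrete, here is a small sketch (mine, not Pogson's) that computes the ratio and applies the formula above, with Vega's luminosity set to 1 so that only ratios matter.

```python
import math

POGSON_RATIO = 100 ** (1 / 5)        # 2.511886..., the fifth root of 100

def magnitude(luminosity, vega_luminosity=1.0):
    """Apparent magnitude relative to the magnitude-0 standard, Vega."""
    return -2.5 * math.log10(luminosity / vega_luminosity)

print(f"{POGSON_RATIO:.6f}")         # 2.511886
print(magnitude(1 / POGSON_RATIO))   # one Pogson step dimmer than Vega -> 1.0
print(magnitude(1 / 100))            # 1% of Vega's light -> 5.0 (five steps)
```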

  Modern photometry has moved beyond the techniques used by Pogson and his colleagues, and we can essentially count photons received from a star. But why would we want to do this? We now understand that if two stars are the same color—if they have the same hue of red or blue, for example—they are probably about the same size and putting out the same amount of light. Then, if one star is much dimmer than another star of the same color, it must be farther away from us. Thus we can use star brightness to measure the distance to stars and, by implication, the size of the cosmos.

  So Pogson patched up Hipparchus’s stellar magnitude system, making it the precise and objective scale we still use today. We now also have a few new terms. When we measure starlight we say that we are measuring a star’s apparent magnitude; that is, the magnitude as it appears to us on Earth. But to compare stars we will want to determine a star’s absolute magnitude; that is, how bright that star would appear if we were a standard distance from it. This is the same thing that is done with the Richter scale. The seismologist does not report what the seismograph measures; rather, they report what it would have read if the seismograph were 100 km from the epicenter. The standard distance for stellar absolute magnitude is 10 parsecs, or 3 × 10^17 m. As an example, if you were 10 parsecs from our own Sun, it would appear to be a dim magnitude 4.83 star. However, from our vantage point on Earth, 1.5 × 10^11 m away, the sun is 42,000 times brighter than Vega, or an apparent magnitude of −26.74!
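
  The relation between apparent and absolute magnitude is not written out in the text, but it follows from the inverse-square law together with Pogson's definition; here is a minimal sketch under that assumption (the metres-per-parsec constant is a standard value, not from the book).

```python
import math

PARSEC_M = 3.0857e16   # metres in one parsec (standard value)

def absolute_magnitude(apparent_magnitude, distance_m):
    """Magnitude the star would show from the standard distance of 10 parsecs.

    Light spreads by the inverse-square law, so moving the observer from
    `distance_m` out to 10 pc changes the brightness by (distance / 10 pc)**2,
    and every factor of 100 in brightness is 5 magnitudes (Pogson's definition).
    """
    distance_pc = distance_m / PARSEC_M
    return apparent_magnitude - 5 * math.log10(distance_pc / 10)

# The Sun: apparent magnitude -26.74 seen from 1.5e11 m, as quoted above.
print(round(absolute_magnitude(-26.74, 1.5e11), 2))   # 4.83, the book's figure
```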

  So why is it that Hipparchus and Ptolemy were so far off and labeled some stars as sixth magnitude when, by their own system and criteria, they should have been labeled eighth magnitude? First off, I do not think people realized how sensitive our eyes really are. The fact that we can see a star that has one percent of the light of Vega is amazing. Vega itself is just a pinprick of light in the black void. But when we use only our eyes we are comparing the impression left on our brains. I can tell that a sixth magnitude star is dimmer than a fifth magnitude, but quantifying it as one percent of Vega is much harder.

  For example, back on Earth, if I were asked the length of a backyard telescope that is about a meter long, I could estimate its length by eye to about 10 cm. However, if I were asked to estimate the size of an array of radio astronomy telescopes, scores of antennas that look like radar dishes arranged in a line 10 km long, I could only guess its size to about 1 km, not to 10 cm. We can estimate things to about 10%, a relative size, not to 10 cm, an absolute size. This is the central tenet of Fechner’s (or Weber–Fechner’s) law: sensation increases as the logarithm of the stimulation; that is, more slowly than the stimulation.

  At the same time as Pogson was looking at stars, Gustav Fechner (1801–1887) was performing the first experiments in psychophysics. Psychophysics is the study of the relationship between a physical stimulus and how that stimulus is psychologically perceived. For example, Fechner would have a blindfolded assistant sit in a chair and hold a brick. Then he would add weights on top of the brick and ask the assistant if they felt an increase. If the addition was too little the assistant would not notice it, but if the addition exceeded some threshold, the assistant knew it. It would take a number of trials to establish this threshold, adding and subtracting weights in a random order, but eventually the threshold could be measured. Now Fechner would give the assistant a new brick of a different weight and measure a new weight perception threshold. He found that if the weight of the brick was doubled, the threshold would also double.

  The actual threshold will change from person to person, for some people are more sensitive to physical stimuli than others, but the fact that the threshold is proportional to the basic brick’s weight seems to be universal. Mathematically this means that the sensation is proportional to the logarithm of the stimulus. One of the consequences of this is that even though there is a fantastic range of stimuli to our eyes and ears, there is a much smaller range of sensation when these show up in our brains. For example, we can see things as bright as the Sun (magnitude −27) and as dim as the Triangulum galaxy (magnitude 6), which means that our eyes can span 33 magnitudes, or, in intensity, a factor of 16 trillion between the brightest and the dimmest things we can see. This number is a little bit deceptive, since the light of the Triangulum galaxy appears as a point, whereas the Sun subtends about half a degree, which means the sunlight is spread out across our retinas. Our ears, however, do not spread out the stimuli, and still they have a marvelous range.
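
  A small sketch checking that factor of 16 trillion, and showing the logarithmic compression Fechner describes: equal factors of stimulus become equal steps of sensation.

```python
import math

POGSON_RATIO = 100 ** (1 / 5)            # intensity factor per magnitude

# The Sun (-27) to the Triangulum galaxy (+6) spans 33 magnitudes:
magnitude_span = 6 - (-27)
print(f"{POGSON_RATIO ** magnitude_span:.2e}")   # ~1.6e13, the 16 trillion above

# Fechner's law in miniature: sensation ~ log(stimulus), so multiplying the
# stimulus by 10 adds the same amount of sensation each time.
for stimulus in (1, 10, 100, 1000):
    print(stimulus, math.log10(stimulus))        # 0.0, 1.0, 2.0, 3.0
```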

  ***

  When it comes to sound, we measure how loud something is with the units of decibels (dB). A whisper is about 30 dB, a vacuum cleaner is about 70 dB, and a rock concert can be in the 100–130 dB range. The sound of breathing can be 10 dB, and the faintest thing you can hear if your ears are young and undamaged is near 0 dB. So our range of hearing goes from 0 to about 120 dB and, right in the middle at 50–60 dB, is normal conversation. But what is a decibel?

  Sound is a pressure wave moving through the air. When you speak you push on the air near you, and this bumps up against the molecules of air a bit farther away, which bump up against the next neighbors and so forth. It is not that the air travels, but rather that the “push” travels, i.e. the pressure travels. So perhaps we should be measuring sound in terms of pressure or the energy of the waves. The reason we do not is because the decibel really is a useful unit for describing the sensation we feel in our ears. So what is a decibel?

  The decibel was first developed at Bell Labs. They were originally interested not in how loud a sound is, but in quantifying how much a signal would deteriorate as it traveled down a wire. Originally the people at Bell Labs would describe the deterioration as “1 transmission unit,” or 1 TU. A TU was the amount of deterioration of a signal as it traversed 1 mile of standard wire. Again, a unit is defined in terms of a standard. In 1923 or 1924 they renamed the TU a bel. It has been suggested that they were trying to name it after their lab, but defenders of the name will point out that most scientific measurement units are named after people—volts, watts, amps, ohms, henrys, farads and so forth—and that the bel is named after the father of the telephone, Alexander Graham Bell. But a bel is not a decibel.

  A decibel is a measurement of the magnitude of the ratio of something to a standard. Here the word “magnitude” refers to how many factors of ten there are in this ratio. So 10, 100 and 1000, or 10^1, 10^2 and 10^3, are of magnitude 1, 2 and 3. Then the definition of a decibel is:

  dB = 10 log10(P1/P0)

  Here P1 is the power of what we are measuring and P0 is the standard we are comparing it to. The 10 in front of the log is where the prefix “deci” in decibel comes from. At Bell Labs this meant that a signal reduced to a tenth of its power was down 10 dB, to a hundredth down 20 dB, and to a thousandth down 30 dB. The decibel can refer to any ratio, and engineers use it widely. But to most people it refers to sound and to how loud something is.
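
  A minimal sketch of that definition (the function name is mine), showing the tenfold-per-10 dB pattern just described.

```python
import math

def decibels(power, reference_power):
    """Express the ratio of `power` to `reference_power` in decibels."""
    return 10 * math.log10(power / reference_power)

# A tenfold power ratio is 10 dB, a hundredfold 20 dB, a thousandfold 30 dB:
for ratio in (10, 100, 1000):
    print(ratio, decibels(ratio, 1))   # 10.0, 20.0, 30.0
```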

  In sound, the standard was selected to be the faintest thing good ears can hear, which is a pressure wave of 20 μPa; that is, a pressure of about two ten-billionths of an atmosphere. Normal conversation is about 60 dB, or a pressure wave one million times (10^6) more powerful than that faintest sound. Our range of hearing, from 0 dB to 120 dB, is a factor of a trillion in power! So why do we talk about decibels instead of pressure or power? It is because 50–60 dB really does seem like the middle of our range of hearing, whereas a million does not seem like the middle of the range between zero and a trillion. If our hearing range goes up to a trillion, should half a trillion not be mid-range? Imagine you are 100 m away from a jet engine and hearing 120 dB, or a trillion times that faintest standard. If you move away about 40 m, the power will have dropped in half. The sound will be at 117 dB, which is still beyond comfort. So the decibel is a very practical way of describing the range of our hearing and much more useful than pressure or energy.
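
  Here is the jet-engine arithmetic as a short sketch; the reference power is set to 1 so only ratios matter, and the inverse-square law supplies the halving of power.

```python
import math

def sound_level_db(power_ratio):
    """Sound level in dB, given the ratio of power to the threshold of hearing."""
    return 10 * math.log10(power_ratio)

jet_at_100m = 1e12                          # a trillion times the faintest sound
print(sound_level_db(jet_at_100m))          # 120.0 dB

# Stepping back from 100 m to about 140 m roughly halves the power
# (inverse-square law), which knocks off only about 3 dB:
jet_at_140m = jet_at_100m * (100 / 140) ** 2
print(round(sound_level_db(jet_at_140m)))   # 117 dB
```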

  We could actually use the decibel to describe any ratio. For example, if our standard were the light from a sixth-magnitude star, then the Sun would be rated at 130 dB.
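
  That figure can be cross-checked by combining the Pogson ratio with the decibel definition; the Sun's apparent magnitude of −26.74 is the value quoted earlier in the chapter.

```python
import math

def magnitudes_to_db(delta_magnitude):
    """Convert a gap in stellar magnitude to decibels.

    Each magnitude is a factor of 100**(1/5) in brightness, so the power ratio
    is 10**(delta_magnitude / 2.5) and dB = 10 * log10(ratio) = 4 * delta_magnitude.
    """
    return 10 * math.log10(10 ** (delta_magnitude / 2.5))

# Sixth-magnitude star (+6) as the standard, the Sun at -26.74:
print(round(magnitudes_to_db(6 - (-26.74))))   # 131, i.e. the "130 dB" above
```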

  ***

  There is one last scale that I will describe in this chapter: the musical scale. Actually there are many, many different musical scales and the discussion of the nuances of a certain scale can be long, technical and even heated. For now I will confine myself to the diatonic or heptatonia prima scale. This is the scale most pianos are built for and tuned to. You may have already spotted the “dia-” prefix and realized that two, or a half, will be our multiplier between different magnitudes of this scale. The “-tonic” refers to tones.

  As we have just said, sound is a pressure wave that passes through the air. But it is not a single wave; it is a series of waves. Different musical notes are characterized by the different frequencies of these waves. According to the ISO (International Organization for Standardization), the A-key of a piano just above middle C should be tuned to 440 Hz. If you press down on the A-key of the keyboard, a small hammer behind the keyboard is tripped and will strike the A-string. That string will vibrate 440 times per second. If you placed a microphone next to your piano and watched the electrical signal from the microphone on an oscilloscope or a computer screen, you would see the signal rise and fall 440 times each second. But the signal you see will not be a simple sine wave. A simple sine wave would mean that we were hearing a pure tone, or a monotone. That is the sort of sound we associate with pushbutton telephones. Our piano has a much richer sound and a more complex wave pattern. We can think of these complex sound waves as a combination of many simple waves. The sound from our piano’s A-note is made up of a wave with frequency 440 Hz plus waves with frequencies 880 Hz, 1320 Hz, 1760 Hz and so forth. All these additional waves are multiples of that original 440 Hz of the monotone A. The 440 is called the fundamental frequency, while the 880, 1320, 1760, 2200 are the harmonics or the overtones. What gives an instrument its unique sound, or its timbre, is the way it combines these overtones: the relative weighting or strength of the first, second, third … overtones compared to the fundamental. You can hear an A from a piano and distinguish it from the A of a guitar or a trumpet or a flute because your ear takes the sound apart, breaking it into the harmonics, and your brain recognizes that when (say) the second harmonic is strong and the third and fifth are weak, the A is from a certain instrument. In fact your ear is so good at sorting out all of these harmonics that you can listen to a whole orchestra and pick out individual instruments, even when they are playing the same note.
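
  The idea that a note is a weighted sum of harmonics is easy to sketch in code. The harmonic weights below are invented purely for illustration; a real piano, guitar or trumpet each has its own characteristic set.

```python
import math

SAMPLE_RATE = 44_100      # samples per second
FUNDAMENTAL = 440.0       # the A above middle C, in Hz

# Relative strengths of the fundamental and its overtones (made-up values):
HARMONIC_WEIGHTS = [1.0, 0.6, 0.3, 0.15, 0.05]

def sample(t):
    """Pressure of the synthesized note at time t: a weighted sum of sine
    waves at 440 Hz, 880 Hz, 1320 Hz, ... (fundamental plus harmonics)."""
    return sum(weight * math.sin(2 * math.pi * FUNDAMENTAL * (n + 1) * t)
               for n, weight in enumerate(HARMONIC_WEIGHTS))

# One second of the note. The waveform still repeats 440 times per second,
# but it is no longer a simple sine wave -- that shape is the timbre.
waveform = [sample(i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]
```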

 
