
Dancing With Myself


by Charles Sheffield


  Cantor said that you don’t need to enumerate things to agree that there are the same number of them. You can line them up, and if to each member of one set there corresponds exactly one member of the other, then there must be the same number in each set. Just as in musical chairs, you don’t need to count the players or the chairs to see if the game will work; all you need to do is give a chair to every player, then take one chair away and start the music.

  This idea of exact matching, or one-to-one correspondence as it is usually called, allows us to compare sets having an infinite number of members, but it quickly leads to curious consequences. As Galileo pointed out in the seventeenth century, the whole numbers can be matched one-for-one with the squares of the whole numbers, thus:

  1,  2,  3,   4,   5,   6,   7,   8,   9,   10,   11,   12,  …

  1,  4,  9,  16,  25,  36,  49,  64,  81,  100,  121,  144,  …

  According to Cantor, we must say there are as many squares as there are numbers, because they can be put into one-to-one correspondence. On the other hand, there seem to be a lot more whole numbers that are not squares than numbers that are. The squares omit 2, 3, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, and so on. Most numbers are not squares.

  In spite of this, Cantor insisted that it makes sense to say that there are the same number of whole numbers as there are of squares; an infinite number of both, true, but the same sort of infinity. There are similarly as many even numbers as there are whole numbers. They can be put into one-to-one correspondence, thus:

  1,  2,  3,  4,   5,   6,   7,  …

  2,  4,  6,  8,  10,  12,  14,  …
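
  For readers who like to see the matching made concrete, here is a small illustrative sketch in Python (the function names are mine, invented for this example): it pairs each whole number with its square, and with its double, and every number gets exactly one partner.

    # Pair each whole number with its square, and with the corresponding even
    # number. One partner per number, in both directions: that is all a
    # one-to-one correspondence requires, and no counting is needed.
    def squares_pairing(how_many):
        return [(n, n * n) for n in range(1, how_many + 1)]

    def evens_pairing(how_many):
        return [(n, 2 * n) for n in range(1, how_many + 1)]

    print(squares_pairing(5))  # [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
    print(evens_pairing(5))    # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]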

  Using the same idea, we find that there are as many squares as there are whole numbers, as many cubes, even as many rational numbers. But there are more irrational numbers, a different order of infinity. The irrational numbers cannot be placed in one-to-one correspondence with the whole numbers, and Cantor was able to prove that fact using an elementary argument.
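
  That elementary argument is Cantor’s diagonal construction, and it can be sketched in a few lines of code. The sketch below is written for this article (representing each real number by a short string of decimal digits is a deliberate simplification): given any proposed list of numbers between 0 and 1, it builds a new number whose nth digit differs from the nth digit of the nth entry, so the new number cannot appear anywhere on the list.

    # Sketch of Cantor's diagonal argument. Each "real number" stands in here
    # as a string of its first few decimal digits, purely for illustration.
    def diagonal_missing_number(listing):
        # Change the nth digit of the nth entry. Using only the digits 5 and 6
        # sidesteps the 0.1999... = 0.2000... ambiguity.
        new_digits = []
        for n, digits in enumerate(listing):
            new_digits.append("5" if digits[n] != "5" else "6")
        return "0." + "".join(new_digits)

    listing = ["1415926", "7182818", "4142135", "3025850", "6931471", "5772156", "6180339"]
    print(diagonal_missing_number(listing))  # differs from every entry in at least one place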

  Cantor was able to go farther. There are higher orders of infinite number than the points on the line; in fact, there is an infinite hierarchy of orders of infinity. The whole numbers define the “least infinite” infinite set. The question of whether the points on a line constitute the second smallest infinite set, i.e., whether there is no intermediate infinite set which includes the set of whole numbers and is included by the set of points on a line, was a famous unsolved problem of mathematics. In 1963, Paul Cohen showed that it is impossible to prove this conjecture (called the continuum hypothesis) using the standard axioms of set theory. This negative result diminished mathematical interest in the question.

  Infinite sets are seductive stuff, and it is tempting to pursue them farther. However, that route will not take us towards the physics we want. That road is found by looking at numbers that are finite, but very large by everyday standards.

  4. BIG NUMBERS

  “…Man, proud Man! Dressed in a little brief authority, most ignorant of what he’s most assured…”

  Pure mathematics, as its name suggests, should be mathematics uncontaminated by anything so crude as an application. As G.H. Hardy, an English mathematician of the first half of this century, said in a famous toast: “Here’s to pure mathematics. No damned use to anyone, and let’s hope it never will be.”

  Bertrand Russell went even farther in stressing the lack of utility of mathematics: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”

  In spite of these lofty sentiments, mathematics that began its life as the purest form of abstract thought has an odd tendency to be just what scientists need to describe the physical world. For example, the theory of conic sections, developed by the Greeks before the birth of Christ, was the tool that Kepler needed to formulate his laws of planetary motion. The theory of matrices was ready and waiting, when it was needed in quantum mechanics (and today in hundreds of other places in applied mathematics); and Einstein found the absolute differential calculus of Ricci and Levi-Civita just the thing to describe curved space-times in general relativity.

  (It doesn’t always work out so conveniently. When Newton was setting up the laws of motion and of universal gravitation, he needed calculus. It didn’t exist. That would have been the end of the story for almost everyone. Newton, being perhaps the greatest intellect who ever lived, went ahead and invented calculus, then applied it to his astronomical calculations.)

  Conversely, the needs of the applied scientist often stimulate the development of mathematics. And by the seventeenth century, the main attention of physicists and astronomers was not on counting finite sets of objects; it was on describing things that varied continuously, like moving planets, or spinning tops, or heat flow. For such studies, counting things seemed to have little relevance.

  That state of affairs continued until the third quarter of the nineteenth century. By that time continuous-variable mathematics had done a wonderful job in the development of astronomy, hydrodynamics, mechanics, and thermodynamics. The main tool was the calculus, which had been developed into a dozen different forms, such as the theory of functions of a complex variable and the calculus of variations.

  Now, back before 400 B.C. the Greek Democritus had already suggested on philosophical grounds that matter must be composed of separate indivisible particles, called atoms (atomos in Greek means “can’t be cut”). However, people had rather lost sight of that idea until 1805, when John Dalton re-introduced an atomic theory. But this time, there was experimental evidence to support the notion that matter was made up of individual atoms. Thus the behavior of such atoms, regarded as separate, countable objects, must somehow be able to explain the apparently continuous properties of the matter that we observe in everyday life. Unfortunately, the numbers involved are so huge—nearly a trillion trillion atoms in a gram of hydrogen gas—that it was difficult to visualize the properties of so large an assembly of objects.
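
  That “trillion trillion” figure is easy to check. A mole of any substance contains Avogadro’s number of molecules, about 6 x 10^23; hydrogen gas is H2, about two grams per mole, so a gram of it holds roughly half a mole of molecules and therefore about 6 x 10^23 atoms, not far short of a trillion trillion (10^24). A rough check in Python:

    AVOGADRO = 6.022e23        # molecules per mole
    H2_GRAMS_PER_MOLE = 2.016  # molecular weight of hydrogen gas, H2
    molecules_per_gram = AVOGADRO / H2_GRAMS_PER_MOLE
    atoms_per_gram = 2 * molecules_per_gram   # two hydrogen atoms per molecule
    print(f"{atoms_per_gram:.1e} atoms in a gram of hydrogen")  # about 6.0e+23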

  If we have direct counting experience of numbers only up to a few dozen, any number as big as a billion is beyond intuition. It is possible that some crazy human has actually counted up to a million, but it is certain that no human has ever counted up to a billion. At one number a second (try saying 386,432,569 in less than a second), ten hours a day, every day of the year, a billion would take 76 years to finish.
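
  The 76-year figure is simple arithmetic, but worth checking once; a quick calculation in Python:

    HOURS_PER_DAY = 10                             # counting ten hours a day
    counts_per_year = 365 * HOURS_PER_DAY * 3600   # one number named per second
    years_for_a_billion = 1_000_000_000 / counts_per_year
    print(f"about {years_for_a_billion:.0f} years")  # about 76 years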

  I worry when I hear talk of “this many millions” and “that many billions” from officials who I know have trouble calculating a restaurant tip. I wonder if they really know the difference. It’s an old theory that the expenditure of $100 on a new bike rack will engender more debate than $500,000,000 for a weapons system. People have a personal feel for a hundred dollars. A billion is just an abstract number.

  However, when we look at the counting needed to enumerate the atoms of a bar of soap or a breath of air, we are well beyond the billion mark. We are talking a trillion trillion atoms; and when we reach numbers so large, we are all like Gamow’s Hungarian aristocrats. We simply have no experience base, no intuitive feel for the properties of a system with so many pieces.

  To take a simple example, suppose that we toss an unbiased coin in the air. We all believe we understand coin-tossing pretty well. Half the time it will come down heads, the other half it will come down tails. Now suppose that we toss two coins. What’s the chance of getting two heads? Of two tails? Of one head and one tail?

  Any gambler can give you the answer. You have a one in four chance of two heads, a one in four chance of two tails, and a one in two chance of a head and a tail. The same gambler can probably tell you the odds if you toss three or four coins, and ask him for the probability of getting some given number of heads.
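
  The gambler’s answer comes from nothing deeper than listing the equally likely outcomes: head-head, head-tail, tail-head, tail-tail. A short enumeration in Python (the function name is my own) makes the counting explicit, and handles three or four coins just as easily:

    from itertools import product
    from collections import Counter

    def head_count_table(num_coins):
        # List every equally likely sequence of tosses and tally how many
        # sequences contain each possible number of heads.
        outcomes = product("HT", repeat=num_coins)
        return Counter(seq.count("H") for seq in outcomes)

    print(head_count_table(2))  # {2 heads: 1, 1 head: 2, 0 heads: 1} -> 1/4, 1/2, 1/4
    print(head_count_table(4))  # 1, 4, 6, 4, 1 of the 16 outcomes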

  Now throw a million coins in the air. We know that if they are all unbiased coins, the most likely situation will be that half of the coins will land heads, and half tails. But we have no feel for the chance of getting a particular number, say, 400,000 heads, and 600,000 tails. How does it compare with the chance of getting 500,000 heads and 500,000 tails?

  The probabilities obey what is known as a binomial distribution, and can be calculated exactly. Table 1 shows how many times we will get a given number of heads, divided by the number of times we will get exactly equal numbers of heads and tails. As we expected, this ratio is never greater than one, because equal heads and tails is the most likely situation.
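
  Readers who want to reproduce Table 1 can do so directly: the ratio is simply the binomial coefficient “C coins choose N heads” divided by “C coins choose C/2 heads,” since the factor of one-half per coin cancels out. Here is a short Python sketch (written for this article, not part of the original text), using logarithms so that the calculation stays manageable even for a million coins:

    from math import exp, lgamma

    def heads_ratio(num_coins, num_heads):
        # Probability of exactly num_heads heads, divided by the probability of
        # an exactly even split. Both are binomial coefficients over 2**num_coins,
        # so the result is a ratio of binomial coefficients, evaluated through
        # log-gamma to avoid enormous integers.
        def log_choose(n, k):
            return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
        return exp(log_choose(num_coins, num_heads) - log_choose(num_coins, num_coins // 2))

    print(heads_ratio(6, 2))                # 0.75, as in Table 1
    print(heads_ratio(10_000, 4_900))       # about 0.135
    print(heads_ratio(1_000_000, 499_000))  # about 0.135
    print(heads_ratio(1_000_000, 400_000))  # about 0: a hopelessly unlikely split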

  However, the general behavior of the table as the number of coins increases is not at all intuitive. For small numbers of coins, there is a good chance of getting any number of heads we like to choose. For a million coins, however, the chance of getting anything far from equal numbers of heads and tails is totally negligible. And as the number of coins keeps on increasing, the shape of the curve keeps squeezing narrower and narrower. By the time we reach a trillion trillion coins, the curve has become a single spike. The chance of getting a quarter heads and three-quarters tails, or 51% heads and 49% tails, or even 50.00001% heads and 49.99999% tails is vanishingly small.

  This result may seem to have no relevance to anything in the real world. But such probabilities are now central to our understanding of everything from refrigerators to lasers.

  Table 1: Coin-Tossing Probabilities

  R is the probability of throwing N heads, divided by the probability of throwing an equal number of heads and tails. Probabilities less than one in a million are not printed.

  NUMBER OF COINS = 6
  N:  0          1          2          3
  R:  0.05       0.3        0.75       1

  NUMBER OF COINS = 10
  N:  0          1          2          3          4          5
  R:  0.004      0.040      0.179      0.476      0.833      1

  NUMBER OF COINS = 100
  N:  25         30         35         40         45         50
  R:  0.0000024  0.000291   0.0108     0.136      0.609      1

  NUMBER OF COINS = 1,000
  N:  420        430        440        450        460        470        480        490        500
  R:  0.0000027  0.000054   0.000739   0.00672    0.041      0.165      0.450      0.819      1

  NUMBER OF COINS = 10,000
  N:  4750       4800       4850       4900       4950       5000
  R:  0.0000037  0.000335   0.011      0.135      0.607      1

  NUMBER OF COINS = 100,000
  N:  49,200     49,400     49,600     49,800     50,000
  R:  0.0000028  0.000747   0.041      0.449      1

  NUMBER OF COINS = 1,000,000
  N:  497,500    498,000    498,500    499,000    499,500    500,000
  R:  0.0000037  0.000335   0.011      0.135      0.607      1

  NUMBER OF COINS = 10,000,000
  N:  4,992,000  4,994,000  4,996,000  4,998,000  5,000,000
  R:  0.0000028  0.000747   0.041      0.449      1

  As the number of coins tossed increases, the chance of a large inequality between heads and tails rapidly decreases.

  5. COUNTING AND PHYSICS

  “Man is slightly nearer to the atom than the star.”

  —A.S. Eddington

  A small room contains about 10^27 molecules of air. The molecules are in ceaseless random motion, and the air pressure on the walls of the room is generated by their impact. Suppose that the walls of the room face north, south, east, and west, and that the room is perfectly sealed, so that no molecule can arrive or escape. Then at any moment, some fraction of the molecules inside the room have a component of their motion taking them generally towards the north wall, and the rest are heading generally towards the south wall (the number who happen to be heading due east or due west is negligible).

  All the molecules bounce off the walls, and occasionally off each other. If the motions are, as we said, truly random, then we would be most surprised if the same number were heading for the north wall at all times. Thus the air pressure on any area of the wall ought to keep changing, fluctuating from one second to the next depending on the number of molecules striking there. If we measured the pressure, we ought to get constantly changing values.

  We don’t. Unless the temperature in the room changes, we always measure the same pressure on the walls.

  To see how this can be so, imagine that the molecules had originally been introduced into the room one at a time. They are to have random motions, so to decide the direction of each molecule, suppose a coin is tossed as it enters: if the coin lands heads, the molecule will move north; if tails, it moves south. Since coin-tossing is random, so are the movements of the molecules.

  When we have tossed 10^27 coins, the room will be filled with air. Now recall our earlier result on tossing a very large number of coins. The chance that exactly as many coins land heads as tails is extremely small, so the chance that exactly equal numbers of molecules are heading north and south is effectively zero. However, as we saw from Table 1, even with as “few” as ten million coins, the chance that we will get substantially more heads than tails is also negligible. This result applies even more strongly when we have trillions of trillions of coins. The ratio of heads to tails will be so close to one that we will never measure anything other than an even split. Since air pressure is generated by the impact of randomly moving molecules on the room’s walls, and since there is a negligible probability that we have substantially more than half the molecules heading for any given wall, we measure the same pressure on each wall. We will also find no change in the pressure over time.
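
  The scale of the fluctuations that do occur is worth pinning down. For N coins (or molecules), the typical excess of heads over tails is roughly the square root of N, so the fractional imbalance shrinks like one over the square root of N. A rough estimate in Python for the room full of air:

    from math import sqrt

    N_MOLECULES = 1e27                   # molecules of air in the room
    typical_excess = sqrt(N_MOLECULES)   # typical surplus heading for one wall
    fractional_imbalance = typical_excess / N_MOLECULES
    print(f"typical excess:       about {typical_excess:.1e} molecules")  # ~3.2e+13
    print(f"fractional imbalance: about {fractional_imbalance:.1e}")      # ~3.2e-14

  A surplus of thirty trillion molecules sounds enormous, but as a fraction of 10^27 it is about three parts in a hundred trillion, far below anything a pressure gauge could register.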

  A similar argument can be used to analyze the position of the molecules. Imagine a partition drawn across the middle of the room. Since molecules move at random, at any moment any number of the molecules may be to the right of the partition. What is to prevent a situation arising where all the molecules happen to be down at one end of the room, with a perfect vacuum at the other?

  The distribution of molecules within the room can again be simulated by the tossing of a coin. Let us spin the coin; each time it lands heads, we place a molecule to the right of the partition, and each time it lands tails, we place the molecule to the left. After 10^27 coin tosses, the room is full of air. However, we know from our coin-tossing probabilities that there is negligible chance of, say, 60% of the molecules being to the right, and 40% to the left, or even of 50.00001% being on the right and 49.99999% on the left. The danger of finding one end of the room suddenly airless is small enough to be totally ignored. It will never happen, not in a time span billions of times longer than the age of the universe.
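
  How small is “small enough to be totally ignored”? The same ratio that appears in Table 1 answers the question, provided it is computed in logarithms, because for 10^27 molecules the exact binomial coefficients are far too large to write down. The sketch below (mine, not the author’s) uses the standard normal, or Stirling, approximation to that ratio, roughly exp(-2k^2/N) where k is the surplus over an even split, and returns the answer as a power of ten:

    from math import log

    def log10_of_ratio(num_molecules, fraction_on_one_side):
        # Normal (Stirling) approximation to the Table 1 ratio R: the chance of a
        # given split, relative to an exactly even split, is about exp(-2*k*k/N),
        # where k is the surplus over N/2. Returned here as a power of ten.
        surplus = (fraction_on_one_side - 0.5) * num_molecules
        return -2.0 * surplus * surplus / num_molecules / log(10)

    N = 1e27
    print(log10_of_ratio(N, 0.60))       # about -8.7e24: the 60/40 split
    print(log10_of_ratio(N, 0.5000001))  # about -8.7e12: even a 0.00001% excess

  A probability whose logarithm is minus 10^25 or so is not merely small; for any practical purpose it is an outright guarantee that the event will never be seen.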

 
