The Universe in Zero Words


by Dana Mackenzie


  But before we start feeling too smug about our “superior” number system, the tale offers several cautionary lessons. First is a message that is far from obvious to most people: There are many different ways to do mathematics. The way you learned in school is only one of numerous possibilities. Especially when we study the history of mathematics, we find that other civilizations used different notations and had different styles of reasoning, and those styles often made very good sense for that society. We should not assume they are “inferior.” An abacus salesman can still beat a Nobel Prize winner at addition and multiplication.

  Feynman’s tale also exemplifies how mathematical cultures have collided many times in the past. Often this collision of cultures has benefited both sides. For instance, the Arabs didn’t invent Arabic numerals or the idea of zero—they borrowed them from India.

  Finally, we should recognize that the victory of the algorists may be only temporary. In the present era, we have a new calculating device; it’s called the computer. Any mathematics educator can see signs that our students’ number sense, the inheritance bequeathed to us by the algorists, is eroding. Students today do not understand numbers as well as they once did. They rely on the computer’s perfection, and they are unable to check its answers if they type the numbers in wrong. We again find ourselves in a contest between two paradigms, and it is by no means certain how the battle will end. Perhaps our society will decide, as in ancient times, that the average person does not need to understand numbers and that we can entrust this knowledge to an elite caste. If so, the bridge to science and higher mathematics will become closed to many more people than it is today.

  PART ONE

  equations of antiquity

  In the modern world, mathematics is an impressively unified subject. The same equations, such as a² + b² = c², will elicit recognition and understanding in any country of the world, from Europe to Asia to Africa to the Americas.

  But it was not always that way. Looking back through the history of mathematics, especially in the ancient world, we see a great diversity of styles and reasons for doing mathematics. During this period, mathematics gradually evolved out of its origins in surveying, tax collecting, building, and astronomy to become a distinct subject. In Egypt and Mesopotamia, arithmetic and geometry were simply part of the scribe’s general education. From the papyri and cuneiform tablets that have survived, it appears that mathematics was taught as a collection of rules, with very little in the way of explanation.

  In ancient Greece, on the other hand, rote calculation took a back seat to philosophical contemplation. The Greek philosophers, starting with Pythagoras and Plato, held an exalted view of mathematics, which they saw as a science of pure reason that could penetrate behind the illusory appearance of the physical world. In Euclid’s Elements, all of geometry is deduced from a very short list of (supposedly) self-evident facts, or axioms. This style of deductive reasoning gave birth to modern mathematics and even extended its influence to other human endeavors. (Recall the opening words of the American Declaration of Independence: “We hold these truths to be self-evident …”; the author, Thomas Jefferson, was laying out in true Euclidean fashion the axioms on which a new society would be based.)

  In India, mathematics or ganita (calculation) was subservient to astronomy for many centuries, and only came into its own around the ninth and tenth centuries AD. Nevertheless, several important discoveries originated in India—foremost among them the decimal number system we use today. In China, the fortune of mathematics or suan shu (number art) waxed and waned over the centuries. In the Tang dynasty (618–907 AD) it was a prestigious subject that all scholars had to study; on the other hand, in the Ming dynasty (1368–1644) it was categorized as xiaoxue (lesser learning)! The change in attitude may have something to do with why Chinese mathematics—which was previously superior to contemporary European mathematics—stagnated after the 1300s, precisely the period when Western mathematics began to take off.

  Finally, the Islamic world occupied a unique position in mathematics history, as the inheritor of two distinct traditions (Greek and Indian) and the transmitter of those traditions—augmented by the new discoveries of Islamic mathematicians—to western Europe. Strangely, it was only in western Europe that the decisive transition to modern mathematics occurred … but that is a subject for later.

  1

  why we believe in arithmetic

  the world’s simplest equation

  One plus one equals two: perhaps the most elementary formula of all. Simple, timeless, indisputable … But who wrote it down first? Where did this, and the other equations of arithmetic, come from? And how do we know they are true? The answers are not quite obvious.

  One of the surprises of ancient mathematics is that there is not much evidence of the discussion of addition. Babylonian clay tablets and Egyptian papyri have been found that are filled with multiplication and division tables, but no addition tables and no “1 + 1 = 2.” Apparently, addition was too obvious to require explanation, while multiplication and division were not. One reason may be the simpler notation systems that many cultures used. In Egypt, for instance, a number like 324 was written with three “hundred” symbols, two “ten” symbols, and four “one” symbols. To add two numbers, you concatenated all their symbols, replacing ten “ones” by a “ten” when necessary, and so on. It was very much like collecting change and replacing the smaller denominations now and then with larger bills. No one needed to memorize that 1 + 1 = 2, because the sum of | and | was obviously ||.
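
  The bookkeeping described above is easy to sketch in a few lines of Python (a modern illustration of the idea only; the function names and the base-10 grouping are my own, not anything taken from the Egyptian sources):

```python
from collections import Counter

# Additive, non-positional addition: a number is a bag of symbols worth
# 1, 10, 100, ..., and adding is just pooling the bags and then trading
# ten of one denomination for one of the next -- like making change.
def to_symbols(n):
    """Break n into a tally of denomination -> count, e.g. 324 -> {100: 3, 10: 2, 1: 4}."""
    tally, place = Counter(), 1
    while n:
        n, digit = divmod(n, 10)
        if digit:
            tally[place] = digit
        place *= 10
    return tally

def add_symbols(a, b):
    """Concatenate two tallies, then carry: ten 'ones' become one 'ten', and so on."""
    total = a + b                       # pool all the symbols
    for place in sorted(total):
        carry, total[place] = divmod(total[place], 10)
        if carry:
            total[place * 10] += carry
    return +total                       # drop denominations that emptied out

print(add_symbols(to_symbols(324), to_symbols(689)))
# tallies for 1013: one "thousand" symbol, one "ten", and three "ones"
```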

  In ancient China, arithmetic computations were performed on a “counting board,” a sort of precursor of the abacus in which rods were used to count ones, tens, hundreds, and so on. Again, addition was a straightforward matter of putting the appropriate number of rods next to each other and carrying over to the next column when necessary. No memorization was required. However, the multiplication table (the “nine-nines algorithm”) was a different story. It was an important tool, because recalling 8 × 9 = 72 from the table was faster than adding up nine 8s.
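
  The saving that the memorized table buys can be made concrete. The short sketch below (a modern illustration, not a reconstruction of the counting board) compares a single table lookup with the repeated additions it replaces:

```python
# Why a memorized table pays off: one lookup in the "nine-nines" table
# versus multiplication carried out as repeated addition.
nine_nines = {(i, j): i * j for i in range(1, 10) for j in range(1, 10)}

def multiply_by_repeated_addition(a, b):
    total = 0
    for _ in range(b):        # b separate additions of a
        total += a
    return total

assert nine_nines[(8, 9)] == 72                    # one memorized fact
assert multiply_by_repeated_addition(8, 9) == 72   # nine additions of 8
```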

  A simple interpretation is this: On the number line, 2 is the number that is one step to the right of 1. However, logicians since the early 1900s have preferred to define the natural numbers in terms of set theory. Then the formula states (roughly) that the disjoint union of any two sets with one element is a set with two elements.
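
  In symbols, one standard modern rendering (using the von Neumann construction of the natural numbers, which is assumed here and not spelled out in the text) reads:

```latex
% 0, 1, and 2 as sets in the von Neumann construction:
%   0 = \emptyset,  1 = \{0\},  2 = \{0, 1\}.
% "One plus one equals two," read set-theoretically:
\[
  |A| = 1,\quad |B| = 1,\quad A \cap B = \emptyset
  \;\Longrightarrow\;
  |A \cup B| = 2.
\]
```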

  Another exceedingly important notational difference is that not a single ancient culture—Babylonian, Egyptian, Chinese, or any other—possessed a concept of “equation” exactly like our modern concept. Mathematical ideas were written as complete sentences, in ordinary words, or sometimes as procedures. Thus it is hazardous to say that one culture “knew” a certain equation or another did not. Modern-style equations emerged over a period of more than a thousand years. Around 250 AD, Diophantus of Alexandria began to employ one-letter abbreviations, or what mathematical historians call “syncopated” notation, to replace frequent words such as “sum,” “product,” and so on. The idea of using letters such as x and y to denote unknown quantities emerged much later in Europe, around the late 1500s. And the one ingredient found in virtually every equation today—an “equals” sign—did not make its first appearance until 1557. In a book called The Whetstone of Wytte, by Robert Recorde, the author eloquently explains: “And to avoide the tediouse repetition of these woordes: is equal to: I will sette as I doe often in woorke use, a paire of paralleles, or Gemowe lines of one lengthe, thus: because noe 2 thynges can be moare equalle.” (The archaic word “Gemowe” meant “twin.” Note that Recorde’s equals sign was much longer than ours.)

  So, even though mathematicians had implicitly known for millennia that 1 + 1 = 2, the actual equation was probably not written down in modern notation until sometime in the sixteenth century. And it wasn’t until the nineteenth century that mathematicians questioned our grounds for believing this equation.

  THROUGHOUT THE 1800s, mathematicians began to realize that their predecessors had relied too often on hidden assumptions that were not always easy to justify (and were sometimes false). The first chink in the armor of ancient mathematics was the discovery, in the early 1800s, of non-Euclidean geometries (discussed in more detail in a later chapter). If even the great Euclid was guilty of making assumptions that were not incontrovertible, then what part of mathematics could be considered safe?

  In the late 1800s, mathematicians of a more philosophical bent, such as Leopold Kronecker, Giuseppe Peano, David Hilbert, and Bertrand Russell, began to scrutinize the foundations of mathematics very seriously. What, they wondered, can we really claim to know for certain? Can we find a basic set of postulates for mathematics that can be proven to be self-consistent?

  Opposite The Key to Arithmetic: an Arabic manuscript by Jamshid al-Kashi, 1390–1450.

  Kronecker, a German mathematician, held the opinion that the natural numbers 1, 2, 3, … were God-given. Therefore the laws of arithmetic, such as the equation 1 + 1 = 2, are implicitly reliable. But most logicians disagreed, and saw the integers as a less fundamental concept than sets. What does the statement “one plus one equals two” really mean? Fundamentally, it means that when a set or collection consisting of one object is combined with a different set consisting of one object, the resulting set always has two objects. But to make sense of this, we need to answer a whole new round of questions, such as what we mean by a set, what we know about them and why.

  In 1910, the mathematician Alfred North Whitehead and the philosopher Bertrand Russell published a massive and dense three-volume work called Principia Mathematica that was most likely the apotheosis of the attempts to recast arithmetic as a branch of set theory. You would not want to give this book to an eight-year-old to explain why 1 + 1 = 2. After 362 pages of the first volume, Whitehead and Russell finally get to a proposition from which, they say, “it will follow, when arithmetical addition has been defined, that 1 + 1 = 2.” Note that they haven’t actually explained yet what addition is. They don’t get around to that until volume two. The actual theorem “1 + 1 = 2” does not appear until page 86 of the second book. With understated humor, they note, “The above proposition is occasionally useful.”

  Below A clay tablet, impressed with cuneiform script, details an algebraic-geometrical problem, ca. 2000–1600 BC.

  It is not the intention here to make fun of Whitehead and Russell, because they were among the first people to grapple with the surprising difficulty of set theory. Russell discovered, for instance, that certain operations with sets are not permissible; for example, it is impossible to define a “set of all sets” because this concept leads to a contradiction. That is the one thing that is never allowed in mathematics: a statement can never be both true and false.

  But this leads to another question. Russell and Whitehead took care to avoid the paradox of the “set of all sets,” but can we be absolutely sure that their axioms will not lead us to some other contradiction, yet to be discovered? That question was answered in surprising fashion in 1931, when the Austrian logician Kurt Gödel, making direct reference to Whitehead and Russell, published a paper called “On formally undecidable propositions of Principia Mathematica and related systems.” Gödel proved that any rules for set theory that were strong enough to derive the rules of arithmetic could never be proven consistent by methods formalizable within those same rules. In other words, it remains possible that someone, someday, will produce an absolutely valid proof that 1 + 1 = 3. Not only that, it will forever remain possible; there will never be an absolute guarantee that the arithmetic we use is consistent, as long as we base our arithmetic on set theory.

  MATHEMATICIANS DO NOT actually lose sleep over the possibility that arithmetic is inconsistent. One reason is probably that most mathematicians have a strong sense that numbers, as well as the numerous other mathematical constructs we work with, have an objective reality that transcends our human minds. If so, then it is inconceivable that contradictory statements about them could be proved, such as 1 + 1 = 2 and 1 + 1 = 3. Logicians call this the “Platonist” viewpoint.

  “The typical working mathematician is a Platonist on weekdays and a formalist on Sundays,” wrote Philip Davis and Reuben Hersh in their 1981 book, The Mathematical Experience. In other words, when we are pinned down we have to admit we cannot be sure that mathematics is free from contradiction. But we do not let that stop us from going about our business.

  Another point to add might be that scientists who are not mathematicians are Platonists every day of the week. It would never even occur to them to doubt that 1 + 1 = 2. And they may have the right of it. The best argument for the consistency of arithmetic is that humans have been doing it for 5000 years and we have not found a contradiction yet. The best argument for its objectivity and universality is the fact that arithmetic has crossed cultures and eras more successfully than any other language, religion, or belief system. Indeed, scientists searching for extraterrestrial life often assume that the first messages we would be able to decode from alien worlds would be mathematical—because mathematics is the most universal language there is.

  We know that 1 + 1 = 2 (because it can be proved from generally accepted principles of set theory, or else because we are Platonists). But we don’t know that we know it (because we can’t prove that set theory is consistent). That may be the best answer we will ever be able to give to the eight-year-old who asks why.

  2

  resisting a new concept

  the discovery of zero

  Entire books have been written about the concept of the number zero. This number was a latecomer to arithmetic, perhaps because it is difficult to visualize zero cubits or zero sheep. Even today, if you pick up a children’s counting book, you will probably not find a page devoted to zero.

  The number zero has two different interpretations, one of them a good deal more sophisticated than the other. First, in numbers like 2009 or 90,210, zero is used as a symbol to denote an empty place. That is the function of the zeros. Without the numeral zero, we would not be able to tell those numbers apart from 29 or 921. In a place-value number system, the meaning of “2” depends on where it is; in the number 29 it denotes two tens, but in the number 2009 it denotes two thousands.
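
  The place-value reading is easy to sketch in Python (an illustration of the idea only, not of any historical notation); stripping out the zero digits is exactly what collapses 2009 into 29:

```python
# In a place-value system each digit's worth depends on its position;
# the zeros contribute nothing themselves, but they hold the other
# digits in their proper places.
def place_value(digits):
    """Interpret a list of base-10 digits, most significant first."""
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

assert place_value([2, 0, 0, 9]) == 2009
assert place_value([2, 9]) == 29   # same nonzero digits, no placeholders
```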

  Of course cultures that did not use a place-value system, such as ancient Egypt or Rome, did not have this problem and did not need a symbol for an empty place. The Roman numeral MMIX (2009) is easy to distinguish from XXIX (29). Thus it is not surprising that the notion of zero did not arise in those societies. The Babylonians, however, did use a place-value number system, and yet for many centuries it did not occur to them to employ a mark to denote an empty place. Apparently the ambiguity between 2009 and 29 did not trouble them—perhaps because it is usually apparent from context which number is intended. Even today the same thing is true. If someone is telling you what year it is, you expect a number like 2009; if they are telling you how old they are, 29 is more reasonable.*

  On the number line, zero is the number that is one step to the left of 1.

  Only around 400 BC, near the end of Babylonia’s independent existence and some 1500 years after the cuneiform number system had first come into use, did scribes start to use two vertical wedges (∧∧) to denote an empty place. This was the first appearance in history of a symbol that meant zero, but it is clear that the Babylonians thought of it only as a placeholder and not as a number itself.

  THE SECOND, more subtle, concept of zero as an actual entity (as implied by the equation 1 – 1 = 0) arose in India. It appears for the first time in 628 AD, in a book called Corrected Treatise of Brahma, by Brahmagupta.

  As is the case for many ancient mathematicians, little information is available about Brahmagupta’s life. He was born in 598 in north central India, and was a member of a mathematical school (in the sense of a loosely knit community of scholars) in Ujjain. He lived not long after the end of the Gupta dynasty (ca. 320–550), a period of prosperity that is often considered a golden age of Indian culture, when much classical Sanskrit literature was written and when astronomers developed very accurate predictions of eclipses and planetary motions.

  One thing that stands out clearly in Brahmagupta’s work is his derisory attitude toward his rivals. The very title, Corrected Treatise of Brahma, is an implicit criticism of an earlier astronomical work. Brahmagupta makes comments such as this about his predecessors: “One is not a master through the treatises of Aryabhata, Visnucandra, etc., even when [they are] known [by heart]. But one who knows the calculations of Brahma [attains] mastery.”†

  Arrogant though he may have been, Brahmagupta clearly understood the nature of zero. He wrote, “[The sum] of two positives is positive, of two negatives negative; of a positive and a negative [the sum] is their difference; if they are equal it is zero.” Thus, zero is obtained by adding a positive number to a negative of equal magnitude; for example, 1 + (–1). This is what is meant by the modern notation 1 – 1. Further, Brahmagupta wrote that adding zero does not change the sign of a number, that 0 + 0 = 0, and that any number times zero gives zero. However, he was not sure about division by zero. Rather tautologically, he wrote, “A negative or a positive divided by zero has that as its divisor,” and he stated incorrectly that “a zero divided by a zero is zero.” Modern mathematicians would say that any division by zero is undefined.
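
  Brahmagupta’s rules agree with ordinary modern arithmetic, and the one point where he faltered is the one point where modern systems simply decline to answer. A quick check in Python (a modern illustration, not his notation):

```python
# Brahmagupta's rules for zero, checked against modern arithmetic.
assert 1 + (-1) == 0                   # equal positive and negative sum to zero
assert 5 + 0 == 5 and -5 + 0 == -5     # adding zero changes nothing
assert 0 + 0 == 0
assert 7 * 0 == 0                      # any number times zero gives zero

# Division by zero, where Brahmagupta hesitated, is left undefined today:
try:
    1 / 0
except ZeroDivisionError:
    print("division by zero is undefined")
```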

  It is noteworthy that zero goes hand in hand in Brahmagupta’s work with negative numbers. Indeed, the resistance to zero may be explained by the even greater difficulty of visualizing negative cubits or negative sheep. For centuries after Brahmagupta, mathematicians continued to avoid negative numbers in their formulas. For example, the solution of quadratic equations and cubic equations was made unduly complicated by mathematicians’ avoidance of negatives. They perceived the need for several different methods of solution, which we now condense into one formula.
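
  For the quadratic case, the single formula alluded to here is the familiar modern one; once zero and negative coefficients are admitted, every equation of the form ax² + bx + c = 0 (with a ≠ 0) is handled at one stroke:

```latex
\[
  ax^2 + bx + c = 0,\ a \neq 0
  \qquad\Longrightarrow\qquad
  x = \frac{-b \pm \sqrt{\,b^2 - 4ac\,}}{2a}.
\]
```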

 
