Luckily, physicists are often sensitive to such psychological pressures, and most of the time they try hard to cast their equations in the form of clean and clear cause-and-effect relationships, with one side giving rise to the other side. Take, for instance, the first of Maxwell’s four equations for electromagnetism:
div E = 4πρ
where E represents an electric field, ρ represents electric charge density (basically, a description of how much electric charge there is in each point of space), “div” stands for a certain operation in differential calculus called the “divergence”, and π is the familiar circular ratio 3.14159…
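For readers who would like to see the divergence spelled out (this gloss is ours, not part of the original discussion), in Cartesian coordinates it is simply the sum of the field’s three spatial rates of change:

div E = ∂Ex/∂x + ∂Ey/∂y + ∂Ez/∂z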
This formula is universally seen by physicists as saying, “A certain distribution of electric charges in space (the cause) always gives rise to a certain pattern of electric fields in space (the effect).” For historical reasons, however, the cause (the charge distribution) is conventionally placed on the right side of this equation and the effect (the electric field) on its left side, thus reversing the usual operation–result order. Why do physicists always write it in this flipped fashion? That’s hard to say, but basically it’s just a harmless “professional deformation”. In any case, Maxwell’s first equation intuitively embodies a physical cause-and-effect relationship, with the cause on the right side and its effect on the left side. (In fact, all four of Maxwell’s equations embody similar cause-and-effect relationships, and they all have this same kind of right-to-left causal flow.)
There is, however, another way of looking at Maxwell’s equations. For concreteness’ sake, let’s once again consider the first one, as shown above. It says that if you calculate the divergence of the electric field, you will obtain the charge density. Now such a calculation can also be seen as a kind of cause-and-effect or process-and-result relationship, wherein certain quantities are fed into a calculating machine that churns for a while and eventually outputs new quantities. Seen this way, the “cause”, or initial event (namely, the feeding of input values into the computing device), is always on the left side, while the “effect”, or subsequent event (namely, the number that the device spits out), is always on the right. So now we have a left-to-right causal flow!
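To make this left-to-right reading concrete with a toy example of our own: if the electric field grows radially as E = (x, y, z), then div E = 1 + 1 + 1 = 3 at every point of space, and so the calculating machine, fed this field, spits out the uniform charge density ρ = 3/(4π).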
But one must keep in mind that this is only a mathematical kind of causality, meaning you can calculate the charge density if you’re given the electric field everywhere in space. However, as we pointed out above, the equation can also be read as a physical kind of causality, asserting that if you arrange electrical charges in a certain way in space, you will always find that a specific pattern of electric fields surrounds them: in short, the charges produce the fields. When the equation is read in this latter way, the causality flows from right to left (i.e., from charges to fields). And that’s how physicists view this equation, whether they do so consciously or unconsciously. Indeed, it would strike a physicist as absurd if someone were to say that Maxwell’s first equation means that an electric field spread out all over space gives rise to a tiny electric charge sitting somewhere. That would sound as backwards as saying that a strong stench wafting all through a neighborhood will give rise to a frightened skunk crouching under a bush (note the use of a caricature analogy here).
In summary, the equals signs in Maxwell’s equations can be understood either as expressing physical causality (a physical cause giving rise to an effect), when they are read from right to left, or as expressing mathematical causality (a calculation giving rise to a result), when they are read from left to right. And Maxwell’s equations are in no way exceptional. Physicists always try to manipulate their equations so that they will have this quality — namely, with causes on one side and effects on the other. Doing so is certainly not logically necessary, but it contributes greatly to clarity. For example, here are two alternative ways of writing Maxwell’s first equation that are both perfectly correct yet would make physicists scratch their heads in puzzlement and ask, “What’s the point of writing it that way?”
div E / 2 − 2πρ = 0

div E / 4ρ = π
Indeed, these equations both cloud up the crux of the law, which is the fact that one phenomenon gives rise to another.
In short, physicists, no less than other people, have a weakness for, and also derive benefit from, the naïve analogy likening equations to cause-and-effect relations.
Does Multiplication Always Imply Getting Bigger?
For some concepts that one learns in school, there is an early naïve analogy that is very helpful, but there is no other familiar category that helps one develop the concept more deeply. In such cases, the naïve analogy will very likely be one’s only means for grasping the concept, and it will retain this primary role even after many years of education; refining the concept so that it becomes more general and flexible is then far harder. It so happens that multiplication and division, two of the most basic notions in mathematics, are cases of this sort.
Addition, subtraction, multiplication, and division are taught in elementary school and are presumed to have been fully understood by middle school. Since so many other mathematical notions are built on them, they are often called the four basic arithmetical operations. These classic notions feel as if they are part of the cultural heritage of every member of our society, and any adult claiming to have no idea what multiplication or division is would be looked at askance. Taught all the way through childhood, these notions should be clear as a bell to high-school and college students. And yet the belief that these operations have been mastered by most adults is an illusion. The next few sections illustrate how this can be so.
Let’s consider multiplication. We have found, in surveys of many quite advanced university students (we don’t mean advanced math majors), that if they are asked, “What is the most precise possible definition of multiplication?”, they are generally very satisfied with either of the following two definitions that we suggest:
Multiplying is repeatedly adding a value a certain number of times.
Multiplication is taking a times b, which means adding b up a times.
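Spelled out with concrete numbers, these definitions say, for instance, that 4 × 3 means

3 + 3 + 3 + 3 = 12

where the 3 is the value being added and the 4 merely counts how many times the addition is carried out.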
It’s hard to find anyone who disagrees in any way with these definitions, and virtually no one sees any way to improve upon them. We have also asked groups of advanced university students to supply definitions themselves, and exactly the same themes reappear. It always comes down to the idea that multiplication means, by definition, adding a given number over and over to itself, counting how many times it is done, even if the formulation is not always as concise or clear as the two definitions offered above. For example:
A multiplication is the iterated addition of a given number a specified number of times.
Multiplying means adding a given figure to itself as many times as one is told to do so.
To multiply is to add a particular number to itself as many times as the other number tells you to do so.
Multiplication is a calculation in which one is told how many times one should add a given quantity to itself.
On the Web, definitions of this sort abound. One site proposes: “Multiplication is thus nothing but an addition in which the numbers being added up are all equal to each other. This is why we say that it amounts to repeating the multiplicand as many times as there are units in the multiplier.”
For a bit of historical perspective on the question, one can take a look at definitions along these lines proposed by professional experts. Thus in 1821, the renowned French mathematician Étienne Bezout wrote, in his Treatise on Arithmetic Intended for Sailors and Footsoldiers: “Multiplying one number by another is summing up the first of these as many times as there are units in the other.”
Well, now… is this view of multiplication as repeated addition really as indisputable as it would seem from all the above? Hardly! As a matter of fact, this view is a naïve analogy that falls far short of the target, and in the long run, it is almost sure to lead anyone who relies on it into confusion.
First of all, this view of multiplication requires that one of the two values be a positive whole number, since otherwise “as many times” has no meaning. What would it mean to speak of adding a number to itself 2½ times or 1/3 of a time, let alone √2 times or π times? And yet, requiring one of the factors in a multiplication to be an integer should raise suspicions, since everyone knows that multiplying two non-integers is not forbidden; indeed, in school we all learn how to do it, and pocket calculators don’t balk at all at multiplying any two numbers they are given. What on earth would the expression “π × π” mean if at least one of the two factors had to be an integer?
The next stumbling block lurking in this definition is the common belief that when one adds b to itself over and over again, the result will always be greater than b. We would not merely expect the result to be somewhat greater than b, but, by definition, a times greater than b. Dictionaries confirm this naïve idea, as does everyday speech. Indeed, the words “multiply” and “multiplication” suggest a clear image of growth, never an image of shrinking (even though, as we pointed out in Chapter 4, things that shrink can be said to be “growing smaller”). Thus rabbits are said to multiply, quickly resulting in overcrowding; in good times, one’s assets multiply, making one wealthier; in bad times, risks multiply, making one less secure; and so on. The prefix “multi” also tends to make one think of growth, as in words like “multinational”, “multicolored”, “multilingual”, “multimillionaire”, and so forth. However, this preconception runs into a brick wall when one is instructed to multiply by a value less than 1, as doing so yields a result smaller than the multiplicand. This is incompatible with repeated addition.
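For instance, multiplying 8 by 0.5 gives 4 — a result smaller than 8, and one that no amount of repeatedly adding 8 to itself could ever produce.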
There is still more trouble. The best-known property of multiplication is that it is commutative — that is, for any numbers a and b, it is always the case that a × b = b × a. Why this should hold for every pair of numbers a and b is not at all obvious if multiplication is conceived of as repeated addition. In fact, the naïve analogy would suggest that multiplication is intrinsically asymmetric, since it treats the multiplicand and the multiplier differently: the former is repeatedly added to itself, while the latter counts how many times the operation is carried out. This certainly does not fit the image of commutativity, in which the two numbers play totally interchangeable roles. Since the naïve analogy gives no insight into this key property, a child (or an adult!) may be baffled by the fact that a added to itself b times always gives the same result as b added to itself a times. To be sure, one can enrich one’s notion of multiplication by arbitrarily tossing in the fact of commutativity like icing on the cake, but the naïve analogy of multiplication as repeated addition makes this fact seem mysterious rather than natural.
These minor stumbling blocks turn into serious obstacles when pupils who depend on the naïve analogy are given word problems to solve. For instance, when middle-school students in England were given the problem “If one gallon of petrol costs 2.47 pounds, what is the price of 0.26 of a gallon?”, only 44% of them recognized that this is in fact a multiplication problem. The remaining 56% took it to be a division problem (namely, 2.47 divided by 0.26)! And thus a multiplication problem that should be very easy even for elementary-schoolers stumped roughly half of the middle-schoolers.
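(For the record, the intended computation is the multiplication 2.47 × 0.26 ≈ 0.64 pounds; the division 2.47 ÷ 0.26 gives 9.5, a “price” for a fraction of a gallon that is nearly four times the price of a full gallon.)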
What happens if one changes the numbers in this problem? If one merely replaces “0.26” by “5” and asks the question again (“If one gallon of petrol costs 2.47 pounds, what is the price of 5 gallons?”), then 100% of the middle-schoolers solve it correctly. This discrepancy is due to the fact that the first problem doesn’t meet the naïve analogy’s image of adding a number repeatedly, since the idea of adding a number to itself 0.26 times makes no sense. On the other hand, using the naïve analogy of repeated addition works just fine in the modified problem (2.47 + 2.47 + 2.47 + 2.47 + 2.47). Discrepancies between participants’ performances on the two problems reflect the fact that the naïve analogy is of no help in the first one, yet is appropriate in the second one.
Adding Thrice and Fifty Times Are Different Kettles of Fish
It’s enlightening to compare the preceding findings with some experiments carried out in Brazil. The participants were teen-aged boys who had dropped out of school and were making a living as street vendors. The following simple problem was given to a group of them:
A boy wants to buy some chocolates. Each chocolate costs 50 cruzeiros. He decides to buy 3 of them. How much money will he need?
The same problem was also given to a different group, except that the two numbers were interchanged, as follows:
A boy wants to buy some chocolates. Each chocolate costs 3 cruzeiros. He decides to buy 50 of them. How much money will he need?
To all readers of this book it will surely be trivially obvious that each of the two boys will have to fork over 150 cruzeiros, even if one of them winds up with far fewer chocolates (at least in number) than the other one. When we read the two problems, they appear equally easy; the scenario is the same, they involve the same numbers (50 and 3), and they both involve the same arithmetical operation: multiplication. But were they equally easy for the two groups of street vendors? Not in the least.
The first problem was handled pretty well by most: 75% got it right. The second problem, by contrast, was not solved by any of the street vendors. The reason behind this discrepancy is relatively simple; it comes down to reliance on the naïve analogy of repeated addition. To solve the first problem, all one needs to do is add 50 + 50 + 50, to get 150 cruzeiros. This is just two additions, and it involves only very simple facts: first, that 50 + 50 = 100, and second, that 100 + 50 = 150. The variant problem, however, is another ball of wax entirely. To compute the answer, one has to carry out a very long process of iterated addition — namely, 3 + 3 + 3 + … + 3 + 3 + 3, involving fifty 3’s. As one can easily imagine, this is not a challenge that an elementary-school dropout would be very likely to be able to handle.
This might seem the moment to shower praise on our educational system, thanks to which we educated adults are all instantly able to solve a problem that to school dropouts seems impossibly hard. We all know in a flash that 50 × 3 equals 3 × 50, end of story. Given this contrast with the 0% success rate of the school dropouts, one might be tempted to conclude that schooling very effectively gets across the true nature of multiplication. However, things are not that simple.
There is no disputing the fact that schooling teaches us that the two numbers in a multiplication can be interchanged — we all know that multiplication is commutative, that a × b = b × a — and we all carry out such switches without a moment’s thought. However, carrying out multiplications using one’s knowledge of commutativity doesn’t mean that one’s understanding of multiplication, as an adult, goes far beyond that of elementary-school children. Indeed, a quick informal survey reveals that almost no one, aside from serious math enthusiasts, knows why it is the case that, for instance, 5 × 3 equals 3 × 5. Middle-school students, high-school students, even university students are generally unable to say why the two numbers in a product are switchable. How, then, do they justify to themselves the idea that five threes equals three fives, or symbolically, the fact that 3 + 3 + 3 + 3 + 3 = 5 + 5 + 5?
Most people, if asked this question, will answer readily that one can check this out in any specific case (“Just go get a calculator and try it out for whatever pair of numbers that you wish!”). Some people will state it more as an axiom: “In multiplication, you have the right to switch the two factors”; others will baldly assert, almost as if some kind of magic were involved, “That’s just how it is” or “Well, it’s known to be a fact.” In short, for most well-educated adults, since multiplication is conceived of as repeated addition, its commutativity appears simply as a kind of miraculous coincidence, lacking any clear explanation or reason.
The above-cited treatise on arithmetic by Étienne Bezout provides a somewhat wordy and obscure justification for the commutativity of multiplication. If one’s vision of multiplication is rooted in the naïve analogy of repeated addition, then asking why the order of the factors makes no difference in multiplication amounts to asking why two different repeated additions give the same answer, and there is no obvious symmetry between the two operations involved. Bezout tries to resolve this dilemma, but his words are not terribly clear:
As long as one considers numbers as abstract entities — that is, as long as one ignores the units attached to them — it makes little difference which of two numbers to be multiplied is taken as the multiplier and which as the multiplicand. For example, 3 times 4 is nothing but the triple of 1 taken four times, while 4 times 3 is the triple of 4 taken one time. Now it’s self-evident that 1 times 4 is the same thing as 4 times 1; and one can apply the same reasoning to any other number.
Most adults consider the fact that a × b always equals b × a to be a very useful but unexplained coincidence, which simply is empirically true. They “understand” the commutativity of multiplication in much the same way as they “understand” why bicycles don’t topple over and why airplanes can stay airborne: simply because they’ve seen such things for most of their lives and have long since forgotten that these phenomena are mysteries that crave explanation. And so, although education certainly drills into students the rote fact that multiplication is commutative, it fails to instill a deep understanding of multiplication’s nature; instead, it leaves them dependent on their initial naïve analogy.