Asimov's New Guide to Science


by Isaac Asimov


  The eighteenth century began a kind of golden age of automatons. Automatic toy soldiers were constructed for the French dauphin; an Indian ruler had a six-foot mechanical tiger.

  Such royal conveniences, however, were outstripped by commercial ventures. In 1738, a Frenchman, Jacques de Vaucanson, constructed a mechanical duck of copper that could quack, bathe, drink water, eat grain, seem to digest and then excrete it. People paid to see the duck, and it earned money for its owners for decades but no longer survives.

  A later automaton does survive in a Swiss museum at Neuchâtel. It was constructed in 1774 by Pierre Jacquet-Droz and is an automatic scribe. It is in the shape of a boy who dips his pen in an inkwell and writes a letter.

  To be sure, such automatons are completely inflexible. They can only follow the motions dictated by the clockwork.

  Nevertheless, it was not long before the principles of automatism were made flexible and turned to useful labor rather than mere show.

  The first great example was an invention of a French weaver, Joseph Marie Jacquard. In 1801, he devised the Jacquard loom.

  In such a loom, needles ordinarily move through holes set in a block of wood and there engage the threads in such a way as to produce the weaving interconnections.

  Suppose, though, that a punched card is interposed between needles and holes. Holes in the card here and there allow needles to pass through and enter the wood as before. Where the card has not been punched, the needles are stopped. Thus, some interconnections are made, and some are not.

  If there are different punched cards with different arrangements of holes, and if these are inserted into the machine in a particular order, then the changing pattern of stitches allowed or blocked can produce a design. By appropriate adjustment of the cards, any pattern, in principle, can be formed quite automatically. In modern terms, we would say that the card arrangement serves to program the loom, which then does something, of its own apparent accord, that could be mistaken for artistic creativity.

  The most important aspect of the Jacquard loom was that it accomplished its amazing successes (by 1812, there were 11,000 of these looms in France; and once the Napoleonic wars were over, they spread to Great Britain) by a simple yes-no dichotomy. Either a hole existed in a special place or it did not, and the pattern yes-no-yes-yes-no and so on over the face of the card was all that was necessary.
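  The hole-or-no-hole idea can be sketched in a few lines of Python; the card layout and the behavior of the threads here are invented purely for illustration, not taken from any actual loom.

      # Each row of a punched card is a pattern of yes-no values:
      # True where a hole is punched (the needle passes), False where it is not.
      def weave_row(card_row):
          return ["engaged" if hole else "stopped" for hole in card_row]

      # A "program" is simply an ordered stack of such cards.
      cards = [
          [True, False, True, True, False],
          [False, True, False, False, True],
      ]
      for row in cards:
          print(weave_row(row))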

  Ever since, more and more complicated devices designed to mimic human thought have made use of ever more subtle methods of dealing with yes-no patterns. It might seem totally ridiculous to expect to get complicated, human-seeming results, from a simple yes-no pattern; but actually the mathematical basis for it had been demonstrated in the seventeenth century, after thousands of years of attempts to mechanize arithmetical calculations and to find aids (increasingly subtle) for the otherwise unaided operation of the human mind.

  ARITHMETICAL CALCULATIONS

  The first tools for the purpose must have been human fingers. Mathematics began when human beings used their own fingers to represent numbers and combinations of numbers. It is no accident that the word digit stands both for a finger (or toe) and for a numerical integer.

  From that, another step leads to the use of other objects in place of fingers—small pebbles, perhaps. There are more pebbles than fingers, and intermediate results can be preserved for future reference in the course of solving the problem. Again, it is no accident that the word calculate comes from the Latin word for “pebble.”

  Pebbles or beads, lined up in slots or strung on wires, formed the abacus, the first really versatile mathematical tool (figure 17.5). With this device, it became easy to represent units, tens, hundreds, thousands, and so on. By manipulating the pebbles, or counters, of an abacus, one could quickly carry through an addition such as 576 + 289. Furthermore, any instrument that can add can also multiply, for multiplication is only repeated addition. And multiplication makes raising to a power possible, because this is only repeated multiplication (for example, 4⁵ is shorthand for 4 × 4 × 4 × 4 × 4). Finally, running the instrument backward, so to speak, makes possible the operations of subtraction, division, and extracting a root.
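  The claim that an adding device can also multiply, and that a multiplying device can raise to a power, can be put directly into a short Python sketch (an illustration of the principle only, not of any abacus technique):

      # Multiplication as repeated addition.
      def multiply(a, b):
          total = 0
          for _ in range(b):
              total += a        # add a, b times over
          return total

      # Raising to a power as repeated multiplication.
      def power(a, n):
          result = 1
          for _ in range(n):
              result = multiply(result, a)
          return result

      print(multiply(576, 289))   # 166464
      print(power(4, 5))          # 1024, i.e., 4 x 4 x 4 x 4 x 4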

  Figure 17.5. Adding with an abacus. Each counter below the bar counts 1; each counter above the bar counts 5. A counter registers when it is pushed to the bar. Thus in the top setting here, the right-hand column reads 0; the one to the left of that reads 7 or (5 + 2); the next left reads 8 or (5 + 3); and the next left reads 1: the number shown, then, is 1870. When 549 is added to this, the right column becomes 9 or (9 + 0); the next addition (4 + 7) becomes 1 with 1 to carry, which means that one counter is pushed up in the next column; the third addition is 9 + 5, or 4 with 1 to carry; and the fourth addition is 1 + 1 or 2: the addition gives 2419, as the abacus shows. The simple maneuver of carrying 1 by pushing up a counter in the next column makes it possible to calculate very rapidly; a skilled operator can add faster than an adding machine can, as was shown by an actual test in 1946.
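  The column-by-column carrying described in the caption can be imitated in Python, with plain lists of digits standing in for the rods of counters (a rough sketch, with the units column listed first):

      def abacus_add(a_digits, b_digits):
          # Add two numbers given as digit lists, least significant column first.
          result, carry = [], 0
          for i in range(max(len(a_digits), len(b_digits))):
              a = a_digits[i] if i < len(a_digits) else 0
              b = b_digits[i] if i < len(b_digits) else 0
              total = a + b + carry
              result.append(total % 10)   # what remains in this column
              carry = total // 10         # the counter pushed up in the next column
          if carry:
              result.append(carry)
          return result

      # 1870 + 549, written units first:
      print(abacus_add([0, 7, 8, 1], [9, 4, 5]))   # [9, 1, 4, 2], that is, 2419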

  The abacus can be considered the second digital computer. (The first, of course, was the fingers.)

  For thousands of years the abacus remained the most advanced form of calculating tool. It actually dropped out of use in the West after the end of the Roman Empire and was reintroduced by Pope Sylvester II about 1000 A.D., probably from Moorish Spain, where its use had lingered. It was greeted on its return as an Eastern novelty, its Western ancestry forgotten.

  The abacus was not replaced until a numerical notation was introduced that imitated the workings of the abacus. (This notation, the one familiar to us nowadays as Arabic numerals, was originated in India some time about 800 A.D., was picked up by the Arabs, and finally introduced to the West about 1200 A.D. by the Italian mathematician Leonardo of Pisa.)

  In the new notation, the nine different pebbles in the units row of the abacus were represented by nine different symbols, and those same nine symbols were used for the tens row, hundreds row, and thousands row. Counters differing only in position were replaced by symbols differing only in position, so that in the written number 222, for instance, the first 2 represents 200, the second 20, and the third 2 itself; that is, 200 + 20 + 2 = 222.
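  Positional notation amounts to a simple rule: reading from the left, each new digit multiplies what has been accumulated so far by ten and then adds itself. A minimal Python sketch:

      def value_of(digits):
          # Interpret a string of decimal digits positionally.
          total = 0
          for ch in digits:
              total = total * 10 + int(ch)   # shift one place left, then add the digit
          return total

      print(value_of("222"))    # 200 + 20 + 2 = 222
      print(value_of("1870"))   # 1000 + 800 + 70 + 0 = 1870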

  This “positional notation” was made possible by recognition of an all-important fact which the ancient users of the abacus had overlooked. Although there are only nine counters in each row of the abacus, there are actually ten possible arrangements. Besides using any number of counters from one to nine in a row, it is also possible to use no counter—that is, to leave the place at the counting position empty. This escaped all the great Greek mathematicians and was not recognized until the ninth century, when some unnamed Hindu thought of representing the tenth alternative by a special symbol which the Arabs called “sifr” (“empty”) and which has come down to us, in consequence, as “cipher” or, in more corrupt form, “zero.” The importance of the zero is recorded in the fact that the manipulation of numbers is still sometimes called “ciphering,” and that to solve any hard problem is to “decipher” it.

  Another powerful tool grew out of the use of exponents to express powers of numbers. To express 100 as 10², 1,000 as 10³, 100,000 as 10⁵, and so on, is a great convenience in several respects; not only does it simplify the writing of large numbers, but it reduces multiplication and division to simple addition or subtraction of the exponents (e.g., 10² × 10³ = 10⁵) and makes raising to a power or extraction of a root a simple matter of multiplying or dividing exponents (e.g., the cube root of 1,000,000, which is 10⁶, is found by dividing the exponent by 3, giving 10²). Now this is all very well, but very few numbers can be put into simple exponential form. What could be done with a number such as 111? The answer to that question led to the tables of logarithms.

  The first to deal with this problem was the seventeenth-century Scottish mathematician John Napier. Obviously, expressing a number such as 111 as a power of 10 involves assigning a fractional exponent to 10 (the exponent is between 2 and 3). In more general terms, the exponent will be fractional whenever the number in question
is not a whole-number power of the base number. Napier worked out a method of calculating the fractional exponents of numbers, and he named these exponents logarithms. Shortly afterward, the English mathematician Henry Briggs simplified the technique and worked out logarithms with 10 as the base. The Briggsian logarithms are less convenient in calculus, but they are the more popular for ordinary computations.

  Such nonintegral exponents are irrational: that is, they cannot be expressed in the form of an ordinary fraction. They can be expressed only as an indefinitely long decimal lacking a repeating pattern. Such a decimal can be calculated, however, to as many places as necessary for the desired precision.

  For instance, let us say we wish to multiply 111 by 254. The Briggsian logarithm of 111 to five decimal places is 2.04532, and for 254 it is 2.40483. Adding these logarithms, we get 2.04532 + 2.40483 = 4.45015; and 10 raised to the power 4.45015 is approximately 28,194, the actual product of 111 × 254. If we want to get still closer accuracy, we can use the logarithms to six or more decimal places.
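  The same calculation can be checked in a few lines of Python, using the standard math module (a sketch of the procedure just described, not of how logarithm tables were actually compiled):

      import math

      log_111 = round(math.log10(111), 5)   # 2.04532
      log_254 = round(math.log10(254), 5)   # 2.40483
      log_sum = log_111 + log_254           # 4.45015

      print(10 ** log_sum)   # about 28193.6 -- close to the true product
      print(111 * 254)       # 28194 exactly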

  Tables of logarithms simplified computation enormously. In 1622 an English mathematician named William Oughtred made things still easier by devising a slide rule. Two rulers are marked with a logarithmic scale, in which the distances between numbers get shorter as the numbers get larger: for example, the first division holds the numbers from 1 to 10; the second division, of the same length, holds the numbers from 10 to 100; the third from 100 to 1,000; and so on. By sliding one rule along the other to an appropriate position, one can read off the result of an operation involving multiplication or division. The slide rule makes computations as easy as addition and subtraction on the abacus; though in both cases, to be sure, one must be skilled in the use of the instrument.

  CALCULATING MACHINES

  The first step toward a truly automatic calculating machine was taken in 1642 by the French mathematician Blaise Pascal. He invented an adding machine that did away with the need to move the counters separately in each row of the abacus. His machine consisted of a set of wheels connected by gears. When the first wheel—the units wheel—was turned ten notches to its 0 mark, the second wheel turned one notch to the number 1, so that the two wheels together showed the number 10. When the tens wheel reached its 0, the third wheel turned a notch, showing 100, and so on. (The principle is the same as that of the mileage indicator in an automobile.) Pascal is supposed to have had more than fifty such machines constructed; at least five are still in existence.
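  The cascade of carries from wheel to wheel works just as it does on a mileage indicator, and can be sketched in Python (an illustration of the principle only, not of Pascal's gearing):

      def advance(wheels, notches):
          # wheels: list of digits, units wheel first. Turn the units wheel
          # forward by `notches`; a full turn of any wheel nudges the next one.
          wheels = list(wheels)
          carry = notches
          i = 0
          while carry:
              if i == len(wheels):
                  wheels.append(0)          # add a new wheel if we run out
              total = wheels[i] + carry
              wheels[i] = total % 10        # where this wheel comes to rest
              carry = total // 10           # passed on to the next wheel
              i += 1
          return wheels

      print(advance([9, 9, 1], 1))   # [0, 0, 2] -- the register now reads 200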

  Pascal’s device could add and subtract. In 1674, the German mathematician Gottfried Wilhelm von Leibnitz went a step further and arranged the wheels and gears so that multiplication and division were as automatic and easy as addition and subtraction. In 1850, a United States inventor named D. D. Parmalee patented an important advance which added greatly to the calculator’s convenience: in place of moving the wheels by hand, he introduced a set of keys—pushing down a marked key with the finger turned the wheels to the correct number. This is the mechanism of what is now familiar to us as the old-fashioned cash register.

  Leibnitz, however, went on to do something more. Perhaps as a result of his efforts to mechanize calculation, he thought of its ultimate simplification by inventing the binary system.

  Human beings usually use a ten-based system (decimal), in which ten different digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) are used to represent, in different amounts and combinations, all conceivable numbers. In some cultures, other bases are used (there are five-based systems, twenty-based systems, twelve-based systems, sixty-based systems, and so on), but the ten-based is by far the most popular. It undoubtedly arose out of the fact that we happen to have evolved with ten fingers on our two hands.

  Leibnitz saw that any number could be used as a base and that, in many ways, the simplest to operate mechanically would be a two-based system (binary).

  The binary notation uses only two digits: 0 and 1. It expresses all numbers in terms of powers of 2. Thus, the number one is 2⁰, the number two is 2¹, three is 2¹ + 2⁰, four is 2², and so on. As in the decimal system, the power is indicated by the position of the symbol. For instance, the number four is represented by 100, read thus: (1 × 2²) + (0 × 2¹) + (0 × 2⁰), or 4 + 0 + 0 = 4 in the decimal system.

  As an illustration, let us consider the number 6,413. In the decimal system it can be written (6 × 10³) + (4 × 10²) + (1 × 10¹) + (3 × 10⁰); remember that any number to the zero power equals 1. Now in the binary system we add numbers in powers of 2, instead of powers of 10, to compose a number. The highest power of 2 that leaves us short of 6,413 is the twelfth; 2¹² is 4,096. If we now add 2¹¹, or 2,048, we have 6,144, which is 269 short of 6,413. Next, 2⁸ adds 256 more, leaving 13; we can then add 2³, or 8, leaving 5; then 2², or 4, leaving 1; and 2⁰ is 1. Thus we might write the number 6,413 as (1 × 2¹²) + (1 × 2¹¹) + (1 × 2⁸) + (1 × 2³) + (1 × 2²) + (1 × 2⁰). But, as in the decimal system, each digit in a number, reading from the left, must represent the next smaller power. Just as in the decimal system we represent the additions of the third, second, first, and zero powers of 10 in stating the number 6,413, so in the binary system we must represent the additions of the powers of 2 from the twelfth down to the zero power. In the form of a table this would read:

  1 × 2¹² = 4,096
  1 × 2¹¹ = 2,048
  0 × 2¹⁰ =     0
  0 × 2⁹  =     0
  1 × 2⁸  =   256
  0 × 2⁷  =     0
  0 × 2⁶  =     0
  0 × 2⁵  =     0
  0 × 2⁴  =     0
  1 × 2³  =     8
  1 × 2²  =     4
  0 × 2¹  =     0
  1 × 2⁰  =     1
  ———————————————
            6,413

  Taking the successive multipliers in the column at the left (as we take 6, 4, 1, and 3 as the successive multipliers in the decimal system), we write the number in the binary system as 1100100001101.
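  The same string of binary digits can be produced mechanically by repeated halving, each remainder giving one digit. A short Python sketch:

      def to_binary(n):
          # Return the binary digits of n as a string, highest power of 2 first.
          if n == 0:
              return "0"
          digits = ""
          while n:
              digits = str(n % 2) + digits   # 1 if this power of 2 is used, 0 if not
              n //= 2
          return digits

      print(to_binary(6413))            # 1100100001101
      print(int("1100100001101", 2))    # 6413, converting back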

  This looks pretty cumbersome. It takes 13 digits to write the number 6,413, whereas in the decimal system we need only four. But for a computing machine the system is just about the simplest imaginable. Since there are only two different digits, any operation can be carried out in terms of yes-and-no.

  Presumably something as simple as the presence or the absence of a needle passing through a hole in a Jacquard card can mimic the yes and the no respectively, or the 1 and the 0. With the proper ingenuity, the combinations can be so adjusted as to have 0 + 0 = 0, 0 + 1 = 1, 0 × 0 = 0, 0 × 1 = 0, and 1 × 1 = 1. Once such combinations are possible, we can imagine all arithmetical calculations performable on something like a Jacquard loom.

  Nor are ordinary calculations the only conceivable possibility. The system can be widened to include logical statements that we do not often think of as representing arithmetic.

  In 1936, the English mathematician Alan Mathison Turing showed that any problem could be solved mechanically if it could be expressed in the form of a finite number of manipulations that could be performed by the machine.

  In 1938, an American mathematician and engineer, Claude Elwood Shannon, pointed out in his master’s thesis that deductive logic, in a form known as Boolean algebra, could be handled by means of the binary system. Boolean algebra refers to a system of symbolic logic suggested in 1854 by the English mathematician George Boole in a book entitled An Investigation of the Laws of Thought. Boole observed that the types of statement employed in deductive logic could be represented by mathematical symbols, and he went on to show how such symbols could be manipulated according to fixed rules to yield appropriate conclusions.

  To take a very simple example, consider the following statement: “Both A and B are true.” We are to determine the truth or falsity of this statement by a strictly logical exercise, assuming that we know whether A and B, respectively, are true or false. To handle the problem in binary terms, as Shannon suggested, let 0 represent “false” and 1 represent “true.” If A and B are both false, then the statement “Both A and B are true” is false. In other words, 0 and 0 yield 0. If A is true but B is false (or vice versa), then the statement again is false: that is, 1 and 0 (or 0 and 1) yield 0. If A is true and B is true, then the statement “Both A and B are true” is true. Symbolically, 1 and 1 yield 1.

  Now these three alternatives correspond to the three possible multiplications in the binary system—namely: 0 × 0 = 0, 1 × 0 = 0, and 1 × 1 = 1. Thus the problem in logic posed by the statement “Both A and B are true” can be manipulated by multiplication. A device (properly programed) therefore can handle this logical problem as easily, and in the same way, as it handles ordinary calculations.

  In the case of the statement “Either A or B is true,” the problem is handled by addition instead of by multiplication. If neither A nor B is true, then this statement is false. In other words, 0 + 0 = 0. If A is true and B false, or vice versa, the statement is true; in these cases 1 + 0 = 1 and 0 + 1 = 1. If both A and B are true, the statement is certainly true, and 1 + 1 = 10. (The significant digit in the 10 is the 1; the fact that it is moved over one position is immaterial. In the binary system, 10 represents (1 × 2¹) + (0 × 2⁰), which is equivalent to 2 in the decimal system.)
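  Shannon's correspondence, as described here, can be sketched in a few lines of Python: with 1 for “true” and 0 for “false,” the and-statement behaves like binary multiplication and the or-statement like binary addition, with 1 + 1 treated as 1 because only the significant digit matters.

      def both(a, b):            # "Both A and B are true"
          return a * b           # 0 x 0 = 0, 1 x 0 = 0, 1 x 1 = 1

      def either(a, b):          # "Either A or B is true"
          return min(a + b, 1)   # 0 + 0 = 0, 1 + 0 = 1, 1 + 1 -> 1

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "AND:", both(a, b), "OR:", either(a, b))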

  Boolean algebra has become important in the engineering of communications and forms part of what is now known as information theory.

  Artificial Intelligence

  The first person who really saw the potentialities of the punch cards of the Jacquard loom was an English mathematician, Charles Babbage. In 1823, he began to design and build a device he called a Difference Engine, and then, in 1836, a more complicated Analytical Engine, but completed neither.

  His notions were, in theory, completely correct. He planned to have arithmetical operations carried out automatically by the use of punch cards and then to have the results either printed out or punched out on blank cards. He also planned to give the machine a memory by enabling it to store cards that had been properly punched and then to make use of them later when called upon to do so.

 
