
The Man Who Invented the Computer


by Jane Smiley


  The question remains: would the computer as we know it have been invented without Atanasoff? I do not think ENIAC would have been; therefore, the computers that grew out of ENIAC and John von Neumann’s thoughts about ENIAC might not have been invented. When Konrad Zuse found himself in the mountains at the Austrian border and pondered his future testing the fat content of milk at the local dairies, he heard from IBM—they were interested in his ideas—but they might not have been had they not felt the prick of computer development in the United States. It does not seem as though Howard Aiken’s decimal Mark I–IV computers and those similar to them were likely to evolve very quickly into the small, powerful, and handy machines we have; the inventors devoted to analog machines did not believe in electronic machines even when they saw them work. It does not seem likely, therefore, that they would have switched to electronic machines on their own. Tommy Flowers, Max Newman, and Alan Turing knew what electronics could do—it is possible that the computer industry could have blossomed in England rather than the United States, but even aside from the problem of British security concerns after the war (as far in philosophy from von Neumann’s practice of encouraging and even forcing the sharing of information as it is possible to be), Colossus operated on different principles from the American computers designed originally to solve mathematical problems. On the other hand, if the ABC had not been invented, the need to solve very complex mathematical problems, especially those, at first, relating to the invention of the H-bomb, would have pressed mathematicians into some sort of calculating solution. The need was there. It would have been met at some point. But the ABC was invented, and as Kirwan Cox puts it, “The ideas [about computers] Atanasoff had were things that have continued to this day—the machine has been completely surpassed, but the concepts he had have not been surpassed.”

  For those of us who aren’t mathematicians, inventors, physicists, or engineers, the history of the invention of the computer is a fascinating look at both human history and human character. There was no inventor of the computer who was not a vivid personality, and no two are alike. It is Alan Turing who has captured the imagination of the culture, perhaps because of his brilliant mind and his tragic death, but Konrad Zuse is at least as idiosyncratic, and his life was even more dramatic. Like Atanasoff, he lived until 1995, long enough to be remembered, and vindicated, too. The most poignant figure, in some ways, may be Tommy Flowers, who remains largely unsung. But perhaps our most problematic character is John von Neumann. Scott McCartney considers him a thief; Norman Macrae and Kati Marton consider him a visionary. Everyone considers him a genius. As for me, von Neumann is the man whose memoirs I would have liked to read, the man at the center of everything, the man of Budapest and the man of Washington, D.C. I would like to know who he thought had invented the computer.

  Acknowledgments

  Thank you to the Sloan Foundation for funding this project. The invention of the computer is a wonderful story, and an important one.

  Thank you to William Silag, for telling me this story the first time, in 1984, and for writing an article about Atanasoff in the Palimpsest, the Iowa history magazine, that year (and for plenty else, besides).

  Thank you to John Gustafson, for much help in understanding all the issues, and for his contributions to the manuscript.

  Thanks to Kirwan Cox, documentary teacher/writer/researcher, who shared information from the interviews and other research he has done for a television documentary on John Atanasoff and the ABC, which is being produced by Eyesteelfilm, Montreal, for History Television in Canada.

  Thank you to Robert Armstead, for editing and information.

  All mistakes are mine.

  Appendices

  John Gustafson, PhD

  Appendix A | Linear Solvers

  The problem that motivated John Atanasoff to build an electronic computer was one that had challenged mathematicians for many centuries. A Babylonian clay tablet from about 300 BC gives this example of how a system of two equations can arise:

  There are two fields whose total area is 1,800 square yards. One produces grain at the rate of 2/3 of a bushel per square yard while the other produces grain at the rate of 1/2 a bushel per square yard. If the total yield is 1,100 bushels, what is the size of each field?*

  Translated into equations, with x and y for the areas of each field, this word problem says that

  x + y = 1,800 square yards

  (2/3)x + (1/2)y = 1,100 bushels

  The Chinese also studied such problems, and in the Jiuzhang Suanshu, or Nine Chapters on the Mathematical Art, they provided examples of systems involving up to six equations in six unknown quantities as early as 200 BC.

  Even though such problems could be posed very easily, the effort to solve them seemed extraordinary and out of proportion to the simplicity of the statement of the problem. At the risk of reminding the reader of some of the more tedious moments spent in middle school algebra class, the way to solve the above system of two equations is to scale one equation so that the number multiplying x or y in one equation matches that of the other equation, and then subtract the equations to eliminate that variable. If we multiply both sides of the first equation by 2/3, for example, the two equations line up nicely:

  (2/3)x + (2/3)y = 1,200

  (2/3)x + (1/2)y = 1,100

  and we can subtract the second equation from the first to get a system that involves only y:

  (1/6)y = 100

  This is called “forward elimination,” where you eliminate one variable at a time from the system. After that, you “backsolve”; in the example above, y must be 600, and we can use the first equation x + y = 1,800 to conclude that x = 1,200.
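  For readers who want to see the procedure spelled out, here is a minimal sketch in Python (ours, not anything from the book or from Atanasoff) that mirrors the steps just described: forward elimination, then backsolving. The function name solve_linear_system is invented for illustration, and the sketch does no pivoting, so it assumes no zero ever appears on the diagonal:

    from fractions import Fraction

    def solve_linear_system(A, b):
        """Solve A x = b by forward elimination followed by backsolving."""
        n = len(A)
        A = [row[:] for row in A]   # work on copies of the inputs
        b = b[:]
        # Forward elimination: remove variable k from every later equation.
        for k in range(n):
            for i in range(k + 1, n):
                scale = A[i][k] / A[k][k]      # factor that lines the equations up
                for j in range(k, n):
                    A[i][j] -= scale * A[k][j]
                b[i] -= scale * b[k]
        # Backsolve: the last equation now has one unknown; work upward from it.
        x = [Fraction(0)] * n
        for i in range(n - 1, -1, -1):
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

    # The two-field problem: x + y = 1,800 and (2/3)x + (1/2)y = 1,100.
    A = [[Fraction(1), Fraction(1)], [Fraction(2, 3), Fraction(1, 2)]]
    b = [Fraction(1800), Fraction(1100)]
    print(solve_linear_system(A, b))   # [Fraction(1200, 1), Fraction(600, 1)]

Exact fractions are used so the sketch reproduces the hand calculation above without rounding; x comes out to 1,200 and y to 600, as before.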

  What stymied human calculators was that the work to eliminate every variable grew as the cube of the number of equations. In the two-by-two example above, all one had to do was scale the first equation and subtract it from the second. But in a six-by-six problem (the largest one attempted in the Chinese tome), the first equation would have to be scaled for each of the other equations to eliminate that first unknown variable, a task that requires performing arithmetic on the entire six-by-six problem description (thirty-six numbers). That leaves a problem with five equations in five unknowns, so one has to repeat the elimination task, until all that is left is a simple equation in one unknown quantity. The “forward elimination” to get to a simple problem of one equation in one unknown is like a pyramid of arithmetic work. For a system of n equations, the base of the pyramid of work is n by n, working up to a tip that is 1 by 1, and the volume of that pyramid (the total amount of work) is proportional to the cube of n. (The backsolving task is still tedious, but only grows as the square of n.)
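  To make the pyramid image concrete, this small sketch (again ours, not the book’s) counts the row-level multiply-and-subtract pairs that forward elimination performs for a system of n equations and compares the total with n³/3; doing each of those operations by hand on ten-decimal numbers multiplies the labor many times over:

    def elimination_ops(n):
        """Multiply/subtract pairs performed by forward elimination."""
        ops = 0
        for k in range(n):               # one elimination step per variable
            for i in range(k + 1, n):    # every equation below the pivot row
                ops += (n - k) + 1       # the remaining row plus its right-hand side
        return ops

    for n in (2, 6, 10, 100):
        print(n, elimination_ops(n), round(n**3 / 3))
    # prints: 2 3 3 / 6 85 72 / 10 375 333 / 100 338250 333333

The exact count runs a little above n³/3 and approaches it as n grows, which is the cubic “volume of the pyramid” in the paragraph above.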

  In the 1700s, to solve even ten equations in ten unknowns was considered a nearly insurmountable task. It requires more than three thousand multiplications and subtractions, and each arithmetic operation usually must be done with at least ten decimals of precision to avoid rounding errors that would make the result unacceptably inaccurate. The German mathematician Carl Friedrich Gauss needed to solve a system of six equations in the early 1800s when he was trying to plot the course of an observable asteroid, Pallas, and spent years grinding away at the numbers using a method almost identical to that explained by the Chinese two millennia earlier; that method now bears the name Gaussian elimination.

  By 1836, Charles Babbage had conceived his mechanical (steam-powered) Analytical Engine, and in pitching his plan for it to the funding agencies of his era, he led with the idea that it could be used to solve systems of equations:

  In the absence of a special engine for the purpose, the solution of large sets of simultaneous equations is a most laborious task, and a very expensive process indeed, when it has to be paid for, in the cases in which the result is imperatively needed.

  When a physical problem demanded a logarithm, a cosine, or the calculation of a physical quantity like energy, it might have required a few dozen calculations per input quantity, and human calculators knew it was tedious work but not intractable. Solving systems of equations was regarded as intractable, since the work grew as the cube of the number of unknown quantities. Whether one used an abacus, a slide rule, or a desktop calculator like those made by Monroe or Marchant in the twentieth century, it was simply a matter of patience and a bit of skill to bull through the problems that arise with a single unknown variable. But to solve systems of n equations in n unknowns was, and is, the standard by which computational speed is measured.

  The speed of computers at solving systems of linear equations has been tracked and publicized since the 1970s. The Top 500 list of computers in the world, analogous to the lists business magazines maintain of the Top 500 companies in the world, is based on this time-honored problem: how fast can the system solve n equations in n unknowns? In 2010, the computers at the top of the list solve problems that have more than a million unknown quantities, and they solve them at speeds exceeding a quadrillion operations per second. Compared to the Atanasoff computer of 1940, they are astronomically faster, yet they use the same procedure and the same precision for what remains the fundamental test of any computer more than seven decades later.

  Appendix B | Binary Arithmetic

  People accustomed to working with numbers using the usual decimal (base ten) arithmetic notation tend to forget that the basis for that notation is biological and not mathematical: we have ten fingers (digits) to count with. From our earliest years, we are taught the Arabic system of writing the symbols 1, 2, 3, 4, 5, 6, 7, 8, 9 for the quantities one to nine, and that numbers larger than that require more than one symbol. By recording how many tens there are in a number, then how many hundreds, and so on, every whole number is expressed in a unique way. And because Arabic is written from right to left, the tens and hundreds and thousands positions are added to the left as numbers get progressively larger, not to the right. This is “base ten” or “decimal” arithmetic because it uses powers of ten. When one reads the number 7,239, say, it immediately conveys the quantity (7 × 1000) + (2 × 100) + (3 × 10) + 9.

  We also commonly use clock arithmetic, which uses base sixty. The number of seconds goes up to 59 and then requires a new symbol, 1:00, to indicate 1 minute, 0 seconds. In other words, we work up to 59:59 and then one more second causes us to write it as 1:00:00—1 hour, no minutes, no seconds. There are many ways to represent numbers other than decimal notation.

  The decimal system is convenient for counting on fingers, but it creates the inconvenience of having to memorize large tables for addition and multiplication. With rote memorization and many hours of practice, children learn to recite combinations like 7 × 8 = 56 without working through the derivation that 7 groups of 8 is the same number as 5 groups of 10 plus 6.

  For automatic computation, it is certainly possible to build machines that work with decimal numbers. The older-style mechanical odometer on a car worked by rotating a wheel numbered 0 to 9; at the point where the wheel rolled over to 0, a peg advanced the wheel to its left by one digit to indicate that another group of ten had accumulated. That’s the “carry,” and all computing mechanisms require a way to make sure that when an operation produces a number too large to store in one place, the overflow is carried to the next higher place in the notational system. More complex mechanisms allow counting down as well as counting up; repeated counting gives addition and subtraction, and repeated addition and subtraction give multiplication and division. Each string on a Chinese abacus holds beads in positions that represent the numbers 0 to 9, and the operator manually propagates the carry from the ones place to the tens place to the hundreds place, and so on.
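  As an illustration of how a carry ripples from wheel to wheel, here is a small sketch in Python (ours, purely illustrative; the function name advance is invented) of an odometer-style decimal counter:

    def advance(wheels):
        """Advance an odometer by one step. `wheels` is a list of digits,
        most significant wheel first, e.g. [0, 9, 9] for 099."""
        i = len(wheels) - 1            # start at the rightmost wheel
        while i >= 0:
            wheels[i] += 1
            if wheels[i] < 10:         # no rollover, so no carry: done
                return wheels
            wheels[i] = 0              # this wheel rolls over to 0...
            i -= 1                     # ...and the peg advances the wheel to its left
        return wheels                  # every wheel rolled over: the odometer wraps

    print(advance([0, 9, 9]))   # [1, 0, 0]: one carry ripples across two wheels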

  Binary arithmetic uses the number 2 instead of the number 10 as its base. It thus requires only two symbols, 0 and 1, to record numbers. It takes more digits to write a given number, as can be seen simply by looking at the rather bulky-looking binary equivalents of the numbers 0 to 10:

  Decimal Binary

  0 0

  1 1

  2 10

  3 11

  4 100

  5 101

  6 110

  7 111

  8 1000

  9 1001

  10 1010
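  The table can be generated mechanically; here is a one-loop check in Python (ours), using the language’s built-in base-2 conversion:

    for n in range(11):
        print(n, bin(n)[2:])   # bin(5) returns '0b101'; strip the '0b' prefix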

  However, the binary system has at least one major advantage over the decimal system: the arithmetic tables are extremely small and simple. For addition, there are only four entries:

  0 + 0 = 0

  0 + 1 = 1

  1 + 0 = 1

  1 + 1 = 10

  (where “10” is binary for the number 2, not decimal for ten).

  For multiplication they are even simpler, since there is no need for a carry; the table looks the same as it does in decimal:

  0 × 0 = 0

  0 × 1 = 0

  1 × 0 = 0

  1 × 1 = 1
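  To see that these tiny tables really suffice, here is a sketch in Python (ours, not a description of any historical machine; the names ADD, add_bits, and add_binary are invented) that adds two binary numbers using nothing but the four-entry addition table, carrying the 1 of “1 + 1 = 10” into the next column:

    # The entire binary addition table: (a, b) -> (sum bit, carry bit).
    ADD = {(0, 0): (0, 0), (0, 1): (1, 0), (1, 0): (1, 0), (1, 1): (0, 1)}

    def add_bits(a, b, carry_in):
        """One column of binary addition: two uses of the table."""
        s1, c1 = ADD[(a, b)]
        s2, c2 = ADD[(s1, carry_in)]
        return s2, c1 or c2        # at most one of the partial carries is ever 1

    def add_binary(x, y):
        """Add two bit-lists (most significant bit first), column by column."""
        x, y = x[::-1], y[::-1]    # work from the ones place leftward
        out, carry = [], 0
        for i in range(max(len(x), len(y))):
            a = x[i] if i < len(x) else 0
            b = y[i] if i < len(y) else 0
            s, carry = add_bits(a, b, carry)
            out.append(s)
        if carry:
            out.append(carry)
        return out[::-1]

    print(add_binary([1, 1], [1, 1, 1]))   # 3 + 7: prints [1, 0, 1, 0], binary for 10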

  The information theorist Claude Shannon put the shortened form of “binary digit,” the “bit,” into print, crediting the coinage to his colleague John W. Tukey. The term was a play on words, because a bit is also the smallest piece of information that can be represented: yes or no, on or off, one or zero, true or false.

  Making automatic devices that have two states is much simpler than making devices that have ten states. The two states could be a wire in an electric circuit being at one of two voltages, or a mechanical lever being in one of two positions, for example. The design problem for building an automatic multiplier changes from having to somehow mimic the entire ten-by-ten table one learned in grade school, to this:

  If both inputs are 1, then the answer is 1. Otherwise, the answer is 0.

  If the automatic computing device is mechanical, like the first Zuse computers, then this rule means something like letting a pin slip through two holes only if both are lined up just right. If the device is electronic, like the Atanasoff-Berry Computer, then this rule means building a circuit that allows current to flow only if both switches in series are closed.

  The early decimal machines, like the ENIAC and the Harvard Mark I, still used on-off states in their circuits but bundled them into subcircuits that represented the decimal digits 0 to 9. They were essentially electrical versions of the earlier mechanical calculators that used wheels to count in decimal. Both ENIAC and the Mark I were the size of a room, largely because of the inherent inefficiency of decimal representation, whereas the Zuse and Atanasoff designs were the size of a desk. It was not because the larger machines held more data; the ENIAC needed 18,000 vacuum tubes to compute with a maximum of twenty ten-decimal-digit numbers. The Atanasoff-Berry Computer needed only 600 vacuum tubes, yet it computed with a maximum of thirty fifteen-decimal-digit numbers—50 percent more numbers, each with 50 percent greater precision.

  Because very few people can look at a representation like 0001110001000111 and grasp its numerical value, all computers that use binary arithmetic also provide a way of accepting human-readable decimal representations as input and converting their answers to decimal output. That conversion is a small price to pay for an overwhelming simplification of the design of the computing device, so computers that use decimal arithmetic internally have disappeared from the computing landscape in favor of the binary arithmetic approach used by Atanasoff and Zuse.

  Appendix C | Electronic Switches

  In a biological organism, a neuron might cause a muscle to contract or convey a sensation, but a neuron can also trigger other neurons. A brain is a collection of neurons, interconnected so the firing of one can cause or suppress the firing of many others. That capability is what permits organisms to exhibit such complex and intelligent behavior. For any artificial device to rival the computing abilities found in nature, it must similarly have control elements that trigger other control elements.

  The control elements of early electronic computing progressed from relays to vacuum tubes (or valves, the British term) to transistors. Such devices, sometimes called “switching elements,” mimic the behavior of neurons in that they are switches that can control many other switches.

  When something closes a switch to complete a circuit, the current that flows can operate another switch, either to open it or to close it. The small amount of current needed to operate a switching element can result in a lot more current either flowing or not flowing, so a switching element can operate several other switching elements, not just one. That is why switching elements are often referred to as “amplifiers”: the power they control is larger than the power it takes to operate them. Switching elements let electronic devices perform binary arithmetic (see appendix B) because their on-off state can represent the binary digits 0 and 1.

  Imagine a waterfall with a gate at the top that can dam up the flow of water. When the gate is open, water spills down with much more energy than that needed to operate the gate. That flow of water could operate mechanisms that open or close other gates of other waterfalls, creating quite a complicated series of events. If, say, water not flowing represents the binary digit 0 and water flowing represents a 1, then a set of waterfalls could represent numbers in binary. Changing the gates performs operations on those numbers. What electrical devices do is similar, but with a flow of electrons instead of a flow of water.

  An everyday example of switches in combination is what electricians call a double-throw switch. Most homes have at least one room with two entry points and separate wall switches by each entry that can operate the overhead light. Perhaps you have had the experience of entering a dark room at one door when someone else enters another door, and you both hit the wall switches at the same time, which keeps the room dark. The pair of wall switches performs the logical operation that computer scientists call an “exclusive OR,” because the light turns on if one or the other switch flips, but not when both flip.
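  A sketch of that behavior in code (ours, a logical model rather than the actual wiring; the name light_is_on is invented): each switch position is a 0 or a 1, and the lamp lights exactly when the two positions disagree, which is the exclusive-OR table:

    def light_is_on(switch_a, switch_b):
        """Two wall switches on one lamp: the circuit is completed
        only when the switches are in opposite positions (XOR)."""
        return switch_a != switch_b

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, light_is_on(a, b))
    # 0 0 False   (both down: dark)
    # 0 1 True
    # 1 0 True
    # 1 1 False   (both flipped at once: still dark)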

 
