
Collected Essays

by Rudy Rucker


  Although we associate punch cards with IBM and mainframe computers, it turns out that they were first used on French looms. The invention was made by Joseph Marie Jacquard in 1801. By coding up a tapestry pattern as a series of cards, a “Jacquard loom” was able to weave the same design over and over, without the trouble of a person having to read the pattern and set the threads on the loom. Babbage himself owned a woven portrait of Jacquard that was generated by a loom using 24,000 punch cards.

  One of the most lucid advocates of Babbage’s Analytical Engine was the young Ada Byron, daughter of the famed poet. Ada memorably put it like this.

  The distinctive characteristic of the Analytical Engine, and that which has rendered it possible to endow mechanism with such extensive faculties as bid fair to make this engine the executive right-hand of abstract algebra, is the introduction into it of the principle which Jacquard devised for regulating, by means of punched cards, the most complicated patterns in the fabrication of brocaded stuffs…We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves. [Ada Augusta, Countess of Lovelace, “Notes on Menabrea’s Sketch of the Analytical Engine,” reprinted in Philip and Emily Morrison, eds., Selected Writings by Charles Babbage, (Dover Books).]

  In reality, no Analytical Engine was ever completed. But the idea stands as a milestone. In 1991, the science fiction writers William Gibson and Bruce Sterling published a fascinating alternative history novel, The Difference Engine, which imagines what Victorian England might have been like if Babbage had been successful. (The book is really about Analytical Engines rather than Difference Engines.) Just as our computers are managed by computer hackers, the Analytical Engines of Gibson and Sterling are manned by “clackers.” Here is their description of a visit to the Central Statistics Bureau in their what-if London.

  Behind the glass loomed a vast hall of towering Engines—so many that at first Mallory thought the walls must surely be lined with mirrors, like a fancy ballroom. It was like some carnival deception, meant to trick the eye—the giant identical Engines, clock-like constructions of intricately interlocking brass, big as rail-cars set on end, each on its foot-thick padded blocks. The whitewashed ceiling, thirty feet overhead, was alive with spinning pulley-belts, the lesser gears drawing power from tremendous spoked flywheels on socketed iron columns. White-coated clackers, dwarfed by their machines, paced the spotless aisles. Their hair was swaddled in wrinkled white berets, their mouths and noses hidden behind squares of white gauze.

  In the world of The Difference Engine, one can feed in a punch card coded with someone’s description, and the Central Statistics Bureau Engines will spit out a “collection of stippleprinted Engine-portraits” of likely suspects.

  Punch Card Memory Storage

  In our world, it wasn’t until the late 1800s that anyone started using punch cards for any purpose other than controlling Jacquard looms. It was Herman Hollerith who had the idea of using punch cards in order to organize information for the U.S. census. He designed machines for tabulating the information on punch cards, as well as a variety of calculating devices for massaging the info. He got the contract for the census of 1890, and his machines were installed in the census building in Washington, D.C. A battery of clerks transferred written census information to punch cards and fed the cards into tabulators. The tabulators worked by letting pins fall down onto the cards. Where a pin could go through, it would touch a little cup of mercury, completing a circuit and turning a wheel of a clock-like counter arrangement similar to the Pascaline.
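
  To see the tabulating idea in modern terms, here is a minimal sketch in Python. The field layout is invented purely for illustration; Hollerith’s real 1890 cards were far more elaborate. Each card is modeled as the set of positions where holes are punched, and the tabulator simply advances a counter for every hole a pin falls through.

    from collections import Counter

    # Hypothetical field layout: hole position -> what a punch there means.
    FIELDS = {0: "male", 1: "female", 2: "native born", 3: "foreign born"}

    def tabulate(cards):
        """Tally how many cards are punched at each position, the way the
        tabulator's pins and mercury cups advanced the counting dials."""
        totals = Counter()
        for holes in cards:            # one card = the set of its punched positions
            for position in holes:     # a pin drops through each hole...
                totals[FIELDS[position]] += 1   # ...and ticks the matching dial
        return totals

    # Three sample "cards," punched by hand.
    print(tabulate([{0, 2}, {1, 2}, {0, 3}]))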

  The work was quite monotonous, and one of the employees later recalled:

  Mechanics were there frequently…to get the ailing machines back in operation. The trouble was usually that somebody had extracted the mercury…from one of the little cups with an eye-dropper and squirted it into a spittoon, just to get some un-needed rest. [Geoffrey D. Austrian, Herman Hollerith: Forgotten Giant of Information Processing, (Columbia University Press).]

  Hollerith’s company eventually came under the leadership of a sharp-dealing cash register salesman named Thomas J. Watson—who a few years later would change the business’s name from the Computing-Tabulating-Recording Company to International Business Machines, a.k.a. IBM.

  With punch card readers well in place, the realization of machines like the Analytical Engine still required a technology to handle what Babbage called the “store,” a readily accessible short-term memory that the machine can use for scratch paper, much as we write down intermediate results when carrying out a multiplication or a long division by hand. In modern times, of course, we are used to the idea of storing memory on integrated circuit chips—our RAM—and not having to worry about it. But how did the first computer designers deal with creating rapidly accessible memory?

  Electromechanical Computers

  The first solution was an electromechanical device called a relay. A primitive two-position relay might be designed like a circuit-breaker switch. In this type of switch, a spring holds it in one position while an electromagnet can pull it over into another position. If there is no current through the electromagnet, the relay stays in the “zero” or “reset” position, and if enough current flows through the electromagnet, the switch is pulled over to the “one” or “set” position. With a little tinkering, it’s also possible to make a wheel-shaped ten-position relay that can be electromechanically set to store the value of any digit from zero to nine. Historically, the technology for these kinds of relays was developed for telephone company switching devices—which need to remember the successive digits of the phone numbers which callers request.
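
  In software terms, the two-position relay amounts to a one-bit memory cell with set and reset operations, and the telephone-style relay to a one-digit cell. Here is a toy Python model, purely as an analogy; real relay memories used latching arrangements so that a bit kept its value after the current pulse ended.

    class RelayBit:
        """A one-bit set/reset memory element, modeled loosely on the
        two-position relay described above (a software analogy only)."""
        def __init__(self):
            self.value = 0      # the spring holds the switch at "zero"
        def set(self):
            self.value = 1      # a current pulse pulls it over to "one"
        def reset(self):
            self.value = 0      # a reset pulse drops it back to "zero"

    class RelayDigit:
        """The ten-position telephone-style relay: a wheel that can be
        stepped around to remember any single decimal digit."""
        def __init__(self):
            self.digit = 0
        def store(self, d):
            assert 0 <= d <= 9
            self.digit = d

    bit = RelayBit()
    bit.set()
    wheel = RelayDigit()
    wheel.store(7)
    print(bit.value, wheel.digit)   # -> 1 7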

  In the 1930s, the German scientist Konrad Zuse built a primitive relay-based computer that could add, multiply, and so on. As well as using relays for short-term memory storage, Zuse used them for switching circuits to implement logical and arithmetic operations much more general than the repeated additions of a Difference Engine. The Nazi government’s science commission was unwilling to fund Zuse’s further research—this was the same Nazi science commission which sent scouts across the Arctic ice to look for a possible hole leading to the Hollow Earth. They didn’t see the promise of electromechanical computation.

  In the early 1940s, a rather large electromechanical relay-based computer called the Mark I was constructed at Harvard University under the leadership of Howard Aiken. Aiken’s funding was largely provided by Thomas J. Watson’s IBM. The Mark I could read data and instructions from punch cards, by then known as “IBM cards,” and was built of nearly a million parts. When it was running, the on/off clicking of its relays made a sound like a muffled hailstorm.

  Electronic Computers

  The next stage in the development of the computer was to replace electromechanical components by much faster electronic devices. In other words, use vacuum tubes instead of relays for your logic circuits and short-term memory storage. Although vacuum tubes look like rather sophisticated devices, they are a lot funkier than one first imagines. Storing one single bit of memory—a simple zero or one—typically took at least two vacuum tubes, arranged into a primitive circuit known as a flip-flop. Even in the 1950s, electrical engineers had to learn a lot about relays and flip-flop circuits. In his novel V., the ex-engineering student and master novelist Thomas Pynchon includes a jazzy ditty on this theme by his bebop jazz-musician character McClintic Sphere:

  Flop, flip, once I was hip,

  Flip, flop, now you’re on top,

  Set-REset, why are we BEset

  With crazy and cool in the same molecule.
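
  Sphere’s set and reset are the two stable states of a pair of cross-coupled inverting elements, each one playing the role of one of the two vacuum tubes. Here is a minimal Python sketch of such a flip-flop, built from two simulated NOR gates; this is one common way of wiring the circuit, chosen here only for illustration.

    def nor(a, b):
        """One inverting stage, the job a tube did in a flip-flop circuit."""
        return 0 if (a or b) else 1

    def settle(s, r, q=0, qbar=1):
        """Let the cross-coupled pair settle: pulse s to set the stored bit,
        pulse r to reset it, or apply neither to simply hold its value."""
        for _ in range(4):          # a few passes reach the stable state
            q = nor(r, qbar)
            qbar = nor(s, q)
        return q, qbar

    q, qbar = settle(s=1, r=0)                   # set pulse: the bit becomes 1
    q, qbar = settle(s=0, r=0, q=q, qbar=qbar)   # pulse ends: the bit is held
    print(q)                                     # -> 1
    q, qbar = settle(s=0, r=1, q=q, qbar=qbar)   # reset pulse: back to 0
    print(q)                                     # -> 0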

  A man named John Atanasoff began building a small special purpose computer using 300 vacuum tubes for memory at Iowa State University around 1940. Atanasoff’s computer was intended for solving systems of linear equations, but he abandoned the project in 1942. It is unclear if his machine was ever fully operational. But this work was significant in that it demonstrated the possibility of making a computer with no moving parts. Babbage’s Analytical Engine would have been purely made of moving gears, the Mark I was a mixture of electrical circuits and spring-loaded relay switches, but Atanasoff’s device was completely electronic, and operated at a much faster speed.

  The first general purpose electronic computer was the ENIAC (for “Electronic Numerical Integrator And Computer”), completed at the Moore School of Electrical Engineering of the University of Pennsylvania in November 1945. The ENIAC was primarily built by J. Presper Eckert and John Mauchly. The funding for the project was obtained through the Ballistics Research Laboratory of the U.S. Army in 1943.

  Although Mauchly contended that he thought of vacuum tube memories on his own, he did visit Atanasoff in 1941 to discuss electronic computing, so at the very least Atanasoff influenced Mauchly’s thinking. In 1972, Atanasoff came out of obscurity to support the Honeywell corporation in a lawsuit to break the Sperry Rand corporation’s ownership of Eckert and Mauchly’s patents on their UNIVAC computer—a descendant of the ENIAC which Eckert and Mauchly had licensed to Sperry Rand. Although Honeywell and Atanasoff won the trial, this may have been a miscarriage of justice. The feeling among computer historians seems to be that Eckert and Mauchly deserve to be called the inventors of the electronic computer. Firstly, the ENIAC was a much larger machine than Atanasoff’s; secondly, it was general purpose; and thirdly, it was successfully used to solve independently proposed problems.

  The original plan for the ENIAC was that it would be used to rapidly calculate the trajectories traveled by shells fired at different elevation angles at different air temperatures. When the project was funded in 1943, these trajectories were being computed either by the brute force method of firing lots of shells, or by the time-consuming methods of having office workers carry out step-by-step calculations of the shell paths according to differential equations.
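
  The step-by-step method was essentially what we now call numerical integration: advance the shell’s position and velocity through many small time steps. The Python sketch below shows the flavor of such a calculation; the drag model and all of the coefficients are made up for illustration and are not real ballistics data.

    import math

    def shell_range(elevation_deg, muzzle_speed=800.0, drag=1e-4, g=9.81, dt=0.01):
        """Step a shell forward in time until it returns to the ground,
        the way a human 'computer' worked through the equations by hand.
        All coefficients are illustrative only."""
        angle = math.radians(elevation_deg)
        x, y = 0.0, 0.0
        vx = muzzle_speed * math.cos(angle)
        vy = muzzle_speed * math.sin(angle)
        while y >= 0.0:
            speed = math.hypot(vx, vy)
            vx += -drag * speed * vx * dt         # air drag opposes the motion
            vy += (-g - drag * speed * vy) * dt   # gravity plus drag
            x += vx * dt
            y += vy * dt
        return x    # horizontal distance at impact

    for elevation in (15, 30, 45, 60):
        print(elevation, round(shell_range(elevation)))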

  As it happened, World War II was over by the time ENIAC was up and running, so ENIAC was never actually used to compute any ballistic trajectories. The first computation ENIAC carried out was a calculation to test the feasibility of building a hydrogen bomb. It is said that the calculation used an initial condition of one million punch cards, with each punch card representing a single “mass point.” The cards were run through ENIAC, a million new cards were generated, and the million new cards then served as input for a new cycle of computation. The calculation was a numerical solution of a complicated differential equation having to do with nuclear fusion. You might say that the very first electronic computer program was a simulation of an H-bomb explosion. A long way from the Eccentric Anomaly of Mars.

  The Von Neumann Architecture

  The man who had the idea of running the H-bomb program on the ENIAC was the famous mathematician John von Neumann. As well as working in the weapons laboratory of Los Alamos, New Mexico, von Neumann was also consulting with the ENIAC team, which consisted of Mauchly, Eckert, and a number of others.

  Von Neumann helped them draw up the design for a new computer to be called the EDVAC (for Electronic Discrete Variable Automatic Computer). The EDVAC would be distinguished from the ENIAC by having a better memory and, crucially, an easily changeable stored program. Although the ENIAC read its input data from punch cards, its program could only be changed by manually moving the wires on a plugboard and setting scores of dials. The EDVAC would allow the user to feed in both the program and the data on punch cards. As von Neumann would later put it:

  Conceptually we have discussed…two different forms of memory: storage of numbers and storage of orders. If, however, the orders to the machine are reduced to a numerical code and if the machine can in some fashion distinguish a number from an order, the memory organ can be used to store both numbers and orders. [Arthur Burks, Herman Goldstine, and John von Neumann, “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument,” reprinted in John von Neumann, Collected Works, (Macmillan).]

  Von Neumann prepared a document called “First Draft of a Report on the EDVAC,” and sent it out to a number of scientists in June, 1945. Since von Neumann’s name appeared alone as the author of the report, he is often credited as the sole inventor of the modern stored program concept, which is not strictly true. The stored program was an idea which the others on the ENIAC team had also thought of—not to mention Charles Babbage with his Analytical Engine! Be that as it may, the name stuck, and the design of all the ordinary computers one sees is known as “the von Neumann architecture.”

  Even if this design did not spring full-blown from von Neumann’s brow alone, he was the first to really appreciate how powerful a computer could be if it used a stored program, and he was an eminent enough man to exert influence to help bring this about. Initially the idea of putting both data and instructions into a computer’s memory seemed strange and heretical, not to mention too technically difficult.

  The technical difficulty with storing a computer’s instructions is that the machine needs to be able to access these instructions very rapidly. You might think this could be handled by putting the instructions on, say, a rapidly turning reel of magnetic tape, but it turns out that a program’s instructions are not accessed by a single, linear read-through as would be natural for a tape. A program’s execution involves branches, loops and jumps; the instructions do not get used in a fixed serial order. What is really needed is a way to store all of the instructions in memory in such a way that any location on the list of instructions can be very rapidly accessed.
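
  A toy stored-program machine makes the point concrete. In the Python sketch below (the instruction set is invented purely for this illustration), the numerically coded orders sit in the same randomly accessible memory as the numbers they operate on, and a jump simply resets the program counter, something a linearly read tape could never do quickly.

    # A toy von Neumann machine: one memory holds both "orders" (coded
    # instructions) and "numbers" (data).  Opcodes invented for this sketch.
    LOAD, ADD, STORE, JGZ, HALT = range(5)   # JGZ = jump if accumulator > 0

    def run(memory):
        acc, pc = 0, 0                    # accumulator and program counter
        while True:
            op, arg = memory[pc]          # fetch: random access to any cell
            pc += 1
            if op == LOAD:
                acc = memory[arg]
            elif op == ADD:
                acc += memory[arg]
            elif op == STORE:
                memory[arg] = acc
            elif op == JGZ and acc > 0:   # a branch: execution is not one
                pc = arg                  # straight linear pass
            elif op == HALT:
                return memory

    # Cells 0-7 hold the program, cells 10-12 the data, side by side.
    # The program sums 5 + 4 + 3 + 2 + 1 by looping until the counter is zero.
    memory = [
        (LOAD, 11), (ADD, 10), (STORE, 11),   # sum = sum + counter
        (LOAD, 10), (ADD, 12), (STORE, 10),   # counter = counter - 1
        (JGZ, 0),                             # loop back while counter > 0
        (HALT, 0),
        0, 0,                                 # unused cells
        5,      # cell 10: the counter
        0,      # cell 11: the running sum
        -1,     # cell 12: the constant -1
    ]
    print(run(memory)[11])   # -> 15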

  The fact that the ENIAC used such a staggering number of vacuum tubes raised the engineering problems of its construction to a pyramid-of-Cheops or man-on-the-moon scale of difficulty. That it worked at all was a great inspiration. But it was clear that something was going to have to be done about using all those tubes, especially if anyone wanted to store a lengthy program in a computer’s memory.

  Mercury Memory

  The trick for memory storage that would be used in the next few computers was almost unbelievably strange, and is no longer widely remembered: bits of information were to be stored as sound waves in tanks of liquid mercury. These tanks or tubes were also called “mercury delay lines.” A typical mercury tube was about three feet long and an inch in diameter, with a piezoelectric crystal attached to each end. If you apply an oscillating electrical current to a piezoelectric crystal it will vibrate; conversely, if you mechanically vibrate one of these crystals it will emit an oscillating electrical current. The idea was to convert a sequence of zeroes and ones into electrical oscillations, feed this signal to the near end of a mercury delay line, let the vibrations move through the mercury, have the vibrations create an electrical oscillation coming out of the far end of the mercury delay line, amplify this slightly weakened signal, perhaps read off the zeroes and ones, and then, presuming that continued storage was desired, feed the signal back into the near end of the mercury delay line. The far end was made energy-absorbent so as not to echo the vibrations back towards the near end.
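
  In code, the recirculation scheme can be pictured, very loosely, as a fixed-length queue of bits: on every pulse time the bit emerging from the far end is re-amplified and either fed back unchanged or replaced by a new value at the near end. Here is a small Python sketch along those lines; the details are simplified, of course.

    from collections import deque

    class DelayLine:
        """A toy recirculating mercury-tank memory: a fixed number of bits
        forever in flight, readable only as they emerge from the far end."""
        def __init__(self, capacity_bits):
            self.line = deque([0] * capacity_bits)

        def tick(self, write_bit=None):
            """One pulse time: a bit emerges at the far end; feed back either
            that same bit (refreshed) or a new value at the near end."""
            emerging = self.line.popleft()
            self.line.append(emerging if write_bit is None else write_bit)
            return emerging

        def write(self, bits):
            for b in bits:                  # overwrite the next slots to come by
                self.tick(write_bit=b)

        def read(self, n):
            return [self.tick() for _ in range(n)]   # read and recirculate

    tank = DelayLine(capacity_bits=1000)    # roughly one tank's worth of bits
    tank.write([1, 0, 1, 1])
    for _ in range(1000 - 4):               # wait for the word to come around again
        tank.tick()
    print(tank.read(4))                     # -> [1, 0, 1, 1]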

  How many bits could a mercury tube hold? The speed of sound (or vibrations) in mercury is roughly a thousand meters per second, so it takes about one thousandth of a second to travel the length of a one meter mercury tube. By making the vibration pulses one millionth of a second long, it was possible to send off about a thousand bits from the near end of a mercury tank before they started arriving at the far end (there to be amplified and sent back through a wire to the near end). In other words, this circuitry-wrapped cylinder of mercury could remember 1000 bits, or about 128 bytes. Today, of course, it’s common for a memory chip the size of your fingernail to hold many millions of bytes.
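
  The arithmetic behind that estimate is simply the tube’s delay time divided by the pulse time. Spelled out with the round numbers used above:

    speed_of_sound = 1000.0   # m/s in mercury, the rough figure quoted above
    tube_length = 1.0         # meters, about three feet
    pulse_time = 1e-6         # seconds per bit

    delay = tube_length / speed_of_sound      # 0.001 s for a pulse to cross the tank
    capacity_bits = delay / pulse_time        # bits in flight at any one moment
    print(capacity_bits, capacity_bits / 8)   # -> 1000.0 bits, 125.0 bytes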

  A monkey wrench was thrown into the EDVAC plans by the fact that Eckert and Mauchly left the University of Pennsylvania to start their own company. It was the British scientist Maurice Wilkes who first created a stored-program machine along the lines laid down by the von Neumann architecture. Wilkes’s machine, the EDSAC (for Electronic Delay Storage Automatic Calculator, where “Delay Storage” refers to the mercury delay lines used for memory), began running at Cambridge University in May 1949. Thanks to the use of the mercury memory tanks, the EDSAC needed only 3,000 vacuum tubes.

  In an email to me, the mathematician John Horton Conway recalled:

  As an undergraduate [at Cambridge University] I saw the mercury delay lines in the old EDSAC machine they had there. The mercury was in thick-walled glass tubes between 6 and 8 feet long, and continually leaked into the containing trays below. Nobody then (late ’50s) seemed unduly worried about the risks of mercury poisoning.

  UNIVAC

  Although Eckert and Mauchly were excellent scientists, they were poor businessmen. After a few years of struggle, they turned the management of their computer company over to Remington-Rand (now Sperry Rand). In 1952, the Eckert-Mauchly division of Remington-Rand delivered the first commercial computer systems to the National Bureau of Standards. These machines were called UNIVAC (for Universal Automatic Computer). The UNIVAC had a console, some tape readers, a few cabinets filled with vacuum tubes, and a bank of mercury delay lines the size of a china closet. This mercury memory held about one kilobyte, and it cost about half a million dollars.

  The public became widely aware of the UNIVAC during the night of the presidential election of 1952: Dwight Eisenhower vs. Adlai Stevenson. As a publicity stunt, Remington-Rand arranged to have Walter Cronkite of CBS report a UNIVAC’s prediction of the election outcome based on preliminary returns—the very first time this now-common procedure was done. With only seven percent of the vote in, UNIVAC predicted a landslide victory for Eisenhower. But Remington-Rand’s research director Arthur Draper was afraid to tell this to CBS! The pundits had expected a close election with a real chance of Stevenson’s victory, and UNIVAC’s prediction seemed counterintuitive. So Draper had the Remington-Rand engineers quickly tweak the UNIVAC program to make it predict the expected result, a narrow victory by Eisenhower. When, a few hours later, it became evident that Eisenhower would indeed sweep the electoral college, Draper went on TV to improve UNIVAC’s reputation by confessing his subterfuge. One moral here is that a computer’s predictions are only as reliable as its operator’s assumptions.

 
