Meanwhile, Rajchman had kept working on the Selectron at RCA. The first operative tube, with 256 storage elements, was demonstrated to Bigelow and Goldstine on September 22, 1948, and they “seemed reasonably impressed.” The second tube was finished on October 1, 1948. Only then did Rajchman find out that the Selectron had been demoted to second place. “It seems that we have at last the desired tube,” Rajchman reported to Zworykin on October 5. “This success comes, however, at a time when the Selectron has to compete with the English tube of Prof. Williams. The Institute group has started to work on it—without telling us and even consciously concealing this fact from us—at the end of May or beginning of June.” Rajchman, who finally obtained a copy of the Williams-Kilburn report, was “most impressed by his work,” admitting that “in a typical English way he produced a startling result with an ordinary cathode ray tube.”52
The Selectron was out of the race. A limited number of 256-bit tubes were eventually produced, and proved spectacularly successful (with 100,000 hours mean time between failures) in the IAS-derived JOHNNIAC built at RAND. But by that time IBM had adopted cathode-ray-tube memory for the IBM 701, and magnetic-core memory, originally suggested by Rajchman but commercialized elsewhere, was about to take the lead that had been abandoned by RCA; the Selectron never achieved commercial scale. Was the Selectron a failure? “No more so than the dinosaur was a failure,” says Willis Ware. “They were doing things inside that vacuum that hadn’t been done before.”53
High-speed storage was a switching problem, not a memory problem. “The difficulty with all schemes of this type lies principally in the method of switching among the large numbers of elements involved,” Goldstine had written to Mina Rees of the Office of Naval Research. “The crux of the memory problem lies not in the development of a cheap memory element but rather in the development of a satisfactory switch.”54 The advantage of the Williams tube memory—performing the switching function with no moving parts other than the deflection of an electron beam—was not that it solved the switching problem better, but that it solved it first.
The Selectron, which addressed memory locations by direct gating rather than “by directing a beam of electrons, like an electron garden hose, to a certain place,” provided, in Rajchman’s words, “a ‘matrix’ digital control that gives an absolute certainty of selecting the desired location, as opposed to the none-too-certain selection by analog deflection of a beam.” Not only was the switching all digital, but the output was also all digital, and did not require an analog-to-digital “discriminator” to distinguish between a zero and a one. As Frank Gruenberger put it, “in the Selectron a particular slot in the memory is selected by digital (rather than analog) means, and the output signals are a thousand times larger than those in Williams tubes.”55 A solution to both the memory problem and the switching problem, the Selectron is the reason that the MANIAC’s logical architecture, designed around it and descended to subsequent generations of computers, adapted so well to solid-state memory when the time came.
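In modern terms, the difference reads like this (a minimal sketch in Python, with invented names and bit widths, tracing only the logic of matrix selection rather than Rajchman’s actual circuit):

```python
# A minimal sketch of the addressing difference described above. The
# names, bit widths, and constants are invented for illustration; this
# is the logic of "matrix" selection, not Rajchman's circuit.

def decode_one_hot(value, width):
    """Digitally decode a binary value into one-hot select lines:
    exactly one line is energized, with no analog positioning."""
    lines = [0] * (2 ** width)
    lines[value] = 1
    return lines

def selectron_select(address):
    """Selectron-style selection: split an 8-bit address into row and
    column bits and decode each digitally. The single cell at the
    intersection of the energized row and column lines is gated on,
    so the same address always reaches the same cell."""
    rows = decode_one_hot(address >> 4, 4)   # high 4 bits -> 16 rows
    cols = decode_one_hot(address & 0xF, 4)  # low 4 bits -> 16 columns
    return rows, cols                        # 16 x 16 = 256 cells

def williams_select(address, drift=0.0):
    """Williams-tube-style selection: the address is converted to a
    deflection voltage, and the beam lands wherever that voltage steers
    it; any drift in the analog path moves the spot."""
    return address * 0.1 + drift             # a position, not a gate

# Address 0b10110011 energizes row line 11 and column line 3 -- always.
```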
Some failures stem from lack of vision, and some failures from too much. “The ideas were so beautiful and so elegant that Rajchman was always trying to push it [the storage matrix] to a larger population of cells than his technique at that time would allow him to do,” says Bigelow, explaining the delays at RCA. “He was just so clever in electron optics, that he couldn’t face the fact that if he chopped it down to something much more modest and got it going, and then built up the size from there, he’d be much better off.”56
The Selectron missed its chance. Once Bigelow and Pomerene saw how to convert cheap, off-the-shelf oscilloscope tubes into random-access memory, the challenge of doing so was impossible to resist. RCA, distracted by television, never took the Selectron seriously and failed to give Rajchman, working largely alone, the resources to make it a success. The Project Whirlwind group at MIT, developing a digital computer for air defense, “spent something like $25 million on storage tubes alone, which was nearly ten times our total project,” Bigelow pointed out.57
With the Williams tube memory working, the computer began to take its final physical form. The MANIAC was unusually compact—“perhaps too compact for convenient maintenance,” admitted Bigelow, who was largely responsible for its physical design. A minimal connection path between components was achieved by convolutions in its chassis, like the folding of a cerebral cortex into a skull. In 1947 most electronic devices were laid out in two dimensions, with components above a flat chassis and wiring below. The same remains true of most circuit boards, integrated circuits, and rack-mounted devices today. Bigelow, in contrast, took a three-dimensional approach to the way components were laid out and interconnected, and to wiring and cooling the dense vacuum tube arrays. “All those wires that aren’t near any metal, they’re out in space—that’s all Julian,” says Willis Ware. “That concave-shaped chassis, so you could wire point to point, keep the wire length minimum—that’s all his ideas.”58
“Vacuum tubes unfortunately had heaters and the wires that supply the current to the heaters were always a nuisance,” James Pomerene explains. “They were always in the way and they’d have nothing to do with the logic of the computer.” Bigelow’s machinists milled strips out of heavy-gauge copper sheet, stacked them in duplicate, and sandwiched the individual strips between insulating fiberboard, so that all the heater current was conducted through those strips. “That got the heaters wired up without any wires being in the way, and made the machine significantly easier to build,” says Pomerene.59 Besides allowing much higher component density within the core of the computer, this minimized electronic noise and improved cooling flow.
The MANIAC resembled a turbocharged V-40 engine, about 6 feet high, 2 feet wide, and 8 feet long. The computer itself, framed in aluminum, weighed only 1,000 pounds, a microprocessor for its time. The crankcase had 20 cylinders on each side, each containing, in place of a piston, a 1,024-bit memory tube. The 40 cylinders, angled upward at 45 degrees in two parallel banks, each contained a 5-inch-diameter 5CP1A oscilloscope tube, with its elongated neck reaching down into the crankcase and its phosphorescent screen facing up toward the cylinder head.
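Those figures pin down the machine’s total high-speed memory. Assuming, as the 40-fold parallel design Bigelow describes below implies, one bit of each 40-bit word stored in each tube:

\[
40 \,\text{tubes} \times 1{,}024 \,\text{bits per tube} = 40{,}960 \,\text{bits} = 1{,}024 \,\text{words} \times 40 \,\text{bits per word}.
\]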
Bolted above the lower crankcase, resembling a very tall engine block (with overhead valves), was the main frame of the computer, containing the memory registers, accumulators, arithmetic registers, and central control. An intake manifold fed data into the computer, and an exhaust manifold delivered the results. A 4,500-cubic-foot-per-minute blower forced cool air into the base of the engine, while 20 smaller blowers, resembling turbochargers, exhausted waste heat through overhead ducts. At first cool air had been introduced downward through the core of the computer, and exhausted through the floor; later, this was switched to the practice used in data centers today, with the entire machine room being cooled by a bank of external air conditioners, and the heat exhausted overhead. “The total power dissipated in the main body of the machine is approximately 19.5 kw,” it was reported in 1953. “About 9 kw represents the dissipation of D.C. power, and the remaining 10.5 kw is accounted for by the heaters, transformers, and blowers.”60
The original air-conditioning unit was rated at 7.5 tons, and later doubled in capacity to 15 tons. This means, roughly, that if the air-conditioning system had been run at full power (about 50 kilowatts) and supplied with ice-cold water, it could have made 15 tons of ice a day. The refrigeration units, manufactured by York Refrigeration and nicknamed “York” by the engineers, caused frequent trouble. With the capacity to make 15 tons of ice a day, it took only about 40 minutes for the refrigeration coils to ice up catastrophically with moisture from the New Jersey summer air.
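The rough conversion behind that estimate: one ton of refrigeration is defined as the heat flow that freezes one short ton of water into ice in 24 hours, about 3.5 kilowatts, so

\[
15 \,\text{tons} \times 3.517 \,\mathrm{kW/ton} \approx 52.8 \,\mathrm{kW} \approx 50 \,\mathrm{kW}.
\]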
“Fridge blocked up completely with heavy ice,” reads an entry in the machine log for 8:55 p.m. on September 23, 1954. “Now York refuses to go at all—35 amp fuse blown,” reads the next entry, at 9:10 p.m. “Replace and run York while de-icing fridge. York doing a poor job. D.C. off to help.” The final entry indicates that the main DC power to the computer had been shut off, in the hope of bringing the core temperature down to where it would be safe to restart. Being able to run the air-conditioning or the computer, but not both, was not much help. York’s heavy draw on alternating current had a tendency to introduce Williams tube errors at the worst possible time. “All my failures to duplicate occurred during a period of instability, on the part of York,” says the machine log for October 22, 1954. “Off,” notes pioneer climate modeler Norman Phillips at 7:38 p.m. “This because York is acting up,” Hedi Selberg adds. “York caused lights to dim seconds before the error,” noted Nils Barricelli on November 2, 1954.61
The engineers faced the challenge of getting all these disparate components to work together—not only with one another, but with the coded instructions by which the machine would be brought to life. “The planning for this machine will require such foresight and self-contained rigor,” von Neumann had explained to Roger Revelle in 1947, “as one would need in order to be able to leave a group of 20 (human) computers, who are reliable but absolutely devoid of initiative, alone for a year, to work on the basis of exhaustive but rigid instructions, which are expected to provide for all possible contingencies.”62
When the computer ground to a halt, was it noise in the deflection of an electron beam, or the transposition of one bit in specifying a memory address? “False start machine or human?” reads the first entry for a blast wave calculation run in February 1953. And an answer: “Found Trouble in code—I hope!”
“Code error, machine not guilty,” admitted Barricelli on March 4, 1953. “What’s the use? GOOD NIGHT,” is recorded at 11:00 p.m. on May 7, 1953. “Damnit—I can be just as stubborn as this thing,” notes a meteorologist on June 14, 1953. “I’ll never know why you have to load these codes twice sometimes to make them go, but they go usually the second time.”63
All computations were run twice, and accepted only when the two runs produced duplicate results. “I have now duplicated BOTH RESULTS how will I know which is right assuming one result is correct?” asks an engineer on July 10, 1953. “This now is the 3rd different output,” notes the next log entry. “I know when I’m licked.” Someone running a hydrogen bomb code from 2:09 a.m. to 5:18 a.m. on July 15, 1953, signs off: “if only this machine would be just a little consistent.”
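The acceptance rule the engineers were wrestling with can be sketched in a few lines of Python (a minimal sketch; the function names are invented, and on the MANIAC the rule was applied by hand):

```python
# A minimal sketch of the run-it-twice acceptance rule described above.
# "run" is any zero-argument callable that performs the computation.
def accept_if_duplicated(run):
    """Run the same computation twice; accept only if the outputs agree.
    As the July 1953 log shows, agreement is no guarantee: two runs can
    duplicate each other and still both be wrong."""
    first, second = run(), run()
    if first != second:
        raise RuntimeError("results differ: machine or code suspect")
    return first
```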
“THE HELL WITH IT,” is the final entry for June 17, 1956, at thirteen minutes past midnight, noting that the master control is being turned off. “M/C OFF (WAY OFF!!).” It took years of midnight oil to sort these problems out, but the general trend was for hardware to become more reliable and error-free, while codes grew more complicated and error-prone. “M/C OK. All troubles were code troubles,” reads a log entry for March 6, 1958, a year after von Neumann’s death.64
The MANIAC’s logical architecture was indisputably the work of Burks, Goldstine, and von Neumann, whatever their ideas’ original source. Its physical implementation was indisputably the work of Bigelow, and its electronic design was largely the result of teamwork between Bigelow, Pomerene, Rosenberg, Slutz, and Ware. Goldstine left the engineering to others, though he did build himself a television from a kit, “which gave him some knowledge at least of what’s involved in putting together electronic-electromechanical equipment,” says Rubinoff, “and he also at the same time got some feel for what can be done with triggering circuits and switching circuits and the like.”65
Rosenberg, however, disagreed strongly with Bigelow on circuit design. “During the day I did as he commanded, and came back at night to accurately diagnose and fix the problems,” he says. Pomerene was more diplomatic. “I would have to give him, I think, just about 100% credit for the unusual but highly effective mechanical design,” he acknowledges, crediting Bigelow, whom he ended up replacing as chief engineer in 1951, for the three-dimensional, V-40 layout of the machine.66
“Julian would have the ideas, Ralph [Slutz] would kind of detail the ideas, and then Pom [James Pomerene] and I would go try and make the electrons do their thing,” says Willis Ware. “He was kind of more physicist and theoretician than engineer.… In modern parlance, what you’d say was: Julian was the architect of that machine.”67
“The rate at which Julian could think, and the rate at which Julian could put ideas together was the rate at which the project went,” adds Ware. In 1951, Bigelow was awarded a Guggenheim fellowship, and took a one-year leave. “Herman Goldstine and possibly von Neumann felt that there was something about Julian that would keep him from ever quite exactly finishing the machine; that he might get it almost 99.9 percent done, but he would never get that final .1 percent done,” says Pomerene. “And let’s say, however that Guggenheim Fellowship happened to come, that they were quite happy to make me Chief Engineer and get the thing finished.”68
“His problem was that he was a thinker,” says Atle Selberg, whose wife, Hedi, was hired by von Neumann on September 29, 1950, and remained with the computer project until its termination in 1958. “He wouldn’t leave things alone when other people thought they were finished. Julian was always thinking of doing something a bit more here and there.”69
“I can see that being said,” says Ware, concerning Bigelow’s perfectionist streak. “But I think, after the fact, that damned machine might not have worked except for that. Hell—we were trying to make 2000 vacuum tubes do their thing! And to do it reliably, that level of perfection was a positive attribute.”70
“I think part of the trouble there was he was looking for perfection before he got something running,” says Morris Rubinoff. “You could never tell whether he was doing it because he was seeking perfection or because he was worried about reliability. Nobody ever had the courage to try a machine that fast before in quite that way. And putting a machine together and finding that it failed every three seconds didn’t do you much good.”71
Bigelow agrees. “You can’t build a 40-fold parallel machine unless the basic circuitry of the individual stage is so good that it does what it should do without regard to the state of the next stage,” he explains. “It’s got to be working at a megacycle rate for hundreds of hours. You can’t rely upon chance.”72
According to Bigelow, the 40-fold parallel architecture, for all that it deviated from pure-serial operation, was descended directly from the Turing machine. “Turing’s machine does not sound much like a modern computer today, but nevertheless it was,” Bigelow explains. “It was the germinal idea. If you build an apparatus which will obey certain explicit orders in a certain explicit fashion, can you say anything about the kinds of computational or intellectual processes which it can or cannot do?” Bigelow and von Neumann had lengthy discussions about the implications of Gödel’s and Turing’s work. “Von Neumann understood this very deeply,” Bigelow confirms. “So when looking at ENIAC, or some of the early machines which were very inflexible, he saw better than any other man that this was just the first step, and that great improvement would come.”73
“What von Neumann contributed,” says Bigelow, was “this unshakable confidence that said: ‘Go ahead, nothing else matters, get it running at this speed and this capability, and the rest of it is just a lot of nonsense.’ It was really on a basis of that sort of belief that we went ahead, with six people and a budget.”74 Von Neumann’s approach was to bring a handful of engineers into a den of mathematicians, rather than a handful of mathematicians into a den of engineers. This freed the project from any constraints that might have been imposed by an established group of engineers with preexisting opinions as to how a computer should be built. “We were missionaries,” says Bigelow. “Our mission was to produce a machine that would demonstrate what high speed computation would do.”75
“A long chain of improbable chance events led to our involvement,” Bigelow concluded in 1976. “People ordinarily of modest aspirations, we all worked so hard and selflessly because we believed—we knew—it was happening here and at a few other places right then, and we were lucky to be in on it. We were sure because von Neumann cleared the cobwebs from our minds as nobody else could have done. A tidal wave of computational power was about to break and inundate everything in science and much elsewhere, and things would never be the same.”76
NINE
Cyclogenesis
The part that is stable we are going to predict. And the part that is unstable we are going to control.
—John von Neumann, 1948
“I AM A LITTLE TROUBLED about the tea service in the electronic computer building,” outgoing IAS director Frank Aydelotte warned John von Neumann on June 5, 1947, six months after the Computer Project engineers had departed from Fuld Hall. “Apparently the members of your staff consume several times as much supplies as the same number of people in Fuld Hall and they have been especially unfair in the matter of sugar.” The war was over, but foodstuffs as well as building materials were still in short supply. “To come up here as Thompson did and carry down a large quantity of sugar in excess of your rations is not cricket,” Aydelotte continued, “and I should like to raise the question of whether it would not be better for the computer people to come up to Fuld Hall at the end of the day at five o’clock in the afternoon and have their tea here under proper supervision.”1