Turing's Cathedral


by George Dyson


  In the beginning was the command line: a human programmer supplied an instruction and a numerical address. There is no proscription against computers supplying their own instructions, and an ever-diminishing fraction of commands have ever been touched by human hands or human minds. Now the commands and addresses are as likely to be delivered the other way: the global computer supplies an instruction, and an address that maps to a human being via a personal device. That the resulting human behavior can only be counted on statistically, not deterministically, is, as von Neumann demonstrated in 1952 with his Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components, no obstacle to the synthesis of those unreliable human beings into a reliable organism. We are returning to the landscape as envisioned by von Neumann in 1948: with a few large computers handling much of the computation in the world. The big computers, however, are not physically centralized; they are distributed across a multitude of hosts.

  In October 2005, on the occasion of the sixtieth anniversary of von Neumann’s proposal to Lewis Strauss for the MANIAC, and Turing’s proposal to the National Physical Laboratory for the ACE, I was invited to Google’s headquarters in California, and given a glimpse inside the organization that has been executing precisely the strategy that Turing had in mind: gathering all available answers, inviting all possible questions, and mapping the results. I felt I was entering a fourteenth-century cathedral while it was being built. Everyone was busy placing one stone here and another stone there, with some invisible architect making everything fit. Turing’s 1950 comment about computers being “mansions for the souls that He creates” came to mind. “It is difficult to see why a soul should come to rest in a human body, when from both intellectual and moral viewpoints a computer would be preferable,” Olof Johannesson adds.24

  At the time of my visit, my hosts had just begun a project to digitize all the books in the world. Objections were immediately raised, not by the books’ authors, who were mostly long dead, but by book lovers who feared that the books might somehow lose their souls. Others objected that copyright would be infringed. Books are strings of code. But they have mysterious properties—like strings of DNA. Somehow the author captures a fragment of the universe, unravels it into a one-dimensional sequence, squeezes it through a keyhole, and hopes that a three-dimensional vision emerges in the reader’s mind. The translation is never exact. In their combination of mortal, physical embodiment with immortal, disembodied knowledge, books have a life of their own. Are we scanning the books and leaving behind the souls? Or are we scanning the souls and leaving behind the books?

  “We are not scanning all those books to be read by people,” an engineer revealed to me after lunch. “We are scanning them to be read by an AI.”

  The AI that is reading all these books is also reading everything else—including most of the code written by human programmers over the past sixty years. Reading does not imply understanding—any more than reading a genome allows us to understand an organism—but this particular AI, with or without understanding, is especially successful at making (and acquiring) improvements to itself. Only sixty years ago the ancestor of this code was only a few hundred lines long, and required personal assistance even to locate the next address. Artificial intelligence, so far, requires constant attention—the strategy that infants use. No genuinely intelligent artificial intelligence would reveal itself to us.

  Here was Alfvén’s vision, brought to life. The Big Computer was doing everything in its power to make life as comfortable as possible for its human symbionts. Everyone was youthful, healthy, happy, and exceptionally well fed. I had never seen so much knowledge in one place. I visited a room where a dedicated fiber-optic line was importing all the data that existed in the world concerning Mars. I listened to an engineer explain how we would all eventually have implanted auxiliary memories, individually initialized with everything we needed to know. Knowledge would become universal, and evil could be edited out. “The primary biological function of the brain was that of a weapon,” Alfvén had explained. “It is still not quite clear in which brain circuits the lust for power is located. In any case data machines seem devoid of any such circuits, and it is this which gives them their moral superiority over man; it is for this reason that computers were able to establish the kind of society which man had striven for and so abysmally failed to achieve.”25 I was tempted to sign up.

  At the end of the day I had to leave the digital Utopia behind. I relayed my impressions to a compatriot of Alfvén’s who had also visited the home of the Big Computer, and who might be able to shed some light. “When I was there, just before the IPO, I thought the coziness to be almost overwhelming,” she replied. “Happy golden retrievers running in slow motion through water sprinklers on the lawn. People waving and smiling, toys everywhere. I immediately suspected that unimaginable evil was happening somewhere in the dark corners. If the devil would come to earth, what place would be better to hide?”26

  The Great Disaster was caused not by the Big Computer, but by human beings unable to resist subverting this power to their own ends. “Evolution on the whole has moved steadily in one direction. While data machines have developed enormously, man has not,” Alfvén warned.27 Our hopes appear to lie with the future according to Olof Johannesson, who, after the world is reconstructed from the Great Disaster, declares, “We believe—or rather we know—that we are approaching an era of even swifter evolution, and even higher living standard, and an even greater happiness than ever before.”

  “We shall all live happily ever after,” ends Alfvén’s tale.28

  Olof Johannesson, however, turned out to be a computer, not a human being. Those who had sought to use the power of computers for destructive purposes discovered that one of those powers was the ability to replace human beings with something else. What if the price of machines that think is people who don’t?

  The other party is still waiting to collect.

  EIGHTEEN

  The Thirty-ninth Step

  It is easier to write a new code than to understand an old one.

  —John von Neumann to Marston Morse, 1952

  AT EXACTLY MIDNIGHT on July 15, 1958, in the machine room at the end of Olden Lane, Julian Bigelow turned off the master control, shut down the power supplies, picked up a blunt No. 2 pencil, and made the following entry in the machine log: “Off—12:00 Midnight—JHB.” Knowing there would be no log entries to follow, he extended his signature diagonally across the rest of the page.

  Within seconds, the cathodes stopped emitting, the heater filaments stopped glowing, and the Williams memory tubes gave up their last traces of electrostatic charge. No electrons would ever flow through these circuits again.

  “The other day I saw a ghost—the skeleton of a machine which not so long ago had been very much alive, the cause of much violent controversy,” wrote Klári von Neumann, some two years later.

  The computer, alias The Jonnyac, the Maniac, more formally The Institute for Advanced Study Numerical Computing Machine … is now locked away, not buried but hidden in the back-room of the building where it used to be the queen. Its life-juice, the electricity, has been cut off; its breathing, the air-conditioning system, has been dismantled. It still has its own little room, one which can only be approached through the big hall which was its ante-chamber used for the auxiliary equipment—now a dead storage hold for empty boxes, old desks and other paraphernalia that invariably finds its way to such places and then is “forgotten with the rest.”

  Klári had returned to Princeton, after Johnny’s death, for the dedication of von Neumann Hall at Princeton University, where the Institute for Defense Analyses was installing a new computer. “The old one, the original, the firster, lies silently in its inglorious tomb,” she wrote. “Sic Transit Gloria Mundi.”1

  After von Neumann left Princeton for Washington in 1955, the engineers remaining at the Institute hoped to build a second computer, incorporating a long list of improvements compiled while building the first. “We had enormous numbers of ideas,” says Bigelow, “which we never did anything with.”2 On February 29, 1956, however, it was decided “that no new machine should be built at the Institute for Advanced Study; that most of the engineering staff would, therefore, leave to pursue development work at other places; and that the Electronic Computer should be transformed from an experimental project into a tool for the solution of the many computational problems arising in the scientific community of Princeton.”3

  “With Johnny gone the greatness was left out of it,” says Harris Mayer, “and the Institute, who really didn’t want to have much to do with the MANIAC, was out of the game.”4 On July 1, 1957, the computer was transferred to Princeton University, with the machine remaining in its existing location at the end of Olden Lane. “There are two major changes as compared with the ‘Golden Times’ under the auspices of the I.A.S.,” Hans Maehly, acting director since July 1, 1956, explained to Oppenheimer after the ownership changed. “No coding services will be supplied to the users—except that we shall prepare general program subroutines (the exact opposite used to be the case),” and “there will be a computer time bookkeeping, involving hourly charges and dollars!”5

  In contrast to the first five years, when the machine was rarely idle, the entry “No Customers” appears regularly in the machine logs for 1957 and 1958. All new projects were placed on hold, except for the development of a higher-level language, which, as Maehly described it, “takes the mathematics and English that the coder writes as his statement of the problem, and turns this into machine coding without the necessity for human intervention.”6 The remaining engineers continued to work on developing user-friendly utilities such as ASBY, a relative address assembly routine, and POST-MORTEM, a debugging routine invoked in the event of a code “stopping in the wrong place or getting into loops, or whatever a program does in its death agonies.”7 FLINT was a floating point interpretive routine. “An interpretive routine is, by definition, a code that ‘translates’ orders given in a new ‘language’ into ordinary ‘machine language,’ ” Maehly explained. “Thus the machine plus FLINT will act like a new machine though no physical changes have been made for that purpose. We shall, therefore, speak of FLINT as if it were a virtual machine.”8
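  In modern terms, Maehly’s definition of an interpretive routine describes what we would now call an interpreter: a program that reads orders in one notation and carries them out, one at a time, on the underlying machine. The following is only a minimal sketch of that idea, not FLINT itself; the three order names and the single-accumulator design are invented purely for illustration.

```python
# A minimal sketch of an interpretive routine in Maehly's sense: orders
# written in a new "language" are read one at a time and carried out by
# the underlying machine. The instruction set here is invented.

def interpret(program, accumulator=0.0):
    """Run a list of (order, operand) pairs against a single accumulator."""
    for order, operand in program:
        if order == "LOAD":       # place a value in the accumulator
            accumulator = operand
        elif order == "ADD":      # add to the accumulator
            accumulator += operand
        elif order == "MUL":      # multiply the accumulator
            accumulator *= operand
        else:
            raise ValueError(f"unknown order: {order}")
    return accumulator

# The machine plus the interpreter behaves like a new, slower machine:
# the "program" below never touches native instructions directly.
result = interpret([("LOAD", 0.5), ("ADD", 0.25), ("MUL", 0.5)])
print(result)  # 0.375
```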

  A computer with floating-point arithmetic keeps track of the position of the decimal (or binary) point. Without floating point, the programmer has to bring numbers “back into focus” as a computation moves along. After debating the question in November of 1945, the IAS group decided to forgo floating point, making more memory directly available to codes, such as Barricelli’s, that did not invoke normal arithmetic, or Monte Carlo codes that consumed every available bit. “Von Neumann thought that anybody who was smart enough to use a computer like this, is smart enough to understand the precision requirements of all the processes involved,” Bigelow explains. “He never thought that computers would be run by mathematical imbeciles. He thought computers would be run by mathematicians, physicists and research people who were as good as he was.”9 Floating point got in the way of having an entirely empty universe in which to work.
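  Bringing numbers “back into focus” meant the programmer carried a scale factor alongside each quantity and rescaled by hand whenever a result threatened to drift out of the machine’s fixed range. A toy sketch of that bookkeeping follows; the function name and the power-of-two scaling policy are assumptions made for illustration, not a reconstruction of any IAS code.

```python
# Toy sketch of fixed-point bookkeeping: every quantity is a fraction in
# [-1, 1) plus a power-of-two scale factor that the programmer, not the
# machine, is responsible for tracking.

def rescale(fraction, exponent):
    """Shift the fraction back into [-1, 1), adjusting the exponent."""
    while abs(fraction) >= 1.0:
        fraction /= 2.0
        exponent += 1
    return fraction, exponent

# Represent 3.5 as 0.875 * 2**2 before handing it to the machine.
frac, exp = 0.875, 2

# After an addition the intermediate result may fall out of range; the
# programmer must notice and rescale -- the step floating point automates.
frac, exp = rescale(frac + 0.5, exp)   # 1.375 is out of range
print(frac, exp)                       # 0.6875 3, i.e. 0.6875 * 2**3 = 5.5
```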

  Each memory location held a string of 40 bits, of which the first (leftmost) bit represented the sign (0 for positive numbers, 1 for negative), leaving 39 bits for the number itself. Without floating point, the binary point (equivalent to the decimal point in decimal arithmetic) is fixed just to the right of the first bit. The next 39 positions, going from left to right, represent 2⁻¹ (½), 2⁻² (¼), 2⁻³ (⅛), and so on, all the way to 2⁻³⁹ (1/549,755,813,888). The computer thus only stores numbers ranging from −1 to +1, to an accuracy of 39 binary places. For reasons that the June 1946 Preliminary Discussion of the Logical Design of an Electronic Computing Instrument elaborated in detail, this made the most of the available 1,024 strings of 40 bits.
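  The representation is easy to state in modern terms: one sign bit followed by 39 bits giving a fraction in halves, quarters, eighths, and so on. The sketch below encodes and decodes such a word as the passage describes it; the handling of negative numbers follows the sign-and-magnitude reading of this paragraph and is an assumption, not a reconstruction of the machine’s actual circuitry.

```python
# Sketch of the 40-bit word described here: one sign bit plus 39 bits
# representing a fraction between -1 and +1 in steps of 2**-39.

FRACTION_BITS = 39
SCALE = 1 << FRACTION_BITS          # 2**39 = 549,755,813,888

def encode(x):
    """Pack a real number in (-1, 1) into a 40-bit word (as a Python int)."""
    if not -1.0 < x < 1.0:
        raise ValueError("fixed-point words only hold numbers between -1 and +1")
    sign = 1 if x < 0 else 0
    magnitude = int(abs(x) * SCALE)          # truncate to 39 binary places
    return (sign << FRACTION_BITS) | magnitude

def decode(word):
    """Unpack a 40-bit word back into a real number."""
    sign = -1.0 if word >> FRACTION_BITS else 1.0
    return sign * (word & (SCALE - 1)) / SCALE

print(decode(encode(0.5)))   # 0.5: exactly representable in binary
print(decode(encode(0.1)))   # slightly less than 0.1: truncated at 39 binary places
```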

  Elementary arithmetic was either performed thirty-nine-fold in a single operation (in the case of addition or subtraction) or was iterated thirty-nine times (in the case of multiplication or division). Addition and subtraction were precise. Multiplication of two thirty-nine-digit numbers, however, produces a seventy-eight-digit number, and division may produce a number of arbitrary length. The result had to be truncated, and was no longer precise. “Every number x that appears in the computing machine is an approximation of another number x′, which would have appeared if the calculation had been performed absolutely rigorously,” Burks, Goldstine, and von Neumann explained in 1946.10 Sooner or later a value has to be chosen for the thirty-ninth digit, discarding the remaining bits. Deciding how to make the approximation took human judgment, and making the approximation, according to the chosen algorithm, was the thirty-ninth step.
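  A worked miniature of the thirty-ninth step: multiplying two 39-bit fractions yields up to 78 significant bits, and some rule must decide what survives. The two rules shown below, simple truncation and round-to-nearest, are only examples of the choices the passage says required human judgment.

```python
# Multiplying two 39-bit fixed-point fractions yields a 78-bit product;
# the "thirty-ninth step" is deciding how to squeeze it back into 39 bits.

FRACTION_BITS = 39
SCALE = 1 << FRACTION_BITS

a = int(1 / 3 * SCALE)          # roughly 1/3 as a 39-bit fraction
b = int(1 / 7 * SCALE)          # roughly 1/7 as a 39-bit fraction

exact_product = a * b           # up to 78 significant bits

# One possible rule, simple truncation: discard the low 39 bits.
truncated = exact_product >> FRACTION_BITS

# Another rule, round-to-nearest: add half a unit in the last place first.
rounded = (exact_product + (1 << (FRACTION_BITS - 1))) >> FRACTION_BITS

print(truncated / SCALE)        # approximately 1/21, truncated
print(rounded / SCALE)          # approximately 1/21, rounded
```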

  FLINT, “which, as far as its user is concerned, transforms our machine into a slower, less sophisticated instrument for which coding is much simpler,” insulated the end user from having to communicate directly with the machine. “The planned general external language should be influenced as little as possible by the peculiarities of the machine; in other words, it should be as close as possible to the thinking of the programmer,” it was explained. The user “need not know machine language at all, even, and in particular, while debugging his program.”11 Instead of human beings having to learn to write code in machine language, machines began learning to read codes written in human language, a trend that has continued ever since.

  Despite this attempt to make things as easy as possible for the new owners, Princeton University had trouble getting the machine to work. “Our efforts to operate it on a regular basis during the past year have been unsuccessful,” Henry D. Smyth (author of Atomic Energy for Military Purposes) complained in announcing the MANIAC’s retirement in July of 1958. “Although it embodies the principles of modern machines it was essentially developmental and not very carefully engineered.”12

  Bigelow disagreed. “Sometime last summer the University crew, who are operating the machine, decided to ‘modify and improve it’ with the result that after the departure of Bill Keefe, the last of the original training engineers, it went on the blink and was pretty much inoperable from July through November 1957,” he reported to the Atomic Energy Commission in 1958. Finally, on December 22, according to Bigelow, Henry Smyth “asked me if I would undertake to get the thing running … since the University felt that this was their only chance. I thought it over and, for various reasons such as the fact that one of the men with 11 children derived his income from his job on the project—etc., I tackled the job.” Bigelow divided the available personnel into two crews working two full shifts, including weekends, except January 1 and December 25, and by “approximately the 1st of March we got things going pretty well and, with a few minor interruptions, it … has computed everything in sight.”13

  “The bewildering developments of the last couple of weeks end[ed] with the decision to close the Maniac on the 1st of July,” Martin Schwarzschild wrote to Hedi Selberg on June 6, 1958, reporting the demise of their stellar evolution work. “Your code has run the last couple of weeks wonderfully…[and] we have reached a point in the evolution where a new physical situation has arisen, not by the onset of helium burning, as I and [Fred] Hoyle used to expect, but by a convective instability in the helium core caused by the heat flux coming out of this contracting core.… I still have no idea what the star will do.”14 Schwarzschild’s universe was brought to a halt.

  Except for a retrospective account presented at Los Alamos in 1976, Bigelow never spoke or wrote publicly about the MANIAC again. Even the machine’s given name was removed. When mathematician Garrett Birkhoff referred to the MANIAC in a paper on numerical hydrodynamics in 1954, he was advised by Herman Goldstine that “I do not believe that the title ‘Maniac’ is an acceptable one here.”15 The Los Alamos copy became known as the MANIAC, and the original MANIAC became known as MANIAC-0 or simply the “IAS” or “Princeton” Machine. Bigelow arranged for the remains to go to the Smithsonian Institution, and in preparation, all auxiliary equipment was removed. He paid the university $406 cash for “Misc Residual Property” on August 4, 1958, and purchased the remaining “excess electronic gear left over from the extinct computer project” for $275, on December 18, 1959.16 Gerald Estrin arranged for the original 2,048-word magnetic drum to be donated to the Weizmann Institute in Israel, and the core of the machine was finally transported to Washington, D.C., in 1962.

  In exchange for the university having rights to use the computer, free of charge, when it was first constructed, Electronic Computer Project staff had been allowed to enroll as graduate students at the university, a benefit that helped attract young postwar electronic engineers eager to work for von Neumann while obtaining their PhDs. Bigelow kept attending lectures in the physics department until his status was rescinded by the university in 1960. “Since prior arrangement relieves you of the obligation to pay tuition, to avoid further difficulty it seems wise that you no longer continue in the status of enrolled student,” he was advised. “You are of course, free to submit a dissertation and present yourself for your Final Oral Examination.”17 The doors that von Neumann had opened were now closed.

  When the Institute transferred the computer to the university, it was understood that Institute scholars would be granted access to university computers in return. But when IAS astronomers sought to exercise this privilege, in 1966, the matter ended in dispute. “The transfer of the MANIAC to the University was a generous gesture on the Institute’s part, but I am afraid that it turned into something of a disaster for us,” Dean Pittendrigh, who was “considering” the matter on behalf of the university, complained. “We spent well over $100,000 on it and got very little useful computation out of it.… At any rate, the Institute is most welcome to use the University Computer Center at any time. The machines now in use at the University and the rates we can offer you for their use are listed below.”18 The charges were $110.00 per hour for an IBM 7044, and $137.50 per hour for an IBM 7094. Oppenheimer responded: “Can one ‘consider’ whether to keep his word?”19

 
