Turing's Cathedral

by George Dyson

  In 1950, von Neumann and Goldstine invited Schwarzschild to use the MANIAC instead. Stellar evolution attracted von Neumann because “its objects of study can be observed but cannot be experimented with,” and because the results could be compared with the observed characteristics of known types of stars at different stages in their lives. “Suddenly you had the possibility of computing evolution sequences for individual stars, of comparing different stars and different phases of the evolution with observed stars,” Schwarzschild explained. This would provide an immediate check on whether the numerical models were holding up. Where meteorologists have to accept that every weather system is different and that observations can only be collected once, astronomers can look up in the night sky and observe the different families of stars at every stage of their existence at any time. “A wealth (what you might call a zoo) of observed stars fell into patterns,” says Schwarzschild, “and we could start getting ages for stars.”6

  Von Neumann provided machine time, along with the assistance of Hedvig (Hedi) Selberg, who became Schwarzschild’s collaborator on the stellar evolution work. Selberg, born in Târgu Mureș, Transylvania, in 1919, was the daughter of a furniture maker who went bankrupt during the Depression, leaving her to help support the family as a tutor in mathematics while attending the University of Kolozsvár, where she graduated, with the equivalent of a master’s degree, at the head of her class. She then taught mathematics and physics at a Jewish high school in Satu Mare, until her entire family was deported to Auschwitz in June 1944. The only member of her family to survive, she escaped to Scandinavia, married the Norwegian number theorist Atle Selberg in August 1947, and arrived in Princeton in September. Her teaching credentials were not recognized in New Jersey and she was making plans to obtain a PhD in mathematics at Columbia when von Neumann offered her a position with the Electronic Computer Project, in September 1950, at $300 per month. “She loved this new line of work, and felt very lucky to have been part of this exciting development in history,” says her daughter, Ingrid. “It allowed her to use her knowledge of mathematics and physics as well as her intelligence and attention to detail.”7

  Hedi Selberg remained with the MANIAC for the machine’s entire life, from the first hydrogen bomb calculations to the last stellar evolution model that was running when the university pulled the plug. “I never really knew about the hydrogen bomb research going on at ECP at night,” says Ingrid, whereas, according to Freeman Dyson, “Hedi always said the computer was working mostly on bombs.” When the engineers or the “AEC boys” left questions in the machine logbooks, the answers are often signed “H.S.”

  “Impossible to load anything but all ‘1’s. Even blank cards show all ‘1’s,” she noted in the machine log on December 11, 1953. “Machine operates OK apart from this,” she records in the next entry. “Relay in back of machine (the one we put matches in) seems OK.” She supervised the machine singlehandedly for extended periods of time. “The A/cond are not doing wonderfully well tonight and got quite iced up,” she notes at 12:09 a.m. on the night of November 19–20, 1954. “However the temp is coming down gradually and will leave Barricelli with instructions to shut down if temp reaches 90,” she adds before signing off for the night.

  Schwarzschild and Selberg started out with simple models of the early stages in stellar evolution, when the star is in hydrostatic and thermal equilibrium and “the internal temperatures just right so that the hydrogen burning produces energy at a rate exactly compensating [for] the losses by surface radiation.”8 Eventually the hydrogen in the core is transmuted into helium, and the area of hydrogen burning moves farther out. The helium core grows in mass but shrinks in size, while the luminosity and radius of the star as a whole increase, producing a red giant that burns for another billion years or so, before becoming a white dwarf.
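
  The equilibrium conditions invoked at the start of this description are not written out in the text. In their standard textbook form (offered here as background, and as an assumption about the general shape of the model rather than a transcription of Schwarzschild and Selberg’s actual system), they read:

    % Hydrostatic equilibrium: the pressure gradient balances gravity.
    \frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}

    % Thermal equilibrium: energy released by hydrogen burning inside
    % radius r flows outward at the same rate, so that at the surface
    % the luminosity exactly compensates the losses by radiation.
    \frac{dL}{dr} = 4\pi r^{2}\,\rho(r)\,\epsilon(r)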

  As a star ages, its behavior grows increasingly complex. Convection introduces mixing between layers, and the transmutation of helium produces a succession of heavier elements, with their varying opacity having the same critical effects that radiation opacity had on the feasibility of the Teller-Ulam bomb. “The whole system including the various supplementary equations for computing the opacity, energy generation, etc., and the distinction between the radiative, convective and degenerate cases is certainly the most complicated system ever treated on our computer,” Schwarzschild and Selberg explained.9 The models were far ahead of their time. “It was only, regrettably, much later, after he was no longer with us, that I understood how deep and profound and insightful were his early studies and how they informed his comments and questions,” says astronomer John Bahcall, who, twenty years later, brought numerical modeling back to the IAS. “But he was the kind of person who never said, ‘If you look back at my paper in 1956 you’ll see that I calculated the primordial helium abundance’ or something. That just wasn’t his style. He never mentioned his own work.”10

  Schwarzschild’s calculations constituted a cosmological meteorology: the infinite forecast to end all infinite forecasts. “Do we then live in an adult galaxy which has already bound half of its mass permanently into stars, and has consumed a fourth of all its fuel?” he asked in 1957. “Are we too late to witness the turbulent sparkle of galactic youth but still in time to watch stars in all their evolutionary phases before they settle into the permanence of the white dwarfs?”11

  By mid-1953, five distinct sets of problems were running on the MANIAC, characterized by different scales in time: (1) nuclear explosions, over in microseconds; (2) shock and blast waves, ranging from microseconds to minutes; (3) meteorology, ranging from minutes to years; (4) biological evolution, ranging from years to millions of years; and (5) stellar evolution, ranging from millions to billions of years. All this in 5 kilobytes—enough memory for about one-half second of audio, at the rate we now compress music into MP3s.

  These time scales ranged from about 10⁻⁸ seconds (the lifetime of a neutron in a nuclear explosion) to 10¹⁷ seconds (the lifetime of the sun). The middle of this range falls between 10⁴ and 10⁵ seconds, or about eight hours, exactly in the middle of the range (from the blink of an eye, over in three-tenths of a second, to a lifetime of three billion seconds, or ninety years) that a human being is able to directly comprehend.
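
  The arithmetic here is easy to check. A minimal Python sketch, using only the round-number figures quoted above:

    # Geometric midpoint of the MANIAC's span of time scales, from the
    # lifetime of a neutron in a nuclear explosion to the lifetime of
    # the sun. Values are the order-of-magnitude figures in the text.
    import math

    shortest = 1e-8   # seconds
    longest = 1e17    # seconds

    midpoint = math.sqrt(shortest * longest)   # geometric mean = 10**4.5
    print(f"{midpoint:.0f} seconds, about {midpoint / 3600:.1f} hours")
    # prints roughly 31623 seconds, about 8.8 hours: between 10**4 and
    # 10**5 seconds, as stated.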

  Of these five sets of problems, shock waves were von Neumann’s first love and remained closest to his heart. He had an intuitive feel for the subject. Calculation alone was not always enough. “The question as to whether a solution which one has found by mathematical reasoning really occurs in nature … is a quite difficult and ambiguous one,” he explained in 1949, concerning the behavior of shock waves produced by the collision of gas clouds in interstellar space. “We have to be guided almost entirely by physical intuition in searching for it … and it is difficult to say about any solution which has been derived, with any degree of assurance, that it is the one which must exist.”12

  Shock waves are produced by collisions between objects, or between an object and a medium, or between two mediums, or by a sudden transition within a medium, when the velocities or time scales are mismatched. If the difference in velocity is greater than the local speed of information, this propagates a discontinuity, a sonic boom being the classic example, as an aircraft exceeds the speed of sound. The disturbance can be the detonation front in a high explosive, a bullet exiting the muzzle of a gun, a meteorite hitting the atmosphere, the explosion of a nuclear weapon, or the collision between two jets of interstellar gas.

  Shock waves could even be produced by the collision between two universes, or the explosion of a new universe—this being one way to describe the discontinuities being produced as the digital universe collides with our universe faster than we are able to adjust. “The ever accelerating progress of technology and changes in the mode of human life,” von Neumann explained to Stan Ulam, “gives the appearance of approaching some essential singularity in the history of the race.”13

  In our universe, we measure time with clocks, and computers have a “clock speed,” but the clocks that govern the digital universe are very different from the clocks that govern ours. In the digital universe, clocks exist to synchronize the translation between bits that are stored in memory (as structures in space) and bits that are communicated by code (as sequences in time). They are clocks more in the sense of regulating escapement than in the sense of measuring time.

  “The I.A.S. computing machine is non-synchronous; that is, decisions between elementary alternatives, and enforcement of these decisions are initiated not with reference to time as an independent variable but rather according to sequence,” Bigelow explained to Maurice Wilkes—who had just succeeded in coaxing Cambridge’s delay-line storage EDSAC computer into operation before the Institute’s—in 1949. “Time, therefore, does not serve as an index for the location of information, but instead counter readings are used, the counters themselves being actuated by the elementary events.”14

  “It was all of it a large system of on and off, binary gates,” Bigelow reiterated fifty years later. “No clocks. You don’t need clocks. You only need counters. There’s a difference between a counter and a clock. A clock keeps track of time. A modern general purpose computer keeps track of events.”15 This distinction separates the digital universe from our universe, and is one of the few distinctions left.
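
  The distinction is easy to caricature in code. The following toy Python sketch is purely illustrative (it describes nothing about the IAS machine itself): one loop is sequenced by a clock, the other by a counter of completed events.

    import time

    steps = ["fetch", "decode", "execute", "store"]

    # Clock-driven: each step is released by the passage of time,
    # whether or not the previous step actually needed that long.
    def clocked(period=0.01):
        for step in steps:
            time.sleep(period)          # wait for the tick
            print("tick  ->", step)

    # Counter-driven (asynchronous): each step is released by the
    # completion of the one before it; the counter reading, not the
    # time, says where the computation stands.
    def counted():
        counter = 0
        for step in steps:
            counter += 1                # actuated by the event itself
            print("event", counter, "->", step)

    clocked()
    counted()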

  The acceleration from kilocycles to megahertz to gigahertz is advancing even faster than this increase in nominal clock speed indicates, as devices such as dedicated graphic processors enable direct translation between coded sequences and memory structures, without waiting for any central clock to authorize the translation step-by-step. No matter how frequently we reset our own clocks to match the increasing speed of computers, we will never be able to keep up. Codes that take advantage of asynchronous processing, in the digital universe, will rapidly move ahead, in ours.

  Thirty years ago, networks developed for communication between people were adapted to communication between machines. We went from transmitting data over a voice network to transmitting voice over a data network in just a few short years. Billions of dollars were sunk into cables spanning six continents and three oceans, and a web of optical fiber engulfed the world. When the operation peaked in 1991, fiber was being rolled out, globally, at over 5,000 miles per hour, or nine times the speed of sound: Mach 9.

  Since it costs little more to install a bundle of fibers than a single strand, tremendous overcapacity was deployed. Fifteen years later, a new generation of companies, including Google, started buying up “dark fiber” at pennies on the dollar, awaiting a time when it would be worth the expense of connecting it at the ends. With optical switching growing cheaper by the minute, the dark fiber is now being lit. The “last mile” problem—how to reach individual devices without individual connection costs—has evaporated with the appearance of wireless devices, and we are now rolling out cable again. Global production of optical fiber reached Mach 20 (15,000 miles per hour) in 2011, barely keeping up with the demand.

  Among the computers populating this network, most processing cycles are going to waste. Most processors, most of the time, are waiting for instructions. Even within an active processor, as Bigelow explained, most computational elements are waiting around for something to do next. The global computer, for all its powers, is perhaps the least efficient machine that humans have ever built. There is a thin veneer of instructions, and then there is a dark, empty 99.9 percent.

  To numerical organisms in competition for computational resources, the opportunities are impossible to resist. The transition to virtual machines (optimizing the allocation of processing cycles) and to cloud computing (optimizing storage allocation) marks the beginning of a transformation into a landscape where otherwise wasted resources are being put to use. Codes are becoming multicellular, while the boundaries between individual processors and individual memories grow indistinct.

  When Julian Bigelow and Norbert Wiener formulated their Maxims for Ideal Prognosticators in 1941, their final maxim was that predictions (of the future position of a moving target) should be made by normalizing observations to the frame of reference of the target, “emphasizing its fundamental symmetry and invariance of behavior,” otherwise lost in translation into the frame of reference of the observer on the ground.16 This is why it is so difficult to make predictions, within the frame of reference of our universe, as to the future of the digital universe, where time as we know it does not exist. All we have are lag functions that we can only turn forward in our nondigital time.

  The codes spawned in 1951 have proliferated, but their nature has not changed. They are symbiotic associations of self-reproducing numbers (starting with a primitive alphabet of order codes) that were granted limited, elemental powers, the way a limited alphabet of nucleotide sequences codes for an elemental set of amino acids—with polynucleotides, proteins, and everything else that follows developing from there. The codes by which an organization as vast and complex as Google maps the state of the entire digital universe are descended from the first Monte Carlo codes that Klári von Neumann wrote while smoking Lucky Strikes from the Los Alamos Post Exchange.

  At about the time the IAS computer became operational in 1951, Stan Ulam sent John von Neumann an undated note, wondering whether a purely digital universe could capture some of the evolutionary processes we see in our universe. He envisioned an unbounded two-dimensional matrix where Turing-complete digital organisms (operating in two dimensions, unlike Barricelli’s one-dimensional, cross-breeding genetic strings) would compete for resources, and evolve. Ulam also suggested carrying the digital model backward, in the other direction, to consider the question, as he put it, of “generation of time and space in ‘prototime.’ ”17
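
  A toy sketch gives the flavor of such a two-dimensional competition, though it is hypothetical and far simpler than what Ulam had in mind (the organisms below are bare numbers, not Turing-complete machines): copies spread to neighboring cells, mutate slightly, and displace weaker occupants.

    import random

    SIZE = 20
    grid = [[None] * SIZE for _ in range(SIZE)]   # an empty 2-D matrix
    grid[SIZE // 2][SIZE // 2] = 1.0              # seed a single organism

    def neighbors(x, y):
        return [((x + dx) % SIZE, (y + dy) % SIZE)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for step in range(2000):
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        if grid[x][y] is None:
            continue                              # nothing here to reproduce
        nx, ny = random.choice(neighbors(x, y))
        child = grid[x][y] + random.gauss(0, 0.1) # copy with mutation
        if grid[nx][ny] is None or child > grid[nx][ny]:
            grid[nx][ny] = child                  # colonize or displace

    occupied = sum(cell is not None for row in grid for cell in row)
    print("occupied cells after 2000 steps:", occupied)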

  Organisms that evolve in the digital universe are going to be very different from us. To us, they will appear to be evolving ever faster, but to them, our evolution will appear to have begun decelerating at their moment of creation—the way our universe appears to have suddenly begun to cool after the big bang. Ulam’s speculations were correct. Our time is becoming the prototime for something else.

  “The game that nature seems to be playing is difficult to formulate,” Ulam observed, during a conversation that included Nils Barricelli, in 1966. “When different species compete, one knows how to define a loss: when one species dies out altogether, it loses, obviously. The defining win, however, is much more difficult because many coexist and will presumably for an infinite time; and yet the humans in some sense consider themselves far ahead of the chicken, which will also be allowed to go on to infinity.”18

  Von Neumann’s first solo paper, “On the Introduction of Transfinite Numbers,” was published in 1923, when he was nineteen. The question of how to consistently distinguish different kinds of infinity, which von Neumann clarified but did not answer, is closely related to Ulam’s question: Which kind of infinity do we want?

  SEVENTEEN

  The Tale of the Big Computer

  In a small laboratory—some people maintain that it was an old converted stable—a few men in white coats stood watching a small and apparently insignificant apparatus equipped with signal lights, which flashed like stars. Gray perforated strips of paper were fed into it, and other strips emerged. Scientists and engineers worked hard, with a gleam in their eyes; they knew that the little gadget in front of them was something exceptional—but did they foresee the new era that was opening before them, or suspect that what had happened was comparable to the origin of life on Earth?

  —Hannes Alfvén, 1966

  VON NEUMANN MADE a deal with “the other party” in 1946. The scientists would get the computers, and the military would get the bombs. This seems to have turned out well enough so far, because, contrary to von Neumann’s expectations, it was the computers that exploded, not the bombs.

  “It is possible that in later years the machine sizes will increase again, but it is not likely that 10,000 (or perhaps a few times 10,000) switching organs will be exceeded as long as the present techniques and philosophy are employed,” von Neumann predicted in 1948. “About 10,000 switching organs seem to be the proper order of magnitude for a computing machine.”1 The transistor had just been invented, and it would be another six years before you could buy a transistor radio—with four transistors. In 2010 you could buy a computer with a billion transistors for the inflation-adjusted cost of a transistor radio in 1956.

  Von Neumann’s estimate was off by over five orders of magnitude—so far. He believed, and counseled the government and industry strategists who sought his advice, that a small number of large computers would be able to meet the demand for high-speed computing, once the impediments to remote input and output were addressed. This was true, but only for a very short time. After concentrating briefly in large, centralized computing facilities, the detonation wave that began with punched cards and vacuum tubes was propagated across a series of material and institutional boundaries: into magnetic-core memory, semiconductors, integrated circuits, and microprocessors; and from mainframes and time-sharing systems into minicomputers, microcomputers, personal computers, the branching reaches of the Internet, and now billions of embedded microprocessors and aptly named cell phones. As components grew larger in number, they grew smaller in size and cycled faster in time. The world was transformed.

 
