Darwin Among the Machines


by George B. Dyson


  “The perturbation arising from the resistance of the medium . . . does not, on account of its manifold forms, submit to fixed laws and exact description,” Galileo explained in 1638, “and disturbs [the trajectory] in an infinite variety of ways corresponding to the infinite variety in the form, weight, and velocity of the projectiles.”14 Galileo found the behavior of high-velocity cannon fire mathematically impenetrable and limited his services to the preparation of firing tables for low-velocity trajectories, “since shots of this kind are fired from mortars using small charges and imparting no supernatural momentum they follow their prescribed paths very exactly.”15 The production of firing tables still demanded equal measures of art and science when von Neumann arrived on the scene at the beginning of World War II. Shells were test-fired down a range equipped with magnetic pickup coils to provide baseline data. Then the influence of as many variables as could be assigned predictable functions was combined to produce a firing table composed of between two thousand and four thousand individual trajectories, each trajectory requiring about 750 multiplications to determine the path of that particular shell for a representative fifty points in time.
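The step-by-step trajectory work described above — a representative fifty points in time, a handful of multiplications per point — can be suggested by a minimal sketch in Python. The function name, time step, and quadratic drag coefficient here are illustrative assumptions of mine, not the Ballistic Research Laboratory's actual procedures:

```python
import math

def trajectory(v0, angle_deg, drag_coeff, dt=0.1, steps=50):
    """Step a shell's path forward in time, as a human computer would:
    a few multiplications per step, for a representative fifty points."""
    g = 9.81
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    path = []
    for _ in range(steps):
        v = math.hypot(vx, vy)
        # quadratic air drag opposing the velocity (illustrative coefficient)
        ax = -drag_coeff * v * vx
        ay = -g - drag_coeff * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

path = trajectory(v0=300.0, angle_deg=45.0, drag_coeff=0.0005)
```

Each pass of the loop costs roughly the dozen-odd multiplications that, repeated over fifty points and thousands of trajectories, made a single firing table a months-long undertaking by hand.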

  A human computer working with a desk calculator took about twelve hours to calculate a single trajectory; the electromechanical differential analyzer at the Ballistic Research Laboratory (a ten-integrator version of the machine that Vannevar Bush had developed at MIT) took ten or twenty minutes. This still amounted to about 750 hours, or a month of uninterrupted operation, to complete one firing table. Even with double shifts and the assistance of a second, fourteen-integrator differential analyzer (constructed at the Moore School of Electrical Engineering at the University of Pennsylvania in Philadelphia), each firing table required about three months of work. The nearly two hundred human computers working at the Ballistic Research Laboratory were falling hopelessly behind. “The number of tables for which work has not been started because of lack of computational facilities far exceeds the number in progress,” reported Herman Goldstine in August 1944. “Requests for the preparation of new tables are being currently received at the rate of six per day.”16

  Electromechanical assistance was not enough. In April 1943, the army initiated a crash program to build an electronic digital computer based on decimal counting circuits made from vacuum tubes. Rings of individual flip-flops (two vacuum tubes each) were formed into circular, ten-stage counters linked to each other and to an array of storage registers, forming, in effect, the electronic equivalent of Leibniz’s stepped reckoner, but running at about six million rpm. The ENIAC (Electronic Numerical Integrator and Computer) was constructed by a team that included John W. Mauchly, John Presper Eckert, and (Captain) Herman H. Goldstine, supervised by John G. Brainerd under a contract between the army’s Ballistic Research Laboratory and the Moore School. A direct descendant of the electromechanical differential analyzers of the 1930s, the ENIAC represented a brief but fertile intersection between the otherwise diverging destinies of analog and digital computing machines. Incorporating eighteen thousand vacuum tubes operating at 100,000 pulses per second, the ENIAC consumed 150 kilowatts of power and held twenty 10-digit numbers in high-speed storage. With the addition of a magnetic-core memory of 120 numbers in 1953, the working life of the ENIAC was extended until October 1955.
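The logic of those chained ten-stage ring counters — one pulse advances a ring, and a rollover from 9 to 0 sends a carry pulse to the next ring — can be sketched in a few lines. This is a toy model of the counting principle, not ENIAC's actual circuitry; the class and function names are my own:

```python
class DecadeCounter:
    """A ten-state ring: each pulse advances the state by one;
    rolling over from 9 back to 0 emits a carry to the next ring."""
    def __init__(self):
        self.state = 0

    def pulse(self):
        self.state = (self.state + 1) % 10
        return self.state == 0  # carry on rollover

# Three chained rings count in decimal, least significant digit first.
counters = [DecadeCounter() for _ in range(3)]

def add_pulses(n):
    for _ in range(n):
        carry = counters[0].pulse()
        i = 1
        while carry and i < len(counters):
            carry = counters[i].pulse()
            i += 1

add_pulses(137)
digits = [c.state for c in counters]  # → [7, 3, 1]
```

A chain of such rings counts pulses in decimal exactly as Leibniz's stepped reckoner carried tens mechanically — only at electronic speed.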

  Programmed by hand-configured plugboards (like a telephone switchboard) and resistor-matrix function tables (set up as read-only memory, or ROM), the ENIAC was later adapted to a crude form of stored-program control. Input and output were via standard punched-card equipment requisitioned from IBM. It was thus possible for the Los Alamos mathematicians to show up with their own decks of cards and produce intelligible results. Rushed into existence with a single goal in mind, the ENIAC became operational in late 1945 and just missed seeing active service in the war. To celebrate its public dedication in February 1946, the ENIAC computed a shell trajectory in twenty seconds—ten seconds faster than the flight of the shell and a thousand times faster than the methods it replaced. But the ENIAC was born with time on its hands, because the backlog of firing-table calculations vanished when hostilities ceased.

  Sudden leaps in biological or technological evolution occur when an existing structure or behavior is appropriated by a new function that spreads rapidly across the evolutionary landscape, taking advantage of a head start. Feathers must have had some other purpose before they were used to fly. U-boat commanders appropriated the Enigma machine first developed for use by banks. Charles Babbage envisioned using the existing network of church steeples that rose above the chaos of London as the foundation for a packet-switched communications net. According to neurophysiologist William Calvin, the human mind appropriated areas of our brains that first evolved as buffers for rehearsing and storing the precise timing sequences required for ballistic motor control. “Throwing rocks at even stationary prey requires great precision in the timing of rock release from an overarm throw, with the ‘launch window’ narrowing eight-fold when the throwing distance is doubled from a beginner’s throw,” he observed in 1983. “Paralleled timing neurons can overcome the usual neural noise limitations via the law of large numbers, suggesting that enhanced throwing skill could have produced strong selection pressure for any evolutionary trends that provided additional timing neurons. . . . This emergent property of parallel timing circuits has implications not only for brain size but for brain reorganization, as another way of increasing the numbers of timing neurons is to temporarily borrow them from elsewhere in the brain.”17 According to Calvin’s theory, surplus off-hours capacity was appropriated by such abstractions as language, consciousness, and culture, invading the neighborhood much as artists colonize a warehouse district, which then becomes the gallery district as landlords raise the rent. The same thing happened to the ENIAC: a mechanism developed for ballistics was expropriated for something else.

  John Mauchly, Presper Eckert, and others involved in the design and construction of the ENIAC had every intention of making wider use of their computer, but it was von Neumann who carried enough clout to preempt the scheduled ballistics calculations and proceed directly with a numerical simulation of the super bomb. The calculation took a brute-force numerical approach to a system of three partial differential equations otherwise resistant to analytical assault. As one million mass points were shuffled one IBM card at a time through the accumulators and registers at the core of the ENIAC’s eighty-foot-long vault, the first step was taken toward the explosion of a device that was, as Oppenheimer put it, “singularly proof against any form of experimental approach.”18 The test gave a false positive result: the arithmetic was right, but the underlying physics was wrong. Teller and colleagues were led to believe that the super design would work, and the government was led to believe that if the Americans didn’t build one the Soviets might build one first. By the time the errors were discovered, the hydrogen-bomb project had acquired the momentum to keep going until an alternative was invented that worked.

  The super fizzled, but the ENIAC was hailed as an unqualified success. Because of the secret nature of their problem the Los Alamos mathematicians had to manage the calculation firsthand. They became intimately familiar with the operation of the computer and suggested improvements to its design. A machine capable of performing such a calculation could, in principle, compute an answer to any problem presented in numerical form. Von Neumann discovered in the ENIAC an instrument through which his virtuoso talents could play themselves out to the full—inventing new forms of mathematics as he went along. “It was his feeling that a mathematician who was pursuing some new field of endeavor or trying to extend the scope of older fields, should be able to obtain clues for his guidance by using an electronic digital machine,” explained Willis Ware in 1953. “It was, therefore, the most natural thing that von Neumann felt that he would like to have at his own disposal such a machine.”19

  During the war, von Neumann had worked with the bomb designers at Los Alamos as well as with conventional-weapon designers calculating ballistic trajectories, blast and shock-wave effects, and the design of shaped charges for armor-piercing shells. It was his experience with the mathematics of shaped charges, in part, that led to the original success of the implosion-detonated atomic bomb. Bombs drew von Neumann’s interest toward computers, and the growing power of computers helped sustain his interest in developing more powerful bombs. “You had an explosion a little above the ground and you wanted to know how the original wave would hit the ground, form a reflected wave, then combine near the ground with the original wave, and have an extra strong blast wave go out near the ground,” recalled Martin Schwarzschild, the Princeton astrophysicist whose numerical simulations of stellar evolution had much in common—and shared computer time—with simulations of hydrogen bombs. “That was a problem involving highly non-linear hydrodynamics. At that time it was only just understood descriptively. And that became a problem that I think von Neumann became very much interested in. He wanted a real problem that you really needed computers for.”20

  Software—first called “coding” and later “programming”—was invented on the spot to suit the available (or unavailable) machines. Physicist Richard Feynman served on the front lines in developing the computational methods and troubleshooting routines used at Los Alamos in early 1944, when desk calculators and punched-card accounting machines constituted the only hardware at hand. The calculations were executed by dozens of human computers (“girls” in Feynman’s terminology) who passed intermediate results back and forth, weaving together a long sequence, or algorithm, of simpler steps. “If we got enough of these machines in a room, we could take the cards and put them through a cycle. Everybody who does numerical calculations now knows exactly what I’m talking about, but this was kind of a new thing then—mass production with machines.” The problem was that the punched-card machines that Stan Frankel had ordered from IBM were delayed. So to test out Frankel’s program, explained Feynman, “we set up this room with girls in it. Each one had a Marchant [mechanical calculator]. . . . We got speed with this system that was the predicted speed for the IBM machine[s]. The only difference is that the IBM machines didn’t get tired and could work three shifts.”21 By keeping all stages of the computation cycle busy all the time, Feynman invented the pipelining that has maximized the performance of high-speed processors ever since.
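Feynman's production line — every stage of the computation cycle kept busy at once, cards flowing from one operator to the next — can be sketched with Python generators, which hand each item downstream as soon as it is ready. The stage names and operations below are invented for illustration; they stand in for whatever arithmetic each "girl with a Marchant" performed:

```python
def add_stage(cards):
    """First station: perform one operation and pass the card on,
    so the next station can start before this batch is finished."""
    for c in cards:
        yield c + 1

def multiply_stage(cards):
    """Second station: pulls one card at a time from upstream."""
    for c in cards:
        yield c * 2

def tabulate(cards):
    """Final station: collect the finished results."""
    return list(cards)

# Cards flow through the stages like a production line; while one card
# is being multiplied, the next is already being incremented upstream.
results = tabulate(multiply_stage(add_stage(range(5))))  # → [2, 4, 6, 8, 10]
```

The throughput gain comes from overlap: no stage sits idle waiting for a whole batch to finish, which is the essence of the pipelining that modern processors still exploit.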

  Many thriving computer algorithms are direct descendants of procedures worked out by human computers passing results back and forth by hand. The initial challenge was how to break large problems into smaller, computable parts. Physically distinct phenomena often proved to be computationally alike. A common thread running through many problems of military interest was fluid dynamics—a subject that had drawn von Neumann’s attention by its mathematical intractability and its wide-ranging manifestations in the physical world. From the unfolding of the afternoon’s weather to the flight of a missile through the atmosphere or the explosion, by implosion, of atomic bombs, common principles are at work. But the behavior of fluids in motion, however optically transparent, remained mathematically opaque. Beginning in the 1930s, von Neumann grew increasingly interested in the phenomenon of turbulence. He puzzled over the nature of the Reynolds number, a nondimensional number that characterizes the transition from laminar to turbulent flow. “The internal motion of water assumes one or other of two broadly distinguishable forms,” reported Osborne Reynolds in 1883, “either the elements of the fluid follow one another along lines of motion which lead in the most direct manner to their destination, or they eddy about in sinuous paths the most indirect possible.”22

  “It seemed, however, to be certain if the eddies were owing to one particular cause, that integration [of Stokes equations of fluid motion] would show the birth of eddies to depend on some definite value of cρU/μ,” explained Reynolds, introducing the parameter that bears his name.23 As the product of length (of an object moving through a fluid or the distance over which a moving fluid is in contact with an object or a wall), density (of the fluid), and velocity (of the fluid or the object) divided by viscosity (of the fluid), the Reynolds number signifies the relative influence of these effects. All instances of fluid motion—water flowing through a pipe, a fish swimming through the sea, a missile flying through the air, or air flowing around the earth—can be compared on the basis of their Reynolds numbers to predict the general behavior of the flow. A low Reynolds number indicates the predominance of viscosity (owing to molecular forces between individual fluid particles) in defining the character of the flow; a high Reynolds number indicates that inertial forces (due to the mass and velocity of individual particles) prevail. Reynolds identified this pure, dimensionless number as the means of distinguishing between laminar (linear) and turbulent (nonlinear) flow and revealed how (but not why) the development of minute, unstable eddies precipitates self-sustaining turbulence as the transitional value is approached. The critical Reynolds number thus characterizes a transition between an orderly, deterministic regime and a disorderly, probabilistic regime that can be described statistically but not in full detail.
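Reynolds's parameter cρU/μ is simple enough to compute directly. The sketch below uses SI units and the conventional critical value of roughly 2,300 for pipe flow (the transition is not sharp, and the threshold differs by geometry); the function names and example values are mine:

```python
def reynolds_number(length_m, velocity_m_s, density_kg_m3, viscosity_pa_s):
    """Re = L * U * rho / mu -- the dimensionless ratio Reynolds wrote cρU/μ."""
    return length_m * velocity_m_s * density_kg_m3 / viscosity_pa_s

def flow_regime(re, critical=2300.0):
    """Rough classification for pipe flow; the transition is gradual,
    not a single sharp value."""
    return "laminar" if re < critical else "turbulent"

# Water (rho ≈ 1000 kg/m³, mu ≈ 0.001 Pa·s) creeping through
# a 0.05 m pipe at 0.02 m/s:
re = reynolds_number(0.05, 0.02, 1000.0, 0.001)  # → 1000.0, laminar
```

Because the number is dimensionless, a fish, a missile, and water in a pipe can all be placed on the same scale: low Reynolds number, viscosity dominates; high, inertia does.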

  “Von Neumann . . . wanted to find an explanation or at least a way to understand this very puzzling large number,” wrote Ulam. “Small numbers like π and e, are of course very frequent in physics, but here is a number of the order of thousands and yet it is a pure number with no dimensions: it does whet our curiosity.”24 Von Neumann later suggested a similar distinction in computational complexity, marking the transition from a relatively small number of units forming an orderly, deterministic system to a probabilistic system composed of a large number of interconnected components whose behavior cannot be described (or predicted) more economically than by a statistical description of the system as a whole. Von Neumann was intrigued by the origins of self-organization in complicated systems—behavior reminiscent of the origins of turbulence but on a different scale. He understood that the boundaries between physics and computational models of physics are imprecise. The behavior of a turbulent hydrodynamic system can be predicted only by accounting for all interactions down to molecular scale. The situation can be modeled in computationally manageable form either by adopting a coarser numerical mesh, following a random sample of elements and drawing statistical conclusions accordingly, or by slowing the computation down in time. To make useful predictions of an ongoing process—say, a forecast of tomorrow’s weather—the computation has to be speeded up.

  The goal of weather prediction stimulated the development of electronic computers on several fronts. John Mauchly first conceived the ENIAC as an electronic differential analyzer to assist in recognizing long-term cycles in the weather—before another war came along with its demands for firing tables for bigger guns. Vladimir Zworykin (1889–1982), the brilliant Russian immigrant who brought electronic television into existence with his invention of the iconoscope, or camera tube, in 1923, also foresaw the potential of an electronic computer as a meteorological oracle for the world. Norbert Wiener, patron of cybernetics, who embraced the anti-aircraft gun but shunned the bomb, was a vocal proponent of atmospheric modeling and prophesied growing parallels between computers powerful enough to model nonlinear systems, such as the weather, and “the very complicated study of the nervous system which is itself a sort of cerebral meteorology.”25

  A cellular approach to numerically modeling the weather was developed by meteorologist Lewis Fry Richardson (1881–1953), who refined his atmospheric model, calculating entirely by hand and on “a heap of hay in a cold rest billet,” while serving in the Friends’ Ambulance Unit attached to the Sixteenth Division of the French infantry in World War I. “During the battle of Champagne in April 1917,” wrote Richardson in the preface to his Weather Prediction by Numerical Process (1922), “the working copy was sent to the rear, where it became lost, to be rediscovered some months later under a heap of coal.”26 Richardson imagined partitioning the earth’s surface into several thousand meteorological cells, relaying current observations to the arched galleries and sunken amphitheater of a great hall, where some 64,000 human computers would continuously evaluate the equations governing each cell’s relations with its immediate neighbors, constructing a numerical model of the earth’s weather in real time. “Perhaps some day in the dim future, it will be possible to advance the computations faster than the weather advances,” hoped Richardson, “and at a cost less than the saving to mankind due to the information gained.”27
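The scheme Richardson imagined — each cell repeatedly updated from the current values of its immediate neighbors — is the stencil computation at the heart of every grid-based weather model since. The toy below replaces his coupled atmospheric equations with a simple diffusion-like relaxation on a wrap-around grid; the update rule and grid size are illustrative assumptions, not Richardson's actual equations:

```python
def step(grid):
    """One synchronous update: every cell moves a quarter of the way
    toward the mean of its four immediate neighbors -- each of
    Richardson's 64,000 computers minding one such cell."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            neighbors = [grid[(i - 1) % rows][j], grid[(i + 1) % rows][j],
                         grid[i][(j - 1) % cols], grid[i][(j + 1) % cols]]
            new[i][j] = grid[i][j] + 0.25 * (sum(neighbors) / 4 - grid[i][j])
    return new

# A 4x4 "atmosphere" with one warm cell; repeated steps spread it out.
grid = [[0.0] * 4 for _ in range(4)]
grid[1][1] = 1.0
for _ in range(10):
    grid = step(grid)
```

Because every cell needs only its neighbors' previous values, all 64,000 updates can proceed at once — which is exactly the parallelism Richardson's amphitheater of human computers embodied.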

  Richardson thereby anticipated massively parallel computing, his 64,000 mathematicians reincarnated seventy years later as the multiple processors of Danny Hillis’s Connection Machine. “We had decided to simplify things by starting out with only 64,000 processors,” explained Hillis, recalling how Richard Feynman helped him bring Lewis Richardson’s fantasy to life.28 Even without the Connection Machine, Richardson’s approach to cellular modeling would be widely adopted once it became possible to assign one high-speed digital computer rather than 64,000 human beings to keep track of numerical values and relations among individual cells. The faint ghost of Lewis Richardson haunts every spreadsheet in use today.

  After the war, Richardson settled down as a meteorologist at Benson, Oxfordshire, contributing to the mathematical theory of turbulence and developing a novel method for remote sensing of movement in the upper air. By shooting small steel balls (between the size of a pea and a cherry) at the zenith and observing where they fell, Richardson helped turn swords into plowshares with a system that was faster, more accurate, and more robust than using balloons. When the Meteorological Office was transferred to the jurisdiction of the Air Ministry, Department of War, Richardson felt compelled, as a Quaker, to resign his post. Later still, when he discovered that poison-gas technicians were interested in his methods for predicting atmospheric flow, he ended his meteorological research, launching a mathematical investigation into the causes of war to which he devoted the remainder of his life. His studies were published posthumously in two separate volumes: Arms and Insecurity, an analysis of arms races, and Statistics of Deadly Quarrels, which documents every known category of violent conflict, from murder to strategic bombing, arranged both chronologically and on a scale of magnitude based on the logarithm of the number of victims left dead.29

 
