Darwin Among the Machines

by George B. Dyson


  Life now faces opportunities of unprecedented scale. Microprocessors divide time into imperceptibly fine increments, releasing signals that span distance at the speed of light. Systems communicate globally and endure indefinitely over time. Large, long-lived, yet very fast composite organisms are free from the constraints that have limited biology in the past. Since the process of organizing large complex systems remains mysterious to us, we have referred to these developments as self-organizing systems or self-organizing machines.

  Theories of self-organization became fashionable in the 1950s, generating the same excitement (and disappointments) that the “new” science of complexity has generated in recent years. Self-organization appeared to hold the key to natural phenomena such as morphogenesis, epigenesis, and evolution, inviting the deliberate creation of systems that grow and learn. Unifying principles were discovered among organizations ranging from a single cell to the human nervous system to a planetary ecology, with implications for everything in between. All hands joined in. Alan Turing was working on a mathematical model of morphogenesis, theorizing how self-organizing chemical processes might govern the growth of living forms, when his own life came to an end in 1954; John von Neumann died three years later in the midst of developing a theory of self-reproducing machines.

  “The adjective [self-organizing] is, if used loosely, ambiguous, and, if used precisely, self-contradictory,” observed British neurologist W. Ross Ashby in 1961. “There is a first meaning that is simple and unobjectionable,” Ashby explained. “This refers to the system that starts with its parts separate (so that the behavior of each is independent of the others’ states) and whose parts then act so that they change towards forming connections of some type. Such a system is ‘self-organizing’ in the sense that it changes from ‘parts separated’ to ‘parts joined.’ An example is the embryo nervous system, which starts with cells having little or no effect on one another, and changes, by the growth of dendrites and formation of synapses, to one in which each part’s behavior is very much affected by the other parts.”5 The second type of self-organizing behavior—where interconnected components become organized in a productive or meaningful way—is perplexing to define. In the infant brain, for example, self-organization is achieved less by the growth of new connections and more by allowing meaningless connections to die out. Meaning, however, has to be supplied from outside. Any individual system can only be self-organizing with reference to some other system; this frame of reference may be as complicated as the visible universe or as simple as a single channel of Morse code.

  William Ross Ashby (1903–1972) began his career as a psychiatrist, diversifying into neurology by way of pathology after serving in the Royal Army Medical Corps during World War II. By studying the structure of the human brain and the peculiarities of human behavior, he sought to unravel the mysteries in between. Like von Neumann, he hoped to explain how mind can be so robust yet composed of machinery so frail. Two years before his death, Ashby reported on a series of computer simulations measuring the stability of complex dynamic systems as a function of the degree of interconnection between component parts. The evidence suggested that “all large complex dynamic systems may be expected to show the property of being stable up to a critical level of connectance, and then, as the connectance increases, to go suddenly unstable.”6 Implications range from the origins of schizophrenia to the stability of market economies and the performance of telecommunications webs.
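  Ashby’s 1970 result is easy to reproduce in miniature on a modern machine. The following sketch is my own illustration, not Ashby’s code, and its system size, interaction strengths, and trial counts are arbitrary assumptions: random linear systems dx/dt = Ax are generated at increasing levels of connectance, and the fraction that remain stable (all eigenvalues of A with negative real parts) is counted.

    # A Gardner-Ashby style experiment (hedged sketch, not the original code).
    # Each trial builds a random linear system dx/dt = A x in which every part
    # damps itself (diagonal of -1) and interacts with each other part with
    # probability equal to the connectance; stability requires all eigenvalues
    # of A to have negative real parts.
    import numpy as np

    def stability_fraction(n=50, connectance=0.02, trials=200, seed=0):
        rng = np.random.default_rng(seed)
        stable = 0
        for _ in range(trials):
            A = np.zeros((n, n))
            mask = rng.random((n, n)) < connectance      # which parts interact
            A[mask] = rng.normal(0.0, 1.0, mask.sum())   # random interaction strengths
            np.fill_diagonal(A, -1.0)                    # each part damps itself
            if np.max(np.linalg.eigvals(A).real) < 0:
                stable += 1
        return stable / trials

    for C in (0.005, 0.01, 0.02, 0.04, 0.08):
        print(f"connectance {C:.3f}: stable fraction {stability_fraction(connectance=C):.2f}")

  With these (arbitrary) parameters the stable fraction tends to stay high at low connectance and to fall off sharply once a critical level is passed, which is the sudden transition Ashby described.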

  Ashby formulated a concise set of principles of self-organizing systems in 1947, demonstrating “that a machine can be at the same time (a) strictly determinate in its actions, and (b) yet demonstrate a self-induced change of organisation.”7 This work followed an earlier paper on adaptation by trial and error, written in 1943 but delayed by the war, in which he observed that “an outstanding property of the nervous system is that it is self-organizing, i.e., in contact with a new environment the nervous system tends to develop that internal organization which leads to behavior adapted to that environment.”8 Generalizing such behavior so that it was “not in any way restricted to mechanical systems with Newtonian dynamics,” Ashby concluded that “‘adaptation by trial and error’ . . . is in no way special to living things, that it is an elementary and fundamental property of all matter, and . . . no ‘vital’ or ‘selective’ hypothesis is required.”9 Starting from a rigorous definition of the concepts of environment, machine, equilibrium, and adaptation, he developed a simple mathematical model showing how changes in the environment cause a machine to break, that is, to switch to a different equilibrium state. “The development of a nervous system will provide vastly greater opportunities both for the number of breaks available and also for complexity and variety of organization,” he wrote. “The difference, from this point of view, is solely one of degree.”10

  When the cybernetics movement took form in the postwar years, Ashby’s ideas were folded in. His Design for a Brain: The Origin of Adaptive Behaviour, published in 1952, was adopted as one of the central texts in the new field. Ashby’s “homeostat,” the electromechanical embodiment of his ideas on equilibrium-seeking machines, behaved like a cat that turns over and goes back to sleep when it is disturbed. His “Law of Requisite Variety” held that the complexity of an effective control system corresponds to the complexity of the system under its control.
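  The homeostat’s strategy, which Ashby called ultrastability, can be caricatured in a few lines of code. The sketch below is an illustration of the principle rather than a model of Ashby’s actual hardware; the number of units, the bounds, and the step size are arbitrary assumptions. Four coupled variables drift under fixed linear couplings, and whenever one of them exceeds its limits the offending unit redraws its own couplings at random, until the assembly as a whole settles down.

    # Hedged caricature of Ashby's ultrastable homeostat (not his circuit).
    # Four units evolve under dx/dt = W x; when a unit's variable leaves its
    # limits, that unit steps its "uniselector" by redrawing its couplings at
    # random, and the hunt continues until the whole system is stable.
    import numpy as np

    rng = np.random.default_rng(1)
    n, limit, dt = 4, 10.0, 0.05
    W = rng.uniform(-1.0, 1.0, (n, n))          # initial, arbitrary couplings
    np.fill_diagonal(W, -1.0)                   # each unit damps itself
    x = rng.uniform(-1.0, 1.0, n)
    resets = 0

    for step in range(20000):
        x = x + dt * (W @ x)                    # crude Euler integration
        offenders = np.flatnonzero(np.abs(x) > limit)
        for i in offenders:                     # "essential variable" out of bounds
            W[i, :] = rng.uniform(-1.0, 1.0, n) # redraw that unit's couplings
            W[i, i] = -1.0
            resets += 1
        if offenders.size:
            x = np.clip(x, -limit, limit)

    print(f"{resets} uniselector steps; final state {np.round(x, 4)}")

  Whether and how quickly the hunt ends depends on chance, which is faithful to the original: Ashby’s machine found its equilibria by random search, not by calculation.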

  Ashby believed that the “spontaneous generation of organization” underlying the origins of life and other improbabilities was not the exception but the rule. “Every isolated determinate dynamic system obeying unchanging laws will develop ‘organisms’ that are adapted to their ‘environments,’” he argued. “There is no difficulty, in principle, in developing synthetic organisms as complex and as intelligent as we please. But . . . their intelligence will be an adaptation to, and a specialization towards, their particular environment, with no implication of validity for any other environment such as ours.”11

  According to Ashby, high-speed digital computers offered a bridge between laws and life. “Until recently we have had no experience of systems of medium complexity; either they have been like the watch and the pendulum, and we have found their properties few and trivial, or they have been like the dog and the human being, and we have found their properties so rich and remarkable that we have thought them supernatural. Only in the last few years has the general-purpose computer given us a system rich enough to be interesting yet still simple enough to be understandable . . . it enables us to bridge the enormous conceptual gap from the simple and understandable to the complex.” To understand something as complicated as life or intelligence, advised Ashby, we need to retrace its steps. “We can gain a considerable insight into the so-called spontaneous generation of life by just seeing how a somewhat simpler version will appear in a computer,” he noted in 1961.12

  The genesis of life or intelligence within or among computers goes approximately as follows: (1) make things complicated enough, and (2) either wait for something to happen by accident or make something happen by design. The best approach may combine elements of both. “My own guess is that, ultimately, efficient machines having artificial intelligence will consist of a symbiosis of a general-purpose computer together with locally random or partially random networks,” concluded Irving J. Good in 1958. “The parts of thinking that we have analyzed completely could be done on the computer. The division would correspond roughly to the division between the conscious and unconscious minds.”13 A random network need not be implemented by a random configuration of neurons, wires, or switches; it can be represented by logical relationships evolved in an ordered matrix of two-state devices if the number of them is large enough. This possibility was inherent in John von Neumann’s original conception of the digital computer as an association of discrete logical elements, a population that just so happened to be organized by its central control organ for the performance of arithmetical operations but that could in principle be organized differently, or even be allowed to organize itself. Success at performing arithmetic, however, soon preempted everything else.
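  One way to picture such a network is the sketch below, my own construction rather than anything Good or von Neumann published: an ordered array of two-state elements in which each element computes a fixed but randomly chosen Boolean function of two other elements. The randomness lives entirely in the wiring and the truth tables; the hardware itself is a perfectly regular matrix of switches.

    # Hedged sketch of a random logical network realized in an ordered array of
    # two-state devices (sizes and wiring are arbitrary assumptions).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 32
    inputs = rng.integers(0, n, size=(n, 2))    # each element listens to two others
    tables = rng.integers(0, 2, size=(n, 4))    # a random two-input truth table each
    state = rng.integers(0, 2, size=n)

    seen = {}
    for t in range(1000):
        key = state.tobytes()
        if key in seen:                         # the network has settled into a cycle
            print(f"cycle of length {t - seen[key]} after {seen[key]} steps")
            break
        seen[key] = t
        idx = 2 * state[inputs[:, 0]] + state[inputs[:, 1]]
        state = tables[np.arange(n), idx]

  With only thirty-two elements the network falls quickly into a short cycle; the point of Good’s and von Neumann’s remarks is what might happen when the matrix holds thousands or millions of such elements.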

  An early attempt at invoking large-scale self-organizing processes within a single computer was a project aptly christened Leviathan, developed in the late 1950s at the System Development Corporation, a spin-off division of RAND. Leviathan proposed to capture a behavioral model of a semiautomatic air-defense system that had grown too complicated for any predetermined model to comprehend. Leviathan (the single-computer model) and SAGE (its multiple-computer subject) jointly represented the transition from computer systems designed and organized by engineers to computer systems that were beginning to organize themselves.

  The System Development Corporation (SDC) originated in the early 1950s with a series of RAND Corporation studies for the U.S. Air Force on the behavior of complex human-machine systems under stress. In 1951, behind a billiard parlor at Fourth and Broadway in downtown Santa Monica, California, RAND constructed a replica of the Tacoma Air Defense Direction Center, where the behavior of real humans and real machines was studied under simulated enemy attack. The first series of fifty-four experiments took place between February and May 1952, using twenty-eight human subjects provided with eight simulated radar screens under punched-card control. In studying the behavior of their subjects—students hired by the hour from UCLA—it was discovered that participation in the simulations so improved performance that the air force asked RAND to train real air-defense crews instead. “The organization learned its way right out of the experiment,” the investigators reported in a summary of the tests. “Within a couple days the college students were maintaining highly effective defense of their area while playing word games and doing homework on the side.”14 The study led to the establishment of a permanent System Research Laboratory within RAND’s System Development Division and to a training system duplicated at 150 operational air-defense sites.

  RAND’s copy of the IAS computer became operational in 1952, followed by delivery of an IBM 701, the first system off the assembly line, in August 1953. The computer systems used to stage RAND’s simulations soon became more advanced than the control systems used in actual air defense. “We found that to study an organization in the laboratory we, as experimenters, had to become one,” wrote Allen Newell, who went on to become one of the leaders of artificial intelligence research.15 Repeating the process by which human intelligence may have first evolved, an observational model developed into a system of control. RAND’s contracts were extended to include designing as well as simulating the complex information-processing systems needed for air defense. “The simplest way of summarizing the incidents, impressions, and data of the air-defense experiments,” reported Newell, “is to say that the four organizations behaved like organisms.”16 RAND’s studies were among the first to examine how large information-processing systems not only facilitate the use of computers by human beings but facilitate the use of human beings by machines. As John von Neumann pointed out, “the best we can do is to divide all processes into those things which can be better done by machines and those which can be better done by humans and then invent methods by which to pursue the two.”17

  By the time it became an independent, nonprofit corporation at the end of 1956, the System Development Division employed one thousand people and had grown to twice the size of the rest of RAND. When the air force contracted jointly with the Lincoln Laboratory at MIT and the RAND Corporation to develop the continental air-defense system known as SAGE (Semi-Automatic Ground Environment), the job of programming the system was delegated to SDC. Bell Telephone Laboratories and IBM were offered the contract but both declined. “We couldn’t imagine where we could absorb 2,000 programmers at IBM when this job would be over someday,” said Robert Crago, “which shows how well we were understanding the future at that time.”18

  SAGE integrated hundreds of channels of information related to air defense, coordinating the tracking and interception of military targets as well as peripheral details, such as some thirty thousand scheduled airline flight paths augmented by all the unscheduled flight plans on file at any given time. Each of some two dozen air-defense sector command centers, housed in windowless buildings protected by six feet of blast-resistant concrete, was based around an AN/FSQ-7 computer (Army-Navy Fixed Special eQuipment) built by IBM. Two identical processors shared 58,000 vacuum tubes, 170,000 diodes, and 3,000 miles of wiring as one ran the active system and the other served as a “warm” backup, running diagnostic routines while standing by to be switched over to full control at any time. These systems weighed more than 250 tons. The computer occupied 20,000 square feet of floor space; input and output equipment consumed another 22,000 square feet. A 3,000-kilowatt power supply and 500 tons of air-conditioning equipment kept the laws of thermodynamics at bay. One hundred air force officers and personnel were on duty at each command center; the system was semiautomatic in that SAGE supplied predigested intelligence to its human operators, who then made the final decisions as to how the available defenses should respond.

  The use of one computer by up to one hundred simultaneous operators ushered in the era of time-share computing and opened the door to the age of data networking that followed. The switch from batch processing, when you submit a stack of cards and come back hours or days later to get the results, to real-time computing was sparked by SAGE’s demand for instantaneous results. The SAGE computers, descended from the Whirlwind prototype constructed at MIT, also led the switch to magnetic-core memory, storing 8,192 33-bit words, increased to 69,632 words in 1957 as the software grew more complex. The memory units were glass-faced obelisks housing a stack of thirty-six ferrite-core memory planes. It took forty hours of painstaking needlework to thread a single plane; each of its 4,096 ferrite beads was interlaced by fine wires in four directions, the intersections weaving a tapestry of cross-referenced electromagnetic bits. The read-write cycle was six microseconds, shuttling data back and forth nearly 200,000 times from one second to the next. High-speed magnetic drums and 728 individual tape drives supplied peripheral programs and data, and traffic with the network of radar-tracking stations pioneered high-speed (1,300 bits per second) data transmission over the voice telephone system using lines leased from AT&T.
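  The coincident-current principle behind those planes is simple enough to sketch, though the sketch below is only an illustration of the idea and not IBM’s circuit design (real planes also carried sense and inhibit wires, omitted here): half of the switching current is driven down one X wire and half down one Y wire, so that only the single core at their intersection crosses its threshold, and a 64-by-64 plane of 4,096 bits can be addressed with 128 drive wires.

    # Hedged illustration of coincident-current core selection (not IBM's design).
    import numpy as np

    plane = np.zeros((64, 64), dtype=int)       # one ferrite-core plane, all zeros

    def write_bit(plane, x, y, value):
        # Each core sees the sum of the currents on its X and Y wires; only the
        # core receiving two half-currents crosses the switching threshold.
        x_current = np.zeros(64)
        y_current = np.zeros(64)
        x_current[x] = 0.5
        y_current[y] = 0.5
        drive = x_current[:, None] + y_current[None, :]
        plane[drive >= 1.0] = value
        return plane

    plane = write_bit(plane, 12, 40, 1)
    print(plane.sum(), "core switched")         # exactly one core flips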

  The prototype Cape Cod station was operational in 1953; twenty-three sectors were deployed by 1958; the last six SAGE sector control centers were shut down in January 1984, having outlived all other computers of their time. SAGE was designed to defend against land-based bombers; the age of ballistic missiles left its command centers vulnerable to attack. As the prototype of a real-time global information-processing system, however, SAGE left instructions that are still going strong.

  The SAGE operating system incorporated one million lines of code, by far the largest software project of its time. Each control sector was configured differently, yet all sectors had to interact smoothly under stress. To this day no one knows how the system would have behaved in response to a real attack. Even the principal architects of the operating system spoke of it as having been evolved rather than designed. When human beings were added, the behavior of the system became even less predictable, so it was tested regularly with simulated intrusions and mock attacks. The dual-processor configuration allowed these exercises to be conducted using one-half of the system while the other half remained on-line, like the right and left hemispheres of a brain. Exhibiting a quality that some theorists suggest distinguishes organisms from machines, the SAGE system was so complicated that there appeared to be no way to model its behavior more concisely than by putting the system through its paces and observing the results.

  Despite the experience with SAGE, in the RAND tradition of giving every hypothesis a chance, the Leviathan project was launched. Leviathan was an attempt to let a model design itself. “Leviathan is the name we give to a highly adaptable model of large behavioral systems, designed to be placed in operation on a digital computer,” wrote Beatrice and Sydney Rome in a report first published on 29 January 1959.19 The air force’s problem was that its systems were structured and analyzed hierarchically, but when operated under pressure unforeseen relationships caused things to happen between different levels, with unanticipated, and perhaps catastrophic, results. “The problem of system levels . . . pervades the investigation of any subject matter that incorporates symbols,” wrote the Romes, philosophers by profession and biographers of the seventeenth-century philosopher Nicolas Malebranche. “An example of the latter is any work of art, but the example we shall offer here is drawn from simulating air defense.”20 An oblique reference to “other kinds of systems of command and authority that produce a product or that render a constructive or destructive service” was as close as the Romes came to acknowledging the policy of assured retaliation underlying the air force’s interest in decision making by human-machine systems under stress.21 References to “special Leviathan pushbutton intervention equipment” sound sinister, but only referred to circuits installed at SDC’s System Simulation Research Laboratory to allow human operators to input decisions used in training the Leviathan program during tests.

  To construct their model, the Romes proposed using a large digital computer as a self-organizing logical network rather than as a data processor proceeding through a sequence of logical steps. “Let us suppose that we decide to use the computer in a more direct, a non-computational way. The binary states of the cores are theoretically subject to change thousands of times each second. If we can somehow induce some percentage of these to enter into processes of dynamic interaction with one another under controllable conditions, then direct simulation may be possible. A million cells of storage subject to rapid individual change may provide a mesh of sufficiently fine grain.”22
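  What the Romes were describing amounts to treating the machine’s storage as an interacting medium rather than as a sequence of operands. The sketch below is my own illustration of that idea, not anything in the Leviathan reports; the grid size and the majority-rule update are arbitrary assumptions. A large grid of two-state cells is updated, sweep after sweep, from its neighbors’ states, so that organization (here, coarsening domains of agreement) emerges from local interaction rather than from a predetermined program.

    # Hedged sketch of "direct simulation" in a mesh of two-state cells
    # (my illustration; the Leviathan reports contain no such code).
    import numpy as np

    rng = np.random.default_rng(3)
    grid = rng.integers(0, 2, size=(256, 256))   # ~65,000 interacting binary cells

    def sweep(g):
        # Sum of the four orthogonal neighbors, with wraparound at the edges.
        neighbors = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                     np.roll(g, 1, 1) + np.roll(g, -1, 1))
        # Majority rule: adopt the state most common among neighbors,
        # keeping the current state on a tie.
        return np.where(neighbors > 2, 1, np.where(neighbors < 2, 0, g))

    for _ in range(50):
        grid = sweep(grid)
    print("fraction of cells set after 50 sweeps:", round(float(grid.mean()), 3))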

 
