
The Cybernetic Brain


by Andrew Pickering


  What can we take from this discussion? First, evidently, it is a nice example of the consequentiality of Ashby's work beyond the immediate community of cyberneticians. Second, it is another example of the undisciplined quality of the transmission of cybernetics through semipopular books like Design for a Brain. I know of no evidence of contact between Alexander and Ashby or other cyberneticians; it is reasonable to assume that Alexander simply read Design and saw what he could do with it, in much the same way as both Rodney Brooks and William Burroughs read Grey Walter. Along with this, we have another illustration of the protean quality of cybernetics. Ashby thought he was writing about the brain, but Alexander immediately extended Ashby's discussion of connectedness to a continuing program in architecture and design, a field that Ashby never systematically thought about. We can thus take both Alexander's distinctive approach to architectural design and the actual buildings he has designed as further exemplars of the cybernetic ontology in action.62 Finally, we can note that Alexander's architecture is by no means uncontroversial. Alexander's "Linz Café" (1983) is an extended account of one of his projects (fig. 4.13) that includes the text of a debate at Harvard with Peter Eisenman. Alexander explains how the cafe was constructed around his "patterns" (58–59) but also emphasizes that the design elements needed to be individually "tuned" by building mock-ups and seeing what they felt like. The goal was to construct spaces that were truly "comfortable" for human beings. This tuning harks back to and exemplifies Alexander's earlier discussion of how problems can be and are solved on a piecemeal basis in traditional architecture, and the last section of his article discusses resonances between the Linz Café and historical buildings (59). In the debate Eisenman tries to problematize Alexander's comfort principle and suggests a different, less harmonious, theoretically inspired idea of architecture. Egged on by a sympathetic audience, Alexander remarks that "people who believe as you do are really fucking up the whole profession of architecture right now by propagating these beliefs" (67)—another marker of the fact that ontology makes a difference. We can return to this theme in a different and less "comfortable" guise when we come to Gordon Pask's version of adaptive architecture.

  Figure 4.13. The Linz Café. Source: Alexander 1983, 48.

  _ _ _ _ _

  IT IS A FUNDAMENTAL QUESTION WHETHER METABOLIC STABILITY AND EPIGENESIS REQUIRE THE GENETIC REGULATORY CIRCUITS TO BE PRECISELY CONSTRUCTED. HAS A FORTUNATE EVOLUTIONARY HISTORY SELECTED ONLY NETS OF HIGHLY ORDERED CIRCUITS WHICH ALONE CAN INSURE METABOLIC STABILITY; OR ARE STABILITY AND EPIGENESIS, EVEN IN NETS OF RANDOMLY INTERCONNECTED REGULATORY CIRCUITS, TO BE EXPECTED AS THE PROBABLE CONSEQUENCE OF AS YET UNKNOWN MATHEMATICAL LAWS? ARE LIVING THINGS MORE AKIN TO PRECISELY PROGRAMMED AUTOMATA SELECTED BY EVOLUTION, OR TO RANDOMLY ASSEMBLED AUTOMATA WHOSE CHARACTERISTIC BEHAVIOR REFLECTS THEIR UNORDERLY CONSTRUCTION, NO MATTER HOW EVOLUTION SELECTED THE SURVIVING FORMS?

  STUART KAUFFMAN, "METABOLIC STABILITY AND EPIGENESIS IN RANDOMLY CONSTRUCTED GENETIC NETS" (1969B, 438)

  Now for Stuart Kauffman, one of the founders of contemporary theoretical biology, perhaps best known in the wider world for two books on a complex systems approach to the topics of biology and evolution, At Home in the Universe (1995) and Investigations (2002). I mentioned his important and explicitly cybernetic notion of "explanation by articulation of parts" in chapter 2, but now we can look at his biological research.63

  The pattern for Kauffman's subsequent work was set in a group of his earliest scientific publications in the late 1960s and early 1970s, which concerned just the same problem that Alexander inherited from Ashby, the question of a large array of interacting elements achieving equilibrium. In Design for a Brain, Ashby considered two limits—situations in which interconnections between the elements were either minimal or maximal—and argued that the time to equilibrium would be small in one case and longer than the age of the universe in the other. The question that then arose was what happened in between these limits. Ashby had originally been thinking about an array of interacting homeostats, but one can simplify the situation by considering an array of binary elements that switch each other on and off according to some rule—as did Alexander with his imaginary lightbulbs. The important point to stress, however, is that even such simple models are impossible to solve analytically. One cannot calculate in advance how they will behave; one simply has to run through a series of time steps, updating the binary variables at each step according to the chosen transformation rules, and see what the system will in fact do. This is the cybernetic discovery of complexity transcribed from the field of mechanisms to that of mathematical formalisms. Idealized binary arrays can remain Black Boxes as far as their aggregate behavior is concerned, even when the atomic rules that give rise to their behavior are known.
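
  To make the brute-force procedure concrete, here is a minimal sketch in Python—an illustrative reconstruction, not Walker's or Kauffman's actual code; the array size, wiring, and update rule are arbitrary choices. One simply steps the binary array until some state recurs, at which point the system has entered a fixed point or a limit cycle:

```python
import random

random.seed(0)
N = 20  # a small array; Walker's nets had one hundred elements

# Wire each element to two randomly chosen elements (an arbitrary choice here).
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]

def step(state):
    # Synchronous update: the new value of each element is the AND of its
    # two inputs -- one simple transformation rule among many possible ones.
    return tuple(state[i] & state[j] for i, j in inputs)

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}  # state -> time step at which it was first visited
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1

print(f"revisited a state after {t} steps; "
      f"cycle length = {t - seen[state]}")
```

  Nothing in the rules predicts the answer in advance; the record of visited states is the only oracle available.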

  The only way to proceed in such a situation (apart from Alexander's trick of simply assuming that the array breaks up into almost disconnected pieces) is brute force. Hand calculation for a network of any size would be immensely tedious and time-consuming, but at the University of Illinois, Crayton Walker's 1965 PhD dissertation in psychology reported on his exploration of the time evolution of one-hundred-element binary arrays under a variety of simple transformation rules using the university's IBM 7094–1401 computer. Walker and Ashby (1966) wrote these findings up for publication, discussing how many steps different rule systems took to come to equilibrium, whether the equilibrium state was a fixed point or a cycle, how big the limit cycles were, and so on.64 But it was Kauffman, rather than Walker and Ashby, who obtained the most important early results in this area, and at the same time Kauffman switched the focus from the brain to another very complex biological system, the cell.

  Beginning in 1967, Kauffman published a series of papers grounded in computer simulations of randomly connected networks of binary elements, which he took to model the action of idealized genes, switching one another on and off (like lightbulbs, which indeed feature in At Home in the Universe). We could call what he had found a discovery of simplicity within complexity. A network of N binary elements has 2^N possible states, so that a one-thousand-element network can be in 2^1,000 distinct states, which is about 10^300—another one of those hyperastronomical numbers. But Kauffman established two fundamental findings, one concerning the inner, endogenous, dynamics of such nets, the other concerning exogenous perturbations.65
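
  The arithmetic behind these figures is quick to check: since $\log_{10} 2 \approx 0.301$,

  $$2^{1000} = 10^{1000 \log_{10} 2} \approx 10^{301},$$

  of the order of the 10^300 just cited; for a million-element net the state space swells to $2^{1,000,000} \approx 10^{301,030}$.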

  On the first, Kauffman's simulations suggested that if each gene has exactly two inputs from other genes, then a randomly assembled network of one thousand genes would typically cycle among just twelve states—an astonishingly small number compared with 10^300 (Kauffman 1969b, 444). Furthermore, the lengths of these cycles—the number of states a network would pass through before returning to a state it had visited before—were surprisingly short. He estimated, for example, that a network having a million elements would "possess behavior cycles of about one thousand states in length—an extreme localization of behavior among 2^1,000,000 possible states" (446). And beyond that, Kauffman's computer simulations revealed that the number of distinct cycles exhibited by any net was "as surprisingly small as the cycles are short" (448). He estimated that a net of one thousand elements, for example, would possess around just sixteen distinct cycles.
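
  Kauffman's experiment is straightforward to reconstruct on a modern machine. The sketch below is an illustrative reconstruction, not Kauffman's own program, with the net size and the number of probe states scaled down: each of N binary "genes" reads exactly two randomly chosen genes through a random Boolean function, and the run tallies the distinct cycles reached from a sample of random initial states:

```python
import random

random.seed(1)
N = 100          # Kauffman simulated nets of hundreds to thousands of genes
TRIALS = 50      # random initial states to probe with (a sample, not a census)

# Each gene gets two random input genes and a random Boolean function,
# represented as a 4-entry lookup table over the inputs' joint state.
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
tables = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(N)]

def step(state):
    return tuple(tables[g][2 * state[i] + state[j]]
                 for g, (i, j) in enumerate(inputs))

def find_cycle(state):
    """Run until a state recurs; return the cycle in a canonical form."""
    seen = {}
    while state not in seen:
        seen[state] = True
        state = step(state)
    # Collect the states on the cycle itself (from the recurrent state on).
    cycle, s = [], state
    while True:
        cycle.append(s)
        s = step(s)
        if s == state:
            break
    # Canonical form: rotate so the lexicographically smallest state leads,
    # so two runs landing on the same cycle at different phases compare equal.
    k = cycle.index(min(cycle))
    return tuple(cycle[k:] + cycle[:k])

attractors = set()
for _ in range(TRIALS):
    init = tuple(random.randint(0, 1) for _ in range(N))
    attractors.add(find_cycle(init))

print("distinct cycles found:", len(attractors))
print("cycle lengths:", sorted(len(c) for c in attractors))
```

  Running this exhibits Kauffman's surprise directly: a handful of short cycles, against the 2^100 states the net could in principle visit.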

  On the second, Kauffman had investigated what happened to established cycles when he introduced "noise" into his simulations—flipping single elements from one state to another during a cycle. The cycles proved largely resistant to such exogenous interference, returning to their original trajectories around 90% of the time. Sometimes, however, flipping a single element would jog the system from one cyclic pattern to one of a few others (452).
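
  The noise experiment can be reconstructed in the same spirit—again an illustrative sketch rather than Kauffman's code, and the exact percentage will vary from net to net around his reported figure. Let the net fall onto a cycle, flip one randomly chosen gene, and ask whether the net finds its way back to the cycle it came from:

```python
import random

random.seed(1)
N, TRIALS = 100, 200

# Same random-net construction as in the previous sketch.
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
tables = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(N)]

def step(state):
    return tuple(tables[g][2 * state[i] + state[j]]
                 for g, (i, j) in enumerate(inputs))

def attractor_of(state):
    """Canonical form of the cycle eventually entered from `state`."""
    seen = {}
    while state not in seen:
        seen[state] = True
        state = step(state)
    cycle, s = [], state
    while True:
        cycle.append(s)
        s = step(s)
        if s == state:
            break
    k = cycle.index(min(cycle))
    return tuple(cycle[k:] + cycle[:k])

returned = 0
for _ in range(TRIALS):
    home = attractor_of(tuple(random.randint(0, 1) for _ in range(N)))
    # Perturb: flip a single randomly chosen gene in a state on the cycle.
    perturbed = list(home[0])
    perturbed[random.randrange(N)] ^= 1
    if attractor_of(tuple(perturbed)) == home:
        returned += 1

print(f"returned to the same cycle {100 * returned / TRIALS:.0f}% of the time")
```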

  What did Kauffman make of these findings? At the most straightforward level, his argument was that a randomly connected network of idealized genes could serve as the model for a set of cell types (identified with the different cycles the network displayed), that the short cycle lengths of these cells were consistent with biological time scales, that the cells exhibited the biological requirement of stability against perturbations and chemical noise, and that the occasional transformations of cell types induced by noise corresponded to the puzzling fact of cellular differentiation in embryogenesis.66 So his idealized gene networks could be held to be models of otherwise unexplained biological phenomena—and this was the sense in which his work counted as "theoretical biology." At a grander level, the fact that these networks were randomly constructed was important, as indicated in the opening quotation from Kauffman. One might imagine that the stability of cells and their pathways of differentiation are determined by a detailed "circuit diagram" of control loops between genes, a circuit diagram laid down in a tortuous evolutionary history of mutation and selection. Kauffman had shown that one does not have to think that way. He had shown that complex systems can display self-organizing properties, properties arising from within the systems themselves, the emergence of a sort of "order out of chaos" (to borrow the title of Prigogine and Stengers 1984). This was the line of thought that led him eventually to the conclusion that we are "at home in the universe"—that life is what one should expect to find in any reasonably complex world, not something to be surprised at or that requires any special explanation.67

  This is not the place to go into any more detail about Kauffman's work, but I want to comment on what we have seen from several angles. First, I want to return to the protean quality of cybernetics. Kauffman was clearly working in the same space as Ashby and Alexander—his basic problematic was much the same as theirs. But while their topic was the brain (as specified by Ashby) or architecture (as specified by Alexander), it was genes and cells and theoretical biology when specified by Kauffman.

  Second, I want to comment on Kauffman's random networks, not as models of cells, but as ontological theater more generally. I argued before that tortoises, homeostats, and DAMS can, within certain limitations, be seen as electromechanical models that summon up for us the cybernetic ontology more broadly—machines whose aggregate performance is impenetrable. As discussed, Kauffman's idealized gene networks displayed the same character, but as emerging within a formal mathematical system rather than a material one. Now I want to note that as world models Kauffman's networks can also further enrich our ontological imaginations in important ways. On the one hand, these networks were livelier than, especially, Ashby's machines. Walter sometimes referred to the homeostat as Machina sopora—the sleeping machine. Its goal was to become quiescent; it changed state only when disturbed from outside. Kauffman's nets, in contrast, had their own endogenous dynamics, continually running through their cycles whether perturbed from the outside or not. On the other hand, these nets stage for us an image of systems with which we can genuinely interact, but not in the mode of command and control. The perturbations that Kauffman injected into their cycling disturbed the systems but did not serve to direct them into any other particular cycles.

  This idea of systems that are not just performative and inscrutable but also dynamic and resistant to direction helps, I think, to give more substance to Beer's notion of "exceedingly complex systems" as the referent of cybernetics. The elaborations of cybernetics discussed in the following chapters circle around the problematic of getting along with systems fitting that general description, and Kauffman's nets can serve as an example of the kinds of things they are.68

  My last thought on Kauffman returns to the social basis of cybernetics. To emphasize the odd and improvised character of this, in the previous chapter (note 31) I listed the range of diverse academic and nonacademic affiliations of the participants at the first Namur conference. Kauffman's CV compresses the whole range and more into a single career. With BAs from Dartmouth College and Oxford University, he qualified as a doctor at the University of California, San Francisco, in 1968, while first writing up the findings discussed above as a visitor at MIT's Research Laboratory of Electronics in 1967. He was then briefly an intern at Cincinnati General Hospital before becoming an assistant professor of biophysics and theoretical biology at the University of Chicago from 1969 to 1975. Overlapping with that, he was a surgeon at the National Cancer Institute in Bethesda from 1973 to 1975, before taking a tenured position in biochemistry and biophysics at the University of Pennsylvania in 1975. He formally retired from that position in 1995, but from 1986 to 1997 his primary affiliation was as a professor at the newly established Santa Fe Institute (SFI) in New Mexico. In 1996, he was the founding general partner of Bios Group, again in Santa Fe, and in 2004 he moved to the University of Calgary as director of the Institute for Biocomplexity and Informatics and professor in the departments of Biological Sciences and Physics and Astronomy.69

  It is not unreasonable to read this pattern as a familiar search for a congenial environment for a research career that sorts ill with conventional disciplinary and professional concerns and elicits more connections across disciplines and fields than within any one of them. The sociological novelty that appears here concerns two of Kauffman's later affiliations. The Santa Fe Institute was established in 1984 to foster a research agenda devoted to "simplicity, complexity, complex systems, and particularly complex adaptive systems" and is, in effect, an attempt to provide a relatively enduring social basis for the transient interdisciplinary communities—the Macy and Namur conferences, the Ratio Club—that were "home" to Walter, Ashby, and the rest of the first generation of cyberneticians. Notably, the SFI is a freestanding institution and not, for example, part of any university. The sociologically improvised character of cybernetics reappears here, but now at the level of institutions rather than individual careers.70 And two other remarks on the SFI are relevant to our themes. One is that while the SFI serves the purpose of stabilizing a community of interdisciplinary researchers, it does not solve the problem of cultural transmission: as a private, nonprofit research institute it does not teach students and grant degrees.71 The other is that the price of institutionalization is, in this instance, a certain narrowing. The focus of research at the SFI is resolutely technical and mathematical. Ross Ashby might have been happy there, but not, I think, any of our other principals. Their work was too rich and diverse to be contained by such an agenda.

  Besides the SFI, I should comment on Kauffman's affiliation with the Bios Group (which merged with NuTech Solutions in 2003). "BiosGroup was founded by Dr. Stuart Kauffman with a mission to tackle industry's toughest problems through the application of an emerging technology, Complexity Science."72 Here we have an attempt to establish a stable social basis for the science of complexity on a business rather than a scholarly model—a pattern we have glimpsed before (with Rodney Brooks's business connections) and one that will reappear immediately below. And once more we are confronted with the protean quality of cybernetics, with Kauffman's theoretical biology morphing into the world of capital.

  _ _ _ _ _

  WE HAVE SUCCEEDED IN REDUCING ALL OF ORDINARY PHYSICAL BEHAVIOR TO A SIMPLE, CORRECT THEORY OF EVERYTHING ONLY TO DISCOVER THAT IT HAS REVEALED EXACTLY NOTHING ABOUT MANY THINGS OF GREAT IMPORTANCE.

  R. B. LAUGHLIN AND DAVID PINES, "THE THEORY OF EVERYTHING" (2000, 28)

  IT'S INTERESTING WHAT THE PRINCIPLE OF COMPUTATIONAL EQUIVALENCE ENDS UP SAYING. IT KIND OF ENCAPSULATES BOTH THE GREAT STRENGTH AND THE GREAT WEAKNESS OF SCIENCE. BECAUSE ON THE ONE HAND IT SAYS THAT ALL THE WONDERS OF THE UNIVERSE CAN BE CAPTURED BY SIMPLE RULES. YET IT ALSO SAYS THAT THERE'S ULTIMATELY NO WAY TO KNOW THE CONSEQUENCES OF THESE RULES—EXCEPT IN EFFECT JUST TO WATCH AND SEE HOW THEY UNFOLD.

  STEPHEN WOLFRAM, "THE GENERATION OF FORM IN A NEW KIND OF SCIENCE" (2005, 36)

  If the significance of Kauffman's work lay in his discovery of simplicity within complexity, Wolfram's achievement was to rediscover complexity within simplicity. Born in London in 1959, Stephen Wolfram was a child prodigy, like Wiener: Eton, Oxford, and a PhD from Caltech in 1979 at age twenty; he received a MacArthur "genius" award two years later. Wolfram's early work was in theoretical elementary-particle physics and cosmology, but two interests that defined his subsequent career emerged in the early 1980s: in cellular automata, on which more below, and in the development of computer software for doing mathematics. From 1983 to 1986 he held a permanent position at the Institute for Advanced Study in Princeton; from 1986 to 1988 he was professor of physics, mathematics, and computer science at the University of Illinois at Urbana-Champaign, where he founded the Center for Complex Systems Research (sixteen years after Ashby had left—"shockingly, I don't think anyone at Illinois ever mentioned Ashby to me"; email to the author, 6 April 2007). In 1987 he founded Wolfram Research, a private company that develops and markets what has proved to be a highly successful product: Mathematica software for mathematical computation. Besides running his company, Wolfram then spent the 1990s developing his work on cellular automata and related systems, in his spare time and without publishing any of it (echoes of Ashby's hobby). His silence ended in 2002 with a blaze of publicity for his massive, 1,280-page book, A New Kind of Science, published by his own company.73
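
  Though the detail belongs below, the flavor of Wolfram's objects is easy to convey. An elementary cellular automaton is a row of cells, each updated from its three-cell neighborhood by a rule that fits in a single byte; rule 30 is one of Wolfram's best-known examples. This minimal sketch (the row width and step count are arbitrary illustrative choices) does exactly what the epigraph says one must do—run the rule and watch:

```python
RULE = 30          # Wolfram's rule number: its eight bits form the update table
WIDTH, STEPS = 63, 30

# Start from a single black cell in the middle of an otherwise white row.
row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # Each cell's next value is the rule bit indexed by its three-cell
    # neighborhood, read as a binary number (with wraparound at the edges).
    row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                     + 2 * row[i]
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```

  A byte-sized rule, and yet the printed triangle of cells is famously irregular—complexity within simplicity, watched rather than deduced.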

 
