I can find nothing good to say about this aspect of Ashby's work. My own historical research has confronted me with many examples in which great scientific accomplishments were in fact bound up with shifts in goals, and without making a statistical analysis I would be willing to bet that most of the accomplishments we routinely attribute to "genius" have precisely that quality. I therefore think that while it is reasonable to regard the fixity of the homeostat's goals as possibly a good model for some biological processes and a possibly unavoidable electromechanical limitation, it would be a mistake to follow Ashby's normative insistence that fixed goals necessarily characterize epistemological practice. This is one point at which we should draw the line in looking to his cybernetics for inspiration.
Beyond that, there is the question of how cognitive goals are to be achieved. Once Ashby and Walker have insisted that the goals of knowledge production have to be fixed in advance, they can remark that "the theorems of information theory are directly applicable to problems of this kind" (Ashby and Walker 1968, 210). They thus work themselves into the heartland of Ashby's mature cybernetics, where, it turns out, the key question is that of selection.57 Just as the homeostat might be said to select the right settings of its uniselectors to achieve its goal of homeostasis, so, indeed, should all forms of human cultural production be considered likewise (210):
To illustrate, suppose that Michelangelo made one million brush strokes in painting the Sistine Chapel. Suppose also that, being highly skilled, at each brush stroke he selected one of the two best, so that where the average painter would have ranged over ten, Michelangelo would have regarded eight as inferior. At each brush stroke he would have been selecting appropriately in the intensity of one in five. Over the million brush strokes the intensity would have been one in 5^1,000,000. The intensity of Michelangelo's selection can be likened to his picking out one painting from 5^1,000,000, which is a large number of paintings (roughly 1 followed by 699,000 zeroes). Since this number is approximately the same as 2^3,320,000, the theorem says that Michelangelo must have processed at least 3,320,000 "bits" of information, in the units of information theory, to achieve the results he did. He must have done so, according to the axiom, because appropriate selections can only be achieved if enough information is received and processed to make them happen.
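The bookkeeping here is easy to check. Below is a minimal sketch in Python (my illustration; Ashby and Walker give only the prose) of the one relation the argument uses, namely that selecting one alternative out of N requires at least log2(N) bits. For completeness it evaluates the count for both the one-in-five intensity and the one-in-ten range that the quotation mentions.

```python
import math

strokes = 1_000_000  # brush strokes, per the quotation

# One-in-five selection per stroke compounds to one in 5**1_000_000.
# Number of decimal digits of that number:
print(f"digits of 5**1,000,000: {strokes * math.log10(5):,.0f}")
# -> ~698,970, i.e. roughly '1 followed by 699,000 zeroes'

# Bits needed to single out one alternative among N is log2(N):
print(f"one-in-five per stroke: {strokes * math.log2(5):,.0f} bits")   # ~2,321,928
print(f"one-in-ten per stroke:  {strokes * math.log2(10):,.0f} bits")  # ~3,321,928
```

The quoted figure of 3,320,000 bits matches the one-in-ten count; either way, it is the sheer scale of the number that carries Ashby and Walker's point.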
Ashby and Walker go on to deduce from this that Michelangelo must have worked really hard over a long period of time to process the required amount of information, and they produce a few historical quotations to back this up. They also extend the same form of analysis to Newton, Gauss, and Einstein (selecting the right scientific theories or mathematical axioms from an enormous range of possibilities), Picasso (back to painting), Johann Sebastian Bach (picking just the right notes in a musical composition), and even Adolf Hitler, who "had many extraordinary successes before 1942 and was often acclaimed a genius, especially by the Germans" (207).
What can one say about all this? There is again something profoundly wrong about the image of "selection" that runs through Ashby's epistemology and even, before that, his ontology. There is something entirely implausible in the idea of Michelangelo's picking the right painting from a preexisting set or Einstein's doing the same in science. My own studies of scientific practice have never thrown up a single instance that could be adequately described in those terms (even if there is a branch of mainstream philosophy of science that does conceive "theory choice" along those lines). What I have found instead are many instances of open-ended, trial-and-error extensions of scientific culture. Rather than selecting between existing possibilities, scientists (and artists, and everyone else, I think) continually construct new ones and see how they play out. This is also a cybernetic image of epistemology—but one that emphasizes creativity and the appearance of genuine novelty in the world (both human and nonhuman) that the homeostat cannot model. The homeostat can only offer us selection and combinatorics. I have already discussed the homeostat's virtues as ontological theater at length; here my suggestion is that we should not follow it into the details of Ashby's epistemology.58
_ _ _ _ _
I want to end this chapter by moving beyond Ashby's work, so here I should offer a summary of what has been a long discussion. What was this chapter about?
One concern was historical. Continuing the discussion of Walter's work, I have tried to show that psychiatry, understood as the overall problematic of understanding and treating mental illness, was both a surface of emergence and a surface of return for Ashby's cybernetics. In important ways, his cybernetics can be seen to have grown out of his professional concerns with mental illness, and though the development of Ashby's hobby had its own dynamics and grew in other directions, too, he was interested, at least until the late 1950s, in seeing how it might feed back into psychiatry. At the same time, we have explored some of the axes along which Ashby's cybernetics went beyond the brain and invaded other fields: from a certain style of adaptive engineering (the homeostat, DAMS) to a general analysis of machines and a theory of everything, exemplified in Ashby's discussions of autopilots, economics, chemistry, evolutionary biology, war, planning, and epistemology. Ashby even articulated a form of spirituality appropriate to his cybernetics: "I am now . . . a Time-worshipper." In this way, the chapter continues the task of mapping out the multiplicity of cybernetics.
Another concern of the chapter has been ontological. I have argued that we can see the homeostat, and especially the multihomeostat setups that Ashby worked with, as ontological theater—as a model for a more general state of affairs: a world of dynamic entities evolving in performative (rather than representational) interaction with one another. Like the tortoise, the homeostat searched its world and reacted to what it found there. Unlike the tortoise's, the homeostat's world was as lively as the machine itself, simulated in a symmetric fashion by more homeostats. This symmetry, and the vision of a lively and dynamic world that goes with it, was Ashby's great contribution to the early development of cybernetics, and we will see it further elaborated as we go on. Conversely, once we have grasped the ontological import of Ashby's cybernetics, we can also see it from the opposite angle: as ontology in action, as playing out for us and exemplifying the sorts of project in many fields that might go with an ontology of performance and unknowability.
We have also examined the sort of performative epistemology that Ashby developed in relation to his brain research, and I emphasized the gearing of knowledge into performance that defined this. Here I also ventured into critique, arguing that we need not, and should not, accept all of the ontological and epistemological visions that Ashby staged for us. Especially, I argued against his insistence on the fixity of goals and his idea that performance and representation inhabit a given space of possibilities from which selections are made.
At the level of substance, we have seen that Ashby, like Walter, aimed at a modern science of the brain—at opening up the Black Box. And we have seen that he succeeded in this: the homeostat can indeed be counted as a model of the sort of adaptive processes that might happen in the brain. But the hybridity of Ashby's cybernetics, like Walter's, is again evident. In their mode of adaptation, Ashby's electromechanical assemblages themselves had, as their necessary counterpart, an unknowable world to which they adapted performatively. As ontological theater, his brain models inescapably return us to a picture of engagement with the unknown.
Furthermore, we have seen that Ashby's cybernetics never quite achieved the form of a classically modern science. His scientific models were revealing from one angle, but opaque from another. To know how they were built did not carry with it a predictive understanding of what they would do. The only way to find out was to run them and see (finding out whether multihomeostat arrays with fixed internal settings would be stable or not, finding out what DAMS would do). This was the cybernetic discovery of complexity within a different set of projects from Walter's: the discovery that beyond some level of complexity, machines (and mathematical models) can themselves become mini–Black Boxes, which we can take as ontological icons, themselves models of the stuff from which the world is built. It was in this context that Ashby articulated a distinctively cybernetic philosophy of evolutionary design—design in medias res—very different from the blueprint attitude of modern engineering design, the stance of a detached observer who commands matter via a detour through knowledge.
Finally, the chapter thus far also explored the social basis of Ashby's cybernetics. Like Walter's, Ashby's distinctively cybernetic work was nomadic, finding a home in transitory institutions like the Ratio Club, the Macy and Namur conferences, and the Biological Computer Laboratory, where Ashby ended his career. I noted, though, that Ashby was hardly a disruptive nomad in his professional home, the mental hospital. There, like Walter, he took for granted established views of mental illness and therapy and existing social relations, even while developing novel theoretical accounts of the origins of mental illness in the biological brain and of the mechanisms of the great and desperate cures. This was a respect in which Ashby's cybernetics reinforced, rather than challenged, the status quo.
The last feature of Ashby's cybernetics that I want to stress is its seriousness. His journal records forty-four years' worth of hard, technical work, 7,189 pages of it, trying to think clearly and precisely about the brain and machines and about all the ancillary topics that this work threw up. I want to stress this now because this seriousness of cybernetics is important to bear in mind throughout this book. My other cyberneticians were also serious, and they also did an enormous amount of hard technical work, but their cybernetics was not as unremittingly serious as Ashby's. Often it is hard to doubt that they were having fun, too. I consider this undoing of the boundary between serious science and fun yet another attractive feature of cybernetics as a model for practice. But there is a danger that it is the image of Allen Ginsberg taking LSD coupled to a flicker machine by a Grey Walter–style biofeedback mechanism, or of Stafford Beer invoking the Yogic chakras or the mystical geometry of the enneagram, that might stick in the reader's mind. I simply repeat here, therefore, that what fascinates me about cybernetics is that its projects could run the distance from the intensely technical to the far out. Putting this somewhat more strongly, my argument would have to be that the technical development of cybernetics encourages us to reflect that its more outré aspects were perhaps not as far out as we might think. The nonmodern is bound to look more or less strange.
A New Kind of Science: Alexander, Kauffman, and Wolfram
In the previous chapter, I explored some of the lines of work that grew out of Grey Walter's cybernetics, from robotics to the Beats and biofeedback, and I want to do something similar here, looking briefly at other work up to the present that resonates with Ashby's. My examples are taken from the work of Christopher Alexander, Stuart Kauffman, and Stephen Wolfram. One concern is again with the protean quality of cybernetics: here we can follow the development of distinctively Ashby-ite approaches into the fields of architecture, theoretical biology, mathematics, and beyond. The other concern is to explore further developments in the Ashby-ite problematic of complexity.
The three examples carry us progressively further away from real historical connections to Ashby, but, as I said in the opening chapters, it is the overall cybernetic stance in the world that I am trying to get clear on here, rather than lines of historical filiation.
_ _ _ _ _
IN ALEXANDER'S VIEW, MODERNITY IS A SORT OF TEMPORARY ABERRATION.
HILDE HEYNEN, ARCHITECTURE AND MODERNITY (1999, 20)
Christopher Alexander was born in Vienna in 1936 but grew up in England, graduated from Cambridge having studied mathematics and architecture, and then went to the other Cambridge, where he did a PhD in architecture at Harvard. In 1963 he became a professor of architecture at the University of California, Berkeley, retiring as an emeritus professor in 1998. British readers will be impressed, one way or the other, by the fact that from 1990 to 1995 he was a trustee of Prince Charles's Institute of Architecture. Alexander is best known for his later notion of "pattern languages," but I want to focus here on his first book, Notes on the Synthesis of Form (1964), the published version of his prize-winning PhD dissertation.59
The book takes us back to questions of design and is a critique of contemporary design methods, in general but especially in architecture. At its heart are two ideal types of design: "unselfconscious" methods (primitive, traditional, simple) and "selfconscious" ones (contemporary, professional, modern), and Alexander draws explicitly on Design for a Brain (the second edition, of 1960) to make this contrast.60 The key concept that he takes from Ashby is precisely the notion of adaptation, and his argument is that unselfconscious buildings, exemplified by the Mousgoum hut built by African tribes in French Cameroon, are well-adapted buildings in several senses: in the relation of their internal parts to one another, to their material environment, and to the social being of their inhabitants (Alexander 1964, 30). The claim is that contemporary Western buildings, in contrast, do not possess these features, and the distinction lies for Alexander in the way that architecture responds to problems and misfits arising in construction and use. His idea is that in traditional design such misfits are localized, finite problems that are readily fixed in a piecemeal fashion, while in the field of selfconscious design, attempts to fix misfits ramify endlessly: "If there is not enough light in a house, for instance, and more windows are added to correct this failure, the change may improve the light but allow too little privacy; another change for more light makes the windows bigger, perhaps, but thereby makes the house more likely to collapse" (1964, 42).
The details here are not important, but I want to note the distinctly Ashby-ite way in which Alexander frames the problem in order to set up his own solution to it, a solution which is arguably at the heart of Alexander's subsequent career. As discussed earlier, in a key passage of Design for a Brain Ashby gave estimates of the time for multihomeostat systems to achieve equilibrium, ranging from short to impossibly long, depending upon the density of interconnections between the homeostats. In the second edition of Design, he illustrated these estimates by thinking about a set of rotors, each with two positions labeled A and B, and asking how long it would take various spinning strategies to achieve a distribution of, say, all As showing and no Bs (Ashby 1960, 151). In Notes on the Synthesis of Form, Alexander simply translates this illustration into his own terms, with ample acknowledgment to Ashby but with an interesting twist.
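Ashby's rotor illustration is easy to reproduce in simulation. The following sketch is mine, not Ashby's (the function name and the one-spin-per-second timescale are assumptions): it estimates the waiting time for the strategy of holding any rotor that comes up A and respinning the rest, and contrasts it with the all-or-nothing strategy of spinning every rotor together until all show A at once.

```python
import random

def hold_the_as(n_rotors: int) -> int:
    """Spin all rotors; leave any showing A alone; respin the rest.
    Returns the number of one-second rounds until every rotor shows A."""
    shows_a = [False] * n_rotors
    rounds = 0
    while not all(shows_a):
        # Each rotor still showing B comes up A with probability 1/2.
        shows_a = [s or random.random() < 0.5 for s in shows_a]
        rounds += 1
    return rounds

runs = [hold_the_as(100) for _ in range(1_000)]
print(f"hold-the-As: about {sum(runs) / len(runs):.0f} seconds on average")  # ~8

# Spinning all 100 rotors together and accepting only all-As needs about
# 2**100 attempts -- at one spin per second, roughly 4e22 years:
print(f"all-or-nothing: about {2**100 / (3600 * 24 * 365):.0e} years")
```

The gap between a few seconds and some 10^22 years is exactly the gap between Ashby's short and impossibly long equilibration times.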
Alexander invites the reader to consider an array of one hundred lightbulbs that can be either on, standing for a misfit in the design process, or off, for no misfit. This array evolves in time steps according to certain rules. Any light that is on has a 50-50 chance of going off at the next step. Any light that is off has a 50-50 chance of coming back on if at least one light to which it is connected is on, but no chance if the connected lights are all off. And then one can see how the argument goes. The destiny of any such system is eventually to become dark: once all the lights are off—all the misfits have been dealt with—none of them can ever, according to the rules, come back on again. So, following Ashby exactly, Alexander remarks, "The only question that remains is, how long will it take for this to happen? It is not hard to see that apart from chance this depends only on the pattern of interconnection between the lights" (1964, 40).61
Alexander then follows Ashby again in providing three estimates for the time to darkness. The first is the situation of independent adaptation. If the lights have no meaningful connections to one another, then this time is basically the time required for any single light to go dark: 2 seconds, if each time step is 1 second. At the other extreme, if each light is connected to all the others, then the only way in which the lights that remain on can be prevented from reexciting the lights that have gone off is by all of the lights happening to go off in the same time step, which one can estimate will take of the order of 2^100 seconds, or 10^22 years—one of those hyperastronomical times that were crucial to the development of Ashby's project. Alexander then considers a third possibility which differs in an important way from Ashby's third possibility. In Design for a Brain, Ashby gets his third estimate by thinking about the situation in which any rotor that comes up A is left alone and the other rotors are spun again, and so on until there are no Bs left. Alexander, in contrast, considers the situation in which the one hundred lights fall into subsystems of ten lights each. These subsystems are assumed to be largely independent of one another but densely connected internally. In this case, the time to darkness of the whole system will be of the order of the time for any one subsystem to go dark, namely 2^10 seconds, or about a quarter of an hour—quite a reasonable number.
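Alexander's lightbulb model is equally simple to run. The sketch below is a reconstruction, with assumed names and with the wiring of his third case (ten subsystems of ten lights each, fully connected inside and disconnected from one another); it implements the two update rules described above and reports the time to darkness.

```python
import random

def time_to_darkness(n_lights: int, neighbors: list[list[int]]) -> int:
    """A lit bulb goes dark with probability 1/2 each second; a dark bulb
    relights with probability 1/2 if any neighbor is lit, and never relights
    once its whole neighborhood is dark. Returns seconds until all are dark."""
    lit = [True] * n_lights
    seconds = 0
    while any(lit):
        lit = [
            random.random() < 0.5 if lit[i]  # lit: 50-50 chance of staying on
            else any(lit[j] for j in neighbors[i]) and random.random() < 0.5
            for i in range(n_lights)
        ]
        seconds += 1
    return seconds

# Ten blocks of ten lights: densely connected inside, independent outside.
blocks = [list(range(10 * b, 10 * b + 10)) for b in range(10)]
neighbors = [[j for j in blocks[i // 10] if j != i] for i in range(100)]

runs = [time_to_darkness(100, neighbors) for _ in range(10)]
print(f"time to darkness: about {sum(runs) / len(runs):,.0f} seconds")
```

On this wiring each ten-light subsystem darkens in about 2^10 seconds and the whole run finishes in a few thousand; wiring all hundred lights to one another instead would push the expected time toward 2^100 seconds, which is why the modular case is the only one worth simulating.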
We recognize this line of thought from Design, but the advantage of putting it this way is that it sets up Alexander's own solution to the problem of design. Our contemporary problems in architecture stem from the fact that the variables we tinker with are not sufficiently independent of one another, so that tinkering with any one of them sets up problems elsewhere, like the lit lightbulbs turning on the others. And what we should do, therefore, is to "diagonalize" (my word) the variables—we should find some new design variables such that design problems only bear upon subsets of them that are loosely coupled to others, like the subsystems of ten lights in the example. That way, we can get to grips with our problems in a finite time and our buildings will reach an adapted state: just as in unselfconscious buildings, the internal components will fit together in all sorts of ways, and whole buildings will mesh with their environments and inhabitants. And this is indeed the path that Alexander follows in the later chapters of Notes on the Synthesis of Form, where he proposes empirical methods and mathematical techniques for finding appropriate sets of design variables. One can also, though I will not go into this, see this reasoning as the key to his later work on pattern languages: the enduring patterns that Alexander came to focus on there refer to recurring design problems and solutions that can be considered in relative isolation from others and thus suggest a realistically piecemeal approach to designing adapted buildings, neighborhoods, cities, conurbations, or whatever (Alexander et al. 1977).