Most of Ashby's cybernetic career thus displayed the usual social as well as ontological mismatch with established institutions, finding its home in improvised social relations and temporary associations lacking the usual means of reproducing themselves. In this respect, of course, his time at the BCL is anomalous, an apparent counterinstance to the correlation of the ontological and the social, but this instance is, in fact, deceptive. The BCL was itself an anomalous and marginal institution, only temporarily lodged within the academic body. It was brought into existence in the late 1950s by the energies of von Foerster, a charming and energetic Austrian postwar emigré, with powerful friends and sponsors, especially Warren McCulloch, and ready access to the seemingly inexhaustible research funding available from U.S. military agencies in the decades following World War II. When such funding became progressively harder to find as the sixties went on, the BCL contracted, and it closed down when von Foerster retired in 1975. A few years later its existence had been all but forgotten, even at the University of Illinois. The closure of the BCL—rather than, say, its incorporation within the Electrical Engineering Department—once again illustrates the social mismatch of cybernetics with existing academic structures.23
Design for a Brain
We can return to the technicalities of Ashby's cybernetics. The homeostat was the centerpiece of his first book, Design for a Brain, which was published in 1952 (and, much revised, in a second edition, in 1960). I want to discuss some of the principal features of the book, as a way both to clarify the substance of Ashby's work in this period and to point the way to subsequent developments.
First, we should note that Ashby had developed an entire mathematical apparatus for the analysis of complex systems, and, as he put it, "the thesis [of the book] is stated twice: at first in plain words and then in mathematical form" (1952, vi). The mathematics is, in fact, relegated to a forty-eight-page appendix at the end of the book, and, following Ashby's lead, I, too, postpone discussion of it to a later section. The remainder of the book, however, is not just "plain words." The text is accompanied by a distinctive repertoire of diagrams aimed to assist Ashby and the reader in thinking about the behavior of complex systems. Let me discuss just one diagram to convey something of the flavor of Ashby's approach.
In figure 4.5 Ashby schematizes the behavior of a system characterized by just two variables, labeled A and B. Any state of the system can thus be denoted by a "representative point," indicated by a black dot, in the A-B plane, and the arrows in the plane denote how the system will change with time after finding itself at one point or another. In the unshaded central portions of the plane, the essential variables of the system are supposed to be within their assigned limits; in the outer shaded portions, they travel beyond those limits. Thus, in panel I, Ashby imagines that the system starts with its representative point at X and travels to point Y, where the essential variables exceed their limits. At this point, the parameters of the system change discontinuously in a "step-function"—think of a band breaking in the bead-and-elastic machine of 1943, or a uniselector moving to its next position in the homeostat—and the "field" of system behavior thus itself changes discontinuously to that shown in panel II. In this new field, the state of the system is again shown as point Y, and it is then swept along the trajectory that leads to Z, followed by another reconfiguration leading to field III. Here the system has a chance of reaching equilibrium: there are trajectories within field III that swirl into a "stable state," denoted by the dot on which the arrows converge. But Ashby imagines that the system in question lies on a trajectory that again sweeps into the forbidden margin at Z. The system then transmogrifies again into field IV and at last ceases its development, since all the trajectories in that field configuration converge on the central dot in a region where the essential variables are within their limits.
Figure 4.5. Changes of field in an ultrastable system. Source: W. R. Ashby, Design for a Brain (London: Chapman & Hall, 1952), 92, fig. 8/7/1. (With kind permission from Springer Science and Business Media.)
Figure 4.5 is, then, an abstract diagram of how an ultrastable system such as a homeostat finds its way to a state of equilibrium in a process of trial and error, and I want to make two comments on it. The first is ontological. The basic conceptual elements of Ashby's cybernetics were those of the sort analyzed in this figure, and they were dynamic—systems that change in time. Any trace of stability and time independence in these basic units had to do with the specifics of the system's situation and the special circumstance of having arrived at a stable state. Ashby's world, one can say, was built from such intrinsically dynamic elements, in contrast to the modern ontology of objects carrying unvarying properties (electrons, quarks). My second comment is historical but forward looking. In Design for a Brain, one can see Ashby laboriously assembling the technical elements of what we now call complex systems theory. For those who know the jargon, I can say that Ashby already calls diagrams like those of figure 4.5 "phase-space diagrams"; the points at which the arrows converge in panels III and IV are what we now call "attractors" (including, in Ashby's diagrams, both point and cyclical attractors, but not "strange" ones); and the unshaded area within panel IV is evidently the "basin of attraction" for the central attractor. Stuart Kauffman and Stephen Wolfram, discussed at the end of this chapter, are among the leaders of present-day work on complexity.
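The trial-and-error search that Ashby diagrams in figure 4.5 is easy to sketch in code. The following is a minimal illustration of my own, not Ashby's model: a two-variable linear system whose coupling matrix is redrawn at random (a "uniselector" trip) whenever either variable crosses an essential limit, until a field is found in which the representative point stays within bounds. All names and parameter values here are illustrative assumptions.

```python
import random

LIMIT = 10.0   # essential-variable bounds (the shaded margin lies beyond)
DT = 0.1       # integration time step

def random_field():
    """A 'uniselector' trip: draw a new random 2x2 coupling matrix."""
    return [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]

def step(state, field):
    """One Euler step of the linear dynamics for variables A and B."""
    a, b = state
    da = field[0][0] * a + field[0][1] * b
    db = field[1][0] * a + field[1][1] * b
    return (a + DT * da, b + DT * db)

def run_ultrastable(seed=0, calm_needed=500, max_steps=200_000):
    """Return the number of field changes before the system settles,
    or None if it never settles within max_steps."""
    random.seed(seed)
    state, field, trips, calm = (1.0, 1.0), random_field(), 0, 0
    clamp = lambda x: max(-LIMIT, min(LIMIT, x))
    for _ in range(max_steps):
        nxt = step(state, field)
        if abs(nxt[0]) > LIMIT or abs(nxt[1]) > LIMIT:
            # essential variable out of bounds: the field changes
            # discontinuously, as in panels I -> II -> III of fig. 4.5
            state = (clamp(nxt[0]), clamp(nxt[1]))
            field = random_field()
            trips += 1
            calm = 0
        else:
            state = nxt
            calm += 1
            if calm >= calm_needed:  # within limits long enough: adapted
                return trips
    return None
```

In typical runs a stable field turns up after a handful of uniselector trips, echoing the homeostat's blind but effective search for equilibrium.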
Now for matters of substance. Following Ashby, I have so far described the possible relation of the homeostat to the brain in abstract terms, as both being adaptive systems. In Design for a Brain, however, Ashby sought to evoke more substantial connections. One approach was to point to real biological examples of putatively homeostatic adaptation. Here are a couple of the more horrible of them (Ashby 1952, 117–18):
Over thirty years ago, Marina severed the attachments of the internal and external recti muscles of a monkey's eyeball and re-attached them in crossed position so that a contraction of the external rectus would cause the eyeball to turn not outwards but inwards. When the wound had healed, he was surprised to discover that the two eyeballs still moved together, so that binocular vision was preserved.
More recently Sperry severed the nerves supplying the flexor and extensor muscles in the arm of the spider monkey, and re-joined them in crossed position. After the nerves had regenerated, the animal's arm movements were at first grossly incoordinated, but improved until an essentially normal mode of progression was re-established.
And, of course, as Ashby pointed out, the homeostat showed just this sort of adaptive behavior. The commutators, X, precisely reverse the polarities of the homeostat's currents, and a uniselector-controlled homeostat can cope with such reversals by reconfiguring itself until it returns to equilibrium. A very similar example concerns rats placed in an electrified box: after some random leaping about, they learn to put their foot on a pedal which stops the shocks (1952, 106–8). Quite clearly, the brain being modelled by the homeostat here is not the cognitive brain of AI; it is the performative brain, the Ur-referent of cybernetics: "excitations in the motor cortex [which] certainly control the rat's bodily movements" (1952, 107). In the second edition of Design for a Brain, Ashby added some less brutal examples of training animals to perform in specified ways, culminating with a discussion of training a "house-dog" not to jump on chairs (1960, 113): "Suppose then that jumping into a chair always results in the dog's sensory receptors being excessively stimulated [by physical punishment, which drives some essential variable beyond its limits]. As an ultrastable system, step-function values which lead to jumps into chairs will be followed by stimulations likely to cause them to change value. But on the occurrence of a set of step-function values leading to a remaining on the ground, excessive stimulation will not occur, and the values will remain." He then goes on to show that similar training by punishment can be demonstrated on the homeostat. He discusses a setup in which just three units were connected with inputs running 1 → 2 → 3 → 1, where the trainer, Ashby, insisted that an equilibrium should
be reached in which a small forced movement of the needle on 1 was met by the opposite movement of the needle on 2. If the system fell into an equilibrium in which the correlation between the needles 1 and 2 was the wrong way around, Ashby would punish homeostat 3 by pushing its needle to the end of its range, causing its uniselector to trip, until the right kind of equilibrium for the entire system, with an anticorrelation of needles 1 and 2, was achieved. Figure 4.6 shows readouts of needle positions from such a training session.
Figure 4.6. Training a three-homeostat system. The lines running from left to right indicate the positions of the needles on the tops of units 1, 2, and 3. The punishments administered to unit 3 are marked D1 and D2. The shifts in the uniselectors are marked as vertical blips on the bottom line, U. Note that after the second punishment a downward displacement of needle 1 evokes an upward displacement of needle 2, as desired. Source: W. R. Ashby, Design for a Brain: The Origin of Adaptive Behaviour, 2nd ed. (London: Chapman & Hall, 1960), 114, fig. 8/9/1. (With kind permission from Springer Science and Business Media.)
Ashby thus sought to establish an equation between his general analysis of ultrastable systems and brains by setting out a range of exemplary applications to the latter. Think of the response of animals to surgery, and then think about it this way. Think about training animals; then think about it this way. In these ways, Ashby tried to train his readers to make this specific analogical leap to the brain.
But something is evidently lacking in this rhetoric. One might be willing to follow Ashby some of the way, but just what are these step mechanisms that enable animals to cope with perverse surgery or training? Having warned that "we have practically no idea of where to look [for them], nor what to look for [and] in these matters we must be very careful to avoid making assumptions unwittingly, for the possibilities are very wide" (1960, 123), Ashby proceeds to sketch out some suggestions.
One is to note that "every cell contains many variables that might change in a way approximating to the step-function form. . . . Monomolecular films, protein solutions, enzyme systems, concentrations of hydrogen and other ions, oxidation-reduction potentials, adsorbed layers, and many other constituents or processes might act as step-mechanisms" (1952, 125). A second suggestion is that neurons are "amoeboid, so that their processes could make or break contact with other cells" (126). And third, Ashby reviews an idea he associates with Rafael Lorente de Nó and Warren McCulloch, that the brain contains interconnected circuits of neurons (fig. 4.7), on which he observes that "a simple circuit, if excited, would tend either to sink back to zero excitation, if the amplification factor was less than unity, or to rise to the maximal excitation if it was greater than unity." Such a circuit would thus jump discontinuously from one state to another and "its critical states would be the smallest excitation capable of raising it to full activity, and the smallest inhibition capable of stopping it" (128). Here, then, were three suggestions for the go of it—plausible biological mechanisms that might account for the brain's homeostatic adaptability.
Figure 4.7. Interconnected circuit of neurons. Source: W. R. Ashby, Design for a Brain (London: Chapman & Hall, 1952), 128, fig. 10/5/1. (With kind permission from Springer Science and Business Media.)
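The third suggestion, the reverberating neural circuit, behaves as a step-mechanism because its gain makes it all-or-nothing. A toy model (my illustration, not Ashby's formalism): excitation is re-amplified on each pass around the loop and saturates at some maximum, so an amplification factor below unity sinks back to zero while one above unity rises to full activity.

```python
def reverberating_circuit(gain, x0, x_max=1.0, passes=200):
    """Excitation after repeated passes around a closed loop of neurons.
    Each pass multiplies the excitation by `gain`, capped at `x_max`."""
    x = x0
    for _ in range(passes):
        x = min(gain * x, x_max)
    return x

# amplification factor below unity: excitation sinks back toward zero
low = reverberating_circuit(gain=0.9, x0=0.5)
# amplification factor above unity: excitation rises to the maximum
high = reverberating_circuit(gain=1.1, x0=0.05)  # -> 1.0
```

The circuit thus has only two long-run states, silent or maximally active, and jumps discontinuously between them—exactly the step-function behavior Ashby's argument requires.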
_ _ _ _ _
The homeostat appears midway through Design for a Brain. The preceding chapters prepare the way for it. Then its properties are reviewed. And then, in the book's concluding chapters, Ashby looks toward the future. "My aim," he says, with a strange kind of modesty, "is simply to copy the living brain" (1952, 130). Clearly, a single homeostat was hardly comparable in its abilities to the brain of a simple organism, never mind the human brain—it was "too larval" (Ashby 1948, 343)—and the obvious next step was to contemplate a multiplication of such units. Perhaps the brain was made up of a large number of ultrastable units, biological homeostats. And the question Ashby then asked was one of speed or efficiency: how long would it take such an assembly to come into equilibrium with its environment?
Here, some back-of-an-envelope calculations produced interesting results. Suppose that any individual unit had a probability p of finding an equilibrium state in one second. Then the time for such a unit to reach equilibrium would be of the order of 1/p. And if one had a large number of units, N of them, acting quite independently of one another, the time to equilibrium for the whole assemblage would still be 1/p. But what if the units were fully interconnected with one another, like the four units in the prototypical four-homeostat setup? Then each of the units would have to find an equilibrium state in the same trial as all the others, otherwise the nonequilibrium homeostats would keep changing state and thus upsetting the homeostats that had been fortunate enough already to reach equilibrium. In this configuration, the time to equilibrium would be of the order of (1/p)^N. Ashby also considered an intermediate case in which the units were interconnected, but in which it was possible for them to come into equilibrium sequentially: once unit 1 had found an equilibrium condition it would stay there, while 2 hunted around for the same, and so on. In this case, the time to equilibrium would be N/p.
Ashby then put some numbers in: p = 1/2; N = 1,000 units. This leads to the following estimates for T, the time for the whole system to adapt (1952, 142):
for the fully interconnected network: T1 = 2^1000 seconds;
for interconnected but sequentially adapting units, T2 = 2,000 seconds;
for the system of entirely independent units, T3 = 2 seconds.24
Two seconds or 2,000 seconds are plausible figures for biological adaptation. According to Ashby, 2^1000 seconds is 3 × 10^291 centuries, a number vastly greater than the age of the universe. This last hyperastronomical number was crucial to Ashby's subsequent thinking on the brain and how to go beyond the homeostat, and the conclusion he drew was that if the brain were composed of many ultrastable units, they had better be only sparsely connected to one another if adaptation were going to take a realistic time. At this point he began the construction of a new machine, but before we come to that, let me note again the ontological dimension of Ashby's cybernetics.
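Ashby's arithmetic is easy to check. A short calculation (working in logarithms, since 2^1000 overflows ordinary floating point):

```python
from math import log10

p, N = 0.5, 1000

T3 = 1 / p   # independent units adapt in parallel: 2 seconds
T2 = N / p   # interconnected units adapting one after another: 2,000 seconds

# fully interconnected: all N units must succeed on the same trial,
# so T1 = (1/p)**N = 2**1000 seconds -- handled via log10 to avoid overflow
log10_T1_seconds = N * log10(1 / p)

SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600
log10_T1_centuries = log10_T1_seconds - log10(SECONDS_PER_CENTURY)

exponent = int(log10_T1_centuries)                # 291
mantissa = 10 ** (log10_T1_centuries - exponent)  # about 3.4

print(T3, T2)  # 2.0 2000.0
print(f"T1 is about {mantissa:.1f} x 10^{exponent} centuries")
```

The result, roughly 3 × 10^291 centuries, confirms the figure Ashby quotes for the fully interconnected case.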
The brain that adapted fastest would be composed of fully independent units, but Ashby noted that such a brain "cannot represent a complex biological system" (1952, 144). Our brains do not have completely autonomous subsystems each set to adapt to a single feature of the world we inhabit, on the one hand; the neurons of the brain are observably very densely interconnected, on the other. The question of achieving a reasonable speed of adaptation thus resolved itself, for Ashby, into the question of whether some kind of serial adaptation was possible, and he was very clear that this depended not just on how the brain functioned but also on what the world was like. Thus, he was led to distinguish between "easy" environments "that consist of a few variables, independent of each other," and "difficult" ones "that contain many variables richly cross-linked to form a complex whole" (1952, 132). There is a sort of micro-macro correspondence at issue here. If the world were too lively—if every environmental variable one acted on had a serious impact on many others—a sparsely interconnected brain could never get to grips with it. If when I cleaned my teeth the cat turned into a dog, the rules of mathematics changed and the planets reversed their courses through the heavens, it would be impossible for me to grasp the world piecemeal; I would have to come to terms with all of it in one go, and that would get us back to the ridiculous time scale of T1.25
In contrast, of course, Ashby pointed out that not all environmental variables are strongly interconnected with one another, and thus that sequential adaptation within the brain is, in principle, a viable strategy. In a long chapter on "Serial Adaptation" he first discusses "an hour in the life of Paramecium," traveling from a body of water to its surface, where the dynamics are different (due to surface tension), from bodies of water with normal oxygen concentration to those where the oxygen level is depleted, from cold to warm, from pure water to nutrient-rich regions, occasionally bumping into stones, and so on (1952, 180–81). The idea is that each circumstance
represents a different environment to which Paramecium can adapt in turn and more or less independently. He then discusses the business of learning to drive a car, where one can try to master steering on a straight road, then the accelerator, then changing gears (in the days before automatics, at least in Britain)—though he notes that at the start these tend to be tangled up together, which is why learning to drive can be difficult (181–82). "A puppy can learn how to catch rabbits only after it has learned to run; the environment does not allow the two reactions to be learned in the opposite order. . . . Thus, the learner can proceed in the order 'Addition, long multiplication, . . .' but not in the order 'Long multiplication, addition, . . .' Our present knowledge of mathematics has in fact been reached only because the subject contains such stage-by-stage routes" (185).26 There follows a long description of the steps in training falcons to hunt (186), and so on.
So, in thinking through what the brain must be like as a mechanism, Ashby also further elaborated a vision of the world in which an alchemical correspondence held between the two terms: the microcosm (the brain) and the macrocosm (the world) mirrored and echoed one another inasmuch as both were sparsely connected systems, not "fully joined," as Ashby put it. We can follow this thread of the story below, into the fields of architecture and theoretical biology as well as Ashby's next project after the homeostat, DAMS. But I can finish this section with a further reflection.