The Cybernetic Brain


by Andrew Pickering


  We can begin with what I called the "instability of the referent" of Ashby's cybernetics. Even when his concern was directly with the brain, he very often found himself thinking and writing about something else. His 1945 publication that included the bead-and-elastic device, for example, was framed as a discussion of a "dynamic system" or "machine" defined as "a collection of parts which (a) alter in time, and (b) interact on one another in some determinate and known manner. Given its state at any one moment it is assumed we know or can calculate what its state will be an instant later." Ashby then asserted that "consideration seems to show that this is the most general possible description of a 'machine' . . . not in any way restricted to mechanical systems with Newtonian dynamics" (1945, 14). Ashby's conception of a "machine" was, then, from early on exceptionally broad, and correspondingly contentless, by no means tied to the brain. And the generality of this conception was itself underwritten by a mathematical formalism he first introduced in his original 1940 protocybernetic publication, the set of equations describing the temporal behavior of what he later called a state-determined system, namely,

  \[ \frac{dx_i}{dt} = f_i(x_1, x_2, \ldots, x_n), \qquad i = 1, 2, \ldots, n, \]

  where t stands for time, the x_i are the variables characterizing the system, and each f_i is some mathematical function of the x_i.
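
  Readers who like to see such things run may find a minimal sketch helpful (my own illustration, not Ashby's): the particular functions below describe an arbitrary damped two-variable system, chosen only to show that, given the state at one moment, the state an instant later can be calculated.

```python
# A toy "state-determined system": the state now determines, via f,
# the state an instant later. The particular f (a damped oscillator)
# is an arbitrary choice for illustration, not Ashby's own example.
def f(x1, x2):
    return x2, -x1 - 0.5 * x2            # dx1/dt and dx2/dt

def step(x1, x2, dt=0.01):
    d1, d2 = f(x1, x2)
    return x1 + dt * d1, x2 + dt * d2    # Euler step: the state an instant later

x1, x2 = 1.0, 0.0                        # initial state
for _ in range(1000):                    # knowing the state now, we can
    x1, x2 = step(x1, x2)                # calculate the whole trajectory
print(x1, x2)                            # determined entirely by the start
```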

  Since Ashby subsequently argued that almost all the systems described by science are state-determined systems, one can begin to see what I mean by the instability of the referent of his cybernetics: though he was trying to understand the brain as a machine, from the outset his concept of a machine was more or less coextensive with all of the contents of the universe. And this accounts for some of the rhetorical incongruity of Ashby's early cybernetic writings. For example, although it was published in the Journal of General Psychology, Ashby's 1945 bead-and-elastic essay contains remarkably little psychological content in comparison with its discussion of machines. It opens with the remark that "it is the purpose of this paper to suggest that [adaptive] behavior is in no way special to living things, that it is an elementary and fundamental property of all matter," it defines its topic as "all dynamic systems, whether living or dead" (13), and it closes with the assertion that "this type of adaptation (by trial and error) is therefore an essential property of matter, and no 'vital' or 'selective' hypothesis is required" (24). One wonders where the brain has gone in this story—to which Ashby's answer is that "the sole special hypothesis required is that the animal is provided with a sufficiency of breaks" (19), that is, plenty of elastic bands. "The only other point to mention at present is that the development of a nervous system will provide vastly greater opportunities both for the number of breaks available and also for complexity and variety of organization. Here I would emphasize that the difference . . . is solely one of degree and not of principle" (20).

  So we see that in parallel to his inquiries into the brain, and indeed constitutive of those inquiries, went Ashby's technical development of an entire worldview—a view of the cosmos, animate and inanimate, as built out of state-determined machines. And my general suggestion then is that, as the lines of Ashby's research specifically directed toward the brain ran out of steam in the 1950s, so the cybernetic worldview in general came to the fore. And this shift in emphasis in his research was only reinforced by the range of disparate systems that Ashby described and analyzed in enriching his intuition about the properties of state-determined machines. I have already mentioned his discussions of chicken incubators and bead-and-elastic contrivances (the latter described as a "typical and clear-cut example of a dynamic system" [Ashby 1945a, 15]). The homeostat itself was first conceived as a material incarnation of Ashby's basic set of equations; his analysis of discontinuities in autocatalytic chemical reactions, discussed above, likewise concerned a special case of those equations. In Design for a Brain Ashby outlined the capabilities of a homeostatic autopilot—even if you wire it up backward so that its initial tendency is to destabilize a plane's flight, it will adapt and learn to keep the plane level anyway. And later in the book he spelled out the moral for evolutionary biology—namely, that complex systems will tend over time to arrive at complicated and interesting equilibriums with their environment. Such equilibriums, he argued, are definitional of life, and therefore, "the development of life on earth must thus not be seen as something remarkable. On the contrary, it was inevitable" (233)—foreshadowing the sentiments of Stuart Kauffman's book At Home in the Universe (1995) four decades in advance. Ashby's single venture into the field of economics is also relevant. In 1945, the third of his early cybernetic publications was a short letter to the journal Nature, entitled "Effect of Controls on Stability" (Ashby 1945b). There he recycled his chicken-incubator argument about "stabilizing the stabilizer" as a mathematical analysis of the price controls which the new Labour government was widely expected to impose, showing that they might lead to the opposite result from that intended, namely a destabilization rather than stabilization of the British economy.48 This reminds us that, as we have just seen, in his journal he was also happy to extend his analysis of the multistable system to both social planning and warfare.

  Almost without intending it, then, in the course of his research into normal and pathological brains, Ashby spun off a version of cybernetics as a supremely general and protean science, with exemplifications that cut right across the disciplinary map—in a certain kind of mathematics, engineering, chemistry, evolutionary biology, economics, planning, and military science (if one calls it that), as well as brain science and psychiatry. And as obstacles were encountered in his specifically brain-oriented work, the brain lost its leading position on Ashby's agenda and he turned more and more toward the development of cybernetics as a freestanding general science. This was the conception that he laid out in his second book, An Introduction to Cybernetics, in 1956, and which he and his students continued to elaborate in his Illinois years.49 I am not going to go into any detail on the contents of Introduction or of the work that grew out of it. The thrust of this work was formal (in contrast to the materiality of the homeostat and DAMS), and to follow it would take us away from the concerns of this book. I will mention some specific aspects of Ashby's later work in the following sections, but here I need to say a few words specifically about An Introduction to Cybernetics, partly out of respect for its author and partly because it leads into matters discussed in later chapters.50

  An Introduction to Cybernetics presents itself as a textbook, probably the first and perhaps the last introductory textbook on cybernetics to be written. It aims to present the "basic ideas of cybernetics," up to and including "feedback, stability, regulation, ultrastability, information, coding, [and] noise" (Ashby 1956, v). Some of the strangeness of Ashby's rhetoric remains in it. Repeatedly and from the very start, he insists that he is writing for "workers in the biological sciences—physiologists, psychologists, sociologists" (1956, v), with ecologists and economists elsewhere included in the set. But just as real brains make few appearances in Design for a Brain, the appearances of real physiology and so on are notable by their infrequency in An Introduction to Cybernetics. The truly revealing definition of cybernetics that Ashby gives is on page 2: cybernetics offers "the framework on which all individual machines may be ordered, related and understood."51

  An Introduction to Cybernetics is distinguished from Design for a Brain by one major stylistic innovation, the introduction of a matrix notation for the transformation of machine states in discrete time steps (in contrast to the continuous time of the equations for a state-determined system). Ontologically, this highlights for the reader that Ashby's concern is with change in time, and, indeed, the title of the first substantive chapter, chapter 2, is "Change" (with subheadings "Transformation" and "Repeated Change"). The new notation is primarily put to work in an analysis of the regulatory capacity of machines. "Regulation" is one of the new terms that appeared in Ashby's list of the basic ideas of cybernetics above, though its meaning is obvious enough. All of the machines we have discussed thus far—thermostats, servomechanisms, the homeostat, DAMS—are regulators of various degrees of sophistication, acting to keep some variables within limits (the temperature in a room, the essential variables of the body). What Ashby adds to the general discussion of regulation in An Introduction to Cybernetics, and his claim to undying eponymous fame, is the law of requisite variety, which forms the centerpiece of the book and is known to his admirers as Ashby's law. This connects to the other novel terms in An Introduction to Cybernetics's list of basic ideas of cybernetics—information, coding, and noise—and thence to Claude Shannon's foundational work in information theory (Shannon and Weaver 1963 [1949]). One could, in fact, take this interest in "information" as definitive of Ashby's mature work. I have no wish to enter into information theory here; it is a field in its own right. But I will briefly explain the law of requisite variety.52
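
  The flavor of the discrete-time notation can be conveyed by a small sketch (the states and transitions here are invented for illustration): a machine is simply a mapping that sends each state to its successor, applied once per time step.

```python
# A machine as a transformation on a finite set of states, in the
# spirit of Ashby's chapter on "Change"; the particular states and
# transitions are invented. State "c" is an equilibrium: T(c) = c.
T = {"a": "b", "b": "c", "c": "c"}

state, trajectory = "a", ["a"]
for _ in range(5):
    state = T[state]                     # apply the transformation each step
    trajectory.append(state)
print(trajectory)                        # ['a', 'b', 'c', 'c', 'c', 'c']
```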

  Shannon was concerned with questions of efficiency in sending messages down communication channels such as telephone lines, and he defined the quantity of information transmitted in terms of a selection between the total number of possible messages. This total can be characterized as the variety of the set of messages. If the set comprised just two possible messages—say, "yes" or "no" in answer to some question—then getting an answer one way or the other would count as the transmission of one bit (in the technical sense) of information in selecting between the two options. In effect, Ashby transposed information theory from a representational idiom, having to do with messages and communication, to a performative one, having to do with machines and their configurations. On Ashby's definition, the variety of a machine was defined precisely as the number of distinguishable states that it could take on. This put Ashby in a position to make quantitative statements and even prove theorems about the regulation of one machine or system by another, and preeminent among these statements was Ashby's law, which says, very simply, that "only variety can destroy variety" (Ashby 1956, 207).
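
  The bookkeeping involved is simple enough to state in a few lines (the two-message set is Shannon's stock example; the code is merely illustrative): variety is a count of distinguishable possibilities, and measured logarithmically it becomes bits.

```python
import math

# Variety as a count of distinguishable possibilities, measured in
# bits as its base-2 logarithm. Illustrative only.
messages = ["yes", "no"]
variety = len(messages)                  # variety of the set: 2
print(math.log2(variety))                # 1.0: one bit per selection
```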

  To translate, as Ashby did in An Introduction to Cybernetics, a regulator is a blocker—it stops some environmental disturbance from having its full impact on some essential variable, say, as in the case of the homeostat. And then it stands to reason that to be an effective blocker one must have at least as much flexibility as that which is to be blocked. If the environment can take on twenty-five states, the regulator had better be able to take on at least twenty-five as well—otherwise, one of the environment's dodges and feints will get straight past the regulator and upset the essential variable. I have stated this in words; Ashby, of course, used his new machine notation as a means to a formal proof and elaboration; but thus Ashby's law.
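
  A toy demonstration may help fix the pigeonhole logic (the modular "outcome" rule below is my own invention, not Ashby's): with twenty-five environmental states, a regulator commanding only twenty-four responses must let at least one disturbance through.

```python
# Toy illustration of the law of requisite variety. The essential
# variable is safe only when outcome == 0, which requires the
# cancelling response r = -d mod N: N disturbances demand N responses.
N = 25                                   # states the environment can take on

def outcome(d, r):
    return (d + r) % N                   # 0 means "kept within limits"

def unblocked(responses):
    # Count disturbances that no available response can block.
    return sum(all(outcome(d, r) != 0 for r in responses) for d in range(N))

print(unblocked(range(N)))               # variety 25 vs. 25: 0 leaks
print(unblocked(range(N - 1)))           # variety 24 vs. 25: 1 leak
```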

  To be able to make quantitative calculations and produce formal proofs was a major step forward from the qualitative arguments of Design for a Brain, in making cybernetics more recognizably a science like the modern sciences, and it is not surprising that much of the later work of Ashby and his students and followers capitalized on this bridgehead in all sorts of ways. It put Ashby in a position, for example, to dwell repeatedly on what he called Bremermann's limit. This was a quantum-mechanical and relativistic estimate of the upper limit on the rate of information processing by matter, which sufficed to make some otherwise plausible accounts of information processing look ridiculous—they could not be implemented in a finite time even if the entire universe were harnessed just to that purpose.53 But there I am going to leave this general topic; Ashby's law will return with Stafford Beer in chapter 6.54
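
  The arithmetic behind such judgments is easy to reproduce (the "Earth-mass computer" scenario is my own invented example of an otherwise plausible computation, not one of Ashby's): Bremermann's limit comes to roughly c²/h, about 1.4 × 10^50 bits per second per kilogram of matter.

```python
# Back-of-envelope sketch of Bremermann's limit (~c**2/h bits per
# second per kilogram). The scenario is invented for illustration.
C = 299_792_458.0                        # speed of light, m/s
H = 6.626e-34                            # Planck's constant, J*s
bits_per_sec_per_kg = C**2 / H           # ~1.36e50 bits/s/kg

mass = 5.97e24                           # an Earth-mass computer, kg
states = 2**400                          # states of a 400-variable binary system
seconds = states / (bits_per_sec_per_kg * mass)
print(f"{seconds:.1e} seconds")          # ~3e45 s
print(f"{seconds / 4.35e17:.1e} ages of the universe")  # ~7e27
```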

  Cybernetics and Epistemology

  I have been exploring Ashby's cybernetics as ontology, because that is where his real originality and certainly his importance for me lies. He showed how a nonmodern ontology could be brought down to earth as engineering which was also brain science, with ramifications extending in endless directions. That is what I wanted to focus on. But Ashby did epistemology, too. If the Ur-referent of his cybernetics was preconscious, precognitive adaptation at deep levels of the brain, he was also willing to climb the brain stem to discuss cognition, articulated knowledge, science, and even painting and music, and I want just to sketch out his approach to these topics. I begin with what I take to be right about his epistemology and then turn to critique.

  How can we characterize Ashby's vision of knowledge? First, it was a deflationary and pragmatic one. Ashby insisted that "knowledge is finite" (Ashby 1963, 56). It never exceeds the amount of information on which it rests, which is itself finite, the product of a finite amount of work. It is therefore a mistake to imagine that our knowledge ever attains the status of a truth that transcends its origins—that it achieves an unshakeable correspondence to its object, as I would put it. According to Ashby, this observation ruled out of court most of the contemporary philosophical discourse on topics like induction that has come down to us from the Greeks. And, having discarded truth as the key topic for epistemological reflection, he came to focus on "the practical usefulness of models" (Ashby 1970, 95) in helping us get on with mundane, worldly projects.55 The great thing about a model, according to Ashby, is that it enables us to lose information, and to arrive at something more tractable, handle-able, manipulable, than the object itself in its infinite complexity. As he put it, "No electronic model of a cat's brain can possibly be as true as that provided by the brain of another cat, yet of what use is the latter as a model?" (1970, 96). Models are thus our best hope of evading Bremermann's limit in getting to grips with the awful diversity of the world (1970, 98–100).

  For Ashby, then, knowledge was to be thought of as engaged in practical projects and worldly performances, and one late essay, written with his student Roger Conant, can serve to bring this home. "Every Good Regulator of a System Must Be a Model of That System" (Conant and Ashby 1970) concerned the optimal method of feedback control. The authors discussed two different feedback arrangements: error- and cause-controlled. The former is typified by a household thermostat and is intrinsically imperfect. The thermostat has to wait until the environment drives the living-room temperature away from its desired setting before it can go to work to correct the deviation. Error control thus never quite gets it right: some errors always remain—deviations from the optimum—even though they might be much reduced by the feedback mechanism. A cause-controlled regulator, in contrast, does not need to wait for something to go wrong before it acts. A cause-controlled thermostat, for example, would monitor the conditions outside a building, predict what those conditions would do to the interior temperature, and take steps in advance to counter that—turning down the heating as soon as the sun came out or whatever. Unlike error control, cause control might approach perfection: all traces of environmental fluctuations might be blocked from affecting the controlled system; room temperature might never fluctuate at all. And the result that Conant and Ashby formally proved in this essay (subject to formal conditions and qualifications) was that the minimal condition for optimal cause control was that the regulator should contain a model of the regulated system.
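
  The contrast is easy to stage in a few lines (the linear room model, leak coefficient, and gains below are all invented for illustration): the cause-controlled regulator contains the very model that governs the room, and so cancels the disturbance before any error appears, while the error-controlled regulator must wait for a deviation it can react to.

```python
# Error control vs. cause control, in the spirit of Conant and Ashby's
# result. The room model and all constants are invented assumptions.
import random

LEAK, SET = 0.1, 20.0                    # heat leak rate, target temperature
random.seed(0)
outside = [random.uniform(-5, 15) for _ in range(50)]  # disturbances

def run(controller):
    temp, worst = SET, 0.0
    for out in outside:
        heater = controller(temp, out)
        temp = temp + LEAK * (out - temp) + heater     # the room itself
        worst = max(worst, abs(temp - SET))
    return worst

error_ctrl = lambda temp, out: 0.8 * (SET - temp)      # reacts to the error
cause_ctrl = lambda temp, out: -LEAK * (out - temp)    # contains the room model

print(run(error_ctrl))                   # residual error never quite vanishes
print(run(cause_ctrl))                   # 0.0: disturbance cancelled in advance
```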

  Intuitively, of course, this seems obvious: the regulator has to "know" how changes in the environment will affect the system it regulates if it is to predict and cancel the effects of those changes, and the model is precisely that "knowledge." Nevertheless, something interesting is going on here. In fact, one can see the cause-controlled regulator as an important elaboration of Ashby's ontological theater. The servomechanism, the homeostat, and DAMS staged, with increasing sophistication, an image of the brain as an adaptive organ performatively engaged with a lively world at the level of doing rather than knowing. This is undoubtedly the place to start if one wants to get the hang of the ontology of cybernetics. But, like CORA and M. docilis, the cause-controlled regulator invites us to think about the insertion of knowledge into this performative picture in a specific way. The virtue of knowledge lies not in its transcendental truth but in its usefulness in our performative engagements with the world. Knowledge is engaged with performance; epistemology with ontology. This performative epistemology, as I called it before, is the message of the cause-controlled regulator as ontological or epistemological theater; this is how we should think about knowledge cybernetically. Conversely, the cause-controlled regulator is a concrete example of how one might include the epistemic dimension in bringing ontology down to earth in engineering practice. That is what interests me most about this example.56

  _ _ _ _ _

  BASIC RESEARCH IS LIKE SHOOTING AN ARROW INTO THE AIR, AND, WHERE IT LANDS, PAINTING A TARGET.

  HOMER ADKINS, CHEMIST, QUOTED IN BUCHANAN (2007, 213)

  Now we can return to the critique I began earlier. In discussing the homeostat I noted that it had a fixed and pregiven goal—to keep its essential variables within limits—and I suggested that this is a bad image to have in general. At that stage, however, the referent of the essential variables was still some inner parameter analogous to the temperature of the blood—a slippery concept to criticize. But in his more epistemological writings, Ashby moved easily to a discussion of goals which clearly pertain to states of the outer, rather than the inner, world. An essay on "Genius," written with another of his students, Crayton Walker, can serve to illustrate some consistent strands of Ashby's thinking on this (Ashby and Walker 1968).

  The topic of "Genius" is more or less self-explanatory. In line with the above discussion, Ashby and Walker aim at a deflationary and naturalistic account of the phenomena we associate with the word "genius." But to do so, they sketch out an account of knowledge production in which the importance of predefined goals is constantly repeated. "On an IQ test, appropriate [selection of answers in a multiple choice test] means correct, but not so much in an objective sense as in the sense that it satisfies a decision made in advance (by the test makers) about which answers show high and which low intelligence. In evaluating genius, it makes an enormous difference whether the criterion for appropriateness [i.e., the goal] was decided before or after the critical performance has taken place. . . . Has he succeeded or failed? The question has no meaning in the absence of a declared goal. The latter is like the marksman's saying he really meant to miss the target all along" (Ashby and Walker 1968, 209–10). And, indeed, Ashby and Walker are clear that they understand these goals as explicit targets in the outer world (and not, for example, keeping one's blood temperature constant): "In 1650, during Newton's time, many mathematicians were trying to explain Galileo's experimental findings. . . . In Michelangelo's day, the technical problems of perspective . . . were being widely discussed" (210). The great scientist and the great artist thus both knew what they were aiming for, and their "genius" lay in hitting their specified targets (before anyone else did).

 
