The Cybernetic Brain

by Andrew Pickering


  there is no need to see minds as neatly encapsulated in brains connected by a network of channels called "the media" [fig. 7.10a]. . . . I am inviting the reader to try out a different point of view; namely, the image of a pervasive medium (or media) inhabited by minds in motion. Thus, media are characterized as computing systems, albeit of a peculiar kind. But the statement neither asserts nor denies the homogeneity of a medium. In our present state of knowledge, it seems prudent to regard the medium as heterogeneous, and rendered modular by the existence of specially unrestricted regions (brains, for example), capable of acting as L[language] processors (though I have a hankering to imagine that these regions are ultimately determined by programmatic rather than concrete localization). It is surely true that rather powerful computerized systems greatly reduce the differentiation of the medium and coalesce the specially restricted modules, so that "interface barriers" are less obtrusive than they used to be [fig. 7.10b].

  Figure 7.10. Two views of minds and media: a, linked minds. Squares, organisms; arrows, media as channels of communication. b, embedded minds. Circles, individuals; arrows, communication as program sharing and linguistic interaction between individuals. Source: Pask 1977, 40, figs. 1, 2.

  Here one might be tempted to think of recent work in cognitive science on "distributed cognition"—the observation that much "mental" activity in fact depends upon external, "non-mental" processing (e.g., Hutchins 1995). But something more is at stake. Even with Musicolour and SAKI, Pask had been impressed by the strength of the coupling established between human and machine, which he argued fused them into a single novel entity: "The teaching machine starts to work, in the sense that it accelerates the learning process and teaches efficiently, just when we, as outsiders, find that it is impossible to say what the trainee is deciding about—in other words, at the stage when interaction between the teaching machine and the trainee has given rise to a dynamic equilibrium which involves parts of both" (Pask 1960a, 975); "although the physical demarcation of the student and the machine is definite, the subsystems representing the student's region of control and the adaptive machine's region of control are arbitrary and (relative to any given criterion) have limits that are continually changing" (Pask and McKinnon-Wood 1965, 962). Pask thus switched gestalt entirely, in favor of an image of mind as an all-pervading medium, with human minds as inflections within the overall flow.

  This decentered image of mind as all pervasive, and of individual brains as finite localized nodes, is common to much Eastern philosophy and spirituality, though Pask did not quite put it like that: "There is little originality in the view put forward. The McCluhans [sic] (both Marshall and Eric in different style) say that media are extensions of the brain; poets, mystics, and sorcerers have expressed similar sentiments for ages" (Pask 1977, 40). It bears emphasis, however, that like the other cyberneticians, Pask did not simply equate cybernetics with Buddhist philosophy or whatever. We could say that he added to Buddhist philosophy an engineering aspect. If Eastern philosophy has been presented for millennia in the form of a reflection on the mind, this lineage of Paskian machines running from Musicolour to Thoughtsticker staged much the same vision in the mundane material world of entertainment and teaching machines. In this sense, we could think of Pask's work, like Beer's, as a sort of spiritual engineering.21

  Chemical Computers

  SELF-ORGANIZING SYSTEMS LIE ALL AROUND US. THERE ARE QUAGMIRES, THE FISH IN THE SEA, OR INTRACTABLE SYSTEMS LIKE CLOUDS. SURELY WE CAN MAKE THESE WORK THINGS OUT FOR US, ACT AS OUR CONTROL MECHANISMS, OR PERHAPS MOST IMPORTANT OF ALL, WE CAN COUPLE THESE SEEMINGLY UNCONTROLLABLE ENTITIES TOGETHER SO THAT THEY CAN CONTROL EACH OTHER. WHY NOT, FOR EXAMPLE, COUPLE THE TRAFFIC CHAOS IN CHICAGO TO THE TRAFFIC CHAOS OF NEW YORK IN ORDER TO OBTAIN AN ACCEPTABLY SELF-ORGANIZING WHOLE? WHY NOT ASSOCIATE INDIVIDUAL BRAINS TO ACHIEVE A GROUP INTELLIGENCE?

  GORDON PASK,"THE NATURAL HISTORY OF NETWORKS" (1960B, 258)

  Much of Pask's cybernetics grew straight out of Musicolour: the trajectory that ran through the trainers and educational machines just discussed and the work in the arts, theater, and architecture discussed later in this chapter. But in the 1950s and early 1960s there was another aspect to his cybernetics that was not so closely tied to Musicolour and that I want to examine now.22 This was the work on "biological computers" already mentioned in the previous chapter. Some of this work was done in collaboration with Stafford Beer, but here we can focus on the best-documented work in this area, on what I will now call "chemical computers"—though Pask often referred to them as "organic computers," in reference to their quasi-organic properties rather than the materials from which they were constructed.23 Beer again figures in this story, though it is clear that the initiative and most of the work were Pask's.

  As discussed in the previous chapter, at the center of Beer's vision of the cybernetic factory was the numinous U-machine, the homeostatic controller which not only kept the factory on course in normal conditions but also adapted to changing conditions. Beer's experiments with biological systems aimed at constructing such a machine. In his publications on chemical computers, which first appeared in 1958, Pask set his work in a similar frame, and a review of this work might help us understand the overall problematic more clearly. The opening paragraph of Pask's essay "Organic Control and the Cybernetic Method" (1958, 155) is this: "A manager, being anxious to retire from his position in an industry, wished to nominate his successor. No candidate entirely satisfied his requirements, and after a prolonged but fruitless search, this manager decided that a control mechanism should take his place. Consequently he engaged four separate cyberneticians. Each of them had been recommended in good faith as able to design a control mechanism which would emulate and improve upon the methods of industrial decision making the manager had built up throughout the years." Among other things, this paragraph is evidently a setup for distinguishing between four versions of what cybernetics might be and recommending one of them, namely, Pask's (and Beer's). There is no need to go into the details of all four, but a key contrast among them is brought out in the following hypothetical conversation. One of the cyberneticians is trying to find out how the manager manages (158):

  Manager.—I keep telling you my immediate object was to maximise production of piston rings.

  Cybernetician.—Right, I see you did this on a budget of £10,000.

  Manager.—I bought the new machine and installed it for £8,000.

  Cybernetician.—Well, how about the remaining £2,000?

  Manager.—We started to make ornamental plaques.

  Cybernetician.—Keep to the subject. That has nothing to do with piston rings.

  Manager.—Certainly it has. I didn't want to upset Bill Smith. I told you he was sensitive about being a craftsman. So we tried our hand at ornamental plaques, that was my daughter's idea.

  Cybernetician.—Which costs you £2,000.

  Manager.—Nonsense, Bill Smith enjoys the job. He is a responsible chap, and helps to sober up the hot heads, no it's worth every penny.

  Cybernetician.—Very well, as you please. Just one other enquiry, though. What is an appropriate model for this process? What does it seem like to manage a piston ring plant?

  Manager.—It's like sailing a boat.

  Cybernetician.—Yes.

  In case the reader might miss the significance of that final "yes," Pask comments that "they might continue to infuriate each other indefinitely." This "cybernetician" is infuriated because he wants to extract some rules from the manager that can be run on a computer, or perhaps find some statistical regularity between the firm's inputs and outputs that can be likewise encoded. The manager, in contrast, insists that running the factory is not like that; that genuinely novel solutions to problems are sometimes necessary, solutions not given in prior practice and thus not capturable in algorithms, like spending £2,000 just to keep Bill Smith happy for the overall good of the firm. Hence his very cybernetic final reply, that managing a firm is like sailing a boat—a performative participation in the dynamics of a system that is never fully under control (taking us straight back to Wiener's derivation of "cybernetics," and reminding us, for example, of Brian Eno's approach to musical composition).

  In this essay, Pask makes it clear that he does not take the search for algorithms to be the defining aspect of cybernetics. People who take that approach are "rightly electronic engineers examining their particular kinds of hypotheses about managers" (Pask 1958, 171). In effect, Pask makes here much the same contrast I made in the opening chapter between symbolic AI and the branch of cybernetics that interests me and to which Pask and our other principals devoted themselves. Pask was interested in machines that could sail boats, to which we can now turn. We can look at how Pask's chemical computers functioned, and then how they might substitute for human managers.

  Threads

  Figure 7.11 is a schematic of a chemical computer. A set of electrodes dips down vertically into a dish of ferrous sulphate solution. As current is passed through the electrodes, filaments of iron—"threads" as Pask called them—grow outward from their tips into the liquid: figure 7.12 is a photograph of a stage in this process. Very simple, but so what? Three points about such devices need to be understood to appreciate Pask's vision. First, the threads are unstable: they grow in regions of high current density but dissolve back into solution otherwise. Second, the threads grow unpredictably, sprouting new dendritic branches (which might extend further or dissolve)—"The moment to moment development of a thread proceeds via a trial process. Slender branches develop as extensions of the thread in different directions, and most of these, usually all except the one which points along the path of maximum current, are abortive" (Pask 1958, 165). Such a system can be seen as conducting a search through an open-ended space of possibilities, and we can also see that in Ashby's terms it has the high variety required of a controller: it can run through an endless list of material configurations (compare the space of thread geometries with the twenty-five states of the homeostat). Third, as extensions of the electrodes, the threads themselves influence current densities in the dish. Thus, the present thread structure helps determine how the structure will evolve in relation to currents flowing through the electrodes, and hence the growth of the thread structure exhibits a path dependence in time: it depends in detail on both the history of inputs through the electrodes and on the emerging responses of the system to those. The system thus has a memory, so it can learn. This was Pask's idea: the chemical computer could function as an adaptive controller, in the lineage of the homeostat. In this, of course, it was not so far removed from Musicolour and SAKI, though realized in a much more flexible and lively medium than that supplied by uniselectors, relays, and capacitors.
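  For readers who want the mechanism in executable form, the trial-and-dissolution dynamics can be caricatured in a few lines of Python. This is a minimal sketch under loud assumptions: the one-dimensional "dish," the inverse-square current field, and every threshold are inventions for illustration, not Pask's electrochemistry.

```python
import random

# Toy model of Pask's "threads" (illustrative only; the 1-D dish, the
# inverse-square field, and all thresholds are assumptions, not the
# actual ferrous sulphate chemistry).

def current_density(node, sources):
    # Field at a point: sum of inverse-square contributions from electrode
    # tips and existing thread material.
    return sum(1.0 / (1e-6 + (node - s) ** 2) for s in sources)

def grow(thread, sources, n_trials=5, dissolve_below=0.2):
    tip = thread[-1]
    # Trial process: slender branches sprout in different directions...
    trials = [tip + random.uniform(-1.0, 1.0) for _ in range(n_trials)]
    # ...and only the one pointing along the path of maximum current survives.
    new_tip = max(trials, key=lambda n: current_density(n, sources))
    # Instability: segments left in low-current regions dissolve back into
    # solution (the tip is kept so the search can continue).
    survivors = [n for n in thread if current_density(n, sources) >= dissolve_below]
    return survivors + [new_tip]

thread, sources = [0.0], [3.0, 7.0]   # electrode tips at x = 3 and x = 7
for step in range(20):
    thread = grow(thread, sources)
    # Path dependence: the thread itself now shapes the field, so future
    # growth depends on the whole history of what has grown so far.
    sources = sources + [thread[-1]]

print(f"{len(thread)} surviving nodes; tip at {thread[-1]:.2f}")
```

  The sketch is meant to exhibit only the three properties just listed: instability, growth by trial, and memory through the thread's own effect on the field.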

  Figure 7.11. Schematic of a chemical computer. Source: Pask 1960b, 247, fig. 4.

  Figure 7.12. Threads growing in a chemical computer. A, connecting wires for electrodes; B, platinum pillar electrodes; C, edges of glass tank containing ferrous sulfate; D, chemical reaction in progress; E, "tree" threads being formed; F, connecting cables. Source: Pask 1959, 919, fig. 12.

  The question now becomes one of how such a system might be interested in us: how can a chemical computer be induced to substitute for the human manager of a factory? As with Beer's biological computers, the answer is simple enough, at least in principle. Imagine there are two different sets of electrodes dipping into the dish of ferrous sulphate with its thread structure.

  One set is inputs: the currents flowing through them reflect the parameters of the factory (orders, stocks, cash-flow, etc.). The other set is outputs: the voltages they detect represent instructions to the factory (buy more raw materials, redirect production flows). There will be some determinate relationship between these inputs and outputs, fixed by the current thread structure, but this structure will itself evolve in practice in a process of reciprocal vetoing, as Beer called it, and, as Ashby would have said, the combined system of factory plus controller will inevitably "run to equilibrium." Like a set of interacting homeostats, the chemical computer and the factory will eventually find some operating condition in which both remain stable: the factory settles down as a viable system, in Beer's terms, and the chemical computer, too, settles down into a state of dynamic equilibrium (at least until some uncontrollable perturbation arrives and disturbs the equilibrium, when the search process starts again).
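  The run-to-equilibrium idea lends itself to a similar toy. In the Python sketch below, the thread structure is reduced to a random linear map and the factory to an arbitrary decaying plant; both reductions are assumptions of mine, made only to show Ashby-style resets ending in a mutually stable configuration.

```python
import random

# Run-to-equilibrium sketch (after Ashby): a "factory" and a controller are
# coupled, and whenever an essential variable leaves its bounds the
# controller's structure re-forms at random. The linear map standing in for
# the thread structure, and the toy plant, are illustrative assumptions.

random.seed(0)
W = [random.uniform(-1, 1) for _ in range(3)]  # stands in for the thread structure
state = [1.0, -0.5, 0.8]                       # factory parameters (orders, stocks, cash)
resets = 0

def factory_step(state, action):
    # Hypothetical plant: each parameter decays toward zero and is pushed
    # around by the controller's single output.
    return [0.9 * s + 0.2 * action for s in state]

for t in range(2000):
    action = sum(w * s for w, s in zip(W, state))  # output fixed by current structure
    state = factory_step(state, action)
    if max(abs(s) for s in state) > 2.0:           # essential variable out of bounds:
        W = [random.uniform(-1, 1) for _ in range(3)]  # the structure "re-grows"
        state = [1.0, -0.5, 0.8]
        resets += 1

print(f"settled after {resets} resets; structure = {[round(w, 2) for w in W]}")
```

  As in the text, neither party dictates the outcome: the loop simply keeps vetoing structures until factory and controller happen upon a condition in which both remain stable.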

  The magic is done—well, almost. Pask thought through at least two further complications. First, there is the question of how to get the process of coupling the computer to the factory going. One answer was to envisage a "catalyst," a system that would send current through the "least visited" electrodes, thus fostering a variety of interactions with the factory and enabling the computer to interrogate the factory's performance on a broad front. Of course, second, the procedure of simply letting the computer and the factory search open-endedly for a mutual equilibrium would almost certainly be disastrous. Who knows what terminally idiotic instructions the computer would issue before stability was approached? Pask therefore imagined that the manager would be allowed to train the controller before he retired, monitoring the state of the factory and the machine's responses to that and approving or disapproving those responses by injecting pulses of current as appropriate to reinforce positive tendencies in the machine's evolution, as indicated in figure 7.13. Pask noted that this kind of training would not take the form of the manager dominating the controller and dictating its performance; there was no way that could be done. In fact, and as usual, the interaction would have to take the form of a "partly competitive and partly collaborative game" or conversation (Pask 1958, 170): "After an interval, the structured regions [in the controller] will produce a pattern of behaviour which the manager accepts, not necessarily one he would have approved of initially, but one he accepts as a compromise." Thus the manager and the controller come into homeostatic equilibrium at the same time, in the same way, and in the same process as the controller comes into equilibrium with the factory. "At this point the structured region will replicate indefinitely so that its replica produces the same pattern of behaviour. The manager may thus be removed and the assemblage will act as an organic control mechanism in the industry" (169).
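  The catalyst and the training pulses can be mimicked the same way. In this hedged sketch the manager's judgment is collapsed into a fixed target policy and reinforcement into a random nudge; both are illustrative stand-ins of mine, not Pask's training scheme.

```python
import random

# Training sketch: a "catalyst" keeps probing the least-visited electrode,
# and the manager's approval or disapproval shapes the structure. The target
# policy and the nudge rule are assumptions for illustration only.

random.seed(1)
structure = [random.uniform(-1, 1) for _ in range(4)]  # response at each electrode
visits = [0, 0, 0, 0]

def manager_approves(electrode, response):
    # Stand-in for the manager's judgment: his implicit preferences are
    # modeled here as a target response per electrode (purely hypothetical).
    target = [0.5, -0.2, 0.8, 0.0]
    return abs(response - target[electrode]) < 0.1

for trial in range(20000):
    electrode = visits.index(min(visits))  # catalyst: probe least-visited input
    visits[electrode] += 1
    if manager_approves(electrode, structure[electrode]):
        continue                                        # approved behavior persists
    structure[electrode] += random.uniform(-0.1, 0.1)   # disapproved: trial variation

approved = [manager_approves(e, structure[e]) for e in range(4)]
print("approved electrodes:", approved)
```

  Note that, as in Pask's description, the manager never dictates the structure; he only approves or vetoes, and the compromise the random walk usually settles on is not one he specified in advance.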

  Figure 7.13. Training a chemical computer. Source: Pask 1958, 169, diagram 2.

  Not much new commentary is needed here. As ontological theater, Pask's chemical computers were in much the same space as Beer's biological ones, staging a direct performative coupling between exceedingly complex dynamic systems (the threads, the factory, the manager) free from any representational detour—a coupling originally enacted by Musicolour in Pask's career and that could take us all the way back to Walter's tortoises, except that the threads displayed more variety than the tortoise and, of course, they grew without any painstaking design, exploiting the liveliness of matter instead (and taking us back to Ashby's thoughts on evolutionary design in chapter 4, as well as Beer in chapter 6). As I said in the previous chapter, I am struck by the imagination required to even begin contemplating the use of an electrochemical device such as this as an adaptive controller for any sort of system. It is hard to imagine arriving at such a vision within the symbolic AI tradition, for example.

  But there is another striking feature of Pask's chemical computers that remains to be discussed. We have forgotten about Bill Smith. His function in the hypothetical conversation with the cybernetician is to introduce a consideration of what Pask called the "relevance conditions" for control systems, the question of what variables the system needs to pay attention to, the ones that figure as its inputs and outputs. Bill Smith's contentedness was not something the manager needed to think about under the old regime of production—Bill was happy enough—but suddenly becomes a key variable when the new machine is installed and his work is deskilled. Now, it is one thing to design a control system when these relevance conditions are fixed and known in advance, but quite another to control a system where the relevance conditions change and have continually to be found out. This brings us to the most magical aspect of Pask's chemical computers—an aspect that went unnoticed until an important and very insightful essay published by Peter Cariani in a 1993 festschrift for Pask, to which Stafford Beer added historical detail in a 2001 tribute in a similar volume.

  New Senses

  Beer recalled that in 1956 or 1957 he was visiting London from Sheffield and spent most of the night with Pask at the latter's flat in Baker Street, as he often did. They first had the idea of exploring the robustness of Pask's chemical computers by chiseling out sections of established threads and seeing what happened. It turned out that the systems were very robust and that the gaps healed themselves, though in an unexpected way—instead of joining up from either end, they traveled along the thread until they disappeared. "And yet these demonstrations, though exciting at the time, were somehow recognized to be trivial" (S. Beer 2001, 554–55):

  "Adaptation to the unexpected" should mean more than this, and yet there must be limits. I was already developing my theory of viable systems, and often used myself as an example. But what if someone pulled out a gun and shot me. Would that be proof that I am not after all a viable system? Surely not: the system itself would have been annihilated. We fell to discussing the limiting framework of ultrastability. Suddenly Gordon said something like, "Suppose that it were a survival requirement that this thing should learn to respond to sound? If there were no way in which this [sound] 'meant' anything [to the device], it would be equivalent to your being shot. It's like your being able to accommodate a slap rather than a bullet. We need to see whether the cell can learn to reinforce successfully by responding to the volume of the sound."

 
