The Cybernetic Brain


by Andrew Pickering


  _ _ _ _ _

  I have been trying to indicate why we might find it historically and anthropologically interesting to explore the history and substance of cybernetics, but my own interest also has a political dimension. The subtitle of this book—Sketches of Another Future—is meant to suggest that we might learn something from the history of cybernetics for how we conduct ourselves in the present, and that the projects we will be examining in later chapters might serve as models for future practice and forms of life. I postpone further development of this thought to the next chapter, where the overall picture should become clearer, but for now I want to come at it from the opposite angle. I need to confront the fact that cybernetics has a bad reputation in some quarters. Some people think of it as the most despicable of the sciences. Why is that? I do not have a panoptic grasp of the reasons for this antipathy, and it is hard to find any canonical examples of the critique, but I can speak to some of the concerns.6

  One critique bears particularly on the work of Walter and Ashby. The idea is that tortoises and homeostats in fact fail to model the human brain in important respects, and that, to the degree that we accept them as brain models, we demean key aspects of our humanity (see, e.g., Suchman 2005). The simplest response to this is that neither Walter nor Ashby claimed actually to have modelled anything approaching the real human brain. In 1999, Rodney Brooks gave his book on neo-Walterian robotics the appropriately modest title of Cambrian Intelligence, referring to his idea that we should start at the bottom of the evolutionary ladder (not the top, as in symbolic AI, where the critique has more force). On the other hand, Ashby, in particular, was not shy in his speculations about human intelligence and even genius, and here the critique does find some purchase. His combinatoric conception of intelligence is, I believe, inadequate, and we can explore this further in chapter 4.

  A second line of critique has to do with cybernetics' origins in Wiener's wartime work; cybernetics is often thought of as a militarist science. This view is not entirely misleading. The descendants of the autonomous antiaircraft guns that Wiener worked on (unsuccessfully) in World War II (Galison 1994) are today's cruise missiles. But first, I think the doctrine of original sin is a mistake—sciences are not tainted forever by the moral circumstances of their birth—and second, I have already noted that Ashby and Walter's cybernetics grew largely from a different matrix, psychiatry. One can disapprove of that, too, but the discussion of Bateson and Laing's "antipsychiatry" challenges the doctrine of original sin here as well.

  Another line of critique has to do with the workplace and social inequality. As Wiener himself pointed out, cybernetics can be associated with the postwar automation of production via the feedback loops and servomechanisms that are crucial to the functioning of industrial robots. The sense of "cybernetics" is often also broadened to include anything to do with computerization and the "rationalization" of the factory floor. The ugly word "cybernation" found its way into popular discourse in the 1960s as part of the critique of intensified control of workers by management. Again, there is something to this critique (see Noble 1986), but I do not think that such guilt by association should lead us to condemn cybernetics out of hand.7 We will, in fact, have the opportunity to examine Stafford Beer's management cybernetics at length. The force of the critique turns out to be unclear, to say the least, and we will see how, in Beer's hands, management cybernetics ran into a form of politics that the critics would probably find congenial. And I should reemphasize that my concern here is with the whole range of cybernetic projects. In our world, any form of knowledge and practice that looks remotely useful is liable to be taken up by the military and capital for their own ends, but by the end of this book it should be abundantly clear that military and industrial applications come nowhere close to exhausting the range of cybernetics.

  Finally, there is a critique pitched at a more general level and directed at cybernetics' concern with "control." From a political angle, this is the key topic we need to think about, and also the least well understood aspect of the branch of cybernetics that this book is about. To get to grips with it properly requires a discussion of the peculiar ontological vision of the world that I associate with cybernetics. This is the topic of the next chapter, at the end of which we can return to the question of the political valence of cybernetics and of why this book has the subtitle it does.

  _ _ _ _ _

  The rest of the book goes as follows: Chapter 2 is a second introductory chapter, exploring the strange ontology that British cybernetics played out, and concluding, as just mentioned, with a discussion of the way in which we can see this ontology as political, in a very general sense.

  Chapters 3–7 are the empirical heart of the book. The chapters that make up part 1—on Walter, Ashby, Bateson, and Laing—are centrally concerned with the brain, the self, and psychiatry, though they shoot off in many other directions too. Part 2 comprises chapters on Beer and Pask and the directions in which their work carried them beyond the brain. The main concern of each of these chapters is with the work of the named individuals, but each chapter also includes some discussion of related projects that serve to broaden the field of exploration. One rationale for this is that the book is intended more as an exploration of cybernetics in action than as collective biography, and I am interested in perspicuous instances wherever I can find them. Some of these instances serve to thicken up the connections between cybernetics and the sixties that I talked about above. Others connect historical work in cybernetics to important developments in the present in a whole variety of fields. One object here is to answer the question: what happened to cybernetics? The field is not much discussed these days, and the temptation is to assume that it died of some fatal flaw. In fact, it is alive and well and living under a lot of other names. This is important to me. My interest in cybernetics is not purely historical. As I said, I am inclined to see the projects discussed here as models for future practice, and, though they may be odd, it is nice to be reassured that they are not a priori ridiculous. Also, unlike their cybernetic predecessors, the contemporary projects we will be looking at are fragmented; their interrelations are not obvious, even to their practitioners. Aligning them with a cybernetic lineage is a way of trying to foreground such interrelations in the present—to produce a world.

  The last chapter, chapter 8, seeks to summarize what has gone before in a novel way, by pulling together various cross-cutting themes that surface in different ways in some or all of the preceding chapters. More important, it takes further the thought that the history of cybernetics might help us imagine a future different from the grim visions of today.

  2

  _ _ _ _ _

  ONTOLOGICAL THEATER

  OUR TERRESTRIAL WORLD IS GROSSLY BIMODAL IN ITS FORMS: EITHER THE FORMS IN IT ARE EXTREMELY SIMPLE, LIKE THE RUN-DOWN CLOCK, SO THAT WE DISMISS THEM CONTEMPTUOUSLY, OR THEY ARE EXTREMELY COMPLEX, SO THAT WE THINK OF THEM AS BEING QUITE DIFFERENT, AND SAY THEY HAVE LIFE.

  ROSS ASHBY, DESIGN FOR A BRAIN (1960, 231–32)

  In the previous chapter I approached cybernetics from an anthropological angle—sketching out some features of a strange tribe and its interesting practices and projects, close to us in time and space yet somehow different and largely forgotten. The following chapters can likewise be read in an anthropological spirit, as filling in more of this picture. It is, I hope, a good story. But more can be said about the substance of cybernetics before we get into details. I have so far described cybernetics as a science of the adaptive brain, which is right but not enough. To set the scene for what follows we need a broader perspective if we are to see how the different pieces fit together and what they add up to. To provide that, I want to talk now about ontology: questions of what the world is like, what sort of entities populate it, how they engage with one another. What I want to suggest is that the ontology of cybernetics is a strange and unfamiliar one, very different from that of the modern sciences. I also want to suggest that ontology makes a difference—that the strangeness of specific cybernetic projects hangs together with the strangeness of its ontology.1

  A good place to start is with Bruno Latour's (1993) schematic but insightful story of modernity. His argument is that modernity is coextensive with a certain dualism of people and things; that key features of the modern West can be traced back to dichotomous patterns of thought which are now institutionalized in our schools and universities. The natural sciences speak of a world of things (such as chemical elements and quarks) from which people are absent, while the social sciences speak of a distinctly human realm in which objects, if not entirely absent, are at least marginalized (one speaks of the "meaning" of "quarks" rather than quarks in themselves). Our key institutions for the production and transmission of knowledge thus stage for us a dualist ontology: they teach us how to think of the world that way, and also provide us with the resources for acting as if the world were that way.2

  Against this backdrop, cybernetics inevitably appears odd and nonmodern, to use Latour's word. At the most obvious level, synthetic brains—machines like the tortoise and the homeostat—threaten the modern boundary between mind and matter, creating a breach in which engineering, say, can spill over into psychology, and vice versa. Cybernetics thus stages for us a nonmodern ontology in which people and things are not so different after all. The subtitle of Wiener's foundational book, Control and Communication in the Animal and the Machine, already moves in this direction, and much of the fascination with cybernetics derives from this challenge to modernity. In the academic world, it is precisely scholars who feel the shortcomings of the modern disciplines who are attracted most to the image of the "cyborg"—the cybernetic organism—as a nonmodern unit of analysis (with Haraway 1985 as a key text).

  This nonmodern, nondualist quality of cybernetics will be evident in the pages to follow, but it is not the only aspect of the unfamiliarity of cybernetic ontology that we need to pay attention to. Another comes under the heading of time and temporality. One could crudely say that the modern sciences are sciences of pushes and pulls: something already identifiably present causes things to happen this way or that in the natural or social world. Less crudely, perhaps, the ambition is one of prediction—the achievement of general knowledge that will enable us to calculate (or, retrospectively, explain) why things in the world go this way or that. As we will see, however, the cybernetic vision was not one of pushes and pulls; it was, instead, of forward-looking search. What determined the behavior of a tortoise when set down in the world was not any presently existing cause; it was whatever the tortoise found there. So cybernetics stages for us a vision not of a world characterized by graspable causes, but rather of one in which reality is always "in the making," to borrow a phrase from William James.

  We could say, then, that the ontology of cybernetics was nonmodern in two ways: in its refusal of a dualist split between people and things, and in an evolutionary, rather than causal and calculable, grasp of temporal process. But we can go still further into this question of ontology. My own curiosity about such matters grew out of my book The Mangle of Practice (1995). The analysis of scientific practice that I developed there itself pointed to the strange ontological features just mentioned: I argued that we needed a nondualist analysis of scientific practice ("posthumanist" was the word I used); that the picture should be a forward-looking evolutionary one ("temporal emergence"); and that, in fact, one should understand these two features as constitutively intertwined: the reciprocal coupling of people and things happens in time, in a process that I called, for want of a better word, "mangling." But upstream of those ideas, so to speak, was a contrast between what I called the representational and performative idioms for thinking about science. The former understands science as, above all, a body of representations of reality, while the latter, for which I argued in The Mangle, suggests that we should start from an understanding of science as a mode of performative engagement with the world. Developing this thought will help us see more clearly how cybernetics departed from the modern sciences.3

  _ _ _ _ _

  WHAT IS BEING SUGGESTED NOW IS NOT THAT BLACK BOXES BEHAVE SOMEWHAT LIKE REAL OBJECTS BUT THAT THE REAL OBJECTS ARE IN FACT ALL BLACK BOXES, AND THAT WE HAVE IN FACT BEEN OPERATING WITH BLACK BOXES ALL OUR LIVES.

  ROSS ASHBY, AN INTRODUCTION TO CYBERNETICS (1956, 110)

  Ross Ashby devoted the longest chapter of his 1956 textbook, An Introduction to Cybernetics, to "the Black Box" (chap. 6), on which he had this to say (86): "The problem of the Black Box arose in electrical engineering. The engineer is given a sealed box that has terminals for input, to which he may bring any voltages, shocks, or other disturbances he pleases, and terminals for output, from which he may observe what he can." The Black Box was a key concept in the early development of cybernetics, and much of what I need to say here can be articulated in relation to it. The first point to note is that Ashby emphasized the ubiquity of such entities. This passage continues with a list of examples of people trying to get to grips with Black Boxes: an engineer faced with "a secret and sealed bomb-sight" that is not working properly; a clinician studying a brain-damaged patient; a psychologist studying a rat in a maze. Ashby then remarks, "I need not give further examples as they are to be found everywhere. . . . Black Box theory is, however, even wider in its application than these professional studies," and he gives a deliberately mundane example: "The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box" (Ashby 1956, 86). On Ashby's account, then, Black Boxes are a ubiquitous and even universal feature of the makeup of the world. We could say that his cybernetics assumed and elaborated a Black Box ontology, and this is what we need to explore further.
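
  Ashby's protocol can be made concrete in a few lines of code. The following is a minimal sketch of my own, not anything from Ashby's text: a system whose internal state and transition rule are sealed off, so that the experimenter's entire epistemic situation is a log of input-output pairs. All names here (BlackBox, poke) are hypothetical, invented for illustration.

```python
import random

class BlackBox:
    """A sealed system: the internal state and the rule that
    transforms it are hidden from the experimenter."""
    def __init__(self):
        self._state = random.randint(0, 3)  # hidden internal state

    def poke(self, disturbance: int) -> int:
        """Apply a disturbance at the input terminals and
        observe whatever appears at the output terminals."""
        self._state = (self._state + disturbance) % 4
        return self._state * 2  # only the output is observable

# The experimenter's whole situation, in Ashby's sense: a record
# of (input, output) pairs from which any "knowledge" of the box
# must be induced after the fact.
box = BlackBox()
protocol = [(x, box.poke(x)) for x in [1, 0, 2, 1, 3]]
print(protocol)  # e.g. [(1, 4), (0, 4), (2, 0), (1, 2), (3, 0)]
```

  The point of the sketch is simply that nothing in the experimenter's position depends on opening the box: one acts on it, it acts back, and a model of its workings is an optional afterthought.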

  Next we can note that Black Box ontology is a performative image of the world. A Black Box is something that does something, that one does something to, and that does something back—a partner in, as I would say, a dance of agency (Pickering 1995). Knowledge of its workings, on the other hand, is not intrinsic to the conception of a Black Box—it is something that may (or may not) grow out of our performative experience of the box. We could also note that there is something right about this ontology. We are indeed enveloped by lively systems that act and react to our doings, ranging from our fellow humans through plants and animals to machines and inanimate matter, and one can readily reverse the order of this list and say that inanimate matter is itself also enveloped by lively systems, some human but most nonhuman. The world just is that way.

  A Black Box ontology thus seems entirely reasonable. But having recognized this, at least two stances in the world of Black Boxes, ways of going on in the world, become apparent. One is the stance of modern science, namely, a refusal to take Black Boxes for what they are, a determination to strip away their casings and to understand their inner workings in a representational fashion. All of the scientist's laws of nature aim to make this or that Black Box (or class of Black Boxes) transparent to our understanding. This stance is so familiar that I, at least, used to find it impossible to imagine any alternative to it. And yet, as will become clear, from the perspective of cybernetics it can be seen as entailing a detour, away from performance and through the space of representation, which has the effect of veiling the world of performance from us. The modern sciences invite us to imagine that our relation to the world is basically a cognitive one—we act in the world through our knowledge of it—and that, conversely, the world is just such a place that can be known through the methods and in the idiom of the modern sciences. One could say that the modern sciences stage for us a modern ontology of the world as a knowable and representable place. And, at the same time, the product of the modern sciences, scientific knowledge itself, enforces this vision. Theoretical physics tells us about the unvarying properties of hidden entities like quarks or strings and is silent about the performances of scientists, instruments, and nature from which such representations emerge. This is what I mean by veiling: the performative aspects of our being are unrepresentable in the idiom of the modern sciences.4

  The force of these remarks should be clearer if we turn to cybernetics. Though I will qualify this remark below, I can say for the moment that the hallmark of cybernetics was a refusal of the detour through knowledge—or, to put it another way, a conviction that in important instances such a detour would be mistaken, unnecessary, or impossible in principle. The stance of cybernetics was a concern with performance as performance, not as a pale shadow of representation. And to see what this means, it is perhaps simplest to think about early cybernetic machines. One could, for example, imagine a highly sophisticated thermostat that integrated sensor readings to form a representation of the thermal environment and then transmitted instructions to the heating system based upon computational transformations of that representation (in fact, Ashby indeed imagined such a device: see chap. 4). But my thermostat at home does no such thing. It simply reacts directly and performatively to its own ambient temperature, turning the heat down if the temperature goes up and vice versa.5 And the same can be said about more sophisticated cybernetic devices. The tortoises engaged directly, performatively and nonrepresentationally, with the environments in which they found themselves, and so did the homeostat. Hence the idea expressed in the previous chapter, that tortoises and homeostats modelled the performative rather than the cognitive brain.
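
  The contrast between the two thermostats can also be put in code. This is a minimal sketch under assumptions of my own (the setpoint value and the function names are invented for illustration, not drawn from the text): one controller detours through a representation of its environment before acting; the other simply reacts to the current reading, as the home thermostat does.

```python
SETPOINT = 20.0  # target temperature in degrees Celsius (assumed)

def performative_thermostat(current_temp: float) -> str:
    # No model, no memory: a direct reaction to the ambient
    # temperature, turning the heat off as it rises and on as it falls.
    return "heat_off" if current_temp > SETPOINT else "heat_on"

def representational_thermostat(readings: list[float]) -> str:
    # The detour through representation: integrate past sensor
    # readings into a model of the thermal environment, then
    # compute the action from the model rather than the world.
    model_of_environment = sum(readings) / len(readings)
    return "heat_off" if model_of_environment > SETPOINT else "heat_on"

print(performative_thermostat(21.5))                    # heat_off
print(representational_thermostat([18.0, 19.5, 21.5]))  # heat_on
```

  Both devices regulate temperature, but only the second owes its behavior to an internal stand-in for the world; the first acts in and on the world directly, which is the cybernetic stance at its simplest.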

 
