And third, we can take this line of thought further. This is the place to mention what I think of as a cybernetic discovery of complexity. At an "atomic" level, Walter and Ashby understood their machines very well. The individual components were simple and well-understood circuit elements—resistors, capacitors, valves, relays, some wires to make the connections. But the discovery of complexity was that such knowledge is not enough when it comes to understanding aggregate behavior; that explanation by articulation of parts is not as straightforward as one might imagine; that especially—and in contrast to paradigmatic instances of modern science—prediction of overall performance on the basis of an atomic understanding can be difficult to the point of impossibility. Walter reported that he was surprised by the behavior of his tortoises. Ashby was baffled and frustrated by the homeostat's successor—a machine called DAMS—so much so that he eventually abandoned the DAMS project as a failure. We could say, therefore, that Walter and Ashby both discovered in their scientific attack on the brain that even rather simple systems can be, at the same time, exceedingly complex systems in Beer's terms. Beer's favorite examples of such systems were the brain itself, the firm, and the economy, but even Ashby's and Walter's little models of the brain fell into this class, too.
Two observations follow. First, despite the modern scientific impulse behind their construction, we could take the tortoise and, especially, DAMS as themselves instances of ontological theater, in a somewhat different sense from that laid out above. We could, that is, try to imagine the world as populated by entities like the tortoise and DAMS, whose behavior we can never fully predict. This is another way in which the modern scientific approach to the brain of Walter and Ashby in effect turns back into a further elaboration of the nonmodern ontology that this book focuses upon. It is also a rephrasing of my earlier remark on hybridity. Seen from one end of the telescope, the cybernetic brain models shed genuinely scientific light on the brain—in adapting to their environment, they represented an advance in getting to grips with the inner go of the brain itself. But seen from the other end, they help us imagine what an exceedingly complex system is. If "toys" like these, to borrow Walter's description of them, can surprise us, the cybernetic ontology of unknowability seems less mysterious, and cybernetic projects make more sense.
Continuing with this line of thought, in chapter 4 we can follow one line of Ashby's work into the mathematical researches of Stuart Kauffman and Stephen Wolfram. I just mentioned some important philosophical work by Kauffman, but at issue here is another aspect of his theoretical biology. In computer simulations of complex systems in the late 1960s, Kauffman came across the emergence of simple structures having their own dynamics, which he could interfere with but not control. These systems, too, might help give substance to our ontological imaginations. In understanding the work of Bateson, Laing, Beer, and Pask, the idea of performative interaction with systems that are not just unknowable but that also have their own inner dynamics—that go their own way—is crucial. Wiener derived the word "cybernetics" from the Greek for "steersman"; Pask once compared managing a factory with sailing a ship (chap. 7); and the sense of sailing we will need in later chapters is just that of participating performatively in (rather than representationally computing) the dynamics of sails, winds, rudders, tides, waves, and what have you.
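For readers who want to see this kind of emergence concretely, the following is a minimal sketch, in the spirit of Kauffman's random Boolean networks but not drawn from his own code: a handful of nodes, each reading two randomly chosen inputs and applying its own randomly chosen Boolean rule, is set running from a random initial state and soon falls into a short cycle of states—a structure with its own dynamics that nobody designed in. All names and parameter values here are illustrative assumptions.

```python
import random

def random_boolean_network(n=12, k=2, seed=0):
    """Build a random Boolean network: each of n nodes reads k randomly
    chosen inputs and applies its own randomly chosen Boolean function."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    # Each node's rule is a lookup table over the 2**k possible input patterns.
    rules = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, rules

def step(state, inputs, rules):
    """Advance the whole network one tick, all nodes updating in parallel."""
    new_state = []
    for ins, rule in zip(inputs, rules):
        index = 0
        for i in ins:
            index = (index << 1) | state[i]
        new_state.append(rule[index])
    return tuple(new_state)

def find_attractor(n=12, k=2, seed=0, max_steps=10000):
    """Run from a random start until a state repeats; report the length of the
    transient and of the cycle (the emergent 'structure') the network falls into."""
    inputs, rules = random_boolean_network(n, k, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {state: 0}
    for t in range(1, max_steps):
        state = step(state, inputs, rules)
        if state in seen:
            return seen[state], t - seen[state]
        seen[state] = t
    return None, None

if __name__ == "__main__":
    for seed in range(5):
        transient, cycle = find_attractor(seed=seed)
        print(f"network {seed}: settles after {transient} steps into a cycle of length {cycle}")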
The motto of Wolfram's New Kind of Science (2002) is "extremely complex behaviour from extremely simple systems," and this is precisely the phrase that goes with the earlier cybernetic discovery of complexity. Whereas the cyberneticians built machines, Wolfram's work derives from experimentation with very simple formal mathematical systems called cellular automata (CAs). And Wolfram's discovery has been that under the simplest of rules, the time evolution of CAs can be ungraspably complex—the only way to know what such a system will do is to set it in motion and watch. Again we have the idea of an unpredictable endogenous dynamics, and Wolfram's CAs can thus also function as ontological theater for us in what follows—little models of the fundamental entities of a cybernetic ontology. In their brute unpredictability, they conjure up for us what one might call an ontology of becoming. Much of the work to be discussed here had as its problematic questions of how to go on in such a world.
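To make the point concrete, here is a minimal sketch—purely illustrative, not taken from Wolfram's own work—of one of his elementary cellular automata, Rule 30: each cell in a line of cells looks only at itself and its two neighbors, yet the pattern that grows from a single "on" cell is, in practice, unpredictable except by running it and watching.

```python
def rule30_step(cells):
    """One tick of Wolfram's elementary Rule 30 on a line of binary cells
    (cells beyond the edges are treated as 0)."""
    n = len(cells)
    new = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        # The rule table is the binary expansion of the number 30:
        # neighborhoods 111..000 map to outputs 0,0,0,1,1,1,1,0.
        new[i] = (30 >> neighborhood) & 1
    return new

if __name__ == "__main__":
    width, steps = 63, 30
    cells = [0] * width
    cells[width // 2] = 1          # a single "on" cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)
```

Running this for a few dozen steps already produces the irregular, effectively unforeseeable triangle of activity that Wolfram takes as emblematic of complexity from simplicity.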
Again, in the case of Kauffman and Wolfram, a certain ontological hybridity is evident. In classically modern fashion, Wolfram would like to know which CA the world is running. The recommendation here is to look through the other end of the telescope—or pick up the other end of the stick—and focus on the literally unpredictable properties of mathematical systems like these as a way of imagining more generally how the world is.11
_ _ _ _ _
THE FACT IS THAT OUR WHOLE CONCEPT OF CONTROL IS NAIVE, PRIMITIVE AND RIDDEN WITH AN ALMOST RETRIBUTIVE IDEA OF CAUSALITY. CONTROL TO MOST PEOPLE (AND WHAT A REFLECTION THIS IS UPON A SOPHISTICATED SOCIETY!) IS A CRUDE PROCESS OF COERCION.
STAFFORD BEER, CYBERNETICS AND MANAGEMENT (1959, 21)
MODERN SCIENCE'S WAY OF REPRESENTING PURSUES AND ENTRAPS NATURE AS A CALCULABLE COHERENCE OF FORCES. . . . PHYSICS. . . SETS NATURE UP TO EXHIBIT ITSELF AS A COHERENCE OF FORCES CALCULABLE IN ADVANCE.
MARTIN HEIDEGGER, "THE QUESTION CONCERNING TECHNOLOGY" (1976 [1954], 302–3)
WE HAVE TO LEARN TO LIVE ON PLANETARY SURFACES AND BEND WHAT WE FIND THERE TO OUR WILL.
NASA ADMINISTRATOR, NEW YORK TIMES, 10 DECEMBER 2006
I want to conclude this chapter by thinking about cybernetics as politics, and to do so we can pick up a thread that I left hanging in the previous chapter. There I ran through some of the critiques of cybernetics and indicated lines of possible response. We are now in a position to consider one final example. Beyond the specifics of its historical applications, much of the suspicion of cybernetics seems to center on just one word: "control." Wiener defined the field as the science of "control and communication," the word "control" is everywhere in the cybernetics literature, and those of us who have a fondness for human liberty react against that. There are more than enough controls imposed on us already; we don't want a science to back them up and make them more effective.
The cyberneticians, especially Stafford Beer, struggled with this moral and political condemnation of their science, and I can indicate the line of response. We need to think about possible meanings of "control." The objectionable sense is surely that of control as domination—the specter of Big Brother watching and controlling one's every move—people reduced to automata. Actually, if this vision of control can be associated with any of the sciences, it should be the modern ones. Though the word is not much used there, these are Deleuze and Guattari's royal sciences, aligned with the established order, that aspire to grasp the inner workings of the world through knowledge and thus to dominate it and put it entirely at our disposal. Beyond the natural sciences, an explicit ambition of much U.S. social science throughout the twentieth century was "social engineering." Heidegger's (1976 [1954]) understanding of the sciences as integral to a project of enframing and subjugation comes to mind. And the point I need to stress is that the cybernetic image of control was not like that.
Just as Laingian psychiatry was sometimes described as antipsychiatry, the British cyberneticians, at least, might have been rhetorically well advised to describe themselves as being in the business of anticontrol. And to see what that means, we have only to refer back to the preceding discussion of ontology. If cybernetics staged an ontology in which the fundamental entities were dynamic systems evolving and becoming in unpredictable ways, it could hardly have been in the business of Big Brother–style domination and enframing. It follows immediately from this vision of the world that enframing will fail. The entire task of cybernetics was to figure out how to get along in a world that was not enframable, that could not be subjugated to human designs—how to build machines and construct systems that could adapt performatively to whatever happened to come their way. A key aspect of many of the examples we will examine was that of open-ended search—of systems that would explore their world to see what it had to offer, good and bad. This, to borrow another word from Heidegger, is a stance of revealing rather than enframing—of openness to possibility, rather than a closed determination to achieve some preconceived object, come what may (though obviously this assertion will need to be nuanced as we go along). This is the ontological sense in which cybernetics appears as one of Deleuze and Guattari's nomad sciences that upset established orders.
One theme that will emerge from the chapter on Ashby onward, for example, is that of a distinctly cybernetic notion of design, very different from that more familiar in modern science and engineering. If our usual notion of design entails the formulation of a plan which is then imposed upon matter, the cybernetic approach entailed instead a continuing interaction with materials, human and nonhuman, to explore what might be achieved—what one might call an evolutionary approach to design, that necessarily entailed a degree of respect for the other.
Readers can decide for themselves, but my feeling is, therefore, that the critique of cybernetics that centers on the word "control" is importantly misdirected. British cybernetics was not a scientized adjunct of Big Brother. In fact, as I said, the critique might be better redirected toward modernity rather than cybernetics, and this brings us to the question of ontological politics. The period in which I have been writing this book has not been a happy one, and the future looks increasingly grim. In our dealings with nature, 150 years of the enframing of the Mississippi by the U.S. Army Corps of Engineers came to a (temporary) end in 2005 with Hurricane Katrina, the flooding of New Orleans, many deaths, massive destruction of property, and the displacement of hundreds of thousands of people.12 In our dealings with each other, the United States's attempt to enframe Iraq—the installation of "freedom and democracy"— became another continuing disaster of murder, mayhem, and torture.
In one of his last public appearances, Stafford Beer (2004 [2001], 853) argued, "Last month [September 2001], the tragic events in New York, as cybernetically interpreted, look quite different from the interpretation supplied by world leaders—and therefore the strategies now pursued are quite mistaken in cybernetic eyes." Perhaps we have gone a bit overboard with the modern idea that we can understand and enframe the world. Perhaps we could do with a few examples before our eyes that could help us imagine and act in the world differently. Such examples are what the following chapters offer. They demonstrate concretely and very variously the possibility of a nonmodern stance in the world, a stance of revealing rather than enframing, that hangs together with an ontology of unknowability and becoming. Hence the invitation to see the following scenes from the history of cybernetics as sketches of another future, models for another way to go on, an invitation elaborated further in chapter 8.
This book is not an argument that modernity must be smashed or that science as we know it should be abandoned. But my hope is that it might do something to weaken the spell that modernity casts over us—to question its hegemony, to destabilize the idea that there is no alternative. Ontological monotheism is not turning out to be a pretty sight.
PART ONE
PSYCHIATRY TO CYBERNETICS
3
_ _ _ _ _
GREY WALTER
from electroshock to the psychedelic sixties
THE BRUTE POINT IS THAT A WORKING GOLEM IS . . . PREFERABLE TO TOTAL IGNORANCE. . . . IT IS CLEAR BY NOW THAT THE IMMEDIATE FUTURE OF STUDY IN MODELLING THE BRAIN LIES WITH THE SYNTHESIS OF GADGETS MORE THAN WITH THE ANALYSIS OF DATA.
JEROME LETTVIN, EMBODIMENTS OF MIND (1988, VI, VII)
In an obituary for his long-standing friend and colleague, H. W. Shipton described Grey Walter as, "in every sense of the phrase a free thinker [with] contempt for those who followed well paved paths. He was flamboyant, persuasive, iconoclastic and a great admirer of beauty in art, literature, science, and not least, woman" (1977, iii). The historian of science Rhodri Hayward remarks on Walter's "swashbuckling image" as an "emotional adventurer," and on his popular and academic reputation, which ranged from "robotics pioneer, home guard explosives expert, wife swapper, t.v.-pundit, experimental drugs user and skin diver to anarcho-syndicalist champion of leucotomy and electro-convulsive therapy" (2001a, 616). I am interested in Walter the cybernetician, so the swashbuckling will get short shrift, alas.1
After an outline of Walter's life and career, I turn to robot-tortoises, exploring their contribution to a science of the performative brain while also showing the ways in which they went beyond that. I discuss the tortoises as ontological theater and then explore the social basis of Walter's cybernetics and its modes of transmission. Here we can look toward the present and contemporary work in biologically inspired robotics. A discussion of CORA, a learning module that Walter added to the tortoises, moves the chapter in two directions. One adds epistemology to the ontological picture; the other points to the brutal psychiatric milieu that was a surface of emergence for Walter's cybernetics. The chapter concludes with Walter's interest in strange performances and altered states, and the technologies of the self that elicit them, including flicker and biofeedback. Here we can begin our exploration of crossovers and resonances between cybernetics and the sixties, with reference to William Burroughs, the Beats, and "brainwave music." I also discuss the hylozoist quality of the latter, a theme that reappears in different guises throughout the book.
Figure 3.1. Grey Walter. Reproduced from The Burden: Fifty Years of Clinical and Experimental Neuroscience at the Burden Neurological Institute, by R. Cooper and J. Bird (Bristol: White Tree Books, 1989), 50. (By permission of White Tree Books, Bristol.)
The ontological hybridity of first-generation cybernetics will be apparent. While we can read Walter's work as thematizing a performative vision of ourselves and the world, the impulse to open up the Black Box of the brain will also be evident. Cybernetics was born in the matrix of modern science, and we can explore that too.
_ _ _ _ _
William Grey Walter was born in Kansas City, Missouri, in 1910.2 His parents were journalists, his father English, his mother Italian-American. The family moved to Britain in 1915, and Walter remained there for the rest of his life. At some stage, in a remarkable coincidence with Ashby, Beer, and Pask, Walter stopped using his first name and was generally known as Grey (some people understood him to have a double-barreled surname: Grey-Walter). He was educated at Westminster School in London and then at King's College Cambridge, where he gained an honors degree in physiology in 1931 and stayed on for four years' postgraduate research on nerve physiology and conditioned reflexes, gaining his MA degree for his dissertation, "Conduction in Nerve and Muscle." His ambition was to obtain a college fellowship, but he failed in that and instead took up a position in the Central Pathological Laboratory of the Maudsley mental hospital in London in 1935, at the invitation of Frederick Golla, the laboratory's director, and with the support of a fellowship from the Rockefeller Foundation.3
Golla encouraged Walter to get into the very new field of electroencephalography (EEG), the technique of detecting the electrical activity of the brain, brainwaves, using electrodes attached to the scalp. The possibility of detecting these waves had first been shown by the Jena psychiatrist Hans Berger in 1928 (Borck 2001) but the existence of such phenomena was only demonstrated in Britain in 1934 by Cambridge neurophysiologists E. D. Adrian and B. H. C. Matthews. Adrian and Matthews confirmed the existence of what they called the Berger rhythm, which later became known as the alpha rhythm: an oscillation at around ten cycles per second in electrical potentials within the brain, displayed by all the subjects they examined. The most striking feature of these waves was that they appeared in the brain when the subjects' eyes were shut, but vanished when their eyes were opened (fig. 3.2). Beyond that, Adrian and Matthews found that "the Berger rhythm is disappointingly constant" (Adrian and Matthews 1934, 382). But Walter found ways to take EEG research further. He was something of an electrical engineering genius, designing and building EEG apparatus and frequency analyzers and collaborating with the Ediswan company in the production of commercial equipment, and he quickly made some notable clinical achievements, including the first diagnosis and localization of a cerebral tumor by EEG, the discovery that a significant proportion of epileptics show unusual brainwaves even between fits, and intervention in a famous murder case (Hayward 2001a, 620).4 Following his pioneering work, EEG was at the center of Walter's career for the rest of his life. In 1949 he was a cofounder and coeditor of the felicitously titled journal Electroencephalography and Clinical Neurophysiology (self-described on its title page as "The EEG Journal") and from 1953 to 1957 he was president of the International Federation of EEG Societies.5
Figure 3.2. The development of the rhythm in the absence of visual activity (Adrian and Matthews' fig. 2). A: E.D.A. The rhythm appears when the eyes are closed. B: B.H.C.M. Ditto. C: E.D.A. The rhythm disappears when the eyes are opened.