Seek!: Selected Nonfiction


by Rudy Rucker


  There are two approaches to the problem of creating A-life: the top-down approach, and the bottom-up approach.

  The top-down approach is associated with AI (artificial intelligence), the bottom-up with CA (the study of cellular automata). Both approaches are needed for intelligent artificial life, and I predict that someday soon chaos theory, neural nets and fractal mathematics will provide a bridge between the two. What a day that will be when our machines begin to live and speak and breed - a day like May 10, 1869, when the final golden spike completed the U.S. transcontinental railroad! The study of CAs brings us ever closer to the forging of that last golden link in the great chain between bottom and beyond. If all goes well, many of us will see live robot boppers on the moon.

  A heckler might say, "Sure that's fine, but why are CAs needed? Why have a bottom-up approach at all? What do mindless colored dots have to do with intelligent artificial life?"

  For all humanity's spiritual qualities, we need matter to live on. And CAs can act as the "matter" on which intelligent life can evolve. CAs provide a lively, chaotic substrate capable of supporting the most diverse emergent behaviors. Indeed, it is at least possible that human life itself is quite literally based on CAs.

  How so? View a person as wetware: as a protein factory. The proteins flip about, generating hormones, storing memories. Looking deeper, observe that the proteins' nanotech churning is a pattern made up of flows and undulations in the potential surfaces of quantum chemistry. These surfaces "smell out" minimal energy configurations by using the fine fuzz of physical vacuum noise - far from being like smooth rubber sheets, they are like pocked ocean swells in a rainstorm. The quantum noise obeys local rules that are quite mathematical; and these rules can be well simulated by CAs.

  Why is it that CAs are so good at simulating physics? Because, just like CAs, physics is (1) parallel, (2) local, and (3) homogeneous.


  Restated in terms of nature, these principles say that (1) the world is happening in many different places at once, (2) there is no action at a distance and (3) the laws of nature are the same everywhere.
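These three properties show up in even the smallest CA program. Here is an illustrative sketch using Wolfram's elementary rule 30 (my choice of example rule, not one named in the text): every cell updates from the same table, each update looks only at a three-cell neighborhood, and all new values are computed from the old row before any cell changes.

```python
# A minimal one-dimensional cellular automaton, illustrating the three
# properties: parallel (all cells update at once), local (each update sees
# only a 3-cell neighborhood), homogeneous (one rule table everywhere).
# The table below is Wolfram's elementary rule 30.

RULE_30 = {(1,1,1): 0, (1,1,0): 0, (1,0,1): 0, (1,0,0): 1,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells, rule=RULE_30):
    """Apply one synchronous update to a row of cells (wrap-around edges)."""
    n = len(cells)
    # Every new value is read from the old row - this is the "parallel" part.
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 15
row[7] = 1                      # a single live cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```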

  Whether or not the physical world really is a cellular automaton, the point is that CAs are rich enough that a "biological" world could live on them. We human hackers live on language games on biology on chemistry on physics on mathematics on - something very like the iterated parallel computations of a CA.

  Life needs something to live on, intelligence needs something to think on, and it is this seething information matrix which CAs can provide. If AI is the surfer, CA is the sea. That's why I think cellular automata are interesting: A-life! CAs will lead to intelligent artificial life!

  Another interesting thing about CAs is that they are a universal form of computation. That is, any computation can be modeled (usually inefficiently) as a CA process. The question then becomes: can some computations be done better as CAs?

  It's clear that certain kinds of parallel computations can be done more rapidly and efficiently by a succession of parallel CA steps. And one does best to use the CA intrinsically, rather than simply using it as a simulation of the old serial mode - emulating the gates of an Intel chip is not the way to go. No, when we use CAs best, we do not use them as simulations of circuit diagrams. While behaviors can be found in top-down expert-system style by harnessing particular patterns to particular purposes, I think by far the more fruitful course is to use the bottom-up freestyle surfing CA style summed up in the slogan:

  Seek Ye the Gnarl!

  New dimensional CA hacks are possible, new and marketable techniques of parallel programming are lying around waiting to be found, both in the form of individual CA structures and in the form of wholly different rules.

  CA structures are labile and can be bred in three senses: one can collide and interface different local patterns within the framework of a fixed CA rule, one can combine globally different CA rules (or ideas about them) to produce wholly new ecologies, or one can "gene-splice" the logic of successful rules. Then, like Alexander von Humboldt in the Americas, one botanizes and zoologizes and mineralizes, looking for whatever artificially alive information structures can be found in the new worlds. As always, both top-down and bottom-up approaches are viable. We use bottom-up to find new ecologies and their flora and fauna. We use top-down to seed a given instance of a particular ecology with the sort of gene-tailored computer agents we want to breed.

  In my own bottom-up searches I begin simply by hoping that my programs will display interesting output for a long time. Then I begin to hope that my programs will be robust under varying initial conditions, and that they will be reactive in anthropomorphizable ways. Once the program is, at this very rudimentary level, artificially alive, I may cast about for applications in some practical domain.

  As I mentioned above, I think the most productive near-term applications of CAs are to image generation and image processing. A cycle or two of an averaging CA rule, for instance, can be used for easy image cleanup, munching down all stray "turd bits." This technique, known as "convolution" in the literature, is used every day by NASA's massively parallel computer in Beltsville, Maryland, to process terabyte arrays of satellite photo data. Designers of present-day paint and graphics packages commonly put CA-based rules into their image-processing toolboxes. Several Photoshop effects, for instance, use CAs.
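The cleanup idea can be sketched in a few lines. What follows is a simple despeckling variant of the averaging rule described above (function and variable names are mine): one CA pass in which each pixel consults its 3x3 neighborhood, erasing isolated noise bits while leaving solid regions intact.

```python
# One CA pass over a binary image: kill isolated live pixels (the stray
# "turd bits"), fill isolated dead pixels (pinholes in solid regions).
# Edges wrap around, as in a toroidal CA grid.

def despeckle_step(img):
    """Return a cleaned copy of a binary image (list of lists of 0/1)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            live = sum(img[(y + dy) % h][(x + dx) % w]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            if img[y][x] == 1 and live == 0:
                out[y][x] = 0        # stray bit with no support: erase it
            elif img[y][x] == 0 and live == 8:
                out[y][x] = 1        # pinhole inside a solid region: fill it
    return out

noisy = [[0,0,0,0,0],
         [0,1,0,0,0],   # a lone speck of noise...
         [0,0,0,1,1],
         [0,0,0,1,1],   # ...next to a solid 2x2 block
         [0,0,0,0,0]]
clean = despeckle_step(noisy)    # the speck vanishes, the block survives
```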

  CAs have still not been sufficiently exploited for original image generation. How about a logo that instead of being chrome is matte and luminous, with a smooth curved surface made of tiny moving mosaics of light, light-bits that form crawling dirty CA gliders, or that shudder in psychedelic washes of color? These are what the expressive "flickercladding" skins of the boppers and moldies look like in my A-life science fiction Ware novels.

  Many simulation applications exist as well. The idea is to find a CA rule that looks like something you want to model. If you are lucky there will be some common underlying mathematics between the two. Some rules, for instance, are difference method solutions of the Laplacian equation, which models both diffusion of chemicals and heat flow. Wave motions can be modeled as well.15
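The difference-method connection can be made concrete. The explicit finite-difference update for the one-dimensional heat equation is itself a continuous-valued CA rule, since each cell's new value depends only on itself and its two neighbors (a minimal sketch with my own variable names):

```python
# Explicit finite-difference step for the heat equation u_t = k * u_xx,
# viewed as a continuous-valued CA: each cell mixes a little with its
# two neighbors. r = k*dt/dx^2 must stay <= 0.5 for numerical stability.

def heat_step(u, r=0.25):
    """One time step of heat flow on a ring of cells."""
    n = len(u)
    return [u[i] + r * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

u = [0.0] * 10
u[5] = 100.0                    # a hot spot on a cold ring
for _ in range(50):
    u = heat_step(u)            # the spike spreads out and flattens
```

Because the update only moves heat between neighbors, the total amount of heat on the ring is conserved while the spike diffuses away, which is exactly the behavior the physical model demands.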

  A final application of CAs is to encryption. Either a CA can serve as a cheap source of "essentially random" encryption bits, or the whole message can be fed to a reversible CA. Stephen Wolfram actually patented the one-dimensional rule with "Wolfram code 30" as part of an encryption scheme.16
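Wolfram's scheme can be sketched in miniature: iterate rule 30, read off the center cell of each generation as an "essentially random" keystream bit, and XOR the keystream with the message. This is a toy illustration of the idea, not a secure cipher; the function names are mine.

```python
# Rule 30 as a keystream generator: new cell = left XOR (center OR right).
# The center column of the evolution serves as the "random" bit source.

def rule30_bits(seed_width=64, nbits=8):
    """Return nbits keystream bits from the center column of rule 30."""
    cells = [0] * seed_width
    cells[seed_width // 2] = 1          # standard single-cell seed
    out = []
    for _ in range(nbits):
        out.append(cells[seed_width // 2])   # record, then step
        n = len(cells)
        cells = [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                 for i in range(n)]
    return out

keystream = rule30_bits(nbits=8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
cipher  = [m ^ k for m, k in zip(message, keystream)]
plain   = [c ^ k for c, k in zip(cipher, keystream)]   # XOR again to decrypt
assert plain == message
```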

  But to recapitulate, the real reason for studying CAs is to promote artificial life. The most important use for cellular automata will be as "universes" or "arenas" in which to evolve better fractals, bots, virtual ants, neural nets and expert agents, using gene-splicing, mutation, and our own "divine interventions" to achieve a rapid and dramatic evolution in these parallel processes. CA workers need your help in accomplishing the manifest destiny of mankind: to pass the torch of life and intelligence on to the computer. There are no more than a few hundred active workers in the CA field today. Twenty-first century technology will need thousands more!

  History of Cellular Automata: Von Neumann to Gosper

  Cellular automata were invented in the late 1940s by Stanislaw Ulam (1909-1984) and John von Neumann (1903-1957). One can say that the "cellular" comes from Ulam, and the "automata" from von Neumann.

  Ulam was primarily a mathematician. He invented the Monte Carlo simulation technique, the highly infinite "measurable cardinals" of set theory, and he made contributions to analysis and number theory. With Edward Teller, Ulam was the co-inventor of the hydrogen bomb. Von Neumann was a still more wide-ranging mathematician. He did work in set theory, in the foundations of quantum mechanics, in economics, and in game theory. In addition, von Neumann greatly influenced the logical architecture of the first electronic computers.

  15. Since CA Lab, my students and I have worked on a cellular automata package specifically designed for physical simulation. This is the CAPOW software for simulating 1-D and 2-D continuous-valued cellular automata. It's available for download from http://www.mathcs.sjsu.edu/capow.

  16. Stephen Wolfram, U.S. Patent Number 4,691,291, "Random Sequence Generators," granted September 1, 1987.

  In the late 1940s, von Neumann gave some ground-breaking lectures on the topic of whether or not it would ever be possible for a machine, or "automaton," to reproduce itself.

  Usually a machine makes something much simpler than itself - consider a huge milling machine turning out bolts. Could a machine possibly fabricate machines as complicated as itself? Or is there some extra-mechanical magic to self-reproduction? To simplify the problem, von Neumann suggested that we suppose that our robots or automata are made up of a small number of standardized parts:

  I will introduce as elementary units neurons, a "muscle," entities which make and cut fixed contacts, and entities which supply energy, all defined with about that degree of superficiality with which [the theory of neural networks] describes an actual neuron. If you describe muscles, connective tissues, "disconnecting tissues," and means of providing metabolic energy . . . you probably wind up with something like ten or twelve or fifteen elementary parts.17

  Using the idea of machines made up of multiple copies of a small number of standardized elements, von Neumann posed his question about robot self-reproduction as follows.

  Can one build an aggregate out of such elements in such a manner that if it is put into a reservoir, in which there float all these elements in large numbers, it will then begin to construct other aggregates, each of which will at the end turn out to be another automaton exactly like the original one?18

  17. From "Theory and Organization of Complicated Automata," 1949, reprinted in John von Neumann, Theory of Self-Reproducing Automata, University of Illinois Press, Urbana 1966, p. 77.

  18. From "The General and Logical Theory of Automata," 1948, reprinted in: John von Neumann, Collected Works, Vol. 5, Macmillan, New York 1963, p. 315. The weird scenario described in this quote is reminiscent of a scene in Kurt Vonnegut, Jr.'s Sirens of Titan where an unhappy robot tears himself apart and floats the pieces in a lake.


  Using techniques of mathematical logic, von Neumann was then able to deduce that such self-reproduction should in fact be possible. His proof hinged on the idea that an automaton could have a blueprint for building itself, and that in self-reproduction, two steps would be necessary: (1) to make an exact copy of the blueprint, and (2) to use the blueprint as instructions for making a copy of the automaton. The role of the blueprint is entirely analogous to the way DNA is used in biological self-reproduction, for here the DNA is both copied and used as instructions for building new proteins.
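The blueprint's double role - passive text to be copied, active instructions to be executed - is the same principle behind a software quine, a program that prints its own source. A minimal Python example of the two-step trick:

```python
# The string s plays von Neumann's two blueprint roles at once: it is
# copied verbatim (via %r, which reproduces it as a quoted literal) and
# it is executed as instructions (it is the format template being filled).
s = 's = %r\nprint(s %% s)'
print(s % s)     # the output is the program's own source text
```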

  The complexity of a reservoir full of floating machine parts hindered von Neumann from making his proof convincing. The next step came from Stanislaw Ulam, who was working with von Neumann at Los Alamos during those years. Ulam's suggestion was that instead of talking about machine parts in a reservoir, von Neumann should think in terms of an idealized space of cells that could hold finite state-numbers representing different sorts of parts.

  Ulam's first published reference to this idea reads as follows:

  An interesting field of application for models consisting of an infinite number of interacting elements may exist in the recent theories of automata. A general model, considered by von Neumann and the author, would be of the following sort: Given is an infinite lattice or graph of points, each with a finite number of connections to certain of its "neighbors." Each point is capable of a finite number of "states." The states of neighbors at time Tn induce, in a specified manner, the state of the point at time Tn+1. One aim of the theory is to establish the existence of subsystems which are able to multiply, i.e., create in time other systems identical ("congruent") to themselves.19

  By 1952, von Neumann had completed a description of such a self-reproducing "cellular automaton" which uses 29 states. Von Neumann's CA work was not published during his lifetime; it seems that once he saw the solution, he became distracted and moved on to other things. Ulam continued working on a number of simpler cellular automata, publishing several papers on them during the early 1960s.

  19. From "Random Processes and Transformations," 1950, reprinted in: Stanislaw Ulam, Sets, Numbers and Universes, MIT Press, Cambridge 1974, p. 336.

  [Figure: A cellular automaton rule called "Rainzha." (Generated by Cellab.)]

  The next big event in CA history occurred in 1970. In his popular "Mathematical Games" column, Martin Gardner wrote about how John Horton Conway, a mathematician at the University of Cambridge, had discovered a fascinating two-dimensional cellular automaton so rich in patterns and behavior that it was known as "Life." Conway's vague initial goal had been to find a cellular automaton rule in which simple patterns could grow to a large size, but in which it was not clear whether any patterns could grow forever.

  Conway conjectures that no pattern can grow without limit. Put another way, any configuration with a finite number of counters cannot grow beyond a finite upper limit to the number of counters on the field. This is probably the deepest and most difficult question posed by the game. Conway has offered a prize of $50 to the first person who can prove or disprove the conjecture before the end of the year. One way to disprove it would be to discover patterns that keep adding counters to the field: a "gun" (a configuration that repeatedly shoots out moving objects such as the "glider"), or a "puffer train" (a configuration that moves about leaving behind a trail of "smoke").20
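For readers who want to try this themselves, the Life rule is simple: a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three. The short sketch below (variable names are mine) runs the glider that the quote mentions; after four generations it reappears one cell down and to the right.

```python
# Conway's Life on a small wrap-around grid, demonstrated with a glider.

def life_step(grid):
    """One generation of Life: birth on 3 neighbors, survival on 2 or 3."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            live = sum(grid[(y + dy) % h][(x + dx) % w]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            new[y][x] = 1 if live == 3 or (grid[y][x] and live == 2) else 0
    return new

glider = [[0,1,0],
          [0,0,1],
          [1,1,1]]
grid = [[0] * 8 for _ in range(8)]
for y in range(3):
    for x in range(3):
        grid[y + 1][x + 1] = glider[y][x]
for _ in range(4):              # period 4: the glider shifts by (1, 1)
    grid = life_step(grid)
```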

  The prize was won a month later by William Gosper and five fellow hackers at MIT; they sent Martin Gardner a telegram with the coordinates of the cells to turn on to make a glider gun. Steven Levy's Hackers has a good section about Gosper and the early excitement over Life among the users of the PDP-6 computer at the MIT Artificial Intelligence Project. Levy has a nice quote from Gosper, telling how he saw Life as a way to

  . . . basically do new science in a universe where all the smart guys haven't already nixed you out two or three hundred years ago. It's your life story if you're a mathematician: every time you discover something neat, you discover that Gauss or Newton knew it in his crib. With Life you're the first guy there, and there's always fun stuff going on. You can do everything from recursive function theory to animal husbandry. There's a community of people who are sharing their experiences with you. And there's the sense of connection between you and the environment. The idea of where's the boundary of a computer. Where does the computer leave off and the environment begin?21

  One must remember that 1970 was still the Dark Ages of computing; Conway himself ran his Life simulations by marking the cells with checkers or flat Othello counters. For Gosper and his team to get Life to run on a monitor at all was a nontrivial feat of hacking - it was a new thing to do with a computer. After Gardner's second column on Life, the game became something of a mania among computer users. By 1974, an article about Life in Time could complain that "millions of dollars in valuable computer time may have already been wasted by the game's growing horde of fanatics."22

  20. Martin Gardner, "Mathematical Games: The fantastic combinations of John Conway's new solitaire game Life," Scientific American, October 1970, pp. 120-123.

  21. Steven Levy, Hackers: Heroes of the Computer Revolution, Doubleday, New York 1984, p. 147.

  More and more intricate Life patterns were found all through the '70s, and by 1980, Conway had enough Life machinery at hand to publish a detailed proof that Life can be used to simulate any digital computation whatsoever. The significance of Conway's proof is not that he shows that some cellular automaton can act as a universal computer, for von Neumann already proved this; and for that matter Alvy Ray Smith's Stanford dissertation of 1970 describes a universal one-dimensional CA computer. (Smith later founded the computer graphics company Pixar.) The significance of Conway's proof is that he shows that the specific rule called Life can itself act as a universal computer.

 
