Rudy’s twenty-five now, so we were cool about it and only went on two rides. The first was the Big Dipper, a wonderful old wooden roller coaster rising up right next to the Monterey Bay. The streaming air was cool and salty, the colors were bright and sun-drenched, and the cars moved through a long tunnel of sound woven from screams and rattles and carnival music and distant waves. It was wonderful.
The second ride we went on was a Virtual Reality ride in which nine people are squeezed into a windowless camper van mounted on hydraulic legs. On the front wall of the airless little van was a big screen showing a first-person view of a ride down—a roller coaster! As the virtual image swooped and jolted, the van’s hydraulic jacks bucked and shuddered in an attempt to create a kinesthetic illusion that you were really in the cyberspace of the virtual ride. Compared to fresh memory of the true roller coaster, the Virtual Reality ride was starkly inadequate and manipulative.
Compared to reality, computers will always be second best. But computers are here for us to use, and if we use them wisely, they can teach us to enjoy reality more than ever.
* * *
Note on “A Brief History of Computers”
Written August, 1996.
Appeared in Seek!, 1999.
When I wrote this piece, I was thinking of it as the start of a long book about computers and computation. Embedded in computer science as I was at that time, I found it very interesting to write my “Brief History of Computers” and learn the background of the machines that were more or less taking over my life. But I couldn’t quite find the right angle for making a book of the essay. So I just ran it in my nonfiction anthology Seek! And eight years later, in 2004, I finally wrote my big nonfiction book about computation, that is, The Lifebox, the Seashell, and the Soul.
Games, Intelligence, Enlightenment
I've been thinking about human intelligence, about the inevitability of intelligence in evolving systems. The idea is that intelligence arises from an ability to mentally simulate the world. Simulation is a powerful method for improving your survival ability; if you can simulate, you can plan and anticipate. Once you simulate the world, it follows almost automatically that you get an ability for abstract thought. One of the symbols you acquire is that of your Self. And only one step beyond that lies consciousness—at least if you agree with Antonio Damasio, The Feeling of What Happens (Harcourt 1999).
In his book, Damasio argues that consciousness amounts to forming a mental image of yourself observing the world. It’s not enough to just have an image of yourself in the world. To get consciousness, you go a step beyond that and add on a second-order symbol of the operating-system self that looks at the simulation.
In terms of how computer game designers think, the lower level self symbol within the simulation is the “player,” and the second-order self symbol that watches a copy of the simulation is the “user.”
In Antonio Damasio’s account, consciousness arises through this sequence:
(0) Being active in the world,
(1) Being able to perceive and distinguish separate objects in the world,
(2) Having a first-order simulation of the world including a “player,” that is, a self-token representing you, and,
(3) Having a second-order simulation in which there is a “user” which represents you observing a first-order simulation.
Another way to put it is that in step (3) you simulate a second-order self token which mimics your behavior of observing a simulation of the world with a first-order self token. You simulate yourself watching the world and “playing the game.” Step (3) might arise from the necessity to realize that the creatures like yourself around you are also playing the game, that is, doing (2). Rephrasing this, once you do step (3), it’s natural to do a step (3A) in which the other intelligent agents of the world are also represented by first-order tokens, each of which is to have its own simulation of the world and itself.
A snail doesn’t even have (1). I’m not sure if a dog has (2) or not, maybe only fleetingly. As I recall, in The Feeling of What Happens, Damasio talks about some brain-damaged people who have (2) but not (3). Philosophically, this goes back to something I wrote about in Infinity and the Mind: self-awareness leads to an infinite regress. At steps (1) and (2) we don’t have the regress, but right away at step (3) we do have the regress, because now the agent is “thinking about itself thinking,” and we can nest thought-balloons all the way down. So once you have step (3), you inevitably have (4), (5), (6), and so on.
In terms of the game analogy, we might say that in stage (4), you simulate a “designer” who observes the “user” interacting with the “player.” And so on.
It’s fitting that at the same stage (3) where we reach consciousness we introduce a kind of dynamic that leads to infinity—this is reasonable and pleasing, given that it’s so natural to think of the mind as being infinite. The early stages beyond (3) are levels that we experience, when unpleasant, as self-consciousness and irony, or, when pleasant, as maturity and self-knowledge.
If you run the regress right out through all the natural numbers, you get a kind of enlightenment which is, however, illusory, as right away you can ask for a level “infinity plus one.”
The real enlightenment is the one you can’t finish, it’s the unthinkable Absolute Infinity that lies beyond all the humanly conceivable levels of merely mathematical infinities. To my way of thinking, reaching Absolute Infinity would be akin to getting back to stage (0) again.
Stage (0), viewed in a positive way, might be thought of as experiencing the world with an empty mind, no model of it needed, no image, no notion of objects, simply the world in and of itself, letting “the world think me” instead of “me thinking the world.”
* * *
Note on “Games, Intelligence, Enlightenment”
Written in 2001.
Unpublished.
This short essay was an inspiration I had in Italy after giving a talk in Rimini. I was there to receive, for some reason, the medal of the Italian Senate. It was very gratifying. The award ceremony was coupled with an academic conference. It was exciting to be in Rimini, which was Federico Fellini’s home town—we conference attendees were in fact lodged in the same Grand Hotel that appears in Fellini’s Amarcord. Walking around town on my own, I had some profound (or seemingly profound) insights into the nature of consciousness, and it became clear to me that the evolution of consciousness is more or less inevitable for any species on any world. This material eventually made its way into my tome, The Lifebox, the Seashell, and the Soul.
Adventures In Gnarly Computation
Everything Is A Computation
What is reality? One type of answer to this age-old question has the following format: “Everything is ________.” Over the years I’ve tried out lots of different ways to fill in the blank: particles, bumps in spacetime, thoughts, mathematical sets, and more. I once had a friend who liked to say, “The universe is made of jokes.”
Now there may very well be no correct way to fill in the “Everything is” blank. It could be that reality is fundamentally pluralistic, that is, made up of all kinds of fundamentally incompatible things. Maybe there really isn’t any single underlying substance. But it’s interesting to think that perhaps there is.
Lately I’ve been working to convince myself that everything is a computation. I call this belief universal automatism. Computations are everywhere, once you begin to look at things in a certain way. The weather, plants and animals, your personal thoughts and shifts of mood, society’s history and politics—all computations.
One handy aspect of computations is that they occur at all levels and in all sizes. When you say that everything’s made of elementary particles, then you need to think of large-scale objects as being made of a zillion tiny things. But computations come in all scales, and an ordinary natural process can be thought of as a single high-level computation.
If I want to say that all sorts of processes are like computations, it’s to be expected that my definition of computation must be fairly simple. I go with the following: A computation is a process that obeys finitely describable rules.
People often suppose that a computation has to “find an answer” and then stop. But our general notion of computation allows for computations that run indefinitely. If you think of your life as a kind of computation, it’s quite abundantly clear that there’s not going to be a final answer and there won’t be anything particularly wonderful about having the computation halt! In other words, we often prefer a computation to yield an ongoing sequence of outputs rather than to attain one final output and turn itself off.
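To make this broad notion of computation concrete, here is a minimal sketch of my own in Python (nothing from the essay itself; the particular update rule is an arbitrary placeholder): one fixed, finitely describable rule, iterated forever, valued for its ongoing stream of outputs rather than for any final answer.

# A toy non-halting computation: a single finitely describable rule,
# applied over and over, producing an endless sequence of outputs
# instead of one final answer and a halt.
def endless_computation(state=1):
    while True:                        # no halting condition at all
        yield state
        state = (state * 75) % 65537   # the fixed, finitely described rule

stream = endless_computation()
for _ in range(10):                    # sample the first ten outputs
    print(next(stream))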
Everything is a Gnarly Computation
If we suppose that many natural phenomena are in effect computations, the study of computer science can tell us about the kinds of natural phenomena that can occur. Starting in the 1980s, the scientist-entrepreneur Stephen Wolfram did a king-hell job of combing through vast seas of possible computations, getting a handle on the kinds of phenomena that can occur, exploring the computational universe.
Simplifying just a bit, we can say that Wolfram found three kinds of processes: the predictable, the random-looking, and what I term the gnarly. These three fall into a Goldilocks pattern.
Too cold (predictable). Processes that produce no real surprises. This may be because they die out and become constant, or because they’re repetitive in some way. The repetitions can be spatial, temporal, or scaled so as to make fractally nested patterns that are nevertheless predictable.
Too hot (random-looking). Processes that are completely scuzzy and messy and dull, like white noise or video snow. The programmer William Gosper used to refer to computational rules of this kind as “seething dog barf.”
Just right (gnarly). Processes that are structured in interesting ways but nonetheless unpredictable. In computations of this kind we see coherent patterns moving around like gliders; these patterns produce large-scale information transport across the space of the computation. Gnarly processes often display patterns at several scales. We find them fun to watch because they tend to appear as if they’re alive.
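A minimal way to see these three regimes for yourself is to run the one-dimensional “elementary” cellular automata that Wolfram surveyed, where each cell is 0 or 1 and a cell’s next value depends only on itself and its two neighbors. The sketch below is my own illustration, not something from the essay; starting from a single seed cell, rule 250 typically gives a predictable repeating pattern, rule 30 a random-looking one, and rule 110 a gnarly one with glider-like structures.

def step(cells, rule):
    # Apply an elementary cellular automaton rule number (0-255) to one row.
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new.append((rule >> neighborhood) & 1)               # look up the rule bit
    return new

def run(rule, width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1                  # single seed cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

for r in (250, 30, 110):                   # predictable, random-looking, gnarly
    print("--- rule", r, "---")
    run(r)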
Gnarliness lies between predictability and randomness. It’s an interface phenomenon like organic life, poised between crystalline order and messy deliquescence.
Why do I use the word gnarly? Well, the original meaning of “gnarl” was simply “a knot in the wood of a tree.” In California surfer slang, “gnarly” came to be used to describe complicated, rapidly changing surf conditions. And then, by extension, something gnarly came to be anything with surprisingly intricate detail. As a late-arriving and perhaps over-assimilated Californian, I get a kick out of the word.
Clouds, fire, and water are gnarly in the sense of being beautifully intricate, with purposeful-looking but not quite comprehensible patterns. Although the motion of a projectile through empty space would seem to be predictable, if we add in the effects of mutually interacting planets and suns, the calculation may become gnarly. And earthly objects moving through water or air tend to leave turbulent wakes—which very definitely involve gnarly computations.
All living things are gnarly, in that they inevitably do things that are much more complex than one might have expected. The shapes of tree branches are of course the standard example of gnarl. The life cycle of a jellyfish is way gnarly. The wild three-dimensional paths that a hummingbird sweeps out are kind of gnarly too, and, if the truth be told, your ears are gnarly as well.
Needless to say, the human mind is gnarly. I’ve noticed, for instance, that my moods continue to vary even if I manage to behave optimally and think nice correct thoughts about everything. I might suppose that this is because my moods are affected by other factors—such as diet, sleep, exercise, and biochemical processes I’m not even aware of. But a more computationally realistic explanation is simply that my emotional state is the result of a gnarly unpredictable computation, and any hope of full control is a dream.
Still on the topic of psychology, consider trains of thought, the free-flowing and somewhat unpredictable chains of association that the mind produces when left on its own. Note that trains of thought need not be formulated in words. When I watch, for instance, a tree branch bobbing in the breeze, my mind plays with the positions of the leaves, following them and automatically making little predictions about their motions. And then the image of the branch might be replaced by a mental image of a tiny man tossed up high into the air. His parachute pops open and he floats down towards a city of lights. I recall the first time I flew into San Jose, and how it reminded me of a great circuit board. I remind myself that I need to see about getting a new computer soon, and then in reaction, I think about going for a bicycle ride. And so on.
Society, too, is carrying out gnarly computations. The flow of opinion, the gyrations of the stock markets, the ebb and flow of success, the accumulation of craft and invention—gnarly, dude.
So What?
If you were to believe all the ads you see, you might imagine that the latest personal computers have access to new, improved methods that lie wholly beyond the abilities of older machines. But computer science tells us that if I’m allowed to equip my old machine with additional memory chips, then I can always get it to behave like any new computer at all.
This carries over to the natural world. Many naturally occurring processes are not only gnarly, they’re capable of behaving like any other kind of computation. Wolfram feels that this behavior is very common, and he formulates this notion in the claim that he calls the Principle of Computational Equivalence (PCE): Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
If the PCE is true, then, for instance, a leaf fluttering in the breeze outside my window is as computationally rich a system as my brain. My brain no richer than a fluttering leaf? Some scientists find this notion an affront. Personally, I find serenity in accepting that the flow of my thoughts and moods is a gnarly computation that’s fundamentally the same as a cloud, a flame, or a fluttering leaf. It’s soothing to realize that my mind’s processes are inherently uncontrollable. Looking at the waving branches of trees calms me down.
But rather than arguing for the full PCE, I think it’s worthwhile to formulate a slightly weaker claim, which I call the Principle of Computational Unpredictability (PCU): Most naturally occurring complex computations are unpredictable.
In the PCU, I’m using “unpredictable” in a specific computer-science sense; I’m saying that a computation is unpredictable if there’s no fast shortcut way to predict its outcomes. If a computation is unpredictable and you want to know what state it’ll be in after, say, a million steps, you pretty much have to crunch out those million steps to find out what’s going to happen.
Traditional science is all about finding shortcuts. Physics 101 teaches students to use Newton’s laws to predict how far a cannonball will travel when shot into the air at a certain angle and with a certain muzzle-velocity. But, as I mentioned above, in the case of a real object moving through the air, if we want to get full accuracy in describing the object’s motions, we need to take the turbulent flow of air into account. At least at certain velocities, flowing fluids are known to produce computationally complex patterns—think of the bumps and ripples that move back and forth along the lip of a waterfall, or of eddies of milk stirred into coffee. So an earthly object’s motion will often be carrying out a gnarly computation, and these computations are unpredictable—meaning that the only certain way to get a really detailed prediction of an artillery shell’s trajectory through the air is to simulate the motion every step of the way. The computation performed by the physical motion is unpredictable in the sense of not being reducible to a quick shortcut method. (By the way, simulating trajectories was the very purpose for which the U. S. funded the first electronic computer, ENIAC, in 1946, the same year in which I was born.)
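To make the contrast concrete, here is a small sketch of my own (the numbers are made-up illustrative values, not anything from the essay): the Physics 101 vacuum-range formula is a genuine shortcut, but once even a crude drag term stands in for the air there is no such formula, and the only way to get the answer is to crunch the trajectory out step by step.

import math

g = 9.81                      # gravity, m/s^2
v0 = 300.0                    # assumed muzzle velocity, m/s
angle = math.radians(45)      # assumed launch angle

# Shortcut: closed-form range of a projectile in a vacuum.
vacuum_range = v0 ** 2 * math.sin(2 * angle) / g

# No shortcut: step the motion forward with a simple quadratic drag term.
k = 0.0005                    # assumed drag coefficient per unit mass, 1/m
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
dt = 0.01                     # time step, seconds
while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt
    vy -= (g + k * speed * vy) * dt
    x += vx * dt
    y += vy * dt

print("vacuum formula:", round(vacuum_range), "m")
print("with drag, simulated step by step:", round(x), "m")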
Physical laws provide, at best, a recipe for how the world might be computed in parallel, particle by particle and region by region. But—unless you have access to some so-far-unavailable ultra-super computer that simulates reality faster than the world does itself—the only way to actually learn the results is to wait for the actual physical process to work itself out. There is a fundamental gap between T-shirt physics equations and the unpredictable gnarl of daily life.
Some SF Thought Experiments
One of the nice things about science fiction is that it lets us carry out thought experiments. Mathematicians adopt axioms and deduce the consequences. Computer scientists write programs and observe the results of letting the programs run. Science fiction writers put characters into a world with arbitrary rules and work out what happens.
Science fiction is a powerful futurological tool because, in practice, there are no quick shortcuts for predicting the effects of new technological developments. Only if you place the new tech into a fleshed-out fictional world and simulate the effects in your novelistic reality can you get a clear image of what might happen.
This relates to the ideas I’ve been talking about. We can’t predict in advance the outcomes of naturally occurring gnarly systems; we can only simulate (with great effort) their evolution step by step. In other words, when it comes to futurology, only the most trivial changes to reality have easily predictable consequences. If I want to imagine what our world will be like one year after the arrival of, say, soft plastic robots, the only way to get a realistic vision is to fictionally simulate society’s reactions during the intervening year.
These days I’ve been working on a fictional thought experiment about using natural systems to replace conventional computers. My starting point is the observed fact that gnarly natural systems compute much faster than our supercomputers. Although in principle a supercomputer can simulate a given natural process, such simulations are at present very much slower than what nature does. It’s a simple matter of resources: a natural system is inherently parallel, with all its parts being updated at once. And an ordinary-sized object is made up of something on the order of an octillion atoms (that’s ten to the 27th power). Naturally occurring systems update their states much faster than our digital machines can model the process. That’s why existing computer simulations of reality are still rather crude.