I Am a Strange Loop


by Douglas R. Hofstadter


  WHEN I was around twelve, there were kits you could buy that allowed you to put together electronic circuitry that would carry out various interesting functions. You could build a radio, a circuit that would add two binary numbers, a device that could encode or decode a message using a substitution cipher, a “brain” that would play tic-tac-toe against you, and a few other devices like this. Each of these machines was dedicated: it could do just one kind of trick. This is the usual meaning of “machine” that we grow up with. We are accustomed to the idea of a refrigerator as a dedicated machine for keeping things cold, an alarm clock as a dedicated machine for waking us up, and so on. But more recently, we have started to get used to machines that transcend their original purposes.

  Take cellular telephones, for instance. Nowadays, in order to be competitive, cell phones are marketed not so much (maybe even very little) on the basis of their original purpose as communication devices, but instead for the number of tunes they can hold, the number of games you can play on them, the quality of the photos they can take, and who knows what else! Cell phones once were, but no longer are, dedicated machines. And why is that? It is because their inner circuitry has surpassed a certain threshold of complexity, and that fact allows them to have a chameleon-like nature. You can use the hardware inside a cell phone to house a word processor, a Web browser, a gaggle of video games, and on and on. This, in essence, is what the computer revolution is all about: when a certain well-defined threshold — I’ll call it the “Gödel–Turing threshold” — is surpassed, then a computer can emulate any kind of machine.

  This is the meaning of the term “universal machine”, introduced in 1936 by the English mathematician and computer pioneer Alan Turing, and today we are intimately familiar with the basic idea, although most people don’t know the technical term or concept. We routinely download virtual machines from the Web that can convert our universal laptops into temporarily specialized devices for watching movies, listening to music, playing games, making cheap international phone calls, who knows what. Machines of all sorts come to us through wires or even through the air, via software, via patterns, and they swarm into and inhabit our computational hardware. One single universal machine morphs into new functionalities at the drop of a hat, or, more precisely, at the double-click of a mouse. I bounce back and forth between my email program, my word processor, my Web browser, my photo displayer, and a dozen other “applications” that all live inside my computer. At any specific moment, most of these independent, dedicated machines are dormant, sleeping, waiting patiently (actually, unconsciously) to be awakened by my royal double-click and to jump obediently to life and do my bidding.

  Inspired by Gödel’s mapping of PM into itself, Alan Turing realized that the critical threshold for this kind of computational universality comes at exactly that point where a machine is flexible enough to read and correctly interpret a set of data that describe its own structure. At this crucial juncture, a machine can, in principle, explicitly watch how it does any particular task, step by step. Turing realized that a machine that has this critical level of flexibility can imitate any other machine, no matter how complex the latter is. In other words, there is nothing more flexible than a universal machine. Universality is as far as you can go!

  This is why my Macintosh can, if I happen to have fed it the proper software, act indistinguishably from my son’s more expensive and faster “Alienware” computer (running any specific program), and vice versa. The only difference is one of speed, because my Mac will always remain, deep in its guts, a Mac. It will therefore have to imitate the fast, alien hardware by constantly consulting tables of data that explicitly describe the hardware of the Alien, and doing all those lookups is very slow. This is like me trying to get you to sign my signature by writing out a long set of instructions telling you how to draw every tiny curve. In principle it’s possible, but it would be hugely slower than just signing with my own handware!
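  A toy sketch may make this notion of “imitation by consulting a description” concrete. The few lines of Python below are only an illustration, not Turing’s construction, and they are far weaker than a true universal machine, since the “description” being read is a mere finite-state transition table rather than an arbitrary program; but they do show the essential division of labor. One fixed routine reads a description of some other machine as plain data, and by stepping through that description it behaves exactly like the machine described. Hand the very same routine a different table, and it becomes a different machine.

```python
# A minimal sketch of "machine as data": one fixed routine reads a
# description of some other machine (here just a finite-state transition
# table, far weaker than a true universal machine) and then behaves
# exactly like the machine it describes.

def run_machine(description, tape):
    """Interpret `description` on the input string `tape`."""
    state = description["start"]
    for symbol in tape:
        # Consult the description to see what the described machine would do.
        state = description["transitions"][(state, symbol)]
    return state in description["accepting"]

# A description of one particular dedicated machine: a parity checker that
# accepts binary strings containing an even number of 1s.
parity_checker = {
    "start": "even",
    "accepting": {"even"},
    "transitions": {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
}

print(run_machine(parity_checker, "1101"))   # False: three 1s
print(run_machine(parity_checker, "1001"))   # True: two 1s
```

The interpreting routine never changes; all the “machineness” lives in the data it is handed.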

  The Unexpectedness of Universality

  There is a tight analogy linking universal machines of this sort with the universality I earlier spoke of (though I didn’t use that word) when I described the power of Principia Mathematica. What Bertrand Russell and Alfred North Whitehead did not suspect, but what Kurt Gödel realized, is that, simply by virtue of representing certain fundamental features of the positive integers (such basic facts as commutativity, distributivity, and the law of mathematical induction), they had unwittingly made their formal system PM surpass a key threshold that made it “universal”, which is to say, capable of defining number-theoretical functions that imitate other patterns of arbitrary complexity (or indeed, even capable of turning around and imitating itself — giving rise to Gödel’s black-belt maneuver).
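  The flavor of that encoding trick can be suggested with a small sketch (this is only an illustration of the general idea, not Gödel’s actual 1931 numbering; the tiny alphabet and its symbol codes are arbitrary choices made here for brevity): assign each formal symbol a small number, then pack a whole string of symbols into a single integer by using prime exponents, so that the string can later be recovered, without loss, by factoring.

```python
# A crude illustration (not Gödel's actual 1931 numbering) of the trick that
# lets formulas masquerade as integers: give each symbol a small code, then
# pack a whole string of symbols into one integer via prime exponents.

CODES = {"0": 1, "S": 2, "+": 3, "=": 4}          # an arbitrary toy alphabet
SYMBOLS = {code: sym for sym, code in CODES.items()}

def primes(n):
    """Return the first n primes (naive trial division; fine for a toy)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula such as 'S0+S0=SS0' as a single integer."""
    number = 1
    for p, symbol in zip(primes(len(formula)), formula):
        number *= p ** CODES[symbol]
    return number

def decode(number):
    """Recover the formula by reading off the exponent of each prime in turn."""
    formula = []
    for p in primes(32):                 # plenty of primes for short formulas
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        if exponent == 0:
            break
        formula.append(SYMBOLS[exponent])
    return "".join(formula)

n = godel_number("S0+S0=SS0")
print(n)            # one large integer standing in for the whole formula
print(decode(n))    # 'S0+S0=SS0' comes back out intact
```

Once formulas have become numbers in this way, statements of arithmetic can double as statements about formulas.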

  Russell and Whitehead did not realize what they had wrought because it didn’t occur to them to use PM to “simulate” anything else. That idea was not on their radar screen (for that matter, radar itself wasn’t on anybody’s radar screen back then). Prime numbers, squares, sums of two squares, sums of two primes, Fibonacci numbers, and so forth were seen merely as beautiful mathematical patterns — and patterns consisting of numbers, though fabulously intricate and endlessly fascinating, were not thought of as being isomorphic to anything else, let alone as being stand-ins for, and thus standing for, anything else. After Gödel and Turing, though, such naïveté went down the drain in a flash.

  By and large, the engineers who designed the earliest electronic computers were as unaware as Russell and Whitehead had been of the richness that they were unwittingly bringing into being. They thought they were building machines of very limited, and purely military, scopes — for instance, machines to calculate the trajectories of ballistic missiles, taking wind and air resistance into account, or machines to break very specific types of enemy codes. They envisioned their computers as being specialized, single-purpose machines — a little like wind-up music boxes that could play just one tune each.

  But at some point, when Alan Turing’s abstract theory of computation, based in large part on Gödel’s 1931 paper, collided with the concrete engineering realities, some of the more perceptive people (Turing himself and John von Neumann especially) put two and two together and realized that their machines, incorporating the richness of integer arithmetic that Gödel had shown was so potent, were thereby universal. All at once, these machines were like music boxes that could read arbitrary paper scrolls with holes in them, and thus could play any tune. From then on, it was simply a matter of time until cell phones started being able to don many personas other than just the plain old cell-phone persona. All they had to do was surpass that threshold of complexity and memory size that limited them to a single “tune”, and then they could become anything.

  The early computer engineers thought of their computers as number-crunching devices and did not see numbers as a universal medium. Today we (and by “we” I mean our culture as a whole, rather than specialists) do not see numbers that way either, but our lack of understanding is for an entirely different reason — in fact, for exactly the opposite reason. Today it is because all those numbers are so neatly hidden behind the screens of our laptops and desktops that we utterly forget they are there. We watch virtual football games unfolding on our screen between “dream teams” that exist only inside the central processing unit (which is carrying out arithmetical instructions, just as it was designed to do). Children build virtual towns inhabited by little people who virtually ride by on virtual bicycles, with leaves that virtually fall from trees and smoke that virtually dissipates into the virtual air. Cosmologists create virtual galaxies, let them loose, and watch what happens as they virtually collide. Biologists create virtual proteins and watch them fold up according to the complex virtual chemistry of their constituent virtual submolecules.

  I could list hundreds of things that take place on computer screens, but few people ever think about the fact that all of this is happening courtesy of addition and multiplication of integers way down at the hardware level. But that is exactly what’s happening. We don’t call computers computers for nothing, after all! They are, in fact, computing sums and products of integers expressed in binary notation. And in that sense, Gödel’s world-dazzling, Russell-crushing, Hilbert-toppling vision of 1931 has become such a commonplace in our downloading, upgrading, gigabyte culture that although we are all swimming in it all the time, hardly anyone is in the least aware of it. Just about the only trace of the original insight that remains visible, or rather, “audible”, around us is the very word “computer”. That term tips you off, if you bother to think about it, to the fact that underneath all the colorful pictures, seductive games, and lightning-fast Web searches, there is nothing going on but integer arithmetic. What a hilarious joke!

  Actually, it’s more ambiguous than that, and for the same reasons I elaborated in Chapter 11. Wherever there is a pattern, it can be seen either as itself or as standing for anything to which it is isomorphic. Words that apply to Pomponnette’s straying also apply, as it happens, to Aurélie’s straying, and neither interpretation is truer than the other, even if one of them was the originally intended one. Likewise, an operation on an integer that is written out in binary notation (for instance, the conversion of “0000000011001111” into “1100111100000000”) that one person might describe as multiplication by 256 might be described by another observer as a left-shift by eight bits, and by another observer as the transfer of a color from one pixel to its neighbor, and by someone else as the deletion of an alphanumeric character in a file. As long as each one is a correct description of what’s happening, none of them is privileged. The reason we call computers “computers”, then, is historical. They originated as integer-calculation machines, and they are still of course validly describable as such — but we now realize, as Kurt Gödel first did back in 1931, that such devices can be equally validly perceived and talked about in terms that are fantastically different from what their originators intended.
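  The arithmetic in that example is easy to verify directly. The short sketch below does so; the “pixel” reading is just one assumed convention about where those two bytes happen to live, included only to make the point that the descriptions differ while the event does not.

```python
# One and the same 16-bit event, described more than one way. The "pixel"
# reading assumes (purely for illustration) a layout in which the low byte
# holds one pixel's value and the high byte holds its neighbor's.

word = 0b0000000011001111                    # the pattern "0000000011001111"

as_multiplication = (word * 256) & 0xFFFF    # "multiplication by 256"
as_shift = (word << 8) & 0xFFFF              # "a left-shift by eight bits"

print(format(as_multiplication, "016b"))     # 1100111100000000
print(format(as_shift, "016b"))              # 1100111100000000
print(as_multiplication == as_shift)         # True: one event, many readings
```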

  Universal Beings

  We human beings, too, are universal machines of a different sort: our neural hardware can copy arbitrary patterns, even if evolution never had any grand plan for this kind of “representational universality” to come about. Through our senses and then our symbols, we can internalize external phenomena of many sorts. For example, as we watch ripples spreading on a pond, our symbols echo their circular shapes, abstract them, and can replay the essence of those shapes much later. I say “the essence” because some — in fact most — detail is lost; as you know very well, we retain not all levels of what we encounter but only those that our hardware, through the pressures of natural selection, came to consider the most important. I also have to make clear (although I hope no reader would fall into such a trap) that when I say that our symbols “internalize” or “copy” external patterns, I don’t mean that when we watch ripples on a pond, or when we “replay” a memory of such a scene (or of many such scenes blurred together), there literally are circular patterns spreading out on some horizontal surface inside our brains. I mean that a host of structures are jointly activated that are connected with the concepts of water, wetness, ponds, horizontal surfaces, circularity, expansion, things bobbing up and down, and so forth. I am not talking about a movie screen inside the head!

  Representational universality also means that we can import ideas and happenings without having to be direct witnesses to them. For example, as I mentioned in Chapter 11, humans (but not most other animals) can easily process the two-dimensional arrays of pixels on a television screen and can see those ever-changing arrays as coding for distant or fictitious three-dimensional situations evolving over time.

  On a skiing vacation in the Sierra Nevada, far away from home, my children and I took advantage of the “doggie cam” at the Bloomington kennel where we had boarded our golden retriever Ollie. Thanks to the World Wide Web, we were treated to a jerky sequence of stills of a couple of dozen dogs meandering haphazardly in a fenced-in play area outdoors, looking a bit like particles undergoing Brownian motion, and although each pooch was rendered by a pretty small array of pixels, we could often recognize our Ollie by subtle features such as the angle of his tail. For some reason, the kids and I found this act of visual eavesdropping on Ollie quite hilarious. Although we could easily describe this droll scene to our human friends, and although I would bet a considerable sum that these few lines of text have vividly evoked in your mind both the canine scene at the kennel and the human scene at the ski resort, we all realized that there was not a hope in hell that we could ever explain to Ollie himself that we had been “spying” on him from thousands of miles away. Ollie would never know, and could never know.

  Why not? Because Ollie is a dog, and dogs’ brains are not universal. They cannot absorb ideas like “jerky still photo”, “24-hour webcam”, “spying on dogs playing in the kennel”, or even, for that matter, “2,000 miles away”. This is a huge and fundamental breach between humans and dogs — indeed, between humans and all other species. It is this that sets us apart, makes us unique, and, in the end, gives us what we call “souls”.

  In the world of living things, the magic threshold of representational universality is crossed whenever a system’s repertoire of symbols becomes extensible without any obvious limit. This threshold was crossed on the species level somewhere along the way from earlier primates to ourselves. Systems above this counterpart to the Gödel–Turing threshold — let’s call them “beings”, for short — have the capacity to model inside themselves other beings that they run into — to slap together quick-and-dirty models of beings that they encounter only briefly, to refine such coarse models over time, even to invent imaginary beings from whole cloth. (Beings with a propensity to invent other beings are often informally called “novelists”.)

  Once beyond the magic threshold, universal beings seem inevitably to become ravenously thirsty for tastes of the interiority of other universal beings. This is why we have movies, soap operas, television news, blogs, webcams, gossip columnists, People magazine, and The Weekly World News, among others. People yearn to get inside other people’s heads, to “see out” from inside other crania, to gobble up other people’s experiences.

  Although I have been depicting it somewhat cynically, representational universality, and the nearly insatiable hunger that it creates for vicarious experiences, is but a stone’s throw away from empathy, which I see as the most admirable quality of humanity. To “be” someone else in a profound way is not merely to see the world intellectually as they see it and to feel rooted in the places and times that molded them as they grew up; it goes much further than that. It is to adopt their values, to take on their desires, to live their hopes, to feel their yearnings, to share their dreams, to shudder at their dreads, to participate in their life, to merge with their soul.

  Being Visited

  One morning not long ago I woke up with the memory of my father richly pulsating inside my cranium. For a shining moment my dreaming mind seemed to have brought him back to life in the most vivid fashion, even though “he” had had to float in the rarefied medium of my brain’s stage. It felt, nonetheless, like he was really back again for a short while, and then, sadly, all at once he just went poof. How is this bittersweet kind of experience, so familiar to every adult human being, to be understood? What degree of reality do these software beings that inhabit us have? Why did I put “he” in quotation marks, a few lines up? Why the caution, why the hedging?

  What is really going on when you dream or think more than fleetingly about someone you love (whether that person died many years ago or is right now on the other end of a phone conversation with you)? In the terminology of this book, there is no ambiguity about what is going on. The symbol for that person has been activated inside your skull, lurched out of dormancy, as surely as if it had an icon that someone had double-clicked. And the moment this happens, much as with a game that has opened up on your screen, your mind starts acting differently from how it acts in a “normal” context. You have allowed yourself to be invaded by an “alien universal being”, and to some extent the alien takes charge inside your skull, starts pushing things around in its own fashion, making words, ideas, memories, and associations bubble up inside your brain that ordinarily would not do so. The activation of the symbol for the loved person swivels into action whole sets of coordinated tendencies that represent that person’s cherished style, their idiosyncratic way of being embedded in the world and looking out at it. As a consequence, during this visitation of your cranium, you will surprise yourself by coming out with different jokes from those you would normally make, seeing things in a different emotional light, making different value judgments, and so forth.

  But the crux of the matter for us right now is the following question: Is your symbol for another person actually an “I”? Can that symbol have inner experiences? Or is it as unalive as is your symbol for a stick or a stone or a playground swing? I chose the example of a playground swing for a reason. The moment I suggest it to you, no matter what playground you have located it in, no matter what you imagine its seat to be made of, no matter how high you imagine the bar it is dangling from, you can see it swinging back and forth, wiggling slightly in that funny way that swings wiggle, losing energy unless pushed, and you can also hear its softly clinking chains. Though no one would call the swing itself alive, there is no doubt that its mental proxy is dancing in the seething substrate of your brain. After all, that is what a brain is made for — to be a stage for the dance of active symbols.

  If you seriously believe, as I do and have been asserting for most of this book, that concepts are active symbols in a brain, and if furthermore you seriously believe that people, no less than objects, are represented by symbols in the brain (in other words, that each person that one knows is internally mirrored by a concept, albeit a very complicated one, in one’s brain), and if lastly you seriously believe that a self is also a concept, just an even more complicated one (namely, an “I”, a “personal gemma”, a rock-solid “marble”), then it is a necessary and unavoidable consequence of this set of beliefs that your brain is inhabited to varying extents by other I’s, other souls, the extent of each one depending on the degree to which you faithfully represent, and resonate with, the individual in question. I include the proviso “and resonate with” because one can’t just slip into any old soul, any more than one can slip into any old piece of clothing; some souls and some suits simply “fit” better than others do.

 
