You are not a Gadget: A Manifesto


by Jaron Lanier


  The same could be said about a great many topics other than math. If you’re interested in the history of a rare musical instrument, for instance, you can delve into the internet archive and find personal sites devoted to it, though they probably were last updated around the time Wikipedia came into being. Choose a topic you know something about and take a look.

  Wikipedia has already been elevated into what might be a permanent niche. It might become stuck as a fixture, like MIDI or the Google ad exchange services. That makes it important to be aware of what you might be missing. Even in a case in which there is an objective truth that is already known, such as a mathematical proof, Wikipedia distracts from the potential for learning how to bring it into the conversation in new ways. Individual voice—the opposite of wikiness—might not matter to mathematical truth, but it is the core of mathematical communication.

  * See Noam Cohen, “The Latest on Virginia Tech, From Wikipedia,” New York Times, April 23, 2007. In 2009, Twitter became the focus of similar stories because of its use by protesters of Iran’s disputed presidential election.

  † See Jamin Brophy-Warren, “Oh, That John Locke,” Wall Street Journal, June 16, 2007.

  * Once again, I have to point out that where Wikipedia is useful, it might not be uniquely useful. For instance, there is an alternative choice for a site with raw, dry math definitions, run as a free service by a company that makes software for mathematicians. Go to http://mathworld.wolfram.com/.

  * For example, figuring out how to present a hendecachoron, which is a four-dimensional shape I love, in an accessible, interactive web animation is an incredibly hard task that has still not been completed. By contrast, contributing to a minimal, raw, dry, but accurate entry about a hendecachoron on Wikipedia is a lot easier, but it offers nothing to someone encountering the shape for the first time.

  This shape is amazing because it is symmetrical like a cube, which has six faces, but the symmetry is of a prime number, eleven, instead of a composite number like six. This is weird, because prime numbers can’t be broken into sets of identical parts, so it sounds a little odd that there could be prime-numbered geometric symmetries. It’s possible only because the hendecachoron doesn’t fit inside a sphere, in the way a cube can. It fits, instead, along the contours of a close cousin of the sphere, which is called the real projective plane. This shape is like a doubly extreme version of the famous Klein bottle. None other than Freeman Dyson made me aware of the hendecachoron, and Carlo Séquin and I worked on producing the first-ever image of one.
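
  To make the oddness concrete: the subsymmetries of an n-fold symmetry correspond to the divisors of n, so a sixfold symmetry contains twofold and threefold subsymmetries, while an elevenfold symmetry contains nothing between the whole and the trivial. A toy sketch, purely my own illustration:

```python
# Toy illustration: the rotational subsymmetries of an n-fold symmetry
# correspond to the divisors of n (Lagrange's theorem for cyclic groups).

def subsymmetry_orders(n):
    """Orders of the rotational subsymmetries of an n-fold symmetry."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(subsymmetry_orders(6))   # [1, 2, 3, 6] -- six splits into identical parts
print(subsymmetry_orders(11))  # [1, 11]      -- eleven cannot be broken down
```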

  PART FOUR

  Making the Best of Bits

  IN THIS SECTION, I will switch to a more positive perspective, examining what distinguishes cybernetic totalism from humanism by considering the evolution of human culture.

  What I hope to demonstrate is that each way of thinking has its proper place and a specific, pragmatic scope, within which it makes sense.

  We should reject cybernetic totalism as a basis for making most decisions but recognize that some of its ideas can be useful methods of understanding.

  The distinction between understanding and creed, between science and ethics, is subtle. I can hardly claim to have mastered it, but I hope the following reports of my progress will be of use.

  CHAPTER 12

  I Am a Contrarian Loop

  VARIETIES OF COMPUTATIONALISM are distinguished; realistic computationalism is defined.

  The Culture of Computationalism

  In Silicon Valley you will meet Buddhists, anarchists, goddess worshippers, Ayn Rand fanatics, self-described Jesus freaks, nihilists, and plenty of libertarians, as well as surprising blends of all of the above and many others who seem to be nonideological. And yet there is one belief system that doesn’t quite mesh with any of these identities that nonetheless serves as a common framework.

  For lack of a better word, I call it computationalism. This term is usually used more narrowly to describe a philosophy of mind, but I’ll extend it to include something like a culture. A first pass at a summary of the underlying philosophy is that the world can be understood as a computational process, with people as subprocesses.

  In this chapter I will explore the uses of computationalism in scientific speculation. I will argue that even if you find computationalism helpful in understanding science, it should not be used in evaluating certain kinds of engineering.

  Three Less-Than-Satisfying Flavors of Computationalism

  Since I’m a rarity in computer science circles—a computationalism critic—I must make clear that computationalism has its uses.

  Computationalism isn’t always crazy. Sometimes it is embraced because avoiding it can bring about other problems. If you want to consider people as special, as I have advised, then you need to be able to say at least a little bit about where the specialness begins and ends. This is similar to, or maybe even coincident with, the problem of positioning the circle of empathy, which I described in Chapter 2. If you hope for technology to be designed to serve people, you must have at least a rough idea of what a person is and is not.

  But there are cases in which any possible setting of a circle can cause problems. Dividing the world into two parts, one of which is ordinary—deterministic or mechanistic, perhaps—and one of which is mystifying, or more abstract, is particularly difficult for scientists. This is the dreaded path of dualism.

  It is awkward to study neuroscience, for instance, if you assume that the brain is linked to some other entity—a soul—on a spirit plane. You have to treat the brain simply as a mechanism you don’t understand if you are to improve your understanding of it through experiment. You can’t declare in advance what you will and will not be able to explain.

  I am contradicting myself here, but the reason is that I find myself playing different roles at different times. Sometimes I am designing tools for people to use, while at other times I am working with scientists trying to understand how the brain works.

  Perhaps it would be better if I could find one single philosophy that I could apply equally to each circumstance, but I find that the best path is to believe different things about aspects of reality when I play these different roles or perform different duties.

  Up to this point, I have described what I believe when I am a technologist. In those instances, I take a mystical view of human beings. My first priority must be to avoid reducing people to mere devices. The best way to do that is to believe that the gadgets I can provide are inert tools and are only useful because people have the magical ability to communicate meaning through them.

  When I put on a different hat—that of a collaborator with scientists—then I believe something else. In those cases, I prefer ideas that don’t involve magical objects, for scientists can study people as if we were not magical at all. Ideally, a scientist ought to be able to study something a bit without destroying it. The whole point of technology, though, is to change the human situation, so it is absurd for humans to aspire to be inconsequential.

  In a scientific role, I don’t recoil from the idea that the brain is a kind of computer, but there is more than one way to use computation as a source of models for human beings. I’ll discuss three common flavors of computationalism and then describe a fourth flavor, the one that I prefer. Each flavor can be distinguished by a different idea about what would be needed to make software as we generally know it become more like a person.

  One flavor is based on the idea that a sufficiently voluminous computation will take on the qualities we associate with people—such as, perhaps, consciousness. One might claim Moore’s law is inexorably leading to superbrains, superbeings, and, perhaps, ultimately, some kind of global or even cosmic consciousness. If this language sounds extreme, be aware that this is the sort of rhetoric you can find in the world of Singularity enthusiasts and extropians.

  If we leave aside the romance of this idea, the core of it is that meaning arises in bits as a result of magnitude. A set of one thousand records in a database that refer to one another in patterns would not be meaningful without a person to interpret it; but perhaps a quadrillion or a googol of database entries can mean something in their own right, even if there is no being explaining them.

  Another way to put it is that if you have enough data and a big and fast enough computer, you can conceivably overcome the problems associated with logical positivism. Logical positivism is the idea that a sentence or another fragment—something you can put in a computer file—means something in a freestanding way that doesn’t require invoking the subjectivity of a human reader. Or, to put it in nerd-speak: “The meaning of a sentence is the instructions to verify it.”

  Logical positivism went out of fashion, and few would claim its banner these days, but it’s enjoying an unofficial resurgence with a computer assist. The new version of the idea is that if you have a lot of data, you can make logical positivism work on a large-scale statistical basis. The thinking goes that within the cloud there will be no need for the numinous halves of traditional oppositions such as syntax/semantics, quantity/quality, content/context, and knowledge/wisdom.
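
  To see what the statistical resurgence looks like in practice, consider the distributional approach to meaning, in which a word is represented by nothing but the counts of the words that appear near it. The corpus, window size, and similarity measure below are arbitrary choices of mine—a toy sketch of the general idea, not any particular real system:

```python
# Toy sketch of statistical "meaning": a word is represented by counts
# of nearby words, and two words "mean" similar things when those count
# patterns are similar. The corpus is invented for illustration.
from collections import Counter
from math import sqrt

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog the dog chased the cat").split()

def context_vector(word, window=2):
    """Counts of words occurring within `window` positions of `word`."""
    vec = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    vec[corpus[j]] += 1
    return vec

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    return dot / (sqrt(sum(v * v for v in a.values())) *
                  sqrt(sum(v * v for v in b.values())))

# "cat" and "dog" occur in similar contexts, so their vectors are close.
print(cosine(context_vector("cat"), context_vector("dog")))
```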

  A second flavor of computationalism holds that a computer program with specific design features—usually related to self-representation and circular references—is similar to a person. Some of the figures associated with this approach are Daniel Dennett and Douglas Hofstadter, though each has his own ideas about what the special features should be.

  Hofstadter suggests that software that includes a “strange loop” bears a resemblance to consciousness. In a strange loop, things are nested within things in such a way that an inner thing is the same as an outer thing.

  If you descend on a city using a parachute, land on a roof, enter the building through a door on that roof, go into a room, open another door to a closet, enter it, and find that there is no floor in the closet and you are suddenly once again falling in the vast sky toward the city, you are in a strange loop. The same notion can perhaps be applied to mental phenomena, when thoughts within thoughts lead back to the original thoughts. Perhaps that process has something to do with self-awareness—and what it is to be a person.
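
  In software, the nearest everyday analog of a strange loop is a cyclic, self-referential structure. Here is a toy rendering of the skydiving scenario—the names and the structure are mine, for illustration only:

```python
# Toy "strange loop": a structure nested within itself, so that
# descending far enough returns you to where you began.
sky = {"below": None}
roof = {"below": None}
closet = {"below": None}

sky["below"] = roof
roof["below"] = closet
closet["below"] = sky      # the innermost thing is the outermost thing

place = sky
for _ in range(3):         # fall through sky, roof, closet...
    place = place["below"]
print(place is sky)        # True: the closet's floor opens onto the sky again
```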

  A third flavor of computationalism is found in web 2.0 circles. In this case, any information structure that can be perceived by some real human to also be a person is a person. This idea is essentially a revival of the Turing test. If you can perceive the hive mind to be recommending music to you, for instance, then the hive is effectively a person.

  I have to admit that I don’t find any of these three flavors of computationalism to be useful on those occasions when I put on my scientist’s hat.

  The first idea, that quantity equals quality in software, is particularly galling, since a computer scientist spends much of his time struggling with the awfulness of what happens to software—as we currently know how to make it, anyway—when it gets large.

  The second flavor is also not helpful. It is fascinating and clever to create software with self-representations and weird loopy structures. Indeed, I have implemented the skydiving scenario in a virtual world. I have never observed any profound change in the capabilities of software systems based on an enhanced degree of this kind of trickery, even though there is still a substantial community of artificial intelligence researchers who expect that benefit to appear someday.

  As for the third flavor—the pop version of the Turing test—my complaint ought to be clear by now. People can make themselves believe in all sorts of fictitious beings, but when those beings are perceived as inhabiting the software tools through which we live our lives, we have to change ourselves in unfortunate ways in order to support our fantasies. We make ourselves dull.

  But there are more ways than these three to think about people as being special from a computational point of view.

  Realistic Computationalism

  The approach to thinking about people computationally that I prefer, on those occasions when such thinking seems appropriate to me, is what I’ll call “realism.” The idea is that humans, considered as information systems, weren’t designed yesterday, and are not the abstract playthings of some higher being, such as a web 2.0 programmer in the sky or a cosmic Spore player. Instead, I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.

  From this point of view, what can make bits have meaning is that their patterns have been hewn out of so many encounters with reality that they aren’t really abstractable bits anymore, but are instead a nonabstract continuation of reality.

  Realism is based on specifics, but we don’t yet know—and might never know—the specifics of personhood from a computational point of view. The best we can do right now is engage in the kind of storytelling that evolutionary biologists sometimes indulge in.

  Eventually data and insight might make the story more specific, but for the moment we can at least construct a plausible story of ourselves in terms of grand-scale computational natural history. A myth, a creation tale, can stand in for a while, to give us a way to think computationally that isn’t as vulnerable to the confusion brought about by our ideas about ideal computers (i.e., ones that only have to run small computer programs).

  Such an act of storytelling is a speculation, but a speculation with a purpose. A nice benefit of this approach is that specifics tend to be more colorful than generalities, so instead of algorithms and hypothetical abstract computers, we will be considering songbirds, morphing cephalopods, and Shakespearean metaphors.

  CHAPTER 13

  One Story of How Semantics Might Have Evolved

  THIS CHAPTER PRESENTS a pragmatic alternation between philosophies (instead of a demand that a single philosophy be applied in all seasons). Computationalism is applied to a naturalistic speculation about the origins of semantics.

  Computers Are Finally Starting to Be Able to Recognize Patterns

  In January 2002 I was asked to give an opening talk and performance for the National Association of Music Merchants,* the annual trade show for makers and sellers of musical instruments. What I did was create a rhythmic beat by making the most extreme funny faces I could in quick succession.

  A computer was watching my face through a digital camera and generating varied opprobrious percussive sounds according to which funny face it recognized in each moment.† (Keeping a rhythm with your face is a strange new trick—we should expect a generation of kids to adopt the practice en masse any year now.)
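
  For readers who want a sense of how such a rig might be wired up today, here is a rough sketch using OpenCV’s stock smile detector as a stand-in for the custom recognizer we used, with the percussion reduced to a printed label. It is an approximation of the idea, not the actual NAMM software:

```python
# Rough sketch of a face-driven rhythm instrument: scan a webcam feed
# for a facial expression and fire a percussive event when one is seen.
# OpenCV's stock Haar cascades stand in for the custom recognizer;
# the "sound" here is just a printed label.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)                # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]     # look for the expression inside the face
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        print("snare!" if len(smiles) else "kick!")
    cv2.imshow("face rhythm", frame)
    if cv2.waitKey(30) & 0xFF == 27:     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```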

  This is the sort of deceptively silly event that should be taken seriously as an indicator of technological change. In the coming years, pattern-recognition tasks like facial tracking will become commonplace. On one level, this means we will have to rethink public policy related to privacy, since hypothetically a network of security cameras could automatically determine where everyone is and what faces they are making, but there are many other extraordinary possibilities. Imagine that your avatar in Second Life (or, better yet, in fully realized, immersive virtual reality) was conveying the subtleties of your facial expressions at every moment.

  There’s an even deeper significance to facial tracking. For many years there was an absolute, unchanging divide between what you could and could not represent or recognize with a computer. You could represent a precise quantity, such as a number, but you could not represent an approximate holistic quality, such as an expression on a face.

  But until recently, computers couldn’t even see a smile. Facial expressions were embedded deep within the imprecise domain of quality, not anywhere close to the other side, the infinitely deciphered domain of quantity. No smile was precisely the same as any other, and there was no way to say precisely what all the smiles had in common. Similarity was a subjective perception of interest to poets—and irrelevant to software engineers.

  While there are still a great many qualities in our experience that cannot be represented in software using any known technique, engineers have finally gained the ability to create software that can represent a smile, and write code that captures at least part of what all smiles have in common. This is an unheralded transformation in our abilities that took place around the turn of our new century. I wasn’t sure I would live to see it, though it continues to surprise me that engineers and scientists I run across from time to time don’t realize it has happened.
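
  What does it mean for code to capture “part of what all smiles have in common”? A crude caricature, with landmark values invented purely for illustration: reduce the mouth to a few measured points and test whether the corners curve upward relative to the center.

```python
# Crude caricature of representing "smileness" as a quantity: a mouth is
# reduced to three landmark heights, and what all smiles share is that
# the corners sit above the center. The values below are invented.

def smileness(left_corner_y, center_y, right_corner_y):
    """Positive when both mouth corners sit above the center (y grows upward)."""
    return min(left_corner_y, right_corner_y) - center_y

print(smileness(2.0, 0.5, 1.8) > 0)    # a broad grin -> True
print(smileness(-1.0, 0.5, -0.8) > 0)  # a frown      -> False
```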

  Pattern-recognition technology and neuroscience are growing up together. The software I used at NAMM was a perfect example of this intertwining. Neuroscience can inspire practical technology rather quickly. The original project was undertaken in the 1990s under the auspices of Christoph von der Malsburg, a University of Southern California neuroscientist, and his students, especially Hartmut Neven. (Von der Malsburg might be best known for his crucial observation in the early 1980s that synchronous firing—that is, when multiple neurons go off at the same moment—is important to the way that neural networks function.)

  In this case, he was trying to develop hypotheses about what functions are performed by particular patches of tissue in the visual cortex—the part of the brain that initially receives input from the optic nerves. There aren’t yet any instruments that can measure what a large, complicated neural net is doing in detail, especially while it is part of a living brain, so scientists have to find indirect ways of testing their ideas about what’s going on in there.

  One way is to build the idea into software and see if it works. If a hypothesis about what a part of the brain is doing turns out to inspire a working technology, the hypothesis certainly gets a boost. But it isn’t clear how strong a boost. Computational neuroscience takes place on an imprecise edge of scientific method. For example, while facial expression tracking software might seem to reduce the degree of ambiguity present in the human adventure, it actually might add more ambiguity than it takes away. This is because, strangely, it draws scientists and engineers into collaborations in which science gradually adopts methods that look a little like poetry and storytelling. The rules are a little fuzzy, and probably will remain so until there is vastly better data about what neurons are actually doing in a living brain.

 
