You are not a Gadget: A Manifesto


by Jaron Lanier


  In the second story, protohumans have become successful enough that more of them are surviving, finding mates, and reproducing. They are making all kinds of weird sounds because evolution allows experimentation to run wild, so long as it doesn’t have a negative effect on survival. Meanwhile, the protohumans are doing a lot of things in groups, and their brains start correlating certain distinctive social vocalizations with certain events. Gradually, a large number of approximate words come into use. There is no clear boundary at first between words, phrases, emotional inflection, and any other part of language.

  The second story seems more likely to me. Protohumans would have been doing something like what big computers are starting to do now, but with the superior pattern-recognizing capabilities of a brain. While language has become richer over time, it has never become absolutely precise. The ambiguity continues to this day and allows language to grow and change. We are still living out the second story when we come up with new slang, such as “bling” or “LOL.”

  So this is an ironic moment in the history of computer science. We are beginning to succeed at using computers to analyze data without the constraints of rigid grammarlike systems. But when we use computers to create, we are confined to equally rigid 1960s models of how information should be structured. The hope that language would be like a computer program has died. Instead, music has changed to become more like a computer program.

  Even if the second story happened, and is still happening, language has not necessarily become more varied. Rules of speech may have eventually emerged that place restrictions on variety. Maybe those late-arriving rules help us communicate more precisely or just sound sexy and high status, or more likely a little of both. Variety doesn’t always have to increase in every way.

  Retropolis Redux

  Variety could even decrease over time. In Chapter 9, I explained how the lack of stylistic innovation is affecting the human song right now. If you accept that there has been a recent decrease in stylistic variety, the next question is “Why?” I have already suggested that the answer may be connected with the problem of fragment liberation and the hive mind.

  Another explanation, which I also think possible, is that the change since the mid-1980s corresponds with the appearance of digital editing tools, such as MIDI, for music. Digital tools have more impact on the results than previous tools: if you deviate from the kind of music a digital tool was designed to make, the tool becomes difficult to use. For instance, it’s far more common these days for music to have a clockwork-regular beat. This may be largely because some of the most widely used music software becomes awkward to use and can even produce glitches if you vary the tempo much while editing. In predigital days, tools also influenced music, but not nearly as dramatically.
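  To make the grid effect concrete, here is a minimal sketch of the kind of quantization a MIDI-style editor applies; the note timings and grid size are invented for illustration, and no particular product works exactly this way. A slightly uneven human performance goes in, and a clockwork-regular one comes out.

```python
# A minimal sketch of grid quantization, the operation behind the "clockwork-regular
# beat." The performance data and grid size below are made up for illustration.

def quantize(onsets_in_beats, grid=0.25):
    """Snap each note onset (measured in beats) to the nearest grid line (here, 16th notes)."""
    return [round(t / grid) * grid for t in onsets_in_beats]

# A human performance: roughly on the beat, with small expressive deviations.
played = [0.02, 0.98, 2.05, 2.97, 4.01, 5.04]

print(quantize(played))  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0] -- the deviations are gone
```

  The expressive wobble that gave the performance its feel is exactly what the grid discards, and the tool makes discarding it the path of least resistance.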

  Rendezvous with Rama

  In Chapter 2 I argued that the following question can never be asked scientifically: What is the nature of consciousness? No experiment can even show that consciousness exists.

  In this chapter, I am wearing a different hat and describing the role computer models play in neuroscience. Do I have to pretend that consciousness doesn’t exist at all while I’m wearing this other hat (probably a cap studded with electrodes)?

  Here is the way I answer that question: While you can never capture the nature of consciousness, there are ways to get closer and closer to it. For instance, it is possible to ask what meaning is, even if we cannot ask about the experience of meaning.

  V. S. Ramachandran, a neuroscientist at the University of California at San Diego and the Salk Institute, has come up with a research program to approach the question of meaning with remarkable concreteness. Like many of the best scientists, Rama (as he is known to his colleagues) is exploring in his work highly complex variants of what made him curious as a child. When he was eleven, he wondered about the digestive system of the Venus flytrap, the carnivorous plant. Are the digestive enzymes in its leaves triggered by proteins, by sugars, or by both? Would saccharin fool the traps the way it fools our taste buds?

  Later Rama graduated to studying vision and published his first paper in the journal Nature in 1972, when he was twenty. He is best known for work that overlaps with my own interests: using mirrors as a low-tech form of virtual reality to treat phantom-limb pain and stroke paralysis. His research has also sparked a fruitful ongoing dialogue between the two of us about language and meaning.

  The brain’s cerebral cortex areas are specialized for particular sensory systems, such as vision. There are also overlapping regions between these parts—the cross-modal areas I mentioned earlier in connection with olfaction. Rama is interested in determining how the cross-modal areas of the brain may give rise to a core element of language and meaning: the metaphor.

  A Physiological Basis for Metaphor

  Rama’s canonical example is encapsulated in an experiment known as bouba/kiki. Rama presents test subjects with two words, both of which are pronounceable but meaningless in most languages: bouba and kiki.

  Then he shows the subjects two images: one is a spiky, hystricine shape and the other a rounded cloud form. Match the words and the images! Of course, the spiky shape goes with kiki and the cloud matches bouba. This correlation is cross-cultural and appears to be a general truth for all of humankind.

  The bouba/kiki experiment isolates one form of linguistic abstraction. “Boubaness” or “kikiness” arises from two stimuli that are otherwise utterly dissimilar: an image formed on the retina versus a sound activated in the cochlea of the ear. Such abstractions seem to be linked to the mental phenomenon of metaphor. For instance, Rama finds that patients who have lesions in a cross-modal brain region called the inferior parietal lobule have difficulty both with the bouba/kiki task and with interpreting proverbs or stories that have nonliteral meanings.

  Rama’s experiments suggest that some metaphors can be understood as mild forms of synesthesia. In its more severe forms, synesthesia is an intriguing neurological anomaly in which a person’s sensory systems are crossed—for example, a color might be perceived as a sound.

  What is the connection between the images and the sounds in Rama’s experiment? Well, from a mathematical point of view, kiki and the spiky shape both have “sharp” components that are not so pronounced in bouba; similar sharp components are present in the tongue and hand motions needed to make the kiki sound or draw the kiki picture.
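  One way to make “sharp components” concrete is to ask where a contour’s energy sits in frequency: a jagged outline needs high-frequency components that a rounded one does not. The sketch below is only a toy, with invented outlines standing in for the two shapes, but it shows the contrast.

```python
# Toy illustration of "sharpness" as high-frequency content. The two outlines below
# are invented stand-ins for a kiki-like spiky shape and a bouba-like rounded one.
import numpy as np

def high_freq_share(radius, cutoff=5):
    """Fraction of a contour's (non-constant) spectral energy above the cutoff harmonic."""
    spectrum = np.abs(np.fft.rfft(radius - radius.mean()))
    return spectrum[cutoff:].sum() / spectrum.sum()

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
bouba_like = 1.0 + 0.3 * np.sin(2 * theta)          # smooth, rounded outline
kiki_like = 1.0 + 0.3 * np.sign(np.sin(9 * theta))   # jagged, spiky outline

print(high_freq_share(bouba_like))  # close to 0: nearly all energy in low harmonics
print(high_freq_share(kiki_like))   # close to 1: the jaggedness lives in high harmonics
```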

  Rama suggests that cross-modal abstraction—the ability to make consistent connections across senses—might have initially evolved in lower primates as a better way to grasp branches. Here’s how it could have happened: the cross-modal area of the brain might have evolved to link an oblique image hitting the retina (caused by viewing a tilted branch) with an “oblique” sequence of muscle twitches (leading the animal to grab the branch at an angle).

  The remapping ability then became coopted for other kinds of abstraction that humans excel in, such as the bouba/kiki metaphor. This is a common phenomenon in evolution: a preexisting structure, slightly modified, takes on parallel yet dissimilar functions.

  But Rama also wonders about other kinds of metaphors, ones that don’t obviously fall into the bouba/kiki category. In his current favorite example, Shakespeare has Romeo declare Juliet to be “the sun.” There is no obvious bouba/kiki-like dynamic that would link a young, female, doomed romantic heroine with a bright orb in the sky, yet the metaphor is immediately clear to anyone who hears it.

  Meaning Might Arise from an Artificially Limited Vocabulary

  A few years ago, when Rama and I ran into each other at a conference where we were both speaking, I made a simple suggestion to him about how to extend the bouba/kiki idea to Juliet and the sun.

  Suppose you had a vocabulary of only one hundred words. (This experience will be familiar if you’ve ever traveled to a region where you don’t speak the language.) In that case, you’d have to use your small vocabulary creatively to get by. Now extend that condition to an extreme. Suppose you had a vocabulary of only four nouns: kiki, bouba, Juliet, and sun. When the choices are reduced, the importance of what might otherwise seem like trivial synesthetic or other elements of commonality is amplified.

  Juliet is not spiky, so bouba or the sun, both being rounded, fit better than kiki. (If Juliet were given to angry outbursts of spiky noises, then kiki would be more of a contender, but that’s not our girl in this case.) There are a variety of other minor overlaps that make Juliet more sunlike than boubaish.

  If a tiny vocabulary has to be stretched to cover a lot of territory, then any difference at all between the qualities of words is practically a world of difference. The brain is so desirous of associations that it will then amplify any tiny potential linkage in order to get a usable one. (There’s infinitely more to the metaphor as it appears in the play, of course. Juliet sets like the sun, but when she dies, she doesn’t come back like it does. Or maybe the archetype of Juliet always returns, like the sun—a good metaphor breeds itself into a growing community of interacting ideas.)
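  The amplification is easy to see in a toy calculation. Suppose each word and each referent carried a handful of rough quality scores; the numbers below are invented purely for illustration, not anything measured. With only three candidate words, the small extra overlap between Juliet and the sun is enough to decide the match.

```python
# Toy sketch of the tiny-vocabulary idea. The feature names and scores are invented;
# the point is only that with few candidates, small overlaps in quality get amplified.

# Rough scores on a 0-1 scale: (spikiness, roundedness, brightness, warmth)
qualities = {
    "kiki":  (0.9, 0.1, 0.5, 0.2),
    "bouba": (0.1, 0.9, 0.3, 0.5),
    "sun":   (0.1, 0.9, 1.0, 0.9),
}
juliet = (0.1, 0.8, 0.8, 0.9)  # not spiky, rounded enough, radiant, warm

def similarity(a, b):
    """Closer feature vectors score higher (negative squared distance)."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

best = max(qualities, key=lambda word: similarity(qualities[word], juliet))
print(best)  # "sun" -- bouba is close, but the tiny extra overlap with the sun wins
```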

  Likewise, much of the most expressive slang comes from people with limited formal education who are making creative use of the words they know. This is true of pidgin languages, street slang, and so on. The most evocative words are often the most common ones that are used in the widest variety of ways. For example: Yiddish: Nu? Spanish: Pues.

  One reason the metaphor of the sun fascinates me is that it bears on a conflict that has been at the heart of information science since its inception: Can meaning be described compactly and precisely, or is it something that can emerge only in approximate form based on statistical associations between large numbers of components?

  Mathematical expressions are compact and precise, and most early computer scientists assumed that at least part of language ought to display those qualities too.

  I described above how statistical approaches to tasks like automatic language translation seem to be working better than compact, precise ones. I also argued against the probability of an initial, small, well-defined vocabulary in the evolution of language and in favor of an emergent vocabulary that never became precisely defined.

  There is, however, at least one other possibility I didn’t describe earlier: vocabulary could be emergent, but there could also be an outside factor that initially makes it difficult for a vocabulary to grow as large as the process of emergence might otherwise encourage.

  The bouba/kiki dynamic, along with other similarity-detecting processes in the brain, can be imagined as the basis of the creation of an endless series of metaphors, which could correspond to a boundless vocabulary. But if this explanation is right, the metaphor of the sun might come about only in a situation in which the vocabulary is at least somewhat limited.

  Imagine that you had an endless capacity for vocabulary at the same time that you were inventing language. In that case you could make up an arbitrary new word for each new thing you had to say. A compressed vocabulary might engender less lazy, more evocative words.

  If we had infinite brains, capable of using an infinite number of words, those words would mean nothing, because each one would have too specific a usage. Our early hominid ancestors were spared from that problem, but with the coming of the internet, we are in danger of encountering it now. Or, more precisely, we are in danger of pretending with such intensity that we are encountering it that it might as well be true.

  Maybe the modest brain capacity of early hominids was the source of the limitation of vocabulary size. Whatever the cause, an initially limited vocabulary might be necessary for the emergence of an expressive language. Of course, the vocabulary can always grow later on, once the language has established itself. Modern English has a huge vocabulary.

  Small Brains Might Have Saved Humanity from an Earlier Outbreak of Meaninglessness

  If the computing clouds became effectively infinite, there would be a hypothetical danger that all possible interpolations of all possible words—novels, songs, and facial expressions—would cohabit a Borges-like infinite Wikipedia in the ether. Should that come about, all words would become meaningless, and all meaningful expression would become impossible. But, of course, the cloud will never be infinite.

  * Given my fetish for musical instruments, the NAMM is one of the most dangerous—i.e., expensive—events for me to attend. I have learned to avoid it in the way a recovering gambler ought to avoid casinos.

  † The software I used for this was developed by a small company called Eyematic, where I served for a while as chief scientist. Eyematic has since folded, but Hartmut Neven and many of the original students started a successor company to salvage the software. That company was swallowed up by Google, but what Google plans to do with the stuff isn’t clear yet. I hope they’ll come up with some creative applications along with the expected searching of images on the net.

  * Current commercial displays are not quite aligned with human perception, so they can’t show all the colors we can see, but it is possible that future displays will show the complete gamut perceivable by humans.

  PART FIVE

  Future Humors

  IN THE PREVIOUS SECTIONS, I’ve argued that when you deny the specialness of personhood, you elicit confused, inferior results from people. On the other hand, I’ve also argued that computationalism, a philosophical framework that doesn’t give people a special place, can be extremely useful in scientific speculations. When we want to understand ourselves on naturalistic terms, we must make use of naturalistic philosophy that accounts for a degree of irreducible complexity, and until someone comes up with another idea, computationalism is the only path we have to do that.

  I should also point out that computationalism can be helpful in certain engineering applications. A materialist approach to the human organism is, in fact, essential in some cases in which it isn’t necessarily easy to maintain.

  For instance, I’ve worked on surgical simulation tools for many years, and in such instances I try to temporarily adopt a way of thinking about people’s bodies as if they were fundamentally no different from animals or sophisticated robots. It isn’t work I could do as well without the sense of distance and objectivity.

  Unfortunately, we don’t have access at this time to a single philosophy that makes sense for all purposes, and we might never find one. Treating people as nothing other than parts of nature is an uninspired basis for designing technologies that embody human aspirations. The inverse error is just as misguided: it’s a mistake to treat nature as a person. That is the error that yields confusions like intelligent design.

  I’ve carved out a rough borderline between those situations in which it is beneficial to think of people as “special” and those in which it isn’t.

  But I haven’t done enough.

  It is also important to address the romantic appeal of cybernetic totalism. That appeal is undeniable.

  Those who enter into the theater of computationalism are given all the mental solace that is usually associated with traditional religions. These include consolations for metaphysical yearnings, in the form of the race to climb to ever more “meta” or higher-level states of digital representation, and even a colorful eschatology, in the form of the Singularity. And, indeed, through the Singularity a hope of an afterlife is available to the most fervent believers.

  Is it conceivable that a new digital humanism could offer romantic visions that are able to compete with this extraordinary spectacle? I have found that humanism provides an even more colorful, heroic, and seductive approach to technology.

  This is about aesthetics and emotions, not rational argument. All I can do is tell you how it has been true for me, and hope that you might also find it to be true.

  CHAPTER 14

  Home at Last (My Love Affair with Bachelardian Neoteny)

  HERE I PRESENT my own romantic way to think about technology. It includes cephalopod envy, “postsymbolic communication,” and an idea of progress that is centered on enriching the depth of communication instead of the acquisition of powers. I believe that these ideas are only a few examples of many more awaiting discovery that will prove to be more seductive than cybernetic totalism.

  The Evolutionary Strategy

  Neoteny is an evolutionary strategy exhibited to varying degrees in different species, in which the characteristics of early development are drawn out and sustained into an individual organism’s chronological age.

  For instance, humans exhibit neoteny more than horses. A newborn horse can stand on its own and already possesses many of the other skills of an adult horse. A human baby, by contrast, is more like a fetal horse. It is born without even the most basic abilities of an adult human, such as being able to move about.

  Instead, these skills are learned during childhood. We smart mammals get that way by being dumber when we are born than our more instinctual cousins in the animal world. We enter the world essentially as fetuses in air. Neoteny opens a window to the world before our brains can be developed under the sole influence of instinct.

  It is sometimes claimed that the level of neoteny in humans is not fixed, that it has been rising over the course of human history. My purpose here isn’t to join in a debate about the semantics of nature and nurture. But I think it can certainly be said that neoteny is an immensely useful way of understanding the relationship between change in people and technology, and as with many aspects of our identity, we don’t know as much about the genetic component of neoteny as we surely will someday soon.

 
