You Are Not a Gadget: A Manifesto


by Jaron Lanier


  Historian George Dyson suggests that Turing might have sided against the cybernetic totalists. For instance, here is an excerpt from a paper Turing wrote in 1939, titled “Systems of Logic Based on Ordinals”: “We have been trying to see how far it is possible to eliminate intuition, and leave only ingenuity. We do not mind how much ingenuity is required, and therefore assume it to be available in unlimited supply.” The implication seems to be that we are wrong to imagine that ingenuity can be infinite, even with computing clouds, and that intuition will therefore never be made obsolete.

  Turing’s 1950 paper on the test includes this extraordinary passage: “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.”

  CHAPTER 3

  The Noosphere Is Just Another Name for Everyone’s Inner Troll

  SOME OF THE fantasy objects arising from cybernetic totalism (like the noosphere, which is a supposed global brain formed by the sum of all the human brains connected through the internet) happen to motivate infelicitous technological designs. For instance, designs that celebrate the noosphere tend to energize the inner troll, or bad actor, within humans.

  The Moral Imperative to Create the Blandest Possible Bible

  According to a new creed, we technologists are turning ourselves, the planet, our species, everything, into computer peripherals attached to the great computing clouds. The news is no longer about us but about the big new computational object that is greater than us.

  The colleagues I disagree with often conceive of our discussions as a contest between a Luddite (who, me?) and the future. But there is more than one possible technological future, and the debate should be about how best to identify and act on whatever freedoms of choice we still have, not about who’s the Luddite.

  Some people say that doubters of the one true path, like myself, are like the shriveled medieval church officials who fought against poor Johannes Gutenberg’s press. We are accused of fearing change, just as the medieval Church feared the printing press. (We might also be told that we are the sort who would have repressed Galileo or Darwin.)

  What these critics forget is that printing presses in themselves provide no guarantee of an enlightened outcome. People, not machines, made the Renaissance. The printing that takes place in North Korea today, for instance, is nothing more than propaganda for a personality cult. What is important about printing presses is not the mechanism, but the authors.

  An impenetrable tone deafness rules Silicon Valley when it comes to the idea of authorship. This was as clear as ever when John Updike and Kevin Kelly exchanged words on the question of authorship in 2006. Kevin suggested that it was not just a good thing, but a “moral imperative” that all the world’s books would soon become effectively “one book” once they were scanned, searchable, and remixable in the universal computational cloud.

  Updike used the metaphor of the edges of the physical paper in a physical book to communicate the importance of enshrining the edges between individual authors. It was no use. Doctrinaire web 2.0 enthusiasts only perceived that Updike was being sentimental about an ancient technology.

  The approach to digital culture I abhor would indeed turn all the world’s books into one book, just as Kevin suggested. It might start to happen in the next decade or so. Google and other companies are scanning library books into the cloud in a massive Manhattan Project of cultural digitization. What happens next is what’s important. If the books in the cloud are accessed via user interfaces that encourage mashups of fragments that obscure the context and authorship of each fragment, there will be only one book. This is what happens today with a lot of content; often you don’t know where a quoted fragment from a news story came from, who wrote a comment, or who shot a video. A continuation of the present trend will make us like various medieval religious empires, or like North Korea, a society with a single book.*

  The ethereal, digital replacement technology for the printing press happens to have come of age in a time when the unfortunate ideology I’m criticizing dominates technological culture. Authorship—the very idea of the individual point of view—is not a priority of the new ideology.

  The digital flattening of expression into a global mush is not presently enforced from the top down, as it is in the case of a North Korean printing press. Instead, the design of software builds the ideology into those actions that are the easiest to perform on the software designs that are becoming ubiquitous. It is true that by using these tools, individuals can author books or blogs or whatever, but people are encouraged by the economics of free content, crowd dynamics, and lord aggregators to serve up fragments instead of considered whole expressions or arguments. The efforts of authors are appreciated in a manner that erases the boundaries between them.

  The one collective book will absolutely not be the same thing as the library of books by individuals it is bankrupting. Some believe it will be better; others, including me, believe it will be disastrously worse. As the famous line goes from Inherit the Wind: “The Bible is a book … but it is not the only book.” Any singular, exclusive book, even the collective one accumulating in the cloud, will become a cruel book if it is the only one available.

  Nerd Reductionism

  One of the first printed books that wasn’t a bible was 1499’s Hypnerotomachia Poliphili, or “Poliphilo’s Strife of Love in a Dream,” an illustrated, erotic, occult adventure through fantastic architectural settings. What is most interesting about this book, which looks and reads like a virtual reality fantasy, is that something fundamental about its approach to life—its intelligence, its worldview—is alien to the Church and the Bible.

  It’s easy to imagine an alternate history in which everything that was printed on early presses went through the Church and was conceived as an extension of the Bible. “Strife of Love” might have existed in this alternate world, and might have been quite similar. But the “slight” modifications would have consisted of trimming the alien bits. The book would no longer have been as strange. And that tiny shift, even if it had been minuscule in terms of word count, would have been tragic.

  This is what happened when elements of indigenous cultures were preserved but de-alienated by missionaries. We know a little about what Aztec or Inca music sounded like, for instance, but the bits that were trimmed to make the music fit into the European idea of church song were the most precious bits. The alien bits are where the flavor is found. They are the portals to strange philosophies. What a loss to not know how New World music would have sounded alien to us! Some melodies and rhythms survived, but the whole is lost.

  Something like missionary reductionism has happened to the internet with the rise of web 2.0. The strangeness is being leached away by the mush-making process. Individual web pages as they first appeared in the early 1990s had the flavor of personhood. MySpace preserved some of that flavor, though a process of regularized formatting had begun. Facebook went further, organizing people into multiple-choice identities, while Wikipedia seeks to erase point of view entirely.

  If a church or government were doing these things, it would feel authoritarian, but when technologists are the culprits, we seem hip, fresh, and inventive. People will accept ideas presented in technological form that would be abhorrent in any other form. It is utterly strange to hear my many old friends in the world of digital culture claim to be the true sons of the Renaissance without realizing that using computers to reduce individual expression is a primitive, retrograde activity, no matter how sophisticated your tools are.

  Rejection of the Idea of Quality Results in a Loss of Quality

  The fragments of human effort that have flooded the internet are perceived by some to form a hive mind, or noosphere. These are some of the terms used to describe what is thought to be a new superintelligence emerging on a global basis on the net. Some people, like Larry Page, one of the Google founders, expect the internet to come alive at some point, while others, like science historian George Dyson, think that might already have happened. Derivative terms like “blogosphere” have become commonplace.

  A fashionable idea in technical circles is that quantity not only turns into quality at some extreme of scale, but also does so according to principles we already understand. Some of my colleagues think a million, or perhaps a billion, fragmentary insults will eventually yield wisdom that surpasses that of any well-thought-out essay, so long as sophisticated secret statistical algorithms recombine the fragments. I disagree. A trope from the early days of computer science comes to mind: garbage in, garbage out.

  There are so many examples of disdain for the idea of quality within the culture of web 2.0 enthusiasts that it’s hard to choose just one. I’ll pick hive enthusiast Clay Shirky’s idea that there is a vast cognitive surplus waiting to be harnessed.

  Certainly there is broad agreement that there are huge numbers of people who are undereducated. Of those who are well educated, many are underemployed. If we want to talk about unmet human potential, we might also mention the huge number of people who are desperately poor. The waste of human potential is overwhelming. But these are not the problems that Shirky is talking about.

  What he means is that quantity can overwhelm quality in human expression. Here’s a quote from a speech Shirky gave in April 2008:

  And this is the other thing about the size of the cognitive surplus we’re talking about. It’s so large that even a small change could have huge ramifications. Let’s say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year … One percent of that is 98 Wikipedia projects per year worth of participation.
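  For readers who want to see the arithmetic behind that figure, here is a minimal back-of-envelope sketch. The estimate that Wikipedia embodies on the order of 100 million hours of cumulative human effort is Shirky’s own from the same talk; the exact inputs behind his “98” are an assumption here.

```python
# Rough check of Shirky's "cognitive surplus" figure (all inputs are round estimates).
tv_hours_per_year = 1_000_000_000_000   # "roughly a trillion hours of TV a year"
carved_out_fraction = 0.01              # the hypothetical 1 percent
wikipedia_hours = 100_000_000           # Shirky's estimate of the effort embodied in Wikipedia

surplus_hours = tv_hours_per_year * carved_out_fraction
print(surplus_hours / wikipedia_hours)  # ~100 Wikipedia-scale projects per year
```

  Rounder inputs give about 100 rather than Shirky’s 98, but the order of magnitude is the point.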

  So how many seconds of salvaged erstwhile television time would need to be harnessed to replicate the achievements of, say, Albert Einstein? It seems to me that even if we could network all the potential aliens in the galaxy—quadrillions of them, perhaps—and get each of them to contribute some seconds to a physics wiki, we would not replicate the achievements of even one mediocre physicist, much less a great one.

  Absent Intellectual Modesty

  There are at least two ways to believe in the idea of quality. You can believe there’s something ineffable going on within the human mind, or you can believe we just don’t understand what quality in a mind is yet, even though we might someday. Either of those opinions allows one to distinguish quantity and quality. In order to confuse quantity and quality, you have to reject both possibilities.

  The mere possibility of there being something ineffable about personhood is what drives many technologists to reject the notion of quality. They want to live in an airtight reality that resembles an idealized computer program, in which everything is understood and there are no fundamental mysteries. They recoil from even the hint of a potential zone of mystery or an unresolved seam in one’s worldview.

  This desire for absolute order usually leads to tears in human affairs, so there is a historical reason to distrust it. Materialist extremists have long seemed determined to win a race with religious fanatics: Who can do the most damage to the most people?

  At any rate, there is no evidence that quantity becomes quality in matters of human expression or achievement. What matters instead, I believe, is a sense of focus, a mind in effective concentration, and an adventurous individual imagination that is distinct from the crowd.

  Of course, I can’t describe what it is that a mind does, because no one can. We don’t understand how brains work. We understand a lot about how parts of brains work, but there are fundamental questions that have not even been fully articulated yet, much less answered.

  For instance, how does reason work? How does meaning work? The usual ideas currently in play are variations on the notion that pseudo-Darwinian selection goes on within the brain. The brain tries out different thought patterns, and the ones that work best are reinforced. That’s awfully vague. But there’s no reason that Darwinian evolution could not have given rise to processes within the human brain that jumped out of the Darwinian progression. While the physical brain is a product of evolution as we are coming to understand it, the cultural brain might be a way of transforming the evolved brain according to principles that cannot be explained in evolutionary terms.

  Another way to put this is that there might be some form of creativity other than selection. I certainly don’t know, but it seems pointless to insist that what we already understand must suffice to explain what we don’t understand.

  What I’m struck by is the lack of intellectual modesty in the computer science community. We are happy to enshrine into engineering designs mere hypotheses—and vague ones at that—about the hardest and most profound questions faced by science, as if we already possess perfect knowledge.

  If it eventually turns out that there is something about an individual human mind that is different from what can be achieved by a noosphere, that “special element” might turn out to have any number of qualities. It is possible that we will have to await scientific advances that will only come in fifty, five hundred, or five thousand years before we can sufficiently appreciate our own brains.

  Or it might turn out that a distinction will forever be based on principles we cannot manipulate. This might involve types of computation that are unique to the physical brain, maybe relying on forms of causation that depend on remarkable and nonreplicable physical conditions. Or it might involve software that could only be created by the long-term work of evolution, which cannot be reverse-engineered or mucked with in any accessible way. Or it might even involve the prospect, dreaded by some, of dualism, a reality for consciousness as apart from mechanism.

  The point is that we don’t know. I love speculating about the workings of the brain. Later in the book, I’ll present some thoughts on how to use computational metaphors to at least vaguely imagine how a process like meaning might work in the brain. But I would abhor anyone using my speculations as the basis of a design for a tool to be used by real people. An aeronautical engineer would never put passengers in a plane based on an untested, speculative theory, but computer scientists commit analogous sins all the time.

  An underlying problem is that technical people overreact to religious extremists. If a computer scientist says that we don’t understand how the brain works, will that empower an ideologue to then claim that some particular religion has been endorsed? This is a real danger, but over-claiming by technical people is the greater danger, since we end up confusing ourselves.

  It Is Still Possible to Get Rid of Crowd Ideology in Online Designs

  From an engineering point of view, the difference between a social networking site and the web as it existed before such sites were introduced is a matter of small detail. You could always create a list of links to your friends on your website, and you could always send e-mails to a circle of friends announcing whatever you cared to. All that the social networking services offer is a prod to use the web in a particular way, according to a particular philosophy.

  If anyone wanted to reconsider social network designs, it would be easy enough to take a standoffish approach to describing what goes on between people. It could be left to people to communicate what they want to say about their relationships in their own way.

  If someone wants to use words like “single” or “looking” in a self-description, no one is going to prevent that. Search engines will easily find instances of those words. There’s no need for an imposed, official category.
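  To make the contrast concrete, here is a minimal sketch of the two approaches, written in Python; the class and field names are hypothetical and not drawn from any real social network’s schema. One design imposes an official multiple-choice category; the other simply stores a person’s own words and relies on ordinary text search to find them.

```python
from enum import Enum

# The "official category" approach: the software decides in advance which
# relationship states exist, and a person must pick one of them.
class RelationshipStatus(Enum):
    SINGLE = "single"
    IN_A_RELATIONSHIP = "in a relationship"
    ITS_COMPLICATED = "it's complicated"

class MultipleChoiceProfile:
    def __init__(self, name: str, status: RelationshipStatus):
        self.name = name
        self.status = status  # only the predefined options can be expressed

# The "standoffish" approach: store whatever the person chose to write,
# and let search find the words, with no imposed category.
class FreeTextProfile:
    def __init__(self, name: str, self_description: str):
        self.name = name
        self.self_description = self_description

def find_profiles(profiles: list[FreeTextProfile], query: str) -> list[FreeTextProfile]:
    """A naive full-text search standing in for a real search engine."""
    return [p for p in profiles if query.lower() in p.self_description.lower()]

profiles = [
    FreeTextProfile("Ada", "Recently single, mostly interested in hiking and synthesizers."),
    FreeTextProfile("Grace", "Busy writing a compiler; happiest left alone."),
]
print([p.name for p in find_profiles(profiles, "single")])  # ['Ada']
```

  The second design gives up nothing essential: the word “single” is still findable, but only because a person chose to write it, in a sentence of her own.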

  If you read something written by someone who used the term “single” in a custom-composed, unique sentence, you will inevitably get a first whiff of the subtle experience of the author, something you would not get from a multiple-choice database. Yes, it would be a tiny bit more work for everyone, but the benefits of semiautomated self-presentation are illusory. If you start out by being fake, you’ll eventually have to put in twice the effort to undo the illusion if anything good is to come of it.

  This is an example of a simple way in which digital designers could choose to be modest about their claims to understand the nature of human beings. Enlightened designers leave open the possibility of either metaphysical specialness in humans or the potential for unforeseen creative processes that aren’t explained by ideas like evolution that we already believe we can capture in software systems. That kind of modesty is the signature quality of being human-centered.

  There would be trade-offs. Adopting a metaphysically modest approach would make it harder to use database techniques to create instant lists of people who are, say, emo, single, and affluent. But I don’t think that would be such a great loss. A stream of misleading information is no asset.

  It depends on how you define yourself. An individual who is receiving a flow of reports about the romantic status of a group of friends must learn to think in the terms of the flow if it is to be perceived as worth reading at all. So here is another example of how people are able to lessen themselves so as to make a computer seem accurate. Am I accusing all those hundreds of millions of users of social networking sites of reducing themselves in order to be able to use the services? Well, yes, I am.

  I know quite a few people, mostly young adults but not all, who are proud to say that they have accumulated thousands of friends on Facebook. Obviously, this statement can only be true if the idea of friendship is reduced. A real friendship ought to introduce each person to unexpected weirdness in the other. Each acquaintance is an alien, a well of unexplored difference in the experience of life that cannot be imagined or accessed in any way but through genuine interaction. The idea of friendship in database-filtered social networks is certainly reduced from that.

 
