
You Are Not a Gadget: A Manifesto

by Jaron Lanier

These are acceptable costs, what I would call aesthetic losses. They are counterbalanced, however, by some aesthetic victories. The digital world looks better than it sounds because a community of digital activists, including folks from Xerox PARC (especially Alan Kay), Apple, Adobe, and the academic world (especially Stanford’s Don Knuth) fought the good fight to save us from the rigidly ugly fonts and other visual elements we’d have been stuck with otherwise.

  Then there are those recently conceived elements of the future of human experience, like the already locked-in idea of the file, that are as fundamental as the air we breathe. The file will henceforth be one of the basic underlying elements of the human story, like genes. We will never know what that means, or what alternatives might have meant.

  On balance, we’ve done wonderfully well! But the challenge on the table now is unlike previous ones. The new designs on the verge of being locked in, the web 2.0 designs, actively demand that people define themselves downward. It’s one thing to launch a limited conception of music or time into the contest for what philosophical idea will be locked in. It is another to do that with the very idea of what it is to be a person.

  Why It Matters

  If you feel fine using the tools you use, who am I to tell you that there is something wrong with what you are doing? But consider these points:

  Emphasizing the crowd means deemphasizing individual humans in the design of society, and when you ask people not to be people, they revert to bad moblike behaviors. This leads not only to empowered trolls, but to a generally unfriendly and unconstructive online world.

  Finance was transformed by computing clouds. Success in finance became increasingly about manipulating the cloud at the expense of sound financial principles.

  There are proposals to transform the conduct of science along similar lines. Scientists would then understand less of what they do.

  Pop culture has entered into a nostalgic malaise. Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action.

  Spirituality is committing suicide. Consciousness is attempting to will itself out of existence.

  It might seem as though I’m assembling a catalog of every possible thing that could go wrong with the future of culture as changed by technology, but that is not the case. All of these examples are really just different aspects of one singular, big mistake.

  The deep meaning of personhood is being reduced by illusions of bits. Since people will be inexorably connecting to one another through computers from here on out, we must find an alternative.

  We have to think about the digital layers we are laying down now in order to benefit future generations. We should be optimistic that civilization will survive this challenging century, and put some effort into creating the best possible world for those who will inherit our efforts.

  Next to the many problems the world faces today, debates about online culture may not seem that pressing. We need to address global warming, shift to a new energy cycle, avoid wars of mass destruction, support aging populations, figure out how to benefit from open markets without being disastrously vulnerable to their failures, and take care of other basic business. But digital culture and related topics like the future of privacy and copyrights concern the society we’ll have if we can survive these challenges.

  Every save-the-world cause has a list of suggestions for “what each of us can do”: bike to work, recycle, and so on.

  I can propose such a list related to the problems I’m talking about:

  Don’t post anonymously unless you really might be in danger.

  If you put effort into Wikipedia articles, put even more effort into using your personal voice and expression outside of the wiki to help attract people who don’t yet realize that they are interested in the topics you contributed to.

  Create a website that expresses something about who you are that won’t fit into the template available to you on a social networking site.

  Post a video once in a while that took you one hundred times more time to create than it takes to view.

  Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out.

  If you are twittering, innovate in order to find a way to describe your internal state instead of trivial external events, to avoid the creeping danger of believing that objectively described events define you, as they would define a machine.

  These are some of the things you can do to be a person instead of a source of fragments to be exploited by others.

  There are aspects to all these software designs that could be retained more humanistically. A design that shares Twitter’s feature of providing ambient continuous contact between people could perhaps drop Twitter’s adoration of fragments. We don’t really know, because it is an unexplored design space.

  As long as you are not defined by software, you are helping to broaden the identity of the ideas that will get locked in for future generations. In most arenas of human expression, it’s fine for a person to love the medium they are given to work in. Love paint if you are a painter; love a clarinet if you are a musician. Love the English language (or hate it). Love of these things is a love of mystery.

  But in the case of digital creative materials, like MIDI, UNIX, or even the World Wide Web, it’s a good idea to be skeptical. These designs came together very recently, and there’s a haphazard, accidental quality to them. Resist the easy grooves they guide you into. If you love a medium made of software, there’s a danger that you will become entrapped in someone else’s recent careless thoughts. Struggle against that!

  The Importance of Digital Politics

  There was an active campaign in the 1980s and 1990s to promote visual elegance in software. That political movement bore fruit when it influenced engineers at companies like Apple and Microsoft who happened to have a chance to steer the directions software was taking before lock-in made their efforts moot.

  That’s why we have nice fonts and flexible design options on our screens. It wouldn’t have happened otherwise. The seemingly unstoppable mainstream momentum in the world of software engineers was pulling computing in the direction of ugly screens, but that fate was avoided before it was too late.

  A similar campaign should be taking place now, influencing engineers, designers, businesspeople, and everyone else to support humanistic alternatives whenever possible. Unfortunately, however, the opposite seems to be happening.

  Online culture is filled to the brim with rhetoric about what the true path to a better world ought to be, and these days it’s strongly biased toward an antihuman way of thinking.

  The Future

  The true nature of the internet is one of the most common topics of online discourse. It is remarkable that the internet has grown enough to contain the massive amount of commentary about its own nature.

  The promotion of the latest techno-political-cultural orthodoxy, which I am criticizing, has become unceasing and pervasive. The New York Times, for instance, promotes so-called open digital politics on a daily basis even though that ideal and the movement behind it are destroying the newspaper, and all other newspapers.* It seems to be a case of journalistic Stockholm syndrome.

  There hasn’t yet been an adequate public rendering of an alternative worldview that opposes the new orthodoxy. In order to oppose orthodoxy, I have to provide more than a few jabs. I also have to realize an alternative intellectual environment that is large enough to roam in. Someone who has been immersed in orthodoxy needs to experience a figure-ground reversal in order to gain perspective. This can’t come from encountering just a few heterodox thoughts, but only from a new encompassing architecture of interconnected thoughts that can engulf a person with a different worldview.

  So, in this book, I have spun a long tale of belief in the opposites of computationalism, the noosphere, the Singularity, web 2.0, the long tail, and all the rest. I hope the volume of my contrarianism will foster an alternative mental environment, where the exciting opportunity to start creating a new digital humanism can begin.

  An inevitable side effect of this project of deprogramming through immersion is that I will direct a sustained stream of negativity onto the ideas I am criticizing. Readers, be assured that the negativity eventually tapers off, and that the last few chapters are optimistic in tone.

  * The style of UNIX commands has, incredibly, become part of pop culture. For instance, the URLs (uniform resource locators) that we use to find web pages these days, like http://www.jaronlanier.com/, are examples of the kind of key press sequences that are ubiquitous in UNIX.

  * “Cloud” is a term for a vast computing resource available over the internet. You never know where the cloud resides physically. Google, Microsoft, IBM, and various government agencies are some of the proprietors of computing clouds.

  * Facebook does have advertising, and is surely contemplating a variety of other commercial plays, but so far has earned only a trickle of income, and no profits. The same is true for most of the other web 2.0 businesses. Because of the enhanced network effect of all things digital, it’s tough for any new player to become profitable in advertising, since Google has already seized a key digital niche (its ad exchange). In the same way, it would be extraordinarily hard to start a competitor to eBay or Craigslist. Digital network architectures naturally incubate monopolies. That is precisely why the idea of the noosphere, or a collective brain formed by the sum of all the people connected on the internet, has to be resisted with more force than it is promoted.

  * Today, for instance, as I write these words, there was a headline about R, a piece of geeky statistical software that would never have received notice in the Times if it had not been “free.” R’s nonfree competitor Stata was not even mentioned. (Ashlee Vance, “Data Analysts Captivated by R’s Power,” New York Times, January 6, 2009.)

  CHAPTER 2

  An Apocalypse of Self-Abdication

  THE IDEAS THAT I hope will not be locked in rest on a philosophical foundation that I sometimes call cybernetic totalism. It applies metaphors from certain strains of computer science to people and the rest of reality. Pragmatic objections to this philosophy are presented.

  What Do You Do When the Techies Are Crazier Than the Luddites?

  The Singularity is an apocalyptic idea originally proposed by John von Neumann, one of the inventors of digital computation, and elucidated by figures such as Vernor Vinge and Ray Kurzweil.

  There are many versions of the fantasy of the Singularity. Here’s the one Marvin Minsky used to tell over the dinner table in the early 1980s: One day soon, maybe twenty or thirty years into the twenty-first century, computers and robots will be able to construct copies of themselves, and these copies will be a little better than the originals because of intelligent software. The second generation of robots will then make a third, but it will take less time, because of the improvements over the first generation.

  The process will repeat. Successive generations will be ever smarter and will appear ever faster. People might think they’re in control, until one fine day the rate of robot improvement ramps up so quickly that superintelligent robots will suddenly rule the Earth.

  In some versions of the story, the robots are imagined to be microscopic, forming a “gray goo” that eats the Earth; or else the internet itself comes alive and rallies all the net-connected machines into an army to control the affairs of the planet. Humans might then enjoy immortality within virtual reality, because the global brain would be so huge that it would be absolutely easy—a no-brainer, if you will—for it to host all our consciousnesses for eternity.

  The coming Singularity is a popular belief in the society of technologists. Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore.

  (Just in case you are not familiar with the Rapture, it is a colorful belief in American evangelical culture about the Christian apocalypse. When I was growing up in rural New Mexico, Rapture paintings would often be found in places like gas stations or hardware stores. They would usually include cars crashing into each other because the virtuous drivers had suddenly disappeared, having been called to heaven just before the onset of hell on Earth. The immensely popular Left Behind novels also describe this scenario.)

  There might be some truth to the ideas associated with the Singularity at the very largest scale of reality. It might be true that on some vast cosmic basis, higher and higher forms of consciousness inevitably arise, until the whole universe becomes a brain, or something along those lines. Even at much smaller scales of millions or even thousands of years, it is more exciting to imagine humanity evolving into a more wonderful state than we can presently articulate. The only alternatives would be extinction or stodgy stasis, which would be a little disappointing and sad, so let us hope for transcendence of the human condition, as we now understand it.

  The difference between sanity and fanaticism is found in how well the believer can avoid confusing consequential differences in timing. If you believe the Rapture is imminent, fixing the problems of this life might not be your greatest priority. You might even be eager to embrace wars and tolerate poverty and disease in others to bring about the conditions that could prod the Rapture into being. In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring.

  But in either case, the rest of us would never know if you had been right. Technology working well to improve the human condition is detectable, and you can see that possibility portrayed in optimistic science fiction like Star Trek.

  The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new super-consciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living.

  You Need Culture to Even Perceive Information Technology

  Ever more extreme claims are routinely promoted in the new digital climate. Bits are presented as if they were alive, while humans are transient fragments. Real people must have left all those anonymous comments on blogs and video clips, but who knows where they are now, or if they are dead? The digital hive is growing at the expense of individuality.

  Kevin Kelly says that we don’t need authors anymore, that all the ideas of the world, all the fragments that used to be assembled into coherent books by identifiable authors, can be combined into one single, global book. Wired editor Chris Anderson proposes that science should no longer seek theories that scientists can understand, because the digital cloud will understand them better anyway.*

  Antihuman rhetoric is fascinating in the same way that self-destruction is fascinating: it offends us, but we cannot look away.

  The antihuman approach to computation is one of the most baseless ideas in human history. A computer isn’t even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don’t mean anything without a cultured person to interpret them.

  This is not solipsism. You can believe that your mind makes up the world, but a bullet will still kill you. A virtual bullet, however, doesn’t even exist unless there is a person to recognize it as a representation of a bullet. Guns are real in a way that computers are not.

  Making People Obsolete So That Computers Seem More Advanced

  Many of today’s Silicon Valley intellectuals seem to have embraced what used to be speculations as certainties, without the spirit of unbounded curiosity that originally gave rise to them. Ideas that were once tucked away in the obscure world of artificial intelligence labs have gone mainstream in tech culture. The first tenet of this new culture is that all of reality, including humans, is one big information system. That doesn’t mean we are condemned to a meaningless existence. Instead there is a new kind of manifest destiny that provides us with a mission to accomplish. The meaning of life, in this view, is making the digital system we call reality function at ever-higher “levels of description.”

  People pretend to know what “levels of description” means, but I doubt anyone really does. A web page is thought to represent a higher level of description than a single letter, while a brain is a higher level than a web page. An increasingly common extension of this notion is that the net as a whole is or soon will be a higher level than a brain.

  There’s nothing special about the place of humans in this scheme. Computers will soon get so big and fast and the net so rich with information that people will be obsolete, either left behind like the characters in Rapture novels or subsumed into some cyber-superhuman something.

  Silicon Valley culture has taken to enshrining this vague idea and spreading it in the way that only technologists can. Since implementation speaks louder than words, ideas can be spread in the designs of software. If you believe the distinction between the roles of people and computers is starting to dissolve, you might express that—as some friends of mine at Microsoft once did—by designing features for a word processor that are supposed to know what you want, such as when you want to start an outline within your document. You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline. While I am all for the automation of petty tasks, this is different.

 
