You are not a Gadget: A Manifesto

by Jaron Lanier


  CHAPTER 9

  Retropolis

  An anomaly in popular music trends is examined.

  Second-Order Culture

  What’s gone so stale with internet culture that a batch of tired rhetoric from my old circle of friends has become sacrosanct? Why can’t anyone younger dump our old ideas for something original? I long to be shocked and made obsolete by new generations of digital culture, but instead I am being tortured by repetition and boredom.

  For example: the pinnacle of achievement of the open software movement has been the creation of Linux, a derivative of UNIX, an old operating system from the 1970s. Similarly, the less techie side of the open culture movement celebrates the creation of Wikipedia, which is a copy of something that already existed: an encyclopedia.

  There’s a rule of thumb you can count on in each succeeding version of the web 2.0 movement: the more radical an online social experiment is claimed to be, the more conservative, nostalgic, and familiar the result will actually be.

  What I’m saying here is independent of whether the typical claims made by web 2.0 and wiki enthusiasts are true. Let’s just stipulate for the sake of argument that Linux is as stable and secure as any historical derivative of UNIX and that Wikipedia is as reliable as other encyclopedias. It’s still strange that generations of young, energetic, idealistic people would perceive such intense value in creating them.

  Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic.

  The distinction between first-order expression and derivative expression is lost on true believers in the hive. First-order expression is when someone presents a whole, a work that integrates its own worldview and aesthetic. It is something genuinely new in the world.

  Second-order expression is made of fragmentary reactions to first-order expression. A movie like Blade Runner is first-order expression, as was the novel that inspired it, but a mashup in which a scene from the movie is accompanied by the anonymous masher’s favorite song is not in the same league.

  I don’t claim I can build a meter to detect precisely where the boundary between first- and second-order expression lies. I am claiming, however, that the web 2.0 designs spin out gobs of the latter and choke off the former.

  It is astonishing how much of the chatter online is driven by fan responses to expression that was originally created within the sphere of old media and that is now being destroyed by the net. Comments about TV shows, major movies, commercial music releases, and video games must be responsible for almost as much bit traffic as porn. There is certainly nothing wrong with that, but since the web is killing the old media, we face a situation in which culture is effectively eating its own seed stock.

  Schlock Defended

  The more original material that does exist on the open net is all too often like the lowest-production-cost material from the besieged, old-fashioned, copy-written world. It’s an endless parade of “News of the Weird,” “Stupid Pet Tricks,” and America’s Funniest Home Videos.

  This is the sort of stuff you’ll be directed to by aggregation services like YouTube or Digg. (That, and endless propaganda about the merits of open culture. Some stupefying, dull release of a version of Linux will usually be a top world headline.)

  I am not being a snob about this material. I like it myself once in a while. Only people can make schlock, after all. A bird can’t be schlocky when it sings, but a person can. So we can take existential pride in schlock. All I am saying is that we already had, in the predigital world, all the kinds of schlock you now find on the net. Making echoes of this material in the radical, new, “open” world accomplishes nothing. The cumulative result is that online culture is fixated on the world as it was before the web was born.

  By most estimates, about half the bits coursing through the internet originated as television, movie, or other traditional commercial content, though it is difficult to come up with a precise accounting.

  BitTorrent, a company that maintains only one of the many protocols for delivering such content, has at times claimed that its users alone are taking up more than half of the bandwidth of the internet. (BitTorrent is used for a variety of content, but a primary motivation to use it is that it is suitable for distributing large files, such as television shows and feature-length movies.)

  The internet was, of course, originally conceived during the Cold War to be capable of surviving a nuclear attack. Parts of it can be destroyed without destroying the whole, but that also means that parts can be known without knowing the whole. The core idea is called “packet switching.”

  A packet is a tiny portion of a file that is passed between nodes on the internet in the way a baton is passed between runners in a relay race. The packet has a destination address. If a particular node fails to acknowledge receipt of a packet, the node trying to pass the packet to it can try again elsewhere. The route is not specified, only the destination. This is how the internet can hypothetically survive an attack. The nodes keep trying to find neighbors until each packet is eventually routed to its destination.
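
  To make the relay-race picture concrete, here is a toy sketch in Python. It is an illustration of the retry idea only, with invented names, not a model of how real routers work; actual routing uses routing tables rather than blind wandering.

```python
import random

# Toy packet-switching sketch: each node knows only its neighbors,
# and a packet carries a destination address but no prescribed route.

class Node:
    def __init__(self, name, alive=True):
        self.name = name          # the node's address
        self.alive = alive        # a destroyed node never acknowledges
        self.neighbors = []

def deliver(dest_name, start, max_hops=20):
    """Relay the packet node to node, retrying elsewhere when a node is down."""
    current, route = start, [start.name]
    visited = {start.name}
    for _ in range(max_hops):
        if current.name == dest_name:
            return route  # delivered
        candidates = [n for n in current.neighbors
                      if n.alive and n.name not in visited]
        if not candidates:
            return None   # this copy of the packet is lost
        current = random.choice(candidates)
        visited.add(current.name)
        route.append(current.name)
    return None

# A small mesh with one node knocked out; the packet routes around it.
a, b, c, d = Node("A"), Node("B"), Node("C"), Node("D")
a.neighbors = [b, c]
b.neighbors = [a, d]
c.neighbors = [a, d]
d.neighbors = [b, c]
b.alive = False  # simulate a destroyed node

print(deliver("D", a))  # prints ['A', 'C', 'D']: the dead node B is avoided
```

  The point of the sketch is only that no single node needs to know the whole path; delivery survives the loss of parts of the network.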

  In practice, the internet as it has evolved is a little less robust than that scenario implies. But the packet architecture is still the core of the design.

  The decentralized nature of the architecture makes it almost impossible to track the nature of the information that is flowing through it. Each packet is just a tiny piece of a file, so even if you look at the contents of packets going by, it can sometimes be hard to figure out what the whole file will be when it is reassembled at the destination.

  In more recent eras, ideologies related to privacy and anonymity, together with a fascination with emergent systems reminiscent of some conceptions of biological evolution, have influenced engineers to reinforce the opacity of the internet’s design. Each new layer of code has furthered the cause of deliberate obscurity.

  Because of the current popularity of cloud architectures, for instance, it has become difficult to know which server you are logging into from time to time when you use particular software. That can be an annoyance in certain circumstances in which latency—the time it takes for bits to travel between computers—matters a great deal.
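
  Latency is easy to measure from the outside, even when you cannot tell which server you have reached. As a minimal sketch, assuming nothing about the service behind it (the host name is just a placeholder), a few lines of Python can time a TCP handshake:

```python
import socket
import time

def connect_latency(host, port=443, timeout=5.0):
    """Return the time, in milliseconds, to complete a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close it immediately
    return (time.perf_counter() - start) * 1000.0

# With cloud services you may reach a different server on each run,
# so the numbers can vary for reasons invisible to the client.
print(f"{connect_latency('example.com'):.1f} ms")
```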

  The appeal of deliberate obscurity is an interesting anthropological question. There are a number of explanations for it that I find to have merit. One is a desire to see the internet come alive as a metaorganism: many engineers hope for this eventuality, and mystifying the workings of the net makes it easier to imagine it is happening. There is also a revolutionary fantasy: engineers sometimes pretend they are assailing a corrupt existing media order and demand both the covering of tracks and anonymity from all involved in order to enhance this fantasy.

  At any rate, the result is that we must now measure the internet as if it were a part of nature, instead of from the inside, as if we were examining the books of a financial enterprise. We must explore it as if it were unknown territory, even though we laid it out.

  The means of conducting explorations are not comprehensive. Leaving aside ethical and legal concerns, it is possible to “sniff” packets traversing a piece of hardware comprising one node in the net, for instance. But the information available to any one observer is limited to the nodes being observed.
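
  As a hedged illustration of that limit, the third-party scapy library can capture traffic crossing the one machine it runs on, and nothing beyond it; the sketch below assumes scapy is installed and that you have the necessary privileges.

```python
# Requires the third-party scapy package (pip install scapy) and,
# on most systems, root privileges. It observes only this one node.
from scapy.all import sniff

def show(packet):
    """Print a one-line summary of each captured packet."""
    print(packet.summary())

# Capture ten packets crossing this machine's network interfaces.
sniff(count=10, prn=show)
```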

  Rage

  I well recall the birth of the free software movement, which preceded and inspired the open culture variant. It started out as an act of rage more than a quarter of a century ago.

  Visualize, if you will, the most transcendently messy, hirsute, and otherwise eccentric pair of young nerds on the planet. They were in their early twenties. The scene was an uproariously messy hippie apartment in Cambridge, Massachusetts, in the vicinity of MIT. I was one of these men; the other was Richard Stallman.

  Stallman was distraught to the point of tears. He had poured his energies into a celebrated project to build a radically new kind of computer called the LISP machine. But it wasn’t just a regular computer running LISP, a programming language beloved by artificial intelligence researchers.* Instead, it was a machine patterned on LISP from the bottom up, making a radical statement about what computing could be like at every level, from the underlying architecture to the user interface. For a brief period, every hot computer science department had to own some of these refrigerator-size gadgets.

  Eventually a company called Symbolics became the primary seller of LISP machines. Stallman realized that a whole experimental subculture of computer science risked being dragged into the toilet if anything bad happened to a little company like Symbolics—and of course everything bad happened to it in short order.

  So Stallman hatched a plan. Never again would computer code, and the culture that grew up with it, be trapped inside a wall of commerce and legality. He would develop a free version of an ascendant, if rather dull, software tool: the UNIX operating system. That simple act would blast apart the idea that lawyers and companies could control software culture.

  Eventually a young programmer of the next generation named Linus Torvalds followed in Stallman’s footsteps and did something similar, but using the popular Intel chips. In 1991 that effort yielded Linux, the basis for a vastly expanded free software movement.

  But back to that dingy bachelor pad near MIT. When Stallman told me his plan, I was intrigued but sad. I thought that code was important in more ways than politics can ever be. If politically motivated code was going to amount to endless replays of relatively dull stuff like UNIX instead of bold projects like the LISP machine, what was the point? Would mere humans have enough energy to sustain both kinds of idealism?

  Twenty-five years later, it seems clear that my concerns were justified. Open wisdom-of-crowds software movements have become influential, but they haven’t promoted the kind of radical creativity I love most in computer science. If anything, they’ve been hindrances. Some of the youngest, brightest minds have been trapped in a 1970s intellectual framework because they are hypnotized into accepting old software designs as if they were facts of nature. Linux is a superbly polished copy of an antique—shinier than the original, perhaps, but still defined by it.

  Why are so many of the more sophisticated examples of code in the online world—like the page-rank algorithms in the top search engines or like Adobe’s Flash—the results of proprietary development? Why did the adored iPhone come out of what many regard as the most closed, tyrannically managed software-development shop on Earth? An honest empiricist must conclude that while the open approach has been able to create lovely, polished copies, it hasn’t been so good at creating notable originals. Even though the open-source movement has a stinging countercultural rhetoric, it has in practice been a conservative force.

  I’m not anti-open source. I frequently argue for it in various specific projects. But the politically correct dogma that holds that open source is automatically the best path to creativity and innovation is not borne out by the facts.

  A Disappointment Too Big to Notice

  How can you know what is lame and derivative in someone else’s experience? How can you know if you get it? Maybe there’s something amazing happening and you just don’t know how to perceive it. This is a tough enough problem when the topic is computer code, but it’s even harder when the subject is music.

  The whole idea of music criticism is not pleasant to me, since I am, after all, a working musician. There is something confining and demeaning about having expectations of something as numinous as music in the first place. It isn’t as if anyone really knows what music is, exactly. Isn’t music pure gift? If the magic appears, great, but if it doesn’t, what purpose is served by complaining?

  But sometimes you have to at least approach critical thinking. Stare into the mystery of music directly, and you might turn into a pillar of salt, but you must at least survey the vicinity to know where not to look.

  So it is with the awkward project of assessing musical culture in the age of the internet. I entered the internet era with extremely high expectations. I eagerly anticipated a chance to experience shock and intensity and new sensations, to be thrust into lush aesthetic wildernesses, and to wake up every morning to a world that was richer in every detail because my mind had been energized by unforeseeable art.

  Such extravagant expectations might seem unreasonable in retrospect, but that is not how they seemed twenty-five years ago. There was every reason to have high expectations about the art—particularly the music—that would arise from the internet.

  Consider the power of music in just a few figures from the last century. Dissonance and strange rhythms produced a riot at the premiere of Stravinsky’s Rite of Spring. Jazz musicians like Louis Armstrong, James P. Johnson, Charlie Parker, and Thelonious Monk raised the bar for musical intelligence while promoting social justice. A global cultural shift coevolved with the Beatles’ recordings. Twentieth-century pop music transformed sexual attitudes on a global basis. Trying to summarize the power of music leaves you breathless.

  Changing Circumstances Always Used to Inspire Amazing New Art

  It’s easy to forget the role technology has played in producing the most powerful waves of musical culture. Stravinsky’s Rite of Spring, composed in 1912, would have been a lot harder to play, at least at tempo and in tune, on the instruments that had existed some decades earlier. Rock and roll—the electric blues—was to a significant degree a successful experiment in seeing what a small number of musicians could do for a dance hall with the aid of amplification. The Beatles’ recordings were in part a rapid reconnaissance mission into the possibilities of multitrack recording, stereo mixes, synthesizers, and audio special effects such as compression and varying playback speed.

  Changing economic environments have also stimulated new music in the past. With capitalism came a new kind of musician. No longer tied to the king, the whorehouse, the military parade, the Church, the sidewalk busker’s cup, or the other ancient and traditional sources of musical patronage, musicians had a chance to diversify, innovate, and be entrepreneurial. For example, George Gershwin made some money from sheet music sales, movie sound tracks, and player piano rolls, as well as from traditional gigs.

  So it seemed entirely reasonable to have the highest expectations for music on the internet. We thought there would be an explosion of wealth and of ways to become wealthy, leading to super-Gershwins. A new species of musician would be inspired to suddenly create radically new kinds of music to be performed in virtual worlds, or in the margins of e-books, or to accompany the oiling of fabricating robots. Even if it was not yet clear what business models would take hold, the outcome would surely be more flexible, more open, more hopeful than what had come before in the hobbled economy of physicality.

  The Blankness of Generation X Never Went Away, but Became the New Normal

  At the time that the web was born, in the early 1990s, a popular trope was that a new generation of teenagers, reared in the conservative Reagan years, had turned out exceptionally bland. The members of “Generation X” were characterized as blank and inert. The anthropologist Steve Barnett likened them to cultures suffering “pattern exhaustion,” a phenomenon in which a culture runs out of variations on its traditional pottery designs and becomes less creative.

  A common rationalization in the fledgling world of digital culture back then was that we were entering a transitional lull before a creative storm—or were already in the eye of one. But the sad truth is that we were not passing through a momentary lull before a storm. We had instead entered a persistent somnolence, and I have come to believe that we will only escape it when we kill the hive.

  The First-Ever Era of Musical Stasis

  Here is a claim I wish I weren’t making, and that I would prefer to be wrong about: popular music created in the industrialized world in the decade from the late 1990s to the late 2000s doesn’t have a distinct style—that is, one that would provide an identity for the young people who grew up with it. The process of the reinvention of life through music appears to have stopped.

  What once seemed novel—the development and acceptance of unoriginal pop culture from young people in the mid-1990s (the Gen Xers)—has become so commonplace that we do not even notice it anymore. We’ve forgotten how fresh pop culture can be.

  Where is the new music? Everything is retro, retro, retro.

  Music is everywhere, but hidden, as indicated by tiny white prairie-dog-like protuberances popping out of everyone’s ears. I am used to seeing people making embarrassingly sexual faces and moaning noises when listening to music on headphones, so it’s taken me a while to get used to the stone faces of the earbud listeners in the coffeehouse.

  Beating within the retro indie band that wouldn’t have sounded out of place even when I was a teenager, there might be some exotic heart, some layer of energy I’m not hearing. Of course, I can’t know my own limits. I can’t know what I am not able to hear.

  But I have been trying an experiment. Whenever I’m around “Facebook generation” people and there’s music playing—probably selected by an artificial intelligence or crowd-based algorithm, as per the current fashion—I ask them a simple question: Can you tell in what decade the music that is playing right now was made? Even listeners who are not particularly music oriented can do pretty well with this question—but only for certain decades.

 
