by Jaron Lanier
If it’s important to find the edge of mystery, to ponder the things that can’t quite be defined—or rendered into a digital standard—then we will have to perpetually seek out entirely new ideas and objects, abandoning old ones like musical notes. Throughout this book, I’ll explore whether people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer. This has enormous implications: we can conceivably abandon musical notes, but we can’t abandon ourselves.
When Dave made MIDI, I was thrilled. Some friends of mine from the original Macintosh team quickly built a hardware interface so a Mac could use MIDI to control a synthesizer, and I worked up a quick music creation program. We felt so free—but we should have been more thoughtful.
By now, MIDI has become too hard to change, so the culture has changed to make it seem fuller than it was initially intended to be. We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate. It wasn’t Dave’s fault. How could he have known?
Digital Reification: Lock-in Turns Philosophy into Reality
A lot of the locked-in ideas about how software is put together come from an old operating system called UNIX. It has some characteristics that are related to MIDI.
While MIDI squeezes musical expression through a limiting model of the actions of keys on a musical keyboard, UNIX does the same for all computation, but using the actions of keys on typewriter-like keyboards. A UNIX program is often similar to a simulation of a person typing quickly.
There’s a core design feature in UNIX called a “command line interface.” In this system, you type instructions, you hit “return,” and the instructions are carried out.* A unifying design principle of UNIX is that a program can’t tell if a person hit return or a program did so. Since real people are slower than simulated people at operating keyboards, the importance of precise timing is suppressed by this particular idea. As a result, UNIX is based on discrete events that don’t have to happen at a precise moment in time. The human organism, meanwhile, is based on continuous sensory, cognitive, and motor processes that have to be synchronized precisely in time. (MIDI falls somewhere in between the concept of time embodied in UNIX and in the human body, being based on discrete events that happen at particular times.)
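To make those three notions of time concrete, here is a toy sketch in Python (the names are mine and model concepts only, not any real API): a UNIX-style event carries no clock at all, a MIDI-style event is discrete but stamped to a moment, and a bodily gesture exists only as a dense, precisely synchronized stream of samples.

    from dataclasses import dataclass

    @dataclass
    class ShellEvent:
        # UNIX-style: a discrete event with no inherent timestamp.
        # The system registers that "return" was hit, not when.
        command: str

    @dataclass
    class MidiEvent:
        # MIDI-style: discrete, but pinned to a moment in time.
        tick: int       # when the event must occur
        note: int       # e.g. 60 = middle C
        velocity: int   # how hard the key was struck

    def continuous_gesture(sample_rate_hz: int, seconds: float) -> list[float]:
        # Body-style: a continuous signal, meaningful only as a dense,
        # precisely clocked stream of samples (e.g. audio at 48 kHz).
        return [0.0] * int(sample_rate_hz * seconds)

    print(ShellEvent("ls -l"))                        # no clock anywhere
    print(MidiEvent(tick=480, note=60, velocity=96))  # clocked, but coarse
    print(len(continuous_gesture(48_000, 1.0)), "samples per second")

The shell event can arrive whenever it arrives; the MIDI event must land on its tick; the gesture is nothing but its timing.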
UNIX expresses too large a belief in discrete abstract symbols and not enough of a belief in temporal, continuous, nonabstract reality; it is more like a typewriter than a dance partner. (Perhaps typewriters or word processors ought to always be instantly responsive, like a dance partner—but that is not yet the case.) UNIX tends to “want” to connect to reality as if reality were a network of fast typists.
If you hope for computers to be designed to serve embodied people as well as possible, UNIX would have to be considered a bad design. I discovered this in the 1970s, when I tried to make responsive musical instruments with it. I was trying to do what MIDI does not, which is work with fluid, hard-to-notate aspects of music, and discovered that the underlying philosophy of UNIX was too brittle and clumsy for that.
The arguments in favor of UNIX focused on how computers would get literally millions of times faster in the coming decades. The thinking was that the speed increase would overwhelm the timing problems I was worried about. Indeed, today’s computers are millions of times faster, and UNIX has become an ambient part of life. There are some reasonably expressive tools that have UNIX in them, so the speed increase has sufficed to compensate for UNIX’s problems in some cases. But not all.
I have an iPhone in my pocket, and sure enough, the thing has what is essentially UNIX in it. An unnerving element of this gadget is that it is haunted by a weird set of unpredictable user interface delays. One’s mind waits for the response to the press of a virtual button, but it doesn’t come for a while. An odd tension builds during that moment, and easy intuition is replaced by nervousness. It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years.
I’m not picking in particular on the iPhone (which I’ll praise in another context later on). I could just as easily have chosen any contemporary personal computer. Windows isn’t UNIX, but it does share UNIX’s idea that a symbol is more important than the flow of time and the underlying continuity of experience.
The grudging relationship between UNIX and the temporal world in which the human body moves and the human mind thinks is a disappointing example of lock-in, but not a disastrous one. Maybe it will even help make it easier for people to appreciate the old-fashioned physical world, as virtual reality gets better. If so, it will have turned out to be a blessing in disguise.
Entrenched Software Philosophies Become Invisible Through Ubiquity
An even deeper locked-in idea is the notion of the file. Once upon a time, not too long ago, plenty of computer scientists thought the idea of the file was not so great.
The first design for something like the World Wide Web, Ted Nelson’s Xanadu, conceived of one giant, global file, for instance. The first iteration of the Macintosh, which never shipped, didn’t have files. Instead, the whole of a user’s productivity accumulated in one big structure, sort of like a singular personal web page. Steve Jobs took the Mac project over from the fellow who started it, the late Jef Raskin, and soon files appeared.
UNIX had files; the Mac as it shipped had files; Windows had files. Files are now part of life; we teach the idea of a file to computer science students as if it were part of nature. In fact, our conception of files may be more persistent than our ideas about nature. I can imagine that someday physicists might tell us that it is time to stop believing in photons, because they have discovered a better way to think about light—but the file will likely live on.
The file is a set of philosophical ideas made into eternal flesh. The ideas expressed by the file include the notion that human expression comes in severable chunks that can be organized as leaves on an abstract tree—and that the chunks have versions and need to be matched to compatible applications.
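To spell out those assumptions, here is a minimal sketch in Python, with invented names, of what a “file” quietly asserts: expression is a severable chunk of bytes, addressed as a leaf on an abstract tree, carrying a version, and bound by its type to a compatible application.

    from dataclasses import dataclass

    @dataclass
    class File:
        path: str       # a leaf on an abstract tree
        data: bytes     # human expression as a severable chunk
        version: int    # chunks have versions
        mime_type: str  # must be matched to a compatible application

    f = File(
        path="/home/ada/essay.txt",
        data=b"...",
        version=3,
        mime_type="text/plain",
    )
    print(f.path.strip("/").split("/"))  # ['home', 'ada', 'essay.txt']

None of these choices was forced by nature; each is a philosophical commitment that now looks like one.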
What do files mean to the future of human expression? This is a harder question to answer than the question “How does the English language influence the thoughts of native English speakers?” At least you can compare English speakers to Chinese speakers, but files are universal. The idea of the file has become so big that we are unable to conceive of a frame large enough to fit around it in order to assess it empirically.
What Happened to Trains, Files, and Musical Notes Could Happen Soon to the Definition of a Human Being
It’s worth trying to notice when philosophies are congealing into locked-in software. For instance, is pervasive anonymity or pseudonymity a good thing? It’s an important question, because the corresponding philosophies of how humans can express meaning have been so ingrained into the interlocked software designs of the internet that we might never be able to fully get rid of them, or even remember that things could have been different.
We ought to at least try to avoid this particularly tricky example of impending lock-in. Lock-in makes us forget the lost freedoms we had in the digital past. That can make it harder to see the freedoms we have in the digital present. Fortunately, difficult as it is, we can still try to change some expressions of philosophy that are on the verge of becoming locked in place in the tools we use to understand one another and the world.
A Happy Surprise
The rise of the web was a rare instance when we learned new, positive information about human potential. Who would have guessed (at least at first) that millions of people would put so much effort into a project without the presence of advertising, commercial motive, threat of punishment, charismatic figures, identity politics, exploitation of the fear of death, or any of the other classic motivators of mankind? In vast numbers, people did something cooperatively, solely because it was a good idea, and it was beautiful.
Some of the more wild-eyed eccentrics in the digital world had guessed that it would happen—but even so it was a shock when it actually did come to pass. It turns out that even an optimistic, idealistic philosophy is realizable. Put a happy philosophy of life in software, and it might very well come true!
Technology Criticism Shouldn’t Be Left to the Luddites
But not all surprises have been happy.
This digital revolutionary still believes in most of the lovely deep ideals that energized our work so many years ago. At the core was a sweet faith in human nature. If we empowered individuals, we believed, more good than harm would result.
The way the internet has gone sour since then is truly perverse. The central faith of the web’s early design has been superseded by a different faith in the centrality of imaginary entities, epitomized by the idea that the internet as a whole is coming alive and turning into a superhuman creature.
The designs guided by this new, perverse kind of faith put people back in the shadows. The fad for anonymity has undone the great opening-of-everyone’s-windows of the 1990s. While that reversal has empowered sadists to a degree, the worst effect is a degradation of ordinary people.
Part of why this happened is that volunteerism proved to be an extremely powerful force in the first iteration of the web. When businesses rushed in to capitalize on what had happened, there was something of a problem, in that the content aspect of the web, the cultural side, was functioning rather well without a business plan.
Google came along with the idea of linking advertising and searching, but that business stayed out of the middle of what people actually did online. It had indirect effects, but not direct ones. The early waves of web activity were remarkably energetic and had a personal quality. People created personal “homepages,” and each of them was different, and often strange. The web had flavor.
Entrepreneurs naturally sought to create products that would inspire demand (or at least hypothetical advertising opportunities that might someday compete with Google) where there was no lack to be addressed and no need to be filled, other than greed. Google had discovered a new permanently entrenched niche enabled by the nature of digital technology. It turns out that the digital system of representing people and ads so they can be matched is like MIDI. It is an example of how digital technology can cause an explosive increase in the importance of the “network effect.” Every element in the system—every computer, every person, every bit—comes to depend on relentlessly detailed adherence to a common standard, a common point of exchange.
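The pull of such a point of exchange can be seen in back-of-the-envelope terms: if every participant can potentially be matched with every other, the number of possible matches grows roughly quadratically with the number of participants, so the biggest hub keeps winning. A toy calculation, assuming only the classic pairing count and nothing specific to Google:

    def possible_matches(n: int) -> int:
        # Distinct pairs among n participants: n * (n - 1) / 2.
        return n * (n - 1) // 2

    for n in (10, 1_000, 1_000_000):
        print(f"{n:>9} participants -> {possible_matches(n):>15,} matches")

Ten participants yield 45 possible matches; a million yield roughly half a trillion. Whoever holds the standard of exchange holds nearly all of them.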
Unlike MIDI, Google’s secret software standard is hidden in its computer cloud* instead of being replicated in your pocket. Anyone who wants to place ads must use it, or be out in the cold, relegated to a tiny, irrelevant subculture, just as digital musicians must use MIDI in order to work together in the digital realm. In the case of Google, the monopoly is opaque and proprietary. (Sometimes locked-in digital niches are proprietary, and sometimes they aren’t. The dynamics are the same in either case, though the commercial implications can be vastly different.)
There can be only one player occupying Google’s persistent niche, so most of the competitive schemes that came along made no money. Behemoths like Facebook have changed the culture with commercial intent, but without, as of this time of writing, commercial achievement.*
In my view, there were a large number of ways that new commercial successes might have been realized, but the faith of the nerds guided entrepreneurs on a particular path. Voluntary productivity had to be commoditized, because the type of faith I’m criticizing thrives when you can pretend that computers do everything and people do nothing.
An endless series of gambits backed by gigantic investments encouraged young people entering the online world for the first time to create standardized presences on sites like Facebook. Commercial interests promoted the widespread adoption of standardized designs like the blog, and these designs encouraged pseudonymity in at least some of their aspects, such as comments, instead of the proud extroversion that characterized the first wave of web culture.
Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites presented anonymized fragments of creativity as products that might have fallen from the sky or been dug up from the ground, obscuring the true sources.
Tribal Accession
The way we got here is that one subculture of technologists has recently become more influential than the others. The winning subculture doesn’t have a formal name, but I’ve sometimes called the members “cybernetic totalists” or “digital Maoists.”
The ascendant tribe is composed of the folks from the open culture/Creative Commons world, the Linux community, folks associated with the artificial intelligence approach to computer science, the web 2.0 people, the anticontext file sharers and remashers, and a variety of others. Their capital is Silicon Valley, but they have power bases all over the world, wherever digital culture is being created. Their favorite blogs include Boing Boing, TechCrunch, and Slashdot, and their embassy in the old country is Wired.
Obviously, I’m painting with a broad brush; not every member of the groups I mentioned subscribes to every belief I’m criticizing. In fact, the groupthink problem I’m worried about isn’t so much in the minds of the technologists themselves, but in the minds of the users of the tools the cybernetic totalists are promoting.
The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only the people were ever meaningful.
When I refer to the tribe, I am not writing about some distant “them.” The members of the tribe are my lifelong friends, my mentors, my students, my colleagues, and my fellow travelers. Many of my friends disagree with me. It is to their credit that I feel free to speak my mind, knowing that I will still be welcome in our world.
On the other hand, I know there is also a distinct tradition of computer science that is humanistic. Some of the better-known figures in this tradition include the late Joseph Weizenbaum, Ted Nelson, Terry Winograd, Alan Kay, Bill Buxton, Doug Engelbart, Brian Cantwell Smith, Henry Fuchs, Ken Perlin, Ben Shneiderman (who invented the idea of clicking on a link), and Andy van Dam, who is a master teacher and has influenced generations of protégés, including Randy Pausch. Another important humanistic computing figure is David Gelernter, who conceived of a huge portion of the technical underpinnings of what has come to be called cloud computing, as well as many of the potential practical applications of clouds.
And yet, it should be pointed out that humanism in computer science doesn’t seem to correlate with any particular cultural style. For instance, Ted Nelson is a creature of the 1960s, the author of what might have been the first rock musical (Anything & Everything), something of a vagabond, and a counterculture figure if ever there was one. David Gelernter, on the other hand, is a cultural and political conservative who writes for journals like Commentary and teaches at Yale. And yet I find inspiration in the work of them both.
Trap for a Tribe
The intentions of the cybernetic totalist tribe are good. They are simply following a path that was blazed in earlier times by well-meaning Freudians and Marxists—and I don’t mean that in a pejorative way. I’m thinking of the earliest incarnations of Marxism, for instance, before Stalinism and Maoism killed millions.
Movements associated with Freud and Marx both claimed foundations in rationality and the scientific understanding of the world. Both perceived themselves to be at war with the weird, manipulative fantasies of religions. And yet both invented their own fantasies that were just as weird.
The same thing is happening again. A self-proclaimed materialist movement that attempts to base itself on science starts to look like a religion rather quickly. It soon presents its own eschatology and its own revelations about what is really going on—portentous events that no one but the initiated can appreciate. The Singularity and the noosphere, the idea that a collective consciousness emerges from all the users on the web, echo Marxist social determinism and Freud’s calculus of perversions. We rush ahead of skeptical, scientific inquiry at our peril, just like the Marxists and Freudians.
Premature mystery reducers are rent by schisms, just like Marxists and Freudians always were. They find it incredible that I perceive a commonality in the membership of the tribe. To them, the systems Linux and UNIX are completely different, for instance, while to me they are coincident dots on a vast canvas of possibilities, even if much of the canvas is all but forgotten by now.
At any rate, the future of religion will be determined by the quirks of the software that gets locked in during the coming decades, just like the futures of musical notes and personhood.
Where We Are on the Journey
It’s time to take stock. Something amazing happened with the introduction of the World Wide Web. A faith in human goodness was vindicated when a remarkably open and unstructured information tool was made available to large numbers of people. That openness can, at this point, be declared “locked in” to a significant degree. Hurray!
At the same time, some not-so-great ideas about life and meaning were also locked in, like MIDI’s nuance-challenged conception of musical sound and UNIX’s inability to cope with time as humans experience it.