That said, I think we need to scrap nearly every single piece of software in the world and start over.
Even Linux may not be radical enough rethinking—though it is clearly far superior to anything else presently extant in the noosphere.
Early in 2000, the US National Weather Service unveiled a new IBM supercomputer, capable of 2,500 billion calculations a second, that would give them warning of impending weather problems up to ten days in advance. They booted it up, fed it oceans of data from around and above the planet, and that afternoon at 3:30 they announced with uncommon confidence that there was a 40 percent chance of light snow the next day, totaling less than one inch.
You’re way ahead of me. That night, the white hammer fell on the entire eastern seaboard. North Carolina got 51 cm (20 in), for Chrissake. The nation’s capital took 46 cm (18 in), enough to shut down the federal government and give a quarter of a million workers two days off with pay—120 megabucks down the drain right there. “One expert,” the Times of London reported, “said the forecasters had been deceived by the computer rather than believing their own radar, which told them that heavy snow was already falling…” At least seven deaths were attributed to the unexpected storm.
Now think about this: that story was awarded half a column on page twenty-two of my local paper.
It isn’t even considered news anymore if a supercomputer screws up and paralyzes the government and half the country. By now we all know computers screw up all the time. Y2K, eh? Yet again and again we sit there staring at them, mesmerized, while just outside our window the blizzard is plainly visible on the horizon.
I was perhaps one of the first civilians to “get” that computers screw up all the time. That’s right, I’m so ancient I can actually recall a time when men in white coats said, with a straight face, “Computers don’t make mistakes.” And were believed. I’m going back a quarter-century or so, now. After seven years at three different universities (don’t ask), I’d finally completed the requirements for a degree. A week before graduation, I noticed I hadn’t received any of the standard bumpf everyone else had—invitations to order cap and gown, sit for photos, select a class ring, etc—so I wandered down to the registrar’s office and inquired.
That university, I learned, had recently become one of the first in the country to acquire a supercomputer. It cost a fortune, filled a building the size of a small high school and Did Not Make Mistakes. So they had loaded all the university’s records into it…and then trashed the obsolete hard copies.
You know what comes next. The computer contained no record of my existence. Garbage out…
To graduate, I had to obtain affidavits from every professor I’d had there. Two were dead. One was catatonic: bad STP. Another was underground, hiding somewhere in Algeria with Eldridge Cleaver and Timothy Leary. Imagine the fun. The following October, I finally received my sheepskin in a moving ceremony in the Dean of Men’s office, attended by him, his secretary and myself. I wept, with relief, and the secretary had the grace to sniffle.
The truth? I’ve always kind of liked that computers screw up often. Tell the editor your computer crashed, just as you were about to save the changes, and she will groan in genuine empathy and extend your deadline. Computer-savvy since 1984, I was nonetheless one of the last kids on my block to get e-mail…and only surrendered when I finally grokked that it was at LEAST as unreliable as the post office, that you could always claim not to have received a message, and be believed. A certain amount of inefficiency and screw-up in life affords a level of comfort and deniability I find agreeable…even if it did bite me on a tender spot once in college. And as I say, nowadays we all know computers are unreliable: at least we’re warned.
But when things are so bad that hundreds of thousands of commuters get stranded on snowbound highways without warning, and we have to (and we did have to) spend many gigabucks warding off potential Y2K disaster, and we all take it for granted that our own computer will freeze or crash regularly, and I don’t have a single friend with whom I can reliably exchange formatted documents by e-mail without tsimmis…maybe it’s time to go back to the beginning and start over.
Windows is a shell built on a shell grafted onto QDOS—“Quick and Dirty Operating System,” itself a ripoff of CP/M, written at a time when nobody’d ever heard of a mouse, a GUI, the Internet, streaming video or plastic coasters that hold sixty gigabytes. It was turned into the de facto world standard by a man who honestly believed “640K should be enough for anybody.” Gates may have renamed the system he stole MS-DOS, for Microsoft Disk Operating System, but it still remained, at bottom, a quick and dirty operating system, meant to run on the most primitive hardware imaginable. And for generations now they’ve been simply piling code on top of it, trying to mask, muffle or circumvent the profound flaws built into it.
The only reason it survives is that very soon after it appeared, this ripoff OS was itself able to rip off at least some of the “look and feel” of yet another operating system. That one was originally dreamed up by a truly extraordinary man called Jef Raskin, who named it after his favorite apple: Macintosh.
Then Steve Jobs heard how cool it was, and came and took the project over, and it became the legendary Macintosh Marathon Race To Destiny. You’ve probably read about it: Big Steve bullying a team of eccentric geniuses into outdoing themselves, working twenty-six-hour days and nine-day weeks while chanting, “Artists ship!” Perhaps you’ve heard how they got the very first Mac demo program to run correctly without crashing for the first time ever only a few hours before the Mac was to be publicly unveiled with maximum fanfare. That first Macintosh was a 128K joke, a $2000 Etch A Sketch, and the OS and all the existing programs that ran on it (both of them) were badly flawed. The romantic mystique of Jolt Cola Programming had gotten the machine out the door on deadline, all right, and it really was insanely great—at least potentially.
It just didn’t work very well.
Ever since, they’ve been applying ever more sophisticated and complicated band-aids. The OS that once fit onto a 400K floppy, along with a word processor and a novelette-length document, now has to be loaded from a CD-ROM because it’s so grotesquely bloated that using 1.4MB floppies would take too much time. And today it takes me three times as long to open a blank word-processing document as it used to take me in 1984 on a Fat Mac—three times as long to boot up in the first place, too.
And this is the intelligent operating system. Its imitators are worse, just barely usable. The most commonly used word processor on earth, Word, is a grotesque kludge, the Jabba the Hutt of software, and, as of this year, can no longer open and read Word v1.0 files.
This ain’t workin’—time we started over.
Like I said, I’m not a Luddite. I’d rather run Word 9 on the worst Windows machine ever built, driving a dot matrix printer with fanfold paper, than be forced to earn my living with the best electric typewriter IBM ever made.
But even more, I’d like to have a good computer, with a sensibly designed and well-written operating system, that does not constantly pull my attention away from my work. If only I could find one…
Friends tell me—quite emphatically, some of them—that the system I want is here, and it is called Linux. It never crashes, they say. And Any Day Now, there will be just dozens of applications available to run on it. And the best part, they say, is that Linux will not stop getting better until it is perfect…because nobody owns it, anybody can get at the innards and tinker with them until they’re bug-free and there’s no pressure to ship by deadline ready or not. There is a persuasive logic to all this…
And yet, I wonder if even Linux goes back far enough.
I am massively ignorant here—but it’s my understanding that Linux shares the same basic paradigm as the Mac or Windows systems. One boots up a Linux machine, and it loads software which eventually produces onscreen some sort of “desktop” (even if it doesn’t actually employ that specific visual metaphor): a conceptual environment of some sort, a “place” containing menus of individual task-specific programs called “applications” and the archived documents produced by them. The user may then “launch” one or more applications, switch among them, use them to view, edit or create documents, and ultimately return to the desktop to launch new applications, copy documents, perform file hygiene or shut down the machine.
Is it written somewhere by the finger of God that this is necessarily the only or even the best way to design an operating system? Granted, it’s vastly superior to its immediate predecessors, systems employing the dreaded “modes” metaphor. But is it really the perfect paradigm, beyond improvement?
Since 1984 I’d believed as an article of faith that the Macintosh Metaphor, the graphical user interface Apple developed from concepts pioneered by Xerox, was the best possible interface. Microsoft clearly agrees, for they’ve done their level best to imitate it, given the limitations of their OS. Everyone agrees: whether Apple original or Windows knockoff, long live desktops, icons in folders, applications and documents. It must be the perfect system, since there is no other. There are a few die-hard adherents of the dreaded command line interface (just as there are guys who’ll never abandon their 8-track), but all reasonable people know the perfect conceptual paradigm has been found.
Okay, there are still people who find even GUI computers counterintuitive, unfriendly in the extreme…but those people are just stupid, or stubborn, or superstitious, or they’re just not trying…and in a few generations they’ll have been mercifully selected out of the gene pool anyway. (Never mind that these throwbacks constitute 85 percent of the population of North America and more than 95 percent of humans worldwide.)
So I believed for years. And then I seemed to hear a voice, saying, “Friend Zebadiah—are you sure?”
Just because the Mac/Windows scheme is the best in sight, is it necessarily the best possible? Does your mind naturally store data in folder-trees? What’s so inherently great about little icons? Deeper: what’s so unimprovably wonderful about the metaphor of an imaginary desktop, from which one launches task-specific applications that create mutually incompatible files? Before you got a computer, did you ever have a desktop that remotely resembled the one onscreen?
Wait, now: everyone knows graphic icons are superior to mere text-based—
In all cases? Imagine you’re a newbie, sitting down at your first computer, contemplating a button that depicts a rectangle with an isosceles trapezoid contiguous above. Say you grew up in an apartment building. Or a mansion, or an igloo, or a grass hut, or a geodesic dome, or underground in Coober Pedy, Australia. Which would you decipher quicker? That baffling graphic? Or the simple word ‘home’?
I use Microsoft Word every day of my working life. Over on the left margin is a toolbar: twenty buttons with graphical labels. What do they all mean? Well…I can tell you instantly the three I use regularly. Another three or four are fairly obvious. A diagonally slanting tube with a cap at the lower end, for example, is clearly a lit cigarette: the icon means “No smoking.”
Ah, no—I was mistaken: that’s not a cigarette, it’s a pencil, with its eraser downward, so the icon must mean “delete.” You select some text and then click that icon and it goes away, right?
Sorry, squire: in fact that eraser-head icon means “Undo.”
Most of the other icons aren’t even that self-explanatory…and some just beat the hell out of me. And I’ve been doing this a long time. What would a new user make, for instance, of the icon that depicts something she’s unlikely ever to see or use: a floppy disk?
All these heretical thoughts were put in my head by a correspondent who asked, “Do you know why every time you boot up you have to wait nearly three minutes before you can do anything? And then two more to reopen the document you were working on, find your place and resume working? It’s because the original Mac designer screwed up.”
Hey, has Microsoft got anything better, wise-guy?
“No. But I do.”
Right, I thought.
“My computer boots instantly. Flip the power switch, and you’re looking at the last screen you saw. The stuff you’ll need to resume working on it loads invisibly in the background, in the order you’re liable to need it, during the four or five seconds it’s going to take you to find your place on-screen, set your fingers on home row and decide on your first keystrokes. Everything else, the stuff you might need later on, loads whenever it gets a chance.”
That is neat. Do all the applications—
“My computer doesn’t use applications.”
Huh?
“Or you could say all the applications are active, all the time. But the user never has to think about such things. He just sits down and works. He’s like a surgeon: whatever tools he calls for are put in his hand as he needs them.”
I tried to picture that. Right now I have on my Mac: 1) a word processor with lousy graphics, page layout and movie modules, 2) a paint/draw program with a lousy text-processing module, 3) a photo program with lousy text processing and paint/draw powers, 4) a page layout program with lousy text-processing and graphics capability and 5) an e-mail program incorporating a rotten text-processor, lousy graphics and little else. Endless duplication of effort. And none of those programs can read the files of any of the others without conversion rituals.
“In my interface, all the applications are always open, and totally compatible: together they are the operating system. The user never has to think about anything but what he or she wants to do next.”
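(For the terminally curious, here is a minimal sketch of the instant-resume trick he’s describing. It is strictly my own toy illustration, in Python, not Raskin’s design, and every name in it is invented: put the last saved screen up immediately, then load everything else in the background, most-likely-needed first.)

import threading
import time

def show_screen(state):
    # Stand-in for restoring the last thing the user saw.
    print("[screen]", state)

def load_resource(name):
    # Stand-in for loading a tool, a font, a document, etc.
    time.sleep(0.1)  # pretend this takes a while
    print("[loaded]", name)

def boot(saved_screen, resources_by_priority):
    # Show the last saved screen at once; the user can start working now.
    show_screen(saved_screen)

    # Everything else loads invisibly, most-likely-needed first.
    def background_loader():
        for name in resources_by_priority:
            load_resource(name)

    threading.Thread(target=background_loader, daemon=True).start()

boot("...the paragraph you were editing when you shut down...",
     ["text tools", "spelling", "graphics tools", "mail", "everything else"])
time.sleep(1)  # keep this demo alive long enough for the background loading to finish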
At this point you’re probably wondering why I was still listening to this crackpot.
One word: rep. Turns out this was the second interface this fellow’d dreamed up, and the previous one hadn’t turned out badly at all, however critical of it he may be now. He named that one after his favorite apple: Macintosh.
Yes, that man behind the curtain, to whom you definitely should pay attention, a peer of The Great and Wonderful Woz himself, is the original head of the Macintosh project, Jef Raskin. He is as close to a Renaissance Man as it is possible to be anymore, an intellectual omnivore of frightening appetite: even a partial list of his accomplishments would fill an essay this size, and a list of his hobbies would overflow this website. An outline of his interests might well occupy enough bandwidth to perceptibly slow the Net. Also he’s modest. He recently wrote me: “It is gross exaggeration to say, as you do, that I am the only living human who’s ever thought seriously about interface design. I may have been one of the earlier ones, but there are thousands. Most of whom are doing really dumb work.”
As an antidote to which, he has written a splendid book called The Humane Interface, which was published in 2000. I can count on the fingers of one foot the number of books about software design I’ve ever loved—or for that matter, seen—but this one I found as riveting and mind-expanding as a good science fiction novel. (Full disclosure: I’m quoted, briefly, in a chapter heading.) He even did his own cover illo.
The book reminded me of the late Victor Papanek’s classic Design for the Real World. Both mercilessly, often hilariously, flay existing examples of stupid design, then contrast them with intelligent design, and thus begin to derive therefrom some clear, basic, fundamental principles of what a good design should and should not do. (The products Mr. Papanek designed were physical rather than virtual.) I put Mr. Raskin’s book down understanding much better why most of the software I have to use pisses me off. I almost want to propose a law that henceforward all software designers be required to read The Humane Interface.
That system, by the way—the applicationless, instant-boot computer—actually exists. Or did. Mr. Raskin not only dreamed it up, he designed it, sold it to the money, got it built and even got the sumbitch shipped. Not by some under-financed startup, either, but by a major player, Canon. The Canon Cat, it was called. Sure, its desktopless-dancer paradigm sounds a little weird to indoctrinated Mac or Windows zombies like you and me…but they say people who’d never touched a computer before sat down in front of a Cat and just…well, started working, without knowing the hierarchical file system from the patriarchal legal system. It was a computer aimed at the other 85 percent of the population, who don’t think yours is fun. Major buzz began building…
…and within weeks, Canon was bought by a larger corporation with its own new computer, and the Cat was put in a sack and tossed in the river.
Next time you get in your car, look down. Observe the layout of the accelerator and brake pedals, and take just a moment to contemplate it skeptically, as though it were not handed down by God. At once, you notice the design—absolutely standard across the planet—is fundamentally brain-damaged. Clearly it requires more time and work to get one’s foot from accelerator to brake (lift foot high, move it to the left a measured amount, lower it again) than to get it from brake to accelerator (slide foot to right until it falls off pedal)—which is exactly backwards. If your foot should slip, you won’t accidentally brake, inviting possible minor damage to you from behind…you’ll accidentally accelerate, causing likely fatality to someone else in front of you. Henry Ford just wasn’t thinking that day, that’s all.
But don’t bother designing an intelligent layout. It’s too late. Billions of people already do it the stupid way, by habit. If you introduced a smarter design—somehow miraculously destroyed every single existing car and replaced them with improved versions overnight—hundreds of thousands would still kill themselves unlearning their bad habits. The present scheme is like Chinese writing or the QWERTY keyboard: a disaster it’s way too late to undo.
But at this point in history, despite what all the hype may have caused you to gather, hardly anyone on Earth owns a computer.
It is not yet too late. There’s still time to rethink operating systems and the software that runs on them from the ground up, with long-term goals and capabilities in mind for once, and get it right this time.