Only now all the CPU parts were on one chip, instead of a bunch of chips, and it was a microprocessor. And it had pins that came out, and all you had to do was use those pins to connect things to it, like memory chips.
Then I realized what the Altair was—that computer everyone was so excited about at the meeting. It was exactly like the Cream Soda Computer I’d designed five years before! Almost exactly. The difference was that the Altair had a microprocessor—a CPU on one chip—and mine had a CPU that was on several chips. The other difference was that someone was selling this one—for $379, as I recall. Other than that, there was pretty much no difference. And I designed the Cream Soda five years before I ever laid eyes on an Altair.
It was as if my whole life had been leading up to this point. I’d done my minicomputer redesigns, I’d done data onscreen with Pong and Breakout, and I’d already done a TV terminal. From the Cream Soda Computer and others, I knew how to connect memory and make a working system. I realized that all I needed was this Canadian processor or another processor like it and some memory chips. Then I’d have the computer I’d always wanted!
Oh my god. I could build my own computer, a computer I could own and design to do any neat things I wanted to do with it for the rest of my life.
I didn’t need to spend $400 to get an Altair—which really was just a glorified bunch of chips with a metal frame around it and some lights. That was the same as my take-home salary, I mean, come on. And to make the Altair do anything interesting, I’d have to spend way, way more than that. Probably hundreds, even thousands of dollars. And besides, I’d already been there with the Cream Soda Computer. I was bored with it then. You never go back. You go forward. And now, the Cream Soda Computer could be my jumping-off point.
No way was I going to do that. I decided then and there I had the opportunity to build the complete computer I’d always wanted. I just needed any microprocessor, and I could build an extremely small computer I could write programs on. Programs like games, and the simulation programs I wrote at work. The possibilities went on and on. And I wouldn’t have to buy an Altair to do it. I would design it all by myself.
That night, the night of that first meeting, this whole vision of a kind of personal computer just popped into my head. All at once. Just like that.
• o •
And it was that very night that I started to sketch out on paper what would later come to be known as the Apple I. It was a quick project, in retrospect. Designing it on paper took a few hours, though it took a few months longer to get the parts and study their data sheets.
I did this project for a lot of reasons. For one thing, it was a project to show the people at Homebrew that it was possible to build a very affordable computer—a real computer you could program for the price of the Altair—with just a few chips. In that sense, it was a great way to show off my real talent, my talent of coming up with clever designs, designs that were efficient and affordable. By that I mean designs that would use the fewest components possible.
I also designed the Apple I because I wanted to give it away for free to other people. I gave out schematics for building my computer at the next meeting I attended.
This was my way of socializing and getting recognized. I had to build something to show other people. And I wanted the engineers at Homebrew to build computers for themselves, not just assemble glorified processors like the Altair. I wanted them to know they didn’t have to depend on an Altair, which had these hard-to-understand lights and switches. Every computer up to this time looked like an airplane cockpit, like the Cream Soda Computer, with switches and lights you had to manipulate and read.
Instead they could do something that worked with a TV and a real keyboard, sort of like a typewriter. The kind of computer I could imagine.
As I told you before, I had already built a terminal that let you type regular words and sentences to a computer far away, and that computer could send words back to the TV. I just decided to add the computer—my microprocessor with memory—into the same case as that terminal I’d already built.
Why not make the faraway computer this little microprocessor that’s right there in the box?
I realized that since you already had a keyboard, you didn’t need a front panel. You could type things in and see things onscreen. Because you have the computer, the screen, and the keyboard, too.
So people now say this was a far-out idea—to combine my terminal with a microprocessor—and I guess it would be for other people. But for me, it was the next logical step.
That first Apple computer I designed—even though I hadn’t named it an Apple or anything else yet—well, that was just when everything fell into place. And I will tell you one thing. Before the Apple I, all computers had hard-to-read front panels and no screens and keyboards. After Apple I, they all did.
• o •
Let me tell you a little about that first computer—what is now called the Apple I—and how I designed it.
First, I started sketching out how I thought it would work on paper. This is the same way I used to design minicomputers on paper in high school and college, though of course they never got built. And the first thing was I had to decide what CPU I would use. I found out that the CPU of the Altair—the Intel 8080—cost almost more than my monthly rent. And a regular person couldn’t purchase it in small or single-unit quantities anyway. You had to be a real company and probably fill out all kinds of credit forms for that.
Luckily, though, I’d been talking to my cubicle mates at HP about the Homebrew Club and what I was planning, and Myron Tuttle had an idea. (You remember him: the guy whose plane almost crashed when I was in it.) He told me there was a deal you could get from Motorola if you were an HP employee. He told me that for about $40, I could buy a Motorola 6800 microprocessor and a couple of other chips. I thought, Oh man, that’s cheap. So very quickly I knew exactly what processor I would have.
Another thing that happened really early on was I realized—and it was an important realization—that our HP calculators were computers in a real sense. They were as real as the Altair or the Cream Soda Computer or anything else. I mean, a calculator had a processor and memory. But it had something else, too, a feature computers didn’t have at the time. When you turned a calculator on, it was ready to go: it had a program in it that started up and then it was ready for you to hit a number. So it booted up automatically and just sat there, waiting for you to tell it to do something. Say you hit a “5.” The processor in the calculator can see that a button is pushed, and it says, Is that a 1? No. A 2? No. A 3, 4 … it’s a 5. And it displays a 5. The program in a calculator that did that was on three little ROM (read-only memory) chips—chips that hold their information even if you turn the power off.
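To make that concrete, here is a minimal sketch in C of the kind of loop a calculator’s startup program runs: power on, sit waiting for a key, check which digit it is, display it. Reading from standard input stands in for the real keypad hardware, which of course worked nothing like a C program.

```c
#include <stdio.h>

/* Sketch of a calculator's power-on program: boot straight into a loop
 * that waits for a key and displays it. stdin is an invented stand-in
 * for the real keypad hardware. */
int main(void) {
    for (;;) {                            /* ready to go the moment power is on */
        int key = getchar();              /* sit and wait for a keypress */
        if (key == EOF) break;
        for (int d = 0; d <= 9; d++) {    /* "Is that a 1? No. A 2? No... it's a 5." */
            if (key == '0' + d) {
                printf("%d\n", d);        /* display the digit */
                break;
            }
        }
    }
    return 0;
}
```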
So I knew I would have to get a ROM chip and build the same kind of program, a program that would let the computer turn on automatically. (An Altair or even my Cream Soda Computer didn’t do anything until you’d spent about half an hour setting switches to put a program in.) With the Apple I, I wanted to make the job of getting a program into memory easier. This meant I needed to write one small program that would run as soon as you turned your computer on. The program would tell the computer how to read the keyboard. It would let you enter data into memory, see what data was in memory, and make the processor run a program at a specific point in memory.
Loading a program took about half an hour on the Altair; on the Apple I, with its keyboard, it took less than a minute.
What Is ROM?
Read-only memory (ROM) is a term you’ll hear a lot in this book. A ROM chip can only be programmed once and keeps its information even if the power is turned off. A ROM chip typically holds programs that are important for a computer to remember.
Like what to do when you turn it on, what to display, and how to recognize connected devices like keyboards, printers, or monitors. In my Apple I design, I got the idea from the HP calculators (which used two ROM chips) to include ROMs. Then I could write a “monitor” program so the computer could keep track of what keys were being pressed, and so on.
If you wanted to see what was in memory on an Altair, it might take you half an hour of looking at little lights. But on the Apple I, it took all of a second to look at it on your TV screen.
I ended up calling my little program a “monitor” program since that program’s main job was going to be to monitor, or watch, what you typed on the keyboard. This was a stepping stone—the whole purpose of my computer, after all, was to be able to write programs. Specifically, I wanted it to run FORTRAN, a popular language at the time.
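For a feel of what a monitor program does, here is a toy version in C. It shows the three jobs described above (examine memory, deposit data into memory, and run from an address), but the single-letter commands and the tiny memory array are invented for illustration; this is not the actual Apple I monitor’s syntax.

```c
#include <stdio.h>

static unsigned char mem[256];   /* invented stand-in for the computer's memory */

int main(void) {
    char cmd;
    unsigned addr, val;
    for (;;) {                                   /* runs as soon as power comes on */
        printf("> ");
        if (scanf(" %c", &cmd) != 1) break;
        if (cmd == 'e') {                        /* examine: e <addr> */
            scanf("%x", &addr);
            printf("%02X: %02X\n", addr & 0xFF, mem[addr & 0xFF]);
        } else if (cmd == 'd') {                 /* deposit: d <addr> <byte> */
            scanf("%x %x", &addr, &val);
            mem[addr & 0xFF] = (unsigned char)val;
        } else if (cmd == 'r') {                 /* run: jump to an address */
            scanf("%x", &addr);
            printf("running at %02X\n", addr & 0xFF);  /* a real monitor would jump here */
        }
    }
    return 0;
}
```

Typing d 0A 44 here would put hex 44 into location 0A, e 0A would show it, and r 0A stands for handing control to whatever program sits there.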
So the idea in my head involved a small program in read-only memory (ROM) instead of a computer front panel of lights and switches. You could input data with a real keyboard and look at your results on a real screen. I could get rid of that front panel entirely, the one that made a computer look like what you’d see in an airplane cockpit.
Every computer before the Apple I had that front panel of switches and lights. Every computer since has had a keyboard and a screen. That’s how huge my idea turned out.
• o •
My style with projects has always been to spend a lot of time getting ready to build it. Now that I saw my own computer could be a reality, I started collecting information on all the components and chips that might apply to a computer design.
I would drive to work in the morning—sometimes as early as 6:30 a.m.—and there, alone in the early morning, I would quickly read over engineering magazines and chip manuals. I’d study the specifications and timing diagrams of the chips I was interested in, like the $40 Motorola 6800 Myron had told me about. All the while, I’d be preparing the design in my head.
The Motorola 6800 had forty pins—connectors—and I had to know precisely how each one of those forty pins worked. Because I was only doing this part-time, this was a long, slow process. And several weeks passed without any actual construction happening. Finally I came in one night to draw the design on paper. I had sketched it crudely before. But that night I came in and drew it carefully on my drafting board at Hewlett-Packard.
It was a small step from there to a completely built computer. I just needed the parts.
• o •
I started noticing articles saying that a new, superior-sounding microprocessor was going to be introduced soon at a show, WESCON, in San Francisco. It especially caught my attention that this new microprocessor—the 6502 from MOS Technologies in Pennsylvania—would be pin-for-pin compatible with, electrically the same as, the Motorola 6800 I had drafted my design around. That meant I could just pop it in without any redesigning at all.
The next thing I heard was that it was going to be sold over the counter at MOS Technologies’ booth at WESCON. The fact that this chip was so easy to get is how it ended up being the microprocessor for the Apple I.
And the best part is they cost $20 each, half of what the Motorola chip would have cost me through the HP deal.
WESCON, on June 16-18, 1975, was being held in San Francisco’s famous Cow Palace. A bunch of us drove up there and I waited in line in front of MOS Technologies’ table, where a guy named Chuck Peddle was peddling the chips.
Right on the spot I bought a few for $20 each, plus a $5 manual.
Now I had all the parts I needed to start constructing the computer.
• o •
A couple of days later, at a regular meeting of the Homebrew Computer Club, a number of us excitedly showed the 6502 microprocessors we’d bought. More people in our club now had microprocessors than ever before.
I had no idea what the others were going to do with their 6502s, but I knew what I was going to do with mine.
To actually construct the computer, I gathered my parts together. I did this construction work in my cubicle at HP. On a typical day, I’d go home after work and eat a TV dinner or make spaghetti and then drive the five minutes back to work where I would sign in again and work late into the night. I liked to work on this project at HP, I guess because it was an engineering kind of environment. And when it came time to test or solder, all the equipment was there.
First I looked at my design on draft paper and decided exactly where I would put which chips on a flat board so that the wires between chips would be short and neat-looking. In other words, I organized and grouped the parts as they would sit on the board.
The majority of my chips were from my video terminal—the terminal I’d already built to access the ARPANET. In addition, I had the microprocessor, a socket to put another board with random-access memory (RAM) chips on it, and two peripheral interface adapter chips for connecting the 6502 to my terminal.
I used sockets for all my chips because I was nuts about sockets. This traced back to my job at Electroglas, where the soldered chips that went bad weren’t easily replaced. I wanted to be able to easily remove bad chips and replace them.
I also had two more sockets that could hold a couple of PROM chips. These programmable read-only memory chips could hold data like a small program and not lose the data when the power was off.
Two of these PROM chips that were available to me in the lab could hold 256 bytes of data—enough for a very tiny program. (Today, many programs are a million times larger than that.) To give you an idea of what a small amount of memory that is, a word processor needs that much for a single sentence today.
I decided that these chips would hold my monitor program, the little program I came up with so that my computer could use a keyboard instead of a front panel.
What Was the ARPANET?
Short for the Advanced Research Projects Agency Network, and developed by the U.S. Department of Defense, the ARPANET was the first operational packet-switching network that could link computers all over the world. It later evolved into what everyone now knows as the global Internet. The ARPANET and the Internet are based on a type of data communication called “packet switching.” A computer can break a piece of information down into packets, which can be sent over different wires independently and then reassembled at the other end. Previously, circuit switching was the dominant method—think of the old telephone systems of the early twentieth century. Every call was assigned a real circuit, and that same circuit was tied up during the length of the call.
The fact that the ARPANET used packet switching instead of circuit switching was a phenomenal advance that made the Internet possible.
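As a rough illustration of the packet idea, here is a short C sketch that breaks a message into numbered packets, lets two of them trade places in transit, and reassembles the message by sequence number at the other end. The packet format is invented for illustration; real ARPANET packets carried far more bookkeeping.

```c
#include <stdio.h>
#include <string.h>

#define PKT_DATA 8                 /* invented payload size, for illustration */

struct packet {
    int  seq;                      /* position in the original message */
    char data[PKT_DATA + 1];
};

int main(void) {
    const char *msg = "Hello from the ARPANET!";
    struct packet pkts[8];
    int n = 0;

    /* break the message down into numbered packets */
    for (size_t off = 0; off < strlen(msg); off += PKT_DATA, n++) {
        pkts[n].seq = n;
        strncpy(pkts[n].data, msg + off, PKT_DATA);
        pkts[n].data[PKT_DATA] = '\0';
    }

    /* simulate independent delivery: two packets swap places in transit */
    struct packet tmp = pkts[0];
    pkts[0] = pkts[1];
    pkts[1] = tmp;

    /* reassemble at the other end by sequence number */
    char out[64] = "";
    for (int want = 0; want < n; want++)
        for (int i = 0; i < n; i++)
            if (pkts[i].seq == want)
                strcat(out, pkts[i].data);

    printf("%s\n", out);           /* prints the original message, intact */
    return 0;
}
```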
• o •
Wiring this computer—actually soldering everything together—took one night. The next few nights after that, I had to write the little 256-byte monitor program with pen and paper. I was good at making programs small, but this was a challenge even for me.
This was the first program I ever wrote for the 6502 microprocessor. I wrote it out on paper, which wasn’t the normal way even then. The normal way to write a program at the time was to pay for computer usage. You would type into a computer terminal you were paying to use, renting time on a time-share terminal, and that terminal was connected to this big expensive computer somewhere else. That computer would print out a version of your program in 1s and 0s that your microprocessor could understand.
This 1 and 0 program could be entered into RAM or a PROM and run as a program. The hitch was that I couldn’t afford to pay for computer time. Luckily, the 6502 manual I had described what 1s and 0s were generated for each instruction, each step of a program. MOS Technologies even provided a pocket-size card you could carry that included all the 1s and 0s for each of the many instructions you needed.
So I wrote my program on the left side of the page in machine language. As an example, I might write down “LDA #44,” which means to load data corresponding to 44 (in hexadecimal) into the microprocessor’s A register.
On the right side of the page, I would write that instruction in hexadecimal using my card. For example, that instruction would translate into A9 44. The instruction A9 44 stood for 2 bytes of data, which equated to 1s and 0s the computer could understand: 10101001 01000100.
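In effect, the pocket card was a lookup table from instruction to opcode byte, and hand-assembling meant applying that table line by line. Here is a tiny C sketch of that step; the card structure and lookup function are invented, but the opcode values shown (A9 for immediate LDA, 8D for absolute STA) are the real 6502 ones.

```c
#include <stdio.h>
#include <string.h>

/* Two sample entries from a hypothetical version of the pocket card. */
struct entry { const char *mnemonic; unsigned char opcode; };

static const struct entry card[] = {
    { "LDA #", 0xA9 },   /* load the A register, immediate operand */
    { "STA",   0x8D },   /* store the A register, absolute address */
};

static int lookup(const char *m) {
    for (size_t i = 0; i < sizeof card / sizeof card[0]; i++)
        if (strcmp(card[i].mnemonic, m) == 0)
            return card[i].opcode;
    return -1;                       /* not on the card */
}

int main(void) {
    /* hand-assemble "LDA #$44": opcode from the card, operand copied down */
    printf("LDA #$44 -> %02X %02X\n", lookup("LDA #"), 0x44);
    return 0;
}
```

Running it prints LDA #$44 -> A9 44, exactly the right-hand column described above.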
Writing the program this way took about two or three pieces of paper, using every single line.
I was barely able to squeeze what I needed into that tiny 256-byte space, but I did it. I wrote two versions of it: one that let the press of a key interrupt whatever program was running, and the other that only let a program check whether a key was being struck. The second method is called “polling.”
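Here is a rough sketch in C of the difference between those two versions; the flag and the handler are invented stand-ins for the real keyboard hardware. In the interrupt style, the hardware announces the key the moment it is struck; in the polling style, the running program has to keep asking.

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins for the keyboard hardware. */
static volatile bool key_ready = false;
static volatile int  key_code  = 0;

/* Interrupt style: the hardware calls this the instant a key is struck,
 * no matter what the running program happens to be doing. */
static void keyboard_interrupt(int code) {
    key_code  = code;
    key_ready = true;
}

/* Polling style: the running program has to remember to check. */
static int poll_keyboard(void) {
    if (!key_ready)
        return -1;                 /* no key yet; go back to other work */
    key_ready = false;
    return key_code;
}

int main(void) {
    keyboard_interrupt('A');       /* simulate a keypress arriving */
    int k = poll_keyboard();       /* the polling version notices it later */
    if (k != -1)
        printf("got key %c\n", k);
    return 0;
}
```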
During the day, I took my two monitor programs and some PROM chips over to another HP building where they had the equipment to permanently burn the 1s and 0s of both programs into the chips.
But I still couldn’t complete—or even test—these chips without memory. I mean computer memory, of course. Computers can’t run without memory, the place where they do all their calculations and record-keeping.
The most common type of computer memory at the time was called “static RAM” (SRAM). My Cream Soda Computer, the Altair, and every other computer at the time used that kind of memory. I borrowed thirty-two SRAM chips—each one could hold 1,024 bits—from Myron Tuttle. Altogether that was 4K bytes, which was 16 times more than the 256 bytes the Altair came with.
I wired up a separate SRAM board with these chips inside their sockets and plugged it into the connector in my board.
With all the chips in place, I was ready to see if my computer worked.
• o •
The first step was to apply power. Using the power supplies near my cubicle, I hooked up the power and analyzed signals with an oscilloscope. For about an hour I identified problems that were obviously keeping the microprocessor from working. At one point I had two pins of the microprocessor accidentally shorting each other, rendering both signals useless. At another point one pin bent while I was placing it in its socket.
But I kept going. You see, whenever I solve a problem on an electronic device I’m building, it’s like the biggest high ever. And that’s what drives me to keep doing it, even though you get frustrated, angry, depressed, and tired doing the same things over and over. Because at some point comes the Eureka moment. You solve it.