The One Device
One thrust of one of Kay’s arguments—there are many—is that because the smartphone is designed as a consumer device, its features and appeal molded by marketing departments, it has become a vehicle for giving people even more of what they already wanted, a device best at simulating old media, rarely used as a knowledge manipulator.
That may be its chief innovation—supplying us with old media in new formats at a more rapid clip.
“I remembered praying Moore’s law would run out with Moore’s estimate,” Kay says, describing the famous law put forward by computer scientist and Intel co-founder Gordon Moore. It states that every two years, the number of transistors that can fit on a square inch of microchip will double. It was based on studied observation from an industry leader and certainly wasn’t a scientific law—it was, wonderfully, initially represented by Moore’s slapdash hand-drawn sketch of a graph—but it became something of a self-fulfilling prophecy.
“Rather than becoming something that chronicled the progress of the industry, Moore’s law became something that drove it,” Moore has said of his own law, and it’s true. The industry coalesced around Moore’s law early on, and it’s been used for planning purposes and as a way to synchronize software and hardware development ever since.
“Moore only guesstimated it for thirty years,” Kay says. “So, 1995, that was a really good time, because you couldn’t quite do television. Yet. Crank it up another couple of orders of magnitude, and all of a sudden everything that has already been a problem, everything that confused people, was now really cheap.”
Moore’s law is only now, fifty years after its conception, showing any signs of loosening its grip. It describes the reason that we can fit the equivalent of a room-filling 1970s supercomputer into a black, pocket-size rectangle today—and the reason we can stream high-resolution video seamlessly from across the world, play games with complex 3-D graphics, and store mountains of data all from our increasingly slender phones.
“If Neil were to write his book again today,” Kay quips, “it would be called Distracting Themselves to Death.”
Whether you consider the iPhone an engine of distraction, an enabler of connectivity, or both, a good place to start to understand how it’s capable of each is with the transistor.
You might have heard it said that the computer in your phone is now more powerful than the one that guided the first Apollo mission to the moon. That’s an understatement. Your phone’s computer is way, way more powerful. Like, a hundred thousand times more powerful. And it’s largely thanks to the incredible shrinking transistor.
The transistor may be the most influential invention of the past century. It’s the foundation on which all electronics are built, the iPhone included; there are billions of transistors in the most modern models. When it was invented in 1947, of course, the transistor was hardly microscopic—it was made out of a small slab of germanium and a plastic triangle, and had gold contact points that measured about half an inch. You could fit only a handful of them into today’s slim iPhones.
The animating principle behind the transistor was put forward by Julius Lilienfeld in 1925, but his work lay buried in obscure journals for decades. It would be rediscovered and improved upon by scientists at Bell Labs. In 1947, John Bardeen and Walter Brattain, working under William Shockley, produced the first working transistor, forever bridging the mechanical and the digital.
Since computers understand only a binary language—a string of yes-or-no, on-or-off, or 1-or-0—humans need a way to indicate each position to the machine. Transistors can relay our instructions to the computer: amplified could be yes, or on, or 1; not amplified could be no, off, or 0.
Scientists found ways to shrink those transistors to the point that they could be etched directly into a semiconductor. Placing multiple transistors on a single flat piece of semiconducting material created an integrated circuit, or a microchip. Semiconductors—like germanium and another element you might have heard of, silicon—have unique properties that allow us to control the flow of electricity when it travels through them. Silicon is cheap and abundant (it’s the main ingredient in ordinary sand). Eventually, those silicon microchips would get a valley named after them.
More transistors mean, on a very basic level, that more complex commands can be carried out. More transistors, interestingly, do not mean more power consumption. In fact, because each shrunken transistor sips less energy, a chip can pack in more of them without needing more power. So, to recap: As Moore’s law proceeds, computer chips get smaller, more powerful, and less energy intensive.
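For a concrete, if cartoonish, picture of how on-or-off switches add up to “more complex commands,” here is a toy sketch in Python; it is my illustration, not anything drawn from real chip design, with each little function standing in for a few transistors wired into a logic gate.

```python
# Toy illustration only: treat each transistor as a switch that is either
# on (1) or off (0), then combine switches into simple logic gates. Real
# gates are built from a few physical transistors each; a phone chip wires
# up billions of them.

ON, OFF = 1, 0

def not_gate(a):
    """Output is on only when the input switch is off."""
    return ON if a == OFF else OFF

def and_gate(a, b):
    """Output is on only when both input switches are on."""
    return ON if (a == ON and b == ON) else OFF

def or_gate(a, b):
    """Output is on when either input switch is on."""
    return ON if (a == ON or b == ON) else OFF

# These few building blocks are enough, in principle, to compose any
# yes-or-no decision a computer makes; more transistors means more gates,
# and more gates means more elaborate decisions per tick of the clock.
print(and_gate(ON, OFF), or_gate(ON, OFF), not_gate(OFF))  # 0 1 1
```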
Programmers realized they could harness the extra power to create more complex programs, and thus began the cycle you know and perhaps loathe: Every year, better devices come out that can do new and better things; they can play games with better graphics, store more high-res photos, browse the web more seamlessly, and so on.
Here’s a quick timeline that should help put that into context.
The first commercial product to feature a transistor was a hearing aid manufactured by Raytheon in 1952. Transistor count: 1.
In 1954, Texas Instruments released the Regency TR-1, the first transistor radio. It would go on to start a boom that would feed the transistor industry, and it became the bestselling device in history up to that point. Transistor count: 4. So far, so good—and these aren’t even on microchips yet.
Let’s fast-forward, though.
The Apollo spacecraft, which landed humans on the moon in 1969, had an onboard computer, the famed Apollo Guidance Computer. Its transistors were a tangle of magnetic rope switches that had to be stitched together by hand. Total transistor count: 12,300.
In 1971, a scrappy upstart of a company named Intel released its first microchip, the 4004. Its transistors were spread over twelve square millimeters. There were ten thousand nanometers between each transistor. As the Economist helpfully explained, that’s “about as big as a red blood cell… A child with a decent microscope could have counted the individual transistors of the 4004.” Transistor count: 2,300.
The first iPhone processor, a custom chip designed by Apple and Samsung and manufactured by the latter, was released in 2007. Transistor count: 137,500,000.
That sounds like a lot, but the iPhone 7, released nine years after the first iPhone, has roughly twenty-four times as many transistors. Total count: 3.3 billion.
That’s why the most recent app you downloaded has more computing power than the first moon mission.
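If you want to check that claim against the chapter’s own figures, the arithmetic fits in a few lines of Python; transistor count stands in, imperfectly, for computing power here.

```python
# Back-of-the-envelope arithmetic using the transistor counts quoted in this
# chapter. Transistor count is an imperfect proxy for computing power, but it
# makes the scale of the gap clear.

apollo_agc = 12_300          # Apollo Guidance Computer, 1969
intel_4004 = 2_300           # Intel 4004, 1971
iphone_7 = 3_300_000_000     # iPhone 7, 2016

# How far ahead of the moon-landing computer is the phone?
print(f"iPhone 7 vs. Apollo: {iphone_7 / apollo_agc:,.0f}x the transistors")
# ~268,000x, in the same ballpark as "a hundred thousand times more powerful."

# And how closely does that growth track Moore's two-year doubling?
doublings = (2016 - 1971) / 2              # 22.5 doublings since the 4004
projected = intel_4004 * 2 ** doublings
print(f"Moore's-law projection for 2016: {projected:,.0f}")  # ~13.6 billion
# Within a factor of a few of the real 3.3 billion after 45 years of compounding.
```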
Today, Moore’s law is beginning to collapse because chipmakers are running up against atomic-scale space constraints. At the beginning of the 1970s, transistors were ten thousand nanometers apart; today, the gap is fourteen nanometers. By 2020, they might be separated by five nanometers; beyond that, we’re talking a matter of a handful of atoms. Computers may have to switch to new methods altogether, like quantum computing, if they’re going to continue to get faster.
Transistors are only part of the story, though. The less-told tale is how all those transistors came to live in a chip that could fit inside a pocket-size device, provide enough muscle to run Mac-caliber software, and not drain its battery after, like, fourteen seconds.
Through the 1990s, it was assumed most computers would be plugged in and so would have a limitless supply of juice to run their microprocessors. When it came time to look for a suitable processor for a handheld device, there was really only one game in town: a British company that had stumbled, almost by accident, on a breakthrough low-power processor that would become the most popular chip architecture in the world.
Sometimes, a piece of technology is built with an explicit purpose in mind, and it accomplishes precisely that. Sometimes, a serendipitous accident leads to a surprising leap that puts an unexpected result to good use. Sometimes, both things happen.
In the early eighties, two brilliant engineers at one of Britain’s fastest-rising computer companies were trying to design a brand-new chip architecture for the central processing unit (CPU) of their next desktop machine, and they had a couple of prime directives: make it powerful, and make it cheap. The slogan they lived by was “MIPS for the masses.” The idea was to make a processor capable of a million instructions per second (hence, MIPS) that the public could afford. Up to that point, chips that powerful had been tailored for industry. But Sophie Wilson and Stephen Furber wanted to make a computer that capable available to everyone.
I first saw Sophie Wilson in a brief interview clip posted on YouTube. An interviewer asks her the question that’s probably put to inventors and technology pioneers more often than any other, one that, through the course of reporting this book, I’d asked more than a few times myself: How do you feel about the success of what you created? “It’s pretty huge, and it’s got to be a surprise,” the interviewer offers. “You couldn’t have been thinking that in 1983—”
“Well, clearly we were thinking that it was going to happen,” Wilson cuts in, dispensing with the usual faux-humble response. “We wanted to produce a processor used by everybody.” She pauses. “And so we have.”
That’s not hyperbole. The ARM processor that Wilson designed has become the most popular in history; ninety-five billion have been sold to date, and fifteen billion were shipped in 2015 alone. ARM chips are in everything: smartphones, computers, wristwatches, cars, coffeemakers, you name it.
Speaking of naming it, get ready for some seriously nested acronyms. Wilson’s design was originally called the Acorn RISC Machine, after the company that invented it, Acorn, and RISC, which stands for reduced instruction set computing. RISC was a CPU design strategy pioneered by Berkeley researchers who had noticed that most computer programs weren’t using most of a given processor’s instruction set, yet said processor’s circuitry was nonetheless burning time and energy decoding those instructions whenever it ran. Got it? RISC was basically an attempt to make a smarter, more efficient machine by tailoring a CPU to the kinds of programs it would run.
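To make the trade-off concrete, here is a loose sketch in Python using made-up instructions rather than ARM’s (or anyone’s) actual instruction set: one complex instruction the hardware must decode in full versus a handful of simple ones that each do one small, cheap thing.

```python
# Hypothetical contrast between one "complex" instruction and a RISC-style
# sequence of simple ones. This illustrates the design idea only; it is not
# a model of ARM or of any real processor.

memory = {"a": 6, "b": 7, "result": None}
registers = {}

# CISC-flavored: a single elaborate instruction that loads two values from
# memory, multiplies them, and stores the result, all in one decode.
def multiply_in_memory(dst, src1, src2):
    memory[dst] = memory[src1] * memory[src2]

# RISC-flavored: several simple instructions, each cheap to decode and execute.
def load(reg, addr):
    registers[reg] = memory[addr]

def mul(dst, r1, r2):
    registers[dst] = registers[r1] * registers[r2]

def store(addr, reg):
    memory[addr] = registers[reg]

load("r0", "a")
load("r1", "b")
mul("r2", "r0", "r1")
store("result", "r2")
print(memory["result"])  # 42 -- the same answer either way; the RISC bet is
                         # that simple steps can be made to run faster overall
```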
Sophie Wilson, who is transgender, was born Roger Wilson in 1957. She grew up in a DIY family of teachers.
“We grew up in a world which our parents had made,” Wilson says, meaning that literally. “Dad had a workshop and lathes and drills and stuff, he built cars and boats and most of the furniture in the house. Mom did all the soft furnishings for everything, clothes, et cetera.”
Wilson took to tinkering. “By the time I got to university and I wanted a hi-fi, I built one from scratch. If I wanted, say, a digital clock, I built one from scratch,” she says.
She went to Cambridge, where she failed out of the math department. That turned out to be a good thing, because she switched to computer science and joined the school’s newly formed Microprocessor Society. There she met Steve Furber, another inspired gearhead; he would go on to be her engineering partner on several influential projects.
By the mid-1970s, interest in personal computing was percolating in Britain, and, just as in Silicon Valley, it was attracting businessmen on top of hackers and hobbyists. Hermann Hauser, an Austrian grad student who was finishing his PhD at Cambridge and casting around for an excuse not to return home to take up the family wine business, showed up at the Microprocessor Society one day.
“Hermann Hauser is somebody who is chronically disorganized,” Wilson says. “In the seventies, he was trying to run his life with notebooks and pocket organizers. And that was not brilliant—he wanted something electronic. He knew it would have to be low-power, so he went and found someone who knew about low-power electronics, and that was me.”
She agreed to design Hauser a pocket computer. “I started working out certain diagrams for him,” she says. “Then, one time, visiting him to show him the progress, I took along my whole folder which had all the designs I was doodling along with… designs for single-board small machines and big machines and all sorts of things.” Hauser’s interest was piqued. “He asked me, ‘Will these things work?’ and I said, ‘Well, of course they will.’”
Hauser founded the company that would become Acorn—named so it would appear before Apple Computer in listings. Along with Furber, Wilson was Acorn’s star engineer. She designed the first Acorn computer from the ground up, a machine that proved popular among hobbyists. At that time, the BBC was planning a documentary series about the computer revolution and wanted to feature a new machine that it could simultaneously promote as part of a new Computer Literacy Project, an initiative to give every Briton access to a PC.
The race to secure that contract was documented in a 2009 drama, Micro Men, which portrays Sophie, who still went by Roger then, as a fast-talking wunderkind whose computer genius helps Acorn win the contract. After FaceTiming with Wilson, I have to say that the portrait isn’t entirely inaccurate; she’s sharp, witty, and broadcasts a distinct suffers-no-fools-ishness.
The BBC Micro, as the computer would come to be called, was a huge success. It quickly transformed Acorn into one of the biggest tech companies in England. Wilson and the engineers stayed restless, of course. “It was a start-up; the reward of hard work was more hard work.” They set about working on the follow-up, and almost immediately ran into trouble. Specifically, they didn’t like any of the existing microprocessors they had to work with. Wilson, Furber, and the other engineers felt like they’d had to sacrifice quality to ship the Micro. “The board was upside down, the power supply wasn’t very good—there was no end of nastiness to it.” They didn’t want to have to make so many compromises again.
For the next computer, Wilson proposed they make a multiprocessor machine and leave an open slot for a second processor—that way, they’d be able to experiment until they found the right fit. Microprocessors were booming business at the time; IBM and Motorola were dominating the commercial market with high-level systems, and Berkeley and Stanford were researching RISC. Experimenting with that second slot yielded a key insight: “The complex ones that were touted as suitable for high-level languages, as so wonderful—well, the simple ones ran faster,” Wilson says.
Then the first RISC research papers from Stanford, Berkeley, and IBM were published, introducing Wilson to new concepts. Around then, the Acorn crew took a field trip to Phoenix to visit the company that had made their previous processor. “We were expecting a large building stacked with lots of engineers,” Wilson recalls. “What we found were a couple of bungalows on the outskirts of Phoenix, with two senior engineers and a bunch of school kids.” Wilson had an inkling that the RISC approach was their ticket but had assumed that designing a new microchip required a massive research budget. The Phoenix trip changed that: “Hey, if these guys could design a microprocessor, then we could too.” Acorn would design its own RISC CPU, one that would put efficiency first—exactly what they needed.
“It required some luck and happenstance, the papers being published close in time to when we were visiting Phoenix,” Wilson says. “It also required Hermann. Hermann gave us two things that Intel and Motorola didn’t give their staff: He gave us no resources and no people. So we had to build a microprocessor the simplest possible way, and that was probably the reason that we were successful.”
They also had another thing that set them apart from the competition: Sophie Wilson’s brain. The ARM instruction set “was largely designed in my head—every lunchtime Steve and I would walk down to the pub with Hermann, talking about where we’d gotten to, what the instruction set looked like, which decisions we’d taken.” That was critical in convincing their boss they could do what Berkeley and IBM were doing and build their own CPU, Wilson says, and in convincing themselves too. “We might have been timid, but [Hermann] became convinced we knew what we were talking about, by listening to us.”
Back then, CPUs were already more complex than most laypeople could fathom, though they were far simpler than the nanoscale, transistor-stuffed microchips of today. Still, it’s pretty remarkable that the microprocessor architecture that laid the groundwork for the chip that powers the iPhone was originally worked out just by thinking it through.
Curious as to what that process might look and feel like to the mere mortal computer-using public, I asked Wilson to walk me through it.
“The first step is playing fantasy instruction set,” she says. “Design yourself an instruction set that you can comprehend and that does what you want it to.” And then you bounce the ideas off your co-conspirator. “With that going on, then Steve is trying to comprehend the instruction set’s implementation. So it’s no good me dreaming up instruction sets that he can’t implement. It’s a dynamic between the two of us, designing an instruction set that’s sufficiently complex to keep me as a programmer happy and sufficiently simple to keep him as a microarchitecture implementer happy. And sufficiently small to see how we can make it work and prove it.”
Furber wrote the architecture in BBC Basic on the BBC Micro. “The very first ARM was built on Acorn machines,” Wilson says. “We made ARM with computers… and only simple ones at that.”
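The original BBC Basic model isn’t reproduced here, but the general technique of simulating a processor entirely in software looks something like the minimal Python sketch below: a fetch, decode, and execute loop over a made-up three-instruction machine.

```python
# A minimal instruction-set simulator: the general technique behind modeling
# a processor in software. The three instructions here are invented for the
# example and bear no relation to the real ARM instruction set.

def run(program):
    registers = [0] * 4   # four general-purpose registers
    pc = 0                # program counter
    while pc < len(program):
        op, *args = program[pc]          # fetch and decode
        if op == "MOV":                  # MOV dst, constant
            registers[args[0]] = args[1]
        elif op == "ADD":                # ADD dst, src1, src2
            registers[args[0]] = registers[args[1]] + registers[args[2]]
        elif op == "PRINT":              # PRINT src
            print(registers[args[0]])
        pc += 1                          # step to the next instruction
    return registers

# Load 2 and 3 into registers, add them, and print the sum.
run([("MOV", 0, 2), ("MOV", 1, 3), ("ADD", 2, 0, 1), ("PRINT", 2)])  # prints 5
```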
The first ARM chips came back to the Acorn offices in April of 1985.
Furber had built a second processor board that plugged into the BBC computer and took the ARM processor as a satellite. He had debugged the board, but without a CPU, he couldn’t be sure it was right. They booted up the machine. “It ran everything that it should,” Wilson says. “We printed out pi and broke open the champagne.”
But Furber soon took a break from the celebration. He knew he had to check the power consumption, because that was the key to shipping it in the cheap plastic cases that would make the computer affordable. It had to be below five watts.