
The Pentium Chronicles: The People, Passion, and Politics Behind Intel's Landmark Chips (Practitioners)


by Robert P. Colwell


  In this context, a project’s official performance projections are not just the output of a standard company simulator. Performance projections are project leadership judgments that have four critical bases:

  1. A deep knowledge of what is being designed

  2. The risk that important, as-yet-unresolved issues will be settled favorably

  3. Composition of the performance benchmark suite

  4. Most important, the particular design team’s culture as modulated by the corporate culture

  Overpromise or Overdeliver? To put this relationship of design team to performance projection in a more familiar context, consider this cultural/philosophical question: Is it better for a design project to overpromise and underdeliver, or to underpromise and overdeliver?3 There are rational arguments to be made for both choices, but the point is that each design team will have its own ideas. Moreover, a team must pick one and studiously avoid the other, generally making its selection a point of pride and a justification for ridiculing those who picked the other.

  To treat all teams the same is to cripple the exceptional teams while implicitly insisting that the weaker design teams somehow perform above their capability.

  Table 2.1 shows the various dimensions of the tension between the choices. The overpromising team will take fierce pride in the awesome performance numbers they’ve established as their target, and will consider all other teams to be either timid or simply underperforming. The overdelivering side will see the other teams as untrustworthy, promising the moon and stars while delivering at best a micrometeorite.


  Thus, what appears to be a simple technical determination, establishing a realistic performance target for a new design, is in fact a deep statement of how a design team sees itself and to what heights that team aspires.

  The overpromise side will argue that there are uncertainty bands in any simulation, measurement, or projection. If you always pick the most pessimistic edge of the uncertainty band, then you are seriously sandbagging your performance projection, and it may not be meaningful or useful. It’s a competitive world, and if you can reasonably argue that your project might be able to hit a certain performance target, then by all means make that your project goal and tell everyone about it. Trying to achieve a high target and if necessary falling slightly short might, in fact, yield the best possible final result. Besides, customers have heard unrealistic promises for so long that they routinely downgrade whatever you tell them, and if you give them a realistic performance number, it could well be noncompetitive after being judged down. It’s best to take the complete context into account when projecting performance.

  I have two words for those arguments: Get real. If I’m going to surprise a customer, I want the surprise accompanied by delight, not dismay. I believe in the think-straight, talk-straight school of engineering. Set a clear goal, get the design team’s buy-in on the goal, and then tell the company and the customers where you are going and how you will get there. Do performance projections knowing that the project is in an early state, that surprises are inevitable, and that they are virtually never in your favor. Allow for that by judiciously moderating any preliminary rosier-than-reality performance numbers. Don’t make official performance targets so low that you can’t possibly miss them, but do pick targets you have reasonable confidence you will hit. As the project proceeds and confidence in the numbers grows, adjust upward as necessary. And watch out for more aggressive projections from other teams being used to shame you into raising yours. Practice diplomatic ways of saying, “I’m sure that other project had some valid technical basis for making their ludicrous performance claims, but I do not think it’s in the company’s best interest for me to join them in their delusions.” But after all is said and done, hit your targets. The overpromise camp is expected to miss, but the overdeliver side cannot fall short if it hopes to retain its credibility for the next project. With P6, we chose to underpromise and overdeliver; when we committed to project targets, it was because we had high confidence we could meet or exceed them.

  As with any other area of disagreement between projects, the coexistence of these diametrically opposite viewpoints causes serious friction at the executive level (and therefore among project engineers). A senior vice president will always have two or more project reports on his desk simultaneously, and if he cannot see past the teams’ cultural differences, he will deem the reports inconsistent and irreconcilable. The judgment and experience required to look behind the numbers and see where the judgments are being applied is exquisitely rare. Worse, the microprocessor design business moves so fast that the conservatism or aggressiveness of a design team becomes apparent just after the team has disbanded, been decommissioned, or split apart.

  The best you can do is purposely adopt the philosophy that matches a team and its leadership, make it clear to management and other projects what that philosophy is, and remind them of it at all opportunities. And then execute to it.

  CUSTOMER VISITS

  While we were working through the concepts that would underlie the P6 family, our new marketing organization was catching up on our plans and making the right contacts in the industry so that we could get useful feedback. They scheduled customer visits with all of Intel’s “tier one” customers and many of the tier two and three customers and software vendors. Typically, a customer visit consisted of one or two field sales engineers, a couple of marketing people from our design division, and an architect from the design team.

  Loose Lips

  When I was first asked to go on these visits, I assumed that the field sales engineers would be delighted to have someone from the actual design team accompany them. We didn’t know all the answers to every question that might come up, but we could quickly judge the implications of a customer’s concerns or requests, and we could accurately convey that information to the design team. It turned out, however, that field sales engineers are leery of what they call “factory personnel,” and they are not at all inhibited about informing said factory personnel of their reservations.

  Field engineers have reason to worry about designers talking directly to customers because these meetings require a certain protocol. Egos are involved, as are corporate rivalries and interpersonal histories among the participants. An overriding factor is that the field sales engineer makes his living by keeping the relationship between the two companies healthy. Design engineers can damage that relationship pretty fast just by being themselves, which means blurting out what they believe is the technical truth4 and then letting the facts lead them to the right course of action. But very often what designers find in their truth trunk is really a mix of objective technical facts and things they happen to believe with high enough confidence to qualify as a reliable basis for conjecture or analysis (i.e., opinions). Worse, engineers enjoy what they do. They find technical discussions with razor-smart people to be first-rate intellectual stimulation, and in the ensuing verbal meltdown they can say many things that would have been better left unsaid. So these designers can be loose cannons in meetings with their technical counterparts from other companies and can innocently say things that will take the field engineers weeks of counterspin to undo.

  Memorable Moments

  I could fill this book with accounts of customer visits, but this chapter is already on the long side, so I’ll limit myself to three standouts.

  Microsoft. It amazed me how uniformly short the planning horizons of the companies we visited seemed to be. We were talking to them in 1991 to 1992 about a microprocessor that would be in volume production by 1995, so Intel had to have at least a five-year planning horizon for its microprocessor road maps. This meant we had to at least try to look into the future that far. Initially, I had hoped that we could compare and contrast our vision for that time frame with various customers’ plans and visions, to the benefit of both. But what we found was that almost no companies even tried to see out beyond two years, let alone four or five, and they were not all that interested in the topic.

  Microsoft was an exception. Their developments were as long-lived as Intel’s, and, like Intel, they were very sensitive to the long-term implications of establishing standards and precedents. At one classic meeting with them, a few of us Intel engineers found ourselves at one end of a long table. On one side sat the Chicago (Windows 95) development team. On the other was the Windows NT team. We had hoped to present the general ideas behind the P6 design and our performance targets, and then spend the rest of the meeting getting both teams’ inputs on our proposed physical address extensions.

  It was not to be. As we were presenting the preliminary P6 material, we became uncomfortably aware of a strong undercurrent in the room, some dynamic that was keeping us from getting complete mindshare from the Microsoft folks. I don’t remember what set it off, but all too soon the Windows NT team members were shouting at the Windows 95 engineers about their poor reliability and lack of understanding of their legacy code, and the Windows 95 engineers were equally loud about which team was earning money and which team was just spending it. I’m not sure they even noticed when we left the room.

  Novell. In 1992, networking had not yet been built into the operating system, so if you wanted to interconnect PCs back then, Novell was the answer. All of us sensed that networking was going to become much more important (although few of us, including me, were able to make the conceptual leap to today’s Internet), and we thought that talking to Novell might be a way to see more clearly what was ahead. Our standard pitch was to explain what out-of-order execution was, the general methods we were using to achieve it, and the high performance we expected. Usually, the performance numbers would elicit at least some interest and excitement as the listener pondered, “What could I do with a machine that fast?!”

  4. Richard Feynman noticed this during his investigation of the Challenger disaster and believed he got much more accurate information “from the bottom,” directly from the engineers [19].

  The Lost Art of Roadmapping

  Our customers’ inability to see beyond two or three years shocked me, because tactics must reflect strategy, and strategy must be aligned with the global trends that regularly sweep across the computing industry. Operating systems with graphical user interfaces demanded more DRAM and removable storage capacity; CD-ROMs answered that challenge but needed better buses; PCI was the 1990s answer to that.

  Taken as a group, those things formed a very capable computing platform, and together with compelling games, drove demand for better audio and graphics. And when those building blocks were in place, and CPUs had gotten fast enough and modems cheap enough, the Internet was born, the implications of which are still filtering through every corner of the computing universe.

  You don’t have to see every turn coming, but you have to try to form a general map and plot your company’s best path through it. Very few companies we visited seemed able to do that.

  But the reaction we got was very different from what we expected. The Novell personnel actually yawned, and one said, “We don’t need faster computers. The ones we have now are fast enough.”

  Staggered by this attitude, I invoked Moore’s Law. “It’s real,” I explained. “Computers are going to get orders of magnitude faster for the same or lower cost over the next 10 to 15 years. If you have no way to take advantage of that, you’re going to lose to whoever does.” They remained unimpressed. One very senior person asked to talk to me afterward, but my remaining illusions about that company were dashed when all he wanted to know about was some obscure corner case of a complicated x86 instruction.
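  Colwell’s “orders of magnitude” figure checks out with simple compounding. A minimal sketch, using the usual Moore’s Law rule-of-thumb doubling periods of 18 to 24 months (my arithmetic and my assumed parameters, not figures from the book):

    # Back-of-envelope Moore's Law compounding. The doubling periods
    # are common rules of thumb, assumed here for illustration.
    def speedup(years: float, doubling_period: float) -> float:
        """Relative performance after `years` of steady doubling."""
        return 2 ** (years / doubling_period)

    for years in (10, 15):
        conservative = speedup(years, 2.0)  # doubling every 24 months
        aggressive = speedup(years, 1.5)    # doubling every 18 months
        print(f"{years} years: {conservative:,.0f}x to {aggressive:,.0f}x")
    # 10 years: 32x to 102x
    # 15 years: 181x to 1,024x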

  Compaq. Visiting Compaq was always interesting. In our first encounters, they expressed dismay over our plans to incorporate the L2 cache into the CPU package via the two-die-in-package scheme. They pointed out that, unlike most of their competitors, they could engineer buses and cache designs in ways that gave their systems a performance edge. What Intel thought of as a boon to the OEMs, designing the L2 cache and providing it in the package, Compaq thought of as a lost opportunity to differentiate themselves from their competition. They were even more unhappy, at least initially, about our proposal for P6 to feature glueless multiprocessing.5 They correctly saw that this would lead to multiprocessing systems assembled by OEMs with much less engineering ability than historically required.

  As far as they could see, we were lowering the bar and forcing them into new positions on the technological ladder. I replied that we were following where the fundamental silicon technology was going. The P6 generation was the first with enough transistors to do a credible multiprocessor design, and when we combined that with our two-die-in-package approach and a new frontside bus, our best POR was clear. Compaq eventually turned its attention to designing very high performance servers and was quite successful in that market.

  Insights from Input

  The customer visit phase was enlightening on many fronts. I was amazed, horrified, and indebted by turns.


  Not So Secret Instructions. Every CPU architect has heard this request at least once from people who should have known better: “Would you please put a few special instructions in there, just for us? That way, our code will run faster than our competitor’s.” I learned not to roll my eyes when I heard this, but it is so wrongheaded about something so basic that I had to wonder what else they probably didn’t understand. The reality is that most code is not heavily performance-bound by the lack of one instruction. Yes, you can find some encryption software that may spend 80% of its time in a tight inner loop that wants a “find-first-one rotate through hyperspace inverse square root twice on Tuesdays” instruction that you somehow neglected to provide, but most code won’t notice that instruction or any other simple fix. So the belief that the microprocessor is but one or two instructions away from blindingly higher speed is almost always wrong.
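  This reasoning can be made quantitative with Amdahl’s Law (my framing; the book does not invoke it by name). Even in the favorable encryption case, where 80% of runtime sits in the one loop the magic instruction helps, the overall gain is modest, and for typical code it all but vanishes. A minimal sketch, with illustrative numbers of my choosing:

    # Amdahl's Law: whole-program speedup when `fraction` of runtime
    # gets `local_speedup` times faster and the rest is unchanged.
    # The fractions and the 2x local speedup are assumptions.
    def overall_speedup(fraction: float, local_speedup: float) -> float:
        return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

    print(overall_speedup(0.80, 2.0))  # encryption-style hot loop: ~1.67x
    print(overall_speedup(0.02, 2.0))  # typical code: ~1.01x, unnoticeable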

  Worse, however, is the idea that if someone could provide such instructions, somehow competitors wouldn’t notice. Not in this architecture; code is too easy to disassemble. The time lag between “What is this opcode I’ve never seen before?” and “Oh, that’s what this does” is measured in hours or minutes, not years.
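  To see how short that time lag is, here is a sketch using the modern Capstone disassembler, an anachronistic tool of my choosing rather than anything from the book. The bytes 0F BC happen to encode BSF (bit scan forward), x86’s real “find-first-one” instruction:

    # Decode a "mystery" opcode with Capstone (pip install capstone).
    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    md = Cs(CS_ARCH_X86, CS_MODE_32)
    mystery = b"\x0f\xbc\xc8"  # bytes from a hypothetical rival binary
    for insn in md.disasm(mystery, 0x1000):
        print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")
    # prints: 0x1000: bsf ecx, eax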

  Help from the Software World. By far the shortest planning-time horizons we saw on customer visits were at the software vendors. Those folks behaved at all times as though the world would end 18 months from today and there wasn’t much point in pretending time existed beyond that. Some of them were helpful anyway. John Carmack of id Software comes to mind. He had some deep, useful insights about when and why a software vendor would ever use instruction subsets such as MMX, SSE, and SSE2. He also knew more than anyone else about the difficulties of writing software that would deliver a compelling gaming experience while remaining compatible with dozens of sound and video cards, on top of choosing between the then-competing OpenGL and DirectX standards. John was one of the few people we encountered who really understood the systems and CPU issues and how those would translate into the gaming experience for his Doom and Quake games. [12]

  The Truth about Hardware and Software. I rapidly discovered that software vendors and hardware vendors are not completely aligned. Hardware vendors traditionally earn their best profits on their latest, fastest, and best hardware, and that is what they normally promote most zealously in marketing programs. A few years later, the magic of cleverness, hard work, big investment, and Moore’s Law enables them to produce something even better, and the cycle repeats. Hardware vendors want the buyer to be so dissatisfied with their current products that they will replace them with the latest and greatest. They want software vendors to produce programs that are so demanding of the hardware, yet compelling to the buyers, that buyers will upgrade their systems so that they can run them.

  Software vendors, on the other hand, want to sell as many copies of their new title as possible. If they write the code so that it requires leading-edge hardware, they will drastically limit their immediate market. There may be at most a few million computing platforms with the fastest CPUs, but there are hundreds of millions of computing platforms running older operating systems and hardware. Software vendors must try to ensure that their code runs acceptably on these legacy systems, and even better on newer machines.

  Put in directional terms, hardware vendors are looking forward, trying to make today’s computing platforms obsolete, whereas software vendors are often looking sideways or backward, trying to sell their newest wares to those same platforms.

  ESTABLISHING THE DESIGN TEAM

  Toward the end of P6’s concept phase, we felt confident that some parts of the overall design were mature enough for design staff to begin working on them. Integer ALUs, floating-point functional units, the instruction decoder, the frontside bus, and the register alias tables were candidates, since they seemed unlikely to need major reworking before the project’s end. We had less experience with the out-of-order core, including the reservation stations (which began as split integer/floating-point reservation stations but were later combined into one), the reorder buffer, and much of the memory subsystem.
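  For readers unfamiliar with those structures, here is a conceptual sketch (mine, not P6’s actual design) of what the two core pieces track: a reservation station entry waits until its source operands arrive, while a reorder buffer entry lets results retire in program order:

    # Conceptual out-of-order bookkeeping; field names are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReservationStationEntry:
        op: str                         # micro-op to execute, e.g., "add"
        src1_ready: bool = False        # operand delivered yet?
        src2_ready: bool = False
        src1_tag: Optional[int] = None  # ROB entry that will produce src1
        src2_tag: Optional[int] = None

        def ready_to_issue(self) -> bool:
            return self.src1_ready and self.src2_ready

    @dataclass
    class ReorderBufferEntry:
        dest_reg: str                   # architectural register to update
        value: Optional[int] = None     # result, once execution finishes
        done: bool = False              # retire when done and oldest in ROB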

  At this point, Randy Steck, P6’s design manager, began organizing the design troops with a secret map we had provided that detailed our best estimates of how complex the various units would ultimately be. Randy needed to put his best, most experienced engineers on the hardest parts of the design so that these units would not end up unduly stretching the schedule. At the same time, he had to integrate over 100 new college graduates, and he could not leave any of them leaderless. He also had to convince a significant number of his experienced engineers that managing these new engineers was a good thing. It is a tribute to Randy’s effectiveness as a project manager that he succeeded in extending many design engineers’ effectiveness into first-line supervisory duties while they remained the design’s principal technical drivers.6

 
