The Innovator's Solution

by Clayton Christensen


  The functionality of many automobiles has overshot what customers in the mainstream markets can utilize. Lexus, BMW, Mercedes, and Cadillac owners will probably be willing to pay premium prices for more of everything for years to come. But in market tiers populated by middle- and lower-price-point models, car makers find themselves having to add more and better features just to hang onto their market share, and they struggle to convince customers to pay higher prices for these improvements. The reliability of models such as Toyota’s Camry and Honda’s Accord is so extraordinary that the cars often go out of style long before they wear out. As a result, the basis of competition—what is not good enough—is changing in many tiers of the auto market. Speed to market is important. Whereas it used to take five years to design a new-model car, today it takes two. Competing by customizing features and functions to the preferences of customers in smaller market niches is another fact of life. In the 1960s, it was not unusual for a single model’s sales to exceed a million units per year. Today the market is far more fragmented: Annual volumes of 200,000 units are attractive. Some makers now promise that you can walk into a dealership, custom-order a car, and have it delivered in five days—roughly the response time that Dell Computer offers.

  In order to compete on speed and flexibility, automakers are evolving toward modular architectures for their mainstream models. Rather than uniquely designing and knitting together individual components procured from hundreds of suppliers, most auto companies now procure subsystems from a much narrower base of “tier one” suppliers of braking, steering, suspension, and interior cockpit subsystems. Much of this consolidation in the supplier base has been driven by the cost-saving opportunities that it affords—opportunities that often were identified and quantified by analytically astute consulting firms.

  The integrated American automakers have been forced to dis-integrate in order to compete with the speed, flexibility, and reduced overhead cost structure that this new world demands. General Motors, for example, spun off its component operations into a separate publicly traded company, Delphi Automotive, and Ford spun off its component operations as Visteon Corporation. Hence, the same thing is happening to the auto industry that happened to computers: Overshooting has precipitated a change in the basis of competition, which precipitated a change in architecture, which forced the dominant, integrated firms to dis-integrate.

  At the same time, the architecture is becoming progressively more interdependent within most of the subsystems. The models at lower price points in the market need improved performance from their subsystems in order to compete against higher-cost models and brands in the tiers of the market above them. If Kia and Hyundai used their low-cost Korean manufacturing base to conquer the subcompact tier of the market and then simply stayed there, competition would vaporize profits. They must move up, and once their architectures have become modular, the only way to do this is to be fueled by ever-better subsystems.

  The newly interdependent architectures of many subsystems are forcing the tier-one suppliers to be less flexible at their external interface. Automobile designers increasingly need to conform their designs to the specifications of the subsystems, just as desktop computer makers need to conform the designs of their computers to the external interfaces of Intel’s microprocessor and Microsoft’s operating system. As a consequence, we would expect that the ability to earn attractive profits is likely to migrate away from the auto assemblers, toward the subsystem vendors.17

  In chapter 5 we recounted how IBM’s PC business outsourced its microprocessor to Intel and its operating system to Microsoft, in order to be fast and flexible. In the process, IBM hung on to where the money had been—design and assembly of the computer system—and put into business the two companies that were positioned where the money would be. General Motors and Ford, with the encouragement of their consultants and investment bankers, have just done the same thing. They had to decouple the vertical stages in their value chains in order to stay abreast of the changing basis of competition. But they have spun off the pieces of value-added activity where the money will be, in order to stay where the money has been.18

  These findings have pervasive implications for managers seeking to build successful new-growth businesses and for those seeking to keep current businesses robust. The power to capture attractive profits will shift to those activities in the value chain where the immediate customer is not yet satisfied with the performance of available products. It is in these stages that complex, interdependent integration occurs—activities that create steeper scale economies and enable greater differentiability. Attractive returns shift away from activities where the immediate customer is more than satisfied, because it is there that standard, modular integration occurs. We hope that in describing this process in these terms, we might help managers to predict more accurately where new opportunities for profitable growth through proprietary products will emerge. These transitions begin on the trajectories of improvement where disruptors are at work, and proceed up-market tier by tier. This process creates opportunities for new companies that are integrated across these not-good-enough interfaces to thrive, and to grow by “eating their way up” from the back end of an end-use system. Managers of industry-leading businesses need to watch vigilantly in the right places to spot these trends as they begin, because the processes of commoditization and de-commoditization both begin at the periphery, not the core.

  Appendix: The Law of Conservation of Attractive Profits

  Having described these cycles of commoditization and de-commoditization in terms of products, we can now make a more general statement concerning the existence of a general phenomenon that we call the law of conservation of attractive profits. Our friend Chris Rowen, CEO of Tensilica, pointed out to us the existence of this law, whose appellation was inspired by the laws of conservation of energy and matter that we so fondly remember studying in physics class. Formally, the law of conservation of attractive profits states that in the value chain there is a requisite juxtaposition of modular and interdependent architectures, and of reciprocal processes of commoditization and decommoditization, that exists in order to optimize the performance of what is not good enough. The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.19

  We’ll first illustrate how this law operates by examining handheld devices such as the RIM BlackBerry and the Palm Pilot, which constitute the latest wave of disruption in the computing industry. The functionality of these products is not yet adequate, and as a consequence their architectures are interdependent. This is especially true for the BlackBerry, because its “always on” capability mandates extraordinarily efficient use of power. Because of this, the BlackBerry engineers cannot incorporate a one-size-fits-all Intel microprocessor into their device. It has far more capability than is needed. Rather, they need a modular microprocessor design—a system-on-a-chip that is custom-configured for the BlackBerry—so that they do not have to waste space, power, or cost on functionality that is not needed.

  The microprocessor must be modular and conformable in order to permit engineers to optimize the performance of what is not good enough, which is the device itself. Note that this is the opposite situation from that of a desktop computer, where it is the microprocessor that is not good enough. The architecture of the computer must therefore be modular and conformable in order to allow engineers to optimize the performance of the microprocessor. Thus, one side or the other must be modular and conformable to allow for optimization of what is not good enough through an interdependent architecture.

  In similar ways, application software programs that are written to run on Microsoft’s Windows operating systems need to be conformed to Windows’ external interface; the Linux operating system, on the other hand, is modular and conformable to optimize the performance of software that runs on it.
  We have found this “law” to be a useful way to visualize where the money will migrate in the value chain in a number of industries. It is explored in greater depth in a forthcoming book by Clayton Christensen, Scott Anthony, and Erik Roth, Seeing What’s Next (Boston: Harvard Business School Press, 2004).

  This law also has helped us understand the juxtaposition of modular products with interdependent services, because services provided with the products can go through similar cycles of commoditization and de-commoditization, with consequent implications for where attractive profitability will migrate.

  We noted previously that when the functionality and reliability of a product become more than good enough, the basis of competition changes. What becomes not good enough are speed to market and the rapid and responsive ability to configure products to the specific needs of customers in ever-more-targeted market segments. The customer interface is the place in the value chain where the ability to excel on this new dimension of competition is determined. Hence, companies that are integrated in a proprietary way across the interface to the customer can compete on these not-good-enough dimensions more effectively (and be rewarded with better margins) than can those firms that interface with their customers only in an arm’s-length, “modular” manner. Companies that integrate across the retail interface to the customer, in this circumstance, can also earn above-average profits.

  We would therefore not say that Dell Computer is a nonintegrated company, for example. Rather, Dell is integrated across the not-good-enough interface with the customer. The company is not integrated across the more-than-good-enough modular interfaces among the components within its computers. Figure 6-2 summarizes in a simplified way how the profitable points of proprietary integration have migrated in the personal computer industry.

  On the left side of the diagram, which represents the earliest years of the desktop computer industry when product functionality was extremely limited, Apple Computer, with its proprietary architecture and integrated business model, was the most successful firm and was attractively profitable. The firms that supplied the bare components and materials to Apple, and the independent, arm’s-length retailers that sold the computers, were not in nearly as attractive a position. In the late 1990s, the processes of commoditization and de-commoditization had transferred the points at which proprietary integration could build proprietary competitive advantage to

  FIGURE 6-2

  The Shifting Locus of Advantage in the PC Industry’s Process Value Chain

  We believe that this is an important factor that explains why Dell Computer was more successful than Compaq during the 1990s. Dell was integrated across an important not-good-enough interface, whereas Compaq was not. We also would expect that a proper cost accounting would show that Dell’s profits from retailing operations are far greater than the profits from its assembly operations.

  Notes

  1. There are two ways to think of a product or service’s value chain. It can be conceptualized in terms of its processes, that is, the value-added steps required to create or deliver it. For example, the processes of design, component manufacture, assembly, marketing, sales, and distribution are generic processes in a value chain. A value chain can also be thought of in terms of components, or the “bill of materials” that go into a product. For example, the engine block, chassis, braking systems, and electronic subassemblies that go into a car are components of a car’s value chain. It is helpful to keep both ways of thinking about a value chain in mind, since value chains are also “fractal”—that is, they are equally complex at every level of analysis. Specifically, for a given product that goes through the processes that define its value chain, various components must be used. Yet every component that is used has its own sequence of processes through which it must pass. The complexity of analyzing a product’s value chain is essentially irreducible. The question is which level of complexity one wishes to focus on.

  2. This discussion builds heavily on Professor Michael Porter’s five forces framework and his characterization of the value chain. See Michael Porter, Competitive Strategy (New York: Free Press, 1980) and Competitive Advantage (New York: Free Press, 1985). Analysts often use Porter’s five forces framework to determine which firms in a value-added system can wield the greatest power to appropriate profit from others. In many ways, our model in chapters 5 and 6 provides a dynamic overlay on his five forces model, suggesting that the strength of these forces is not invariant over time. It shows how the power to capture an above-average portion of the industry’s profit is likely to migrate to different stages of the value chain in a predictable way in response to the phenomena we describe here.

  3. As a general observation, when you examine what seems to be the heyday of most major companies, it was (or is) a period when the functionality and reliability of their products did not yet satisfy large numbers of customers. As a result, they had products with proprietary architectures, which gave them strong competitive cost advantages. Furthermore, when they introduced new and improved products, the new products could sustain a premium price because functionality was not yet good enough and the new products came closer to meeting what was needed. This can be said for the Bell telephone system; Mack trucks; Caterpillar earthmoving equipment; Xerox photocopiers; Nokia and Motorola mobile telephone handsets; Intel microprocessors; Microsoft operating systems; Cisco routers; the IT consulting businesses of EDS or IBM; the Harvard Business School; and many other companies.

  4. In the following text we will use the term subsystem to mean, generally, an assembly of components and materials that provides a piece of the functionality required for an end-use system to be operational.

  5. Once again, we see linkages to Professor Michael Porter’s notion that there are two viable “generic” strategies: differentiation and low cost (see chapter 2, note 12). Our model describes the mechanism that causes neither of these strategies to be sustainable. Differentiability is destroyed by the mechanism that leads to modularization and dis-integration. Low-cost strategies are viable only as long as the population of low-cost competitors does not have sufficient capacity to supply what customers in a given tier of the market demand. Price is set at the intersection of the supply and demand curves—at the cash cost of the marginal producer. When the marginal producer is a higher-cost disruptee, then the low-cost disruptors can make attractive profits. But when the high-cost competitors are gone and the entire market demand can be supplied by equally low-cost suppliers of modular products, then what was a low-cost strategy becomes an equal-cost strategy.
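The pricing mechanism in this note—that price settles at the cash cost of the marginal producer, so low-cost disruptors earn attractive margins only while a higher-cost disruptee is still needed to meet demand—can be illustrated with a small numeric sketch. The costs and capacities below are hypothetical, chosen purely to show the dynamic:

```python
# Illustrative only: each supplier is a (cash cost per unit, capacity) pair.
# Price settles at the cash cost of the marginal producer -- the
# highest-cost supplier still required to satisfy total demand.

def market_price(suppliers, demand):
    """Return the clearing price: the cash cost of the marginal producer."""
    filled = 0
    for cost, capacity in sorted(suppliers):  # cheapest capacity fills demand first
        filled += capacity
        if filled >= demand:
            return cost  # this supplier is the marginal producer
    raise ValueError("demand exceeds total industry capacity")

# While a high-cost disruptee (cost 10) is still needed to meet demand of 100,
# the low-cost disruptors (cost 6) earn a 4-per-unit margin.
early = [(6, 50), (6, 30), (10, 40)]
print(market_price(early, 100))  # -> 10

# Once low-cost capacity alone covers demand, price falls to the disruptors'
# own cost: the low-cost strategy has become an equal-cost strategy.
late = [(6, 60), (6, 60)]
print(market_price(late, 100))  # -> 6
```

The two calls trace the note's argument: margins exist only at the intersection where the marginal producer is higher-cost than you are.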

  6. Not all the components or subsystems in a product contribute to the specific performance attributes of value to customers. Those that drive the performance that matters are the “performance-defining” components or subsystems. In the case of a personal computer, for example, the microprocessor, the operating system, and the applications have long been the performance-defining subsystems.

  7. Analysts’ estimates of how much of the industry’s money stayed with computer assemblers and how much “leaked” through to back-end or subsystem suppliers are summarized in “Deconstructing the Computer Industry,” BusinessWeek, 23 November 1992, 90–96. As we note in the appendix to this chapter, we would expect that much of Dell’s profit comes from its direct-to-customer retailing operations, not from product assembly.

  8. With just a few seconds’ reflection, it is easy to see that the investment management industry suffers from the problem of categorization along industry-defined lines that are irrelevant to profitability and growth. Hence, they create investment funds for “technology companies” and other funds for “health care companies.” Within those portfolios are disruptors and disruptees, companies on the verge of commoditization and those on the verge of decommoditization, and so on. Michael Mauboussin, chief investment strategist at Credit Suisse First Boston, recently wrote an article on this topic. It builds on the model of theory building that we have summarized in the introduction of this book, and its application to the world of investing is very insightful. See Michael Mauboussin, “No Context: The Importance of Circumstance-Based Categorization,” The Consilient Observer (New York: Credit Suisse First Boston, 14 January 2003).

  9. Those of our readers who are familiar with the disk drive industry might see a contradiction between our statement that much of the money in the industry was earned in head and disk manufacturing and the fact that the leading head and disk makers, such as Read-Rite and Komag, have not prospered. They have not prospered because most of the leading disk drive makers—particularly Seagate—integrated into their own head and disk making so that they could capture the profit instead of the independent suppliers.

  10. IBM did have profitable volume in 3.5-inch drives, but it was at the highest-capacity tiers of that market, where capacity was not good enough and the product designs therefore had to be interdependent.

  11. A more complete account of these developments has been published in Clayton M. Christensen, Matt Verlinden, and George Westerman, “Disruption, Disintegration and the Dissipation of Differentiability,” Industrial and Corporate Change 11, no. 5 (2002): 955–993. The first Harvard Business School working papers that summarized this analysis were written and broadly circulated in 1998 and 1999.

  12. We have deliberately used present- and future-tense verbs in this paragraph. The reason is that at the time this account was first written and submitted to publishers, these statements were predictions. Subsequently, the gross margins in IBM’s 2.5-inch disk drive business deteriorated significantly, as this model predicted they would. However, IBM chose to sell off its entire disk drive business to Hitachi, giving to some other company the opportunity to sell the profitable, performance-enabling components for this class of disk drives.

 
