The Innovator's Solution

by Clayton Christensen


  The percent future is the percentage of the total market value that the market assigns to a company’s expected future investments. Percent future starts with the total market value (debt plus equity), subtracts the portion attributed to the present value of existing assets and investments, and divides the result by the total market value of debt and equity.

  CSFB/Holt calculates the present value of existing assets as the present value of the cash flows associated with the assets’ wind-down, plus the release of the associated nondepreciating working capital. The HOLT CFROI valuation methodology includes a forty-year fade, over which a company’s returns are assumed to converge to the total market’s average returns.

  Percent Future = [Total Debt and Equity (market) – Present Value of Existing Assets] / [Total Debt and Equity (market)]
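  As a worked illustration of this formula, here is a minimal sketch in Python; the function name and the sample figures are hypothetical, not HOLT’s actual data.

```python
def percent_future(total_market_value, pv_existing_assets):
    """Fraction of market value attributed to expected future investments.

    total_market_value: market value of debt plus equity
    pv_existing_assets: present value of the cash flows from existing
        assets as they wind down, plus released nondepreciating
        working capital
    """
    return (total_market_value - pv_existing_assets) / total_market_value

# Hypothetical example: a firm with a $50 billion market value whose
# existing assets account for $30 billion of that value derives 40
# percent of its valuation from expected future investments.
print(percent_future(50e9, 30e9))  # 0.4
```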

  The companies listed in table 1-1 are not a sequential ranking of Fortune 500 companies, because some of the data required to perform these calculations were not available for some companies. The companies listed in this table were chosen only for illustrative purposes, and were not chosen in any way to suggest that any company’s share price is likely to increase or decline. For more information on the methodology that HOLT used, see CSFB/HOLT’s published materials.

  8. See Stall Points (Washington, DC: Corporate Strategy Board, 1998).

  9. In the text we have focused only on the pressure that equity markets impose on companies to grow, but there are many other sources of intense pressure. We’ll mention just a couple here. First, when a company is growing, there are increased opportunities for employees to be promoted into new management positions that are opening up above them. Hence, the potential for growth in managerial responsibility and capability is much greater in a growing firm than in a stagnant one. When growth slows, managers sense that their possibilities for advancement will be constrained not by their personal talent and performance, but rather by how many years must pass before the more senior managers above them will retire. When this happens, many of the most capable employees tend to leave the company, affecting the company’s ability to regenerate growth.

  Investment in new technologies also becomes difficult. When a growing firm runs out of capacity and must build a new plant or store, it is easy to employ the latest technology. When a company has stopped growing and has excess manufacturing capacity, proposals to invest in new technology typically do not fare well, since the full capital cost and the average manufacturing cost of producing with the new technology are compared against the marginal cost of producing in a fully depreciated plant. As a result, growing firms typically have a technology edge over slow-growth competitors. But that advantage is not rooted so much in the visionary wisdom of the managers as it is in the difference in the circumstances of growth versus no growth.
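  To see the arithmetic behind this bias, here is a minimal sketch in Python; all of the figures are hypothetical and chosen only to illustrate the comparison the text describes.

```python
# Hypothetical per-unit cost comparison (illustrative figures only).
units_per_year = 1_000_000

# New-technology plant: the full capital cost is amortized into the
# unit cost that the investment proposal must carry.
new_plant_capital = 50_000_000   # up-front investment, dollars
amortization_years = 10
new_variable_cost = 4.00         # dollars per unit
new_full_cost = new_variable_cost + new_plant_capital / (
    amortization_years * units_per_year
)

# Fully depreciated plant: only the marginal (variable) cost counts.
old_marginal_cost = 6.00         # dollars per unit

print(f"New plant, full cost:     ${new_full_cost:.2f} per unit")     # $9.00
print(f"Old plant, marginal cost: ${old_marginal_cost:.2f} per unit") # $6.00
# The new technology loses the comparison even though its variable cost
# is lower, unless growth requires building new capacity anyway.
```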

  10. Detailed support for this estimate is provided in note 1.

  11. For example, see James Brian Quinn, Strategies for Change: Logical Incrementalism (Homewood, IL: R.D. Irwin, 1980). Quinn suggests that the first step that corporate executives need to take in building new businesses is to “let a thousand flowers bloom,” then tend the most promising and let the rest wither. In this view, the key to successful innovation lies in choosing the right flowers to tend—and that decision must rely on complex intuitive feelings, calibrated by experience.

  More recent work by Tom Peters (Thriving on Chaos: Handbook for a Management Revolution [New York: Knopf/Random House, 1987]) urges innovating managers to “fail fast”—to pursue new business ideas on a small scale and in a way that generates quick feedback about whether an idea is viable. Advocates of this approach urge corporate executives not to punish failures because it is only through repeated attempts that successful new businesses will emerge.

  Others draw on analogies with biological evolution, where mutations arise in what appear to be random ways. Evolutionary theory posits that whether a mutant organism thrives or dies depends on its fit with the “selection environment”—the conditions within which it must compete against other organisms for the resources required to thrive. Hence, believing that good and bad innovations pop up randomly, these researchers advise corporate executives to focus on creating a “selection environment” in which viable new business ideas are culled from the bad as quickly as possible. Gary Hamel, for example, advocates creating “Silicon Valley inside”—an environment in which existing structures are constantly dismantled, recombined in novel ways, and tested, in order to stumble over something that actually works. (See Gary Hamel, Leading the Revolution [Boston: Harvard Business School Press, 2001].)

  We are not critical of these books. They can be very helpful, given the present state of understanding, because if the processes that create innovations were indeed random, then a context within which managers could accelerate the creation and testing of ideas would indeed help. But if the process is not intrinsically random, as we assert, then addressing only the context is treating the symptom, not the source of the problem.

  To see why, consider the studies of 3M’s celebrated ability to create a stream of growth-generating innovations. A persistent highlight of these studies is 3M’s “15 percent rule”: At 3M, many employees are given 15 percent of their time to devote to developing their own ideas for new-growth businesses. This “slack” in how people spend their time is supported by a broadly dispersed capital budget that employees can tap in order to fund their would-be growth engines on a trial basis.

  But what guidance does this policy give to a bench engineer at 3M? She is given 15 percent “slack” time to dedicate to creating new-growth businesses. She is also told that whatever she comes up with will be subject first to internal market selection pressures, then external market selection pressures. All this is helpful information. But none of it helps that engineer create a new idea, or decide which of the several ideas she might create are worth pursuing further. This plight generalizes to managers and executives at all levels in an organization. From bench engineer to middle manager to business unit head to CEO, it is not enough to occupy oneself only with creating a context for innovation that sorts the fruits of that context. Ultimately, every manager must create something of substance, and the success of that creation lies in the decisions managers must make.

  All of these approaches create an “infinite regress.” By bringing the market “inside,” we have simply backed up the problem: How can managers decide which ideas will be developed to the point at which they can be subjected to the selection pressures of their internal market? Bringing the market still deeper inside simply creates the same conundrum. Ultimately, innovators must judge what they will work on and how they will do it—and what they should consider when making those decisions is what is in the black box. The acceptance of randomness in innovation, then, is not a stepping-stone on the way to greater understanding; it is a barrier.

  Dr. Gary Hamel was one of the first scholars of this problem to raise with Professor Christensen the possibility that the management of innovation actually has the potential to yield predictable results. We express our thanks to him for his helpful thoughts.

  12. The scholars who introduced us to these forces are Professor Joseph Bower of the Harvard Business School and Professor Robert Burgelman of the Stanford Business School. We owe a deep intellectual debt to them. See Joseph L. Bower, Managing the Resource Allocation Process (Homewood, IL: Richard D. Irwin, 1970); Robert Burgelman and Leonard Sayles, Inside Corporate Innovation (New York: Free Press, 1986); and Robert Burgelman, Strategy Is Destiny (New York: Free Press, 2002).

  13. Clayton M. Christensen and Scott D. Anthony, “What’s the BIG Idea?” Case 9-602-105 (Boston: Harvard Business School, 2001).

  14. We have consciously chosen phrases such as “increase the probability of success” because business building is unlikely ever to become perfectly predictable, for at least three reasons. The first lies in the nature of competitive marketplaces. Companies whose actions were perfectly predictable would be relatively easy to defeat. Every company therefore has an interest in behaving in deeply unpredictable ways. A second reason is the computational challenge associated with any system that has a large number of possible outcomes. Chess, for example, is a fully determined game: in principle, its outcome under perfect play is fixed before the first move. But the number of possible games is so great, and the computational challenge so overwhelming, that the outcomes of games even between supercomputers remain unpredictable. A third reason is suggested by complexity theory, which holds that even fully determined systems that do not outstrip our computational abilities can still generate deeply random outcomes. Assessing the extent to which the outcomes of innovation can be predicted, and the significance of any residual uncertainty or unpredictability, remains a profound theoretical challenge with important practical implications.
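  To give a sense of the scale involved, here is a back-of-the-envelope sketch in Python using Claude Shannon’s commonly cited approximations (roughly 35 legal moves per position, games of about 80 half-moves); the figures are rough estimates, not exact counts.

```python
import math

# Rough Shannon-style estimate of the number of possible chess games.
branching_factor = 35  # approximate legal moves per position
game_length = 80       # typical game length in half-moves (plies)

possible_games = branching_factor ** game_length
print(f"about 10^{int(math.log10(possible_games))} possible games")  # about 10^123
```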

  15. The challenge of improving predictability has been addressed somewhat successfully in certain of the natural sciences. Many fields of science appear today to be cut and dried: predictable and governed by clear laws of cause and effect. But it was not always so: Many happenings in the natural world seemed very random and unfathomably complex to the ancients and to early scientists. Research that adhered carefully to the scientific method brought the predictability upon which so much progress has been built. Even where our most advanced theories have convinced scientists that the world is not deterministic, the phenomena are at least predictably random.

  Infectious diseases, for example, at one point just seemed to strike at random. People didn’t understand what caused them. Who survived and who did not seemed unpredictable. Although the outcome seemed random, however, the process that led to the results was not random—it just was not sufficiently understood. With many cancers today, as in the venture capitalists’ world, patients’ probabilities for survival can only be articulated in percentages. This is not because the outcomes are unpredictable, however. We just do not yet understand the process.

  16. Peter Senge calls theories mental models (see Peter Senge, The Fifth Discipline [New York: Bantam Doubleday Dell, 1990]). We considered using the term model in this book, but opted instead to use the term theory. We have done this to be provocative, to inspire practitioners to value something that is indeed of value.

  17. A full description of the process of theory building and of the ways in which business writers and academics ignore and violate the fundamental principles of this process is available in a paper that is presently under review, “The Process of Theory Building,” by Clayton Christensen, Paul Carlile, and David Sundahl. Paper or electronic copies are available from Professor Christensen’s office, cchristensen@hbs.edu. The scholars we have relied upon in synthesizing the model of theory building presented in this paper (and only very briefly summarized in this book) are, in alphabetical order, E. H. Carr, What Is History? (New York: Vintage Books, 1961); K. M. Eisenhardt, “Building Theories from Case Study Research,” Academy of Management Review 14, no. 4 (1989): 532–550; B. Glaser and A. Strauss, The Discovery of Grounded Theory: Strategies of Qualitative Research (London: Weidenfeld and Nicolson, 1967); A. Kaplan, The Conduct of Inquiry: Methodology for Behavioral Science (Scranton, PA: Chandler, 1964); R. Kaplan, “The Role for Empirical Research in Management Accounting,” Accounting, Organizations and Society 11, no. 4–5 (1986): 429–452; T. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962); M. Poole and A. Van de Ven, “Using Paradox to Build Management and Organization Theories,” Academy of Management Review 14, no. 4 (1989): 562–578; K. Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959); F. Roethlisberger, The Elusive Phenomena (Boston: Harvard Business School Division of Research, 1977); Arthur Stinchcombe, “The Logic of Scientific Inference,” chapter 2 in Constructing Social Theories (New York: Harcourt, Brace & World, 1968); Andrew Van de Ven, “Professional Science for a Professional School,” in Breaking the Code of Change, eds. Michael Beer and Nitin Nohria (Boston: Harvard Business School Press, 2000); Karl E. Weick, “Theory Construction as Disciplined Imagination,” Academy of Management Review 14, no. 4 (1989): 516–531; and R. Yin, Case Study Research (Beverly Hills, CA: Sage Publications, 1984).

  18. What we are saying is that the success of a theory should be measured by the accuracy with which it can predict outcomes across the entire range of situations in which managers find themselves. Consequently, we are not seeking “truth” in any absolute, Platonic sense; our standard is practicality and usefulness. If we enable managers to achieve the results they seek, then we will have been successful. Measuring the success of theories by their usefulness is a respected tradition in the philosophy of science, articulated most fully in the school of logical positivism. For example, see R. Carnap, “Empiricism, Semantics, and Ontology,” in Meaning and Necessity, 2nd ed. (Chicago: University of Chicago Press, 1956); W. V. O. Quine, “Two Dogmas of Empiricism,” in From a Logical Point of View (Cambridge, MA: Harvard University Press, 1961); and W. V. O. Quine, “Epistemology Naturalized,” in Ontological Relativity and Other Essays (New York: Columbia University Press, 1969).

  19. This is a serious deficiency of much management research. Econometricians call this practice “sampling on the dependent variable.” Many writers, and many who think of themselves as serious academics, are so eager to prove the worth of their theories that they studiously avoid the discovery of anomalies. In case study research, this is done by carefully selecting examples that support the theory. In more formal academic research, it is done by labeling points of data that don’t fit the model “outliers” and finding a justification for excluding them from the statistical analysis. Both practices seriously limit the usefulness of what is written. It is the discovery of phenomena that the existing theory cannot explain that enables researchers to build better theory, resting on a better classification scheme. We need to do anomaly-seeking research, not anomaly-avoiding research.

  We have urged doctoral students who are seeking potentially productive research questions for their thesis research to simply ask when a “fad” theory won’t work—for example, “When is process reengineering a bad idea?” Or, “Might you ever want to outsource something that is your core competence, and do internally something that is not your core competence?” Asking questions like this almost always improves the validity of the original theory. This opportunity to improve our understanding often exists even for very well done, highly regarded pieces of research. For example, an important conclusion in Jim Collins’s extraordinary book Good to Great (New York: HarperBusiness, 2001) is that the executives of the successful companies he studied weren’t charismatic, flashy men and women. They were humble people who respected the opinions of others. A good opportunity to extend the validity of Collins’s research is to ask a question such as, “Are there circumstances in which you actually don’t want a humble, noncharismatic CEO?” We suspect that there are—and defining the different circumstances in which charisma and humility are virtues and vices could do a great service to boards of directors.

  20. We thank Matthew Christensen of the Boston Consulting Group for suggesting this illustration from the world of aviation as a way of explaining how getting the categories right is the foundation for bringing predictability to an endeavor. Note how important it was for researchers to discover the circumstances in which the mechanisms of lift and stabilization did not result in successful flight. It was the very search for failures that made success consistently possible. Unfortunately, many of those engaged in management research seem anxious not to spotlight instances that their theory did not accurately predict. They engage in anomaly-avoiding, rather than anomaly-seeking, research and as a result contribute to the perpetuation of unpredictability. Hence, we lay much responsibility for the perceived unpredictability of business building at the feet of the very people whose business it is to study and write about these problems. We may, on occasion, succumb to the same problem. We can state that in developing and refining the theories summarized in this book, we have truly sought to discover exceptions or anomalies that the theory would not have predicted; in so doing, we have improved the theories considerably. But anomalies remain. Where we are aware of these, we have tried to note them in the text or notes of this book. If any of our readers are familiar with anomalies that these theories cannot yet explain, we invite them to teach us about them, so that together we can work to improve the predictability of business building further.

  21. In studies of how companies deal with technological change, for example, early researchers suggested attribute-based categories such as incremental versus radical change and product versus process change. Each categorization supported a theory, based on correlation, about how entrant and established companies were likely to be affected by the change, and each represented an improvement in predictive power over earlier categorization schemes. At this stage of the process there rarely is a best-by-consensus theory, because there are so many attributes of the phenomena. Scholars of this process have broadly observed that this confusion is an important and unavoidable stage in building theory. See Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962). Kuhn chronicles at length the energies expended by advocates of various competing theories at this stage, prior to the advent of a paradigm.

  In addition, one of the most influential handbooks for management and social science research was written by Barney G. Glaser and Anselm L. Strauss (The Discovery of Grounded Theory: Strategies of Qualitative Research [London: Weidenfeld and Nicolson, 1967]). Although they name their key concept “grounded theory,” the book really is about categorization, because that process is so central to the building of valid theory. Their term “substantive theory” is similar to our term “attribute-based categories.” They describe how a knowledge-building community of researchers ultimately succeeds in transforming their understanding into “formal theory,” which we term “circumstance-based categories.”

 
