Conclusion
Synthesis—the melding of ideas from distant sources into coherent innovation concepts—is crucial for transformative innovation. Unfortunately, synthesis does not come naturally to most organizations and is particularly challenging for large enterprises. To manage scale, companies typically divide themselves into ever more isolated compartments. They create market-facing business units or divide their R&D function into specialized subunits. They hire ever more specialized people in particular technologies or in particular markets. They reward people according to goals defined by narrow organizational boundaries (e.g., How many patents did your department file? How much market share did your business unit gain? How much cost did you take out of purchased components?). None of these are, by themselves, inherently bad things to do. But, unfortunately, they impede the kinds of cross-flow of ideas, talent, expertise, and experience that might lead to transformational innovations. What is ironic, of course, is that larger organizations may have more of the pieces for transformational innovation available to them, either within their broad portfolio of businesses or through their external networks. Unfortunately, having the pieces is not enough. They need to be melded.
Transformative innovation through synthesis doesn’t happen through pure luck. It happens where organizations build that capability through a set of interrelated choices about people, processes, and structures. The capability for synthesis is rooted in people who can bridge diverse fields of knowledge and domains of expertise; it is rooted in processes that enable experimentation and learning; and it is rooted in structures that drive, rather than impede, the flow of ideas from diverse sources. All of these are, of course, completely in the hands of management. A company’s ability to do synthesis well is a function not of size but of management.
7
WHEN TO HOLD ’EM AND WHEN TO FOLD ’EM
Uncertainty, Ambiguity, and the Art and Science of Selecting Projects
In October 2003, Joshua Boger, CEO of Vertex Pharmaceuticals, and his senior management team confronted a problem.1 The company Boger founded twenty years before was at a critical juncture. It had four molecules that could potentially become drugs for treating several different diseases. Each “candidate” (to use the parlance of the pharmaceutical industry) had thus far been tested only in animals or in small groups of human patients, but the preliminary data for all were promising. To turn any of these molecules into a drug would next require much larger and more expensive clinical trials. The company had enough resources to invest in only two of the programs for further development. Boger and his team had to pick which to pursue. It was a high-stakes decision given that these would be the first programs the company would take forward on its own (its previous projects had all been conducted through partnerships with larger pharmaceutical companies). Picking the wrong projects could lead to financial ruin—picking the right ones could be worth billions of dollars. There were many possible criteria along which to evaluate the projects (e.g., potential market size, return on investment, likelihood of technical success, fit with the company’s mission), and each project excelled along different criteria. In addition, pharmaceutical R&D is notoriously risky—most potential products do not prove to be safe or effective enough to reach the market. And even if a drug is approved, demand is extremely hard to predict, given that its benefits, and thus its appeal, cannot be determined until years of clinical trials have been conducted. Further complicating matters was the fact that people in different parts of the organization had different judgments and opinions about the prospects of each project for both clinical and commercial success. The commercial people favored two candidates quite different from those the scientists backed.
The challenge for Boger and his team was not only deciding which two projects to pick. Equally important was how to choose. On quantitative analytical methods of project value and financial returns? On the judgments of its scientists about the likely clinical performance of each drug? On the judgments of its commercial people about the future market potential? What kind of information should be considered? How could Boger and his team get the best available information to make the decision? His dilemma, while somewhat extreme in terms of the stakes and the level of uncertainty inherent in his business, is fairly typical of struggles I have seen in many different organizations across a variety of industries. Picking projects is hard. You rarely have enough information to identify with precision the “best” alternative. Different metrics point in different directions, and, not surprisingly, people with different expertise have different judgments about what programs should be pursued.
An organization bad at project selection, no matter how extraordinary at search and synthesis, will not be very innovative. Prospective innovations that do not get funding for development remain just that: prospects. Companies profit only from innovations, not from prospects. Business history is riddled with examples of companies quite good at generating innovative concepts but quite poor at figuring out which projects to fund. Xerox of the 1970s is a classic and well-documented example. Xerox’s Palo Alto Research Center was clearly great at search and synthesis, inventing many of the key technologies of the digital age: laser printing using bitmapping (1971); object-oriented programming (1972); Ethernet (1973); the personal computer (1973); graphical user interface windows (1975); and the page-description language used for desktop publishing (1978).2 But, with the exception of laser printing, all languished inside Xerox for lack of commitment to full commercial development. Other companies commercialized them. Graphical user interface technology became the basis for Apple’s first Macintosh. Ethernet and page-description language were commercialized by start-ups 3Com and Adobe, respectively, both founded by frustrated Xerox employees. Xerox is not an isolated example. AT&T lagged in commercializing the mobile phone technology it invented because a market forecast it commissioned in 1980 pegged the total mobile phone market at only 900,000 units by 2000.3 The real market in 2000 turned out to be 109 million units.4 By the time AT&T realized its forecast was badly off, the only way for it to enter the market was to make a $12.6 billion acquisition of McCaw Cellular.5 Contrary to popular belief, Polaroid was actually an early mover in digital imaging technology but lagged in committing resources to full commercial development.6
Xerox, AT&T, and Polaroid didn’t fail from lack of ability to discover a problem (search) or bring together diverse ideas into a novel solution (synthesis). They failed from an inability to make good project-selection decisions. Of course, bad selection decisions do not always manifest themselves as errors of omission—great opportunities squandered by lack of funding. Errors of commission—funding or keeping alive bad projects—are also a common problem. Not every prospective project is a winner. There will be many losers in the bunch, and the capability for project selection means being able to cull them. Bad selection cuts both ways. It can lead to killing winning projects or wasting money on losers.
The Challenge of Selection: On the Knife’s Edge of Two Types of Errors
In principle, picking innovation projects is no different from any other resource allocation decision (like, say, deciding whether to build a factory, buy a piece of equipment, or advertise a product). Every innovation project requires resources, like money and people’s time. Because resources are scarce, you want to put them to the best use. This leads to a simple heuristic: pursue only those projects that are likely to generate more value than the alternative uses, adjusting, of course, for time horizons and risks. A set of well-known analytical tools for assessing whether the potential value generated by a project is worth the resources expended, such as discounted cash flow and internal rate of return, will be discussed in more detail later.
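To make the heuristic concrete, here is a minimal sketch (my own illustration, not from the original text) of how the two tools just mentioned, discounted cash flow (net present value) and internal rate of return, might be computed for a hypothetical project. The cash flows, the 12 percent hurdle rate, and the function names are invented assumptions, not data from any real project.

```python
# Illustrative sketch (not from the book): valuing a hypothetical innovation
# project with net present value (discounted cash flow) and internal rate of
# return. All numbers are invented for the example.

def npv(rate, cash_flows):
    """Net present value: discount each year's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-6):
    """Internal rate of return via bisection: the rate at which NPV equals zero."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid   # NPV still positive, so the true IRR is higher
        else:
            high = mid
    return (low + high) / 2

# Hypothetical project: $10M invested now, payoffs over the next four years.
flows = [-10.0, 2.0, 4.0, 5.0, 6.0]   # in $ millions
print(f"NPV at a 12% hurdle rate: {npv(0.12, flows):.2f}M")
print(f"IRR: {irr(flows):.1%}")
```

With these assumed numbers, the project clears the hurdle rate: NPV is positive and the IRR exceeds 12 percent. That single-number summary is exactly what makes such tools attractive, and, as discussed later in the chapter, controversial.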
Selection sounds simple. Innovation projects, though, have characteristics that make choosing among them anything but a simple exercise. The first problem, as highlighted with Vertex, is uncertainty: you do not know in advance what you will find. Which project is a winner? Which one is a loser? Nor can you be certain about how many resources or how much time will be required to complete a project. Along the way, you may discover new problems that you never thought of, and these will require more time and more resources. If you are working with newer technology or less-familiar science, you may simply not know how long it will take to solve a particular problem (or, indeed, whether you can solve that problem at all). Finally, because by definition innovation means something new, estimating basic value parameters like revenues and profits can be very difficult. The more novel the innovation—from either a technology or market perspective—the greater the uncertainties associated with both the resources required and the potential returns.
Ambiguity is the second challenge for selecting innovation projects. Ambiguity differs from uncertainty, which reflects a lack of information about known parameters of the future. Uncertainty can usually be measured in terms of probability: Will it rain tomorrow (the forecast says 40 percent chance of showers)? What is the probability that the FDA will approve the drug? What is the likelihood we can complete the project by the third quarter of next year? Ambiguity is lack of knowledge about the parameters themselves. It means you do not know what you do not know, or so-called unknown unknowns.7 If I cannot tell you precisely the demand for a potential innovation next year, that is uncertainty; if I don’t even know for which market it may be suitable, that is ambiguity. With uncertainty, you at least know the options, but you may not have all the information you need to estimate which options are most attractive; with ambiguity, you have not yet discovered the options.
The levels of both uncertainty and ambiguity vary, of course, with the type of innovation in question. A routine innovation like Apple launching a next-generation iPhone may involve relatively low levels of uncertainty and ambiguity. Apple has sufficient experience in the smartphone market to understand reasonably well what customers want and how they react to certain features. It has likely collected massive amounts of data on customer usage and buying patterns. The basic design is reasonably mature. This does not mean there is no uncertainty. There are new components and new software, which often involve surprises, and customers’ tastes can be fickle. Apple does not know for sure what products Samsung and other competitors will offer or how they will price them. Some features may be a big hit, and some may flop. Sales forecasts may disappoint or positively surprise. So there is some uncertainty, but it is bounded.
As we move into the realm of disruptive, radical, and architectural innovations, an organization’s lack of experience and understanding of either the technology or the business model creates ambiguity. When an organization first begins to explore a technology completely outside its historical base of expertise—say, artificial intelligence for an auto company or gene or stem cell therapy for a pharmaceutical company—it may not know enough about the technology to even develop a reasonable plan of attack. Questions like “What’s the probability this will work?” or “What’s the size of the market if this works?” are not only unanswerable, they may not even make sense. The technology itself may not be well enough defined to even assess what is or is not feasible. The set of technology choices may not be clear, and the markets in question may not yet be discovered.
Similar kinds of ambiguity can enshroud business model innovations as well. Like technologies, early versions of business models are often so poorly understood that uncertainty regarding feasibility, customer acceptance, and market size is essentially unbounded. Think about how e-commerce has evolved since its inception in the mid-1990s. In the earliest days, basic economic parameters like “Who will pay for this?” were not even understood. When Larry Page and Sergey Brin started Google from their PhD thesis work at Stanford, the prevailing view at the time among many “experts” was that there was no money in search. The concept of two-sided markets was in its infancy. Brin and Page were not choosing among “uncertain” options, each with different probabilities and different pay-offs. Early in the life of Google, the options were not clearly defined. Brin and Page had to discover them.
As should be evident, uncertainty and ambiguity make innovation project selection a risky proposition. When it comes to allocating resources, ignorance is not bliss—it is downright scary. You are riding a knife’s edge between two types of errors. On the one side, you might erroneously conclude that a project is a winner when in fact it is really a loser. This will lead to an error of commission (say, the Segway or the Supersonic Transport). On the other side, you might mistakenly conclude that the project is a loser when in reality it is a winner. This will lead to an error of omission (say, Xerox passing up on Ethernet). The statistical analogy is between type 1 errors (false positives—rejecting a null hypothesis that is true) and type 2 errors (false negatives—accepting a null hypothesis that is false). A key principle of statistical theory is that for any given sample size there is a trade-off between type 1 and type 2 errors. Reducing the chance of a type 1 error increases the chance of a type 2 error (and vice versa). The same principle applies to selecting projects. As you raise the bar for the level of certainty you require before committing to a project, you will reduce your chances of committing to bad projects; but, at the same time, you will increase the likelihood of mistakenly culling some projects that would have turned out to be winners. And, obviously, the opposite is true. Being very lax on project selection criteria will reduce your chance of killing off a great project too soon, but it also means you will waste a lot of resources on loser projects.
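This trade-off can be illustrated with a small simulation (again my own sketch, not from the original text): projects with an unknown true value are funded whenever a noisy forecast of that value clears a funding bar. Raising the bar lowers the rate of errors of commission while raising the rate of errors of omission. The distributions and thresholds below are invented purely for illustration.

```python
# Illustrative sketch (not from the book): how raising the funding bar trades
# errors of commission (funding losers) for errors of omission (killing winners).
import random

random.seed(0)

def selection_errors(bar, n=100_000, noise=1.0):
    omissions = commissions = 0
    for _ in range(n):
        true_value = random.gauss(0.0, 1.0)               # unknown true project value
        estimate = true_value + random.gauss(0.0, noise)  # noisy forecast of that value
        funded = estimate > bar
        if funded and true_value <= 0:
            commissions += 1   # funded a loser (false positive)
        elif not funded and true_value > 0:
            omissions += 1     # killed a winner (false negative)
    return commissions / n, omissions / n

for bar in (0.0, 0.5, 1.0, 1.5):
    fp, fn = selection_errors(bar)
    print(f"bar={bar:.1f}  commission rate={fp:.1%}  omission rate={fn:.1%}")
```

Under these assumptions, each step up in the bar shrinks the share of funded losers and expands the share of rejected winners, mirroring the type 1 versus type 2 trade-off described above.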
Making Better Judgments About Innovation Project Selection
Judgments are decisions based on limited information or limited understanding of potential outcomes from choices. By definition, these kinds of decisions are not black and white. Decisions about innovation projects are inherently judgmental. We never have enough information or understanding in advance to perfectly predict the technical or financial outcome of a project. But, obviously, there are many shades of gray, and so some types of innovation projects require more complex judgments than others. As discussed earlier, innovation outside an organization’s home court typically embodies greater degrees of uncertainty and ambiguity than routine innovation and therefore requires much more complex and subtle judgments.
As decision makers, our goal is to make rational judgments. That is, we want to weigh all the pros and cons, costs and benefits, and risks and rewards in a way that gives us the best chance to get a desired outcome (even if, in the end, things do not turn out as expected). As we can all attest from experience, this is a mentally taxing exercise. We find ourselves thinking through all the possible factors that might be relevant and trying to assess how much weight to give each. We run through alternative scenarios of the future. We bring together information from different sources, and we grapple with the different signals we get from each. We find ourselves asking “what if?” and “what about?” over and over. We desperately want to get the “right” answer when such an answer does not seem particularly anxious to reveal itself. It can be exhausting.
As if our pursuit of a rational decision is not hard enough intellectually, we now know from decades of research in behavioral economics and psychology that our judgments are clouded by any number of cognitive biases and distortions.8 We do not look at data, information, and “facts” as coldly or objectively as we might hope or think. We tend to stubbornly cling to our initial hypotheses about the correct course of action and weigh more heavily evidence that supports our view (confirmation bias). If we believe a particular project is a winner, we will tend to view data that contradict this view more skeptically than data that support this view. Our initial impressions of projects tend to persist (anchoring). Once we label a project a “winner,” we tend to stick with that view. We tend to attribute our past success to our capabilities but our failures to bad luck (attribution bias). This can lead us to be overly optimistic in assessing our capacity to execute a project. For instance, imagine that our last forecast proved to be quite accurate. This will bolster our view that we are good forecasters and therefore should put a lot of confidence in the forecast for the current project. However, if our last forecast proved to be terrible, we will tend to write that off as bad luck. We find more recent experiences more salient (recency effects). If the last risky innovation project failed, we will tend to overestimate the risks of the current project. If it succeeded, however, we will tend to underestimate the risks. These and other cognitive biases systematically work against our capacity to reach rational decisions.
Of course, despite its intellectual demands and psychological hazards, project selection simply cannot be avoided. You may find the process stressful, aggravating, or imperfect, but, at the end of the day, resources must be committed to some projects and not to others. To make the process as rational as possible, organizations deploy various analytical tools and techniques. We examine their uses and limits below.
A (Partial) Defense of Financial Analytical Tools
Within the world of innovation, perhaps no set of management techniques is more maligned than financial analysis. Discounted cash flow, net present value, internal rate of return, and their kin have become the villains in most popular writing on innovation. My Harvard Business School colleagues Clay Christensen, Stephen Kaufman, and Willy Shih refer to financial tools as “innovation killers” in their 2008 Harvard Business Review article.9 The “alleged crimes” (their words) attributed to these tools include causing managers to underestimate returns on innovation, shackling incumbent firms’ responses to attackers, and tilting resource allocation toward projects that pay off in the short term. In essence, opponents of traditional financial analysis argue that these tools exacerbate rather than fix the problems of bias in resource allocation. And, specifically, they are supposed to cause managers to be excessively conservative in their resource allocation decisions. Before convicting financial analysis of these crimes, however, it is worth investigating the charges a bit further.