Creative Construction
The general purpose of financial analysis is to help organizations implement the basic principle of allocating scarce resources to their most valuable uses. In this regard, the basic motivation behind financial analysis is hard to argue against. Discounted cash flow, net present value, and internal rate of return analysis are the most common techniques for assessing whether a given investment is a good use of resources relative to the alternatives. Despite differences in specific mechanics, all operate according to a similar logic: they compare the resources you expect to expend on a project with the financial returns (cash flow, profits, etc.) you expect to receive in the future. Because a dollar you might earn tomorrow (if the project goes as expected) is worth less than a dollar in your hand today, you reduce, or “discount,” the value of that dollar in your calculations. There are many levels of sophistication one might employ in doing these calculations, but this comparison of resources versus potential future returns is the essence. All these techniques are just ways to structure our logic.
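To make the discounting logic concrete, here is a minimal sketch in Python (the npv helper and all cash-flow figures are invented purely for illustration): future cash flows are discounted back to today's dollars and compared against the upfront outlay.

```python
def npv(rate, cash_flows):
    """Net present value: discount each period's cash flow back to today.

    cash_flows[0] is the upfront outlay (negative); cash_flows[t] is the
    cash flow expected t years from now.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: spend 100 today, expect 30 a year for five years.
flows = [-100, 30, 30, 30, 30, 30]
print(round(npv(0.10, flows), 1))  # ~13.7: at a 10% discount rate, the project is worth more than it costs
```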
The first thing to recognize is that, like any tool, financial evaluation tools are designed to do some things well, some things less well, and some things not at all. A claw hammer is great at driving nails into wood and prying things open. It would be adequate but not optimal for demolition (a sledgehammer would be better). It would be downright lousy for driving a screw through a metal surface. Whether a tool is helpful—be it a hammer or discounted cash flow analysis—depends on the match between the tool and the problem it is being asked to address. The “flaws” of traditional financial analytical tools are the result of how they are used, not of the tools themselves. It is not discounted cash flow analysis that might cause companies to make bad project selection decisions; rather, it is the managers who use those tools and interpret them incorrectly.
The inherent uncertainty and ambiguity of innovation projects start to complicate the use of techniques like discounted cash flow and net present value. Uncertainty is often dealt with by estimating probabilities of costs and future revenues, in essence yielding an “expected net present value.” Another approach is to increase the discount rate used to value future cash flows. The greater the uncertainty, the less weight we give future cash flows (because of the likelihood we may never see them). Projects perceived to have higher risks get a higher discount rate, which in turn lowers the program’s calculated value. In theory, this is not a bad idea. It says that if you are going to pursue a risky project, it better be worth it in terms of the potential return. You don’t want to take high risks without high returns. The problem in practice, though, is really getting your hands around the magnitudes. How much higher should the discount rate be given our uncertainty? How high is the uncertainty? What is the probability of a project hitting certain financial benchmarks in the future? Estimating probabilities of future events—particularly events you have never encountered—is really hard. For routine innovation, it may be possible to use data from similar past projects to create reasonable probability estimates. However, for nonroutine innovation, we typically do not have enough experience or knowledge to estimate probabilities based on data. In practice, the probability estimates used in these analyses are subjective human judgments, and that’s fine; it may be the best we can do. But, we should not fool ourselves into thinking that the presence of a “hard” number means we have an objective analysis.
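A small numerical sketch (again in Python, with all figures and probabilities invented) shows both adjustments at work: raising the discount rate to reflect perceived risk shrinks the same project's present value, and an "expected net present value" is simply a probability-weighted average of scenario values, where the weights are subjective judgments rather than measured data.

```python
def npv(rate, cash_flows):
    # Same helper as in the earlier sketch.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-100, 30, 30, 30, 30, 30]   # the same hypothetical project

# Risk adjustment via the discount rate: identical cash flows are worth
# less the higher the rate we apply to them.
print(round(npv(0.10, flows), 1))    # ~13.7 at a "normal" 10% rate
print(round(npv(0.20, flows), 1))    # ~-10.3 at a risk-adjusted 20% rate

# "Expected NPV": weight scenario outcomes by subjective probabilities.
scenarios = [                        # (probability, cash flows): judgment calls, not data
    (0.3, [-100, 60, 60, 60, 60, 60]),   # things go well
    (0.7, [-100, 5, 5, 5, 5, 5]),        # things go badly
]
expected_npv = sum(p * npv(0.10, cf) for p, cf in scenarios)
print(round(expected_npv, 1))        # ~-18.5, but this "hard" number rests entirely on the 0.3/0.7 guess
```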
Coming up with probability estimates for uncertain future outcomes or gauging levels of risk are themselves acts of judgment, subject to all the cognitive and behavioral biases I mentioned before. As a result, our analyses simply reflect our biases rather than correcting them.
The limits of financial analytical tools become even more apparent once we consider the impact of ambiguity. Recall that ambiguity implies that we do not even have a handle on the structure of the problem. We do not know the options. We do not know the range of alternative scenarios. We do not know the key drivers of different potential outcomes. The financial models we build are representations of potential future states of the world. With enough ambiguity, our model is completely unrepresentative; it is fiction.
That financial analytical tools have their limits and are not as objective as proponents claim is quite clear. But it is not at all clear why the use of financial analytical tools should create a systematic bias against more innovative (riskier) projects, as critics claim. Remember, we can just as easily be too exuberant as too pessimistic about a project. An overly optimistic forecast is just as likely to lead us to fund a loser project as an overly pessimistic one is to lead us to kill a good one. Because judgmental errors can go in either direction, the results of financial analysis based on those judgments can lead to both type 1 and type 2 project selection failures. Overreliance on financial analysis can lead to bad project selection decisions, but it does not necessarily lead to more conservative ones.
Proponents of the use of financial analysis will point out correctly that there are more-advanced analytical techniques like real option valuation that are better suited to evaluating R&D projects than basic tools like discounted cash flow and net present value. Because real option analysis takes into account the possibility of abandoning projects as more information becomes available, it avoids overestimating project risks (e.g., if I can walk away from an R&D project after spending just $1 million, it is much less risky than if I have to commit $100 million all up-front). In addition, it helps us quantify the benefits of “upside” that may become apparent only as projects unfold. Risk is not necessarily a bad thing in real option valuation because it increases the potential upside.
I agree that real option valuation can be an improvement over traditional tools, but it is not a panacea either. While it provides a logical structure to incorporate uncertainty and the value of resolving uncertainty through staged investments, real option valuation still requires subjective estimates of all the critical parameters (like future revenues, future costs). In addition, real option valuation does little to blunt the impact of ambiguity. Like its more basic brethren, real option valuation requires the analyst to have a clear understanding of the structure of the problem. The underlying decision tree should be reasonably well understood. If it is not, then applying real option valuation gets tricky. You cannot value options that are not even identified.
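To illustrate the staged-investment logic, here is a stylized decision-tree sketch (a back-of-the-envelope expected-value calculation, not a full option-pricing model; all dollar figures and probabilities are invented, and discounting is ignored to keep the arithmetic bare). The point is twofold: the ability to abandon after a cheap first stage adds value relative to committing everything up front, and the calculation only works because the decision tree itself (the stages, probabilities, and payoffs) is assumed to be known.

```python
# Hypothetical two-stage R&D project (no discounting; all figures invented).
# Stage 1 costs 1 and reveals whether the technology works (30% chance it does).
# Stage 2 costs 99 and, if the technology works, yields a payoff of 300.
P_SUCCESS = 0.3
STAGE1_COST, STAGE2_COST, PAYOFF = 1, 99, 300

# Commit everything up front: you spend 100 whether or not the technology works.
commit_all = P_SUCCESS * PAYOFF - (STAGE1_COST + STAGE2_COST)

# Staged investment: continue to stage 2 only if stage 1 shows the technology
# works, so the failure branch costs only the 1 spent on stage 1.
staged = P_SUCCESS * (PAYOFF - STAGE2_COST) - STAGE1_COST

print(round(commit_all, 1))  # -10.0: looks like a loser if the full 100 must be committed up front
print(round(staged, 1))      # 59.3: the option to abandon after stage 1 carries most of the value
```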
Despite the well-known flaws and limits of financial evaluation tools for innovation, many companies slavishly adhere to them in selecting programs to pursue. I have sat through presentations containing lists of projects rank-ordered by net present value, with a red line indicating the cutoff between the projects that will be selected and those that will not. I have watched project teams calculating and recalculating their financial models in preparation for senior management approval meetings. I’ve experienced interminable debates about the right terminal value to use in a discounted cash flow analysis, as well as intense debates about revenue forecasts for innovative drugs that were at least ten years away from coming to market. And yet something has always bothered me in these settings. I do not think the managers involved are stupid or naïve. The vast majority, I believe, know that these kinds of analytics have limits. They know the numbers are estimates. They all know there are a lot of judgment calls about parameters like probabilities of success, time to market, costs of development, and future revenue streams. They know the number “on the board” may prove to be too optimistic or too pessimistic. When I have asked them why they still depend on these methods, I get two kinds of responses.
The first is that they value the rigor of a quantitative analysis. Quantitative methods may use subjective inputs, but, by boiling everything down to specific numerical values, there is a sense that the output is more precise. A number is, after all, a number. The word “rigor” is often associated with “quantitative.” This is as true in academia as it is in companies. Rigor means logical, meticulous, and thorough. Rigorous analyses have a healthy appetite for facts and a propensity to subject facts to careful scrutiny. Rigorous, however, does not necessarily mean quantitative. And quantitative approaches do not guarantee rigor. Consider a situation where you lack even the most rudimentary information needed to estimate a return on investment. A technology is so new no one really understands whether it will work and for which uses it might be valuable. If you were asked at this point about the expected return on investment, a truly rigorous response would be, “I don’t know… but here is what I might do to learn more.” It would not be a precise number like 21.576 percent spit out of a spreadsheet containing dozens of arbitrary assumptions. But, too often, we confuse the apparent precision of a number with rigor. The risk with both traditional and more sophisticated analytical tools is that they can create the illusion of objectivity and precision. If you are allowing yourself to be fooled by the false sense of security of having an “objective” number (based on highly subjective inputs), then your resource allocation process is anything but rigorous. The adage that it is better to be vaguely right than precisely wrong is worth keeping in mind the next time you ask a project team to “firm up the numbers.”
The second common response I hear about relying on analytical tools is, “There is no alternative.” Sure, net present value and other techniques have their flaws, I have been told, but they are better than nothing. Without the discipline of financial models, countless managers have explained to me, the whole resource allocation process can degenerate into a free-for-all guided by no more than wild-eyed guesses and gut feelings. And, despite their flaws, financial models provide a common language and standard reference point for comparing different projects. In essence, this argument is akin to saying that if all you have is a hammer, then a hammer is what you use regardless of the job. I have some sympathy for this argument. Gut feelings can lead any of us astray in making complex choices, and models provide a common logic structure for comparison. However, I strongly disagree with the notion that the only alternative to financial modeling is unhinged guesswork. As noted above, rigor does not exclude qualitative analysis of project valuation. Quantitative financial models and structured qualitative approaches are not even alternatives to one another. They can complement each other. We do not need to throw out financial analysis, nor should we. We just need to think about how financial modeling techniques can be used as part of a more integrative process related to exploration, inquiry, and learning. In the section below, we explore such an approach in more detail.
Selection as a Process of Learning
Deciding which projects to fund, which to start, and which to stop when confronted with high degrees of uncertainty and ambiguity requires a very different approach to decision making than is typically practiced.
As my late colleague David Garvin once argued, decision making is too often mismanaged as a discrete event.10 By this, he meant that although a leader might take some time to gather evidence, muster different opinions, and conduct detailed analyses, the actual decision itself is made in a fairly compressed time frame and generally takes a binary “up/down” or “go/no-go” form. An event-driven approach to decision making can be entirely appropriate, of course, when the alternatives are relatively clear at the outset and most of the relevant data required to decide is available in advance. Buying a new piece of equipment, for instance, has that character. There is generally plenty of data available on the equipment’s expected performance and specifications (and if you have operated that type of equipment before, you have the benefit of your own experience with it), and the alternatives are pretty sharply defined: buy it now, buy it later, or don’t buy it. Once you buy it, changing your mind might be costly (e.g., you would have to resell it).
An event-driven approach to project selection can also be reasonable when dealing with routine innovation with relatively small changes in technology or market positioning. With an event-driven approach to project selection, questions like “What’s the market size?” “What is the return on investment?” “When will this hit the market?” and “What specific customer need does this address?” are fair game. These kinds of questions should be answerable with careful analysis and reflection on the company’s past experience. Senior leadership should expect answers to these questions, and, once they have answers, then the decision to fund or not to fund is relatively straightforward.
As I discussed earlier, though, potentially transformative innovation projects do not have this character. The alternatives may be poorly understood. Uncertainty is high, so forecasts about future technical performance and market potential are likely to be highly variable at best. Ambiguity means that you cannot even discover what you do not know until you dig into the project (somewhat like opening up the walls of an old house!). The problem arises when the traditional “event-driven” resource allocation process meets the transformative innovation proposal. Questions like “How big is the market?” and “What is the return on investment?” are unanswerable up front. There are likely too many unknown unknowns. Yet, with the event-driven approach to project selection, answering (honestly) “I don’t know” to any of these questions is pretty certain to get you (and your project proposal) sent on your way. In fact, it is this attitude that leads to a bias against transformative innovation proposals, rather than the analytical tools themselves.
Instead of being event driven, selection for transformative innovation needs to be structured and managed as a learning process. There is not one best process for doing this, but there are some principles that research suggests will help you make better judgments and decisions in highly uncertain and ambiguous settings. Let’s examine some of the concrete ways an organization can do this and what leadership behaviors are required to support such an approach.
Build Proposals Around Working Hypotheses
Project selection proposals are generally developed and presented as instruments of advocacy. That is, the proposers try to “make the case” for their program. There is nothing nefarious about their intentions. In general, proposers—perhaps a scientist or engineer in the organization—truly believe in their projects and see them as great opportunities for the company; their advocacy is genuine. But the problem with an advocacy approach is that it tends to distort the information available.11 Advocates try to make their case by highlighting what is known and positive. They view their task as mustering evidence in support of the program. Of course, the senior managers listening to the proposals are not fools (they have likely been successful advocates before!), so they adopt the role of skeptic. R&D and portfolio review committee meetings like this have the feel of a courtroom proceeding, with advocates making their case and senior leaders cross-examining them. The process is set up to create winners (those whose projects are selected) and losers (those whose projects get rejected). It is not set up to invite deeper exploration of the opportunity.
A good way to orient the process around learning is to frame proposals as a set of working hypotheses about the technology, markets, customers, value streams, and business model and strategy choices.12 A hypothesis is a proposition that can be tested against data (qualitative or quantitative). Hypotheses are generally based on specific assumptions about technology, customers, economic conditions, and so forth. The statement “Autonomously driven vehicles will open up new markets for ride-sharing services and reduce demand for privately owned vehicles by 30 percent in the next ten years” is a hypothesis (based on the assumptions that autonomous vehicle technology progresses to a point where it is safe and that the necessary regulatory frameworks are in place). If you are in an auto company, framing a proposal for R&D on autonomous vehicles in this fashion would be an explicit way to acknowledge that information and understanding about the program are highly incomplete, but that the goal of the process is to fill those gaps as well as possible, rather than to judge the program. Recall the earlier example of Flagship Pioneering. New venture proposals are explicitly framed as venture hypotheses containing a specific set of “if-then” questions. As Noubar Afeyan explained, venture hypotheses “draw from asking ‘what if?’ and ‘if only…’” questions pointing to hypothetical solutions that would be valuable to the world “once discovered.”
Initial hypotheses do not need to be true for the program to be ultimately valuable. At Honda, the initial hypothesis behind the light jet program was that some of Honda’s automobile technologies and engineering capabilities could be applied to the design of light jets—this turned out not to be true. As Honda began to explore aircraft technology, the company learned that there were significant differences between automobile engineering and aeronautical engineering. But having a clear hypothesis up front helps the senior leadership judge the basis on which the program should be initiated and, if information changes, whether to continue. At Honda, they decided to continue the aircraft program even after learning that their initial hypothesis was not valid (automobile technology could not be applied to aircraft design) because in the course of their early research they developed a new hypothesis about the market for light jets.
A hypothesis-driven approach is not an excuse for sloppy thinking or for keeping bad programs alive. Hypotheses should be well thought out and have clear criteria for acceptance or rejection. At Flagship, teams proposing a venture hypothesis must also specify a “killer experiment” designed to rigorously validate the science behind it. There is rigor in developing hypotheses about technology and markets. Senior leaders should ask about the assumptions underlying the hypothesis and demand clarity around the logic. There should be a clear path for how some of the critical hypotheses may be tested (as quickly and cheaply as possible). A hypothesis-driven approach also does not mean every project is approved. Some hypotheses may be untestable within the company’s resource constraints or strategy.