—Thomas A. Stewart, “How to Think with Your Gut”1
Guns and Better (Decisions)
As part of his Marine Corps training, retired lieutenant general Paul Van Riper learned classical decision making: frame the problem, formulate alternatives, and evaluate the options. Not surprisingly, he also taught the classical rational approach as the head of the Marine leadership and combat development program in the 1990s. But Van Riper realized that in combat simulations, the rational decision-making approach didn’t work the way it was supposed to.
Van Riper turned to cognitive psychologist Gary Klein, who had studied how firefighters really make decisions in complex settings. Klein’s research found that firefighters don’t weigh options at all—they use the first satisfactory idea that comes along, and then look for the next one, and so on. Firefighters don’t make decisions based on anything that resembles classical theory.
It occurred to Van Riper that the New York Mercantile Exchange trading pits had a lot in common with combat war rooms. So in 1995, he brought a group of Marines to New York and pitted them against the floor pros on a trading simulator. The traders trounced the Marines—to no one’s shock. But about a month later, the traders went to Quantico, Virginia, to play war games against the Marines. The traders again trounced the Marines—to everyone’s shock.2
The study of decision making has a long history. The classical model of decision making that Daniel Bernoulli launched over two hundred fifty years ago is still the prescriptive model of choice in much of economics.3 But the model is not realistic. In the 1950s, economist Herb Simon fashioned an important case against the classical theory by noting that the theory’s informational requirements vastly exceed human cognitive capacity. Human rationality is bounded. As a result, people don’t make decisions based on optimal outcomes; they make choices based on what’s good enough. Simon argued that people don’t maximize; they “satisfice.”
In recent years, a new approach, naturalistic decision making, has emerged to explain how experts make decisions in real-world contexts that are meaningful and familiar to them.4 Evidence suggests that the key attributes and principles of naturalistic decision making apply to experienced investors. An understanding of naturalistic decision making may help investors better appreciate their own approach and has important implications for training.5
Chopping Down the Decision Tree
In a recent paper, Robert Olsen lists five conditions that are present in naturalistic tasks and relates them to investors:6
1. Ill-structured and complex problems. In these cases, no obvious best procedure exists to solve the problem. Determining a fair value for a security, for instance, is an ill-structured and complex task.
2. Information is incomplete, ambiguous, and changing. Because stock picking relies on expectations about future financial performance, there is no way to contemplate all relevant information.
3. Ill-defined, shifting, and competing goals. Even though investing may seem to have clear-cut goals for the long term, goals can change significantly over shorter horizons. For example, a portfolio manager may take a defensive posture to preserve performance or a more aggressive stance to make up a performance shortfall.
4. Stress because of time constraints, high stakes, or both. Stress is clearly a feature of investing.
5. Decisions may involve multiple participants. This means that the decision maker may be working with various partners who may impose decision-making constraints.
How do naturalistic decision makers decide? Olsen identifies three primary behaviors. The first is an ability to rely heavily on mental imagery and simulation in order to assess a situation and possible alternatives.7
The second behavior is the ability to recognize problems based on pattern matching. Experts are able to connect a known pattern to a specific situation. Gary Klein and his colleagues studied the average move quality of chess masters and class B players under regulation (135 seconds per move) and blitz (6 seconds per move) conditions. They found that while the average move quality improved markedly for the class B players under regulation conditions, the quality of the moves under either set of conditions was relatively unchanged for the masters (see exhibit 16.1). Chess masters can glance at a board and quickly see a pattern, allowing them to make relatively good moves in a short time.8
EXHIBIT 16.1 Chess Masters Don’t Lose Average Move Quality When They Are Time Constrained
Source: Gary Klein, Sources of Power: How People Make Decisions (Cambridge, Mass.: MIT Press, 1998), 163.
The third behavior of naturalistic decision makers is that they reason through analogy. Experts have the ability to see similarities in situations that may appear dissimilar on the surface.
One intriguing facet of naturalistic decision making is how experts make decisions with very little conscious awareness. In one experiment, neuroscientist Antonio Damasio gave subjects four decks of cards, two rigged to produce gains (in play money) and two rigged to lose. He asked the subjects to flip cards, picking from any deck. Damasio hooked the subjects up to measure skin conductance responses (SCRs), the same measure as lie-detector tests, and asked periodically what they thought was going on in the game. By the time they’d turned roughly ten cards, the subjects started showing physical reactions when they reached for a losing deck. But they couldn’t articulate their hunch that two of the four decks were riskier until they had turned over about four dozen cards. And only after they turned over an additional thirty cards could the participants explain why their hunch was right. Even those subjects who could never put their hunches into words had physical reactions.9
Researcher Ray Christian sheds some light on the possible role of the unconscious in decision making. He notes that what we perceive at any given moment—our conscious bandwidth—is an extremely small subset of the information stream flowing to the sense organs. Specifically, he estimates that the capacity of our sensory system is 11 megabits per second while our conscious bandwidth is just 16 bits per second.10
Investing au Naturel
Olsen tested whether or not naturalistic decision making explains how real investors work. Since naturalistic decision theory relates to experts in a given domain, he studied investors who had earned the Chartered Financial Analyst (CFA®) designation. Of his 250-plus sample, over 90 percent had six or more years of investment experience and over 50 percent had been in the industry fifteen or more years. Olsen posed eight questions in order to understand their investment behavior.
Exhibit 16.2 shows the results. As Olsen summarizes, the response to the first question supports the idea that expert investors make heavy use of mental imagery. Over 90 percent of the respondents say that creating a story based on facts is important to their investment decisions.
The answers to questions two to four suggest that the decision process of the investment pros is context dependent. Investors change their approaches as the circumstances dictate.
EXHIBIT 16.2 Behavior Responses of CFA Charterholders
Source: Robert A. Olsen, “Professional Investors as Naturalistic Decision Makers: Evidence and Market Implications,” The Journal of Psychology and Financial Markets 3, no. 3 (2002): Table 2, 163.
Responses to the final three questions are consistent with the idea that investors use “satisficing” behavior. Investors don’t optimize in the classical sense; they ignore outcomes or collapse categories to make their decision process more tractable.
Olsen’s study strongly suggests that expert investors are naturalistic decision makers. This conclusion is not too earth shattering for anyone who’s watched a great investor up close. One important implication is that investor training might emphasize the equivalent to a flight simulator—simulations and scenario analysis complete with timely and clear feedback.
The Fine Print
Naturalistic decision making is clearly relevant for investors and the investment process. But in thinking about the importance of the theory, there are a few points worth bearing in mind.
To start, naturalistic decision making is most relevant in complex environments. When a problem is covered by rules or is simply complicated, classical frameworks are often very effective. Different decision-making approaches are relevant under different environmental circumstances.
A related point is that a collective of diverse individuals often solves complex problems better than the average individual. The stock market is a great example. Even “expert” investors struggle to beat the market over time. Successful experts seem to be those who can mentally represent a complex situation in their heads. Naturalistic decision making is not synonymous with beating the market.
This leads to the final point. The skill sets of the best naturalistic decision makers may not be transferable. The finest investors appear to combine innate ability (hardwiring) with hard work (diverse information input). While all investors can undoubtedly improve their decision making (even naturalistic decision making), we speculate that only a handful of investors have the combination of hardwiring and work ethic to consistently beat the market.
17
Weighted Watcher
What Did You Learn from the Last Survey?
The art of drawing conclusions from experiments and observations consists in evaluating probabilities and in estimating whether they are sufficiently great or numerous enough to constitute proofs. This kind of calculation is more complicated and more difficult than it is commonly thought to be.
—Antoine Lavoisier1
There are three kinds of lies: lies, damn lies, and statistics.
—Leonard H. Courtney2
I Do—Do You?
Not all bits of information are created equal. Saying “I do” wearing a tuxedo in front of clergy and congregation carries greater significance than replying “I do” when your host asks if you take milk with your coffee. An ability to properly weight information is very useful in life and especially important for investors.
An investment process requires gathering and analyzing information. Investors have historically emphasized either the gathering or the analyzing piece as their source of competitive advantage. But gaining an informational edge has become much more difficult in recent years as the direct result of technological advances and regulation.
For example, the ubiquity of networked personal computers has made information dissemination extremely rapid and nearly costless. Today, an online day trader has at her fingertips information and access that leading institutions could only dream about twenty-five years ago. And Regulation FD (fair disclosure) seeks to assure that all investors—from the largest fund manager to the smallest individual—receive material information at the same time.
Yet analysts have not given up their search for proprietary information. In recent years, we have seen a blossoming in the number of surveys and channel checks, as well as other less savory information-gathering attempts. While there is clearly nothing wrong with pursuing better information—and some firms do it very well—I question the investment value of much of today’s “proprietary” research.
There are three sources of skepticism. The first is whether or not investors can properly weight information. The second is sampling problems, or the degree to which the sampling techniques analysts use actually reflect the underlying population. The final issue is whether or not today’s proprietary research leads to superior investment performance.
Sifting Weights
In the mid-1990s, Bill Gates carried with him a list of Microsoft’s business priorities. The Internet, which was starting to take off, was fifth or sixth on his list. But once Gates realized the significance of the Internet to Microsoft’s future, he moved it to the top priority.3 Gates substantially reweighted already-known information, and hence added a lot of value for shareholders. Likewise, how we weight information has a significant impact on how we view the world and how we value assets.
Our degree of belief in a particular hypothesis typically integrates two kinds of evidence: the strength, or extremeness, of the evidence and the weight, or predictive validity.4 For instance, say you want to test the hypothesis that a coin is biased in favor of heads. The proportion of heads in the sample reflects the strength, while the sample size determines the weight.
Probability theory prescribes rules for how to combine strength and weight correctly. But substantial experimental data show that people do not follow the theory. Specifically, the strength of evidence tends to dominate the weight of evidence in people’s minds.
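To make the strength-versus-weight distinction concrete, here is a minimal Bayesian sketch. The biased-coin probability (60 percent heads) and the two samples are assumed for illustration; it compares a small sample with a high proportion of heads against a larger sample with a lower proportion.

```python
from math import comb

def posterior_biased(heads, flips, p_biased=0.6, prior=0.5):
    """Posterior probability that the coin is biased toward heads
    (P(heads) = p_biased) rather than fair, via Bayes' rule."""
    like_biased = comb(flips, heads) * p_biased**heads * (1 - p_biased)**(flips - heads)
    like_fair = comb(flips, heads) * 0.5**flips
    return prior * like_biased / (prior * like_biased + (1 - prior) * like_fair)

# High strength, low weight: 4 heads in 5 flips (80 percent heads)
print(posterior_biased(4, 5))

# Lower strength, high weight: 60 heads in 100 flips (60 percent heads)
print(posterior_biased(60, 100))
```

Properly combined, the larger sample is the stronger evidence for bias, even though its proportion of heads is less extreme; the strength-dominated intuition gets this backward.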
This bias leads to a distinctive pattern of over- and underconfidence. When the strength of evidence is high and the weight is low—which accurately describes the outcome of many Wall Street-sponsored surveys—people tend to be overconfident. In contrast, when the strength is low and the weight is high, people tend to be underconfident.
Exhibit 17.1 shows strength and weight combinations. When both are high, the conclusion is likely to be obvious. When both are low, the finding is unlikely to be relevant. In the two remaining boxes, however, we run the risk of misjudging the evidence.
The winner’s curse is another concrete example of the risk of weighting information incorrectly.5 The winner’s curse says that in a competitive auction, the highest bidder will typically overpay for the asset. Hence the bidder “wins” the auction but is “cursed” by the overpayment. When appraising an asset’s worth, investors often dwell on the average value that various bidders are likely to pay. But the only value that ultimately matters is what the highest bidder is willing to pay.6
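A quick simulation illustrates the winner's curse. The numbers here are assumptions for illustration: a hypothetical asset worth exactly 100, and ten bidders who naively bid their own unbiased but noisy estimates. The average winning bid comes in well above true value.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0
N_BIDDERS = 10
N_AUCTIONS = 10_000

winning_bids = []
for _ in range(N_AUCTIONS):
    # Each bidder's estimate is unbiased noise around true value;
    # everyone naively bids their own estimate.
    bids = [random.gauss(TRUE_VALUE, 10) for _ in range(N_BIDDERS)]
    winning_bids.append(max(bids))

avg_win = sum(winning_bids) / N_AUCTIONS
print(f"Average winning bid: {avg_win:.1f}")  # well above 100
```

Even though the average bid equals the asset's value, the maximum of several noisy estimates is systematically too high, which is why the highest bidder overpays.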
Information weighting underscores that not all information is of equal value and relevance. Investors must be constantly diligent to avoid pitfalls related to improper information weighting.
EXHIBIT 17.1 Strength and Weight of Hypothesis Test
Source: Dale Griffin and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” and author analysis.
Misleading by Sample
Understanding what’s going on—what value-added resellers are saying, how employees feel about their company, or the purchase intentions of chief information officers (CIOs)—can be very useful to an investor. But getting an accurate view of the group is often not easy.
Statistics provide some guidelines for how large a population sample you need to create a reasonably accurate picture of the group. But in many cases, the underlying population is normally distributed. An appropriate sample of the height of adult women, for example, would provide a good sense of the average and distribution of female heights.
Many populations are not normally distributed, however, and here is where some problems arise. For instance, CIO surveys of expected technology spending often target Fortune 1000 companies. Assuming that technology spending as a percentage of sales is randomly distributed, which CIOs are included in the survey can make a huge difference to the outcome.
EXHIBIT 17.2 Sales Distribution for the Fortune 1000
Source: Fortune.com and author analysis.
To illustrate, the top 10 percent of the companies generate over 50 percent of Fortune 1000 aggregate sales, while the bottom 10 percent produce less than 2 percent. Weighting the responses of all CIOs equally could distort the underlying picture meaningfully unless the sample is properly stratified. Exhibit 17.2 shows the distribution of sales for the Fortune 1000.
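A toy calculation shows how equal weighting can distort a skewed population. The sales and spending-growth figures below are hypothetical, not survey data; the point is only that a few giants dominate aggregate sales, as in the Fortune 1000.

```python
# Hypothetical firms: (annual sales in $ billions, expected tech-spend growth)
firms = [
    (200.0, 0.02), (150.0, 0.01),          # two giants, cautious spending
    (5.0, 0.10), (4.0, 0.12), (3.0, 0.08),  # smaller firms, faster growth
    (2.0, 0.15), (1.5, 0.11), (1.0, 0.09),
]

# Equal-weighted survey average: every CIO response counts the same.
equal_wt = sum(g for _, g in firms) / len(firms)

# Sales-weighted average: weight each response by firm size.
total_sales = sum(s for s, _ in firms)
sales_wt = sum(s * g for s, g in firms) / total_sales

print(f"Equal-weighted growth: {equal_wt:.1%}")
print(f"Sales-weighted growth: {sales_wt:.1%}")
```

In this sketch the equal-weighted survey reads roughly four times higher than the sales-weighted figure, because the many small, fast-spending firms drown out the two giants that account for most of the dollars.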
The overconfidence that comes from strong evidence, yet weak predictive validity, seems very prominent in today’s markets. Investors appear satisfied using two or three data points to guide the next trade. This is a very difficult way to make a living.7 This leads to my final point.
Tell Me Something the Market Doesn’t Know
The most basic test of the value of survey-based research is whether or not it leads to superior stock selection. The answer is ambiguous at best, in our view.
The first reason relates to how quickly the market assimilates new information.8 The evidence shows that the market does adjust to new information rapidly. If so, then generating excess returns from that data is unlikely. Gaining an informational edge is difficult: Sell-side-sponsored surveys and channel checks must be disseminated uniformly, of course, and incremental information that a large buy-side firm encounters is often reflected in share prices in short order. Information about what is going on now, or what is likely to happen in the near future, is most likely to be efficiently priced into stocks. In contrast, some evidence suggests that the market is short sighted with regard to long-term information.9
The second issue is that there is a substantial difference between understanding the fundamentals (or changes in fundamentals) of an industry or company and a grasp of the expectations built into the current stock price.10 Prices reflect collective expectations and generally incorporate more information than any one individual can claim. So the central question is whether or not information that is new to you is also new to the market.
Finally, at risk of exposing my own overconfidence, I tested the correlation between one well-known CIO survey and excess returns in the stock market (see exhibit 17.3).11 The evidence in favor of a link between the two is not persuasive.
Some investors don’t use results for assessing the sector directly surveyed but rather look for the derivative call—which other industries are affected by perceived trends. This analysis can run into the problem of contingent probability. Say that you judge the probability of a company’s orders being below expectations at 70 percent based on some new survey results. And say that there’s a 70 percent chance that a specific supplier will suffer as well. The chance that the supplier misses its number is less than 50 percent (0.70 x 0.70 = 0.49).
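The multiplication generalizes to any chain of derivative calls. Here is a small sketch using the chapter's numbers plus a hypothetical third link at the same 70 percent probability.

```python
from functools import reduce
from operator import mul

def chained_probability(probs):
    """Joint probability of a chain of conditional events,
    each conditional on the one before it."""
    return reduce(mul, probs, 1.0)

# The chapter's example: 70 percent chance of an order miss, and a
# 70 percent chance the supplier suffers given a miss.
print(round(chained_probability([0.70, 0.70]), 2))  # 0.49

# Each extra derivative link shrinks the joint probability further
# (hypothetical third link at 70 percent).
print(round(chained_probability([0.70, 0.70, 0.70]), 2))  # 0.34
```

Even with individually likely links, the joint probability drops below a coin flip after just two steps, which is why derivative calls are riskier than they feel.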