• the framing effects and heuristics that lead individual investors to attribute too much importance to the role of skill, and too little luck, in fund performance
• a series of games, called obfuscation games, that the mutual fund industry uses to induce opaqueness into the frames of individual investors
Case Study: Peter Lynch’s Salient Record at Magellan
Peter Lynch managed the Fidelity Magellan Fund for thirteen years, from 1977 until 1990. During that period, Magellan’s performance was nothing short of amazing. A $1,000 investment in Magellan in 1977 would have grown to $28,000 in 1990, an astonishing 29.2 percent return per year. Moreover, Magellan’s performance was consistent, beating the return on the S&P 500 in all but two of those years. And as far as the competition was concerned, the race was not even close: Magellan’s nearest rival earned 23.2 percent per year during this period.
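The 29.2 percent figure is just the compound annual growth rate implied by the two dollar amounts; a quick check in Python:

```python
# Verify Magellan's compound annual growth rate over Lynch's tenure.
# Figures from the text: $1,000 grows to $28,000 over thirteen years.
initial, final, years = 1_000, 28_000, 13

# CAGR = (final / initial)^(1/years) - 1
cagr = (final / initial) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 29.2%, matching the figure in the text
```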
Peter Lynch’s record is a salient aspect of mutual fund history. What are the implications, if any, of such a record? Has it generated availability bias, leading investors to believe that mutual fund managers beat the market more than they do? Has it led investors to believe, in error, that successful fund managers have “hot hands,” meaning that successful performance tends to persist from year to year?
Let it not be said that Peter Lynch believes that markets are efficient. When he took over at Magellan, he set himself the remarkable goal of beating other no-load funds by 3 to 5 percentage points a year. Not only did he manage to achieve his goal, but he also made beating both the competition and the market look like a snap.
In his book One Up on Wall Street, Lynch (1989) describes the process that led him to choose some of his winners: “Taco Bell, I was impressed with the burrito on a trip to California; La Quinta Motor Inns, somebody at a rival Holiday Inn told me about it; Volvo, my family and friends drive this car; Apple Computer, my kids had one at home and then the systems manager bought several for the office.”
To be sure, Peter Lynch did additional research; and he was very organized in the way he treated the information he obtained. He kept a series of notebooks that synthesized what he learned from such diverse sources as Value Line Investment Survey, quarterly reports, and meetings with company executives. These notes were then boiled down into what he called a “two minute monologue” that focused on a few key variables such as net worth, stock price, sales, and the profit picture.
Peter Lynch was always clear about the importance he attached to holding the stocks of companies whose situations were transparent. He once said, “Investing is not complicated. If it gets too complicated, go on to the next stock. I give balance sheets to my fourteen-year-old daughter. If she can’t figure it out, I won’t buy it.”2
Well, if it’s really that straightforward, then mutual fund managers should have little difficulty beating the market. Yet most of them fail to do so. Vanguard offers an index fund, the 500 Index Portfolio, that tracks the S&P 500. Vanguard reports that over the twenty-year period 1977 through 1997, the 500 Index Fund outperformed more than 83 percent of all mutual funds.3 During 1997, the 500 Index Fund beat over 90 percent of the diversified U.S. equity mutual funds. For the year, the S&P 500 returned 32.61 percent, in comparison to the 24.36 percent return on the average equity mutual fund.
If investing is as simple as Peter Lynch suggests, then something is not adding up. Indeed, Lynch suggests that amateur investors should have little difficulty beating the professionals if they stick to companies they know, use common sense, and do a little basic research. For instance, he advocates that they invest in companies they work for, or whose products they use. And Lynch cautions them to stay away from companies they don’t understand and to avoid trying to time the market. He claims that this gives amateurs a leg up on professional investors, who tend to be removed from the companies in which they invest, instead relying on analysts’ reports. In other words, Peter Lynch urges investors to succumb to familiarity bias.
Lynch told Gerard Achstatter of Investor’s Business Daily that he honed his skills for making risky decisions by playing bridge and poker: “They help you understand the rules of chance. … If the upside isn’t terrific, maybe I should fold my hand.”4
Some Rules of Chance
To be sure, the rules of chance play a critical role when it comes to open-ended mutual fund performance. So let’s engage in some “thought experiments” to help us think about the relevant rules of chance.
The first thought experiment has you opening up a roll of shiny quarters, just obtained from your bank. Suppose that you give one of these coins to me and I toss it ten times. Each time I toss it, the coin turns up heads. What would your reaction be after the tenth head? Incredulous? Suspicious? If so, why? Is tossing ten consecutive heads that extraordinary?
Well, the odds of ten consecutive heads from a fair quarter are about one in a thousand. But whether or not the outcome is extraordinary depends on the context. Suppose that this experiment involved you, me, and nobody else. Then you would have every right to be suspicious that my having tossed ten consecutive heads was attributable to more than plain luck. I would not be in the least offended if you asked to inspect the coin closely to see, for example, whether or not it was two-headed.
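The “one in a thousand” figure follows directly from multiplying ten independent one-half probabilities:

```python
# Probability of ten consecutive heads with a fair coin.
p_ten_heads = 0.5 ** 10
print(p_ten_heads)      # 0.0009765625
print(1 / p_ten_heads)  # 1024.0 -- about one chance in a thousand
```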
Perhaps the situation with mutual fund managers is similar. Fund management involves both skill and luck. If a particular manager performs well consistently, then you could be forgiven for attributing his success more to skill than to luck. But would you be right? To answer this question, consider a second thought experiment.
In your mind’s eye, imagine three types of coins—one gold, one silver, and one bronze. Every coin has a head appearing on one side and a tail on the other. In 1998, the number of mutual funds listed in the Mutual Fund Quotations section of the Wall Street Journal was about 5,000.5 Suppose that we give a coin to each of the 5,000 managers. One third of these managers receive a gold coin, one third receive a silver coin, and the remaining third receive a bronze coin. Now we ask each manager to toss his or her coin ten consecutive times. Each time a manager tosses a head, we pay him $1. If he tosses a tail, we pay him nothing.
Can you see where we are going with this? In order to keep track of how the 5,000 managers are doing, we use the imaginary Eveningscore service. Eveningscore tracks and publishes the dollar payoff to each manager, but it does not record what type of coin he was using. Eveningscore also publishes summary statistics to evaluate managers’ coin-tossing abilities.
To facilitate comparison, Eveningscore uses a benchmark. What amount of money do you think would serve as a reasonable benchmark in this experiment? Well, if the coins were all fair, then the average manager should toss five heads and five tails, thereby earning $5. So why don’t we go with that amount?
Eveningscore reports that of the 5,000 coin-tossing managers, 1,905 beat the benchmark, which is about 38 percent. Moreover, six of the managers posted perfect records: ten heads in a row.
Now, armed with the track records of these 5,000 managers and the Eveningscore summary data, suppose we try to pick the winners of the next ten-toss game. What do the rules of chance tell us here? Remember that we know the managers’ track records, and we know that managers toss three different types of coins. But we don’t know whether any or all of the coins are fair.
Suppose that all of the coins were fair. If this were the case, how many managers should we expect to beat the Eveningscore benchmark? Would you guess 1,905? If not, would you guess more than 1,905 or fewer?
The correct answer is 1,885. If 5,000 managers each tossed a fair coin, then we should expect that 37.7 percent of them would beat the benchmark. And what about using past track records to choose which manager to back next? Well, if every manager uses a fair coin, regardless of its color and metallic composition, track record becomes useless.
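The 1,885 figure comes from the binomial distribution: a fair-coin tosser beats the $5 benchmark by tossing six or more heads in ten tries. A short check using only the standard library:

```python
from math import comb

# P(six or more heads in ten tosses of a fair coin), i.e., beat the benchmark.
p_beat = sum(comb(10, k) * 0.5 ** 10 for k in range(6, 11))
print(f"{p_beat:.3f}")  # 0.377

# Expected number of benchmark-beaters among 5,000 fair-coin tossers.
print(round(5_000 * p_beat))  # 1885

# Expected number of perfect ten-head records among 5,000 fair-coin tossers.
print(round(5_000 * 0.5 ** 10, 1))  # 4.9 -- so six perfect records is unremarkable
```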
Note that I have not told you that in the second thought experiment the fund managers were all tossing fair coins. I only provided you with some information about how they performed, and related the results to the rules of chance.
Biased Predictions Based Upon Track Record
In reality, a mutual fund manager’s track record is critical. It should come as no surprise that during Peter Lynch’s thirteen-year tenure, Fidelity Magellan became the largest mutual fund in the world, with its assets growing from $20 million to $14 billion.
In practice, investors bet on past performance.6 Wall Street Journal columnist Jonathan Clements wrote an article titled “Looking to Find the Next Peter Lynch,” in which he argues that Lynch’s success has led investors to overstate the chances of finding skilled managers. In particular, Clements describes the process by which high-performing mutual fund managers attract the attention of journalists, and what follows as a result.
If a manager specializes in, say, blue-chip growth stocks, eventually these shares will catch the market’s fancy and—providing the manager doesn’t do anything too silly—three or four years of market-beating performance might follow.
This strong performance catches the media’s attention and the inevitable profile follows, possibly in Forbes or Money or Smart Money. Unfortunately for the journalists involved, our fund manager is less interesting than most podiatrists. Surely, sir, you have an intriguing hobby? Maybe, sir, you could tell us an anecdote to illustrate your investment style?
By the time the story reaches print, our manager comes across as opinionated and insightful. The money starts rolling in. That’s when blue-chip growth stocks go out of favor. You can guess the rest.
Too harsh? Maybe I have seen too many star managers come and go. When they go, they tend to go gently into that good night, a performance whimper rather than a spectacular bust.
I think of managers such as Gabelli Asset Fund’s Mario Gabelli, Janus Fund’s James Craig, Monetta Fund’s Robert Bacarella, Parnassus Fund’s Jerome Dodson, Pennsylvania Mutual Fund’s Charles Royce, as well as Fidelity Asset Manager’s former manager Bob Beckwitt and Fidelity Capital Appreciation Fund’s former manager Tom Sweeney.7
Clements is careful not to suggest that the performance of mutual fund managers is akin to tossing a fair coin. In fact, he notes: “Past performance may be a guide to future results. But it’s a mighty tough guide to read.”
Why is past performance so difficult to evaluate? To understand the reason, let’s go back to our coin-tossing analogy. Remember that managers use three types of coins—gold, silver, and bronze. Imagine that of the three types, only the silver is fair. The gold coin is weighted toward heads, and the bronze coin is weighted toward tails. Specifically, the odds of tossing a head are 55:45 when using a gold coin and 45:55 when using a bronze coin. Only the silver coin offers even odds.
Now, suppose that we glance through Eveningscore and spot a manager who has tossed seven heads, thereby beating the benchmark by two. If we knew that this manager was actually tossing a gold coin, then it would make sense to back him in the next round. If you are going to back just one manager, then that is the best you can do—back a gold-coin tosser.
But we don’t know who the gold-coin tossers are. So if we are armed only with track records, then given this track record (beat the benchmark by two), what are the odds that this manager is tossing a gold coin? The lowest number you should guess is 33.3 percent, since one third of the managers were randomly assigned gold coins. The highest number is, of course, 100 percent, but that is too high, because silver- and bronze-coin tossers can also toss seven heads. Where in the range from 33.3 to 100 would you put your answer?
The correct answer is 46.5 percent: better than 33.3 percent, but well below 100 percent. And what about the odds that this money manager, the one with the seven-head track record, will beat the benchmark next time? The probability that he will beat his benchmark next time is 41 percent—better than 38 percent, but not by much.
What about the chances of the coin tossers’ doing at least as well as they did in the first round of ten tosses? Alas, the odds of repeating the previous performance are only 20 percent.
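The 46.5, 41, and 20 percent figures all follow from Bayes’ rule applied to the three coin types. A sketch of the calculation:

```python
from math import comb

def binom_pmf(n, p, k):
    """P(exactly k heads in n tosses with head-probability p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_tail(n, p, k_min):
    """P(k_min or more heads in n tosses with head-probability p)."""
    return sum(binom_pmf(n, p, k) for k in range(k_min, n + 1))

coins = {"gold": 0.55, "silver": 0.50, "bronze": 0.45}
prior = 1 / 3  # each coin type was handed out to one third of the managers

# Bayes' rule: P(gold | 7 heads) is proportional to prior * P(7 heads | gold).
likelihood = {c: binom_pmf(10, p, 7) for c, p in coins.items()}
total = sum(prior * l for l in likelihood.values())
posterior = {c: prior * likelihood[c] / total for c in coins}
print(f"{posterior['gold']:.1%}")  # 46.5%

# Probability the manager beats the benchmark (6+ heads) next round,
# averaging over the three coin possibilities.
p_beat = sum(posterior[c] * binom_tail(10, coins[c], 6) for c in coins)
print(f"{p_beat:.0%}")  # 41%

# Probability of doing at least as well as before (7+ heads) next round.
p_repeat = sum(posterior[c] * binom_tail(10, coins[c], 7) for c in coins)
print(f"{p_repeat:.0%}")  # 20%
```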
Even when mutual fund managers differ in their ability to produce winning performance, picking future winners based on past track record is very dicey because of the rules of chance. The gold-coin tossers are mixed in with many silver- and bronze-coin tossers, and plenty of those are likely to beat the benchmark too. So sifting out the gold nuggets from the silver and bronze is a crude art, not a science. That is the nub of the issue.
Most investors do not understand that picking future mutual fund winners is a crude art. To begin with, they do not frame the evaluation problem correctly. Remember our two thought experiments? In the first experiment, we evaluated how likely it would be for me to toss ten heads in a row. In the second experiment, we handed each of 5,000 managers a coin, and we asked how likely it would be for at least one of them, but no one in particular, to toss ten consecutive heads. The point of these experiments is that an event that is remote in the first experiment is highly likely in the second. Investors need to frame the evaluation problem as in the second thought experiment. But they use a heuristic whereby they evaluate in isolation the manager whose performance they are judging. Consequently, they attribute too much of performance to skill, and not enough to luck.
Representativeness
Here is a third thought experiment. Suppose that in the past, a particular fund manager is known to have beaten her benchmark two thirds of the time. Consider three possible short-term track records for her fund’s most recent performance. Each record is a string of B’s and M’s. A “B” denotes “beat or met benchmark,” whereas an “M” denotes a “missed benchmark.” The possible short-term records are (1) BMBBB, (2) MBMBBB, (3) MBBBBB.
Which of these three track records do you think is the most likely? Most people—about 65 percent, in fact—believe that the second track record is the most probable. However, the first track record is actually the likeliest of the three. Notice that the second record is just the first with an extra miss tacked on at the front: the manager first missed her benchmark (an M), and then produced BMBBB. Since the second record requires that additional event to occur, it must be less probable than the first.
Most people get the wrong answer because they rely on representativeness to assess likelihood. Notice that the second track record has a success rate of two thirds, the same rate as the manager’s long-term success rate. So, the second track record most closely represents the manager’s long-term performance. But representativeness and probability are not synonymous. Hence, representativeness can be a misleading guide to likelihood assessment, as in this case.
In the most likely track record, BMBBB, the manager met or beat her benchmark 80 percent of the time, well above her long-term rate of two thirds. Representativeness leads investors to misjudge how often such departures from average performance occur by chance. It thereby compounds the misframing problem, making investors too quick to attribute track records such as BMBBB to skill rather than luck.
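The probabilities of the three track records are easy to compute directly: each record is a product of independent two-thirds and one-third terms.

```python
# Probability of each track record for a manager who beats her
# benchmark (B) with probability 2/3 and misses it (M) with probability 1/3.
p = {"B": 2 / 3, "M": 1 / 3}

def record_prob(record):
    prob = 1.0
    for outcome in record:
        prob *= p[outcome]
    return prob

for record in ("BMBBB", "MBMBBB", "MBBBBB"):
    print(record, round(record_prob(record), 4))
# BMBBB 0.0658 -- the likeliest, despite matching the 2/3 rate least closely
# MBMBBB 0.0219 -- exactly one third of BMBBB's probability: it is BMBBB plus an M
# MBBBBB 0.0439
```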
Was Peter Lynch Just Plain Lucky?
So, was Peter Lynch just plain lucky all those years? I put that question to Bob Saltmarsh, who served as treasurer of Apple Computer and was responsible for investor relations when Lynch was managing Magellan.8 Recall that Apple was one of the stocks in Magellan’s portfolio.
Saltmarsh told me that he found Peter Lynch very different from other mutual fund managers. In describing his interactions with Lynch, Saltmarsh says: “His questions were different. They were more focused and insightful. In particular, he stayed away from techno-questions, and told me: ‘I only buy what I understand.’”
Saltmarsh recalls a particular encounter that took place at a Cowan & Co. investor conference during the autumn of 1987. At that time, Apple’s sales had doubled very quickly to $4 billion, and consequently Apple found itself with between $700 million and $800 million in cash. Saltmarsh had been meeting with many fund managers at the conference, but Lynch was alone in probing Apple about its cash position. He asked Apple the same question in nine or ten different ways. Saltmarsh was uncertain what issue Lynch was trying to expose, and finally just asked Lynch what it was he wanted to know. The response? “Will you be another G.M.? Is that money burning a hole in your pocket? Will you stick to your knitting, or will you go off and try something that you don’t understand?”
Given Lynch’s thirteen-year track record, he may well have been using a “gold coin”: it would be silly to rule out the role of skill. Indeed, since Peter Lynch retired from Magellan in 1990, three other managers have run the fund: Morris Smith, Jeffrey Vinik, and Robert Stansky. Yet from 1994 through 1998, Magellan has not beaten the S&P 500.9
Do Winners Repeat?
If skill is a factor in mutual fund performance, then we should expect to find that winners repeat. Successful mutual fund managers should continue to be successful, at least on average. A basketball player who seems to make every basket he shoots is said to have “hot hands.” Successful mutual fund managers should also have hot hands. Do they?
What do we know about persistence? The earliest study is by Michael Jensen (1968), who studied performance over the period 1945–1964. He found that mutual fund managers, both as a group and individually, do not have hot hands.
Jensen refrained from telling investors to forget about managers beating the market. Rather, he said that a fund could only be expected to return more than the market if it held a portfolio that featured more systematic risk than the market. What Jensen did was to take the raw return to a fund and subtract out a portion that reflected the compensation for taking risk. He called the residual “alpha,” and it has come to be known as “Jensen’s alpha.” In effect, Jensen established a benchmark against which to compare the performance of different mutual funds. Alpha is the amount by which a fund’s return exceeds the benchmark. Essentially, Jensen found that all mutual fund alphas were indistinguishable from zero.
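Jensen’s benchmark can be sketched in a few lines. The formula is the standard one for Jensen’s alpha; the return figures below are hypothetical, chosen only to illustrate the calculation.

```python
# Jensen's alpha: the part of a fund's return left over after subtracting
# compensation for bearing systematic (beta) risk.
def jensen_alpha(fund_return, risk_free, market_return, beta):
    # Benchmark: risk-free rate plus beta times the market risk premium.
    benchmark = risk_free + beta * (market_return - risk_free)
    return fund_return - benchmark

# Hypothetical fund: a 14% return with beta 1.2, in a year when the market
# returned 12% and the risk-free rate was 4%.
alpha = jensen_alpha(0.14, 0.04, 0.12, 1.2)
print(f"{alpha:.1%}")  # 0.4% -- close to zero, as Jensen found for most funds
```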
Beyond Greed and Fear Page 22