Beyond Greed and Fear
Since Jensen’s pioneering work, a number of studies have appeared suggesting that some mutual fund managers may indeed have hot hands. Richard Ippolito (1989) finds evidence of significant positive Jensen alphas net of expense fees, but not load charges. Mark Grinblatt and Sheridan Titman (1989) also provide evidence in this vein. William Goetzmann and Roger Ibbotson (1994a) document a hot-hand effect in that winners repeat. Darryll Hendricks, Jayendu Patel, and Richard Zeckhauser (1993) also find that winning performance persists. Mark Carhart (1997) provides the clearest evidence about the character of persistence. However, at least one recent study, by Edwin Elton, Martin Gruber, Sanjiv Das, and Matt Hlavka (1993), finds no evidence of positive alphas.
Goetzmann and Ibbotson examined the period 1976 through 1988. They divided this period into six two-year time periods and looked at measured performance in two ways—raw returns and Jensen’s alpha. In any two-year period, a fund is categorized as a winner if its return was above the median. In the absence of a hot-hand effect the chance of a winner’s repeating between successive two-year periods is 50 percent. Goetzmann and Ibbotson found that the probability that a winner repeats is actually 60 percent.
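The 50 percent baseline is just the no-skill null hypothesis: if beating the median in one period is a coin flip independent of the next, about half of the winners should repeat. A minimal simulation sketch makes the baseline concrete (the fund and trial counts below are illustrative, not taken from the study):

```python
import random

def repeat_rate(n_funds, n_trials, seed=0):
    """Fraction of first-period winners that also win in the second
    period, when winning is an independent coin flip (no-skill null)."""
    rng = random.Random(seed)
    winners = repeats = 0
    for _ in range(n_trials):
        for _ in range(n_funds):
            if rng.random() < 0.5:        # above the median this period
                winners += 1
                if rng.random() < 0.5:    # above the median again next period
                    repeats += 1
    return repeats / winners

# Under pure luck the repeat rate hovers near 0.50, well short of the
# 60 percent repeat rate that Goetzmann and Ibbotson observed.
print(round(repeat_rate(200, 1000), 2))
```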
Carhart’s study offers the clearest portrayal of persistent mutual fund performance. The study covers the period 1962 through 1993, and it includes all known equity funds during this period. For each of these years, Carhart divides funds into ten groups based on the return they earned in the previous twelve months, net of all operating expenses and security-level transaction costs, but excluding sales charges.
Consider the top 10 percent performers in any given year. If managing a mutual fund involved as much skill as tossing a fair coin, then we would expect these managers to perform no differently the following year. In other words, we would expect these managers to arrange themselves quite uniformly over the ten performance groups the next year.
Is this in fact how they find themselves arranged the next year? Not exactly. Figure 12-1 contrasts the actual arrangement with a uniform arrangement. The figure shows that group 1, the top performers, are more likely to find themselves in the top-performing group the next year than in any other group. So, winners do tend to repeat! Indeed, top performers are almost twice as likely to be top performers the following year, as compared to managers in any other group. Notably, the situation is much the same no matter which group we look at. The members of each group are most likely to find themselves in the same group next year as they are now. However, Carhart finds that this is a single-year phenomenon. Except for the worst performers, the ranking distribution for mutual fund managers after two years tends to be uniform.
What explains this one-year hot-hand phenomenon? Is it skill? Or could it be risk? Do top performers simply hold riskier portfolios than other managers, thereby earning higher returns on average?
Unfortunately, measuring risk is not as straightforward as it used to be when Jensen’s mutual fund study appeared back in 1968. At that time, risk was measured by beta—the extent to which a portfolio moved with the market. However, for the reasons discussed in chapter 7, risk measurement is now based on factor models. In addition to a proxy for the market portfolio, Carhart uses size, book-to-market, and a short-term momentum variable as factors. However, as you may recall from chapter 7, I suggest that size and book-to-market serve as factors that capture mispricing rather than risk.

Figure 12-1 Where the Top-Ranked Funds from Last Year End Up a Year Later

There is a weak “hot hands” phenomenon for mutual funds. The top-performing funds in a given year are more likely to repeat as winners the following year than random chance would suggest. The effect lasts for one year and then disappears.
Carhart finds that the four factors explain mutual fund performance for all but the bottom 10 percent. The worst performers underperform relative to what the factors predict. So, where does that leave us: Is performance determined by risk, or by skill in picking winners? Grinblatt, Titman, and Russ Wermers (1995) find that about 77 percent of mutual fund managers use momentum strategies, meaning that they purchase stocks that have recently gone up. But they also find that this tendency is not particularly large. In addition, Grinblatt, Titman, and Wermers find that mutual fund managers tend to display herding behavior with respect to stocks that have recently gone up: They move in to buy past winners at the same time. However, they don’t herd when it comes to selling past losers.
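The mechanics of a factor-model performance check can be sketched in a few lines. The regression below is in the spirit of Carhart’s four-factor approach, but the data are synthetic and the factor loadings are invented for illustration; the point is simply that the fitted intercept estimates the fund’s alpha, the part of its return the factors cannot explain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly excess returns for four factors (market, size,
# book-to-market, momentum) and for one fund. Illustrative numbers only.
n_months = 120
factors = rng.normal(0.0, 0.04, size=(n_months, 4))
betas = np.array([1.0, 0.3, -0.2, 0.4])   # assumed factor loadings
alpha = 0.002                             # 0.2 percent per month of "skill"
fund = alpha + factors @ betas + rng.normal(0.0, 0.01, n_months)

# Regress the fund's returns on a constant plus the four factors;
# the intercept of the fit is the estimated alpha.
X = np.column_stack([np.ones(n_months), factors])
coefs, *_ = np.linalg.lstsq(X, fund, rcond=None)
print(f"estimated alpha: {coefs[0]:.4f}")
```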
As noted in chapter 7, there does appear to be a short-term momentum effect for stock returns. Stocks that have gone up recently are more likely than other stocks to continue to go up. Is the momentum strategy that managers follow enough to drive the persistence effect? Carhart (1997) suggests not. He suggests that these funds “accidentally end up holding last year’s winners. Since the returns on these stocks are above average in the ensuing year, if these funds simply hold their winning stocks, they will enjoy higher one-year expected returns, and incur no additional transaction costs for this portfolio. With so many mutual funds, it does not seem unlikely that some funds will be holding many of last year’s winning stocks simply by chance” (p. 73).
Risk and Mutual Fund Ratings
Rating mutual funds is no trivial matter. Investors can find fund ratings in Business Week, Barron’s, Consumer Reports, and various Morningstar publications. Ratings are rarely based on raw returns alone, especially returns from the previous year. For example, Morningstar prepares a mutual fund scorecard for Business Week that adjusts returns for risk, where risk is measured by downside volatility. Interestingly, the A-rated funds in Business Week’s 1997 scorecard actually earned 0.9 percentage points less than the S&P 500; they earned their top grades because they exhibited less downside volatility than the index. In fact, the Vanguard Index 500 Fund, which closely tracks the S&P 500, received only a grade of B+.
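Downside volatility penalizes only below-target returns rather than all variability. One common formulation is the downside deviation sketched below; the exact formula behind the Business Week scorecard may differ, and the fund return series here are hypothetical:

```python
def downside_deviation(returns, target=0.0):
    """Root-mean-square of shortfalls below `target`: one common way to
    measure downside volatility. (The scorecard's exact formula may differ.)"""
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return (sum(shortfalls) / len(returns)) ** 0.5

# Two hypothetical funds with the same average return. Fund A is much
# steadier on the downside, so a downside-risk-adjusted rating favors it
# even though both delivered the same mean.
fund_a = [0.02, -0.01, 0.03, -0.01, 0.02]
fund_b = [0.06, -0.05, 0.07, -0.05, 0.02]
print(downside_deviation(fund_a) < downside_deviation(fund_b))  # True
```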
A grade of B+ for a fund that outperformed 90 percent of all mutual funds in 1997, and 83 percent of all mutual funds over the previous 20 years, may seem odd. Don Phillips, president of Morningstar, explains it this way: “When you look at funds by risk-adjusted returns, you are not necessarily looking at the most profitable funds, but the most comfortable funds.”10
Obfuscation Games
As you may recall from chapter 10, the importance of comfort should not be underestimated. But it leaves investors vulnerable to the obfuscation games played by the mutual fund industry, games designed to exploit frame dependence. Recall from chapter 3 that psychologists Tversky and Kahneman (1986) classify decision frames into two categories—transparent and opaque. The purpose of obfuscation games is to make investors’ decision frames opaque rather than transparent. And these games appear to be successful.
Obfuscation games work because mutual fund investors are subject to the standard cognitive limitations described in this volume. Recent studies by the Investment Company Institute reveal that most mutual fund investors do not rely on their own judgment to choose funds, but on the judgment of advisers. The ICI reports that nearly 60 percent of investors own funds purchased only through a broker, insurance agent, financial planner, or bank representative. In contrast, 22 percent are pure “do-it-yourselfers” who purchase through the direct-market channel; that is, they purchase directly from the fund companies themselves or through discount brokerages.
A related study by Vanguard shows that most mutual fund investors lack expertise. Vanguard administered a twenty-question knowledge test to a broad group of users and found that investors fared very poorly on the test. More than three in five failed to answer even half the questions correctly. The average score was 49 percent, and only 3 percent scored 85 percent or higher.11
Mutual fund investors also exhibit selective memory. William Goetzmann and Nadav Peles (1994) found that investors have biased recollections of how well their funds performed in the past. Investors wear rose-colored glasses, meaning they think that their funds did much better than was actually the case. Goetzmann and Peles attribute this tendency to the psychological phenomenon known as cognitive dissonance.
Rather than feel uncomfortable about how poorly past choices turned out, investors reconstruct the past.
So, mutual fund investors are vulnerable to obfuscation games. What are these games? They come in several varieties. Goetzmann and Ibbotson describe some of them in a 1994b paper. Some of these games are akin to what magicians do: make some items, like rabbits, appear out of nowhere and other items, like eggs, vanish before our eyes.
In the first game, which I call the incubator fund game, fund companies maintain a group of “incubator funds.” These funds appear out of nowhere. They are closed to outside investment, but those that are successful are brought to market. Why is this a game? Because it capitalizes on the same misframing phenomenon that was discussed earlier in connection with the thought experiments. Investors tend to frame their evaluation of performance by focusing on the fund in isolation rather than on the whole group of incubator funds. When there are many new funds being incubated, the likelihood that some will perform well by chance alone is very high. Unfortunately, investors are prone to interpret good performance by an incubator fund as evidence of skill rather than luck. Consequently, they fail to recognize the tendency for these funds to regress to the mean in terms of future performance.
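The arithmetic behind the incubator game is simple. Suppose each fund’s chance of beating the market in a given year were a pure coin flip. The chance that at least one fund in a batch of incubators compiles a perfect multi-year record then grows quickly with the size of the batch (the batch size below is hypothetical):

```python
def prob_some_fund_looks_hot(n_funds, n_years, p_beat=0.5):
    """Chance that at least one no-skill fund beats the market in every
    one of `n_years` years, out of `n_funds` incubated funds."""
    p_one_hot = p_beat ** n_years          # one fund's perfect-record odds
    return 1 - (1 - p_one_hot) ** n_funds  # complement of "none looks hot"

# With 20 incubated funds, the odds that at least one posts three
# straight market-beating years by luck alone are about 93 percent.
print(round(prob_some_fund_looks_hot(20, 3), 2))  # 0.93
```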
The second game is called hiding the losers. Suppose a company is managing many funds, and some of them have been poor performers. What to do? Whisk the losers off by merging them into other funds. Jonathan Clements illustrated the point in a column aptly titled “Abracadabra, and a Putnam Fund Disappears.”12
The disappearing fund was the $334 million Putnam Strategic Income Trust. Since its inception in 1977, its cumulative return of 153 percent was well below the 324 percent earned by the S&P 500, and it earned more than the index in only three of those years. It was merged into the tiny Putnam Equity Income Fund, which had less than $1 million in assets.13 Clements quotes Don Phillips, publisher of Morningstar Mutual Funds: “That’s a classic case of burying a bad record.”
Next comes the game of opaque fees. A Wall Street Journal article by Charles Gasparino describes an address by Securities and Exchange Commission chair Arthur Levitt at a conference sponsored by the Investment Company Institute.14 Although fees are discussed in mutual fund prospectuses, they are nowhere near as salient to investors as performance. One reason for this is that fees are expressed in percentage terms, not dollars. In contrast with average performance expressed as a percent, the fees look small. But when expressed in dollars, fees get compared against a different benchmark—regular expenditure items—where they look sizable. Consequently, investors pay little attention to the long-term impact of percentage-based fees.
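The framing point is easy to miss precisely because percentages hide compounding. A sketch with illustrative numbers (an 8 percent gross return and a 1.2 percent expense ratio, both assumed) shows how a “small” percentage fee becomes a large dollar figure over time:

```python
def terminal_wealth(initial, gross_return, fee, years):
    """Grow a balance at `gross_return` per year, deducting a
    percentage-of-assets fee each year. Illustrative numbers only."""
    balance = initial
    for _ in range(years):
        balance *= (1 + gross_return) * (1 - fee)
    return balance

with_fee = terminal_wealth(10_000, 0.08, 0.012, 20)
no_fee = terminal_wealth(10_000, 0.08, 0.0, 20)

# On a $10,000 investment over 20 years, a 1.2 percent annual fee
# consumes roughly a fifth of the final balance.
print(round(no_fee - with_fee, 2))
```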
In a 1997 article, John Bowen Jr. and Meir Statman discuss the benchmark game. This game is about assigning funds to categories and then measuring performance relative to category benchmarks. Lipper Analytical Services compiles the benchmark performance reported in the Wall Street Journal. Bowen and Statman provide an example. For the year ending June 20, 1996, the Vanguard 500 Index Fund earned a 24.3 percent return, whereas the Vanguard Small-Cap Index Fund earned a 24.6 percent return. The 500 Index Fund is in the growth and income category, whereas the Small-Cap Index Fund is in the small-company growth category. How about the grades? Despite the nearly identical raw returns, the Vanguard 500 Index Fund received a grade of A, whereas the Small-Cap Index Fund received a D. Professors who assigned grades to their students in such fashion would quickly find themselves in serious trouble; evidently they should have chosen a career rating mutual funds instead.
Benchmark performance measurement is not risk adjustment but style adjustment. Given what we know about risk, and about investors’ comprehension of the meaning of risk, it is far from clear that style should be relevant to investors’ financial interests. I suggest it is not. But given that investors care about style, so too must mutual fund managers.
As I discussed in chapter 10, investors do not understand diversification at all well. They mistake variety for diversification, and as a result they buy many different kinds of funds. By offering a variety of styles, flavors, and colors, the mutual fund industry capitalizes on this behavior pattern. Categorical comparisons by means of benchmarks cater to the same behavior.
Of course, there will be times when investors wonder whether the emperor wears any clothes. After a year like 1997, when 90 percent of funds underperformed the S&P 500, managers may find it difficult to keep investors’ attention focused on the category benchmark—a point made by Barron’s writer Andrew Bary:
In putting their 1997 performances in the best light, many money managers will compare their funds with other mutual funds. That’s what Fidelity loves to do. “Your fund beat the average growth fund tracked by Lipper,” is how many annual reports will begin. This comparison holds dubious value because it measures a fund against mediocre competition.
It’s like a .260 hitter calling himself a star because he plays on a last-place team. The true comparison for the vast majority of funds should be the S&P 500.15
Another game is masking the risk—another concern that Gasparino mentions in his article about the SEC’s Levitt. There are different versions of this game. One involves including derivatives in the portfolio that affect the risk in ways investors do not understand. Another concerns the behavior of some managers halfway through the year. At this point, there will be some managers who find that their funds are underperforming relative to their benchmarks. What should they do? Remember chapters 3 and 9, which discussed how people behave when they perceive themselves to be in “loss territory”? These managers increase their risk exposure, hoping to at least break even. Keith Brown, Van Harlow, and Laura Starks (1996) report that fund managers who find themselves in the bottom half of their comparison group at midyear increase the risk of their fund’s portfolio during the second half of the year.
The final game, called come out with all guns blazing, involves managers of new funds. New funds tend to have riskier portfolios. Those that do well, thereby garnering attention, tend to reduce their risk exposure after becoming established. Consider the Technology Value Fund run by Firsthand Funds in San Jose, California. This fund concentrates in technology stocks; it is the only fund based in Silicon Valley, where most of the companies in which it invests are also located.
Firsthand Funds used to be called Interactive Investments. It was started in 1994 with very little money and produced very impressive returns in its first two years—61 percent in both 1995 and 1996. Investor’s Business Daily gave Technology Value a grade of A+, based on its cumulative three-year return of 216 percent, the highest for all funds.
The fund became available to the public in December 1994. One year later, it had $900,000 under management. In December 1996, this amount had grown tenfold to $9 million. By December 1997, the amount of assets under management was $195 million. Now what were we saying about whether investors base their decisions on past performance?
In mid-June 1997, Technology Value received a five-star rating from Morningstar, Inc. But the rating came with reservations, because Technology Value concentrates its holdings in just a few high-technology stocks, and this makes it riskier than even a typical sector fund. The following quotation, which appeared in a June 13, 1997, Wall Street Journal article, captures the ambivalence: “High risk accompanies Technology Value fund’s hot performance. ‘I don’t see why people should take a chance on them,’ says Russ Kinnel, head of equity-fund research at Morningstar. ‘If you buy a hot fund with high expenses with a high-risk approach, don’t be surprised when you get burned,’ Mr. Kinnel warns.”16
Not four months later, Kinnel’s words proved prophetic when problems in the Asian economies hit the stocks in Technology Value’s portfolio particularly hard. For the fourth quarter of 1997 the fund underperformed its peer group and was down 18.72 percent.
The folks who run Firsthand Funds pay no attention to security risk. Instead, they concentrate on how the combination of technology and competitive advantage is likely to affect future earnings growth. In this respect, the fund managers rely on having both strong technical backgrounds and strong business backgrounds.
In a 1997 interview that appeared in Barron’s, Kevin Landis described Firsthand’s approach: “It makes sense that you have this fund in Silicon Valley, run by two people who worked in these industries, plugging into a network of industry professionals providing great input, doing fundamental research and buying great companies at great prices. People love the story, and it seems to work.”17
Does anything in this approach, concentrating on companies you can understand and where you may have an informational advantage, sound vaguely familiar? Landis was explicit: “We were real adherents of the Peter Lynch philosophy of investing in what you know.”
Summary
We have come full circle, back to Peter Lynch, the salient Peter Lynch. Most people tend to misinterpret what his success means. To be sure, there does seem to be something of a hot-hands effect, some persistence in mutual fund managers’ performance. But most investors will misread what this performance says about the future. Because they use the wrong frame and rely on the heuristic of representativeness, they will tend to attribute too much of that success to skill rather than luck. Moreover, these biases will leave them vulnerable to a host of games played by the mutual fund industry.
Chapter 13 Closed-End Funds: What Drives Discounts?
The prices of closed-end funds present a puzzle for market efficiency.
In a closed-end fund, the number of shares is fixed after the initial offering. Therefore, the only way investors can buy shares in the fund is to purchase them from some other investor. Open-end funds are different. In an open-end fund, the number of shares is not fixed, and investors purchase shares directly from the investment company running the fund. In this case, the investment company simply issues more shares to meet investor demand.