by David Dreman
Even to be flagged on the screen, the manager had to outperform the market by 5.83 percent annually for fourteen years. When we remember that a top manager might beat the market by 1½ or 2 percent a year over that length of time, the returns required by Jensen to pick up managers outperforming the averages were impossibly high. Only a manager in the league of Warren Buffett or John Templeton might make the grade, and certainly not every year. One fund outperformed the market by 2.2 percent a year for twenty years, but according to Jensen’s calculations, this superb performance was not statistically significant.30 “There is very little evidence,” Jensen wrote at the time, “that any individual fund was able to do significantly better than that which we expected from mere random chance.”31
In another academic paper, using standard risk adjustment techniques, the researchers showed that it was not possible, at a 95 percent confidence level, to say that a portfolio that was up more than 90 percent over ten years was better managed than another portfolio that was down 3 percent. It was also noted that “given a reasonable level of annual outperformance and variability (volatility), it takes about seventy years of quarterly data to achieve statistical significance at the 95% confidence level.”*38
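To see why the bar is so high, here is a minimal back-of-the-envelope sketch in Python. The 2 percent annual outperformance and 8 percent tracking error below are assumptions chosen purely for illustration, not figures taken from the studies cited.

```python
# Hypothetical assumptions, for illustration only (not from the cited studies):
annual_alpha = 0.02           # 2% average annual outperformance
annual_tracking_error = 0.08  # 8% annual volatility of the excess return

# After T years the t-statistic is roughly (alpha / tracking_error) * sqrt(T).
# Solve for the T that first reaches the two-sided 95% critical value of about 1.96.
critical_t = 1.96
years_needed = (critical_t * annual_tracking_error / annual_alpha) ** 2

print(f"Years of data needed: {years_needed:.0f}")  # about 61 with these assumptions
```

Under these illustrative assumptions the answer comes out near sixty years, in the same ballpark as the seventy-year figure quoted above; switching to quarterly observations does not shorten the wait, because the per-period outperformance shrinks along with the per-period noise.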
One researcher, in an understatement, noted that the problem lay in weak statistical tools. Corroborating those findings, Lawrence Summers, the former Treasury secretary, estimated that it would take 50,000 years’ worth of data to disprove the theory to the satisfaction of the stalwarts. Indeed, the EMH performance and risk measurement tools were so weak that it proved impossible to delineate even outstanding performance, which by sheer coincidence was the one thing that would invalidate the hypothesis.32 Obviously, this important “proof” that managers could not beat the market was put together with seriously inadequate statistics that coincidentally seemed to consistently give outstanding managers the short end of the count.
How, too, could the $63 billion Magellan Fund, for example, with more than a million shareholders and under three separate money managers, outperform the market for well over a decade? Or John Templeton and John Neff, the latter running billions of dollars for the Windsor Fund for more than two decades? How are these stellar results possible with only publicly available information? Is it sheer chance, as EMH adherents are forced to claim? Are these simply more on a growing list of “aberrations” (a popular term used for events that cannot be explained by a theory)? If they are, we must look at how many other institutional investors have outperformed, using statistics that can actually detect superior performance, not inadvertently filter it out, as Jensen’s methods did.
Why Weren’t Failed EMH Performance Measurements Recalculated?
Given this fact, did the supposedly impartial academics correct their work when better statistical techniques were available? Apparently not. In spite of the above and other evidence, the conclusions of Jensen’s mutual fund study, although seriously flawed, are still used to support the main premise of efficient markets.
Although Fama, French, and others showed that CAPM risk measurements were valueless, this is only a part of the story. Risk-adjusted and non-risk-adjusted mutual fund performance measurements, in addition to Professor Jensen’s, have also been shown to be misleading, because of the weakness of the statistical tools employed. Still they, too, have not been recalculated by EMH defenders to get a fairer picture of how mutual funds have really performed against markets. We have just seen how, as a result of these measurements, outstanding performance was not detected, and this was one of the most powerful “proofs” that EMH used to show that markets were efficient. As noted, the records of most managers who consistently outperformed the market were wiped out by statistical gobbledygook.
The ghosts of beta and other academic risk measurements still walk the night, defending EMH and weeding out any above-average performance not permitted by the theory. These are not the only instances of such tactics being employed by the true believers.
Revenants and errors notwithstanding, superior performance, a death knell for EMH, could not be eliminated by the believers. Next we’ll look at some of the ghost busters.
Those Dreadful Anomalies
Another major challenge to EMH is its claim that groups of investors, say, those with professional knowledge, skills, or methods, have consistently kept prices where they should be.33 We just saw differently. However, EMH makes an even stronger statement: that no group of investors or any investment strategy can do better than the market over time. And here again the trouble starts.
The tenet that managers do not outperform or underperform a market benchmark has a corollary: there is no method or system that can consistently provide higher returns over time. This statement is contradicted by a large body of evidence that some investment strategies consistently do better than the market and others consistently underperform over time. The jury has come in with a unanimous decision on this one: the verdict is solidly against EMH.
As we will see extensively in Part IV, a considerable body of literature demonstrates that contrarian strategies have produced significantly better returns than the market over many decades. The explanation for this explicitly contradicts the central tenet of EMH—that people behave with almost omniscient rationality in markets.
Conversely, the tenet that no group of investors and no strategies should consistently underperform in an efficient market is another rock that EMH founders on. Below-market performance has been turned in for decades by people who buy favorite stocks, as we will see in detail when we examine contrarian strategies. Another significant underperformance finding, as noted, is the research that shows that IPOs have been dogs in the marketplace for forty years.34 So overperformance and underperformance for long periods—neither of which, EMH states, is possible—show up on both sides of the anomaly coin.
The anomalies show no sign of going away after four decades of counterchallenges; rather, they have been gaining in strength in the last few years, as dozens of articles have examined contrarian effects. The most important anomaly—contrarian strategies that beat the averages over extended periods—was, as we saw, documented by Professors Fama and French in 1992.35 Their own data contradict the contention that efficient markets have held up well. And the claims that these strategies are more risky have never been documented. The body of contradictory findings above challenges believers to either retract much of the theory or explain how such events can happen.
Another Challenge to Market Efficiency
Another major premise of EMH is the hypothesis that all new information is analyzed almost immediately and accurately reflected in stock prices, thus preventing investors from beating the market. Burton Malkiel, the author of A Random Walk Down Wall Street, now in its tenth edition, wrote in an article reviewing the evidence on efficient markets in 2005, “In my view, equity prices adjust to new information without delay, and, as a result, no arbitrage opportunities exist that would achieve above average returns, without accepting above average risk.”36 But do equity prices really adjust to new information “without delay”? This statement has been hard-core EMH for more than forty years and has been cited by almost every scholar in the field. True, prices often react to new information about a stock, but where is the proof that they react to it correctly?
There is none. In a series of studies we are about to examine, we’ll often find that the researchers mistakenly take any market reaction to new information as the correct one. A number of these studies also make it clear that the initial market reactions are wrong. You will also see this predictable reaction to earnings surprise over thirty-eight years in chapter 9, where the first reaction repeatedly is not the correct one. It is also demonstrated in papers by Ray Ball and Philip Brown (1968)37 and Victor Bernard and Jacob Thomas (1990)38 and noted by Eugene Fama in his 1998 survey of EMH literature.39
The fact that Professor Fama finds these latter researchers’ findings to be “robust” is particularly interesting, as they directly dispute the important assumption of efficient markets that new information is immediately and correctly reflected in stock prices. Here again we see a vital pillar of EMH begin to rock because an essential assumption of the theory was never tested by its proponents in a thorough manner. Stocks were tested merely for a reaction to new information, not for the correct reaction to the information. There are many dozens of potential prices a stock can reach on news; how do we know which one is correct? It’s almost equivalent to saying that if a man can jog, he’s capable of winning the 100-meter sprint at the Olympic Games.
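One way to make the distinction operational is to measure whether prices keep drifting after the news is out. The sketch below, using made-up daily returns, computes a cumulative abnormal return from the announcement date onward; under a fully correct initial reaction there should be no systematic drift after day zero. The function and data are hypothetical, for illustration only.

```python
import numpy as np

def cumulative_abnormal_return(stock_returns, market_returns, event_index):
    """Cumulative excess return from the event date onward.

    If the initial reaction were complete, the excess returns after the
    event should average out to zero; a persistent upward or downward
    drift suggests the first reaction was not the correct one.
    """
    excess = np.asarray(stock_returns) - np.asarray(market_returns)
    return np.cumsum(excess[event_index:])

# Hypothetical daily returns around an announcement on day 5.
stock = [0.001, 0.002, -0.001, 0.000, 0.001, 0.030, 0.004, 0.003, 0.005, 0.002]
market = [0.001, 0.001, 0.000, 0.001, 0.000, 0.002, 0.001, 0.001, 0.001, 0.001]

print(cumulative_abnormal_return(stock, market, event_index=5))
```

A steadily rising series after the event day is exactly the kind of post-announcement drift the studies cited above describe.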
The Tortuous Path to Market Efficiency
To give you a fuller grasp of the depth—or lack thereof—of the testing that was used to back up this argument, let’s look at other research performed in the past few decades that supposedly left no doubt of how quickly and accurately investors interpreted market information.
Bolting the Barn Door After the Mare Gallops Off
The landmark 1969 study to show that prices adjust to new information rapidly was done by four outstanding researchers of EMH—Eugene Fama, Lawrence Fisher, Michael Jensen, and Richard Roll (hereafter, FFJR collectively).
The researchers examined all stock splits on the New York Stock Exchange from 1926 through 1960.40 The results the investigators arrived at, using extremely sophisticated statistical techniques for the time, indicated that stock prices do not move up after splits, as investors have digested all the positive information beforehand. The authors concluded that their work provides strong support for the hypothesis that the market is efficient. In truth, this, like most of the other experiments in this category, is a rather simplistic experiment, as it involves only a very basic test of understanding uncomplicated, readily available information, hardly on a par with the complex decisions involving thousands of interacting variables that are called for in more normal investment analysis, such as that we saw in chapters 2 and 3. But to move on.
This study has been cited in hundreds of academic papers and has been taught to hundreds of thousands of graduate students as one of the major research works upholding market efficiency. However, the study is seriously flawed. The researchers knowingly measured a time period months after the information was released to gauge its effect on the market, rather than measuring at the time when the information was made public. It’s not a little like locking the barn door after the mare has galloped away.
The information enters the market at the time of the split announcement, most often two to four months before the split is distributed to the company’s shareholders. The earlier time is when the measurement should commence to see if the news resulted in a rise in stock prices as a result of the split, as it does for earnings surprises, dividend increases or decreases, or other announcements that can have a major impact on stock prices.
Sadly, this information was unavailable, so the researchers measured from the month in which the stock split was actually distributed, a period when the information had been out for two to four months, and reported that no extra return was made from that point onward. Naturally, their measurements of stock movement at that point were meaningless, as the market had already digested the news from two to four months before and the informational content was already fully reflected in the stock prices.
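A stripped-down numerical sketch makes the measurement problem concrete. The monthly excess returns below are invented for illustration, not FFJR’s data; the point is only that a measurement starting at the distribution month misses the run-up that follows the announcement.

```python
import numpy as np

# Invented monthly excess returns around a split; the announcement is assumed
# to come three months before the shares are actually distributed.
monthly_excess = np.array([0.01, 0.01, 0.03, 0.04, 0.03, 0.00, 0.00, 0.00])
announcement_month = 2   # the information becomes public here
distribution_month = 5   # a measurement that starts here begins too late

print(f"{monthly_excess[announcement_month:].sum():.2f}")  # 0.10 -- captures the post-announcement run-up
print(f"{monthly_excess[distribution_month:].sum():.2f}")  # 0.00 -- the news has already been digested
```

Starting the clock at the distribution month, as the original study did, can show no excess return even when the months after the announcement were strongly positive.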
In Contrarian Investment Strategies: The Next Generation, I analyzed the chart the researchers provided from the examination; it was obvious that the steepest run-up after the announcements of splits came in the two-to-four-month period immediately after the announcement. In fact, the average extra monthly return for the four months in which the splits are announced is almost double the above-market returns in the previous twenty-six months.41
This raises a difficult problem for the researchers. What the chart appears to show, assuming that the majority of split announcements occurred two to four months prior to the stock distribution, is that the stocks may indeed have provided above-average returns after the announcement date. The positive adjustment to the splits, then, appears not to have been immediate but to have taken place for some months after the split’s announcement.
If this is the case, the researchers’ argument is invalid. The most logical conclusion is that the stocks continued to rise as a group for an extended period after the split announcement, which is exactly opposite to what the paper concluded.
The academics do, as noted, explain several times in the paper that the announcement date was not in their database.*39 Perhaps this was fortunate for them. If it had been possible to place the splits at the correct point, as the above analysis indicates, the conclusion would have been very different: the evidence would have shown markets reacting to new information in an inefficient, not an efficient, manner, which would certainly call the overall efficiency of markets into question.
Three decades later, in 1996, the research was replicated by David Ikenberry, Graeme Rankine, and Earl Stice,42 who examined 1,275 two-for-one stock splits from 1975 to 1990 on the New York Stock Exchange and the AMEX. They observed excess returns of 3.4 percent after the split announcement and 7.9 percent for the first year after, followed by higher average returns in the three-year period following the split.
Hemang Desai and Prem Jain (1997) found higher returns of 7 percent to 12 percent in the twelve months following a stock split.43 These results flatly contradicted FFJR’s 1969 paper, again providing evidence that markets react to new information in an inefficient, not an efficient, manner. The above findings are in line with our analysis.
Professor Fama, in his 1998 survey of EMH research, ignores the fact that the critical FFJR 1969 findings have been strongly refuted, and that the glaring flaw in the methodology has been identified. Instead, he seemingly questions the other researchers’ findings, noting that the time periods of the studies are different, along with some minor points of their methodology. In doing so, it appears, he is attempting to deflect the fact that the critical focus of the FFJR paper—to determine whether the market responds almost immediately to the announcement of a stock split—was flubbed. That the time periods were different is entirely irrelevant to this work. It was a smooth maneuver; since the point of the original study was to find out whether stock splits have an immediate impact on prices, he has sidestepped the raison d’être of the 1969 study and ducked the fact that the later findings seem to disprove the FFJR research. Some spinmeisters might want to study such thinking, which seems classic to their field.
Without the FFJR paper and other similar research, which also has significant problems, the critical tenet of EMH—that investors process information quickly and correctly—collapses completely.
As noted, the FFJR study is considered by many to be one of the strongest and best-known pieces of research supporting EMH.
More Leaks in the EMH Dreadnought
There’s nothing like really taking a close look at the original data. I mean really close, if you want to see what a researcher is doing. So let’s look at other studies that claim that the market adjusts quickly to new information. The first was performed by Ray Ball and Philip Brown in 1968.44 The two investigators examined the normal rates of return from 1946 to 1966 for 261 firms. They divided the stocks into two groups, those whose earnings in a given year increased relative to the market and those whose earnings decreased. The performance was measured after each year-end. They found that stocks whose earnings increased outperformed the market, while those that decreased underperformed the market. The researchers concluded that the stock prices had already anticipated most of the news of earnings announcements.
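In outline, a test of this kind sorts firms by the sign of the earnings change and compares the subsequent excess returns of the two groups. The sketch below is only a rough approximation of the Ball and Brown procedure, with invented numbers used purely for illustration.

```python
import numpy as np

# Invented data: each firm's earnings change relative to the market, and its
# excess return measured over the surrounding year.
earnings_change = np.array([0.10, -0.05, 0.02, -0.08, 0.04, -0.01])
excess_return = np.array([0.06, -0.04, 0.03, -0.07, 0.05, -0.02])

up_group = excess_return[earnings_change > 0].mean()
down_group = excess_return[earnings_change <= 0].mean()

print(f"Earnings up:   {up_group:+.3f}")   # positive excess return, as Ball and Brown reported
print(f"Earnings down: {down_group:+.3f}") # negative excess return
```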
The theorists overlooked one simple fact that is well known to most investors: companies normally report quarterly, not annually. The SEC has for many years required public companies to disclose this financial information within ninety days. Furthermore, even back then, analysts provided research reports on how companies were faring, most often containing full-year earnings estimates, often supplemented by press releases from company spokesmen. Still, Ball and Brown stated that investors correctly judged the prospects of companies and thus determined the movement of their stock prices when they actually had the information on hand to do so. Again the question arises of how aware the researchers were of practical market information, such as reporting and research. To conclude that the market is efficient from this rather obvious and, again, simple finding is stretching the point.
Another supposedly awesome bit of evidence to support the hypothesis was a study by Myron Scholes in 1972.45 Scholes analyzed the effect of secondary offerings of stock and concluded that, on average, a stock declined 1 or 2 percent when such an offering was made. The largest declines resulted from the sale of stock by corporations or corporate officers. He also stated that the full price effects of a secondary are reflected in six days. He concluded that since the SEC does not require the identification of the seller until six days after the offering, the market anticipates the informational content of the secondary and is therefore efficient. Here again is a sweeping conclusion based on nominal price movements over a short period of time.
Secondary offerings normally bring stock prices down temporarily; this is almost a platitude. What is important is whether the stocks are brought down appropriately. How do they perform relative to the market three, six, or twelve months later? Too, many brokers disclose beforehand who the sellers are. To state that the market anticipates this information because the SEC does not require it is a chancy conclusion. Often this information is provided anyway.
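The horizon question is straightforward to frame: compare the stock’s return with the market’s at several points after the offering. A minimal sketch with invented price paths follows; the function and the 3-, 6-, and 12-month windows are illustrative assumptions, not the procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented price paths: 300 trading days for one stock and for the market index.
stock_prices = 100 * np.cumprod(1 + rng.normal(0.0002, 0.010, 300))
market_index = 100 * np.cumprod(1 + rng.normal(0.0003, 0.008, 300))

def relative_performance(stock, market, offer_day, horizons=(63, 126, 252)):
    """Stock return minus market return roughly 3, 6, and 12 months
    (in trading days) after a secondary offering."""
    results = {}
    for h in horizons:
        stock_ret = stock[offer_day + h] / stock[offer_day] - 1
        market_ret = market[offer_day + h] / market[offer_day] - 1
        results[h] = stock_ret - market_ret
    return results

print(relative_performance(stock_prices, market_index, offer_day=10))
```

Measured this way, a temporary dip that later closes looks very different from a genuine, lasting repricing, which is precisely the distinction a six-day window cannot make.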
Another study examined how quickly markets integrate new information into stock prices. The research considered how companies’ stock prices react to the announcement of merger and tender offers. Fama, in his 1991 review of efficient markets, stated: