Yes, at least on a small scale. After I finished my Ph.D., I was appointed to the faculty of the department of computer science at Columbia University. I was fortunate enough to receive a multi-million-dollar research contract from ARPA [the Advanced Research Projects Agency of the U.S. Department of Defense, which is best known for building the ARPAnet, the precursor of the Internet]. This funding allowed me to organize a team of thirty-five people to design customized integrated circuits and build a working prototype of this sort of massively parallel machine. It was a fairly small version, but it did allow us to test out our ideas and collect the data we needed to calculate the theoretically achievable speed of a full-scale supercomputer based on the same architectural principles.
Was any thought given to who would have ownership rights if your efforts to build a supercomputer were successful?
Not initially. Once we built a successful prototype, though, it became clear that it would take another $10 to $20 million to build a full-scale supercomputer, which was more than the government was realistically likely to provide in the form of basic research funding. At that point, we did start looking around for venture capital to form a company. Our motivation was not just to make money, but also to take our project to the next step from a scientific viewpoint.
At the time, had anyone else manufactured a supercomputer using parallel processor architecture?
A number of people had built multiprocessor machines incorporating a relatively small number of processors, but at the time we launched our research project, nobody had yet built a massively parallel supercomputer of the type we were proposing.
Were you able to raise any funding?
No, at least not in the couple of months we spent trying, after which point my career took an unexpected turn. If it hadn’t, I don’t know for sure whether we would have ultimately found someone willing to risk a few tens of millions of dollars on what was admittedly a fairly risky business plan. But based on the early reactions we got from the venture capital community, I suspect we probably wouldn’t have. What happened, though, was that after word got out that I was exploring options in the private sector, I received a call from an executive search firm about the possibility of heading up a really interesting group at Morgan Stanley. At that point, I’d become fairly pessimistic about our prospects for raising all the money we’d need to start a serious supercomputer company. So when Morgan Stanley made what seemed to me to be a truly extraordinary offer, I made the leap to Wall Street.
Up to that point, had you given any thought to a career in the financial markets?
None whatsoever.
I had read that your stepfather was a financial economist who first introduced you to the efficient market hypothesis.* Did that bias you as to the feasibility of developing strategies that could beat the market? Also, given your own lengthy track record, does your stepfather still believe in the efficient market hypothesis?
Although it’s true that my stepfather was the first one to expose me to the idea that most, if not all, publicly available information about a given company is already reflected in its current market price, I’m not sure that he ever believed it was impossible to beat the market. The things I learned from him probably led me to be more skeptical than most people about the existence of a “free lunch” in the stock market, but he never claimed that the absence of evidence refuting the efficient market hypothesis proved that the markets are, in fact, efficient.
Actually, there is really no way to prove that is the case. All you can ever demonstrate is that the specific patterns being tested do not exist. You can never prove that there aren’t any patterns that could beat the market.
That’s exactly right. All that being said, I grew up with the idea that, if not impossible, it was certainly extremely difficult to beat the market. And even now, I find it remarkable how efficient the markets actually are. It would be nice if all you had to do in order to earn abnormally large returns was to identify some sort of standard pattern in the historical prices of a given stock. But most of the claims that are made by so-called technical analysts, involving constructs like support and resistance levels and head-and-shoulders patterns, have absolutely no grounding in methodologically sound empirical research.
But isn’t it possible that many of these patterns can’t be rigorously tested because they can’t be defined objectively? For example, you might define a head-and-shoulders pattern one way while I might define it quite differently. In fact, for many patterns, theoretically, there could be an infinite number of possible definitions.
Yes, that’s an excellent point. But the inability to precisely explicate the hypothesis being tested is one of the signposts of a pseudo-science. Even for those patterns where it’s been possible to come up with a reasonable consensus definition for the sorts of patterns traditionally described by people who refer to themselves as technical analysts, researchers have generally not found these patterns to have any predictive value. The interesting thing is that even some of the most highly respected Wall Street firms employ at least a few of these “prescientific” technical analysts, despite the fact that there’s little evidence they’re doing anything more useful than astrology.
But wait a minute. I’ve interviewed quite a number of traders who are purely technically oriented and have achieved return-to-risk results that were well beyond the realm of chance.
I think it depends on your definition of technical analysis. Historically, most of the people who have used that term have been members of the largely unscientific head-and-shoulders-support-and-resistance camp. These days, the people who do serious, scholarly work in the field generally refer to themselves as quantitative analysts, and some of them have indeed discovered real anomalies in the marketplace. The problem, of course, is that as soon as these anomalies are published, they tend to disappear because people exploit them. Andrew Lo at MIT is one of the foremost academic experts in the field. He is responsible for identifying some of these historical inefficiencies and publishing the results. If you talk to him about it, he will probably tell you two things: first, that they tend to go away over time; second, that he suspects that the elimination of these market anomalies can be attributed at least in part to firms like ours.
What is an example of a market anomaly that existed but now no longer works because it was publicized?
We don’t like to divulge that type of information. In our business, it’s as important to know what doesn’t work as what does. For that reason, once we’ve gone to the considerable expense that’s often involved in determining that an anomaly described in the open literature no longer exists, the last thing we want to do is to enable one of our competitors to take advantage of this information for free by drawing attention to the fact that the published results no longer hold and the approach in question thus represents a dead end.
Are the people who publish studies of market inefficiencies in the financial and economic journals strictly academics or are some of them involved in trading the markets?
Some of the researchers who actually trade the markets publish certain aspects of their work, especially in periodicals like the Journal of Portfolio Management, but overall, there’s a tendency for academics to be more open about their results than practitioners.
Why would anyone who trades the markets publish something that works?
That’s a very good question. For various reasons, the vast majority of the high-quality work that appears in the open literature can’t be used in practice to actually beat the market. Conversely, the vast majority of the research that really does work will probably never be published. But there are a few successful quantitative traders who from time to time publish useful information, even when it may not be in their own self-interest to do so. My favorite example is Ed Thorp, who was a real pioneer in the field. He was doing this stuff well before almost anyone else. Ed has been remarkably open about some of the money-making strategies he’s discovered over the years, both within and outside of the field of finance. After he figured out how to beat the casinos at blackjack, he published Beat the Dealer. Then when he figured out how to beat the market, he published Beat the Market, which explained with his usual professorial clarity exactly how to take advantage of certain demonstrable market inefficiencies that existed at the time. Of course, the publication of his book helped to eliminate those very inefficiencies.
In the case of blackjack, does eliminating the inefficiencies mean that the casinos went to the use of multiple decks?
I’m not an expert on blackjack, but it’s my understanding that the casinos not only adopted specific game-related countermeasures of this sort, but they also became more aware of “card counters” and became more effective at expelling them from the casinos.
I know that classic arbitrage opportunities are long gone. Did such sitting-duck trades, however, exist when you first started?
Even then, those sorts of true arbitrage opportunities were few and far between. Every once in a while, we were able to engage in a small set of transactions in closely related instruments that, taken together, locked in a risk-free or nearly risk-free profit. Occasionally, we’d even find it possible to execute each component of a given arbitrage trade with a different department of the same major financial institution—something that would have been impossible if the institution had been using technology to effectively manage all of its positions on an integrated firmwide basis. But those sorts of opportunities were very rare even in those days, and now you basically don’t see them at all.
Have the tremendous advances in computer technology, which greatly facilitate searching for market inefficiencies that provide a probabilistic edge, caused some previous inefficiencies to disappear and made new ones harder to find?
The game is largely over for most of the “easy” effects. Maybe someday, someone will discover a simple effect that has eluded all of us, but it’s been our experience that the most obvious and mathematically straightforward ideas you might think of have largely disappeared as potential trading opportunities. What you are left with is a number of relatively small inefficiencies that are often fairly complex and which you’re not likely to find by using a standard mathematical software package or the conventional analytical techniques you might learn in graduate school. Even if you were somehow able to find one of the remaining inefficiencies without going through an extremely expensive, long-term research effort of the sort we’ve conducted over the past eleven years, you’d probably find that one such inefficiency wouldn’t be enough to cover your transaction costs.
As a result, the current barriers to entry in this field are very high. A firm like ours that has identified a couple dozen market inefficiencies in a given set of financial instruments may be able to make money even in the presence of transaction costs. In contrast, a new entrant into the field who has identified only one or two market inefficiencies would typically have a much harder time doing so.
What gives you that edge?
It’s a subtle effect. A single inefficiency may not be sufficient to overcome transaction costs. When multiple inefficiencies happen to coincide, however, they may provide an opportunity to trade with a statistically expected profit that exceeds the associated transaction costs. Other things being equal, the more inefficiencies you can identify, the more trading opportunities you’re likely to have.
* * *
How could the use of multiple strategies, none of which independently yields a profit, be profitable? As a simple illustration, imagine that there are two strategies, each of which has an expected gain of $100 and a transaction cost of $110. Neither of these strategies could be applied profitably on its own. Further assume that the subset of trades in which both strategies provide signals in the same direction has an average profit of $180 and the same $110 transaction cost. Trading the subset could be highly profitable, even though each individual strategy is ineffective by itself. Of course, for Shaw’s company, which trades scores of strategies in many related markets, the effect of strategy interdependencies is tremendously more complex.
* * *
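The arithmetic in the boxed example above can be sketched in a few lines of code. The dollar figures are the hypothetical ones from the illustration, not actual figures from Shaw’s firm.

```python
# Hypothetical numbers from the illustration above: two strategies each
# expect $100 per trade against a $110 transaction cost, so neither is
# profitable alone. When both signals agree, the expected gain is $180
# against the same $110 cost.

TRANSACTION_COST = 110

strategy_a_expected = 100   # expected gain, strategy A alone
strategy_b_expected = 100   # expected gain, strategy B alone
combined_expected = 180     # expected gain when both signals coincide

def net_edge(expected_gain, cost=TRANSACTION_COST):
    """Expected profit per trade after transaction costs."""
    return expected_gain - cost

print(net_edge(strategy_a_expected))  # -10: unprofitable alone
print(net_edge(strategy_b_expected))  # -10: unprofitable alone
print(net_edge(combined_expected))    # 70: profitable when signals agree
```

The same logic scales up: each additional inefficiency enlarges the set of trades where overlapping signals push the expected gain past the fixed cost hurdle.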
As the field matures, you need to be aware of more and more inefficiencies to identify trades, and it becomes increasingly harder for new entrants. When we started trading eleven years ago, you could have identified one or two inefficiencies and still beat transaction costs. That meant you could do a limited amount of research and begin trading profitably, which gave you a way to fund future research. Nowadays, things are a lot tougher. If we hadn’t gotten started when we did, I think it would have been prohibitively expensive for us to get where we are today.
Do you use only price data in your model, or do you also employ fundamental data?
It’s definitely not just price data. We look at balance sheets, income statements, volume information, and almost any other sort of data we can get our hands on in digital form. I can’t say much about the sorts of variables we find most useful in practice, but I can say that we use an extraordinary amount of data, and spend a lot of money not just acquiring it but also putting it into a form in which it’s useful to us.
Would it be fair to summarize the philosophy of your firm as follows? Markets can be predicted only to a very limited extent, and any single strategy cannot provide an attractive return-to-risk ratio. If you combine enough strategies, however, you can create a trading model that has a meaningful edge.
That’s a really good description. The one thing that I would add is that we try to hedge as many systematic risk factors as possible.
I assume you mean that you balance all long positions with correlated short positions, thereby removing directional moves in the market as a risk factor.
Hedging against overall market moves within the various markets we trade is one important element of our approach to risk management, but there are also a number of other risk factors with respect to which we try to control our exposure whenever we’re not specifically betting on them. For example, if you invest in IBM, you’re placing an implicit bet not only on the direction of the stock market as a whole and on the performance of the computer industry relative to the overall stock market, but also on a number of other risk factors.
Such as?
Examples would include the overall level of activity within the economy, any unhedged exchange rate exposure attributable to IBM’s export activities, the net effective interest rate exposure associated with the firm’s assets, liabilities, and commercial activities, and a number of other mathematically derived risk factors that would be more difficult to describe in intuitively meaningful terms. Although it’s neither possible nor cost-effective to hedge all forms of risk, we try to minimize our net exposure to those sources of risk that we aren’t able to predict while maintaining our exposure to those variables for which we do have some predictive ability, at least on a statistical basis.
Some of the strategies you were using in your early years are now completely obsolete. Could you talk about one of these, just to illustrate the type of market inefficiency that, at least at one time, offered a trading opportunity?
In general, I try not to say much about historical inefficiencies that have disappeared from the markets, since even that type of information could help competitors decide how to more effectively allocate scarce research resources, allowing them a “free ride” on our own negative findings, which would give them an unfair competitive advantage. One example I can give you, though, is undervalued options [options trading at prices below the levels implied by theoretical models]. Nowadays, if you find an option that appears to be mispriced, there is usually a reason. Years ago, that wasn’t necessarily the case.
When you find an apparent anomaly or pattern in the historical data, how do you know it represents something real as opposed to a chance occurrence?
The more variables you have, the greater the number of statistical artifacts that you’re likely to find, and the more difficult it will generally be to tell whether a pattern you uncover actually has any predictive value. We take great care to avoid the methodological pitfalls associated with “overfitting the data.”
Although we use a number of different mathematical techniques to establish the robustness and predictive value of our strategies, one of our most powerful tools is the straightforward application of the scientific method. Rather than blindly searching through the data for patterns—an approach whose methodological dangers are widely appreciated within, for example, the natural science and medical research communities—we typically start by formulating a hypothesis based on some sort of structural theory or qualitative understanding of the market, and then test that hypothesis to see whether it is supported by the data.
Unfortunately, the most common outcome is that the actual data fail to provide evidence that would allow us to reject the “null hypothesis” of market efficiency. Every once in a while, though, we do find a new market anomaly that passes all our tests, and which we wind up incorporating in an actual trading strategy.
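The danger of blindly searching data for patterns, mentioned above, is the multiple-comparisons problem: test enough candidate “signals” against pure noise and some will look predictive by chance. This small simulation (with made-up parameters) demonstrates the effect.

```python
# Demonstration of the multiple-comparisons pitfall behind "overfitting
# the data": search 1,000 random signals against pure-noise returns and
# the best in-sample fit looks impressive, even though no signal has
# any real predictive value.
import numpy as np

rng = np.random.default_rng(42)

returns = rng.normal(0.0, 0.01, 500)  # pure noise: no real pattern exists
n_signals = 1000

best_corr = 0.0
for _ in range(n_signals):
    signal = rng.normal(0.0, 1.0, 500)  # a random candidate "strategy"
    corr = abs(np.corrcoef(signal, returns)[0, 1])
    best_corr = max(best_corr, corr)

# For independent noise, |correlation| is typically around 1/sqrt(500),
# roughly 0.045. The best of 1,000 tries lands well above that by chance.
print(best_corr)
```

This is why hypothesis-driven testing, as described above, matters: a pattern found by exhaustive search must clear a far higher statistical bar than one predicted in advance by a structural theory.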
I heard that your firm ran into major problems last year [1998], but when I look at your performance numbers, I see that your worst equity decline ever was only 11 percent—and even that loss was recovered in only a few months. I don’t understand how there could have been much of a problem. What happened?
The performance results you’re referring to are for our equity and equity-linked trading strategies, which have formed the core of our proprietary trading activities since our start over eleven years ago. For a few years, though, we also traded a fixed income strategy. That strategy was qualitatively different from the equity-related strategies we’d historically employed and exposed us to fundamentally different sorts of risks. Although we initially made a lot of money on our fixed income trading, we experienced significant losses during the global liquidity crisis in late 1998, as was the case for most fixed income arbitrage traders during that period. While our losses were much smaller, in both percentage and absolute dollar terms, than those suffered by, for example, Long Term Capital Management, they were significant enough that we’re no longer engaged in this sort of trading at all.