Aftermath


by James Rickards


  ETFs and ETNs are not the most exotic creatures in the passive-investing menagerie. Another more vicious creature goes by the name risk parity. A risk parity strategy is an asset allocation plan that aims to maximize returns for a given level of volatility. These asset allocations are supposedly an improvement on simpler asset allocation plans such as 60/40, a mixture of 60 percent equities and 40 percent bonds. In the case of individual 401(k) plans, the traditional 60/40 allocation was typically adjusted to increase bond exposure and reduce equity exposure as an individual grew older, in order to account for risk aversion; bonds have lower volatility and older investors have less time to recoup losses, so reducing portfolio risk made sense.

  With risk parity, portfolio composition is optimized by risk-adjusted returns rather than fixed percentages of stocks and bonds. Risk parity strategies allocate investor funds among stocks, bonds, commodities, and other asset classes based on the risk weight in each asset class, rather than dollar weightings such as 60/40. Since stocks are far riskier than bonds, a risk-parity portfolio might begin with a smaller allocation to stocks than a traditional portfolio. In a simple case, the dollar weighting of stocks in a risk-parity portfolio might be 33 percent rather than 60 percent, because 33 percent in stocks represents close to 60 percent of the total risk in the portfolio.
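  The arithmetic behind that 33 percent figure can be sketched in a few lines. The volatility numbers below (18 percent annualized for stocks, 6 percent for bonds) are illustrative assumptions, not figures from the book, and correlation is ignored to keep the risk contributions transparent:

```python
# Sketch of the book's illustration: a 33/67 stock/bond dollar allocation
# can carry roughly 60% of its risk in stocks. Volatilities are assumed.
stock_vol, bond_vol = 0.18, 0.06   # assumed annualized volatilities
w_stocks, w_bonds = 0.33, 0.67     # dollar weights

risk_stocks = w_stocks * stock_vol  # stocks' contribution to portfolio risk
risk_bonds = w_bonds * bond_vol     # bonds' contribution
share = risk_stocks / (risk_stocks + risk_bonds)

print(f"Stocks: {w_stocks:.0%} of dollars, {share:.0%} of risk")
```

Under these assumed numbers, a one-third dollar weighting in stocks accounts for roughly 60 percent of total portfolio risk, which is the intuition the strategy trades on.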

  When the risk parity strategy was introduced by hedge fund giant Bridgewater Associates in 1996, it performed exceptionally well. In the panic of 2008, risk parity strategies outperformed others because their proportionally smaller stock allocations produced smaller losses; 2008 losses were heavily concentrated in stocks.

  Still, there are fatal flaws in risk parity that have now come to the fore. The first flaw is that metrics used to optimize allocations in a risk parity portfolio implicitly rely on the efficient market hypothesis, which as we have seen is junk science. Portfolios that are based on EMH perform as expected most of the time because the overlap in the degree distribution of risk between the bell curve (used in EMH) and the power curve (used in complex dynamic systems) is large. This fools the observer into believing EMH is a good representation of reality. When events occur outside the bell curve distribution (but consistent with a power curve distribution), the results are catastrophic to portfolios modeled on EMH, including risk parity.
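  The overlap, and the divergence in the tails, can be illustrated with the bell curve's own arithmetic. Assuming a hypothetical 1 percent daily volatility, a normal model prices ordinary down days plausibly while assigning an effectively zero probability to a 1987-sized move:

```python
# How a bell-curve model treats small vs. extreme moves. The 1% daily
# volatility is an assumed illustrative figure, not a market estimate.
from statistics import NormalDist

daily_vol = 0.01                     # assumed 1% daily standard deviation
for move in (0.02, 0.05, 0.226):     # 2%, 5%, and a 1987-sized 22.6% drop
    sigmas = move / daily_vol
    p = NormalDist().cdf(-sigmas)    # normal probability of a drop that large or worse
    print(f"{move:.1%} drop = {sigmas:.1f} sigma, normal probability {p:.3g}")
```

The model looks reasonable for the small moves that dominate everyday experience, which is exactly how it fools the observer; the 22.6 percent drop that actually occurred in 1987 rounds to a probability of zero.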

  The other flaw is the positive feedback loop between volatility and asset allocation. If equities enter a period of low volatility, as they did in 2017, computers and robo-advisers detect that equities are less risky and therefore deserve a larger dollar allocation under risk parity. The larger dollar allocation results in more buying, higher equity prices, and lower observed volatility, as equity prices seem never to fall. This feedback loop among lower volatility, higher equity allocations, and higher equity prices has been amplified by a simple buy-the-dips strategy, in which every small equity drawdown is immediately met with a surge of buying, on the theory that low volatility and ever higher equity prices are a semipermanent state space supported by central bank interventions.
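  The feedback loop can be caricatured in a few lines of code. Every parameter below (the volatility target, the price impact of buying, the rate at which a calm tape depresses measured volatility) is an assumed, stylized number rather than a calibrated market model; the point is only the direction of the spiral:

```python
# Stylized volatility/allocation feedback loop: lower measured volatility
# raises the equity allocation, buying lifts prices, and the calm tape
# lowers measured volatility further. All dynamics are assumptions.
vol = 0.15         # assumed starting measured volatility
target_vol = 0.10  # assumed portfolio volatility target
price = 100.0
allocation = 0.0

for step in range(5):
    allocation = min(1.0, target_vol / vol)  # vol-targeting allocation rule
    price *= 1 + 0.02 * allocation           # assumed price impact of the extra buying
    vol *= 0.9                               # calm markets depress measured vol (assumed)
    print(f"step {step}: allocation={allocation:.0%} price={price:.1f} vol={vol:.3f}")
```

Within a handful of steps the rule is fully invested at maximum allocation, precisely when measured volatility, and therefore the model's estimate of risk, is at its lowest.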

  Risk parity strategies also rely on a presumed inverse correlation between movements in equity and bond prices. In a simple model, a slowing economy produces lower equity prices and lower interest rates, which means higher bond prices. The gains on bonds offset the losses on stocks and help reduce the overall volatility of the portfolio. This inverse relationship contributed to the success of risk parity strategies in 2008, as plunging stock prices also meant plunging interest rates and huge gains in bonds.
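  The diversification benefit of that inverse correlation follows from the standard two-asset volatility formula. The weights, volatilities, and correlation values below are assumptions chosen for illustration:

```python
# Portfolio volatility for a 33/67 stock/bond mix under negative vs.
# positive stock/bond correlation. All inputs are assumed.
import math

w_s, w_b = 0.33, 0.67      # dollar weights: stocks, bonds
vol_s, vol_b = 0.18, 0.06  # assumed annualized volatilities

def port_vol(rho):
    """Two-asset portfolio volatility for stock/bond correlation rho."""
    var = (w_s * vol_s) ** 2 + (w_b * vol_b) ** 2 \
        + 2 * rho * (w_s * vol_s) * (w_b * vol_b)
    return math.sqrt(var)

print(f"rho = -0.3: portfolio vol {port_vol(-0.3):.1%}")  # inverse correlation damps risk
print(f"rho = +0.5: portfolio vol {port_vol(+0.5):.1%}")  # positive correlation regime
```

When the correlation flips positive, as it did in February 2018, the same portfolio is substantially riskier even though nothing about its holdings has changed.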

  Yet from February 2–8, 2018, markets were shocked when stocks and bonds became highly positively correlated, with stocks falling 11 percent in one week and bonds falling in tandem on fears of inflation and higher interest rates. With higher U.S. deficits producing debt death-spiral fears and higher rates, this correlation between lower stock prices and lower bond prices persists, along with higher overall volatility. The falsity of the assumptions behind risk parity is being exposed. Now there is nowhere for investors to hide.

  Another vogue in this passive-investing procession is smart beta. This strategy involves indexing, yet the index is rule based rather than an off-the-shelf index such as the S&P 500. Wall Street loves smart beta because the strategy is so ill-defined that almost any confection loosely linked to a market-factor rule passes muster. Smart beta indices can be based on book values, cash flows, or more exotic factors, including demographic trends and natural resources. This gives banks and brokers unlimited leeway to concoct new products.

  Smart beta is similar to risk parity in the sense that rules used to construct an index pay more attention to volatility than traditional factors such as market capitalization. This means smart beta suffers from the same flaws as risk parity, specifically a fatal fixed point attractor or map sink in which higher allocations produce lower volatility, which produces higher allocations, and still lower volatility, and so on until the system can no longer evolve and nears collapse.

  The mother of all passive strategies is VaR, or value at risk. VaR began as a risk-management methodology developed at JPMorgan in 1989 as a proprietary tool; the business was later spun off as a separate company called RiskMetrics. The VaR tool was made freely available to the marketplace and was widely adopted.

  In its simplest form (there are sophisticated variations), VaR looks at the historic time series of prices of each security or position in a portfolio. It then computes the covariance of the components, or the extent to which two positions tend to move together, move in opposite directions, or exhibit no correlation at all. This identifies positions in a portfolio that might constitute natural hedges, producing lower risk than either position taken in isolation. Finally, the aggregate portfolio risk based on historic prices and covariance is expressed in standard deviations, which translate into the likelihood that certain extreme loss events might occur. This entire process is usually summed up in expressions such as, “Our $1 billion portfolio has a less than 1% chance of losing more than $100 million in any three-month period.”
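  A back-of-the-envelope version of that summary statement can be computed with the parametric (variance-covariance) method just described. All inputs below are assumed for illustration; 2.33 is the one-tailed 99 percent point of the normal distribution on which the whole construction rests:

```python
# Parametric VaR for a hypothetical $1 billion two-asset portfolio.
# Weights, volatilities, and correlation are assumed for illustration.
import math

portfolio = 1_000_000_000   # $1 billion portfolio
w = [0.6, 0.4]              # assumed weights: stocks, bonds
vols = [0.09, 0.03]         # assumed quarterly volatilities
rho = -0.2                  # assumed stock/bond correlation

# Portfolio variance from the covariance of the two positions.
var = (w[0] * vols[0]) ** 2 + (w[1] * vols[1]) ** 2 \
    + 2 * rho * (w[0] * vols[0]) * (w[1] * vols[1])
sigma = math.sqrt(var)

var_99 = 2.33 * sigma * portfolio   # 99% quarterly value at risk
print(f"99% quarterly VaR: ${var_99:,.0f}")
```

With these assumed inputs the model reports a 99 percent quarterly VaR on the order of $120 million, the kind of number behind statements like the one quoted above; the precision is entirely borrowed from the bell-curve assumption.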

  Like the other passive strategies, VaR is haunted by the specter of scientism. The historical time series of prices used to compute risk are typically too short. Quants use 20-, 30-, or 50-year time series when they should look at 100- or 500-year series (using proxy prices as needed) to get a better grasp of the possible. Even if a given time series were adequate, the assumption of a normal distribution (bell curve) of risk behind the concept of a standard deviation is contrary to empirical evidence from markets. The fact that VaR is a grossly deficient methodology is doubly disconcerting, since VaR-like methods underlie other indexing strategies such as risk parity.

  These passive-indexing strategies—ETFs, ETNs, risk parity, buy-the-dips, smart beta, VaR, and more—are made more dangerous by the rise of robo-advisers and machine trading. Since these strategies are data driven, it makes sense that computers can be used to aggregate the data on historic prices and do the processing on covariance, risk weightings, and standard deviations. Yet the machines have now gone beyond data aggregation and processing into the realms of machine learning and artificial intelligence. Machine learning involves the ability of machines to make predictions based on hidden correlations that are difficult for humans to discern, or new correlations machines discover on their own after working with training data. Artificial intelligence refers to actionable recommendations made by a machine, which can either be relayed to a human for action or executed by the machine itself.

  Machine trading, now widely embraced by wealth managers, hedge funds, and banks, removes the last shred of human intuition or reserve from the investment process. Investment managers and traders who rely on passive-indexing strategies will seek to squeeze alpha from the index by being the first to adjust their quantitative allocations based on changed risk measures or factor weights. Active managers are even more eager than others to execute ahead of the crowd, since their returns live or die based on inside information, market timing, and diminution of the market impact of their moves. Artificial intelligence pleases both camps, with speed and stealth. The portfolio is on autopilot. The fact that the plane is flying through a thick fog of false assumptions about historic prices and degree distributions seems not to perturb the passengers in the least.

  No Bid

  On Monday, October 19, 1987, I sat at my desk in a small office overlooking Greenwich Harbor in Connecticut. I was chief credit officer of one of the world’s largest dealers in U.S. government securities. That day, I witnessed the greatest crash of U.S. stocks in one trading session in history. The Dow Jones Industrial Average fell 22.6 percent, 508 points at the time, equivalent to 5,600 Dow points from today’s levels. In percentage terms, the 1987 crash was almost double the October 28, 1929, crash that is generally cited as the start of the Great Depression.

  Yet the October 1987 crash did not presage a depression; it did not even signal a recession. The long expansion that started in November 1982 continued for almost three more years, until the next recession began in July 1990. The 1987 crash told us nothing about economic fundamentals. Yet it told us volumes about the operation of complex system dynamics in capital markets. It was a warning that the new age of the flash crash was upon us. That warning was little understood at the time and has since been consistently ignored.

  There was no shortage of after-the-fact explanations for why stocks crashed. By 1987, economic growth had slowed from the torrid pace of 1983–1986. A weakening U.S. dollar threatened U.S. growth despite Treasury secretary James Baker’s efforts to halt the dollar’s slide through the Louvre Accord, signed on February 22, 1987. The Treasury bond market crashed in the spring of 1987, six months before the stock market crash. Oil prices dropped 50 percent in the first half of 1986, as OPEC discipline evaporated. The “tanker war” in the Persian Gulf, which flared after an Iraqi missile struck the U.S. frigate Stark in May 1987, had escalated in the days immediately before the stock market crash. Iranian missiles hit two U.S.-flagged oil tankers on October 15 and 16. On October 19, the day of the crash, the United States attacked Iranian oil platforms in the Gulf in retaliation for the tanker attacks.

  Economic historians can reenact Murder on the Orient Express by profiling the sundry suspects—a falling dollar, sinking bonds, collapsing oil prices, slowing growth, a hot war in the Middle East—and accusing one as the market murderer. There is no definitive answer, and in a way it doesn’t matter. The market was already primed to fall the same way a snowpack is set to become an avalanche. One catalyst, one snowflake, does as well as another. What matters is what comes after the catalyst, a chaotic cascade in desperate search of a bottom.

  Once the stock selling began in New York that Monday, an early species of passive trading called portfolio insurance was set in motion. In its fundamental form, portfolio insurance required institutional investors to sell stock index futures as stock markets declined. The short futures positions insured against declines in the stock portfolio itself. The more stocks declined, the more futures were sold to protect the portfolio, an early example of risk parity.

  This analytically arrested approach suffered from what Keynes called a fallacy of composition. What may work in a single case does not work in the aggregate; the whole is different from the sum of its parts. As portfolio managers sold stock index futures, other market participants on the Chicago futures exchanges had to buy them. This gave Chicago futures brokers and locals a long index position, which they hedged by selling stocks! The risk of a declining stock market had not been hedged at all in the aggregate; it had merely been recycled from New York to Chicago and back again. This feedback loop was amplified by the fact that Chicago stock index futures were under such pressure that they traded below their equivalent value in New York Stock Exchange stocks. This triggered arbitrage activity consisting of buying “cheap” futures and selling “expensive” stocks to capture the spread. The arbitrage selling added to the already fierce downward price pressure on stocks. The only force that stopped the bloodbath was the clang of the closing bell.
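  The recycling loop just described can be sketched as a toy simulation. The reaction coefficients below are invented for illustration; the point is only that insurer selling, dealer hedging, and index arbitrage feed one another rather than canceling out:

```python
# Stylized sketch of the 1987 portfolio-insurance feedback loop.
# All coefficients are assumed illustrative numbers, not estimates.
price = 100.0    # stock index level
futures = 100.0  # stock index futures level

for round_ in range(5):
    decline = 100.0 - price
    insurer_sales = 0.5 * decline + 1.0  # insurers sell more futures as stocks fall (assumed rule)
    futures -= 0.3 * insurer_sales       # selling pressure pushes futures below fair value
    basis = price - futures              # futures now trade at a discount to stocks
    # Dealers hedge their long futures by selling stocks; arbitrageurs buy
    # "cheap" futures and sell "expensive" stocks, adding to the pressure.
    price -= 0.3 * insurer_sales + 0.2 * basis
    print(f"round {round_}: stocks={price:.2f} futures={futures:.2f} basis={basis:.2f}")
```

Each round of “hedging” simply hands the selling pressure back to the stock market, so the decline compounds instead of being absorbed, which is the fallacy of composition in miniature.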

  The October 19, 1987, flash crash in stocks displayed the dynamics of every flash crash since and, ominously, those to come. The term “flash crash” is common parlance today, yet the phenomenon is relatively new. Of course, markets have crashed periodically for as long as markets have existed. The United States experienced notable market crashes, usually though not always accompanied by recessions or depressions, in 1825, 1837, 1873, 1893, and 1907, among others. Older examples from Europe include the bursting of the Netherlands’ Tulip Mania Bubble in 1637, France’s Mississippi Bubble in 1720, and the U.K.’s South Sea Bubble, also in 1720.

  All of these crashes followed similar and predictable patterns. A particular asset, whether land, railroads, royal favor, bitcoin, or, bizarrely, tulip bulbs, is singled out for attention. Promoters point to unique properties of the privileged asset. The media bring scrutiny to the sudden price increases. Leverage is applied to amplify gains. Suddenly small investors are sucked into the whirlwind as the prospect of money for nothing is impossible to resist. Then the price peaks, the spell is broken, reality intrudes, and a crash begins with frantic selling by those hoping to capture gains with no bidders in sight. Most investors are ruined, yet some who sold before the top walk away with fantastic gains. Depending on how much leverage was used and the health of lenders, crashes may or may not spread to the real economy and cause a downturn through contagion.

  Boom-and-bust patterns still occur today—bitcoin comes to mind—but a contemporary flash crash is different in at least one respect. A flash crash may include traditional elements like euphoria and leverage; fear and greed never go out of style. Yet these elements are not required. Flash crashes emerge literally from nowhere with no clear catalyst. The main difference between today’s flash crashes and past market crashes is automation. The ubiquity of automated trading, the similarity of algorithms across platforms, and the speed of execution mean that markets crash instantaneously, without waiting for ripples to spread from trader to trader. Computers never sleep in, take a vacation, or drop the kids at school like human traders. There is no latency.

  The infamous flash crash in 10-year Treasury note yields on October 15, 2014, is a case in point. Yields plunged beginning at 9:33 A.M. that day, and just as quickly rallied back at 9:39 A.M., covering a 37-basis-point range in twelve minutes. Volatility of greater magnitude had occurred only three times from 1998 to 2014, and in each instance it was the result of a surprise policy announcement. There was no surprise on October 15, 2014. An extensive review of that flash crash conducted by the Treasury, Fed, and other financial regulators, published on July 13, 2015, nine months after the crash, found no reason for it. In fact, there was no reason for it except that certain algorithms began buying Treasury notes, which triggered more buying by other algorithms until the price surge and yield crash spun out of control in a hypersynchronous event. Eventually computer buying triggered computer selling in the form of limit orders deep in the back of the book, at which point the buying momentum turned on a dime. Matt Levine, a writer for Bloomberg, offers a good summary:

  There is an obvious dumbness to this: The algorithms stopped their orgy of buying not because they got some new economic data, or because a new buyer spotted value and jumped into the market, but just because they saw their shadow and got spooked ….5 We—and the regulators—don’t know what set the algorithms off on their buying spree, but a reasonable guess would be that, whatever it was, it was dumb ….

  But what I like about the Treasury flash crash is just how convincingly the algorithms mimicked human folly. For six minutes one morning in October, some computers built themselves a bubble. They bid up the prices of assets for no particular reason, just because all their algorithm friends were doing it too, and what algorithm would want to sell when everyone else was buying? And then they saw some big sell orders that spooked them and made them realize that they were at the top. So for the next six minutes they busily popped their bubble, selling down to more or less where they started. They did most of the work themselves: Algorithms bought from algorithms on the way up, and sold to algorithms on the way down (emphasis added).

  This particular flash crash came and went without contagion or collateral market damage. That pattern is not preordained. A different program could have canceled sell orders from the back of the book before the buying spree reached that level, leaving prices to rise and yields to crash with no cushion until serious damage was done to dealers or banks.

  Other notable flash crashes in recent years include the U.S. stock market crash on May 6, 2010, a 9 percent plunge that wiped out over $1 trillion of wealth in just over thirty minutes. The market quickly recovered most of that loss. On January 15, 2015, the euro crashed 20 percent against the Swiss franc in thirty minutes. On June 24, 2016, the pound sterling plunged 12 percent against the U.S. dollar over the course of a few hours. The 2010 stock market collapse had no clear catalyst. The 2015 euro crash was due to the Swiss National Bank breaking a currency peg unexpectedly. The 2016 sterling crash was due to the surprise U.K. vote to leave the EU, the so-called Brexit. With or without an identifiable catalyst, the fact remains that these crashes and others were made violent in their speed and amplitude by automated trading.

  As a result of these diverse developments, markets now confront a lethal brew of passivity, product proliferation, automation, and hypersynchronous behavioral responses. This accumulation of risk factors is entirely new, and outside the experience of any trader or quant.

 
