Currency Wars: The Making of the Next Global Crisis


by James Rickards


  The Romer-Bernstein plan almost certainly saved some jobs in the unionized government sector. However, few had argued that the stimulus would produce no jobs, merely that the hidden costs were too high. The combination of deficit spending, monetary ease and bank bailouts had boosted the economy in the short run. The problem was that the recovery was artificial and not self-sustaining, because it had been induced by government spending and easy money rather than by private sector consumption and investment. This led to a political backlash against further deficit spending and quantitative easing.

  The increased debt from the failed Keynesian stimulus became a cause célèbre in the currency wars. These wars were primarily about devaluing a country’s currency, which is a form of default. A country defaults to its foreign creditors when devaluation suddenly makes their claims worth less. A country defaults to its own people through inflation and higher prices for imported goods. With debt in the hands of foreign investors reaching unprecedented levels, the international impact of devaluation was that much greater, so the currency wars would be fought that much harder.

  Because debt and deficits are now so large, the United States has run out of dry powder. If the United States were struck by another financial crisis or a natural disaster of the magnitude of Hurricane Katrina or greater, its ability to resort to deficit spending would be impaired. If the United States were confronted with a major war in the Middle East or East Asia, it would not have the financial wherewithal to support a war effort as it had done in World War II. Vulnerability to foreign creditors is now complete. In the face of any one of these crises—financial, natural or military—the United States would be forced to resort to emergency measures, as had FDR in 1933 and Nixon in 1971. Bank closings, gold seizures, import tariffs and capital controls would be on the table. America’s infatuation with the Keynesian illusion has now resulted in U.S. power being an illusion. America can only hope that nothing bad happens. Yet given the course of events in the world, that seems a slim reed on which to lean.

  Financial Economics

  At about the same time that Paul Samuelson and others were developing their Keynesian theories, another group of economists was developing a theory of capital markets. From the faculties of Yale, MIT and the University of Chicago came a torrent of carefully reasoned academic papers by future Nobel Prize winners such as Harry Markowitz, Merton Miller, William Sharpe and James Tobin. Their papers, published in the 1950s and 1960s, argued that investors cannot beat the market on a consistent basis and that a diversified portfolio that broadly tracks the market will produce the best results over time. A decade later, a younger generation of academics, including Myron Scholes, Robert C. Merton (son of famed sociologist Robert K. Merton) and Fischer Black, came forward with new theories on the pricing of options, opening the door to the explosive growth of financial futures and other derivatives contracts in the decades since. The work of these and other scholars, accumulated over fifty years and continuing today, constitutes the branch of economic science known as financial economics.

  University biologists working with infectious viruses have airtight facilities to ensure that the objects of their study do not escape from the laboratory and damage the population at large. Unfortunately, no such safeguards are imposed on economics departments. For every brilliant insight there are some dangerous misconceptions that have infected the world’s financial bloodstream and caused incalculable harm. None of these ideas has done more harm than the twin toxins of financial economics known as “efficient markets” and the “normal distribution of risk.”

  The idea behind the efficient market is that investors are solely interested in maximizing their wealth and will respond in a rational manner to price signals and new information. The theory assumes that when material new information arrives it is factored into prices immediately, so that prices move smoothly from one level to another based on the news. Since the markets efficiently price in all of this new information immediately, no investor can beat the market except by pure luck, because any information that an investor might want to use to make an investment decision is already reflected in the market price. Since the next piece of new information is unknowable in advance, future price movements are unpredictable and random.

  The idea of normally distributed risk is that since future price movements are random, the severity and frequency of price swings will also be random, like a coin toss or roll of the dice. Mild events happen frequently and extreme events happen infrequently. When the frequent mild events and infrequent severe events are put on a graph, it takes the shape of the famous bell curve. The large majority of outcomes are bunched in the area of low severity, with far fewer events in the high severity region. Because the curve tails off steeply, highly extreme events are so rare as to be almost impossible.

  In Figure 1 below, the height of the curve shows how often events happen and the width of the curve shows how severe they are, either positive or negative. The area centered on 0 traces those mild events that happen frequently. Consider the area of the curve beyond −3 and +3. This area represents events of much greater severity, events like stock market crashes or the bursting of housing bubbles. Yet, according to this bell curve, they almost never happen. This is shown by the fact that the curve practically touches the horizontal baseline, which signifies things that never happen at all.

  FIGURE 1: A bell curve showing a normal distribution of risk
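  To make the “almost never” region of the bell curve concrete, the tail probability can be computed directly. The following Python sketch (an illustration, not from the text) uses only the standard library to calculate how much probability a normal distribution assigns to moves beyond three standard deviations:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Probability of an event more extreme than +/-3 standard deviations
# under a normal distribution -- the "almost never" region of the bell curve.
tail = 2.0 * (1.0 - normal_cdf(3.0))
print(f"P(|move| > 3 sigma) = {tail:.4%}")  # roughly 0.27%, about 1 trading day in 370
```

  Under the bell curve, a three-sigma move should occur on roughly one trading day in 370; actual market crashes have arrived far more often than that schedule allows, which is the heart of the critique that follows.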

  The problem with the Nobel Prize–winning theories based on the bell curve is that empirical evidence shows they do not correspond to the real world of markets and human behavior. Based on an enormous body of statistical and social science research, it is clear that markets are not efficient, that price movements are not random and risk is not normally distributed.

  The academic counterattack on these tenets of financial economics has come from two directions. From the fields of psychology, sociology and biology came a flood of studies showing that investors are irrational after all, at least from the perspective of wealth maximization. From iconoclastic mathematical genius Benoît Mandelbrot came insights that showed future prices are not independent of the past—that the market had a kind of “memory” that could cause it to react or overreact in disruptive ways, giving rise to alternating periods of boom and bust.

  Daniel Kahneman and his colleague Amos Tversky demonstrated in a series of simple but brilliantly constructed experiments that individuals were full of irrational biases. The subjects of their experiments were more concerned about avoiding a loss than achieving a gain, even though an economist would say the two outcomes had exactly the same value. This trait, called loss aversion, helps to explain why investors will dump stocks in a panic but be slow to reenter the market once it turns around.

  When economists began searching capital markets data for the kinds of irrationality that Kahneman and Tversky had demonstrated, they had no trouble finding it. Among the anomalies discovered was the finding that trends, once set in motion, were more likely to continue than to reverse—the basis of “momentum” investing. It also appeared that small-cap stocks outperformed large-cap stocks. Others identified the so-called January effect, which showed that stocks performed better in January than in other months. None of these findings is consistent with efficient markets or random price movements.

  The debate between the efficient markets theorists and the social scientists would be just another arcane academic struggle but for one critical fact. The theory of efficient markets and its corollaries of random price movements and a bell curve distribution of risk had escaped from the lab and infected the entire trading apparatus of Wall Street and the modern banking system. The application of these flawed theories to actual capital markets activity contributed to the 1987 stock market crash, the 1998 implosion of Long-Term Capital Management and the greatest catastrophe of all—the Panic of 2008. One contagious virus that spread the financial economics disease was known as value at risk, or VaR.

  Value at risk is the method Wall Street used to manage risk in the decade leading up to the Panic of 2008 and it is still in widespread use today. It is a way to measure risk in an overall portfolio—certain risky positions are offset against other positions to reduce risk, and VaR claims to measure that offset. For example, a long position in ten-year Treasury notes might be offset by a short position in five-year Treasury notes so that the net risk, according to VaR, is much less than either of the separate risks of the notes. There is no limit to the number of complicated offsetting baskets that can be constructed. The mathematics quickly become daunting, because clear relationships such as longs and shorts in the same bond give way to the multiple relationships of many items in the hedging basket.

  Value at risk is the mathematical culmination of fifty years of financial economics. Importantly, it assumes that future relationships between prices will resemble the past. VaR assumes that price fluctuations are random and that risk is embedded in net positions—long minus short—instead of gross positions. VaR carries the intellectual baggage of efficient markets and normal distributions into the world of risk management.
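  The netting of long and short positions described above can be sketched in a few lines. What follows is a minimal, hypothetical example of one common form of VaR (the parametric, or variance-covariance, method); the position sizes, volatilities and correlation are invented for illustration, not market data:

```python
import math

# Hypothetical two-position portfolio: long $100m ten-year notes,
# short $60m five-year notes. Volatilities and the correlation are
# assumed numbers chosen for illustration only.
positions = [100.0, -60.0]   # $ millions, long minus short
vols = [0.008, 0.005]        # assumed daily price volatilities
corr = 0.9                   # assumed correlation between the two notes

# Dollar volatility of each position, then the portfolio variance,
# where the high correlation lets the short offset the long.
dollar_vols = [p * v for p, v in zip(positions, vols)]
variance = (dollar_vols[0] ** 2 + dollar_vols[1] ** 2
            + 2.0 * corr * dollar_vols[0] * dollar_vols[1])
portfolio_sigma = math.sqrt(variance)

# 99% one-day VaR under the normality assumption (z = 2.33)
var_99 = 2.33 * portfolio_sigma
print(f"99% one-day VaR: ${var_99:.2f}m")
```

  The net VaR comes out far smaller than the sum of the two standalone risks, which is exactly the offset the text describes. The fragility is also visible: the result depends entirely on the assumed correlation holding, and in a panic, correlations that held in the past can break down overnight.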

  The role of VaR in causing the Panic of 2008 is immense but has never been thoroughly explored. The Financial Crisis Inquiry Commission barely considered trading risk models. The highly conflicted and fraudulent roles of mortgage brokers, investment bankers and ratings agencies have been extensively examined. Yet the role of VaR has remained hidden. In many ways, VaR was the invisible thread that ran through all the excesses that led to the collapse. What was it that allowed the banks, ratings agencies and investors to assume that their positions were safe? What was it that gave the Federal Reserve and the SEC comfort that the banks and brokers had adequate capital? Why did bank risk managers continually assure their CEOs and boards of directors that everything was under control? The answers revolve around value at risk and its related models. The VaR models gave the all clear to higher leverage and massive off–balance sheet exposures.

  Since the regulators did not know as much about VaR as the banks, they were in no position to question the risk assessments. Regulators allowed the banks to self-regulate when it came to risk and leverage. It was as if the U.S. Nuclear Regulatory Commission allowed the builders of nuclear power plants to set their own safety specifications with no independent review.

  Many scholars and practitioners had been aware of the flaws and limitations in VaR. The truth is that the flaws were well known and widely discussed for over a decade both in academia and on Wall Street. The banks continued to use VaR not because it worked but because it permitted a pretense of safety that allowed them to use excessive leverage and make larger profits while being backstopped by the taxpayers when things went wrong. Using VaR to manage risk is like driving a car at a hundred miles per hour while the speedometer has been rigged to stay at fifty miles per hour. Regulators in the backseat of the car glance at the speedometer and see 50, then go back to sleep. Meanwhile the car careens wildly, like something from a scene in Mad Max.

  The destructive legacy of financial economics, with its false assumptions about randomness, efficiency and normal risk distributions, is hard to quantify, but $60 trillion in destroyed wealth in the months following the Panic of 2008 is a good estimate. Derivatives contracts did not shift risk to strong hands; instead derivatives concentrated risk in the hands of those too big to fail. VaR did not measure risk; it buried it behind a wall of equations that intimidated regulators who should have known better. Human nature and all its quirks were studiously ignored by the banks and regulators. When the financial economy was wrecked and its ability to aid commerce was well and truly destroyed, the growth engine went into low gear and has remained there ever since.

  Washington and Wall Street—the Twin Towers of Deception

  By the start of the new currency war in 2010, central banking was based not on principles of sound money but on the ability of central bankers to use communication to mislead citizens about their true intentions. Monetarism was based on unstable relationships between velocity and money that made it ineffective as a policy tool. Keynesianism was applied recklessly based on a mythical multiplier that was presumed to create income but actually destroyed it. Financial economics was a skyscraper erected on the quicksand of efficient markets and normal risk distributions that bore no relation to real behavior in capital markets. The entire system of fiscal policy, monetary policy, banking and risk management was intellectually corrupt and dishonest, and the flaws persist to this day.

  Recently new and better economic paradigms have emerged. However, Washington and Wall Street both have a vested interest in the flawed models from the past. For Washington, Keynesianism is an excuse to expand spending and monetarism is an excuse to concentrate power at the Fed. For Wall Street, the theories of financial economics provide cover for high leverage and deceptive sales practices for off–balance sheet derivatives. On Wall Street, profits come first and good science second. If some theory, however flawed or out of date, can be trotted out with the right academic pedigree to provide a rationale for risk taking, then that is fine. If politicians and regulators are even further behind the learning curve than Wall Street, then that is fine too. As long as the profits continue on Wall Street, the hard questions will not be asked, let alone answered.

  CHAPTER 10

  Currencies, Capital and Complexity

  “The difficulty lies, not in the new ideas, but in escaping from the old ones.”

  John Maynard Keynes, 1935

  Despite the theoretical and real-world shortcomings of both the Keynesian multiplier and the monetarist quantity approach to money, these are still the dominant paradigms used in public policy when economic growth falters. One need look only at the Obama stimulus and the Bernanke quantitative easing programs to see the hands of John Maynard Keynes and Milton Friedman hard at work. This persistence of the old school is also one driver of the new currency war, because of the expansion of public debt. This debt can be repaid only with help from inflation and devaluation. When growth falters, taking growth from other countries through currency devaluation is irresistible. Far better solutions are needed.

  Fortunately, economic science has not stood entirely still. A new paradigm has emerged in the past twenty years from several schools of thought, including behavioral economics and complexity theory, among others. This new thinking comes with a healthy dose of humility—practitioners in many cases acknowledge the limitations of what is possible with the tools at hand. The new schools avoid the triumphalism of Keynes’s claim to a “general theory” and Friedman’s dictum that inflation is “always and everywhere” monetary.

  The most promising new school is complexity theory. Despite the name, complexity theory rests on straightforward foundations. The first is that complex systems are not designed from the top down. Complex systems design themselves through evolution or the interaction of myriad autonomous parts. The second principle is that complex systems have emergent properties, which is a technical way of saying the whole is greater than the sum of its parts—the entire system will behave in ways that cannot be inferred from looking at the pieces. The third principle is that complex systems run on exponentially greater amounts of energy. This energy can take many forms, but the point is that when you increase the system scale by a factor of ten, you increase the energy requirements by a factor of a thousand, and so on. The fourth principle is that complex systems are prone to catastrophic collapse. The third and fourth principles are related. When the system reaches a certain scale, the energy inputs dry up because the exponential relationship between scale and inputs exhausts the available resources. In a nutshell, complex systems arise spontaneously, behave unpredictably, exhaust resources and collapse catastrophically. When you apply this paradigm to finance, you begin to see where the currency wars are headed.
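  The scale-to-energy relationship stated above—a tenfold increase in scale requiring a thousandfold increase in energy—amounts to energy growing as roughly the cube of scale. This toy Python sketch (an illustration of the stated relationship, not a claim about any particular system) makes the arithmetic visible:

```python
# Illustrative sketch of the scale/energy relationship described above:
# a tenfold increase in system scale raises energy requirements a
# thousandfold, i.e. energy grows as roughly the cube of scale.
def energy_required(scale: float) -> float:
    return scale ** 3

for scale in [1, 10, 100]:
    print(f"scale {scale:>3} -> energy {energy_required(scale):>9,.0f}")
```

  The lesson for finance is the fourth principle in action: as a financial system doubles and redoubles in scale, the resources needed to sustain it grow far faster than the system itself, until the inputs can no longer keep up.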

  Complexity theory has a strong empirical foundation and has had wide application in a variety of natural and man-made settings, including climate, seismology and the Internet. Significant progress has been made in applying complexity to capital and currency markets. However, a considerable challenge arises when one considers the interaction of human behavior and market dynamics. The complexity of human nature sits like a turbocharger on top of the complexity of markets. Human nature, markets and civilization more broadly are all complex systems nested inside one another like so many Russian matryoshka dolls. An introduction to behavioral economics will provide a bridge to a broader consideration of complexity theory and how underlying dynamics may determine the fate of the dollar and the endgame in the currency wars.

  Behavioral Economics and Complexity

  Contemporary behavioral economics has its roots in mid-twentieth-century social science. Pioneering sociologists such as Stanley Milgram and Robert K. Merton conducted wide-ranging experiments and analyzed data to develop new insights into human behavior.

  Robert K. Merton’s most famous contribution was the formalization of the idea of the self-fulfilling prophecy. The idea is that a statement given as true, even if initially false, can become true if the statement itself changes behavior in such a way as to validate the false premise. Intriguingly, to make his point Merton used the example of a run on the bank in the days before deposit insurance. A bank can begin the day on a sound basis with ample capital. A rumor that the bank is unsound, although false, can start a stampede of depositors trying to get their money out all at once. Even the best banks do not maintain 100 percent cash on hand, so a true bank run can force the bank to close its doors in the face of depositor demands. The bank fails by the end of the day, thus validating the rumor even though the rumor started out as false. The interaction of the rumor, the resulting behavior and the ultimate bank failure is an illustration of a positive feedback loop between information and behavior.
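  The feedback loop in Merton’s bank-run example can be captured in a toy simulation. Everything here—the number of depositors, the cash ratio, the strength of the rumor—is a hypothetical parameter chosen to illustrate the dynamic, not to model any real bank:

```python
import random

def bank_run(depositors=1000, cash_ratio=0.10, rumor_strength=0.05,
             rounds=10, seed=1):
    """Toy model of Merton's self-fulfilling prophecy: a false rumor
    triggers withdrawals, which deplete cash, which fuels more panic."""
    rng = random.Random(seed)
    cash = depositors * cash_ratio   # bank holds only fractional cash on hand
    remaining = depositors
    panic = rumor_strength           # initial belief that the rumor is true
    for day in range(rounds):
        # Each remaining depositor independently decides whether to withdraw.
        withdrawing = sum(1 for _ in range(remaining) if rng.random() < panic)
        if withdrawing > cash:
            return day               # bank fails: the false rumor is validated
        cash -= withdrawing
        remaining -= withdrawing
        panic = min(1.0, panic * 2)  # visible withdrawals feed the panic
    return None                      # bank survives the rumor

print(bank_run())
```

  The bank begins the day solvent, yet under these assumptions the doubling of panic after each visible wave of withdrawals is enough to exhaust the fractional cash reserve within a few rounds—the positive feedback loop between information and behavior that Merton described.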

 
