The concept of probability is used in many other fields, including medicine – particularly surgery, where lives may be at risk. Compare this with retirement, where one’s money (and lifestyle) is at risk. Arguably, if it’s good enough for medicine, it’s good enough for retirement planning.
In his book, Against the Gods, Peter Bernstein highlights the crucial role of probability in modern-day risk management in many fields. He notes that29, ‘Without a command of probability theory and other instruments of risk management, engineers could never have designed the great bridges that span our widest rivers, homes would still be heated by fireplaces or parlor stoves, electric power utilities would not exist, polio would still be maiming children, no airplanes would fly, and space travel would be just a dream.’
He goes on to say,
‘As the years passed, mathematicians transformed probability theory from a gamblers’ toy into a powerful instrument for organizing, interpreting, and applying information. As one ingenious idea was piled on top of another, quantitative techniques of risk management emerged that have helped trigger the tempo of modern times.’
Bernstein concludes that,
‘Without numbers, there are no odds and no probabilities; without odds and probabilities, the only way to deal with risk is to appeal to the gods and the fates. Without numbers, risk is wholly a matter of gut30.’
The point here is that probability theory is a well-established and powerful way to quantify and manage risk.
It’s important to understand the difference between probability and prediction. Probability estimates the chances of an event, based on previously observed behaviour. Prediction, on the other hand, is a futile exercise.
Charlie Bilello, Director of Research at Pension Partners, LLC, sums it up rather nicely. ‘The difference between a prediction and a probability is the difference between a pundit and a professional. One makes concentrated bets on the belief that they can predict the future and the other diversifies with the understanding that they cannot.’
Estimating probability – where do you draw the line?
‘Plans are useless. Planning is indispensable.’
– President Dwight D. Eisenhower
Estimating the probability of success of a retirement plan is just an attempt to quantify risk and prepare for it. Think of the probability of success (PoS) as the probability of staying on track, and the probability of failure (PoF) as the probability of needing an adjustment.
As my friend and legendary financial planner Michael Kitces notes, using a Monte Carlo model helps planners to ‘quantify how often the future scenarios are likely to turn out to be problematic. We might run 10,000 future scenarios, find that 9,500 of them succeed and 500 of them fall short and quantify the results as a “95% probability of success” in achieving the goal. Yet rarely does anyone in “the other 5%” of scenarios actually just keep on spending under the original plan until one day he/she wakes up broke and all the checks are bouncing. Instead, at some point, an adjustment occurs to get back on track. Of course, the later the adjustment occurs, the more significant it may have to be in order to get back on track. But ultimately, most probabilities of “failure” are really just probabilities of needing to make an adjustment to get back on track31.’
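A minimal Monte Carlo sketch of the idea Kitces describes might look like this. All the parameters here (starting pot, spending, mean return, volatility, inflation) are illustrative assumptions for the sketch, not figures from any actual planning tool:

```python
import numpy as np

def probability_of_success(start=1_000_000, spend=40_000, years=30,
                           mean=0.05, vol=0.12, inflation=0.03,
                           n_scenarios=10_000, seed=42):
    """Fraction of simulated retirements in which the pot never runs dry.
    All parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_scenarios):
        pot, withdrawal = start, spend
        failed = False
        for _ in range(years):
            # one year of (normally distributed) returns, then spending
            pot = pot * (1 + rng.normal(mean, vol)) - withdrawal
            withdrawal *= 1 + inflation  # inflation-linked withdrawals
            if pot <= 0:
                failed = True
                break
        successes += not failed
    return successes / n_scenarios
```

A result of, say, 0.95 corresponds to the '95% probability of success' in Kitces's example. And, as he stresses, the other 5% are really scenarios in which an adjustment would be needed, not scenarios in which the client blindly keeps spending until broke.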
My view is that working to an overall probability of success of 80%-90% or more is reasonable in retirement income planning. This means accepting a 10%-20% chance of having to make an unplanned adjustment to a client's withdrawal plan if they experience poor returns in the early part of their retirement.
The key here is that you need to monitor sequence risk closely. Kitces sums this up rather nicely, ‘In the military context, battle plans are recognised as essential. And this is true despite the famous saying that “no battle plan ever survives contact with the enemy” because the process of engaging the plan, progressing towards the goals, and seeing what happens once the enemy is engaged, will itself change and alter what the next step should be. Notwithstanding this challenge, the military engages in planning because it’s only by trying to consider what the plan should be, and how it might be impacted by future events, that contingency plans can be created to know how to handle “unexpected” problems that arise.’
In a sense, retirement income plans should be viewed as battle plans. When plans meet the real world, the real world doesn’t yield to your plan. You must adapt whatever you’re doing to reality.
Have a contingency?
As the saying goes, everybody has a plan until they get punched in the face!
It’s crucial to consider the probability of success or failure of a withdrawal strategy, as well as how long the pot would last under severe market conditions. It’s just as important to have a contingency plan in place. For example, does the client have other assets, such as the family property, that they can rely on if the worst-case scenario happens, perhaps extreme longevity combined with a poor sequence of returns?
It’s important that each retiree is comfortable with the probability of success. They should perhaps start with a much lower withdrawal rate or adopt a safety-first approach if they can’t accept a 10%-20% chance that they’ll need to make some adjustment to their income.
Modelling retirement outcomes
It’s a challenge for financial planners and providers to illustrate the inherent risk in retirement planning to clients. A common approach is to use some form of deterministic projection: a straight-line projection with static assumptions for investment returns, inflation and longevity.
An Independent Review of Retirement Income32 commissioned by the Labour Party and published in March 2016 by the Pension Institute’s Professor David Blake and Dr Debbie Harrison has some key recommendations on the use of deterministic projections. Specifically, the report recommends that:
the use of deterministic projections of the returns on products should be banned
they should be replaced with stochastic projections that take into account important real-world issues, such as sequence-of-returns risk, inflation, and transactions costs in dynamic investment strategies
there should be a commonly agreed parameterisation for the stochastic projection model used, ie a standard model should be developed
there should be a commonly agreed set of good practice principles for modelling the outcomes from retirement income products
I don’t personally support an outright ban on straight-line projections, but I see strong reasons to rethink how we use them. I want to examine the strengths and weaknesses of the models used in financial planning, particularly in the area of retirement income.
Let’s be clear. I truly believe in the role advisers play to help clients navigate the challenges of retirement income planning. But if advisers use models that lack empirical rigour, they do themselves and their clients a great disservice. These models fail to help clients understand the inherent risks in their plan and to prepare adequately.
After all, financial planning is a bit like a battle plan. No plan survives contact with the enemy. We need to be able to model scenarios, explore hidden risks and prepare accordingly.
Straight-line cash flow model
This is the simplest and most commonly used model by financial planners. A straight-line projection assumes the key variables of any financial plan (such as investment returns, inflation and longevity) are static.
Typical assumptions are often based on historical averages:
investment returns = 5%pa
inflation = 3%pa
life expectancy = 90 years old
In straight-line projections, volatility doesn’t exist, and investment losses are rare. Some planners build investment losses into specific years (eg, a 20% loss in year two and 15% loss in year six of the plan). But the rest of the periods are based on fixed, constant return assumptions (eg 3%pa).
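The mechanics are trivial to reproduce. Here is a sketch of such a projection using the illustrative assumptions above (5%pa returns, 3%pa inflation, a plan to age 90); the starting pot, spending level and retirement age are hypothetical figures added for the example:

```python
def straight_line_projection(start=500_000, spend=25_000,
                             annual_return=0.05, inflation=0.03,
                             retire_age=65, horizon_age=90):
    """Year-by-year pot values under fixed return and inflation assumptions.
    Starting pot, spending and ages are hypothetical illustrations."""
    pot, withdrawal, path = start, spend, []
    for age in range(retire_age, horizon_age + 1):
        pot = max(pot * (1 + annual_return) - withdrawal, 0.0)
        withdrawal *= 1 + inflation  # spending rises with assumed inflation
        path.append((age, round(pot)))
    return path
```

Every run produces the same single path: no volatility, no losses, one scenario.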
There are two main problems with this. First, it bears no resemblance to reality. Investment returns have never behaved this way. And they most likely never will. By their very nature, investment returns and inflation are stochastic – which means that they are non-linear and largely unpredictable.
The second problem is that it inhibits planning by limiting the number of scenarios explored to one, or at best, a handful.
Clients end up with little or no idea about what kinds of market conditions could ruin their plans and what they might have to do to salvage the plan under those circumstances.
Milton Friedman once told us, ‘never try to walk across a river just because it has an average depth of four feet.’ Sadly, this is what we do when we use straight-line projections in retirement income planning.
The weaknesses of straight-line projections in financial planning are well recognised and documented by early planners.
In 1994, financial planner Larry Bierwith noted the weaknesses of straight-line projections in an article33 in the Journal of Financial Planning. He wrote, ‘Traditional retirement planning ledgers create various scenarios of the future for a client by projecting constant rates of return and inflation over hypothetical future years. However, this approach can create a false sense of security for the client. Investment returns and inflation are never constant over time.’
His article proposes an alternative approach: developing projections based on real historical data.
He noted that, ‘By testing various investment approaches against historical data, the client can see the effects of varying rates of return and inflation through overlapping time periods, and the inevitable ups and downs of a portfolio over a typical retirement. By understanding the range of historical results, the client is better able to make informed investment policy decisions regarding the future.’
Bierwith’s article prompted financial planner Bill Bengen to publish his first research that culminated in what is known today as the safe withdrawal rate. In his article34, Bengen noted, ‘The logical fallacy that got our hypothetical planner into trouble was assuming that average returns and average inflation rates are a sound basis for computing how much a client can safely withdraw from a retirement fund over a long time.’
The late financial planning legend Lynn Hopewell succinctly highlights why straight-line cash flow projections are grossly unfit for purpose when making financial decisions for unknown and unpredictable future events.
In an article35 in the October 1997 edition of the Journal of Financial Planning he notes, ‘In spite of the increasing improvement of financial planning software since the early 1980s, I know of no tools that explicitly deal with the uncertain nature of problem variables. The tools are deterministic. No matter how well designed and how faithfully the software models a particular problem, it allows you to specify only one value for a variable. Yet, for real-world problems, the essential variables are uncertain; they can cover a wide range of values, and each value can have a different probability of occurring. Thus, stochastic tools are needed.’
It breaks my heart to think that in 2018, 20 years after Hopewell’s first paper, not much has changed in terms of the primary model used in most financial planning software.
Historical scenario model
Historic market data offers us an interesting perspective on how investment markets work.
It’s often said that past performance offers no guide to the future, particularly in relation to investment managers and funds. But if we look at asset class behaviour, this statement isn’t entirely true. Extensive past performance data going back over 200 years gives great insight into how asset classes behave. It doesn’t tell us what the precise return might be next year, or in 10 or 20 years, but it offers an important perspective on the range of possible outcomes.
Why do we invest in equities, rather than keep money under the mattress over a very long term? Because past performance tells us that equities will most likely outperform cash over the long term. How do we know that equities tend to outperform bonds over the long term? Past performance tells us so. And of course, basic reasoning backs this up.
Renowned academics, from Harry Markowitz, Paul Samuelson and William Sharpe to Robert Shiller and Gene Fama, have greatly improved our understanding of how the capital markets work. In the process, they’ve won Nobel Prizes! Much of their work is based on the exploration of asset classes using extensive historical performance data. If it’s good enough for Fama or Sharpe, it’s good enough for me.
Financial planners can gain incredible insight by looking at how a financial plan would have fared under various real, past market scenarios. I am not talking about using a single historical market scenario or limited data based on a few years. That’s almost as bad as a deterministic model. I am suggesting the idea of using extensive historical data of 100 years or more, to gain colourful insight into how a plan fares under various scenarios.
Suppose we’re working on a 30-year financial plan and we want to look at various scenarios over the last 115 years – how that plan fared in the periods between 1901 and 1930, 1902 and 1931, … 1985 and 2014, 1986 and 2015, and so on.
This gives at least 86 historical scenarios to look at. They include some of the most severe market conditions: two world wars, the Great Depression, periods of double-digit inflation, several recessions, booms and busts. We can see how the plan held up and what we could have done to prevent poor market conditions from ruining it.
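In code, the rolling-window idea sketches out like this. The return series is a stand-in; a real exercise would feed in annual real (inflation-adjusted) returns from one of the historical datasets discussed in this chapter, and the starting pot and withdrawal figures are hypothetical:

```python
def rolling_scenarios(real_returns, start=1.0, spend=0.04, window=30):
    """Run a withdrawal plan through every overlapping `window`-year
    slice of an annual real-return series.

    `real_returns` maps year -> real return; `spend` is a fixed real
    withdrawal as a fraction of the starting pot (hypothetical figures).
    Returns (start_year, final_pot) pairs; 0.0 means the pot ran dry.
    """
    years = sorted(real_returns)
    outcomes = []
    for i in range(len(years) - window + 1):
        pot = start
        for y in years[i:i + window]:
            pot = pot * (1 + real_returns[y]) - spend
            if pot <= 0:
                pot = 0.0  # plan failed in this historical scenario
                break
        outcomes.append((years[i], round(pot, 3)))
    return outcomes
```

With 115 years of data (1901-2015) and a 30-year window, this produces the 86 overlapping scenarios described above.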
There are a number of brilliant resources that provide extensive data for this sort of historical modelling.
Morningstar DMS Database has been compiled by three brilliant professors: Elroy Dimson, Paul Marsh and Mike Staunton. It contains returns for equities, bonds, bills, inflation and currencies for 22 developed countries going back to 1900. A detailed explanation and summary data can be found in their book, Triumph of the Optimists: 101 Years of Global Investment Returns.
Barclays Equity Gilt Study is a reliable source of data on long-term returns on UK equities, gilts, bills and inflation.
Stocks, Bonds, Bills, and Inflation (SBBI) Yearbook is the industry standard performance data reference, with comprehensive records of US stocks, long-term government bonds, long-term corporate bonds, Treasury bills, and the Consumer Price Index dating back to 1926.
Global Financial Data offers even more extensive data going back 200 years. The dataset includes annual and monthly returns of major asset classes, inflation and currency, as well as other important metrics such as bond yield, equity yields and PE ratios. https://www.globalfinancialdata.com/
Bank of England: A millennium of macroeconomic data v3.1 (2016) was originally called Three Centuries of Macroeconomic Data, but has now been renamed to reflect its broader coverage. The dataset contains a broad set of macroeconomic and financial data for the UK, stretching back in some cases to the 13th century. (Credit: Thomas, R and Dimsdale, N (2017) “A Millennium of UK Data”, Bank of England OBRA dataset.)
The main criticism of the historical model is that there simply aren’t enough scenarios in history to account for the wide range of possible outcomes. Some periods overlap and aren’t entirely independent of each other. Also, global markets are more complicated today than they’ve ever been and returns could be worse in the future.
As Peter Bernstein36 eloquently highlights, this is an age-old argument. He notes that there’s always been ‘a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved. The issue boils down to one’s view about the extent to which the past determines the future. We cannot quantify the future, because it is an unknown, but we have learned how to use numbers to scrutinize what happened in the past’.
An article in the Economist37 sums up why historical data, for all its faults, may be the most objective way to measure risk. ‘When you use a financial model it requires assumptions about the underlying assets. These assumptions often include, but are not limited to, the assets’ expected price and volatility. Financial models find a price, and hedge against future fluctuations, based on these data points. There are two ways you can come up with these assumptions. You can use historical data or a personal view (from instinct, experience, or divine inspiration).
The problem with a personal view is that there always exists a temptation to use assumptions that make your product most attractive. When times are bad the market might question such optimism, but in the midst of a bubble few will (other than your boss who’ll ask why your view makes less money than your rivals). Historical data, for all its faults, is the only objective way to measure risk.’
The author concludes, ‘Historical data may be imperfect, but it remains the only unbiased way to measure risk and make assumptions about the future. Perhaps quantitative modellers in the future will reconsider what the appropriate length of history is. They may also test models more strenuously, forcing them to consider risk outside of historical bounds. Perhaps their managers will ask more questions about the implications of using particular data. Even these safeguards leave room for arbitrary decision-making. Still, during the next bubble, historical data will be the only thing that grounds finance in some reality.’
Stochastic models
One way to overcome the weaknesses of a purely historical model is to use random simulations of historical averages or draw actual historical returns in a random order.
Bootstrapping: this method involves randomly drawing actual historical monthly or annual returns to create likely future patterns of return. You’re still using actual returns, but not necessarily in the order and combination that happened in the past. Bootstrap algorithms can be used to create as many scenarios as you want, using the actual behaviour of the asset classes involved.
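A minimal sketch of the bootstrap idea follows. The return series passed in is a placeholder; a real exercise would use annual returns from one of the historical datasets listed earlier:

```python
import numpy as np

def bootstrap_paths(historical_returns, years=30, n_paths=10_000, seed=1):
    """Build hypothetical future return sequences by sampling actual
    annual returns with replacement: the same historical behaviour,
    but in orders and combinations that never actually occurred."""
    rng = np.random.default_rng(seed)
    hist = np.asarray(historical_returns)
    # pick a random historical year for every (path, year) slot
    idx = rng.integers(0, len(hist), size=(n_paths, years))
    return hist[idx]  # shape: (n_paths, years)
```

Each row can then be fed through the same withdrawal logic used for the historical scenarios, giving as many simulated retirements as you want.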
Beyond The 4% Rule Page 8