Calculating Risk Under Market Uncertainty:
There is No Right Answer
When newcomers to the field of quantitative finance are assigned the task of writing up an analysis, they will often show numbers using five, six or even more digits to the right of the decimal point. This may be driven by the part of the brain that craves precision and exactness. While all of those decimal places may be critical for certain calculations (computing trajectories of fast-moving objects over long distances comes to mind), that level of precision is simply not a realistic goal when forecasting future paths in financial markets. We deal with variables that are stochastic and non-stationary, and typically don’t know the true shapes of their distributions.
Given the unstable nature of financial data, does that mean it is a fool’s errand to try to estimate risk? Definitely not, but the uncertainty does call for awareness. It is important to recognize the assumptions we make in calculating risk, to ask, “what other choices could we have made,” and to assess the impact of those assumptions (“does the answer change much when we switch from Assumption A to B or C?”).
This brief article reviews some key choices made in computing volatilities and correlations, perhaps the most frequently used metrics for assessing risk in finance, and how the results change depending upon those choices. This may be a review for many readers, but we believe the examples provided here are important reminders of the implications of the choices made in computing risk in volatile times.
To keep things simple we focus on only three asset classes, but the principles apply to every type of investable asset. We use (1) large cap U.S. equities, specifically the S&P 500, (2) interest rates, represented by the yield of the U.S. 10-year constant maturity (CMT), and (3) gold, specifically the 3 pm London fixing in USD. We use 10 years of data, from June 2010 through May 2020.
First, we compute volatilities to compare the general riskiness of these asset classes. Right away, we confront a dilemma: what frequency should we use to compute our volatilities? Based on the “more is better” school of thought we use daily data, and produce the following from the 10-year time series. Note that the volatility of the 10-year CMT uses changes in yield, not price.
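As a sketch of the calculation described above, the snippet below computes an annualized volatility from daily observations. The series here is synthetic, not actual market data, and the function name is ours; the one wrinkle from the text is that for the 10-year CMT we would pass `use_diffs=True` so that volatility is computed from day-over-day changes in yield rather than log returns on price.

```python
import numpy as np

def annualized_vol(series, use_diffs=False, periods_per_year=252):
    """Annualized volatility from a daily series.

    For price series (S&P 500, gold) we use log returns; for the
    10-year CMT yield, set use_diffs=True to use changes in the
    yield itself, as the article does.
    """
    x = np.asarray(series, dtype=float)
    changes = np.diff(x) if use_diffs else np.diff(np.log(x))
    return changes.std(ddof=1) * np.sqrt(periods_per_year)

# Illustrative synthetic price path (~10 years of trading days):
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 2520)))
print(f"{annualized_vol(prices):.1%}")
```

The annualization factor (252 trading days) is itself one of the assumptions worth surfacing; 250 or 260 are also seen in practice.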
This is one way to estimate the volatility of returns for these asset classes, but not the only way. For example, 10 years may sound like a good amount of time, one that is likely to cover different market environments without going too far back, but is it? After all, the first half of the 2010s was the first half of the decade-long bull market in U.S. equities, characterized by low volatility. Is that relevant in the era of COVID-19, trade tensions, and other challenges facing the global economy today? The yield on the 10-year Treasury was over 3% for most of the first 12 months of the 10-year period, versus less than 2% for most of the last 12 months in the series, a roughly 33% decline in yield. Based on changes in yields (not prices), the 10-year CMT was the most volatile of the three asset classes. With the 10-year Treasury now yielding less than 1%, a move of a handful of basis points, which would be almost trivial in a higher-rate environment, is now substantial in terms of volatility. Do we expect yields to stay this low over our risk horizon?
Since financial data is typically non-stationary, it is useful to see the impact of using different time periods when computing volatilities. We divide the 10-year period into “earlier” and “later” five-year periods, and also consider only the past three years, as one can argue persuasively that more recent data may be more relevant to the near future than data from many years ago (of course, the contrary view that relying too heavily on recent data is narrow and therefore biased is equally valid).
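The sub-period comparison described above can be sketched as follows. The data is synthetic (real S&P 500, CMT, or gold series would be substituted), and the window boundaries mirror the article’s split: the full June 2010 to May 2020 sample, earlier and later five-year halves, and the past three years.

```python
import numpy as np
import pandas as pd

# Illustrative synthetic daily returns over the article's sample window
dates = pd.bdate_range("2010-06-01", "2020-05-29")
rng = np.random.default_rng(1)
returns = pd.Series(rng.normal(0.0, 0.01, len(dates)), index=dates)

def ann_vol(r):
    """Annualized volatility of a daily return series."""
    return r.std() * np.sqrt(252)

windows = {
    "Full 10 years":   returns,
    "Earlier 5 years": returns.loc[:"2015-05-31"],
    "Later 5 years":   returns.loc["2015-06-01":],
    "Past 3 years":    returns.loc["2017-06-01":],
}
for label, r in windows.items():
    print(f"{label}: {ann_vol(r):.1%}")
```

With real market data, these four numbers would differ materially, which is exactly the point of the exercise.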
Clearly, the choice of the time period makes a notable difference. There is no single right choice, but each period produces a different answer. Now we add correlations to the mix, as correlation is, of course, an essential element of any portfolio risk calculation. Particularly in times of great market uncertainty, it is worthwhile to scrutinize correlation assumptions. In addition to asking whether the conventional wisdom that most correlations move toward 1.0 in a crisis is holding up, there can be times when signs flip rather dramatically. For example, airlines and oil prices are usually negatively correlated, but COVID-19 lockdowns caused both to fall significantly.
For brevity, we show the correlations computed from daily observations, using data for the past 10 years, and for the past three years.
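A minimal sketch of the windowed correlation comparison, again on synthetic data with hypothetical column labels, is shown below. The same `corr()` call is applied to the full 10-year sample and to the trailing three-year slice.

```python
import numpy as np
import pandas as pd

dates = pd.bdate_range("2010-06-01", "2020-05-29")
rng = np.random.default_rng(2)
# Hypothetical daily changes for the three asset classes
chg = pd.DataFrame(rng.normal(0.0, 0.01, (len(dates), 3)),
                   index=dates, columns=["SPX", "CMT10Y", "Gold"])

corr_10y = chg.corr()                       # full sample
corr_3y = chg.loc["2017-06-01":].corr()     # past three years only
print(corr_10y.round(2))
print(corr_3y.round(2))
```

On real data, the element-by-element differences between the two matrices are what feed through, often substantially, into portfolio risk.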
Again, there are no right or wrong answers, but the differences that arise from choosing one period or the other to compute correlation have a potentially large impact on portfolio risk. Interestingly, the only correlation that is not affected by using 10 versus three years of data is between the S&P 500 and the 10-year CMT yield. We re-checked the numbers more than once – that’s the actual result.
Next, we consider how much these results are affected by using weekly instead of daily observations. It is well known that using daily data presents challenges for global portfolios, as the close of Day 1 in Asia is the start of Day 1 in the U.S., and so on. Here, we use weekly observations, every Friday, to compute the same metrics:
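The daily-versus-weekly comparison can be sketched as below, again on a synthetic price path. Weekly observations are taken as Friday closes via pandas’ `W-FRI` resampling, matching the article’s convention, and the annualization factor drops from 252 to 52 accordingly.

```python
import numpy as np
import pandas as pd

dates = pd.bdate_range("2010-06-01", "2020-05-29")
rng = np.random.default_rng(3)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, len(dates)))),
                   index=dates)

daily_ret = np.log(prices).diff().dropna()

# Weekly observations: the last close of each Friday-ending week
weekly = prices.resample("W-FRI").last()
weekly_ret = np.log(weekly).diff().dropna()

print(f"Daily:  {daily_ret.std() * np.sqrt(252):.1%}")
print(f"Weekly: {weekly_ret.std() * np.sqrt(52):.1%}")
```

For an i.i.d. series like this synthetic one, the two numbers agree in expectation; in real markets, serial correlation and intraweek mean reversion drive the wedge between them that the article describes.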
In every case, the volatilities computed from Weekly data are lower than those based on Daily data (although the differences for Gold are small). Does that mean the Weekly observations are “better”? It does suggest that to some extent, daily fluctuations offset each other over a week’s time, and that using Daily observations might overstate risk. However, if you are looking at a portfolio that is heavily traded on a daily basis, you could argue that using Weekly observations understates risk.
These differences in volatility measured using Daily versus Weekly observations lead to the question: what about correlations?
Choosing 10 years of weekly data versus three years has a large impact on these correlations, and looking back to the correlations computed using Daily data we see substantial differences arising from the choice of Daily versus Weekly.
We conclude that no matter how thoughtful and precise we may be, we cannot escape the inherent uncertainty involved in measuring risk. We need to compute these key measures in different ways, calculate portfolio risk under various approaches, and see whether the largest risk results are within a tolerable limit. In summary, there is no precise answer to the question “how much risk does this portfolio contain” (no matter how many digits we show to the right of the decimal).
Imagine Software provides the tools to calculate volatilities in various ways, including a choice of time horizons, by focusing exclusively on downside risk, and by using an exponentially-weighted moving average, as well as the ability to shock volatilities across the board. Contact us to learn more.
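As one illustration of the exponentially-weighted approach mentioned above (this is a generic RiskMetrics-style recursion, not Imagine Software’s implementation), recent observations can be weighted more heavily than distant ones, with the decay parameter lambda as yet another assumption to surface:

```python
import numpy as np

def ewma_vol(returns, lam=0.94, periods_per_year=252):
    """Exponentially weighted volatility.

    lam=0.94 is the classic RiskMetrics daily decay factor; higher
    lambda means a longer effective memory.
    """
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var * periods_per_year)

rng = np.random.default_rng(4)
rets = rng.normal(0.0, 0.01, 1000)  # synthetic daily returns
print(f"{ewma_vol(rets):.1%}")
```

Because the weights decay geometrically, this estimate reacts quickly to a volatility spike and fades it gradually, one more way the same data can yield a different risk number.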
Those responsible for maintaining a margin system often feel that they are drowning in data management issues. In part two of this series we discuss ways to make margin calculations far more efficient and meet the firm’s need for answers in real-time.
Computing, optimizing and monitoring real-time margin requirements across a multitude of instruments and customer accounts is a Herculean task. In that article, Imagine discusses the key challenges in this arena and dives deeper into the specifics of solving them.