
The Lognormal Bridge

The lognormal process is the default assumption used to model the distribution of asset prices in most asset pricing models, and the distribution of most of the underlying risk factors in finance.  Although a lognormal process precludes negative values, this did not seem to present any practical concern, because such values were assumed to be out of range.  After all, stock prices, FX rates, real estate values, commodity prices, and almost every asset class one can think of couldn’t have a negative price, and interest rates would always be positive (obviously!).

That all changed after the financial crisis of 2008, when a number of benchmark rates in many countries turned negative and stayed there. More recently we saw oil prices drop below zero for a short time, as supply far exceeded demand, storage capacity was nonexistent and holders of expiring futures contracts did not want to take delivery of barrels of oil and thus were willing to pay to get out of those contracts.

Even when interest rates are positive but near zero, the assumption of a lognormal price distribution can produce astronomical volatilities that severely skew a number of risk measurements, rendering them unusable.

Even if market conditions subsequently revert to “normalcy”, meaning benchmark interest rates return to positive territory and stay there, the negative rate values will persist in the historical time series needed to compute HVAR (Historical VAR) and to calculate volatilities and correlations.  So the problem cannot be ignored now, or in the future.

This note discusses the use of a “lognormal bridge” to estimate interest rate statistics and to calculate HVAR. It can also be applied in the case of negative asset prices to produce an option pricing model that seamlessly transitions from a lognormal process to a normal process as prices fall, an application to be discussed in a separate note.

Low/Negative Interest Rates

When interest rates are positive but near zero, small movements can produce enormous lognormal volatility calculations; a two basis point move is inconsequential when the level of interest rates is 5.0%, but when rates are only 0.05%, a 2 bp change in one day is a 40% movement, and so quite significant.  Suppose that rates are now safely above zero, say at 3.0%.  When an HVAR calculation comes across that day in the past, it will apply a 40% shock to the current level of 3.0%, for a change of 120 basis points.  Clearly, the HVAR calculation will jump, especially if historical rates remained at that low level for a few weeks!  Chances are that every date from that period would produce values that end up in the tail of the HVAR P&L distribution.
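A quick back-of-the-envelope check of those numbers (a minimal Python sketch; the rates are the hypothetical values from the paragraph above, quoted in percent):

    r_prev, r_now = 0.05, 0.07         # 0.05% -> 0.07%, a 2 bp move
    shock = r_now / r_prev - 1.0       # 0.40, i.e. a 40% lognormal return

    r_today = 3.0                      # today's rate, in percent
    shocked = r_today * (1.0 + shock)  # 4.2 -> a 120 bp jump in one day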

But in a 3% rate environment, we know that a two basis point move is probably not significant.  So how should we deal with this scenario?

Ad hoc methods

#1        Ignore the points you don’t like
One approach is to simply ignore such dates. So, in your HVAR simulations, low and negative rates are treated as “holidays”. That is, if rates drop below some threshold – say, 0.10% – those dates are ignored.  That might be acceptable if interest rates made only a brief sojourn into low/negative territory, but we know that certain benchmark rates (for example, on Japanese and Swiss Treasuries) have spent long stretches below zero. In this case, ignoring those dates is impractical.

#2        Apply a linear shift
This method shifts interest rates upward by an amount that is sufficient to ensure the shifted value is safely above zero. Then, all risk calculations such as volatility and historical shocks are computed using the shifted rates.

However, this approach has a number of drawbacks:

  • Since all of the rates in the historical series are shifted by this same amount, you will be underestimating the risk of ordinary, nonnegative interest rate moves. This can be easily seen by comparing the two return calculations (a numeric sketch follows this list):

$$\frac{r_t - r_{t-1}}{r_{t-1}} \qquad \text{versus} \qquad \frac{(r_t + R_0) - (r_{t-1} + R_0)}{r_{t-1} + R_0} \;=\; \frac{r_t - r_{t-1}}{r_{t-1} + R_0}$$

where R0 is the fixed amount of the shift.  R0 drops out of the numerator, but not the denominator, so every shifted return is smaller in magnitude than its unshifted counterpart.

  • Also, if rates decline further and the fixed shift R0 needs to be increased, what value should be used? It’s clearly arbitrary, and arbitrary has no business in risk management.
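To see the dampening concretely, here is a minimal Python sketch with hypothetical values (R0 is the fixed shift from the first bullet):

    r_prev, r_now = 0.30, 0.36   # in percent: a 6 bp move off a 0.30% base
    R0 = 2.0                     # an (arbitrary) upward shift of 2 points

    unshifted = (r_now - r_prev) / r_prev         # 0.20  -> a 20% move
    shifted   = (r_now - r_prev) / (r_prev + R0)  # 0.026 -> a 2.6% move
    # R0 cancels in the numerator but inflates the denominator, so the
    # same 6 bp move looks almost an order of magnitude less risky.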

 

#3        Switch to a Normal Process
Of course, if we assume a normal process, any difficulty arising from low/negative rates vanishes.  However, you are then stuck with that assumption of normality even for higher rates, which will understate risk. For a global portfolio, it is easy to envision a situation where some of the interest rate exposures in the portfolio can be safely modeled by assuming lognormality, while others are better handled using a normal process.  This approach is more justifiable than the other two in that there is a bona fide mathematical model and logic underpinning it; however, it may be more intrusive than one would like, because you’re locked in: “Now that rates are well above zero, I don’t want to keep using a normal model for this currency.”

The Lognormal Bridge

This new method is closest to the Switch to a Normal Process approach, but it handles the transition back and forth between normal and lognormal seamlessly, and can be implemented for all of your interest rates at once.  If rates are safely above zero, there is no effect; the bridge only kicks in when it needs to.  And, unlike the Shift Method, it has no arbitrary parameters.

Consider the nonlinear mapping (transform) and its inverse:

$$g(r) \;=\; \begin{cases} r, & r \ge r_0 \\[4pt] r_0\, e^{(r - r_0)/r_0}, & r < r_0 \end{cases} \qquad\qquad g^{-1}(y) \;=\; \begin{cases} y, & y \ge r_0 \\[4pt] r_0\left(1 + \ln\dfrac{y}{r_0}\right), & y < r_0 \end{cases}$$

This mapping is everywhere continuous to first order: at the breakpoint r0, we have g(r0) = r0 and g'(r0) = 1.  Below, we graph both y = r and y = g(r) with the breakpoint set at r0 = 1.

[Figure: the identity line y = r and the transform y = g(r), with the breakpoint at r0 = 1]

Note that we always have g(r) > 0, so we can require the following assumption to hold everywhere:

Assumption: g = g(r) is lognormal.  If we also choose r0 = 1, we immediately obtain

$$\ln g(r) \;=\; \begin{cases} \ln r, & r \ge 1 \\[2pt] r - 1, & r < 1 \end{cases}$$

Thus if g = g(r) is lognormal, r is lognormal above r = 1 and normal below r = 1.  This is exactly what we want.
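To make the mapping concrete, here is a minimal Python sketch of the transform and its inverse (the names g, g_inv, and R0 are ours for illustration; rates are assumed to be quoted in the same units as the breakpoint, e.g. percent):

    import numpy as np

    R0 = 1.0  # breakpoint r0; the note fixes r0 = 1

    def g(r):
        # Identity above the breakpoint, exponential below; value and
        # slope match at r = R0, and g(r) > 0 for every real r.
        r = np.asarray(r, dtype=float)
        return np.where(r >= R0, r, R0 * np.exp((r - R0) / R0))

    def g_inv(y):
        # Inverse mapping, defined for any y > 0.
        y = np.asarray(y, dtype=float)
        return np.where(y >= R0, y, R0 * (1.0 + np.log(y / R0)))

For example, g([3.0, 0.5, -0.2]) returns roughly [3.0, 0.607, 0.301]: rates above the breakpoint pass through untouched, while low and negative rates map to small positive numbers.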

Application I: Computing volatility.
Given a time series {r1, r2, …, rN}, we replace it with the “mapped” one {g1, g2, …, gN}, where gi = g(ri).  From the equation above, we see that the estimated standard deviation of the g-returns can be interpreted as the normal volatility in the region r ≤ 1 and the lognormal volatility in the region r ≥ 1.  Similarly, we use the transformed time series to compute correlations with other factors, once again treating the g-series as lognormal.

In practice, we don’t really replace the time series, but instead only modify the return calculation:

$$x_t \;=\; \ln\frac{g(r_t)}{g(r_{t-1})}$$
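As an illustrative sketch (reusing g from the snippet above; the rate series here is invented, in percent), the volatility estimate is just the sample standard deviation of these mapped returns:

    def bridge_returns(rates):
        # Log-returns of the transformed series: lognormal-style returns
        # above the breakpoint, normal-style (absolute) changes below it.
        return np.diff(np.log(g(rates)))

    rates = np.array([3.00, 2.80, 1.20, 0.40, -0.10, 0.25])  # hypothetical
    daily_vol = bridge_returns(rates).std(ddof=1)            # one-day vol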

Example: Comparing volatilities computed the standard way, and by using the transform

In this example, historical interest rates never fell below zero, so we could still compute the lognormal volatility. However, that volatility becomes artificially inflated when rates get very low.  The green line is the interest rate level (read off the right axis), and the horizontal red line is the breakpoint, r = 1.  The red columns represent the rolling volatility of the transformed interest rate time series, and the blue columns are the volatilities computed from the untransformed rates.  We can see how the volatility calculated from the untransformed rates persists at very high levels once interest rates drop below 1.0, even though this is not reflected in the actual movements of the interest rate; it is simply an artifact of assuming lognormality at low rates.  When rates remain at or above 1.0, the two volatility calculations are identical, but they begin to diverge once rates drop below 1.0.

[Figure: Comparison of Standard vs. Transformed rolling volatilities]

Of course, had rates dropped below zero the lognormal calculation would have failed completely.

Application II: HVAR
HVAR requires a historical time series of returns, which is applied to the current interest rate.  Here, however, the returns are extracted from the g-series, applied to the transformed current rate, and the result is then inverse-transformed back to a rate:

$$\tilde{r}_t \;=\; g^{-1}\!\left( g(r_{\text{today}})\, e^{x_t} \right), \qquad x_t = \ln\frac{g(r_t)}{g(r_{t-1})}$$
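Continuing the sketch (reusing g, g_inv, and bridge_returns from above, again with hypothetical inputs), the full set of shocked rates for an HVAR run is:

    def hvar_shocked_rates(r_today, hist_rates):
        # Apply each historical g-return to today's transformed rate,
        # then map back.  Since g(.) > 0 and exp(.) > 0, g_inv always
        # receives a valid input, and the shocked rate may legitimately
        # come out negative.
        x = bridge_returns(hist_rates)
        return g_inv(g(r_today) * np.exp(x))

    shocked = hvar_shocked_rates(3.0, rates)  # one shocked rate per return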

Conclusion

The lognormal bridge is a simple and natural approach to the challenges presented by low and negative interest rates.  Using this transform, all of the difficulties of the ad hoc approaches simply vanish.  There is no need to ignore or massage historical data; only the way in which returns are calculated from that data is adjusted.

More generally, this approach can be applied to other situations in which asset prices turn negative, so that volatilities and correlations can still be estimated from the actual time series.  Moreover, we can leverage this approach to create an options model that similarly transitions between the lognormal and normal processes (to be described in a future note).
